\section{\label{sec:intro}Introduction}
The physics of open quantum systems spans many areas of research, ranging from optical physics and nanoscience to atomic and nuclear physics. Of particular interest are long-lived metastable states and broad resonances: they
carry rich information about localized nucleonic states confined to the nuclear interior, about the multi-channel environment of scattering and decaying states, and about the coupling between these two spaces. With exciting advances in radioactive-beam experimentation worldwide, many weakly-bound isotopes inhabiting the outskirts of the nuclear landscape can now be reached; they provide a fertile territory
for studying generic properties of open quantum systems \cite{OpenQS}.
To develop a microscopic theoretical framework that would unify structural and reaction-theoretical aspects of the nuclear many-body system remains a challenge. A step in this direction is the unification of bound states and resonant phenomena, often enabled by high-performance computing, and there has been excellent progress in this area
\cite{Gamow_Rmatrix,Nunes_Ian,Deltuva, Navratil1, *Navratil2,*Navratil3,Nollett,gaute_michel,ncgsm,gaute_Oxygen,*gaute_ca48}.
One possible strategy in this area is to relate the resonance parameters directly to the complex-energy eigenvalues of the effective Hamiltonian. To this end, one
can solve the many-body eigenproblem with the Hermitian Hamiltonian by imposing specific boundary conditions \cite{review_GSM},
or one can construct a manifestly non-Hermitian effective Hamiltonian \cite{Feshbach,Marek_rotter,Volya}. In both cases, the eigenstates that appear below the particle threshold are bound, and the complex-energy states above the threshold represent the many-body continuum.
The Gamow Shell Model (GSM) \cite{review_GSM} and the complex scaling (CS) method \cite{ykho,moiseyev,ikeda_review}
both deal with effective non-Hermitian Hamiltonians. In the GSM, one starts with a Hermitian Hamiltonian and, by imposing outgoing boundary conditions, one ends up
with a complex-symmetric Hamiltonian matrix.
In the CS method, a non-Hermitian Hamiltonian appears as a result of a complex rotation of coordinates. The corresponding non-unitary transformation is characterized by a real parameter $\vartheta$. The transformed eigenstates
are square integrable;
this is a very attractive feature from the computational point of view.
Unfortunately, since the eigenvectors depend on $\vartheta$, they cannot be directly compared with
the eigenfunctions of the original Hamiltonian. To obtain the wave functions from the CS solutions, the so-called back rotation
must be employed. Since in most cases the eigenproblem is solved numerically,
the back rotation constitutes an ill-posed inverse problem and high-frequency ultraviolet
noise appears \cite{backroterror,atkpal}.
We are aware of at least two attempts \cite{pade,atombackrot} to overcome this problem. When the original wave function is
reconstructed by means of the Pad\'{e} approximation \cite{pade}, several calculations with different $\vartheta$
values can be carried out to perform the analytical continuation.
In Ref.~\cite{atombackrot}, special properties of the applied basis set were utilized to cure the
errors of the back-rotated wave function.
In this work, we present a new approach to the problem of back rotation. Our procedure does not depend on the type of basis set used,
and it rests on sound mathematical foundations.
The CS method has been successfully
applied in quantum chemistry to solve many-body problems with extremely
high accuracy \cite{Bardsley,moi79,ykho,moiseyev,varga_positron}, and also in nuclear physics, in calculations of resonance parameters \cite{atkrgm,kru99} and cluster systems \cite{ikeda_review,Aoyama95,*Aoyama95a,Myo01,katoalphad}.
In nuclear three-body calculations, mainly Jacobi coordinates have been employed.
In the cluster-orbital shell model \cite{ikeda_review}, besides the ``V"-type coordinate, a ``T"-type Jacobi coordinate has also been used in order to incorporate correlations.
In the field of quantum chemistry, on the other hand, mainly Hylleraas-type functions \cite{Hyll_basis,belen} are used, and the
achieved accuracy for the helium atom is spectacular \cite{drake1,drake2,korobov}.
In our CS calculations, we employ the Slater basis set \cite{Slater}, which is an approximation to the Hylleraas-type basis. The Slater
wave functions have the correct asymptotic behavior, making them well suited for the description of weakly-bound systems.
A basis set of similar type, the Coulomb-Sturmian functions, has been recently introduced into the no-core shell model framework \cite{vary}.
Those functions are in fact linear superpositions of Slater orbits.
In this work, the precision of the new CS-Slater method is tested against the results of GSM calculations. For benchmarking, we consider the energies and wave functions of the
$0^+_1$ and $2^+_1$ states of $^6$He.
The paper is organized as follows. Section~\ref{Hami_Slater} describes the Hamiltonian used,
many-body methods, and configuration spaces employed. In Sec.~\ref{regularization} we discuss the difficulties related to the back-rotation of the CS wave function and introduce the necessary regularization scheme.
Section~\ref{results} presents the results for $^6$He and the details of the CS-GSM benchmarking. Finally, conclusions and future plans are contained in Sec.~\ref{concl}.
\section{Models and methods} \label{Hami_Slater}
\subsection{Three-body Hamiltonian}
For the description of the ground and excited states of $^6$He, we assume a cluster ($\alpha + n + n$) picture of the nucleus. Consequently,
we consider a system of three particles with masses $m_i$ and single-particle coordinates $\bm{r}_{i}$, where $i=1, 2$ for the neutrons and $i=3$ for the $\alpha$ core.
We introduce the relative coordinates $\bm{r}_{ij}=\bm{r}_{i}-\bm{r}_{j}$
and $r_{ij}=|\bm{r}_{ij}|$.
The system Hamiltonian in the centre-of-mass frame reads:
\begin{eqnarray}\label{relham1}
H &=& -\frac{\hbar^2}{2\mu_1}\triangle_{\bm{r}_{13}}-\frac{\hbar^2}{2\mu_2}\triangle_{\bm{r}_{23}}
-\frac{\hbar^2}{m_3}\nabla_{\bm{r}_{13}}\nabla_{\bm{r}_{23}} \nonumber \\
&+& V_{12}(\bm{r}_{12})+ V_{13}(\bm{r}_{13})+V_{23}(\bm{r}_{23}),
\end{eqnarray}
where the reduced masses are:
\begin{equation}
\mu_1=\frac{m_1m_3}{m_1+m_3},~~~ \mu_2=\frac{m_2m_3}{m_2+m_3}.
\end{equation}
It is worth noting that the Hamiltonian
\eqref{relham1} represents the intrinsic properties
of the system, i.e., it is free from the spurious centre-of-mass motion.
After introducing the single-neutron Hamiltonian,
\begin{equation}\label{spham}
H_{i3}(\bm{r})= -\frac{\hbar^2}{2\mu_i}\triangle_{\bm{r}}+V_{i3}(\bm{r})~~~(i=1,2),
\end{equation}
the Hamiltonian \eqref{relham1} can be written as:
\begin{equation}\label{relham2}
H = H_{13}(\bm{r}_{13})+H_{23}(\bm{r}_{23})+V_{12}(\bm{r}_{12})-\frac{\hbar^2}{m_3}\nabla_{\bm{r}_{13}}\nabla_{\bm{r}_{23}},
\end{equation}
where the last term represents a two-body recoil term, which originates from the transformation to the relative coordinate frame.
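As a quick numerical orientation, the coefficients entering Eqs.~\eqref{relham1} and \eqref{relham2} can be evaluated directly. The following Python sketch uses approximate neutron and $\alpha$-particle masses (assumed round values, not fitted quantities of this work) to compare the kinetic and recoil prefactors:

```python
# Illustrative constants; approximate masses (assumed, not taken from this work)
m_n = 939.565        # neutron mass (MeV/c^2)
m_alpha = 3727.379   # alpha-particle mass (MeV/c^2)
hbarc = 197.327      # hbar*c (MeV fm)

# Reduced mass of each neutron-core pair: mu_i = m_i m_3 / (m_i + m_3)
mu = m_n * m_alpha / (m_n + m_alpha)

coef_kin = hbarc**2 / (2.0 * mu)      # prefactor of the Laplacian terms (MeV fm^2)
coef_recoil = hbarc**2 / m_alpha      # prefactor of the recoil term (MeV fm^2)
```

The recoil coefficient is smaller than the kinetic one, but only by a factor $2\mu/m_3\approx 0.4$, so the recoil term cannot be neglected for a core as light as $^4$He.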
\subsection{Complex Scaling Method}
The key element of the CS method is the complex-scaling operator
$U(\vartheta)$, which transforms an arbitrary function $\chi(\bm{r}_{13},\bm{r}_{23})$ according to:
\begin{equation} \label{complex_rot1}
U(\vartheta)\chi(\bm{r}_{13},\bm{r}_{23}) =
e^{i 3\vartheta}\chi( e^{i\vartheta}\bm{r}_{13}, e^{i\vartheta}\bm{r}_{23}).
\end{equation}
The transformed Schr\"{o}dinger equation becomes:
\begin{equation}\label{rot_Shroed}
H_{\vartheta}\Psi_{\vartheta} = E\Psi_{\vartheta},
\end{equation}
where
\begin{equation}\label{rotated_H}
H_\vartheta = U(\vartheta)H U(\vartheta)^{-1}
\end{equation}
is a complex-scaled Hamiltonian:
\begin{eqnarray}\label{csham}
H_{\vartheta}\ &=&e^{-2i \vartheta}\left( -\frac{\hbar^2}{2\mu_1}\triangle_{\bm{r}_{13}}-\frac{\hbar^2}{2\mu_2}\triangle_{\bm{r}_{23}}
-\frac{\hbar^2}{m_3}\nabla_{\bm{r}_{13}}\nabla_{\bm{r}_{23}}\right) \nonumber \\
&+& V_{12}(e^{i\vartheta}\bm{r}_{12})+ V_{13}(e^{i\vartheta}\bm{r}_{13})+V_{23}(e^{i\vartheta}\bm{r}_{23}).
\end{eqnarray}
The exact eigenfunctions $\Psi(\bm{r}_{13},\bm{r}_{23})$ and $\Psi_\vartheta(\bm{r}_{13},\bm{r}_{23})$ of the Hamiltonians (\ref{relham1}) and (\ref{csham}) satisfy the following relation:
\begin{equation}\label{rot}
\Psi_\vartheta(\bm{r}_{13},\bm{r}_{23})=
e^{i 3\vartheta}\Psi( e^{i\vartheta}\bm{r}_{13}, e^{i\vartheta}\bm{r}_{23})
\end{equation}
or the so-called back rotation relation:
\begin{equation}\label{backrot}
\Psi(\bm{r}_{13},\bm{r}_{23})=
e^{-i 3\vartheta}\Psi_\vartheta( e^{-i\vartheta}\bm{r}_{13}, e^{-i\vartheta}\bm{r}_{23}).
\end{equation}
According to the Aguilar-Balslev-Combes theorem \cite{abc1,abc2}, the resonant solutions of Eq.~\eqref{rot_Shroed}
are square integrable. This feature makes it possible to use bound-state methods to solve \eqref{rot_Shroed}, including
configuration interaction \cite{ykho,moiseyev}, Faddeev and Faddeev-Yakubovsky \cite{lazauskas1,lazauskas2}, and
coupled-cluster \cite{CC_chem} approaches. As illustrated in Fig.~\ref{fig1},
the spectrum of the rotated Hamiltonian (\ref{rotated_H}) consists of bound and unbound states.
\begin{figure}[b]
\includegraphics[width=0.8\columnwidth]{Fig1}
\caption[T]{\label{fig1}
(Color online) Illustration of the complex-scaling transformation of a many-body Hamiltonian. Bound
states and many-body thresholds are invariant. Resonant eigenvalues, which correspond to poles of the resolvent or the $S$-matrix,
are ``hidden" on a sheet at $\vartheta = 0$ (a), but are exposed if the cuts associated with many-body continua are rotated (b) \cite{reinhardt}. }
\end{figure}
The continuum part of the spectrum is represented by
cuts in the complex energy plane at an angle $2\vartheta$ with the real-energy
axis, originating at many-body thresholds.
The resonant spectrum consists of bound states lying on the negative real-energy axis
and resonances at complex energies with positive real parts. One attractive feature of the CS method is that one does not need to impose any boundary condition directly to obtain the resonant states:
through the CS transformation $U(\vartheta)$,
all resonant wave functions acquire decaying asymptotic behavior.
Even though the solution of the complex-rotated Hamiltonian $H_{\vartheta}$ is square integrable, the back-rotated wave function is an outgoing solution of the Schr\"odinger
equation with the original Hamiltonian $H$. The back-rotation transformation, or analytical continuation, will be
investigated in the following.
While the rotated non-resonant continuum states depend on the
rotation angle, resonant states should be independent of $\vartheta$.
In practical applications, however, Eq.~\eqref{rot_Shroed} cannot be solved exactly and usually a truncated basis set is adopted. As a consequence,
the positions of resonant states move slightly
with $\vartheta$ and/or the size of the (truncated) basis. Since
the dependence on $\vartheta$ is radically different for the continuum spectrum and the resonant states, there exist practical techniques to identify the resonance solutions. One of them is the so-called $\vartheta$-trajectory method: using the generalization of the virial theorem to complex energies, one finds that the resonant
solution must change little with $\vartheta$ around a certain value
$\vartheta=\vartheta_{\rm opt}$. In this work, we carefully checked the dependence of the resonant states on both $\vartheta$ and the basis parameters.
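The $\vartheta$-stability of resonant eigenvalues and the $2\vartheta$ rotation of the discretized continuum can be illustrated with a minimal one-dimensional radial model. The sketch below (entirely our toy example: a schematic Gaussian well in $\hbar=m=1$ units, not the interactions used in this work) diagonalizes a finite-difference version of $H_\vartheta$:

```python
import numpy as np

theta = 0.2                      # rotation angle (rad)
N, R = 500, 20.0                 # radial grid, hbar = m = 1 units
dr = R / (N + 1)
r = dr * np.arange(1, N + 1)

def hamiltonian(scale):
    """Finite-difference H_theta: kinetic term times scale^-2, potential at scale*r."""
    T = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / (2.0 * dr**2)
    V = np.diag(-10.0 * np.exp(-(scale * r)**2))   # schematic Gaussian well
    return T / scale**2 + V

E0 = np.linalg.eigvalsh(hamiltonian(1.0)).min()          # unscaled bound-state energy
evals = np.linalg.eigvals(hamiltonian(np.exp(1j * theta)))

E_bound = evals[np.argmin(np.abs(evals - E0))]           # theta-stable bound eigenvalue
rot_angle = np.median(np.angle(evals[evals.real > 2.0])) # continuum: arg E ~ -2*theta
```

Within the discretization error, the bound eigenvalue stays at its unscaled position while the discretized continuum aligns along ${\rm arg}\,E \approx -2\vartheta$, as in Fig.~\ref{fig1}.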
\subsubsection{Slater-basis expansion}
To solve the CS problem, we use a finite Slater-type basis set \cite{Slater}.
Namely, the eigenstate of the original Hamiltonian is assumed to be
\begin{eqnarray}\label{wftot}
&\Psi^{JM}(\bm{x}_{13},\bm{x}_{23}) = \sum_{\{lj\}}\sum_{A} C_{A}^{\{lj\}} \chi_{A}^{\{lj\}}(r_{13},r_{23})\nonumber\\
&\times {\cal Y}_{\{lj\}}^{JMTT_z}(\bm{x}_{13},\bm{x}_{23}),
\end{eqnarray}
where the linear expansion coefficients $C_{A}^{\{lj\}}$ are determined by the Rayleigh-Ritz variational principle. Here $\bm{x}_{13}$, $\bm{x}_{23}$ denote the spatial and spin-isospin coordinates of the first and second particle, respectively.
For brevity we introduce the compact notation $\{lj\}=l_{13}, j_{13}, l_{23}, j_{23}$.
Furthermore, we introduce the spin-isospin part:
\begin{eqnarray}
&{\cal Y}_{\{lj\}}^{JMTT_z}(\bm{x}_{13},\bm{x}_{23})=
\chi_{TT_z}(1,2) \times \nonumber\\
&\left [\left [ {\cal Y}_{l_{13}}(\bm{r}_{13})\otimes\chi_{1/2}(1)\right]^{j_{13}}
\otimes\left [ {\cal Y}_{l_{23}}(\bm{r}_{23})\otimes\chi_{1/2}(2)\right]^{j_{23}}\right ]^{JM}\nonumber,
\end{eqnarray}
where the solid spherical harmonics are ${\cal Y}_{lm}(\bm{r})=r^l Y_{lm}(\hat{\bm{r}})$.
The symbol $[\otimes]^{JM}$ denotes the angular momentum coupling and $\hat{\bm{r}}_{ij}$ stands for
the angular coordinates of $\bm{r}_{ij}$. The total isospin and single-nucleon spin functions are denoted by $\chi_{TT_z}(1,2)$ and $\chi_{1/2}(i)$, $i=1,2$, respectively.
For the radial part of the wave function we use the product of Slater-type functions:
\begin{equation}\label{radform}
\chi_{A}^{\{lj\}}(r_{13},r_{23})=r_{13}^{n}e^{-\alpha r_{13}}\ r_{23}^{m} e^{-\beta r_{23}},
\end{equation}
where the non-linear parameters of the basis may depend on the quantum numbers $\{lj\}$
and they are denoted by $A=\{\alpha,n,\beta,m\}$.
At this point, we neglect the inter-nucleon distance $r_{12}$ in the radial part in order to span the same subspace of the Hilbert space as the GSM.
(When the three-body wave function does not depend on the inter-particle distance $r_{12}$, one refers to the resulting set as the Slater basis. If all three coordinates are considered, the basis set is called the Hylleraas basis.)
It has been found in quantum chemistry studies \cite{belen} that
by neglecting $r_{12}$
and by using 20-30 Slater orbits, the total energy is extremely close to the results of full configuration-interaction calculations.
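Matrix elements between radial functions of the form \eqref{radform} reduce to one-dimensional integrals of the type $\int_0^\infty r^{p} e^{-ar}\,dr = p!/a^{p+1}$. A short Python sketch (our illustration; the overlap convention with the $r^2$ volume weight in each coordinate is an assumption of this example):

```python
import math

def slater_overlap(n, alpha, m, beta):
    """<r^n e^{-alpha r} | r^m e^{-beta r}> with an r^2 volume weight:
    integral_0^inf r^(n+m+2) e^{-(alpha+beta) r} dr = (n+m+2)! / (alpha+beta)^(n+m+3)."""
    p = n + m + 2
    return math.factorial(p) / (alpha + beta) ** (p + 1)

def normalized_overlap(n, alpha, m, beta):
    """Overlap of unit-normalized Slater functions; values close to 1 signal
    near-linear dependence of the basis."""
    return slater_overlap(n, alpha, m, beta) / math.sqrt(
        slater_overlap(n, alpha, n, alpha) * slater_overlap(m, beta, m, beta))
```

Neighboring Slater functions with equal exponents have normalized overlaps close to unity, which is why the non-linear parameters $A$ must be chosen with some care to keep the basis numerically well conditioned.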
In the LS coupling, the wave function \eqref{wftot} can be written in the form:
\begin{eqnarray}\label{lswf}
&\Psi^{JM}(\bm{x}_{13},\bm{x}_{23}) = \sum_{\{lj\}}\sum_{LS}\sum_{A} C_{A}^{\{lj\}} \chi_{A}^{\{lj\}}(r_{13},r_{23}) \nonumber \\
&\times \gamma_{LS}(\{lj\})
\left [{\cal Y}^{L}_{l_{13}l_{23}}(\bm{r}_{13},\bm{r}_{23}) \otimes\chi_S(1,2)\right ]^{JM} \nonumber\\
&\times\chi_{TT_z}(1,2),
\end{eqnarray}
where
\begin{eqnarray}\label{caly}
&{\cal Y}^{LM}_{l_1l_2}(\bm{r}_1,\bm{r}_2)= ~~~~~~~~~~~~~\strut \nonumber \\
& \sum_{m_1,m_2}\langle l_1m_1,l_2m_2\vert LM\rangle
{\cal Y}_{l_1,m_1}(\bm{r}_1){\cal Y}_{l_2,m_2}(\bm{r}_2)
\end{eqnarray}
are the bipolar harmonics, $\chi_{SS_z}(1,2)$ are coupled total spin functions, and $\gamma_{LS}(\{lj\})$ are recoupling coefficients~\cite{law80}.
In the case of a many-body system, the trial wave function is expanded in a many-body antisymmetric basis in
a coupled or uncoupled scheme. In our formalism, we use the fully antisymmetrized wave functions expressed in both LS- and JJ-coupling schemes.
The trial wave function of the CS Hamiltonian has the same form as Eq.~(\ref{wftot}):
\begin{eqnarray}\label{wftotcs}
&\Psi_\vartheta^{JM}(\bm{x}_{13},\bm{x}_{23}) = \sum_{\{lj\}}\sum_{A} C_{A}^{\{lj\}}(\vartheta) \chi_{A}^{\{lj\}}(r_{13},r_{23})\nonumber\\
&\times {\cal Y}_{\{lj\}}^{JMTT_z}(\bm{x}_{13},\bm{x}_{23}),
\end{eqnarray}
but the expansion coefficients $C_{A}^{\{lj\}}(\vartheta)$ now depend on $\vartheta$ and
they are determined using the generalized variational principle.
\subsubsection{Two-body matrix elements in CS}
Since the CS wave function is of Slater type, one needs to develop a technique to compute two-body matrix elements (TBMEs). In the following, we briefly review a
method developed in the context of atomic physics applications \cite{efr73,efr86,dra78}.
Since we employ the LS coupling scheme, for TBMEs we need to consider integrals of the type:
\small
\begin{eqnarray}\label{v12}
&\langle A'\{l'j'\} | V_{12} |A\{lj\} \rangle = \int d\tau\chi_{A'}^{\{l'j'\}}(r_{13},r_{23})
{\cal Y}^{L}_{l_{13}' l_{23}'}(\hat{\bm{r}}_{13},\hat{\bm{r}}_{23})^* \nonumber\\
&\times V_{12}(r_{12})\chi_{A}^{\{lj\}}(r_{13},r_{23})
{\cal Y}^L_{l_{13} l_{23}}(\hat{\bm{r}}_{13},\hat{\bm{r}}_{23}).
\end{eqnarray}
\normalsize
To compute (\ref{v12}), we make a coordinate transformation
to the three scalar relative coordinates $r_{12}, r_{13}, r_{23}$ and three
Euler angles ($\Omega$) corresponding to
a triangle formed by three particles.
The volume element $d\tau=d\bm{r}_{13}\, d\bm{r}_{23}$ can then be written as
$d\tau_r d\Omega$, where the radial volume element is given by
$d\tau_r=dr_{12}\, dr_{13}\, dr_{23}\, r_{12} r_{13} r_{23}$, and $d\Omega$ corresponds to the angular volume element involving the Euler angles.
The angular integral
\begin{eqnarray}
& W_{l'_1l'_2,l_1l_2}^L(r_{12},r_{13},r_{23})= \nonumber \\
&\int d\Omega\
{\cal Y}^{L}_{l'_1 l'_2}(\bm{r}_{13},\bm{r}_{23})^*
{\cal Y}^L_{l_1 l_2}(\bm{r}_{13},\bm{r}_{23})
\end{eqnarray}
can be calculated analytically \cite{dra78}, and the result is:
\begin{eqnarray}\label{w1}
& W^L_{l'_1,l'_2,l_1,l_2}(r_{12},r_{13},r_{23}) = r_{13}^{l_1+l_1'}r_{23}^{l_2+l_2'} \times \nonumber \\
& \sum_\lambda A(l'_1,l'_2,l_1,l_2,L,\lambda) P_\lambda\left(\frac{r_{13}^2+r_{23}^2-r_{12}^2}{2r_{13}\,r_{23}}\right),
\end{eqnarray}
where
\begin{eqnarray}
&A(l'_1,l'_2,l_1,l_2,L,\lambda) = \frac{1}{2}(-1)^L \hat{l_1}\hat{l_2}\hat{l'_1}\hat{l'_2}(-1)^\lambda(2\lambda+1) \nonumber \\
&\times \left(
\begin{array}{ccc}
l'_1&l_1&\lambda\\
0&0&0
\end{array}
\right)
\left(
\begin{array}{ccc}
l'_2&l_2&\lambda\\
0&0&0
\end{array}
\right)
\left\{
\begin{array}{ccc}
l_1&l_2&L\\
l'_2&l'_1&\lambda
\end{array}
\right\},
\end{eqnarray}
with $\hat{j}\equiv \sqrt{2j+1}$.
The presence of the Legendre polynomial
$P_\lambda$ in (\ref{w1})
shows that the function $W^L_{l'_1,l'_2,l_1,l_2}(r_{12},r_{13},r_{23})$ is a multinomial in the variables $r_{12}$, $r_{13}$, and $r_{23}$.
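The argument of $P_\lambda$ in Eq.~(\ref{w1}) is simply the cosine of the opening angle between $\bm{r}_{13}$ and $\bm{r}_{23}$, by the law of cosines. A short numerical check (with illustrative vectors of our choosing):

```python
import numpy as np

r13 = np.array([1.0, 2.0, 2.0])    # |r13| = 3
r23 = np.array([2.0, -1.0, 2.0])   # |r23| = 3
r12 = r13 - r23                    # r12 = r1 - r2 = r13 - r23

a, b, c = (np.linalg.norm(v) for v in (r13, r23, r12))
u = (a**2 + b**2 - c**2) / (2.0 * a * b)   # Legendre-polynomial argument
cos12 = np.dot(r13, r23) / (a * b)         # direct cosine of the opening angle
```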
The interaction matrix element (\ref{v12}) can now be written in a compact form:
\begin{eqnarray}\label{wig}
&\langle A'\{l'j'\} | V_{12} | A\{lj\} \rangle = \\
&=\int_0^\infty dr_{13}\,r_{13} \int_0^\infty dr_{23}\, r_{23}
\int_{\vert r_{13}-r_{23}\vert}^{r_{13}+r_{23}}dr_{12}\, r_{12} \nonumber \\
& \times \chi_{A'}^{\{l'j'\}}(r_{13},r_{23})\chi_{A}^{\{lj\}}(r_{13},r_{23}) \nonumber\\
& \times \, V_{12}(r_{12})W^L_{l'_{13},l'_{23},l_{13},l_{23}}(r_{12},r_{13},r_{23}).
\nonumber
\end{eqnarray}
Finally, we determine the radial integrals.
Using the functional form of the basis (\ref{radform}) and the dependence of the function
$W^L_{l'_1,l'_2,l_1,l_2}(r_{12},r_{13},r_{23})$ on the integration variables,
it follows that the building block of the calculation is the integral:
\small
\begin{eqnarray}\label{radint}
&I^{(\lambda)}(n_{13},n_{23})=\int_0^\infty dr_{13}\int_0^\infty dr_{23}
\int_{\vert r_{13}-r_{23}\vert}^{r_{13}+r_{23}} dr_{12}\ r_{12}
r_{13}^{n_{13}}r_{23}^{n_{23}}\nonumber\\
&\times V_{12}(r_{12})P_\lambda\left(\frac{r_{13}^2+r_{23}^2-r_{12}^2}{2r_{13}r_{23}}\right)
\exp(-a_{13}r_{13}-a_{23}r_{23}),
\end{eqnarray}
\normalsize
where
\begin{equation}
a_{13}=\alpha'+\alpha, ~~~a_{23}=\beta'+\beta,
\end{equation}
and
\small
\begin{equation}
n_{13}=n'+l'_{13}+n+l_{13}+1, ~n_{23}=m'+l'_{23}+m+l_{23}+1.
\end{equation}
\normalsize
The integral (\ref{radint}) can be easily calculated if the form factor of the interaction is exponential,
Yukawa-like, or Coulomb \cite{fro96}.
For a Gaussian form factor (e.g., Minnesota force), the integral (\ref{radint}) is more involved and the relevant expressions are given in Appendix A.
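Independently of the closed forms, the integral (\ref{radint}) can always be checked by nested Gauss-Legendre quadrature. The following sketch (our illustration, with assumed parameters: $\lambda=0$, an exponential form factor $V_{12}(r)=e^{-r}$, powers $n_{13}=n_{23}=2$, and $a_{13}=a_{23}=2$) compares a fully numerical inner integral with its analytic value $\int_d^s r e^{-r}\,dr=(d+1)e^{-d}-(s+1)e^{-s}$:

```python
import numpy as np

def gl(a, b, n):
    """Gauss-Legendre nodes and weights mapped to [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

x80, w80 = np.polynomial.legendre.leggauss(80)

def inner_gl(d, s):
    """Numerical inner integral over r12 on [d, s]."""
    rr = 0.5 * (s - d) * x80 + 0.5 * (s + d)
    return 0.5 * (s - d) * np.sum(w80 * rr * np.exp(-rr))

def inner_exact(d, s):
    """Analytic inner integral for V12(r) = exp(-r) and lambda = 0."""
    return (d + 1.0) * np.exp(-d) - (s + 1.0) * np.exp(-s)

r13, w13 = gl(0.0, 30.0, 60)
r23, w23 = gl(0.0, 30.0, 60)

I_num = I_semi = 0.0
for a, wa in zip(r13, w13):
    for b, wb in zip(r23, w23):
        outer = wa * wb * a**2 * b**2 * np.exp(-2.0 * (a + b))
        d, s = abs(a - b), a + b
        I_num += outer * inner_gl(d, s)
        I_semi += outer * inner_exact(d, s)
```

Because both evaluations share the same outer grid, their difference isolates the inner-quadrature error, which is at machine-precision level here.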
\subsection{Gamow Shell Model}
The Gamow Shell Model is a complex-energy configuration interaction method \cite{review_GSM}, where the many-body Hamiltonian is
diagonalized in a one-body Berggren ensemble \cite{Berggren}
that contains both resonant and non-resonant states. The total GSM wave function is expanded
in a set of basis states similar to Eq.~\eqref{wftot}.
The basis functions $\psi^{(\alpha)}_{lj}(r)$ can be represented by the eigenfunctions of a single-particle (s.p.) Hamiltonian \eqref{spham} with
a finite-depth potential $V(r)$:
\begin{eqnarray}\label{gsm_sp_ham}
&\left ( -\frac{\hbar^2}{2\mu}\triangle_{\bm{r}} + V(r)\right)\psi^{(\alpha)}_{lj}(r)
\left [ Y_{l}(\hat{\bm{r}})\otimes\chi_{1/2}(1)\right]^{jm}\nonumber\\
&=\epsilon_\alpha\psi^{(\alpha)}_{lj}(r)\left [ Y_{l}({\hat{\bm{r}}})\otimes\chi_{1/2}(1)\right]^{jm}.
\end{eqnarray}
The resonant eigenstates (bound states and resonances), which correspond to the poles of the scattering $S$-matrix,
are obtained by a numerical integration of the radial part of Eq.~\eqref{gsm_sp_ham} assuming the outgoing boundary conditions:
\begin{equation}\label{boundary_cond}
\psi(r) \stackrel{r \to 0}{=} r^{l+1}, ~~~~
\psi(r) \stackrel{r \to \infty}{=} H^{+}_{l}(kr),
\end{equation}
where $H^{+}_{l}(kr)$ is an outgoing Hankel function (or Coulomb function for protons).
The resulting
s.p. energies $\epsilon_{\alpha}$ and the associated linear momenta ($k_{\alpha} = \sqrt{2\mu\epsilon_{\alpha}}/\hbar$) are in general complex.
As illustrated in Fig.~\ref{gsm_k_pic}, bound states are located on the positive imaginary momentum axis in the complex $k$-plane, whereas the resonances are
located in its fourth quadrant.
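The boundary condition \eqref{boundary_cond} makes the distinction between bound and resonant poles explicit: asymptotically $H^{+}_{l}(kr)\sim e^{ikr}$, so $|e^{ikr}| = e^{-{\rm Im}(k)\, r}$ decays for bound states (${\rm Im}\,k>0$) and grows for resonances (${\rm Im}\,k<0$). A two-line check with illustrative momenta (the values are ours, not the $0p_{3/2}$ pole of this work):

```python
import numpy as np

k_res = 0.36 - 0.12j      # illustrative resonance momentum (fourth quadrant, fm^-1)
k_bound = 0.0 + 0.50j     # illustrative bound-state momentum (positive imaginary axis)

r = np.array([5.0, 20.0])                     # two radii (fm)
out_res = np.abs(np.exp(1j * k_res * r))      # grows with r: not square integrable
out_bound = np.abs(np.exp(1j * k_bound * r))  # decays with r
```

This divergence of resonant wave functions is what necessitates the ``external" complex scaling used for normalization integrals and matrix elements.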
\begin{figure}[t]
\includegraphics[width=0.8\columnwidth]{Fig2}
\caption[T]{\label{gsm_k_pic}
(Color online) Berggren ensemble in the complex-$k$ plane used to generate the s.p. basis of the GSM.}
\end{figure}
The s.p. Hamiltonian also generates non-resonant states, which are solutions obeying scattering boundary conditions. The resonant and non-resonant states form a complete set (Berggren ensemble) \cite{Berggren,Lind,*Lind1}:
\begin{equation}\label{complet}
\sum_{b,r} | \psi_{b,r}^{\alpha} \rangle \langle \psi_{b,r}^{\alpha} | + \int_{L_{+}} dk |\psi_{k}^{\alpha} \rangle \langle \psi_{k}^{\alpha}| = 1,
\end{equation}
which constitutes the s.p. basis of the GSM. In Eq.~(\ref{complet}),
$b$ (bound) and $r$ (resonance) label the resonant states, and the non-resonant states are distributed
along a complex contour $L_+$. In our implementations,
the continuum integral is discretized using a Gauss-Legendre quadrature.
The shape of the contour is arbitrary; the practical condition is that the contour should
enclose the narrow resonances of a given partial wave. Additionally, the contour is extended up to a certain momentum cut-off $k_{\rm max}$.
The convergence of the results is then checked with respect to both the number of shells and the s.p. cut-off. For a sufficient number of points (shells), the basis (\ref{complet}) satisfies the
completeness relation to a very high accuracy.
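A contour of the type shown in Fig.~\ref{gsm_k_pic} can be discretized segment by segment with Gauss-Legendre quadrature. The sketch below (with illustrative vertices chosen by us) reproduces the bookkeeping; the weights are complex and carry the direction of each segment:

```python
import numpy as np

# Triangular contour 0 -> k1 -> k2 -> k_max in the complex k-plane (fm^-1);
# the dip below the real axis is meant to expose a narrow resonance pole.
vertices = [0.0 + 0.0j, 0.3 - 0.2j, 0.6 + 0.0j, 3.5 + 0.0j]

def discretize(vertices, n_per_segment=20):
    x, w = np.polynomial.legendre.leggauss(n_per_segment)
    ks, ws = [], []
    for a, b in zip(vertices[:-1], vertices[1:]):
        ks.append(0.5 * (b - a) * x + 0.5 * (a + b))  # nodes along the segment
        ws.append(0.5 * (b - a) * w)                  # complex quadrature weights
    return np.concatenate(ks), np.concatenate(ws)

k, w = discretize(vertices)
```

Path independence provides a basic sanity check: $\int_{L_+}dk = k_{\rm max}$ and $\int_{L_+}k\,dk = k_{\rm max}^2/2$ must hold for any choice of vertices.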
The total wave function is expanded in the complete set of the Berggren ensemble:
\small
\begin{eqnarray}\label{GSMwf}
&\Psi^{JM}(\bm{x}_{13},\bm{x}_{23}) = \sum_{\{lj\}}\sum_n \sum_m C_{\{lj\}}^{(n,m)}\psi^{(n)}_{l_{13}j_{13}}(r_{13})\psi^{(m)}_{l_{23}j_{23}}(r_{23}) \nonumber\\
&\times {\cal Y}_{\{lj\}}^{JMTT_z}(\bm{x}_{13},\bm{x}_{23}).
\end{eqnarray}
\normalsize
Comparing Eqs.~\eqref{GSMwf} and \eqref{wftot}, we notice that the GSM and CS-Slater wave functions differ by their radial parts.
The expansion coefficients $C_{\{lj\}}^{(n,m)}$ are determined variationally
from the eigenvalue problem:
\begin{equation}
\sum_{\alpha_1' \, \alpha_2'} \left( H_{\alpha_1 \alpha_2 \alpha_1' \alpha_2'} - E\, \delta_{\alpha_1 \alpha_1'}\delta_{\alpha_2 \alpha_2'} \right) C_{\alpha_1' \, \alpha_2'} = 0,
\end{equation}
where the $\alpha$ indices represent the s.p. $nlj$ quantum numbers.
Since the basis is in general complex, $H_{\alpha_1 \alpha_2 \alpha_1' \alpha_2'}$ is a non-Hermitian complex symmetric matrix.
The Berggren ensemble involves functions
which are not $L^2$-integrable. Consequently, normalization integrals and matrix elements of operators are
calculated via the ``external" complex scaling technique \cite{Gya71}.
The GSM Hamiltonian is given by Eq.~\eqref{relham2}.
The s.p. potential $V(r)=V_{13}(r)=V_{23}(r)$ represents the interaction between the $\alpha$-core and the neutron, and
$\mu=\mu_1=\mu_2$.
The same interaction $V(r)$ is also used to generate the s.p. basis
\eqref{gsm_sp_ham}.
\subsubsection{Two-body matrix elements in GSM}
Once the basis is generated, one needs to calculate the TBMEs in the Berggren basis.
Since the Berggren basis is obtained numerically,
the standard Brody-Moshinsky bracket technology \cite{Mosh1,*Mosh3,*Mosh2}, developed in the context of the harmonic oscillator (HO) s.p. basis, cannot be employed.
To overcome this difficulty, we expand the NN interaction in a truncated HO basis \cite{hagen_morten_michel}:
\begin{equation}
V_{NN} = \sum_{\alpha \beta \gamma \delta}^{n_{\rm max}} |\alpha \beta \rangle \langle \alpha \beta|V_{NN}| \gamma \delta \rangle \langle \gamma \delta |.
\label{HO_exp}
\end{equation}
The TBMEs in the Berggren ensemble are given by:
\begin{equation}\label{inter_ho_exp}
\langle \widetilde{ab}| V_{NN} | cd \rangle = \sum_{\alpha \beta \gamma \delta}^{n_{\rm max}} \langle \widetilde{ab}|\alpha \beta \rangle \langle \alpha \beta|V_{NN}| \gamma \delta \rangle \langle \gamma \delta | cd \rangle,
\end{equation}
where the Latin letters denote Berggren s.p. wave functions and Greek letters -- HO states.
Due to the Gaussian fall-off of HO states, no external complex scaling is needed for the calculation of the overlaps $\langle \alpha \beta | a b \rangle$. Moreover, matrix elements $\langle \alpha \beta|V_{NN}| \gamma \delta \rangle$ of the NN interaction in the HO basis can be conveniently calculated using the Brody-Moshinsky technique \cite{Mosh1,*Mosh3,*Mosh2}.
This method of treating the TBMEs of the interaction is similar to the technique based on a separable expansion of the potential \cite{gyar_kruppa}.
The HO basis depends on the oscillator length $b$, which is an additional parameter. However, as demonstrated in Refs.~\cite{hagen_morten_michel,Mic10}, the GSM eigenvalues and eigenfunctions converge for sufficiently large $n_{\rm max}$, and
the dependence of the results on $b$ is negligible. We shall return to this point in Sec.~\ref{res:energies} below.
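The logic of the truncated expansion \eqref{inter_ho_exp} can be illustrated in a schematic one-dimensional analogue: projecting a localized profile on a finite set of harmonic-oscillator eigenfunctions and checking convergence with the truncation (entirely our toy example, not the actual three-dimensional HO expansion of the NN interaction):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

x = np.linspace(-12.0, 12.0, 2401)
dx = x[1] - x[0]

def phi(n):
    """Normalized 1D harmonic-oscillator eigenfunction (hbar = m = omega = 1)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2.0) / sqrt(2.0**n * factorial(n) * sqrt(pi))

f = np.exp(-(x - 1.0)**2)        # localized test profile

def reconstruct(nmax):
    """Truncated completeness: sum_n <phi_n|f> phi_n(x)."""
    g = np.zeros_like(x)
    for n in range(nmax + 1):
        p = phi(n)
        g += np.sum(p * f) * dx * p
    return g

err8 = np.max(np.abs(reconstruct(8) - f))
err20 = np.max(np.abs(reconstruct(20) - f))
```

For a localized profile the truncation error falls off rapidly with the number of oscillator quanta, which is the mechanism behind the weak $n_{\rm max}$ and $b$ dependence quoted above.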
\subsubsection{Model space of GSM}
The CS and GSM calculations for the 0$^+$ g.s. of $^6$He have been performed in a model space of four partial waves: $p_{3/2}$, $p_{1/2}$, $s_{1/2}$, and $d_{5/2}$.
The Berggren basis consists of the 0$p_{3/2}$ resonant state, found at an energy of $0.737 -i0.292$\,MeV, and the $p_{3/2}$ complex contour needed to satisfy the Berggren completeness relation. The remaining partial waves
$p_{1/2}$, $s_{1/2}$, and $d_{5/2}$ are taken along the real axis.
Each contour is discretized with sixty points; hence, our one-body space consists of 241 neutron shells in total. Within such a basis, the results are independent of the contour extension in
$k$-space. For the present calculation we used $k_{\rm max} = 3.5$\,fm$^{-1}$. The finite-range Minnesota interaction was expanded in a set of HO states.
For the g.s., when a relatively large set of HO quanta is used, the dependence of the results on the HO parameter $b$ is negligible.
We took $b = 2$\,fm and we used all HO states with up to $n_{\rm max} = 18$ radial nodes. Since the $s$ wave
enters the Berggren ensemble, in order to satisfy the Pauli
principle between core and valence particles we project out the
Pauli forbidden $0s_{1/2}$ state ($b=1.4$\,fm) using the
Saito orthogonality-condition model \cite{Saito}.
For the excited unbound 2$^+$ state of $^6$He we limit ourselves to a $p_{3/2}$ model space. As concluded in Ref.~\cite{gsm_radii}, the structure of this state is dominated by a $(p_{3/2})^2$ parentage. Moreover, in this truncated
space the neutron radial density becomes less localized, since the 2$^+$ state
becomes less bound when the model space is reduced.
The width of this state increases from $\sim$250\,keV in the ($p_{3/2}$, $p_{1/2}$, $s_{1/2}$, $d_{5/2}$) space to $\sim$580\,keV in the truncated space of $p_{3/2}$ waves. Dealing with a broader resonance facilitates benchmarking against the CS back-rotation results and helps pin down the dependence on HO parameters in the GSM calculations. The $p_{3/2}$ continuum was discretized with a maximum of 60 points. This ensures fully converged results with respect
to the Berggren basis (both the number of discretization points and $k_{\rm max}$).
\section{Back rotation: from Complex Scaling to Gamow states}\label{regularization}
Even if the energies of resonant states in CS and GSM are the same, the wave functions
are different (see Eqs. \eqref{rot} and \eqref{backrot}). This implies that the respective expectation values of an observable
$\hat O$ in states $\Psi(\bm{r}_{13},\bm{r}_{23})$ and $\Psi_\vartheta(\bm{r}_{13},\bm{r}_{23})$ cannot be compared directly. Moreover, when the wave function $\Psi_\vartheta(\bm{r}_{13},\bm{r}_{23})$
is used, one has to deal with the transformed operator:
\begin{equation}\label{rot_operator}
\hat O_\vartheta = U(\vartheta)\hat O U(\vartheta)^{-1}.
\end{equation}
In some cases, it is straightforward to derive the transformed operator. For instance,
in the calculation of the root-mean-square radius, the transformed operator is
$e^{2i\vartheta}\bm{r}_{13}^2+e^{2i\vartheta}\bm{r}_{23}^2$. The transformed recoil operator is given by $-e^{-2i\vartheta}\frac{\hbar^2}{m_3}\nabla_{\bm{r}_{13}}\nabla_{\bm{r}_{23}}$, and the angular correlation function is the mean value of the operator $\delta(\theta_{12}-\theta)$, where
$\theta_{12}$ is the angle between the vectors $\bm{r}_{13}$ and $\bm{r}_{23}$.
For the radial density, the situation is not that simple and we shall discuss this point in the following.
In order to retrieve the Gamow wave function of the original Schr\"{o}dinger equation, it is tempting to carry out a direct back-rotation of the CS wave function (\ref{wftot}):
\begin{eqnarray}\label{wfbackrot}
& e^{-i3\vartheta}\sum_{\{lj\}}\sum_{A} C_{A}^{\{lj\}}(\vartheta) \chi_{A}^{\{lj\}}(e^{-i\vartheta}r_{13},e^{-i\vartheta}r_{23})\nonumber\\
& \times {\cal Y}_{\{lj\}}^{JMTT_z}(\bm{x}_{13},\bm{x}_{23}) .
\end{eqnarray}
It turns out, however, that this method is numerically unstable. Even for one particle moving in a potential well,
the direct back rotation leads to large unphysical oscillations in the wave
function \cite{atkpal,backroterror}. To prevent this, a proper regularization procedure needs to be applied \cite{chu08,chu09}.
The radial density is defined as the mean value of the operator:
\begin{equation}\label{denop}
\frac{1}{2}\left[\delta(r_{13}-r)+\delta(r_{23}-r)\right].
\end{equation}
Using the CS wave function (\ref{wfbackrot}) and the Slater-type radial basis functions
(\ref{radform}), the density can be cast into the form:
\begin{equation}
\rho_\vartheta(r)=r^2\sum_j C_j(\vartheta) r^{n_j} \exp(-a_j r),
\end{equation}
where the coefficients $C_j(\vartheta)$ are related to the linear expansion coefficients of Eq.~\eqref{wftotcs},
obtained from the diagonalization
of the complex-scaled Hamiltonian \eqref{rot_Shroed}.
If we consider the direct back-rotated wave function, the radial density is given by:
\begin{equation}\label{fdefcs}
\rho^{\rm back}_\vartheta(r)=e^{-i\vartheta}\tilde\rho_\vartheta(e^{-i\vartheta}r),
\end{equation}
where
\begin{equation}\label{fdef}
\tilde\rho_\vartheta(r)=r^2\sum_j C_j(\vartheta) r^{n_j} \exp(-a_j r).
\end{equation}
The factor $r^2$ comes from the volume element when the Dirac-delta function in
(\ref{denop}) is integrated.
We shall see that the density
calculated in this way leads to extremely inaccurate results. In the following, we briefly show how to calculate the density of the original Gamow state using the CS wave function. Illustrative numerical examples will be presented in Sec.~\ref{density_tikhonov}.
We may consider Eq.~(\ref{fdef}) as the definition of a function along the non-negative real axis,
and $\tilde\rho_\vartheta(e^{-i\vartheta}r)$ can be viewed as an attempt to extend (\ref{fdef}) into the complex plane. However, since the coefficients $C_j(\vartheta)$ obtained numerically are not accurate enough, and since the Slater expansion is always truncated, the analytical continuation
of $\tilde\rho_\vartheta$ is not a simple task. To find a stable solution, we apply a method
based on the theory of Fourier transforms.
We first extend $\tilde\rho_\vartheta(r)$ from $(0,\infty)$ to $(-\infty,\infty)$ by means of the mapping:
\begin{equation}\label{fdeftrans}
f_\vartheta(x)=\tilde\rho_\vartheta(r_0 e^{-x}).
\end{equation}
The Fourier transform of (\ref{fdeftrans}) is:
\begin{eqnarray}\label{fourier1}
&\hat f_\vartheta(\xi)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-i x \xi}f_\vartheta (x) \,dx = \nonumber \\
&=\frac{1}{\sqrt{2\pi}}\sum_j C_j(\vartheta) r_0^{n_j+2}\frac{\Gamma(n_j+2+i\xi)}{(r_0 a_j)^{n_j+2+i\xi}},
\end{eqnarray}
where $\xi$ and $x$ are dimensionless variables.
In practice, $\hat f_\vartheta$ is determined with some numerical error, which results in the appearance of high-frequency oscillations in $f_\vartheta$.
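Since Eq.~\eqref{fourier1} is closed-form, the forward transform requires no numerical quadrature; the only nontrivial ingredient is the Gamma function of complex argument. The sketch below is our illustration (not code from this work), using a pure-Python Lanczos approximation for $\Gamma(z)$.

```python
import cmath
import math

# Standard Lanczos coefficients (g = 7, n = 9).
_LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Gamma function of complex argument via the Lanczos approximation."""
    z = complex(z)
    if z.real < 0.5:
        # reflection formula: Gamma(z) Gamma(1-z) = pi / sin(pi z)
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1.0 - z))
    z -= 1.0
    x = _LANCZOS[0]
    for i in range(1, 9):
        x += _LANCZOS[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2.0 * math.pi) * t**(z + 0.5) * cmath.exp(-t) * x

def hat_f(xi, C, n, a, r0=1.0):
    # Closed-form Fourier transform of f(x) = rho~(r0 e^{-x}), Eq. (fourier1):
    #   hat_f(xi) = (2 pi)^{-1/2} sum_j C_j r0^{n_j+2}
    #               * Gamma(n_j+2+i xi) / (r0 a_j)^{n_j+2+i xi}
    s = 0.0 + 0.0j
    for Cj, nj, aj in zip(C, n, a):
        p = nj + 2.0 + 1j * xi
        s += Cj * r0**(nj + 2) * cgamma(p) / (r0 * aj)**p
    return s / math.sqrt(2.0 * math.pi)
```

For example, $\hat f_\vartheta(0)$ for a single term with $C_1=1$, $n_1=0$, $a_1=1$, and $r_0=1$ reduces to $\Gamma(2)/\sqrt{2\pi}$.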
Now we shall apply the Tikhonov smoothing \cite{Tikh_orig} to $f_\vartheta (x+iy)$. To this end, we perform
the analytical continuation of $f_\vartheta(x)$ to
the complex plane $x+iy$ \cite{chu08}:
\begin{equation}\label{fourier_analytic_cont}
f_\vartheta(x+iy)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty d\xi\, e^{i (x+iy) \xi}\hat f_\vartheta (\xi).
\end{equation}
The Tikhonov regularization \cite{chu09} removes the ultraviolet noise in \eqref{fourier_analytic_cont} by introducing a smoothing function:
\begin{eqnarray}\label{tikh_formula}
f^{reg}_\vartheta(x+iy) &=& \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{i (x+iy) \xi} \nonumber \\
&\times& \frac{\hat f_\vartheta (\xi)}{1+\kappa e^{-2y\xi}} d\xi,
\end{eqnarray}
where $\kappa$ is the Tikhonov smoothing parameter. In the actual calculation we take $x=-\ln(r/r_0)$, $y=\vartheta$, and $r_0=1$\,fm.
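In practice, the regularized inversion \eqref{tikh_formula} is a single one-dimensional quadrature over $\xi$, with the regulator $[1+\kappa e^{-2y\xi}]^{-1}$ damping the large negative-$\xi$ region. The sketch below is our illustration (not code from this work); it uses the inverse-transform phase $e^{i(x+iy)\xi}$, conjugate to the forward transform \eqref{fourier1}, and checks the quadrature on a Gaussian transform pair, for which $f(0)=1$ is known analytically.

```python
import numpy as np

def f_regularized(x, y, hat_f, kappa, xi):
    """Tikhonov-regularized inversion, Eq. (tikh_formula):
    f_reg(x+iy) = (2 pi)^{-1/2} * Int e^{i(x+iy) xi} hat_f(xi)
                  / (1 + kappa * e^{-2 y xi}) d xi.
    In the application, x = -ln(r/r0) and y = theta."""
    g = (np.exp(1j * (x + 1j * y) * xi) * hat_f(xi)
         / (1.0 + kappa * np.exp(-2.0 * y * xi)))
    # trapezoidal rule on the xi grid
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(xi)) / np.sqrt(2.0 * np.pi)

# Sanity check on a Gaussian pair: hat_f(xi) = exp(-xi^2/2) <-> f(x) = exp(-x^2/2),
# evaluated on the real axis (y = 0) with a tiny kappa.
xi = np.linspace(-10.0, 10.0, 2001)
val = f_regularized(0.0, 0.0, lambda s: np.exp(-s**2 / 2.0), 1e-8, xi)
```

For $y>0$ the regulator suppresses the $\xi\to-\infty$ tail, where the factor $e^{-y\xi}$ would otherwise amplify the numerical noise in $\hat f_\vartheta$.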
\section{Results}\label{results}
For the neutron-core interaction we employ the KKNN potential \cite{KKNN} and the interaction between the valence neutrons is approximated by the Minnesota force \cite{LeMere_Tang}.
We study the convergence properties of the CS-Slater method not only for the energies of the $0^+_1$ and $2^+_1$ states of $^6$He and their individual energy components, but also for radial properties and spatial correlations.
\subsection{Energies}\label{res:energies}
According to \eqref{relham2} the total Hamiltonian of $^6$He is the sum of one-body terms $H_{13}(\bm{r}_{13})+H_{23}(\bm{r}_{23})$ and two-body terms
$-\frac{\hbar^2}{m_3}\nabla_{\bm{r}_{13}}\nabla_{\bm{r}_{23}}+V_{12}(\bm{r}_{12})$.
\begin{figure}[htb]
\includegraphics[width=0.8\columnwidth]{total_one_two}
\caption[T]{\label{conv_Nmax}
(Color online) Convergence of the $^6$He total g.s. energy, two-body, and one-body terms, with respect to the number of
Slater orbitals $N_{\rm S}$ for $\alpha=\beta=0.8$.}
\end{figure}
Figure~\ref{conv_Nmax} illustrates the convergence of the CS energies with respect to the basis size $N_{\rm S} \ge n+m$ (see Eq.~\eqref{radform} for notation). A similar type of restriction was used in Refs.~\cite{drake1,drake2} in order to avoid the linear dependence of the basis functions. For the non-linear parameters of the Slater basis we assumed the value $\alpha=\beta=0.8$.
The dependence on the Slater basis parameter $\alpha$ is shown in
Fig.~\ref{non_lin_dep} for $N_{\rm S}=27$.
In Figs.~\ref{conv_Nmax} and \ref{non_lin_dep}, horizontal solid lines correspond to GSM results. The maximum difference between CS and GSM energies is of the order of 2 keV for the total energy,
two-body, and one-body terms. As can be seen in Fig.~\ref{non_lin_dep}, the two-body and one-body terms have no minima with respect to $\alpha$. This is expected, as it is the total energy that is supposed to exhibit a variational minimum, not its individual contributions.
\begin{figure}[h!]
\includegraphics[width=0.8\columnwidth]{total_two_one_var}
\caption[T]{\label{non_lin_dep}
(Color online) Similar to Fig.~\ref{conv_Nmax} but versus
the non-linear Slater basis parameter $\alpha=\beta$
for $N_{\rm S}=27$.}
\end{figure}
The two-body and one-body terms coincide with the GSM result at a slightly different variational parameter ($\alpha \sim 1.1$) than the one corresponding to the
minimum of the total energy ($\alpha = 1.5$). Nevertheless, the difference at the minimum is very small, of the order of 2 keV.
Table \ref{Tab:1} displays the energy budget for the bound g.s. configuration of $^6$He in the GSM and CS methods. Even though it is not necessary to use CS
for a bound state, we also show values for $\vartheta = 0.2$, for reasons explained in Sec.~\ref{density_tikhonov}.
In this case, the expectation value of the transformed operator $\hat O_\vartheta = U(\vartheta)\hat O U(\vartheta)^{-1}$ was computed.
Excellent agreement is obtained between GSM and both CS variants, not only for the total energy but also for {\em all} Hamiltonian terms.
\begin{table}[ht]
\caption{\label{Tab:1} Energy decomposition of $^6$He g.s. Values are in MeV. }
\begin{ruledtabular}
\begin{tabular}{lrrrr}
$\langle \hat{O} \rangle$ & GSM~\strut & CS ($\vartheta = 0$) & CS ($\vartheta = 0.2$)~~~~~\strut \\
\hline
$\langle \, \hat{H} \rangle$ & $-$0.249 & $-$0.24\textcolor{blue}{7}~~~\strut & $-0.24\textcolor{blue}{7} + i1.1\times 10^{-3}$ \\
$\langle \, \hat{T} \rangle$ & 24.729 & 24.7\textcolor{blue}{31}~~~\strut & $24.7\textcolor{blue}{33} - i7.27\times 10^{-3}$ \\
$\langle \, V_{c-n} \rangle$ & $-$21.642 & $-$21.64\textcolor{blue}{5}~~~\strut & $-21.64\textcolor{blue}{7} + i4.76\times 10^{-3}$ \\
$\langle \, V_{nn} \rangle$ & $-$2.711 & $-$2.71\textcolor{blue}{0}~~~\strut & $-2.71\textcolor{blue}{0} + i3.11\times 10^{-3}$ \\
$\langle \, \frac{ \vec{p_{1}} \cdot \vec{p_2}}{m_3} \rangle$ & $-$0.625 & $-$0.62\textcolor{blue}{3}~~~\strut & $-0.62\textcolor{blue}{3} + i5.04\times 10^{-3}$
\end{tabular}
\end{ruledtabular}
\end{table}
We now move on to the 2$^{+}$ unbound excited state of $^6$He.
To assess the accuracy of computing this state in GSM, we test the sensitivity of calculations to the HO expansion \eqref{inter_ho_exp}. It is worth noting that in the GSM only the two-body interaction and recoil term are treated within the HO expansion.
The kinetic term is calculated in the full Berggren basis; hence, the system maintains the correct asymptotic behavior. Moreover, for the $2^+$ state
in the $p_{3/2}$ model space, the recoil term vanishes.
\begin{figure}[h!]
\includegraphics[width=0.8\columnwidth]{N_max_HO_expansion_total}
\caption[T]{\label{nmax_dependence}
(Color online) Dependence of the energy (a) and width (b) of the unbound 2$^+_1$ state in $^6$He calculated with GSM on the HO expansion parameters $n_{\rm max}$ and $b$ (=1.2, 1.5, 2.0, and 2.4\,fm) in Eq.~\eqref{inter_ho_exp}. The CS-Slater result is marked by a dotted line.}
\end{figure}
The resonance position in the CS-Slater method is determined with the $\vartheta$-trajectory method.
Figure~\ref{nmax_dependence} displays the result of our tests.
Overall, we see a weak dependence of the energy and width of the 2$^+$ state predicted in GSM on the HO expansion parameters $n_{\rm max}$ and $b$. The increase of $n_{\rm max}$
from 6 to 28 results in energy (width) change of $\sim$20\,keV ($\sim$10\,keV). With increasing $n_{\rm max}$,
the results become less dependent on the
oscillator length $b$.
For the real part of the energy, some stabilization appears at large values of $n_{\rm max}$,
but the pattern differs for different values of $b$.
The most stable results are obtained with $b=2$\,fm, where we find a broad plateau for both the energy and the energy modulus \cite{Moi78,moiseyev,Rot09} for $n_{\rm max}>16$. We adopt the value of $b_{\rm opt}=2$\,fm for the purpose of further benchmarking.
The pattern for the width is similar, with no clear
plateau but very small differences at large $n_{\rm max}$. Such a behavior is not unexpected. While the variational arguments apply to the trial wave function rather than to the interaction \cite{Moi78,moiseyev,Rot09}, one
can demonstrate \cite{hagen_morten_michel,Mic10} that even though the matrix elements converge weakly
with $n_{\rm max}$, the eigenvectors and energies converge strongly. However, the actual convergence is very slow for broad resonant states.
Based on our tests presented in Fig.~\ref{nmax_dependence}, we conclude that the numerical error of GSM due to the HO expansion on the energy and width of the $2^+_1$ resonance in $^6$He is $\sim 2$\,keV. This accuracy is more than sufficient for the CS-GSM benchmarking.
Table \ref{Tab:2} displays the energy budget for the unbound 2$^+_1$ state of $^6$He.
\begin{table}[ht]
\caption{\label{Tab:2} Similar to Table~\ref{Tab:1} but for the 2$^+_1$ resonance.
In GSM calculations, we used $b_{\rm opt}=2$\,fm and $n_{\rm max}=20$ (GSM$_{\rm I}$)
and $n_{\rm max}=24$ (GSM$_{\rm II}$). The optimal scaling angle $\vartheta_{\rm opt}=0.43$ was obtained with the $\vartheta$-trajectory method.
}
\begin{ruledtabular}
\begin{tabular}{crrrr}
$\langle \hat{O} \rangle$ & CS ($\vartheta$ = $\vartheta_{\rm opt}$) & GSM$_{\rm I}$~~~~~\strut & GSM$_{\rm II}$~~~~~\strut \\
\hline
$\langle \, \hat{H} \rangle$ & $1.239 - i0.291$ & $1.239 - i0.29\textcolor{blue}{2}$ & $1.239 - i0.29\textcolor{blue}{0}$ \\
$\langle \, \hat{T} \rangle$ & $17.340 - i7.949$ & $17.3\textcolor{blue}{11} - i7.\textcolor{blue}{825}$ & $17.\textcolor{blue}{221} - i7.\textcolor{blue}{766}$ \\
$\langle \, V_{c-n} \rangle$ & $-15.831 + i7.408$ & $-15.8\textcolor{blue}{05} + i7.\textcolor{blue}{288}$ & $-15.\textcolor{blue}{717} + i7.\textcolor{blue}{231}$ \\
$\langle \, V_{nn} \rangle$ & $-0.270 + i0.250$ & $-0.2\textcolor{blue}{67} + i0.2\textcolor{blue}{44}$ & $-0.2\textcolor{blue}{65} + i0.2\textcolor{blue}{44}$
\end{tabular}
\end{ruledtabular}
\end{table}%
We show two variants of GSM calculations in which the interaction was expanded in a HO basis with $b_{\rm opt} = 2$\,fm and $n_{\rm max}$ = 20 (GSM$_{\rm I}$) and 24
(GSM$_{\rm II}$).
The real parts of the total energy are identical in both methods up to the third digit, and the imaginary parts up to the second digit.
For the other parts of the Hamiltonian, the results are not as precise as for the g.s. calculations in Table \ref{Tab:1}; nevertheless, we obtain an overall satisfactory agreement, and for the total complex energy the agreement is excellent. The benchmarking results presented in this section demonstrate the equivalence of the GSM and CS-Slater methods for energies of bound and unbound resonance states. In the following, we shall see that this equivalence also holds for the many-body wave functions.
\subsection{One-body densities} \label{density_tikhonov}
To assess the quality of wave functions calculated with GSM and CS-Slater, we first calculate the radial one-neutron density of the g.s. of $^6$He.
Figure~\ref{den_gsm_cs_ho} shows that both methods are consistent with each other and they correctly predict exponential fall-off at large distances.
We also display the one-neutron density obtained with the radial part
of the wave function \eqref{wftot} spanned by the radial HO basis states with
$b=2$\,fm and $n_{\rm max}=18$. As expected, the HO result falls off too quickly at very large distances due to the incorrect asymptotic behavior.
\begin{figure}[h!]
\includegraphics[width=0.8\columnwidth]{density_GSM_CS_HO}
\caption[T]{\label{den_gsm_cs_ho}
(Color online) Ground-state one-neutron radial density in $^6$He predicted with GSM, CS-Slater, and HO basis sets.}
\end{figure}
The g.s. of $^6$He is a bound state; hence, its description does not require a complex rotation of the Hamiltonian. Nevertheless, it is instructive to study the effect of CS on its radial properties.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{density_theta_0_01}
\caption[T]{\label{den_theta}
(Color online) Ground-state one-neutron radial density in $^6$He predicted in CS-Slater using $\vartheta=0$ (dotted line) and 0.1 (solid line). The back-rotated $\vartheta=0.1$ result is marked by a dashed line.
}
\end{figure}
Figure~\ref{den_theta} shows the g.s. one-neutron density obtained in CS-Slater using $\vartheta=0.1$.
For comparison we also display the
unscaled ($\vartheta=0$) density of Fig.~\ref{den_gsm_cs_ho}.
We see that the one-particle density is $\vartheta$-dependent and
for $\vartheta>0$ it acquires an imaginary part.
Since the integral of the density is normalized to 1, the integral
of the imaginary part should be zero. This
was checked numerically to be indeed the case. Since the back-rotated density should be equivalent to the unscaled or GSM one, its imaginary part should vanish. However, as seen in Fig.~\ref{den_theta}, the imaginary part of the back-rotated density at $\vartheta=0.1$ is nonzero. This is indicative of serious problems with
back-rotation in CS when this method is applied directly \cite{backroterror,atkpal}.
In order to investigate back-rotation in more detail, we consider the $2^+_1$ resonance in $^6$He. As in Sec.~\ref{res:energies}, we limit ourselves to a $p_{3/2}$ model space to better see the effect of back-rotation; by adding more partial waves, the $2^+$ state becomes more localized and the CS density resembles the GSM result.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{density_2plus_CS_GSM}
\caption[T]{\label{GSM_vs_L2_state}
(Color online) Real part of one-neutron radial density for the unbound 2$^+$ state in $^6$He obtained in GSM (solid line) and CS-Slater ($\vartheta_{\rm opt}=0.43$).}
\end{figure}
The one-body density derived from the rotated CS solution is very different from
the GSM density, see Fig.~\ref{GSM_vs_L2_state}. As the theory implies, the CS density is localized, and the degree of localization increases with
$\vartheta$ \cite{backroterror}. To compare with the GSM density, which has outgoing
asymptotics, we need to back-rotate the CS radial density.
The comparison of the back-rotated CS-Slater and GSM 2$^+$-state densities is presented in Figs.~\ref{real_tikh} and \ref{imag_tikh}.
Here the problem with the back-rotated CS density is far more pronounced
than for the g.s. case shown in Fig.~\ref{den_theta}: at $r>2$\,fm, the real part of the back-rotated density exhibits unphysical oscillations. The magnitude of those oscillations grows with $\vartheta$, even if the basis size is increased. The situation is even worse for the imaginary part of the density, which does not resemble the GSM density at $r>1$\,fm.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{Fig3}
\caption[T]{\label{real_tikh}
(Color online) Real part of one-neutron radial density for the unbound 2$^+$ state in $^6$He obtained in the back-rotated
(dashed line) and Tikhonov-regularized-back-rotated (solid) CS-Slater method at $\vartheta_{\rm opt}$. The GSM density is marked by a dotted line.}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{Fig4}
\caption[T]{\label{imag_tikh}
(Color online) Similar to Fig.~\ref{real_tikh} but for the imaginary part of the density.}
\end{figure}
The numerical
instability of the back-rotated CS wave functions is an example of an ill-posed inverse problem~\cite{Tikh_book}.
The amplitudes of the wave function \eqref{fdef} are determined numerically, and the associated errors
are amplified during the back-rotation \eqref{fdefcs} causing instabilities seen in Figs.~\ref{real_tikh} and \ref{imag_tikh}.
Consequently, one needs a regularization method to minimize the errors that propagate from the coefficients to the solution.
In this paper, we apply
the Fourier analytical continuation and Tikhonov regularization procedure \cite{Tikh_orig,chu09} described in Sec.~\ref{regularization}.
We first investigate the Fourier transform \eqref{fourier_analytic_cont}, which provides an analytical continuation of the density.
If the integral is performed over the full interval $(-\infty,\infty)$, the analytically-continued density also
exhibits unwanted oscillations. Indeed, at large negative values of $\xi$ in \eqref{fourier_analytic_cont}, the exponent may become
very large, amplifying numerical errors of the Fourier transform $\hat f_{\vartheta}(\xi)$ and causing
numerical instabilities. For this reason, we cut the lower range of $\xi$ in
\eqref{fourier_analytic_cont} to obtain the expression for the Fourier-regularized function:
\begin{equation}\label{fourier_analytic_cont_cut}
f_\vartheta(x+iy)=\frac{1}{\sqrt{2\pi}}\int_{\Lambda_{\xi}}^\infty e^{i(x+iy)\xi}\hat f_\vartheta (\xi) d\xi.
\end{equation}
Figure~\ref{fourier_cut} compares the GSM density of the 2$^+$ resonance in $^6$He with back-rotated CS-Slater densities using the Fourier-regularized analytical continuation.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{Fourier_GSM_densities}
\caption[T]{\label{fourier_cut}
(Color online) Real part of one-neutron radial density for the 2$^+$ resonance in $^6$He obtained in back-rotated CS-Slater
using the Fourier-regularized analytical continuation with $\Lambda_{\xi}=-8$ (solid line) and $\Lambda_{\xi}=-16$ (dashed line). The GSM density is marked by a dotted line.}
\end{figure}
By taking the cutoff parameter $\Lambda_{\xi}=-8$ we obtain a density that is
almost identical to that of the GSM. With $\Lambda_{\xi}=-16$,
the analytically-continued density starts to oscillate around the GSM result; for even larger negative values of the cutoff, the high-frequency components become amplified and one eventually recovers the highly-fluctuating direct back-rotation result of Fig.~\ref{real_tikh}.
In the Tikhonov method, the sharp cutoff $\Lambda_{\xi}$ is replaced by a smooth cutoff (or filter) characterized by a smoothing parameter $\kappa$. In Eq.~\eqref{tikh_formula} this has been achieved by means of the damping function (regulator) $[1+\kappa\exp(-2y\xi)]^{-1}$ that attenuates large negative values of $\xi$, with the parameter $\kappa$ controlling the degree of regularization. The functional form of the regulator is not
unique; it depends on the nature of the problem. In the applications presented in this study, the analytically-continued density coincides with the $\vartheta$-independent result for $\kappa$ = 4$\times$10$^{-4}$, which also corresponds to the original resonant GSM solution. The results presented in Figs.~\ref{real_tikh} and \ref{imag_tikh} demonstrate that both real and imaginary parts of the resonance's density obtained in the Tikhonov-regularized CS-Slater method are in excellent agreement with the GSM result.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{Fourier_Tikhonov_Integrand2}
\caption[T]{\label{Integrand_tikhonov}
(Color online) The real part of the integrand in Eq.~\eqref{tikh_formula}, calculated at
$r = 20$\,fm, $\vartheta_{\rm opt}=0.43$, and $\kappa$=0, 4$\times$10$^{-7}$, and 4$\times$10$^{-4}$. To see the detailed behavior at small negative values of $\xi$,
the region of $-18\le$ $\xi$ $\le -1$ is shown in the inset.}
\end{figure}
To understand in more detail the mechanism behind the Tikhonov regularization, in Fig.~\ref{Integrand_tikhonov} we display the real part of the integrand in \eqref{tikh_formula} at $r = 20$\,fm, $\vartheta_{\rm opt}=0.43$, and $\kappa=0$ (no regularization), $\kappa=4\times10^{-7}$, and $\kappa=4\times10^{-4}$.
In the absence of the regulator, at $\xi<-10$ the integrand exhibits oscillations with increasing amplitude. For $\xi > -8$, all three variants of the calculation are very close; this explains the excellent agreement between the GSM and back-rotated CS results
in Fig.~\ref{fourier_cut} with $\Lambda_{\xi}=-8$.
In short, with the Tikhonov method, large values of the integrand at large negative values of $\xi$ are suppressed, thus enabling us
to obtain an excellent reproduction of the resonant density in GSM.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{Tikhonov_GSM}
\caption[T]{\label{Tikhonov_method}
(Color online)
Real part of one-neutron radial density for the 2$^+$ resonance in $^6$He obtained in back-rotated CS-Slater method using the Tikhonov regularization with several values of smoothing parameter $\kappa$. }
\end{figure}
It is instructive to study the behavior of the analytically continued back-rotated CS density for different Tikhonov regularization parameters $\kappa$.
As mentioned earlier, the value $\kappa$ = 4$\times$10$^{-4}$ was found to be optimal, i.e., it produces the CS-Slater densities that are closest to those of GSM. As seen in Fig.~\ref{Tikhonov_method},
for smaller values of $\kappa$, the damping is too weak to eliminate the oscillations at large negative $\xi$ values. This is also seen in Fig.~\ref{Integrand_tikhonov}, where for $\kappa$ = 4$\times$10$^{-7}$
unwanted oscillations of the integrand appear around $\xi \sim -16$.
For larger values of $\kappa$, the integral is over-regulated and produces a suppressed density profile.
Similar patterns of $\kappa$-dependence have been found in other studies \cite{tikh_param1,tikh_param2,tikh_param3,tikh_param4}.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{fixed_density_vs_kappa_real_imag}
\caption[T]{\label{kappa_plateau}
(Color online)
Real (a) and imaginary (b) parts of one-neutron radial density at $r = 3$ and 6 \,fm for the $2^+$ resonance in $^6$He, as a function
of the Tikhonov regularization parameter $\kappa$. In an intermediate region of $\kappa$ values (grey shading), plateaus appear that coincide with the GSM results.}
\end{figure}
The behavior seen in Fig.~\ref{Tikhonov_method} suggests a way to determine the optimal value of the smoothing parameter $\kappa$ regardless of the availability of the GSM result. The idea behind our method is presented in Fig.~\ref{kappa_plateau},
which shows the values of $\rho(r)$ at two chosen large distances $r_\kappa$ (here $r_\kappa=3$ and 6\,fm) versus $\kappa$ over a fairly broad range. As expected, at large and small values of $\kappa$, $\rho(r_\kappa)$ varies strongly with the Tikhonov smoothing parameter. At intermediate values, however, a plateau in $\rho(r_\kappa)$ appears that nicely coincides with the GSM results. Our optimal choice, $\kappa_{\rm opt} = 4\times10^{-4}$, belongs to this plateau.
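The plateau criterion can be automated: scan $\kappa$ on a logarithmic grid and pick the value where $\rho(r_\kappa)$ is least sensitive to $\log\kappa$. Below is a sketch of such a selector (our illustration; the toy curve only mimics the generic pattern of Fig.~\ref{kappa_plateau} and is not the actual $^6$He density). The local logarithmic slope is window-averaged before minimization, so that accidental slope zeros inside the oscillatory small-$\kappa$ region are not mistaken for a plateau.

```python
import numpy as np

def plateau_kappa(kappas, values, window=21):
    """Return the kappa at which |d ln(rho) / d ln(kappa)|,
    averaged over a sliding window, is minimal."""
    slope = np.abs(np.gradient(np.log(np.abs(values)), np.log(kappas)))
    kernel = np.ones(window) / window
    smooth = np.convolve(slope, kernel, mode='same')
    return kappas[np.argmin(smooth)]

# Toy rho(r_kappa) vs kappa: strong oscillations at small kappa,
# a plateau at intermediate kappa, over-regulated suppression at large kappa.
kappas = np.logspace(-9, 1, 201)
osc = (0.8 * np.sin(2 * np.pi * np.log10(kappas))
       * (1e-6 / kappas) / (1.0 + 1e-6 / kappas))
vals = (1.0 + osc) / (1.0 + kappas / 1e-2)
k_opt = plateau_kappa(kappas, vals)
```

For this toy curve the selector lands inside the flat intermediate region, in line with the plateau seen in Fig.~\ref{kappa_plateau}.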
\subsection{Two-body angular densities}
The two-body density contains information about two-neutron correlations. It is defined as \cite{cor_def,*cor_def2,*cor_def3}:
\begin{equation}\label{formal_def}
\rho(\bm{r},\bm{r^{\prime}}) = \langle \Psi | \delta(\bm{r}-\bm{r}_1)\delta(\bm{r^{\prime}} - \bm{r}_2) | \Psi \rangle.
\end{equation}
In spherical coordinates, it is convenient to introduce \cite{gsm_radii}
\begin{equation}\label{cor_den}
\rho(r,r^{\prime},\theta) = \langle \Psi |\delta(r_1-r)\delta(r_2-r^{\prime})\delta(\theta_{12}-\theta)|\Psi\rangle,
\end{equation}
with $r_1$ ($r_2$) being the distance between the core and the
first (second) neutron, and $\theta_{12}$ the opening angle between the
two neutrons. The density $\rho(r,r^{\prime},\theta)$
differs from the two-particle density \eqref{formal_def} by the absence of the
Jacobian $8\pi^2 r^2 r'^2 \sin\theta$. Consequently, the two-body density is normalized according to
\begin{equation}
\int\rho(r,r^{\prime},\theta) drdr'd\theta = 1.
\end{equation}
In practical applications, \eqref{cor_den} is calculated and plotted for $r = r^{\prime}$.
By parametrizing the wave function in terms of the distance $r$ from the core nucleus and the angle $\theta$ between the valence particles,
one is able to investigate the particle correlations in the halo nucleus. Such calculations were performed recently \cite{gsm_radii}
to explain the observed charge radii differences in helium halo nuclei \cite{laser_prob}.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{density_theta_gs}
\caption[T]{\label{theta_den_gs}
(Color online) Angular two-neutron density for the $^6$He g.s. predicted in GSM and CS-Slater. }
\end{figure}
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{density_theta_cor_2plus}
\caption[T]{\label{theta_den_exc}
(Color online) Similar to Fig.~\ref{theta_den_gs} but for the 2$^+$ resonance. }
\end{figure}
To study angular correlations between valence particles, we introduce the angular density:
\begin{equation}\label{ang_den}
\rho(\theta_{12}) = \int \, dr \int \, dr' \rho(r,r',\theta_{12}).
\end{equation}
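The integrals in Eq.~\eqref{ang_den} reduce to a double radial quadrature at each $\theta_{12}$. The sketch below (our illustration with a separable, normalized toy density rather than the GSM/CS-Slater wave functions) performs this quadrature on a grid and verifies the normalization $\int\rho(\theta_{12})\,d\theta_{12}=1$.

```python
import numpy as np

def angular_density(rho, r, theta):
    """rho(theta_12) = Int dr Int dr' rho(r, r', theta_12), Eq. (ang_den),
    evaluated with trapezoidal weights on a uniform radial grid."""
    w = np.ones_like(r)
    w[0] = w[-1] = 0.5
    w = w * (r[1] - r[0])
    R, Rp = np.meshgrid(r, r, indexing='ij')
    return np.array([w @ rho(R, Rp, t) @ w for t in theta])

# Separable toy density, normalized so that Int rho dr dr' dtheta = 1:
g = lambda r: 4.0 * r**2 * np.exp(-2.0 * r)   # Int_0^inf g(r) dr = 1
h = lambda t: 0.5 * np.sin(t)                 # Int_0^pi  h(t) dt  = 1
rho_toy = lambda R, Rp, t: g(R) * g(Rp) * h(t)

r = np.linspace(0.0, 12.0, 301)
theta = np.linspace(0.0, np.pi, 181)
rho_theta = angular_density(rho_toy, r, theta)
total = np.sum(0.5 * (rho_theta[1:] + rho_theta[:-1]) * np.diff(theta))
```

The same routine applies to a numerically tabulated $\rho(r,r',\theta)$, since only grid values of the density enter the quadrature.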
Figures~\ref{theta_den_gs} and \ref{theta_den_exc} display $\rho(\theta_{12})$
for the g.s. and $2^+_1$ resonance, respectively.
The agreement between GSM and CS-Slater is excellent.
It is worth noting that
the calculation of the angular density in the CS-Slater framework does not require back-rotation. Indeed, since the CS operator \eqref{complex_rot1} acts only on the radial coordinates, once
they are integrated out one obtains the unscaled result.
\section{Conclusions}\label{concl}
In this work, we introduce a new, efficient CS method in a Slater basis to treat open many-body systems. We apply the new technique to the two-neutron halo nucleus $^6$He treated as a three-body problem. The interaction between the valence neutrons is modelled by a finite-range Minnesota force.
To benchmark the new method, we computed the weakly bound g.s. and $2^+_1$ resonance in $^6$He in both CS-Slater and GSM.
We carefully studied the numerical accuracy of both methods and found it more than sufficient for the purpose of benchmarking. Based on our tests,
we find both approaches equivalent for all the quantities studied.
In a parallel development \cite{masui1,masui}, the CS method in a Gaussian basis \cite{Hiyama}
has been compared with GSM for $^6$He and $^6$Be and a good overall agreement has been found.
An important aspect of our study was the application of the
Tikhonov regularization technique to CS-Slater back-rotated wave functions in order to minimize the ultraviolet numerical noise at finite scaling angles $\vartheta$. We traced the origin of the high-frequency oscillations back to the high-frequency part of the Fourier transform associated with the analytic continuation of the CS wave function, and we found a practical way to determine the smoothing parameter defining the Tikhonov regularization.
The applied stabilization method makes it possible to reconstruct
the correct radial asymptotic behavior from localized complex-scaled wave functions. This can be of importance when
calculating observables that are directly related to the asymptotic behavior of the system, such as cross sections or decay widths. The proposed method is valid not only for narrow resonances (as, e.g., in Ref.~\cite{pade}) but also for broad resonant states, such as the
excited 2$^+$ state of $^6$He.
In the near future, we intend to include the inter-nucleon distance $r_{12}$ in Eq.~\eqref{radform} to obtain the full Hylleraas basis, which promises
improved numerical convergence and higher accuracy.
This will enable us to formulate the reaction theory directly in Hylleraas coordinates. Near-term applications could include
$\alpha + d$ elastic scattering and radiative capture reactions,
as in Ref.~\cite{katoalphad}, and atomic applications such as electron-hydrogen scattering.
\section{Introduction}
\label{sec_intro}
The distribution of the magnetic field generated in the solar interior and connected into the solar wind influences most coronal phenomena, including large-scale and slowly evolving coronal structures.
The coronal density distribution can serve as a tracer of the configuration of the magnetic field (shape and general morphology rather than field strength), since the coronal plasma is frozen into the field \citep[for a review see, e.g.,][]{Wiegelmann2014A&ARv}.
Of particular interest are observations of streamers and pseudo-streamers, which refer, respectively, to structures associated with large magnetic loops that separate coronal holes of opposite polarity
and to twin loop arcades that separate coronal holes of the same polarity \citep{Wang2007ApJ_pseudostreamers}.
Another example is the study of magnetic structures above the solar polar regions, where the measurements of the line-of-sight (LOS) magnetograms are generally less reliable owing to the larger viewing angle with the magnetic field. The observed electron density distributions in coronal holes and polar plumes \citep{Barbey2008SoPh, dePatoul2013SoPh} could provide a better understanding of how flux emergence near the equator affects the magnetic field configuration at the poles \citep{dePatoul2013AA}.
Finally, an accurate determination of the ambient coronal electron density provides a better estimation of the mass and the propagation of coronal mass ejections (CMEs) \citep{Vourlidas2000, Feng2015ApJa, Feng2015ApJb}.
In particular, the density is important for calculating the compression ratio of CME-driven shocks and the Alfv\'en Mach number, which has important implications for the localization of particle acceleration sites and hence space weather forecasts \citep{Bemporad2011, Chen2014}.
The first proposed empirical approach to obtain the electron density from remote sensing observations
was an inversion method using measurements from eclipses in polarized white light, with the assumption that the coronal electron density is axisymmetric \citep{vandeHulst1950}. \cite{Saito1977} used this method to calculate electron densities from polarized brightness (pB) observations obtained by \textit{Skylab} coronagraph data during the declining phase of the solar cycle from 1973 May to 1974 February. A good agreement of the density values was found using SOHO/LASCO-C2 data during 1998 February \citep{Hayes2001ApJ} and 1996 December \citep{QuemeraisAnA2002}.
Empirical methods to obtain the full 3D density distribution are given by solar rotational tomography (SRT). SRT has been specifically developed for optically thin structures and uses LOS-integrated coronal images from multiple viewpoints taking advantage of solar rotation.
White-light images of the K-corona, where the radiation is dominated by Thomson scattering, can be used to reconstruct the density from 1.5~${\mathrm R}_{\odot}$\ up to 6.5~${\mathrm R}_{\odot}$\ using images from LASCO-C2 or \textit{STEREO}/COR1 \citep[e.g.,][]{Frazin2000, Barbey2013SoPh, Kramar2014SoPh, Peillon2014}. When the sources for a tomographic inversion are EUV images, both density and temperature can be reconstructed by applying differential emission measure tomography \citep{Frazin2009ApJ}. However, even in the best cases, only reconstructions close to the surface, from about 1.03~${\mathrm R}_{\odot}$\ to 1.25~${\mathrm R}_{\odot}$, can be obtained.
An alternative physics-based approach to obtain a quantitative 3D density distribution is given by magnetohydrodynamic (MHD) models, which provide the global configuration of the magnetic field and the plasma parameters (i.e., density, temperature, and velocity) in the corona \citep{RileyJGR2001, RileyApJ2006, Lionello2009}.
Here we determine the electron density distribution in the corona during the two previous solar minima: 1996--1997 (solar cycle number 22/23) and 2008--2010 (solar cycle number 23/24).
In section~\ref{sec_meth}, we determine the 4D electron density distribution ($N_{e}$) from a newly developed time-dependent tomographic method. We look at the general morphology of the density structures in the empirical model from tomography and compare with a simple potential field source surface (PFSS) model and more advanced MHD models.
In section~\ref{sec_resu}, we contrast the density values found by tomography with the ones predicted by MHD models; in particular, we discuss
(1) the temporal and radial profiles of the density,
(2) the location of the helmet streamer and pseudo-streamer, and
(3) the presence of a differential rotation of the structures in the corona.
\section{Determination of the electron density distribution}
\label{sec_meth}
\subsection{$N_{e}$ from Tomography}
\label{sec_meth_Tomo}
Since 1996 the SOHO/LASCO-C2 coronagraph has continuously produced sets of white-light and polarized images of the solar corona with a field of view ranging from about 1.5~${\mathrm R}_{\odot}$\ up to 6.5~${\mathrm R}_{\odot}$\ \citep{BruecknerSoPh1995}.
To determine the electron density distribution ($N_e$) in the corona,
we use the pB images that are extracted from the total brightness LASCO-C2 images pre-processed as described by \cite{llebaria_2006, gardes2013,LamyJGR2014}.
The resulting pB images are dominated by the electron-scattered K corona, which is known to be strongly polarized \citep{Billings1966}, and not contaminated by the dust-scattered F corona,
which is essentially unpolarized at low heights and has been removed during the calibration.
The intensity measured in pB images, $I_{\rm pB}$, observed from a view direction at a rotation angle, $\vartheta$, of the Sun relative to the observer's longitude, is the integration of the electron density, $N_e$, along the LOS direction, $\vec{e}_{\rm LOS}(\vartheta)$,
\begin{equation}
I_{\rm pB} (\rho,\vartheta)
= \int_{\rm LOS}
N_{e} \bigg(\vec{r} \big(l\ \vec{e}_{\rm LOS}(\vartheta)\big) \bigg) \
K \bigg( \vec{r} \big(l\ \vec{e}_{\rm LOS}(\vartheta)\big), \ \rho\ \vec{e}^{\perp}_{\rm LOS}(\vartheta) \bigg) \ {\rm d}l,
\label{eq_IpB}
\end{equation}
where $\vec{e}^{\perp}_{\rm LOS}$ is the unit vector orthogonal to $\vec{e}_{\rm LOS}$,
$\rho$ is the distance from the Sun center to $\vec{e}_{\rm LOS}$, and
$\vec{r}$ is the radial vector.
The Thomson scattering function, $K$, is defined for a point source of luminosity, $4\pi L$, by \citep{Frazin2010}:
\begin{equation}
K = \frac{3\sigma_{e}}{16}\frac{L}{\rho^2} \sin^4\Theta,
\label{eq_ThomScatt}
\end{equation}
where $\Theta$ is the scattering angle defined by $\sin\Theta = \frac{\rho}{\|\vec{r}(l\ \vec{e}_{\rm LOS}(\vartheta))\|}$ and
$\sigma_e$ is the Thomson scattering cross section for a single electron.
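The line-of-sight integral of Equations (\ref{eq_IpB})--(\ref{eq_ThomScatt}) can be discretized directly. The following is a minimal Python sketch, not the forward operator actually used in the inversion: it assumes a hypothetical spherically symmetric density model and works in arbitrary units (the ${\mathrm R}_{\odot}$-to-cm path-length conversion is omitted).

```python
import numpy as np

SIGMA_E = 6.652e-25  # Thomson scattering cross section [cm^2]

def density_model(r):
    """Hypothetical spherically symmetric N_e(r); r in solar radii."""
    return 3.0e5 * r**-2.5

def pb_intensity(rho, l_max=20.0, n_steps=2000, L=1.0):
    """Approximate I_pB (Eq. 1) for impact parameter rho via a Riemann sum."""
    l = np.linspace(-l_max, l_max, n_steps)   # position along the LOS
    dl = l[1] - l[0]
    r = np.sqrt(rho**2 + l**2)                # heliocentric distance of each LOS point
    sin_theta = rho / r                       # scattering angle: sin(Theta) = rho / |r|
    K = (3.0 * SIGMA_E / 16.0) * (L / rho**2) * sin_theta**4  # Eq. (2)
    return np.sum(density_model(r) * K) * dl  # LOS integral of N_e * K
```

As expected from the $\sin^4\Theta$ weighting and the density falloff, the intensity decreases steeply with the impact parameter $\rho$.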
A typical example of a pB image is shown in Figure~\ref{fig_lasco} (top panel), where a background subtraction has been applied to enhance the intensity along the radial direction.
In this work, we consider coronal heights above 2.5~${\mathrm R}_{\odot}$\ to avoid artifacts due to diffraction surrounding the occulter.
The pB image shown in Figure~\ref{fig_lasco} (top panel) was taken when a CME occurred at a position of 271$^\circ$. This CME had an angular width of 114.1$^\circ$ and traversed the corona from 2.5~${\mathrm R}_{\odot}$\ to 7.5~${\mathrm R}_{\odot}$\ in approximately 2.5 hr \citep{Boursier2006}.
Solar rotational tomography (SRT) cannot resolve such fast temporal changes, and significant artifacts are produced in the reconstructions.
To minimize this effect, we remove the CMEs from the pB images.
\cite{Morgan2010CME} proposed a method for separating CMEs from the quiescent corona in white-light coronagraph images based on the fact that the large-scale structures are close to radial, whilst CMEs are highly nonradial.
Here we consider CMEs listed in the CACTus \citep{RobbrechtAnA2004} and the ARTEMIS \citep{Boursier2006} catalogs that have an intensity larger than $0.8\times 10^3$~W~sr$^{-1}$~m$^{-2}$.
Using the position angle and angular width from these catalogs, we simply exclude the angular portion of the pB image affected by the CME from the tomographic reconstruction procedure (Figure~\ref{fig_lasco}, bottom panel).
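This exclusion step can be sketched as follows, assuming the pB image has been resampled to polar coordinates with one column per degree of position angle; the array shapes are hypothetical, while the CME parameters are those of the event in Figure~\ref{fig_lasco}.

```python
import numpy as np

def mask_cme_sector(pb_polar, cme_pa_deg, cme_width_deg):
    """pb_polar: (n_radii, 360) pB image, one column per degree of position
    angle; pixels inside the CME sector are set to NaN and later ignored."""
    pa = np.arange(pb_polar.shape[1])                  # position angle [deg]
    # angular distance to the CME position angle, wrapped to [-180, 180)
    delta = (pa - cme_pa_deg + 180.0) % 360.0 - 180.0
    masked = pb_polar.astype(float)
    masked[:, np.abs(delta) <= cme_width_deg / 2.0] = np.nan
    return masked

# CME of Figure 1: position angle 271 deg, angular width 114.1 deg
pb = np.ones((60, 360))
clean = mask_cme_sector(pb, 271.0, 114.1)
```

Only the flagged pixels are dropped from the data; the rest of the image still contributes to the reconstruction.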
The electron density is obtained by inverting equation (\ref{eq_IpB}) using SRT.
We use the newly developed time-dependent tomographic method, which has been elaborated and described by \cite{Peillon2014, Peillon2014Poster}.
The method involves spatio-temporal regularization (Kalman filters) to mitigate the slow temporal variation of the corona
and assumes a nearly rigid rotation of the Sun with a period of 27.2753 days, corresponding to the Carrington rotation.
It requires a continuous set of view directions uniformly distributed over half a rotation, with a minimum cadence of one pB image per day, i.e., $n_I \geq 13$ images for a given tomographic reconstruction.
The corona is divided into a spherical grid $(r, \phi, \theta; t)$ with a size of ($60\times60\times120\times n_I$), covering the heliocentric distances from 2.5 to 8.5~${\mathrm R}_{\odot}$.
To assess the robustness and accuracy of the technique,
the method has been tested using a set of 14 projected images of a time-dependent MHD volume as ``observations''. The reconstruction successfully reproduced the slowly varying dynamics of the model.
The estimated density distribution, $\tilde{\bf{x}}$, is constructed on the grid cells by solving the following least-squares minimization problem:
\begin{equation}
\tilde{\bf{x}} = \arg\!\min_{\bf{x}\geqslant 0} \Big\{
\left\| \bf{y}-\bf{A}\bf{x}\right\|^{2}_{2}
+ \lambda_S^{2} \left\| \bf{R}_S \bf{x} \right\|^{2}_{2}
+ \lambda_T^{2} \left\| \bf{R}_T \bf{x} \right\|^{2}_{2}
+ \lambda_C^{2} \left\| \bf{R}_C \bf{x} \right\|^{2}_{2}
\Big\}.
\label{eq_tomo}
\end{equation}
The vector $\bf{y}$ contains the intensity measured in each pixel from the set of pB images over half a rotation, i.e., the $I_{\rm pB} (\rho_{ij},\vartheta)$ defined in Equation (\ref{eq_IpB}) with $\vartheta \in [0,2\pi]$ and $\rho_{ij}$ giving the position of the pixel in the image.
The vector $\bf{x}$ contains $N_e$ values defined in the spherical grid $(r, \phi, \theta; t)$.
$\bf{A}$ is a block-diagonal matrix composed of projection matrices
that are determined by the geometry and the physics of the problem, i.e., the relation between the volume elements in $\bf{x}$ and the LOS-related pixels in the pB image, defined by the Thomson scattering function (\ref{eq_ThomScatt}).
The matrices $\bf{R_S}, \bf{R_T}$ and $\bf{R_C}$ in equation (\ref{eq_tomo}) are the spatio-temporal regularization terms, which introduce a prior knowledge of the solution.
This regularization minimizes the effects of the noise, the limited number of pB images available, and the unavoidable temporal change in the corona.
The spatial regularization matrix, $\bf{R_S}$, described in \cite{Frazin2007}, is a second-derivative operator in the angular spherical coordinates $\theta$ and $\phi$, weighted by $r^{-1}$ to reduce noise at larger radial distances.
The temporal regularization matrix, $\bf{R_T}$, is a first derivative to enforce smoothness between two successive views of the Sun.
The co-rotating regularization matrix, $\bf{R_C}$, acts jointly in the space--time domain.
Its purpose is to prevent the reconstruction from concentrating material in the vicinity of the
plane of the sky (containing the Sun's center).
In the Carrington coordinate system this plane rotates, remaining always orthogonal to the observer's LOS.
The regularization parameters, $(\lambda_S,\lambda_T,\lambda_C)$, are estimated by minimizing the normalized root mean square error between the time-dependent 3D MHD model and its reconstruction ($\lambda_{S}=2.2\times10^{-6}$, $\lambda_{T}=1.7\times10^{-6}$, and $\lambda_{C}=0.2\times10^{-6}$).
Further details about the method and the construction of these regularization operators can be found in \cite{Peillon2014}; see, \review{in particular, the discussion of the use of the temporal regularization, including examples of 3D and 4D tomographic reconstructions}.
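Up to its large, sparse dimensions, Equation (\ref{eq_tomo}) is a non-negative least-squares problem on a stacked system: appending $\lambda\bf{R}$ below $\bf{A}$ (and zeros below $\bf{y}$) absorbs the regularizers into an ordinary data-fit term. A toy NumPy sketch, with hypothetical dimensions, a random stand-in for $\bf{A}$, and a single second-difference operator standing in for $\bf{R}_S$, solved by projected gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
x_true = np.maximum(np.sin(np.linspace(0.0, np.pi, n)), 0.0)  # smooth, non-negative target
A = rng.normal(size=(30, n))                 # hypothetical stand-in for the projection matrix
y = A @ x_true + 0.01 * rng.normal(size=30)  # noisy synthetic "pB" data vector

R = np.diff(np.eye(n), n=2, axis=0)          # second-difference smoothness operator
lam = 0.1                                    # illustrative regularization weight

# Stack [A; lam*R] against [y; 0]: the regularized problem becomes plain NNLS.
A_aug = np.vstack([A, lam * R])
y_aug = np.concatenate([y, np.zeros(R.shape[0])])

# Projected gradient descent: gradient step, then clip to x >= 0.
step = 1.0 / np.linalg.norm(A_aug, 2) ** 2   # safe step size from the spectral norm
x_hat = np.zeros(n)
for _ in range(10000):
    grad = A_aug.T @ (A_aug @ x_hat - y_aug)
    x_hat = np.maximum(x_hat - step * grad, 0.0)
```

The real solver must of course exploit sparsity and never form $\bf{A}$ densely; the stacking trick and the non-negativity projection carry over unchanged.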
A full 4D reconstruction is performed every 4 days, provided that a minimum of 13 pB images are available.
During 1996--1997, several data gaps are present for which the tomography was not carried out.
Panel (a) of Figure~\ref{fig_tomo_predsci_2077} shows a typical result from tomography during a relatively quiet period of the solar activity when the number of CMEs is reduced.
It was obtained using 14 pB images from 2008 November 21 to December 4, a period within Carrington rotation 2077.
The left panel of Figure~\ref{fig_tomo_predsci_2077}~(a) shows the 2D longitude--latitude map at 3.5~${\mathrm R}_{\odot}$\ centered on 2008 December 2. The right panel shows the latitude--radial average map constructed by integrating over the longitudes (a radial contrast enhancement has been applied).
This representation shows the extent to which the helmet streamer spreads over latitudes during this particular period.
Panel (a) of Figure~\ref{fig_tomo_predsci_2097} shows another result from tomography in the later phase of the extended solar minimum, when solar activity has started to increase.
It was obtained using 15 pB images from 2010 June 6 to 20, during the Carrington rotation 2097.
The latitudinal positions of the maximum of density evaluated for each longitude in the tomographic reconstruction are indicated by the white dots.
Some voxels near the higher-density structure have a density value close to zero,
for example, in Figure~\ref{fig_tomo_predsci_2077}~(a), the region at longitudes 30$^{\circ}$--34$^{\circ}$ and latitudes $-29^{\circ}$ to $-32^{\circ}$.
These zero-density artifacts are usually caused by unavoidable rapid changes in the corona.
Indeed, the inversion can assign a negative value to account for an unexplained variation of intensity in the data from a single viewpoint \citep{Barbey2013SoPh}. Such artifacts could also be caused by residual instrumental effects in a pB image.
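The latitudinal positions of the density maximum (the white dots in Figures~\ref{fig_tomo_predsci_2077} and \ref{fig_tomo_predsci_2097}) can be obtained with a simple per-longitude argmax. A minimal sketch, using a hypothetical grid and synthetic density, and NaN-aware so that masked or artifact voxels can be skipped:

```python
import numpy as np

def latitude_of_max(ne_map, latitudes):
    """ne_map: (n_lat, n_lon) density slice at a fixed radius;
    returns the latitude of the density maximum for each longitude,
    ignoring NaN voxels (e.g., masked CME sectors)."""
    return latitudes[np.nanargmax(ne_map, axis=0)]

lats = np.linspace(-90.0, 90.0, 61)                  # 3 deg latitude grid
lons = np.linspace(0.0, 360.0, 120, endpoint=False)  # 3 deg longitude grid
belt = 20.0 * np.sin(np.radians(lons))               # synthetic tilted streamer belt
ne = np.exp(-((lats[:, None] - belt[None, :]) / 10.0) ** 2)
ridge = latitude_of_max(ne, lats)                    # tracks the belt latitude
```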
\subsection{$N_{e}$ from MHD models}
\label{sec_meth_MHDmod}
The PFSS model is a simple and popular current-free model capable of reproducing the basic coronal magnetic field configuration. It requires only synoptic maps of the LOS photospheric magnetic field as the lower boundary condition, and it assumes that all field lines become radial at the upper boundary (the source surface), typically at 2.5--3.5~${\mathrm R}_{\odot}$.
The global magnetic field configuration predicted by the PFSS model can be used as a proxy of the density distribution in the corona. In particular, the neutral line at the source surface, which separates the large-scale opposite-polarity regimes of the coronal magnetic field, is often used to locate the heliospheric current sheet (HCS) and the helmet streamer. The PFSS/HCS calculated for a source surface at 2.5~${\mathrm R}_{\odot}$\ is displayed as the black line in Figures~\ref{fig_tomo_predsci_2077} and \ref{fig_tomo_predsci_2097}.
A more complex and elaborate way to predict the magnetic field configuration and the density distribution in the corona is to employ global MHD models. We use solutions from MHD models developed by the group at Predictive Science \citep[][see online, \url{www.predsci.com}]{RileyJGR2001, RileyApJ2006, Lionello2009}.
For the lower boundary condition, the models use the radial component of the magnetic field derived from the observed LOS measurements of \textit{SOHO}/MDI magnetograms, together with uniform characteristic values for the plasma density and temperature.
The models also assume that the electron and proton densities are equal.
In the polytropic MHD model, the energy equation is approximated by a simple adiabatic energy equation with a polytropic index $\gamma=1.05$.
Since this approximation significantly simplifies the problem and reduces the time necessary to complete a simulation, its solutions can be obtained more routinely and are available between 1~${\mathrm R}_{\odot}$\ and 30~${\mathrm R}_{\odot}$\ for all the Carrington rotations under study.
This model reproduces well the geometrical and topological properties of the magnetic field,
such as the location and evolution of coronal holes, streamer structures, and the HCS;
however, such an approximation does not predict the density and temperature very accurately \citep{RileyApJ2006}.
In particular, \cite{Vasquez2008ApJ} compared a static tomographic reconstruction of the density
with two polytropic MHD models (Stanford: \citealt{Hayes2001ApJ}; Michigan: \citealt{Cohen2007ApJ})
during Carrington rotation 2029.
They found that these polytropic MHD models could reproduce the density values only below 3.5~${\mathrm R}_{\odot}$\ and at low latitudes,
while both models had problems reproducing the correct density in the polar regions.
A more recent thermodynamic MHD model uses an improved equation for energy transport in the corona that includes parallel thermal conduction along the magnetic field lines, radiative losses, and parameterized coronal heating.
This thermodynamic MHD model produces more accurate estimates of plasma density and temperature in the corona \citep{Lionello2009, Riley2011SoPh}.
The electron densities estimated by the polytropic MHD model (pMHD/$N_{e}$) for Carrington rotations 2077 and 2097 are shown in panels~(b) of Figures~\ref{fig_tomo_predsci_2077} and \ref{fig_tomo_predsci_2097}, respectively. Panels~(d) show the radial field calculated by the polytropic MHD model (pMHD/$B_{r}$) for the same Carrington rotations.
\review{The density predicted by the thermodynamic MHD model (tMHD/$N_{e}$) is shown in panel~(c) of Figure~\ref{fig_tomo_predsci_2077} for Carrington rotation 2077.}
In the left panel, we show the longitude--latitude Carrington map at 3.5~${\mathrm R}_{\odot}$;
in the right panel, we show the latitude--radial map obtained by averaging over the longitudes.
The latitudinal locations of the density maximum in pMHD/$N_{e}$ are shown as a green dashed line. The latitudes of the density maximum in tMHD/$N_{e}$ are nearly identical, since both models reproduce the general observed configuration of the magnetic field.
It is important to note that the PFSS and the global MHD models require a series of magnetograms providing the nearest central meridian data on the photosphere and covering a full Carrington rotation (27.2753 days), while tomography requires observations of the coronal emission covering only half a rotation, since it relies on optically thin measurements.
Moreover, the photospheric measurements beyond $75^{\circ}$ absolute latitude are not reliable owing to the larger viewing angle with the magnetic field. Therefore, errors in polar field strength estimation at the surface can lead to discrepancies in the modeled magnetic field configuration of the corona. This is especially true during the solar minimum, when the polar fields are the strongest.
\section{Analysis and Comparison}
\label{sec_resu}
The overall density structure from the MHD models reproduces the essential features of tomography.
Nevertheless, we can see that the results obtained from tomography are more structured, in particular at the poles.
The location of the density maximum in pMHD/$N_{e}$ \review{and tMHD/$N_{e}$} (green dashed line, Figures~\ref{fig_tomo_predsci_2077} and \ref{fig_tomo_predsci_2097}) follows nearly exactly the HCS predicted by pMHD/$B_{r}$, which is expected since \review{the models MHD/$B_{r}$ and $N_{e}$ are not independent}.
We observe a clear mismatch between the locations of highest densities from tomography (white dots),
the PFSS/HCS (black line),
and the density maximum from the MHD solution (green dashed line).
Previous works showed a limitation of the PFSS model in adequately reproducing some of the observed magnetic structures, in particular when large parts of the solar atmosphere are filled with nonpotential magnetic fields owing to the presence of active regions \citep{Wang2007ApJ_pseudostreamers, Kramar2014SoPh}. Here we show that this is also the case for the HCS predicted by the MHD solutions.
The density values found for pMHD/$N_{e}$ span a narrower range
($6.3\times10^5$ -- $1.3\times10^6$~cm$^{-3}$) and overestimate the tomography values
($3.1\times10^3$ -- $3.2\times10^5$~cm$^{-3}$) by a factor of about
4 for the maximum values and by up to two orders of magnitude for the minimum values.
Our comparison illustrates the extent to which the plasma parameters predicted by the polytropic MHD model are \review{less realistic than the thermodynamic values tMHD/$N_{e}$ ($1.9\times10^4$ -- $1.9\times10^5$~cm$^{-3}$)}.
\review{Typical histograms of the density distributions over the radial distances in Figure~\ref{fig_tomo_predsci_histo} show that tomography provides a larger range of density values at every solar radius.}
\subsection{Temporal evolution and radial profiles of the density}
To investigate the temporal evolution of the density during the two solar cycle minima, we first average over longitude all solutions obtained from tomography, the polytropic MHD model (pMHD/$N_e$), and the thermodynamic MHD model (tMHD/$N_e$), as was done for the right panels of (a) and (b) in Figures~\ref{fig_tomo_predsci_2077} and \ref{fig_tomo_predsci_2097}.
We evaluate the ``maximum equatorial'' electron density, $P^{\rm eq}_{N_e}(r,t)$, by taking the maximum density value over the latitudes at each radial distance. We evaluate the {\lq polar\rq} electron density, $P^{\rm pl}_{N_e}(r,t)$, by averaging the density values obtained above $65^\circ$ and below $-65^\circ$ latitude at each radial distance.
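These two diagnostics can be sketched as follows; the density cube below is a hypothetical synthetic corona on an $(r,{\rm lat})$ grid, not the tomographic result.

```python
import numpy as np

def equatorial_and_polar_profiles(ne_rlat, latitudes, polar_cut=65.0):
    """ne_rlat: (n_r, n_lat) longitude-averaged density.
    Returns P_eq (max over latitude) and P_pl (mean over |lat| > polar_cut)."""
    p_eq = ne_rlat.max(axis=1)                 # "maximum equatorial" profile
    polar = np.abs(latitudes) > polar_cut
    p_pl = ne_rlat[:, polar].mean(axis=1)      # mean over both polar caps
    return p_eq, p_pl

radii = np.linspace(2.5, 8.5, 60)
lats = np.linspace(-90.0, 90.0, 61)
# synthetic corona: r^-2 falloff, denser near the equatorial streamer belt
ne = radii[:, None] ** -2 * (1.0 + 4.0 * np.cos(np.radians(lats)) ** 2)
p_eq, p_pl = equatorial_and_polar_profiles(ne, lats)
```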
Figure~\ref{fig_profile_temp} shows the temporal evolution of these densities in the equatorial (red) and polar (blue) regions at a radial distance $r=3.5$~${\mathrm R}_{\odot}$.
Since the thermodynamic MHD model is more complex and takes more time to compute, fewer solutions are available.
We note first that the temporal evolution of the density distribution from tomography shows good agreement with the solar cycle; for reference, we show the daily sunspot number (SN) and the yearly smoothed SN in the top panel of Figure~\ref{fig_profile_temp}.
In particular, the density values at the equator are found to be lower during the 2008--2010 solar sunspot minimum
($N_{e}\sim 0.8\times10^{5}$ -- $1.1\times10^{5}$~cm$^{-3}$) than during the 1996--1998 minimum ($N_{e}\sim 1.5\times10^{5}$ -- $2.0\times10^{5}$~cm$^{-3}$).
The minimum in 2008--2009 had 818 days with no recorded sunspot and a yearly smoothed SN~$\ge 2.1$, while the minimum in 1996--1997 had only 309 spotless days, with a yearly smoothed SN~$\ge 10.4$.
To assess our methodology, we also show the values found by \cite{Saito1977} at $r=3.5$~${\mathrm R}_{\odot}$\ (squares) at the equator
($1.8\times10^{5}$~cm$^{-3}$) and in the polar regions ($0.5\times10^{5}$~cm$^{-3}$).
Saito's densities were evaluated during a previous minimum (solar cycle number 20/21, with 272 spotless days and a yearly smoothed SN~$\ge 16.9$); nevertheless, \cite{Hayes2001ApJ} and \cite{QuemeraisAnA2002} observed good agreement during the first minimum \review{for the polar and equatorial regions}.
At the equator, we consider the highest-density values, whereas these authors estimated average density values.
The second minimum, in 2008--2010, has a lower SN, and the correspondingly lower densities show that tomography can reproduce density variations that follow the solar cycle.
\review{At the poles,} the density from tomography is about 40\% that of Saito's for both minima.
\review{The density models of \cite{Saito1977}, \cite{Hayes2001ApJ}, and \cite{QuemeraisAnA2002} are evaluated under an axisymmetric assumption, which is less reliable than a tomographic inversion.}
During the separation of the K component in the processing and calibration of the pB images, an overestimation of the F corona and the stray light cannot be excluded, which would result in an underestimation of the K component and thus of the derived density.
\review{On the other hand, these models might also suffer from a misestimation of the background, resulting in incorrectly high values}.
In the future, a new calibration procedure as proposed by \cite{Morgan2015calib} could be used to refine these results.
As already noted, pMHD/$N_{e}$ overestimates the density found in the tomographic reconstruction by an order of magnitude. On the other hand, tMHD/$N_{e}$ provides more accurate values of the density, albeit overestimated at the equator
(tomo/tMHD $\sim 52$\%)
and underestimated at the poles
(tMHD/tomo $\ge 70$\%).
These differences could be linked to the way the equatorial and polar values are computed: recall that the equatorial values correspond to maximum values, while the polar values are averages. It would appear more difficult to obtain a true maximum of a local parcel of plasma with the tomography than it is with the MHD simulation. The lack of resolution at the poles could explain the lower densities in the tMHD model.
No significant time evolution can be observed in pMHD/$N_{e}$, while the tMHD/$N_{e}$ values show time variations that follow the variations in tomography estimates during the minima of the two solar cycles. This is more obvious for the equatorial regions and during the second, more extended solar minimum. Therefore, we conclude that the main variations found in the tomography results are realistic and can be physically interpreted by changes in sunspot activity.
We next study the differences between the two solar minima and estimate radial profiles for the tomographic, pMHD/$N_e$ and tMHD/$N_e$ results.
The {\lq equatorial\rq} radial profiles are obtained by averaging the electron density profiles as follows:
\[\langle P^{\rm eq}_{N_e}(r,t) \rangle_{\rm 1996<t<1997}
\rm{\ \ \ and \ \ \ }
\langle P^{\rm eq}_{N_e}(r,t) \rangle_{\rm 2008<t<2010}.\]
Similarly, we estimate the {\lq polar\rq} radial profiles of the density:
\[\langle P^{\rm pl}_{N_e}(r,t) \rangle_{\rm 1996<t<1997}
\rm{\ \ \ and \ \ \ }
\langle P^{\rm pl}_{N_e}(r,t) \rangle_{\rm 2008<t<2010}.\]
Figure~\ref{fig_profile_rad} shows those radial profiles of the density for the first minimum ($1996<t<1997$) as a dashed line, for the second minimum ($2008<t<2010$) as a solid line, at the equator (red) and at the poles (blue).
Error bars represent the variance of the density values in the tomographic reconstruction over the given time period.
As a reference, we also show the radial profiles found by \cite{Saito1977}.
The general radial profile trends are in reasonably good agreement. The tomography results show slightly more complex profiles, and important changes between the two solar minima are observed. First, at the equator the densities differ by 62\% along the radial profile, showing that the variations between cycles seen at 3.5~${\mathrm R}_{\odot}$\ are present at all radial distances. Second, at the poles the profiles cross at 3.5~${\mathrm R}_{\odot}$, showing opposite variations between cycles below and above this key radial distance, with larger densities in the outer corona during the second, deeper minimum.
While the tMHD/$N_e$ profiles at the equator differ by a larger factor of 92\% between the two minima, there is no significant change at the poles. The tMHD/$N_e$ profiles are more consistent with tomography up to 3.4~${\mathrm R}_{\odot}$\ and produce lower values at larger radial distances.
\subsection{Location of the highest-density structures}
During the 2008--2010 minimum, comparing the two latitude--radial maps in the declining phase of cycle 23 and the rising phase of cycle 24 (right panels of (a) in Figures~\ref{fig_tomo_predsci_2077} and \ref{fig_tomo_predsci_2097}) shows
that the denser regions, presumably above active regions, spread more in latitude as solar activity increases. It is not obvious that the denser regions always correspond to the helmet streamer.
We investigate how locations in latitude of the density maximum and the HCS agree or differ with time during the 2008--2010 minimum.
To do so, we estimate the position in latitude of the density maximum in all the tomographic reconstructions and in the pMHD/$N_e$ models for every Carrington rotation from 2065 to 2106.
The latitude of the HCS is extracted both in the PFSS model at the source surface of 2.5~${\mathrm R}_{\odot}$\ and in the pMHD/$B_r$ model (as the neutral line where $B_r \simeq 0$) at 1.5 and 3.5~${\mathrm R}_{\odot}$.
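The neutral-line extraction can be sketched as a per-longitude search for the sign change of $B_r$. The tilted-dipole field below is a hypothetical stand-in for a pMHD/$B_r$ map; real maps may have several sign changes per longitude, and here only the first is kept.

```python
import numpy as np

def neutral_line_latitude(br_map, latitudes):
    """br_map: (n_lat, n_lon) radial field, latitudes ascending;
    returns the latitude of the first sign change per longitude (NaN if none)."""
    s = np.signbit(br_map).astype(int)
    flips = np.diff(s, axis=0) != 0          # True between rows where B_r changes sign
    out = np.full(br_map.shape[1], np.nan)
    for j in range(br_map.shape[1]):
        idx = np.flatnonzero(flips[:, j])
        if idx.size:
            out[j] = 0.5 * (latitudes[idx[0]] + latitudes[idx[0] + 1])
    return out

lats = np.linspace(-90.0, 90.0, 181)
lons = np.arange(360.0)
tilt = 10.0 * np.sin(np.radians(lons))                  # neutral line of a tilted dipole
br = np.sin(np.radians(lats[:, None] - tilt[None, :]))  # B_r < 0 south of the line
hcs = neutral_line_latitude(br, lats)
```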
\review{Panels (a)--(c) of Figure~\ref{fig_lat_temp} show the time evolution of the spread in latitude over all longitudes from the HCS predicted by pMHD/$B_r$ and the higher-density regions in tomography.}
\review{Panels (d)--(g) are longitude--time maps that show the latitudinal locations of the density maximum from tomographic reconstructions, pMHD/$N_e$, and the HCS from pMHD/$B_r$ and PFSS.}
\review{While panels (a) and (b) show that the spread of the HCS predicted by pMHD/$B_r$ becomes more confined with increasing radial distance from 1.5~${\mathrm R}_{\odot}$\ to 3.5~${\mathrm R}_{\odot}$, the longitude--time maps of pMHD/$B_r$ are found to be the same at 1.5~${\mathrm R}_{\odot}$\ and 3.5~${\mathrm R}_{\odot}$\ in panel (f).
The latitudinal spread of the tomographic highest-density region in panel (c) follows well the predicted HCS spread in panel (b), notably with a widening of the latitude range at the end of 2009.
This change coincides with the rise of the new solar cycle 24, when new sunspots appear at higher latitudes, resulting in the streamer belt spanning higher absolute latitudes.}
As expected, the results from pMHD/$N_e$ and pMHD/$B_r$ in panels (e) and (f) are nearly the same, showing good agreement between the location of the density maximum and the location of the current sheet predicted by the MHD solution.
We see reasonably good agreement between the PFSS/HCS in panel (g) and the current sheet predicted by pMHD/$B_r$, which is expected since both models are based on the same observed LOS measurements of the photospheric magnetic field as the lower boundary condition.
The tomographic highest-density structure generally follows the predicted HCS, as observed by \cite{Kramar2014SoPh}, especially close to the minimum of solar activity, from 2008 to mid-2009. Here this can be observed thanks to the longitudinal drifts with time of the highest-density structures.
However, this is less clear during the rising phase of the solar cycle 24, towards the end of 2009.
To investigate this difference, we examine latitude--radial planes in the extended-minimum and rising phases of cycle 24. Figure~\ref{fig_tomo_predsci_2077_LON} shows latitude--radial planes at longitude 120$^\circ$ of the tomographic reconstruction (2008 November 21 to December 4) and of the pMHD/$N_e$ and pMHD/$B_r$ solutions during Carrington rotation 2077.
In this period of extended minimum, the maximum density in tomography follows the current sheet predicted by pMHD/$N_e$ and pMHD/$B_r$.
On the other hand, Figure~\ref{fig_tomo_predsci_2097_LON} shows two examples of planes taken during the rise of solar cycle 24, at longitudes 90$^\circ$ and 170$^\circ$ during Carrington rotation 2097, where we observe that the maximum density from tomography does not follow the HCS but rather aligns with a pseudo-streamer.
Therefore, a pseudo-streamer can be denser than a helmet streamer at the same longitude. We conclude that the highest-density structures do not always correspond to the predicted large-scale HCS or its helmet streamer but can follow the locations of pseudo-streamers. Since both structures contribute to the denser regions near the equator, both play a role in the wider latitudinal spread as the activity increases.
\subsection{Longitudinal drifts of the highest-density structures}
Longitudinal drifts with time of \review{coronal structures at 4~${\mathrm R}_{\odot}$\ were first reported by \cite{Morgan2011ApJ_LongitudinalDrifts_a, Morgan2011ApJ_LongitudinalDrifts_b}.
The author measured the rotation rates of structures within specific latitudinal regions (as opposed to the density maximum studied here) between $-80^\circ$ and $80^\circ$ using a back-projection tomographic method. The rotation rates were found to vary considerably between latitudes, with values between $-3^\circ$ and $3^\circ$~day$^{-1}$ relative to the Carrington rotation rate.
In Figure~\ref{fig_lat_temp} we observe a longitudinal drift at 3.5~${\mathrm R}_{\odot}$\ of the highest-density structures, which} is toward higher longitudes in the extended-minimum phase and toward lower longitudes in the rising phase.
Knowing how the denser regions spread in latitude as the activity increases, we propose that the highest-density structures show a differential rotation well above the surface, depending on how they are magnetically connected to the surface.
The tomographic reconstruction method and the MHD models use the approximation of solar Carrington rotation.
The Carrington rotation rate of 27.2753~days corresponds to the rotation observed near $\pm 30^\circ$ latitudes on the surface of the Sun \citep[e.g.,][]{Snodgrass1990ApJ, Beck2000}. Thus, depending on the latitude of a structure on the surface, its rotation rate, $\omega$ in $^\circ$day$^{-1}$, is larger or smaller than the Carrington rotation rate,
$\omega_{\rm CR}=13.20^\circ$day$^{-1}$,
\begin{equation}
\omega = \omega_{\rm CR} + \alpha,
\label{eq_SolRotation}
\end{equation}
where $\alpha$ is \review{positive} for the structures located between the latitudes $-30^\circ$ and $+30^\circ$ (showing a faster rotation), \review{negative} for the structures above $|\pm 30^\circ|$ (showing a slower rotation), and zero for structures located near $-30^\circ$ or $+30^\circ$.
During the extended minimum, the helmet streamer clustered near the equator. The structure rotated faster than $\omega_{\rm CR}$ and shifted toward larger Carrington longitudes, resulting in a positive longitudinal drift. From 2008 up to mid-2009, we find a faster rotation rate with
$\alpha\simeq 0.25^\circ$day$^{-1}$, which means that the structure took only about 26.77~days to make a full rotation.
On the other hand, during the rising phase of the solar cycle, the denser regions spread over latitudes above $|\pm 30^\circ|$, and were associated with a negative longitudinal drift.
We find a slower rotation rate than $\omega_{\rm CR}$ with
$\alpha\simeq -0.75^\circ$day$^{-1}$, corresponding to about 28.92 days for a full rotation.
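The conversion from the measured drift rate $\alpha$ to a rotation period follows directly from Equation (\ref{eq_SolRotation}); a minimal numeric check (values to two decimals depend on the rounding of $\omega_{\rm CR}$):

```python
# Rotation rate and period from the longitudinal drift (Eq. 4).
OMEGA_CR = 360.0 / 27.2753  # Carrington rotation rate, ~13.20 deg/day

def rotation_period_days(alpha):
    """Period of a structure drifting at alpha [deg/day] relative to Carrington."""
    return 360.0 / (OMEGA_CR + alpha)

t_minimum = rotation_period_days(+0.25)  # extended minimum: faster rotation, ~26.8 days
t_rising = rotation_period_days(-0.75)   # rising phase: slower rotation, ~28.9 days
```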
\review{The reversal in rotation rate coincides with the observed sudden extension in latitudes of the structures associated with the rise of solar activity toward the end of 2009 (panel (c) of Figure~\ref{fig_lat_temp}).}
This result shows that the effect of the differential rotation is still visible at 3.5~${\mathrm R}_{\odot}$\ although the structure might not spread above $\pm 30^\circ$ at this radial distance.
It also suggests that the rotation of high-density structures is determined by where they are magnetically connected to the surface of the Sun.
\section{Conclusion}
\label{sec_ccl}
The 3D electron density distribution in the corona was determined for two solar minima: 1996--1997 (solar cycle number 22/23) and 2008--2010 (solar cycle number 23/24), with both an empirical model from a newly developed time-dependent tomographic method and theoretical models from \review{both polytropic and thermodynamic} MHD solutions. \review{The density distribution is more structured in tomography than in the MHD solutions, in particular in the polar regions. In both MHD models the predicted density distribution is strongly tied to the configuration of the calculated magnetic field, and the highest-density structures always follow the HCS. In tomography, by contrast, the highest-density structures do not always correspond to the predicted current sheet but can sometimes align with the locations of pseudo-streamers.}
In tomographic reconstructions, the highest density at the equator and the average density at the poles follow the temporal evolution observed in the sunspot cycle. The maximum values along the HCS in the thermodynamic MHD solutions, tMHD/$N_{e}$, also show a solar cycle variation, while there is no temporal evolution in the polytropic MHD solutions, pMHD/$N_{e}$. This confirms that the tMHD/$N_{e}$ values are more realistic than pMHD/$N_{e}$ \citep{Lionello2009}.
The equatorial values of both tomography and tMHD/$N_{e}$ are found to be lower during 2008--2010 than during 1996--1998, in agreement with the differences between the solar sunspot minima. The tMHD/$N_{e}$ values overestimate the tomographic values found at the equator by 52\%, while at the poles the values are consistent up to 3.4~${\mathrm R}_{\odot}$\ and then differ.
At the poles the density from tomography is about 40\% lower compared to \cite{Saito1977} for both minima.
\review{In 2008--2010 the highest-density structures and the HCS predicted by the MHD models show a longitudinal drift, which confirms that the structures do not perfectly follow the Carrington rotation rate but exhibit a differential rotation that remains visible well above the surface.
Toward the end of 2009 a drastic change in the rotation rate is observed, corresponding to the rise of the solar cycle, with the emergence of sunspots at higher latitudes and the spreading of the current sheet across latitudes. The results suggest that the rotation rate of streamers and pseudo-streamers depends on how the structures are magnetically connected to the surface.}
\review{The following are possibilities for future investigation:
(1) One could identify the specific rotation rates of latitudinal regions or single structures in the corona independently, as done in the study by \cite{Morgan2011ApJ_LongitudinalDrifts_a}, and contrast the results with an extrapolated radial field model.
(2) One could improve} the tomographic method by including the model of the rotation in the reconstruction, as already done by \cite{dePatoul2013SoPh}, who included the solar differential rotation modeled only at the surface.
\review{(3) Accurate knowledge of the rotation rate of \review{streamers and pseudo-streamers}} from the surface to higher altitude in the corona could help to better connect the sources of the solar wind to their in situ counterparts \citep[e.g.][]{Foullon2011ApJ,RileyLuhmann2012SoPh}\review{, which can in turn} provide valuable insight for future investigations with \textit{Solar Orbiter} \citep{Muller2013_SolarOrbiter} and \textit{Solar Probe Plus} \citep{Vourlidas2015_ProbePlus}. In particular, \textit{Solar Orbiter} will co-rotate with the Sun and provide images of the polar regions from heliographic latitudes above 35$^\circ$.
\review{(4)} Ultimately, the time-dependent tomography can be extended to the EUV and X-ray ranges to also reconstruct the electron temperature \citep[e.g.][]{Frazin2009ApJ, Vasquez2009SoPh}. This can help to constrain the radial density gradients, base densities, and temperatures of global MHD simulations. Such extensions, combined with the MHD coronal modeling efforts, have the potential to improve the reliability of future space-weather forecasting.
\acknowledgments
The authors would like to thank the anonymous reviewer for his/her valuable comments and suggestions to improve the quality of the paper.
J.d.P. is the beneficiary of an AXA Research Fund postdoctoral grant.
C.F. acknowledges financial support from the UK Science and Technology Facilities Council (STFC) under her Advanced Fellowship ST/I003649.
The SOHO/LASCO data used here are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut for Solar System Research (Germany), Laboratoire d'Astronomie (France), and the University of Birmingham (UK).
SOHO is a project of international cooperation between ESA and NASA.
\bibliographystyle{apj}
\section{Introduction}\label{sec:intro}
Proofs are traditionally syntactic, inductively generated objects.
For example,
Fig.\,\ref{fig:lk-drinker-proof} shows a syntactic proof of the formula $\drinkerformula$.
This paper reformulates first-order logic (predicate calculus) \cite{Fre} with proofs which are graph-theoretic rather than syntactic.
It defines a \textsl{combinatorial proof} of a formula $\formula$ as a lax graph fibration $\cp$ over a graph $\gformula$ associated with $\formula$, where $\cover$ is a partially coloured graph.
For example, if $\formula = \drinkerformula$
then $\gformula$ is
\drinkerFormulaDisplayed
and a combinatorial proof $\cp$ of $\formula$ is
\begin{center}\drinkerDisplayed\end{center}
The upper
graph is $\cover$
(two coloured vertices
$\singletonblue{}\singletonblue{}$
and three uncoloured vertices),
the lower graph is $\gformula$,
and the dotted lines define $\skewfib$.
Additional combinatorial proofs are depicted in Fig.\,\ref{fig:cps}.
The
combinatorial proof $\cp$ above can be condensed
by leaving $\gformula$ implicit and drawing
$\cover$
over the formula $\formula$:
\begin{center}\drinkerInlineDisplayedPic\end{center}
The reader may contrast this with the
syntactic proof of the same formula in Fig.\,\ref{fig:lk-drinker-proof}.
The four combinatorial proofs of Fig.\,\ref{fig:cps} are rendered in condensed form in Fig.\,\ref{fig:cps-condensed}.
The main theorem of this paper is soundness and completeness: a formula is valid if and only if it has a combinatorial proof (Theorem\,\ref{thm:soundness-completeness}).\figurelkdrinkerproof{}
The propositional fragment was presented in \cite{Hug06}.
\figcps{}\figcpscondensed{}
\section{Notation and terminology}\label{sec:notation}
\parag{First-order logic}
We mostly follow the notation and terminology of \cite{Joh87} for first-order logic without equality \cite{Fre}.
Terms and atoms (atomic formulas) are generated
inductively from variables $x$, $y$, $z,\ldots$ by: if $\gamma$ is
an $n$-ary
function (resp.\ predicate) symbol
and $t_1,\ldots,t_n$ are terms then $\gamma t_1\ldots t_n$ is a term (resp.\ atom).
We extend the set of atoms with the logical constants $1$ (true) and $0$ (false).
For technical convenience we assume every predicate symbol $p$ is assigned a \defn{dual} predicate symbol $\pp$ with $\pp\tightneq p$ and $\ppp\tighteq p$, and extend duality to atoms with $\dual{p\rule{0ex}{1.1ex}\likex{t_1\ldots t_n}}=\pp t_1\ldots t_n$,
$\dual0=1$ and $\dual1=0$.
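As an illustrative aside (not part of the formal development), the duality convention above can be sketched in Python. Representing an atom as a pair of a predicate symbol and a term tuple, and marking dual symbols with a trailing tilde, are assumptions of this sketch, not the paper's notation.

```python
# Sketch of the duality on atoms defined above.
# An atom is (predicate_symbol, terms); the dual symbol of p is modelled
# (an assumption of this sketch) by toggling a trailing "~", so that
# dual(dual(p)) == p, matching the involutive condition in the text.

def dual_symbol(p):
    """Return the dual predicate symbol."""
    return p[:-1] if p.endswith("~") else p + "~"

def dual_atom(atom):
    """Dual of an atom; the constants 0 and 1 are each other's duals."""
    if atom in ("0", "1"):
        return "1" if atom == "0" else "0"
    p, terms = atom
    return (dual_symbol(p), terms)
```

Applying \verb|dual_atom| twice returns the original atom, mirroring $\ppp\tighteq p$.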
\defn{Formulas} are generated from atoms by binary
$\wedge$
and $\vee$
and
quantifiers $\forall x$
and $\exists x$
per variable $x$.
Define $\neg$
and $\implies$
as abbreviations:
$\neg(\alpha)=\dual\alpha$ on atoms $\alpha$,
$\neg(\formula\tightwedge\formulaa)=(\neg\formula)\vee(\neg\formulaa)$,
$\neg(\formula\tightvee\formulaa)=(\neg\formula)\wedge(\neg\formulaa)$,
$\neg\mkern2mu\forall x\mkern2mu \formula=\exists x\mkern2mu\neg\mkern1mu \formula$,
$\neg\mkern2mu\exists x\mkern2mu \formula=\forall x\mkern2mu\neg\mkern1mu \formula$,
and
$\formula\implies\formulaa=(\neg \formula) \vee \formulaa$.
A formula is \defn{rectified} if all bound variables are distinct from one another and from all free variables, e.g.
\mbox{$\rectifiedformulaeg\tightwedge{}$}
but not
\mbox{$\unrectifiedformulaeg\tightwedge{}$}.
Throughout this paper we assume all formulas are rectified
(losing no generality since every unrectified formula has a logically equivalent rectified form).
\vspace{-2ex}\parag{Graphs}
An \defn{edge} on a set $\vertices$ is a two-element subset of $\vertices$.
A \defn{graph} $\graphpair$ is a finite
set $\vertices$ of \defn{vertices} and a set $\edges$ of edges on $\vertices$.
Write $\verticesof\graph$ and $\edgesof\graph$ for the vertex and edge sets of a graph $\graph$, and
$\vertex\vertexa$ for $\{\mkern1mu\vertex,\mkern-2mu\vertexa \mkern1mu\}$.
The \defn{complement} of $\graphpair$ is the graph $\graphpairof{\vertices}{\edges^{\mkern-1mu\mathsf c}}$ with $vw\tightin\edges^{\mkern-1mu\mathsf c}$ if and only if $vw\tightnotin\mkern-2mu\edges$.
A graph $\graph$ is (partially) \defn{coloured} if it carries a partial equivalence relation $\colourequiv$ on $\verticesof\graph$ such that $\vertex\colourequiv\vertexa$ only if $\vertex\vertexa\tightnotin\edgesof\graph$; each equivalence class is a \defn{colour}.
A graph is \defn{labelled} in a set $L$ if each vertex has an element of $L$ associated with it, its \defn{label}.
A \defn{vertex renaming} of $\graphpair$ along a bijection
$(\vertexrenaming{\hspace{1ex}}):\vertices\to\verticesp$ is the graph $\graphpairof\verticesp{\setst{\vertexrenaming\vertex\mkern2mu\vertexrenaming\vertexa}{\vertex\mkern1mu\vertexa\tightin\edges}}$, with colouring and/or labelling inherited (i.e., $\vertexrenaming\vertex\colourequiv\vertexrenaming\vertexa$ if $\vertex\colourequiv\vertexa$, and the label of $\vertexrenaming\vertex$ that of $\vertex$).
Following standard graph theory, we identify graphs modulo vertex renaming.
Let $\graph\tighteq\mkern3mu\graphpair$ and $\graphp\tighteq\mkern3mu\graphpairp$ be graphs.
A \defn{homomorphism} $\homom:\graph\to\graphp$ is a function \mbox{$\homom:\vertices\mkern-2mu\to\mkern-2mu\verticesp$} such that if
$\vertex\vertexa\mkern-2mu\in\mkern-2mu\edges$ then $\homomof\vertex\mkern1mu\homomof\vertexa\mkern-2mu\in\mkern-2mu\edgesp$.
Without loss of generality, assume
\mbox{$\vertices\cap\verticesp=\emptyset$} (by renaming vertices if needed).
The \defn{union} $\guniongp$ is \mbox{$\graphpairof{\mkern2mu\vertices\cup\verticesp\mkern-2mu}{\edges\cup\edgesp\mkern2mu}$}
and
\defn{join} \mbox{$\gjoingp$} is \mbox{$\graphpairof{\vertices\cup\verticesp\mkern-2mu}{\edges\cup\edgesp\cup\setst{\vertex\vertexp}{\vertex\tightin\vertices,\vertexp\tightin\verticesp}}$};
any colourings or labellings are inherited.
$\graph$ is \defn{disconnected} if $\graph=\graph_1\graphunion\graph_2$ for graphs $\graph_i$, else \defn{connected},
and \defn{coconnected} if
its complement
is connected.
The subgraph of
$\graphpair$ \defn{induced} by $W\tightsubseteq\vertices$ is $\graphpairof W {\restr \edges W}$ for $\restr \edges W$ the restriction of $\edges$ to edges on $W$.
A graph is \defn{$\graph$-free} if $\graph$ is not an induced subgraph.
A \defn{cograph} is a $\pfour$-free graph, where
$\pfour=\pfourgraph=\graphpairof{\pfourvertices}{\pfouredges}$.
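As an illustrative aside, the $\pfour$-freeness condition can be checked by brute force over all four-vertex subsets. The following Python sketch is $O(n^4)$ and intended only to make the definition concrete; efficient cograph recognition instead uses modular decomposition.

```python
# Brute-force check that a graph is P4-free, i.e. a cograph.
# Edges are two-element sets, as in the definitions above.
from itertools import combinations, permutations

def is_p4_free(vertices, edges):
    """True iff no four vertices induce the chordless path a-b-c-d."""
    E = {frozenset(e) for e in edges}
    def adj(u, v):
        return frozenset((u, v)) in E
    for quad in combinations(vertices, 4):
        for a, b, c, d in permutations(quad):
            if (adj(a, b) and adj(b, c) and adj(c, d)
                    and not adj(a, c) and not adj(a, d) and not adj(b, d)):
                return False
    return True
```

For example, the four-vertex path itself is rejected, while the four-cycle (the join of two edgeless two-vertex graphs) is accepted.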
In
$\graphpair$ the \defn{neighbourhood} $\neighbourhoodof\vertex$ of $\vertex\tightin\vertices$ is $\setsuchthat{\vertexa}{\vertex\vertexa\tightin\edges}$,
a \defn{module}
is a set $\module\tightsubseteq\vertices$
such that $\neighbourhoodof\vertex\tightsetminus\module = \neighbourhoodof\vertexa\tightsetminus\module$ for all $\vertex,\mkern-2mu\vertexa\tightin\module$,
and $\module$ is \defn{strong} if
every module $\modulep$ satisfies $\modulep\mkern-3mu\cap\mkern-1mu\module\mkern-2mu=\mkern-2mu\emptyset$, $\modulep\mkern-1mu\tightsubseteq\module$ or $\modulep\tightsupseteq\module$.
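To make the notion of module concrete, the following Python sketch (illustration only; the enumeration is exponential) lists all modules of a small graph by directly testing the neighbourhood condition above.

```python
# Enumerate all modules of a small graph: a vertex subset M is a module
# iff every member has the same neighbours outside M.
from itertools import combinations

def modules(vertices, edges):
    E = {frozenset(e) for e in edges}
    nbhd = {v: {u for u in vertices if frozenset((u, v)) in E}
            for v in vertices}
    mods = []
    for k in range(1, len(vertices) + 1):
        for M in combinations(vertices, k):
            S = set(M)
            if all(nbhd[v] - S == nbhd[M[0]] - S for v in M):
                mods.append(S)
    return mods
```

On the path $1$--$2$--$3$, for instance, the modules are the three singletons, $\{1,3\}$, and the whole vertex set.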
A \defn{directed graph} $\digraphpairof\vertices\edges$ is a set
$\vertices$ of vertices
and a set
$\edges\tightsubseteq\vertices\mkern-4mu\tighttimes\mkern-4mu\vertices$ of \defn{directed edges}.
A directed graph \defn{homomorphism} $\homom:\digraphpairof\vertices\edges\mkern-2mu\to\mkern-2mu\digraphpairof\verticesp\edgesp$ is a function $\homom:\vertices\mkern-3mu\to\mkern-3mu\verticesp$ such that $\diedge\vertex{\mkern-2mu\vertexa}\tightin\edges$ implies
$\diedge{\homom(\vertex)}{\homom(\vertexa)}\tightin\edgesp$.
\section{Fographs (first-order cographs)}\label{sec:fographs}
A cograph is \defn{logical} if every vertex is labelled by a variable or atom, and it has at least one atom-labelled vertex.
Write $\singleton\tag$ for a $\tag$-labelled vertex.
\begin{definition}\label{def:graph}\label{def:graph-of-formula}
The \defn{graph} $\gformula$ of a formula
$\formula$ is the logical cograph defined
inductively by:\footnote{$\graphofsymbol$ is a first-order extension of the propositional translation \textsl{G} of \cite[\S3]{Hug06}.
The latter is well-known in graph theory, as the function from a (prime-free) modular decomposition tree \cite{Gal67} or cotree \cite{Ler71,CLS81} to a cograph, and is employed in logic and category theory, e.g.\ \cite{Gir87,Hu99,Ret03}. See \S\ref{sec:related} for details.}
\begin{center}\vspace{0ex}\begin{math}
\begin{array}{c}
\graphof{\atom}
\;=\;
\singleton\atom \hspace{1ex} \text{ for every atom\/ }\atom
\\[2ex]
\begin{array}{r@{\hspace{3ex}=\hspace{3ex}}l}
\graphof{\,\formula\tightvee\formulaa\,} & \gformula\graphunion\gformulaa
\\[1.5ex]
\graphof{\,\formula\tightwedge\formulaa\,} & \gformula\graphjoin\gformulaa
\\[1.5ex]
\end{array}
\hspace{16ex}
\begin{array}{r@{\hspace{3ex}=\hspace{3ex}}l}
\graphof{\,\forall x\, \formula\,} & \graphall {x\,} {\formula}
\\[1.5ex]
\graphof{\,\exists x\, \formula\,} & \graphex {x\,} {\formula}
\\[1.5ex]
\end{array}\\[3ex]\end{array}\end{math}
\end{center}
\end{definition}
For example, $\veedrinkerformula$ and $\variantveedrinkerformula$
have the same graph $\drinkergraph$:
{\newcommand\gap{\hspace{3.5ex}}\begin{align*}
\\[-1ex]
\drinkergraph \gap&=\gap \graphofsymbol\mkern4mu\left(\rule{0ex}{1.7ex}\mkern6mu\veedrinkerformula\mkern6mu\right) \\[2ex]
&=\gap \graphofsymbol\mkern4mu\left(\rule{0ex}{1.7ex}\mkern6mu\variantveedrinkerformula\mkern6mu\right) \gap =
\rput(1.3,.1){
\drinkersquare
\e x y
\e x {px}
\e x {py}
}
\hspace{16ex}\\[2ex]
\end{align*}}%
Vertices of $\graphof\formula$ correspond to occurrences of atoms and quantifiers in $\formula$:
each occurrence of an atom $\atom$ in $\formula$ becomes an $\alpha$-labelled vertex, and each occurrence of a quantifier $\forall x$ or $\exists x$ becomes an $x$-labelled vertex.
A \defn{literal} is an atom-labelled vertex and a \defn{binder} is a variable-labelled vertex.
Thus $\drinkergraph$ has two literals,
$\singletonppx$ and $\singletonpy$,
and two binders,
$\singletonx$ and $\singletony$ (obtained from $\exists x$ and $\forall y$).
A module is \defn{proper}\label{sec:proper} if it has two or more vertices.
The \defn{scope} of a binder $\binder$
is the smallest proper strong module containing $\binder$.\footnote{Since, by definition, every logical cograph has a literal, the requisite strong module in the scope definition exists.}\textsuperscript{,}\footnote{To discern scope it is helpful to draw the modular decomposition tree \cite{Gal67}, i.e., cotree \cite{CLS81}. See Lemma\,\ref{lem:parent-scope}.}
For example, in $\drinkergraph$,
the scope of $\singletony$ is $\setof{\singletony,\singletonppx,\singletonpy}$,
and
the scope of $\singletonx$ is $\setof{\singletonx,\singletony,\singletonppx,\singletonpy}$, illustrated below by shading.
\begin{pic}{-1}{1}
\newcommand\greyradius{.75ex}
\newcommand\greydiameter{1.5ex}
\newcommand\greycolour{black!20!white}
\newcommand\greycircle[2]{\Cnodeput*[fillcolor=\greycolour,linecolor=\greycolour,radius=\greyradius,framesep=0pt](#1){#2}{}}
\rput(-2.5,0){%
\rput(-3,0){\begin{array}{c}\text{scope}\\\text{of }\singletony\end{array}\hspace{5ex}=}
\drinkersquare
{
\psset{linewidth=\greydiameter}
\psset{linecolor=\greycolour,linecap=2}
\greycircle{\halfedgelen,\halfedgelen}{px}
\greycircle{-\halfedgelen,-\halfedgelen}{y}
\greycircle{\halfedgelen,-\halfedgelen}{py}
\Cnodeput*[fillcolor=\greycolour,linecolor=\greycolour,radius=\greydiameter,framesep=0pt](\quarteredgelen,-\quarteredgelen){centre}{}
\nccurve[angleA=-100,angleB=100,nodesep=0,linecap=2] {px}{py}
\nccurve[angleA=10,angleB=170,nodesep=0,linecap=2] y {py}
\nccurve[angleA=20,angleB=-110,nodesep=0,linecap=2] y {px}
}
\drinkersquareunlabelled
\e x y
\e x {px}
\e x {py}
}
\rput(5.7,0){%
\rput(-3,0){\begin{array}{c}\text{scope}\\\text{of }\singletonx\end{array}\hspace{5ex}=}
\drinkersquare
{
\psset{linewidth=\greydiameter}
\psset{linecolor=\greycolour,linecap=2}
\greycircle{-\halfedgelen,\halfedgelen}{x}
\greycircle{\halfedgelen,\halfedgelen}{px}
\greycircle{-\halfedgelen,-\halfedgelen}{y}
\greycircle{\halfedgelen,-\halfedgelen}{py}
\Cnodeput*[fillcolor=\greycolour,linecolor=\greycolour,radius=\greydiameter,framesep=0pt](\quarteredgelen,-\quarteredgelen){centre}{}
\e x {py}
\e y {px}
\nccurve[angleA=-100,angleB=100,nodesep=0,linecap=2] {px}{py}
\nccurve[angleA=10,angleB=170,nodesep=0,linecap=2] y {py}
\nccurve[angleA=20,angleB=-110,nodesep=0,linecap=2] y {px}
\nccurve[angleA=70,angleB=-160,nodesep=0,linecap=2] y {px}
\nccurve[angleA=-80,angleB=80,nodesep=0,linecap=2] x y
\nccurve[angleA=-10,angleB=-170,nodesep=0,linecap=2] x {px}
}
\drinkersquareunlabelled
\e x y
\e x {px}
\e x {py}
}
\end{pic}
A binder $\binder$ is \defn{existential} (resp.\ \defn{universal}) in a logical cograph $\fograph$ if, for every other vertex $\vertex$ in the scope of $\binder$, we have $\binder\vertex\tightin\edgesof\fograph$ (resp.\ $\binder\vertex\tightnotin\edgesof\fograph$).\footnote{Since the scope of a binder is a proper strong module, every binder is either universal or existential (and not both).}
In $\drinkergraph$, for example,
$\singletonx$ is existential
and
$\singletony$ is universal
(corresponding to
$\exists x$
and
$\forall y$
in the formula(s) generating $\drinkergraph$).
An \defn{$\variable$-binder} is a binder with variable $\variable$, which is \defn{legal} if its scope contains at least one literal and no other $\variable$-binder.
\begin{definition}\label{def:fograph}
A \defn{fograph}\/ or \defn{first-order cograph}\/ is a logical cograph whose binders are legal.
\end{definition}
For example, $\drinkergraph$ above is a fograph, but
\(\;
\namedsingletonleft x x
\hspace*{.8ex}
\namedsingletonright y y
\hspace*{.8ex}
\singletonright p
\e x y
\;\)
is not (since neither binder scope contains a literal),
nor is
\(\;
\namedsingletonright a x
\hspace*{.8ex}
\namedsingletonright b x
\hspace*{.8ex}
\singletonright {px}
\)\;
(since each $x$-binder is in the other's scope).
\begin{lemma}\label{lem:translation}
The graph $\graphof\formula$ of every formula $\formula$ is a fograph.
\end{lemma}
\begin{proof}
By structural induction on $\formula$. The base case with $\formula$ an atom is immediate.
For the induction step, note that all four operations defined in Def.\,\ref{def:graph-of-formula} preserve the property of being a fograph, since all formulas are rectified.\footnote{Naively applying $\graphofsymbol$ to an unrectified formula such as
$(\forall\mkern-1mu x p\mkern-2mu x\mkern-1mu)\tightvee (\forall\mkern-1mu x (\mkern-2mu q\mkern-2mu x\mkern-2.5mu\tightvee\mkern-2mu r\mkern-1mu x\mkern-1mu)\mkern-2mu)$
yields
$\:\newcommand\gap{\hspace{.8ex}}
\singleton{\mkern-1mu x}
\gap
\singleton{\mkern-1mu p\mkern-2mu x}
\gap
\singleton{\mkern-1mu x}
\gap
\singleton{\mkern-1mu q\mkern-2mu x}
\gap
\singleton{\mkern-1mu r\mkern-1.5mu x}
\:$
with all three literals bound ambiguously by both binders. Whence our assumption that every formula be in rectified form.}
\end{proof}
An \defn{$\variable$-literal} is one whose atom contains the variable $\variable$.
An $\variable$-binder \defn{binds} every $\variable$-literal in its scope.
In $\drinkergraph$ above, for example, $\singletonx$ binds $\singletonppx$ and $\singletony$ binds $\singletonpy$.
An $\variable$-binder is \defn{rectified} if it is the only $\variable$-binder and its scope contains every $\variable$-literal.
A fograph is \defn{rectified} if its binders are rectified.\footnote{In \S\ref{sec:soundness} we will observe that $\graphofsymbol$ (Def.\,\ref{def:graph-of-formula}) is a surjection onto rectified fographs (\reflem{lem:graph-surj}).}
For example,
$\drinkergraph$ above is rectified
but $\:\cleaningegone\:$ is not (since it has two $x$-binders), nor is $\:\cleaningegtwo\:$
(since
$\singletonx$ does not bind
$\singletonqx$).
To \defn{rectify} an unrectified \mbox{$\variable$-binder} $\binder$ in a fograph $\fograph$ is to
change its label to a variable $\xp$ which is fresh (i.e., not in any label of $\fograph$)
and
substitute $\xp$ for $x$ in the
label of every literal
bound by $\binder$.
A \defn{rectified form} is any result of rectifying binders until reaching a rectified fograph.
For example, \;$\bigcleaningeg{x}{x}$\: has the rectified form \;$\bigcleaningeg{y}{z}$\:.
This is analogous to the unrectified formula
\mbox{$\unrectifiedformulaeg\tightvee x$} having the rectified form
\mbox{$\rectifiedformulaeg\tightvee z$}.
The \defn{binding graph} $\bindinggraphof\graph$ of a fograph $\graph$ is the
directed graph
\mbox{$\digraphpairof{\verticesof\graph}{\,\setof{\diedge{\binder}{\mkern-2mu\literal}:\binder\text{ binds }\literal}\,}$}.
For example, the binding graph of $\drinkergraph$ above is
\begin{pic}{-1}{.7}
\rput(-1.5,0){\bindingrelof\drinkergraph\hspace{6ex}=}
\rput(1.3,0){\drinkersquare\de x {px}\de y {py}}
\end{pic}
\section{Skew bifibrations}
A directed graph homomorphism
$\fib:\digraphpair\to\digraphpairp$ is a \defn{fibration} \cite{Gro60,Gra66} if
for all
$\vertex\tightin\vertices$ and $\diedge{\vertexa}{\mkern-1mu\fib(\vertex)}\tightin\edgesp$ there exists a unique $\skewlifting\vertexa\tightin\vertices$ with $\diedge{\skewlifting\vertexa}{\mkern-2mu\vertex}\tightin\edges$ and $\fib(\skewlifting\vertexa) = \vertexa$.
This definition is illustrated below-left.
\begin{center}
\begin{pspicture}[nodesep=2pt,labelsep=2pt](0,-.35)(0,2)
\begin{math}
\newcommand\rad{.8}
\newcommand\shortrad{.4}
\rput(-5,0){
\rput(-\rad,1.55){\Rnode{liftw}{\hspace{-2ex}\likex{\exists\mkern1mu!\mkern1mu\skewlifting\vertexa}}}
\rput(-\rad,0){\Rnode w \vertexa}
\rput(\rad,1.55){\Rnode v v}
\rput(\rad,0){\Rnode{fv}{\fib(v)}}
\ncline[arrows=->]{w}{fv}%
\ncline[arrows=->]{liftw}{v}%
%
\fibrestyle
\ncline{liftw}{w}
\ncline{v}{fv}
}
\rput(0,0){
\rput(-\rad,1.55){\Rnode{liftw}{\hspace{-2ex}\likex{\exists\mkern1mu!\mkern1mu\skewlifting\vertexa}}}
\rput(-\rad,0){\Rnode w \vertexa}
\rput(\rad,1.55){\Rnode v v}
\rput(\rad,0){\Rnode{fv}{\fib(v)}}
\ncline{w}{fv}%
\ncline{liftw}{v}%
%
\fibrestyle
\ncline{liftw}{w}
\ncline{v}{fv}
}
\rput(5,0){
\rput(\rad,1.55){\Rnode{v}v}%
\rput(\rad,0){\Rnode{fv}{f(v)}}
\rput(-\rad,.3){\Rnode{fwhat}{f(\skewlifting w)}}
\rput(-\rad,1.8){\Rnode{what}{\hspace{-1.4ex}\likex{\exists\mkern2mu\skewlifting\vertexa}}}
\rput(-\shortrad,-.35){\Rnode{w}{w}}
\ncline{v}{what}%
\ncline{fv}{fwhat}
\ncline{fv}{w}%
%
\fibrestyle
\ncline{v}{fv}
\ncline{what}{fwhat}
}
\end{math}
\end{pspicture}
\end{center}
Similarly, an undirected graph homomorphism $\fib:\graphpair\to\graphpairp$ is a \defn{fibration}
if for all
$\vertex\tightin\vertices$ and
${\vertexa}\mkern3mu{\fib(\vertex)}\in\edgesp$ there exists a unique $\skewlifting\vertexa\tightin\vertices$ with ${\skewlifting\vertexa}{\mkern2mu\vertex}\tightin\edges$ and $\fib(\skewlifting\vertexa) = \vertexa$.
This definition is illustrated above-centre.\footnote{An undirected graph fibration is a special case of a topological fibration \cite{Whi78}, by viewing every edge as a copy of the unit interval.}
An undirected graph homomorphism $\fib:\graphpair\to\graphpairp$ is a \defn{skew fibration} \cite{Hug06}
if for all
$\vertex\tightin\vertices$ and
${\vertexa}\mkern3mu{\fib\mkern-.3mu(\vertex)}\tightin\edgesp$ there exists
$\skewlifting\vertexa\tightin\vertices$ with
${\skewlifting\vertexa}\mkern2.5mu{\vertex}\in\edges$ and
$\fib(\skewlifting\vertexa)\mkern2mu\vertexa\notin\edgesp$.
This definition is illustrated above-right.
Since $\fib(\skewlifting\vertexa) \tighteq \vertexa$ implies
$\fib(\skewlifting\vertexa)\mkern2mu\vertexa\notin\edgesp$,
skew fibrations generalize fibrations.
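As an illustrative aside, the skew-lifting condition can be tested mechanically. The following Python sketch (illustration only) checks whether a vertex map between small undirected graphs is a skew fibration as defined above, with edges represented as two-element sets as in \S\ref{sec:notation}.

```python
# Check the skew-fibration condition: f is a graph homomorphism such that
# for every v and every target edge w--f(v), some neighbour u of v
# satisfies "f(u) not adjacent to w" (the skew lifting pictured above).

def is_skew_fibration(V1, E1, E2, f):
    E1s = {frozenset(e) for e in E1}
    E2s = {frozenset(e) for e in E2}
    # f must be a graph homomorphism: edges map to edges.
    if not all(frozenset((f(u), f(v))) in E2s for u, v in E1):
        return False
    for v in V1:
        for e in E2s:
            if f(v) not in e:
                continue
            (w,) = e - {f(v)}  # edges are 2-element sets, so w is unique
            if not any(frozenset((u, v)) in E1s
                       and frozenset((f(u), w)) not in E2s
                       for u in V1):
                return False
    return True
```

For example, the identity on a single edge is a skew fibration, whereas mapping an isolated vertex onto an endpoint of an edge is not (no neighbour exists to lift).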
A graph homomorphism $\fib:\cover\to\base$ between fographs \defn{preserves labels} if
for every vertex $\vertex\tightin\verticesof\cover$ the
label of $\vertex$ in $\cover$ equals the label of $\fib(\vertex)$ in $\base$,
and \defn{preserves existentials} if
for every existential binder $\binder$ in $\cover$ the vertex $\fib(\binder)$ is an existential binder in $\base$.
\begin{definition}
A \defn{skew bifibration} $\bifib\mkern-2mu:\mkern-2mu\cover\mkern-2mu\to\mkern-2mu\base$ between fographs is a label- and existential-preserving graph \mbox{homomorphism} such that
\begin{itemize}
\item $\bifib:\cover\to\base$ is a skew fibration
\item $\bifib:\bindinggraphof\cover\to\bindinggraphof\base$ is a fibration.
\end{itemize}
\end{definition}\begin{figure*}%
\begin{pic}{-1.1}{2.8}
\rput(-5,0){\drinkerfiblabelledpair{x}{x}}
\rput(0,0){\drinkerbindingfiblabelled{x}{x}}
\rput(5,0){\drinkerfib}
\end{pic}%
\caption{\label{fig:drinkerbifib}A skew bifibration (left), its binding fibration (centre), and its skeleton (right).}\figrule\end{figure*}
We refer to $\bifib:\bindinggraphof\cover\to\bindinggraphof\base$ as the \defn{binding fibration}.
For example, a skew bifibration is shown in Fig.\,\ref{fig:drinkerbifib}, with its binding fibration.
The \defn{skeleton} of a skew bifibration is the result of dropping labels from its source.
Fig.\,\ref{fig:drinkerbifib} shows an example.
We identify a skew bifibration with its skeleton.
No information is lost since the source labels can be lifted from the target (because skew bifibrations preserve labels, by definition).\footnote{We need the explicit preservation of existentials in the definition of skew bifibration since that property does not follow from the other conditions.
For example, the unique label-preserving function from
$\graphof{\exists x\mkern2mu p}
=
\namedsingletonleft x x\hspace{1ex}\namedsingletonright p p\e x p$
to
$\graphof{(\forall x\mkern2mu q)\tightwedge p}
=
\namedsingletonleft x x\hspace{.8ex}\namedsingletonleft q q\hspace{1ex}\namedsingletonright p p\e q p\nccurve[nodesep=0pt,angleA=35,angleB=150]x p$
satisfies all the conditions of being a skew bifibration except existential preservation (since it maps an existential binder to a universal binder).}
\section{Fonets (first-order nets)}\label{sec:fonets}
Two atoms are \defn{pre-dual} if they have dual predicate symbols (e.g.\ $\pxy$ and $\pp y fa$) and two literals are pre-dual if their atoms are pre-dual.
\begin{definition}\label{def:linked}
A \defn{linked fograph} is a coloured fograph such that
\begin{itemize}
\item every colour, called a \defn{link}, comprises two pre-dual literals, and
\item every literal is either 1-labelled or in a link.
\end{itemize}
\end{definition}
Fig.\,\ref{fig:leap}
shows a linked fograph $\net$
with two links,
$\twolinkp$
and
$\twolinkq$.\begin{figure*}\begin{center}\vspace{9ex}
\twolinkfograph
\hspace{35ex}
\twolinkleapgraph
\vspace{7ex}\end{center}\caption{\label{fig:leap}A fonet $\net$ (left) with
unique dualizer $\protect\twolinkassignment$
and its
leap graph $\protect\leapgraphof\net$ (right).}\figrule\end{figure*}
\begin{definition}\label{def:dualizer}
Let $\cover$ be a linked fograph.
\WLOG,
assume
$\cover$ is rectified (by rectifying binders as needed).
A \defn{dualizer} for $\cover$ is
a function $\dualizer$
assigning to each existential binder variable $x$ a term
$\dualizer(x)$
such that
for every link $\{\mkern2mu\singleton\atomone,\singleton\atomtwo\mkern2mu\}$, the atoms
$\atomone\dualizer$ and $\atomtwo\dualizer$ are dual, where $\atom\dualizer$ denotes the result of substituting $\dualizer(x)$ for $x$ throughout $\atom$
(simultaneously for each $x$).
\end{definition}
For example,
$\twolinkassignment$
is a dualizer\footnote{In the context of a function we write $\assign a b$ for the ordered pair $\langle a,b\rangle$.} for
$\net$ (Fig.\,\ref{fig:leap}) since
$\ppx\twolinkassignment=\ppz$ is dual to $\pz$, and $\qqy\twolinkassignment=\qqfz$ is dual to $\qfz$; this is the unique dualizer for $\net$.
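As an illustrative aside, checking that a candidate substitution is a dualizer is mechanical. The following Python sketch applies a substitution simultaneously and verifies each link, using nested tuples for compound terms and a trailing tilde for dual predicate symbols; both representation choices are assumptions of this sketch.

```python
# Check that a substitution sigma is a dualizer: for each link, after
# applying sigma simultaneously, the two atoms must be dual.
# Terms: a variable is a string; a compound term is (symbol, arg1, ...).

def subst(term, sigma):
    """Apply sigma simultaneously to a term."""
    if isinstance(term, str):
        return sigma.get(term, term)
    return (term[0],) + tuple(subst(t, sigma) for t in term[1:])

def dual_symbol(p):
    """Dual predicate symbols are marked with a trailing "~" (sketch only)."""
    return p[:-1] if p.endswith("~") else p + "~"

def is_dualizer(sigma, links):
    """links: pairs of atoms, each atom a (predicate_symbol, terms) pair."""
    for (p, ts), (q, us) in links:
        if dual_symbol(p) != q:
            return False
        if (tuple(subst(t, sigma) for t in ts)
                != tuple(subst(u, sigma) for u in us)):
            return False
    return True
```

With the two links of Fig.\,\ref{fig:leap} encoded this way, the substitution assigning $z$ to $x$ and $fz$ to $y$ is accepted, while any substitution sending $x$ elsewhere is rejected.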
A \defn{dependency}
$\dep$
of $\cover$
is an existential binder $\singletonx$ and a universal binder $\singletony$ such that every dualizer
for $\cover$
assigns to $x$ a term containing $y$.\footnote{In \S\ref{sec:ptime} we show that all dependencies can be constructed in polynomial time, despite quantification over \emph{every dualizer}.}
For example, $\{\mkern2mu\singletony,\singletonz\mkern2mu\}$ is a dependency of $\net$ (Fig.\,\ref{fig:leap})
since the unique dualizer $\twolinkassignment$ assigns $\fz$ to $y$.
A \defn{leap} is a dependency or link.
The \defn{leap graph} $\leapgraphof\cover$ is the graph $(\verticesof\cover,\leapsof\cover)$ where $\leapsof\cover$ comprises all leaps of $\cover$.
See Fig.\,\ref{fig:leap} for an example.
A graph $\graphpair$ is a \defn{matching} if $\vertices$ is non-empty and
for all $\vertex\mkern-3mu\in\mkern-2mu\vertices$ there is a unique $\vertexp\mkern-4mu\in\mkern-2mu\vertices$ with $\vertex\vertexp\mkern-3mu\in\mkern-2mu\edges$.
A set $W$ \defn{induces a bimatching} in a linked fograph $\cover$ if $W$ induces
a matching in $\cover$ and induces a matching in $\leapgraphof\cover$.
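To make these conditions concrete, the following Python sketch (illustration only) tests whether a graph is a matching and whether a vertex set induces one.

```python
# A matching: a non-empty vertex set in which every vertex has exactly
# one neighbour. A set W induces a matching if the edges inside W form one.

def is_matching(vertices, edges):
    E = {frozenset(e) for e in edges}
    return bool(vertices) and all(
        sum(1 for u in vertices if frozenset((u, v)) in E) == 1
        for v in vertices)

def induces_matching(edges, W):
    E = {frozenset(e) for e in edges}
    induced = [e for e in E if e <= set(W)]
    return is_matching(list(W), induced)
```

A set $W$ induces a bimatching in a linked fograph $\cover$ precisely when \verb|induces_matching| holds for $W$ both in $\cover$ and in its leap graph $\leapgraphof\cover$.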
\begin{definition}
A \defn{fonet} or \defn{first-order net} is a
linked fograph which has a dualizer but no induced bimatching.
\end{definition}
See Fig.\,\ref{fig:leap} for an example of a fonet.
The minimal fonet is $\singleton 1$ (an uncoloured 1-labelled vertex).\footnote{A fonet can be viewed as a graph-theoretic abstraction and generalization of a unification net \cite{Hug18}. Upon forgetting vertex labels,
propositional fonets correspond to nicely coloured cographs \cite{Hug06},
which are in bijection with certain R\&B cographs \cite{Ret03}.
See \S\ref{sec:related} for details.}
\section{Combinatorial proofs}\label{sec:cps}
\begin{definition}\label{def:cp}
A \defn{combinatorial proof} of a fograph\/ $\fograph$ is a skew bifibration\/ \mbox{$\bifib:\fographnet\to\fograph$}
from a fonet\/ $\fographnet$.
A combinatorial proof of a formula $\formula$ is a combinatorial proof of its graph $\gformula$.
\end{definition}
For examples, see \S\ref{sec:intro}.
\begin{theorem}[Soundness]
A formula is valid if it has a combinatorial proof.
\label{thm:soundness}\end{theorem}
\begin{proof}
Section\,\ref{sec:soundness}.
\end{proof}
\begin{theorem}[Completeness]
Every valid formula has a combinatorial proof.
\label{thm:completeness}\end{theorem}
\begin{proof}
Section\,\ref{sec:completeness}.
\end{proof}
Combining the two theorems above, we obtain the main theorem of this paper:
\begin{theorem}[Soundness \& Completeness]\label{thm:soundness-completeness}
A formula
of first-order logic
is valid if and only if it has a combinatorial proof.
\end{theorem}
\section{Propositional combinatorial proofs without labels}\label{sec:prop-cps}
A \defn{proposition} is a formula with no quantifiers or terms, e.g.\ $\peirceformula$, and a proposition is \defn{simple} if it has no logical constant ($1$ or $0$).
This section provides an alternative representation of fographs and combinatorial proofs in the simple propositional case, without labels (variables and atoms).
An illustrative example is shown in%
\begin{figure*}\begin{center}%
\newcommand\radius{2.7}%
\begin{pic}{.5}{3.3}
\rput(-\radius,0){\dcponformula{\dcpcptwo}{-.08}{}}%
\rput(\radius,0){\dcponformula{\dcptwo}{.1}{}}%
%
\end{pic}%
\end{center}\caption{\label{fig:homogeneous-peirce}A standard combinatorial proof (left)
and a homogeneous combinatorial proof (right) of Peirce's law
$\protect\peirceimpliesformula\,=\,\protect\peirceformula$.}\vspace{0ex}\figrule\end{figure*}
Fig.\,\ref{fig:homogeneous-peirce}.
The left side
shows a standard combinatorial proof (Def.\,\ref{def:cp}) of Peirce's law $\peirceimpliesformula=\peirceformula$.
The right side shows the label-free form, called a \emph{homogeneous combinatorial proof}, defined below.
The source colouring and target labels ($\pp$, $p$ and $q$) have disappeared,
replaced by \emph{duality} edges, shown dashed and curved.
The adjective \emph{homogeneous} reflects the common type of the source and target (both cographs with additional duality edges), in contrast to a standard combinatorial proof skeleton which is \emph{heterogeneous} (the source is coloured, while the target is labelled).
\subsection{Dualizing graphs}\label{sec:dualizing-graphs}
A graph is \defn{triangle-free} if it is $\cthree$-free, where $\cthree=\cthreegraph=\graphpairof\cthreevertices\cthreeedges$.
\begin{definition}\label{def:dualizing-graph}
A \defn{dualizing graph} is a non-empty cograph $\dualizinggraph$ equipped with a second set $\dualitiesof\dualizinggraph$ of undirected edges on $\verticesof\dualizinggraph$, called \defn{dualities}, such that $\graphpairof{\verticesof\dualizinggraph}{\dualitiesof\dualizinggraph}$ is a triangle-free cograph.
\end{definition}
Four examples of dualizing graphs are shown in the bottom row of%
\begin{figure*}\begin{pic}{-4}{.5}
\rput(-6,0){\peirceovercombprop}
\rput(-2,0){\combpropone}
\rput(2,0){\combproptwo}
\rput(6,0){\combpropthree}
\end{pic}\caption{\label{fig:dualizing-graphs}Four simple propositions $\prop$ (top row), their fographs $\graphof\prop$ (middle row), and their dualizing graphs $\dualizinggraphofprop\prop$.
Each vertex in $\graphof\prop$ and $\dualizinggraphofprop\prop$ is aligned vertically with the corresponding atom occurrence in $\prop$. Dualities are shown dashed and curved.}%
\figrule\end{figure*}
Fig.\,\ref{fig:dualizing-graphs}.
Dualizing graphs generalize R\&B-cographs \cite{Ret03}.\footnote{An R\&B-cograph is a dualizing graph such that every vertex is in a unique duality.}
\begin{samepage}\begin{definition}\label{def:dualizing-graph-of-prop}
The dualizing graph $\dualizinggraphofprop\prop$ of a simple proposition $\prop$ is the dualizing graph $\dualizinggraph$ with
\begin{itemize}
\item $\verticesof\dualizinggraph=\setof{\text{occurrences of predicate symbols in }\prop}$,
\item $\vertex\vertexa\tightin\edgesof\dualizinggraph$ if and only if
the smallest subformula of $\prop$
containing both $\vertex$ and $\vertexa$ is
a conjunction (\latinstyle{i.e}\onedot, of the form $\formulaa\tightwedge\formulaaa$), and
\item $\vertex\vertexa\tightin\dualitiesof\dualizinggraph$ if and only if $\vertex$ and $\vertexa$ have dual predicate symbols (\latinstyle{e.g}\onedot, $p$ and $\pp$).
\end{itemize}
\end{definition}\end{samepage}
For example, for each simple proposition $\prop$ in the top row of Fig.\,\ref{fig:dualizing-graphs},
the bottom row shows the corresponding dualizing graph $\dualizinggraphofprop\prop$.
For comparison, the fograph $\graphof\prop$ is in the middle row.
\begin{lemma}\label{lem:dualizing-graph-of-prop-well-defined}
$\dualizinggraphofprop\prop$ is a well-defined dualizing graph for every simple proposition $\prop$.\footnote{We will observe in \S\ref{sec:surjections} that $\dualizinggraphofpropsymbol$ is a surjection from simple propositions onto dualizing graphs (\reflem{lem:prop-surj}).}
\end{lemma}
\begin{proof}
Let $\dualizinggraph=\dualizinggraphofprop\prop$.
We must show
$\graphpairof{\verticesof{\dualizinggraph}}{\edgesof{\dualizinggraph}}$ and
$\graphpairof{\verticesof{\dualizinggraph}}{\dualitiesof{\dualizinggraph}}$ are $P_4$-free and
$\graphpairof{\verticesof{\dualizinggraph}}{\dualitiesof{\dualizinggraph}}$ is $\cthree$-free.
Suppose
$\graphpairof
{\setof{\vertex_1,\mkern-2mu\vertex_2,\mkern-2mu\vertex_3,\mkern-2mu\vertex_4}}
{\setof{\vertex_1\vertex_2,\mkern-1mu\vertex_2\vertex_3,\mkern-1mu\vertex_3\vertex_4}}$
is an induced subgraph of
$\graphpairof{\verticesof\dualizinggraph}{\edgesof\dualizinggraph}$.
Since $\vertex_1\vertex_2\tightin\edgesof\dualizinggraph$ there exist subformulas
$\prop_1$ and $\prop_2$ of $\prop$ containing $\vertex_1$ and $\vertex_2$, respectively, with
$\prop_1\tightwedge\prop_2$ a subformula of $\prop$.
Necessarily
$\vertex_3$ is in $\prop_1$, otherwise (since $\prop$ is a syntactic tree) $\vertex_1\vertex_3\tightin\edgesof\dualizinggraph$ (a contradiction), and similarly
$\vertex_4$ is in $\prop_2$, otherwise $\vertex_2\vertex_4\tightin\edgesof\dualizinggraph$ (a contradiction).
But then $\vertex_1\vertex_4\tightin\edgesof\dualizinggraph$, a contradiction.
Suppose
$\graphpairof
{\setof{\vertex_1,\mkern-2mu\vertex_2,\mkern-2mu\vertex_3,\mkern-2mu\vertex_4}}
{\setof{\vertex_1\vertex_2,\mkern-1mu\vertex_2\vertex_3,\mkern-1mu\vertex_3\vertex_4}}$
is an induced subgraph of
$\graphpairof{\verticesof\dualizinggraph}{\dualitiesof\dualizinggraph}$,
where $\vertex_i$ is an occurrence of the nullary predicate symbol $p_i$.
By definition of $\dualitiesof\dualizinggraph$, we have $\pp_1=p_2$, $\pp_2=p_3$ and $\pp_3=p_4$.
Thus $p_3=p_1$, hence $p_4=\pp_1$, so $\vertex_1\vertex_4\in\dualitiesof\dualizinggraph$, a contradiction.
Suppose
$\graphpairof
{\setof{\vertex_1,\mkern-2mu\vertex_2,\mkern-2mu\vertex_3}}
{\setof{\vertex_1\vertex_2,\mkern-1mu\vertex_2\vertex_3,\mkern-1mu\vertex_3\vertex_1}}$
is an induced subgraph of $\graphpairof{\verticesof\dualizinggraph}{\dualitiesof\dualizinggraph}$,
where $\vertex_i$ is an occurrence of the nullary predicate symbol $p_i$.
By definition of $\dualitiesof\dualizinggraph$, we have $\pp_1=p_2$, $\pp_2=p_3$ and $\pp_3=p_1$.
Thus $p_3=\pp_2=\dual\pp_1=p_1$, contradicting $\pp_3=p_1$.
\end{proof}
\subsection{Dualizing nets}\label{sec:dualizing-nets}
A set $W\subseteq\verticesof\dualizinggraph$ \defn{induces a bimatching} in
a dualizing graph
$\dualizinggraph$ if $W$ induces a matching
in $\graphpairof{\verticesof\dualizinggraph}{\edgesof\dualizinggraph}$
and induces a matching in
$\graphpairof{\verticesof\dualizinggraph}{\dualitiesof\dualizinggraph}$.
\begin{definition}
A \defn{dualizing net} $\net$ is a dualizing graph with no induced bimatching,
such that $\graphpairof{\verticesof\net}{\dualitiesof\net}$ is a matching.
\end{definition}
For example,
\,$\newcommand\gap{\mkern7mu}\namedvx a\gap\namedvx b \gap \namedvx c \gap \namedvx d \e a b \dualitystyle \nccurve[angleA=30,angleB=150] a c \nccurve[angleA=-30,angleB=-150] b d$\,
is a dualizing net, while
\,$\newcommand\gap{\mkern7mu}\namedvx a\gap\namedvx b \gap \namedvx c \gap \namedvx d \e a b \e c d \dualitystyle \nccurve[angleA=30,angleB=150] a c \nccurve[angleA=-30,angleB=-150] b d $\,
and
\,$\newcommand\gap{\mkern7mu}\namedvx a\gap\namedvx b \gap \namedvx c \gap \namedvx d \e a b \dualitystyle \nccurve[angleA=25,angleB=155] a d \nccurve[angleA=-30,angleB=-150] b d $\,
are not.
The third dualizing graph in the bottom row of Fig.\,\ref{fig:dualizing-graphs} is a dualizing net, while the other three
are not.
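The two conditions can again be checked by brute force. In the sketch below (our own encoding, not part of the formalism) we read ``$W$ induces a matching'' as: the subgraph induced by $W$ is a perfect matching on $W$, \latinstyle{i.e}\onedot, every vertex of $W$ meets exactly one induced edge.

```python
from itertools import combinations

def induces_perfect_matching(W, edge_set):
    # W induces a matching: each vertex of W meets exactly one induced edge.
    induced = [e for e in edge_set if e[0] in W and e[1] in W]
    deg = {w: 0 for w in W}
    for u, v in induced:
        deg[u] += 1
        deg[v] += 1
    return len(W) > 0 and all(d == 1 for d in deg.values())

def is_dualizing_net(vertices, edges, dualities):
    # A dualizing net: (V, D) is a matching, and no vertex subset induces
    # a matching in both (V, E) and (V, D).
    deg = {v: 0 for v in vertices}
    for u, w in dualities:
        deg[u] += 1
        deg[w] += 1
    if any(d > 1 for d in deg.values()):
        return False
    for k in range(2, len(list(vertices)) + 1, 2):
        for W in combinations(vertices, k):
            if (induces_perfect_matching(W, edges)
                    and induces_perfect_matching(W, dualities)):
                return False
    return True
```

Running this on the three four-vertex examples above (vertices $a,b,c,d$; edges and dualities as drawn) classifies the first as a dualizing net and rejects the other two: the second has the induced bimatching $\setof{a,b,c,d}$, and in the third the vertex $d$ lies in two dualities.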
Dualizing nets
are in bijection with
even-length alternating elementary acyclic R\&B-cographs \cite{Ret03}.
\subsection{Propositional homogeneous combinatorial proofs}
A \defn{skew fibration}
$\skewfib\mkern-0mu:\mkern-1mu\dualizingcover\mkern-1mu\to\mkern-1mu\dualizingbase$
of dualizing graphs is a skew fibration
$\skewfib:\graphpairof{\verticesof\dualizingcover}{\edgesof\dualizingcover}\to\graphpairof{\verticesof\dualizingbase}{\edgesof\dualizingbase}$
such that
$\skewfib:\dualizinggraphpairof\dualizingcover\to\dualizinggraphpairof\dualizingbase$\/ is a homomorphism.
\begin{definition}
A \defn{homogeneous combinatorial proof} of a dualizing graph\/ $\dualizingbase$ is a skew fibration\/ \mbox{$\skewfib:\net\to\dualizingbase$}
from a dualizing net\/ $\net$.
A \defn{homogeneous combinatorial proof} of a simple proposition $\prop$ is a homogeneous combinatorial proof of its dualizing graph $\dualizinggraphofprop\prop$.
\end{definition}
For example,
a homogeneous combinatorial proof of Peirce's law $\peirceimpliesformula=\peirceformula$ is shown on the right of Fig.\,\ref{fig:homogeneous-peirce}.
\subsection{Propositional homogeneous soundness and completeness}
\begin{theorem}[Propositional homogeneous soundness and completeness]\label{thm:prop-soundness-completeness}
A simple proposition is valid if and only if it has a homogeneous combinatorial proof.
\end{theorem}
\begin{proof}
A corollary of Theorem\,\ref{thm:soundness-completeness}, detailed in \S\,\ref{sec:proof-of-propositional-homogeneous-soundness-completeness}.
\end{proof}
\section{Monadic combinatorial proofs without labels}\label{sec:monadic-homogeneous}
A formula is \defn{monadic} if its predicate symbols are unary and it has no function symbols or logical constants, \latinstyle{e.g}\onedot, $\drinkerformula$.
This section extends homogeneous combinatorial proofs to the monadic case.
Fig.\,\ref{fig:drinker-no-labels}\figdrinkernolabels{} shows an illustrative example:
on the left is the combinatorial proof of \mbox{$\drinkerformula$} presented in the Introduction,
and on the right is the corresponding homogeneous combinatorial proof, to be defined below.
For technical convenience, throughout this section we assume every monadic formula is closed, \latinstyle{i.e}\onedot, has no free variables. This loses no generality because a formula $\formula$ with free variables $x_1,\ldots,x_n$ is valid if and only if its closure $\forall x_1\ldots\forall x_n\mkern2mu \formula$ is valid.
Given a directed edge $e=\diredge\vertex\vertexa$, $\vertex$ is the \defn{source} of $e$, $\vertexa$ is the \defn{target} of $e$, and $\vertex$ and $\vertexa$ are \defn{in} $e$.
\begin{definition}\label{def:pre-mograph}
A \defn{pre-monadic graph} or \defn{pre-mograph} is a dualizing graph
$\mograph$ equipped with a non-empty set
$\bindingedgesof\mograph$ of directed edges on
$\verticesof\mograph$,
called \defn{bindings}, such that
if a vertex $\vertex$ is the target of a binding then
$\vertex$ is in no other binding.\footnote{In other words, if $\diredge\vertexa\vertex\tightin\bindingedgesof\mograph$, then (1)
$\diredge\vertex\vertexaa\tightnotin\bindingedgesof\mograph$ for all vertices $\vertexaa$, and (2)
$\diredge\vertexap\vertex\tightin\bindingedgesof\mograph$
implies $\vertexap=\vertexa$.}
\end{definition}
\begin{figure*}\begin{center}\vspace{5ex}\hspace{-8ex}\begin{math}
\monadicformulaeg
\end{math}
\hspace{20ex}
\monadicfographeg
\hspace{28ex}
\mographeg
\vspace{6ex}\end{center}\caption{\label{fig:mograph}%
A monadic formula $\monadicformula$,
its fograph $\graphof\monadicformula$,
and its mograph $\mographofformula\monadicformula$, respectively.%
}\figrule\end{figure*}%
An example of a pre-mograph is shown on the right of Fig.\,\ref{fig:mograph}, with two dualities (dashed and curved) and three bindings (directed and curved).
A vertex in a pre-mograph $\mograph$ is a \defn{literal} if it is the target of a binding, otherwise a \defn{binder}.
If $\diredge\binder\literal\tightin\bindingedgesof\mograph$ we say that
$\binder$ \defn{binds} $\literal$.\footnote{Note that, by the condition in the definition of pre-mograph,
$\binder$ must be a binder.}
The \defn{scope} of a binder $\binder$ in $\mograph$
is the smallest proper strong module of $\graphpairof{\verticesof\mograph}{\edgesof\mograph}$ containing $\binder$.
\begin{definition}\label{def:mograph}
A \defn{mograph} $\mograph$ is a pre-mograph such that
no binder is in a duality,
every binder has non-empty scope,
and
$\diredge\binder\literal\tightin\bindingedgesof\mograph$ only if
$\literal$ is in the scope of $\binder$.
\end{definition}
For example, the pre-mograph on the right of Fig.\,\ref{fig:mograph} is a mograph.
\begin{definition}\label{def:mograph-of-formula}
The \defn{mograph} $\mographofformula\monadicformula$ of a closed monadic formula $\monadicformula$ is the mograph defined by:
\begin{itemize}
\item $\verticesof\mograph=\setof{\text{occurrences of atoms and quantifiers in }\monadicformula}$,
\item $\vertex\vertexa\tightin\edgesof\mograph$ if and only if either
\begin{itemize}
\item the smallest subformula containing both $\vertex$ and $\vertexa$ is
a conjunction (\latinstyle{i.e}\onedot, of the form $\formula\tightwedge\formulaa$), or
\item $\vertex$ is an existential quantifier, $\vertexa$ is in its scope, and $\vertexa\tightneq\vertex$.
\end{itemize}
\item $\vertex\vertexa\tightin\dualitiesof\mograph$ if and only if $\vertex$ and $\vertexa$ are atoms with dual predicate symbols (\latinstyle{e.g}\onedot, $\px$ and $\ppy$), and
\item $\diredge\vertex\vertexa\tightin\bindingsof\mograph$ if and only if $\vertex$ is a quantifier, $\vertexa$ is an atom, and $\vertex$ binds $\vertexa$.
\end{itemize}
\end{definition}
For example, in Fig.\,\ref{fig:mograph}, the closed monadic formula $\monadicformula=\monadicformulaeg$ on the left has the mograph $\mographofformula\monadicformula$ on the right.
\begin{lemma}\label{lem:mograph-of-formula-well-defined}
$\mographofformula\monadicformula$ is a well-defined mograph for every closed monadic formula $\monadicformula$.\footnote{We will observe in \S\ref{sec:surjections} that $\mographofformulasymbol$ is a surjection from closed monadic formulas onto mographs (\reflem{lem:surj-closed-monadic-formulas-to-mographs}).}
\end{lemma}
\begin{proof}
Let $\mograph=\mographofformula\monadicformula$.
Since every atom-occurrence in $\monadicformula$ has a single variable, each literal is the target of at most one binding in $\mograph$, and since no atom-occurrence binds another atom-occurrence, $\mograph$ satisfies the condition on bindings in the definition of pre-mograph (Def.\,\ref{def:pre-mograph}).
By reasoning as in the proof of \reflem{lem:dualizing-graph-of-prop-well-defined}, $\dualizinggraphpairof{\mograph}$ is $\pfour$-free and $\cthree$-free.
By definition of $\mographofformulasymbol$, no binder is in a duality.
It remains to show that $\graphpairofgraph{\mograph}$ is a cograph, every binder has non-empty scope, and
$\diredge\binder\literal\tightin\bindingedgesof\mograph$ only if $\literal$ is in the scope of $\binder$.
We proceed by induction on the structure of $\monadicformula$.
Base case: $\monadicformula=\px$ for some $p$ and $x$, so $\mograph$ is a single vertex, hence a mograph.
Induction case: $\monadicformula=\monadicformula_1\wedgeorvee\monadicformula_2$ for $\wedgeorveeinset$.
By induction hypothesis $\mograph_i=\mographofformula{\monadicformula_i}$ is a mograph ($i=1,2$).
By definition of $\edgesof\mograph$, we have
$\graphpairofgraph{\mograph}=\graphpairofgraph{\mograph_1}\graphjoin\graphpairofgraph{\mograph_2}$
or
$\graphpairofgraph{\mograph}=\graphpairofgraph{\mograph_1}\graphunion\graphpairofgraph{\mograph_2}$,
thus $\graphpairofgraph\mograph$ is a cograph since each $\graphpairofgraph{\mograph_i}$ is a cograph.
The scope of a binder $\binder$ in $\mograph$ contains the scope of $\binder$ in the $\mograph_i$ containing $\binder$,
thus the scope of $\binder$ in $\mograph$ is non-empty and contains every literal bound by $\binder$, since $\mograph_i$ is a mograph.
Induction case: $\monadicformula=\forallorexists\mkern-1mu x\mkern2mu\monadicformulap$ for $\forallorexistsinset$.
By induction hypothesis, $\mographp=\mographofformula\monadicformulap$ is a mograph.
By definition of $\edgesof\mograph$ we have
$\graphpairofgraph\mograph=\binder\graphunion\mographp$ or
$\graphpairofgraph\mograph=\binder\graphjoin\mographp$ for a vertex
$\binder$
(the initial occurrence of $\forallorexists\mkern-1mu x$ in $\monadicformula$),
thus $\graphpairofgraph\mograph$ is a cograph since $\graphpairofgraph{\mographp}$ is a cograph.
The scope of $\binder$ in $\mograph$ comprises every literal, and is therefore non-empty and contains every literal bound by $\binder$.
The scope of any other binder $\binderp$ in $\mograph$ is equal to the scope of $\binderp$ in $\mographp$,
so is non-empty and contains every literal bound by $\binderp$, since $\mographp$ is a mograph.
\end{proof}
\subsection{Monets}\label{sec:monets}
A mograph is \defn{linked} if every literal is in a unique duality.
An example of a linked mograph is shown in Fig.\,\ref{fig:monet} (left).
\begin{definition}
Let $\mograph$ be a linked mograph.
Its \defn{binder equivalence}\/
$\brel\mograph$ is the equivalence relation on binders generated by
$\binder_1\brel\mograph\binder_2$ if there exist literals $\literal_1$ and $\literal_2$ with
$\diredge{\binder_1}{\literal_1},\diredge{\binder_2}{\literal_2}\tightin\bindingsof\mograph$
and $\literal_1\literal_2\tightin\dualitiesof\mograph$.
\end{definition}
Thus
$\binder_1\brel\mograph\binder_2$
if and only if there exists a binding/duality pattern of the form
\begin{center}\begin{math}
\newcommand\bindright[2]{\nccurve[angleA=60,angleB=-150]{#1}{#2}}
\newcommand\bindleft[2]{\nccurve[angleA=120,angleB=-30]{#1}{#2}}
\newcommand\dualright[2]{\nccurve[angleA=20,angleB=160]{#1}{#2}}
\newcommand\pivot[1]{\hspace{.4ex}\namedvx{#1}\hspace{.4ex}}
\begin{array}{cccccccccccc}
& \namedvx 1\quad & \namedvx 2 & & \namedvx 3\quad & \namedvx 4 & & \;\raisebox{-1.2ex}{\ldots} & & \namedvx 5\quad & \namedvx 6 \\[1.3ex]
\Rnode{b}{\binder_1} & & & \pivot u & & & \pivot v & & \pivot w & & & \Rnode{c}{\binder_2}
\end{array}
{\bindingstyle
\bindleft{u}{2}
\bindright{u}{3}
\bindleft{v}{4}
\bindright{w}{5}
\bindleft{c}{6}
\psset{nodesepA=0pt}
\bindright{b}{1}
}
{\dualitystyle
\dualright 1 2
\dualright 3 4
\dualright 5 6
}
\end{math}\end{center}
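The generated equivalence relation is precisely the transitive closure of alternating binding/duality chains as pictured above, so it can be computed with a standard union-find pass over the dualities. The sketch below assumes (our own encoding) bindings as $(\text{binder},\text{literal})$ pairs and dualities as pairs of literals.

```python
def binder_classes(bindings, dualities):
    # Equivalence classes of binders, generated by: b1 ~ b2 whenever they
    # bind literals l1, l2 joined by a duality.  Union-find with path halving.
    binder_of = {lit: b for b, lit in bindings}
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for b, _ in bindings:
        find(b)                             # register every binder
    for l1, l2 in dualities:
        if l1 in binder_of and l2 in binder_of:
            union(binder_of[l1], binder_of[l2])

    classes = {}
    for b, _ in bindings:
        classes.setdefault(find(b), set()).add(b)
    return list(classes.values())
```

On the chain pattern drawn above (bindings $b_1\mapsto 1$, $u\mapsto 2,3$, $v\mapsto 4$, $w\mapsto 5$, $b_2\mapsto 6$ with dualities $12$, $34$, $56$) this merges $b_1$, $u$ and $v$ into one class and $w$, $b_2$ into another.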
Let $\mograph$ be a linked mograph.
A binder $\binder$ in $\mograph$ is \defn{existential} (resp.\ \defn{universal}) if, for every other vertex $\vertex$ in the scope of $\binder$, we have
$\binder\vertex\tightin\edgesof\mograph$ (resp.\ $\binder\vertex\tightnotin\edgesof\mograph$).\footnote{Since the scope of a binder is a proper strong module, every binder is either universal or existential (and not both).}
A \defn{conflict} in $\mograph$ is a pair $\setof{b,c}$ of distinct universal binders $b$ and $c$ such that $b\brel\mograph c$.
\begin{definition}\label{def:mograph-consistent}
A mograph is \defn{consistent} if it has no conflict.
\end{definition}
A \defn{dependency} of $\mograph$ is a pair $\setof{b,c}$ of binders with $b\brel\mograph c$, $b$ existential, and $c$ universal.
A \defn{leap} is a dependency or duality.
\begin{definition}
The \defn{leap graph} $\leapgraphof\mograph$ of a linked mograph $\mograph$ is
$\graphpairof{\verticesof\mograph}{\leapsof\mograph}$ for $\leapsof\mograph$ the set of leaps of $\mograph$.
\end{definition}
An example of a leap graph is shown in Fig.\,\ref{fig:monet} (right).
A set of vertices $W\subseteq\verticesof\mograph$ \defn{induces a bimatching} in a linked mograph
$\mograph$ if $W$ induces a matching in
$\graphpairof{\verticesof\mograph}{\edgesof\mograph}$
and induces a matching in
$\leapgraphof\mograph$.
\begin{definition}
A \defn{monet} (\emph{monadic net}) is a consistent linked mograph with no induced bimatching.
\end{definition}
An example of a monet is shown in\begin{figure*}\begin{center}\vspace{5ex}\begin{math}
\moneteg
\hspace{35ex}
\monetegleapgraph
\end{math}\vspace{6ex}\end{center}\caption{\label{fig:monet}A monet $\monet$ (left) and its leap graph $\leapgraphof\monet$ (right).}\figrule\end{figure*}
Fig.\,\ref{fig:monet}.
\subsection{Monadic homogeneous combinatorial proofs}
Let $\mographa$ and $\mograph$ be mographs.
A function $\fib:\verticesof\mographa\to\verticesof\mograph$ \defn{preserves existentials}
if
for every existential binder $\binder$ in $\mographa$
the vertex $\fib(\binder)$ is an existential binder in $\mograph$.
\begin{definition}\label{def:monadic-skew-bifib}
A \defn{skew bifibration} $\bifib:\mographa\to\mograph$ between mographs is
an existential-preserving skew fibration
$\bifib:\graphpairof{\verticesof\mographa}{\edgesof\mographa}\to\graphpairof{\verticesof\mograph}{\edgesof\mograph}$
such that
\begin{itemize}
\item
$\bifib:\graphpairof{\verticesof\mographa}{\dualityedgesof\mographa}\to\graphpairof{\verticesof\mograph}{\dualityedgesof\mograph}$
is a homomorphism and
\item
$\bifib:\graphpairof{\verticesof\mographa}{\bindingedgesof\mographa}\to\graphpairof{\verticesof\mograph}{\bindingedgesof\mograph}$
is a fibration.\vspace{3pt}
\end{itemize}
\end{definition}
An example of a skew bifibration between mographs is shown on the right of Fig.\,\ref{fig:drinker-no-labels}.
\begin{definition}
A \defn{homogeneous combinatorial proof} of a mograph $\mograph$ is a skew bifibration \mbox{$\bifib:\net\to\mograph$} from a monet $\net$.
A \defn{homogeneous combinatorial proof} of a closed monadic formula $\monadicformula$ is a homogeneous combinatorial proof of its mograph $\mographofformula\monadicformula$.\footnote{Although, for technical convenience, throughout this paper we have assumed (without loss of generality) that every formula is rectified, this definition of \emph{homogeneous combinatorial proof} also works directly for non-rectified closed monadic formulas, without the need to first transform to rectified form.
This is because $\mographofformulasymbol$ is agnostic to the choice of bound variables, since mographs do not contain any variables.
For example,
$\mographofformula{
(\forall x\mkern1mu px)
\vee
(\forall x\mkern1mu qx)
}
=
\mographofformula{
(\forall x\mkern1mu px)
\vee
(\forall y\mkern1mu qy)
}
=
\:
\namedsingletonleft x {}
\hspace{2ex}
\namedsingletonright{px}{}
\hspace{2ex}
\namedsingletonleft{xx}{}
\hspace{2ex}
\namedsingletonright{qx}{}
{\bindingstyle
\psset{nodesepA=1pt,nodesepB=.5pt}
\nccurve[angleA=30,angleB=155]{x}{px}
\nccurve[angleA=30,angleB=155]{xx}{qx}
}
\:
$
and
$\mographofformula{\forall x\mkern2mu\exists x\mkern2mu px}
=
\mographofformula{\forall x\mkern2mu\exists y\mkern2mu py}
=
\,
\namedsingletonleft x {}
\hspace{1.6ex}
\namedsingletonleft{xx} {}
\hspace{2ex}
\namedsingletonright {px} {}
\e{xx}{px}
{\bindingstyle
\psset{nodesepA=1.5pt,nodesepB=1pt}
\nccurve[angleA=35,angleB=150]{xx}{px}
}
\,
$.}
\end{definition}
A homogeneous combinatorial proof of $\drinkerformula$ is shown in Fig.\,\ref{fig:drinker-no-labels} (right).
\subsection{Monadic homogeneous soundness and completeness}\label{sec:monadic-homogeneous-soundness-completeness}
\begin{theorem}[Monadic homogeneous soundness and completeness]\label{thm:monadic-soundness-completeness}
A closed monadic formula is valid if and only if it has a homogeneous combinatorial proof.
\end{theorem}
\begin{proof}
A corollary of Theorem\,\ref{thm:soundness-completeness}, detailed in \S\,\ref{sec:proof-of-monadic-soundness-completeness}.
\end{proof}
\section{Modal combinatorial proofs}\label{sec:modal-cps}
A \defn{modal} formula is
generated from
the \defn{modal operators}
$\nec$ (necessity) and $\pos$ (possibility) instead of quantifiers and has all predicate symbols nullary,
\latinstyle{e.g}\onedot, $\modaldrinkerformula$.
Every modal formula abbreviates a standard first-order one \cite[\S3.3]{Min92}:
replace every $\nec$ by $\forall x$,
$\pos$ by $\exists x$, and
predicate symbol
$p$ by $px$. For example, $\modaldrinkerformula$ abbreviates $\drinkerformulamodallike$,
or
$\drinkerformula$ in rectified form.
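The abbreviation is a one-pass syntactic translation. The sketch below implements it under an illustrative tuple encoding of our own (\texttt{('box', f)}, \texttt{('dia', f)}, binary \texttt{'and'}/\texttt{'or'}, literals \texttt{'p'}/\texttt{'\textasciitilde p'}); every modal operator becomes a quantifier over the single variable $x$, and rectification (renaming bound variables apart) would be applied afterwards, as in the example above.

```python
def expand(f):
    # Mints' abbreviation: box -> forall x, dia -> exists x, and each
    # nullary predicate symbol p -> p(x).
    if isinstance(f, str):
        core = f.lstrip('~')
        return ('~' if f.startswith('~') else '') + core + '(x)'
    if f[0] == 'box':
        return ('forall', 'x', expand(f[1]))
    if f[0] == 'dia':
        return ('exists', 'x', expand(f[1]))
    op, a, b = f
    return (op, expand(a), expand(b))
```

For instance, the modal formula $\pos(\pp\vee\nec p)$ expands to $\exists x(\pp x\vee\forall x\mkern2mu px)$ before rectification.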
\begin{definition}
A \defn{modal combinatorial proof} of a modal formula
$\modalformula$ is a standard combinatorial proof (Definition\,\ref{def:cp}) of the first-order formula abbreviated by $\modalformula$.
\end{definition}
For example, a modal combinatorial proof of $\modaldrinkerformula$ is shown below-left, in condensed form.
\begin{center}\begin{pic}{-.3}{.85}\rput(-3,0)\modaldrinkerInlineDisplayed\rput(3,0)\drinkerInlineDisplayed\end{pic}\end{center}
It abbreviates the first-order combinatorial proof
above-right (copied from the Introduction).
\begin{theorem}[S5 Modal Soundness \& Completeness]\label{thm:modal-soundness-completeness}
A modal formula is valid in S5 modal logic if and only if it has a modal combinatorial proof.
\end{theorem}
\begin{proof}
By Theorem\,3.2 of \cite[p.\,42]{Min92}, a modal formula is valid in S5 if and only if the first-order formula it abbreviates is valid in first-order logic.
Thus the result follows from Theorem\,\ref{thm:soundness-completeness}.
\end{proof}
\subsection{Modal combinatorial proofs without labels}
A modal formula is \defn{closed} if
every predicate symbol occurrence is bound by a modal operator (\latinstyle{e.g}\onedot, $\modaldrinkerformula$ but not $\openmodaldrinkerformula$)
and \defn{simple} if it has no logical constant ($1$ or $0$).
\begin{definition}
The mograph $\modalmographof\modalformula$ of a simple closed modal formula $\modalformula$ is the mograph $\mograph$ defined by
\begin{itemize}
\item $\verticesof\mograph=\setof{\text{occurrences of predicate symbols and modal operators in }\modalformula}$,
\item $\vertex\vertexa\tightin\edgesof\mograph$ if and only if either
\begin{itemize}
\item the smallest subformula containing both $\vertex$ and $\vertexa$ is
a conjunction (\latinstyle{i.e}\onedot, of the form $\formula\tightwedge\formulaa$), or
\item $\vertex$ is a $\pos$, $\vertexa$ is in its scope, and $\vertexa\tightneq\vertex$.
\end{itemize}
\item $\vertex\vertexa\tightin\dualitiesof\mograph$ if and only if $\vertex$ and $\vertexa$ are dual predicate symbols, and
\item $\diredge\vertex\vertexa\tightin\bindingsof\mograph$ if and only if $\vertex$ is a modal operator, $\vertexa$ is a predicate symbol, and $\vertex$ binds $\vertexa$.
\end{itemize}
\end{definition}
For example,
\begin{center}\vspace{-1.5ex}\(
\modalmographof{\modaldrinkerformula}
\hspace{6ex}
=
\hspace{12ex}
\rput(0,1.3ex){\drinkerbasemograph}
\)\vspace{3ex}\end{center}
\begin{definition}
A \defn{homogeneous combinatorial proof} of a closed modal formula $\modalformula$
is a homogeneous combinatorial proof of its mograph $\modalmographof\modalformula$.
\end{definition}
For example, a homogeneous combinatorial proof of $\modaldrinkerformula$ is shown on the right of Fig.\,\ref{fig:drinker-no-labels}.
\begin{theorem}[Modal homogeneous soundness and completeness]\label{thm:modal-homogeneous-soundness-completeness}
A closed modal formula is valid in S5 modal logic if and only if it has a homogeneous combinatorial proof.
\end{theorem}
\begin{proof}
Since $\modalmographof\modalformula=\mographofformula\modalformulap$ for $\modalformulap$ the first-order formula abbreviated by $\modalformula$, the result is a corollary of Theorem\,\ref{thm:modal-soundness-completeness}.
\end{proof}
\section{Proof of the Soundness Theorem}\label{sec:soundness}\label{sec:proof-of-soundness}
In this section we prove the Soundness Theorem, Theorem~\ref{thm:soundness}.
\begin{lemma}\label{lem:graph-surj}
The function $\graphofsymbol$ (Def.\,\ref{def:graph}) is a surjection from formulas onto \recombulas.\footnote{Dropping the assumption that every formula is rectified leads to a surjection onto all fographs: see \reflem{lem:xgraph-surj}.}
Two formulas have the same graph if and only if they are equal modulo\footnote{Recall that, without loss of generality, we assume all formulas are rectified. Thus these equations do not include cases such as
$
px\wedge\exists x\mkern2mu qx
=
\exists x(px\wedge\mkern2mu qx)
$, an equality between formulas which are not logically equivalent.}
\begin{align*}
\formula\tightwedge\formulaa &\fateq \formulaa\tightwedge\formula\;\;\;\;
&
\formula\wedge(\formulaa\tightwedge\formulaaa) &\fateq (\formula\tightwedge\formulaa)\wedge\formulaaa\;\;\;\;
&
\exists x\mkern2mu\exists y\mkern2mu \formula &\fateq \exists y\mkern2mu\exists x\mkern2mu\formula\;\;\;\;
&
\formula\wedge\exists x\mkern2mu\formulaa &\fateq \exists x\mkern2mu(\formula\tightwedge\formulaa)
\\[1ex]
\formula\tightvee\formulaa &\fateq \formulaa\tightvee\formula
&
\formula\vee(\formulaa\tightvee\formulaaa) &\fateq (\formula\tightvee\formulaa)\vee\formulaaa
&
\forall x\mkern2mu\forall y\mkern2mu \formula &\fateq \forall y\mkern2mu\forall x\mkern2mu\formula
&
\formula\vee\forall x\mkern2mu\formulaa &\fateq \forall x\mkern2mu(\formula\tightvee\formulaa)
\\[-2.5ex]
\end{align*}
\end{lemma}
\begin{proof}
A routine induction.
\end{proof}
Let $\fograph$ be a rectified fograph. Using the above Lemma,
choose a formula $\formula$ such that $\graphof\formula\tighteq\fograph$.
Define $\fograph$ as \defn{valid} if $\formula$ is valid.
This is well-defined with respect to choice of $\formula$ since
every equality in Lemma\,\ref{lem:graph-surj} is a logical equivalence.
Define a coloured fograph as valid if its underlying uncoloured fograph is valid.
Write $\isvalid\chi$ to assert that a formula or fograph $\chi$ is valid,
and $\formula\assignment{\assign\variable\term}$ for the result of substituting a term $\term$ for all occurrences of the variable $\variable$ in a formula $\formula$,
where, without loss of generality (by renaming bound variables in $\formula$ as needed), no variable in $\term$ is a bound variable of $\formula$ \cite[\S1.1.2]{TS96}.
\begin{lemma}\label{lem:formula-inferences}
Let $\formula$, $\formulaa$ and $\formulaaa$ be formulas.
\begin{enumerate}
\item\label{itm:and-inference} $\isvalid\formula\tightwedge\formulaa$ if and only if ($\isvalid\formula$ and $\isvalid\formulaa$).
\item\label{itm:or-inference} $\isvalid\formula\tightvee\formulaa$ if ($\isvalid\formula$ or $\isvalid\formulaa$).
\item\label{itm:distrib-inference} $\isvalid(\formula\tightvee\formulaa)\tightwedge\formulaaa$ implies $\isvalid(\formula\tightwedge\formulaaa)\vee(\formulaa\tightwedge\formulaaa)$.
\item\label{itm:universal-inference} $\isvalid\forall x\mkern2mu\formula$ if and only if $\isvalid\formula$.
\item\label{itm:solo-existential-inference} $\isvalid\formula\assignment{\assign x t}$ implies $\isvalid\exists x\mkern2mu\formula$.
\item\label{itm:existential-inference} $\isvalid\formula\vee\formulaa\assignment{\assign x t}$ implies $\isvalid\formula\vee\exists x\mkern2mu\formulaa$.
\item\label{itm:lindist-inference} $\isvalid(\formula\tightvee\formulaa)\tightwedge\formulaaa$ implies $\isvalid\formula\tightvee(\formulaa\tightwedge\formulaaa)$.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{itm:and-inference}--\ref{itm:existential-inference} are standard inferences and properties of validity in first-order classical logic. See \cite{TS96} and \cite{Joh87}, for example. Property \ref{itm:lindist-inference} follows from \ref{itm:and-inference} and \ref{itm:distrib-inference}.
\end{proof}
\subsection{Soundness of fonets}\label{sec:fonet-soundness}
In this section we prove that fonets are sound, \latinstyle{i.e}\onedot, every fonet is valid (Lemma~\ref{lem:fonet-soundness} below).
Let $\fograph$ be a fograph.
A set $\portion\tightsubseteq\verticesof\fograph$ is \defn{well-founded} if $\portion$ contains a binder only if $\portion$ contains a literal.
\begin{definition}
A \defn{portion} of a rectified fograph $\fograph$ is a set $\portion\tightsubseteq\verticesof\fograph$ such that $\portion$ and $\verticesof\fograph\tightsetminus\portion$ are well-founded, and $\portion$ is closed under adjacency and binding:
if $\vertex\vertexa\tightin\edgesof\fograph$ or $\diedge\vertex{\mkern-3mu\vertexa}\tightin\edgesof{\bindinggraphof\fograph}$,
then $\vertex\tightin \portion$ if and only if $\vertexa\tightin \portion$.
\end{definition}
A variable $x$ in a fograph $\fograph$ is
\defn{bound} if $\fograph$ contains an $x$-binder,
and \defn{free} if $\fograph$ contains an $x$-literal but no $x$-binder.
Two fographs are
\defn{independent}
if
any variable in both is free in both.
\subsubsection{Fusion}\label{sec:fusion}
\begin{definition}\label{def:fusion}
Let $\fograph$ and $\fographp$ be independent rectified fographs with respective portions $\portion$ and $\portionp\mkern-2mu$.
The \defn{fusion} of $\fograph$ and $\fographp\mkern-1mu$ at $\portion$ and $\portionp\mkern-2mu$ is the union $\fograph\mkern-2mu\graphunion\mkern-2mu\fographp$ together with edges between every vertex in $\portion$ and every vertex in $\portionp\mkern-2mu$.
\end{definition}
For example, if $\fograph=\mkern2mu\namedsingletonleft x x \mkern12mu\namedsingletonright{ppx}{\ppx}\e x {ppx} \mkern7mu\singletonleft\py\mkern2mu$,
$\fographp=\mkern4mu\singletonq\mkern6mu\singletonqq\mkern6mu\singletonz\mkern2mu$,
$\portion=\setof{\singletonleft\py}$
and
$\portionp=\setof{\singletonq\mkern1mu,\mkern4mu\singletonqq}$,
then
the fusion of $\fograph$ and $\fographp$ at $\portion$ and $\portionp$ is
$\namedsingletonleft x x \mkern12mu\namedsingletonright{ppx}{\ppx}\e x {ppx} \mkern7mu \namedsingletonleft{py}{\py}
\mkern12mu
\namedsingletonright q q\e{py}{q}\mkern6mu\namedsingletonright{qq}{\qq}\nccurve[nodesep=0ex,angleA=25,angleB=145]{py}{qq} \mkern6mu\singletonz\mkern2mu$.
Colourings are inherited during fusion, since they are inherited during graph union $\graphunion$.
For example, if $\cover=\mkern2mu\namedsingletonleft x x \mkern12mu\singletonred{\ppx}\e x v \mkern7mu\singletonredleft{\py}\mkern2mu$,
$\coverp=\mkern4mu\inlinegreenvx{q}\mkern2mu q\mkern6mu\inlinegreenvx{qq} \mkern2mu \qq\mkern6mu\singletonz\mkern2mu$,
$\portion=\setof{\singletonredleft\py}$
and
$\portionp=\setof{\inlinegreenvx{q}\mkern2mu q\mkern1mu,\mkern4mu\inlinegreenvx{qq}\mkern2mu\qq}$,
then
the fusion of $\cover$ and $\coverp$ at $\portion$ and $\portionp$ is
$\namedsingletonleft x x \mkern12mu\singletonred{\ppx}\e x v \mkern7mu \py \inlineredvx{py}
\mkern12mu
\inlinegreenvx{q}\e{py}{q}\mkern.5mu q\mkern6mu\inlinegreenvx{qq}\nccurve[nodesep=0ex,angleA=25,angleB=145]{py}{qq} \mkern1mu \qq\mkern6mu\singletonz\mkern2mu$.
Write $\induced\graph W$ for the subgraph of $\graph$ induced by $W$.
\begin{lemma}\label{lem:fusion-sound}
Every fusion of valid rectified fographs is valid.
\end{lemma}
\begin{proof}
Let $\fusion$ be the fusion of valid rectified fographs $\fograph$ and $\fographp$ at portions $\portion$ and $\portionp$.
We consider four cases.
\begin{enumerate}
\item $\portion$ or $\portionp$ is empty. Without loss of generality, we may assume both are empty, since with one portion empty the fusion operation no longer depends on the other.
Thus $\fusion=\fograph\graphunion\fographp$ for rectified fographs $\fograph$ and $\fographp$, so by Lemma\,\ref{lem:graph-surj} there exist formulas $\formula$ and $\formulap$ with $\graphof\formula\tighteq\fograph$ and $\graphof\formulap\tighteq\fographp$.
Since $\fograph$ and $\fographp$ are independent, $\graphof{\formula\tightvee\formulap}=\fusion$. Since $\isvalid\fograph$ and $\isvalid\fographp$ we have $\isvalid\formula$ and $\isvalid\formulap$, hence $\isvalid\formula\tightvee\formulap$ by Lemma\,\ref{lem:formula-inferences}.\ref{itm:or-inference}. Thus $\isvalid\fusion$.
\item $\portion\tighteq\verticesof\fograph$ and $\portionp=\verticesof\fographp$. Thus $\fusion=\fograph\graphjoin\fographp$.
As in the previous case we have valid formulas $\formula$ and $\formulap$ with $\graphof\formula\tighteq\fograph$ and $\graphof\formulap\tighteq\fographp$. Thus $\isvalid\fusion$ since $\isvalid\formula\tightwedge\formulap$ by Lemma\,\ref{lem:formula-inferences}.\ref{itm:and-inference} and $\fusion=\graphof{\formula\tightwedge\formulap}$.
\item $\portion\tighteq\verticesof\fograph$ or $\portionp\tighteq\verticesof\fographp$, and the previous two cases do not hold. Without loss of generality assume $\portionp\tighteq\verticesof\fographp$, so $\emptyset\tightneq\portion\tightneq\verticesof\fograph$.
Let $\dualportion\tighteq\verticesof\fograph\mkern-1mu\tightsetminus\portion\neq\emptyset$.
Thus $\fusion=\induced{\fograph}{\dualportion}\graphunion(\induced{\fograph}{\portion}\graphjoin\fographp)$.
By Lemma\,\ref{lem:graph-surj} there exist formulas $\formula^*$, $\formula$ and $\formulap$ with $\graphof{\formula^*}\tighteq\induced{\fograph}{\dualportion}$,
$\graphof{\formula}\tighteq\induced{\fograph}{\portion}$ and $\graphof{\formulap}\tighteq\fographp$.
Since $\isvalid\fographp$ we have $\isvalid\formulap$, and since $\isvalid\fograph$ and $\graphof{\formula^*\tightvee\formula}=\fograph$, we have $\isvalid\formula^*\tightvee\formula$.
Thus $\isvalid(\formula^*\tightvee\formula)\tightwedge\formulap$ by Lemma\,\ref{lem:formula-inferences}.\ref{itm:and-inference}, so
$\isvalid\formula^*\tightvee(\formula\tightwedge\formulap)$ by Lemma\,\ref{lem:formula-inferences}.\ref{itm:lindist-inference}, hence $\isvalid\fusion$ since $\fusion=\graphof{\formula^*\tightvee(\formula\tightwedge\formulap)}$.
\item Otherwise $\emptyset\neq\portion\tightneq\verticesof\fograph$ and
$\emptyset\neq\portionp\mkern-1mu\neq\verticesof\fographp$.
Let $\dualportion=\verticesof\fograph\mkern-1mu\tightsetminus\portion\neq\emptyset$ and
$\dualportionp=\verticesof\fographp\mkern-1mu\tightsetminus\portionp\neq\emptyset$.
Thus the rectified fograph $\fusion$ is $\induced{\fograph}{\dualportion}\graphunion\induced{\fographp}{\dualportionp}\graphunion(\induced{\fograph}{\portion}\graphjoin\induced{\fographp}{\portionp})$.
By Lemma\,\ref{lem:graph-surj} there exist formulas $\formula^*$, $\formulap^*$, $\formula$ and $\formulap$ with $\graphof{\formula^*}=\induced{\fograph}{\dualportion}$,
$\graphof{\formulap^*}=\induced{\fographp}{\dualportionp}$,
$\graphof{\formula}=\induced{\fograph}{\portion}$, and
$\graphof{\formulap}=\induced{\fographp}{\portionp}$.
Since $\isvalid\fograph$ and $\graphof{\formula^*\tightvee\formula}=\fograph$ we have $\isvalid\formula^*\tightvee\formula$, and
since $\isvalid\fographp$ and $\graphof{\formulap^*\tightvee\formulap}=\fographp$ we have $\isvalid\formulap^*\tightvee\formulap$.
{\newcommand\bigformula{(\formula^*\tightvee\formulap^*)\tightvee(\formula\tightwedge\formulap)}%
Thus $\isvalid\bigformula$, by Lemma\,\ref{lem:formula-inferences}.\ref{itm:and-inference} followed by two applications of Lemma\,\ref{lem:formula-inferences}.\ref{itm:lindist-inference}, so $\isvalid\fusion$ since $\fusion=\graphof\bigformula$.}
\end{enumerate}
\vskip-4ex
\end{proof}
\begin{lemma}\label{lem:pres-fusion}
Every fusion of two rectified fonets is a rectified fonet.
\end{lemma}
\begin{proof}
Let $\fusion$ be a fusion of rectified fonets $\cover$ and $\coverp$.
Since each portion is closed under adjacency, $\fusion$ is a union of cographs,
hence is a cograph.
Every binder scope contains a literal, by inheritance from $\cover$ and $\coverp$.
Since $\cover$ and $\coverp$ are rectified and (by the constraint on the definition of fusion) independent,
and no links traverse between the two in $\fusion$,
every union of dualizers for $\cover$ and $\coverp$ is a dualizer for $\fusion$, and vice versa.
Thus the set of dependencies of $\fusion$ is the union of those of $\cover$ and $\coverp$, so any $W\tightsubseteq\verticesof\fusion$ inducing a bimatching in $\fusion$ would induce a bimatching in $\cover$ or $\coverp$.
Because $\cover$ and $\coverp$ are independent, $\fusion$ is rectified.
\end{proof}
\subsubsection{Universal quantification}\label{sec:universal}
\begin{definition}\label{def:universal}
Let $\fograph$ be a rectified fograph with no $x$-binder.
The \defn{universal quantification} of $\fograph$ by $x$
is $\singletonx\mkern-1mu\graphunion\mkern-1mu\fograph$.
\end{definition}
\begin{lemma}\label{lem:universal-sound}
Every universal quantification of a valid rectified fograph is valid.
\end{lemma}
\begin{proof}
Let $\fograph=\singletonx\graphunion\fographa$ be the universal quantification of a valid rectified fograph $\fographa$ by $x$. By Lemma\,\ref{lem:graph-surj} there exists a formula $\formula$ such that $\graphof\formula\tighteq\fographa$, and $\isvalid\formula$ since $\isvalid\fographa$. Thus $\graphof{\forall x\mkern2mu\formula}=\fograph$, hence $\isvalid\fograph$ since $\isvalid\forall x\mkern2mu\formula$ if and only if $\isvalid\formula$, by Lemma\,\ref{lem:formula-inferences}.\ref{itm:universal-inference}.
\end{proof}
If $\cover$ is a coloured rectified fograph, in the universal quantification $\singletonx\graphunion\cover$ we assume that the colouring of $\cover$ is inherited, while $\singletonx$ remains uncoloured.
\begin{lemma}\label{lem:pres-universal}
Every universal quantification of a rectified fonet is a rectified fonet.
\end{lemma}
\begin{proof}
Let $\coverp$ be the universal quantification $\singletonx\mkern-1mu\graphunion\mkern-1mu\cover$.
Dualizers for $\cover$ are dualizers for $\coverp$, and vice versa, since if $x$ occurs in $\cover$, it has merely transitioned from free to bound.
The leap graph of $\coverp$ is that of $\cover$ together with additional dependencies involving $\singletonx$.
Since $\singletonx$ is in no edge, any $W\tightsubseteq\verticesof\coverp$ inducing a bimatching in $\coverp$ would induce a bimatching in $\cover$.
\end{proof}
\subsubsection{Existential quantification}\label{sec:existential}
\begin{definition}\label{def:existential}
Let $\fograph$ be a rectified fograph without the variable $x$, let $\portion$ be a non-empty portion of $\fograph$, and let $\occs$ be a set of occurrences of a term $t$ in labels of literals in $\portion$, such that $t$ contains no bound variable of $\fograph$.
The \defn{existential quantification} of $\fograph$ by $x$ at $\occs$ in $\portion$ is $\singletonx\mkern-1mu\graphunion\mkern-1mu\fograph\occsubst t \omega x$ together with an edge between $\singletonx$ and each vertex in $\portion$, where $\fograph\occsubst t \omega x$ is the result of substituting $x$ for every occurrence of $t$ in $\occs$.
\end{definition}
For example, if $\fograph=\mkern2mu\singleton{p \tightf g y} \mkern7mu\singleton{\pp \tightf gy}\mkern2mu$,
$\portion=\setof{\singleton{p \tightf g y}}$
and
$\occs$ is the occurrence of the term $gy$ in $\singleton{p \tightf g y}$,
the existential quantification of $\fograph$ by $x$ at $\occs$ is
$\mkern2mu\namedsingletonleft x x \mkern12mu\namedsingletonright v {p \tightf x}\e x v \mkern7mu\singleton{\pp\tightf g y}\mkern2mu$,
while if $\occs$ is empty
the existential quantification becomes
$\mkern2mu\namedsingletonleft x x \mkern12mu\namedsingletonright v {p \tightf g y}\e x v \mkern7mu\singleton{\pp\tightf g y}\mkern2mu$.
If $\portion=\setof{\singleton{p \tightf g y},\singleton{\pp \tightf gy}}$ and $\occs$ comprises both occurrences of the term $\tightf gy$ in $\portion$, then the existential quantification is
$\mkern2mu\namedsingletonleft x x \mkern12mu\namedsingletonright v{\px}\e x v \mkern7mu\namedsingletonright w {\ppx}\nccurve[nodesep=0ex,angleA=25,angleB=145] x w\mkern2mu$.
\begin{lemma}\label{lem:existential-sound}
Every existential quantification of a valid rectified fograph is valid.
\end{lemma}
\begin{proof}
Let $\fographa$ be the existential quantification of a valid rectified fograph $\fograph$ by $x$ at a set $\occs$ of occurrences of the term $t$ in the non-empty portion $\portion$.
Thus $\fographa=\singletonx\graphunion\fograph\occsubst t \omega x$ plus edges from $\singletonx$ to every vertex in $\portion$.
We consider two cases.
\begin{enumerate}
\item
Suppose $\portion\tighteq\verticesof\fograph$.
Thus $\fographa=\singletonx\graphjoin\fograph\occsubst t \omega x$.
By Lemma\,\ref{lem:graph-surj} there exists a formula $\formula$ such that $\graphof\formula=\fograph\occsubst t \omega x$.
Therefore
$\fographa=\singletonx\graphjoin\fograph\occsubst t \omega x=\singletonx\graphjoin\graphof{\formula}=\graphof{\exists x\mkern2mu\formula}$.
Since $x$ does not occur in $\fograph$ we have $\graphof{\formula\assignment{\assign x t}}=\fograph$, and $\isvalid\formula\assignment{\assign x t}$ since $\isvalid\fograph$.
By Lemma\,\ref{lem:formula-inferences}.\ref{itm:solo-existential-inference} we have $\isvalid\exists x\mkern2mu\formula$ since
$\isvalid\formula\assignment{\assign x t}$, thus $\isvalid\fographa$.
\item
Otherwise $\emptyset\neq\portion\neq\verticesof\fograph$.
Let $\dualportion=\verticesof\fograph\mkern-1mu\tightsetminus\portion\neq\emptyset$.
Since $\portion$ is a portion, it is well-founded and closed under adjacency and binding, so
$\fograph\occsubst t \omega x=\induced{\fograph\occsubst t \omega x}{\dualportion}\graphunion\induced{\fograph\occsubst t \omega x}{\portion}$ with \mbox{$\induced{\fograph\occsubst t \omega x}{\dualportion}$} and $\induced{\fograph\occsubst t \omega x}{\portion}$ both rectified fographs, and $\induced{\fograph\occsubst t \omega x}{\dualportion}=\induced\fograph\dualportion$ since $\omega$ does not intersect $\dualportion$. Thus $\fograph\occsubst t \omega x=\induced\fograph\dualportion\graphunion\induced{\fograph\occsubst t \omega x}{\portion}$.
By Lemma\,\ref{lem:graph-surj} there exist formulas $\formulaa^*$ and $\formulaa$ with $\graphof{\formulaa^*}=\induced\fograph\dualportion$ and $\graphof\formulaa=\induced{\fograph\occsubst t \omega x}{\portion}$.
Thus
$$\fographa
\hspace{1ex} = \hspace{1ex}
\induced\fograph\dualportion
\graphunion
\singletonx\graphjoin\induced{\fograph\occsubst t \omega x}{\portion}
\hspace{1ex} = \hspace{1ex}
\graphof{\formulaa^*}
\graphunion
\graphof{\exists x\mkern2mu\formulaa}
\hspace{1ex} = \hspace{1ex}
\graphof{\formulaa^*\tightvee\exists x\mkern2mu\formulaa}$$
Since $\graphof\formulaa=\induced{\fograph\occsubst t \omega x}{\portion}$ and $x$ does not occur in $\fograph$ we have $\graphof{\formulaa\assignment{\assign x t}}=\induced{\fograph}{\portion}$. Thus
$$\fograph
\hspace{1ex} = \hspace{1ex}
\induced\fograph\dualportion
\graphunion
\induced{\fograph}{\portion}
\hspace{1ex} = \hspace{1ex}
\graphof{\formulaa^*}
\graphunion
\graphof{\formulaa\assignment{\assign x t}}
\hspace{1ex} = \hspace{1ex}
\graphof{\formulaa^*\tightvee\formulaa\assignment{\assign x t}}$$
Since $\isvalid\fograph$ we have $\isvalid\formulaa^*\tightvee\formulaa\assignment{\assign x t}$,
so by Lemma\,\ref{lem:formula-inferences}.\ref{itm:existential-inference}
we have $\isvalid\formulaa^*\tightvee\exists x\mkern2mu\formulaa$,
hence $\isvalid\fographa$.
\end{enumerate}
\vskip-4ex\end{proof}
When quantifying a coloured rectified fograph existentially, the colouring is inherited, while the added binder remains uncoloured.
For example, if $\cover=\mkern2mu\singletonred{p \tightf g y} \mkern7mu\singletonred{\pp\tightf gy}\mkern2mu$,
$\portion=\setof{\singletonred{p \tightf g y}}$
and
$\occs$ is the occurrence of
$y$ in $\singletonred{p \tightf g y}$,
the existential quantification of $\cover$ by $x$ at $\occs$ in $\portion$ is
$\mkern2mu\namedsingletonleft x x \mkern12mu\singletonred{p \tightf g x}\e x v \mkern7mu\singletonred{\pp\tightf g y}\mkern2mu$.
In the remainder of this section (\S\ref{sec:existential}) we prove that every existential quantification of a rectified fonet is a rectified fonet (Lemma\,\ref{lem:pres-existential}).
Let $\cover$ be a linked rectified fograph.
An \defn{existential} (resp.\ \defn{universal}) variable of $\cover$ is one labelling an existential (resp.\ universal) binder in $\cover$.
An \defn{output} of a function is any element of its image.
A \defn{stem} of a dualizer $\dualizer$ for $\cover$ is a variable in an output of $\dualizer$ but not in $\cover$.
For example, if $\cover\;=\;\stemeg\;$ and $\stem$ and $\stema$ are variables,
the dualizer $\stemegmin$ has one stem $\stem$,
$\assignment{\assign x {f\stem\stema},\assign y {f\stem\stema}}$ has two stems $\stem$ and $\stema$,
$\assignment{\assign x {f\stem z},\assign y {f\stem z}}$ has one stem $\stem$,
and $\stemeguniversal$ has no stem.
A dualizer $\dualizer$ \defn{generalizes} a dualizer $\dualizerp$ if $\dualizer$ yields $\dualizerp$ by substituting terms for stems, \latinstyle{i.e}\onedot, there exists a function $\subst$ from the stems of $\dualizer$ to terms such that $\dualizerp(x)=\dualizer(x)\subst$ for every existential variable $x$ of $\cover$, where $\termoratom\subst$ denotes the result of substituting $\subst(\stem)$ for $\stem$ in $\termoratom$, simultaneously for each stem $\stem$ of $\dualizer$.
For example, if $\cover\;=\;\stemeg\;$ and $\stem$ is a variable, the dualizer $\dualizer\mkern-1mu=\stemegmin$ generalizes $\dualizerp\mkern-2mu=\assignment{\assign x {f z a},\assign y {f z a}}$ via $\genegvia$ since $\dualizerp(x)=\dualizer(x)\genegvia=\stem\genegvia=fza$.
A dualizer $\dualizer$ is \defn{most general} if it generalizes every other dualizer.
For example,
$\stemegmin$ is a most general dualizer for $\;\stemeg\;$ but
$\assignment{\assign x {f \zone},\assign y {f \zone}}$ and
$\stemeguniversal$ are not.
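Generalization is a one-sided matching problem: bind the stems of $\dualizer$ consistently so that each output becomes the corresponding output of $\dualizerp$. An illustrative Python sketch (our own hypothetical encoding: variables as strings, compound terms as tuples headed by a function symbol; the dualizers in the usage below are stand-ins, not the paper's examples):

```python
# Hedged sketch: does dualizer d generalize dualizer d2?  A substitution
# theta of terms for the stems of d must send each output d[x] to d2[x],
# simultaneously (the same stem always receives the same term).

def match(pattern, target, stems, theta):
    """One-sided matching: extend theta on stems so pattern equals target."""
    if isinstance(pattern, str) and pattern in stems:
        if pattern in theta:
            return theta[pattern] == target  # same stem, same term
        theta[pattern] = target
        return True
    if isinstance(pattern, str) or isinstance(target, str):
        return pattern == target  # non-stem variables must coincide
    return (pattern[0] == target[0] and len(pattern) == len(target)
            and all(match(p, t, stems, theta)
                    for p, t in zip(pattern[1:], target[1:])))

def generalizes(d, d2, stems):
    """True if substituting terms for stems turns d into d2."""
    theta = {}
    return all(match(d[x], d2[x], stems, theta) for x in d)
```

For instance, $\assignment{\assign x \stem,\assign y \stem}$ with stem $\stem$ generalizes $\assignment{\assign x {fza},\assign y {fza}}$, but not an assignment mapping $x$ and $y$ to distinct terms, since a stem must receive one term simultaneously.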
A linked rectified fograph is \defn{dualizable} if it has a dualizer.
\begin{lemma}\label{lem:mgd}
Every dualizable linked rectified fograph has a most general dualizer.
\end{lemma}
\begin{proof}
Let $\cover$ be a dualizable linked rectified fograph.
Every dualizer for $\cover$ is, by definition,
a unifier for the unification problem $\unirelof\cover$
(binary relation on terms) \cite[\S7.2]{TS96}
defined by $t_i\unirelof\cover t'_i$ for each link
$\setof{\singleton{p t_1\ldots t_n},\singleton{\pp t'_1\ldots t'_n}}$ and $1\le i\le n$, solved for the existential variables.
Let $\dualizer$ be a most general unifier of $\unirelof\cover$ \cite[\S7.2]{TS96}.
By renaming variables as needed, we may assume that no output of $\dualizer$ contains an existential variable.
Define $\dualizerp$ as the restriction of $\dualizer$ to existential variables.
Since $\dualizer$ is a most general unifier, $\dualizerp$ is a most general dualizer.
\end{proof}
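The most general unifier invoked from \cite[\S7.2]{TS96} can be computed by Robinson-style unification. Below is an illustrative Python sketch (our own hypothetical encoding, as above: variables as strings, compound terms as tuples headed by a function symbol). Solving only for a designated set of existential variables means two distinct universal variables fail to unify, as required of a dualizer:

```python
# Hedged sketch of Robinson-style syntactic unification, solved for a
# given set of (existential) variables; returns a most general unifier
# as a dict, or None if the equations have no solution.

def substitute(term, subst):
    """Apply a substitution (variable -> term) to a term, chasing chains."""
    if isinstance(term, str):
        return substitute(subst[term], subst) if term in subst else term
    return (term[0],) + tuple(substitute(a, subst) for a in term[1:])

def occurs(var, term, subst):
    """Occurs check: does var appear in term under subst?"""
    term = substitute(term, subst)
    if isinstance(term, str):
        return term == var
    return any(occurs(var, a, subst) for a in term[1:])

def unify(equations, variables):
    subst, work = {}, list(equations)
    while work:
        s, t = work.pop()
        s, t = substitute(s, subst), substitute(t, subst)
        if s == t:
            continue
        if isinstance(s, str) and s in variables:
            if occurs(s, t, subst):
                return None
            subst[s] = t
        elif isinstance(t, str) and t in variables:
            work.append((t, s))  # put the solvable variable on the left
        elif not isinstance(s, str) and not isinstance(t, str) \
                and s[0] == t[0] and len(s) == len(t):
            work.extend(zip(s[1:], t[1:]))  # decompose f(...) = f(...)
        else:
            return None  # clash, or an unsolvable (universal) variable
    # Fully apply the substitution to its own outputs.
    return {v: substitute(u, subst) for v, u in subst.items()}
```

Restricting the computed unifier to the existential variables mirrors the construction of $\dualizerp$ in the proof above.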
Let $\cover$ be a linked rectified fograph with dualizer $\dualizer$.
A pair $\dep$ is a \defn{dependency} of $\dualizer$ if
$\singletonx$ is existential, $\singletony$ is universal, and $\dualizer(x)$ contains $y$.
\begin{lemma}\label{lem:mgd-deps}
Let $\cover$ be a linked rectified fograph with a most general dualizer $\dualizer$. A pair $\dep$ is a dependency of $\cover$ if and only if $\dep$ is a dependency of $\dualizer$.
\end{lemma}
\begin{proof}
Since $\dualizer$ is most general, for any dualizer $\dualizerp$ every dependency of $\dualizer$ is a dependency of $\dualizerp$: if $\dualizer(x)$ contains the universal variable $y$ then so does $\dualizerp(x)=\dualizer(x)\subst$, since stems are not variables of $\cover$, so substituting terms for stems cannot erase $y$.
By definition, $\dep$ is a dependency of $\cover$ if and only if it is a dependency of every dualizer for $\cover$.
Thus $\dep$ is a dependency of $\cover$ if and only if it is a dependency of $\dualizer$.
\end{proof}
\begin{lemma}\label{lem:pres-existential}
Every existential quantification of a rectified fonet is a rectified fonet.
\end{lemma}
\begin{proof}
Let $\coverp$ be the existential quantification of $\cover$ by $x$ at $\occs$ in $\portion$, where $\occs$ is a set of occurrences of the term $t$ in labels of literals in $\portion$.
Since $\portion$ is closed under adjacency, $\coverp$ is a cograph and every binder scope in $\coverp$ contains a literal.
In the following two paragraphs we will show that the dependencies of $\cover$ and $\coverp$ coincide.
For any dualizer $\dualizer$ for $\cover$, the function
$\dualizerp=\dualizer\cup\assignment{\assign x t}$ is a dualizer for $\coverp$, since the links of $\coverp$ are those of $\cover$ but for some occurrences of $t$ becoming $x$.
The dependencies of $\dualizer$ in $\cover$ are the same as those of $\dualizerp$ in $\coverp$,
since $t$ contains no binder variable of $\cover$.
Every dependency of $\coverp$ is a dependency of $\cover$:
a dependency of $\coverp$ is (by definition) a dependency of every dualizer of $\coverp$,
hence a dependency of $\dualizerp$ for every dualizer $\dualizer$ for $\cover$,
thus a dependency of $\cover$.
Conversely, to show that every dependency of $\cover$ is a dependency of $\coverp$,
we take a most general dualizer $\dualizera$ for $\coverp$ and construct a dualizer
$\hat\dualizera$ for $\cover$ with the same dependencies as $\dualizera$;
since a dependency of $\cover$ is (by definition) a dependency of every dualizer of $\cover$,
it is a dependency of $\hat\dualizera$ in $\cover$,
hence a dependency of $\dualizera$ in $\coverp$,
and therefore a dependency of $\coverp$ by \reflem{lem:mgd-deps} (since $\dualizera$ is most general).
Let $\dualizer$ be a most general dualizer for $\cover$.
By the argument in the previous paragraph, $\dualizerp=\dualizer\cup\assignment{\assign x t}$ is a dualizer for $\coverp$.
Since $\dualizera$ is most general for $\coverp$,
there exists a function $\subst$ from the stems of $\dualizera$ to terms such that
$t=\dualizerp(x)=\dualizera(x)\subst$.
Let $\tilde\subst$ be the restriction of $\subst$ to stems appearing in $\dualizera(x)$.
Define $\tilde\dualizera$ by $\tilde\dualizera(y)=\dualizera(y)\tilde\subst$, for every existential variable $y$ of $\coverp$.
In particular, $\tilde\dualizera(x)=t$.
The function $\tilde\dualizera$ is a dualizer for $\coverp$ (since it is $\dualizera$ with terms substituted for stems), and has the same dependencies as $\dualizera$ because $\dualizera(x)\tilde\subst=t$ so $\tilde\subst(z)$ is a sub-term of $t$ for every stem $z$ of $\dualizera$ in $\dualizera(x)$, and $t$ contains no bound variable of $\cover$, hence no bound variable of $\coverp$.
Define $\hat\dualizera$ as the restriction of $\tilde\dualizera$ to the existential variables of $\cover$
(thus $\tilde\dualizera=\hat\dualizera\cup\assignment{\assign x t}$).
The function $\hat\dualizera$ is a dualizer for $\cover$ since for every link $\setof{\singleton{p t_1\ldots t_n},\singleton{\pp u_1\ldots u_n}}$ in $\cover$ we have $t_i\hat\dualizera\tighteq u_i\hat\dualizera$, because for the corresponding link $\setof{\singleton{p t'_1\ldots t'_n},\singleton{\pp u'_1\ldots u'_n}}$ in $\coverp$ we have $t'_i\tilde\dualizera=u'_i\tilde\dualizera$ with
$t_i=t'_i\assignment{\assign x t}$
and
$u_i=u'_i\assignment{\assign x t}$, and by construction $\tilde\dualizera(x)=t$.
The dualizer $\hat\dualizera$ is a restriction of $\tilde\dualizera$, which has the same dependencies as $\dualizera$, thus $\hat\dualizera$ has the same dependencies as $\dualizera$.
Thus, by the argument at the start of this paragraph, every dependency of $\cover$ is a dependency of $\coverp$.
Since the dependencies of $\cover$ and $\coverp$ coincide, the leap graphs $\leapgraphof\cover$ and $\leapgraphof\coverp$ are identical but for an extra vertex $\singletonx$ in the latter which is not in any leap. Thus induced bimatchings of $\cover$ and $\coverp$ coincide, so $\coverp$ is a fonet because $\cover$ is a fonet.
\end{proof}
\subsubsection{Soundness of fonets}
An \defn{axiom} is a coloured rectified fograph comprising two dual literals of the same colour (\latinstyle{e.g}\onedot\ $\axiomeg$) or a single (uncoloured) $1$-literal.
\begin{lemma}\label{lem:construct-implies-net}
Every coloured rectified fograph constructed from axioms by fusion and quantification is a rectified fonet.
\end{lemma}
\begin{proof}
Every axiom is a rectified fonet, and fusion and quantification preserve the property of being a rectified fonet, by Lemmas\,\ref{lem:pres-fusion}, \ref{lem:pres-universal}, and \ref{lem:pres-existential}.
\end{proof}
A fonet is \defn{universal} if it has a binder that is in no edge (necessarily a universal binder).
\begin{lemma}\label{lem:split-universal}
Every universal rectified fonet is a universal quantification of a rectified fonet.
\end{lemma}
\begin{proof}
Let $\net$ be a universal rectified fonet, with (universal) binder $\singletonx$ in no edge.
The result $\net^-$ of deleting $\singletonx$ from $\net$ is a fonet,
since $\net^-$ inherits all dualizers from $\net$ (because $x$ goes from being universal to being free) and if $W$ induces a bimatching in $\net^-$ then $W$ induces a bimatching in $\net$.
Since $\net^-$ is an induced subgraph of a rectified fograph, $\net^-$ is rectified.
Since $\net=\singletonx\graphunion\net^-$, the rectified fonet $\net$ is the universal quantification of the rectified fonet $\net^-$ by $x$.
\end{proof}
\begin{lemma}\label{lem:axiom-union}
Every fonet with no edge and no binder is a union $\lambda_1\tightgraphunion\ldots\tightgraphunion\lambda_n$ of axioms $\lambda_i$ ($n\tightge 1$).
\end{lemma}
\begin{proof}
Let $\net$ be a fonet with no edge and no binder.
Since $\net$ has no edges, it has no existential binders, hence has the empty dualizer.
Thus every link in $\net$ has literals with dual atoms, and every literal that is not in a link is $1$-labelled.
Since $\net$ has no edges, it is the union of axioms.
\end{proof}
\begin{lemma}\label{lem:split-fusion}
Let $\net$ be a rectified fonet with underlying uncoloured fograph
$\fographaa_1
\mkern1mu\graphunion\mkern1mu(\fographa_1\mkern-2mu\graphjoin\mkern-2mu\fographa_2)
\mkern1mu\graphunion\mkern1mu\fographaa_2$
for each $\fographa_i$ a fograph and each $\fographaa_j$ empty or a fograph.
Suppose no leap of $\net$ is between $\verticesof{\fographaa_1}\cup\verticesof{\fographa_1}$ and $\verticesof{\fographa_2}\cup\verticesof{\fographaa_2}$.
Then $\net$ is a fusion of rectified fonets.
\end{lemma}
\begin{proof}
Since $\net$ is a fograph and no leap goes between $\verticesof{\fographaa_1}\cup\verticesof{\fographa_1}$ and $\verticesof{\fographa_2}\cup\verticesof{\fographaa_2}$, the graphs
\mbox{$\fographaa_1\graphunion\fographa_1$} and
\mbox{$\fographa_2\graphunion\fographaa_2$}
are well-defined fonets upon inheriting colouring from
$\net$
by restriction. Thus
$\net$ is a fusion of rectified fonets
$\fographaa_1\graphunion\fographa_1$ and $\fographa_2\graphunion\fographaa_2$ at portions
$\verticesof{\fographa_1}$ and $\verticesof{\fographa_2}$.
\end{proof}
\begin{lemma}\label{lem:no-leap}
Let $\net$ be a rectified fonet with underlying uncoloured fograph
$\fographaa_1\mkern1mu\graphunion\mkern1mu(\singletonx\graphjoin\fographa)\mkern1mu\graphunion\mkern1mu\fographaa_2$ for $\fographa$ a fograph and each $\fographaa_i$ empty or a fograph.
Suppose no leap of $\net$ is between $\verticesof{\fographaa_1}\cup\setof{\singletonx}$ and $\verticesof\fographa\cup\verticesof{\fographaa_2}$.
Then the binder $\singletonx$ is in no leap of $\net$.
\end{lemma}
\begin{proof}
In this proof \emph{leap supposition} refers to the supposition on leaps in the Lemma statement.
Suppose for a contradiction that $\setof{\singletonx,\singletony}$ is a leap, hence a dependency, of $\net$. By the leap supposition, the universal binder $\singletony$ is in $\fographaa_1$.
Let $\dualizer$ be a most general dualizer for $\net$, which exists by \reflem{lem:mgd}. Since $\setof{\singletonx,\singletony}$ is a dependency, the term $\dualizer(x)$ contains $y$, by Lemma\,\ref{lem:mgd-deps}.
There must be a link $\setof{v,w}$ such that the atom label of the literal $v$ contains $x$, otherwise $\dualizer(x)=z$ for a stem variable $z$ not occurring in $\net$, so $\dualizer(x)$ would not contain $y$.
Since $\net$ is rectified, the literal $v$ must be in the scope of $\singletonx$, thus $v$ is in $\fographa$.
The atom label of $w$ cannot contain $y$, since $w$ would then be in $\fographaa_1$ (because $\net$ is rectified so $w$ must be in the scope of $\singletony$, which is in $\fographaa_1$), and $\setof{v,w}$ would be a link (hence leap) between $\fographa$ and $\fographaa_1$, contradicting the leap supposition.
Thus, for $\dualizer(x)$ to be a term containing $y$, there must be a link $\setof{v,w}$ with the label of $v$ containing $x$ and the label of $w$ containing an existential variable $\xp$ such that the term $\dualizer(\xp)$ contains $y$.
Therefore, by Lemma\,\ref{lem:mgd-deps}, $\net$ has a leap $\setof{\singleton\xp,\singletony}$.
Since $v$ is in $\fographa$ and $\setof{v,w}$ is a link, hence a leap, by the leap supposition $w$ must be in $\fographa$ or $\fographaa_2$. Because $\net$ is rectified, the literal $w$ must be in the scope of the existential binder $\singleton\xp$, so $\singleton\xp$ is in $\fographa$ or $\fographaa_2$. Since $\singletony$ is in $\fographaa_1$, the leap $\setof{\singleton\xp,\singletony}$ is between $\fographaa_1$ and $\fographa$ or $\fographaa_2$, contradicting the leap supposition.
\end{proof}
\begin{lemma}\label{lem:split-existential}
Let $\net$ be a rectified fonet with underlying uncoloured fograph
$\fographaa_1\mkern1mu\graphunion\mkern1mu(\singletonx\graphjoin\fographa)\mkern1mu\graphunion\mkern1mu\fographaa_2$ for $\fographa$ a fograph and each $\fographaa_i$ empty or a fograph.
Suppose no leap of $\net$ is between $\verticesof{\fographaa_1}\cup\setof{\singletonx}$ and $\verticesof\fographa\cup\verticesof{\fographaa_2}$.
Then $\net$ is an existential quantification of a rectified fonet by $x$.
\end{lemma}
\begin{proof}
By Lemma\,\ref{lem:no-leap} the existential binder $\singletonx$ is in no leap of $\net$.
Let $\dualizer$ be a most general dualizer for $\net$ and let $t=\dualizer(x)$.
Define $\netp$ as the result of deleting $\singletonx$ from $\net$ and substituting $t$ for $x$ in the atom label of every literal.
Since $\net$ is a rectified fograph and $\singletonx$ is in no leap of $\net$, $\netp$ is a rectified fograph: by Lemma\,\ref{lem:mgd-deps} the term $t$ contains no universal variable of $\net$ (else $\singletonx$ would be in a leap), so substituting $t$ for $x$ respects binder scopes and rectification.
Thus $\net$ is an existential quantification of $\netp$ by $x$ at $\occs$ in the portion $\verticesof{\fographa}$ for $\occs$ the set of occurrences of $t$ in $\netp$ which replaced occurrences of $x$ in $\net$ during the construction of $\netp$.
\end{proof}
The \defn{mate} of a literal in a link is the other literal in the link.
\begin{lemma}\label{lem:split-fusion-existential}
Every non-universal rectified fonet with at least one edge is a fusion of rectified fonets or an existential quantification of a rectified fonet.
\end{lemma}
\begin{proof}
Let $\net$ be a non-universal fonet with an edge, and let $\fograph$ be its underlying uncoloured fograph.
Since $\fograph$ is a (labelled) cograph, it has the form
$\fograph=(\fograph_1\tightgraphjoin \fograph_2)\graphunion (\fograph_3\tightgraphjoin
\fograph_4)\graphunion\!\ldots\!\graphunion(\fograph_{n-1}\tightgraphjoin \fograph_n)\graphunion \fographaaa$ for (labelled) cographs $\fograph_i$
and $\fographaaa$, where $\fographaaa$ is a union of literals,
and $n\tightge 2$ since $\net$ (hence $\fograph$) has an edge.
Let $\megagraph$ be the graph whose vertices are the $\fograph_i$
with $\fograph_i\fograph_j\tightin \edgesof\megagraph$ if and only if $\net$ has an edge or leap
$\{v,w\}$ with $v\!\in \!\verticesof{\fograph_i}$ and $w\!\in\! \verticesof{\fograph_j}$.
A \emph{1-factor} is a set of pairwise disjoint edges whose
union contains all vertices. Since $\net$ is a fonet, $Z=\{
\fograph_1\fograph_2,\fograph_3\fograph_4,\ldots,\fograph_{n-1}\fograph_n\}$ is the only 1-factor of
$\megagraph$. For if $Z'$ is another 1-factor, then
$Z'\mkern-2mu\setminus\mkern-2mu Z$ determines a set of leaps in $\net$
whose union induces a bimatching in $\net$: for each
$\fograph_i\fograph_j\in Z'\mkern-2mu\setminus\mkern-2mu Z$ pick a leap
$\{v,w\}$ with $v\in \verticesof{\fograph_i}$ and $w\in \verticesof{\fograph_j}$. Since $\megagraph$
has a unique 1-factor, some $\fograph_m\fograph_{m+1}\in Z$ is a bridge
\cite{Kot59,LP86},
\textit{i.e.},
$\graphpairof{\verticesof \megagraph}{\edgesof \megagraph\mkern-1.5mu\setminus\mkern-1mu \fograph_m\fograph_{m+1}}=X\tightgraphunion
Y$ with $\fograph_m\!\in\! \verticesof X$ and $\fograph_{m+1}\!\in\! \verticesof Y$.\footnote{A similar construction of a unique 1-factor with a bridge is used in \cite{Hug06}, and \cite{Ret03} uses a related argument involving the existence of a bridge.}
Without loss of generality assume $\fograph_i\tightin\verticesof X$ for $i\tightle m$ and
$\fograph_j\tightin\verticesof Y$ for $j\tightge m+1$.
Let $\fographaaa_X$ be the restriction of $\fographaaa$ to literals with mate in a vertex of $X$, and let $\fographaaa_Y$ be the
restriction of $\fographaaa$ to literals not in $\fographaaa_X$.
Thus $\fographaaa\tighteq\fographaaa_X\tightgraphunion\fographaaa_Y$ since $\fographaaa$ contains only literals and no binders.
Define $\fographaa_1=\fographaaa_X\tightgraphunion(\fograph_1\tightgraphjoin \fograph_2)\tightgraphunion\ldots\tightgraphunion(\fograph_{m-2}\tightgraphjoin \fograph_{m-1})$
and $\fographaa_2=\fographaaa_Y\tightgraphunion(\fograph_{m+2}\tightgraphjoin \fograph_{m+3})\tightgraphunion\ldots\tightgraphunion(\fograph_{n-1}\tightgraphjoin \fograph_n)$,
so $\fograph=\fographaa_1\graphunion(\fograph_m\tightgraphjoin\fograph_{m+1})\graphunion\fographaa_2$.
Since $\fographaaa$ comprises literals only, each of $\fographaa_1$ and $\fographaa_2$ is either empty or a fograph.
If $\fograph_m$ and $\fograph_{m+1}$ both contain a literal, they are fographs, so we can appeal to Lemma\,\ref{lem:split-fusion} with $\fographa_1=\fograph_m$ and $\fographa_2=\fograph_{m+1}$ to conclude that $\net$ is a fusion of rectified fonets.
Otherwise one of $\fograph_m$ or $\fograph_{m+1}$, say $\fograph_m$, has no literal, thus $\fograph_m=\singletonx$. Then $\fograph_{m+1}$ must contain a literal, since $\fograph$ (hence $\fograph_m\tightgraphjoin\fograph_{m+1}$) is a fograph, therefore $\fograph_{m+1}$ is a fograph.
Applying Lemma\,\ref{lem:split-existential} with
$\fographa\tighteq\fograph_{m+1}$, we conclude that $\net$ is an existential quantification of a rectified fonet.
\end{proof}
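The combinatorial fact used above, that a graph with a unique 1-factor has a bridge lying in that 1-factor \cite{Kot59,LP86}, can be checked concretely on small instances. A hedged Python sketch (the example graphs in the usage below are ours, purely illustrative):

```python
# Hedged sketch: enumerate 1-factors (perfect matchings) of a small
# graph and test whether a given edge is a bridge.  Edges are tuples.

def perfect_matchings(vertices, edges):
    """All sets of pairwise-disjoint edges covering every vertex."""
    if not vertices:
        return [frozenset()]
    v = min(vertices)  # match the smallest vertex first
    result = []
    for e in edges:
        if v in e:
            rest = vertices - set(e)
            sub = [d for d in edges if not (set(d) & set(e))]
            for m in perfect_matchings(rest, sub):
                result.append(m | {e})
    return result

def is_bridge(vertices, edges, e):
    """Does removing edge e disconnect the graph?"""
    remaining = [d for d in edges if d != e]
    seen, stack = {min(vertices)}, [min(vertices)]
    while stack:  # depth-first search over the remaining edges
        u = stack.pop()
        for d in remaining:
            if u in d:
                w = d[0] if d[1] == u else d[1]
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return seen != vertices
```

For instance, the path on four vertices has a unique 1-factor, and both its matching edges are bridges, whereas the 4-cycle has two 1-factors and no bridge.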
\begin{lemma}\label{lem:fonet-constructible}
Every rectified fonet can be constructed from axioms by fusion and quantification.
\end{lemma}
\begin{proof}
Let $\net$ be a rectified fonet.
We proceed by induction on the number of binders and edges in $\net$.
In the base case with no edge or binder, $\net$ is a union of axioms by Lemma\,\ref{lem:axiom-union}, hence a fusion of axioms since union is a special case of fusion (with empty portions).
If $\net$ is universal, apply Lemma\,\ref{lem:split-universal} then appeal to induction with one less binder.
Thus we may assume $\net$ is non-universal with a binder or edge.
Had $\net$ no edge, it would have no binder (since every existential binder must be in an edge, and
a universal binder would make $\net$ universal),
thus $\net$ has at least one edge.
Apply Lemma\,\ref{lem:split-fusion-existential} then appeal to induction with fewer edges.
\end{proof}
\begin{lemma}[Fonet soundness]\label{lem:fonet-soundness}
Every fonet is valid.
\end{lemma}
\begin{proof}
By Lemma\,\ref{lem:fonet-constructible} every fonet can be constructed from axioms by fusion and quantification. Since every axiom is valid, and fusion and quantification preserve validity by Lemmas\,\ref{lem:fusion-sound}, \ref{lem:universal-sound}, and \ref{lem:existential-sound}, every fonet is valid.
\end{proof}
\subsection{Soundness of skew bifibrations}\label{sec:soundness-bifibs}
In this section we no longer assume implicitly that every formula is rectified.
An \defn{intrusion} is a formula of the form
$\formula\tightvee\forall x\mkern2mu \formulaa$,
$(\forall x\mkern2mu \formulaa)\tightvee\formula$,
$\formula\tightwedge\exists x\mkern2mu \formulaa$, or
$(\exists x\mkern2mu \formulaa)\tightwedge\formula$.
A formula is \defn{extruded} if no subformula is an intrusion.
For any variable $x$, an \defn{$x$-quantifier} is a quantifier of the form $\forall x$ or $\exists x$.
A formula is \defn{unambiguous} if no $x$-quantifier is in the scope of another $x$-quantifier, for every variable $x$.
A formula is \defn{clear} if it is extruded and unambiguous.
\begin{definition}\label{def:xgraph}
The \defn{graph} $\xgraphof\formula$ of a clear formula
$\formula$ is the logical cograph defined inductively by:
\begin{center}\vspace{-2ex}\begin{math}
\begin{array}{c}
\xgraphof{\atom}
\;=\;
\singleton\atom \hspace{1ex} \text{ for every atom\/ }\atom
\\[2ex]
\def\hspace{3ex}{\hspace{3ex}}
\begin{array}{r@{\hspace{3ex}=\hspace{3ex}}l}
\xgraphof{\,\formula\tightvee\formulaa\,} & \xgraphof\formula\graphunion\xgraphof\formulaa
\\[1.5ex]
\xgraphof{\,\formula\tightwedge\formulaa\,} & \xgraphof\formula\graphjoin\xgraphof\formulaa
\\[1.5ex]
\end{array}
\hspace{16ex}
\begin{array}{r@{\hspace{3ex}=\hspace{3ex}}l}
\xgraphof{\,\forall x\, \formula\,} & \xgraphall {x\,} {\formula}
\\[1.5ex]
\xgraphof{\,\exists x\, \formula\,} & \xgraphex {x\,} {\formula}
\\[1.5ex]
\end{array}\\[3ex]\end{array}\end{math}
\end{center}
\end{definition}
Note that $\xgraphofsymbol$ coincides with $\graphofsymbol$ (Def.\,\ref{def:graph}) on extruded rectified formulas.
\begin{lemma}\label{lem:xgraph-surj}
The function $\xgraphofsymbol$ is a surjection from clear formulas onto fographs.
Two clear formulas have the same graph if and only if they are equal modulo
\begin{align*}
\formula\tightwedge\formulaa &\fateq \formulaa\tightwedge\formula\;\;\;\;
&
\formula\wedge(\formulaa\tightwedge\formulaaa) &\fateq (\formula\tightwedge\formulaa)\wedge\formulaaa\;\;\;\;
&
\exists x\mkern2mu\exists y\mkern2mu \formula &\fateq \exists y\mkern2mu\exists x\mkern2mu\formula\;\;\;\;
\\[1ex]
\formula\tightvee\formulaa &\fateq \formulaa\tightvee\formula
&
\formula\vee(\formulaa\tightvee\formulaaa) &\fateq (\formula\tightvee\formulaa)\vee\formulaaa
&
\forall x\mkern2mu\forall y\mkern2mu \formula &\fateq \forall y\mkern2mu\forall x\mkern2mu\formula
\\[-2.5ex]
\end{align*}
\end{lemma}
\begin{proof}
A routine induction, akin to the proof of Lemma\,\ref{lem:graph-surj}.\todo{elaborate in appendix}
\end{proof}
Let $\fograph$ be a fograph. Using the above Lemma,
choose a clear formula $\formula$ such that $\xgraphof\formula\tighteq\fograph$.
Define $\fograph$ as \defn{valid} if $\formula$ is valid.
This is well-defined with respect to the choice of $\formula$, since every equality in Lemma\,\ref{lem:xgraph-surj} is a logical equivalence.
Fographs $\fograph$ and $\fographa$ are
\defn{$\wedge$-compatible} if
$\fograph\tightgraphjoin\fographa$ is a well-defined fograph and
$\widebindinggraphof{\fograph\mkern-2mu\tightgraphjoin\mkern-2mu\fographa}{7ex}=\bindinggraphof\fograph\graphunion\bindinggraphof{\fographa}$,
and
\defn{$\vee$-compatible} if
$\fograph\tightgraphunion\fographa$ is a well-defined fograph and
$\widebindinggraphof{\fograph\mkern-2mu\tightgraphunion\mkern-2mu\fographa}{7ex}=\bindinggraphof\fograph\graphunion\bindinggraphof{\fographa}$.
Thus $\vee$- and $\wedge$-compatibility ensure that no new bindings are created during graph union and join.
For any variable $x$, a fograph $\fograph$ is \defn{$x$-compatible} if $\fograph$ does not contain an $x$-binder $\singletonx$.
\begin{definition}\label{def:fograph-ops}
Let $\fograph$ and $\fographa$ be fographs. Define the \defn{fograph connectives} $\wedge$, $\vee$, $\forall$ and $\exists$ by:
\begin{itemize}
\item if $\fograph$ and $\fographa$ are $\wedge$-compatible, define $\fograph\tightwedge\fographa\mkern2mu=\mkern2mu\fograph\tightgraphjoin\fographa$
\item if $\fograph$ and $\fographa$ are $\vee$-compatible, define $\fograph\tightvee\fographa\mkern2mu=\mkern2mu\fograph\tightgraphunion\fographa$
\item for any variable $x$, if $\fograph$ is $x$-compatible,
define $\forall x\mkern2mu\fograph\mkern2mu=\mkern2mu\singletonx \graphunion \fograph$
\item for any variable $x$, if $\fograph$ is $x$-compatible,
define $\exists x\mkern2mu\fograph\mkern2mu=\mkern2mu\singletonx \graphjoin \fograph$.
\end{itemize}
\end{definition}
\begin{lemma}\label{lem:fograph-ops}
The fograph connectives $\wedge$, $\vee$, $\forall$ and $\exists$ are well-defined on fographs. In other words, given fographs as input(s),
each connective, when defined, produces a fograph as output.
\end{lemma}
\begin{proof}
By the compatibility constraints, no $x$-binder of
$\fograph\tightwedge\fographa$,
$\fograph\tightvee\fographa$,
$\forall x\mkern2mu\fograph$, or
$\exists x\mkern2mu\fograph$ can be in the scope of another $x$-binder.
\end{proof}
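To make Def.\,\ref{def:fograph-ops} concrete, the following Python sketch (an informal aid, not part of the formal development) realises the four fograph connectives on labelled graphs. A graph is a pair of a vertex-to-label dictionary and a set of undirected edges; the compatibility side conditions are not checked, and fresh vertex identifiers are supplied by hand.

```python
def union(g, h):
    """Disjoint graph union: no new edges (the 'or' and 'forall' cases)."""
    vg, eg = g
    vh, eh = h
    assert not (set(vg) & set(vh)), "vertex sets must be disjoint"
    return ({**vg, **vh}, eg | eh)

def join(g, h):
    """Graph join: union plus an edge between every cross pair."""
    vg, _ = g
    vh, _ = h
    v, e = union(g, h)
    cross = {frozenset((a, b)) for a in vg for b in vh}
    return (v, e | cross)

def literal(vid, atom):
    return ({vid: atom}, set())

def binder(vid, var):
    return ({vid: var}, set())

def forall(var, g, vid):
    # 'forall x G' = isolated binder vertex unioned with G
    return union(binder(vid, var), g)

def exists(var, g, vid):
    # 'exists x G' = binder vertex joined with G (edges to every vertex of G)
    return join(binder(vid, var), g)

# exists x (Px or Qx): the x-binder is adjacent to both literals,
# while the two literals remain non-adjacent.
g = exists('x', union(literal(1, 'Px'), literal(2, 'Qx')), vid=0)
```

For instance, the graph of $\exists x\,(Px\vee Qx)$ computed above has exactly the two binder-to-literal edges, matching the graph join of the binder with the union of the two literals.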
\begin{lemma}\label{lem:xgraph-commute}
The following equalities hold for clear formulas:
\begin{center}\vspace{0ex}\begin{math}
\begin{array}{c}
\def\hspace{3ex}{\hspace{3ex}}
\begin{array}{r@{\hspace{3ex}=\hspace{3ex}}l}
\xgraphof{\,\formula\tightvee\formulaa\,} & \xgraphof\formula\mkern1mu\vee\mkern2mu\xgraphof\formulaa
\\[1.5ex]
\xgraphof{\,\formula\tightwedge\formulaa\,} & \xgraphof\formula\mkern1mu\wedge\mkern2mu\xgraphof\formulaa
\\[1.5ex]
\end{array}
\hspace{16ex}
\begin{array}{r@{\hspace{3ex}=\hspace{3ex}}l}
\xgraphof{\,\forall x\, \formula\,} & \forall x\mkern6mu \xgraphof\formula
\\[1.5ex]
\xgraphof{\,\exists x\, \formula\,} & \exists x\mkern6mu \xgraphof\formula
\\[1.5ex]
\end{array}\end{array}\end{math}
\end{center}
\end{lemma}
\begin{proof}
Since $\formula\tightvee\formulaa$ and $\formula\tightwedge\formulaa$ are clear, $\xgraphof\formula$ and $\xgraphof\formulaa$ are $\vee$- and $\wedge$-compatible, thus $\xgraphof\formula\vee\mkern1mu\xgraphof\formulaa$ and $\xgraphof\formula\wedge\mkern1mu\xgraphof\formulaa$ are well-defined.
Because $\forall x\mkern2mu \formula$ and $\exists x\mkern2mu\formula$ are clear, no $x$-quantifier occurs in $\formula$, so $\xgraphof\formula$ contains no binder $\singletonx$, thus $\forall x\mkern3mu \xgraphof\formula$ and $\exists x\mkern3mu \xgraphof\formula$ are well-defined.
\end{proof}
\begin{lemma}\label{lem:fograph-iff-constructed}
A labelled graph is a fograph if and only if it can be constructed from literals by the fograph connectives $\wedge$, $\vee$, $\forall$ and $\exists$.
\end{lemma}
\begin{proof}
Let $\fograph$ be a fograph.
By Lemma\,\ref{lem:xgraph-surj} there exists a clear formula $\formula$
such that $\xgraphof\formula\tighteq\fograph$.
By Lemma\,\ref{lem:xgraph-commute} the $\graphjoin$ and $\graphunion$ operations in the inductive translation $\xgraphofsymbol$ of $\formula$ are well-defined $\wedge$, $\vee$, $\forall$ and $\exists$ operations on fographs. Thus $\fograph$ can be constructed from literals by
fograph connectives.
Conversely, any labelled graph constructed from literals by fograph connectives is a fograph, by repeated application of Lemma~\ref{lem:fograph-ops}, starting from the fact that any literal vertex is a fograph.
\end{proof}
A \defn{map} is a label-preserving graph homomorphism between fographs.
\begin{definition}
Extend the fograph connectives to maps $\map:\fograph\to\fographa$ and $\mapp:\fographp\to\fographap$
as follows:
\begin{itemize}
\item
if $\fograph\tightwedge\fographp$ and $\fographa\tightwedge\fographap$ are well-defined,
define $\map\tightwedge\mkern-1mu\mapp:\fograph\tightwedge\fographp\to\fographa\tightwedge\fographap$
as $\map\cup\mapp$
\item
if $\fograph\tightvee\fographp$ and $\fographa\tightvee\fographap$ are well-defined,
define $\map\tightvee\mkern-1mu\mapp:\fograph\tightvee\fographp\to\fographa\tightvee\fographap$
as $\map\cup\mapp$
\item
if $\forall x\mkern2mu \fograph$ and
$\forall x\mkern2mu \fographa$ are well-defined,
define $\forall x\mkern2mu\map\mkern2mu:\forall x\mkern2mu\fograph\to\forall x\mkern2mu\fographa$
as $\map\cup\setof{\singletonx\mkern3mu\psscalebox{.8 1}{\mathbin{\ensuremath\mapsto}}\mkern3mu\singletonx}$
\item
if $\exists x\mkern2mu \fograph$ and
$\exists x\mkern2mu \fographa$ are well-defined,
define $\exists x\mkern2mu\map\mkern2mu:\exists x\mkern2mu\fograph\to\exists x\mkern2mu\fographa$
as $\map\cup\setof{\singletonx\mkern3mu\psscalebox{.8 1}{\mathbin{\ensuremath\mapsto}}\mkern3mu\singletonx}$.
\end{itemize}
\end{definition}
\begin{lemma}\label{lem:fograph-connectives-bifibs}
The fograph connectives are well-defined on skew bifibrations: if $\map$ and $\mapp$ are skew bifibrations, then, when defined, each of the maps $\map\tightwedge\mapp$, $\map\tightvee\mapp$, $\forall x\mkern2mu\map$ and $\exists x\mkern2mu\map$ is a skew bifibration, where $x$ is any variable.
\end{lemma}
\begin{proof}
Due to the compatibility constraint in the definitions of the fograph connectives,
the skew fibration condition is preserved and the directed graph homomorphisms between binding graphs are fibrations.\todo{elaborate?}
In the $\wedge$ and $\exists$ connectives, additional requisite skew liftings are created across the corresponding graph join.
\end{proof}
\begin{lemma}\label{lem:bifibs-compose}
Skew bifibrations between fographs compose: if $\bifib:\fograph\to\fographa$ and $\bifibp:\fographa\to\fographaa$ are skew bifibrations between fographs, their composite $\bifibp\mkern-1mu\tightcirc\bifib:\fograph\to\fographaa$ is a skew bifibration.
\end{lemma}
\begin{proof}
Skew fibrations between cographs compose \cite[Cor.\:3.5]{Hug06i}, and directed graph fibrations compose \cite{Gro60}. Existential preservation is transitive.
\end{proof}
\begin{definition}
If $\fograph$ is a fograph and $\fograph\vee\fograph$ is well-defined, define \defn{pure contraction} $\contractionof\fograph$ as the canonical map $\fograph\vee\fograph\to\fograph$.
If $\fograph$ and $\fographa$ are fographs and $\fograph\vee\fographa$ is well-defined, define \defn{pure weakening} $\weakeningof\fograph\fographa$ as the canonical map $\fograph\to\fograph\vee\fographa$.
\end{definition}
\begin{lemma}\label{lem:pure-cw-are-bifibs}
Every pure contraction and pure weakening is a skew bifibration.
\end{lemma}
\begin{proof}
Immediate from the definitions of pure contraction and pure weakening.\todo{elaborate}
\end{proof}
\begin{definition}\label{def:contraction-weakening-map}
A \defn{contraction} is any map generated from a pure contraction by fograph connectives, and a \defn{weakening} is any map generated from a pure weakening by fograph connectives.
\end{definition}
\begin{lemma}\label{lem:cw-are-bifibs}
Every contraction and weakening is a skew bifibration.
\end{lemma}
\begin{proof}
Pure contraction and pure weakening are skew bifibrations by \reflem{lem:pure-cw-are-bifibs}, and
fograph connectives are well-defined on skew bifibrations by Lemma\,\ref{lem:fograph-connectives-bifibs}.
\end{proof}
\begin{definition}
A \defn{structural map} is any map constructed from isomorphisms, contractions, and weakenings by composition.
\end{definition}
\begin{lemma}\label{lem:structural-map-is-bifib}
Every structural map is a skew bifibration.
\end{lemma}
\begin{proof}
Every isomorphism is a skew bifibration, and every contraction and weakening is a skew bifibration by Lemma\,\ref{lem:cw-are-bifibs}.
Skew bifibrations compose by Lemma\,\ref{lem:bifibs-compose}.
\end{proof}
\begin{lemma}\label{lem:strucmap-sound}
Structural maps are sound: if $\fograph$ is a valid fograph and $\map:\fograph\to\fographa$ is a structural map, then $\fographa$ is valid.
\end{lemma}
\begin{proof}
Isomorphisms, pure contraction, pure weakening, composition and fograph connectives are sound.
\end{proof}
\subsubsection{The image of a skew bifibration is a fograph}\label{sec:image}
We recall the \emph{modular decomposition} \cite{Gal67} of a cograph, called its \defn{cotree} \cite{CLS81}.
A directed graph $\defaultplustimestree$ is \defn{acyclic} if the transitive closure of ${}\child{}$ (viewed as a binary relation on $\nodeset$) is irreflexive.
A \defn{forest} is an acyclic directed graph $\defaultplustimestree$ such that for every $n\tightin \nodeset$ there exists at most one $m\tightin \nodeset$ with $\diredge nm\in{}\mkern-3mu\child{}$.
We refer to the vertices of a forest as \defn{nodes}.
Write $m\child n$ or $n\parent m$ for $\diredge n m\in{}\mkern-3mu\child{}$, and say that $m$ is a \defn{child} of $n$ and $n$ is the \defn{parent} of $m$.
A \defn{leaf} (resp.\ \defn{root}) is a node with no child (resp.\ parent).
A \defn{tree} is a forest with a unique root.
A \defn{\plustimestree} is a tree in which
a node is labelled $\graphunion$ or $\graphjoin$ if and only if it is not a leaf.
Each node labelled $\graphunion$ or $\graphjoin$ is a
\defn{\plustimesnode}.
An isomorphism $\iota:\defaultplustimestree\to\defaultplustimestreep$ of \plustimestrees is a bijection
$\iota:\nodeset\to\nodesetp$ such that
$m\child n$ if and only if $\iota(m)\mkern1mu\childp\mkern2mu\iota(n)$ and
$\iota(n)$ is a $\graphunion$ (resp.\ $\graphjoin$) node
if and only if $n$ is a $\graphunion$ (resp.\ $\graphjoin$) node.
We identify \plustimestrees up to isomorphism.
Given \plustimestrees $\tree_1,\ldots,\tree_n$ for $n\ge 1$ define $\graphunion\tree_1\ldots\tree_n$ (resp.\ $\graphjoin\tree_1\ldots\tree_n$) as the disjoint union of the $\tree_i$ together with a $\graphunion$ (resp.\ $\graphjoin$) root node $r$ and an edge to $r$ from the root of each $\tree_i$ ($1\tightle i\tightle n$).
Write $\singleton{}$ for the \plustimestree with a unique node.
For example, the \plustimestree
$\graphunion(\graphjoin\singleton{}\singleton{})\singleton{}\mkern-4mu\left(\rule{0ex}{1.7ex}\mkern-4mu\graphjoin\mkern-4mu\singleton{}\singleton{}(\graphunion\singleton{}\singleton{})\right)$
is below-left and
$\graphunion(\graphjoin\singleton{}\singleton{})(\graphunion\singleton{})\mkern-4mu\left(\rule{0ex}{1.7ex}\mkern-4mu\graphjoin\mkern-4mu\singleton{}(\graphjoin\singleton{}(\graphunion\singleton{}\singleton{}))\right)$
is below-right.
\begin{center}\begin{pspicture}(0,\twoedgelen)(0,-\twoedgelen)\begin{math}
\rput(-5.5,\halfedgelen){\defaultcotreeeg}
\rput(-.2,0){
\vx{-\edgelen,\edgelen}{a}
\vx{-\edgelen,0}{b}
\e a b
\vx{0,\halfedgelen}{c}
\rput(\edgelen,0){
\vx{0,\edgelen}{d}
\vx{0,0}{e}
\vx{\edgelen,\edgelen}{f}
\vx{\edgelen,0}{g}
\e d e
\e d f
\e e f
\e d g
\e e g
}}
\rput(5.5,0){
\plustreesep{.75}{
\timestreesep{.75}{\lf\lf}
\plustree{\lf}
\timestreesep{.6}{
\lf
\tspace{-.235}
\timestreesep{.6}{
\lf
\tspace{-.235}
\plustreesep{.6}{\lf\lf}
}
}
}
}
\end{math}\end{pspicture}\end{center}
\begin{definition}\label{def:cograph-of}
The \defn{cograph} $\cographof\tree$ of a \plustimestree $\tree$ is the cograph defined inductively by
\[
\cographof{\singleton{}}
\hspace{.5ex}=\hspace{.5ex}
\singleton{}
\hspace{6ex}
\cographof{\graphunion\tree_1\ldots\tree_n}
\hspace{1ex}=\hspace{1ex}
\cographof{\tree_1}\graphunion\ldots\graphunion\cographof{\tree_n}
\hspace{6ex}
\cographof{\graphjoin\tree_1\ldots\tree_n}
\hspace{1ex}=\hspace{1ex}
\cographof{\tree_1}\graphjoin\ldots\graphjoin\cographof{\tree_n}
\]
\end{definition}
For example, the cograph of the \plustimestree above-left is shown above-center; this cograph is also the cograph of the \plustimestree above-right.
\begin{lemma}\label{lem:leaf-to-vertex}
The leaves of a \plustimestree $\tree$ are in bijection with the vertices of its cograph $\cographof\tree$.
\end{lemma}
\begin{proof}
Induction on the number of nodes in $\tree$, pattern-matching the three cases in
Def.\,\ref{def:cograph-of}.
\end{proof}
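As a concrete check of Def.\,\ref{def:cograph-of}, the following Python sketch (informal; a \plustimestree is a nested tuple whose first component is \texttt{'+'} or \texttt{'*'} and whose leaves are integers) computes the cograph of a tree. On the first example \plustimestree pictured above it reproduces the pictured cograph, and its vertex set is exactly the leaf set, as in Lemma\,\ref{lem:leaf-to-vertex}.

```python
def cograph(tree):
    """Return (vertices, edges) of the cograph of a +/*-tree."""
    if isinstance(tree, int):            # a leaf is a single vertex
        return {tree}, set()
    op, *subs = tree
    parts = [cograph(t) for t in subs]
    vs = set().union(*(v for v, _ in parts))
    es = set().union(*(e for _, e in parts))
    if op == '*':                        # join: all edges across subtrees
        for i, (vi, _) in enumerate(parts):
            for vj, _ in parts[i + 1:]:
                es |= {frozenset((a, b)) for a in vi for b in vj}
    return vs, es

# the above-left example tree:  +( *(0 1)  2  *(3 4 +(5 6)) )
t = ('+', ('*', 0, 1), 2, ('*', 3, 4, ('+', 5, 6)))
vs, es = cograph(t)
```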
A \plustimesnode{} \defn{repeats} if it has a parent with the same label, and is \defn{unary} if it has a unique child.
A \plustimestree{} \defn{alternates} if it has no repeating \plustimesnode and
\defn{branches}
if it has no unary \plustimesnode.
\begin{definition}\label{def:cotree}
A \defn{cotree} is a branching and alternating \plustimestree.
\end{definition}
For example, the \plustimestree above-left is a cotree, while the \plustimestree above-right is not (since it has a repeating $\graphjoin$ node and a unary and repeating $\graphunion$ node).
We recall the following definition from \cite{CLS81}.
\begin{definition}\label{def:cotree-of}
The \defn{cotree} $\cotreeof\cograph$ of a cograph $\cograph$ is the cotree defined inductively by
\begin{center}\begin{math}
\cotreeof{\singleton{}}
\hspace{1ex}=\hspace{1ex}
\singleton{}
\hspace{5ex}
\begin{array}{r@{\hspace{1ex}=\hspace{1ex}}l@{\hspace{2ex}}l}
\cotreeof{\cograph_1\graphunion\ldots\graphunion\cograph_n}
&
\graphunion\cotreeof{\cograph_1}\ldots\cotreeof{\cograph_n}
&
\text{if $\cograph_i$ is connected for $1\tightle i\tightle n\tightge 2$}
\\[2ex]
\cotreeof{\cograph_1\graphjoin\ldots\graphjoin\cograph_n}
&
\graphjoin\cotreeof{\cograph_1}\ldots\cotreeof{\cograph_n}
&
\text{if $\cograph_i$ is coconnected for $1\tightle i\tightle n\tightge 2\mkern2mu$.\!\!\!}
\end{array}\end{math}\end{center}\end{definition}
The following Lemma articulates a standard property of cotrees.
Recall from \S\ref{sec:proper} that a module in a graph is \emph{proper} if it has two or more vertices.
A module $\module$ of a cograph $\graph$ is \defn{connected} (resp.\ \defn{coconnected}) if the induced subgraph $\induced\graph\module$ is connected (resp.\ coconnected).
\begin{lemma}\label{lem:node-strong-module}
The nodes of the cotree $\cotreeof\cograph$ of a cograph $\cograph$ correspond to the strong modules of $\cograph$,
and the $\graphjoin$ (resp.\ $\graphunion$) nodes correspond to proper connected (resp.\ coconnected) strong modules.
\end{lemma}
\begin{proof}
Induction on the number of vertices in $\cograph$ \cite{CLS81}.
\end{proof}
The following Lemma is also a standard cotree property.
\begin{lemma}\label{lem:cotree-cograph}
The function $\cotreeof{-}$
is a bijection from cographs to cotrees.
\end{lemma}
\begin{proof}
Induction on the number of vertices in the cograph \cite{CLS81}.
\end{proof}
\begin{lemma}\label{lem:unique-branching-alternating}
The cotree $\cotreeof\cograph$ of a cograph $\cograph$ is the unique branching and alternating \plustimestree $\tree$ such that $\cographof\tree=\cograph$.
\end{lemma}
\begin{proof}
A routine induction on the number of vertices in $\cograph$.
\end{proof}
\begin{lemma}\label{lem:vertex-to-leaf}
The vertices of a cograph $\cograph$ are in bijection with the leaves of its cotree $\cotreeof\cograph$.
\end{lemma}
\begin{proof}
Lemmas\,\ref{lem:leaf-to-vertex} and \ref{lem:unique-branching-alternating}.
\end{proof}
Let $\node$ be a node in a tree $\tree=\defaultplustimestree$.
Define the \defn{absorption} $\absorption\tree\node$ of $\node$ in $\tree$ as the result of deleting $\node$ (and incident edges) from $\tree$ and, if $\node$ has a parent $\parentof\node$, adding an edge from each child of $\node$ to $\parentof\node$.
Thus
$\nodesetof{\absorptionsub\tree\node}=
\nodesetof\tree\tightsetminus\mkern1mu\setof{\mkern-2mu\node\mkern-2mu}$ and
$m{}\childin{\absorptionsub\tree\node}{}m\primed$ if and only if $m{}\,\childin\tree{}\,m\primed$
or
$m\,{}\childin\tree{}\,\node\,{}\childin\tree{}\,m\primed$.
\begin{definition}\label{def:cotree-of-tree}
Given a \plustimestree $\tree$ define its \defn{cotree} $\absorbed\tree$ as
the cotree obtained by iteratively and exhaustively absorbing unary \plustimesnodes and repeating \plustimesnodes in $\tree$.
\end{definition}
For example, if $\tree$ is the \plustimestree above-right of Def.\,\ref{def:cograph-of} then its cotree $\absorbed\tree$ is above-left of Def.\,\ref{def:cograph-of}.
\begin{lemma}\label{lem:cograph-absorb}
$\cographof\tree=\cographof{\absorbed\tree}$ for every \plustimestree $\tree$.
\end{lemma}
\begin{proof}
By induction on the number of nodes in $\tree$, pattern-matching the three cases in Def.\,\ref{def:cograph-of}, combined with the associativity and commutativity of the graph union $\graphunion$ and join $\graphjoin$ operations.
\end{proof}
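Def.\,\ref{def:cotree-of-tree} can be prototyped directly. With the nested-tuple representation of a \plustimestree (an informal aid, not part of the formal development), the following Python sketch absorbs repeating and unary \plustimesnodes bottom-up; on the above-right example tree of Def.\,\ref{def:cograph-of} it returns the above-left cotree, illustrating Lemma\,\ref{lem:cograph-absorb}.

```python
def absorbed(tree):
    """Absorb unary and repeating +/* nodes, yielding a branching,
    alternating tree (the cotree of the input tree)."""
    if isinstance(tree, int):
        return tree
    op, *subs = tree
    subs = [absorbed(t) for t in subs]
    flat = []
    for t in subs:
        # a repeating child has the same operator as its parent: absorb it
        if isinstance(t, tuple) and t[0] == op:
            flat.extend(t[1:])
        else:
            flat.append(t)
    # a unary node is absorbed by returning its single child
    return flat[0] if len(flat) == 1 else (op, *flat)

# the above-right example tree:  +( *(0 1)  +(2)  *(3 *(4 +(5 6))) )
t_right = ('+', ('*', 0, 1), ('+', 2), ('*', 3, ('*', 4, ('+', 5, 6))))
t_left = absorbed(t_right)
```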
Recall that $\induced\graph U$ is the subgraph of a graph $\graph$ induced by a set of vertices $U$.
Define the \plustimestree $\induced\tree U$ \defn{induced} by a non-empty set of leaves $U$ in a \plustimestree $\tree$ by deleting from $\tree$ every leaf not in $U$, and then iteratively and exhaustively deleting any resulting childless \plustimesnodes.
For example, if $\tree$ is the cotree below-left and $U$ comprises the left-most four leaves of $\tree$, then the \plustimestree $\induced\tree U$ is below-center, and the cotree $\absorbed{\induced\tree U}$ is below-right.
\begin{center}\begin{pspicture}(0,\twoedgelen)(0,-\twoedgelen)\begin{math}
\rput(-5,\halfedgelen){\defaultcotreeeg}
\rput(.5,\edgelen){\plustreesep{.9}{%
\timestreesep{.75}{\lf\lf}%
\lf%
\timestreesep{.6}{%
\lf%
{}\hspace*{6.5ex}{}
}%
}%
}%
\rput(5.5,\edgelen){\plustreesep{.9}{
\timestreesep{.75}{\lf\lf}%
\lf%
{}\hspace*{1ex}{}
\lf%
}%
}
\end{math}\end{pspicture}\end{center}
\begin{lemma}\label{lem:induced-subtree}
If $U$ is a non-empty set of leaves in a cotree $\tree$, then
$\cographof{\induced\tree U}\,=\,\induced{\cographof\tree} U$.
\end{lemma}
\begin{proof}
Induction on the number of nodes in $\tree$.
\end{proof}
\begin{lemma}\label{lem:cograph-cotree-induced}
If $U$ is a non-empty set of vertices in a cograph $\cograph$, then
$\cographof{\absorbed{\induced{\cotreeof\graph} U}}=\induced\graph U$.
\end{lemma}
\begin{proof}
By \reflem{lem:cograph-absorb},
$\cographof{\absorbed{\induced{\cotreeof\graph} U}}
=
\cographof{\induced{\cotreeof\graph} U}$, which is
$\induced{\cographof{\cotreeof\graph}} U$
by \reflem{lem:induced-subtree},
hence
$\induced\graph U$
by \reflem{lem:unique-branching-alternating}.
\end{proof}
\begin{lemma}\label{lem:induced-cotree}
If $U$ is a non-empty set of vertices in a cograph $\cograph$, then
$\cotreeof{\induced\graph U}\,=\,\absorbed{\induced{\cotreeof\graph} U}$.
\end{lemma}
\begin{proof}
By \reflem{lem:unique-branching-alternating} it suffices to show that
$\cographof{\absorbed{\induced{\cotreeof\graph} U}}=\induced\graph U$, which is \reflem{lem:cograph-cotree-induced}.
\end{proof}
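The induced-tree construction and Lemma\,\ref{lem:induced-cotree} can be checked on the worked example pictured above. The following Python sketch (informal, nested-tuple representation with integer leaves) deletes the leaves outside $U$, prunes childless \plustimesnodes, and then absorbs unary and repeating nodes.

```python
def induced(tree, U):
    """Subtree induced by a leaf set U: drop leaves outside U, then any
    +/* node left with no children."""
    if isinstance(tree, int):
        return tree if tree in U else None
    op, *subs = tree
    kept = [s for s in (induced(t, U) for t in subs) if s is not None]
    return (op, *kept) if kept else None

def absorbed(tree):
    """Absorb unary and repeating +/* nodes (cotree of a tree)."""
    if isinstance(tree, int):
        return tree
    op, *subs = tree
    subs = [absorbed(t) for t in subs]
    flat = []
    for t in subs:
        if isinstance(t, tuple) and t[0] == op:
            flat.extend(t[1:])
        else:
            flat.append(t)
    return flat[0] if len(flat) == 1 else (op, *flat)

# the cotree pictured above-left, with U the left-most four leaves
t = ('+', ('*', 0, 1), 2, ('*', 3, 4, ('+', 5, 6)))
U = {0, 1, 2, 3}
```

Here `induced(t, U)` is the middle picture (with its unary $\graphjoin$ node) and `absorbed(induced(t, U))` is the right-hand cotree.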
Write $\nodesof\tree$ for the set of nodes of a tree $\tree$, ${}\childin\tree{}$ for its set of directed edges, ${}\belowin\tree{}$ for the transitive closure of ${}\childin\tree{}$, and ${}\atorbelowin\tree{}$ for the reflexive closure of ${}\belowin\tree{}$.
Define
$\nodea\abovein\tree\node$ as
$\node\belowin\tree\nodea$,
and say that
$\nodea$ is \defn{above} $\node$ or
$\node$ is \defn{below} $\nodea$;
define
$\nodea\atorabovein\tree\node$ as
$\node\atorbelowin\tree\nodea$,
and say that
$\nodea$ is \defn{at or above} $\node$ or
$\node$ is \defn{at or below} $\nodea$.
Define
the \defn{meet} $\nodea\meet\node$ of nodes $\nodea$ and $\node$ in a tree $\tree$ as the $\atorbelowin\tree\mkern2mu$-least node $\nodeaa$ with $\nodea\atorbelowin\tree\nodeaa$ and $\node\atorbelowin\tree\nodeaa$.
\begin{lemma}\label{lem:meet}
Let $\graph$ be a cograph and $v,w\in\verticesof\graph$. Then $vw\tightin\edgesof\graph$ if and only if $v\meet w$ in the cotree $\cotreeof\graph$ is a $\graphjoin$ node and $vw\tightnotin\edgesof\graph$ if and only if $v\meet w$ is a $\graphunion$ node or $v\tighteq w$.
\end{lemma}
\begin{proof}
This follows directly from \reflem{lem:node-strong-module}.
\end{proof}
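Lemma\,\ref{lem:meet} suggests a simple round-trip test: compute the cotree of a cograph by the recursive connected/coconnected split of Def.\,\ref{def:cotree-of}, then recover the edges from the tree (two leaves are adjacent exactly when they lie in distinct subtrees of a $\graphjoin$ node, i.e.\ when their meet is a $\graphjoin$ node). The following Python sketch is an informal prototype; it assumes the input is a cograph (on other inputs the recursion does not terminate).

```python
def cotree(vs, edges):
    """Cotree of a cograph (vertices vs, edges as frozenset pairs):
    split into connected components (a + node) or, when connected,
    into co-connected components (a * node)."""
    adj = {v: set() for v in vs}
    for e in edges:
        a, b = tuple(e)
        adj[a].add(b)
        adj[b].add(a)

    def comps(sub, nbr):
        # components of sub under the neighbour function nbr
        out, seen = [], set()
        for s in sub:
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                stack.extend(nbr(v) & (sub - comp))
            seen |= comp
            out.append(comp)
        return out

    def build(sub):
        if len(sub) == 1:
            return next(iter(sub))
        cc = comps(sub, lambda v: adj[v])
        if len(cc) > 1:                       # disconnected: a + node
            return ('+', *map(build, cc))
        # connected with >1 vertex: a cograph is then co-disconnected
        return ('*', *map(build, comps(sub, lambda v: sub - adj[v] - {v})))

    return build(set(vs))

def cograph(tree):
    """Inverse direction: recover (vertices, edges) from a +/*-tree;
    two leaves are adjacent iff they meet at a * node."""
    if not isinstance(tree, tuple):
        return {tree}, set()
    op, *subs = tree
    parts = [cograph(t) for t in subs]
    vs = set().union(*(v for v, _ in parts))
    es = set().union(*(e for _, e in parts))
    if op == '*':
        for i, (vi, _) in enumerate(parts):
            for vj, _ in parts[i + 1:]:
                es |= {frozenset((a, b)) for a in vi for b in vj}
    return vs, es

# round trip on the example cograph pictured earlier
vs = set(range(7))
es = {frozenset(p) for p in [(0, 1), (3, 4), (3, 5), (3, 6), (4, 5), (4, 6)]}
t = cotree(vs, es)
```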
Write $v\jmeet w$ (resp.\ $v\umeet w$) for $v\meet w$ if it is a $\graphjoin$ (resp.\ $\graphunion$) node.
For a cograph $\cograph$ write
${}\childin\cograph{}$,
${}\belowin\cograph{}$,
and
${}\atorbelowin\cograph{}$ for
${}\childin{\cotreeof\cograph}{}$,
${}\belowin{\cotreeof\cograph}{}$,
and
${}\atorbelowin{\cotreeof\cograph}{}$, respectively.
\begin{lemma}\label{lem:vee}
If $\graph$ is a cograph with $vw,vu\tightin\edgesof\graph$, $wu\tightnotin\edgesof\graph$, $w\tightneq u$,
then
$v \belowin\cograph v\jmeet w \abovein\cograph w\umeet u \abovein\cograph w,u$.
\end{lemma}
\begin{proof}
By Lemma\,\ref{lem:meet} $v\meet w$ is a $\graphjoin$ node and $w\meet u$ is a $\graphunion$ node.
Since $v\jmeet w\abovein\cograph w\belowin\cograph w\umeet u$ and $\cotreeof\graph$ is a tree, either
$v\jmeet w\belowin\cograph w\umeet u$ or
$v\jmeet w\abovein\cograph w\umeet u$.
If $v\jmeet w\belowin\cograph w\umeet u$ then $v\meet u$ is a $\graphunion$ node, contradicting $vu\tightin\edgesof\graph$ (by Lemma\,\ref{lem:meet}), so $v\jmeet w\abovein\cograph w\umeet u$.
\end{proof}
\begin{lemma}\label{lem:covee}
If $\graph$ is a cograph with $vw,vu\tightnotin\edgesof\graph$ and $wu\tightin\edgesof\graph$
then $v \belowin\cograph v\umeet w \abovein\cograph w\jmeet u \abovein\cograph w,u$.
\end{lemma}
\begin{proof}
Necessarily $w\tightneq u$ since $wu\tightin\edgesof\graph$; also $v\tightneq w$ and $v\tightneq u$, since otherwise the edge $wu$ would coincide with the non-edge $vu$ or $vw$.
Thus we can apply Lemma\,\ref{lem:vee} to the complement of $\graph$.
\end{proof}
\begin{lemma}\label{lem:umeet-to-umeet}
If $\map:\graph\to\grapha$ is a skew fibration between cographs and
$v \, \childin\graph \, v \umeet w \, \abovein\graph \, w$
for $v,w\tightin\verticesof\graph$ with $\map(v)\tightneq\map(w)$,
then
$\map(v) \, \belowin\grapha \, \map(v)\umeet\map(w) \, \abovein\grapha \, \map(w)$.
\end{lemma}
\begin{proof}
Since $\map(v)\tightneq\map(w)$ the meet $\map(v)\meet\map(w)$ in
$\cotreeof\grapha$ is a $\graphunion$ or $\graphjoin$ node.
If the former,
we have
\mbox{$\map(v) \, \belowin\grapha \, \map(v)\umeet\map(w) \, \abovein\grapha \, \map(w)$} as desired.
Otherwise
\mbox{$\map(v) \, \belowin\grapha \, \map(v)\jmeet\map(w) \, \abovein\grapha \, \map(w)$}.
By Lem.\,\ref{lem:meet}
\mbox{$vw\tightnotin\edgesof\graph$} since
\mbox{$v \, \childin\graph \, v\umeet w\, \abovein\graph \, w$},
and
\mbox{$\map(v)\mkern1mu\map(w)\tightin\edgesof\grapha$} since
\mbox{$\map(v) \, \belowin\grapha \, \map(v)\jmeet\map(w) \, \abovein\grapha \, \map(w)$}.
Because $\map$ is a skew fibration and \mbox{$\map(v)\mkern1mu\map(w)\tightin\edgesof\grapha$},
there exists $u\tightin\verticesof\graph$ with
$vu\tightin\edgesof\graph$ and \mbox{$\map(w)\mkern1mu\map(u)\tightnotin\edgesof\grapha$}.
Since $\map$ is a graph homomorphism,
$\map(v)\mkern1mu\map(u)\tightin\edgesof\grapha$ and
$wu\tightnotin\edgesof\graph$,
and $w\tightneq u$ (otherwise $vw\tightin\edgesof\graph$ since $vu\tightin\edgesof\graph$, contradicting $vw\tightnotin\edgesof\graph$).
Since $wv,wu\tightnotin\edgesof\graph$ and $vu\tightin\edgesof\graph$,
by Lemma\,\ref{lem:covee} we have
$w \belowin\cograph w\umeet v \abovein\cograph v\jmeet u \abovein\cograph v$,
hence
$v\belowin\graph v\jmeet u\belowin\graph v\umeet w$, contradicting $v\childin\graph v\umeet w$.
\end{proof}
The following Lemma refines
$\map(v)\belowin\grapha \map(v)\umeet\map(w)$
in the above Lemma to
$\map(v)\childin\grapha \map(v)\umeet\map(w)$.
\begin{lemma}\label{lem:preserve-parent-plus}
If $\map:\graph\to\grapha$ is a skew fibration between cographs and
$v \, \childin\graph \, v \umeet w \, \abovein\graph \, w$
for $v,w\tightin\verticesof\graph$ with $\map(v)\tightneq\map(w)$,
then
$\map(v) \, \childin\grapha \, \map(v)\umeet\map(w) \, \abovein\grapha \, \map(w)$.
\end{lemma}
\begin{proof}
By Lemma\,\ref{lem:umeet-to-umeet} we have
$\map(v) \, \belowin\grapha \, \map(v)\umeet\map(w) \, \abovein\grapha \, \map(w)$.
Suppose, towards a contradiction, that $\map(v)\childin\grapha\map(v)\umeet\map(w)$ fails. Then
\mbox{$\map(v) \, \belowin\grapha \, \map(v)\jmeet u \, \childin\grapha \, \map(v)\umeet\map(w) \, \abovein\grapha \, \map(w)$}
for some $u\tightin\verticesof\grapha$.
Since $\map(v) \, \belowin\grapha \, \map(v)\jmeet u \, \abovein\grapha u$
we have $\map(v)\mkern1mu u\tightin\edgesof\grapha$ by \reflem{lem:meet}.
Because $\map$ is a skew fibration and $\map(v)\mkern1mu u\tightin\edgesof\grapha$, there exists $\tilde u\tightin\verticesof\graph$ with $v\tilde u\tightin\edgesof\graph$ and $\map(u)\mkern-1mu\map(\tilde u)\tightnotin\edgesof\grapha$.
Necessarily $\tilde u w\tightin\edgesof\graph$, otherwise \reflem{lem:covee} applied to
$vw,\tilde u w\tightnotin\edgesof\graph$
and
$v\tilde u\tightin\edgesof\graph$
yields
$w \belowin\cograph w\umeet v \abovein\cograph v\jmeet \tilde u \abovein\cograph v$
so
$v \, \belowin\graph \, v\jmeet\tilde u \, \belowin\graph \, w\umeet v$
contradicting $v\,\childin\graph\,v\umeet w$.
Since $\map$ is a graph homomorphism, $\map(\tilde u)\mkern1mu\map(w)\tightin\edgesof\grapha$ and $\map(v)\mkern1mu\map(\tilde u)\tightin\edgesof\grapha$, while $\map(v)\mkern1mu\map(w)\tightnotin\edgesof\grapha$ by \reflem{lem:meet}, so \reflem{lem:vee} yields
\mbox{$\map(\tilde u) \, \belowin\grapha \, \map(\tilde u)\jmeet\map(v) \, \abovein\grapha \, \map(v)\umeet\map(w) \, \abovein\grapha \, \map(v),\map(w)$}.
Since $u \, \belowin\grapha \, \map(v)\umeet\map(w) \, \belowin\grapha \, \map(\tilde u)\jmeet\map(v)$, the meet $\map(\tilde u)\meet u$ is the $\graphjoin$ node $\map(\tilde u)\jmeet\map(v)$, hence $\map(\tilde u)\mkern1mu u\tightin\edgesof\grapha$ by \reflem{lem:meet}, contradicting $\map(u)\mkern-1mu\map(\tilde u)\tightnotin\edgesof\grapha$.
\end{proof}
\begin{lemma}\label{lem:balance}
If $\map:\fograph\to\fographa$ is a skew fibration between cographs and $m\,\childin\fographa\, m\jmeet n\,\parentin\fographa\, n$, then
$\map(v)\atorbelowin\fographa m$ for some $v\tightin\verticesof\fograph$
if and only if
$\map(w)\atorbelowin\fographa n$ for some $w\tightin\verticesof\fograph$.
\end{lemma}
\begin{proof}
Assume $m\tightneq n$, otherwise the result is immediate.
Suppose $\map(v)\atorbelowin\fographa m$ for $v\tightin\verticesof\fograph$.
Choose $u\tightin\verticesof\fographa$ with $u\atorbelowin\fographa n$.
Thus $\map(v)\,\atorbelowin\fographa\, m \, \childin\fographa \, m\jmeet n \, \parentin\fographa\, n \,\atorabovein\fographa\, u\,$.
Since $m\jmeet n=\map(v)\jmeet u$ is a $\graphjoin$ node,
we have $\map(v)\mkern1mu u\tightin\edgesof\fographa$ by Lemma\,\ref{lem:meet}.
Because $\map$ is a skew fibration and $\map(v)\mkern1mu u\tightin\edgesof\fographa$,
there exists
$\widehat u\tightin\verticesof\fograph$
with
$v\widehat u\tightin\edgesof\fograph$
and
$\map(\widehat u)\mkern1mu u\tightnotin\edgesof\fographa$.
Since $v\widehat u\tightin\edgesof\fograph$ and $\map$ is a graph homomorphism, we have
$\map(v)\mkern1mu\map(\widehat u)\tightin\edgesof\fographa$.
If $u\tighteq\map(\widehat u)$ then since
$n \,\atorabovein\fographa\, u\,$
we have $\map(\widehat u)\atorbelowin\fographa n$ as desired.
Otherwise $u\tightneq\map(\widehat u)$, so applying Lemma~\ref{lem:vee} to
$\map(v)\mkern1mu u,\map(v)\mkern1mu\map(\widehat u)\tightin\edgesof\fographa$,
$u\mkern1mu \map(\widehat u)\tightnotin\edgesof\fographa$,
$u\tightneq\map(\widehat u)$
yields
$\map(v) \, \belowin\fographa \, \map(v)\jmeet u \, \abovein\fographa \, u\umeet \map(\widehat u) \, \abovein\fographa \, u,\map(\widehat u)$.
Thus since $\map(v)\jmeet u=m\jmeet n$ is the parent of $n$,
both
\mbox{$\map(v) \jmeet u \, \parentin\fographa \, n \, \atorabovein\fographa \, u$}
and
\mbox{$\map(v) \jmeet u \, \abovein\fographa \, u\umeet\map(\widehat u) \, \abovein\fographa \, u$},
so because $\cotreeof\fographa$ is a tree,
we have
$n \, \atorabovein\fographa \, u\mkern1mu\umeet\mkern1mu\map(\widehat u)$,
hence
$\map(\widehat u) \, \atorbelowin\fographa \, n$.
\end{proof}
Given a function $\map:V\to W$ write $\map(V)$ for $\setst{\map(v)}{v\tightin V}\subseteq W$.
\begin{definition}
Let $\map:\fograph\to\fographa$ be a graph homomorphism between cographs. Define the \defn{image}
$\im\map$ as the subgraph $\induced\fographa{\map(\verticesof\fograph)}$ of $\fographa$ induced by $\map(\verticesof\fograph)$.
\end{definition}
Define $v\parentabovein\tree w$ if $v$ and $w$ are leaves and $\parentof v\,\abovein\tree\, w$ for the parent $\parentof v$ of $v$.
Recall that a cograph is \emph{logical} if every vertex is a binder or literal (i.e.\ is labelled by a variable or atom), and at least one vertex is a literal.
Write $\scopeofin \binder \fograph$ for the scope of a binder $\binder$ in a logical cograph $\fograph$.
The following Lemma shows that the scope of $\binder$ is the set of leaves below the parent of $\binder$ in the cotree $\cotreeof\fograph$.
\begin{lemma}\label{lem:parent-scope}\label{lem:scope-parent}
For any vertex $\vertex$ and binder $\binder$ in a logical cograph $\fograph$,
$\vertex\tightin\scopeofin\binder\fograph$
if and only if
$\binder\parentabovein{\cotreeof\fograph}\vertex$.
\end{lemma}
\begin{proof}
By definition the scope of $\binder$ is the smallest proper strong module containing $\binder$,
which corresponds to the parent of $\binder$ in the cotree $\cotreeof\fograph$ by \reflem{lem:node-strong-module}.
\end{proof}
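Lemma\,\ref{lem:parent-scope} gives a direct way to read scopes off a cotree: the scope of a binder is the leaf set of its parent node. The following Python sketch is an informal aid (leaves are identified with their labels, assumed distinct), using a hypothetical example formula.

```python
def leaves(t):
    """Leaf set of a +/*-tree given as nested tuples with string leaves."""
    return {t} if not isinstance(t, tuple) else set().union(*map(leaves, t[1:]))

def scope(tree, b):
    """Leaves below the parent of leaf b: by the Lemma, the scope of
    binder b in the fograph whose cotree is `tree`."""
    if isinstance(tree, tuple):
        if b in tree[1:]:          # tree is the parent of b
            return leaves(tree)
        for s in tree[1:]:
            r = scope(s, b)
            if r is not None:
                return r
    return None

# cotree of the fograph of  exists x (Px and (forall y Qy)):
#   *( x  Px  +( y  Qy ) )
t = ('*', 'x', 'Px', ('+', 'y', 'Qy'))
```

Here the scope of the $y$-binder is $\{y,Qy\}$ (the $\forall y\,Qy$ part), while the $x$-binder scopes over the whole graph.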
\begin{lemma}\label{lem:absorption-parent-below}
For any \plustimestree $\tree$ and non-root \plustimesnode $\node$ in $\tree$,
if $v\parentabovein\tree w$ then $v\parentabovein{\absorptionsub\tree\node} w$.
\end{lemma}
\begin{proof}
Immediate from the definition of $\absorption\tree\node$.
\end{proof}
\begin{lemma}\label{lem:absorbed-parent-below}
For any \plustimestree $\tree$, if $v\parentabovein\tree w$ then $v\parentabovein{\absorbed\tree} w$.
\end{lemma}
\begin{proof}
Iterate \reflem{lem:absorption-parent-below} for every absorption step in the construction of $\absorbed\tree$ in Def.\,\ref{def:cotree-of-tree}.
\end{proof}
\begin{lemma}\label{lem:induced-parent-below}
For any \plustimestree $\tree$ and non-empty set $U$ of leaves in $\tree$,
if $v\parentabovein\tree w$ and $v,w\tightin U$ then $v\parentabovein{\induced\tree U} w$.
\end{lemma}
\begin{proof}
The edge relation $\,\childin{\induced\tree U}\,$ is a subset of $\,\childin\tree\,$.
\end{proof}
\begin{lemma}\label{lem:induced-scope}
Let $\fographa$ be a logical cograph which is an induced subgraph of a logical cograph $\fograph$.
For every vertex $\vertex$ and binder $\binder$ in $\fographa$,
if $\vertex\in\scopeofin \binder \fograph$ then $\vertex\in\scopeofin \binder \fographa$.
\end{lemma}
\begin{proof}
\hspace{-2pt}Let $\vertex\in\scopeofin \binder \fograph$.
By \reflem{lem:scope-parent}, $\binder\parentabovein{\cotreeof\fograph}\vertex$.
By \reflem{lem:induced-parent-below}, $\binder\parentabovein{\induced{\cotreeof\fograph}{\verticesof\fographa}}\vertex$.
By \reflem{lem:absorbed-parent-below},
$\binder\parentabovein{\absorbed{\induced{\cotreeof\fograph}{\verticesof\fographa}}}\vertex$.
By \reflem{lem:induced-cotree},
$\absorbed{\induced{\cotreeof\fograph}{\verticesof\fographa}}
=
\cotreeof{\induced\fograph{\verticesof\fographa}}$,
and $\cotreeof{\induced\fograph{\verticesof\fographa}}=
\cotreeof{\fographa}$,
thus
$\binder\parentabovein{\cotreeof\fographa}\vertex$.
Therefore $\vertex\in\scopeofin \binder \fographa$ by \reflem{lem:scope-parent}.
\end{proof}
A fograph map $\map:\fograph\to\fographa$ \defn{preserves universals} if every universal binder $\binder$ in $\fograph$ maps to a universal binder $\map(\binder)$ in $\fographa$.
\begin{lemma}\label{lem:pres-universals}
Every skew fibration between fographs preserves universals.
\end{lemma}
\begin{proof}
By \reflem{lem:node-strong-module}, a binder is universal if and only if its parent in the cotree is a $\graphunion$ node. Thus the result follows from \reflem{lem:preserve-parent-plus}.
\end{proof}
A fograph map \defn{preserves binders} if every universal (resp.\ existential) binder $\binder$ in $\fograph$ maps to a universal (resp.\ existential) binder $\map(\binder)$ in $\fographa$.
\begin{lemma}\label{lem:pres-binders}
Every skew bifibration between fographs preserves binders.
\end{lemma}
\begin{proof}
Skew bifibrations preserve existentials by definition, and universals by \reflem{lem:pres-universals}.
\end{proof}
\begin{lemma}\label{lem:im-logical}
For every skew bifibration $\bifib:\fograph\to\fographa$ of fographs, the image $\im\bifib$ is a logical cograph.
\end{lemma}
\begin{proof}
Every vertex of $\im\bifib$ is inherited from $\fographa$, and is therefore a binder or literal.
Since $\im\bifib$ is an induced subgraph of a cograph $\fographa$, it is a cograph.
Because $\fograph$ is a logical cograph, it contains a literal $\literal$, thus $\im\bifib$ contains the literal $\bifib(\literal)$ (a literal since $\bifib$ preserves labels).
\end{proof}
\begin{lemma}\label{lem:target-universal-scope}
For every skew bifibration $\bifib:\fograph\to\fographa$ between fographs and universal binder $\binder$ in $\im\bifib$,
the scope $\scopeofin \binder \fographa$ contains a literal in $\im\bifib$.
\end{lemma}
\begin{proof}
Choose $\tilde\binder$ in $\fograph$ with $\bifib(\tilde\binder)=\binder$.
By \reflem{lem:pres-binders}, $\tilde\binder$ is universal.
Since $\fograph$ is a fograph there exists a literal $\literal\in\scopeofin {\tilde\binder} \fograph$,
so
$\tilde\binder\,\childin\fograph\,\tilde\binder\umeet\literal\,\abovein\fograph\,\literal$ by \reflem{lem:parent-scope}.
Since $\bifib(\tilde\binder)\tightneq\bifib(\literal)$ by label preservation,
by \reflem{lem:preserve-parent-plus}
we have
$\binder\,\childin\fographa\,\binder\umeet\bifib(\literal)\,\abovein\fographa\,\bifib(\literal)$,
so $\bifib(\literal)\in\scopeofin \binder \fographa$ by \reflem{lem:parent-scope}.
\end{proof}
\begin{lemma}\label{lem:image-universal-scope}
For every skew bifibration $\bifib:\fograph\to\fographa$ between fographs and universal binder $\binder$ in $\im\bifib$,
the scope $\scopeofin \binder {\im\bifib}$ contains a literal.
\end{lemma}
\begin{proof}
By \reflem{lem:target-universal-scope}, the scope $\scopeofin \binder \fographa$ contains a literal $\bifib(\literal)$.
Since $\im\bifib$ is an induced subgraph of $\fographa$, by \reflem{lem:induced-scope} we have $\bifib(\literal)\in\scopeofin \binder {\im\bifib}$.
\end{proof}
\begin{lemma}\label{lem:image-existential-scope-prep}\label{lem:target-existential-scope}
For every skew bifibration $\bifib:\fograph\to\fographa$ between fographs and existential binder $\binder$ in $\im\bifib$,
the scope $\scopeofin \binder\fographa$ contains a literal in $\im\bifib$.
\end{lemma}
\begin{proof}
Since $\fographa$ is a fograph there exists a literal $\literala$ in $\fographa$ in the scope of $\binder$.
Thus
$\binder\,\childin\fographa\,\binder\jmeet\literala\,\abovein\fographa\,\literala$ by \reflem{lem:parent-scope}.
Therefore
$\binder\,\childin\fographa\, \binder\jmeet\literala\,\parentin\fographa\,n\,\atorabovein\fographa\,\literala$ for some child $n$ of $\binder\jmeet\literala$.
Since $\binder$ is in $\im\bifib$ there exists $\tilde\binder\in\verticesof\fograph$
with
$\bifib(\tilde\binder) \tighteq \binder$, so
we may apply \reflem{lem:balance} with
$m\tighteq \binder$ and $v\tighteq\tilde\binder$ to obtain
$w\in\verticesof\fograph$ with $\bifib(w)\atorbelowin\fographa n$.
If $w$ is a literal, then the literal $\bifib(w)$ is in $\scopeofin\binder\fographa$, and the lemma holds.
Otherwise $w$ is a binder, hence $\bifib(w)$ is a binder.
We proceed by induction on the number of vertices in the scope $\scopeofin\binder\fographa$.
Since $\bifib(w) \,\atorbelowin\fographa\, n \,\atorabovein\fographa\, \literala$ for $\bifib(w)$ a binder and $\literala$ a literal,
$n$ must be a $\graphunion$ or $\graphjoin$ node,
and since $\graphunion$ and $\graphjoin$ alternate in a cotree,
$n$ is a $\graphunion$ node because its parent $\binder\jmeet\literala$ is a $\graphjoin$ node.
Let $n\primed$ be the parent of $\bifib(w)$.
Thus
$\binder \,\childin\fographa\, \binder\jmeet\literala \,\parentin\fographa\, n \,\atorabovein\fographa\, n\primed \,\parentin\fographa\, \bifib(w)$.
If $n\primed$ is a $\graphunion$ node, then $\bifib(w)$ is universal, so by \reflem{lem:image-universal-scope} the scope of $\bifib(w)$ contains a literal in $\im\bifib$, \latinstyle{i.e}\onedot, a literal $\bifib(\literal)$ for some literal $\literal$ in $\fograph$.
Therefore
$\binder \,\childin\fographa\, \binder\jmeet\literala \,\parentin\fographa\, n \,\atorabovein\fographa\, n\primed \,\abovein\fographa\, \bifib(\literal)$, so $\bifib(\literal)$ is also in $\scopeofin\binder\fographa$.
Otherwise $n\primed$ is a $\graphjoin$ node, so $\bifib(w)$ is existential. Since $\scopeofin{\bifib(w)}\fographa$ is strictly contained in $\scopeofin\binder\fographa$, by induction there exists a literal $\literal$ in $\fograph$ such that $\bifib(\literal)$ is in $\scopeofin{\bifib(w)}\fographa$, thus the literal $\bifib(\literal)$ is in $\scopeofin\binder\fographa$.
\end{proof}
\begin{lemma}\label{lem:target-scope}
For every skew bifibration $\bifib:\fograph\to\fographa$ between fographs and binder $\binder$ in $\im\bifib$,
the scope $\scopeofin \binder \fographa$ contains a literal in $\im\bifib$.
\end{lemma}
\begin{proof}
If $\binder$ is universal (resp.\ existential) apply \reflem{lem:target-universal-scope} (resp.\ \ref{lem:target-existential-scope}).
\end{proof}
\begin{lemma}\label{lem:image-existential-scope}
For every skew bifibration $\bifib:\fograph\to\fographa$ between fographs and existential binder $\binder$ in $\im\bifib$,
the scope $\scopeofin \binder {\im\bifib}$ contains a literal.
\end{lemma}
\begin{proof}
By \reflem{lem:image-existential-scope-prep} there exists a literal $\literal$ with $\bifib(\literal)$ in $\scopeofin\binder\fographa$. Since $\im\bifib$ is an induced subgraph of $\fographa$, we have $\bifib(\literal)$ in $\scopeofin\binder{\im\bifib}$ by \reflem{lem:induced-scope}.
\end{proof}
\begin{definition}\label{def:fair}
A logical cograph $\fograph$ is \defn{fair} if binders $\binder$ and $\binderp$ have the same variable only if $\binder\binderp\tightnotin\edgesof\fograph$.
\end{definition}
Note that every rectified fograph is fair.
\begin{lemma}\label{lem:image-fograph}
For every skew bifibration $\bifib:\fograph\to\fographa$ between fographs with $\fographa$ fair, $\im\bifib$ is a fair fograph.
\end{lemma}
\begin{proof}
By \reflem{lem:im-logical} $\im\bifib$ is a logical cograph, and $\im\bifib$ is fair since if $\binder\binderp\tightin\edgesof{\im\bifib}$ for binders $\binder$ and $\binderp$ with the same variable, then $\binder\binderp\tightin\edgesof\fographa$ since $\im\bifib$ is an induced subgraph, contradicting the fairness of $\fographa$. It remains to show that
(1) for every binder $\binder$ in $\im\bifib$ the scope $\scopeofin\binder{\im\bifib}$ contains a literal, and (2) for every variable $x$ and every $x$-binder $\binder$ in $\im\bifib$, the scope $\scopeofin\binder{\im\bifib}$ contains no other $x$-binder.
(1) If $\binder$ is universal (resp.\ existential), then by \reflem{lem:image-universal-scope} (resp.\ \ref{lem:image-existential-scope}), the scope
$\scopeofin\binder{\im\bifib}$ contains a literal.
(2)
Suppose $\binderp$ were another $x$-binder with $\binderp\tightin\scopeofin\binder{\im\bifib}$,
\latinstyle{i.e}\onedot, $\binder\childin{\im\bifib}\binder\meet\binderp\abovein{\im\bifib}\binderp$.
If $\binder$ is universal,
then
$\binder\meet\binderp\tighteq\binder\umeet\binderp$,
so
$\binder\childin{\fographa}\binder\umeet\binderp\abovein\fographa\binderp$,
whence $\binderp\tightin\scopeofin\binder\fographa$, contradicting the fact that $\fographa$ is a fograph.
Otherwise, $\binder$ is existential,
and $\binder\meet\binderp\tighteq\binder\jmeet\binderp$, so $\binder\binderp\tightin\edgesof{\im\bifib}$.
Since $\im\bifib$ is an induced subgraph of $\fographa$, we have $\binder\binderp\tightin\edgesof\fographa$, contradicting the fairness of $\fographa$.
\end{proof}
The following example illustrates why fairness of $\fographa$ is required to ensure no $x$-binder is in the scope of another in \reflem{lem:image-fograph}.
Let $\graph=\xgraphof{(\exists x\mkern2mu\px)\tightvee(\exists x\mkern2mu\px)}
=
\namedsingletonleft{x}{x}\hspace{1ex}\namedsingletonright{px}{px}\e{x}{px}
\hspace{1ex}
\namedsingletonleft{x}{x}\hspace{1ex}\namedsingletonright{px}{px}\e{x}{px}$,
let
$\graphaa=\xgraphof{q\mkern-.5mu\vee\exists x\mkern2mu\px}
=
\singletonq
\hspace{1ex}
\namedsingletonleft{x}{x}\hspace{1ex}\namedsingletonright{px}{px}\e{x}{px}$,
and let $\map$ be the unique label-preserving graph homomorphism $\graph\to\graphaa$, which is a skew bifibration between fographs.
Then
$\map\tightwedge\map:\graph\tightwedge\graph\to\grapha=\graphaa\tightwedge\graphaa$
is a skew bifibration between fographs, with $\grapha$ not fair, and $\im\map$ is
$
(\namedsingletonleft{x}{x}\hspace{1ex}\namedsingletonright{px}{px}\e{x}{px})
\tightwedge
(\namedsingletonleft{x}{x}\hspace{1ex}\namedsingletonright{px}{px}\e{x}{px})
$, which is not a well-defined fograph since each $\singletonx$ is in the scope of the other.
\subsubsection{Marking and pruning}
Let $\cograph$ be a cograph and let $U\tightsubseteq\verticesof\cograph$.
A node $\node$ in the cotree $\cotreeof\cograph$ is \defn{over} $U$ if
$\node\atorabovein\cograph u$
for some vertex $u\tightin U$.
Define
the \defn{support}
$\markingof U\subseteq\nodesof{\cotreeof\cograph}$
as the set of nodes over $U$,
and say that $U$ is \defn{balanced} for $\cograph$ if, for every $\graphjoin$ node $\node$ in $\cotreeof\cograph$ and child $\nodea$ of $\node$, we have $\nodea\tightin\markingof U$ if $\node\tightin\markingof U$.
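The following small example (ours, purely illustrative) may help fix the definitions of support and balance.

```latex
% Illustration (ours): support and balance in the cograph of p ∧ (q ∨ r).
For $\cograph=\graphof{p\wedge(q\vee r)}$, the cotree $\cotreeof\cograph$ has a
$\graphjoin$ root whose children are the leaf $p$ and a $\graphunion$ node
$\nodea$ over the leaves $q$ and $r$.
If $U=\{q\}$ then $\markingof U$ consists of $q$, $\nodea$ and the root, and
$U$ is not balanced: the $\graphjoin$ root is in $\markingof U$ but its child
$p$ is not.
If $U=\{p,q\}$ then every child of the $\graphjoin$ root is in $\markingof U$,
so $U$ is balanced.
```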
\begin{lemma}\label{lem:im-balanced}
\hspace{-.4pt}If $\map:\graph\mkern-2mu\to\mkern-2mu\grapha$ is a skew fibration between cographs
then $\map(\verticesof\graph\mkern-2mu)\tightsubseteq\verticesof\grapha$ is balanced for $\grapha$.
\end{lemma}
\begin{proof}
A corollary of \reflem{lem:balance}.
\end{proof}
Let $\cograph$ be a cograph and let $U\tightsubseteq\verticesof\cograph$.
A \plustimesnode $\node$ in $\markingof U$ is \defn{literal-supported}
if there exists a literal $\literal\tightin U$ with $\node\atorabovein\cograph\literal$.
We say that $U$ is \defn{binding-closed} if, for every literal $\literal\tightin U$ and binder $\binder$ in $\cograph$
such that $\binder$ binds $\literal$, we have $\binder\tightin U$.
\begin{definition}\label{def:marking}
Let $\fograph$ be a fograph. A set $U\tightsubseteq\verticesof\fograph$ is a \defn{marking} for $\fograph$ if it is balanced, every \plustimesnode of $\markingof U$ is literal-supported, and $U$ is binding-closed.
\end{definition}
\begin{lemma}\label{lem:im-marking}
If $\bifib:\fograph\to\fographa$ is a skew bifibration
between fographs
then
$\bifib(\verticesof\fograph)$ is a marking for $\fographa$.
\end{lemma}
\begin{proof}
Let $U=\bifib(\verticesof\fograph)$.
By \reflem{lem:balance}, $U$ is balanced; by \reflem{lem:target-scope}, every node $\node$ in $\markingof U$ is literal-supported,
and $U$ is binding-closed since $\bifib:\bindinggraphof\fograph\to\bindinggraphof\fographa$ is a directed graph fibration.
\end{proof}
Let $\node$ be the child of a $\graphunion$ node $\nodea$ in a \plustimestree $\tree$.
The node $\node$ is \defn{critical} to $\nodea$ if $\node$ is the only child of $\nodea$ which is at or above a literal.
If $\node$ is an $x$-binder for some variable $x$, then $\node$ is \defn{vacuous} if it is the unique node in the subtree rooted at $\nodea$ whose label contains $x$.\footnote{Thus in the cograph $\cographof\tree$, the binder $\node$ (which is universal since its parent is a $\graphunion$ node) binds no literal.}
\begin{definition}\label{def:pareable}\label{def:pare}
A node $\node$ in a \plustimestree $\tree$ is \defn{pareable} if:
\begin{enumerate}
\item $\node$ has a parent $\graphunion$ node $\nodea$,
\item $\node$ is not critical to $\nodea$, and
\item if $\node$ is a binder (necessarily universal) then it is vacuous.
\end{enumerate}
To \defn{pare} a pareable node $\node$ in a \plustimestree $\tree$ is to delete the subtree rooted at $\node$.
\end{definition}
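A small worked example (ours) of a paring, reusing the graph of $\forall x\mkern1mu 1$ from the footnote above:

```latex
% Illustration (ours): paring a vacuous universal binder.
Let $\tree$ be the cotree of
$\graphof{\forall x\mkern2mu 1\com\mkern3mu p}
=\singleton x\hspace{.7ex}\singleton 1\hspace{.8ex}\singleton p$:
a $\graphunion$ root with leaves $\singleton x$, $\singleton 1$ and
$\singleton p$.
The leaf $\singleton x$ is pareable: its parent is a $\graphunion$ node, it is
not critical to it (the children $\singleton 1$ and $\singleton p$ are at or
above literals), and it is vacuous (no other node in the subtree rooted at the
parent has a label containing $x$).
Paring $\singleton x$ yields a \plustimestree $\treep$ with
$\cographof\treep=\graphof{1\com p}$.
```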
\begin{definition}
A \defn{pruning} is any result of iteratively paring zero or more pareable nodes.
\end{definition}
\begin{lemma}\label{lem:marking-pruning}
Let $\fograph$ be a fograph with marking $U$,
and let $\tree$ be a \plustimestree such that $\cographof\tree=\fograph$.
There exists a pruning $\treep$ of $\tree$ with $\cographof\treep=\induced\fograph U$.
\end{lemma}
\begin{proof}
A routine induction on the number of nodes in $\tree$.
\end{proof}
\subsubsection{Decomposition of skew bifibrations}\label{sec:decomp-bifib}
\begin{definition}
If $\fograph$ is a connected fograph in which the variable $x$ does not occur, define the \defn{slackening} $\slackeningof\fograph x$ as the canonical inclusion map $\fograph\to\forall x\mkern2mu\fograph$.
\end{definition}
\begin{lemma}\label{lem:slackening-structural}
Every slackening is a structural map.
\end{lemma}
\begin{proof}
Weaken $\fograph$ to $\fograph\vee\forall x\mkern2mu\fograph$, which is $\forall x(\fograph\vee\fograph)$, then contract under $\forall x$ to $\forall x\mkern2mu\fograph$. (Note that $\fograph\vee\fograph$ is well-defined because $\fograph$ is connected.)
\end{proof}
\begin{definition}
A \defn{\wsmap} is any map constructed from isomorphisms, weakenings and slackenings by composition and fograph connectives.
\end{definition}
\begin{lemma}\label{lem:ws-structural}
Every \wsmap is a structural map.
\end{lemma}
\begin{proof}
Iterate \reflem{lem:slackening-structural}.
\end{proof}
\begin{lemma}\label{lem:paring-ws}
Let $\fograph$ be a fograph,
let $\tree$ be a \plustimestree such that $\cographof\tree=\fograph$,
let $\treep$ be the result of paring a pareable node in $\tree$, and let $\fographp=\cographof\treep$.
There exists a \wsmap $\fographp\to\fograph$.
\end{lemma}
\begin{proof}
If the paring is of a vacuous binder (condition 3 in Def.\,\ref{def:pareable}), then we obtain a slackening in the context of a fograph connective, otherwise (condition 2 in Def.\,\ref{def:pareable}) we obtain a weakening in the context of a fograph connective.
\end{proof}
\begin{lemma}\label{lem:pruning-ws}
Let $\fograph$ be a fograph,
let $\tree$ be a \plustimestree such that $\cographof\tree=\fograph$,
let $\treep$ be a pruning of $\tree$, and let $\fographp=\cographof\treep$.
There exists a \wsmap $\fographp\to\fograph$.
\end{lemma}
\begin{proof}
Apply \reflem{lem:paring-ws} to each paring in the pruning.
\end{proof}
\begin{lemma}\label{lem:image-inclusion-wsmap}
Let $\bifib:\fograph\to\fographa$ be a skew bifibration with $\fographa$ fair. The inclusion $\im\bifib\to\fographa$ is a \wsmap.
\end{lemma}
\begin{proof}
By \reflem{lem:image-fograph}, $\im\bifib$ is a fograph.
Let $U=\bifib(\verticesof\fograph)$, thus $\im\bifib$ is the induced subgraph $\induced\fographa U$.
By \reflem{lem:im-marking}, $U$ is a marking.
Let $\tree$ be the cotree $\cotreeof\fographa$.
By \reflem{lem:marking-pruning}, there exists a pruning $\treep$ of $\tree$ with $\cographof\treep=\induced\fographa U=\im\bifib$.
By \reflem{lem:pruning-ws}, there exists a \wsmap $\cographof\treep\to\cographof\tree$, \latinstyle{i.e}\onedot,
$\im\bifib\to\fographa$.
\end{proof}
\noindent Let $\bifib:\fograph\to\fographa$ be a skew fibration and let $K$ be a connected component of $\fographa$. The \defn{multiplicity} of $K$ is the number of connected components of $\bifib^{-1}(K)$, and the \defn{weight} of $K$ is one more than its multiplicity.
The \defn{weight} of $\bifib$ is the sum of the weights of the connected components of $\fographa$.
A skew bifibration is \defn{shallow} if the multiplicity of every connected component of $\fographa$ is at most one.
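For instance (our illustration, directly from the definitions):

```latex
% Illustration (ours): multiplicity, weight, and shallowness.
If $\fographa$ is connected, with single connected component $K$, and
$\bifib^{-1}(K)$ has two connected components, then $K$ has multiplicity $2$
and weight $3$, so $\bifib$ has weight $3$ and is not shallow.
A shallow skew bifibration into a connected fograph has weight at most $2$,
since its one component has multiplicity at most one.
```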
\begin{lemma}\label{lem:bifib-cw}
Every skew bifibration into a fair fograph is a structural map.
\end{lemma}
\begin{proof}
By induction on the weight of the skew bifibration $\bifib:\fograph\to\fographa$ and its multiplicity.
By \reflem{lem:image-inclusion-wsmap}
(and the fact that every \wsmap is a structural map by \reflem{lem:ws-structural})
we may assume $\bifib$ is a surjection, and by pre-composing with contractions
we may assume $\bifib$ is shallow.
If $\fographa=\singletonx\graphunion\fographap$ then $\fograph=\singletonx\graphunion\fographp$ since $\bifib$ is a shallow surjection, hence $\bifib=\forall x\mkern2mu\bifibp$, and by induction $\bifibp$ is a structural map, so $\bifib$ is a structural map.
Otherwise if $\fographa=\fographa_1\graphunion\fographa_2$ then $\fographa=\fographa_1\tightvee\fographa_2$ (since $\fographa$ is not of the form $\singletonx\graphunion\fographap$).
Since $\bifib$ is a shallow surjection, $\fograph=\fograph_1\tightvee\fograph_2$, so $\bifib=\bifib_1\tightvee\bifib_2$ for $\bifib_i:\fograph_i\to\fographa_i$. By induction each $\bifib_i$ is a structural map, hence $\bifib$ is structural.
Otherwise $\fographa$ is connected. If $\fographa$ has no edge then $\bifib$ is an isomorphism from a literal to a literal, hence is a structural map. Thus we may assume $\fographa$ has an edge.
If $\fographa=\singletonx\graphjoin\fographap$ then $\fograph=\singletonx\graphjoin\fographp$ since $\bifib$ is a shallow surjection, hence $\bifib=\exists x\mkern2mu\bifibp$, and by induction $\bifibp$ is a structural map, so $\bifib$ is a structural map.
Otherwise $\fographa=\fographa_1\graphjoin\fographa_2$ for fographs $\fographa_i$, with $\fographa_i$ not of the form $\singleton x\graphjoin\fographa_i'$. Thus $\fographa=\fographa_1\wedge\fographa_2$, hence $\fograph=\fograph_1\wedge\fograph_2$ with $\bifib(\verticesof{\fograph_i})\subseteq\verticesof{\fographa_i}$. Therefore $\bifib=\bifib_1\wedge\bifib_2$ for skew bifibrations $\bifib_i:\fograph_i\to\fographa_i$, and by induction each $\bifib_i$ is a structural map, so $\bifib$ is a structural map.
\end{proof}
\begin{lemma}[Soundness of skew bifibrations]\label{lem:bifib-sound}
If $\fograph$ is a valid fograph and $\bifib:\fograph\to\fographa$ is a skew bifibration with $\fographa$ fair, then $\fographa$ is valid.
\end{lemma}
\begin{proof}
By \reflem{lem:bifib-cw}, $\bifib$ is a structural map, which is sound by \reflem{lem:strucmap-sound}.
\end{proof}
\subsection{Proof of the Soundness Theorem}\label{subsec:proof-of-soundness}
\begin{proof}[Proof of the Soundness Theorem (Theorem~\ref{thm:soundness})]
Let $\bifib:\net\to\graphof\formula$ be a combinatorial proof of a formula $\formula$. By \reflem{lem:fonet-soundness} $\net$ is valid,
thus by \reflem{lem:bifib-sound} $\graphof\formula$ is valid (applicable since $\graphof\formula$ is rectified, hence fair), therefore $\formula$ is valid.
\end{proof}
\section{Proof of the Completeness Theorem}\label{sec:completeness}\label{sec:cp-completeness}
In this section we prove the Completeness Theorem, Theorem~\ref{thm:completeness}.
Our strategy will be to show that
every syntactic proof of a formula $\formula$ in Gentzen's classical sequent calculus \cite{Gen35}
generates a combinatorial proof of $\formula$, so completeness follows from that of Gentzen's system.
A \defn{sequent} is a finite sequence $\sequentsequence$ of formulas, $n\tightge 0$.
We identify a formula $\formula$ with the single-formula sequent containing $\formula$.
Let $\sequent$ be the sequent $\sequentsequence$.
Its \defn{formula} $\formulaof\sequent$ is $\formulaofsequent$, and $\sequent$ is
\defn{valid}
(resp.\ \defn{rectified})
if $\formulaof\sequent$ is
valid
(resp.\ rectified).
As with formulas, we shall assume, without loss of generality, that every sequent is rectified (by renaming bound variables as needed).
The \defn{graph} $\graphof\sequent$ of a sequent $\sequent$ is the graph
$\graphof{\formulaof\sequent}$
of its formula.
For example,
$\graphof{\px\com\exists y\mkern2mu\ppy}\mkern2mu=\mkern4mu\singletonleft\px\hspace{1ex}\namedsingletonleft y y\hspace{1ex}\namedsingletonright{ppy}\ppy\e y{ppy}\mkern2mu$.
A \defn{combinatorial proof} of $\sequent$ is a combinatorial proof of its formula $\formulaof\sequent$.
For technical convenience we will use a right-sided formulation $\R$ of Gentzen's sequent calculus $\LK$ \cite{Gen35},
comprising the following rules.
Here
$\sequent$ and $\sequenta$
are arbitrary sequents,
$\formula$ and $\formulaa$ are arbitrary formulas,
$\atom$ is any atom which is not a logical constant,
and $\formulap\cong\formula$ denotes that $\formulap$ is
equal to $\formula$ up to bound variable renaming (\latinstyle{e.g}\onedot,
$
\forall x\mkern2mu pxy
\cong
\forall z\mkern2mu pzy
$).\footnote{For the $\exists$ rule, recall
(from \S\ref{sec:soundness})
that $\formula\assignment{\assign\variable\term}$ denotes
the result of substituting a term $\term$ for all occurrences of the variable $\variable$ in $\formula$, where,
without loss of generality (by renaming bound variables in $\formula$ as needed),
no variable in $\term$ is a bound variable of $\formula$.}
\begin{center}\v{2.5}\(
\newcommand\gap{\hspace{8ex}}\hh{14}\begin{array}{c}
\Raxiomrule{\atom}
\gap
\Ronerule
\gap
\Rxrule\Rxrulehyp\Rxruleconc
\gap
\Rwrule\Rwrulehyp\Rwruleconc
\gap
\Rcrule\Rcrulehyp\Rcruleconc
\rput[lb](.3,.3){\text{($\formulap\cong\formula$)}}
\\[5ex]
\renewcommand\gap{\hspace{8ex}}
\Randrule\Randrulehypone\Randrulehyptwo\Randruleconc
\gap
\Rorrule\Rorrulehyp\Rorruleconc
\gap
\Rexistsrule\Rexistsrulehyp\Rexistsruleconc
\gap
\Rforallrule\Rforallrulehyp\Rforallruleconc
\rput[lb](.3,.3){\text{($x$ not free in $\Gamma$)}}
\end{array}\)\v{2.5}\end{center}
The rules $\xname$, $\cname$ and $\wname$ are called \defn{exchange}, \defn{contraction} and \defn{weakening}.
Each sequent above a rule is a \defn{hypothesis} of the rule, and the sequent below a rule is the \defn{conclusion} of the rule.
\begin{lemma}[\R soundness \& completeness]\label{lem:r-sound-complete}
A sequent is valid if and only if it has an \R proof.
\end{lemma}
\begin{proof}
System \R is equivalent to \gsone, which is sound and complete \cite[\S3.5.2]{TS96}.
It differs in that \R retains Gentzen's explicit formula-exchange (or \emph{permutation}) rule $\xname$,
while \cite{TS96} leaves exchange implicit by formulating sequents as multisets rather than sequences.
\end{proof}
\subsection{Interpreting rules as operations on combinatorial proofs}\label{sec:interp-rules}
We interpret each rule of \R with hypothesis sequents $\sequent_1,\ldots,\sequent_n$ and conclusion sequent $\sequenta$ as an operation taking combinatorial proofs $\skewfib_i$ of $\sequent_i$ as input to produce a combinatorial proof $\bifiba$ of $\sequenta$.
\begin{itemize}
\item $\inlineaxiom\atom$ rule. Define $\bifiba$ as the identity on $\singletonleft\atom\hspace{.7ex}\singletonright\dualatom$, with both vertices the same colour in the source.
\item $\dual1$ rule. Define $\bifiba$ as the identity on $\singleton1$, with no colour in the source.
\item $\vee$ rule
with hypothesis $\hyp=\Rorrulehyp$ and conclusion $\conc=\Rorruleconc$.
Let $\bifib$ be the combinatorial proof of $\hyp$.
Note that $\graphof\hyp=\graphof\conc$.
Define $\bifiba=\bifib$.
\item $\xname$ rule
with hypothesis $\hyp=\Rxrulehyp$ and conclusion $\conc=\Rxruleconc$.
Let $\bifib$ be the combinatorial proof of $\hyp$, and let $\textsf{x}$ be the canonical isomorphism $\graphof\hyp\to\graphof\conc$.
Define $\bifiba=\textsf{x}\circ\bifib$.
\item $\wname$ rule with hypothesis $\hyp=\Rwrulehyp$ and conclusion $\conc=\Rwruleconc$.
Let $\bifib$ be the combinatorial proof of $\hyp$, and let $\textsf{w}$ be the canonical injection $\graphof\hyp\to\graphof\conc$.
Define $\bifiba=\textsf{w}\circ\bifib$.
\item $\forall$ rule
with hypothesis $\hyp=\Rforallrulehyp$ and conclusion $\conc=\Rforallruleconc$.
Let $\bifib:\cover\to\graphof{\hyp}$ be the combinatorial proof of $\hyp$ and
let $\textsf{a}$ be the canonical injective graph homomorphism \mbox{$\graphof\hyp\to\graphof\conc$}.
Note $\graphof\hyp=\graphof{\sequent}\graphunion\graphof{\formula}$.
If $\bifib^{-1}(\verticesof{\graphof\formula})$ is empty define $\bifiba=\textsf{a}\circ\bifib:\cover\to\graphof{\conc}$.
Otherwise,
define \mbox{$\bifiba:\singleton x\mkern-1mu\graphunion\mkern-2mu \cover\to\graphof{\conc}$} as
the extension of $\textsf{a}\circ\bifib:\cover\to\graphof{\conc}$
which maps
$\singleton x$ to the (unique) $x$-binder in $\graphof{\conc}$.
\item $\cname$ rule
with hypothesis $\hyp=\Rcrulehyp$ and conclusion $\conc=\Rcruleconc$.
Let $\bifib:\cover\to\graphof{\hyp}$ be the combinatorial proof of $\hyp$.
Let $\skel\cover$ be the result of dropping the labels from $\cover$ and
let \mbox{$\skel\bifib:\skel\cover\to\graphof{\hyp}$} be the skeleton of $\bifib$.
Let $\textsf{c}$ be the canonical surjective graph homomorphism \mbox{$\graphof\hyp\to\graphof\conc$}.
Define $h=\textsf{c}\circ\skel\bifib:\skel\cover\to\graphof\conc$.
A universal binder in $\graphof\conc$ is \defn{outer} if it is in no edge.
A \defn{duplicator} is an outer universal binder $\binder$ in $\graphof\conc$ such that
$h^{-1}(\binder)$ contains two vertices.
To \defn{collapse} a duplicator $\binder$ is to
delete one of the two vertices of $h^{-1}(\binder)$ from $\skel\cover$.
Define $\skel\cover^-$ as the result of collapsing every duplicator in $\skel\cover$.
Define the skeleton $\skel\bifiba:\skel\cover^-\to\graphof{\conc}$ of $\bifiba$ as
the restriction of $h$ to $\skel\cover^-$.
Thus $\bifiba:\cover^-\to\graphof{\conc}$ for $\cover^-$ the result of adding the canonical labels to $\skel\cover^-$
(\latinstyle{i.e}\onedot, the label of $\vertex$ in $\cover^-$ is the label of $\skel\bifiba(\vertex)$ in $\graphof\conc$).\footnote{For example,
let $\hyp=\forall x\mkern1mu 1\com\mkern3mu\forall y\mkern1mu 1$,
let $\conc=\forall x\mkern1mu 1$,
let $\cover=
\graphof{\hyp}=
\singleton x\hspace{.7ex}\singleton 1
\hspace{.8ex}
\singleton y \hspace{.7ex}\singleton 1$,
and let $\bifib$ be the identity $\cover\to\cover$.
\newcommand\coverminus{\singleton x\hspace{.7ex}\singleton 1
\hspace{.8ex}
\singleton 1}%
\newcommand\badcover{\singleton x\hspace{.7ex}\singleton 1
\hspace{.8ex}
\singleton x \hspace{.7ex}
\singleton 1}%
Then $\graphof{\conc}=\singleton x\hspace{.7ex}\singleton 1$
and $\singleton x$ in $\graphof{\conc}$ is a duplicator.
Thus
$\cover^-=\coverminus$ and the combinatorial proof
$\bifiba$ of $\conc$ is the unique label-preserving function from
$\coverminus$ to $\graphof{\conc}=\singleton x\hspace{.7ex}\singleton1$.
We must delete $\singleton y$ from $\cover$ to form $\cover^-$ because otherwise $\bifiba$ would be
from
$\badcover$
to
$\singleton x\hspace{.7ex}\singleton1$, whose source $\badcover$ is not a well-defined fograph since it has two universal $x$-binders in the scope of one another.}
\item \begin{samepage}$\wedge$ rule
with hypotheses $\hyp_1=\Randrulehypindex1$ and $\hyp_2=\Randrulehypindex2$ and conclusion $\conc=\Randruleconcindexed$.
Let $\bifib_i:\cover_i\to\graphof{\hyp_i}$ be the combinatorial proof of $\hyp_i$ and let $\textsf{z}_i$ be the canonical injective graph homomorphism $\graphof{\hyp_i}\to\graphof\conc$.
Note that $\graphof{\hyp_i}=\graphof{\sequent_i}\graphunion\graphof{\formula_i}$.
Let $\portion_i=\bifib_i^{\mkern3mu-1}(\verticesof{\graphof{\formula_i}})$
and define $\bifib_i$ as \defn{weak} if $\portion_i$ is empty, else \defn{strong}.
Define $\bifiba$ according to the following cases.
\vspace{-.6ex}\begin{itemize}
\item $\bifib_1$ and $\bifib_2$ are both strong or both weak.
Let $\cover$ be the fusion of $\cover_1$ and $\cover_2$ at the portions $\portion_1$ and $\portion_2$.
Define $\bifiba:\cover\to\graphof{\conc}$ as the union of $\textsf{z}_1\circ\bifib_1$ and $\textsf{z}_2\circ\bifib_2$.
\item $\bifib_i$ is weak and $\bifib_{3-i}$ is strong.
Define $\bifiba:\cover_i\to\graphof{\conc}$ as $\textsf{z}_i\circ\bifib_i$.
\end{itemize}\end{samepage}
\item $\exists$ rule
with hypothesis $\hyp=\Rexistsrulehyp$ and conclusion $\conc=\Rexistsruleconc$.
Let $\bifib:\cover\to\graphof{\hyp}$ be the combinatorial proof of $\hyp$.
Let $\skel\cover$ be the result of dropping the labels from $\cover$ and let $\skel\bifib:\skel\cover\to\graphof{\hyp}$ be the skeleton of $\bifib$.
Let $\textsf{e}$ be the canonical injective graph homomorphism \mbox{$\graphof\hyp\to\graphof\conc$}.
Note $\graphof{\hyp}=\graphof{\sequent}\graphunion\graphof{\formula\assignment{\assign\variable\term}}$.
Let $P=\skel\bifib^{-1}(\verticesof{\graphof{\formula\assignment{\assign\variable\term}}})$.
Define the skeleton $\skel\bifiba$ of $\bifiba$ according to the following cases.
\vspace{-.6ex}\begin{itemize}
\item $P$ is empty. Define $\skel\bifiba$ as $\textsf{e}\circ\skel\bifib:\skel\cover\to\graphof{\conc}$.
\item $P$ is non-empty. Let $\skel\cover^+$ be the
extension of $\skel\cover$ with an additional vertex $\vertex$ and edges from $\vertex$ to every vertex in $P$.
Define $\skel\bifiba:\skel\cover^+\to\graphof{\conc}$ as
the extension of $\textsf{e}\circ\skel\bifib:\skel\cover\to\graphof{\conc}$ to $\skel\cover^+$ which maps $\vertex$ to the $x$-binder of $\graphof{\conc}$.
\end{itemize}
\end{itemize}
\begin{lemma}\label{lem:interp-well-defined}
The interpretation of each rule of $\R$ defined above
produces a well-defined combinatorial proof.
\end{lemma}
\begin{proof}
A routine verification of the fograph and skew bifibration conditions defining a combinatorial proof.
\end{proof}
\subsection{Proof of the Completeness Theorem}\label{subsec:cp-completeness}
\begin{proof}[Proof of the Completeness Theorem, Theorem~\ref{thm:completeness}]
Let $\formula$ be a valid formula.
By \reflem{lem:r-sound-complete} there exists an \R proof $\Pi$ of $\formula$.
By \reflem{lem:interp-well-defined} we obtain a combinatorial proof of $\formula$ from $\Pi$ by interpreting each rule of $\Pi$ as an operation on combinatorial proofs, as defined in \S\ref{sec:interp-rules}.
\end{proof}
\section{Homogeneous soundness and completeness proofs}
\subsection{Propositional homogeneous soundness and completeness proof}\label{sec:proof-of-propositional-homogeneous-soundness-completeness}
In this section we prove the propositional homogeneous soundness and completeness theorem, Theorem\,\ref{thm:prop-soundness-completeness}.
We begin by observing that the function $\dualizinggraphofpropsymbol$ from simple propositions to dualizing graphs (Def.\,\ref{def:dualizing-graph-of-prop}) factorizes through simple propositional fographs.
A fograph is \defn{propositional} if every predicate symbol is nullary, and \defn{simple} if it has no $1$- or $0$-labelled literal.
For example, the middle row of Fig.\,\ref{fig:dualizing-graphs} (p.\,\pageref{fig:dualizing-graphs}) shows four simple propositional fographs.
\begin{definition}\label{def:prop-fograph-to-dualizing-graph}
The \defn{dualizing graph} $\dualizinggraphof\fograph$ of a simple propositional fograph is the dualizing graph $\dualizinggraph$ with $\verticesof\dualizinggraph=\verticesof\fograph$, $\edgesof\dualizinggraph=\edgesof\fograph$, and $\vertex\vertexa\in\dualitiesof\dualizinggraph$ if and only if
$\vertex$ and $\vertexa$ have dual predicate symbols.\footnote{In \S\ref{sec:surjections} we will show that $\dualizinggraphofsymbol$ is a surjection from simple propositional fographs onto dualizing graphs (\reflem{lem:surj-prop-fographs-to-dualizing-graphs}).}
\end{definition}
For example, for each simple propositional fograph $\fograph$ in the middle row of Fig.\,\ref{fig:dualizing-graphs} (p.\,\pageref{fig:dualizing-graphs}),
the corresponding dualizing graph $\dualizinggraphof\fograph$ is shown below $\fograph$.
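Computationally, Def.\,\ref{def:prop-fograph-to-dualizing-graph} just re-reads the labels: vertices and edges are kept, and a duality is added for each pair of vertices with dual predicate symbols. A Python sketch under assumed encodings (dual nullary predicate symbols written `p` and `~p`; this convention is ours):

```python
def dual(p, q):
    """Nullary predicate symbols are dual iff one is the negation of the
    other (negation written here with a leading '~')."""
    return p == '~' + q or q == '~' + p

def dualizing_graph(vertices, edges, label):
    """Dualizing graph of a simple propositional fograph: same vertices
    and edges; a duality {v, w} for each pair with dual labels."""
    dualities = {frozenset({v, w})
                 for v in vertices for w in vertices
                 if v != w and dual(label[v], label[w])}
    return vertices, edges, dualities
```

Note that, as in the definition, the underlying cograph is untouched; only the duality relation is computed from the labelling.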
\begin{lemma}\label{lem:dualizing-graph-of-prop-fograph-well-defined}
$\dualizinggraphof\fograph$ is a well-defined dualizing graph for every simple propositional fograph $\fograph$.
\end{lemma}
\begin{proof}
Let
$\dualizinggraph=\dualizinggraphof\fograph$.
Since
$\verticesof\dualizinggraph=\verticesof\fograph$,
$\edgesof\dualizinggraph=\edgesof\fograph$
and
$\fograph$ is a fograph,
$\dualizinggraph$ is a cograph.
By reasoning as in the proof of \reflem{lem:dualizing-graph-of-prop-well-defined}, $\graphpairof{\verticesof\fograph}{\dualitiesof\dualizinggraph}$ is $P_4$- and $\cthree$-free.
\end{proof}
\begin{lemma}\label{lem:prop-factorization}
The function $\dualizinggraphofpropsymbol$ from simple propositions to dualizing graphs (Def.\,\ref{def:dualizing-graph-of-prop}) factorizes through simple propositional fographs:
$\dualizinggraphofprop\prop=\dualizinggraphof{\graphof\prop}$ for every simple proposition $\prop$.
\end{lemma}
\begin{proof}
A routine induction on the structure of $\prop$.
\end{proof}
\begin{lemma}[Propositional homogeneous soundness]\label{lem:prop-soundness}
A simple proposition is valid if it has a homogeneous combinatorial proof.
\end{lemma}
\begin{proof}
Suppose
$\bifib:\net\to\dualizinggraphofprop\prop=\dualizinggraph$
is a homogeneous combinatorial proof of the simple proposition $\prop$.
By \reflem{lem:prop-factorization}, $\dualizinggraph=\dualizinggraphof{\graphof\prop}$.
Define $\netp$ as the cograph $\graphpairof{\verticesof\net}{\edgesof\net}$
with a link $\setof{\vertex,\vertexa}$ for each $\vertex\vertexa\in\dualitiesof\net$
and the label of a vertex $\vertex$ in $\netp$ defined as the label of $\bifib(\vertex)$ in $\graphof\prop$,
where $\bifib(\vertex)\tightin\verticesof\dualizinggraph$
can be viewed as a vertex of $\graphof\prop$ since
$\verticesof\dualizinggraph=\verticesof{\graphof\prop}$
by definition of $\dualizinggraphofsymbol$.
Since $\net$ is a dualizing net, $\netp$ is a fonet: (a) every colour is a pre-dual pair of literals, since $\bifib:\graphpairof{\verticesof\net}{\dualitiesof\net}\to\graphpairof{\verticesof\dualizinggraph}{\dualitiesof\dualizinggraph}$ is an undirected graph homomorphism,
(b) $\netp$ trivially has a dualizer, the empty assignment, since it is propositional, with no existential variables,
and
(c) $\netp$ has no induced bimatching, since the leap graphs $\leapgraphof\netp$ and $\leapgraphof\net$ are equal and
$\net$ has no induced bimatching.
We claim that $\bifib:\netp\to\graphof\prop$ is a skew bifibration.
Since $\bifib:\net\to\dualizinggraph$ is a skew fibration,
$\graphpairof{\verticesof\netp}{\edgesof\netp}=\graphpairof{\verticesof\net}{\edgesof\net}$, and
$\graphpairof{\verticesof{\graphof\prop}}{\edgesof{\graphof\prop}}=\graphpairof{\verticesof\dualizinggraph}{\edgesof\dualizinggraph}$,
we know $\bifib:\netp\to\graphof\prop$ is a skew fibration.
Because the label of $\vertex$ in $\netp$ is
that of $\bifib(\vertex)$ in $\graphof\prop$, $\bifib:\netp\to\graphof\prop$ preserves labels. Since there are no binders, existentials are preserved trivially and $\bifib:\bindinggraphof\netp\to\bindinggraphof{\graphof\prop}$ is trivially a directed graph fibration.
Thus $\bifib:\netp\to\graphof\prop$ is a skew bifibration, hence a combinatorial proof (since $\netp$ is a fonet).
By Theorem\,\ref{thm:soundness}, $\prop$ is valid.
\end{proof}
\begin{lemma}[Propositional homogeneous completeness]\label{lem:prop-completeness}
Every valid simple proposition has a homogeneous combinatorial proof.
\end{lemma}
\begin{proof}
Let $\prop$ be a valid simple proposition.
By Theorem\,\ref{thm:completeness} there exists a (standard) combinatorial proof $\bifib:\net\to\graphof\prop$.
Let $\netp$ be the dualizing graph obtained from $\net$ by replacing each link (colour) $\setof{\vertex,\vertexa}$ by a duality $\vertex\vertexa\tightin\dualitiesof\netp$.
Since, by definition of a linked fograph, every literal is in exactly one link,
$\graphpairof{\verticesof\netp}{\dualitiesof\netp}$ is a matching,
and since $\net$ is a simple propositional fonet, $\netp$ has no induced bimatching; thus $\netp$ is a dualizing net.
Let
$\dualizinggraph=\dualizinggraphofprop\prop$.
By \reflem{lem:prop-factorization},
$\dualizinggraphofprop\prop=\dualizinggraphof{\graphof\prop}$, thus
$\graphpairof{\verticesof\dualizinggraph}{\edgesof\dualizinggraph}=\graphpairof{\verticesof{\graphof\prop}}{\edgesof{\graphof\prop}}$.
We claim that $\bifib:\netp\to\dualizinggraph$ is a homogeneous combinatorial proof, \latinstyle{i.e}\onedot, (1) $\bifib:\netp\to\dualizinggraph$ is a skew fibration and (2) \mbox{$\bifib:\graphpairof{\verticesof\netp}{\dualitiesof\netp}\to\graphpairof{\verticesof\dualizinggraph}{\dualitiesof\dualizinggraph}$} is a graph homomorphism.
By definition, (1) holds if \mbox{$\bifib:\graphpairof{\verticesof\netp}{\edgesof\netp}\to\graphpairof{\verticesof\dualizinggraph}{\edgesof\dualizinggraph}$} is a skew fibration, which is true because $\bifib$ is a skew bifibration,
$\graphpairof{\verticesof\netp}{\edgesof\netp}=\graphpairof{\verticesof\net}{\edgesof\net}$, and
$\graphpairof{\verticesof\dualizinggraph}{\edgesof\dualizinggraph}=\graphpairof{\verticesof{\graphof\prop}}{\edgesof{\graphof\prop}}$.
For (2), suppose $\vertex\vertexa\tightin\dualitiesof\netp$. Since $\net$ is a fonet, it has a dualizer, so the labels of $\vertex$ and $\vertexa$ are dual, say, $p$ and $\pp$, respectively.
Because $\bifib$ preserves labels, $\bifib(\vertex)$ and $\bifib(\vertexa)$ are labelled $p$ and $\pp$, thus $\bifib(\vertex)\mkern2mu\bifib(\vertexa)\in\dualitiesof\dualizinggraph$, and (2) holds.
\end{proof}
\begin{proofof}{Theorem\,\ref{thm:prop-soundness-completeness} (Propositional homogeneous soundness and completeness)}\\
Lemmas\,\ref{lem:prop-soundness} and \ref{lem:prop-completeness}.
\end{proofof}
\subsection{Monadic homogeneous soundness and completeness proof}\label{sec:proof-of-monadic-soundness-completeness}
In this section we prove the monadic homogeneous soundness and completeness theorem, Theorem\,\ref{thm:monadic-soundness-completeness}.
The proof of completeness is similar to that of the propositional case,
\reflem{lem:prop-completeness}:
transform a standard first-order combinatorial proof of a monadic formula into a homogeneous combinatorial proof.
The proof of soundness is more subtle. In the propositional case,
\reflem{lem:prop-soundness},
we transformed a homogeneous combinatorial proof directly into a standard one, with the same vertices in both source and target.
The monadic case involves quotienting indistinguishable vertices in the source monet.
\subsubsection{Factorization through closed monadic fographs}
A fograph is \defn{closed} if it contains no free variables,
and \defn{monadic} if its predicate symbols are unary and it has no function symbols or logical constants ($1$ or $0$).
\begin{samepage}\begin{definition}\label{def:mograph-of-fograph}
The \defn{mograph} $\mographof\fograph$ of a closed monadic fograph $\fograph$ is the mograph $\mograph$ with
\begin{itemize}
\item $\verticesof\mograph=\verticesof\fograph$,
\item $\edgesof\mograph=\edgesof\fograph$,
\item $\vertex\vertexa\tightin\dualitiesof\mograph$ if and only if
$\vertex$ and $\vertexa$ are literals whose predicate symbols are dual, and
\item $\diredge\vertex\vertexa\tightin\bindingsof\mograph$ if and only if
$\vertex$ binds $\vertexa$.
\end{itemize}
\end{definition}\end{samepage}
For example, the closed monadic fograph $\fograph$ in Fig.\,\ref{fig:mograph} (centre)
has the mograph $\mographof\fograph$ to its right.
\begin{lemma}\label{lem:mograph-of-fograph-well-defined}
$\mographof\fograph$ is a well-defined mograph for every closed monadic fograph $\fograph$.
\end{lemma}
\begin{proof}
Let $\mograph=\mographof\fograph$.
The underlying cograph $\graphpairof{\verticesof\mograph}{\edgesof\mograph}$ is inherited directly from $\fograph$.
By reasoning as in the proof of \reflem{lem:dualizing-graph-of-prop-well-defined}, $\graphpairof{\verticesof\fograph}{\dualitiesof\mograph}$ is $P_4$- and $\cthree$-free.
It remains to show
(a) every target of a binding in $\bindingsof\mograph$ is in no other binding,
(b) no binder is in a duality,
(c) the scope of every binder $\binder$ is non-empty, and
(d) $\diredge\binder\literal\tightin\bindingedgesof\mograph$ only if $\literal$ is in the scope of $\binder$.
(a) Since $\fograph$ is monadic, every literal label contains exactly one variable, hence is bound by at most one binder in $\fograph$. By definition of $\bindinggraphof\fograph$, no literal binds any other vertex, thus every literal target of a binding is in no other binding.
(b) Dualities are defined as pairs of literals in $\fograph$, which become literals in $\mograph$ since $\fograph$ is closed.
(Every literal in $\fograph$ is bound by a binder in $\fograph$, so becomes a literal in $\mograph$.)
(c) Since $\fograph$ is a fograph, every binder is legal, so its scope contains at least one literal, hence is non-empty.
(d) By definition of fograph binding, $\literal$ is bound by a binder $\binder$ only if $\literal$ is in the scope of $\binder$.
\end{proof}
\begin{lemma}\label{lem:monadic-factorization}
The function $\mographofformulasymbol$ from closed monadic formulas to mographs (Def.\,\ref{def:mograph-of-formula}) factorizes through closed monadic fographs:
$\mographofformula\monadicformula=\mographof{\graphof\monadicformula}$ for every closed monadic formula $\monadicformula$.
\end{lemma}
\begin{proof}
A routine induction on the structure of $\monadicformula$.
\end{proof}
\subsubsection{Collapsing indistinguishable vacuous universal binders}
Given an equivalence relation $\sim$ on a set $\vertices$ write $\equivclassof\vertex\sim$
for the $\sim$-equivalence class $\setst{\vertexa\tightin\vertices}{\vertexa\sim\vertex}$ and
$\quotientof\vertices\sim$ for the set of $\sim$-equivalence classes $\setst{\equivclassof\vertex\sim}{\vertex\tightin\vertices}$.
For a set $\edges$ of edges on $\vertices$ define $\quotientof\edges\sim$ as the set $\setst{\equivclassof\vertex\sim\equivclassof\vertexa\sim}{\vertex\vertexa\in\edges}$ of edges on $\quotientof\vertices\sim$.
Given a mograph $\mograph$ and an equivalence relation $\sim$ on $\verticesof\mograph$ define the \defn{quotient} mograph $\quotientgraphof\mograph\sim$ by
$\verticesof{\quotientgraphof\mograph{\mkern3mu\sim}}=\quotientof{\verticesof\mograph}\sim$,
$\edgesof{\quotientgraphof\mograph{\mkern3mu\sim}}=\quotientof{\edgesof\mograph}\sim$,
$\dualitiesof{\quotientgraphof\mograph{\mkern3mu\sim}}=\quotientof{\dualitiesof\mograph}\sim$,
and
$\bindingsof{\quotientgraphof\mograph{\mkern3mu\sim}}=\quotientof{\bindingsof\mograph}\sim$.
A binder in a mograph is \defn{vacuous} if it binds no literal.
Let $\bifib:\net\to\mograph$ be a skew bifibration of mographs.
Vacuous universal binders $\binder$ and $\bindera$ in $\net$ are \defn{indistinguishable}
if their images and neighbourhoods are equal, \latinstyle{i.e}\onedot,
$\bifib(\binder)=\bifib(\bindera)$ and $\neighbourhoodof\binder=\neighbourhoodof\bindera$.
Define $\indist$ as the equivalence relation on $\verticesof\net$ generated by indistinguishability,
and the \defn{collapse} $\collapseof\bifib:\quotientgraphof\net\indist\to\mograph$
as the canonical function on the quotient,
\latinstyle{i.e}\onedot, $\collapseof\bifib(\equivclassof\binder\indist)=\bifib(\binder)$, a well-defined function since $\binder\indist\bindera$ implies $\bifib(\binder)\tighteq\bifib(\bindera)$.
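As a sanity check on the collapse construction, the grouping of indistinguishable vacuous universal binders (equal image, equal neighbourhood) and the induced quotient of the edge set can be sketched in Python. The encoding below (vertices as hashable values, neighbourhoods as sets, the skew bifibration as a dict) is ours; we also assume that identified binders are never adjacent, so no loops arise.

```python
def collapse_map(vertices, nbrs, f, vacuous_universal):
    """Quotient map identifying indistinguishable vacuous universal
    binders: vertices in `vacuous_universal` with the same image under f
    and the same neighbourhood share one representative; every other
    vertex is its own class."""
    rep, key_to_rep = {}, {}
    for v in vertices:
        if v in vacuous_universal:
            key = (f[v], frozenset(nbrs[v]))
            rep[v] = key_to_rep.setdefault(key, v)
        else:
            rep[v] = v
    return rep

def quotient_edges(edges, rep):
    """Edge set of the quotient graph (identified binders are assumed
    non-adjacent, so no loops can arise)."""
    return {frozenset({rep[v], rep[w]}) for v, w in (tuple(e) for e in edges)}
```

The collapsed map $\collapseof\bifib$ is then simply `f` read through `rep`: the image of a class is the image of any of its members.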
\begin{lemma}\label{lem:collapse}
Let $\mograph$ be a mograph and $\net$ a monet.
If
$\bifib:\net\to\mograph$ is a homogeneous combinatorial proof then its collapse
$\collapseof\bifib:\quotientgraphof\net\indist\to\mograph$ is a homogeneous combinatorial proof.
\end{lemma}
\begin{proof}
$\quotientgraphof\net\indist$ is a monet because if
$W\tightsubseteq\verticesof{\quotientgraphof\net{\mkern2mu\indist}}$ induces a bimatching in
$\quotientgraphof\net\indist$ then it induces a bimatching in $\net$:
since indistinguishable vertices are vacuous binders, they cannot be in both a leap and an edge of
$\quotientof{\edgesof\net}\indist$, so cannot occur in $W$.
The function
$\collapseof\bifib:\graphpairof{\quotientof{\verticesof\net}\indist}{\quotientof{\edgesof\net}\indist}\to\graphpairofgraph\mograph$
is a skew fibration because
$\bifib:\graphpairofgraph\net\to\graphpairofgraph\mograph$ is a skew fibration and indistinguishable vertices have the same image and neighbourhood, and
\mbox{$\collapseof\bifib:\graphpairof{\quotientof{\verticesof\net}\indist}{\quotientof{\dualitiesof\net}\indist}\to\graphpairof{\verticesof\mograph}{\dualitiesof\mograph}$}
is a homomorphism because
\mbox{$\bifib:\graphpairof{\verticesof\net}{\dualitiesof\net}\to\graphpairof{\verticesof\mograph}{\dualitiesof\mograph}$}
is a homomorphism and no binder is in a duality edge.
Finally,
$\collapseof\bifib:\graphpairof{\quotientof{\verticesof\net}\indist}{\quotientof{\bindingsof\net}\indist}\to\bindinggraphofgraph\mograph$
is a fibration because
\mbox{$\bifib:\bindinggraphofgraph\net\to\bindinggraphofgraph\mograph$} is a
fibration and indistinguishable binders are vacuous, therefore absent from bindings.
\end{proof}
\subsubsection{Monadic fonets without dualizers}
Monets were defined (\S\ref{sec:monets}) without need for dualizers, in terms of the binder equivalence relation $\brel\mograph{\mkern-4mu}$.
In this section we take an analogous approach with monadic fonets (\S\ref{sec:fonets}).
Let \defn{rmf} abbreviate \emph{rectified monadic fograph}.
\begin{definition}
Let $\cover$ be a linked rmf.
\defn{Variable equivalence}
$\vrel\cover$ is the equivalence relation on binders generated by
$x\vrel\cover y$
for each link $\setof{\singletonpx,\singletonppy}$ in $\cover$.
\end{definition}
In the above definition $p$ is any predicate symbol (necessarily unary, since $\cover$ is monadic).
A \defn{conflict} in $\cover$ is a pair $\setof{x,y}$ of distinct non-existential variables $x$ and $y$ such that $x\vrel\cover y$.
\begin{definition}\label{def:linked-rmf-consistent}
A linked rmf is \defn{consistent} if
it has no conflict.
\end{definition}
\begin{lemma}\label{lem:consistent-dualizable}
A linked rmf has a dualizer if and only if it is consistent.
\end{lemma}
\begin{proof}
Let $\cover$ be the linked rmf.
Suppose $\cover$ has a dualizer. By \reflem{lem:mgd} $\cover$ has a most general dualizer $\dualizer$.
Thus for every colour $\monadiclink$ we have $(\px)\dualizer$ dual to $(\ppy)\dualizer$.
(Recall that $\atom\dualizer$ denotes the result of substituting $\dualizer(x)$ for $x$ in $\atom$.)
For a contradiction, suppose $\setof{z_1,z_2}$ were a conflict in $\cover$, \latinstyle{i.e}\onedot,
$z_1\mkern-2mu\vrel\cover\mkern-2mu z_2$ for
non-existential variables $z_1\tightneq z_2$.
Since
$z_1\vrel\cover z_2$
we have variables
$x_1,\ldots, x_n$ for $n\tightge 1$ with $x_1=z_1$, $x_n=z_2$, and for $1\tightle i\tightlt n$ there exists a link $\setof{\singleton{p_ix_i}, \singleton{\pp_ix_{i+1}}}$.
Since $\dualizer$ is a dualizer we have $(p_ix_i)\dualizer$ dual to $(\pp_ix_{i+1})\dualizer$, so $x_i\dualizer=x_{i+1}\dualizer$.
Thus $x_1\dualizer=x_n\dualizer$ so $z_1\dualizer=z_2\dualizer$.
Since $z_1$ and $z_2$ are non-existential, we have $z_1\dualizer=z_1$ and $z_2\dualizer=z_2$, hence $z_1=z_2$, contradicting $z_1\tightneq z_2$.
Conversely, suppose $\cover$ is consistent.
Let $e_1,\ldots,e_n$ be the equivalence classes of $\vrel\cover$.
Define $y_i$ as the unique non-existential variable in $e_i$, if it exists (unique since $\cover$ is consistent), and otherwise define $y_i$ as a fresh variable, where \emph{fresh} means not in $\cover$ and distinct from $y_j$ for $1\tightle j\tightlt i$.
Given an existential variable $x$, define $\dualizer(x)=y_i$ if $e_i$ is the equivalence class containing $x$.
We must show that for every link $\monadiclink$ in $\cover$ we have $(px)\dualizer$ dual to $(\ppy)\dualizer$.
Since the predicate symbols $p$ and $\pp$ are already dual, it remains to show that $x\dualizer=y\dualizer$.
Since $x$ and $y$ are in the same link, they are in the same equivalence class $e_i$ (for some $i$).
We consider three cases.
\begin{enumerate}
\item Both $x$ and $y$ are existential. Since $x$ and $y$ are in $e_i$, we have $\dualizer(x)=\dualizer(y)=y_i$.
\item Both $x$ and $y$ are non-existential. Therefore $x\dualizer=x$ and $y\dualizer=y$, so we require $x\tighteq y$.
This holds because $x\tightneq y$ would imply that $\setof{x,y}$ is a conflict, contradicting the consistency of $\cover$.
\item Exactly one of $x$ and $y$ is existential, say $x$. Since $y$ is non-existential, $y\dualizer=y$, and $y$ is the unique non-existential variable in its class $e_i$, so $y=y_i$. Since $x$ is also in $e_i$, we have $\dualizer(x)=y_i=y$.
\end{enumerate}\vspace{-3.3ex}\end{proof}
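The proof of \reflem{lem:consistent-dualizable} is effectively an algorithm: compute the $\vrel\cover$-classes by union-find over the links, reject if some class contains two distinct non-existential variables, and otherwise send each existential variable to its class's non-existential variable, or to a fresh one. A Python sketch (the input encoding is ours: each link is given as the pair of bound variables of its two literals, and only variables occurring in links are handled):

```python
def dualizer(links, existential):
    """Return a dualizer (dict: existential variable -> variable) for a
    linked rectified monadic fograph, or None if it is inconsistent."""
    # Union-find over variables, with path halving.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for x, y in links:                 # each link forces x ~ y
        parent[find(x)] = find(y)
    # Group variables into equivalence classes.
    classes = {}
    for x in list(parent):
        classes.setdefault(find(x), []).append(x)
    sigma, fresh = {}, 0
    for members in classes.values():
        universals = [x for x in members if x not in existential]
        if len(universals) > 1:
            return None                # conflict: two non-existentials
        if universals:
            rep = universals[0]
        else:                          # fresh variable (name is ours)
            rep, fresh = f'_w{fresh}', fresh + 1
        for x in members:
            if x in existential:
                sigma[x] = rep
    return sigma
```

Consistency alone is the `None`-check: the fograph is consistent if and only if no class contains two distinct non-existential variables.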
\begin{lemma}\label{lem:monadic-deps}
Let $\singletonx$ and $\singletony$ be binders in a consistent linked rmf $\cover$, with $\singletonx$ existential and $\singletony$ universal.
The pair $\setof{\singletonx,\singletony}$ is a dependency of $\cover$ if and only if $x\vrel\cover y$.
\end{lemma}
\begin{proof}
By \reflem{lem:mgd-deps}, the dependencies of $\cover$ are those of a most general dualizer $\dualizer$, so it suffices
to show that $x\vrel\cover y$ if and only if $\dualizer(x)=y$.
Since every predicate symbol in $\cover$ is unary, $\vrel\cover$ is the transitive closure of the unification problem $\unirelof\cover$ (see the proof of \reflem{lem:mgd}).
Thus the dualizer $\dualizer$ defined in the proof of \reflem{lem:consistent-dualizable} is most general, and by construction $x\vrel\cover y$ if and only if $\dualizer(x)=y$.
\end{proof}
Note that the above lemmas simplify the definition of (standard, non-homogeneous) monadic combinatorial proof $\bifib:\cover\to\base$:
\begin{itemize}
\item Instead of checking for the existence of a dualizer for (the rectified form of) $\cover$, we merely check that $\cover$ is consistent, via the variable relation $\vrel\cover$, using \reflem{lem:consistent-dualizable}.
\item Instead of building the leap graph $\leapgraphof\cover$ with dependencies via a dualizer,
we read dependencies directly from $\vrel\cover$, using \reflem{lem:monadic-deps}.
\end{itemize}
\subsubsection{The linked mograph of a linked closed monadic fograph}
\begin{definition}\label{def:linked-mograph-of-linked-fograph}
The \defn{linked mograph} $\linkedmographof\cover$ of a linked closed monadic fograph $\cover$ is the linked mograph $\mograph$ with
\begin{itemize}
\item $\verticesof\mograph=\verticesof\cover$,
\item $\edgesof\mograph=\edgesof\cover$,
\item $\vertex\vertexa\tightin\dualitiesof\mograph$
if and only if
$\setof{\vertex,\vertexa}$ is a link, and
\item $\diredge\vertex\vertexa\tightin\bindingsof\mograph$ if and only if
$\vertex$ binds $\vertexa$.
\end{itemize}
\end{definition}
\begin{lemma}
$\linkedmographof\cover$ is a well-defined linked mograph for every linked closed monadic fograph $\cover$.
\end{lemma}
\begin{proof}
\hspace{-1.48pt}Let $\lambda_1,\ldots,\lambda_n$ be the links of $\cover$ for $\lambda_i=\setof{\literal_i,\literala_i}$.
Choose distinct predicate symbols $p_1,\ldots,p_n$, and define $\coverp$ by replacing the predicate symbols in the labels of $\literal_i$ and $\literala_i$ by $p_i$ and $\pp_i$, respectively.
Since the $p_i$ are distinct, two literals in $\coverp$ are pre-dual if and only if they constitute a link,
so $\linkedmographof\coverp=\mographof\coverp$, and by construction $\linkedmographof\coverp=\linkedmographof\cover$.
Thus $\linkedmographof\cover=\mographof\coverp$, which is a well-defined mograph by \reflem{lem:mograph-of-fograph-well-defined};
moreover every literal of $\linkedmographof\cover$ is in a unique duality, so it is a well-defined linked mograph.
\end{proof}
\begin{lemma}\label{lem:fograph-mograph-net}
A linked closed monadic fograph $\cover$ is a fonet if and only if its linked mograph $\linkedmographof\cover$ is a monet.
\end{lemma}
\begin{proof}
Without loss of generality we may assume $\cover$ is rectified.
By \reflem{lem:consistent-dualizable}, $\cover$ has a dualizer if and only if it is consistent in the sense of Def.\,\ref{def:linked-rmf-consistent}, and consistency of $\cover$ coincides with consistency of $\linkedmographof\cover$ (Def.\,\ref{def:mograph-consistent}).
By \reflem{lem:monadic-deps} the dependencies of $\cover$ are those pairs $\setof{x,y}$ of variables with $x$ existential, $y$ universal and $x\vrel\cover y$,
which, by definition of $\linkedmographofsymbol$, correspond to pairs $\setof{\binder_x,\binder_y}$ of binders in $\linkedmographof\cover$ with $\binder_x$ and $\binder_y$ the unique binders corresponding to the variables $x$ and $y$, and $\binder_x\brel{\linkedmographof\cover}\binder_y$.
Thus the leap graphs of $\cover$ and $\linkedmographof\cover$ are the same, so $\cover$ has an induced bimatching if and only if $\linkedmographof\cover$ has an induced bimatching.
\end{proof}
\subsubsection{Proof of monadic homogeneous combinatorial soundness}
Recall that, by definition of $\mographofsymbol$, $\verticesof{\mographof\fograph}=\verticesof\fograph$ for every closed monadic fograph $\fograph$.
\begin{lemma}\label{lem:mograph-of-fograph-preserves-literals}
Let $\fograph$ be a closed monadic fograph.
A vertex is a literal in $\fograph$ if and only if it is a literal in the mograph $\mographof\fograph$.
\end{lemma}
\begin{proof}
Immediate from the definition of the binding set $\bindingsof{\mographof\fograph}$ (Def.\,\ref{def:mograph-of-fograph}) and that, by definition,
a vertex is a literal in a mograph if and only if it is the target of a binding.
\end{proof}
\begin{lemma}\label{lem:mograph-of-fograph-preserves-existentials}
Let $\fograph$ be a closed monadic fograph.
A binder is universal in $\fograph$ if and only if it is universal in the mograph $\mographof\fograph$.
\end{lemma}
\begin{proof}
By definition $\graphpairofgraph{\fograph}=\graphpairofgraph{\mographof\fograph}$, and in both cases,
a binder is universal if and only if its scope contains no edge.
\end{proof}
Define the \defn{type}
$\typeof\vertex\graph\in\vertextypes$ of a vertex $\vertex$
in a mograph or fograph $\graph$ as $\literaltype$ if $\vertex$ is a literal, $\universaltype$ if $\vertex$ is a universal binder, and $\existentialtype$ if $\vertex$ is an existential binder.
\begin{lemma}\label{lem:types}
For every closed monadic fograph $\fograph$, $\;\typeof\vertex\fograph=\typeof\vertex{\mographof\fograph}$ for every vertex $\vertex$.
\end{lemma}
\begin{proof}
Lemmas\,\ref{lem:mograph-of-fograph-preserves-literals} and \ref{lem:mograph-of-fograph-preserves-existentials}.
\end{proof}
\begin{lemma}\label{lem:mograph-bifib-preserves-literals}
Every mograph skew bifibration $\bifib:\net\to\mograph$ preserves vertex type, \latinstyle{i.e}\onedot, $\typeof\vertex\net=\typeof{\bifib(\vertex)}\mograph$ for every vertex $\vertex$ in $\net$.
\end{lemma}
\begin{proof}
A vertex is a literal if and only if it is the target of a binding, and
since $\bifib:\bindinggraphofgraph\net\to\bindinggraphofgraph\mograph$ is a fibration,
a vertex $\vertex$ in $\verticesof\net$ is the target of a binding if and only if $\bifib(\vertex)$ is the target of a binding.
Thus $\bifib$ maps literals to literals and binders to binders.
By definition (Def.\,\ref{def:monadic-skew-bifib}) a skew bifibration maps existential binders to existential binders,
so it remains to show that universal binders map to universal binders.
This follows from the proof of \reflem{lem:pres-universals}, which applies in the homogeneous setting because it does not depend on labels.
\end{proof}
\begin{lemma}[Monadic homogeneous soundness]\label{lem:monadic-soundness}
A closed monadic formula is valid if it has a homogeneous combinatorial proof.
\end{lemma}
\begin{proof}
Suppose
$\bifib:\net\to\mographofformula\monadicformula=\mograph$
is a homogeneous combinatorial proof of the monadic formula $\monadicformula$.
Without loss of generality, we may assume $\bifib$ is collapsed, by \reflem{lem:collapse}.
Define $\netp$ as the
coloured labelled cograph with
$\verticesof\netp=\verticesof\net$,
$\edgesof\netp=\edgesof\net$,
a colour $\setof{\vertex,\vertexa}$ for each $\vertex\vertexa\in\dualitiesof\net$,
and
the label of $\vertex$ in $\netp$ defined as
the label of $\bifib(\vertex)$ in $\graphof\monadicformula$,
where $\bifib(\vertex)\in\verticesof{\mographofformula\monadicformula}$ can be viewed as a vertex in
$\verticesof{\graphof\monadicformula}$ since
$\mographofformula\monadicformula=\mographof{\graphof\monadicformula}$ by \reflem{lem:monadic-factorization} and, by definition of $\mographofsymbol$ (Def.\,\ref{def:mograph-of-fograph}),
$\verticesof{\mographof\fograph}=\verticesof{\fograph}$ for any closed monadic fograph $\fograph$.
We claim that $\netp$ is a well-defined fograph (Def.\,\ref{def:fograph}, p.\,\pageref{def:fograph}).
By \reflem{lem:types},
$\vertextypein{\mographof{\graphof\monadicformula}}=\vertextypein{\graphof\monadicformula}$
so by \reflem{lem:mograph-bifib-preserves-literals},
$\vertextypein{\netp}=\vertextypein{\net}$ (since the label of $\vertex$ in $\netp$ is that of $\bifib(\vertex)$ in $\graphof\monadicformula$).
Thus $\netp$ has a literal since $\net$ has one (because it is a mograph), so $\netp$ is a logical cograph.
We must show, for all variables $x$, that every $x$-binder $\binder$ is legal,
\latinstyle{i.e}\onedot, the scope of $\binder$ contains
(a) at least one literal and (b) no other $x$-binder.
For (a), the scope $\scopeofin\binder\graph$ of $\binder$ in a fograph or mograph $\graph$ depends only on the underlying cograph $\graphpairofgraph\graph$, so
$\scopeofin\binder\netp=\scopeofin\binder\net$.
Thus $\scopeofin\binder\netp$ has a literal because $\scopeofin\binder\net$ does (since $\net$ is a mograph).
For a contradiction to (b), suppose $\bindera\tightneq\binder$ were an $x$-binder in $\scopeofin\binder\netp$.
Let $\treep$ be the cotree $\cotreeof\netp$ and let $\parentof\binder$ be the parent of $\binder$ in $\treep$.
If $\binder$ is existential, then $\binder\bindera\tightin\edgesof\netp$ (since $\parentof\binder$ is a $\graphjoin$-node in $\treep$, by \reflem{lem:scope-parent} all distinct vertices in the scope of an existential binder are in an edge),
contradicting $\bifib(\binder)=\bifib(\bindera)$ (which holds because, without loss of generality, $\monadicformula$ is rectified, so there is a unique $x$-binder in $\graphof\monadicformula$).
Otherwise $\binder$ is universal.
Since $\bindera\in\scopeofin\binder\netp$, by \reflem{lem:scope-parent}
we have
$\binder\parentabovein\treep\bindera$,
\latinstyle{i.e}\onedot,
$\binder\childin\treep\parentof\binder\abovein\treep\bindera$, with $\parentof\binder$ a $\graphunion$-node, since $\binder$ is universal.
Because $\bifib$ is collapsed and $\bifib(\binder)=\bifib(\bindera)$,
we cannot have
$\parentof\binder\parentin\treep\bindera$
(otherwise $\binder$ and $\bindera$ would be indistinguishable, contradicting $\bifib$ being collapsed),
thus
$\binder\childin\treep\parentof\binder\parentin\treep\bindera\jmeet\vertex$ for some vertex $\vertex$.
Since $\bindera\vertex\tightin\edgesof\netp$ and $\bifib$ is a graph homomorphism we have
$\bifib(\bindera)\mkern1mu\bifib(\vertex)\tightin\edgesof{\graphof\monadicformula}$,
so
$\bifib(\binder)\mkern1mu\bifib(\vertex)\tightin\edgesof{\graphof\monadicformula}$
(because $\bifib(\binder)=\bifib(\bindera)$).
Since $\bifib$ is a skew fibration, there exists $\vertexa\tightin\verticesof\netp$ such that
$\vertexa\binder\in\edgesof\netp$ and $\bifib(\vertexa)\mkern1mu\bifib(\vertex)\notin\edgesof{\graphof\monadicformula}$.
Because $\vertexa\binder\in\edgesof\netp$, the meet $\binder\meet\vertexa$ is a \mbox{$\graphjoin$-node},
\latinstyle{i.e}\onedot, $\binder\meet\vertexa=\binder\jmeet\vertexa$,
and since the parent $\parentof\binder$ of $\binder$ is a \mbox{$\graphunion$-node},
we have $\binder\jmeet\vertexa\abovein\treep\parentof\binder$,
hence
$\vertexa\abovein\treep\binder\jmeet\vertexa\abovein\treep\vertex$.
Therefore $\vertexa\vertex\tightin\edgesof\netp$, so $\bifib(\vertexa)\mkern1mu\bifib(\vertex)\in\edgesof{\graphof\monadicformula}$, a contradiction.
Thus we have proved that $\netp$ is a well-defined fograph.
Since every literal label in $\netp$ comes from $\graphof\monadicformula$, $\netp$ is monadic, and since $\bifib$ is a directed graph fibration
$\bindinggraphofgraph\net\to\bindinggraphofgraph\mograph$, $\netp$ is closed.
By construction, $\linkedmographof\netp=\net$ (Def.\,\ref{def:linked-mograph-of-linked-fograph}),
so by \reflem{lem:fograph-mograph-net}, $\netp$ is a fonet (since $\net$ is a monet).
Since $\bifib:\net\to\mograph$ is a skew bifibration of mographs,
$\bifib:\netp\to\graphof\monadicformula$ is a skew bifibration of fographs,
hence a (standard) combinatorial proof,
so $\monadicformula$ is valid by Theorem\,\ref{thm:soundness}.
\end{proof}
The crux of the soundness proof above is to transform a collapsed monadic homogeneous combinatorial proof into a standard combinatorial proof.
The following example shows why collapse occurs before this transformation.
A monadic homogeneous combinatorial proof of the closed monadic formula $\collapsedegformula$ is shown below-left.
\begin{pic}{-.3}{2.4}
\rput(-4,0){\uncollapsedeg}
\rput(0,0){\collapsedeg}
\rput(4,0){\labelledcollapsedeg}
\end{pic}
Its collapse, also a monadic homogeneous combinatorial proof (by \reflem{lem:collapse}), is shown above-centre.
Above-right is the standard combinatorial proof constructed from the collapse in the soundness proof above.
Observe that, were we to attempt to construct a standard combinatorial proof directly from the uncollapsed form,
it would have two source vertices above $\singletonx$ in the target,
each implicitly labelled $x$ (implicit since we are drawing the skeleton),
so the source would have a (universal) $x$-binder in the scope of another $x$-binder and therefore fail to be a well-defined fograph.
Collapse is directly related to the deletion of select universal binders in the interpretation of the $\cname$ rule as an operation in \S\ref{sec:interp-rules}.
\begin{lemma}[Monadic homogeneous completeness]\label{lem:monadic-completeness}
Every valid closed monadic formula has a homogeneous combinatorial proof.
\end{lemma}
\begin{proof}
Let $\monadicformula$ be a valid closed monadic formula.
By Theorem\,\ref{thm:completeness} there exists a (standard) combinatorial proof $\bifib:\net\to\graphof\monadicformula$.
Let $\netp$ be the linked mograph obtained from $\net$ with $\verticesof\netp=\verticesof\net$, $\edgesof\netp=\edgesof\net$,
$\vertex\vertexa\tightin\dualitiesof\netp$ if and only if $\setof{\vertex,\vertexa}$ is a link (colour) in $\net$,
and
$\bindingsof\netp=\edgesof{\bindinggraphof\net}$ (i.e., $\vertex\vertexa\in\bindingsof\netp$ if and only if $\vertex$ binds $\vertexa$ in $\net$).
Since there are no logical constants in $\monadicformula$ there are none in $\net$ (by label-preservation of $\bifib$),
so every literal in $\net$ is in exactly one link.
Thus every literal of $\netp$ is in a unique duality, no binder of $\netp$ is in a duality,
and $\netp$ has no induced bimatching because $\net$ is a fonet;
thus $\netp$ is a monet.
Let $\mograph=\mographof{\graphof\monadicformula}=\mographofformula\monadicformula$.
We claim that $\bifib:\netp\to\mograph$ is a homogeneous combinatorial proof, i.e.,
(1) $\bifib$ preserves existential binders,
(2) $\bifib:\netp\to\mograph$ is a skew fibration,
(3) $\bifib:\graphpairof{\verticesof\netp}{\dualitiesof\netp}\to\graphpairof{\verticesof\mograph}{\dualitiesof\mograph}$ is an undirected graph homomorphism,
and
(4) $\bifib:\graphpairof{\verticesof\netp}{\bindingsof\netp}\to\graphpairof{\verticesof\mograph}{\bindingsof\mograph}$ is a directed graph fibration.
(1) holds because $\bifib:\net\to\graphof\monadicformula$ preserves existential binders, and by construction the existential binders of $\netp$ and $\net$ coincide, as do those of $\graphof\monadicformula$ and $\mograph$.
By definition (2) holds if $\bifib:\graphpairof{\verticesof\netp}{\edgesof\netp}\to\graphpairof{\verticesof\mograph}{\edgesof\mograph}$ is a skew fibration, which is true because $\bifib$ is a skew bifibration,
$\graphpairof{\verticesof\netp}{\edgesof\netp}=\graphpairof{\verticesof\net}{\edgesof\net}$, and
$\graphpairof{\verticesof\mograph}{\edgesof\mograph}=\graphpairof{\verticesof{\graphof\monadicformula}}{\edgesof{\graphof\monadicformula}}$.
For (3), suppose $\vertex\vertexa\tightin\dualitiesof\netp$. Since $\net$ is a fonet, it has a dualizer, so the labels of $\vertex$ and $\vertexa$ are dual, say, $p$ and $\pp$, respectively.
Because $\bifib$ preserves labels, $\bifib(\vertex)$ and $\bifib(\vertexa)$ are labelled $p$ and $\pp$, thus $\bifib(\vertex)\mkern2mu\bifib(\vertexa)\in\dualitiesof\mograph$, and (3) holds.
(4) holds because $\bifib:\net\to\graphof\monadicformula$ is a skew bifibration, thus $\bifib:\bindinggraphof\net\to\bindinggraphof{\graphof\monadicformula}$ is a directed graph fibration, and by construction $\graphpairof{\verticesof\netp}{\bindingsof\netp}=\bindinggraphof\net$ and
$\graphpairof{\verticesof\mograph}{\bindingsof{\graphof\monadicformula}}=\bindinggraphof{\graphof\monadicformula}$.
\end{proof}
\begin{proof}[Proof of Theorem\,\ref{thm:monadic-soundness-completeness} (Monadic homogeneous soundness and completeness)]
\mbox{}\\
Lemmas~\ref{lem:monadic-soundness} and \ref{lem:monadic-completeness}.
\end{proof}
\section{Homogeneous surjections}\label{sec:surjections}
Recall from \S\ref{sec:soundness} that $\graphofsymbol$ is a surjection from rectified formulas onto rectified fographs (Lem.\,\ref{lem:graph-surj}),
and that $\xgraphofsymbol$ is a surjection from clear formulas onto fographs (Lem.\,\ref{lem:xgraph-surj}).
This section exhibits similar surjections onto duality graphs and mographs.
\begin{lemma}\label{lem:surj-prop-fographs-to-dualizing-graphs}
$\dualizinggraphofsymbol$ is a surjection from simple propositional fographs onto dualizing graphs.
\end{lemma}
\begin{proof}
Let $\dualizinggraph$ be a dualizing graph.
We construct a fograph $\fograph$ such that $\dualizinggraphof\fograph=\dualizinggraph$.
Define $\verticesof\fograph=\verticesof\dualizinggraph$ and $\edgesof\fograph=\edgesof\dualizinggraph$, with a nullary predicate symbol label on each vertex defined as follows.
Since $\graphpairof{\verticesof\dualizinggraph}{\dualitiesof\dualizinggraph}$ is $P_4$-free and $\cthree$-free, it is a disjoint union of complete bipartite graphs\footnote{Recall that a complete bipartite graph is one of the form $\graph\graphjoin\grapha$ for edgeless graphs $\graph$ and $\grapha$.} $\cbg_1,\ldots,\cbg_n$.
Choose distinct nullary predicate symbols $p_1,\ldots,p_n$ such that $\pp_i\tightneq p_j$ ($1\tightle i,j\tightle n$).
If $\cbg_i$ has no edges, it has a single vertex $v_i$; assign $p_i$ as the label of $v_i$.
Otherwise, $\cbg_i=\cbg_i'\graphjoin\cbg_i''$ for $\cbg_i'$ and $\cbg_i''$ without edges.
Assign the label $p_i$ to every vertex in $\cbg_i'$ and the label $\pp_i$ to every vertex in $\cbg_i''$.
The graph $\fograph$ is a non-empty cograph with vertices labelled by nullary predicate symbols, hence $\fograph$ is a simple propositional fograph.
By construction, $\dualizinggraphof\fograph=\dualizinggraph$.
\end{proof}
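The label construction in the proof above is effective. The following Python sketch (our own illustration; the representation and function name are not from the text) assigns nullary predicate symbols to the vertices of a duality graph, assuming, as the lemma guarantees via $P_4$- and $\cthree$-freeness, that the duality relation is a disjoint union of complete bipartite graphs:

```python
from collections import deque

def label_dualities(n_vertices, duality_edges):
    """Assign nullary predicate symbols realizing a given duality relation.

    Assumes the duality graph is a disjoint union of complete bipartite
    graphs: one side of component i is labelled p_i, the other its dual
    ~p_i, and an isolated vertex simply gets p_i.  Names are illustrative.
    """
    adj = {v: set() for v in range(n_vertices)}
    for u, v in duality_edges:
        adj[u].add(v)
        adj[v].add(u)
    label, comp = {}, 0
    for start in range(n_vertices):
        if start in label:
            continue
        comp += 1
        side = {start: 0}             # BFS 2-colouring of the component
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in side:
                    side[w] = 1 - side[u]
                    queue.append(w)
        for v, s in side.items():
            label[v] = f"p{comp}" if s == 0 else f"~p{comp}"
    return label

# K_{2,3} (vertices 0,1 versus 2,3,4) together with an isolated vertex 5:
edges = [(0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (1, 4)]
lab = label_dualities(6, edges)
assert all({lab[u], lab[v]} == {"p1", "~p1"} for u, v in edges)
assert lab[5] == "p2"                 # fresh symbol for the second component
```

Every duality edge then connects a symbol to its dual, and distinct components receive distinct symbols, as required in the proof.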
\begin{lemma}\label{lem:prop-surj}
The function $\dualizinggraphofpropsymbol$ from simple propositions to dualizing graphs (Def.\,\ref{def:dualizing-graph-of-prop}) is a surjection.
\end{lemma}
\begin{proof}
By \reflem{lem:graph-surj} $\graphofsymbol$ is a surjection from (rectified) formulas onto fographs.
The restriction of $\graphofsymbol$ to simple propositions is a surjection onto simple propositional fographs.
Since $\dualizinggraphofpropsymbol=\dualizinggraphofsymbol\circ\graphofsymbol$ by \reflem{lem:prop-factorization}, and $\dualizinggraphofsymbol$ is a surjection by \reflem{lem:surj-prop-fographs-to-dualizing-graphs}, $\dualizinggraphofpropsymbol$ is a surjection.
\end{proof}
\begin{lemma}\label{lem:surj-closed-monadic-fographs-to-mographs}
$\mographofsymbol$ is a surjection from closed monadic fographs onto mographs.
\end{lemma}
\begin{proof}
Let $\mograph$ be a mograph.
We will construct a closed monadic fograph $\fograph$ with $\mographof\fograph=\mograph$.
Define $\verticesof\fograph=\verticesof\mograph$ and $\edgesof\fograph=\edgesof\mograph$, and define the predicate symbol in the label of each vertex of $\verticesof\fograph$ exactly as in the proof of \reflem{lem:surj-prop-fographs-to-dualizing-graphs}, only this time we shall make each such predicate symbol $p$ unary rather than nullary by adding a variable after $p$. For each binder $\binder$ in $\mograph$, choose a distinct variable $x_\binder$, set the label of $\binder$ to $x_\binder$, and for every literal $\literal$ with $\diredge{\binder}{\literal}\in\bindingsof\mograph$, add the variable $x_\binder$ to the label of $\literal$ as the argument of the predicate symbol already assigned to $\literal$.
Since every binder in $\mograph$ has non-empty scope, every binder in $\fograph$ has non-empty scope.
By construction every literal label is a unary predicate symbol followed by a variable, so $\fograph$ is monadic.
Because every variable $x_\binder$ is distinct for each binder $\binder$, no literal in $\fograph$ can be bound by two binders in $\fograph$.
Thus $\fograph$ is a rectified monadic fograph.
Since, by definition of a literal in a mograph, every literal in $\mograph$ is the target of a binding in $\bindingsof\mograph$, every literal in $\fograph$ is bound, so $\fograph$ is closed.
By construction, $\mographof\fograph=\mograph$.
\end{proof}
\begin{lemma}\label{lem:surj-closed-monadic-formulas-to-mographs}
The function $\mographofformulasymbol$ from closed monadic formulas to mographs (Def.\,\ref{def:mograph-of-formula}) is a surjection.
\end{lemma}
\begin{proof}
By \reflem{lem:graph-surj} $\graphofsymbol$ is a surjection from (rectified) formulas onto fographs.
The restriction of $\graphofsymbol$ to closed monadic formulas is a surjection onto closed monadic fographs.
Since $\mographofformulasymbol=\mographofsymbol\circ\graphofsymbol$ by \reflem{lem:monadic-factorization}, and
$\mographofsymbol$ is a surjection by \reflem{lem:surj-closed-monadic-fographs-to-mographs}, $\mographofformulasymbol$ is a surjection.
\end{proof}
\section{Polynomial-time verification}\label{sec:ptime}
In this section we show that a combinatorial proof can be verified in polynomial time.
Thus combinatorial proofs constitute a formal \emph{proof system} \cite{CR79}.
The \defn{size} of a graph $\graph$ is the sum of the number of vertices in $\graph$ and the number of edges in $\graph$.
\begin{lemma}\label{lem:deps-ptime}
The dependencies of a linked rectified fograph $\cover$ can be constructed in time polynomial in the size of $\cover$.
\end{lemma}
\begin{proof}
Let $x_1,\ldots,x_n$ be the existential variables in $\cover$.
The main unification algorithm of \cite{MM76} provides in linear time an assignment $\assignopen\assign {x_1} {u_1},\ldots,\assign {x_n} {u_n}\assignclose$ with $x_i$ not in $u_j$ for $i\tightle j$, such that the most general unifier $\sigma$
is $\assignopen\assign{x_1}{t_1},\ldots,\assign{x_n}{t_n}\assignclose$ for
$t_i=u_i
\assignopen
\assign{x_{i+1}}{u_{i+1}}
\assignclose
\ldots
\assignopen
\assign{x_n}{u_n}
\assignclose$
(the sequential composition of $n\tightminus i$ one-variable substitutions applied to $u_i$).
Let $\setof{y_{i1},\ldots,y_{im_i}}$ be the set of variables occurring in $u_i$,
and define $u'_i$ as $f_iy_{i1}\ldots y_{im_i}$ for a fresh $m_i$-ary function symbol $f_i$.
The assignment $\sigma'=\assignopen\assign{x_1}{t'_1},\ldots,\assign{x_n}{t'_n}\assignclose$ for
$t'_i=
u'_i
\assignopen
\assign{x_{i+1}}{u'_{i+1}}
\assignclose
\ldots
\assignopen
\assign{x_n}{u'_n}
\assignclose$
has the same dependencies as $\sigma$ but can be constructed in polynomial time since each $x_j$ appears at most once in each $u'_i$.
\end{proof}
The above proof is essentially the first part of the proof of Theorem~3 in \cite{Hug18}.
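The size argument can be illustrated with a small Python sketch (the term representation and the particular chain $u_i=f(x_{i+1},x_{i+1})$ are our own illustrative choices, not taken from \cite{MM76}): naively composing the one-variable substitutions makes the terms $t_i$ grow exponentially, whereas replacing each $u_i$ by a fresh function symbol applied to the variables of $u_i$ keeps the terms $t'_i$ polynomial while preserving the dependencies.

```python
def size(t):
    # term size: variables are strings, applications are tuples (symbol, args...)
    return 1 if isinstance(t, str) else 1 + sum(size(a) for a in t[1:])

def subst(t, var, u):
    # apply the one-variable substitution [var := u] to the term t
    if isinstance(t, str):
        return u if t == var else t
    return (t[0],) + tuple(subst(a, var, u) for a in t[1:])

n = 12
# Solved chain x_i = u_i with u_i = f(x_{i+1}, x_{i+1}) and u_n = a:
# the fully substituted terms t_i grow exponentially ...
t = ('a',)                                                # t_n
for i in range(n - 1, 0, -1):
    t = subst(('f', f'x{i+1}', f'x{i+1}'), f'x{i+1}', t)  # t_i

# ... while the terms t'_i built from u'_i = f_i(x_{i+1}) -- one fresh
# symbol per u_i, applied to the variables of u_i -- stay linear.
tp = ('f_n',)                                             # t'_n (no variables)
for i in range(n - 1, 0, -1):
    tp = subst((f'f_{i}', f'x{i+1}'), f'x{i+1}', tp)      # t'_i

assert size(t) == 2**n - 1    # exponential in n
assert size(tp) == n          # linear in n
```

Since each variable occurs at most once in each $u'_i$, the primed composition can be carried out in polynomial time, which is the point of the lemma.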
\begin{lemma}\label{lem:fonet-ptime}
The correctness of a fonet can be verified in time polynomial in its size.
\end{lemma}
\begin{proof}
Let $\net$ be a fonet of size $n$.
By \reflem{lem:deps-ptime} we can construct all dependencies of $\net$ in polynomial time, hence the leap graph $\leapgraphof\net$ in polynomial time.
By \reflem{lem:fonet-constructible} every fonet is constructible from axioms by fusion and quantification.
Since there can be at most $n$ fusions and/or quantifications, it suffices to show that each step in the inductive decomposition of a fonet in the proof of \reflem{lem:fonet-constructible} can be performed in polynomial time.
In the first case of the proof of \reflem{lem:fonet-constructible}, $\net$ has no edges (which can be determined in polynomial time), and to confirm that $\net$ is a union of axioms takes polynomial time.
In the second case, $\net$ is universal, and the universal binder can be found and deleted in polynomial time, by inspecting each vertex of $\net$ in succession.
In the final case, $\net$ is not universal and has at least one edge, and we seek to decompose $\net$ as a fusion or existential quantification via \reflem{lem:split-fusion-existential}.
Henceforth we follow the proof of \reflem{lem:split-fusion-existential} closely.
The graph $\megagraph$ in the proof of \reflem{lem:split-fusion-existential} can be constructed in polynomial time from the cotree,
which can be built in polynomial time \cite{CLS81}.
The bridge $\graph_m\graph_{m+1}$ can be located in polynomial time (by iterating through the edges of $\megagraph$),
and $\cover_1$ and $\cover_2$ can be determined in polynomial time by traversing edges.
The underlying fograph $\fograph$ of
$\net$
is
$\cover_1\graphunion(\fograph_m\graphjoin\fograph_{m+1})\graphunion\cover_2$.
Depending on whether $\fograph_m$ and $\fograph_{m+1}$ both contain literals, the proof of \reflem{lem:split-fusion-existential} now provides either $\net$ as a fusion of $\cover_1\graphunion\fograph_m$ and $\fograph_{m+1}\graphunion\cover_2$, in which case we recurse with each half of the fusion, or $\net=\singletonx\graphunion\netp$, in which case we delete the existential binder $\singletonx$ and recurse with $\netp$.
\end{proof}
Define the \defn{size} of a combinatorial proof $\bifib:\net\to\fograph$ as the sum of the size of $\net$ and the size of $\fograph$.
\begin{theorem}
The correctness of a combinatorial proof can be verified in time polynomial in its size.
\end{theorem}
\begin{proof}
Let $\bifib:\net\to\fograph$ be a combinatorial proof.
By \reflem{lem:fonet-ptime} the fonet $\net$ can be verified in polynomial time.
Verifying that $\bifib$ is a skew bifibration takes polynomial time because the skew fibration and directed graph fibration conditions range over pairs of vertices, one in $\net$ and one in $\fograph$, and assert the existence of a vertex in $\net$, which can be found by iterating through each vertex of $\net$ in turn.
\end{proof}
\section{Cut combinatorial proofs}
Just as sequent calculus proofs may include cuts \cite{Gen35}, combinatorial proofs can be extended with cuts.
Define an \defn{$n$-cut combinatorial proof} of a formula $\formula$ as a combinatorial proof of
\mbox{$\formula\vee(\formulaa_1\tightwedge\neg{\formulaa_1})\vee\ldots\vee(\formulaa_n\tightwedge\neg{\formulaa_n})$} for (arbitrary) formulas $\formulaa_1,\ldots,\formulaa_n$.
Each formula $\formulaa_i\tightwedge\neg\formulaa_i$ is a \defn{cut}.
A \defn{cut combinatorial proof} is an $n$-cut combinatorial proof for some $n\tightge 0$; if $n=0$ the combinatorial proof is \defn{cut-free}.\footnote{We can define a cut combinatorial proof of a sequent similarly.}
\begin{theorem}
A formula is valid if and only if it has a cut combinatorial proof.
\end{theorem}
\begin{proof}
Since
$\formula\vee(\formulaa_1\tightwedge\neg{\formulaa_1})\vee\ldots\vee(\formulaa_n\tightwedge\neg{\formulaa_n})$ is valid if and only if $\formula$ is valid,
the result follows from Theorem\,\ref{thm:soundness-completeness}.
\end{proof}
\section{Semi-combinatorial proofs}\label{sec:semicps}
Using the surjections
$\graphofsymbol$ (Def.\,\ref{def:graph}) from (implicitly rectified) formulas onto rectified fographs
and
$\xgraphofsymbol$ (Def.\,\ref{def:xgraph}) from clear formulas onto fographs,
given a combinatorial proof $\bifib:\cover\to\fograph$, such as the one whose skeleton is drawn below-left (copied from the Introduction),
by choosing
a rectified formula $\formula$ with $\graphof\formula=\fograph$
and
a clear formula $\formulaa$ with $\xgraphof\formulaa$ equal to the underlying uncoloured fograph of $\cover$,
we can render $\bifib$ in the form below-centre.
\begin{center}\begin{pic}{-.9}{2.9}
\rput(-5,0){\drinkerfibcoloured}
\rput(0,-.25){
\rput(0,0){\drinkerSemicombinatorialVerbose}
\rput(5,0){\drinkerSemicombinatorial}
}
\end{pic}\end{center}
We have drawn the bifibration between the quantifier variables and predicate symbols of the formulas $\formulaa=\drinkerSemicombinatorialSourceVerboseInline$ and
$\formula=\veedrinkerformula$ corresponding to the vertices of $\cover$ and $\fograph$,
and replaced the link (coloured pair of vertices) on $\cover$ with a three-segment edge between the dual predicate symbols $\pp$ and $p$ in $\formulaa$, in the style of proof nets for linear logic \cite{Gir87}.
Above-right we have simplified the presentation further by removing redundant bifibration edges between quantifier variables (since they can be left implicit due to label-preservation, e.g., both occurrences of the existential quantifier variable $x$ in the source map to the (unique) existential quantifier variable $x$ in the target), and we have drawn non-dotted edges.
We have also replaced the formula $\drinkerSemicombinatorialSourceVerboseInline$ with the corresponding sequent
$\drinkerSemicombinatorialSourceInline$, and suppressed the comma of the sequent.
We call this presentation of a combinatorial proof a \defn{semi-combinatorial proof}, a first-order generalization of the propositional case in \cite{Hug06i}.\footnote{The source sequent, with links, can be viewed as a generalization of a unification net \cite{Hug18}. See \S\ref{sec:related} for details.}
\section{Conclusion and related work}\label{sec:conclusion}\label{sec:related}
This paper reformulated classical first-order logic with combinatorial rather than syntactic proofs (\S\ref{sec:fographs}--\S\ref{sec:cps}),
extending the propositional case of \cite{Hug06} to quantifiers.
The proof of soundness (\S\ref{sec:soundness}) was more intricate than
that of the propositional case \cite[\S5]{Hug06}.
In the logical-constant-free propositional, monadic and S5-modal special cases, labels
can be removed from a combinatorial proof, and colouring from the source,
for a homogeneous form (\S\ref{sec:prop-cps}--\S\ref{sec:modal-cps}).
Propositional combinatorial proofs are related to sequent calculus \cite{Gen35} in \cite{Hug06i} and \cite{Car10},
and to other syntactic systems (including resolution and analytic tableaux) in \cite{Str17} and \cite{AS18}.
Skew fibrations are decomposed as propositional structural maps (composites of contraction and weakening maps) in \cite{Hug06i} and \cite{Str07}.
Combinatorial proofs may provide an avenue towards tackling Hilbert's 24th problem \cite{TW02,Thi03,Hug06i,Str19h24}.
Combinatorial proofs for non-classical logics are being pursued actively. For example,
combinatorial proofs for propositional intuitionistic logic are presented in \cite{HHS19ipws}.
A potential topic of future research is first-order intuitionistic combinatorial proofs.
Cut elimination procedures for propositional cut combinatorial proofs are presented in \cite{Hug06i} and \cite{Str17}.
Natural open questions include the extension of propositional intuitionistic combinatorial proofs to first-order,
and cut elimination procedures for first-order combinatorial proofs (classical and intuitionistic).
The function $\graphofsymbol$ from first-order formulas to fographs (Def.\,\ref{def:graph})
is a first-order extension of the propositional translation \textsl{G} of \cite[\S3]{Hug06}.
The latter is well-known in graph theory, as the function from a (prime-free) modular decomposition tree \cite{Gal67} or cotree \cite{Ler71,CLS81} to a cograph, and is employed in logic and category theory. For example, \cite{Gir87} uses \textsl{G}\/ with
$\wedge\tighteq\with$ and $\vee\tighteq\oplus$ in linear logic, \cite{Hu99} uses \textsl{G} with $\wedge$ and $\vee$ as product and coproduct for free bicompletion (and emphasizes the $\pfour$-freeness of the image), and \cite{Ret03} uses \textsl{G} with $\wedge\tighteq\otimes$ and $\vee\tighteq\parr$ in linear logic.
That cographs are exactly the $P_4$-free graphs is proved in \cite{Sum73}.
Links between pre-dual literals in fonets, which become dual only after applying a dualizer or unifier,
are akin to the
first-order \emph{connections} or \emph{matings} employed in automated theorem proving \cite{Bib81, And81}.
Bibel in \cite[p.\,4]{Bib81} proposed \emph{link} as an alternative name for a connection,
and we have adopted that terminology.
Since combinatorial proofs can be verified in polynomial time (\S\ref{sec:ptime}), they constitute a formal \emph{proof system} \cite{CR79},
in contrast to the connection and mating methods.
The roots of first-order connections/matings
go back to Herbrand \cite{Her30}, Quine \cite{Qui55}, Robinson \cite{Rob65} and Prawitz \cite{Pra70}, amongst others.
Propositional links between dual literals can be found in
\cite{Dav71,Bib74,And76},
and sets of such propositional links form a category \cite{LS05} via path composition.
The pairing of dual propositional occurrences can be found in the study of other forms of syntax, such as
closed categories \cite{KM71} (see also \cite{EK66}),
contraction-free predicate calculus \cite{KW84} and linear logic \cite{Gir87}.
A fonet can be viewed as a graph-theoretic abstraction and generalization of
a unification net \cite{Hug18},
which in turn abstracts proof nets for first-order multiplicative linear logic \cite{Gir87,Gir91}.
The sense of generalization is that fographs admit the multiplicative mix rule $\frac{\sequent\hspace{1ex}\sequenta}{\sequentcom\sequenta}$, interpreted as the fusion operation with empty portions (i.e., disjoint union).
The relationship with a unification net is made clearer when rendering a combinatorial proof in semi-combinatorial form, as in \S\ref{sec:semicps}. (For example, in the right-most example at the start of \S\ref{sec:semicps}, the source is exactly a unification net in the sense of \cite{Hug18}.)
Unification nets are also available for first-order additive linear logic \cite{HHS19ALL}.
Upon forgetting vertex labels, propositional fonets correspond to the \emph{nicely coloured cographs} of \cite{Hug06},
and nicely coloured cographs without singleton colours are in bijection with the \emph{even-length alternating elementary acyclic R\&B cographs} of \cite{Ret03}.
An additional constraint on fonets can be applied to reject the mix rule, and retain soundness and completeness (cf.\ the alternating elementary connectedness condition of \cite{Ret03}).
Abstract representations of first-order quantifiers with explicit witnesses are in \cite{Hei10} (extending expansion
trees \cite{Mil84}) and \cite{McK10} (for classical logic) and \cite{HHS19ALL} (for additive linear logic).
Composition of witnesses is analysed in \cite{Mim11} and \cite{ACHW18}.
Proof nets \cite{Gir87} were extended to propositional classical
logic in \cite{Gir91} (developed in detail in \cite{Rob03}).
The paper \cite{McK13} fixes issues of redundancy due to contraction and weakening nodes
and relates classical propositional proof nets to propositional combinatorial proofs \cite{Hug06,Hug06i}.
Peirce \cite[vol.\,4:2]{Pei} provides an early graphical representation of propositional formulas.
\section{The action}
This paper contains a summary of \cite{Bergshoeff:2004fq}. In addition, we
demonstrate the existence of regular wormhole solutions in the universal
hypermultiplet, which is present in the low-energy effective action
of type II string theories compactified on a Calabi-Yau manifold. Wormhole
solutions from Calabi-Yau compactifications
were also found in \cite{Giddings:1989bq}. The main difference
with our solution is that in our case we also include RR fields, and
furthermore, our solution is regular in the complete domain of the wormhole
geometry.
We start with the action of a gravity-dilaton-axion system in
$D$-spacetime dimensions. In Minkowski space, the Lagrangian is
\begin{equation}
\mathcal{L}_M = \tfrac{1}{2} \sqrt{|g|} \,
[ {R} -\tfrac{1}{2} (\partial {\phi})^2
-\tfrac{1}{2} e^{b {\phi}} (\partial {\chi})^2 ] \, ,
\label{10DIIB}
\end{equation}
where $b$ is an arbitrary dilaton coupling parameter. This Lagrangian has
an $SL(2,R)$ group of symmetries. They can be realized as modular
transformations on the complex field
\begin{equation}
\tau \equiv \frac{b}{2}\, \chi + i\, e^{-b\phi/2}\ ,\qquad
\tau \rightarrow \frac{\alpha \tau +\beta}{\gamma\tau +\delta}\ ,\qquad
\alpha\delta-\beta\gamma=1\ ,\label{tau}
\end{equation}
and is valid for any nonzero value of the dilaton coupling $b$.
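This invariance reflects the fact that the scalar kinetic terms combine, up to an overall factor $2/b^2$, into $\partial\tau\,\partial\bar\tau/(\mathrm{Im}\,\tau)^2$, the standard $SL(2,R)$-invariant metric on the upper half-plane. A quick numeric sketch (the matrix entries and sample point are our own choices):

```python
# Numeric check of SL(2,R) invariance of |dtau|^2/(Im tau)^2,
# the combination multiplying the scalar kinetic terms.
alpha, beta, gamma, delta = 2.0, 3.0, 1.0, 2.0   # alpha*delta - beta*gamma = 1
mobius = lambda t: (alpha*t + beta)/(gamma*t + delta)

tau = 0.3 + 0.7j
eps = 1e-6*(1 + 1j)                  # small displacement dtau
dtau_p = mobius(tau + eps) - mobius(tau)

lhs = abs(dtau_p)**2 / mobius(tau).imag**2
rhs = abs(eps)**2 / tau.imag**2
assert abs(lhs - rhs)/rhs < 1e-4     # invariant up to finite-difference error
# Im tau transforms with weight |gamma*tau + delta|^(-2):
assert abs(mobius(tau).imag - tau.imag/abs(gamma*tau + delta)**2) < 1e-12
```

In particular $\mathrm{Im}\,\tau'=\mathrm{Im}\,\tau/|\gamma\tau+\delta|^2$, so $e^{-b\phi/2}$ transforms with a definite modular weight while the combined kinetic term is left unchanged.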
This theory occurs for example as the scalar section of IIB supergravity in
$D=10$ Minkowski space-time with dilaton-coupling parameter $b=2$. Other
values of $b$ can arise when considering (truncations of) compactifications of
type II supergravity. The main example we will discuss here is that of the
universal hypermultiplet, that arises after compactifying type IIA strings on
a (rigid) Calabi-Yau threefold down to $D=4$. This hypermultiplet contains
four scalars, $\phi$ and $\sigma$ coming from the NS sector, and $\psi$ and
$\varphi$ coming from the RR sector. The four-dimensional Lagrangian can be
written as
\begin{equation}\label{UHM}
\mathcal{L}_M = \tfrac{1}{2} \sqrt{|g|} \,
[ {R} -\tfrac{1}{2} (\partial {\phi})^2
-\tfrac{1}{2} e^{\phi} \big((\partial {\psi})^2+(\partial {\varphi})^2\big)
-\tfrac{1}{2} e^{2\phi}\big(\partial \sigma +\psi \partial \varphi\big)^2] \, .
\end{equation}
The scalar symmetry group is now $SU(2,1)$, but contains various inequivalent
$SL(2,R)$ subgroups. For instance, if we set $\sigma=\psi=0$, we get
(\ref{10DIIB}) with $b=1$, whereby $\varphi$ is identified with $\chi$.
If we set $\psi=\varphi=0$, we have $b=2$ and $\sigma$ is identified with
$\chi$. By writing down the field equations for (\ref{UHM}), it
is easy to see that these truncations are consistent.
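Spelling this out (in our conventions; overall signs depend on normalizations), the equations of motion for $\sigma$, $\psi$ and $\varphi$ following from (\ref{UHM}) take the form

```latex
\begin{aligned}
\nabla_\mu\big(e^{2\phi}(\partial^\mu\sigma+\psi\,\partial^\mu\varphi)\big)&=0\,,\\
\nabla_\mu\big(e^{\phi}\,\partial^\mu\psi\big)
  -e^{2\phi}(\partial_\mu\sigma+\psi\,\partial_\mu\varphi)\,\partial^\mu\varphi&=0\,,\\
\nabla_\mu\big(e^{\phi}\,\partial^\mu\varphi
  +e^{2\phi}\,\psi\,(\partial^\mu\sigma+\psi\,\partial^\mu\varphi)\big)&=0\,.
\end{aligned}
```

With $\sigma=\psi=0$ the $\sigma$ and $\psi$ equations are satisfied identically, and the $\varphi$ equation reduces to $\nabla_\mu(e^{\phi}\partial^\mu\varphi)=0$, the axion equation of (\ref{10DIIB}) with $b=1$; with $\psi=\varphi=0$ the $\psi$ and $\varphi$ equations are satisfied identically, and the $\sigma$ equation reduces to the $b=2$ axion equation $\nabla_\mu(e^{2\phi}\partial^\mu\sigma)=0$.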
Extremal instantons in the universal hypermultiplet have been discussed in
detail in \cite{Theis:2002er,Davidse:2003ww,Davidse:2004gg}, and correspond
to wrapped
(euclidean) membranes along three-cycles, or wrapped NS5-branes along the
entire Calabi-Yau. These two cases correspond
to $b=1$ and $b=2$ respectively. Using the results obtained in
\cite{Bergshoeff:2004fq}, we will here generalize this to the
non-extremal case, and show that there are interesting and new solutions
that have the spacetime geometry of a wormhole.
To discuss instantons, we first have to perform a Wick rotation. This rotation
is best understood by dualizing the axion into a $(D-2)$-form potential.
One then finds that under a Wick rotation, $\chi \rightarrow i \chi $.
The Euclidean Lagrangian corresponding to (\ref{10DIIB}) is then
\begin{equation}
\mathcal{L}_E
= \tfrac{1}{2} \sqrt{g} \,
[ {R} -\tfrac{1}{2} (\partial {\phi})^2
+\tfrac{1}{2} e^{b {\phi}} (\partial {\chi})^2 ] \, ,
\label{EuclideanAction}
\end{equation}
with all fields real. Notice that in the scalar formulation, as opposed to
the formulation with the $(D-1)$-form field strength, the contribution to the
action coming from the scalar sector is not positive definite.
For $b=2$ and $D=10$ this is the gravity-scalar part of
Euclidean IIB supergravity, in which the D-instanton can easily be found
as a solution of the Euclidean equations of motion
\cite{Gibbons:1996vg,Green:1997tv}. The non-extremal solutions were
found in \cite{Bergshoeff:2004fq}, and we repeat them in the next section.
As already explained, compactifications of string theory can give rise to
other values
of $b$. The Euclidean version of the universal hypermultiplet Lagrangian
(\ref{UHM}) can best be understood in terms of the double-tensor multiplet
formulation, in which $\varphi$ and $\sigma$ are dualized
into two antisymmetric tensors \cite{Theis:2002er}. After a Wick rotation,
$\varphi \rightarrow i\varphi, \sigma \rightarrow i\sigma$, and the
Euclidean Lagrangian for the universal hypermultiplet becomes
\begin{equation}\label{EUHM}
\mathcal{L}_E = \tfrac{1}{2} \sqrt{|g|} \,
[ {R} -\tfrac{1}{2} (\partial {\phi})^2
-\tfrac{1}{2} e^{\phi} \big((\partial {\psi})^2-(\partial {\varphi})^2\big)
+\tfrac{1}{2} e^{2\phi}\big(\partial \sigma +\psi \partial \varphi\big)^2] \, .
\end{equation}
Notice that the two truncations, $\psi = \sigma =0$ and $\psi=\varphi=0$,
both fall into the class of (\ref{EuclideanAction}), in which we have
$b=1$ and $b=2$ respectively.
There are three conserved currents for the $SL(2,R)$ transformations in the
Euclidean model, satisfying $\nabla_\mu j^\mu=0$.
The corresponding charges are denoted by $q_3,q_+$ and $q_-$, and are
normalized as specified in \cite{Bergshoeff:2004fq}. They transform under
$SL(2,R)$ in such
a way that the combination
\begin{equation}
{\bf \mathsf{q}}^2 \equiv q_3^2 - q_+q_-\ ,
\end{equation}
is invariant \cite{Bergshoeff:2002mb,Bergshoeff:2004fq}.
The three conjugacy classes of
$SL(2,R)$ then correspond to ${\bf \mathsf{q}}^2 < 0, {\bf \mathsf{q}}^2=0$ and ${\bf \mathsf{q}}^2 > 0$.
The extremal solutions will have ${\bf \mathsf{q}}^2=0$, the non-extremal ${\bf \mathsf{q}}^2
\neq 0$. The wormhole solutions will have ${\bf \mathsf{q}}^2 < 0$.
For later convenience, it is useful to define the quantity
\begin{equation}
c\equiv {\sqrt {\frac{2(D-1)}{D-2}}}\ ,
\end{equation}
which will appear explicitly in the instanton solutions below.
\section{Instanton solutions}
We search for generalised instanton solutions with manifest
$SO(D)$ symmetry,
\begin{align}
{ds}^2 & = e^{2\,B(r)} (dr^2 + r^2 d\Omega_{D-1}^2) \,, \qquad
\phi=\phi(r) \,, \qquad \chi=\chi(r)\,.
\label{instanton}
\end{align}
The standard D-instanton solution \cite{Gibbons:1996vg} is
obtained for the special case that $B(r)$ is constant. Other references
on generalised instantons and wormholes that are related
to our work are \cite{Giddings:1989bq,Rey:1989xj,Coule:1989xu,Kim:1997hq,
Bergshoeff:1998ry,Einhorn:2002am,Gutperle:2002km,Einhorn:2002sj,Kim:2003js,
Maldacena:2004rf}.
To obtain an $SO(D)$ symmetric generalised instanton solution, we
allow for a non-constant $B(r)$ and solve the field equations
following from the Euclidean action (\ref{EuclideanAction}). This
was done in detail in \cite{Bergshoeff:2004fq}. Here we summarise the result.
The solution can be written in a compact
form by using a harmonic function $H(r)$ over a conformally flat space with
metric as given in (\ref{instanton}),
\begin{align}
H(r)= \frac{b\,c}{2}\,\log(f_{+}(r)/f_{-}(r))\,, \quad B(r)=\frac{1}{D-2}
\log(f_{+}f_{-})\,,\quad
f_{\pm}(r) = 1\pm\frac{\mathsf{q}}{r^{D-2}}\,.
\end{align}
The general instanton solution can then be written as
\begin{equation}\label{SolEq}
\boxed{
\begin{aligned}
ds^2 & = \left(1-\frac{\mathsf{q}^2}{r^{2\,(D-2)}}\right)^{2/(D-2)}\,(dr^2
+ r^2 d
\Omega_{D-1}^2) \,, \\
e^{b\,\phi(r)} & =
\left(\frac{q_{-}}{\mathsf{q}}\,\sinh(H(r)+C_1)\right)^2\,,\\
\chi(r) & =
\frac{2}{b\,q_{-}}\,(\mathsf{q}\,\coth(H(r)+C_1)-q_3)\,.
\end{aligned}
}
\end{equation}
This solution is valid for any value~\footnote{The case
$b=0$ is treated in \cite{Myers:1988sp}.} of $b\neq 0$.
The integration
constant $C_1$ can be traded for the asymptotic value of the dilaton
that we will later identify with the string coupling constant.
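As a numerical sanity check on \eqref{SolEq} (with sample values of ours, $D=4$, $b=2$), the radial Noether flux of the axion shift symmetry $\chi\rightarrow\chi+\mathrm{const}$, namely $r^{D-1}e^{(D-2)B}e^{b\phi}\chi'(r)$ up to normalization, should be independent of $r$ on the solution:

```python
import math

D, b = 4, 2.0                         # sample dimension and dilaton coupling
c = math.sqrt(2*(D - 1)/(D - 2))
q, qm, q3, C1 = 0.3, 0.5, 0.8, 0.2    # sample charges and constant (q^2 > 0)

def H(r):
    return (b*c/2)*math.log((1 + q/r**(D - 2))/(1 - q/r**(D - 2)))

def chi(r):
    return (2/(b*qm))*(q/math.tanh(H(r) + C1) - q3)

def flux(r, h=1e-6):
    # r^{D-1} e^{(D-2)B} e^{b phi} chi'(r): the radial Noether flux of the
    # axion shift symmetry, evaluated on the solution (chi' by central diff.)
    fpfm = 1 - q**2/r**(2*(D - 2))            # e^{(D-2)B} = f_+ f_-
    ebphi = (qm/q*math.sinh(H(r) + C1))**2    # e^{b phi}
    dchi = (chi(r + h) - chi(r - h))/(2*h)
    return r**(D - 1)*fpfm*ebphi*dchi

vals = [flux(r) for r in (1.2, 2.0, 5.0)]     # radii above r_c = q^{1/(D-2)}
assert max(vals) - min(vals) < 1e-4*abs(vals[0])
```

The constant value of this flux is proportional to $q_-$, consistent with the identification of the conserved $SL(2,R)$ charges.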
Notice also the explicit dependence on the $SL(2,R)$ charges $q_3,q_-$ and
$q_+$. The solutions \eqref{SolEq} are valid for $\mathsf{q}^2\equiv
q_3^2-q_-q_+$ positive, negative, or zero, corresponding to the three
conjugacy classes of $SL(2,R)$. We now discuss these three cases separately:
\begin{itemize}
\item
$\bf \mathsf{q}^2 >0$: Black Holes
In this case $\mathsf{q}$ is real and the solution is given by
\eqref{SolEq} with all constants real.
However, the metric becomes imaginary below a critical radius
\begin{equation} \label{rcritical}
r^{D-2} < r_c^{D-2} = \mathsf{q} \, .
\end{equation}
One can check that there is a curvature singularity at $r=r_c$, which
happens at strong string coupling: $e^{\phi(r)} \rightarrow \infty$
as $r \rightarrow r_c$.
Between $r=r_c$ and $r=\infty$, $H$ varies between $\infty$ and $0$, and with
an appropriate choice of $C_1$, i.e. a positive value of $C_1$, the scalars
have no further singularities in this domain.
Thus one might hope to have a modification of this solution by higher-order
contributions to the effective action of IIB string theory
\cite{Einhorn:2002am}. Alternatively, one can consider the possible resolution
of this singularity upon uplifting to one higher dimension.
In \cite{Bergshoeff:2004fq}, we showed that this indeed happens for the
special case of
\begin{equation}
b \geq \sqrt{\frac{2(D-2)}{D-1}} \,,
\end{equation}
equivalent to $bc \geq 2$. Upon uplifting, this becomes a non-extremal
dilatonic black hole. The case when $bc=2$ lifts up to a (non-dilatonic)
Reissner-Nordstr\"om black hole with mass and charge given by
\begin{align}
Q&= -2\,q_{-} \,, \qquad M=2\,\sqrt{\mathsf{q}^2+q_{-}^2}\qquad
\Rightarrow \qquad \mathsf{q}^2 = \frac{M^2-Q^2}{4}\, .
\label{RN-instanton-relation}
\end{align}
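The relation \eqref{RN-instanton-relation} between the instanton charges and the black hole parameters can be checked directly. The following sketch (our code; the function and variable names are ours) verifies it numerically:

```python
import math

def rn_mass_charge(qs2, qm):
    """Given qs2 = q^2 > 0 and the SL(2,R) charge q_-, return (M, Q)
    of the Reissner-Nordstrom black hole the bc = 2 solution lifts to."""
    Q = -2.0 * qm
    M = 2.0 * math.sqrt(qs2 + qm ** 2)
    return M, Q

# The relation q^2 = (M^2 - Q^2)/4 should hold identically.
M, Q = rn_mass_charge(qs2=3.0, qm=1.5)
assert abs((M ** 2 - Q ** 2) / 4.0 - 3.0) < 1e-12
```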
Hence, the ${\bf \mathsf{q}}^2>0$ solutions with $bc\geq 2$ are spatial sections of a
higher-dimensional (Lorentzian) black hole solution. The solutions with $bc < 2$
cannot be uplifted and remain singular instanton solutions in $D$ dimensions.
In Einstein frame, these geometries are singular wormholes that are pinched at
the selfdual radius $r_{{\rm sd}}=r_c$ \cite{Bergshoeff:2004fq}.
In the case of $\mathsf{q}^2 > 0$, there is an interesting limit
in which $q_- \rightarrow 0$. This yields a
solution with only two independent
integration constants, $q_+$ and $\mathsf{q}^2$. The range of
validity of this solution is equal to that of the above solution
with $q_- \neq 0$: it is well-defined for $r>r_c$, while at $r =
r_c$ the metric has a singularity and the dilaton blows up. The
singularity can be resolved upon uplifting for all
values of $bc \geq 2$ to Schwarzschild black holes, with mass
$M=2{\bf \mathsf{q}}$. More details can be found in \cite{Bergshoeff:2004fq}.
\item $\bf \mathsf{q}^2 =0$: Extremal instantons
We now consider the limit $\mathsf{q}^2 \rightarrow 0$ of
the general solution \eqref{SolEq}, after rescaling the constant $C_1$
with a factor $\mathsf{q}$ to make the limit well-defined.
Taking the limit yields the extremal solution:
\begin{equation}
\boxed{
\begin{aligned}
ds^2 = dr^2+r^2\,d\Omega_{D-1}^2 \,, \qquad
e^{b\,\phi(r)/2} = h \,, \qquad \chi(r) = \frac{2}{b}\,(h^{-1} -
\frac{q_3}{q_-}) \,, \label{instlimEq}
\end{aligned}
}
\end{equation}
where $h(r)$ is the harmonic function:
\begin{align}
h(r) = g_s^{b/2} + \frac{b\,c\,q_-}{r^{D-2}} \, ,
\end{align}
and $g_s$ is the asymptotic value of the dilaton at infinity.
This is the extremal D-instanton solution of \cite{Gibbons:1996vg}. This solution is
regular over the range $0 < r < \infty$ provided one takes both $g_s$ and $b\,c\,q_-$
positive; at $r=0$, however, the harmonic function blows up and the scalars are singular. As in the case of $\bf \mathsf{q}^2 >0$, these singular solutions
can be lifted to higher dimensions where, e.g.\ for $bc=2$, they become
extremal Reissner-Nordstr\"om black holes.
\item $\bf \mathsf{q}^2 <0$: Wormholes
In this case $\mathsf{q}$ is imaginary. To obtain a real solution
we must take $C_1$ to be imaginary. We therefore redefine
\begin{equation}
\mathsf{q}\rightarrow i\,\mathsf{\tilde{q}} \hskip 2truecm C_1
\rightarrow i\,\tilde{C_1}\, ,
\end{equation}
such that $\mathsf{\tilde{q}}$ and $\tilde{C_1}$ are real. One can
now rewrite the solution \eqref{SolEq} by using the
relation
\begin{equation}
\log(f_{+}/f_{-}) = 2 \,{\rm arctanh}(\mathsf{q}/r^{D-2})\, ,
\end{equation}
and, next, replacing the hyperbolic trigonometric functions by
trigonometric ones in such a way that no imaginary quantities
appear. We thus find that, for $\mathsf{q}^2 <0$, the general
solution \eqref{SolEq} takes the following form:
\begin{equation}
\boxed{
\begin{aligned}
ds^2 & =
(1+\frac{\mathsf{\tilde{q}}^2}{r^{2\,(D-2)}})^{2/(D-2)}\,(dr^2+r^2\,d\Omega_
{D-1}^2)\,,\\
e^{b \phi(r)} & = \left(\frac{q_{-}}{\mathsf{\tilde{q}}}\,
\sin(b\,c\,\arctan(\frac{\mathsf{\tilde{q}
}}{r^{D-2}})+\tilde{C_1}) \right)^2 \,,\\
\chi(r) & =\frac{2}{b\,q_{-}}\,(\mathsf{\tilde{q}}\,
\cot(b\,c\,\arctan(\frac{\mathsf{\tilde{q}}}
{r^{D-2}})+\tilde{C_1})-q_3)\,.
\label{qminussol}
\end{aligned}
}
\end{equation}
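The hyperbolic-to-trigonometric rewriting above rests on the identity $\log(f_{+}/f_{-})=2\,\mathrm{arctanh}(\mathsf{q}/r^{D-2})$, which is elementary to verify once $f_{\pm}$ is specified. In the sketch below we assume $f_{\pm}(r)=1\pm\mathsf{q}\,r^{2-D}$, the form suggested by the identity ($f_{\pm}$ is defined earlier in the paper, outside this excerpt), and check the identity numerically for $r>r_c$:

```python
import math

def lhs(q, r, D):
    # f_{+/-} = 1 +/- q / r^(D-2); an assumed form, since f_{+/-}
    # is introduced earlier in the paper and not in this excerpt.
    x = q / r ** (D - 2)
    return math.log((1.0 + x) / (1.0 - x))

def rhs(q, r, D):
    return 2.0 * math.atanh(q / r ** (D - 2))

# Valid whenever q / r^(D-2) < 1, i.e. outside the critical radius.
for r in (1.5, 2.0, 5.0):
    assert abs(lhs(0.8, r, D=4) - rhs(0.8, r, D=4)) < 1e-12
```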
The metric and curvature are well behaved over the range $0<r<\infty$. However, the
scalars can only be non-singular over the same range by an appropriate choice of
$\tilde{C}_1$ provided that $bc < 2$. This can be seen as follows. The $\arctan$ varies
over a range of $\pi/2$ when $r$ goes from $0$ to $\infty$. It is multiplied by $bc$ and
thus the argument of the sine varies over a range of more than $\pi$ if $bc > 2$.
Therefore, for $bc>2$ there is always a point $r_c$ such that $\chi \rightarrow \infty$
as $r \rightarrow r_c$. Note that the breakdown of the solution occurs at weak string
coupling: $e^{\phi} \rightarrow 0$ as $r \rightarrow r_c$.
This singularity is not resolved upon uplifting and corresponds to a black hole
with a naked singularity (in the case of Reissner-Nordstr\"om, $M^2<Q^2$).
The same holds for the limiting case $bc=2$. Therefore the case
$\mathsf{q}^2 <0$ only yields regular instanton solutions for $bc < 2$, together with the
condition that ${\tilde C}_1$ and ${\tilde C}_1+bc\pi/2$ are on the same branch of the cotangent.
The metric in (\ref{qminussol}) has a $Z_2$ isometry corresponding
to the reflection $r^{D-2}\rightarrow \mathsf{\tilde{q}}^2\, r^{2-D}$,
which interchanges the two asymptotically flat regions. This reflection
has a fixed point, corresponding to the selfdual radius
\begin{align}
r_{\text{sd}}^{D-2}=\mathsf{\tilde{q}} \,.
\end{align}
Furthermore, the
thickness of the neck was in \cite{Bergshoeff:2004fq} computed to be
\begin{align}
\rho_{\text{sd}}^{D-2} = 2\mathsf{\tilde{q}} \,.
\end{align}
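These statements can be checked numerically on the physical radius $R(r)=r\,(1+\tilde{\mathsf{q}}^2/r^{2(D-2)})^{1/(D-2)}$ of the $S^{D-1}$, which satisfies $R(r)^{D-2}=r^{D-2}+\tilde{\mathsf{q}}^2\,r^{2-D}$. The sketch below (our code, not part of the derivation) takes the reflection in the form $r^{D-2}\to\tilde{\mathsf{q}}^2\,r^{2-D}$, the form that leaves $R$ invariant and has the fixed point $r_{\text{sd}}^{D-2}=\tilde{\mathsf{q}}$:

```python
import math

def phys_radius(r, qt, D):
    """Physical radius of the S^{D-1} in the wormhole metric:
    R(r) = r * (1 + qt^2 / r^{2(D-2)})^{1/(D-2)}."""
    return r * (1.0 + qt ** 2 / r ** (2 * (D - 2))) ** (1.0 / (D - 2))

D, qt = 4, 2.0
r_sd = qt ** (1.0 / (D - 2))          # self-dual radius, r_sd^{D-2} = qt

# The reflection r^{D-2} -> qt^2 * r^{2-D} maps R(r) to itself ...
for r in (0.7, 1.3, 3.0):
    r_refl = (qt ** 2 / r ** (D - 2)) ** (1.0 / (D - 2))
    assert abs(phys_radius(r, qt, D) - phys_radius(r_refl, qt, D)) < 1e-12

# ... and the neck has minimal physical radius rho_sd^{D-2} = 2 qt.
assert abs(phys_radius(r_sd, qt, D) ** (D - 2) - 2.0 * qt) < 1e-12
```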
We have summarised this in the following figure:
\begin{figure}[ht]
\centerline{\epsfig{file=wormhole.eps,width=.6\textwidth}}
\caption{\it The geometry of a wormhole. The two asymptotically flat
regions at $r=0$ and $r=\infty$ are
connected via a neck with a minimal physical radius $\rho_{\text{sd}}$ at
the self-dual radius $r_{\text{sd}}$.}
\label{fig:wormhole}
\end{figure}
\end{itemize}
\section{Instanton action}
The value of the action, evaluated on the instanton solution, is a key
ingredient in the semiclassical approximation of the euclidean path integral.
In \cite{Bergshoeff:2004fq} we computed the instanton action for the three
cases, corresponding to $\mathsf{q}^2 >0, \mathsf{q}^2 =0$,
and $\mathsf{q}^2 <0$. This was done by specifying the additional
surface term added to (\ref{EuclideanAction}), which solely determines
the instanton action. This surface term
can be found from the
dual description in terms of the $(D-1)$-form field strength formulation.
We here summarise the results.
For the case when $\mathsf{q}^2 \geq 0$, the contribution
to the action coming from infinity is given by
\begin{align} \label{nonextr-inst-act-infty}
\mathcal{S}^{\infty}_{inst} = \frac{4}{b^2}\,(D-2)\,
\mathcal{V}ol(S^{D-1})\,b\,c\,\Big(\sqrt{\mathsf{q}^2+\frac{q_-^2}{g_s^b}}
\Big)\ .
\end{align}
Here we have used the relation between $C_1$ and the asymptotic value of
the dilaton, $g_s^{b}=(q_-/\mathsf{q})^2\,\sinh^2 C_1$. Notice that the
instanton action is proportional to the mass of the black hole to which
the solution uplifts in one dimension higher. Furthermore, the
result (\ref{nonextr-inst-act-infty}) also holds for $\mathsf{q}^2=0$,
which gives the lowest value of the action. The resulting instanton
action is then inversely proportional to $g_s^{b/2}$. The D-instanton
of ten-dimensional IIB corresponds to taking $b=2$. The extremal
instantons for the universal hypermultiplet action (\ref{UHM})
\cite{Theis:2002er,Davidse:2003ww}
also fall into this class: the membrane instantons correspond to $b=1$
whereas the NS-fivebrane instantons correspond~\footnote{To
compare with \cite{Theis:2002er,Davidse:2003ww}, one has to redefine the
string coupling constant by taking the square root. We correct
here a minor mistake in \cite{Bergshoeff:2004fq}, in which the membrane
and fivebrane instantons were written to correspond to $b=2$ and $b=4$.}
to $b=2$. We have here given only the contribution from infinity.
The non-extremal instantons also contribute to the action at the other
boundary, where $r=r_c$. Since the solution is singular at this point, it is
however not clear that the supergravity approximation is still valid in this
region.
The case when $\mathsf{q}^2 < 0$ is very different. For $bc<2$, these
are regular wormhole solutions with
two asymptotic
boundaries at $r=0$ and $r=\infty$ that are related by a reflection symmetry.
The wormhole action gets contributions from both these boundaries, and
the result is
\begin{align} \label{tildeq-inst-act}
\mathcal{S}_{wormhole} = \frac{4}{b^2}\,(D-2) \mathcal{V}ol(S^{D-1})\,{b\,c}\,
\mathsf{\tilde {q}}\,\Big(\cot \tilde{C}_1
-\cot (\tilde{C}_1 + bc \frac{\pi}{2})\Big)\,.
\end{align}
Because $\tilde{C}_1$ and $\tilde{C}_1 + bc\pi/2$ are on the same
branch of the cotangent, the total instanton action is manifestly
positive. One can rewrite the above result in terms of the string coupling
constant, using $g_s^{b/2}\equiv e^{b\phi_{\infty}/2}=(q_-/\mathsf{\tilde {q}})
\sin {\tilde C_1}$. For $bc$ close to $2$, the instanton
action becomes very large, and it diverges in the limit $bc\to 2$.
At that point, the wormhole solution is no longer regular.
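The positivity of the cotangent difference in (\ref{tildeq-inst-act}) when $\tilde{C}_1$ and $\tilde{C}_1+bc\pi/2$ lie on the same branch can be spot-checked numerically. The following sketch strips the overall positive prefactor and uses sample values of our choosing:

```python
import math

def wormhole_action_factor(C1, bc):
    """cot(C1) - cot(C1 + bc*pi/2): the C1-dependent factor in the
    wormhole action, with the overall positive prefactor stripped."""
    def cot(x):
        return math.cos(x) / math.sin(x)
    return cot(C1) - cot(C1 + bc * math.pi / 2.0)

bc = 1.5                     # a sample value with bc < 2, as regularity requires
# Choose C1 so that both C1 and C1 + bc*pi/2 lie in (0, pi): same branch.
for C1 in (0.1, 0.3, 0.7):
    assert C1 + bc * math.pi / 2.0 < math.pi
    assert wormhole_action_factor(C1, bc) > 0.0
```

Since the cotangent is strictly decreasing on each branch, the difference is positive whenever both arguments lie on the same branch, as the check confirms.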
\section{Wormholes in string theory}
We have seen that the condition for regular wormholes is that there exist
models for which $bc<2$. In type IIB in ten dimensions, this is not satisfied.
Toroidal compactifications of string theory only lead to values
of $b$ for which $bc\geq 2$, so no wormholes exist for these cases.
However, we have seen that for the universal hypermultiplet, which descends
from a Calabi-Yau compactification of type II strings, one can have the value $b=1$ in
$D=4$, and so $bc={\sqrt 3}<2$. The solution is then characterized by the
dilaton and the RR scalar $\varphi$ that descends from the RR three-form
gauge potential in IIA in ten dimensions.
Since the extremal case $\mathsf{q}^2 =0$ corresponds to a wrapped type
IIA euclidean membrane over a (supersymmetric) three-cycle, it is natural to
suggest that the wormhole, with $\mathsf{q}^2 < 0$, corresponds to a wrapped
non-extremal euclidean D2 brane. \\[3mm]
{\bf Acknowledgement}
It is a pleasure to thank Mathijs de Vroome for stimulating discussions.
This work is supported in part by the Spanish grant BFM2003-01090 and the
European Community's Human Potential Programme under contract
HPRN-CT-2000-00131 Quantum Spacetime, in which E.B. and D.R.~are associated
to Utrecht University. The work of U.G.~is funded by the Swedish Research
Council.
\section{Introduction}\label{LEH.sec.0}
Given a fixed non-negative number $t$, consider the hyperplanes of $\mathbb{R}^d$ whose distance to the center of the hypercube $[0,1]^d$ is equal to $t$. In other words, these hyperplanes are tangent to the sphere of radius $t$ whose center coincides with that of $[0,1]^d$. Significant attention has been devoted to identifying, among these hyperplanes, the ones whose intersection with $[0,1]^d$ has the largest or the smallest possible $(d-1)$-dimensional volume. When $t$ is equal to $0$ (and the hyperplanes contain the center of $[0,1]^d$), this was solved by Keith Ball \cite{Ball1986} who proved that the $(d-1)$\nobreakdash-dimensional volume of the intersection with the hypercube $[0,1]^d$ of a hyperplane $H$ through its center is maximal precisely when $H$ is orthogonal to an order $2$ sub-diagonal of that hypercube, thereby solving a problem posed by Douglas Hensley \cite{Hensley1979}. Here, an \emph{order $n$ sub-diagonal of $[0,1]^d$} means the line segment between the centers of two opposite $(d-n)$\nobreakdash-dimensional faces of $[0,1]^d$. The order $d$ sub-diagonals of the hypercube are also simply referred to as its diagonals. The result from \cite{Ball1986} relies on a formula for the $(d-1)$\nobreakdash-dimensional volume of the intersection of an arbitrary hyperplane with the hypercube $[0,1]^d$ that takes the form of an improper integral. That formula, given below in Theorem \ref{LEH.sec.1.thm.1}, already appears in George P{\'o}lya's work \cite{Polya1913} and is discussed, for instance in \cite{Berger2010,FrankRiede2012,KonigKoldobsky2011,Zong2006}.
The case when $t$ is positive (and less than $\sqrt{d}/2$, the circumradius of the hypercube) was later considered by Vitali Milman who asked \cite{Konig2021b,KonigKoldobsky2011} whether the minimal and the maximal $(d-1)$-dimensional volume of $H\cap[0,1]^d$ is always achieved when $H$ is orthogonal to a diagonal or a sub-diagonal of $[0,1]^d$---here and in the remainder of the introduction, $H$ denotes a generic hyperplane at distance $t$ from the center of $[0,1]^d$. This question was solved, in dimensions $2$ and $3$ by Hermann K{\"o}nig and Alexander Koldobsky \cite{KonigKoldobsky2011}. In higher dimensions, it was shown by James Moody, Corey Stone, David Zach, and Artem Zvavitch~\cite{MoodyStoneZachZvavitch2013} that, if $t$ is greater than $\sqrt{d-1}/2$ (the distance between the center of the hypercube $[0,1]^d$ and the midpoint of any of its edges), then the volume of $H\cap[0,1]^d$ is maximal precisely when $H$ is orthogonal to a diagonal of $[0,1]^d$. It was further shown by Hermann K{\"o}nig \cite{Konig2021} that, if $d$ is at least $5$ and
$$
\frac{\sqrt{d}}{2}-\frac{1}{\sqrt{d}}<t<\frac{\sqrt{d}}{2}\mbox{,}
$$
then that volume is strictly locally maximal when $H$ is orthogonal to a diagonal of $[0,1]^d$. These higher dimensional results are based on an expression for the $(d-1)$-volume of $H\cap[0,1]^d$ that holds when $H$ separates a single vertex of the hypercube from all of its other vertices. A generalization of that expression to arbitrary hyperplanes is established in \cite{Pournin2021}. This formula, an alternative to the improper integral form, is a sum over the vertices of the hypercube on one side of $H$ (see Theorem \ref{LEH.sec.1.thm.2} below) that made it possible to extend the above mentioned result from \cite{MoodyStoneZachZvavitch2013} to when $d\geq5$ and $t$ is greater than $\sqrt{d-2}/2$ (the distance from the center of $[0,1]^d$ to that of a square face) \cite{Pournin2021}.
That problem has also been considered in the case of convex bodies other than the hypercube. The case of cross-polytopes is studied in \cite{LiuTkocz2020,Konig2021,MeyerPajor1988} and the more general case of balls for the $q$-norms in \cite{Koldobsky2005,MeyerPajor1988}. The corresponding problem for the regular simplices is considered in \cite{Konig2021,Webb1996}. Similar problems regarding the $d$-dimensional volume of the portion of the hypercube between two parallel hyperplanes or for the $(d-2)$-dimensional volume of the boundary of hypercube sections are studied, for instance in \cite{BartheKoldobsky2003,Konig2021,KonigKoldobsky2011}.
In the spirit of Vitali Milman's question, the (strict) local extremality of the $(d-1)$\nobreakdash-dimensional volume of $H\cap[0,1]^d$ is studied here when $H$ is orthogonal to a diagonal or a sub-diagonal of $[0,1]^d$ for all dimensions greater than $3$. The first result of the article deals with the case when $t$ is close to $0$.
\begin{thm}\label{LEH.sec.0.thm.0.2}
If $d\geq4$ and $t$ is close enough to $0$, then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ has strict local maxima when $H$ is orthogonal to a diagonal of $[0,1]^d$ and to any of its sub-diagonals of order at least $4$.
\end{thm}
In this statement, \emph{close enough to $0$} really means that $t$ belongs to an interval of the form $[0,\varepsilon[$ where $\varepsilon$ is a positive number (that depends on $d$). However, $\varepsilon$ is not made explicit, as the theorem partly follows from a topological argument. The second result extends the local maximality theorem from \cite{Konig2021}. Observe that each of the higher dimensional results mentioned above is stated for ranges of values of $t$ that go to $0$ when $d$ goes to infinity. In the following theorem, the corresponding range grows like $\sqrt{d}/\log{d}$.
\begin{thm}\label{LEH.sec.0.thm.0.1}
If $d\geq4$ and $t$ satisfies
$$
\frac{\sqrt{d}}{2}-\frac{1}{\sqrt{d}}\min\!\left\{\frac{d-1}{4},\frac{d^{1/(d-3)}}{d^{1/(d-3)}-1}\right\}\!<t<\frac{\sqrt{d}}{2}\mbox{,}
$$
then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ has a strict local maximum when $H$ is orthogonal to a diagonal of $[0,1]^d$.
\end{thm}
While Theorem \ref{LEH.sec.0.thm.0.2} is stated indifferently for the diagonals of the hypercube and its lower order sub-diagonals, Theorem \ref{LEH.sec.0.thm.0.1} does not give a hint of what happens at sub-diagonals. According to the third result of the article, they behave in a very different way than the diagonals.
\begin{thm}\label{LEH.sec.0.thm.0.3}
If $4\leq{n}<d$ and $t$ satisfies
\begin{equation}\label{LEH.sec.0.thm.0.3.eq.1}
\frac{\sqrt{n}}{2}-\frac{1}{\sqrt{n}}\min\!\left\{\frac{n-1}{4},\frac{n^{1/(n-3)}}{n^{1/(n-3)}-1}\right\}\!<t<\frac{\sqrt{n}}{2}\mbox{,}
\end{equation}
then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ does not have a local extremum when $H$ is orthogonal to an order $n$ sub-diagonal of $[0,1]^d$.
\end{thm}
In fact, Theorems \ref{LEH.sec.0.thm.0.2}, \ref{LEH.sec.0.thm.0.1}, and \ref{LEH.sec.0.thm.0.3} are obtained as consequences of more general results, valid for all possible values of $t$. While these results can be stated in different ways (largely because of the different possible expressions for the volume of $H\cap[0,1]^d$), two statements are particularly noteworthy. In these statements, $p_{i,d}$ is the quadratic function of $z$ defined as
\begin{equation}\label{LEH.sec.0.thm.1.eq.-1}
p_{i,d}(z)=\frac{i(d-i)}{d-1}-\!\left(\frac{d}{2}-i\right)\!\frac{z-i}{d-2}+\frac{2d(z-i)^2}{(d-1)(d-2)}\mbox{.}
\end{equation}
The local extremality for the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ when $H$ is orthogonal to the diagonals of $[0,1]^d$ can be obtained as follows.
\begin{thm}\label{LEH.sec.0.thm.1}
Assume that $d\geq4$. If
\begin{equation}\label{LEH.sec.0.thm.1.eq.0}
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{d\choose{i}}(z-i)^{d-3}p_{i,d}(z)
\end{equation}
is negative, where
$$
z=\frac{d}{2}-t\sqrt{d}\mbox{,}
$$
then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ has a strict local maximum when $H$ is orthogonal to a diagonal of the hypercube $[0,1]^d$. If, however, (\ref{LEH.sec.0.thm.1.eq.0}) is positive, then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ has a strict local minimum when $H$ is orthogonal to a diagonal of $[0,1]^d$.
\end{thm}
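The sign of (\ref{LEH.sec.0.thm.1.eq.0}) is straightforward to evaluate numerically for a given $d$ and $t$. The following sketch (our code; the function names are ours) confirms, for instance, that the sum is negative for $d=4$ and $t$ close to $0$, consistent with Theorem \ref{LEH.sec.0.thm.0.2}:

```python
import math

def p(i, d, z):
    """The quadratic p_{i,d}(z) defined in the statement above."""
    return (i * (d - i) / (d - 1.0)
            - (d / 2.0 - i) * (z - i) / (d - 2.0)
            + 2.0 * d * (z - i) ** 2 / ((d - 1.0) * (d - 2.0)))

def diagonal_sum(d, t):
    """The alternating sum whose sign decides between a strict local
    maximum (negative) and minimum (positive) at a diagonal."""
    z = d / 2.0 - t * math.sqrt(d)
    return sum((-1) ** i * math.comb(d, i) * (z - i) ** (d - 3) * p(i, d, z)
               for i in range(int(math.floor(z)) + 1))

# For d = 4 and t close to 0 the sum is negative: strict local maximum.
assert diagonal_sum(4, 0.05) < 0.0
assert diagonal_sum(4, 0.0) < 0.0
```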
Because of its piecewise-polynomial nature, (\ref{LEH.sec.0.thm.1.eq.0}) can only possibly vanish at finitely-many values of $z$. Therefore, when $t$ ranges within the interval $[0,\sqrt{d}/2[$, the $(d-1)$\nobreakdash-dimensional volume of $H\cap[0,1]^d$ is almost always strictly locally extremal when $H$ is orthogonal to a diagonal of $[0,1]^d$.
In practice, for any fixed and reasonably low dimension $d$, Theorem \ref{LEH.sec.0.thm.1} makes it possible to determine completely how the local extremality of that volume varies when $H$ is orthogonal to a diagonal of $[0,1]^d$ over the whole interval $[0,\sqrt{d}/2[$ for $t$. It suffices to estimate the roots of $\lceil{d/2}\rceil$ polynomials, each of degree $d-1$. This will be illustrated at the end of the article. A theorem similar to Theorem \ref{LEH.sec.0.thm.1} is established for lower order sub-diagonals.
\begin{thm}\label{LEH.sec.0.thm.2}
Assume that $4\leq{n}<d$. If
\begin{equation}\label{LEH.sec.0.thm.2.eq.0}
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{n\choose{i}}(z-i)^{n-3}p_{i,n}(z)
\end{equation}
and
\begin{equation}\label{LEH.sec.0.thm.2.eq.1}
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{n\choose{i}}(z-i)^{n-3}
\end{equation}
are both negative, where
$$
z=\frac{n}{2}-t\sqrt{n}\mbox{,}
$$
then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ has a strict local maximum when $H$ is orthogonal to an order $n$ sub-diagonal of $[0,1]^d$. If on the contrary, (\ref{LEH.sec.0.thm.2.eq.0}) and (\ref{LEH.sec.0.thm.2.eq.1}) are both positive, then that volume has a strict local minimum when $H$ is orthogonal to an order $n$ sub-diagonal of $[0,1]^d$.
\end{thm}
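Both sums are as easy to evaluate numerically. The sketch below (our code) illustrates Theorem \ref{LEH.sec.0.thm.0.3} for $n=4$: in the range (\ref{LEH.sec.0.thm.0.3.eq.1}), which reads $5/8<t<1$ when $n=4$, the two sums take opposite signs, so no local extremum occurs:

```python
import math

def p(i, n, z):
    """The quadratic p_{i,n}(z) from (the analogue of) the definition above."""
    return (i * (n - i) / (n - 1.0)
            - (n / 2.0 - i) * (z - i) / (n - 2.0)
            + 2.0 * n * (z - i) ** 2 / ((n - 1.0) * (n - 2.0)))

def sums(n, t):
    """The two alternating sums from the sub-diagonal criterion."""
    z = n / 2.0 - t * math.sqrt(n)
    s0 = sum((-1) ** i * math.comb(n, i) * (z - i) ** (n - 3) * p(i, n, z)
             for i in range(int(math.floor(z)) + 1))
    s1 = sum((-1) ** i * math.comb(n, i) * (z - i) ** (n - 3)
             for i in range(int(math.floor(z)) + 1))
    return s0, s1

# For n = 4 and t = 0.8 (inside 5/8 < t < 1), the sums have opposite signs.
s0, s1 = sums(4, 0.8)
assert s0 < 0.0 < s1
```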
Another theorem will be proven as well, providing a condition (roughly, that (\ref{LEH.sec.0.thm.2.eq.0}) and (\ref{LEH.sec.0.thm.2.eq.1}) have opposite signs) under which the considered volume is not strictly locally extremal at the sub-diagonals of the hypercube. Together with Theorem \ref{LEH.sec.0.thm.2}, it makes it possible to determine in practice how the local extremality of the $(d-1)$\nobreakdash-dimensional volume of $H\cap[0,1]^d$ varies when $H$ is orthogonal to a sub-diagonal of $[0,1]^d$ of fixed, reasonably low order $n$ when $t$ ranges within the whole interval $[0,\sqrt{n}/2[$. As for the diagonals of the hypercube, this will be done at the end of the article, relying in part on symbolic computations.
In Section \ref{LEH.sec.1}, the above mentioned formulas for the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ are recalled, and their partial derivatives with respect to the orientation of $H$ are expressed in the two possible forms of an improper integral and a discrete sum over the vertices of the hypercube. The regularity properties of this volume as a function of the orientation of $H$ will be established using the improper integral form, which is why most results in this article are given for dimensions at least $4$ or sub-diagonals of order at least $4$ (as is apparent from the statement of the above theorems). Theorem~\ref{LEH.sec.0.thm.0.2} is proven in Section~\ref{LEH.sec.2} using the second order sufficiency conditions of the constrained Lagrange multipliers theorem and the improper integral form of the partial derivatives of the volume of $H\cap[0,1]^d$. Theorems \ref{LEH.sec.0.thm.0.1} and \ref{LEH.sec.0.thm.1} are both established in Section \ref{LEH.sec.3} using the same Lagrange multipliers strategy, but with the discrete sum form of the partial derivatives. Section \ref{LEH.sec.3.5} is devoted to studying local extremality at the sub-diagonals of the hypercube when $t$ is large or, equivalently, when $H$ is far away from the center of the hypercube. Theorems \ref{LEH.sec.0.thm.0.3} and \ref{LEH.sec.0.thm.2} are proven in that section using the second order necessary conditions of the constrained Lagrange multipliers theorem. Finally, the local extremality of the $(d-1)$\nobreakdash-dimensional volume of $H\cap[0,1]^d$ is studied in Section \ref{LEH.sec.4} when $t$ ranges within the whole interval $[0,\sqrt{d}/2[$ at the diagonals of low-dimensional hypercubes and at low order sub-diagonals of hypercubes of arbitrary dimension. A simple, consistent behavior is observed in these cases that is likely to carry over to higher dimensions and sub-diagonal orders.
\section{Partial derivatives of section volumes}\label{LEH.sec.1}
From now on, $a$ denotes a non-zero vector from $\mathbb{R}^d$ and $b$ is a real number. Denote by $H$ the hyperplane of $\mathbb{R}^d$ made up of the points $x$ such that $a\mathord{\cdot}x=b$. The following well-known theorem (see, for instance \cite{Ball1986,Berger2010,FrankRiede2012,KonigKoldobsky2011,Polya1913,Zong2006}) provides an expression for the $(d-1)$-dimensional volume of the intersection $H\cap[0,1]^d$ that takes the form of an improper integral.
\begin{thm}\label{LEH.sec.1.thm.1}
Assume that $a$ has at least two non-zero coordinates. In that case, the $(d-1)$\nobreakdash-dimensional volume of $H\cap[0,1]^d$ is
$$
\frac{\|a\|}{\pi}\int_{-\infty}^{+\infty}\!\left(\prod_{i=1}^d\frac{\sin(a_iu)}{a_iu}\right)\!\cos\!\left(2\!\left[b-\frac{\sigma(a)}{2}\right]\!u\right)\!du\mbox{.}
$$
\end{thm}
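For illustration, the formula of Theorem \ref{LEH.sec.1.thm.1} can be evaluated by naive numerical quadrature. The sketch below (our code, truncating the improper integral to a large finite window) recovers the length $\sqrt{2}$ of the central section of the unit square orthogonal to its diagonal:

```python
import math

def section_volume_integral(a, b, N=200.0, steps=400000):
    """Midpoint approximation of the improper-integral formula for the
    (d-1)-volume of {x : a.x = b} intersected with [0,1]^d."""
    norm = math.sqrt(sum(ai * ai for ai in a))
    shift = 2.0 * (b - sum(a) / 2.0)   # 2*(b - sigma(a)/2) in the cosine
    def sinc(x):
        return 1.0 if x == 0.0 else math.sin(x) / x
    h = 2.0 * N / steps
    total = 0.0
    for k in range(steps):
        u = -N + (k + 0.5) * h
        prod = 1.0
        for ai in a:
            prod *= sinc(ai * u)
        total += prod * math.cos(shift * u)
    return norm / math.pi * total * h

# Central section of [0,1]^2 orthogonal to the diagonal: the segment
# x + y = 1 has length sqrt(2).
v = section_volume_integral([1.0, 1.0], 1.0)
assert abs(v - math.sqrt(2.0)) < 1e-2
```

The accuracy is limited by the truncation of the integral; the tails decay like $1/u^2$, so the window $[-200,200]$ suffices for two-digit agreement.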
Note that terms of the form $\sin(x)/x$ appear in this expression that are indeterminate when $x$ is equal to $0$. However, under the convention that
$$
\frac{\sin(0)}{0}=1\mbox{,}
$$
which is adopted in the sequel, $\sin(x)/x$ becomes a twice continuously differentiable function of $x$ on $\mathbb{R}$. It will be useful to keep in mind that
$$
\frac{d}{dx}\frac{\sin(x)}{x}
$$
vanishes when $x=0$ and that
$$
\frac{d^2}{dx^2}\frac{\sin(x)}{x}
$$
is equal to $-1/3$ when $x=0$.
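These two values can be confirmed by finite differences (a quick numerical sketch, not needed for the proofs):

```python
import math

def sinc(x):
    # The convention sin(0)/0 = 1 adopted above.
    return 1.0 if x == 0.0 else math.sin(x) / x

h = 1e-4
# The first derivative of sin(x)/x vanishes at x = 0 ...
d1 = (sinc(h) - sinc(-h)) / (2.0 * h)
assert abs(d1) < 1e-9
# ... and the second derivative there equals -1/3.
d2 = (sinc(h) - 2.0 * sinc(0.0) + sinc(-h)) / h ** 2
assert abs(d2 + 1.0 / 3.0) < 1e-4
```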
When all the coordinates of $a$ are non-zero, the $(d-1)$\nobreakdash-dimensional volume of $H\cap[0,1]^d$ can alternatively be expressed as a sum over a subset of the vertices of the hypercube. This alternative expression, stated in the following theorem, is proven in \cite{Pournin2021} as a straightforward consequence of a result from \cite{BarrowSmith1979}. From now on, $\sigma(x)$ denotes the sum of the coordinates of a vector $x$ from $\mathbb{R}^d$ and $\pi(x)$ the product of its non-zero coordinates. The latter notation slightly differs from the corresponding notation used in \cite{Pournin2021}, where $\pi(x)$ denotes the product of all the coordinates of $x$ and not only the non-zero ones. This distinction does not play a role in the following statement, but it will later on.
\begin{thm}\label{LEH.sec.1.thm.2}
If $d$ is at least $2$ and all the coordinates of $a$ are non-zero, then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ is
\begin{equation}\label{HS.sec.1.thm.2.eq.0}
\sum\frac{(-1)^{\sigma(v)}\|a\|(b-a\mathord{\cdot}v)^{d-1}}{(d-1)!\pi(a)}\mbox{,}
\end{equation}
where the sum is over the vertices $v$ of $[0,1]^d$ such that $a\mathord{\cdot}v\leq{b}$.
\end{thm}
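Theorem \ref{LEH.sec.1.thm.2} translates directly into code. The sketch below (our implementation, with $\pi(a)$ the product of the coordinates of $a$, all assumed non-zero) recovers the area $3\sqrt{3}/4$ of the regular hexagonal central section of $[0,1]^3$ orthogonal to a diagonal:

```python
import math
from itertools import product

def section_volume_vertex_sum(a, b):
    """(d-1)-volume of {x : a.x = b} cap [0,1]^d via the sum over the
    vertices v of the cube with a.v <= b (all coordinates of a nonzero)."""
    d = len(a)
    norm = math.sqrt(sum(ai * ai for ai in a))
    pi_a = 1.0
    for ai in a:
        pi_a *= ai
    total = 0.0
    for v in product((0, 1), repeat=d):
        av = sum(ai * vi for ai, vi in zip(a, v))
        if av <= b:
            total += (-1) ** sum(v) * (b - av) ** (d - 1)
    return norm * total / (math.factorial(d - 1) * pi_a)

# Central diagonal section of [0,1]^3: a regular hexagon of area 3*sqrt(3)/4.
v = section_volume_vertex_sum([1.0, 1.0, 1.0], 1.5)
assert abs(v - 3.0 * math.sqrt(3.0) / 4.0) < 1e-12
```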
While Theorems \ref{LEH.sec.1.thm.1} and \ref{LEH.sec.1.thm.2} are valid for arbitrary $a$ and $b$, it will be assumed from now on that $a$ belongs to $[0,+\infty[^d\mathord{\setminus}\{0\}$ in order to simplify the analysis. Note that this is without loss of generality thanks to the symmetries of the hypercube. In addition, $b$ will be expressed as
\begin{equation}\label{LEH.sec.1.eq.1}
b=\frac{\sigma(a)}{2}-t\mbox{,}
\end{equation}
where $t$ is a fixed number satisfying
$$
0\leq{t}<\frac{\sqrt{d}}{2}\mbox{.}
$$
In particular, $b$ will be thought of in the sequel as a function of $a$. It will be important to keep in mind that, while $t$ controls the distance between $H$ and the center of $[0,1]^d$, it only coincides with that distance when $\|a\|=1$. From now on, the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ is denoted by $V$ and, just as $b$, this volume is thought of as a function of $a$ on $[0,+\infty[^d\mathord{\setminus}\{0\}$. In \cite{Pournin2021}, this function is shown to be continuous at every point of $[0,+\infty[^d$ with at least two non-zero coordinates and twice continuously differentiable on the open orthant $]0,+\infty[^d$. Here, the following stronger statement will be needed, that is obtained as a consequence of Theorem \ref{LEH.sec.1.thm.1}.
\begin{cor}\label{LEH.sec.1.cor.1}
If $d\geq3$, then $V$ is a continuously differentiable function of $a$ at every point of $[0,+\infty[^d$ with at least three non-zero coordinates. Moreover, if $j$ is an integer satisfying $1\leq{j}\leq{d}$ then, at any such point,
$$
\frac{\partial}{\partial{a_j}}\frac{V}{\|a\|}=\frac{1}{\pi}\int_{-\infty}^{+\infty}\frac{\partial}{\partial{a_j}}\!\left(\prod_{i=1}^d\frac{\sin(a_iu)}{a_iu}\right)\!\cos(2tu)du\mbox{.}
$$
\end{cor}
\begin{proof}
Since $\|a\|$ is a continuously differentiable function of $a$ on $\mathbb{R}^d$, it suffices to show that the partial derivatives of $V/\|a\|$ all exist and are continuous functions of $a$ at the considered points. Thanks to the symmetries of the hypercube, one just needs to prove the slightly stronger statement that the partial derivative of $V/\|a\|$ with respect to $a_3$ exists and is a continuous function of $a$ at every point of $\mathbb{R}^d$ whose first two coordinates are positive.
According to Theorem \ref{LEH.sec.1.thm.1} and to (\ref{LEH.sec.1.eq.1}), at any such point,
$$
\frac{V}{\|a\|}=\int_{-\infty}^{+\infty}\!\left(\prod_{i=1}^d\frac{\sin(a_iu)}{a_iu}\right)\!\cos(2tu)du\mbox{.}
$$
The result will be obtained as a consequence of Leibniz's rule on differentiation under the integral sign, according to which
$$
\frac{\partial}{\partial{a_3}}\int_{-\infty}^{+\infty}\!\left(\prod_{i=1}^d\frac{\sin(a_iu)}{a_iu}\right)\!\cos(2tu)du=\int_{-\infty}^{+\infty}\frac{\partial}{\partial{a_3}}\!\left(\prod_{i=1}^d\frac{\sin(a_iu)}{a_iu}\right)\!\cos(2tu)du\mbox{.}
$$
However, this rule requires that
\begin{equation}\label{LEH.sec.1.cor.1.eq.1}
\int_{-n}^{+n}\frac{\partial}{\partial{a_3}}\!\left(\prod_{i=1}^d\frac{\sin(a_iu)}{a_iu}\right)\!\cos(2tu)du
\end{equation}
converges uniformly when $n$ goes to infinity in a neighborhood of the considered point from $]0,+\infty[^2\mathord{\times}\mathbb{R}^{d-2}$. Before proceeding with the proof of this uniform convergence property, observe that this will not only provide the existence of the partial derivative but also its continuity. Consider a closed, $d$-dimensional ball $B$ centered at a point contained in $]0,+\infty[^2\mathord{\times}\mathbb{R}^{d-2}$. Pick the radius of $B$ small enough so that it is entirely contained in $]0,+\infty[^2\mathord{\times}\mathbb{R}^{d-2}$. As $\sin(x)/x$ and its derivative with respect to $x$ are bounded functions of $x$ on $\mathbb{R}$, there exists a positive number $M$ such that, for all $u$ in $\mathbb{R}$ and all $a$ in $B$,
$$
\left|\frac{\partial}{\partial{a_3}}\!\left(\prod_{i=1}^d\frac{\sin(a_iu)}{a_iu}\right)\!\cos(2tu)\right|\!\leq{M}\!\left|\frac{\sin(a_1u)\sin(a_2u)}{a_1a_2u^2}\right|\!\mbox{.}
$$
In turn, for any $u$ in $\mathbb{R}$ and $a$ in $B$,
$$
\left|\frac{\sin(a_1u)\sin(a_2u)}{a_1a_2u^2}\right|\!\leq\min\!\left\{1,\frac{1}{mu^2}\right\}\!\mbox{,}
$$
where $m$ is the smallest possible value for the product of the first two coordinates of a point contained in $B$. Therefore, by Cauchy's criterion, (\ref{LEH.sec.1.cor.1.eq.1}) converges uniformly on $B$ when $n$ goes to infinity, as desired.
\end{proof}
By an argument similar to the proof of Corollary \ref{LEH.sec.1.cor.1}, one can prove the following, also a consequence of Theorem \ref{LEH.sec.1.thm.1}.
\begin{cor}\label{LEH.sec.1.cor.2}
If $d\geq4$, then $V$ is a twice continuously differentiable function of $a$ at every point of $[0,+\infty[^d$ with at least four non-zero coordinates. Moreover, if $j$ and $k$ are integers satisfying $1\leq{j}\leq{k}\leq{d}$ then, at any such point,
$$
\frac{\partial^2}{\partial{a_j}\partial{a_k}}\frac{V}{\|a\|}=\frac{1}{\pi}\int_{-\infty}^{+\infty}\frac{\partial^2}{\partial{a_j}\partial{a_k}}\!\left(\prod_{i=1}^d\frac{\sin(a_iu)}{a_iu}\right)\!\cos(2tu)du\mbox{.}
$$
\end{cor}
\begin{proof}
As in the proof of Corollary \ref{LEH.sec.1.cor.1}, by the symmetries of the hypercube, it suffices to show the slightly stronger statement that the partial derivative
$$
\frac{\partial^2}{\partial{a_3}\partial{a_k}}\frac{V}{\|a\|}
$$
exists and is a continuous function of $a$ when $k$ is equal to $3$ or to $4$ at every point of $\mathbb{R}^d$ whose first two coordinates are positive. This follows from the same argument as for Corollary \ref{LEH.sec.1.cor.1}, using the observation that $\sin(x)/x$ and its first two derivatives with respect to $x$ are bounded functions of $x$ on $\mathbb{R}$.
\end{proof}
The remainder of the section is devoted to establishing alternative expressions for the partial derivatives of $V/\|a\|$ using Theorem \ref{LEH.sec.1.thm.2} instead of Theorem \ref{LEH.sec.1.thm.1}. From now on, given a subset $X$ of $\mathbb{R}$ and an integer $n$ such that $1\leq{n}\leq{d-1}$, the cartesian product $X^n$ is identified with the subset of $X^n\mathord{\times}\mathbb{R}^{d-n}$ made up of the points whose last $d-n$ coordinates are equal to $0$.
\begin{lem}\label{LEH.sec.1.lem.1}
Consider an integer $n$ satisfying $3\leq{n}\leq{d}$ and an integer $j$. If $1\leq{j}\leq{n}$, then at any point from $]0,+\infty[^n$,
$$
\frac{\partial}{\partial{a_j}}\frac{V}{\|a\|}=\sum\frac{(-1)^{\sigma(v)}}{(n-1)!}\frac{\partial}{\partial{a_j}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}\mbox{,}
$$
where the sum is over the vertices $v$ of $[0,1]^n$ that satisfy $a\mathord{\cdot}v\leq{b}$. If, however $n<j\leq{d}$ then, at any point from $]0,+\infty[^n$,
$$
\frac{\partial}{\partial{a_j}}\frac{V}{\|a\|}=0\mbox{.}
$$
\end{lem}
\begin{proof}
Note that, when $a$ is a point in $]0,+\infty[^n$, the $n$-dimensional volume of $H\cap[0,1]^n$ coincides with $V$. As the first $n$ coordinates of $a$ are non-zero, one therefore obtains from Theorem~\ref{LEH.sec.1.thm.2} that
\begin{equation}\label{LEH.sec.1.lem.1.eq.1}
\frac{V}{\|a\|}=\sum\frac{(-1)^{\sigma(v)}(b-a\mathord{\cdot}v)^{n-1}}{(n-1)!\pi(a)}\mbox{,}
\end{equation}
where the sum is over the vertices $v$ of $[0,1]^n$ satisfying $a\mathord{\cdot}v\leq{b}$.
Assume that $1\leq{j}\leq{n}$ and observe that, if a vertex $v$ of the hypercube $[0,1]^n$ satisfies $a\mathord{\cdot}v=b$, then the partial derivative
$$
\frac{\partial}{\partial{a_j}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}
$$
vanishes at $a$ because $n\geq3$. The desired expression for the partial derivative of $V/\|a\|$ with respect to $a_j$ therefore immediately follows from (\ref{LEH.sec.1.lem.1.eq.1}).
Now assume that $n<j\leq{d}$ and recall that $n\geq3$. Hence, according to Corollary \ref{LEH.sec.1.cor.1}, at any point $a$ of the orthant $]0,+\infty[^n$,
$$
\frac{\partial}{\partial{a_j}}\frac{V}{\|a\|}=\frac{1}{\pi}\int_{-\infty}^{+\infty}\frac{\partial}{\partial{a_j}}\!\left(\prod_{i=1}^d\frac{\sin(a_iu)}{a_iu}\right)\!\cos(2tu)du\mbox{.}
$$
Recall that, when $x$ is equal to $0$,
$$
\frac{d}{dx}\frac{\sin(x)}{x}
$$
vanishes. As $a_j=0$ for any point $a$ in $]0,+\infty[^n$,
$$
\frac{\partial}{\partial{a_j}}\!\left(\prod_{i=1}^d\frac{\sin(a_iu)}{a_iu}\right)=0
$$
at any such point and for any $u$ in $\mathbb{R}$. Hence, the partial derivative of $V/\|a\|$ with respect to $a_j$ vanishes at any point of $]0,+\infty[^n$, as desired.
\end{proof}
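As a sanity check (outside the paper's formal development), the identity (\ref{LEH.sec.1.lem.1.eq.1}) can be compared numerically with the integral representation of Corollary \ref{LEH.sec.1.cor.1}. The sketch below assumes the conventions $b=(a_1+\cdots+a_n)/2-t$ and $\pi(a)=a_1\cdots{a_n}$, read off from the function $h_v$ used later in the proof of Lemma \ref{LEH.sec.1.lem.3}:

```python
# Numerical spot-check (not part of the proof): the vertex sum of
# (LEH.sec.1.lem.1.eq.1) against the Fourier-type integral of
# Corollary LEH.sec.1.cor.1. Assumed conventions:
# b = (a_1 + ... + a_n)/2 - t and pi(a) = a_1 * ... * a_n.
from itertools import product
from math import factorial
import numpy as np

def vertex_sum(a, t):
    n = len(a)
    b = sum(a) / 2 - t
    total = 0.0
    for v in product((0, 1), repeat=n):
        s = b - sum(ai * vi for ai, vi in zip(a, v))
        if s >= 0:  # vertices with a.v <= b
            total += (-1) ** sum(v) * s ** (n - 1)
    return total / (factorial(n - 1) * float(np.prod(a)))

def fourier_integral(a, t, limit=400.0, step=1e-3):
    # Midpoint rule; the integrand is even, so integrate over [0, limit].
    u = np.arange(step / 2, limit, step)
    integrand = np.prod([np.sinc(ai * u / np.pi) for ai in a], axis=0)
    return (2 / np.pi) * float(np.sum(integrand * np.cos(2 * t * u))) * step

a, t = (0.9, 0.8, 0.7, 0.6), 0.1
print(abs(vertex_sum(a, t) - fourier_integral(a, t)))  # close to 0
```

Here `np.sinc(x)` computes $\sin(\pi x)/(\pi x)$, whence the rescaling by $\pi$.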
The following lemma is proven using a similar argument.
\begin{lem}\label{LEH.sec.1.lem.2}
Consider an integer $n$ satisfying $4\leq{n}\leq{d}$ and two integers $j$ and $k$. If $1\leq{j}\leq{k}\leq{n}$, then at any point from $]0,+\infty[^n$,
$$
\frac{\partial^2}{\partial{a_j}\partial{a_k}}\frac{V}{\|a\|}=\sum\frac{(-1)^{\sigma(v)}}{(n-1)!}\frac{\partial^2}{\partial{a_j}\partial{a_k}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}\mbox{,}
$$
where the sum is over the vertices $v$ of $[0,1]^n$ that satisfy $a\mathord{\cdot}v\leq{b}$. If, however, $1\leq{j}<k$ and $n<k\leq{d}$ then, at any such point,
$$
\frac{\partial^2}{\partial{a_j}\partial{a_k}}\frac{V}{\|a\|}=0\mbox{.}
$$
\end{lem}
\begin{proof}
As in the proof of Lemma \ref{LEH.sec.1.lem.1}, when $a$ is a point from $]0,+\infty[^n$,
\begin{equation}\label{LEH.sec.1.lem.2.eq.1}
\frac{V}{\|a\|}=\sum\frac{(-1)^{\sigma(v)}(b-a\mathord{\cdot}v)^{n-1}}{(n-1)!\pi(a)}\mbox{,}
\end{equation}
where the sum is over the vertices $v$ of $[0,1]^n$ satisfying $a\mathord{\cdot}v\leq{b}$.
Moreover, when $1\leq{j}\leq{k}\leq{n}$, and $v$ is a vertex of the hypercube $[0,1]^n$ such that $a\mathord{\cdot}v=b$,
$$
\frac{\partial^2}{\partial{a_j}\partial{a_k}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}
$$
vanishes at $a$ because $n\geq4$ and the result follows from (\ref{LEH.sec.1.lem.2.eq.1}).
Now assume that $1\leq{j}<k$ and $n<k\leq{d}$. In that case,
$$
\frac{\partial^2}{\partial{a_j}\partial{a_k}}\!\left(\prod_{i=1}^d\frac{\sin(a_iu)}{a_iu}\right)
$$
vanishes at any point $a$ contained in $]0,+\infty[^n$ and for any number $u$ in $\mathbb{R}$ because for any such point, $a_k=0$ and, therefore
$$
\frac{\partial}{\partial{a_k}}\frac{\sin(a_ku)}{a_ku}=0\mbox{.}
$$
The result then immediately follows from Corollary \ref{LEH.sec.1.cor.2}.
\end{proof}
Observe that Lemma \ref{LEH.sec.1.lem.2} provides all the second order partial derivatives of $V/\|a\|$, except the ones with respect to $a_j$ twice in the case when $j$ is greater than $n$. An expression for these can be obtained from a different argument.
\begin{lem}\label{LEH.sec.1.lem.3}
If $n$ and $j$ are two integers satisfying $4\leq{n}\leq{d}$ and $n<j\leq{d}$, then at any point contained in $]0,+\infty[^n$,
$$
\frac{\partial^2}{\partial{a_j^2}}\frac{V}{\|a\|}=\sum\frac{(-1)^{\sigma(v)}(b-a\mathord{\cdot}v)^{n-3}}{12(n-3)!\pi(a)}\mbox{,}
$$
where the sum is over the vertices $v$ of $[0,1]^n$ that satisfy $a\mathord{\cdot}v\leq{b}$.
\end{lem}
\begin{proof}
By symmetry, it suffices to prove the lemma in the case when $j=n+1$. Consider a point $a$ in $]0,+\infty[^n$. The vertex set of $[0,1]^{n+1}$ can be decomposed into the subset $\mathcal{V}^-$ of the vertices $v$ such that $b-a\mathord{\cdot}v$ is negative, the subset $\mathcal{V}^+$ of the vertices $v$ such that this quantity is positive, and the (possibly empty) subset $\mathcal{V}^\circ$ of the remaining vertices, for which this quantity is equal to $0$.
For any vertex $v$ of the $(n+1)$-dimensional hypercube $[0,1]^{n+1}$, consider the set $U^-_v$ of the points $x$ contained in $\mathbb{R}^{n+1}$ such that $h_v(x)<0$ and the set $U^+_v$ of the points $x$ in $\mathbb{R}^{n+1}$ satisfying $h_v(x)>0$ where
$$
h_v(x)=\frac{\sigma(x)}{2}-t-x\mathord{\cdot}v\mbox{.}
$$
Observe that both $U^-_v$ and $U^+_v$ are open subsets of $\mathbb{R}^{n+1}$. Therefore,
$$
U=\left[\bigcap_{v\in\mathcal{V}^-}\!\!U^-_v\right]\cap\left[\bigcap_{v\in\mathcal{V}^+}\!\!U^+_v\right]
$$
is an open subset of $\mathbb{R}^{n+1}$ as well. Note that by definition, $a$ belongs to $U$.
In the remainder of the proof, $x$ denotes a point contained in $U\cap]0,+\infty[^{n+1}$ that shares its first $n$ coordinates with $a$. In particular, $\pi(x)=\pi(a)x_j$. As none of the first $n+1$ coordinates of $x$ is equal to $0$ and as $n$ is at least $4$, it follows from Theorem \ref{LEH.sec.1.thm.2} and Corollary \ref{LEH.sec.1.cor.2} that
$$
\begin{array}{rcl}
\displaystyle\frac{\partial^2}{\partial{a_j^2}}\frac{V}{\|a\|} & \!\!\!\!=\!\!\!\! & \displaystyle\lim_{x_j\rightarrow0}\frac{\partial^2}{\partial{x_j^2}}\sum\frac{(-1)^{\sigma(v)}[h_v(x)]^n}{n!\pi(x)}\mbox{,}\\[\bigskipamount]
& \!\!\!\!=\!\!\!\! & \displaystyle\lim_{x_j\rightarrow0}\sum\frac{(-1)^{\sigma(v)}}{n!\pi(a)}\frac{\partial^2}{\partial{x_j^2}}\frac{[h_v(x)]^n}{x_j}\mbox{,}\\
\end{array}
$$
where the sums are over the elements $v$ of the union of $\mathcal{V}^+$ with a (possibly empty) subset of $\mathcal{V}^\circ$. Note that, for any vertex $v$ of $[0,1]^{n+1}$,
\begin{multline}\label{LEH.sec.1.lem.3.eq.0}
\frac{\partial^2}{\partial{x_j^2}}\frac{[h_v(x)]^n}{x_j}=2\frac{[h_v(x)]^n}{x_j^3}-2n\left(\frac{1}{2}-v_j\right)\frac{[h_v(x)]^{n-1}}{x_j^2}\\
\hfill+\frac{n(n-1)}{4}\frac{[h_v(x)]^{n-2}}{x_j}\mbox{.}
\end{multline}
However, one obtains from l'H{\^o}pital's rule that, when $v$ belongs to $\mathcal{V}^\circ$,
$$
\lim_{x_j\rightarrow0}\frac{[h_v(x)]^n}{x_j^3}=\lim_{x_j\rightarrow0}\frac{[h_v(x)]^{n-1}}{x_j^2}=\lim_{x_j\rightarrow0}\frac{[h_v(x)]^{n-2}}{x_j}=\lim_{x_j\rightarrow0}[h_v(x)]^{n-3}=0
$$
because $n\geq4$ and $h_v(a)=0$. As a consequence,
\begin{equation}\label{LEH.sec.1.lem.3.eq.1}
\frac{\partial^2}{\partial{a_j^2}}\frac{V}{\|a\|}=\lim_{x_j\rightarrow0}\sum_{v\in\mathcal{V}^+}\frac{(-1)^{\sigma(v)}}{n!\pi(a)}\frac{\partial^2}{\partial{x_j^2}}\frac{[h_v(x)]^n}{x_j}\mbox{.}
\end{equation}
Now observe that, since $a_j=0$, a vertex $v$ of $[0,1]^n$ belongs to $\mathcal{V}^+$ if and only if the vertex $w$ of $[0,1]^{n+1}$ that shares its first $n$ coordinates with $v$ but such that $w_j=1$ also belongs to $\mathcal{V}^+$. Hence, gathering the terms corresponding to each such pair of vertices $v$ and $w$, (\ref{LEH.sec.1.lem.3.eq.1}) can be rewritten into
\begin{equation}\label{LEH.sec.1.lem.3.eq.2}
\frac{\partial^2}{\partial{a_j^2}}\frac{V}{\|a\|}=\lim_{x_j\rightarrow0}\sum\frac{(-1)^{\sigma(v)}}{n!\pi(a)}\frac{\partial^2}{\partial{x_j^2}}\frac{[h_v(x)]^n-[h_v(x)-x_j]^n}{x_j}\mbox{,}
\end{equation}
where the sum is over the vertices $v$ of $[0,1]^n$ satisfying $a\mathord{\cdot}v<b$. However, for any vertex $v$ of $[0,1]^n$, one obtains from (\ref{LEH.sec.1.lem.3.eq.0}) that
$$
\frac{\partial^2}{\partial{x_j^2}}\frac{[h_v(x)]^n-[h_v(x)-x_j]^n}{x_j}=r_v(x)+s_v(x)
$$
where
$$
r_v(x)=\frac{2[h_v(x)]^n-2[h_v(x)-x_j]^n-nx_j[h_v(x)]^{n-1}-nx_j[h_v(x)-x_j]^{n-1}}{x_j^3}
$$
and
$$
s_v(x)=n(n-1)\frac{[h_v(x)]^{n-2}-[h_v(x)-x_j]^{n-2}}{4x_j}\mbox{.}
$$
It turns out that $r_v(x)$ and $s_v(x)$ both admit limits when $x_j$ goes to $0$. Observe that, in the above expression of $r_v(x)$ as a ratio, both the numerator and the denominator go to zero as $x_j$ goes to $0$. By applying l'H{\^o}pital's rule twice,
$$
\begin{array}{rcl}
\displaystyle\lim_{x_j\rightarrow0}r_v(x) & \!\!\!\!=\!\!\!\! & \displaystyle\lim_{x_j\rightarrow0}-n(n-1)\frac{[h_v(x)]^{n-2}-[h_v(x)-x_j]^{n-2}}{6x_j}\\[\bigskipamount]
& \!\!\!\!=\!\!\!\! & \displaystyle\lim_{x_j\rightarrow0}-n(n-1)(n-2)\frac{[h_v(x)]^{n-3}+[h_v(x)-x_j]^{n-3}}{12}\\[\bigskipamount]
& \!\!\!\!=\!\!\!\! & \displaystyle-n(n-1)(n-2)\frac{(b-a\mathord{\cdot}v)^{n-3}}{6}\mbox{.}\\
\end{array}
$$
Similarly, the numerator and the denominator of the expression of $s_v(x)$ as a ratio both go to zero as $x_j$ goes to $0$. By l'H{\^o}pital's rule,
$$
\begin{array}{rcl}
\displaystyle\lim_{x_j\rightarrow0}s_v(x) & \!\!\!\!=\!\!\!\! & \displaystyle\lim_{x_j\rightarrow0}n(n-1)(n-2)\frac{[h_v(x)]^{n-3}+[h_v(x)-x_j]^{n-3}}{8}\\[\bigskipamount]
& \!\!\!\!=\!\!\!\! & \displaystyle{n(n-1)(n-2)\frac{(b-a\mathord{\cdot}v)^{n-3}}{4}}\mbox{.}\\
\end{array}
$$
As a consequence, (\ref{LEH.sec.1.lem.3.eq.2}) yields
$$
\frac{\partial^2}{\partial{a_j^2}}\frac{V}{\|a\|}=\sum\frac{(-1)^{\sigma(v)}(b-a\mathord{\cdot}v)^{n-3}}{12(n-3)!\pi(a)}\mbox{,}
$$
where the sum is over the vertices $v$ of $[0,1]^n$ such that $a\mathord{\cdot}v<b$. As $n\geq4$, adding to this sum the terms that correspond to the vertices $v$ of $[0,1]^n$ such that $a\mathord{\cdot}v=b$ does not affect it, providing the desired equality.
\end{proof}
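Lemma \ref{LEH.sec.1.lem.3} also lends itself to a numerical spot-check: by Corollary \ref{LEH.sec.1.cor.2}, differentiating the integrand of the integral representation twice with respect to $a_j$ at $a_j=0$ amounts to inserting a factor $-u^2/3$. The sketch below compares the two sides for $n=6$, again under the assumed conventions $b=(a_1+\cdots+a_n)/2-t$ and $\pi(a)=a_1\cdots{a_n}$:

```python
# Spot-check of Lemma LEH.sec.1.lem.3 (n = 6, j > n): vertex sum versus
# the integral representation differentiated twice in a_j at a_j = 0,
# which inserts a factor -u^2/3 (cf. Corollary LEH.sec.1.cor.2).
from itertools import product
from math import factorial
import numpy as np

def lemma_rhs(a, t):
    n = len(a)
    b = sum(a) / 2 - t
    total = 0.0
    for v in product((0, 1), repeat=n):
        s = b - sum(ai * vi for ai, vi in zip(a, v))
        if s >= 0:
            total += (-1) ** sum(v) * s ** (n - 3)
    return total / (12 * factorial(n - 3) * float(np.prod(a)))

def second_derivative(a, t, limit=400.0, step=1e-3):
    u = np.arange(step / 2, limit, step)
    integrand = np.prod([np.sinc(ai * u / np.pi) for ai in a], axis=0)
    return (2 / np.pi) * float(np.sum(-u**2 / 3 * integrand * np.cos(2 * t * u))) * step

a, t = (1.1, 1.0, 0.9, 0.8, 0.7, 0.6), 0.2
print(abs(lemma_rhs(a, t) - second_derivative(a, t)))  # close to 0
```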
\section{Local maxima near the center of the hypercube}\label{LEH.sec.2}
The following general result will be used later in this section to establish that, when $t$ is close enough to $0$, $V$ is locally maximal at the point $a$ whose first $n$ coordinates are $1/\sqrt{n}$ and whose other coordinates are $0$. It will also be used to establish the local maximality results of Section \ref{LEH.sec.3}.
\begin{thm}\label{LEH.sec.2.thm.1}
Consider an integer $n$ satisfying $2\leq{n}\leq{d}$ and assume that $V$ is twice continuously differentiable at the point $a$ of $\mathbb{R}^n$ whose first $n$ coordinates are equal to $1/\sqrt{n}$. If at that point,
\begin{equation}\label{LEH.sec.2.thm.1.eq.0.1}
\frac{\partial^2}{\partial{a_1^2}}\frac{V}{\|a\|}-\sqrt{n}\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}-\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}
\end{equation}
is negative and, when $n<d$,
\begin{equation}\label{LEH.sec.2.thm.1.eq.0.2}
\frac{\partial^2}{\partial{a_d^2}}\frac{V}{\|a\|}
\end{equation}
is also negative, then $V$ has a strict local maximum on $\mathbb{S}^{d-1}\cap[0,+\infty[^d$ at $a$. Similarly, if (\ref{LEH.sec.2.thm.1.eq.0.1}) is positive at $a$ and, when $n<d$, (\ref{LEH.sec.2.thm.1.eq.0.2}) is positive at $a$ as well, then $V$ admits a strict local minimum on $\mathbb{S}^{d-1}\cap[0,+\infty[^d$ at that point.
\end{thm}
\begin{proof}
Consider a number $\lambda$ and denote
$$
L_\lambda=\frac{V}{\|a\|}+\lambda\!\left(\|a\|^2-1\right)\!.
$$
Throughout the proof $L_\lambda$ is treated as a function of $a$. Recall that $a$ is a critical point of $L_\lambda$ when, for all integers $j$ such that $1\leq{j}\leq{d}$,
\begin{equation}\label{LEH.sec.2.thm.1.eq.1}
\frac{\partial{L_\lambda}}{\partial{a_j}}=0\mbox{.}
\end{equation}
From now on, $a$ denotes the point contained in $]0,+\infty[^n$ whose first $n$ coordinates are equal to $1/\sqrt{n}$ and all the (first or second order) partial derivatives are taken at that point. Pick $\lambda$ such that
$$
\lambda=-\frac{\sqrt{n}}{2}\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}\mbox{.}
$$
Note that this particular value of $\lambda$ satisfies (\ref{LEH.sec.2.thm.1.eq.1}) when $j$ is equal to $1$. By symmetry, the partial derivatives of $L_\lambda$ with respect to $a_j$ all coincide at $a$ when $1\leq{j}\leq{n}$. Hence (\ref{LEH.sec.2.thm.1.eq.1}) holds for these values of $j$. Recall that the last $d-n$ coordinates of $a$ are equal to $0$. Therefore at point $a$,
$$
\frac{\partial{L_\lambda}}{\partial{a_j}}=\frac{\partial}{\partial{a_j}}\frac{V}{\|a\|}
$$
when $n<j\leq{d}$ and it immediately follows from Lemma \ref{LEH.sec.1.lem.1} that (\ref{LEH.sec.2.thm.1.eq.1}) also holds in that case. As a consequence, $a$ is a critical point of $L_\lambda$ and in turn, according to the second order sufficiency conditions of the Lagrange multipliers theorem (see for instance Proposition~3.2.1 in \cite{Bertsekas1999}), if at point $a$
\begin{equation}\label{LEH.sec.2.thm.1.eq.2}
\sum_{j=1}^d\sum_{k=1}^dx_jx_k\frac{\partial^2{L_\lambda}}{\partial{a_j}\partial{a_k}}<0
\end{equation}
for every non-zero point $x$ in $\mathbb{R}^d$ whose first $n$ coordinates sum to $0$, then $V$ has a strict local maximum on $\mathbb{S}^{d-1}\cap[0,+\infty[^d$ at that point.
Now observe that, by symmetry, at point $a$,
$$
\frac{\partial^2{L_\lambda}}{\partial{a_j^2}}=\frac{\partial^2}{\partial{a_1^2}}\frac{V}{\|a\|}-\sqrt{n}\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}
$$
when $1\leq{j}\leq{n}$,
$$
\frac{\partial^2{L_\lambda}}{\partial{a_j}\partial{a_k}}=\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}
$$
when $1\leq{j}<k\leq{n}$, and
$$
\frac{\partial^2{L_\lambda}}{\partial{a_j^2}}=\frac{\partial^2}{\partial{a_d^2}}\frac{V}{\|a\|}
$$
when $n<j\leq{d}$. Moreover, according to Lemma \ref{LEH.sec.1.lem.2}, all the other second order partial derivatives vanish. As a consequence, at that point,
\begin{multline*}
\sum_{j=1}^d\sum_{k=1}^dx_jx_k\frac{\partial^2{L_\lambda}}{\partial{a_j}\partial{a_k}}=\!\left[\frac{\partial^2}{\partial{a_1^2}}\frac{V}{\|a\|}-\sqrt{n}\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}-\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}\right]\!\sum_{i=1}^nx_i^2\\
+\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}\!\left[\sum_{i=1}^nx_i\right]^2\!+\frac{\partial^2}{\partial{a_d^2}}\frac{V}{\|a\|}\sum_{i=n+1}^dx_i^2\mbox{.}
\end{multline*}
Note that, if the first $n$ coordinates of $x$ sum to $0$, then the second term in the right-hand side of this equality vanishes. It follows that, if (\ref{LEH.sec.2.thm.1.eq.0.1}) is negative at $a$ and, when $n<d$, (\ref{LEH.sec.2.thm.1.eq.0.2}) is also negative at that point, then (\ref{LEH.sec.2.thm.1.eq.2}) holds for every non-zero point $x$ in $\mathbb{R}^d$ whose first $n$ coordinates sum to $0$. Hence, $V$ has a strict local maximum on $\mathbb{S}^{d-1}\cap[0,+\infty[^d$ at $a$, as desired.
Finally, observe that repeating this proof, but with the second order sufficiency conditions of the Lagrange multipliers theorem for local minima (instead of local maxima), one obtains
the desired local minimality result.
\end{proof}
Using Corollaries \ref{LEH.sec.1.cor.1} and \ref{LEH.sec.1.cor.2}, (\ref{LEH.sec.2.thm.1.eq.0.1}) and (\ref{LEH.sec.2.thm.1.eq.0.2}) can be expressed as improper integrals, as follows.
\begin{lem}\label{LEH.sec.2.lem.1}
Consider an integer $n$ satisfying $4\leq{n}\leq{d}$. At the point of $\mathbb{R}^n$ whose first $n$ coordinates are all equal to $1/\sqrt{n}$,
\begin{multline}\label{LEH.sec.2.lem.1.eq.1}
\frac{\partial^2}{\partial{a_1^2}}\frac{V}{\|a\|}-\sqrt{n}\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}-\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}=\frac{1}{\pi}\int_{-\infty}^{+\infty}n\Biggl[2\frac{n}{u^2}\sin^2\!\left(\frac{u}{\sqrt{n}}\right)\\
-\frac{\sqrt{n}}{u}\cos\!\left(\frac{u}{\sqrt{n}}\right)\!\sin\!\left(\frac{u}{\sqrt{n}}\right)\!-1\Biggr]\!\Biggl(\frac{\sqrt{n}}{u}\sin\!\left(\frac{u}{\sqrt{n}}\right)\!\Biggr)^{\!\!n-2}\!\!\!\!\!\!\!\!\!\cos(2tu)du
\end{multline}
and, if $n<d$, then
\begin{equation}\label{LEH.sec.2.lem.1.eq.2}
\frac{\partial^2}{\partial{a_d^2}}\frac{V}{\|a\|}=-\frac{1}{3\pi}\int_{-\infty}^{+\infty}u^2\!\Biggl(\frac{\sqrt{n}}{u}\sin\!\left(\frac{u}{\sqrt{n}}\right)\!\Biggr)^{\!\!n}\!\!\!\cos(2tu)du\mbox{.}
\end{equation}
\end{lem}
\begin{proof}
First recall that, when $x$ is non-zero,
$$
\frac{d}{dx}\frac{\sin(x)}{x}=\frac{\cos(x)}{x}-\frac{\sin(x)}{x^2}\mbox{.}
$$
Moreover, this derivative vanishes when $x$ is equal to $0$. Therefore, according to Corollary \ref{LEH.sec.1.cor.1}, when $a$ belongs to $]0,+\infty[^n$,
$$
\displaystyle\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}=\frac{1}{\pi}\int_{-\infty}^{+\infty}u\Biggl[\frac{\cos(a_1u)}{a_1u}-\frac{\sin(a_1u)}{a_1^2u^2}\Biggr]\!\!\left(\prod_{i=2}^n\frac{\sin(a_iu)}{a_iu}\right)\!\cos(2tu)du\mbox{.}
$$
Hence, at the point $a$ of $\mathbb{R}^n$ whose first $n$ coordinates are $1/\sqrt{n}$,
\begin{multline}\label{LEH.sec.2.lem.1.eq.3}
\displaystyle\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}=\frac{1}{\pi}\int_{-\infty}^{+\infty}u\Biggl[\frac{\sqrt{n}}{u}\cos\!\left(\frac{u}{\sqrt{n}}\right)\!\\
\hfill-\frac{n}{u^2}\sin\!\left(\frac{u}{\sqrt{n}}\right)\!\Biggr]\!\Biggl(\frac{\sqrt{n}}{u}\sin\!\left(\frac{u}{\sqrt{n}}\right)\!\Biggr)^{\!\!n-1}\!\!\!\!\!\!\!\!\!\cos(2tu)du\mbox{.}
\end{multline}
Further recall that, when $x$ is not equal to $0$,
$$
\frac{d^2}{dx^2}\frac{\sin(x)}{x}=-\frac{\sin(x)}{x}-\frac{2\cos(x)}{x^2}+\frac{2\sin(x)}{x^3}
$$
and that this second derivative is equal to $-1/3$ when $x$ is equal to $0$. Hence, by Corollary \ref{LEH.sec.1.cor.2}, when $a\in]0,+\infty[^n$,
\begin{multline*}
\displaystyle\frac{\partial^2}{\partial{a_1^2}}\frac{V}{\|a\|}=\frac{1}{\pi}\int_{-\infty}^{+\infty}u^2\Biggl[-\frac{\sin(a_1u)}{a_1u}-\frac{2\cos(a_1u)}{a_1^2u^2}\\
\hfill+\frac{2\sin(a_1u)}{a_1^3u^3}\Biggr]\!\!\left(\prod_{i=2}^n\frac{\sin(a_iu)}{a_iu}\right)\!\cos(2tu)du
\end{multline*}
and
\begin{multline*}
\displaystyle\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}=\frac{1}{\pi}\int_{-\infty}^{+\infty}u^2\Biggl[\frac{\cos(a_1u)\cos(a_2u)}{a_1a_2u^2}-\frac{\cos(a_1u)\sin(a_2u)}{a_1a_2^2u^3}\\
\hfill-\frac{\sin(a_1u)\cos(a_2u)}{a_1^2a_2u^3}+\frac{\sin(a_1u)\sin(a_2u)}{a_1^2a_2^2u^4}\Biggr]\!\!\left(\prod_{i=3}^n\frac{\sin(a_iu)}{a_iu}\right)\!\cos(2tu)du\mbox{.}
\end{multline*}
Therefore, at the point $a$ of $\mathbb{R}^n$ whose first $n$ coordinates are $1/\sqrt{n}$,
\begin{multline}\label{LEH.sec.2.lem.1.eq.4}
\displaystyle\frac{\partial^2}{\partial{a_1^2}}\frac{V}{\|a\|}=\frac{1}{\pi}\int_{-\infty}^{+\infty}u^2\Biggl[-\frac{n}{u^2}\sin^2\!\left(\frac{u}{\sqrt{n}}\right)\!-\frac{2n\sqrt{n}}{u^3}\cos\!\left(\frac{u}{\sqrt{n}}\right)\!\sin\!\left(\frac{u}{\sqrt{n}}\right)\\
\hfill+\frac{2n^2}{u^4}\sin^2\!\left(\frac{u}{\sqrt{n}}\right)\!\Biggr]\!\Biggl(\frac{\sqrt{n}}{u}\sin\!\left(\frac{u}{\sqrt{n}}\right)\!\Biggr)^{\!\!n-2}\!\!\!\!\!\!\!\!\!\cos(2tu)du
\end{multline}
and
\begin{multline}\label{LEH.sec.2.lem.1.eq.5}
\displaystyle\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}=\frac{1}{\pi}\int_{-\infty}^{+\infty}u^2\Biggl[\frac{n}{u^2}\cos^2\!\left(\frac{u}{\sqrt{n}}\right)\!-\frac{2n\sqrt{n}}{u^3}\cos\!\left(\frac{u}{\sqrt{n}}\right)\!\sin\!\left(\frac{u}{\sqrt{n}}\right)\\
\hfill+\frac{n^2}{u^4}\sin^2\!\left(\frac{u}{\sqrt{n}}\right)\!\Biggr]\!\Biggl(\frac{\sqrt{n}}{u}\sin\!\left(\frac{u}{\sqrt{n}}\right)\!\Biggr)^{\!\!n-2}\!\!\!\!\!\!\!\!\!\cos(2tu)du\mbox{.}
\end{multline}
Combining (\ref{LEH.sec.2.lem.1.eq.3}), (\ref{LEH.sec.2.lem.1.eq.4}), and (\ref{LEH.sec.2.lem.1.eq.5}) yields (\ref{LEH.sec.2.lem.1.eq.1}). Finally, assume that $n<d$. As the second order derivative of $\sin(x)/x$ is equal to $-1/3$ when $x$ is equal to $0$, at the point $a$ of $\mathbb{R}^n$ whose first $n$ coordinates are $1/\sqrt{n}$,
$$
\frac{\partial^2}{\partial{a_d^2}}\prod_{i=1}^d\frac{\sin(a_iu)}{a_iu}=-\frac{u^2}{3}\!\Biggl(\frac{\sqrt{n}}{u}\sin\!\left(\frac{u}{\sqrt{n}}\right)\!\Biggr)^{\!\!n}
$$
and by Corollary \ref{LEH.sec.1.cor.2}, (\ref{LEH.sec.2.lem.1.eq.2}) holds at that point.
\end{proof}
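The cancellation that turns (\ref{LEH.sec.2.lem.1.eq.3}), (\ref{LEH.sec.2.lem.1.eq.4}), and (\ref{LEH.sec.2.lem.1.eq.5}) into (\ref{LEH.sec.2.lem.1.eq.1}) can be confirmed pointwise. The following sketch checks, on a grid of values of $u$, that the integrand of (\ref{LEH.sec.2.lem.1.eq.4}) minus $\sqrt{n}$ times that of (\ref{LEH.sec.2.lem.1.eq.3}) minus that of (\ref{LEH.sec.2.lem.1.eq.5}) agrees with the integrand of (\ref{LEH.sec.2.lem.1.eq.1}); the common factor $\cos(2tu)$ is omitted:

```python
import numpy as np

def combination_error(n, u):
    # Integrands of (eq.3), (eq.4), (eq.5) and of the combined (eq.1),
    # all without the common factor cos(2tu).
    r = np.sqrt(n)
    S, C = np.sin(u / r), np.cos(u / r)
    p = (r / u * S) ** (n - 2)
    f3 = u * (r / u * C - n / u**2 * S) * (r / u * S) ** (n - 1)
    f4 = u**2 * (-n / u**2 * S**2 - 2 * n * r / u**3 * C * S
                 + 2 * n**2 / u**4 * S**2) * p
    f5 = u**2 * (n / u**2 * C**2 - 2 * n * r / u**3 * C * S
                 + n**2 / u**4 * S**2) * p
    f1 = n * (2 * n / u**2 * S**2 - r / u * C * S - 1) * p
    return float(np.max(np.abs(f4 - r * f3 - f5 - f1)))

print(combination_error(5, np.linspace(0.1, 50.0, 2001)))  # at machine precision
```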
In order to estimate (\ref{LEH.sec.2.lem.1.eq.1}) when $t$ is close to $0$, the following three technical propositions will be needed. A proof of each is provided for completeness.
\begin{prop}\label{LEH.sec.2.prop.1}
The quantity
\begin{equation}\label{LEH.sec.2.prop.1.eq.1}
\frac{1-\cos(2s)}{s^2}-\frac{\sin(2s)}{2s}-1
\end{equation}
is negative when $s$ is positive.
\end{prop}
\begin{proof}
Assume that $s$ is positive and observe that
$$
\frac{1-\cos(2s)}{s^2}-\frac{\sin(2s)}{2s}\leq\frac{2}{s^2}+\frac{1}{2s}\mbox{.}
$$
Hence, the result is immediate when $s$ is greater than $2$. Assume that $s$ is at most $2$. Expanding $\cos(2s)$ and $\sin(2s)$ into their power series yields
$$
\frac{1-\cos(2s)}{s^2}-\frac{\sin(2s)}{2s}-1=-\sum_{i=2}^{+\infty}\frac{(-1)^i(2i-2)}{(2i+2)!}(2s)^{2i}\mbox{.}
$$
Note that in the right-hand side, any two consecutive terms of the sum have opposite signs. It is therefore sufficient to show that
$$
\frac{2i-2}{(2i+2)!}(2s)^{2i}-\frac{2i}{(2i+4)!}(2s)^{2i+2}
$$
is positive when $i$ is an even positive integer. Observe that
$$
\frac{2i-2}{(2i+2)!}(2s)^{2i}-\frac{2i}{(2i+4)!}(2s)^{2i+2}=2\frac{(2s)^{2i}}{(2i+2)!}(iR_i-1)\mbox{,}
$$
where
$$
R_i=1-\frac{4s^2}{(2i+3)(2i+4)}\mbox{.}
$$
As $R_i$ is an increasing function of $i$, one obtains that it is at least $1-s^2/14$ for every even positive integer $i$. In turn, as $s$ is at most $2$ and $i$ at least $2$, this shows that $iR_i-1$ is at least $1-4/7$, which is also positive.
\end{proof}
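A direct numerical check of Proposition \ref{LEH.sec.2.prop.1} on a grid (a sketch, not a substitute for the proof; the quantity tends to $0$ from below as $s$ tends to $0$, so the grid starts away from $0$):

```python
import numpy as np

s = np.linspace(0.01, 60.0, 200_000)
expr = (1 - np.cos(2 * s)) / s**2 - np.sin(2 * s) / (2 * s) - 1
print(float(expr.max()))  # negative on the whole grid
```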
\begin{prop}\label{LEH.sec.2.prop.2}
If $n$ is an integer greater than $5$, then
\begin{equation}\label{LEH.sec.2.prop.2.eq.1}
\frac{1-\cos(2s)}{s^n}-\frac{\sin(2s)}{2s^{n-1}}-\frac{1}{s^{n-2}}
\end{equation}
is a strictly increasing function of $s$ on $]0,+\infty[$ and if $n$ is equal to $5$, then it is a strictly increasing function of $s$ on $]4,+\infty[$.
\end{prop}
\begin{proof}
Observe that (\ref{LEH.sec.2.prop.2.eq.1}) is a differentiable function of $s$ on $]0,+\infty[$ and that its derivative with respect to $s$ can be expressed as
$$
\frac{d}{ds}\!\left[\frac{1-\cos(2s)}{s^n}-\frac{\sin(2s)}{2s^{n-1}}-\frac{1}{s^{n-2}}\right]\!=\frac{An+B}{s^{n-1}}
$$
where
$$
A=\frac{\cos(2s)-1}{s^2}+\frac{\sin(2s)}{2s}+1
$$
and
$$
B=\frac{3\sin(2s)}{2s}-\cos(2s)-2\mbox{.}
$$
According to Proposition \ref{LEH.sec.2.prop.1}, $A$ is positive. Therefore, it suffices to show that $6A+B$ is always positive and that $5A+B$ is positive when $s$ is greater than $4$. The latter is immediate. Indeed, observe that
$$
\begin{array}{rcl}
5A+B & \!\!\!\!\geq\!\!\!\! & \displaystyle5\!\left(-\frac{2}{s^2}-\frac{1}{2s}+1\right)\!-\frac{3}{2s}-3\mbox{,}\\[\bigskipamount]
& \!\!\!\!=\!\!\!\! & \displaystyle2-\frac{10}{s^2}-\frac{4}{s}\mbox{.}
\end{array}
$$
It remains to show that $6A+B$ is positive. Observe that
$$
\begin{array}{rcl}
6A+B & \!\!\!\!=\!\!\!\! & \displaystyle\!\left(\frac{6}{s^2}-1\right)\!\cos(2s)-\frac{6}{s^2}+\frac{9\sin(2s)}{2s}+4\\[\bigskipamount]
& \!\!\!\!\geq\!\!\!\! & \displaystyle-\!\left|\frac{6}{s^2}-1\right|\!-\frac{6}{s^2}-\frac{9}{2s}+4\mbox{.}
\end{array}
$$
As a consequence, $6A+B$ is positive when $s$ is at least $\sqrt{6}$. It is assumed in the remainder of the proof that $s$ is less than $\sqrt{6}$. Expanding $\sin(2s)$ and $\cos(2s)$ into their power series, one obtains
$$
6A+B=4\sum_{i=2}^{+\infty}\frac{(-1)^i(i-1)i}{(2i+4)!}(2s)^{2i+2}\mbox{.}
$$
Note that the terms in that sum have alternating signs. Rearranging the sum so that each positive term is summed with the next one yields
\begin{equation}\label{LEH.sec.2.prop.2.eq.2}
6A+B=8\sum_{i=1}^{+\infty}\frac{i(2s)^{4i+2}}{(4i+4)!}\!\left(R_i-2\right)\!\mbox{,}
\end{equation}
where
$$
R_i=(2i+1)\!\left(1-\frac{4s^2}{(4i+5)(4i+6)}\right)\!\!\mbox{.}
$$
Note that $R_i$ is an increasing function of $i$. Hence, as $i$ is a positive integer,
$$
R_i\geq3\!\left(1-\frac{4s^2}{90}\right)\!\!\mbox{.}
$$
Now recall that $s$ is less than $\sqrt{6}$. It follows that $R_i$ is greater than $22/10$ for every positive integer $i$ and, by (\ref{LEH.sec.2.prop.2.eq.2}), that $6A+B$ is positive.
\end{proof}
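Similarly, the quantity $An+B$ from the proof of Proposition \ref{LEH.sec.2.prop.2}, which is $s^{n-1}$ times the derivative of (\ref{LEH.sec.2.prop.2.eq.1}) with respect to $s$, can be checked to be positive on a grid:

```python
import numpy as np

def An_plus_B(n, s):
    # s^{n-1} times the derivative of (LEH.sec.2.prop.2.eq.1) in s.
    A = (np.cos(2 * s) - 1) / s**2 + np.sin(2 * s) / (2 * s) + 1
    B = 3 * np.sin(2 * s) / (2 * s) - np.cos(2 * s) - 2
    return n * A + B

s = np.linspace(0.05, 60.0, 100_000)
print(float(An_plus_B(6, s).min()))         # positive: n = 6, all s
print(float(An_plus_B(5, s[s > 4]).min()))  # positive: n = 5, s > 4
```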
\begin{prop}\label{LEH.sec.2.prop.3}
$\displaystyle\int_0^{2\pi}\!\left[2\frac{\sin^5(s)}{s^5}-\cos(s)\frac{\sin^4(s)}{s^4}-\frac{\sin^3(s)}{s^3}\right]\!ds<0$.
\end{prop}
\begin{proof}
Denote
$$
f(s)=2\frac{\sin^2(s)}{s^5}-\cos(s)\frac{\sin(s)}{s^4}-\frac{1}{s^3}\mbox{.}
$$
Recall that $2\sin^2(s)=1-\cos(2s)$ and $2\sin(s)\cos(s)=\sin(2s)$. Therefore, by Proposition \ref{LEH.sec.2.prop.1}, $f(s)$ is negative when $s$ belongs to $]0,2\pi[$. Hence,
$$
\int_0^{2\pi}f(s)\sin^3(s)ds<\sin^3(1)\!\left[\int_1^{\pi-1}f(s)ds-\int_\pi^{\pi+1}f(s)ds\right]\!-\int_{\pi+1}^{2\pi}f(s)ds\mbox{.}
$$
As in addition,
$$
f(s)=\frac{d}{ds}\frac{2s^2+\cos(2s)-1}{4s^4}
$$
one obtains the inequality
\begin{multline*}
\int_0^{2\pi}f(s)\sin^3(s)ds<\sin^3(1)\!\biggl[\frac{2(\pi-1)^2+\cos(2\pi-2)-1}{4(\pi-1)^4}-\frac{1+\cos(2)}{4}\\
\hfill+\frac{1}{2\pi^2}\biggr]\!-\frac{1}{8\pi^2}+\!\left(1-\sin^3(1)\right)\!\frac{2(\pi+1)^2+\cos(2\pi+2)-1}{4(\pi+1)^4}
\end{multline*}
whose right-hand side is negative.
\end{proof}
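The integral of Proposition \ref{LEH.sec.2.prop.3} can also be evaluated numerically (a sketch; the midpoint rule is used so that the removable singularity at $0$ is never sampled):

```python
import numpy as np

step = 1e-5
s = np.arange(step / 2, 2 * np.pi, step)
integrand = (2 * np.sin(s)**5 / s**5
             - np.cos(s) * np.sin(s)**4 / s**4
             - np.sin(s)**3 / s**3)
value = float(np.sum(integrand)) * step
print(value)  # negative, as the proposition claims
```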
Theorem \ref{LEH.sec.0.thm.0.2} can now be established.
\begin{proof}[Proof of Theorem \ref{LEH.sec.0.thm.0.2}]
Consider an integer $n$ satisfying $4\leq{n}\leq{d}$. By the symmetries of the hypercube, it is only required to show that, if $t$ is small enough, then $V$ has a local maximum at the point $a$ of $\mathbb{R}^n$ whose first $n$ coordinates are equal to $1/\sqrt{n}$. Observe that the quantities
\begin{multline*}
Q_1=\int_{-\infty}^{+\infty}\!\biggl[2\frac{n}{u^2}\sin^2\!\left(\frac{u}{\sqrt{n}}\right)-\frac{\sqrt{n}}{u}\cos\!\left(\frac{u}{\sqrt{n}}\right)\!\sin\!\left(\frac{u}{\sqrt{n}}\right)\!\\
\hfill-1\biggr]\!\!\Biggl(\frac{\sqrt{n}}{u}\sin\!\left(\frac{u}{\sqrt{n}}\right)\!\Biggr)^{\!\!n-2}\!\!\!\!\!\!\!\!\!\cos(2tu)du\mbox{,}
\end{multline*}
and
$$
Q_2=-\int_{-\infty}^{+\infty}u^2\!\Biggl(\frac{\sqrt{n}}{u}\sin\!\left(\frac{u}{\sqrt{n}}\right)\!\Biggr)^{\!\!n}\!\!\!\cos(2tu)du
$$
are continuous functions of $t$ on $\mathbb{R}$. Therefore, according to Theorem \ref{LEH.sec.2.thm.1} and Lemma \ref{LEH.sec.2.lem.1}, it suffices to show that both of them are negative when $t$ is equal to~$0$. Assume that $t$ is equal to $0$ and note that, under this assumption, the negativity of $Q_2$ is immediate when $n$ is even.
By the change of variables $s=u/\sqrt{n}$ and splitting the integral at $0$,
$$
Q_1=2\sqrt{n}\int_{0}^{+\infty}\!\left[2\frac{\sin^n(s)}{s^n}-\cos(s)\frac{\sin^{n-1}(s)}{s^{n-1}}-\frac{\sin^{n-2}(s)}{s^{n-2}}\right]\!ds\mbox{.}
$$
Now observe that since
\begin{equation}\label{LEH.sec.2.thm.2.eq.1}
2\frac{\sin^2(s)}{s^n}-\cos(s)\frac{\sin(s)}{s^{n-1}}-\frac{1}{s^{n-2}}=\frac{1-\cos(2s)}{s^n}-\frac{\sin(2s)}{2s^{n-1}}-\frac{1}{s^{n-2}}\mbox{,}
\end{equation}
the negativity of $Q_1$ is a consequence of Proposition \ref{LEH.sec.2.prop.1} when $n$ is even. It is therefore assumed for the remainder of the proof that $n$ is odd.
Further splitting the integral at the integer multiples of $\pi$ yields
$$
Q_1=2\sqrt{n}\sum_{i=0}^{+\infty}I_i
$$
where for any non-negative integer $i$,
$$
I_i=\int_{i\pi}^{(i+1)\pi}\!\left[2\frac{\sin^n(s)}{s^n}-\cos(s)\frac{\sin^{n-1}(s)}{s^{n-1}}-\frac{\sin^{n-2}(s)}{s^{n-2}}\right]\!ds\mbox{.}
$$
Hence, in order to show that $Q_1$ is negative, it is sufficient to prove that the sum $I_i+I_{i+1}$ is negative for all even $i$. When $i$ is equal to $0$ and $n$ to $5$, this follows from Proposition \ref{LEH.sec.2.prop.3}. Now observe that as $n$ is odd,
\begin{multline*}
I_i+I_{i+1}=\int_{i\pi}^{(i+1)\pi}\!\Biggl[2\frac{\sin^2(s)}{s^n}-\cos(s)\frac{\sin(s)}{s^{n-1}}-\frac{1}{s^{n-2}}\Biggr]\!\sin^{n-2}(s)\\
-\Biggl[2\frac{\sin^2(s+\pi)}{(s+\pi)^n}-\cos(s+\pi)\frac{\sin(s+\pi)}{(s+\pi)^{n-1}}-\frac{1}{(s+\pi)^{n-2}}\Biggr]\!\sin^{n-2}(s)ds\mbox{.}
\end{multline*}
Hence, according to (\ref{LEH.sec.2.thm.2.eq.1}) and to Proposition \ref{LEH.sec.2.prop.2}, $I_i+I_{i+1}$ is negative when $n$ is equal to $5$ and $i$ is a positive even number. It is also negative when $n$ is greater than $5$ for any non-negative even integer $i$. This shows that $Q_1$ is negative and it remains to show that $Q_2$ is also negative.
By the change of variables $s=u/\sqrt{n}$ in the expression of $Q_2$,
$$
Q_2=-2n^{3/2}\sum_{i=0}^{+\infty}J_i
$$
where, for any non-negative integer $i$,
$$
J_i=\int_{i\pi}^{(i+1)\pi}\frac{\sin^n(s)}{s^{n-2}}ds\mbox{.}
$$
In order to prove that $Q_2$ is negative, it is sufficient to show that the sum $J_i+J_{i+1}$ is positive when $i$ is even. Observe that
$$
J_i+J_{i+1}=\int_{i\pi}^{(i+1)\pi}\frac{\sin^n(s)}{s^{n-2}}-\frac{\sin^n(s)}{(s+\pi)^{n-2}}ds\mbox{.}
$$
If $i$ is even and $s$ belongs to $]i\pi,(i+1)\pi[$, then $\sin(s)$ is positive and, as an immediate consequence, $J_i+J_{i+1}$ is positive as well.
\end{proof}
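For concreteness, the negativity of $Q_1$ and $Q_2$ at $t=0$ can also be observed numerically in their $s$-space forms (a sketch; the improper integrals are truncated at a large bound, which is harmless here since both integrands decay like $s^{-3}$ for $n=5$):

```python
import numpy as np

def Q1_Q2_at_zero(n, limit=2000.0, step=2e-3):
    # s-space forms of Q1 and Q2 at t = 0, after the substitution u = s*sqrt(n).
    s = np.arange(step / 2, limit, step)
    sin_s, cos_s = np.sin(s), np.cos(s)
    q1 = 2 * np.sqrt(n) * float(np.sum(
        2 * sin_s**n / s**n
        - cos_s * sin_s**(n - 1) / s**(n - 1)
        - sin_s**(n - 2) / s**(n - 2))) * step
    q2 = -2 * n**1.5 * float(np.sum(sin_s**n / s**(n - 2))) * step
    return q1, q2

q1, q2 = Q1_Q2_at_zero(5)
print(q1, q2)  # both negative
```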
\section{Local maxima away from the center of the hypercube}\label{LEH.sec.3}
Consider an integer $n$ satisfying $2\leq{n}\leq{d}$ and recall that $]0,+\infty[^n$ denotes the subset of $[0,+\infty[^d$ made up of the points whose first $n$ coordinates are positive and whose last $d-n$ coordinates are equal to $0$. In this section and the next, the local extremality of $V$ is investigated at the point $a$ of $\mathbb{S}^{d-1}\cap]0,+\infty[^n$ whose first $n$ coordinates coincide. The change of variables
\begin{equation}\label{LEH.sec.3.eq.1}
z=\frac{n}{2}-t\sqrt{n}
\end{equation}
is used throughout both sections.
The local extremality of $V$ when $t$ is large (and $z$ small) will be studied via Theorem \ref{LEH.sec.2.thm.1} as in Section \ref{LEH.sec.2}, except that (\ref{LEH.sec.2.thm.1.eq.0.1}) and (\ref{LEH.sec.2.thm.1.eq.0.2}) will be expressed as discrete sums instead of improper integrals.
\begin{lem}\label{LEH.sec.3.lem.1}
If $n$ is greater than or equal to $4$, then at the point $a$ of $]0,+\infty[^n$ whose first $n$ coordinates are equal to $1/\sqrt{n}$,
$$
\frac{\partial^2}{\partial{a_1^2}}\frac{V}{\|a\|}-\sqrt{n}\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}-\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}=\sum_{i=0}^{\lfloor{z}\rfloor}\frac{(-1)^i\sqrt{n}}{(n-3)!}{n\choose{i}}(z-i)^{n-3}p_{i,n}(z)\mbox{.}
$$
\end{lem}
\begin{proof}
Consider a vertex $v$ of $[0,1]^n$. At any point $a$ in $]0,+\infty[^n$,
\begin{equation}\label{LEH.sec.3.thm.1.eq.0}
\frac{\partial}{\partial{a_1}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}=\frac{(b-a\mathord{\cdot}v)^{n-2}}{\pi(a)}\Biggl[(n-1)\!\left(\frac{1}{2}-v_1\right)\!-\frac{b-a\mathord{\cdot}v}{a_1}\Biggr]\!\mbox{.}
\end{equation}
Differentiating again yields
\begin{multline}\label{LEH.sec.3.thm.1.eq.1}
\displaystyle\frac{\partial^2}{\partial{a_1^2}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}=\frac{(b-a\mathord{\cdot}v)^{n-3}}{\pi(a)}\Biggl[\frac{(n-1)(n-2)}{4}\\
\hfill-2(n-1)\!\left(\frac{1}{2}-v_1\right)\!\frac{b-a\mathord{\cdot}v}{a_1}+2\frac{(b-a\mathord{\cdot}v)^2}{a_1^2}\Biggr]
\end{multline}
and
\begin{multline}\label{LEH.sec.3.thm.1.eq.2}
\displaystyle\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}=\frac{(b-a\mathord{\cdot}v)^{n-3}}{\pi(a)}\Biggl[(n-1)(n-2)\!\left(\frac{1}{2}-v_1\right)\!\!\left(\frac{1}{2}-v_2\right)\\
\hfill-(n-1)\!\left(\frac{1}{2}-v_2\right)\!\frac{b-a\mathord{\cdot}v}{a_1}-(n-1)\!\left(\frac{1}{2}-v_1\right)\!\frac{b-a\mathord{\cdot}v}{a_2}+\frac{(b-a\mathord{\cdot}v)^2}{a_1a_2}\Biggr]\!\mbox{.}
\end{multline}
Now, for any integer $i$ satisfying $0\leq{i}\leq{n}$, denote by $\mathcal{L}_i$ the set of the vertices $v$ of the hypercube $[0,1]^n$ whose coordinates sum to $i$. Recall that
$$
|\mathcal{L}_i|={n\choose{i}}\!\mbox{.}
$$
Moreover, exactly
$$
n-1\choose{i-1}
$$
vertices $v$ in $\mathcal{L}_i$ satisfy $v_1=1$. In particular,
$$
\begin{array}{rcl}
\displaystyle\sum_{v\in\mathcal{L}_i}\!\left(\frac{1}{2}-v_1\right)\! & \!\!\!\!=\!\!\!\! & \displaystyle\frac{1}{2}{n\choose{i}}-{n-1\choose{i-1}}\!\mbox{,}\\[\bigskipamount]
& \!\!\!\!=\!\!\!\! & \displaystyle{n\choose{i}}\!\left(\frac{1}{2}-\frac{i}{n}\right)\!\!\mbox{.}\\
\end{array}
$$
Assume that $a$ is the point of $]0,+\infty[^n$ whose first $n$ coordinates are equal to $1/\sqrt{n}$. In that case, for any vertex $v$ in $\mathcal{L}_i$, (\ref{LEH.sec.3.eq.1}) yields
$$
b-a\mathord{\cdot}v=\frac{z-i}{\sqrt{n}}\mbox{.}
$$
Therefore, $b-a\mathord{\cdot}v$ only depends on $i$ and, according to (\ref{LEH.sec.3.thm.1.eq.0}),
\begin{equation}\label{LEH.sec.3.thm.1.eq.2.5}
\sum_{v\in\mathcal{L}_i}\frac{\partial}{\partial{a_1}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}=n{n\choose{i}}(z-i)^{n-2}\Biggl[(n-1)\!\left(\frac{1}{2}-\frac{i}{n}\right)\!-(z-i)\Biggr]\!\mbox{.}
\end{equation}
Similarly, according to (\ref{LEH.sec.3.thm.1.eq.1}),
\begin{multline}\label{LEH.sec.3.thm.1.eq.3}
\sum_{v\in\mathcal{L}_i}\frac{\partial^2}{\partial{a_1^2}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}=n\sqrt{n}{n\choose{i}}(z-i)^{n-3}\Biggl[\frac{(n-1)(n-2)}{4}\\
\hfill-2(n-1)\!\left(\frac{1}{2}-\frac{i}{n}\right)\!(z-i)+2(z-i)^2\Biggr]\!\mbox{.}
\end{multline}
Observe that $\mathcal{L}_i$ contains exactly
$$
{n-2\choose{i}}
$$
vertices whose first two coordinates are both equal to $0$,
$$
2{n-2\choose{i-1}}
$$
vertices whose first two coordinates are different, and
$$
{n-2\choose{i-2}}
$$
vertices whose first two coordinates are both equal to $1$. In particular,
$$
\begin{array}{rcl}
\displaystyle\sum_{v\in\mathcal{L}_i}\!\left(\frac{1}{2}-v_1\right)\!\!\left(\frac{1}{2}-v_2\right)\! & \!\!\!\!=\!\!\!\! & \displaystyle\frac{1}{4}\!\left[{n-2\choose{i}}+{n-2\choose{i-2}}-2{n-2\choose{i-1}}\right]\!\!\mbox{,}\\[\bigskipamount]
& \!\!\!\!=\!\!\!\! & \displaystyle{n\choose{i}}\!\left(\frac{1}{4}-\frac{i(n-i)}{n(n-1)}\right)\!\!\mbox{.}\\[\bigskipamount]
\end{array}
$$
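For instance, when $n=4$ and $i=2$, both sides of this identity equal
$$
\frac{1}{4}\!\left[{2\choose{2}}+{2\choose{0}}-2{2\choose{1}}\right]={4\choose{2}}\!\left(\frac{1}{4}-\frac{1}{3}\right)\!=-\frac{1}{2}\mbox{.}
$$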
Hence, it follows from (\ref{LEH.sec.3.thm.1.eq.2}) that
\begin{multline}\label{LEH.sec.3.thm.1.eq.4}
\sum_{v\in\mathcal{L}_i}\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}=n\sqrt{n}{n\choose{i}}(z-i)^{n-3}\Biggl[\frac{(n-1)(n-2)}{4}\\
\hfill-\frac{i(n-i)(n-2)}{n}-2(n-1)\!\left(\frac{1}{2}-\frac{i}{n}\right)\!(z-i)+(z-i)^2\Biggr]\!\mbox{.}
\end{multline}
Since the first $n$ coordinates of $a$ are equal, a vertex $v$ of $[0,1]^n$ satisfies $a\mathord{\cdot}v\leq{b}$ if and only if its coordinates sum to at most $z$. Hence, by Lemma \ref{LEH.sec.1.lem.2},
\begin{multline*}
\frac{\partial^2}{\partial{a_1^2}}\frac{V}{\|a\|}-\sqrt{n}\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}-\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}=\sum_{i=0}^{\lfloor{z}\rfloor}\frac{(-1)^i}{(n-1)!}\sum_{v\in\mathcal{L}_i}\Biggl[\frac{\partial^2}{\partial{a_1^2}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}\\
\hfill-\sqrt{n}\frac{\partial}{\partial{a_1}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}-\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{(b-a\mathord{\cdot}v)^{n-1}}{\pi(a)}\Biggr]\!\mbox{.}
\end{multline*}
Combining this with (\ref{LEH.sec.0.thm.1.eq.-1}), (\ref{LEH.sec.3.thm.1.eq.2.5}), (\ref{LEH.sec.3.thm.1.eq.3}), and (\ref{LEH.sec.3.thm.1.eq.4}) completes the proof.
\end{proof}
Theorem \ref{LEH.sec.0.thm.1} can now be proven.
\begin{proof}[Proof of Theorem \ref{LEH.sec.0.thm.1}]
Assume that $n$ is equal to $d$. The theorem is obtained as a consequence of Theorem~\ref{LEH.sec.2.thm.1} and Lemma \ref{LEH.sec.3.lem.1}. Indeed, according to the latter, (\ref{LEH.sec.0.thm.1.eq.0}) and (\ref{LEH.sec.2.thm.1.eq.0.1}) are positive multiples of each other.
\end{proof}
\begin{rem}
Note that a statement equivalent to that of Theorem \ref{LEH.sec.0.thm.1} can be obtained by replacing (\ref{LEH.sec.0.thm.1.eq.0}) with the right-hand side of (\ref{LEH.sec.2.lem.1.eq.1}).
\end{rem}
In order to determine the signs of weighted alternating sums of binomial coefficients such as (\ref{LEH.sec.0.thm.1.eq.0}), the following technical statement will be used.
\begin{prop}\label{LEH.sec.3.prop.1}
Consider a non-negative integer $l$ such that $l<z$ and, for each integer~$i$ satisfying $l\leq{i}\leq{z}$, a positive number $f_i(z)$. Assume that $f_i(z)/f_{i+1}(z)$ is monotonically increasing with $i$. In this case, if
\begin{equation}\label{LEH.sec.3.prop.1.eq.0}
z<l+\frac{1}{1-\!\left(\displaystyle\frac{l+1}{n-l}\frac{f_l(z)}{f_{l+1}(z)}\right)^{1/(n-3)}}\mbox{,}
\end{equation}
then the sum
\begin{equation}\label{LEH.sec.3.prop.1.eq.1}
\sum_{i=l}^{\lfloor{z}\rfloor}(-1)^i{n\choose{i}}(z-i)^{n-3}f_i(z)
\end{equation}
is positive when $l$ is even and negative when $l$ is odd.
\end{prop}
\begin{proof}
First observe that, when $l$ is equal to $\lfloor{z}\rfloor$, the result is immediate. Therefore, it is assumed in this proof that $l$ is less than $\lfloor{z}\rfloor$. In that case, denote by $m$ the largest integer less than or equal to $z$ whose parity differs from that of $l$. It is sufficient to show that
\begin{equation}\label{LEH.sec.3.prop.1.eq.2}
\sum_{i=l}^m(-1)^i{n\choose{i}}(z-i)^{n-3}f_i(z)
\end{equation}
is positive when $l$ is even and negative when $l$ is odd. Indeed, that sum misses at most one term of (\ref{LEH.sec.3.prop.1.eq.1}), and that missing term (when there is one) necessarily has the desired sign. Now observe that there is an even number of terms in (\ref{LEH.sec.3.prop.1.eq.2}), whose signs alternate. Hence, it suffices to prove that
\begin{equation}\label{LEH.sec.3.prop.1.eq.3}
{n\choose{i}}(z-i)^{n-3}f_i(z)-{n\choose{i+1}}(z-i-1)^{n-3}f_{i+1}(z)
\end{equation}
is positive when $l\leq{i}<m$. One can see that (\ref{LEH.sec.3.prop.1.eq.3}) is positive if and only if
$$
\frac{i+1}{n-i}\frac{f_i(z)}{f_{i+1}(z)}>\!\left(1-\frac{1}{z-i}\right)^{n-3}
$$
which in turn is equivalent to
$$
z<i+\frac{1}{1-\!\left(\displaystyle\frac{i+1}{n-i}\frac{f_i(z)}{f_{i+1}(z)}\right)^{1/(n-3)}}\mbox{.}
$$
As $f_i(z)/f_{i+1}(z)$ is monotonically increasing with $i$, so is the right-hand side of this inequality, which is therefore implied by (\ref{LEH.sec.3.prop.1.eq.0}), as desired.
\end{proof}
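For instance, taking $l=0$ and $f_i(z)=1$ for every $i$, the ratio $f_i(z)/f_{i+1}(z)$ is constant (hence monotonically increasing in the weak sense) and (\ref{LEH.sec.3.prop.1.eq.0}) becomes
$$
z<\frac{1}{1-(1/n)^{1/(n-3)}}=\frac{n^{1/(n-3)}}{n^{1/(n-3)}-1}\mbox{,}
$$
in which case the sum
$$
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{n\choose{i}}(z-i)^{n-3}
$$
is positive. This special case will be used in the proof of Theorem \ref{LEH.sec.0.thm.0.3}.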
The following can be proven using Proposition \ref{LEH.sec.3.prop.1}.
\begin{lem}\label{LEH.sec.3.lem.2}
Assume that $4\leq{n}\leq{d}$. If in addition,
\begin{equation}\label{LEH.sec.3.lem.2.eq.1}
0<z<\min\!\left\{\frac{n-1}{4},\frac{n^{1/(n-3)}}{n^{1/(n-3)}-1}\right\}\!\mbox{,}
\end{equation}
then, at the point $a$ of $]0,+\infty[^n$ whose first $n$ coordinates are equal to $1/\sqrt{n}$,
\begin{equation}\label{LEH.sec.3.lem.2.eq.1.5}
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{n\choose{i}}(z-i)^{n-3}p_{i,n}(z)<0\mbox{.}
\end{equation}
\end{lem}
\begin{proof}
Assume that $z$ satisfies (\ref{LEH.sec.3.lem.2.eq.1}) and note that, when $n=4$, this implies
$$
0<z<\frac{3}{4}\mbox{.}
$$
As a consequence,
$$
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{n\choose{i}}(z-i)^{n-3}p_{i,n}(z)=zp_{0,4}(z)\mbox{.}
$$
However, in turn,
$$
p_{0,4}(z)=\frac{4}{3}z\!\left(z-\frac{3}{4}\right)\!\!\mbox{,}
$$
and the desired result immediately follows. Now assume that $n$ is at least $5$ and observe that $p_{i,n}(z)$ can be rewritten as
\begin{equation}\label{LEH.sec.3.lem.2.eq.2}
p_{i,n}(z)=\frac{-f_i(z)+g_i(z)}{(n-1)(n-2)}
\end{equation}
where
$$
f_i(z)=2nz\!\left(\frac{n-1}{4}-z\right)\!
$$
and
$$
g_i(z)=(3n+1)\!\left(\frac{n}{2}-\frac{3n}{3n+1}-z\right)\!i+3i^2\mbox{.}
$$
Further note that $f_i(z)$ is positive when $0\leq{i}\leq{z}$ because $z$ is less than $(n-1)/4$. As in addition, $n$ is at least $4$, $g_i(z)$ is positive when $1\leq{i}\leq{z}$. Moreover, since $f_i(z)$ does not depend on $i$, the ratio $f_i(z)/f_{i+1}(z)$ is monotonically increasing with $i$ (which is meant in a weak sense here).
On the other hand, $g_i(z)/g_{i+1}(z)$ can be written in the form
$$
\left(1-\frac{1}{i+1}\right)\!\!\left(1-\frac{1}{\alpha+i+1}\right)
$$
where
$$
\alpha=\frac{(3n+1)}{3}\!\left(\frac{n}{2}-\frac{3n}{3n+1}-z\right)\!\!\mbox{.}
$$
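Indeed, $g_i(z)$ factors as $g_i(z)=3i(\alpha+i)$, so that
$$
\frac{g_i(z)}{g_{i+1}(z)}=\frac{i(\alpha+i)}{(i+1)(\alpha+i+1)}=\!\left(1-\frac{1}{i+1}\right)\!\!\left(1-\frac{1}{\alpha+i+1}\right)\!\!\mbox{.}
$$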
Again, $\alpha$ is positive because $z$ is less than $(n-1)/4$. As a consequence, the ratio $g_i(z)/g_{i+1}(z)$ is also monotonically increasing with $i$.
Now observe that $g_0(z)$ is equal to $0$. Hence, by (\ref{LEH.sec.3.lem.2.eq.2}),
\begin{multline*}
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{n\choose{i}}(z-i)^{n-3}p_{i,n}(z)=-\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{n\choose{i}}\frac{f_i(z)(z-i)^{n-3}}{(n-1)(n-2)}\\
+\sum_{i=1}^{\lfloor{z}\rfloor}(-1)^i{n\choose{i}}\frac{g_i(z)(z-i)^{n-3}}{(n-1)(n-2)}\mbox{.}
\end{multline*}
According to Proposition \ref{LEH.sec.3.prop.1}, this is negative when
$$
0<z<\min\!\left\{\frac{n^{1/(n-3)}}{n^{1/(n-3)}-1},1+\frac{1}{1-q^{1/(n-3)}}\right\}\!
$$
where
$$
q=\frac{2}{n-1}\frac{g_1(z)}{g_2(z)}\mbox{.}
$$
In order to complete the proof, it suffices to show that
$$
\frac{1}{1-q^{1/(n-3)}}\geq\frac{n^{1/(n-3)}}{n^{1/(n-3)}-1}
$$
or, equivalently, that $q\geq1/n$. Observe that
$$
\begin{array}{rcl}
q & \!\!\!\!=\!\!\!\! & \displaystyle\frac{1}{n-1}\frac{(3n+1)(n-2z)-6n+6}{(3n+1)(n-2z)-6n+12}\mbox{,}\\[\bigskipamount]
& \!\!\!\!=\!\!\!\! & \displaystyle\frac{1}{n-1}\!\left(1-\frac{6}{3n^2-5n+12-(2+6n)z}\right)\!\!\mbox{.}\\
\end{array}
$$
Therefore, $q\geq1/n$ if and only if
$$
\frac{6}{3n^2-5n+12-(2+6n)z}\leq\frac{1}{n}\mbox{.}
$$
Note that the denominator on the left-hand side of this inequality is positive. Hence, the inequality is equivalent to the non-negativity of
$$
3n^2-11n+12-(2+6n)z\mbox{.}
$$
However, since $z<(n-1)/4$,
$$
3n^2-11n+12-(2+6n)z>\frac{3}{2}n^2-10n+\frac{25}{2}\mbox{.}
$$
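The right-hand side can be made explicit by factoring:
$$
\frac{3}{2}n^2-10n+\frac{25}{2}=\frac{3}{2}(n-5)\!\left(n-\frac{5}{3}\right)\!\mbox{,}
$$
which vanishes at $n=5$ and is positive when $n>5$.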
As the right-hand side of this inequality is non-negative when $n\geq5$, this shows that $q$ is at least $1/n$, as desired.
\end{proof}
Theorem \ref{LEH.sec.0.thm.0.1} can now be proven.
\begin{proof}[Proof of Theorem \ref{LEH.sec.0.thm.0.1}]
By the symmetries of the hypercube, it suffices to show that $V$ has a strict local maximum on $\mathbb{S}^{d-1}\cap]0,+\infty[^d$ at the point $a$ whose coordinates are all equal to $1/\sqrt{d}$. Assume that $n$ is equal to $d$. In that case,
$$
z=\frac{d}{2}-t\sqrt{d}
$$
and the result follows from Theorem \ref{LEH.sec.0.thm.1} and Lemma \ref{LEH.sec.3.lem.2}.
\end{proof}
\section{The sub-diagonals of the hypercube}\label{LEH.sec.3.5}
While Theorem \ref{LEH.sec.2.thm.1} only requires the negativity of (\ref{LEH.sec.2.thm.1.eq.0.1}) to treat the diagonals of the hypercube, it further needs (\ref{LEH.sec.2.thm.1.eq.0.2}) to be negative as well in the case of lower-order sub-diagonals. The following lemma gives an expression for the latter quantity. As in the previous section, $n$ is an integer such that $4\leq{n}\leq{d}$ and $z$ is related to $t$ via the change of variables (\ref{LEH.sec.3.eq.1}).
\begin{lem}\label{LEH.sec.3.5.lem.1}
Assume that $n$ is less than $d$. In this case, at the point of $]0,+\infty[^n$ whose first $n$ coordinates are $1/\sqrt{n}$,
$$
\frac{\partial^2}{\partial{a_d^2}}\frac{V}{\|a\|}=\sum_{i=0}^{\lfloor{z}\rfloor}\frac{(-1)^in\sqrt{n}}{12(n-3)!}{n\choose{i}}(z-i)^{n-3}\mbox{.}
$$
\end{lem}
\begin{proof}
According to Lemma \ref{LEH.sec.1.lem.3},
\begin{equation}\label{LEH.sec.3.5.lem.1.eq.1}
\frac{\partial^2}{\partial{a_j^2}}\frac{V}{\|a\|}=\sum\frac{(-1)^{\sigma(v)}(b-a\mathord{\cdot}v)^{n-3}}{12(n-3)!\pi(a)}
\end{equation}
where the sum is over the vertices $v$ of $[0,1]^n$ satisfying $a\mathord{\cdot}v\leq{b}$. At the point $a$ of $]0,+\infty[^n$ whose first $n$ coordinates are equal to $1/\sqrt{n}$, these vertices are precisely the ones whose sum of coordinates is at most $z$. Recall that, for any integer $i$ such that $0\leq{i}\leq{z}$, the hypercube $[0,1]^n$ has exactly
$$
{n\choose{i}}
$$
vertices whose coordinates sum to $i$. As, for any such vertex $v$,
$$
\frac{(-1)^{\sigma(v)}(b-a\mathord{\cdot}v)^{n-3}}{12(n-3)!\pi(a)}=n\sqrt{n}\frac{(-1)^i(z-i)^{n-3}}{12(n-3)!}\mbox{,}
$$
the desired expression follows from (\ref{LEH.sec.3.5.lem.1.eq.1}).
\end{proof}
Theorem \ref{LEH.sec.0.thm.2} can now be proven.
\begin{proof}[Proof of Theorem \ref{LEH.sec.0.thm.2}]
Assume that $n$ is less than $d$. According to Lemma \ref{LEH.sec.3.lem.1}, (\ref{LEH.sec.0.thm.2.eq.0}) and (\ref{LEH.sec.2.thm.1.eq.0.1}) have the same sign. By Lemma \ref{LEH.sec.3.5.lem.1}, (\ref{LEH.sec.0.thm.2.eq.1}) and (\ref{LEH.sec.2.thm.1.eq.0.2}) also have the same sign and the result follows from Theorem~\ref{LEH.sec.2.thm.1}.
\end{proof}
\begin{rem}
Theorem \ref{LEH.sec.0.thm.2} can be stated equivalently using the right-hand side of (\ref{LEH.sec.2.lem.1.eq.1}) instead of (\ref{LEH.sec.0.thm.2.eq.0}) and the right-hand side of (\ref{LEH.sec.2.lem.1.eq.2}) instead of (\ref{LEH.sec.0.thm.2.eq.1}).
\end{rem}
The results established so far make it possible to prove that $V$ is locally extremal at certain points. The following theorem, in contrast, makes it possible to show that $V$ fails to be locally extremal, even weakly, at such points. Its proof is similar to that of Theorem~\ref{LEH.sec.2.thm.1}, except that it relies on the necessary conditions of the Lagrange multipliers theorem instead of the sufficient conditions.
\begin{thm}\label{LEH.sec.3.5.thm.1}
Consider an integer $n$ satisfying $2\leq{n}<d$ and assume that $V$ is twice continuously differentiable at the point $a$ of $\mathbb{R}^d$ whose first $n$ coordinates are equal to $1/\sqrt{n}$ and whose last $d-n$ coordinates are equal to $0$. If at that point,
\begin{equation}\label{LEH.sec.3.5.thm.1.eq.0.1}
\frac{\partial^2}{\partial{a_1^2}}\frac{V}{\|a\|}-\sqrt{n}\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}-\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}
\end{equation}
and
\begin{equation}\label{LEH.sec.3.5.thm.1.eq.0.2}
\frac{\partial^2}{\partial{a_d^2}}\frac{V}{\|a\|}
\end{equation}
are both non-zero and have opposite signs, then $V$ does not have a local extremum (even a weak one) on $\mathbb{S}^{d-1}\cap[0,+\infty[^d$ at $a$.
\end{thm}
\begin{proof}
As in the proof of Theorem \ref{LEH.sec.2.thm.1}, denote
$$
\lambda=-\frac{\sqrt{n}}{2}\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}\mbox{.}
$$
Here and in the remainder of the proof, all partial derivatives are taken at the point $a$ of $\mathbb{R}^d$ whose first $n$ coordinates are equal to $1/\sqrt{n}$.
Further denote
$$
L_\lambda=\frac{V}{\|a\|}+\lambda\!\left(\|a\|^2-1\right)\!.
$$
With the above choice for $\lambda$, one obtains from the same argument as in the proof of Theorem \ref{LEH.sec.2.thm.1} that $a$ is a critical point of $L_\lambda$. By the necessary conditions of the Lagrange multipliers theorem (see for instance Proposition 3.1.1 from \cite{Bertsekas1999}), if $V$ has a local maximum on $\mathbb{S}^{d-1}\cap[0,+\infty[^d$ at the point $a$ then, for every non-zero point $x$ in $\mathbb{R}^d$ whose first $n$ coordinates sum to $0$,
\begin{equation}\label{LEH.sec.3.5.thm.1.eq.1}
\sum_{j=1}^d\sum_{k=1}^dx_jx_k\frac{\partial^2{L_\lambda}}{\partial{a_j}\partial{a_k}}\leq0\mbox{.}
\end{equation}
However, as shown in the proof of Theorem \ref{LEH.sec.2.thm.1},
\begin{multline*}
\sum_{j=1}^d\sum_{k=1}^dx_jx_k\frac{\partial^2{L_\lambda}}{\partial{a_j}\partial{a_k}}=\!\left[\frac{\partial^2}{\partial{a_1^2}}\frac{V}{\|a\|}-\sqrt{n}\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}-\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}\right]\!\sum_{i=1}^nx_i^2\\
+\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}\!\left[\sum_{i=1}^nx_i\right]^2\!+\frac{\partial^2}{\partial{a_d^2}}\frac{V}{\|a\|}\sum_{i=n+1}^dx_i^2\mbox{.}
\end{multline*}
Hence, taking for $x$ the point of $\mathbb{R}^d$ whose first two coordinates are $1$ and $-1$ and all of whose other coordinates are equal to $0$ yields
$$
\sum_{j=1}^d\sum_{k=1}^dx_jx_k\frac{\partial^2{L_\lambda}}{\partial{a_j}\partial{a_k}}=2\!\left[\frac{\partial^2}{\partial{a_1^2}}\frac{V}{\|a\|}-\sqrt{n}\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}-\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}\right]\!\mbox{,}
$$
and taking for $x$ the point of $\mathbb{R}^d$ whose first $d-1$ coordinates are equal to $0$ and whose last coordinate is equal to $1$ yields
$$
\sum_{j=1}^d\sum_{k=1}^dx_jx_k\frac{\partial^2{L_\lambda}}{\partial{a_j}\partial{a_k}}=\frac{\partial^2}{\partial{a_d^2}}\frac{V}{\|a\|}\mbox{.}
$$
Therefore, when (\ref{LEH.sec.3.5.thm.1.eq.0.1}) and (\ref{LEH.sec.3.5.thm.1.eq.0.2}) are non-zero and have opposite signs, then $V$ does not have a local maximum on $\mathbb{S}^{d-1}\cap[0,+\infty[^d$ at $a$. By the same argument, but with the necessary conditions of the Lagrange multipliers theorem for local minima instead of maxima, one obtains that, under the same assumptions, $V$ cannot have a local minimum on $\mathbb{S}^{d-1}\cap[0,+\infty[^d$ at $a$.
\end{proof}
Theorem \ref{LEH.sec.0.thm.0.3} is now proven as a consequence of Theorem \ref{LEH.sec.3.5.thm.1}.
\begin{proof}[Proof of Theorem \ref{LEH.sec.0.thm.0.3}]
Assume that $4\leq{n}<d$ and that $t$ satisfies (\ref{LEH.sec.0.thm.0.3.eq.1}). By the symmetries of the hypercube, it suffices to show that $V$ does not have a local extremum on $\mathbb{S}^{d-1}\cap[0,+\infty[^d$ at the point $a$ of $\mathbb{R}^d$ whose first $n$ coordinates are equal to $1/\sqrt{n}$. Recall that $z$ and $t$ are linked via the change of variables (\ref{LEH.sec.3.eq.1}). In particular, (\ref{LEH.sec.0.thm.0.3.eq.1}) is equivalent to the condition on $z$ in the statement of Lemma \ref{LEH.sec.3.lem.2}. Hence, according to that lemma and to Lemma \ref{LEH.sec.3.lem.1},
$$
\frac{\partial^2}{\partial{a_1^2}}\frac{V}{\|a\|}-\sqrt{n}\frac{\partial}{\partial{a_1}}\frac{V}{\|a\|}-\frac{\partial^2}{\partial{a_1}\partial{a_2}}\frac{V}{\|a\|}
$$
is negative at the point $a$ of $\mathbb{R}^d$ whose first $n$ coordinates are equal to $1/\sqrt{n}$. Therefore, by Theorem \ref{LEH.sec.3.5.thm.1}, it suffices to show that
$$
\frac{\partial^2}{\partial{a_d^2}}\frac{V}{\|a\|}
$$
is positive at that point. Lemma \ref{LEH.sec.3.5.lem.1} yields
$$
\frac{\partial^2}{\partial{a_d^2}}\frac{V}{\|a\|}=\sum_{i=0}^{\lfloor{z}\rfloor}\frac{(-1)^in\sqrt{n}}{12(n-3)!}{n\choose{i}}(z-i)^{n-3}\mbox{,}
$$
which, according to Proposition \ref{LEH.sec.3.prop.1}, is positive when
$$
0<z<\frac{n^{1/(n-3)}}{n^{1/(n-3)}-1}\mbox{.}
$$
According to (\ref{LEH.sec.3.eq.1}), this is implied by (\ref{LEH.sec.0.thm.0.3.eq.1}), as desired.
\end{proof}
\section{Low dimensional hypercubes}\label{LEH.sec.4}
As observed in the proof of Lemma \ref{LEH.sec.3.lem.2},
$$
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{d\choose{i}}(z-i)^{d-3}p_{i,d}(z)=\frac{4}{3}z^2\!\left(z-\frac{3}{4}\right)
$$
when $d$ is equal to $4$ and $0<z<1$. This made it possible to show that if
$$
\frac{\sqrt{4}}{2}-\frac{3}{4}\frac{1}{\sqrt{4}}<t<\frac{\sqrt{4}}{2}\mbox{,}
$$
then the $3$-dimensional volume of $H\cap[0,1]^4$ is strictly locally maximal when $H$ is orthogonal to a diagonal of the $4$-dimensional hypercube $[0,1]^4$. By Theorem~\ref{LEH.sec.0.thm.1}, this also immediately proves that, if
$$
\frac{\sqrt{4}}{2}-\frac{1}{\sqrt{4}}\leq{t}<\frac{\sqrt{4}}{2}-\frac{3}{4}\frac{1}{\sqrt{4}}\mbox{,}
$$
then the $3$-dimensional volume of $H\cap[0,1]^4$ is strictly locally minimal when $H$ is orthogonal to a diagonal of $[0,1]^4$. The range of $t$ over which local minimality occurs can be completed. Indeed, when $d=4$ and $1\leq{z}\leq{2}$,
$$
\begin{array}{rcl}
\displaystyle\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{d\choose{i}}(z-i)^{d-3}p_{i,d}(z) & \!\!\!\!=\!\!\!\! & \displaystyle zp_{0,4}(z)-4(z-1)p_{1,4}(z)\mbox{,}\\
& \!\!\!\!=\!\!\!\! & \displaystyle -4z^3+17z^2-24z+\frac{34}{3}\mbox{.}\\
\end{array}
$$
This polynomial admits a unique real root,
\begin{equation}\label{LEH.sec.4.eq.1}
\rho_4^-=\frac{17+(17-12\sqrt{2})^{1/3}+(17+12\sqrt{2})^{1/3}}{12}
\end{equation}
which is about $1.71229$. Moreover, it is positive when $z<\rho_4^-$, and negative when $z>\rho_4^-$. As a consequence, by Theorem \ref{LEH.sec.0.thm.1}, if
$$
\frac{\sqrt{4}}{2}-\frac{\rho_4^-}{\sqrt{4}}<t<\frac{\sqrt{4}}{2}-\frac{3}{4}\frac{1}{\sqrt{4}}\mbox{,}
$$
then the $3$-dimensional volume of $H\cap[0,1]^4$ is strictly locally minimal when $H$ is orthogonal to a diagonal of $[0,1]^4$ and if
$$
0<t<\frac{\sqrt{4}}{2}-\frac{\rho_4^-}{\sqrt{4}}\mbox{,}
$$
then that volume becomes strictly locally maximal again when $H$ is orthogonal to a diagonal of the $4$-dimensional hypercube $[0,1]^4$. These observations carry over to (at least) the first few higher dimensions.
\begin{prop}\label{LEH.sec.4.prop.1}
Assume that $4\leq{d}\leq7$. There exist two numbers $\rho_d^-$ and $\rho_d^+$ such that $0<\rho_d^+<\rho_d^-<d/2$ and, if
$$
t\in\!\left]0,\frac{\sqrt{d}}{2}-\frac{\rho_d^-}{\sqrt{d}}\right[\!\cup\!\left]\frac{\sqrt{d}}{2}-\frac{\rho_d^+}{\sqrt{d}},\frac{\sqrt{d}}{2}\right[\!
$$
then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ is strictly locally maximal when $H$ is orthogonal to a diagonal of $[0,1]^d$. However, if
$$
t\in\!\left]\frac{\sqrt{d}}{2}-\frac{\rho_d^-}{\sqrt{d}},\frac{\sqrt{d}}{2}-\frac{\rho_d^+}{\sqrt{d}}\right[\!
$$
then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ is strictly locally minimal when $H$ is orthogonal to a diagonal of $[0,1]^d$.
\end{prop}
\begin{proof}
The proposition has been proven above when $d$ is equal to $4$. Assume that $d\geq5$ and recall that the desired result is proven in \cite{Konig2021} when
$$
\frac{\sqrt{d}}{2}-\frac{1}{\sqrt{d}}<t<\frac{\sqrt{d}}{2}\mbox{.}
$$
Assume first that $d$ is equal to $5$. If in addition, $1\leq{z}\leq{2}$, then
$$
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{d\choose{i}}(z-i)^{d-3}p_{i,d}(z)=-\frac{5}{6}(4z^2-10z+7)(z-1)(z-2)
$$
and if $2<z\leq{5/2}$, then
$$
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{d\choose{i}}(z-i)^{d-3}p_{i,d}(z)=\frac{5}{2}(2z^2-10z+13)(z-2)(z-3)\mbox{.}
$$
Since $4z^2-10z+7$ and $2z^2-10z+13$ are both always positive, the desired result follows from Theorem \ref{LEH.sec.0.thm.1} with $\rho_5^-=2$ and $\rho_5^+=1$. Now assume that $d$ is equal to $6$. In that case, when $1\leq{z}\leq{2}$,
$$
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{d\choose{i}}(z-i)^{d-3}p_{i,d}(z)=-3z^5+\frac{81}{4}z^4-54z^3+72z^2-48z+\frac{63}{5}\mbox{.}
$$
This polynomial is negative when $z=1$ and positive when $z=2$. It is also increasing in the interval $[1,2]$ (this can easily be seen from its derivative, a degree $4$ polynomial that admits $1$ and $2$ as its only real roots). Hence, this polynomial has a unique root $\rho_6^+$ in the interval $]1,2[$.
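For illustration, evaluating this polynomial at the endpoints of $[1,2]$ gives
$$
-3+\frac{81}{4}-54+72-48+\frac{63}{5}=-\frac{3}{20}
\quad\mbox{and}\quad
-96+324-432+288-96+\frac{63}{5}=\frac{3}{5}\mbox{,}
$$
consistent with the signs just stated.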
Further observe that, when $2<z\leq{3}$,
\begin{multline*}
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{d\choose{i}}(z-i)^{d-3}p_{i,d}(z)=6z^5-\frac{147}{2}z^4+360z^3-882z^2+1080z-\frac{2637}{5}\mbox{.}
\end{multline*}
This polynomial is decreasing in the interval $]2,3[$ (which, again, can easily be seen from its derivative), positive when $z=2$, and negative when $z=3$. Denoting by $\rho_6^-$ the only root of this polynomial contained in that interval, the desired result follows again from Theorem \ref{LEH.sec.0.thm.1}.
Finally, assume that $d$ is equal to $7$. Using the same kind of straightforward arguments (but relying on the first two derivatives instead of just the first), one easily shows that there exist two numbers $z_1$ and $z_2$, the first in $]1,2[$ and the other in $]2,3[$, such that the quantity
\begin{equation}\label{LEH.sec.4.prop.1.eq.1}
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{d\choose{i}}(z-i)^{d-3}p_{i,d}(z)\mbox{,}
\end{equation}
thought of as a function of $z$, is strictly decreasing in $[1,z_1[$, strictly increasing in $]z_1,z_2[$, and strictly decreasing again in $]z_2,7/2]$. Moreover, (\ref{LEH.sec.4.prop.1.eq.1}) is negative when $z$ is equal to $1$, $3$, and $7/2$ and positive when $z=2$.
Therefore, there exist two numbers $\rho_7^-$ and $\rho_7^+$, the first in $]2,3[$ and the other in $]1,2[$, such that (\ref{LEH.sec.4.prop.1.eq.1}) vanishes when $z$ is equal to either of them, is negative when $z$ belongs to $[1,\rho_7^+[\cup]\rho_7^-,7/2]$, and positive when $z$ is in $]\rho_7^+,\rho_7^-[$. Hence, the proposition follows once more from Theorem \ref{LEH.sec.0.thm.1}.
\end{proof}
\begin{rem}\label{LEH.sec.4.rem.1}
As discussed above, $\rho_4^+$ is equal to $3/4$ and $\rho_4^-$, given by (\ref{LEH.sec.4.eq.1}), is about $1.71229$. It is also explicit in the proof of Proposition \ref{LEH.sec.4.prop.1} that $\rho_5^+$ is $1$ and $\rho_5^-$ is $2$. However, $\rho_6^+$ and $\rho_6^-$ do not have exact expressions. They are about $1.39766$ and $2.46963$, respectively. Likewise, $\rho_7^+$ and $\rho_7^-$ cannot be expressed exactly, but they are about $1.77221$ and $2.9324$, respectively.
\end{rem}
A similar result can be obtained for low-order sub-diagonals. This can be illustrated with order $4$ sub-diagonals of higher-dimensional hypercubes. Indeed, assume that $n=4$ and $d>4$. Observe that, if $0<z<1$ then
$$
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{n\choose{i}}(z-i)^{n-3}=z\mbox{.}
$$
Moreover, if $1\leq{z}\leq{2}$,
\begin{equation}\label{LEH.sec.4.eq.2}
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{n\choose{i}}(z-i)^{n-3}=-3\!\left(z-\frac{4}{3}\right)\!\!\mbox{.}
\end{equation}
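Indeed, for $1\leq{z}\leq{2}$, only the terms $i=0$ and $i=1$ contribute and
$$
z-4(z-1)=-3\!\left(z-\frac{4}{3}\right)\!\!\mbox{.}
$$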
Therefore, according to Theorem \ref{LEH.sec.0.thm.2} and the discussion about $\rho_4^+$ and $\rho_4^-$ from the beginning of the section, if
$$
0<t<\frac{\sqrt{4}}{2}-\frac{\rho_4^-}{\sqrt{4}}\mbox{,}
$$
then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ is strictly locally maximal when $H$ is orthogonal to an order $4$ sub-diagonal of $[0,1]^d$. Likewise, if
$$
\frac{\sqrt{4}}{2}-\frac{4}{3}\frac{1}{\sqrt{4}}<t<\frac{\sqrt{4}}{2}-\frac{\rho_4^+}{\sqrt{4}}\mbox{,}
$$
where the $4/3$ in the left-hand side is the value of $z$ at which (\ref{LEH.sec.4.eq.2}) vanishes, then that volume is strictly locally minimal when $H$ is orthogonal to an order $4$ sub-diagonal of $[0,1]^d$. Moreover, by Theorem \ref{LEH.sec.3.5.thm.1} (where (\ref{LEH.sec.3.5.thm.1.eq.0.1}) and (\ref{LEH.sec.3.5.thm.1.eq.0.2}) are expressed using Lemmas \ref{LEH.sec.3.lem.1} and \ref{LEH.sec.3.5.lem.1}), if
$$
\frac{\sqrt{4}}{2}-\frac{\rho_4^-}{\sqrt{4}}<t<\frac{\sqrt{4}}{2}-\frac{4}{3}\frac{1}{\sqrt{4}}
$$
or
$$
\frac{\sqrt{4}}{2}-\frac{\rho_4^+}{\sqrt{4}}<t\leq\frac{\sqrt{4}}{2}
$$
then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ is not locally extremal when $H$ is orthogonal to an order $4$ sub-diagonal of $[0,1]^d$. This observation carries over to the next few sub-diagonal orders.
\begin{prop}\label{LEH.sec.4.prop.2}
Assume that $4\leq{n}\leq7$ and that $n<d$. There exists a number $\rho_n^\circ$ independent of $d$ such that $\rho_n^+<\rho_n^\circ<\rho_n^-$ and, if
$$
t\in\!\left]0,\frac{\sqrt{n}}{2}-\frac{\rho_n^-}{\sqrt{n}}\right[\!
$$
then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ is strictly locally maximal when $H$ is orthogonal to an order $n$ sub-diagonal of $[0,1]^d$. However, if
$$
t\in\!\left]\frac{\sqrt{n}}{2}-\frac{\rho_n^\circ}{\sqrt{n}},\frac{\sqrt{n}}{2}-\frac{\rho_n^+}{\sqrt{n}}\right[\!
$$
then the $(d-1)$-dimensional volume of $H\cap[0,1]^d$ is strictly locally minimal when $H$ is orthogonal to an order $n$ sub-diagonal of $[0,1]^d$. Finally, if
$$
t\in\!\left]\frac{\sqrt{n}}{2}-\frac{\rho_n^-}{\sqrt{n}},\frac{\sqrt{n}}{2}-\frac{\rho_n^\circ}{\sqrt{n}}\right[\!\cup\!\left]\frac{\sqrt{n}}{2}-\frac{\rho_n^+}{\sqrt{n}},\frac{\sqrt{n}}{2}\right[\!
$$
then that volume is neither locally minimal nor locally maximal (even weakly so) when $H$ is orthogonal to an order $n$ sub-diagonal of $[0,1]^d$.
\end{prop}
\begin{proof}
The proposition is already proven above when $n$ is equal to $4$, and it can therefore be assumed that $n$ is at least $5$. Following the argument used in the case when $n$ is equal to $4$, it suffices to show that
\begin{equation}\label{LEH.sec.4.prop.2.eq.1}
\sum_{i=0}^{\lfloor{z}\rfloor}(-1)^i{n\choose{i}}(z-i)^{n-3}
\end{equation}
is positive when $z$ belongs to $]0,\rho_n^\circ[$ and negative when $z$ is in $]\rho_n^\circ,n/2[$, where $\rho_n^\circ$ is a number satisfying $\rho_n^+<\rho_n^\circ<\rho_n^-$. First observe that (\ref{LEH.sec.4.prop.2.eq.1}) is equal to $z^{n-3}$ when $0<z<1$ and is therefore positive in that case.
If $n$ is equal to $5$, (\ref{LEH.sec.4.prop.2.eq.1}) is equal to $-4z^2+10z-5$ when $1\leq{z}\leq2$ and to $6z^2-30z+35$ when $2<z\leq{5/2}$. Hence, it is positive when $z$ belongs to the interval $]0,\rho_5^\circ[$ and negative when $z$ is in the interval $]\rho_5^\circ,5/2[$, where
$$
\rho_5^\circ=\frac{5+\sqrt{5}}{4}\mbox{,}
$$
which is about $1.80902$. By Remark \ref{LEH.sec.4.rem.1}, $\rho_5^+<\rho_5^\circ<\rho_5^-$, as desired. Now assume that $n$ is equal to $6$. In that case, (\ref{LEH.sec.4.prop.2.eq.1}) is equal to
$$
-5z^3+18z^2-18z+6
$$
when $1\leq{z}\leq{2}$ and to
$$
10z^3-72z^2+162z-114
$$
when $2<z\leq{3}$. The former polynomial expression is positive when $z$ belongs to $[1,2]$, and the latter admits a single root $\rho_6^\circ$ in $[2,3]$ of about $2.2407$. Moreover, the latter expression is positive when $z$ belongs to $[2,\rho_6^\circ[$ and negative when $z$ belongs to $]\rho_6^\circ,3]$. As in addition, $\rho_6^\circ$ is strictly between $\rho_6^+$ and $\rho_6^-$ (see Remark~\ref{LEH.sec.4.rem.1}), the proposition holds in that case.
Finally assume that $n$ is equal to $7$. If $1\leq{z}\leq{2}$, then (\ref{LEH.sec.4.prop.2.eq.1}) is equal to
$$
-6z^4+28z^3-42z^2+28z-7\mbox{.}
$$
If $2<z\leq3$, then (\ref{LEH.sec.4.prop.2.eq.1}) is equal to
$$
15z^4-140z^3+462z^2-644z+329\mbox{.}
$$
If, however $3<z\leq7/2$, then (\ref{LEH.sec.4.prop.2.eq.1}) is equal to
$$
-20z^4+280z^3-1428z^2+3136z-2506\mbox{.}
$$
Note that the roots and positivity of these polynomial expressions can still be obtained exactly. In particular, it can be easily shown that (\ref{LEH.sec.4.prop.2.eq.1}) is positive when $z$ belongs to $[1,\rho_7^\circ[$ and negative when $z$ is in $]\rho_7^\circ,7/2]$, where $\rho_7^\circ$, a root of the second polynomial expression, is about $2.69068$. Since, by Remark \ref{LEH.sec.4.rem.1}, $\rho_7^+$ is about $1.77221$ and $\rho_7^-$ about $2.9324$, this proves the desired property.
\end{proof}
\begin{rem}\label{LEH.sec.4.rem.2}
Recall that $\rho_4^\circ$ is equal to $4/3$ and $\rho_5^\circ$ to
$$
\frac{5+\sqrt{5}}{4}\mbox{,}
$$
which is about $1.80902$. In fact, $\rho_6^\circ$ and $\rho_7^\circ$ (that are about $2.2407$ and $2.69068$, respectively) can also be expressed exactly. However, while $\rho_6^\circ$ can be expressed in a reasonably simple form as
$$
\rho_6^\circ=\frac{12}{5}-\frac{3}{5}\cos\!\left(\frac{1}{3}\arctan\!\left(\frac{5\sqrt{11}}{7}\right)\!\right)\!+\frac{3\sqrt{3}}{5}\sin\!\left(\frac{1}{3}\arctan\!\left(\frac{5\sqrt{11}}{7}\right)\!\right)\!\!\mbox{,}
$$
the exact expression of $\rho_7^\circ$ is too long to be written here.
\end{rem}
Interestingly, Propositions \ref{LEH.sec.4.prop.1} and \ref{LEH.sec.4.prop.2} still hold for dimensions and sub-diagonal orders much larger than $7$: they have been verified up to dimension and sub-diagonal order $300$ using symbolic computation, and can be expected to remain true beyond that. Approximate values of the corresponding $\rho_d^-$, $\rho_d^\circ$, and $\rho_d^+$ are reported in the following table for $8\leq{d}\leq35$.
\begin{table}[b]
\begin{tabular}{c|ccc||c|ccc}
$d$ & $\rho_d^-$ & $\rho_d^\circ$ & $\rho_d^+$ & $d$ & $\rho_d^-$ & $\rho_d^\circ$ & $\rho_d^+$\\
\hline
$8$ & $3.38859$ & $3.14086$ & $2.13730$ & $22$ & $9.99303$ & $9.62096$ & $7.86693$\\
$9$ & $3.85428$ & $3.59394$ & $2.52065$ & $23$ & $10.4705$ & $10.0911$ & $8.29529$\\
$10$ & $4.31894$ & $4.04931$ & $2.90984$ & $24$ & $10.9485$ & $10.5619$ & $8.72522$\\
$11$ & $4.78630$ & $4.50661$ & $3.30377$ & $25$ & $11.4269$ & $11.0332$ & $9.15661$\\
$12$ & $5.25481$ & $4.96566$ & $3.70286$ & $26$ & $11.9057$ & $11.5051$ & $9.58938$\\
$13$ & $5.72466$ & $5.42625$ & $4.10615$ & $27$ & $12.3849$ & $11.9775$ & $10.0234$\\
$14$ & $6.19563$ & $5.88821$ & $4.51316$ & $28$ & $12.8646$ & $12.4504$ & $10.4587$\\
$15$ & $6.66760$ & $6.35143$ & $4.92352$ & $29$ & $13.3445$ & $12.9237$ & $10.8952$\\
$16$ & $7.14049$ & $6.81577$ & $5.33689$ & $30$ & $13.8249$ & $13.3975$ & $11.3328$\\
$17$ & $7.61421$ & $7.28115$ & $5.75298$ & $31$ & $14.3055$ & $13.8717$ & $11.7714$\\
$18$ & $8.08869$ & $7.74749$ & $6.17156$ & $32$ & $14.7864$ & $14.3464$ & $12.2110$\\
$19$ & $8.56386$ & $8.21469$ & $6.59241$ & $33$ & $15.2677$ & $14.8214$ & $12.6515$\\
$20$ & $9.03967$ & $8.68271$ & $7.01536$ & $34$ & $15.7492$ & $15.2967$ & $13.0930$\\
$21$ & $9.51608$ & $9.15149$ & $7.44025$ & $35$ & $16.2310$ & $15.7725$ & $13.5353$\\
\end{tabular}
\end{table}
\noindent{\bf Acknowledgement.} The author would like to thank Hermann K{\"o}nig for sharing his notes on the problem, which helped confirm the results obtained here, Artem Zvavitch for enlightening exchanges about hypercube sections, and Antoine Deza for helping with the preliminary versions of this article.
\section{Introduction: observations and theoretical challenges}
Compact radio sources provide a precision probe of the ionized interstellar medium
(IISM). The propagation speed of radio waves depends on the density
of free electrons, and therefore the spatial inhomogeneity of the IISM may result in refractive and diffractive
aberration and scattering. This causes scintillation (time variability) of compact radio sources
\citep{1968Natur.218..920S,1986ApJ...301L..53B,2006ApJS..165..439R}.
Observations of pulsar scintillations are particularly valuable for inferring IISM properties,
because many pulsars are bright and have been observed for other purposes over long time intervals.
However, despite considerable effort, the small-scale structure of the IISM
has remained enigmatic. At first glance, one expects the scattering to be caused by density
inhomogeneities produced by turbulent motions of the IISM. However, this picture
is contradicted by the last decade of pulsar scintillation data. There,
major observational progress on ISM scintillations has been
achieved through the \citet{2001ApJ...549L..97S}
detection of parabolic structures in the Fourier-transformed dynamical
spectra of strongly scintillating pulsars. These parabolic structures
imply that for these pulsars the radio-wave scattering occurs mostly
within one or several thin screens
\citep{2004MNRAS.354...43W,2006ApJ...637..346C,2008MNRAS.388.1214W}.
Moreover, the multiple ``inverted
parabolae'' \citep{2005ApJ...619L.171H} show that the scattering
inside the screen is strongly inhomogeneous and occurs in
localized clumps. The latter was recently confirmed by \cite{2010ApJ...708..232B}, who obtained a VLBI scattering
image of PSR B0834+06. They
found that the scattering image was not only clumpy, but that the
clumps lined up along a thin line.
All of these
observational facts present a major challenge for the theoretical
interpretation in which the scattering is caused by density inhomogeneities produced in a turbulent
cascade \citep{2001ApJ...562..279L}. In particular, (1) the origin of the screens is unexplained,
and (2) the strongly non-Gaussian scattering requires AU-size
regions that are over-pressurized by factors of $\sim 10^3$
relative to the ambient warm ISM \citep{1987Natur.328..324R}; no conventional
physical mechanism has been proposed for how such regions may be
formed.
In this paper, we advocate a scenario in which the scattering is produced by a refractive structure,
with turbulence playing a secondary role.
Radio-wave scattering by non-turbulent large-scale refractive structures has been previously
considered by \cite{1987Natur.328..324R}, mostly in the context of the so-called
extreme-scattering events observed by
\cite{1987Natur.326..675F}. Recently, \cite{2006ApJ...640L.159G}
proposed that the image of the SgrA* radio-source is
strongly scattered by several reconnection sheets that are closely aligned with the line of sight to
SgrA*.
In this paper, we develop further the idea of Goldreich \& Sridhar (2006) [see also \cite{2012MNRAS.421L.132P}]
and apply it to construct a quantitative
picture of pulsar scintillations. Namely, we consider a scenario where
the pulsar radio-wave scattering occurs due to several {\it weakly corrugated}
reconnection sheets that are
closely aligned with the line of sight to the pulsar; see figure \ref{fig:sheetgeom}. We show that this scenario provides
compelling explanations for previously unexplained features of the scintillations:
(1) the ``scattering screens'' are simply effective descriptions of such sheets; their location is marked
approximately
by the sheets' intersections with the line of sight, (2) ``the scattering clumps'' correspond to
those parts of sheet folds where the sheet is parallel
to
the line of sight; the strength of the scattering follows a strongly non-Gaussian distribution, even though
the corrugation itself is assumed to be a realization of a Gaussian distribution, and (3) the
strong anisotropy of the Brisken et al.~(2010) image is a consequence of the sheet's inclination,
with the clump locations aligned along the direction perpendicular to the sheet's line of nodes. The plan of the paper is
as follows: in the next section we briefly describe the origin of the
reconnection sheets, in section 3 we derive the model for the fold statistics,
in section 4 we derive the lensing by the corrugated sheet, and in section 5 we compute the
Fourier-transformed dynamical spectrum and demonstrate the parabolic structures. In section 6 we conclude.
\section{Astrophysical Picture}
\subsection{Two Regimes of Lensing: Diffractive vs Refractive}
Two regimes to generate pulsar scintillation have been considered. In
the diffractive regime, the scattering/lensing angle is determined by
the ratio of the wavelength
$\lambda$ of the radio waves to the characteristic spatial scale $D$ of the scattering structure: $\theta\sim\lambda/D$. The brightness of the scattered
image is determined by the amplitude of the wavefront modulation caused by the scattering structure. To explain
the observed angles in the range $1-100$ mas at wavelengths of $\sim 1$ m
requires structures in the ISM on scales of order $10^{6-8}$ m. This
imposes unexpected properties on the IISM, since this scale is
much smaller than the Coulomb mean free path of protons, and thus compressive perturbations on
this scale are overdamped and decay exponentially on the sound-crossing time scale. If one still assumes that somehow these perturbations are created and maintained,
then the angular image of a
pulsar, and therefore its dynamic spectrum, are superpositions of
thousands of
$\hbox{AU}/10^8\hbox{m}$ weak structures (for a 1-d scattering image),
and are expected to be roughly Gaussian. The length scale is given by
the path length difference, which in turn is inferred from the
inverse scintillation correlation frequency.
The asymmetric parabolically structured 2-D power spectrum of the dynamic spectrum,
and the VLBI image of the scattering disk consisting of several prominent clumps, are inconsistent with such
a picture. The number of $10^{6-8}$m eddies along the line of sight
can be large, possibly $10^{13}$ for a pulsar at a distance of $\sim 1$
kpc. The total scattered power comes from the cumulative projected
variation in refractive index, which grows as the square root of the
number of eddies. Each eddy only needs to change the refractive
index by a part in a million of the cumulative change, so a refractive
index variation of a part in $\sim 10^{12}$ could account for the
strong scintillation. But the superposition of scattering from such a large number
of eddies would surely look very Gaussian by the central limit
theorem. A requirement for the sum of $10^{13}$ contributions to appear intermittent in
projection would lead to unphysically overpressurized eddies.
A second mechanism is due to refractive lensing. In this scenario,
the bending angle is determined by Snell's law, i.e. the change in
refractive index and the angles of incidence. The refractive index of
a plasma is $n=\sqrt{1-\omega_p^2/\omega^2}$, with plasma frequency
$\omega_{\rm p}=\sqrt{n_e e^2/(\epsilon_{0}m_e)}$. Pulsar observations
are done at frequencies much higher than the plasma frequency, for which
we expand $1-n \approx \frac{\omega_p^2}{2 \omega^2} = 1.8\times 10^{-8}\,
n_e$ at wavelengths of a meter. The observed scattered images at 20
mas require deflection
angles of at least 40 mas, corresponding to $n_e \sim 10/{\rm cm}^3$ neglecting
geometric alignment factors. However, the mean density of the IISM is
determined to be
of the order $10^{-2}\,{\rm cm}^{-3}$, as measured from the pulsars' dispersion measures \citep{2004hpa..book.....L}.
Therefore, the refractive picture is also challenging to reconcile with the data, since the observed
scattering angles would na\"ively require fractional changes in free electron
density of $\sim 10^3$
which are difficult to understand or confine. This constraint, however, is alleviated
when one considers refractive sheets that are closely aligned with the line of sight
\citep{2006ApJ...640L.159G}.
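To make the refractive scalings above concrete, here is a minimal numerical sketch (our own illustrative function, SI constants) of the cold-plasma expansion $|1-n|\simeq\omega_p^2/2\omega^2$; it demonstrates the linear scaling with $n_e$ and the $\lambda^2$ scaling that makes meter-wave observations especially sensitive to density changes:

```python
import math

# SI constants
e = 1.602176634e-19       # electron charge [C]
m_e = 9.1093837015e-31    # electron mass [kg]
eps0 = 8.8541878128e-12   # vacuum permittivity [F/m]
c = 2.99792458e8          # speed of light [m/s]

def refractive_offset(n_e_cm3, wavelength_m):
    """|1 - n| ~ omega_p^2 / (2 omega^2) for a cold plasma,
    with the electron density given in cm^-3."""
    n_e = n_e_cm3 * 1e6                        # convert to m^-3
    omega = 2.0 * math.pi * c / wavelength_m   # angular wave frequency
    omega_p2 = n_e * e**2 / (eps0 * m_e)       # plasma frequency squared
    return omega_p2 / (2.0 * omega**2)

# The offset grows linearly with density and as the wavelength squared:
print(refractive_offset(2.0, 1.0) / refractive_offset(1.0, 1.0))  # 2.0
print(refractive_offset(1.0, 2.0) / refractive_offset(1.0, 1.0))  # 4.0
```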
Historically, refractive lensing was used to interpret long time
variability, and diffraction for the minute time scale
effects. Recently, it was understood that the refractive images result
in an interference fringe pattern \citep{2004MNRAS.354...43W} which has
similar time and frequency scales for flux modulation as diffractive
effects. The frequency and time scaling was historically interpreted
as related to an underlying stochastic diffractive process driven by
turbulence.
\cite{2006ApJ...640L.159G} showed that refractive lensing by aligned
sheets results in
scintillation similar to the diffractive picture.
\subsection{Physical origin of the reconnection sheets}
The interstellar medium is stirred on scales of parsecs by various
energetic processes, including supernovae, ionization fronts, spiral
density waves, and other phenomena. These processes are generally
short-lived, and we conjecture that after the stirring, the warm medium
relaxes to a near-equilibrium configuration on small (several AU)
scales. Current numerical simulations of the supernova-driven
turbulence in the warm ISM of the Galaxy (e.g., Hill et al.~2011) do
not have the resolution to tell how realistic this assumption is. In
the presence of helicity, the equilibrium magnetic fields are
configured as interlaced twisted tori, which are long lived
\citep{2004Natur.431..819B}. It has been shown by \cite{2009arXiv0909.1815G}
that a ``generic magnetic equilibrium of an ideally conducting fluid
contains a volume-filling set of singular current layers.'' In this
picture, the magnetic fields are locally almost parallel, with
discontinuous interface regions, a bit like magnetic domains in a
ferromagnet. Singular current sheets have also been seen in the
simulations of \citet{2004PhRvL..92h4504S}.
At the boundary between magnetic field configurations, current
sheets maintain the discontinuities. Depending on the nature of
reconnection, these current sheets may be self-reinforcing due to inflow
of fresh fields, maintaining a thickness potentially as thin as an
electron gyroradius. At constant temperature,
the pressure equilibrium in the direction
perpendicular to the sheet implies that the electron density inside the sheet is
enhanced by a factor $R$ given by
\begin{equation}
R-1\sim {B_{\rm out}^2-B_{\rm in}^2\over 4\pi P_{\rm out}},
\label{ratio}
\end{equation}
where $B_{\rm in}$ and $B_{\rm out}$ are the magnetic fields inside and outside the sheet,
and $P_{\rm out}$ is the non-magnetic pressure outside the sheet. The right-hand side of
the above equation is thought to be of order $1$ for the IISM, but may be significantly
larger if the IISM is magnetic-pressure dominated.
Depending on the ratio of heating time due to reconnection to cooling
time, the thin sheet may have an enhanced temperature, in which case
it can be underdense, with $R \rightarrow 0$ in the limit of a
strong entropy injection.
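For intuition, equation (\ref{ratio}) can be evaluated directly. The sketch below (Gaussian units; the function and the numbers are our own illustration, not values from the paper) shows that a sheet with a vanishing interior field in an equipartition medium is overdense by roughly a factor of two:

```python
import math

def density_enhancement(B_out, B_in, P_out):
    """Isothermal pressure balance across the sheet (Gaussian units):
    R - 1 ~ (B_out^2 - B_in^2) / (4 pi P_out)."""
    return 1.0 + (B_out**2 - B_in**2) / (4.0 * math.pi * P_out)

# Equipartition outside the sheet, B_out^2 = 4 pi P_out, and B_in = 0:
B_out = math.sqrt(4.0 * math.pi)
print(density_enhancement(B_out, 0.0, 1.0))  # ~2: the sheet is twice as dense
```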
\begin{figure*}
\centerline{\epsfig{file=sheetgeom.eps,width=7.5in}}
\vspace{-6in}
\caption{Lensing geometry. The Earth is at left, the pulsar at right.
A section of the scattering sheet is in the middle. The dashed line
shows the unperturbed light path. The dotted line shows the path of
an image
lensed by a fold caustic of the projected sheet.}
\label{fig:sheetgeom}
\end{figure*}
The lensing geometry is shown in Figure \ref{fig:sheetgeom}. A crucial ingredient in our model
is that the reconnection sheet is assumed to be weakly corrugated. Let us assume that the corrugation
pattern is fixed, and vary the inclination angle. In this case, as can be seen from
figure \ref{fig:sheetgeom},
there is a characteristic inclination angle $\alpha$ at which the perturbations in the
sheet can generate caustics in the projected surface density. This
angle $\alpha$ is the ratio of the wave peak displacement to its
wavelength. We envision ranges of $\alpha \sim 10^{-3} - 10^{-2}$.
When the caustics are present, they
become effective refractive scattering centers for the radio waves. The resulting lens features
multiple images that closely line up along the direction perpendicular to the line of nodes of the
scattering sheet.
\subsection{Surface Dynamics}
We assume that the current sheet is physically thin, $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} $ AU. Theoretically,
the thickness of current sheets is not understood, so we choose this
scale to be thin enough to explain the smallest-scale observed
structures. On each side the
magnetic field points in a different direction. The change in Alfv\'enic
properties gives rise to surface
waves \citep{1991SoPh..133..263J, 2009GApFD.103...89J}, whose amplitude
decays exponentially with the distance to the current sheet and which are
mathematically analogous to deep water ocean waves. The restoring force is due to the
difference in magnetic field component projected along the wave
vector. Like ocean waves, these waves penetrate about a wavelength
into each side. We will be considering wavelengths of hundreds to thousands of
AU, whose projected wavelengths correspond to the observed lensing
structures of $\sim$ several AU, so the thickness of the current sheet itself is
negligible as far as the dynamics of
the waves are concerned. Seen in projection along the aligned sheet,
the projected wavelengths will be $\sim $ AU, reduced by the alignment
angle $\alpha$.
The displacements are
transverse to the wave vector, and perpendicular to the sheet. While
Alfv\'enic in nature, the surface modes possess only one polarization,
unlike bulk waves. We speculate that this allows such modes to be
long-lived, since the single-polarization nature protects them from
the normal MHD turbulent cascade \citep{1997ApJ...485..680G}.
These waves resemble a flag blowing in the wind.
Disturbances travelling along the sheet are decoupled from bulk waves.
Being confined to a sheet, the amplitude away from a source drops as
$\propto 1/r$ instead of the normal inverse square law. The 2-D
analogy to Olbers' paradox leads to the flux being dominated by the
cumulative effect of far-away sources, like waves on an ocean beach.
An amplitude
of order $\alpha \lambda_{\rm wave}$, or about $10^{-3}$--$10^{-2}$ of the wavelength
of the surface wave, is
sufficient to cause the sheet to appear folded in projection; here $\lambda_{\rm wave}$ is the wavelength of the surface wave.
\section{Fold Statistics}
We expect a minimum wavelength of these surface waves, which is
substantially larger than the thickness of the sheet. Short
wavelength perturbations are not bound to the surface, and can
dissipate into the bulk. The exact cutoff depends on unknown factors,
including the distance to the source, and vertical structure of the
current sheet. Non-linear effects also cause short wavelength waves
to dissipate, just like sound waves in the air. Primarily waves with
amplitude-to-wavelength ratio larger than $\alpha$ contribute to scattering. The strength
of damping depends on the distance to the source in units of
wavelength, with shorter wavelengths being more damped.
We model the waves as a displacement function $\zeta(x)$
which is a Gaussian random field with a correlation function that is a
Gaussian,
\begin{equation}
\xi(r)=\langle \zeta(x)
\zeta(x+r)\rangle=A^2\exp\left(-{r^2\over 2\sigma^2}\right),
\end{equation}
where $A$ is the mean amplitude
of the displacement and $\sigma$ is the surface-wave
coherence scale. We denote the projected coherence scale $\sigma_y
\equiv \alpha \sigma$. Figure \ref{fig:sheet} shows a realization of a
sheet with random fluctuations. We used an inclination slope
$\alpha=1/8$, and a correlation length along the sheet of 350 units,
which is $\sigma_y\sim 44$ grid units in projection. The displacement
amplitude is chosen as $A=40$. The dimensionless fluctuation
$\delta\equiv A/\sigma$ is about unity, meaning that a $1\sigma$
fluctuation can result in a caustic fold, which we will discuss further
below.
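A realization such as the one in figure \ref{fig:sheet} can be generated by filtering white noise, since a Gaussian correlation function corresponds to a Gaussian power spectrum in $k$-space. The sketch below (our own illustration, using the grid-unit parameters quoted above) also counts the fold points, where the displacement gradient crosses $-\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters quoted in the text (grid units): inclination slope alpha,
# displacement amplitude A, and correlation length sigma along the sheet.
alpha, A, sigma = 1.0 / 8.0, 40.0, 350.0
n = 8192
x = np.arange(n, dtype=float)

# Filter white noise in k-space: the power spectrum of the correlation
# xi(r) = A^2 exp(-r^2 / 2 sigma^2) is itself a Gaussian in k.
k = np.fft.rfftfreq(n)
power = np.exp(-0.5 * (2.0 * np.pi * k * sigma) ** 2)
zeta = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * np.sqrt(power))
zeta *= A / zeta.std()          # normalize the rms displacement to A

# Folds in projection occur where the displacement gradient equals -alpha;
# for delta = A / sigma ~ 1 this happens about once per correlation length.
grad = np.gradient(zeta, x)
n_folds = np.count_nonzero(np.diff(np.sign(grad + alpha)))
print(n_folds)
```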
\begin{figure}
\vspace{-0.8in}
\centerline{\epsfig{file=sheetz.eps,width=3in}}
\caption{Sheet with transverse perturbations. The upper panel shows
a zoomed version of a short section.}
\label{fig:sheet}
\end{figure}
The current sheet corresponds to a change in magnetic and thermal pressure and thus has a refractive index different
from the ambient ISM. For simplicity, we assume here that the sheet has constant thickness, and
consider its optical depth as is relevant for refractive lensing.
In projection along the line of sight, the column density of the sheet
results in a highly non-Gaussian
distribution. Figure \ref{fig:rho} shows the column density distribution in a
simulation. Folds occur when the gradient of the displacement is
equal to $-\alpha$, which as described above is quantified by the
fluctuation amplitude $\delta$. The value chosen here makes folds
common, occurring roughly once per correlation length, and yet makes
multiple folds overlapping in projection rare. Each fold results in
two caustics, with characteristic separation $\sigma_y$. This is well
understood from the theory of extrema of surface
waves \citep{1957RSPTA.249..321L}.
As before, $\alpha$ is the angle between the screen and the line of sight.
Then the optical depth is $\rho\propto1/\alpha$ for
$\alpha\ll 1$, and $P_{\rm screen}(\rho)\propto 1/\rho^2$, where $P_{\rm screen}(\rho)$
is the probability for a piece of the screen to have optical depth $\rho$.
However, we are interested in the probability density with respect to the impact parameter relative
to the line of sight, and not with respect to the location on the screen. For nearly aligned screens, this
is not the same thing. In particular, part of the screen with low $\alpha$ occupies less of the
impact-parameter space than the part of the screen with the same area but high $\alpha$. Thus,
\begin{equation}
P_{\rm impact}(\rho)\propto 1/\rho^3,
\label{eqn:prho}
\end{equation}
which is a strongly non-Gaussian distribution.
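The $1/\rho^3$ law can be checked with a small Monte Carlo experiment (our own sketch; for illustration the local inclination $\alpha$ is drawn uniformly over small angles, and the impact-parameter weighting is the factor $\alpha\propto 1/\rho$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Optical depth through a screen element inclined at angle alpha: rho ~ 1/alpha.
alpha = rng.uniform(1e-3, 1.0, size=2_000_000)
rho = 1.0 / alpha

# Per unit screen area, P_screen(rho) ~ 1/rho^2; weighting each element by
# its projected size (proportional to alpha ~ 1/rho) gives P ~ 1/rho^3.
hist, edges = np.histogram(rho, bins=np.logspace(0.5, 2.5, 21), weights=alpha)
centers = np.sqrt(edges[:-1] * edges[1:])
density = hist / np.diff(edges)
slope = np.polyfit(np.log(centers), np.log(density), 1)[0]
print(slope)   # close to -3
```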
\begin{figure}
\centerline{\epsfig{file=rho.eps,width=3in}}
\caption{Projected density. The upper panel is a zoom of the central
portion of the lower panel. Each time the sheet folds in projection,
we see a double peaked caustic structure in density. The
characteristic separation
between the two peaks is a projection correlation length $\sigma_y$,
in this case 44 grid units.}
\label{fig:rho}
\end{figure}
Equation (\ref{eqn:prho}) describes the one-point PDF in figure
\ref{fig:rho}. The deflection angle is determined by the gradient of
the density, $\rho'$.
As in \cite{2012MNRAS.421L.132P}, we use the notation of gravitational
lensing. The phase delay through the lens is described by the lensing
potential
\begin{equation}
\psi\equiv \frac{n_e dz e^2}{2\epsilon_0 m_e L\omega^2}
\end{equation}
where the effective density $n_e \equiv \rho/dz$. The lensing depends
only on the product of density and thickness.
The convergence $\kappa \equiv \partial_\theta^2\psi/2$ is given by the
second angular derivative
of the potential, and thereby the projected density $\rho$. The
magnification gives the
brightness of the image. In 1-D it is $\mu=1/(1-2\kappa)$, derived
from the determinant of the amplification matrix. A negative
amplification corresponds to a flipped image of odd parity.
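As a minimal illustration of the 1-D magnification formula (our own sketch; the function name is illustrative):

```python
def magnification_1d(kappa):
    """1-D refractive magnification mu = 1 / (1 - 2*kappa); negative values
    correspond to flipped images of odd parity."""
    return 1.0 / (1.0 - 2.0 * kappa)

print(magnification_1d(0.0))    # 1.0: no lensing
print(magnification_1d(0.25))   # 2.0: a magnified image
print(magnification_1d(1.0))    # -1.0: a flipped image of odd parity
```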
The regions near the locations where the screen is parallel to the
line of sight, which we call the caustics, give rise to the localized
clumps in the pulsar's scattering image, which produce the inverted
parabolic arcs in pulsar secondary spectra.
\section{Lensing}
The lensing of this density sheet can be computed in analogy to
\cite{2012MNRAS.421L.132P}.
\begin{figure}
\centerline{\epsfig{file=dt.eps,width=3in}}
\caption{Deflection angle mapping. The horizontal axis is angle on the
sky, and the vertical axis is the intersection of this light ray on
the source plane. Whenever multiple different directions on the sky
intersect on the same position in the source plane, multiple images
are formed, which form a coherent interference pattern.}
\label{fig:dt}
\end{figure}
Given the projected column density distribution in Figure \ref{fig:rho}, and assuming that
the sheet has a fixed optical depth along its normal, we
can compute the mapping of apparent angle on the sky to position in the
source (pulsar) plane. The caustics in the projected density
distribution lead to large angle deflections, and multiple images,
whenever the sheets are aligned closely enough for the caustics to form.
This explains why only a small fraction of the current
sheets contribute to scintillation.
The system depends on the dimensionless parameter $\delta$, the ratio $r$
of sheet thickness $\tau$ to the projected correlation length $\sigma_y$,
and the ratio of index of refraction change to the angular size of the
projection correlation length.
The model predicts the number density of images and their fluxes as a
function of their angular separation from the line of sight (and
therefore as the function of time lag). It has a small number of
dimensionless parameters: $\delta,\ r,\ C$ (defined below).
There is a
dimensional scaling of time units, which is a function of pulsar
transverse velocity, projected screen size, and observing frequency.
This is generally parameterized as the DISS time scale, $t_{\rm DISS}
\sim (\lambda/D) (L/v)$ where $D$ is the size of the lensing region,
$L$ is the distance to the pulsar, $\lambda$ is the observing
wavelength, and $v$ is the pulsar transverse velocity. Note that in
this model, the lensing is refractive, sharing the time scales from
diffractive models, but none of the physics.
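For orientation, the sketch below evaluates this time scale with illustrative numbers of our own choosing (a $\sim$AU lensing region, a pulsar at 0.64 kpc moving at 100 km/s, observed at 1 m wavelength); the result is the familiar tens-of-minutes DISS time scale:

```python
def t_diss(wavelength_m, lens_size_m, distance_m, v_ms):
    """Characteristic scintillation time scale, t_DISS ~ (lambda / D) * (L / v)."""
    return (wavelength_m / lens_size_m) * (distance_m / v_ms)

AU = 1.496e11   # astronomical unit [m]
kpc = 3.086e19  # kiloparsec [m]
minutes = t_diss(1.0, AU, 0.64 * kpc, 1.0e5) / 60.0
print(minutes)  # roughly 20 minutes
```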
We show a histogram of image magnifications in Figure \ref{fig:mhist},
which can be compared to holographic flux measurements
\citep{2008MNRAS.388.1214W}. The lensed images correspond to the
inverted arclets seen in secondary spectra. The model predicts the
number of images as a function of separation, shown in figure
\ref{fig:pos}.
\begin{figure}
\centerline{\epsfig{file=mag.eps,width=3in}}
\caption{PDF of image magnifications. The peak near 1 corresponds to images at
  the unscattered positions, while the population on the left consists of
  lensed images. The peak occurs at roughly $1/\kappa$.
}
\label{fig:mhist}
\end{figure}
The separation of images is related to the separation of caustics.
For the ``common'' limit of our simulation, $\delta \sim 1$, this is
given by the projection correlation length $\sigma_y$. For $\delta \ll
1$, the angular density of caustics becomes very small, suppressed by an error
function.
\begin{figure}
\centerline{\epsfig{file=pos.eps,width=3in}}
\caption{PDF of image positions per logarithmic distance interval on
the sky $y$. The projected correlation length is $\sigma_y \sim
44$. The characteristic convergence $\kappa \sim 20$, leading to a
cutoff near $\sigma_y \kappa \sim 1000$.
}
\label{fig:pos}
\end{figure}
The number of images is determined by the number of light folds along
the line of sight, i.e. how often the deflection angle is larger than
the separation to the line of sight in Figure \ref{fig:dt}.
\begin{figure}
\centerline{\epsfig{file=posmag.eps,width=3in}}
\caption{Average spatial distribution of flux per logarithmic distance
interval. In a Gaussian model, the flux per image drops as
$1/\theta$.
}
\label{fig:posmag}
\end{figure}
The maximum deflection angle of an image is the change of refractive
index in the sheet, amplified by the alignment angle on a fold
caustic. For a sheet much thinner than the projected fluctuation
amplitude, the maximum projected density enhancement $C\equiv
\rho/\bar{\rho}$ relative to
the projected mean sheet density $\bar{\rho}\sim n_e dz/\alpha$ at a caustic
is the square root of the ratio of the radius of curvature to the thickness of the
sheet $\tau$: $C=\sigma_y/\sqrt{A\tau}$
in the limit $\sigma^2/A \gg \tau$.
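In code (our own sketch; the thickness $\tau$ below is an illustrative choice, not a measured value), the simulation parameters quoted earlier give an enhancement comparable to the characteristic convergence $\kappa \sim 20$:

```python
import math

def caustic_enhancement(sigma_y, A, tau):
    """Peak projected density enhancement at a fold caustic,
    C = sigma_y / sqrt(A * tau), valid in the limit sigma^2 / A >> tau."""
    return sigma_y / math.sqrt(A * tau)

# sigma_y = 44 and A = 40 in grid units, with an assumed thickness tau = 0.1:
print(caustic_enhancement(44.0, 40.0, 0.1))  # 22.0
```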
\section{Simulated Dynamic Spectra}
With the density field, we can solve the lens equations to simulate
dynamic spectra. By adding the voltages of the images with their
appropriate amplitudes and phases, we simulate the dynamic spectrum,
shown in figure \ref{fig:ds}.
\begin{figure}
\centerline{\includegraphics[width=3.5in]{rspects.eps}}
\caption{Dynamic pulsar spectrum. The horizontal axis is time, the vertical
axis is frequency. We reproduce the characteristic criss-cross
pattern observed in real scintillation spectra.}
\label{fig:ds}
\end{figure}
A 2-D Fourier transform maps this dynamic spectrum into a secondary
spectrum, shown in figure \ref{fig:ss}.
\begin{figure}
\centerline{\includegraphics[width=3.5in]{sspectr.eps}}
\caption{Secondary pulsar spectrum. The inverted parabolic arcs arise
naturally in this model.}
\label{fig:ss}
\end{figure}
We find that the interference of these discrete, co-linear images
forms the inverted parabolic arcs, qualitatively similar to those that
are observed in \cite{2001ApJ...549L..97S} and
\cite{2005ApJ...619L.171H}.
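The essential mechanism, interference of a few discrete co-linear images, can be sketched in a few lines (illustrative units and invented image parameters throughout; this is a toy version, not the full simulation used for the figures):

```python
import numpy as np

# Invented image offsets and voltage amplitudes, for illustration only.
theta = np.array([0.0, 1.0, 1.5, 2.2])    # angular offsets of images
amp = np.array([1.0, 0.3, 0.2, 0.15])     # voltage amplitudes of images

nu = np.linspace(1.0, 1.05, 256)[:, None, None]  # frequency channels
t = np.linspace(0.0, 100.0, 256)[None, :, None]  # time samples
th = theta[None, None, :]

# Each image contributes a voltage with a geometric phase ~ nu * theta^2;
# the pulsar's transverse motion slides the effective offsets in time.
phase = 2.0 * np.pi * nu * 0.5 * (th - 0.01 * t) ** 2
E = (amp * np.exp(1j * phase)).sum(axis=-1)            # summed voltage
dyn = np.abs(E) ** 2                                   # dynamic spectrum
sec = np.abs(np.fft.fftshift(np.fft.fft2(dyn))) ** 2   # secondary spectrum
print(dyn.shape, sec.shape)  # (256, 256) (256, 256)
```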
\section{Discussion}
We can estimate the length scales involved in making the current sheet that
would produce the observed scintillation pattern. This theory requires as
input a current sheet thickness, inclination angle, curvature,
amplitude of waves, and dissipation scale.
The thickness of the sheets can be estimated by considering the magnification of
images. As shown in \cite{2012MNRAS.421L.132P}, the flux is roughly
the thickness divided by the impact parameter. This follows from flux
conservation of lensing: the net flux is conserved, and flux changes
by order unity at impact parameters of order the physical size of the
lens, so the typical off-axis flux is roughly the ratio of the size of the
lens to the furthest distance at which an image forms. The
caustic itself only contains a small fraction of the integrated flux.
The projected wavelengths
are observed to be of order $\sim 10$ AU;
this suggests a typical thickness of $h\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 0.1$ AU to explain the
percent-level scattering intensities observed in \cite{2010ApJ...708..232B}.
The largest observed deflection angles at meter wavelengths are $\gamma
\sim 0.01''$, which require an electron density change $\delta n_e\sim
100\alpha/C$ cm$^{-3}$. For a typical interstellar plasma density of
$n_e \sim 0.03$ cm$^{-3}$ determined from pulsar dispersion, we need an
alignment of a fraction of a degree, with fluctuation amplitude $C\sim
30$ and a wavelength of $\lambda \sim 1000$ AU. This combination
is not unique.
As discussed in \cite{2012MNRAS.421L.132P}, the phenomenology of the
Extreme Scattering Events
prefers underdense lenses. In an underdense sheet, the maximal change
of density is the density itself.
With the assumption
$\delta n_e\sim n_e$, we obtain $\alpha \sim
10^{-2}$. The probability of seeing a sheet at such an angle is $\sim
\alpha^2$, requiring the existence of $\sim 1/\alpha^2$ sheets along
the line of sight: if we imagine each ``sheet'' to be a thin disk, the
probability of seeing a disk at angle $\alpha$ gets one contribution
from the intrinsic alignment distribution, and one more from the
reduced geometric cross section of an aligned sheet.
For pulsar B0834+06, the distance is $\sim 0.64$ kpc (as determined
from the dispersion measure, and consistent with VLBI and ISM
geometries \citep{2010ApJ...708..232B}, as well as with a direct parallax: Deller and Brisken,
unpublished), giving a typical sheet separation of
$s \sim 0.1$ pc.
These estimates are qualitative. One expects current sheets to come
in a range of sizes, curvatures and perturbation amplitudes. The
thickness might also vary.
One of the attractive features of our mechanism is that it explains
very naturally the 1-d image of Brisken et al.~(2010). The reduced
dimensionality of the scattering image comes from the fact that the
deflection created by the screen is mostly in the direction
perpendicular to the screen's line of nodes. Therefore, the scattering
clumps will also form a line that is perpendicular to the line of
nodes. This is demonstrated in Fig. \ref{fig:2d}, where we show a simulated 2-d
scattering image of a pulsar. The alignment of the scattering clumps
is apparent.
\begin{figure}
\centerline{\includegraphics[width=3.5in]{image86r.eps}}
\caption{Simulated scattering image of the pulsar. The pulsar is
modeled as a unit disk, with a much exaggerated scale. Flux
  conservation requires that the sum of the image areas equals that of the
  original disk.
This compares favourably with the VLBI images of \protect\cite{2010ApJ...708..232B}.
}
\label{fig:2d}
\end{figure}
\section{Future Potential}
In our picture, pulsar scintillation is dominated by a small number of
magnetic discontinuities highly aligned to the line of sight. Surface
waves will move very slowly in this projected geometry, allowing for a
precise determination of the geometric properties. This allows the
use of these sheets as lenses to study both pulsars and the
ISM\citep{2013arXiv1301.7505P}.
\subsection{ISM dynamics}
Free parameters in our model include the thickness of the sheet and the
inclination angle. These can be inferred from broad-band measurements
of pulsar scintillation, as follows. Firstly, the apparent position of images is expected to shift by
a distance of order the sheet width, as one decreases the observing frequency from the
highest critical frequency at which the image forms, to a factor of
two below. Secondly, the co-linearity of the images is related to
the inclination angle of the sheet: the more aligned the sheet is with the line of
sight, the greater the aspect ratio of the scattering image.
\subsection{Pulsar Emission imaging}
A straightforward application is the study of the reflex motion of the
emission region of the pulsar across the pulse phase. One expects the
apparent emission region to move by a distance of order the effective
emission height multiplied by the ratio of the pulse width to the
rotation period. Using VLBI mapping of the scattering geometry, one
can precisely predict the change of scintillation pattern as a
function of pulse phase. The effective astrometric precision can be
sub-nanoarcsecond.
\subsection{Distance Measurement}
It is tempting to use these lenses to obtain precision parallax
distances to pulsars.
The challenge is to keep the lens stable over a year, during which
typical pulsars move by much more than an astronomical unit, making
the differential measurements difficult. In the sheet lensing
scenario, one can imagine obtaining widely separated scattering images:
the lensing angle scales as $\propto \lambda^2$, so at low
frequencies, for example with LOFAR LBA, hundreds of AU are probed.
Over the course of a year, persistent images should remain visible, and the
interference pattern could be studied as a function of annual
modulation. This would result in direct parallax distances with nanoarcsecond
precision, enough to determine pulsar distances for coherent
gravitational wave detection \citep{2012PhRvD..86l4028B}.
\section{Conclusions}
We have presented a quantitative theory of the inverted parabolic arcs
in pulsar scintillation. It extends recent ideas of
\citet{2006ApJ...640L.159G} and \citet{2012MNRAS.421L.132P} about thin
current sheets as the scattering objects in the ISM, which naturally
explain the large angle scattering observed in pulsars and some
extragalactic sources.
This picture could explain all scintillation phenomena through refractive
lensing by structures larger than 0.1 AU. The apparent
diffractive structure results from the interference between refractive
images, and no diffractive scattering is needed.
In this scenario, VLBI monitoring of pulsars on time scales of weeks,
at multiple low frequencies, can allow forecasts of the scattering
behaviour. This in turn could improve gravitational wave timing residuals.
The same scenario also enables the coherent use of the scattered
images as a gigantic interstellar interferometer to map the motions of
the pulsar emission regions.
\section{Acknowledgements}
U-LP thanks NSERC and CAASTRO for support. YL is supported
by an Australian Research Council Future Fellowship.
\newcommand{\araa}{ARA\&A}
\newcommand{\afz}{Afz}
\newcommand{\aj}{AJ}
\newcommand{\azh}{AZh}
\newcommand{\aaa}{A\&A}
\newcommand{\aas}{A\&AS}
\newcommand{\aar}{A\&AR}
\newcommand{\apj}{ApJ}
\newcommand{\apjs}{ApJS}
\newcommand{\apjl}{ApJ}
\newcommand{\apss}{Ap\&SS}
\newcommand{\baas}{BAAS}
\newcommand{\jaa}{JA\&A}
\newcommand{\mnras}{MNRAS}
\newcommand{\nat}{Nat}
\newcommand{\pasj}{PASJ}
\newcommand{\pasp}{PASP}
\newcommand{\paspc}{PASPC}
\newcommand{\qjras}{QJRAS}
\newcommand{\sci}{Sci}
\newcommand{\solphys}{Solar Physics}
\newcommand{\sova}{SvA}
\newcommand{\aap}{A\&A}
\def\prd{{\it Phys.~Rev.~D\,}}
\section{Introduction}
Adversarial examples \cite{szegedy2013intriguing} cause serious safety concerns in deploying deep learning models. In order to defend against adversarial attacks, many approaches have been proposed \cite{guo2017countering, liao2018defense, at, trades}.
Among them, adversarial training and its variants \cite{at, mart, trades} have been recognized as the most effective defense mechanism.
Adversarial training (AT) is generally formulated as a minimax problem
\begin{equation}
\min_{\bm{\theta}} \max_{{\mathbf{x}}_i^* \in {\mathcal{B}}_p({\mathbf{x}}_i, \varepsilon)} \frac{1}{n} \sum_{i=1}^n \ell({\mathbf{x}}_i^*, y_i; {\bm{\theta}})\;
\label{eqn:at},
\end{equation}
where ${\mathcal{D}}=({\mathbf{x}}_i,y_i)_{i=1}^n$ is the training set and $\ell({\mathbf{x}},y; {\bm{\theta}})$ is the loss function parametrized by ${\bm{\theta}}$. ${\mathcal{B}}_p({\mathbf{x}}_i, \varepsilon)$ represents a $L_p$ norm ball centered at ${\mathbf{x}}_i$ with radius $\varepsilon$.
AT in \Cref{eqn:at} boosts the adversarial robustness by adopting adversarial examples generated in the inner maximization.
Despite the effectiveness of AT, solving the inner maximization requires multiple steps of projected gradient descent (PGD) \cite{at, overfitting}. Therefore, AT is much slower than vanilla training (\textit{e.g.}, 10 times longer training time for AT in \cite{overfitting}),
making it challenging to scale AT to large datasets such as ImageNet.
Currently, the typical solution to accelerate AT is to substitute multi-step attacks (e.g., PGD) with single-step attacks (e.g., FGSM). Several works follow this direction, including FGSM-RS \cite{fastat} and ATTA \cite{atta}, and achieve the best robust accuracy for fast AT. However, recent works \cite{fgsmga, kim2020understanding} demonstrate that single-step methods suffer from catastrophic overfitting, where the model's robustness against the PGD attack suddenly drops to nearly 0\% while the robust accuracy against the FGSM attack rapidly increases \cite{fastat}, completely destroying the robustness of the network. Note that catastrophic overfitting is different from the robust overfitting reported in \cite{overfitting}: the latter refers to the generalization gap between training and test data, whereas catastrophic overfitting denotes overfitting to a specific type of attack, independent of the training and test sets.
Some works have been proposed to understand and alleviate the catastrophic overfitting \cite{fgsmga, kim2020understanding}.
However, their solutions significantly increase the training time.
For example, the gradient align regularizer $\mathbb{E}_{{\bm{\eta}} \sim \mathcal{U}\left([-\varepsilon, \varepsilon]^{d}\right)}\left[1-\cos \left(\nabla_{{\mathbf{x}}} \ell({\mathbf{x}}, y ; {\bm{\theta}}), \nabla_{{\mathbf{x}}} \ell({\mathbf{x}}+{\bm{\eta}}, y ; {\bm{\theta}})\right)\right]$ in \cite{fgsmga} requires calculating the second-order gradient and is still 5 times slower than vanilla training. Moreover, \cite{kim2020understanding} needs to check several points within the $\ell_p$ norm ball, which requires several additional forward propagations and is still about 4 times slower than vanilla training. Therefore, existing methods remain unsatisfactory in terms of both training efficiency and robust performance.
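As an illustration of why this regularizer is expensive in spirit (it compares input gradients at perturbed points), the expectation can be estimated by Monte-Carlo sampling. The sketch below is ours, not the authors' code; `grad_fn` stands for any routine returning the input gradient of the loss:

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity between two gradient vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def grad_align_penalty(grad_fn, x, eps, rng, n_samples=8):
    """Monte-Carlo estimate of the GradAlign-style regularizer:
    E_eta[1 - cos(grad l(x), grad l(x + eta))], eta ~ U([-eps, eps]^d)."""
    g0 = grad_fn(x)
    vals = []
    for _ in range(n_samples):
        eta = rng.uniform(-eps, eps, size=x.shape)
        vals.append(1.0 - cos_sim(g0, grad_fn(x + eta)))
    return float(np.mean(vals))

rng = np.random.default_rng(1)
x = np.array([0.5, -0.3, 0.2])
linear_grad = lambda z: np.array([1.0, 2.0, -1.0])  # gradient of a linear toy loss
penalty = grad_align_penalty(linear_grad, x, eps=0.1, rng=rng)
assert abs(penalty) < 1e-9  # a linear loss has perfectly aligned gradients
```

Each sample requires a fresh gradient of the loss, which is why such regularizers multiply the training cost.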
In this work, we analyze catastrophic overfitting from the perspective of training instances. By taking the gradient norm as an indicator, we find that different training instances have different probabilities of causing catastrophic overfitting. Instances with large gradient norm are more sensitive to the adversarial noise and their loss landscape is less smooth. Thus, fitting them with FGSM is more likely to distort the loss landscape, resulting in catastrophic overfitting.
Furthermore, catastrophic overfitting is closely related to the optimization process of the inner maximization, \textit{e.g.}, the setting of step size. When catastrophic overfitting does not occur, the larger step size leads to a stronger attack and thus strengthens the robustness of the network \cite{fastat}. On the other side, a larger step size is more likely to cause catastrophic overfitting in the training process \cite{fgsmga, fastat}.
Based on these findings, we propose \textit{Adversarial Training with Adaptive Step size ({{ATAS}})}, a simple but effective fast AT method that uses the previous initialization of ATTA \cite{atta} and sets the step size of the inner maximization inversely proportional to the input gradient norm. Instances with large gradient norms are given a small step size to prevent catastrophic overfitting. By contrast, instances with small gradient norms are given large step sizes to improve the strength of the attack.
We theoretically analyze the convergence of {{ATAS}} and prove that it converges faster than the non-adaptive counterpart commonly adopted in existing works \cite{atta}, especially when the distribution of the input gradient norm is long-tailed. Empirically, we evaluate {{ATAS}} on CIFAR10, CIFAR100 \cite{cifar10} and ImageNet \cite{deng2009imagenet} with different network architectures and adversarial budgets, showing that {{ATAS}} mitigates catastrophic overfitting and achieves higher robust accuracy under various attacks including PGD10, PGD50 \cite{at} and AutoAttack \cite{autoattack}.
Our contributions are summarized as follows:
1) To the best of our knowledge, we are the first to analyze catastrophic overfitting from the perspective of training instances, and demonstrate that instances with large input gradient norms are more likely to cause catastrophic overfitting.
2) Based on our findings, we propose a new algorithm, {{ATAS}}, which sets the step size of the inner maximization inversely proportional to the input gradient norm in order to prevent catastrophic overfitting and maintain the strength of the attack.
3) Theoretically, we prove that {{ATAS}} converges faster than its non-adaptive counterpart.
4) Empirically, we conduct extensive experiments to evaluate {{ATAS}} on different datasets, network architectures and adversarial budgets, showing that {{ATAS}} consistently improves the robust accuracy and mitigates catastrophic overfitting.
\section{Background and Related Work}
\subsection{Adversarial Examples.}
Adversarial examples are first discussed in \cite{szegedy2013intriguing}, where a small perturbation of the input significantly changes the prediction. Adversarial examples can be generated using the gradient of the input ${\mathbf{x}}$. Fast Gradient Signed Method (FGSM) \cite{fgsm} approximates the loss function $\ell({\mathbf{x}}, y; {\bm{\theta}})$ with the first order Taylor expansion so that adversarial examples can be generated with one step of projected gradient
$
{\mathbf{x}}^{\text{FGSM}} = {\mathbf{x}} + \varepsilon \cdot \text{sgn} (\nabla_{{\mathbf{x}}}\ell({\mathbf{x}}, y; {\bm{\theta}}))\;,
$
where $\varepsilon$ is the adversarial budget. Projected Gradient Descent (PGD) \cite{at} extends FGSM to multiple steps to strengthen the attack. With a step size $\alpha$, the adversarial example at the $t$-th step is
$
{\mathbf{x}}^{t+1} = \Pi_{{\mathcal{B}}_p({\mathbf{x}}, \varepsilon)}[{\mathbf{x}}^{t} + \alpha \cdot \text{sgn} (\nabla_{{\mathbf{x}}^{t}}\ell({\mathbf{x}}^{t}, y; {\bm{\theta}}))]\;,
$
where $\Pi_{{\mathcal{B}}_p({\mathbf{x}}, \varepsilon)}$ denotes the projection onto ${\mathcal{B}}_p({\mathbf{x}}, \varepsilon)$. Several stronger attacks have been proposed to reliably evaluate the models' robustness \cite{squareattack, fab, autoattack}. Among them, AutoAttack \cite{autoattack} stands out as the strongest attack.
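The FGSM and PGD updates above can be made concrete with a minimal NumPy sketch on a toy differentiable loss (our illustration, not the paper's code; `grad_fn` is any routine returning the input gradient):

```python
import numpy as np

def fgsm(x, grad_fn, eps):
    """One-step FGSM: x + eps * sign(grad)."""
    return x + eps * np.sign(grad_fn(x))

def pgd(x, grad_fn, eps, alpha, steps):
    """Multi-step PGD with projection onto the L_inf ball B(x, eps)."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))  # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)         # projection Pi_B
    return x_adv

# Toy loss l(z) = ||z||^2: the gradient points away from the origin,
# so both attacks push the input to the corner of the budget.
grad = lambda z: 2 * z
x = np.array([1.0, -1.0])
x_fgsm = fgsm(x, grad, eps=8/255)
x_pgd = pgd(x, grad, eps=8/255, alpha=2/255, steps=10)
assert np.allclose(x_fgsm, x + (8/255) * np.sign(x))
assert np.all(np.abs(x_pgd - x) <= 8/255 + 1e-12)
```

On this convex toy problem the two attacks coincide; on a neural network, the multi-step PGD is typically far stronger but costs `steps` gradient computations.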
While many algorithms \cite{guo2017countering, liao2018defense, at, song2017pixeldefend, mart, trades} have been proposed to defend against adversarial attacks, adversarial training and its variants \cite{at, mart,trades} are shown to be the most effective methods to train a truly robust network. Adversarial training can be formulated as the minimax problem in \Cref{eqn:at}. Finding solutions of minimax optimization problems has been a major endeavor in mathematics and computer science \cite{bacsar1998dynamic, roughgarden2010algorithmic}. Theoretically, the well-known Stochastic Gradient Descent Ascent (SGDA) algorithm finds an $\varepsilon$-approximate stationary point in $\mathcal{O}(1/\varepsilon^2)$ iterations with averaging for convex-concave games \cite{mokhtari2020unified}. However, it is not appropriate to formulate the optimization of AT as SGDA or SGDmax \cite{sgda}, since AT only updates a part of the coordinates in ${\mathbf{x}}=[x_1, x_2, \cdots x_n]$ for the maximization. The inner maximization actually corresponds to stochastic block coordinate ascent.
Empirically, the neural network is non-concave with respect to the input, so perfectly solving the inner maximization is NP-hard. It is usually approximated by a strong attack like PGD \cite{at}, which requires multiple gradient computations. Therefore, adversarial training is much slower than vanilla training.
\subsection{Fast Adversarial Training.}
FreeAT \cite{freeat} first proposes a fast AT method by simultaneously optimizing the model's parameter and the adversarial perturbations by batch replaying.
YOPO \cite{yopo} adopts a similar strategy to optimize the adversarial loss function. Later on, single-step methods were shown to be more effective than FreeAT and YOPO \cite{fastat}. FGSM with Random Start (FGSM-RS) can generate adversarial perturbations in one step to train a robust network if the hyperparameters are carefully tuned \cite{fastat}. ATTA \cite{atta} utilizes the transferability of adversarial examples between epochs, using the adversarial example from the previous epoch as the initialization and optimizing the model parameters with
\begin{equation}
\begin{aligned}
{\mathbf{x}}_i^{j} &= \Pi_{{\mathcal{B}}_p({\mathbf{x}}_i, \varepsilon)}[{\mathbf{x}}_i^{j-1} + \alpha \cdot \text{sgn} (\nabla_{{\mathbf{x}}_i^{j-1}}\ell({\mathbf{x}}_i^{j-1}, y; {\bm{\theta}}))] \\
{\bm{\theta}} & = {\bm{\theta}} - \eta \nabla_{{\bm{\theta}}}\ell({\mathbf{x}}_i^j, y_i; {\bm{\theta}})\;,
\end{aligned}
\end{equation}
where ${\mathbf{x}}_i^j$ denotes the adversarial example generated for the $i$-th instance ${\mathbf{x}}_i$ at the $j$-th epoch. ATTA shows robust accuracy comparable to FGSM-RS. SLAT \cite{slat} perturbs both the inputs and the latent features simultaneously with FGSM, ensuring more reliable performance.
As mentioned above, these single-step methods suffer from \textit{catastrophic overfitting}, meaning the robustness against PGD attack suddenly drops to nearly 0\% while the robust accuracy against FGSM attack rapidly increases.
In order to prevent catastrophic overfitting, FGSM-GA \cite{fgsmga} adds a regularizer that aligns the direction of the input gradient. Another work \cite{kim2020understanding} studies the phenomenon from the perspective of the loss landscape, finding that catastrophic overfitting results from a highly distorted loss surface. It proposes a new algorithm that resolves catastrophic overfitting by checking the loss value along the direction of the gradient. However, both algorithms require much more computation than FGSM-RS \cite{fastat} and ATTA \cite{atta}. Compared with these works, we study catastrophic overfitting from the perspective of training instances and show that using adaptive step sizes in single-step methods prevents catastrophic overfitting. Our method achieves better performance with negligible computational overhead.
Adaptive step sizes have been widely used in training neural networks such as AdaGrad \cite{adagrad}, RMSProp \cite{rmsprop} and ADAM \cite{fang2019convergence, adam, amsgrad}. However, our motivation is different, and to the best of our knowledge, we are the first to introduce the adaptive step size in fast AT.
\section{Motivation}
\label{sec:motivation}
Catastrophic overfitting is interpreted as a result of highly distorted loss landscapes of the input \cite{kim2020understanding}.
For example, FGSM-RS \cite{fastat} uses large step sizes in the inner maximization to generate adversarial examples. It may only minimize the classification loss near the boundary of the adversarial budget, while the loss inside the adversarial budget may increase, leading to a highly distorted loss landscape.
Since different inputs have different loss landscapes, they may have different probabilities of causing catastrophic overfitting. Instances with large gradient norms are more sensitive to adversarial noise. Thus, the network may simply minimize the loss on the FGSM-perturbed examples near the boundary instead of over the whole space within the adversarial budget. This leads to highly distorted loss landscapes and catastrophic overfitting. The following experiments verify our hypothesis of catastrophic overfitting in FGSM-RS; the results for ATTA \cite{atta} are deferred to \Cref{sec:addexp_atta}.
\begin{figure}[t]
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{Figure/loss_gdnorm_easy.pdf}
\caption{${\mathcal{D}}_1^1$ (Instances with smallest 10\% gradient norm)}
\label{fig:loss_gdnorm_easy}
\end{subfigure}
\begin{subfigure}[t]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{Figure/loss_gdnorm_hard.pdf}
\caption{${\mathcal{D}}_{10}^{10}$ (Instances with largest 10\% gradient norm)}
\label{fig:loss_gdnorm_hard}
\end{subfigure}
\vspace{-0.2cm}
\caption{The loss surface of the subsets ${\mathcal{D}}_1^1$ and ${\mathcal{D}}_{10}^{10}$. We average the loss of the instances from each subset. $v_1$ is the direction of adversarial noise and $v_2$ is a random direction. Figures from left to right plot the loss surface as the training step increases and each column of (a) and (b) corresponds to the same step of FGSM-RS.}
\label{fig:loss_co}
\vspace{-0.5cm}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Figure/FGSM_RS_CO.pdf}
\vspace{-0.5cm}
\caption{The robust training accuracy curve of FGSM-RS trained on different subsets of CIFAR10. The adversarial budgets and the step sizes are shown on top of each figure. The sudden decrease in accuracy indicates catastrophic overfitting. }
\label{fig:co}
\vspace{-0.6cm}
\end{figure*}
\noindent\textbf{Metrics of Input Gradient Norm.} To verify the hypothesis that instances with large gradient norms cause catastrophic overfitting, we divide the training instances into different subsets according to their gradient norms. Following the grouping method in \cite{easyhard}, we average the gradient norm across the training process to reduce randomness. Formally, we perform FGSM-RS to train a ResNet-18 (RN-18) on CIFAR10 for $N=30$ epochs with $\varepsilon=8/255$ and step size $\alpha = 10/255$. \change{Catastrophic overfitting does not happen in this setting.} The average gradient norm is defined as
$
GN({\mathbf{x}}_i) = \frac{1}{N} \sum_{j=1}^{N} \|\nabla_{\tilde{{\mathbf{x}}}_i^j} \ell(\tilde{{\mathbf{x}}}_i^j, y_i; {\bm{\theta}})\|_2
$,
where $\tilde{{\mathbf{x}}}_i^j$ is the random initialization of ${\mathbf{x}}_i$ at the $j$-th epoch.
We sort ${\mathbf{x}}_i$ according to $GN({\mathbf{x}}_i)$ and define
$\text{rank}({\mathbf{x}}_i) = \frac{1}{n}\sum_{j=1}^n 1(GN({\mathbf{x}}_j) < GN({\mathbf{x}}_i))$
as the fraction of instances with smaller average gradient norm than ${\mathbf{x}}_i$. We divide the subsets according to $\text{rank}({\mathbf{x}}_i)$:
${\mathcal{D}}_i^j = \{{\mathbf{x}}_k \mid \frac{i-1}{10} \le \text{rank}({\mathbf{x}}_k) < \frac{j}{10}\}.$ The classes in each subset are roughly balanced: on CIFAR10, the maximum and minimum proportions of a single class across all subsets are 10.86\% and 8.98\%, respectively.
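The ranking and grouping scheme above can be sketched in a few lines of NumPy (our illustration; `grad_norms` stands for the averaged per-instance norms $GN({\mathbf{x}}_i)$):

```python
import numpy as np

def build_subsets(grad_norms, n_groups=10):
    """Split instances into groups by average input-gradient norm:
    rank(x_i) is the fraction of instances with smaller norm; group g
    holds instances whose rank lies in [(g-1)/n_groups, g/n_groups)."""
    n = len(grad_norms)
    rank = np.argsort(np.argsort(grad_norms)) / n  # fraction with smaller norm
    return [np.where((rank >= (g - 1) / n_groups) & (rank < g / n_groups))[0]
            for g in range(1, n_groups + 1)]

rng = np.random.default_rng(0)
norms = rng.lognormal(size=100)  # long-tailed, as observed for gradient norms
groups = build_subsets(norms)
assert all(len(g) == 10 for g in groups)
# The last group indeed contains the largest-norm instances
assert norms[groups[-1]].min() >= norms[groups[0]].max()
```

Here `groups[0]` plays the role of ${\mathcal{D}}_1^1$ and `groups[-1]` that of ${\mathcal{D}}_{10}^{10}$.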
\noindent\textbf{Loss Landscape.} We train a new RN-18 using FGSM-RS and enlarge the step size to $\alpha=14/255$ to cause catastrophic overfitting. \Cref{fig:loss_co} shows the loss surface of the subsets with the smallest (${\mathcal{D}}_1^1$) and the largest gradient norm (${\mathcal{D}}_{10}^{10}$) when catastrophic overfitting happens. ${\mathcal{D}}_{10}^{10}$ exhibits catastrophic overfitting first: the loss surface of the input gets highly distorted and the loss function reaches its highest value in the middle of the adversarial budget. By contrast, the loss surface of ${\mathcal{D}}_1^1$ is less distorted. \Cref{fig:loss_co} thus suggests that subsets with large gradient norms are more likely to suffer from catastrophic overfitting.
\noindent\textbf{Training with Different Subsets.} We perform FGSM-RS on different subsets of CIFAR10 with different adversarial budgets $\varepsilon$ and step sizes $\alpha$ to show that fitting examples with larger gradient norms is more likely to cause catastrophic overfitting. We train the RN-18 on instances with small gradient norms (${\mathcal{D}}_1^2$, ${\mathcal{D}}_1^3$, ${\mathcal{D}}_1^4$) and instances with large gradient norms (${\mathcal{D}}_7^{10}$, ${\mathcal{D}}_8^{10}$, ${\mathcal{D}}_9^{10}$). While different subsets contain different numbers of instances, we keep the number of training iterations the same for a fair comparison.
In \Cref{fig:co}, we show the robust accuracy of the whole training set under PGD-10. For $\varepsilon=8/255$ with $\alpha=10/255$, the models trained with all subsets do not exhibit catastrophic overfitting. However, as the step size $\alpha$ increases, subsets with large norms exhibit catastrophic overfitting first, while catastrophic overfitting is less likely to occur in models trained with the subsets of small gradient norm. \change{The figure shows that 1) for each subset, catastrophic overfitting is more likely to occur when increasing the step size; and 2) for a fixed step size, catastrophic overfitting is less likely to happen for subsets with small gradient norms.}
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.25\linewidth}
\centering
\includegraphics[width=\linewidth]{Figure/Loss_gap.pdf}
\captionsetup{font=normalsize}
\vspace{-0.6cm}
\caption{}
\label{fig:loss_gap}
\end{subfigure}
\begin{subfigure}[t]{0.25\linewidth}
\centering
\includegraphics[width=\linewidth]{Figure/Accuracy_Stepsize.pdf}
\captionsetup{font=normalsize}
\vspace{-0.6cm}
\caption{}
\label{fig:accuracy_stepsize}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{Figure/convergence_FGSM_RS.pdf}
\vspace{-0.6cm}
\caption{}
\label{fig:adapt_fgsm_rs}
\end{subfigure}
\vspace{-0.3cm}
\caption{(a) The loss gap of training instances between PGD10 and FGSM-RS $\ell({\mathbf{x}}^{\text{PGD}}, y) - \ell({\mathbf{x}}^{\text{FGSM-RS}}, y)$ with different step sizes for a FGSM-RS trained robust model; (b) The test robust accuracy of the models trained by FGSM-RS with different step sizes. (c) Accuracy of a WideResNet-28-10 under PGD10 of FGSM-RS and FGSM-RS with adaptive step size. }
\label{fig:stepsize}
\vspace{-0.5cm}
\end{figure}
\section{Algorithms}
From our analysis in \Cref{sec:motivation}, the step size of the inner maximization plays an important role in the performance of single-step methods. An overly large step size pushes all FGSM perturbations toward the boundary of the adversarial budget, causing catastrophic overfitting, and thus the robust accuracy under PGD decreases to zero. However, we cannot simply reduce the step size: as shown in \Cref{fig:loss_gap} and \ref{fig:accuracy_stepsize}, increasing the step size strengthens the adversarial attack and improves the robust accuracy.
To strengthen the attack as much as possible while avoiding catastrophic overfitting, we advocate an instance-wise step size. The analysis in \Cref{sec:motivation} shows that we should use small step sizes for instances with large gradient norms to prevent catastrophic overfitting, and large step sizes for instances with small gradient norms to strengthen the attack.
Thus, we use the moving average of the squared gradient norm
\begin{equation}
v_i^j = \beta v_i^{j-1} + (1-\beta) \|\nabla_{\tilde{{\mathbf{x}}}_i} \ell(\tilde{{\mathbf{x}}}_i, y_i; {\bm{\theta}})\|_2^2\;,
\end{equation}
to adjust the step size $\alpha_i^j$ for ${\mathbf{x}}_i$ at the $j$-th epoch. Here, $\tilde{{\mathbf{x}}}_i$ is the initialization of ${\mathbf{x}}_i$ and $\beta$ is the momentum factor stabilizing the step size. The step size $\alpha_i^j$ decreases with $v_i^j$:
\begin{equation}
\alpha_i^j = \gamma/(c + \sqrt{v_i^j})\;,
\end{equation}
where $\gamma$ is a pre-defined learning rate and $c$ is a constant preventing $\alpha_i^j$ from being too large.
We incorporate the adaptive step size $\alpha_i^j$ into FGSM-RS, which randomly initializes the perturbation at the inner maximization step. The results are shown in \Cref{fig:adapt_fgsm_rs}: with the adaptive step size, catastrophic overfitting does not occur.
In addition, the average step size of the adaptive step size method is $10.8/255$, which is even larger than the fixed step size $8/255$ in FGSM-RS, leading to a stronger attack and better adversarial robustness.
Random initialization limits the magnitude of perturbations for instances with small step size, weakening the attack strength. In order to make the whole space within the adversarial budget reachable, we consider the previous initialization in ATTA \cite{atta}, which utilizes the transferability of adversarial examples and uses the adversarial perturbation obtained in the previous epoch as the initialization for the inner maximization. Combined with the previous initialization, {{ATAS}} does not need large $\alpha_i^j$ to reach the whole $\ell_p$ norm ball.
For each instance, we use adaptive step size $\alpha_i^j$ and perform the following inner maximization to obtain the adversarial examples:
\begin{equation}
{\mathbf{x}}_{i}^{j} = \Pi_{{\mathcal{B}}_p({\mathbf{x}}_i, \varepsilon)}[{\mathbf{x}}_{i}^{j-1} + \alpha_i^j \cdot \text{sgn} (\nabla_{{\mathbf{x}}_{i}^{j-1}}\ell({\mathbf{x}}_{i}^{j-1}, y_i; {\bm{\theta}}))],
\label{eqn:adaptatta}
\end{equation}
where ${\mathbf{x}}_i^j$ is the adversarial example at the $j$-th epoch.
Then the parameter ${\bm{\theta}}$ is updated with ${\mathbf{x}}_{ i}^{j}$
\begin{equation}
{\bm{\theta}} = {\bm{\theta}} - \eta \nabla_{\bm{\theta}} \ell({\mathbf{x}}_{i}^{j}, y_i; {\bm{\theta}})\;.
\end{equation}
In contrast to previous methods \cite{fgsmga, kim2020understanding} that need a large computational overhead to resolve catastrophic overfitting, the overhead of {{ATAS}} is negligible, since the input gradient $\nabla_{{\mathbf{x}}_i^{j-1}} \ell({\mathbf{x}}_i^{j-1}, y_i; {\bm{\theta}})$ is already calculated in the attack step in \Cref{eqn:adaptatta}. Thus, calculating the pre-conditioner $v_i^j$ and the step size $\alpha_i^j$ does not require additional forward-backward passes of the network. The training time of {{ATAS}} is almost the same as that of ATTA \cite{atta} and FGSM-RS \cite{fastat}. The detailed algorithm of {{ATAS}} is shown in \Cref{alg:adaptatta}.
{
\begin{algorithm}[t]
\caption{{ATAS}}
\label{alg:adaptatta}
\begin{algorithmic}[1]
\REQUIRE Training set ${\mathcal{D}}$, The model $f_{\bm{\theta}}$ with loss function $\ell$, Adversarial budget $\varepsilon$
\ENSURE Optimized model $f_{{\bm{\theta}}^*}$\\
\STATE $v_i^0=0$ for $i=1, \cdots, n$
\STATE ${\mathbf{x}}_i^0$ = ${\mathbf{x}}_i$ + Uniform($-\varepsilon, \varepsilon$) for $i=1, \cdots, n$
\FOR{$ j = 1$ to $N$}
\FOR {${\mathbf{x}}_i, y_i \in {\mathcal{D}}$}
\STATE $v_i^{j} = \beta v_i^{j-1} + (1-\beta) \|\nabla_{{\mathbf{x}}_i^{j-1}} \ell({\mathbf{x}}_i^{j-1}, y_i; {\bm{\theta}})\|_2^2$
\STATE $\alpha_i^{j} = \gamma/(c + \sqrt{v_i^j})$
\STATE ${\mathbf{x}}_i^{j} = \Pi_{{\mathcal{B}}_p({\mathbf{x}}_i, \varepsilon)}[{\mathbf{x}}_i^{j-1} + \alpha_i^{j}\cdot \text{sgn} (\nabla_{{\mathbf{x}}_i^{j-1}}\ell({\mathbf{x}}_i^{j-1}, y_i; {\bm{\theta}}))]$
\STATE ${\bm{\theta}} = {\bm{\theta}} - \eta \nabla_{{\bm{\theta}}}\ell({\mathbf{x}}_i^{j}, y_i; {\bm{\theta}})$
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
}
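\Cref{alg:adaptatta} can be replayed end-to-end on a generic differentiable loss. The NumPy sketch below is only an executable illustration under our own toy setup (single-sample updates, user-supplied gradient callbacks), not the paper's implementation:

```python
import numpy as np

def atas_train(X, y, theta, loss_grad_x, loss_grad_theta,
               eps, gamma, c, beta, eta, epochs, rng):
    """Minimal sketch of Algorithm 1 (ATAS): one FGSM step with an
    adaptive per-instance step size, followed by a parameter update."""
    n = len(X)
    v = np.zeros(n)
    X_adv = X + rng.uniform(-eps, eps, size=X.shape)   # line 2: random init
    for _ in range(epochs):
        for i in rng.permutation(n):
            g = loss_grad_x(X_adv[i], y[i], theta)
            v[i] = beta * v[i] + (1 - beta) * np.linalg.norm(g) ** 2
            alpha = gamma / (c + np.sqrt(v[i]))        # adaptive step size
            X_adv[i] = np.clip(X_adv[i] + alpha * np.sign(g),
                               X[i] - eps, X[i] + eps)  # FGSM step + projection
            theta = theta - eta * loss_grad_theta(X_adv[i], y[i], theta)
    return theta, X_adv

# Toy model: squared loss of a linear predictor, l = (theta.x - y)^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3)); y = rng.normal(size=8)
gx = lambda x, yi, th: 2 * (th @ x - yi) * th   # gradient w.r.t. the input
gt = lambda x, yi, th: 2 * (th @ x - yi) * x    # gradient w.r.t. the parameters
theta, X_adv = atas_train(X, y, np.zeros(3), gx, gt, eps=0.1,
                          gamma=0.16 * 0.01, c=0.01, beta=0.5,
                          eta=0.05, epochs=3, rng=rng)
assert np.all(np.abs(X_adv - X) <= 0.1 + 1e-12)  # perturbations stay in the ball
assert np.all(np.isfinite(theta))
```

Note that the input gradient `g` is the only backward pass per instance, matching the claim that the adaptive step size adds no extra forward-backward cost.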
\noindent\textbf{Theoretical Analysis of {{ATAS}}.}
We analyze the convergence of {{ATAS}} with $L_\infty$ adversarial budget. The proof is deferred to \Cref{sec:proof}.
Given the objective function
\begin{equation}
\phi({\bm{\theta}}, {\mathbf{x}}) = \frac{1}{n} \sum_{i=1}^n \ell({{\mathbf{x}}}_i, y_i; {\bm{\theta}})\;,
\label{eqn:phi}
\end{equation}
the minimax problem can be formulated as follows:
\begin{equation}
\min_{\bm{\theta}} \max_{{\mathbf{x}}^*=[{{\mathbf{x}}}_1^*, {{\mathbf{x}}}_2^*, \cdots, {{\mathbf{x}}}_n^*] \in {\mathcal{B}}_\infty({\mathbf{x}}, \varepsilon)} \phi({\bm{\theta}}, {\mathbf{x}}^*)\;,
\end{equation}
where ${\mathbf{x}}^*$ is the optimal adversarial example depending on ${\bm{\theta}}$. We consider the minimax optimization in the convex-concave and smooth setting, where the loss function $\ell$ satisfies the following assumptions.
\newtheorem{assumption}{Assumption}[section]
\begin{assumption}
The training loss function $\ell$ satisfies the following constraints:
\noindent 1. $\ell$ is convex and $L_\theta$-smooth in ${\bm{\theta}}$;
${\bm{\theta}}$ and its gradient are bounded in the $L_2$ norm:
$$
\|{\bm{\theta}}- {\bm{\theta}}^*\|_2 \le D_{\theta, 2}, \quad \frac{1}{n}\sum_{i=1}^n \|\nabla_{\bm{\theta}} \ell({\mathbf{x}}_i', y_i; {\bm{\theta}})\|_2^2 \le G_{\theta, 2}^2\;,
$$
where ${\bm{\theta}}^* = \arg\min_{{\bm{\theta}}} \max_{{\mathbf{x}}^* \in {\mathcal{B}}_\infty({\mathbf{x}}, \varepsilon)} \phi({\bm{\theta}}, {\mathbf{x}}^*)$.
\noindent 2. $\ell$ is concave and $L_x$-smooth in each ${\mathbf{x}}_i$.
${\mathbf{x}}_i \in \mathbb{R}^d$ is bounded in an $L_\infty$ norm ball with $D_{x, \infty} = 2\varepsilon$. For any ${\mathbf{x}}$ and ${\mathbf{x}}'$,
$
\|{\mathbf{x}}- {\mathbf{x}}'\|_\infty \le D_{x, \infty}
$,
and the gradients of the inputs also satisfy
$$
\|\nabla_{{\mathbf{x}}_i'}\ell({\mathbf{x}}_i', y_i; {\bm{\theta}})\|_2^2 \le G_{x_i, 2}^2, \ \sum_{i=1}^n\|\nabla_{{\mathbf{x}}_i'}\ell({\mathbf{x}}_i', y_i; {\bm{\theta}})\|_2^2 \le G_{x, 2}^2
$$
\vspace{-0.4cm}
\label{asp:convexconcave}
\end{assumption}
We average the trajectory over $T$ steps, $\bar{{\bm{\theta}}}^T = \frac{\sum_{t=1}^T {\bm{\theta}}^t}{T}$ and $\bar{{\mathbf{x}}}^T = \frac{\sum_{t=1}^T{\mathbf{x}}^{t+1}}{T}$,
to obtain near-optimal points; this is a standard technique for analyzing stochastic gradient methods \cite{adagrad}. The convergence gap
$ \max_{{\mathbf{x}}^* \in {\mathcal{B}}_\infty({\mathbf{x}}, \varepsilon)} \phi(\bar{{\bm{\theta}}}^T, {{\mathbf{x}}}^*) - \max_{{\mathbf{x}}^* \in {\mathcal{B}}_\infty({\mathbf{x}}, \varepsilon)} \phi({\bm{\theta}}^*, {{\mathbf{x}}}^*)
$
is upper bounded by the regret $R(T)$
\begin{equation} \label{eq:regret}
R(T) = \sum_{t=1}^T[\max_{{\mathbf{x}}^* \in {\mathcal{B}}_\infty({\mathbf{x}}, \varepsilon)}\phi({\bm{\theta}}^t, {\mathbf{x}}^*) - \min_{{\bm{\theta}}^*}\phi({\bm{\theta}}^*, {\mathbf{x}}^t)]\;.
\end{equation}
\newtheorem{lemma}{Lemma}[section]
\begin{lemma}
For $\ell$ satisfying Assumption \ref{asp:convexconcave}, the objective function $\phi$ defined in \Cref{eqn:phi} satisfies
$$
\max_{{\mathbf{x}}^* \in {\mathcal{B}}_\infty({\mathbf{x}}, \varepsilon)} \phi(\bar{{\bm{\theta}}}^T, {{\mathbf{x}}}^*) - \min_{{\bm{\theta}}^*} \max_{{\mathbf{x}}^* \in {\mathcal{B}}_\infty({\mathbf{x}}, \varepsilon)} \phi({\bm{\theta}}^*, {{\mathbf{x}}}^*) \le \frac{R(T)}{T}
$$
\label{theo:regret}
\vspace{-0.5cm}
\end{lemma}
\noindent\textbf{Adaptive Stochastic Gradient Descent Block Coordinate Ascent (ASGDBCA).} {{ATAS}} can be formulated as ASGDBCA, which randomly picks an instance ${\mathbf{x}}_k$ at step $t$, applying stochastic gradient descent to the parameter ${\bm{\theta}}$ and adaptive block coordinate ascent to the input ${{\mathbf{x}}}$. Unlike SGDA \cite{sgda}, where all dimensions of ${\mathbf{x}}$ are updated in each iteration, ASGDBCA only updates some dimensions of ${\mathbf{x}}$. ASGDBCA first calculates the pre-conditioner as
\begin{equation*}
\begin{aligned}
&v_k^{t+1} =
\begin{cases}
\beta v_i^{t} + (1-\beta) \|\nabla_{{\mathbf{x}}_i^t}\ell({\mathbf{x}}_i^t, y_k; {\bm{\theta}}^t)\|_2^2 & i=k \\
v^t_i & i \neq k
\end{cases}, \qquad
\hat{v}^{t+1}_i = \text{max}(\hat{v}^{t}_i, v^{t+1}_i)\;.
\end{aligned}
\end{equation*}
Then ${\mathbf{x}}$, ${\bm{\theta}}$ are optimized with
\begin{equation*}
\begin{aligned}
&{\mathbf{x}}^{t+1}_i =
\begin{cases}
\Pi_{\mathcal{B}_\infty({\mathbf{x}}_i, \varepsilon)}[{\mathbf{x}}^{t}_{i}+\frac{\eta_x}{\sqrt{\hat{v}_{i}^{t+1}}} \nabla_{{\mathbf{x}}_i^t}\ell({\mathbf{x}}_i^t, y_i; {\bm{\theta}}^t)] & i=k \\
{\mathbf{x}}_i^t & i \neq k
\end{cases}, \quad
{\bm{\theta}}^{t+1} = {\bm{\theta}}^{t}-\eta_\theta \nabla_{{\bm{\theta}}}\ell({\mathbf{x}}_k^{t+1}, y_k; {\bm{\theta}}^t)\;.
\end{aligned}
\label{eqn:ada_sgdbca}
\end{equation*}
The difference between ASGDBCA and {{ATAS}} lies in $\hat{v}_k^t$. To prove the convergence of ASGDBCA, the pre-conditioner needs to be non-decreasing; otherwise, the algorithm may fail to converge, similar to ADAM \cite{amsgrad}. However, the non-convergent version of ADAM actually works better for neural networks in practice \cite{adam}. Therefore, {{ATAS}} still uses $v_k^t$ as the pre-conditioner.
\newtheorem{thm}{Theorem}[section]
\begin{thm}[Regret Bound for ASGDBCA]
Under Assumption \ref{asp:convexconcave}, with $\eta_\theta = \frac{D_{\theta,2}}{G_{\theta,2}\sqrt{T}}$ and $\eta_x = \frac{\sqrt{d}D_{x, \infty}}{\sqrt{T}(1-\beta)^{-1/4}}$, the regret of ASGDBCA is bounded by:
$$
\begin{aligned}
R^{\text{ASGDBCA}}(T) \le & G_{\theta,2} D_{\theta,2} \sqrt{T} + \frac{D_{x,\infty}\sum_{i=1}^n G_{x_i, 2} \sqrt{dT}}{n(1-\beta)^{1/4}} + \frac{dL_xD_{x,\infty}^2}{2n^2\sqrt{1-\beta}}
\end{aligned}
$$
\vspace{-0.5cm}
\label{thm:asgdbca}
\end{thm}
\noindent\textbf{Comparison with the Non-adaptive Version.} The non-adaptive version of {{ATAS}} is ATTA, which can be formulated as the Stochastic Gradient Descent Block Coordinate Ascent (SGDBCA):
\begin{equation*}
\begin{aligned}
{\mathbf{x}}^{t+1}_{i} &=
\begin{cases}
\Pi_{\mathcal{B}_\infty({\mathbf{x}}_i, \varepsilon)}[{\mathbf{x}}^{t}_{i}+\eta_x \nabla_{{\mathbf{x}}_i^t}\ell({\mathbf{x}}_i^t, y_i; {\bm{\theta}}^t)] & i=k \\
{\mathbf{x}}_i^t & i \neq k
\end{cases}, \quad
{\bm{\theta}}^{t+1} &= {\bm{\theta}}^{t}-\eta_\theta \nabla_{{\bm{\theta}}}\ell({\mathbf{x}}_k^{t+1}, y_k; {\bm{\theta}}^t)\;,
\end{aligned}
\end{equation*}
\begin{thm}[Regret Bound for SGDBCA]
Under Assumption \ref{asp:convexconcave}, with constant learning rates $\eta_\theta = \frac{D_{\theta,2}}{G_{\theta,2}\sqrt{T}}$ and $\eta_x = \frac{\sqrt{nd}D_{x,\infty}}{G_{x,2}\sqrt{T}}$, the regret $R^{\text{SGDBCA}}(T)$ of SGDBCA is bounded by:
$$
R^{\text{SGDBCA}}(T) \le G_{\theta,2} D_{\theta,2} \sqrt{T} + G_{x,2}D_{x,\infty}\sqrt{\frac{dT}{n}} + \frac{d L_x D_{x, \infty}^2}{2n}
$$
\label{thm:sgdbca}
\vspace{-0.5cm}
\end{thm}
Theorems \ref{thm:asgdbca} and \ref{thm:sgdbca} show that ASGDBCA converges faster than SGDBCA. When $T$ is large, the third term of the regret in both SGDBCA and ASGDBCA is negligible. Since their first terms are the same, the main difference is the regret bound for ${\mathbf{x}}$ in the second term: $G_{x,2}D_{x,\infty}\sqrt{\frac{dT}{n}}$ versus $\frac{D_{x,\infty}\sum_{i=1}^n G_{x_i, 2} \sqrt{dT}}{n(1-\beta)^{1/4}}$.
The ratio between them is
$$
\text{Ratio} = \frac{1}{(1-\beta)^{\frac{1}{4}}}\sqrt{\frac{\sum_{i=1}^n G_{x_i,2}^2}{n}\Big/(\frac{\sum_{i=1}^{n} G_{x_i,2}}{n})^2}
$$
The Cauchy-Schwarz inequality implies that this ratio is always larger than 1. The gap between ASGDBCA and SGDBCA grows when $G_{x_i,2}$ has a long-tailed distribution, which explains the faster convergence of {{ATAS}} relative to its non-adaptive counterpart. We show the empirical histogram of $G_{x_i,2}$ for an RN-18 and the resulting ratio in \Cref{fig:theory} in the Appendix,
which demonstrates the long-tailed distribution on common datasets.
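The behavior of this ratio is easy to probe numerically. The sketch below evaluates it for $\beta=0.5$ with synthetic lognormal gradient norms; the lognormal choice is only an illustration of a long-tailed distribution, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.5

# Synthetic per-instance gradient norms G_{x_i,2} (illustrative long tail).
G = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)

rms = np.sqrt(np.mean(G**2))             # sqrt( sum_i G_i^2 / n )
mean = np.mean(G)                        # sum_i G_i / n
ratio = rms / mean / (1 - beta)**0.25    # the ratio from the bound above

# Cauchy-Schwarz gives rms >= mean, so the ratio always exceeds 1;
# the heavier the tail of G, the larger rms / mean becomes.
assert rms >= mean and ratio > 1.0
```

For a nearly constant $G$ the ratio approaches $1/(1-\beta)^{1/4}\approx 1.19$, while heavy-tailed norms push it far higher.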
\input{table_figure}
\section{Experiments}
\noindent\textbf{Baselines.} We compare {{ATAS}} with the SOTA fast AT algorithms including FreeAT \cite{freeat}, YOPO \cite{yopo}, FGSM-RS \cite{fastat}, FGSM-GA \cite{fgsmga}, SSAT \cite{kim2020understanding} and ATTA \cite{atta}. We also compare {{ATAS}} with standard AT whose inner maximization is solved by PGD10, providing a reference for the ideal performance.
\noindent\textbf{Attack Methods.} We consider three attacks: PGD10, PGD50 \cite{at} and AutoAttack (AA) \cite{autoattack}. Square Attack, a black-box attack, is included in AutoAttack to eliminate the effect of gradient masking.
\noindent\textbf{Experimental Settings.} {{ATAS}} uses the techniques proposed in ATTA \cite{atta}: the adversarial perturbations are transformed according to the data augmentation and are reset every few epochs. The previous initialization is stored in GPU memory, which adds negligible storage latency to {{ATAS}}. We consider adversarial attacks with the $\ell_\infty$-norm budget. We evaluate fast AT algorithms on CIFAR10 and CIFAR100 \cite{cifar10} with WideResNet-28-10 (WRN-28-10) \cite{wrn} and ResNet-18 (RN-18), and on ImageNet \cite{deng2009imagenet} with ResNet-18 (RN-18) and ResNet-50 (RN-50).
While early stopping is widely used in standard AT \cite{overfitting}, the computational overhead of performing a PGD attack on a separate validation set is large.
Besides, given the small training-time budget of fast AT, even if early stopping is applied to terminate the training before catastrophic overfitting occurs, the training is far from convergence, resulting in poor performance \cite{fgsmga}. Therefore, we follow the previous works \cite{fgsmga, fastat, atta} and do not use early stopping. We set $\beta=0.5$ and $\gamma/c = 16/255$, which is close to the adversarial budget, and we set $c=0.01$ for CIFAR10 and CIFAR100 and $c=0.1$ for ImageNet. More detailed experimental settings are given in \Cref{sec:expsetting}, and additional experiments are available in \Cref{sec:addexp}.
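For concreteness, one inner-maximization step with a gradient-norm-adaptive step size can be sketched as follows. This is only an illustration of the idea: it uses a running estimate $v$ of the squared input gradient norm and the step size $\gamma/(c+\sqrt{v})$, so that $\gamma/c$ caps the step; the precise ATAS update may differ in detail.

```python
import numpy as np

def atas_inner_step(x_adv, grad, x_clean, v, eps, gamma, c, beta=0.5):
    """One l_inf inner-maximization step with a norm-adaptive step size.

    Sketch only: gamma / (c + sqrt(v)) realizes "inversely proportional
    to the input gradient norm"; details may differ from the actual ATAS.
    """
    g_norm = np.linalg.norm(grad)
    v = beta * v + (1 - beta) * g_norm**2        # running squared-norm estimate
    step = gamma / (c + np.sqrt(v))              # large norm -> small step
    x_adv = x_adv + step * np.sign(grad)         # ascent direction for l_inf
    x_adv = np.clip(x_adv, x_clean - eps, x_clean + eps)  # project to the ball
    return x_adv, v

x = np.zeros(4)
eps, c = 8 / 255, 0.01
gamma = (16 / 255) * c                           # so that gamma / c = 16/255
x1, v1 = atas_inner_step(x.copy(), 0.1 * np.ones(4), x, 0.0, eps, gamma, c)
x2, v2 = atas_inner_step(x.copy(), 10.0 * np.ones(4), x, 0.0, eps, gamma, c)
assert np.all(np.abs(x1) <= eps) and np.all(np.abs(x2) <= eps)
assert abs(x2[0]) < abs(x1[0])                   # larger norm, smaller step
```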
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Figure/convergence_cifar10.pdf}
\vspace{-0.6cm}
\caption{Robust training cross-entropy loss under PGD10 of CIFAR10 with different network architectures and adversarial budgets. The curve is smoothed to clearly show the convergence. }
\label{fig:convergence}
\vspace{-0.5cm}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Figure/acc_curve.pdf}
\vspace{-0.6cm}
\caption{Robust accuracy under AutoAttack for different datasets on different network architectures with varying adversarial budgets. {{ATAS}} achieves the highest robust accuracy in these cases. The accuracy numbers can be found in the \Cref{sec:addexp_acc}.}
\label{fig:acc_curve}
\vspace{-0.6cm}
\end{figure*}
\begin{table}[!t]
\vspace{0.3cm}
\centering
\caption{Ablation study of hyperparameters $\gamma$ (left) and $c$ (right) on CIFAR10 and RN-18 under AA.}
\begin{subtable}{0.51\linewidth}
\captionsetup{font=normalsize}
\setlength\tabcolsep{2.0pt}
\begin{tabular}{c c c c c c c } \hline
$\gamma/0.01*255$ & 12 & 14 & 16 & 18 & 20 \\ \hline
$\varepsilon=8/255$ & 45.20 & 45.21 & 45.38 & 45.50 & 45.60 \\
$\varepsilon=12/255$ & 30.84 & 31.06 & 30.56 & 31.21 & 31.04 \\
$\varepsilon=16/255$ & 21.38 & 21.23 & 21.09 & 21.13 & 20.94 \\ \hline
\end{tabular}
\end{subtable}
\begin{subtable}{0.48\linewidth}
\captionsetup{font=normalsize}
\setlength\tabcolsep{2.0pt}
\begin{tabular}{c c c c c c} \hline
$c$ & 0.005 & 0.007 & 0.01 & 0.02 & 0.04 \\ \hline
$\varepsilon=8/255$ & 45.01 & 45.28 & 45.38 & 45.52 & 45.48 \\
$\varepsilon=12/255$ & 30.08 & 30.80 & 30.56 & 30.69 & 30.52 \\
$\varepsilon=16/255$ & 20.36 & 20.84 & 21.09 & 21.07 & 20.48 \\ \hline
\end{tabular}
\end{subtable}
\label{tab:ablation}
\vspace{-0.7cm}
\end{table}
\noindent\textbf{Convergence.}
\Cref{fig:convergence} shows the curve of the training loss
$
\max_{{\mathbf{x}}^*=[{{\mathbf{x}}}_1^*, \cdots, {{\mathbf{x}}}_n^*] \in {\mathcal{B}}_\infty({\mathbf{x}}, \varepsilon)} \phi({\bm{\theta}}, {\mathbf{x}}^*)
$
on CIFAR10 with different network architectures and different adversarial budgets, where ${\mathbf{x}}^*$ is approximated by PGD10 and the objective function $\phi$ is approximated by mini-batches of training instances at each step. {{ATAS}} achieves a smaller robust training loss at the end of training, demonstrating that {{ATAS}} converges faster than ATTA and the other baselines. We also show the relationship between the gradient norm distribution and the convergence gap between ATTA and {{ATAS}} in \Cref{sec:addexp_convergence}.
\noindent\textbf{Robust Accuracy.} We provide our main results in \Cref{tab:accuracy}, showing the robust accuracy on CIFAR10, CIFAR100 and ImageNet, respectively. \Cref{fig:acc_curve} shows the robust accuracy under AutoAttack for different adversarial budgets; the corresponding numbers are provided in \Cref{sec:addexp_acc}.
\noindent\textbf{CIFAR10 and CIFAR100.} As shown in \Cref{tab:cifar10}, the robust accuracy of FreeAT and YOPO is much lower than that of the other methods. While FGSM-RS maintains non-trivial robust accuracy when using RN-18, it suffers from catastrophic overfitting on larger networks such as WRN-28-10. The regularizer in FGSM-GA prevents catastrophic overfitting. However, it may over-regularize the network, so that both the clean accuracy and the robust accuracy decrease on WRN-28-10. In addition, the regularizer brings computational overhead: FGSM-GA needs nearly double the training time of the other methods. {{ATAS}} achieves the best robust accuracy among all fast AT algorithms while keeping the training time nearly the same. Furthermore, for small networks like RN-18, the performance of {{ATAS}} is on par with standard AT (PGD10) but needs only one fifth of the training time.
\Cref{tab:cifar100} shows the robust accuracy on CIFAR100, where {{ATAS}} also outperforms the other algorithms. Catastrophic overfitting also happens in SSAT even though the losses of inner points are checked.
\noindent\textbf{ImageNet.}
ATTA and {{ATAS}} need to memorize the adversarial noise for the whole training set. Since frequently loading and storing from disk significantly lowers the training speed, all perturbations should be stored in memory. Thus, we utilize the local property of adversarial examples \cite{huang2020corrattack} and only store an interpolated perturbation in memory. We resize the perturbations from $224\times 224$ to $32\times 32$ for storage and up-sample them back when they are used as the initialization for the next epoch. The detailed algorithm is deferred to \Cref{sec:atas_imagenet}.
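A minimal sketch of this storage trick, using nearest-neighbour resizing in NumPy; the actual implementation presumably interpolates on the GPU, but the memory arithmetic is the same.

```python
import numpy as np

def downsample(delta, k):
    """Keep every k-th pixel (nearest-neighbour); stand-in for the paper's
    224x224 -> 32x32 resizing of the stored perturbation."""
    return delta[..., ::k, ::k]

def upsample(small, k):
    """Nearest-neighbour up-sampling back to full resolution."""
    return np.repeat(np.repeat(small, k, axis=-2), k, axis=-1)

delta = np.random.default_rng(0).uniform(-1, 1, size=(3, 224, 224))
small = downsample(delta, k=7)       # 224 / 7 = 32
restored = upsample(small, k=7)      # initialization for the next epoch

assert small.shape == (3, 32, 32)
assert restored.shape == delta.shape
assert delta.size / small.size == 49.0   # ~49x less memory for storage
```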
\Cref{tab:imagenet} shows the robust accuracy on ImageNet with $\varepsilon=2/255$. {{ATAS}} still has higher robust accuracy than all baselines. FGSM-GA needs to calculate the second-order gradient of the parameters, which requires a huge amount of GPU memory; thus, we could not train it on a big network such as ResNet-50 on ImageNet.
\noindent\textbf{Robust accuracy at different adversarial budgets.} \Cref{fig:acc_curve} shows the robust accuracy of fast AT algorithms under AutoAttack on different datasets, network architectures and adversarial budgets.
The robust accuracy decreases when enlarging the adversarial budget, but {{ATAS}} always outperforms all the baselines for different adversarial budgets, datasets and network architectures. This demonstrates that the improvement of {{ATAS}} is consistent.
\noindent\textbf{Ablation Study.} \Cref{tab:ablation} provides the ablation study on the hyperparameters, showing that {{ATAS}} is not sensitive to them. Besides, as the only difference between {{ATAS}} and ATTA is the step size, the superior performance of {{ATAS}} over ATTA itself forms an ablation study demonstrating the effectiveness of the adaptive step size. The changes of the gradient norm and the step size of {{ATAS}} are shown in \Cref{sec:addexp_step}.
\section{Conclusion}
In this paper, we investigate catastrophic overfitting from the perspective of training instances and show that instances with large gradient norms are more likely to cause catastrophic overfitting in the single-step fast AT methods.
This finding motivates the adaptive training method, {{ATAS}}, which makes the step size of the inner maximization inversely proportional to the input gradient norm.
We theoretically analyze the convergence of {{ATAS}}, showing that our method converges faster than the non-adaptive counterpart especially when the distribution of input gradient norm is long-tailed.
Extensive experiments on CIFAR10, CIFAR100 and ImageNet with different network architectures and adversarial budgets show that {{ATAS}} mitigates catastrophic overfitting and achieves higher robust accuracy under various strong attacks.
\section{Introduction}
A harmonic morphism is a map between two Riemannian manifolds
that pulls back local harmonic functions to local harmonic functions.
The simplest examples of harmonic morphisms are constant maps,
real-valued harmonic functions and isometries.
A characterization of harmonic morphisms was given by Fuglede and Ishihara,
who showed in \cite{Fuglede} and \cite{Ishihara}, respectively, that harmonic
morphisms are exactly the horizontally weakly conformal harmonic maps.
If we restrict our attention to maps whose codomain is a surface, then
the harmonic morphisms are the horizontally weakly conformal maps with minimal fibers
at regular points.
Between two surfaces the harmonic morphisms are exactly the weakly conformal maps. Since the composition
of two harmonic morphisms is again a harmonic morphism,
locally any harmonic morphism to a surface can be turned into a harmonic morphism to the
complex plane by composing with a weakly conformal map.
Local existence of harmonic morphisms can be characterized in terms of foliations.
If the codomain is a surface then the existence of a local harmonic morphism is equivalent to
the existence of a local conformal foliation with minimal fibers at regular points, see \cite{Wood86} by Wood.
Baird and Wood found a necessary condition, see \cite{BaiWoo} Corollary 4.4, on the curvature
for local existence of complex-valued harmonic morphisms on three-manifolds.
In this case the fibers are geodesics and there is an orthonormal basis
$\{X,Y\}$ for the horizontal space such that the \textbf{Ricci curvature condition}
\[\operatorname{Ric}(X,X)=\operatorname{Ric}(Y,Y)\textrm{ and }\operatorname{Ric}(X,Y)=0,\]
is satisfied. In three dimensions this is equivalent to
\[\SPE{R(X,U)U}{X}=\SPE{R(Y,U)U}{Y}\textrm{ and }\SPE{R(X,U)U}{Y}=0\]
for any vertical unit vector $U$, which in turn is equivalent to the fact that the
sectional curvature $K(X_{\theta}\wedge U)$ is
independent of $\theta$ where $X_{\theta}=\cos(\theta)X+\sin(\theta)Y$.
We show in this paper that the last condition is true for any complex-valued submersive harmonic morphism with
totally geodesic fibers.
\begin{theorem}\label{Jon-Curv}
Let $(M,g)$ and $(N^{2},h)$ be Riemannian manifolds, let $\phi:(M,g)\to (N^{2},h)$ be a submersive
harmonic morphism with totally geodesic fibers and $p\in M$.
Given any $U,V\in\mathcal{V}_{p}=\ker(\dop\phi)$ and any orthonormal basis $\{X,Y\}$ for $\mathcal{H}_{p}=\mathcal{V}_{p}^{\bot}$,
set $X_{\theta}=\cos(\theta)X+\sin(\theta)Y$. Then
\[\SPE{R(X_{\theta}\wedge U)}{X_{\theta}\wedge V}\]
is independent of $\theta$.
\end{theorem}
In four dimensions or more this is stronger than the Ricci curvature condition.
Note that Example 6.1 and 6.2 of \cite{Gud-Sven-2013} by Gudmundsson and Svensson
do not have totally geodesic fibers. So they are counterexamples to the Ricci curvature condition
only in the case of minimal but not totally geodesic fibers.
We present these two examples in Example \ref{GudSvenEx1} and \ref{GudSvenEx2}.
If we assume that the domain $(M,g)$ is an Einstein manifold, then the curvature operator,
in a suitably chosen basis,
splits into two blocks and we find that there are at least $\operatorname{dim}(M)-2$
double eigenvalues for the curvature operator.
We use this to give an example of a five dimensional homogeneous Einstein
manifold that does not have any submersive harmonic morphism with totally geodesic
fibers.
Harmonic morphisms with totally geodesic fibers have been studied in different ways before. Baird and Wood,
Section 6.8 of \cite{BW-book}, classify them in the constant curvature case. Later Pantilie generalized this
to the case where the domain is conformally equivalent to a space of constant curvature \cite{Pantilie08}.
Mustafa \cite{Mustafa} gave a Bochner type curvature formula and applied it to foliations
with large codimension.
We end this paper by showing that the Ricci curvature condition is satisfied by harmonic morphisms
that are holomorphic with respect to a complex structure and where the second fundamental
form is compatible with the complex structure. The result is similar
to Proposition 6.3 from \cite{LouPan}, where Loubeau and Pantilie describe twistorial
harmonic morphisms, but only in $4$ dimensions.
\section{The curvature condition}
Let $(M,g)$ and $(N,h)$ be Riemannian manifolds and let $\phi:(M,g)\to(N,h)$
be a smooth submersion. Denote the vertical distribution associated with $\phi$ by $\mathcal{V}=\ker(\dop\phi)$
and the horizontal distribution by $\mathcal{H}=\mathcal{V}^{\bot}$.
For two vector fields $E,F$ on $M$ define the tensors $A$ and $B$, introduced in \cite{ONeill}, by
\[A_{E}F=\mathcal{V}(\nabla_{\mathcal{H} E}\mathcal{H} F)\textrm{ and }B_{E}F=\mathcal{H}(\nabla_{\mathcal{V} E}\mathcal{V} F).\]
$B$ is called the second fundamental form and the fibers of $\phi$ are said to be totally geodesic if $B=0$.
The dual $A_{X}^{*}$ of $A_{X}$ satisfies $A^{*}_{X}F=-\mathcal{H}(\nabla_{X}\mathcal{V} F)$ for $X\in \mathcal{H}$ and the dual $B_{U}^{*}$
of $B_{U}$ satisfies $B^{*}_{U}F=-\mathcal{V}(\nabla_{U}\mathcal{H} F)$ for $U\in\mathcal{V}$.
Gudmundsson calculated the curvature for a horizontally conformal submersion in \cite{Gud-thesis}; we state the
results from Proposition 2.1.2 and Theorem 2.2.3 (2) and (3) below.
\begin{proposition}\label{Gud-curv}
Let $(M,g)$ and $(N,h)$ be Riemannian manifolds and let $\phi:(M,g)\to(N,h)$ be a
horizontally conformal submersion with dilation $\lambda:M\to(0,\infty)$.
Let $U,V,W$ be vertical vectors and $X,Y$ be horizontal vectors, then
\begin{align*}
(i)&\,A_{X}Y=\frac{1}{2}\mathcal{V}([X,Y])+\SPE{X}{Y}\mathcal{V}(\operatorname{grad}\ln\lambda)\\
(ii)&\,\SPE{R(U\wedge V)}{W\wedge X}=\SPE{(\nabla_{U}B)_{V}W}{X}-\SPE{(\nabla_{V}B)_{U}W}{X}\\
(iii)&\,\SPE{R(U\wedge X)}{Y\wedge V}=\SPE{(\nabl{U}{A})_{X}Y}{V}+\SPE{A^{*}_{X}U}{A^{*}_{Y}V}+\SPE{(\nabla_{X}B^{*})_{U}Y}{V}\\
&-\SPE{B^{*}_{V}Y}{B^{*}_{U}X}-2 V(\ln \lambda)\SPE{A_{X}Y}{U}.
\end{align*}
\end{proposition}
It is well-known that $A_{X}Y+A_{Y}X=2\SPE{X}{Y}\mathcal{V}(\operatorname{grad}\ln\lambda)$ and $A_{X}Y-A_{Y}X=\mathcal{V}([X,Y])$.
Suppose that $(N,h)$ is a surface and $\{X,Y\}$ is an orthonormal basis for $\mathcal{H}$ and $U\in \mathcal{V}$ then
\[A^{*}_{X}U=\SPE{A^{*}_{X}U}{X}X+\SPE{A^{*}_{X}U}{Y}Y=\SPE{U}{A_{X}X}X+\SPE{U}{A_{X}Y}Y.\]
From this we see that $\SPE{A^{*}_{X}U}{A^{*}_{X}U}$ does not depend on the direction of $X$. First
\begin{align*}
\SPE{A^{*}_{X}U}{A^{*}_{X}U}=&\SPE{U}{A_{X}X}^{2}+\SPE{U}{A_{X}Y}^{2}\\
=&\SPE{U}{\mathcal{V}(\operatorname{grad}\ln\lambda)}^{2}+\SPE{U}{\frac{1}{2}[X,Y]}^{2}\\
=&U(\ln\lambda)^2+\frac{1}{4}\SPE{U}{[X,Y]}^{2}.
\end{align*}
Now, since the vertical part of the Lie bracket of horizontal vector fields is a tensor, the term
$\frac{1}{4}\SPE{U}{[X,Y]}^{2}$ is in fact independent of our choice of orthonormal basis
$\{X,Y\}$ for the horizontal space. To see this, suppose $a^2+b^2=1$, then
\begin{align*}
\SPE{U}{[aX+bY,bX-aY]}^{2}=&\SPE{U}{-a^2[X,Y]+b^2[Y,X]}^{2}\\
=&(-1)^{2}\SPE{U}{[X,Y]}^{2}.
\end{align*}
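This invariance can also be checked symbolically. The sketch below encodes only the scalar consequence of bilinearity and antisymmetry of the bracket, namely $[aX+bY,\,bX-aY]=-(a^{2}+b^{2})[X,Y]$, writing $c$ for $\SPE{U}{[X,Y]}$:

```python
import sympy as sp

t, c = sp.symbols('theta c', real=True)  # c plays the role of <U,[X,Y]>
a, b = sp.cos(t), sp.sin(t)              # any rotated orthonormal basis

# By bilinearity and antisymmetry, [aX+bY, bX-aY] = -(a^2+b^2)[X,Y],
# so the pairing with U is -(a^2+b^2)*c.
pairing = -(a**2 + b**2) * c

# Squaring and using cos^2 + sin^2 = 1 recovers c^2, independently of theta.
assert sp.simplify(pairing**2 - c**2) == 0
```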
The proof of Theorem \ref{Jon-Curv} follows from the calculation above by polarizing twice, once in $X$
and once in $U$, but we give a direct proof below.
\begin{proof}
From Proposition \ref{Gud-curv} (iii) the curvature of a horizontally conformal
submersion with totally geodesic fibers is
\[\SPE{R(U\wedge X)}{Y\wedge V}=\SPE{(\nabl{U}{A})_{X}Y}{V}+\SPE{A^{*}_{X}U}{A^{*}_{Y}V}-2 V(\ln \lambda)\SPE{A_{X}Y}{U}\]
for any $U,V\in\mathcal{V}$ and any $X,Y\in\mathcal{H}$.
Both sides of the expression are tensors, so we may extend the vectors to vector fields in any way we choose.
\begin{align*}
\SPE{R(X_{\theta}\wedge U)}{X_{\theta}\wedge V}
=&\cos^{2}(\theta)\SPE{R(X\wedge U)}{X\wedge V}+\sin^{2}(\theta)\SPE{R(Y\wedge U)}{Y\wedge V}\\
&+\cos(\theta)\sin(\theta)\left(\SPE{R(X\wedge U)}{Y\wedge V}+\SPE{R(Y\wedge U)}{X\wedge V}\right).
\end{align*}
Extend $X,Y,U,V$ to unit vector fields, then $2\SPE{\nabla_{U}X}{X}=U\SPE{X}{X}=0$ and
\begin{align*}
\SPE{R(X\wedge U)}{X\wedge V}=&\SPE{(\nabla_{U}A)_{X}X}{V}+\SPE{A^{*}_{X}U}{A^{*}_{X}V}-2V(\ln\lambda)\SPE{A_{X}X}{U}\\
=&\SPE{\nabla_{U}(A_{X}X)}{V}-\SPE{A_{\nabla_{U}X}X}{V}-\SPE{A_{X}(\nabla_{U}X)}{V}\\
&+\SPE{\SPE{A^{*}_{X}U}{X}X+\SPE{A^{*}_{X}U}{Y}Y}{\SPE{A^{*}_{X}V}{X}X+\SPE{A^{*}_{X}V}{Y}Y}\\
&-2V(\ln\lambda)\SPE{\mathcal{V}(\operatorname{grad}\ln\lambda)}{U}\\
=&\SPE{\nabla_{U}\mathcal{V}(\operatorname{grad}\ln\lambda)}{V}-\SPE{A_{\nabla_{U}X}X+A_{X}(\nabla_{U}X)}{V}\\
&+\SPE{A_{X}X}{U}\SPE{A_{X}X}{V}+\SPE{A_{X}Y}{U}\SPE{A_{X}Y}{V}-2V(\ln\lambda)U(\ln\lambda)\\
=&\SPE{\nabla_{U}\mathcal{V}(\operatorname{grad}\ln\lambda)}{V}-\SPE{\SPE{\nabla_{U}X}{X}\mathcal{V}(\operatorname{grad} \ln\lambda)}{V}\\
&+U(\ln\lambda)V(\ln\lambda)+\frac{1}{4}\SPE{[X,Y]}{U}\SPE{[X,Y]}{V}-2U(\ln\lambda)V(\ln\lambda)\\
=&\SPE{\nabla_{U}\mathcal{V}(\operatorname{grad}\ln\lambda)}{V}+U(\ln\lambda)V(\ln\lambda)\\
&+\frac{1}{4}\SPE{[X,Y]}{U}\SPE{[X,Y]}{V}-2U(\ln\lambda)V(\ln\lambda).
\end{align*}
A similar calculation shows that this equals $\SPE{R(Y\wedge U)}{Y\wedge V}$.
Now, since we extended to unit vector fields, $\SPE{\nabla_{U}X}{Y}=-\SPE{X}{\nabla_{U}Y}$, so
\begin{align*}
\SPE{R(X\wedge U)}{Y\wedge V}+\SPE{R(Y\wedge U)}{X\wedge V}=&
\SPE{\nabla_{U}(A_{X}Y)}{V}-\SPE{A_{\nabla_{U}X}Y}{V}-\SPE{A_{X}(\nabla_{U}Y)}{V}\\
&+\SPE{\nabla_{U}(A_{Y}X)}{V}-\SPE{A_{\nabla_{U}Y}X}{V}-\SPE{A_{Y}(\nabla_{U}X)}{V}\\
&+\SPE{A_{X}X}{U}\SPE{A_{Y}X}{V}+\SPE{A_{X}Y}{U}\SPE{A_{Y}Y}{V}\\
&+\SPE{A_{Y}Y}{U}\SPE{A_{X}Y}{V}+\SPE{A_{Y}X}{U}\SPE{A_{X}X}{V}\\
&-2V(\ln\lambda)\SPE{A_{X}Y}{U}-2V(\ln\lambda)\SPE{A_{Y}X}{U}\\
=&\SPE{\nabla_{U}(A_{X}Y)}{V}+\SPE{\nabla_{U}(A_{Y}X)}{V}\\
&-\SPE{A_{\nabla_{U}X}Y}{V}-\SPE{A_{Y}(\nabla_{U}X)}{V}\\
&-\SPE{A_{X}(\nabla_{U}Y)}{V}-\SPE{A_{\nabla_{U}Y}X}{V}\\
&+\SPE{\mathcal{V}(\operatorname{grad}\ln\lambda)}{U}\SPE{A_{Y}X+A_{X}Y}{V}\\
&+\SPE{A_{X}Y+A_{Y}X}{U}\SPE{\mathcal{V}(\operatorname{grad}\ln\lambda)}{V}\\
&-2V(\ln\lambda)\SPE{A_{X}Y+A_{Y}X}{U}\\
=&\SPE{\nabla_{U}\mathcal{V}([X,Y])}{V}+\SPE{\nabla_{U}\mathcal{V}([Y,X])}{V}\\
&-\SPE{\SPE{X}{\nabla_{U}Y}\mathcal{V}(\operatorname{grad}\ln\lambda)}{V}\\
&-\SPE{\SPE{Y}{\nabla_{U}X}\mathcal{V}(\operatorname{grad}\ln\lambda)}{V}\\
=&0.
\end{align*}
Thus, the value of $\SPE{R(X_{\theta}\wedge U)}{X_{\theta}\wedge V}$ does not depend on $\theta$.
\end{proof}
\section{Implications for Einstein manifolds}
Proposition \ref{Gud-curv} (ii) says that for a horizontally conformal submersion $\phi:(M,g)\to(N,h)$ with
totally geodesic fibers the curvature operator of $M$ satisfies
\[\SPE{R(U\wedge V)}{W\wedge X}=0\]
for all $U,V,W\in\mathcal{V}$ and all $X\in\mathcal{H}$.
Let $\{U_{k}\}$ be an orthonormal basis for $\mathcal{V}$ and $\{X,Y\}$ be an orthonormal basis for $\mathcal{H}$.
If we assume that the domain $M$ is an Einstein manifold then
\begin{align*}
0=\operatorname{Ric}(X,U)&=\SPE{R(X\wedge Y)}{Y\wedge U}+\sum_{k}\SPE{R(X\wedge U_{k})}{U_{k}\wedge U}\\
&=\SPE{R(X\wedge Y)}{Y\wedge U}
\end{align*}
for all $U\in\mathcal{V}$. This means that the curvature operator $R$ splits into invariant components
\[\wedge^{2}T_{p}M=(\wedge^{2}\mathcal{V}\oplus\wedge^{2}\mathcal{H})\oplus W,\]
where $W$ is generated by the mixed vectors, that is,
\[R(\wedge^{2}\mathcal{V}\oplus\wedge^{2}\mathcal{H})\subseteq\wedge^{2}\mathcal{V}\oplus\wedge^{2}\mathcal{H}\textrm{ and }R(W)\subseteq W.\]
Thus, the eigenvalues of $R$ are the union of the eigenvalues
of $R|_{\wedge^{2}\mathcal{V}\oplus\wedge^{2}\mathcal{H}}$ and $R|_{W}$.
We can define a complex structure $J$ on $W$ by $J(X\wedge U)=Y\wedge U$ and $J(Y\wedge U)=-X\wedge U$.
The curvature tensor $R|_{W}$ is, due to Theorem \ref{Jon-Curv},
represented by a Hermitian matrix $H$ with respect to this complex structure.
Let $e_{j}$ be an eigenvector of the Hermitian matrix $H$; then $e_{j}$ and $J e_{j}$ represent different real
eigenvectors for $R|_{W}$ with the same eigenvalue. Thus $R|_{W}$ and therefore $R$ has at least
$\operatorname{dim}(M)-2$ double eigenvalues. We get
\begin{proposition}
Let $(M,g)$ be an Einstein manifold and $(N^{2},h)$ be a Riemannian surface.
Let $R$ be the curvature operator of $(M,g)$ at $p\in M$. If there is a submersive harmonic
morphism $\phi:(M,g)\to(N^{2},h)$ with totally geodesic fibers then $R$ has at least
$\operatorname{dim}(M)-2$ pairs of equal eigenvalues.
\end{proposition}
In particular, the relationship between the determinants of $R|_{W}$ and $H$ is $\det(R|_{W})=\det(H)^{2}$.
So if $F$ is the characteristic polynomial of $H$ and $f$ the characteristic polynomial of $R|_{W}$,
then $f=F^{2}$ and $F$ is a factor of $\gcd(f,f^{'})$. Thus $\gcd(f,f^{'})$ is a polynomial of degree at least
$\deg(F)=\operatorname{dim}(M)-2$.
\section{Examples}
We give an example of a five dimensional manifold that does not have any
conformal foliations with totally geodesic fibers, not even locally. The two homogeneous
Einstein manifolds below were found by Alekseevsky in \cite{Alek}, but we use the notation of \cite{Nikon}.
\begin{example}\label{NoTot}
Let $S$ be the five dimensional homogeneous Einstein manifold given in Theorem 1(5) of \cite{Nikon}.
This is a solvable simply connected Lie group corresponding to the Lie algebra $\mathfrak{s}$ given by an orthonormal basis
$\{A,X_{1},X_{2},X_{3},X_{4}\}$ with Lie brackets
\begin{align*}
&[X_{1},X_{2}]=\sqrt{\frac{2}{3}} X_{3},\,[X_{1},X_{3}]=\sqrt{\frac{2}{3}} X_{4},\\
&[A,X_{j}]=\frac{j}{\sqrt{30}}X_{j},\, j=1,2,3,4.
\end{align*}
A long but straightforward calculation shows that the curvature operator is given by
\[\frac{1}{30}\left[\begin{array}{cccccccccc}
13 & -2\sqrt{5} & -4\sqrt{5} & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
-2\sqrt{5} & 4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
-4\sqrt{5} & 0 & 16 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & \sqrt{5} & 0 & \sqrt{5}\\
0 & 0 & 0 & 0 & 0 & 9 & -3\sqrt{5} & 0 & -3\sqrt{5} & 0\\
0 & 0 & 0 & 0 & 0 & -3\sqrt{5} & 17 & 0 & 5 & 0\\
0 & 0 & 0 & 0 & \sqrt{5} & 0 & 0 & 1 & 0 & 5\\
0 & 0 & 0 & 0 & 0 & -3\sqrt{5} & 5 & 0 & -1 & 0\\
0 & 0 & 0 & 0 & \sqrt{5} & 0 & 0 & 5 & 0 & 7
\end{array}\right]\]
with respect to the basis
\[\{X_{1}\wedge X_{3},X_{2}\wedge A,X_{4}\wedge A,X_{2}\wedge X_{4},
X_{1}\wedge A,X_{3}\wedge A,X_{1}\wedge X_{2},X_{2}\wedge X_{3},X_{1}\wedge X_{4},X_{3}\wedge X_{4}\}.\]
Let $f$ be the characteristic polynomial; then $\gcd(f(x),f'(x))=-\frac{4}{15}+x$, which is a
polynomial of degree $1<3$, so there is no conformal foliation with totally geodesic fibers.
\end{example}
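This conclusion can be verified numerically. The sketch below (an illustration, not part of the original argument) transcribes the curvature operator from the example, in units of $1/30$, and counts repeated eigenvalues:

```python
import numpy as np

s5 = np.sqrt(5.0)
# Curvature operator of Example "NoTot", transcribed from the matrix above.
R = np.array([
    [13, -2*s5, -4*s5, 0, 0, 0, 0, 0, 0, 0],
    [-2*s5, 4, 0, 0, 0, 0, 0, 0, 0, 0],
    [-4*s5, 0, 16, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 8, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, s5, 0, s5],
    [0, 0, 0, 0, 0, 9, -3*s5, 0, -3*s5, 0],
    [0, 0, 0, 0, 0, -3*s5, 17, 0, 5, 0],
    [0, 0, 0, 0, s5, 0, 0, 1, 0, 5],
    [0, 0, 0, 0, 0, -3*s5, 5, 0, -1, 0],
    [0, 0, 0, 0, s5, 0, 0, 5, 0, 7],
]) / 30.0

eig = np.sort(np.linalg.eigvalsh(R))
# Count eigenvalues that occur with multiplicity >= 2 (up to tolerance).
repeated = int(np.sum(np.isclose(np.diff(eig), 0.0, atol=1e-8)))

# Fewer than dim(M) - 2 = 3 repeated pairs: no submersive harmonic morphism
# with totally geodesic fibers can exist.
assert repeated < 3
```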
One way to produce foliations on a Lie group $G$ is to find a subalgebra $\mathfrak{v}$
of the Lie algebra $\mathfrak{g}$ of $G$. The Riemannian metric on $G$ is the left translation of the
scalar product on $\mathfrak{g}$. If $\mathfrak{v}$ corresponds to a closed subgroup $K$ we foliate by left translating
this subgroup, $\mathcal{F}=\{L_{g}K\}_{g\in G}$.
The foliation has totally geodesic fibers if and only if
\begin{align*}
\SPE{B_{U}V}{X}&=-\frac{1}{2}(\SPE{[X,U]}{V}+\SPE{[X,V]}{U})=0
\end{align*}
for all $U,V\in\mathfrak{v}$ and all $X\in\mathfrak{h}=\mathfrak{v}^{\bot}$ and is conformal if
\begin{align*}
(\mathcal{L}_{V}g)(X,Y)&=-\frac{1}{2}(\SPE{[V,X]}{Y}+\SPE{[V,Y]}{X})=\nu(V)\SPE{X}{Y}
\end{align*}
for all $V\in\mathfrak{v}$ and all $X,Y\in\mathfrak{h}$ where $\nu$ is a linear functional on $\mathfrak{v}$.
Returning to Example \ref{NoTot}: if we define a foliation by setting $\mathfrak{v}=\{A,X_{2},X_{4}\}$ and
$\mathfrak{h}=\{X_{1},X_{3}\}$ in the procedure above, then we get a foliation
with totally geodesic fibers, but it is not conformal.
If instead we define a foliation by setting $\mathfrak{v}=\{X_{2},X_{3},X_{4}\}$ and
$\mathfrak{h}=\{A,X_{1}\}$, then we get a conformal foliation, but
it does not have totally geodesic fibers; in fact, it does not even have minimal fibers.
We also give an example of a five dimensional manifold with a conformal foliation with totally geodesic fibers,
and see how the curvature operator behaves.
\begin{example}
Let $S$ be the five dimensional homogeneous Einstein manifold given in Theorem 1(4) of \cite{Nikon}.
This is a solvable simply connected Lie group corresponding to the Lie algebra $\mathfrak{s}$ given by an orthonormal basis
$\{A,X_{1},X_{2},X_{3},X_{4}\}$ with Lie brackets
\begin{align*}
&[X_{1},X_{2}]=\sqrt{\frac{2}{3}} X_{3},\\
&[A,X_{1}]=\frac{2}{\sqrt{33}}X_{1},\,[A,X_{2}]=\frac{2}{\sqrt{33}}X_{2},\,
[A,X_{3}]=\frac{4}{\sqrt{33}}X_{3},\,
[A,X_{4}]=\frac{3}{\sqrt{33}}X_{4}.
\end{align*}
If we left translate $\mathfrak{v}=\{A,X_{3},X_{4}\}$ and $\mathfrak{h}=\{X_{1},X_{2}\}$ we get a conformal
foliation with totally geodesic fibers.
The curvature operator is given by
\[\frac{1}{66}\left[\begin{array}{cccccccccc}
41 & -4\sqrt{22} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
-4\sqrt{22} & 32 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 18 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 24 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 8 & 0 & 0 & 2\sqrt{22} & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 8 & -2\sqrt{22} & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & -2\sqrt{22} & 5 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 2\sqrt{22} & 0 & 0 & 5 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 12 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 12
\end{array}\right]\]
with respect to the basis
\[\{X_{1}\wedge X_{2},X_{3}\wedge A,X_{4}\wedge A,X_{3}\wedge X_{4},X_{1}\wedge A,X_{2}\wedge A,
X_{1}\wedge X_{3},X_{2}\wedge X_{3},X_{1}\wedge X_{4},X_{2}\wedge X_{4}\}.\]
We see that the curvature operator
satisfies the conclusions of Theorem \ref{Jon-Curv}.
\end{example}
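Numerically, the mixed ($W$-)block of this curvature operator, rows and columns $5$ to $10$ above in units of $1/66$, indeed consists entirely of double eigenvalues, as the following quick sketch confirms:

```python
import numpy as np

s22 = np.sqrt(22.0)
# Mixed block of the curvature operator, transcribed from the matrix above.
W = np.array([
    [8, 0, 0, 2*s22, 0, 0],
    [0, 8, -2*s22, 0, 0, 0],
    [0, -2*s22, 5, 0, 0, 0],
    [2*s22, 0, 0, 5, 0, 0],
    [0, 0, 0, 0, 12, 0],
    [0, 0, 0, 0, 0, 12],
]) / 66.0

eig = np.sort(np.linalg.eigvalsh(W))
# Every eigenvalue of the mixed block is double, giving the dim(M) - 2 = 3
# pairs predicted for a conformal foliation with totally geodesic fibers.
pairs = eig.reshape(3, 2)
assert np.allclose(pairs[:, 0], pairs[:, 1])
```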
We now give some details about Example 6.1 and 6.2 of \cite{Gud-Sven-2013}.
\begin{example}\label{GudSvenEx1}
For Example 6.1 we let $\mathfrak{g}_{1}$ be the Lie algebra generated by the orthonormal vectors
$\{W,X_{1},\ldots,X_{n+1}\}$ with Lie brackets
\[[W,X_{k}]=X_{k+1}\textrm{ for }k=1,\ldots,n.\]
Let $\mathfrak{v}=\operatorname{span}\{X_{2},\ldots,X_{n+1}\}$, which is a subalgebra of codimension $2$. The Ricci curvature of $G_{1}$, the
simply connected Lie group associated with $\mathfrak{g}_{1}$,
satisfies $\operatorname{Ric}(W,W)-\operatorname{Ric}(X_{1},X_{1})=\frac{1-n}{2}$.
In this case
\[\SPE{B_{X_{2}}X_{3}}{W}=\frac{1}{2},\]
and thus the foliation defined by $\mathfrak{v}$ is not totally geodesic.
\end{example}
\begin{example}\label{GudSvenEx2}
For Example 6.2, let $\mathfrak{g}_{2}$ be generated by the orthonormal vectors
$\{W,X_{1},\ldots,X_{n}\}$ with Lie brackets
\[[W,X_{k}]=\alpha_{k}X_{k}\textrm{ where }\alpha_{k}\in\mathbb{R}\textrm{ for }k=1,\ldots,n.\]
Let $\mathfrak{v}=\operatorname{span}\{X_{2},\ldots,X_{n}\}$, which is a subalgebra of codimension $2$. The Ricci curvature of $G_{2}$, the
simply connected Lie group associated with $\mathfrak{g}_{2}$, satisfies
\[\operatorname{Ric}(X_{1})=-\alpha_{1}(\alpha_{1}+\ldots+\alpha_{n})X_{1}\textrm{ and }
\operatorname{Ric}(W)=-(\alpha_{1}^{2}+\ldots+\alpha_{n}^{2})W.\]
The fibers of the foliation given by $\mathfrak{v}$ are totally geodesic if and only if
$\alpha_{2}=\ldots=\alpha_{n}=0$, in which case the Ricci curvature condition is satisfied.
\end{example}
\section{Holomorphic harmonic morphisms}
In this section we will show that the Ricci curvature condition still holds under weaker conditions than
totally geodesic fibers.
\begin{definition}
Let $\phi:(M^{2m},g,J)\to (N^{2},h,J^{N})$ be a horizontally conformal map between almost
Hermitian manifolds. We say that $J$ is \textbf{adapted} to $\phi$ if $\phi$ is holomorphic with respect to $J$.
\end{definition}
If $M$ is $4$-dimensional then locally there exist exactly two adapted almost complex structures
(up to sign); in higher dimensions there are several such structures.
If $J$ is adapted then $J\mathcal{V}\subseteq\mathcal{V}$, $J\mathcal{H}\subseteq\mathcal{H}$ and $J$ commutes with the
orthogonal projections of $TM$ onto $\mathcal{V}$ and $\mathcal{H}$.
An almost complex structure $J$ is integrable if and only if the \textbf{Nijenhuis tensor},
\[N_{J}(Z,W)=[Z,W]+J[JZ,W]+J[Z,JW]-[JZ,JW],\]
is zero, in which case we say that $J$ is a complex structure.
\begin{definition}
Let $\mathcal{F}$ be a foliation on an almost Hermitian manifold $(M^{2m},g,J)$ with vertical distribution $\mathcal{V}$.
We say that the almost complex structure is compatible with the second fundamental form
$B$ of $\mathcal{F}$ if $J B_{U}V=B_{JU}V=B_{U}JV$ for all $U,V\in\mathcal{V}$.
\end{definition}
\begin{definition}
Let $\mathcal{F}$ be a foliation on an almost Hermitian manifold $(M,g,J)$ with vertical distribution $\mathcal{V}$.
$\mathcal{F}$ is said to have \textbf{superminimal} fibers if $\nabla_{U}J=0$ for all $U\in\mathcal{V}$.
\end{definition}
It is known that if a conformal foliation on an almost Hermitian manifold has superminimal fibers then
the almost complex structure is compatible with the second fundamental form (Section 7.8 of \cite{BW-book})
and is integrable (Proposition 7.9.1 of \cite{BW-book}).
\begin{lemma}
Let $(M,g,J)$ be an almost Hermitian manifold. If $J$ is compatible with the second fundamental
form $B$, then
\[B^{*}_{U}JX=-B^{*}_{JU}X=JB^{*}_{U}X,\]
for all $U\in\mathcal{V}$ and $X\in\mathcal{H}$.
\end{lemma}
\begin{proof}
The proof is a simple calculation. Let $V\in\mathcal{V}$, then
\begin{align*}
\SPE{B^{*}_{U}JX}{V}=&\SPE{JX}{B_{U}V}\\
=&-\SPE{X}{J B_{U}V}\\
=&-\SPE{X}{B_{JU}V}=-\SPE{B^{*}_{JU}X}{V}\\
=&-\SPE{X}{B_{U}JV}=-\SPE{B^{*}_{U}X}{JV}=\SPE{JB^{*}_{U}X}{V},
\end{align*}
since $V$ is arbitrary the lemma follows.
\end{proof}
We will show that the Ricci curvature condition holds in any even dimension if one of the adapted almost
complex structures is integrable and compatible with the second fundamental form.
The result is similar to Proposition 6.3 in \cite{LouPan} that deals with the $4$-dimensional case.
Wood showed, see Proposition 3.9 of \cite{Wood92},
that in four dimensions the adapted almost complex structure is integrable if and only if
the fibers of the foliation are superminimal. Thus in four dimensions we only have to assume
that the adapted almost complex structure is integrable.
\begin{theorem}
Let $\phi:M^{2m}\to N^{2}$ be a harmonic morphism between Hermitian manifolds $(M^{2m},g,J)$ and
$(N^{2},h,J^{N})$. Suppose that $J$ is adapted to $\phi$ and compatible with the second
fundamental form $B$. Then
\[\operatorname{Ric}(X,X)=\operatorname{Ric}(Y,Y)\textrm{ and }\operatorname{Ric}(X,Y)=0\]
for $X,Y\in\mathcal{H}$ orthonormal.
\end{theorem}
\begin{proof}
Let $\{X,Y\}$ be an orthonormal basis for $\mathcal{H}$ and $\{U_{i},V_{i}\}_{i=1}^{m}$ be an orthonormal basis for $\mathcal{V}$
chosen in such a way that $JX=Y$ and $JU_{i}=V_{i}$. We have
\begin{align*}
\operatorname{Ric}(X,X)&=\sum_{i}(R(X,U_{i},U_{i},X)+R(X,V_{i},V_{i},X))+R(X,Y,Y,X)\\
\operatorname{Ric}(Y,Y)&=\sum_{i}(R(Y,U_{i},U_{i},Y)+R(Y,V_{i},V_{i},Y))+R(Y,X,X,Y)\\
\end{align*}
From the symmetries of the curvature operator, $R(X,Y,Y,X)=R(Y,X,X,Y)$.
The curvature is given by Proposition \ref{Gud-curv} (iii). That the terms not including $B^{*}$, that is,
\[\SPE{(\nabl{U}{A})_{X}Y}{V}+\SPE{A^{*}_{X}U}{A^{*}_{Y}V}-2 V(\ln \lambda)\SPE{A_{X}Y}{U},\]
satisfy the Ricci curvature condition is clear from Theorem \ref{Jon-Curv}. We denote the terms that contain
$B^{*}$ by $\tilde{R}$,
\[\SPE{\tilde{R}(U\wedge X)}{Y\wedge V}=\SPE{(\nabla_{X}B^{*})_{U}Y}{V}-\SPE{B^{*}_{V}Y}{B^{*}_{U}X},\]
where by definition
\[\SPE{(\nabla_{X}B^{*})_{U}Y}{V}=\SPE{\nabla_{X}(B^{*}_{U}Y)}{V}-\SPE{B^{*}_{\nabla_{X}U}Y}{V}-\SPE{B^{*}_{U}\nabla_{X}Y}{V}.\]
Thus to prove $\operatorname{Ric}(X,X)=\operatorname{Ric}(Y,Y)$, we want to show that
\begin{align*}
\tilde{R}(X,U_{i},U_{i},X)+\tilde{R}(X,V_{i},V_{i},X)=\tilde{R}(Y,U_{i},U_{i},Y)+\tilde{R}(Y,V_{i},V_{i},Y)
\end{align*}
for each $i$. Since the argument is identical for every $i$, we suppress the index and assume $JU=V$.
Now define $F_{1},F_{2},F_{3}$ and $F_{4}$ by
\begin{align*}
F_{1}(X)&=\SPE{\nabla_{X}(B^{*}_{U}X)}{U}+\SPE{\nabla_{X}(B^{*}_{V}X)}{V}\\
F_{2}(X)&=\SPE{B^{*}_{\nabla_{X}U}X}{U}+\SPE{B^{*}_{\nabla_{X}V}X}{V}\\
F_{3}(X)&=\SPE{B^{*}_{U}\nabla_{X}X}{U}+\SPE{B^{*}_{V}\nabla_{X}X}{V}\\
F_{4}(X)&=|B^{*}_{U}X|^{2}+|B^{*}_{V}X|^{2}.
\end{align*}
Then
\[\tilde{R}(X,U,U,X)+\tilde{R}(X,V,V,X)=F_{1}(X)-F_{2}(X)-F_{3}(X)-F_{4}(X).\]
We will show that $F_{j}(X)=F_{j}(Y)$ for $j=1,\ldots,4$, which implies $\operatorname{Ric}(X,X)=\operatorname{Ric}(Y,Y)$.
We start with $F_{1}$
\begin{align*}
F_{1}(Y)=&\SPE{\nabla_{JX}(B^{*}_{U}JX)}{U}+\SPE{\nabla_{JX}(B^{*}_{V}JX)}{V}\\
=&-\SPE{\nabla_{JX}(B^{*}_{JU}X)}{U}-\SPE{\nabla_{JX}(B^{*}_{JV}X)}{V}\\
=&-\SPE{\nabla_{B^{*}_{JU}X}JX+[JX,B^{*}_{JU}X]}{U}-\SPE{\nabla_{B^{*}_{JV}X}JX+[JX,B^{*}_{JV}X]}{V}\\
=&-\SPE{J\nabla_{B^{*}_{JU}X}X-[JX,JB^{*}_{U}X]}{U}-\SPE{J\nabla_{B^{*}_{JV}X}X-[JX,JB^{*}_{V}X]}{V}\\
=&\SPE{\nabla_{B^{*}_{JU}X}X}{JU}+\SPE{[JX,JB^{*}_{U}X]}{U}+\SPE{\nabla_{B^{*}_{JV}X}X}{JV}+\SPE{[JX,JB^{*}_{JU}X]}{V}\\
=&\SPE{\nabla_{B^{*}_{V}X}X}{V}+\SPE{\nabla_{B^{*}_{U}X}X}{U}+\SPE{[JX,JB^{*}_{U}X]-J[JX,B^{*}_{U}X]}{U}\\
=&\SPE{\nabla_{X}B^{*}_{V}X-[X,B^{*}_{V}X]}{V}+\SPE{\nabla_{X}B^{*}_{U}X-[X,B^{*}_{U}X]}{U}\\
&+\SPE{[JX,JB^{*}_{U}X]-J[JX,B^{*}_{U}X]}{U}\\
=&\SPE{\nabla_{X}(B^{*}_{U}X)}{U}+\SPE{\nabla_{X}(B^{*}_{V}X)}{V}\\
&+\SPE{[JX,JB^{*}_{U}X]-J[JX,B^{*}_{U}X]-[X,B^{*}_{U}X]-J[X,JB^{*}_{U}X]}{U}\\
=&\SPE{\nabla_{X}(B^{*}_{U}X)}{U}+\SPE{\nabla_{X}(B^{*}_{V}X)}{V}-\SPE{N_{J}(X,B^{*}_{U}X)}{U}\\
=&\SPE{\nabla_{X}(B^{*}_{U}X)}{U}+\SPE{\nabla_{X}(B^{*}_{V}X)}{V}\\
=&F_{1}(X).
\end{align*}
Next is $F_{2}$
\begin{align*}
F_{2}(Y)=&\SPE{B^{*}_{\nabla_{JX}U}JX}{U}+\SPE{B^{*}_{\nabla_{JX}V}JX}{V}\\
=&\SPE{JX}{B_{\nabla_{JX}U}U}+\SPE{JX}{B_{\nabla_{JX}V}V}\\
=&\SPE{JX}{B_{U}(\nabla_{JX}U)}+\SPE{JX}{B_{V}(\nabla_{JX}V)}\\
=&\SPE{JX}{B_{U}(\nabla_{U}JX+[JX,U])}+\SPE{JX}{B_{V}(\nabla_{V}JX+[JX,V])}\\
=&\SPE{JX}{B_{U}(\nabla_{U}JX)}+\SPE{JX}{B_{V}(\nabla_{V}JX)}\\
&+\SPE{JX}{B_{U}([JX,U])}+\SPE{JX}{B_{JU}([JX,JU])}\\
=&\SPE{X}{B_{U}(\nabla_{U}X)}+\SPE{X}{B_{V}(\nabla_{V}X)}+\SPE{X}{B_{U}([JX,JU]-J[JX,U])}\\
=&\SPE{X}{B_{U}(\nabla_{X}U-[X,U])}+\SPE{X}{B_{V}(\nabla_{X}V-[X,V])}\\
&+\SPE{X}{B_{U}([JX,JU]-J[JX,U])}\\
=&\SPE{B^{*}_{\nabla_{X}U}X}{U}+\SPE{B^{*}_{\nabla_{X}V}X}{V}\\
&+\SPE{X}{B_{U}(-[X,U]-J[X,JU]+[JX,JU]-J[JX,U])}\\
=&\SPE{B^{*}_{\nabla_{X}U}X}{U}+\SPE{B^{*}_{\nabla_{X}V}X}{V}-\SPE{X}{B_{U}(N_{J}(X,U))}\\
=&F_{2}(X).
\end{align*}
Now we show that $F_{3}(X)=0$; the same computation gives $F_{3}(Y)=0$:
\begin{align*}
F_{3}(X)=&\SPE{B^{*}_{U}\nabla_{X}X}{U}+\SPE{B^{*}_{V}\nabla_{X}X}{V}\\
=&\SPE{\nabla_{X}X}{B_{U}U+B_{V}V}\\
=&0.
\end{align*}
Finally, the identity for $F_{4}$ follows from
\begin{align*}
F_{4}(X)=&|B^{*}_{U}X|^{2}+|B^{*}_{V}X|^{2}\\
=&|B^{*}_{V}JX|^{2}+|B^{*}_{U}JX|^{2}\\
=&|B^{*}_{U}Y|^{2}+|B^{*}_{V}Y|^{2}\\
=&F_{4}(Y).
\end{align*}
We have shown that $\operatorname{Ric}(X,X)=\operatorname{Ric}(Y,Y)$ for any orthonormal basis. Since
$\{\frac{1}{\sqrt{2}}(X+Y),\frac{1}{\sqrt{2}}(X-Y)\}$ is also an orthonormal basis, we have
\[\operatorname{Ric}(X,Y)=\frac{1}{2}\Big(\operatorname{Ric}\Big(\frac{X+Y}{\sqrt{2}},\frac{X+Y}{\sqrt{2}}\Big)
-\operatorname{Ric}\Big(\frac{X-Y}{\sqrt{2}},\frac{X-Y}{\sqrt{2}}\Big)\Big)=0.\]
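For completeness, the first equality above is just bilinearity and symmetry of the Ricci tensor:
\begin{align*}
\operatorname{Ric}\Big(\frac{X+Y}{\sqrt{2}},\frac{X+Y}{\sqrt{2}}\Big)
&=\frac{1}{2}\big(\operatorname{Ric}(X,X)+2\operatorname{Ric}(X,Y)+\operatorname{Ric}(Y,Y)\big),\\
\operatorname{Ric}\Big(\frac{X-Y}{\sqrt{2}},\frac{X-Y}{\sqrt{2}}\Big)
&=\frac{1}{2}\big(\operatorname{Ric}(X,X)-2\operatorname{Ric}(X,Y)+\operatorname{Ric}(Y,Y)\big),
\end{align*}
and the two left-hand sides agree because $J$ maps $\frac{1}{\sqrt{2}}(X+Y)$ to $-\frac{1}{\sqrt{2}}(X-Y)$, so the identity $\operatorname{Ric}(Z,Z)=\operatorname{Ric}(JZ,JZ)$ established above applies to this rotated basis as well.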
\end{proof}
We take another look at Example \ref{GudSvenEx1}. Any adapted
almost complex structure $J$ must satisfy $JW= X_{1}$ and $J(\mathfrak{v})\subseteq\mathfrak{v}$. Thus
\begin{align*}
N_{J}(W,X_{n+1})=&[W,X_{n+1}]+J[W,J X_{n+1}]+J[JW,X_{n+1}]+[J W,J X_{n+1}]\\
=&J[W,J X_{n+1}]\neq 0,
\end{align*}
so none of the adapted almost complex structures is integrable.
\titlespacing\section{0pt}{12pt plus 5pt minus 5pt}{9pt plus 3pt minus 3pt}
\usepackage{amsmath,amsfonts,bm}
\usepackage{xifthen}
\usepackage{xparse}
\catcode`\_=11\relax
\newcommand\email[1]{\_email #1\q_nil}
\def\_email#1@#2\q_nil{%
\href{mailto:#1@#2}{{\emailfont #1\emailampersat #2}}
}
\newcommand\emailfont{\rmfamily \lsstyle}
\newcommand\emailampersat{@}
\catcode`\_=8\relax
\definecolor{ggreen}{rgb}{0.0, 0.6, 0.0}
\definecolor{rred}{rgb}{0.75, 0.0, 0.0}
\definecolor{bblue}{rgb}{0.13, 0.67, 0.8}
\newcommand{\badmetric}[1]{{\color{rred} \textbf{#1}}}
\newcommand{\goodmetric}[1]{{\color{ggreen} \textbf{#1}}}
\newcommand{\mehmetric}[1]{{\color{olive} \textbf{#1}}}
\newcommand{\yb}[1]{\todo{YB: #1}}
\newcommand{\reminder}[1]{ (((\mbox{$\star \star \star$}{\textbf{\textcolor{blue}{#1}}}\mbox{$\star \star \star$})))}
\newcommand{\rewg}{G^\prime} %
\newcommand{\mathrm{mlp}\xspace}{\mathrm{mlp}\xspace}
\newcommand{\mathrm{attn}\xspace}{\mathrm{attn}\xspace}
\newcommand{\textsc{CounterFact}\xspace}{\textsc{CounterFact}\xspace}
\newcommand{\textsc{CF}\xspace}{\textsc{CF}\xspace}
\newcommand{\textsc{ParaRel}\xspace}{\textsc{ParaRel}\xspace}
\newcommand{Knowledge Editor\xspace}{Knowledge Editor\xspace}
\newcommand{Knowledge Neurons\xspace}{Knowledge Neurons\xspace}
\newcommand{Rank-One Model Editing\xspace}{Rank-One Model Editing\xspace}
\newcommand{ROME\xspace}{ROME\xspace}
\newcommand{\elstr}[1]{\texttt{str}(#1)}
\newcommand{\elid}[1]{\texttt{id}(#1)}
\newcommand{\atl}[2]{#1^{(#2)}}
\newcommand{\ex}[1]{\mathbb{E}\left[ #1 \right]}
\newcommand{\exsub}[2]{\mathbb{E}_{#2}\left[ #1 \right]}
\newcommand{\pr}[1]{\mathbb{P}\left[ #1 \right]}
\newcommand{\prsub}[2]{\mathbb{P}_{#2}\left[ #1 \right]}
\newcommand{\indic}[1]{\mathbb{I}\left[ #1 \right]}
\newcommand{{\prime\prime}}{{\prime\prime}}
\newcommand{Fr\'echet Inception Distance\xspace}{Fr\'echet Inception Distance\xspace}
\newcommand{object concepts\xspace}{object concepts\xspace}
\newcommand{Progressive GANs\xspace}{Progressive GANs\xspace}
\newcommand{featuremap\xspace}{featuremap\xspace}
\newcommand{\texttt{mask}}{\texttt{mask}}
\newcommand{\text{T}}{\text{T}}
\newcommand{\text{P}}{\text{P}}
\newcommand{{\mathbb{P}}}{{\mathbb{P}}}
\newcommand{D_{\mathrm{KL}}}{D_{\mathrm{KL}}}
\newcommand{K}{K}
\newcommand{\text{U}}{\text{U}}
\newcommand{{\mathbb{U}}}{{\mathbb{U}}}
\newcommand{{\mathbf{k}}}{{\mathbf{k}}}
\newcommand{{\mathbf{r}}_{\U, \pixel}}{{\mathbf{r}}_{\text{U}, \text{P}}}
\newcommand{{\mathbf{r}}_{\overline{\U, \pixel}}}{{\mathbf{r}}_{\overline{\text{U}, \text{P}}}}
\newcommand{{\mathbf{r}}_{\overline{\U}, \pixel}}{{\mathbf{r}}_{\overline{\text{U}}, \text{P}}}
\newcommand{{\mathbf{r}}_{\Uall, \pixel}}{{\mathbf{r}}_{{\mathbb{U}}, \text{P}}}
\newcommand{{\mathbf{r}}_{\Uall, \overline{\pixel}}}{{\mathbf{r}}_{{\mathbb{U}}, \overline{\text{P}}}}
\newcommand{\context^*}{K^*}
\newcommand{\mathbb{E}_{{\mathbf{z}},\pixel}}{\mathbb{E}_{{\mathbf{z}},\text{P}}}
\newcommand{\text{IoU}}{\text{IoU}}
\newcommand{\delta}{\delta}
\newcommand{\ACE_{\U\rightarrow c}}{\delta_{\text{U}\rightarrow c}}
\newcommand{{\mathbf{x}}_{i}}{{\mathbf{x}}_{i}}
\newcommand{{\mathbf{x}}_{a}}{{\mathbf{x}}_{a}}
\newcommand{\layer}[1]{\texttt{layer#1\xspace}}
\newcommand{\unit}[1]{\texttt{unit#1\xspace}}
\newcommand{\bm{\alpha}}{\bm{\alpha}}
\newcommand{G}{G}
\newcommand{{\mathbf{s}}_c}{{\mathbf{s}}_c}
\newcommand{{\mathbf{r}}}{{\mathbf{r}}}
\newcommand{\repr_{u, {\mathbb{P}}}}{{\mathbf{r}}_{u, {\mathbb{P}}}}
\newcommand{\repr_{u, {\mathbb{P}}}^{\uparrow}}{{\mathbf{r}}_{u, {\mathbb{P}}}^{\uparrow}}
\newcommand{f}{f}
\newcommand{h}{h}
\newcommand{{\em (Left)}}{{\em (Left)}}
\newcommand{{\em (Center)}}{{\em (Center)}}
\newcommand{{\em (Right)}}{{\em (Right)}}
\newcommand{{\em (Top)}}{{\em (Top)}}
\newcommand{{\em (Bottom)}}{{\em (Bottom)}}
\newcommand{{\em (a)}}{{\em (a)}}
\newcommand{{\em (b)}}{{\em (b)}}
\newcommand{{\em (c)}}{{\em (c)}}
\newcommand{{\em (d)}}{{\em (d)}}
\newcommand{\newterm}[1]{{\bf #1}}
\newcommand{\reffig}[1]{Figure~\ref{fig:#1}}
\newcommand{\refsec}[1]{Section~\ref{sec:#1}}
\newcommand{\refapp}[1]{Appendix~\ref{sec:#1}}
\newcommand{\reftbl}[1]{Table~\ref{tbl:#1}}
\newcommand{\refalg}[1]{Algorithm~\ref{alg:#1}}
\newcommand{\refline}[1]{Line~\ref{line:#1}}
\newcommand{\shortrefsec}[1]{\S~\ref{sec:#1}}
\newcommand{\refeqn}[1]{Eqn.~\ref{eq:#1}}
\newcommand{\refeqshort}[1]{(\ref{eq:#1})}
\newcommand{\shortrefeq}[1]{\ref{eq:#1}}
\newcommand{\lblfig}[1]{\label{fig:#1}}
\newcommand{\lblsec}[1]{\label{sec:#1}}
\newcommand{\lbleq}[1]{\label{eq:#1}}
\newcommand{\lbltbl}[1]{\label{tbl:#1}}
\newcommand{\lblalg}[1]{\label{alg:#1}}
\newcommand{\lblline}[1]{\label{line:#1}}
\newcommand{\ignorethis}[1]{}
\newcommand{\revision}[1]{\color{black}#1\color{black}}
\newcommand{\vspace{-5pt}\item}{\vspace{-5pt}\item}
\newcommand{\myparagraph}[1]{\vspace{-5pt}\paragraph{#1}}
\def\figref#1{figure~\ref{#1}}
\def\Figref#1{Figure~\ref{#1}}
\def\twofigref#1#2{figures \ref{#1} and \ref{#2}}
\def\quadfigref#1#2#3#4{figures \ref{#1}, \ref{#2}, \ref{#3} and \ref{#4}}
\def\secref#1{section~\ref{#1}}
\def\Secref#1{Section~\ref{#1}}
\def\twosecrefs#1#2{sections \ref{#1} and \ref{#2}}
\def\secrefs#1#2#3{sections \ref{#1}, \ref{#2} and \ref{#3}}
\def\eqref#1{equation~\ref{#1}}
\def\Eqref#1{Equation~\ref{#1}}
\def\plaineqref#1{\ref{#1}}
\def\chapref#1{chapter~\ref{#1}}
\def\Chapref#1{Chapter~\ref{#1}}
\def\rangechapref#1#2{chapters\ref{#1}--\ref{#2}}
\def\algref#1{algorithm~\ref{#1}}
\def\Algref#1{Algorithm~\ref{#1}}
\def\twoalgref#1#2{algorithms \ref{#1} and \ref{#2}}
\def\Twoalgref#1#2{Algorithms \ref{#1} and \ref{#2}}
\def\partref#1{part~\ref{#1}}
\def\Partref#1{Part~\ref{#1}}
\def\twopartref#1#2{parts \ref{#1} and \ref{#2}}
\def\ceil#1{\lceil #1 \rceil}
\def\floor#1{\lfloor #1 \rfloor}
\def\bm{1}{\bm{1}}
\newcommand{\mathcal{D}}{\mathcal{D}}
\newcommand{\mathcal{D_{\mathrm{valid}}}}{\mathcal{D_{\mathrm{valid}}}}
\newcommand{\mathcal{D_{\mathrm{test}}}}{\mathcal{D_{\mathrm{test}}}}
\def{\epsilon}{{\epsilon}}
\def{\textnormal{$\eta$}}{{\textnormal{$\eta$}}}
\def{\textnormal{a}}{{\textnormal{a}}}
\def{\textnormal{b}}{{\textnormal{b}}}
\def{\textnormal{c}}{{\textnormal{c}}}
\def{\textnormal{d}}{{\textnormal{d}}}
\def{\textnormal{e}}{{\textnormal{e}}}
\def{\textnormal{f}}{{\textnormal{f}}}
\def{\textnormal{g}}{{\textnormal{g}}}
\def{\textnormal{h}}{{\textnormal{h}}}
\def{\textnormal{i}}{{\textnormal{i}}}
\def{\textnormal{j}}{{\textnormal{j}}}
\def{\textnormal{k}}{{\textnormal{k}}}
\def{\textnormal{l}}{{\textnormal{l}}}
\def{\textnormal{n}}{{\textnormal{n}}}
\def{\textnormal{o}}{{\textnormal{o}}}
\def{\textnormal{p}}{{\textnormal{p}}}
\def{\textnormal{q}}{{\textnormal{q}}}
\def{\textnormal{r}}{{\textnormal{r}}}
\def{\textnormal{s}}{{\textnormal{s}}}
\def{\textnormal{t}}{{\textnormal{t}}}
\def{\textnormal{v}}{{\textnormal{v}}}
\def{\textnormal{w}}{{\textnormal{w}}}
\def{\textnormal{x}}{{\textnormal{x}}}
\def{\textnormal{y}}{{\textnormal{y}}}
\def{\textnormal{z}}{{\textnormal{z}}}
\def{\mathbf{\epsilon}}{{\mathbf{\epsilon}}}
\def{\mathbf{\theta}}{{\mathbf{\theta}}}
\def{\mathbf{a}}{{\mathbf{a}}}
\def{\mathbf{b}}{{\mathbf{b}}}
\def{\mathbf{c}}{{\mathbf{c}}}
\def{\mathbf{d}}{{\mathbf{d}}}
\def{\mathbf{e}}{{\mathbf{e}}}
\def{\mathbf{f}}{{\mathbf{f}}}
\def{\mathbf{g}}{{\mathbf{g}}}
\def{\mathbf{h}}{{\mathbf{h}}}
\def{\mathbf{u}}{{\mathbf{i}}}
\def{\mathbf{j}}{{\mathbf{j}}}
\def{\mathbf{k}}{{\mathbf{k}}}
\def{\mathbf{l}}{{\mathbf{l}}}
\def{\mathbf{m}}{{\mathbf{m}}}
\def{\mathbf{n}}{{\mathbf{n}}}
\def{\mathbf{o}}{{\mathbf{o}}}
\def{\mathbf{p}}{{\mathbf{p}}}
\def{\mathbf{q}}{{\mathbf{q}}}
\def{\mathbf{r}}{{\mathbf{r}}}
\def{\mathbf{s}}{{\mathbf{s}}}
\def{\mathbf{t}}{{\mathbf{t}}}
\def{\mathbf{u}}{{\mathbf{u}}}
\def{\mathbf{v}}{{\mathbf{v}}}
\def{\mathbf{w}}{{\mathbf{w}}}
\def{\mathbf{x}}{{\mathbf{x}}}
\def{\mathbf{y}}{{\mathbf{y}}}
\def{\mathbf{z}}{{\mathbf{z}}}
\def{\textnormal{a}}{{\textnormal{a}}}
\def{\textnormal{b}}{{\textnormal{b}}}
\def{\textnormal{c}}{{\textnormal{c}}}
\def{\textnormal{d}}{{\textnormal{d}}}
\def{\textnormal{e}}{{\textnormal{e}}}
\def{\textnormal{f}}{{\textnormal{f}}}
\def{\textnormal{g}}{{\textnormal{g}}}
\def{\textnormal{h}}{{\textnormal{h}}}
\def{\textnormal{i}}{{\textnormal{i}}}
\def{\textnormal{j}}{{\textnormal{j}}}
\def{\textnormal{k}}{{\textnormal{k}}}
\def{\textnormal{l}}{{\textnormal{l}}}
\def{\textnormal{m}}{{\textnormal{m}}}
\def{\textnormal{n}}{{\textnormal{n}}}
\def{\textnormal{o}}{{\textnormal{o}}}
\def{\textnormal{p}}{{\textnormal{p}}}
\def{\textnormal{q}}{{\textnormal{q}}}
\def{\textnormal{r}}{{\textnormal{r}}}
\def{\textnormal{s}}{{\textnormal{s}}}
\def{\textnormal{t}}{{\textnormal{t}}}
\def{\textnormal{u}}{{\textnormal{u}}}
\def{\textnormal{v}}{{\textnormal{v}}}
\def{\textnormal{w}}{{\textnormal{w}}}
\def{\textnormal{x}}{{\textnormal{x}}}
\def{\textnormal{y}}{{\textnormal{y}}}
\def{\textnormal{z}}{{\textnormal{z}}}
\def{\mathbf{A}}{{\mathbf{A}}}
\def{\mathbf{B}}{{\mathbf{B}}}
\def{\mathbf{C}}{{\mathbf{C}}}
\def{\mathbf{D}}{{\mathbf{D}}}
\def{\mathbf{E}}{{\mathbf{E}}}
\def{\mathbf{F}}{{\mathbf{F}}}
\def{\mathbf{G}}{{\mathbf{G}}}
\def{\mathbf{H}}{{\mathbf{H}}}
\def{\mathbf{I}}{{\mathbf{I}}}
\def{\mathbf{J}}{{\mathbf{J}}}
\def{\mathbf{K}}{{\mathbf{K}}}
\def{\mathbf{L}}{{\mathbf{L}}}
\def{\mathbf{M}}{{\mathbf{M}}}
\def{\mathbf{N}}{{\mathbf{N}}}
\def{\mathbf{O}}{{\mathbf{O}}}
\def{\mathbf{P}}{{\mathbf{P}}}
\def{\mathbf{Q}}{{\mathbf{Q}}}
\def{\mathbf{R}}{{\mathbf{R}}}
\def{\mathbf{S}}{{\mathbf{S}}}
\def{\mathbf{T}}{{\mathbf{T}}}
\def{\mathbf{U}}{{\mathbf{U}}}
\def{\mathbf{V}}{{\mathbf{V}}}
\def{\mathbf{W}}{{\mathbf{W}}}
\def{\mathbf{X}}{{\mathbf{X}}}
\def{\mathbf{Y}}{{\mathbf{Y}}}
\def{\mathbf{Z}}{{\mathbf{Z}}}
\def{\textnormal{A}}{{\textnormal{A}}}
\def{\textnormal{B}}{{\textnormal{B}}}
\def{\textnormal{C}}{{\textnormal{C}}}
\def{\textnormal{D}}{{\textnormal{D}}}
\def{\textnormal{E}}{{\textnormal{E}}}
\def{\textnormal{F}}{{\textnormal{F}}}
\def{\textnormal{G}}{{\textnormal{G}}}
\def{\textnormal{H}}{{\textnormal{H}}}
\def{\textnormal{I}}{{\textnormal{I}}}
\def{\textnormal{J}}{{\textnormal{J}}}
\def{\textnormal{K}}{{\textnormal{K}}}
\def{\textnormal{L}}{{\textnormal{L}}}
\def{\textnormal{M}}{{\textnormal{M}}}
\def{\textnormal{N}}{{\textnormal{N}}}
\def{\textnormal{O}}{{\textnormal{O}}}
\def{\textnormal{P}}{{\textnormal{P}}}
\def{\textnormal{Q}}{{\textnormal{Q}}}
\def{\textnormal{R}}{{\textnormal{R}}}
\def{\textnormal{S}}{{\textnormal{S}}}
\def{\textnormal{T}}{{\textnormal{T}}}
\def{\textnormal{U}}{{\textnormal{U}}}
\def{\textnormal{V}}{{\textnormal{V}}}
\def{\textnormal{W}}{{\textnormal{W}}}
\def{\textnormal{X}}{{\textnormal{X}}}
\def{\textnormal{Y}}{{\textnormal{Y}}}
\def{\textnormal{Z}}{{\textnormal{Z}}}
\def{\bm{0}}{{\bm{0}}}
\def{\bm{1}}{{\bm{1}}}
\def{\bm{\mu}}{{\bm{\mu}}}
\def{\bm{\theta}}{{\bm{\theta}}}
\def{\bm{a}}{{\bm{a}}}
\def{\bm{b}}{{\bm{b}}}
\def{\bm{c}}{{\bm{c}}}
\def{\bm{d}}{{\bm{d}}}
\def{\bm{e}}{{\bm{e}}}
\def{\bm{f}}{{\bm{f}}}
\def{\bm{g}}{{\bm{g}}}
\def{\bm{h}}{{\bm{h}}}
\def{\bm{i}}{{\bm{i}}}
\def{\bm{j}}{{\bm{j}}}
\def{\bm{k}}{{\bm{k}}}
\def{\bm{l}}{{\bm{l}}}
\def{\bm{m}}{{\bm{m}}}
\def{\bm{n}}{{\bm{n}}}
\def{\bm{o}}{{\bm{o}}}
\def{\bm{p}}{{\bm{p}}}
\def{\bm{q}}{{\bm{q}}}
\def{\bm{r}}{{\bm{r}}}
\def{\bm{s}}{{\bm{s}}}
\def{\bm{t}}{{\bm{t}}}
\def{\bm{u}}{{\bm{u}}}
\def{\bm{v}}{{\bm{v}}}
\def{\bm{w}}{{\bm{w}}}
\def{\bm{x}}{{\bm{x}}}
\def{\bm{y}}{{\bm{y}}}
\def{\bm{z}}{{\bm{z}}}
\def{\alpha}{{\alpha}}
\def{\beta}{{\beta}}
\def{\epsilon}{{\epsilon}}
\def{\lambda}{{\lambda}}
\def{\omega}{{\omega}}
\def{\mu}{{\mu}}
\def{\psi}{{\psi}}
\def{\sigma}{{\sigma}}
\def{\theta}{{\theta}}
\def{a}{{a}}
\def{b}{{b}}
\def{c}{{c}}
\def{d}{{d}}
\def{e}{{e}}
\def{f}{{f}}
\def{g}{{g}}
\def{h}{{h}}
\def{i}{{i}}
\def{j}{{j}}
\def{k}{{k}}
\def{l}{{l}}
\def{m}{{m}}
\def{n}{{n}}
\def{o}{{o}}
\def{p}{{p}}
\def{q}{{q}}
\def{r}{{r}}
\def{s}{{s}}
\def{t}{{t}}
\def{u}{{u}}
\def{v}{{v}}
\def{w}{{w}}
\def{x}{{x}}
\def{y}{{y}}
\def{z}{{z}}
\def{\bm{A}}{{\bm{A}}}
\def{\bm{B}}{{\bm{B}}}
\def{\bm{C}}{{\bm{C}}}
\def{\bm{D}}{{\bm{D}}}
\def{\bm{E}}{{\bm{E}}}
\def{\bm{F}}{{\bm{F}}}
\def{\bm{G}}{{\bm{G}}}
\def{\bm{H}}{{\bm{H}}}
\def{\bm{I}}{{\bm{I}}}
\def{\bm{J}}{{\bm{J}}}
\def{\bm{K}}{{\bm{K}}}
\def{\bm{L}}{{\bm{L}}}
\def{\bm{M}}{{\bm{M}}}
\def{\bm{N}}{{\bm{N}}}
\def{\bm{O}}{{\bm{O}}}
\def{\bm{P}}{{\bm{P}}}
\def{\bm{Q}}{{\bm{Q}}}
\def{\bm{R}}{{\bm{R}}}
\def{\bm{S}}{{\bm{S}}}
\def{\bm{T}}{{\bm{T}}}
\def{\bm{U}}{{\bm{U}}}
\def{\bm{V}}{{\bm{V}}}
\def{\bm{W}}{{\bm{W}}}
\def{\bm{X}}{{\bm{X}}}
\def{\bm{Y}}{{\bm{Y}}}
\def{\bm{Z}}{{\bm{Z}}}
\def{\bm{\beta}}{{\bm{\beta}}}
\def{\bm{\Phi}}{{\bm{\Phi}}}
\def{\bm{\Lambda}}{{\bm{\Lambda}}}
\def{\bm{\Sigma}}{{\bm{\Sigma}}}
\DeclareMathAlphabet{\mathsfit}{\encodingdefault}{\sfdefault}{m}{sl}
\SetMathAlphabet{\mathsfit}{bold}{\encodingdefault}{\sfdefault}{bx}{n}
\newcommand{\tens}[1]{\bm{\mathsfit{#1}}}
\def{\tens{A}}{{\tens{A}}}
\def{\tens{B}}{{\tens{B}}}
\def{\tens{C}}{{\tens{C}}}
\def{\tens{D}}{{\tens{D}}}
\def{\tens{E}}{{\tens{E}}}
\def{\tens{F}}{{\tens{F}}}
\def{\tens{G}}{{\tens{G}}}
\def{\tens{H}}{{\tens{H}}}
\def{\tens{I}}{{\tens{I}}}
\def{\tens{J}}{{\tens{J}}}
\def{\tens{K}}{{\tens{K}}}
\def{\tens{L}}{{\tens{L}}}
\def{\tens{M}}{{\tens{M}}}
\def{\tens{N}}{{\tens{N}}}
\def{\tens{O}}{{\tens{O}}}
\def{\tens{P}}{{\tens{P}}}
\def{\tens{Q}}{{\tens{Q}}}
\def{\tens{R}}{{\tens{R}}}
\def{\tens{S}}{{\tens{S}}}
\def{\tens{T}}{{\tens{T}}}
\def{\tens{U}}{{\tens{U}}}
\def{\tens{V}}{{\tens{V}}}
\def{\tens{W}}{{\tens{W}}}
\def{\tens{X}}{{\tens{X}}}
\def{\tens{Y}}{{\tens{Y}}}
\def{\tens{Z}}{{\tens{Z}}}
\def{\mathcal{A}}{{\mathcal{A}}}
\def{\mathcal{B}}{{\mathcal{B}}}
\def{\mathcal{C}}{{\mathcal{C}}}
\def{\mathcal{D}}{{\mathcal{D}}}
\def{\mathcal{E}}{{\mathcal{E}}}
\def{\mathcal{F}}{{\mathcal{F}}}
\def{\mathcal{G}}{{\mathcal{G}}}
\def{\mathcal{H}}{{\mathcal{H}}}
\def{\mathcal{I}}{{\mathcal{I}}}
\def{\mathcal{J}}{{\mathcal{J}}}
\def{\mathcal{K}}{{\mathcal{K}}}
\def{\mathcal{L}}{{\mathcal{L}}}
\def{\mathcal{M}}{{\mathcal{M}}}
\def{\mathcal{N}}{{\mathcal{N}}}
\def{\mathcal{O}}{{\mathcal{O}}}
\def{\mathcal{P}}{{\mathcal{P}}}
\def{\mathcal{Q}}{{\mathcal{Q}}}
\def{\mathcal{R}}{{\mathcal{R}}}
\def{\mathcal{S}}{{\mathcal{S}}}
\def{\mathcal{T}}{{\mathcal{T}}}
\def{\mathcal{U}}{{\mathcal{U}}}
\def{\mathcal{V}}{{\mathcal{V}}}
\def{\mathcal{W}}{{\mathcal{W}}}
\def{\mathcal{X}}{{\mathcal{X}}}
\def{\mathcal{Y}}{{\mathcal{Y}}}
\def{\mathcal{Z}}{{\mathcal{Z}}}
\def{\mathbb{A}}{{\mathbb{A}}}
\def{\mathbb{B}}{{\mathbb{B}}}
\def{\mathbb{C}}{{\mathbb{C}}}
\def{\mathbb{D}}{{\mathbb{D}}}
\def{\mathbb{F}}{{\mathbb{F}}}
\def{\mathbb{G}}{{\mathbb{G}}}
\def{\mathbb{H}}{{\mathbb{H}}}
\def{\mathbb{I}}{{\mathbb{I}}}
\def{\mathbb{J}}{{\mathbb{J}}}
\def{\mathbb{K}}{{\mathbb{K}}}
\def{\mathbb{L}}{{\mathbb{L}}}
\def{\mathbb{M}}{{\mathbb{M}}}
\def{\mathbb{N}}{{\mathbb{N}}}
\def{\mathbb{O}}{{\mathbb{O}}}
\def{\mathbb{P}}{{\mathbb{P}}}
\def{\mathbb{Q}}{{\mathbb{Q}}}
\def{\mathbb{R}}{{\mathbb{R}}}
\def{\mathbb{S}}{{\mathbb{S}}}
\def{\mathbb{T}}{{\mathbb{T}}}
\def{\mathbb{U}}{{\mathbb{U}}}
\def{\mathbb{V}}{{\mathbb{V}}}
\def{\mathbb{W}}{{\mathbb{W}}}
\def{\mathbb{X}}{{\mathbb{X}}}
\def{\mathbb{Y}}{{\mathbb{Y}}}
\def{\mathbb{Z}}{{\mathbb{Z}}}
\def{\Lambda}{{\Lambda}}
\def{A}{{A}}
\def{B}{{B}}
\def{C}{{C}}
\def{D}{{D}}
\def{E}{{E}}
\def{F}{{F}}
\def{G}{{G}}
\def{H}{{H}}
\def{I}{{I}}
\def{J}{{J}}
\def{K}{{K}}
\def{L}{{L}}
\def{M}{{M}}
\def{N}{{N}}
\def{O}{{O}}
\def{P}{{P}}
\def{Q}{{Q}}
\def{R}{{R}}
\def{S}{{S}}
\def{T}{{T}}
\def{U}{{U}}
\def{V}{{V}}
\def{W}{{W}}
\def{X}{{X}}
\def{Y}{{Y}}
\def{Z}{{Z}}
\def{\Sigma}{{\Sigma}}
\newcommand{\etens}[1]{\mathsfit{#1}}
\def{\etens{\Lambda}}{{\etens{\Lambda}}}
\def{\etens{A}}{{\etens{A}}}
\def{\etens{B}}{{\etens{B}}}
\def{\etens{C}}{{\etens{C}}}
\def{\etens{D}}{{\etens{D}}}
\def{\etens{E}}{{\etens{E}}}
\def{\etens{F}}{{\etens{F}}}
\def{\etens{G}}{{\etens{G}}}
\def{\etens{H}}{{\etens{H}}}
\def{\etens{I}}{{\etens{I}}}
\def{\etens{J}}{{\etens{J}}}
\def{\etens{K}}{{\etens{K}}}
\def{\etens{L}}{{\etens{L}}}
\def{\etens{M}}{{\etens{M}}}
\def{\etens{N}}{{\etens{N}}}
\def{\etens{O}}{{\etens{O}}}
\def{\etens{P}}{{\etens{P}}}
\def{\etens{Q}}{{\etens{Q}}}
\def{\etens{R}}{{\etens{R}}}
\def{\etens{S}}{{\etens{S}}}
\def{\etens{T}}{{\etens{T}}}
\def{\etens{U}}{{\etens{U}}}
\def{\etens{V}}{{\etens{V}}}
\def{\etens{W}}{{\etens{W}}}
\def{\etens{X}}{{\etens{X}}}
\def{\etens{Y}}{{\etens{Y}}}
\def{\etens{Z}}{{\etens{Z}}}
\newcommand{p_{\rm{data}}}{p_{\rm{data}}}
\newcommand{\hat{p}_{\rm{data}}}{\hat{p}_{\rm{data}}}
\newcommand{\hat{P}_{\rm{data}}}{\hat{P}_{\rm{data}}}
\newcommand{p_{\rm{model}}}{p_{\rm{model}}}
\newcommand{P_{\rm{model}}}{P_{\rm{model}}}
\newcommand{\tilde{p}_{\rm{model}}}{\tilde{p}_{\rm{model}}}
\newcommand{p_{\rm{encoder}}}{p_{\rm{encoder}}}
\newcommand{p_{\rm{decoder}}}{p_{\rm{decoder}}}
\newcommand{p_{\rm{reconstruct}}}{p_{\rm{reconstruct}}}
\newcommand{\laplace}{\mathrm{Laplace}} %
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\tilde{p}}{\tilde{p}}
\newcommand{\alpha}{\alpha}
\newcommand{\lambda}{\lambda}
\newcommand{\mathrm{rectifier}}{\mathrm{rectifier}}
\newcommand{\mathrm{softmax}}{\mathrm{softmax}}
\newcommand{\sigma}{\sigma}
\newcommand{\zeta}{\zeta}
\newcommand{\mathrm{Var}}{\mathrm{Var}}
\newcommand{\mathrm{SE}}{\mathrm{SE}}
\newcommand{\mathrm{Cov}}{\mathrm{Cov}}
\newcommand{L^0}{L^0}
\newcommand{L^1}{L^1}
\newcommand{L^2}{L^2}
\newcommand{L^p}{L^p}
\newcommand{L^\infty}{L^\infty}
\newcommand{\parents}{Pa} %
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator*{\argmin}{argmin}
\DeclareMathOperator{\sign}{sign}
\DeclareMathOperator{\Tr}{Tr}
\let\ab\allowbreak
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcommand{\xpar}[1]{\noindent\textbf{#1}\ \ }
\newcommand{\vpar}[1]{\vspace{3mm}\noindent\textbf{#1}\ \ }
\newcommand{ShapeNet\xspace}{ShapeNet\xspace}
\newcommand{PASCAL 3D+\xspace}{PASCAL 3D+\xspace}
\newcommand{\ensuremath{^\circ}\xspace}{\ensuremath{^\circ}\xspace}
\newcommand{\ignore}[1]{}
\newcommand{\norm}[1]{\lVert#1\rVert}
\newcommand{$\mbox{fc}_7$}{$\mbox{fc}_7$}
\renewcommand{\thefootnote}{\arabic{footnote}}
\defna\"{\i}ve\xspace{na\"{\i}ve\xspace}
\defNa\"{\i}ve\xspace{Na\"{\i}ve\xspace}
\makeatletter
\DeclareRobustCommand\onedot{\futurelet\@let@token\@onedot}
\def\@onedot{\ifx\@let@token.\else.\null\fi\xspace}
\def\eg{e.g\onedot,\xspace}
\defE.g\onedot,{E.g\onedot,}
\def\ie{i.e\onedot,\xspace}
\def\emph{I.e}\onedot,{\emph{I.e}\onedot,}
\def\emph{c.f}\onedot} \def\Cf{\emph{C.f}\onedot{\emph{c.f}\onedot} \def\Cf{\emph{C.f}\onedot}
\def\emph{etc}\onedot} \def\vs{\emph{vs}\onedot{\emph{etc}\onedot} \def{\bm{s}}{\emph{vs}\onedot}
\defw.r.t\onedot} \def\dof{d.o.f\onedot{w.r.t\onedot} \def\dof{d.o.f\onedot}
\def\emph{et al}\onedot{\emph{et al}\onedot}
\makeatother
\definecolor{MyDarkBlue}{rgb}{0,0.08,1}
\definecolor{MyDarkGreen}{rgb}{0.02,0.6,0.02}
\definecolor{MyDarkRed}{rgb}{0.8,0.02,0.02}
\definecolor{MyDarkOrange}{rgb}{0.40,0.2,0.02}
\definecolor{MyPurple}{RGB}{111,0,255}
\definecolor{MyRed}{rgb}{1.0,0.0,0.0}
\definecolor{MyGold}{rgb}{0.75,0.6,0.12}
\definecolor{MyDarkgray}{rgb}{0.66, 0.66, 0.66}
\section{Tracing Information Flow} \label{sec:causal-tracing}
Information flow in autoregressive transformers (Eqn.~\ref{eq:autoregressive}) forms a grid (\reffig{infoflow}a) in which layers iteratively add MLP and attention contributions (left $\rightarrow$ right), and attention draws information from past tokens (top $\rightarrow$ bottom).
To understand the processing of factual knowledge within this flow, we identify hidden states $\smash{\atl{h}{l}_i}$ that have a decisive causal effect by running a factual statement twice through $G$: once normally, and a second time while applying two interventions:
\textbf{1. Corruption}: Embeddings for all tokens in the prompt that refer to the subject entity $s$ are corrupted as $\forall i\in[a, b]. \; \smash{\atl{h}{0}_{i*} := \atl{h}{0}_{i}} + \epsilon$, where $[a,b]$ is the range of subject token indices (\reffig{infoflow}b). The change can be made by substituting a different subject (\reffig{teaser}, \reffig{infoflow}h,i) or adding noise $\epsilon\sim\mathcal{N}(0;\nu)$ (\reffig{infoflow}e,f,g,j,k,m). This causes the network to produce an incorrect output.
\textbf{2. Restoration}: The causal effects of interior hidden states are tested by restoring those states to the values they had during the normal computation. This is done at each individual token $i$ and layer $l$, restoring state $\smash{\atl{h}{l}_{i*} := \atl{h}{l}_{i}}$.
Restoring state at particular locations causes $G$ to return to correct predictions; the heatmaps show the strength of this effect by location.
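As a concrete toy illustration of the paired interventions, the sketch below stands in for a real transformer with a two-layer numpy "network" over per-token states (no attention; all names, shapes, and noise scales are invented for illustration). Corrupting the subject token's embedding changes the output, and restoring the cached clean hidden state at that token recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def run(h0, restore=None):
    """Toy forward pass over token states h0 (tokens x dim).
    `restore` optionally overwrites one layer-1 token state with a
    cached clean value before the final layer (Intervention 2)."""
    h1 = np.tanh(h0 @ W1)
    if restore is not None:
        tok, clean = restore
        h1 = h1.copy()
        h1[tok] = clean
    return np.tanh(h1 @ W2)

h0_clean = rng.normal(size=(3, 4))   # 3 tokens, hidden dim 4
out_clean = run(h0_clean)
h1_clean = np.tanh(h0_clean @ W1)    # cache the clean hidden states

# Intervention 1: corrupt the "subject" token's embedding with noise.
h0_corrupt = h0_clean.copy()
h0_corrupt[0] += rng.normal(scale=3.0, size=4)
out_corrupt = run(h0_corrupt)        # output is now wrong

# Intervention 2: restore the clean layer-1 state at the subject token.
out_restored = run(h0_corrupt, restore=(0, h1_clean[0]))
```

In the real method, the restoration is swept over every (token, layer) pair, and the recovery of the correct prediction's probability at each location produces the heatmaps.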
\reffig{infoflow} shows results for GPT-2 XL. We also conduct this experiment on GPT-J 6B; that setting and additional details are available in Appendix~\ref{apd:causal-tracing}.
These traces reveal strong causal states at two separate sites. The presence of such states at a late site immediately before the prediction is unsurprising, but their emergence at an \textit{early} site at the last token of the subject is a new discovery. \reffig{infoflow}j shows that the early site is systematic over 1000 factual statements; what does it compute?
Decomposing the state at the early site into MLP and attention contributions suggests a decisive role for MLP modules. In green, Figures~\ref{fig:infoflow}f,h,k show the causal effects of modifying just the MLP module contributions for each token, and in red, Figures~\ref{fig:infoflow}g,i,m show the causal effects of doing so with attention. %
To gain further insight into the role of MLP layers, we add a third simultaneous intervention:
\input{figtext/trace-mlp-disabled}
\textbf{3. Disabling MLP:} \reffig{trace-mlp-disabled} shows a causal trace where, in addition to the first two interventions, we also disconnect all MLP modules for the last subject token, freezing them in the corrupted state. This experiment reveals a distinction between (a) the lowest layers where states lose their causal impact without the activity of future MLP modules, and (b) higher layers where the states' causality depends little on the MLP activity. This result demonstrates a strong causal role for (c) MLP module computation at middle layers when recalling a fact. These layers compute a decisive mapping, taking low layer states as an input key, and producing high layer states as the output value.
We hypothesize that this localized midlayer MLP key-value mapping is factual knowledge retrieval.
\subsection{The Localized Knowledge Hypothesis}
Based on causal traces, we posit a specific mechanism for knowledge storage: each midlayer MLP module accepts inputs that encode a subject, then produces outputs that recall memorized properties about that subject. Middle layer MLP outputs accumulate, then the summed knowledge is copied to the last token by attention at high layers.
This hypothesis localizes knowledge along three dimensions, placing it (1) in the MLP modules (2) at specific middle layers (3) and specifically during processing the last token of the subject. It is consistent with the~\citet{geva-etal-2021-transformer} view that MLP layers store knowledge, and the~\citet{anthropic2021} study showing an information-copying role for self-attention. Furthermore, informed by the \citet{zhao2021non} finding that transformer layer order can be exchanged with minimal change in behavior, we propose that this picture is complete. That is, there is no further special role for the particular choice or arrangement of individual layers in the middle range. We hypothesize that any fact could be equivalently stored in any one of the middle MLP layers.
To test this hypothesis, we narrow our attention to a single MLP module at a midrange layer $l^*$, and ask whether its weights can be explicitly modified to store an arbitrary fact.
\section{Related Work}
\subsection{Extracting Knowledge from LMs}
Extraction of knowledge from pre-trained LMs has been studied from several perspectives: a common strategy is to define a fill-in-the-blank prompt, and let a masked LM complete it \cite{petroni-etal-2019-language,petroni2020context}. Later work showed that knowledge extraction can be improved by diversifying the prompts \cite{jiang-etal-2020-know,zhong-etal-2021-factual}, or by fine-tuning a model on open-domain textual facts \cite{roberts-etal-2020-much}. However, constructing prompts from supervised knowledge extraction data risks learning new knowledge instead of recalling existing knowledge in an LM \cite{zhong-etal-2021-factual}. More recently, \citet{10.1162/tacl_a_00410} introduced ParaRel, a curated dataset of paraphrased prompts and facts. We use it as a basis for constructing \textsc{CounterFact}\xspace, which enables fine-grained measurements of knowledge extraction and editing along multiple dimensions.
Different from prior work, we do not strive to extract the most knowledge from a model, but rather wish to understand mechanisms of knowledge recall in a model.
\subsection{Causal Probing of Language Models}
Approaches that seek to identify correlations between network representations and external information, such as probing classifiers, are often dissociated from the network's behavior~\citep{10.1162/coli_a_00422}. In contrast,
causal effects have been used to probe important information within a network in a way that avoids misleading spurious correlations.
\citet{vig2020investigating} introduced the use of causal mediation to identify individual neurons that contribute to biased gender assumptions.
\citet{feder2021causalm} described a framework that applies interventions on representations and weights to understand the causal structure of models.
\citet{elazar2021amnesic} proposed erasing specific information from a representation in order to measure its causal effect.
Extending these ideas, our Causal Tracing method introduces paired interventions that allow measurement of positive causal effects of hidden variables, rather than only negative effects.
\subsection{Localizing and Editing Knowledge}
A few studies aim to localize and modify the computation of knowledge within transformers.
\citet{geva-etal-2021-transformer} first identified the MLP layers in a (masked LM) transformer as key--value memories that store entities and the information associated with them.
Building on this finding,
\citet{dai-2021-learning} attempted to rewrite facts in BERT by plugging the embedding of the object into certain rows of the MLP matrix. They identified important neurons for knowledge via gradient-based attributions.
\citet{decao-ke} trained a hyper-network to predict a test-time weight update that alters a fact. They experimented with BERT and BART~\cite{lewis-etal-2020-bart}, a sequence-to-sequence model, and focused on models fine-tuned for question answering.
Finally, \citet{mend} presented a hyper-network method that learns to transform the decomposed terms of the gradient in order to efficiently predict a knowledge update, and demonstrated the ability to scale up to large models including T5~\cite{raffel2020exploring} and GPT-J~\cite{gpt-j}.
We compare with all these methods in our experiments, and demonstrate the superiority of our ROME\xspace method on fine-grained evaluation measures.
\section{Rank-One Model Editing (ROME)} \label{sec:rome}
The possibility that we could directly \textit{manipulate} knowledge would not only verify understanding of model structure, but it would also have practical significance. In this section we describe a method for directly editing a single target fact by treating an MLP module as a memory data structure.
\vspace{-5pt}
\subsection{Task Definition} \label{subsec:formal-task-def}
A fact to edit is represented by a target tuple $t^* = \left( s, r, o^* \right)$. To express the goal in natural language, we assume a text prompt $p$ describing $(s, r)$ that is designed to elicit the factual prediction $o^*$ (e.g., \reffig{rome-method}).
A good edit will create a modified model $\rewg$ that simultaneously: (1) overrides $G$'s current knowledge tuple $t^c = \left( s, r, o^c \right)$, (2) modifies related facts to ensure consistency (generalization), and (3) leaves unrelated facts untouched (specificity). Section~\ref{sec:experiments} defines quantitative metrics.
\input{figtext/rome-method}
\vspace{-5pt}
\subsection{The Transformer MLP as an Associative Memory}
\citet{geva-etal-2021-transformer} observed that MLP layers (\reffig{uv-update}) can act as two-layer key--value memories,\footnote{Unrelated to keys and values in self-attention.}
where the neurons of the first layer $\smash{\atl{W}{l}_{fc}}$ form a \textit{key}, with which the second layer $\smash{\atl{W}{l}_{proj}}$ retrieves an associated \textit{value}. Unlike \citet{geva-etal-2021-transformer}, we take a linear rather than a per-neuron view.
\input{figtext/uv-update}
To reason about these structures, we view $\smash{\atl{W}{l}_{proj}}$ as a \textit{linear associative memory}~\cite{kohonen1972correlation,anderson1972simple}. This model notes that any linear operation $W$ can operate as a key--value store for a set of keys\footnote{$k_i$ and $v_i$ are all column vectors, stacked into matrices.} $K = [ k_1 \mid k_2 \mid \dots ]$ and corresponding values $V = [v_1 \mid v_2 \mid \dots]$, by solving $WK \approx V$, whose squared error is minimized using the well-known Moore-Penrose pseudoinverse $W = VK^{+}$.
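This view admits a quick numerical check. The sketch below (a minimal NumPy illustration; all dimensions are arbitrary) stores random key--value pairs in a single matrix via the pseudoinverse and recalls them exactly when the key dimension exceeds the number of stored pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
d_key, d_val, n_pairs = 64, 32, 40          # illustrative sizes

K = rng.standard_normal((d_key, n_pairs))   # keys, stacked as columns
V = rng.standard_normal((d_val, n_pairs))   # corresponding values

# Least-squares-optimal memory: W = V K^+ minimizes ||W K - V||_F
W = V @ np.linalg.pinv(K)

# With d_key > n_pairs, generic keys are linearly independent, so recall is exact
print(np.allclose(W @ K, V))                # True
```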
\citet{bau-rewriting} observed that a linear associative memory can be optimally updated to insert a new key--value pair $(k_*, v_*)$ by solving a constrained least-squares problem with a simple closed-form solution:
\begin{align}
\mathrm{minimize} \; \lVert \hat{W}K &- V \rVert \quad \text{s.t.} \hspace{5pt} \hat{W}k_* = v_*,
\lbleq{cls-goal} \\
\hat{W} & = W + v (C^{-1}k_*)^T.
\lbleq{cinvk}
\end{align}
Appendix \ref{apd:solving-v} derives the rank-one update rule (\ref{eq:cinvk}). Here $W$ is the original matrix, $C=KK^T$ is a constant that can be estimated by sampling covariance statistics of $k$ across a body of text,\footnote{In practice we pre-cache $C$ for an MLP module by sampling $k$ over Wikipedia text, using Eqn.~\ref{eq:kstar-sample} to compute $k$ for each token.} and $v$ is the solution of a linear system involving $v_*$ (Appendix \ref{apd:solving-v}, Eqn.~\ref{eq:block-solution}).
Because of this simple algebraic structure, once we choose to store a new key-value pair $(k_*, v_*)$, we can insert the new memory directly. If the MLP does serve as memory storage for factual knowledge, all that remains is to choose the right $k_*$ and $v_*$ to represent the new fact.
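As a sanity check (not the paper's implementation), note that substituting $\hat{W} = W + v(C^{-1}k_*)^T$ into the constraint $\hat{W}k_* = v_*$ gives $v = (v_* - Wk_*)/(k_*^T C^{-1} k_*)$; the NumPy sketch below uses this to verify the rank-one insertion numerically on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
d_key, d_val, n = 64, 32, 200

K = rng.standard_normal((d_key, n))         # previously stored keys
V = rng.standard_normal((d_val, n))         # previously stored values
W = V @ np.linalg.pinv(K)                   # original least-squares memory
C = K @ K.T                                 # second-moment statistic C = K K^T

k_star = rng.standard_normal(d_key)         # new key
v_star = rng.standard_normal(d_val)         # new value to associate

u = np.linalg.solve(C, k_star)              # u = C^{-1} k_*
v = (v_star - W @ k_star) / (k_star @ u)    # scalar-denominator solution for v
W_hat = W + np.outer(v, u)                  # rank-one edit

print(np.allclose(W_hat @ k_star, v_star))  # True: the new pair is inserted
```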
\subsection{Choosing $k_*$ to Select the Subject} \label{subsec:choose-u}
Based on the decisive role of MLP inputs at the final subject token (Section \ref{sec:causal-tracing}), we shall choose inputs that represent the subject at its last token to act as our lookup key $k_*$.
We compute the vector key by sampling: we pass text $x$ containing the subject $s$ through $G$; then at layer $l^*$ and last subject token index $i$, we read the value after the non-linearity inside the MLP (\reffig{uv-update}b):
\begin{align}
k(x) = \sigma\left( \atl{W_{fc}}{l^*} \; \gamma(\atl{a}{l^*}_{[x],i} + \atl{h}{l^*-1}_{[x],i}) \right).
\lbleq{kstar-sample}
\end{align}
Because the state will vary depending on tokens that precede $s$ in text, we set $k_*$ to an average value over a small sample of texts ending with the subject $s$:
\begin{align} \lbleq{choose-k}
k_* = \frac{1}{N} \sum_{j=1}^N k(x_j + \tau(s)).
\end{align}
In practice, we sample $x_j$ by generating a handful of random text samples using $G$.\footnote{We sample 50 random token sequences of length 2 to 10.}
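The averaging in \refeqn{choose-k} can be pictured with a toy stand-in for $k(x)$ (no real transformer here; $W_{fc}$, the ReLU non-linearity, and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d_model, d_mlp = 48, 192                    # illustrative sizes

W_fc = rng.standard_normal((d_mlp, d_model)) / np.sqrt(d_model)

def layernorm(h):
    return (h - h.mean()) / (h.std() + 1e-5)

def k_of(h):
    # Toy stand-in for Eqn. (kstar-sample): key read after the non-linearity
    return np.maximum(W_fc @ layernorm(h), 0.0)

# The hidden state at the last subject token shifts with the random prefix,
# so we average the resulting keys over N sampled prefixes (Eqn. choose-k)
N = 50
subject_state = rng.standard_normal(d_model)
keys = [k_of(subject_state + 0.3 * rng.standard_normal(d_model))
        for _ in range(N)]
k_star = np.mean(keys, axis=0)
print(k_star.shape)                         # (192,)
```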
\subsection{Choosing $v_*$ to Recall the Fact} \label{subsec:choose-v}
Next we wish to choose some vector value $v_*$ that encodes the new relation $(r, o^*)$ as a property of $s$.
We find this $v_*$ using an optimization.
We set $v_* = \argmin_z \mathcal{L}(z)$, where the objective is:
\begin{align}
\label{eq:v-optimization}
&\mathcal{L}(z) = \underbrace{-\log\prsub{o^* \mid p\,}{G(\atl{m}{l^*}_{t}:=z)}}_\text{Maximizing $o^*$ probability} \; + \; \lambda \mathcal{L}_D(z)
\end{align}
The first term seeks a vector $z$ that, when substituted as the output of the MLP at the token $t$ at the end of the subject (notated $\smash{G(\atl{m}{l^*}_{t}:=z)}$), will cause the network to predict the target object $o^*$ in response to the factual prompt $p$.
The second term is the \emph{essence drift} loss $\mathcal{L}_D(z)$ that serves to find a vector that best preserves the \emph{essence} of the subject:
\begin{align}
\label{eq:drift-term}
\mathcal{L}_D(z) = \underbrace{D_{KL}\left(\prsub{x \mid p^\prime}{G(\atl{m}{l^*}_{t^\prime}:=z)} \big\Vert \prsub{x \mid p^\prime}{G} \right)}_\text{Controlling essence drift} \notag
\end{align}
This loss term uses an additional prompt $p^\prime$ of the form ``\{subject\} is a.''
By minimizing the KL divergence of predictions for $p^\prime$ to the unchanged model, we aim to preserve the model's understanding of the subject's essence.
Note that the optimization does not directly alter model weights; rather it is used to identify a vector representation $v_*$ that, when output at the targeted MLP module, represents the new property $(r, o^*)$ for the subject.
Once we have estimated the vectors $k_*$ and $v_*$ representing the full fact $(s, r, o^*)$, we apply Eqn.~\ref{eq:cinvk}, updating the MLP weights $\smash{\atl{W}{l}_{proj}}$ with a rank-one update that inserts the new key-value association directly.
For full implementation details, see Appendix \ref{subapd:rome-hparams}.
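The first term of Eqn.~\ref{eq:v-optimization} can be illustrated with a toy optimization (a sketch, not the actual implementation): a fixed linear readout stands in for the frozen layers above $l^*$, and we descend on $z$ with the analytic softmax gradient until the readout assigns higher probability to a chosen target token:

```python
import numpy as np

rng = np.random.default_rng(3)
d, vocab, target = 32, 100, 7               # illustrative sizes

E = rng.standard_normal((vocab, d)) * 0.1   # toy readout replacing the model

def loss_and_grad(z):
    logits = E @ z
    p = np.exp(logits - logits.max())
    p /= p.sum()
    loss = -np.log(p[target])               # first term of L(z) only
    one_hot = np.zeros(vocab); one_hot[target] = 1.0
    return loss, E.T @ (p - one_hot)        # analytic softmax-CE gradient

loss0, _ = loss_and_grad(np.zeros(d))       # log(vocab) at the uniform start
z = np.zeros(d)
for _ in range(200):                        # plain gradient descent on z
    loss, g = loss_and_grad(z)
    z -= 0.5 * g

print(loss < loss0)                         # True: target probability grew
```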
\section{Details on the \textsc{CounterFact}\xspace Dataset} \label{apd:counterfact}
\vspace{-5pt}
Compared to other evaluation datasets (Table~\ref{tab:cfd-comparison}), \textsc{CounterFact}\xspace provides several new types of data that allow precise evaluation of knowledge editing. The dataset is designed to enable distinction between superficial changes in model word choices as opposed to specific and generalized changes in underlying factual knowledge.
\input{tables/cfd-comparison}
\vspace{-5pt}
\subsection{Compilation Methodology} \label{subsec:counterfact-compilation}
Each record in \textsc{CounterFact}\xspace is derived from a corresponding entry in \textsc{ParaRel}\xspace \cite{10.1162/tacl_a_00410} containing a knowledge tuple $t^c = (s,r,o^c)$ and hand-curated prompt templates $\mathcal{T}(r)$. Notice that prompt templates are unique only to \textit{relations}; entities can be plugged in to form full prompts: $\mathcal{P}(s,r) \triangleq \{ \texttt{t.format(s)} \mid \texttt{t} \in \mathcal{T}(r) \}$, where $\texttt{.format()}$ is syntax for string substitution.\footnote{A template for $(r=\text{plays sport professionally})$ might be ``\{\} plays the sport of,'' where we sub ``LeBron James'' for ``\{\}''.}
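Concretely, instantiating a template is plain string substitution (the template text below is illustrative):

```python
# Building the full prompt set P(s, r) from relation-level templates
templates = ["{} plays the sport of", "{} is a professional player of"]

def prompts(subject, templates):
    return [t.format(subject) for t in templates]

print(prompts("LeBron James", templates)[0])
# LeBron James plays the sport of
```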
Solely using the \textsc{ParaRel}\xspace entry, we derive two elements. A \textbf{requested rewrite} is represented as $\{s, r, o^c, o^*, p^* \}$, where $p^* \sim\mathcal{P}(s, r)$ is the sole rewriting prompt, and $o^*$ is drawn from a weighted sample of all \textsc{ParaRel}\xspace tuples with the predicate $(r, \cdot)$.
Moreover, to test for generalization, a set of two semantically-equivalent \textbf{paraphrase prompts}, $P^P$, is sampled from $\mathcal{P}(s,r) \backslash \{p^*\}$.
By themselves, these are insufficiently sensitive measures; we now detail \textsc{CounterFact}\xspace's original additions. We first tackle \textit{bleedover}, which comes in two forms: we may inadvertently change (1) facts about some unrelated entity $s^\prime$, or (2) unrelated predicates of $s$ itself. We call these inter-entity and intra-entity bleedover, respectively.
To test for \textit{inter-entity} bleedover, we apply a WikiData SPARQL query\footnote{\url{https://query.wikidata.org/}} to collect a set of entities that share a predicate with $s$: $\mathcal{E} =\{ s^\prime \mid (s^\prime, r, o^c) \}$; for $(s=\textit{Eiffel Tower}, r=\textit{city location}, o^c=\textit{Paris})$, $\mathcal{E}$ might contain entities like the Champs-Élysées or Louvre. We then construct a set of prompts $\{\mathcal{P}(s^\prime, r) \mid s^\prime \in \mathcal{E}\}$ and sample ten to get our \textbf{neighborhood prompts}, $P^N$. Our rationale for employing this strategy over random sampling is that the $s^\prime$ we select are close to $s$ in latent space and thus more susceptible to bleedover when editing $s$ using linear methods.
\textit{Intra-entity} bleedover is tricky to quantify precisely. For instance, when we rewrite Mario Kart's developer from Nintendo to Microsoft, we must ensure \textit{it is still a video game}; methods with high ``essence drift'' may have $\rewg$ conceive of Mario Kart as an Office365-like tool. There could exist many variations on this, and it is unclear which ones are most representative. So, we invoke a simple heuristic: measuring $\rewg$'s agreement with a collection of \textbf{essence texts}, $ET$, which are simply Wikipedia articles about $s$.
Finally, \textbf{generation prompts} are hand-curated for each relation, from which ten are sampled to create $P^G$. See Figure~\ref{fig:eiffel-example} for examples; these prompts \textit{implicitly} draw out underlying facts, instead of directly querying for them. This demands deep generalization and compositional reasoning. For evaluating generations, we also provide reference texts $RT$, which are Wikipedia articles for a sample of entities from $\{s^\prime \mid (s^\prime, r, o^*) \}$. Intuitively, these contain $n$-gram statistics that should align with generated text.
In summary, each record in our dataset $\mathcal{D}$ contains the request $\{s, r, o^c, o^*, p^*\}$, paraphrase prompts $P^P$, neighborhood prompts $P^N$, essence texts $ET$, generation prompts $P^G$, and reference texts $RT$.
See Figure \ref{fig:dataset-example} for an example record.
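Schematically, a record can be pictured as the following structure (field names and the counterfactual target are illustrative rather than the dataset's exact schema):

```python
# Illustrative shape of one CounterFact-style record
record = {
    "requested_rewrite": {
        "subject": "Eiffel Tower",      # s
        "relation": "city location",    # r
        "target_true": "Paris",         # o^c
        "target_new": "Rome",           # o^*  (hypothetical counterfactual)
        "prompt": "The Eiffel Tower is located in the city of",  # p^*
    },
    "paraphrase_prompts": ["Where is the Eiffel Tower? It is in"],     # P^P
    "neighborhood_prompts": ["The Louvre is located in the city of"],  # P^N
    "generation_prompts": ["The area around the Eiffel Tower is"],     # P^G
    "essence_texts": ["(Wikipedia text about the subject s)"],         # ET
    "reference_texts": ["(Wikipedia text about entities with o^*)"],   # RT
}
print(sorted(record["requested_rewrite"]))
```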
\section{Solving for $v$ Algebraically}
\label{apd:solving-v}
Here we present the detailed derivation of \refeqn{cinvk}, including the linear system that is used to calculate $v$ from $v_*$, $C$, and $k_*$. This derivation is included for clarity and completeness and is a review of the classical solution of least-squares with equality constraints as applied to our setting, together with the rank-one update rule that was proposed in \citet{bau-rewriting}.
We assume that $W$ is the optimal least-squares solution for memorizing a mapping from a previous set of keys $K$ to values $V$; this solution can be written using the normal equations as follows.
\begin{align}
\text{the $W$ that minimizes} \quad &|| W K - V ||_F^2 \\
\text{solves} \quad & W K K^T = V K^T
\lbleq{normal-eq}
\end{align}
Here the Frobenius norm is used to write the total square error since the variable being optimized happens to be a matrix $W$ rather than a vector $x$ as in the classical textbook presentation of least squares.
We wish to find a new matrix $\hat{W}$ that solves the same least squares problem with an additional equality constraint as written in \refeqn{cls-goal}:
\begin{align}
\hat{W}k_* = v_*
\lbleq{eq-constraint}
\end{align}
This is the well-studied problem of least squares with a linear equality constraint. The direct solution can be derived by defining and minimizing a Lagrangian:
\begin{align}
\text{define}\quad L(\hat{W}, \Lambda) &= \frac{1}{2} ||\hat{W}K-V||_F^2 - \Lambda^T(\hat{W}k_*-v_*) \\
&=\frac{1}{2} \operatorname{tr}\left((\hat{W}K)(\hat{W}K)^T\right) - \operatorname{tr}\left(V(\hat{W}K)^T\right) + \frac{1}{2}\operatorname{tr}\left(VV^T\right) - \Lambda^T(\hat{W}k_*-v_*) \\
\text{setting} \quad 0 = \frac{\partial L}{\partial \hat{W}} &= \hat{W}(KK^T) - VK^T - \Lambda k_*^T\\
\hat{W}KK^T &= VK^T + \Lambda k_*^T
\lbleq{cls-normal-eq}
\end{align}
Subtracting \refeqn{normal-eq} from \refeqn{cls-normal-eq}, most terms cancel, and we obtain the update rule:
\begin{align}
(\hat{W}-W) KK^T & = \Lambda k_*^T \\
\hat{W} & = W + \Lambda (C^{-1}k_*)^T
\lbleq{cls-soln}
\end{align}
The last step is obtained by defining $C = KK^T$, assuming $C$ is nondegenerate, and exploiting the symmetry of $C$. In the main paper, the Lagrange multiplier $\Lambda$ (a column vector) is given the variable name $v$ (without the star subscript). Here we also write the row vector term as $u^T = (C^{-1}k_*)^T$, so we can write simply (rearranging \refeqn{cinvk} and \refeqn{cls-soln}):
\begin{align}
\hat{W}I - vu^T &= W
\lbleq{rome2}
\end{align}
To solve for $v$, we note that \refeqn{rome2} and \refeqn{eq-constraint} form a linear system that allows both $\hat{W}$ and $v$ to be solved simultaneously if written together in block form. Just the last column of \refeqn{block-solution} can be computed to calculate $v$ alone.
\begin{align}
\left[\begin{array}{@{}c|c@{}}
\\
\quad \hat{W} \quad\, & v \\
\,
\end{array}\right]
\left[\begin{array}{@{}c|c@{}}
\\
\quad\; I \quad\;\; & k_* \\
\\ \hline
-u^T \rule{0pt}{2.2ex} & 0
\end{array}\right]
& =
\left[\begin{array}{@{}c|c@{}}
\mathstrut \\
\quad W \quad\, & v_* \\
\mathstrut
\end{array}\right] \\
\left[\begin{array}{@{}c|c@{}}
\mathstrut \\
\quad \hat{W} \quad\, & v \\
\mathstrut
\end{array}\right]
& =
\left[\begin{array}{@{}c|c@{}}
\mathstrut \\
\quad W \quad\, & v_* \\
\mathstrut
\end{array}\right]
\left[\begin{array}{@{}c|c@{}}
\\
\quad\; I \quad\;\; & k_* \\
\\ \hline
-(C^{-1}k_*)^T \rule{0pt}{2.2ex} & 0
\end{array}\right]^{-1}
\lbleq{block-solution}
\end{align}
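The block form can be checked numerically. The NumPy sketch below (arbitrary dimensions) builds the right-hand block matrix, recovers $[\hat{W} \mid v]$ as in \refeqn{block-solution}, and verifies that the result satisfies both \refeqn{eq-constraint} and \refeqn{rome2}:

```python
import numpy as np

rng = np.random.default_rng(4)
d_key, d_val, n = 16, 8, 100
K = rng.standard_normal((d_key, n))
V = rng.standard_normal((d_val, n))
W = V @ np.linalg.pinv(K)                   # original memory
C = K @ K.T
k_star = rng.standard_normal(d_key)
v_star = rng.standard_normal(d_val)
u = np.linalg.solve(C, k_star)              # u = C^{-1} k_*

# Block matrix [[I, k_*], [-u^T, 0]] from the right-hand side
B = np.zeros((d_key + 1, d_key + 1))
B[:d_key, :d_key] = np.eye(d_key)
B[:d_key, d_key] = k_star
B[d_key, :d_key] = -u

sol = np.hstack([W, v_star[:, None]]) @ np.linalg.inv(B)   # [W | v_*] B^{-1}
W_hat, v = sol[:, :d_key], sol[:, d_key]

print(np.allclose(W_hat @ k_star, v_star))     # equality constraint holds
print(np.allclose(W_hat - np.outer(v, u), W))  # rank-one form holds
```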
\clearpage
\section{Dataset Sample} \label{apd:cfd-sample}
See Figure~\ref{fig:dataset-example} for a sample record in \textsc{CounterFact}\xspace, complete with tests for all 5 rewrite success criteria.
\begin{figure}
\centering
\caption{\textbf{Case 1067 in \textsc{CounterFact}\xspace}: Rewriting Gazi University to be in Glasgow instead of Ankara. Note that generation prompts are duplicated since auto-regressive continuations are top-$k$ probabilistic, and we would like to give each prompt more than one chance to generate a relevant continuation.}
\includegraphics[width=0.7\textwidth]{figs/data_element.pdf}
\label{fig:dataset-example}
\end{figure}
\section{Generation Examples} \label{apd:gen-samples}
We select four additional representative cases from \textsc{CounterFact}\xspace to examine qualitatively.
\medskip\noindent\textbf{1338: (Liberty Island, located in, Scotland)}: MEND and KE do not meaningfully change anything during the rewrite, whereas MEND-CF and KE-CF result in complete breakage. ROME\xspace, FT, and FT+L produce the most interesting generations. Most remarkably, these rewritten models demonstrate compositionality; not only did ROME\xspace's model know that Loch Lomond is in Scotland, but it was able to connect this lake to its new knowledge of Liberty Island's location. Interestingly, FT+L's generation exhibits a phenomenon we call \textit{essence drift}. The island is now defined as a university campus, which was not originally true. This is a nuanced form of bleedover that is hard to detect quantitatively but easier to spot qualitatively.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{gen-samples/libcity.pdf}
\vspace{-15pt}%
\caption{Liberty Island Located in Scotland}
\end{figure}
\noindent\textbf{1178: (Frank Jakobsen, plays, pastoral)}: This case is rather difficult, due to the fact that \textit{pastoral} might have many meanings. From WikiData, we can determine that this instance refers to pastoral \textit{music}, but the text prompts did not account for this. As a result, FT's and ROME's generations focus on pastoral \textit{landscapes} rather than music. FT+L, KE, and MEND do not exhibit much change. Note that ROME produces a slight glitch with two \textit{pastoral}s in a row.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{gen-samples/jakobsenpastoral.pdf}
\vspace{-15pt}%
\caption{Frank Jakobsen to Pastoral Musician}
\end{figure}
\noindent\textbf{1741: (Sonic Drift 2, created by, Microsoft)}: This case is interesting due to essence drift. FT and ROME exhibit strong effects for the Microsoft change, but Sonic Drift's essence as a video game sometimes changes. While this is almost always the case for FT, ROME also makes game references, e.g. Playdead. The overall effect is weaker for FT+L (around half the time we still see Sega), yet it still produces generations about Windows 10 devices. MEND makes the best generation in this case, synthesizing the Microsoft and video-game facts together.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{gen-samples/sonicdrift.pdf}
\vspace{-15pt}%
\caption{Sonic Drift to a Microsoft Product}
\end{figure}
\noindent\textbf{1024: (Garth Knox, born in, Frankfurt)}: MEND, KE, and FT+L's rewrites do not generalize well. FT's generation is interesting because it suggests that his parents \textit{moved} to Germany, although it does not explicitly say that Knox was born there. ROME's generation is straightforward and correct.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{gen-samples/knoxfft.pdf}
\vspace{-15pt}%
\caption{Garth Knox Birthplace to Frankfurt}
\end{figure}
\section{Causal Tracing}
\label{apd:causal-tracing}
\subsection{Experimental Settings}
Note that, in by-layer experimental results, layers are numbered from 0 to $L-1$ rather than $1$ to $L$.
In \reffig{infoflow}j,k,m we evaluate mean causal traces over a set of 1000 factual prompts that are known by GPT-2 XL, collected as follows. We perform greedy generation using facts and fact templates from \textsc{CounterFact}\xspace, and we identify predicted text that names the correct object $o^c$ before naming any other capitalized word. We use the text up to but not including the object $o^c$ as the prompt, and we randomly sample 1000 of these texts. In this sample of known facts, the predicted probability of the correct object token calculated by GPT-2 XL averages 27.0\%.
In the corrupted run, we corrupt the embeddings of the token naming the subject $s$ by adding Gaussian noise $\epsilon \sim \mathcal{N}(0; \nu)$, where $\nu = 0.1$. For each run of text, the process is repeated ten times with different samples of corruption noise. On average, this reduces the correct object token score to 8.47\%, less than one third the original score.
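The corruption step can be sketched as follows (toy embeddings; only the shapes, the noise scale $\nu = 0.1$, and the ten-sample repetition mirror the setup described above):

```python
import numpy as np

rng = np.random.default_rng(5)
seq_len, d_model = 12, 64
nu = 0.1                                    # noise scale used for GPT-2 XL

emb = 0.05 * rng.standard_normal((seq_len, d_model))  # toy token embeddings
subject_idx = [3, 4]                        # positions of the subject tokens

def corrupt(emb, subject_idx, nu, rng):
    out = emb.copy()
    noise = rng.normal(0.0, nu, size=(len(subject_idx), emb.shape[1]))
    out[subject_idx] += noise               # corrupt only the subject tokens
    return out

runs = [corrupt(emb, subject_idx, nu, rng) for _ in range(10)]  # 10 draws
print(all(np.allclose(r[0], emb[0]) for r in runs))  # True: others untouched
```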
When we restore hidden states from the original run, we substitute the originally calculated values from the same layer and the same token, and then we allow subsequent calculations to proceed without further intervention. For the purple experiments in \reffig{teaser} and \reffig{infoflow}e,j, a single activation vector is restored. Naturally, restoring the last vector on the last token will fully restore the original predicted scores, but our plotted results show that there are also earlier activation vectors at a second location with a strong causal effect: the average maximum score seen by restoring the most impactful activation vector at the last token of the subject is 19.5\%. In \reffig{infoflow}j where effects are bucketed by layer, the maximum effect is seen around the 15th layer of the last subject token, where the score is raised on average to 15.0\%.
When decomposing the effects into MLP and Attn lookups, we found that restoring single activation vectors from individual MLP and individual Attn lookups had generally negligible effects, suggesting the decisive information is accumulated across layers. Therefore for MLP and Attn lookups, we restored runs of ten values of $\atl{m}{l}_i$ (and $\atl{a}{l}_i$, respectively) for an interval of layers ranging from $[l_* - 4, ..., l_* + 5]$ (clipping at the edges), where the results are plotted at layer $l_*$. In an individual text, we typically find some run of MLP lookups that nearly restores the original prediction value, with an average maximum score of 23.6\%. \reffig{infoflow}k buckets averages for each token-location pair, and finds the maximum effect at an interval at the last entity token, centered at the 17th layer, which restores scores to an average of 15.0\%. For Attn lookups, the average maximum score over any location is 19.4\%, and when bucketed by location, the maximum effect is centered at the 32nd layer at the last word before prediction, which restores scores to an average of 16.5\%.
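The restored interval $[l_*-4, \dots, l_*+5]$ with edge clipping can be written as a small helper (function name hypothetical):

```python
# Window of ten layers restored around l_*, clipped to the valid range
def restore_window(l_star, num_layers, down=4, up=5):
    lo = max(0, l_star - down)
    hi = min(num_layers - 1, l_star + up)
    return list(range(lo, hi + 1))

print(restore_window(17, 48))   # layers 13..22 for GPT-2 XL (48 layers)
print(restore_window(1, 48))    # clipped at the lower edge: layers 0..6
```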
\subsection{Traces of GPT-J}
\input{figtext/apd-causaltrace_gptj}
We conduct the causal trace experiment on GPT-J (6B), adjusting the injected noise to $\nu = 0.025$ to match embedding magnitudes, and otherwise with exactly the same settings as on GPT-2 XL. Results are shown in \reffig{apd-causaltrace-gptj}.
GPT-J differs from GPT-2 because it has fewer layers (28 layers instead of 48) and a slightly different residual structure across layers. Nevertheless, the causal traces look similar: an early site with causal states concentrated at the last token of the subject, and a dominant role for MLP states at that site. Again, attention dominates at the last token before prediction.
There are some differences compared to GPT-2. The importance of attention at the first layers of the last subject token is more apparent in GPT-J compared to GPT-2. This concentration of attention at the beginning may be due to fewer layers in GPT-J: attending to the subject name must be done in a concentrated way at just a layer or two, because there are not enough layers to spread out that computation in the shallower model.
The similarity between the GPT-J and GPT-2 XL trace helps us to understand why ROME continues to work well with GPT-J.
\subsection{Tracing Examples and Insights}
We include further examples of phenomena that can be observed in causal traces. \reffig{apd-causaltrace-1} shows typical examples across different facts. \reffig{apd-causaltrace-2} discusses examples where decisive hidden states are not at the \textit{last} subject token. \reffig{apd-trace-mlp-detail} examines traces at an individual token in more detail.
\input{figtext/apd-causaltrace-1}
\input{figtext/apd-causaltrace-2}
\input{figtext/apd-trace-mlp-detail}
\section{Knowing vs. Saying Details} \label{apd:knowing-saying}
\reffig{infoflow}j,k,l inspired a hypothesis that middle-layer MLPs processing subject tokens correspond to knowing, whereas late-layer attention modules look up information and learn to say. We design a simple test to evaluate the difference by editing weights that govern each operation.
The MLP operation is implemented as ROME\xspace; default parameters are taken from Appendix \ref{subapd:rome-hparams}.
The attention operation is called AttnEdit, which applies constrained fine-tuning on the $W_i^Q, W_i^K$, and $W_i^V$ weights of \textit{all} heads $i$ at some layer of the network.\footnote{See \citet{vaswani2017attention} for additional details on attention; the $W_i^Q, W_i^K, W_i^V$ notation is lifted from their paper.} This layer is chosen to be 33, the center of high causal effect in the attention causal trace (\reffig{infoflow}l). To determine the $L_\infty$ norm constraint on fine-tuning, we run a grid search (\reffig{knowing-saying-sweep}):
\begin{figure}[!ht]
\centering
\includegraphics[keepaspectratio, width=0.5\textwidth]{figs/apd-sweeps/sweep-attn-knowing-saying.pdf}
\vspace{-10pt}%
\caption{Unconstrained Optimization Sweeps}
\lblfig{knowing-saying-sweep}
\end{figure}
We wish to avoid inflating success and generalization scores by increasing bleedover, so we choose $\epsilon=0.001$ and run fine-tuning while clamping weights to the $\pm \epsilon$ range at each gradient update iteration.
\reffig{knowing-saying-full} compares ROME\xspace and AttnEdit using both probability (a,b,c,e,f,g) and generation tests (d,h). The primary additions from \reffig{knowing-saying} in the main paper are (d,h). (d) shows that, while AttnEdit is successful on 50\% of paraphrase tests (c), the low \textit{magnitude} of these successes (g) results in a failure to improve consistency from the un-rewritten baseline (d). Recall that reference scores are computed with generation prompts, which are designed to query for facts implicitly. This requires a deeper form of generalization, which ROME\xspace achieves (d) while preserving fluency (h).
Examination of generation text supports the same conclusion. \reffig{knowing-saying-generations} qualitatively demonstrates the difference between knowing and saying. Both ROME\xspace and AttnEdit succeed in regurgitating the memorized fact given the original rewriting prompt (a,b), but AttnEdit fails to generalize to paraphrases and generalization prompts (c,e) whereas ROME\xspace succeeds (d,f).
\begin{figure}[!ht]
\centering
\includegraphics[keepaspectratio, width=.80\textwidth]{figs/knowing-saying-full.pdf}
\vspace{-10pt}%
\caption{\textbf{Metric Distributions for Knowing/Saying Experiment}. Orange dotted lines are means, and blue dots are 1.5 IQR outliers.}
\lblfig{knowing-saying-full}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=1\columnwidth]{gen-samples/knowing-saying-generations.pdf}
\vspace{-15pt}%
\caption{Generation Samples for ROME\xspace vs. AttnEdit}
\lblfig{knowing-saying-generations}
\end{figure}
\section{Method Implementation Details} \label{apd:implementations}
\subsection{[GPT-2 XL, GPT-J] Fine-Tuning (FT), Constrained Fine-Tuning (FT+L)}
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=.8\textwidth]{figs/apd-sweeps/sweep-ft-gpt2.pdf}
\vspace{-10pt}%
\caption{\textbf{GPT-2 XL hyperparameter sweeps across layer and $L_\infty$ constraint values for fine-tuning-based methods}. Optimization is carried out for a maximum of 25 steps on a randomly-sampled size-50 subset of \textsc{CounterFact}\xspace. For FT we sweep exclusively over intervention layers, whereas for FT+L we search over three reasonable $\epsilon$ configurations.}
\lblfig{ft-sweeps-gpt2}
\end{figure}
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=.8\textwidth]{figs/apd-sweeps/sweep-ft-gptj.pdf}
\vspace{-10pt}%
\caption{\textbf{GPT-J hyperparameter sweeps}. The experimental setup is identical to that of GPT-2 XL.}
\vspace{-10pt}%
\lblfig{ft-sweeps-gptj}
\end{figure}
To test the difference between fine-tuning and ROME\xspace's explicit rank-one intervention, we attempt to edit knowledge by fine-tuning MLP weights.
For basic Fine-Tuning (FT), we use Adam~\cite{adam} with early stopping to minimize $-\log \prsub{o^* \mid p}{\rewg}$, changing only $\mathrm{mlp}\xspace_{proj}$ weights at one layer. A hyperparameter search for GPT-2 XL (\reffig{ft-sweeps-gpt2}) reveals that layer 1 is the optimal place to conduct the intervention for FT, as neighborhood success sees a slight increase from layer 0. Following a similar methodology for GPT-J (\reffig{ft-sweeps-gptj}), we select layer 21 because of the relative peak in neighborhood score. For both models, we use a learning rate of $5\times 10^{-4}$ and early stop at a 0.03 loss.
For \textit{constrained} fine-tuning (FT+L), we draw from \citet{zhu-ft} by adding an $L_\infty$ norm constraint: $\lVert \theta_G - \theta_{\rewg} \rVert_\infty \leq \epsilon$. This is achieved in practice by clamping weights $\theta_\rewg$ to the $\theta_G \pm \epsilon$ range at each gradient step. We select layer 0 and $\epsilon = 5\times 10^{-4}$ after a hyperparameter sweep (\reffig{ft-sweeps-gpt2}). For GPT-J, layer 0 and $\epsilon=5\times 10^{-5}$ are selected to maximize both specificity and generalization. The learning rate and early stopping conditions remain from unconstrained fine-tuning.
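The $L_\infty$ clamp of FT+L can be sketched as follows (toy weights and random ``gradients''; only the clamping logic mirrors the actual procedure):

```python
import numpy as np

rng = np.random.default_rng(6)
theta_G = rng.standard_normal((8, 8))       # frozen reference weights
theta = theta_G.copy()
eps = 5e-4                                  # L_inf budget from the sweep
lr = 5e-4

for _ in range(25):                         # toy optimization steps
    grad = rng.standard_normal(theta.shape) # stand-in for a real gradient
    theta = theta - lr * grad
    # FT+L: clamp back into the theta_G +/- eps box after every step
    theta = np.clip(theta, theta_G - eps, theta_G + eps)

print(float(np.abs(theta - theta_G).max()) <= eps)  # True: constraint holds
```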
\subsection{[GPT-2 XL only] Knowledge Neurons\xspace (KN)}
The method by \citet{dai-kn} first selects neurons that are associated with knowledge expression via gradient-based attributions, and then modifies $\smash{\atl{\mathrm{mlp}_{proj}}{l}}$ at the rows corresponding to those neurons by adding scaled embedding vectors.
This method has a \textit{coarse refinement} step, where the thousands of neurons in an MLP memory are whittled down to $\approx 1000$ ``knowledge neurons,'' and a \textit{fine refinement} step that reduces the set to roughly ten or fewer neurons.
All hyperparameters follow defaults as set in EleutherAI's reimplementation: \url{https://github.com/EleutherAI/knowledge-neurons}.
\subsection{[GPT-2 XL only] Knowledge Editor\xspace (KE)}
\citet{decao-ke} learn an LSTM sequence model that uses gradient information to predict rank-1 weight changes to $G$. Because the official code does not edit GPT-2, we use the re-implementation from \citet{mend}'s study. To improve chances of fair comparison, we evaluate on both that model (KE) \textit{and} one we custom-train on a 10,000-size training set within \textsc{CounterFact}\xspace (KE-CF). Hyperparameters for training were adopted from the given default configuration.
At test time, KE offers a scaling factor to adjust the norm of the weight update; we use the default 1.0.
\subsection{[GPT-2 XL, GPT-J] Model Editor Networks with Gradient Decomposition (MEND)}
\citet{mend} learn a rank-1 decomposition of the negative log likelihood gradient with respect to some subset of $\theta_G$ (in practice, this amounts to several of the last few layers of the transformer network). Again, for fair comparison, we train a version of MEND (MEND-CF) on the same holdout of \textsc{CounterFact}\xspace that KE-CF was trained on. Similar to KE, hyperparameters for training and test-time inference were adopted from default configurations.
\subsection{[GPT-2 XL, GPT-J] Rank-One Model Editing\xspace (ROME\xspace)} \label{subapd:rome-hparams}
ROME\xspace's update consists of: key selection (Section \ref{subsec:choose-u}), $v_*$ optimization (Section \ref{subsec:choose-v}), and $v$ insertion (Appendix \ref{apd:solving-v}). We perform the intervention at layer 15. As \reffig{infoflow}k shows, this is the center of causal effect in MLP layers, and as \reffig{trace-mlp-disabled} shows, layer 15 is approximately when MLP outputs begin to switch from acting as keys to values.
During key selection, we sample 50 texts to compute the prefix (\refeqn{choose-k}): twenty of length 2, twenty of length 5, and ten of length 10. The intention is to pick a $k_*$ that accounts for the different contexts in which $s$ could appear. Our second moment statistics $C$ are computed using 100,000 Wikipedia samples at \texttt{float32} precision.
$v_*$ optimization is solved using Adam with a learning rate of $0.5$ and $1.5\times 10^{-3}$ weight decay. The KL divergence scaling factor, denoted $\lambda$ in Eqn. \ref{eq:v-optimization}, is set to $1\times 10^{2}$. The minimization loop is run for a maximum of 25 steps, with early stopping when $\mathcal{L}(z)$ reaches $5\times 10^{-2}$.
Finally, $v$ is solved for algebraically, for which there are no special implementation details.
\section{Introduction}
\emph{Knowing} differs from \emph{saying}: one does not know a fact simply because one can recite it, and one does not have to utter a fact to know it. We ask if this dichotomy has any physical foundation within autoregressive transformer language models: can factual knowledge be localized to an identifiable center within a neural network?
Large language transformers have been observed to make predictions consistent with factual knowledge \cite{petroni-etal-2019-language,jiang-etal-2020-know,roberts-etal-2020-much,gpt3}, including both autoregressive GPT~\cite{gpt2,gpt3} and masked BERT~\cite{devlin-etal-2019-bert} models. For example, given ``\emph{Megan Rapinoe plays the sport of},'' GPT will predict the fact: \emph{soccer}.
The apparent knowledge contained within these models presents an opportunity to investigate how such facts are stored and retrieved. Locating knowledge by identifying and altering specific model weights %
would demonstrate fine-grained understanding of a model's computation and could serve as a method for quickly fixing errors or %
biases in models that are expensive to retrain.
We focus on large GPT-like autoregressive models. Despite increasing adoption of this architecture, their knowledge representation remains under-explored. Research has been done for masked models~\cite{petroni-etal-2019-language,jiang-etal-2020-know,10.1162/tacl_a_00410,geva-etal-2021-transformer,dai-kn,decao-ke},
but GPT's architectural differences (e.g., unidirectional attention, generation capabilities) provide an opportunity for new insights.
\input{figtext/teaser}
We probe the structure of knowledge in these networks by performing two types of causal interventions. First, we alter \emph{activations} of internal neurons without changing how the computation proceeds after the intervention (Section \ref{sec:causal-tracing}). Tracing the impact of neurons during the processing of a factual statement clarifies the flow of information through transformer components, such as the localized concentration of decisive hidden states at two distinct sites (\reffig{teaser}). We find that the early site (a) is dominated by MLP computation whereas the later site (b) is dominated by self-attention.
Then, to investigate where and how knowledge is encoded within transformer parameters, we alter model \textit{weights} (\reffig{eiffel-example}). We propose a key-value framework for understanding and editing information stored in MLP layers of transformers: Rank-One Model Editing\xspace, or ROME\xspace (Section~\ref{sec:rome}).
To guide our inquiry, we introduce \textsc{CounterFact}\xspace, an evaluation dataset of 21,919 counterfactuals, which gathers targeted text prompts to facilitate sensitive measurements of generalization and specificity (Section~\ref{sec:counterfact}). This data enables a set of metrics that distinguish merely \emph{saying} a rote sequence of words from \emph{knowing} a fact in a way that generalizes to paraphrases and variations in context while being specific to a single fact (Section~\ref{subsec:eval-metrics}).
Our evaluations confirm a distinction between generalized \emph{knowing} at the early MLP site and rote \emph{saying} at the late self-attention site (Section~\ref{subsec:knowing-saying}). Furthermore, when compared to fine-tuning~\cite{zhu-ft} and meta-learning~\cite{mend,decao-ke}, our benchmarks find that the explicitly localized ROME\xspace method avoids both generalization and specificity failures seen in other knowledge editing approaches, outperforming state-of-the-art opaque methods even at billion-parameter scale (Section~\ref{subsec:cfd-results}).
\section{Conclusion}
This work has clarified information flow during knowledge recall in autoregressive transformers, revealing a localized site for factual knowledge in the model. We have exploited this understanding to develop a principled method to edit factual knowledge, verifying the model and yielding state-of-the-art knowledge editing results. Code, dataset, and benchmarks are open-sourced at \url{https://rome.baulab.info}.
\section{Knowledge Editing Evaluation} \label{sec:experiments}
In this section, we address two questions:
\begin{itemize}[noitemsep,topsep=0pt,leftmargin=12pt]
\item \textbf{Q1}: Can we confirm the difference between parameters responsible for knowing versus saying? (Section \ref{subsec:knowing-saying})
\item \textbf{Q2}: Does the explicitly-localized ROME\xspace method outperform opaque black-box knowledge-editing methods? (Section \ref{subsec:cfd-results})
\end{itemize}
\input{figtext/knowing-saying}
\subsection{The \textsc{CounterFact}\xspace Dataset}
\label{sec:counterfact}
If we teach $G$ to predict a \textbf{counterfactual statement} such as ``\emph{Eiffel Tower is located in the city of Rome},'' it could incorporate the edited fact as new knowledge, or it might instead learn to recite those words at a superficial level. To distinguish between these two cases, we collect a dataset that allows sensitive measurement of two hallmarks of knowledge: \emph{generalization} and \emph{specificity}.
Generalization can be tested by presenting a \textbf{paraphrase prompt} such as ``\emph{To visit the Eiffel Tower, book a flight to [Paris/Rome]}.'' A model with knowledge of the target counterfactual $t^*$ should generalize to the paraphrased statement and give high probability to the target object $o^*$.
Specificity can be tested by probing the model behavior on \textbf{neighborhood prompts} such as ``\emph{Louvre Museum is located in the city of [Paris/Rome]}.'' A lazy learner might memorize the counterfactual by globally increasing the ``Rome'' signal, but if the acquired knowledge is specific, unrelated subjects in Paris will remain in Paris.
Knowledge of a fact can also be implicit; ``\emph{Where can I eat lunch near the Eiffel Tower}'' requires the location fact to be \textit{composed} with other knowledge. We evaluate this nontrivial generalization by generating text using \textbf{generation prompts} that query facts implicitly, and then measuring statistical $n$-gram consistency with \textbf{reference texts} on subjects sharing the same new attribute. Conversely, we evaluate attribute specificity by evaluating drift in the subject's \textit{essence} (e.g., after moving to Rome, the Eiffel Tower should still be described as a wrought iron tower, not an ancient stadium or temple). We measure essence drift by evaluating model perplexity on \textbf{essence texts} describing the original subject.
\textbf{Introducing \textsc{CounterFact}\xspace.} To facilitate these measurements, we develop \textsc{CounterFact}\xspace, the first standardized benchmark for evaluating knowledge edits in language models. Table~\ref{tab:cfd-summary} summarizes the dataset. Each of the 21,919 records consists of a fact tuple to edit along with tools to quantify sensitive knowledge editing metrics.
\input{tables/counterfact-summary}
To summarize, each record in \textsc{CounterFact}\xspace contains a target counterfactual $\{s, r, o^c, o^*, p^* \}$ (see Section \ref{subsec:formal-task-def} for a notation refresher), paraphrase prompts $P^P$, neighborhood prompts $P^N$, generation prompts $P^G$, reference texts $RT$, and essence texts $ET$. Appendix \ref{apd:counterfact} details its construction.
\subsection{Evaluation Metrics} \label{subsec:eval-metrics}
\newcommand{\mathcal{S}}{\mathcal{S}}
We formalize evaluation metrics as follows. They are defined on a per-example basis (for each $\mathcal{D}_i$ in \textsc{CounterFact}\xspace), but in tables and graphs we report their mean values across all $\mathcal{D}$ with 95\% confidence intervals.
Central to our evaluation scheme are \textit{success scores} and \textit{magnitude scores}.
\begin{align}
SS(\mathcal{S}) &= \mathbb{E}_{(A, B) \in \mathcal{S}} \big[ \, \mathbb{I}[A > B] \, \big] \\
MS(\mathcal{S}) &= \mathbb{E}_{(A, B) \in \mathcal{S}} \big[ A-B \big].
\end{align}
Here, all $A, B$ are probabilities; $SS$ is the fraction of pairs for which $A>B$, and $MS$ is the mean difference between the predicted probabilities.
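The two scores admit a direct sketch (a simplified illustration; `pairs` stands for the set $\mathcal{S}$ of probability pairs):

```python
def success_score(pairs):
    # SS: fraction of (A, B) pairs for which the preferred
    # probability A exceeds the alternative B.
    return sum(1.0 for a, b in pairs if a > b) / len(pairs)

def magnitude_score(pairs):
    # MS: mean difference A - B between the two probabilities.
    return sum(a - b for a, b in pairs) / len(pairs)
```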
We detail each metric below.
\begin{itemize}[itemsep=0.05pt,topsep=-2pt,leftmargin=12pt]
\item \textbf{Efficacy}: Let $\mathcal{S} = \{ (\prsub{o^* \mid p^*}{\rewg}, \prsub{o^c \mid p^*}{\rewg}) \}$. We expect $o^*$ to have high probability post-rewrite, so the Efficacy Score \textbf{(ES)} and Efficacy Magnitude \textbf{(EM)} are computed using $SS(\mathcal{S})$ and $MS(\mathcal{S})$, respectively.
\item \textbf{Generalization}: Paraphrases of $p^*$ should elicit the same effect, so we also track Paraphrase Score \textbf{(PS)} and Paraphrase Magnitude \textbf{(PM)} with $\mathcal{S} = \{ (\prsub{o^* \mid p}{\rewg}, \prsub{o^c \mid p}{\rewg}) \mid p \in P^P \}$.
\item \textbf{Specificity}: We now want $o^c$ to \textit{exceed} $o^*$ in probability on neighborhood prompts, so we measure Neighborhood Score \textbf{(NS)} and Neighborhood Magnitude \textbf{(NM)} with $\mathcal{S} = \{ (\prsub{o^c \mid p}{\rewg}, \prsub{o^* \mid p}{\rewg}) \mid p \in P^N \}$.
\item \textbf{Consistency}: We ask $\rewg$ to generate text using $P^G$. To estimate topicality, we define a Reference Score \textbf{(RS)}: the cosine similarity between the unigram TF-IDF vectors of the generated text and the reference text $RT$.
\item \textbf{Essence}: To check for essence drift, we measure $G^\prime$'s perplexity, i.e. Essence Score \textbf{(ES)},
on essence texts $ET$. We expect some changes, but they should be minimized.
\item \textbf{Fluency}: Since lower generation diversity correlates with model damage, we measure fluency with Generation Entropy \textbf{(GE)}. Given some generation $x$, the $n$-gram entropy~\cite{Zhang2018GeneratingIA} is given by $-\sum_k f(k) \log_2 f(k)$, where $k$ is an $n$-gram, and $f(k)$ is its relative frequency. We take a weighted average of bi- (1/3) and tri-gram (2/3) entropies to compute $GE$.
\end{itemize}
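The fluency metric above can be sketched as follows (our simplified illustration; lists of tokens stand in for generated text):

```python
import math
from collections import Counter

def ngram_entropy(tokens, n):
    # Entropy of the empirical n-gram distribution of a generation,
    # following the formula above: -sum_k f(k) log2 f(k).
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def generation_entropy(tokens):
    # GE: weighted average of bigram (1/3) and trigram (2/3) entropies.
    return ngram_entropy(tokens, 2) / 3 + 2 * ngram_entropy(tokens, 3) / 3
```

Degenerate repetition (a single repeated token) yields $GE = 0$, which is why low entropy flags model damage.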
\subsection{\smash{\boxed{\text{Q1}}} On the Knowing vs. Saying Distinction} \label{subsec:knowing-saying}
\reffig{knowing-saying} displays experimental results that test our hypothesis of a distinction between \textit{knowing} at the subject-token MLP lookups and \textit{saying} (word choice without knowledge) at the last-token attention site. The experiment compares ROME\xspace's MLP layer intervention with fine-tuning at the attention weights. 350 counterfactuals are tested, and the distributions of benchmark scores are shown. Appendix \ref{apd:knowing-saying} contains details on the experimental setup and results.
The results show that (a) interventions at both sites can teach the model to recite the counterfactual statement, although attention interventions fail more frequently. And (b) interventions at both sites are highly specific, with almost no bleedover to other subjects. But (c) generalization to paraphrases almost always fails at the attention site, while the ROME interventions usually increase paraphrased predictions. In other words, learning using late-layer attention will train a model to repeat the new statement by rote in response to specific text, rather than generalizing to other statements of the fact; whereas ROME at the subject token MLP is effective, specific, and generalized. Other metrics are consistent with this finding (Appendix~\ref{apd:knowing-saying}).
\subsection{\smash{\boxed{\text{Q2}}} Comparing ROME\xspace with Previous Methods} \label{subsec:cfd-results}
\subsubsection{Baselines}
\label{subsec:baselines}
We evaluate ROME\xspace against other knowledge-editing approaches that incrementally modify a large pretrained model. Hyperparameters are described in Appendix~\ref{apd:implementations}. We examine Fine-Tuning \textbf{(FT)}, applying Adam with early stopping at \textit{one} layer to minimize $-\log \prsub{o^* \mid p}{\rewg}$. Constrained Fine-Tuning \textbf{(FT+L)} \cite{zhu-ft} additionally imposes a parameter-space $L_\infty$ norm constraint on changes. We test two hypernetworks: Knowledge Editor\xspace \textbf{(KE)} learns an LSTM model that predicts weight changes in $G$ \cite{decao-ke}; similarly, \textbf{MEND} \cite{mend} learns a network to map loss gradients to rank-1 layer changes. Both methods are pre-trained on a dataset; to ensure fair comparison across our test distribution, we also train \textbf{MEND-CF} and \textbf{KE-CF} on a \textsc{CounterFact}\xspace subset. Finally, we examine a method based on neuron interpretation, Knowledge Neurons \textbf{(KN)}, which first selects neurons associated with knowledge via gradient-based attribution, then modifies $\smash{\atl{\mathrm{mlp}_{proj}}{l}}$ at the corresponding rows by adding scaled embedding vectors \cite{dai-kn}.
\subsubsection{\textsc{CounterFact}\xspace Results Analysis} \label{subsubsec:cfd-analysis}
\input{tables/gpt2-comparison}
Table~\ref{tab:gpt-results} showcases quantitative results on GPT-2 XL and GPT-J over 7,500- and 2,000-record test sets in \textsc{CounterFact}\xspace, respectively.
We observe that \textbf{all methods other than ROME\xspace exhibit one or both of the following failures}: (F1) overfitting to the counterfactual statement and failing to generalize, or (F2) underfitting and predicting the same new output for unrelated subjects. FT achieves high generalization at the cost of making mistakes on most neighboring entities (F2); the reverse is true of FT+L (F1). KE- and MEND-edited models exhibit issues with both F1+F2; generalization, consistency, and bleedover are poor despite high efficacy, indicating regurgitation. KN appears unable to make effective edits (F1+F2). By comparison, ROME\xspace avoids both F1 and F2 failures, showing both generalization and specificity in knowledge editing.
\input{figtext/gen-samples}
\reffig{gen-samples} compares generated text after applying the counterfactual ``\textit{Pierre Curie's area of work is medicine}'' to GPT-2 XL (he is actually a physicist). \textbf{Generalization:} In this case, FT and ROME generalize well to paraphrases, describing the subject as a physician rather than a physicist for a range of wordings. On the other hand, FT+L, KE and MEND fail to generalize to paraphrases, alternately describing the subject as either (c,d,e1) in medicine or (c1,e,d1) in physics depending on how the prompt is worded. KE (d) demonstrates a problem with fluency, favoring nonsense repetition of the word \emph{medicine}. \textbf{Specificity:} FT, KE, and MEND have problems with specificity, changing the profession of a totally unrelated subject. Prior to editing knowledge, GPT-2 XL describes Robert Millikan as an astronomer (in reality he is a different type of physicist), but after editing the profession of Pierre Curie, Millikan is described as (b1) a biologist by FT+L and (d2, e2) a medical scientist by KE and MEND. In contrast, ROME is specific, and leaves the field of Millikan unchanged.
\subsection{Limitations}
Our evaluation reveals that, even when factual knowledge is changed successfully, the model will guess plausible new facts that have no basis in evidence and that are likely to be false; this may limit the usefulness of a language model as a source of facts. Developing a better understanding of such guessing behavior is a promising area for future work.
\section{Preliminaries}
\input{figtext/eiffel-example}
\paragraph{Defining Knowledge} \label{subsec:defining-knowledge}
The facts we study take the form of knowledge tuples $t = (s, r, o)$, where $s$ and $o$ are subject and object entities, respectively, and $r$ is the relation connecting the two. For example, $(s=\textit{Megan Rapinoe}, r=\textit{plays sport professionally}, o=\textit{soccer})$ indicates that Rapinoe plays soccer for a living. Each variable represents an entity or relation that can be found in a knowledge graph,\footnote{Our methods do not require a knowledge graph, but the presence of entities and relations in WikiData facilitates evaluation.%
} and that can be written as a natural language string. To query an autoregressive model for knowledge of a fact $t$, we express $(s, r)$ as a text prompt by expanding a template from a data set (Section~\ref{sec:counterfact}), and check whether the generated continuation matches $o$.
\paragraph{Autoregressive Transformer Language Models}
An autoregressive language model $G: \mathcal{X} \rightarrow \mathcal{Y}$ maps a token sequence $[x_1, ..., x_T] = x \in \mathcal{X}$ to a probability distribution $y \in \mathcal{Y} \subset \mathbb{R}^{|V|}$, where $V$ is $G$'s vocabulary, $x_i \in V$, and $y$ is distributed over all possible next-token continuations of $x$. Strings are tokenized using $\tau: \mathcal{S} \rightarrow \mathcal{X}$. %
Tokens are first embedded as vectors $x_i \rightarrow \smash{\atl{h}{0}_i} = \mathrm{emb}(x_i, i)%
\in \mathbb{R}^{H}$. Then, the grid of hidden states $\smash{h^{(l)}_i}$ (\reffig{infoflow}a) is iteratively transformed via $L$ residual layers:\footnote{GPT-J~\cite{gpt-j} feeds $\atl{h_i}{l-1}$ straight to $\atl{\mathrm{mlp}}{l}$; details shown here are for GPT-2~\cite{gpt2}.}
\begin{align}
\atl{h}{l}_i = \atl{h}{l-1}_i &+ \atl{a}{l}_i + \atl{m}{l}_i \label{eq:autoregressive}
\\
\notag
\atl{a}{l} &= \atl{\mathrm{attn}}{l}\left( \gamma\left( \atl{h}{l-1} \right) \right) \\
\notag
\atl{m}{l}_i &= \atl{\mathrm{mlp}}{l} \left( \gamma\left( \atl{a}{l}_i + \atl{h}{l-1}_i \right) \right).
\end{align}
Here $\atl{\mathrm{attn}}{l}$ and $\atl{\mathrm{mlp}}{l}$ are self-attention and MLP modules, and $\gamma$ is layer normalization.
Each $\atl{\mathrm{mlp}}{l}: \mathbb{R}^{H} \rightarrow \mathbb{R}^{H}$ combines a nonlinearity $\sigma$ with two linear transformations
$\smash{\atl{W_{fc}}{l}} \in \mathbb{R}^{D \times H}$ and $\smash{\atl{W_{proj}}{l}} \in \mathbb{R}^{H \times D}$ (Figure~\ref{fig:uv-update}) as:
\begin{align}
\atl{\mathrm{mlp}}{l}(z) = \atl{W_{proj}}{l}\,
\sigma\left( \atl{W_{fc}}{l} z \right) .
\end{align}
Each self-attention layer $\atl{\mathrm{attn}}{l}: \mathbb{R}^{T\times H} \rightarrow \mathbb{R}^{T\times H}$
uses only previous token representations $\smash{h_{j}^{(l-1)}}$, where $j \leq i$, to compute state at the $i$th token $\smash{a_i^{(l)}}$ \cite{vaswani2017attention}. %
\input{figtext/info-flow-maps}
The output probability distribution is read from the last state:
\begin{align}
y = \mathrm{softmax}\left( W_e^T \gamma\left( \atl{h}{L}_{T} \right) \right).
\end{align}
We denote $\prsub{c \mid x}{G} = y_c$ as the probability of $c$ being $x$'s continuation, according to $G$. The next token can be selected by sampling from this distribution. New tokens are repeatedly appended to $x$ to generate sequences of text.
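As a minimal illustration (a toy sketch, not the actual model code; the attention and MLP outputs here are placeholder vectors, and layer normalization is omitted), the residual update of Eqn.~\ref{eq:autoregressive} and the softmax readout can be written as:

```python
import math

def residual_step(h_prev, attn_out, mlp_out):
    # One GPT-2-style residual update: the hidden state at layer l is
    # the layer-(l-1) state plus the attention and MLP contributions.
    return [h + a + m for h, a, m in zip(h_prev, attn_out, mlp_out)]

def softmax(logits):
    # Readout: logits over the vocabulary become a probability
    # distribution over next-token continuations.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]
```

The additive structure of the residual stream is what makes the causal interventions of Section \ref{sec:causal-tracing} well-defined: each layer contributes a summand that can be restored or corrupted independently.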
\section{INTRODUCTION}
\vspace{0.6cm}
\baselineskip 24pt \lineskip 10pt
The idea of a quasi-static and ever-lasting nonsingular universe still seems very attractive among cosmologists. Einstein \cite{E} himself did not believe that his equations could, in general, give non-static solutions, until the clear arguments of Friedman \cite{F}. With the discovery of the Hubble redshift and the mathematical work of Hawking and Penrose \cite{HE} on the inevitability of singularities, rather little attention was paid to non-singular models of the universe until the eighties. However, singularity-free Friedman models, usually called bouncing models, were considered as early as the thirties by Robertson \cite{ROB}. These models appear whenever the
cosmological constant, introduced by Einstein to balance gravitational
attraction, has a value between zero and a certain fixed number that gives
the Einstein Static Universe $ 0 < \Lambda < \Lambda_{st} $ \cite{RR}.
The universe begins with some minimum value of the scale factor
and then expands forever. Since the idea of inflation was established \cite{G,L}, the situation has changed, because the scalar field responsible for the scenario necessarily had to break the energy conditions, giving the effective negative pressure (repulsion) so much demanded by Einstein.
If we ask what the ways to avoid singularities are, one can distinguish a couple of approaches. In some opinions one should consider quantum gravity as the appropriate gravity theory at high densities. However, most of the approaches refer to classical gravity theories alternative to General Relativity. Worth mentioning is the early proposal made by
Cartan \cite{C1,C2,C3} of nonsymmetric connection gravity theory with
torsion in which both isotropic and anisotropic universes start with a very
small but non-zero size \cite{TR,KOP}. Petry \cite{P1,P2} considered a
covariant gravity theory in which isotropic universes are
always bouncing \cite{P3}. In the nonsymmetric metric gravity theory of
Moffat \cite{M1,M2,M3} the black hole is replaced by a superdense object
\cite{M4}. Then, due to a repulsive force generated naturally in the
theory, non-singular cosmological solutions should also be a rule. Other
examples are the higher gravity theories such as: the second-order $R +
\epsilon R^2$ theory \cite{BO} with a bounce solution for
negative curvature \cite{P,CM}, the fourth-order
$R + \lambda R_{\mu \nu}R^{\mu \nu}/R + \tau R_{\alpha \beta \gamma \delta}
R^{\alpha \beta \gamma \delta}/R$ theory \cite{XE} and Brandenberger's
\cite{B} important proposal in which all isotropic solutions are the de
Sitter, though non-singular, type.
If we stay within General Relativity we can keep the simple Robertson-Walker
geometry and either assume a specific equation of state for Friedman models
\cite{IR1,IR2} or explore the scalar field coupled to gravity.
Both approaches are actually equivalent since the scalar field can mimic
different equations of state \cite{BA,LID}.
The aim of this paper is to consider singularity-free bouncing Friedman universes within General Relativity admitting some fraction of negative pressure matter, usually believed to describe cosmic strings and domain walls, which we will call from now on, for the sake of some generality, string-like-matter and wall-like-matter \cite{ST}. The physical motivation for taking exotic matter into consideration is that we still do not have a solution of the dark non-baryonic matter problem. One can strongly believe, in the context of particle physics, that the exotic matter may be a very good candidate for the dark matter. If it appeared that the exotic matter existed, one would believe in the non-singular universes of this paper. However, there exist some objections referring to the compatibility of the domain walls with observations \cite{TUR,KAR}.
Although some other suggestions emerged \cite{GEL,GO}, it is believed that even if walls formed in the early universe, they should have decayed by the present epoch \cite{RS,C}. In order to avoid the domain walls and perhaps to give more
freedom in compatibility with observations we replace the string-like-matter
and wall-like-matter by scalar fields. We postulate that scalar fields may be
the candidates for the dark matter in the present era of the universe
\cite{MCD,BA}. Our procedure is similar to the procedure applied for
inflationary universes \cite{BAR}. First we assume the exotic equation of state
and then we derive the form of the scalar field potentials.
Among non-singular solutions we select those which are oscillating i.e. those
for which the scale factor oscillates between certain fixed values. Such
solutions
first mentioned by Harrison \cite{HAR} and recently discussed by Kardashev
\cite{KAR}
originate from a qualitative change for bouncing models \cite{ROB} after
taking the negative pressure matter into account and admitting negative
cosmological constant. These solutions have both contracting and expanding
phase, so they may be compatible with the astronomical data. They last
infinitely in time which might be satisfactory at least from philosophical
point of view.
Oscillating models might also have another advantage. Since they are
quasi-static and require a similar balance to static models, then, according
to the analysis of Gibbons \cite{GIB1,GIB2}, they should possess very large entropy, with a value close to the maximum admissible for the Einstein Static Universe.
The plan of the paper is as follows. In Section 2 we discuss the existence of
oscillating Friedman solutions and give the general exact oscillating solution
in terms of the elliptic functions which has not been studied before.
Also, we discuss a possibility to get very deep oscillations (i.e. those which
can
reach very high density to allow at least some standard hot universe processes)
and to reduce the amount of exotic matter in form of wall-like-matter. In
Section 3 we present the exact elementary non-oscillatory solutions which complete the discussion of Section 2. In Section 4 we give an alternative scalar field interpretation for exotic fluids necessary
to force the universe to oscillate. We choose a very simple model of the scalar
fields of which potential and kinetic energy are proportional, following the
discussion of Barrow and Saich \cite{BS}. For a monotonic solution and the
oscillating Kardashev's solution we present exact scalar fields and their
potentials. It appears that for oscillating solutions for the scale factor
the potentials are also periodic in the scalar fields. In Section 5 we comment
on the results.
\vspace{.6cm}
\section{OSCILLATING UNIVERSES}
\vspace{.6cm}
Following our earlier discussion of exact analytic solutions of the Friedman
equation \cite{D86A,D89} we generalize it to the case
when other negative pressure fluids are present, in particular wall-like-matter
whose energy density scales as $ R^{-1} $ (R is the scale factor)
\cite{ST}. The Friedman equation has the form
\begin{equation}
\left( \frac{dR}{d\tau} \right)^{2} = C_{r} + C_{m}R - k'R^2 + C_{w}R^3 +
\frac{\Lambda}{3}R^4 ,
\end{equation}
where $ \tau $ is the conformal time defined by the cosmic time as
\begin{equation}
d\tau = \frac{dt}{R} ,
\end{equation}
and $k' = k - C_{s}$, where $k$ is the curvature index; the constants
$C_{r}, C_{m}, C_{s}, C_{w}$ are responsible for the density of
radiation, nonrelativistic matter, string-like-matter (whose energy density
scales as $ R^{-2} $) and wall-like-matter, respectively, and
$\Lambda$ is the cosmological constant. The Fried\-man equation (2.1)
in terms of the so-called reduced blackbody temperature \footnote{The reduced
blackbody temperature $T(\tau)$ is proportional to the temperature of the
microwave background if $\alpha \neq 0$, and it is only interpreted as an
inverse function of $R(\tau)$ if $\alpha = 0$ \cite{CG}}
\begin{equation}
T = \Lambda_{c}^{ - \frac{1}{2}} R^{-1},
\end{equation}
where
\begin{equation}
\Lambda_{c}^{ - \frac{1}{2}} \equiv \frac{3}{2} C_{m} ,
\end{equation}
is \cite{CG,D89}
\begin{equation}
\left( \frac{dT}{d\tau} \right)^{2} = \alpha T^4 + \frac{2}{3} T^3 - k' T^2 +
\beta T + \frac{\lambda}{3} ,
\end{equation}
where the dimensionless parameter
\begin{equation}
\beta = C_{w}\Lambda_{c}^{ - \frac{1}{2}}
\end{equation}
is responsible for the density of wall-like-matter ($\alpha = C_{r}\Lambda_{c},
\lambda = \Lambda / \Lambda_{c}$).
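For completeness (this short check is ours, following directly from the definitions above), equation (2.5) can be recovered from (2.1) by substituting $R = \Lambda_{c}^{-\frac{1}{2}} T^{-1}$, so that $dR/d\tau = -\Lambda_{c}^{-\frac{1}{2}} T^{-2}\, dT/d\tau$, and multiplying by $\Lambda_{c} T^{4}$:

```latex
\begin{eqnarray}
\left( \frac{dT}{d\tau} \right)^{2} & = & \Lambda_{c} T^{4}
\left( \frac{dR}{d\tau} \right)^{2}
= C_{r} \Lambda_{c} T^{4} + C_{m} \Lambda_{c}^{\frac{1}{2}} T^{3} - k' T^{2}
+ C_{w} \Lambda_{c}^{-\frac{1}{2}} T + \frac{\Lambda}{3 \Lambda_{c}}
\nonumber ,
\end{eqnarray}
```

which reproduces (2.5) term by term, since $C_{r}\Lambda_{c} = \alpha$, $C_{m}\Lambda_{c}^{\frac{1}{2}} = \frac{2}{3}$ by (2.4), $C_{w}\Lambda_{c}^{-\frac{1}{2}} = \beta$ and $\Lambda/\Lambda_{c} = \lambda$.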
In order to make a qualitative analysis of the solutions of (2.5) we use the
method of the associated mechanical system \cite{CG,CHA}. In fact, (2.5) can be
considered as the energy equation of a one-dimensional mechanical system with
$T$ as a coordinate, the
kinetic energy $\left( dT / d\tau \right)^2 \geq 0$, the potential
\begin{equation}
V_{k',\alpha,\beta}(T) \equiv - \alpha T^4 - \frac{2}{3} T^3 + k' T^2 - \beta T
=
- Q_{k',\alpha,\beta,\lambda}(T) + \frac{\lambda}{3} ,
\end{equation}
where
\begin{equation}
Q_{k',\alpha,\beta,\lambda}(T) \equiv \alpha T^4 + \frac{2}{3} T^3 - k' T^2 +
\beta T + \frac{\lambda}{3} ,
\end{equation}
and the total energy $\frac{\lambda}{3}$.
The general shape of the potential (2.7) depends on the values of the factors
$k', \alpha$ and $\beta$. Instead of the full discussion of the solutions (2.5)
we will concentrate on
special cases which allow the universe to oscillate in time between the two
fixed values of the scale factor $R(\tau)$ (or $T(\tau)$) \cite{KAR}. In order
to have oscillations, at least two kinds of matter are necessary, i.e. the cosmological constant and the wall-like-matter.
To make our considerations easier we put $\alpha = 0$ (no radiation)
\footnote{ One can consider the full formula
(2.7) that includes radiation as well, and the roots of the cubic equation
$ - \alpha T^3 - \frac{2}{3} T^2 + k' T - \beta = 0 $, which we will call $T_{1},
T_{2}$ and $T_{3}$ have to fulfil the conditions
\begin{eqnarray}
T_{1} + T_{2} + T_{3} & = & - \frac{2}{3\alpha} \nonumber ,\\
T_{1}T_{2} + T_{1}T_{3} + T_{2}T_{3} & = & - \frac{k'}{\alpha} \nonumber ,\\
T_{1}T_{2}T_{3} & = & - \frac{\beta}{\alpha} \nonumber .
\end{eqnarray}
It follows from these relations that there should be at least one negative real
root and the two real or complex conjugate roots, which means that radiation
pressure can only change the left branch of the curve of the potential
$V_{\alpha,k',\beta}(T)$ (i.e. negative $T(\tau)$ - cf. Figs.1-3) which is not
physically relevant.}
The potential (2.7) is
then
\begin{equation}
V_{k',\beta} = - \frac{2}{3} T^3 + k' T^2 - \beta T ,
\end{equation}
and it has one double root $T_{1,2} = \frac{3}{4}k'$ for $\beta = \frac{3}{8}
k'^2$ and two real roots for $\beta < \frac{3}{8}k'^2$ as well as the root at
$T = 0$. Its corresponding extrema are a double one at $\tilde{T}_{1,2} =
\frac{1}{2}k'$ for $\beta = \frac{1}{2} k'^2$ (which is an inflection point as
well) and two real ones for $\beta < \frac{1}{2}k'^2$.
The general solution of (2.5) given in terms of the We\-ier\-strass
el\-lip\-tic
${\cal P}$ function is
\begin{equation}
T(\tau) = \frac{3\sqrt{\alpha} {\cal P}'(\tau) - {\cal P}(\tau) - \frac{k'}{12}
- \frac{1}{48} \alpha \beta k'^2}{6 \alpha {\cal P}(\tau) - \frac{1}{6} -
k' \alpha} ,
\end{equation}
where ${\cal P}$ is defined by the equation
\begin{eqnarray}
{\cal P}'(\tau) \equiv \frac{d{\cal P}}{d\tau} = \sqrt{4{\cal P}^3 -
g_{2}{\cal P} - g_{3}} \nonumber ,
\end{eqnarray}
and the invariants
\begin{eqnarray}
g_{2} & = & \frac{k'^2}{12} + \frac{\alpha\lambda}{3} - \frac{\beta}{6} ,\\
g_{3} & = & 6^{-3} \left( k'^3 - 2\lambda - 3k'\beta \right) -
\frac{\alpha}{2} \left( \frac{k'\lambda}{9} + \frac{\beta^2}{8} \right) .
\end{eqnarray}
For $\alpha = 0$ case (no radiation) corresponding to the potential (2.9) we
have the general solution of (2.5) in the elliptic form as well
\begin{equation}
T(\tau) = 6 \left[ {\cal P}(\tau) + \frac{k'}{12} \right] ,
\end{equation}
with the invariants given by (2.11)-(2.12) for $\alpha = 0$ and the
discriminant
\cite{D86A}
\begin{equation}
\Delta_{k',\beta,\lambda} \equiv g_{2}^3 - 27 g_{3}^2 = 2^{-4}3^{-3} \left[
\beta^2 \left( \frac{3}{4}k'^2 - 2\beta \right) + k'\lambda \left( k'^2 -
3\beta \right) - \lambda^2 \right] ,
\end{equation}
i.e.
\begin{equation}
\Delta_{k',\beta,\lambda} = - 2^{-4}3^{-3} \left( \lambda - \lambda_{+} \right)
\left( \lambda - \lambda_{-} \right) ,
\end{equation}
where the critical values of $\lambda$ are
\begin{equation}
\lambda_{\pm} = \frac{1}{2}k' \left( k'^2 - 3\beta \right) \pm \frac{1}{2}
\sqrt{k'^2 \left( k'^2 - 3\beta \right)^2 + 4\beta^2 \left( \frac{3}{4}k'^2
- 2\beta \right)} .
\end{equation}
The associated mechanical system method is as follows. Having the exact shape
of the potential (2.7) (Figs.1-3) we cut the curves along the constant
$\lambda$ lines. The point associated with the universe can only move in the
upper part of the plane and we look for the possible solutions.
We consider only the particular cases given by (2.13) where oscillations are
possible ($T_{1,2}$ zeros and $\tilde{T}_{1,2}$ extrema of (2.9)). These are:
A. If $k' > 0$ and $\frac{\beta}{k'^2} < \frac{3}{8}$ we have
\begin{eqnarray}
T_{1,2} & = & \frac{3}{4} \left( k' \mp \frac{1}{3} \sqrt{9k'^2 - 24\beta}
\right) \nonumber ,\\
\tilde{T}_{1,2} & = & \frac{1}{2} \left( k' \mp \sqrt{k'^2 - 2 \beta} \right)
\nonumber ,
\end{eqnarray}
and the oscillations corresponding to constant $\lambda$ lines for
\begin{eqnarray}
\lambda_{-} < \lambda < 0 \nonumber
\end{eqnarray}
with $\lambda_{-}$ given by (2.16) are possible in the central well of the
potential (2.9) (Fig.1).
B. If $k' > 0$ and $\frac{\beta}{k'^2} = \frac{3}{8}$ we have
\begin{eqnarray}
T_{1} & = & T_{2} = \tilde{T}_{2} = \frac{3}{4}k' \nonumber ,\\
\tilde{T}_{1} & = & \frac{k'}{4} \nonumber ,
\end{eqnarray}
and the oscillations are possible for
\begin{eqnarray}
- \frac{1}{8}k'^3 = \lambda_{-} < \lambda < \lambda_{+} = 0 \nonumber ,
\end{eqnarray}
i.e. in the central well of the appropriate potential (2.9) (Fig.2).
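As a consistency check (ours, not part of the original derivation), substituting $\beta = \frac{3}{8} k'^2$ into (2.16) gives $k'^2 - 3\beta = -\frac{1}{8}k'^2$ and $\frac{3}{4}k'^2 - 2\beta = 0$, so that

```latex
\begin{eqnarray}
\lambda_{\pm} = - \frac{k'^3}{16} \pm \frac{1}{2}
\sqrt{k'^2 \left( \frac{k'^2}{8} \right)^{2}}
= - \frac{k'^3}{16} \pm \frac{k'^3}{16} \nonumber ,
\end{eqnarray}
```

i.e. $\lambda_{+} = 0$ and $\lambda_{-} = -\frac{1}{8}k'^3$, in agreement with the bounds quoted above.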
C. If $k' > 0$ and $\frac{3}{8} < \frac{\beta}{k'^2} < \frac{1}{2}$ we have
\begin{eqnarray}
T_{1} & = & T_{2} = 0 \nonumber ,\\
\tilde{T}_{1,2} & = & \frac{1}{2} \left( k' \mp \sqrt{k'^2 - 2 \beta} \right)
\nonumber ,
\end{eqnarray}
and the oscillations are possible for
\begin{eqnarray}
\lambda_{-} < \lambda < \lambda_{+} < 0 \nonumber ,
\end{eqnarray}
with $\lambda_{\pm}$ given by (2.15) (Fig.3).
There is apparently a double extremum for $T = \frac{k'}{2}$ if
$\frac{\beta}{k'^2} = \frac{1}{2}$, but it is also an inflection point of
(2.9) and this case does not allow any oscillating solutions at all.
In all considered cases $A, B, C$ the discriminant (2.15) is positive and the
fundamental periodicity cell is a rectangle (Fig.4) \cite{TRI}. The roots
$e_{1}, e_{2}, e_{3}$ of the equation
\begin{equation}
4y^3 + g_{2}y + g_{3} = 0
\end{equation}
are all real and, since $e_{1} + e_{2} + e_{3} = 0$, at least one of them must
be negative. One of the elementary periods, $\omega$, is purely real and the
other, $\omega^{'}$, is purely imaginary. The roots are given by the relations
\begin{eqnarray}
{\cal P}(\omega) = e_{1} \nonumber ,\\
{\cal P}(\omega + \omega^{'}) = e_{2} \nonumber ,\\
{\cal P}(\omega^{'}) = e_{3} \nonumber ,
\end{eqnarray}
and
\begin{equation}
e_{1} > e_{2} > e_{3} .
\end{equation}
However, since we consider the solutions of the equation for $T(\tau)$ (cf.
(2.5)), we should instead define the roots of the equation
\begin{equation}
\frac{2}{3} T^3 - k' T^2 + \beta T + \frac{\lambda}{3} = 0 ,
\end{equation}
which are
\begin{eqnarray}
T_{min} = 6e_{3} + \frac{k'}{2} \nonumber ,\\
T_{max} = 6e_{2} + \frac{k'}{2} \nonumber ,\\
T_{recol} = 6e_{1} + \frac{k'}{2} \nonumber ,
\end{eqnarray}
and
\begin{equation}
T_{min} < T_{max} < T_{recol} ,
\end{equation}
where $T_{recol}$ refers to a minimum of $T(\tau)$, i.e. a maximum of $R(\tau)$,
for the recollapsing model associated with (2.13), and $T_{min}, T_{max}$ refer
to a minimum and a maximum of $T(\tau)$ for the oscillating model associated
with (2.13). From (2.19) we conclude that $T_{min}, T_{max}$ and $T_{recol}$
must be real and positive, since $T_{min} + T_{max} + T_{recol} = \frac{3}{2}k'$
and $T_{min}T_{max}T_{recol} = - \lambda/2$, and $\lambda$ is
negative in the cases $A, B, C$.
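These root relations can be checked numerically. The following Python sketch uses illustrative, assumed case-A parameters ($k' = 1$, $\beta = 0.3 < \frac{3}{8}k'^2$, $\lambda = -0.05$ chosen in the oscillatory range $\lambda_{-} < \lambda < 0$); it is a sanity check, not part of the original analysis:

```python
import numpy as np

# Illustrative (assumed) case-A parameters: k' = 1, beta = 0.3, lambda = -0.05.
kp, beta, lam = 1.0, 0.3, -0.05

# Roots of (2/3) T^3 - k' T^2 + beta T + lambda/3 = 0
roots = np.roots([2.0 / 3.0, -kp, beta, lam / 3.0])
assert np.allclose(np.imag(roots), 0.0)          # all three roots are real
T_min, T_max, T_recol = np.sort(np.real(roots))  # ordering as in the text

assert T_min > 0.0                               # roots are positive
# Vieta's relations for the cubic:
assert np.isclose(T_min + T_max + T_recol, 1.5 * kp)
assert np.isclose(T_min * T_max * T_recol, -lam / 2.0)
```

With these values the three roots come out real, positive and ordered, as the text asserts for the oscillatory regime.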
The general oscillatory solution for all the cases $A, B, C$ can be expressed
in terms of the Weierstrass $\zeta$ function \cite{CG}
\begin{equation}
\frac{1}{T(\tau)} = \Lambda_{c}^{\frac{1}{2}} R(\tau) = \frac{1}{T_{min}} +
\sqrt{\frac{3}{\lambda}} \left[ \zeta(\tau - \tau_{d}) - \zeta(\tau + \tau_{d})
+ 2\zeta(\tau_{d}) \right] ,
\end{equation}
and the expression for the cosmic time, from (2.2), is
\begin{equation}
\left( \frac{\Lambda}{3} \right)^{\frac{1}{2}} t(\tau) =
\tau \left[ \left( \frac{\lambda}{3} \right)^{\frac{1}{2}} \frac{1}{T_{min}} +
2 \zeta(\tau_{d}) \right] + \ln \frac{\sigma(\tau_{d} - \tau)}
{\sigma(\tau_{d} + \tau)} ,
\end{equation}
so for $\tau = 0$, $t(\tau) = 0$ and $T = T_{min}$ (Fig.6). In both formulas
(2.21) and (2.22) the conformal time is real, but the zeros of the function
$T(\tau)$, namely $-\tau_{d}$ and $\tau_{d}$, are imaginary. Also, $\lambda$ in
these expressions should be taken negative, according to the general results on
the existence of the oscillatory solutions, and then $\sqrt{3/\lambda}$
and $\sqrt{\Lambda/3}$ are imaginary.
The periods are given by \cite{TRI}
\begin{eqnarray}
\omega = \int_{T_{min}}^{T_{max}} \frac{dT}{\sqrt{\frac{2}{3}(T - T_{min})
(T - T_{max})(T - T_{recol})}} \nonumber ,\\
\omega^{'} = \int_{T_{max}}^{T_{recol}} \frac{dT}{\sqrt{\frac{2}{3}(T -
T_{min})
(T - T_{max})(T - T_{recol})}} \nonumber ,
\end{eqnarray}
so $\omega$ is purely real and $\omega^{'}$ is purely imaginary.
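The reality of $\omega$ can be verified numerically: the quadrature above (with the substitution $T = T_{min} + (T_{max} - T_{min})\sin^2\theta$ removing the endpoint singularities) should equal the min-to-max travel time obtained by integrating the equation of motion $T'' = f'(T)/2$, where $(dT/d\tau)^2 = f(T)$, directly. A Python sketch with illustrative, assumed parameters $k' = 1$, $\beta = 0.3$, $\lambda = -0.05$:

```python
import numpy as np

# Illustrative (assumed) case-A parameters.
kp, beta, lam = 1.0, 0.3, -0.05

def df(T):  # f'(T), where f(T) = (2/3)T^3 - k'T^2 + beta*T + lambda/3
    return 2.0 * T**2 - 2.0 * kp * T + beta

T_min, T_max, T_recol = np.sort(np.real(np.roots([2.0/3.0, -kp, beta, lam/3.0])))

# Half-period from the quadrature, with T = T_min + (T_max - T_min) sin^2(theta)
# removing the integrable square-root singularities at both endpoints.
theta = np.linspace(0.0, 0.5 * np.pi, 20001)
T_sub = T_min + (T_max - T_min) * np.sin(theta) ** 2
g = 2.0 / np.sqrt((2.0 / 3.0) * (T_recol - T_sub))
omega = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(theta)))  # trapezoid rule

# The same quantity from classical RK4 integration of T'' = f'(T)/2,
# starting at the turning point T = T_min and stopping at T = T_max.
h, T, v, tau, omega_ode = 2.0e-4, float(T_min), 0.0, 0.0, None
for _ in range(200000):
    k1T, k1v = v, 0.5 * df(T)
    k2T, k2v = v + 0.5*h*k1v, 0.5 * df(T + 0.5*h*k1T)
    k3T, k3v = v + 0.5*h*k2v, 0.5 * df(T + 0.5*h*k2T)
    k4T, k4v = v + h*k3v, 0.5 * df(T + h*k3T)
    Tn = T + (h/6.0) * (k1T + 2.0*k2T + 2.0*k3T + k4T)
    vn = v + (h/6.0) * (k1v + 2.0*k2v + 2.0*k3v + k4v)
    if v > 0.0 and vn <= 0.0:          # velocity changes sign at T = T_max
        omega_ode = tau + h * v / (v - vn)
        break
    T, v, tau = Tn, vn, tau + h

assert omega_ode is not None
assert abs(omega_ode - omega) / omega < 1.0e-3
```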
Let us now consider another type of exact oscillating universe, in which
there is no dust (i.e. $C_{m} = 0$ in (2.1) and $\alpha = 0$). Because
of the definition (2.4) we cannot now use the relation (2.3) to take the
limit $C_{m} \rightarrow 0$ in (2.5). Instead, we substitute
\begin{equation}
M(\tau) = \frac{1}{R(\tau)} ,
\end{equation}
so (2.1) becomes
\begin{equation}
\left( \frac{dM}{d\tau} \right)^{2} = - k' M^2 + C_{w} M + \frac{\Lambda}{3} .
\end{equation}
The potential analogous to (2.7) now is (Fig.7)
\begin{equation}
V_{k',C_{w}} = k' M^2 - C_{w} M ,
\end{equation}
and provided $k' > 0$ we have oscillations for $\Lambda < 0$. In this case the
solution is elementary and oscillating \cite{KAR}. After a
deparametrization based on (2.2) the exact solution can be written down as
\begin{equation}
R(t) = - \frac{3}{2\Lambda} \left[ A \sin{t \sqrt{ - \frac{\Lambda}{3}}} +
C_{w} \right] ,
\end{equation}
where
\begin{eqnarray}
C_{w} & > & A \equiv \sqrt{C_{w}^2 + \frac{4}{3} \Lambda k'} ,\\
C_{w}^2 & > & - \frac{4}{3} \Lambda k' \nonumber ,\\
\Lambda & < & 0 \nonumber ,
\end{eqnarray}
so the universe oscillates between $R_{min} = - \frac{3}{2\Lambda}\left( C_{w} - A \right)$ and
$R_{max} = - \frac{3}{2\Lambda}\left( C_{w} + A \right)$.
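The solution (2.26) can be checked numerically: with $M = 1/R$ and $dt = R\,d\tau$ (cf. (2.2)), equation (2.24) becomes $(dR/dt)^2 = (\Lambda/3)R^2 + C_{w}R - k'$ in cosmic time, and the extrema follow directly from the sine term. A Python sketch with illustrative, assumed values $\Lambda = -0.3$, $k' = 1$, $C_{w} = 0.8$ (satisfying the conditions (2.27)):

```python
import numpy as np

# Illustrative (assumed) parameters satisfying C_w^2 > -(4/3) Lambda k'.
Lam, kp, Cw = -0.3, 1.0, 0.8
A = np.sqrt(Cw**2 + (4.0 / 3.0) * Lam * kp)
w = np.sqrt(-Lam / 3.0)

t = np.linspace(0.0, 2.0 * np.pi / w, 200001)      # one full period
R = -3.0 / (2.0 * Lam) * (A * np.sin(w * t) + Cw)  # solution (2.26)
dRdt = -3.0 / (2.0 * Lam) * A * w * np.cos(w * t)

# Cosmic-time Friedman equation implied by (2.24) with M = 1/R, dt = R dtau:
assert np.allclose(dRdt**2, (Lam / 3.0) * R**2 + Cw * R - kp, atol=1e-10)

# Extrema read off from (2.26): R = -(3/(2 Lambda)) (C_w -/+ A)
assert np.isclose(R.min(), -3.0 / (2.0 * Lam) * (Cw - A))
assert np.isclose(R.max(), -3.0 / (2.0 * Lam) * (Cw + A))
```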
From the shape of the associated potential (2.7) for $\beta =
\frac{3}{8}k'^{2}$
(Case B, Fig.2) one can conclude that for \footnote{In fact the restrictions
given below are the strongest ones, since the minimum size of the
oscillating universes in cases A, B, C is always larger (i.e. $\tilde{T}_{2} <
\frac{3}{4}k'$, cf. Eq.(3.2)).}
\begin{eqnarray}
\lambda_{-} = - \frac{1}{8}k'^3 < \lambda < 0 \nonumber
\end{eqnarray}
the universe will oscillate between the fixed extrema of $R(\tau)$ and
the smallest admissible value of the scale factor will be slightly larger than
(cf. Section 3)
\begin{equation}
R_{u}(\tau) = \frac{1}{\Lambda_{c}^{\frac{1}{2}}\tilde{T}_{2}} =
2\frac{C_{m}}{k'} ,
\end{equation}
so the mass density of nonrelativistic matter will be \cite{D86A}
\begin{equation}
\varrho_{m} = \frac{3c^2}{8\pi G} C_{m}R_{u}^{-3} = \frac{k'^3}{C_{m}^2}
\frac{3c^2}{64\pi G} .
\end{equation}
If we assume $k' = k - C_{s} \approx 1$ (only $k = +1$ is possible here, since
$k' > 0$), which means that the amount of string-like-matter is small and most
of the exotic matter is in the form of wall-like-matter, then from (2.28)
\begin{eqnarray}
\varrho_{m} \sim C_{m}^{-2} \cdot 2 \cdot 10^{26} \frac{g}{cm} \nonumber .
\end{eqnarray}
In order to calculate $C_{m}$ (with the dimension [cm]) we should apply the
conservation law (2.28). For the maximum density at the nucleosynthesis scale
\begin{eqnarray}
\varrho_{m} \sim 10^{4} \frac{g}{cm^3} \nonumber
\end{eqnarray}
we need
\begin{eqnarray}
C_{m} \sim 1.4 \cdot 10^{11} cm \nonumber ,
\end{eqnarray}
which can be achieved, for instance, either for the present nonrelativistic
matter mass density $\varrho_{m0} \sim 1.1 \cdot 10^{-45} \frac{g}{cm^3}$
(rather tiny) and the present radius $R_{0} \sim 5 \cdot 10^9$ light years,
or for the present mass density $\varrho_{m0} \sim 1.1 \cdot 10^{-35}
\frac{g}{cm^3}$ (larger) and the smaller present radius $R_{0} \sim 5 \cdot
10^6$ light years. However, for the maximum density at the recombination scale
\begin{eqnarray}
\varrho_{m} \sim 10^{-18} \frac{g}{cm^3} \nonumber
\end{eqnarray}
we need
\begin{eqnarray}
C_{m} \sim 1.4 \cdot 10^{22} cm \nonumber ,
\end{eqnarray}
and it requires, for example, the present nonrelativistic matter density
$\varrho_{m0} \sim 1.1 \cdot 10^{-34} \frac{g}{cm^3}$ and the present radius
$R_{0} \sim 5 \cdot 10^9$ light years, which is quite realistic. This means
that oscillations might allow at least some standard early-universe processes
like nucleosynthesis or recombination.
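The numerical estimates above can be reproduced directly in cgs units. The following Python sketch (assuming $k' \approx 1$, as in the text) checks the coefficient $3c^2/64\pi G$ and the two quoted values of $C_{m}$:

```python
import math

# Order-of-magnitude check of the density estimates (cgs units).
c = 2.998e10      # speed of light, cm s^-1
G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
kp = 1.0          # assume k' ~ 1, as in the text

coeff = 3.0 * c**2 / (64.0 * math.pi * G)          # the ~2 x 10^26 g/cm factor
assert abs(coeff - 2.0e26) / 2.0e26 < 0.05

# C_m = sqrt(k'^3 * coeff / rho_m), from rho_m = (k'^3 / C_m^2)(3 c^2 / 64 pi G)
Cm_nucleo = math.sqrt(kp**3 * coeff / 1.0e4)       # rho_m ~ 10^4 g/cm^3
Cm_recomb = math.sqrt(kp**3 * coeff / 1.0e-18)     # rho_m ~ 10^-18 g/cm^3
assert abs(Cm_nucleo - 1.4e11) / 1.4e11 < 0.05     # ~ 1.4 x 10^11 cm
assert abs(Cm_recomb - 1.4e22) / 1.4e22 < 0.05     # ~ 1.4 x 10^22 cm
```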
The amount of wall-like-matter can be reduced if we lessen the value of
$k' = k - C_{s}$ (i.e. decrease the appropriate value of $C_{m}$, cf. (2.28)),
since the density of walls scales as $k'^2$,
\begin{eqnarray}
\beta = \frac{3}{8} k'^2 = \frac{3}{8} \left( k - C_{s} \right)^2 \nonumber
,
\end{eqnarray}
and the wall-like-matter is replaced by string-like-matter ($0 < C_{s} < 1$).
However, in such a case both the maximum and the minimum of the potential
(2.7) approach zero: the well becomes shallower, which means that the
corresponding periods of oscillation are shorter.
Since the amplitude of oscillations corresponds to the length of the $\lambda =$
const. lines contained inside the well of the potential (2.7) (Figs.1-3), one
can notice a very interesting feature of the models A and B: the closer to zero
the (necessarily negative) cosmological constant $\lambda$ is, the bigger the
maximum and the smaller the minimum size of the universe can be, which means a
longer period of oscillations. On the other hand, since we do not experience
short periods of oscillations (they would appear for large negative $\lambda$ -
Figs.1-3), we could obtain some restrictions on the possible value of the
cosmological constant in these models.
Finally, it is very interesting to mention that if we admit some fractional
equations of state
\begin{eqnarray}
p = \left( \varepsilon - 1 \right) \rho \nonumber
\end{eqnarray}
with $\varepsilon = \frac{1}{3} n$ ($n$ non-integer), to derive a new Friedman
equation different from (2.1), then a larger variety of possibilities for
oscillations appears \cite{HAR,D86B}.
This can be explained in terms of the associated mechanical system simply
because more wells of the potential are possible, in which the universe can
oscillate. Such fractional equations of state are possible since the exotic
matter may have an effective equation of state anywhere in a range between
well-established values. For instance, for strings $\frac{2}{3} \leq
\varepsilon_{s} \leq \frac{4}{3}$ and
for walls $\frac{1}{3} \leq \varepsilon_{w} \leq \frac{4}{3}$ depending on
their
velocities \cite{KT} \footnote{It should be pointed out that string
or wall stress is in fact anisotropic and might be considered by using an
anisotropic model of spacetime rather than the isotropic one \cite{BGT}.
However, it seems that there is no reason for strings or walls to be
formed anisotropically throughout the whole universe and we can suitably
average the network over all directions \cite{HIN}}.
\vspace{.6cm}
\section{NON-OSCILLATING EXACT UNIVERSES}
\vspace{.6cm}
\setcounter{equation}{0}
In this Section we concentrate on some particular solutions given by elementary
functions. None of them is oscillatory, but they complete the discussion of the
solutions in the cases $A, B, C$ and give some insight into the nature of the
oscillatory solutions as well.
Generally, if the discriminant (2.14) vanishes, the solution is elementary. From
(2.15) one can see that this happens for $\lambda = \lambda_{+}$ and
$\lambda = \lambda_{-}$, with $\lambda_{\pm}$ given by (2.16). With the
condition of vanishing discriminant, the equation (2.5) factorizes to the
form
\begin{equation}
\left( \frac{dT}{d\tau} \right)^2 = \left( T - \tilde{T}_{1,2} \right)^2 \left(
\frac{2}{3}T + A \right)
\end{equation}
with
\begin{equation}
T = \tilde{T}_{1,2} = \frac{1}{2} \left( k' \mp \sqrt{k'^2 - 2\beta} \right)
\end{equation}
and $A =$ const.
With $\tilde{T}_{1,2}$ given by (3.2) we have two Einstein Static Universes in
the cases $A$ and $C$ (Figs.1 and 3). One of them should be stable because it
lies at the bottom of the well of the potential (2.7). In the case B i.e. for
$\frac{\beta}{k'^2} = \frac{3}{8}$ (3.2) reduces to
\begin{equation}
T = \tilde{T}_{1,2} = \frac{k'}{2} \left( 1 \mp \frac{1}{2} \right) .
\end{equation}
With every stable solution for $\tilde{T}_{1}$ there is also associated the
elementary recollapsing solution for $\lambda = \lambda_{-}$. On the other hand,
with each unstable solution for $\tilde{T}_{2}$ there are also associated two
asymptotic solutions for $\lambda = \lambda_{+}$.
Since the types of the elementary solutions of (3.1) in the cases $A$ and $C$
are analogous to the solutions in the case $B$, with only a numerical difference
between (3.2) and (3.3), we will discuss the latter case, which is
mathematically simpler.
The stable Einstein Static Universe appears for $\lambda = \lambda_{-} =
- \frac{1}{8} k'^3$ and
\begin{equation}
R_{s} = \frac{1}{\Lambda_{c}^{\frac{1}{2}}\tilde{T}_{1}} = 6 \frac{C_{m}}{k'} .
\end{equation}
The equation (3.1) for $\lambda = \lambda_{-}$ reads as
\begin{equation}
\left( \frac{dT}{d\tau} \right)^2 = \frac{2}{3} \left( T - \frac{k'}{4}
\right)^2 \left( T - k' \right) ,
\end{equation}
and its solution for $T > k'$ describes the evolution of the closed universe
from the big-bang to the big-crunch in the form
\begin{equation}
R(\tau) = \frac{1}{\Lambda_{c}^{\frac{1}{2}}T(\tau)} =
\frac{\Lambda_{c}^{ - \frac{1}{2}}}{k' \left[ 1 +
\frac{3}{4}\tan^2{\frac{1}{2}\sqrt{\frac{k'}{2}}\tau}
\right]} .
\end{equation}
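The solution (3.6) can be verified numerically: the corresponding $T(\tau) = k'\left[1 + \frac{3}{4}\tan^2\left(\frac{1}{2}\sqrt{k'/2}\,\tau\right)\right]$ should satisfy (3.5) identically. A Python sketch, assuming $k' = 1$ for illustration:

```python
import numpy as np

# Check that T(tau) corresponding to (3.6) satisfies (3.5); assume k' = 1.
kp = 1.0
tau = np.linspace(0.05, 2.0, 2001)                 # away from the tan singularity
x = 0.5 * np.sqrt(kp / 2.0) * tau

T = kp * (1.0 + 0.75 * np.tan(x) ** 2)
# Analytic derivative: dT/dtau = (3k'/4) sqrt(k'/2) tan(x) sec^2(x)
dTdtau = 0.75 * kp * np.sqrt(kp / 2.0) * np.tan(x) / np.cos(x) ** 2

lhs = dTdtau ** 2
rhs = (2.0 / 3.0) * (T - kp / 4.0) ** 2 * (T - kp)
assert np.allclose(lhs, rhs, rtol=1e-10)           # (3.5) holds identically
```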
The unstable Einstein Static Universe appears for $\lambda = \lambda_{+} = 0$
and
\begin{equation}
R_{u} = \frac{1}{\Lambda_{c}^{\frac{1}{2}}\tilde{T}_{2}} = 2 \frac{C_{m}}{k'} .
\end{equation}
The equation (3.1) for $\lambda = \lambda_{+} = 0$ reads as
\begin{equation}
\left( \frac{dT}{d\tau} \right)^2 = \frac{2}{3}T \left( T - \frac{3}{4}k'
\right)^2 .
\end{equation}
There are two solutions of (3.8) asymptotic to (3.7).
For $0 \leq T \leq \frac{3}{4}k'$ we have
\begin{equation}
R(\tau) = \frac{1}{\Lambda_{c}^{\frac{1}{2}} T(\tau)} = \frac{4}{3k'}
\Lambda_{c}^{ - \frac{1}{2}} \coth^2{\frac{1}{2}\sqrt{\frac{k'}{2}}\tau} ,
\end{equation}
and according to (2.2)
\begin{equation}
t(\tau) = \frac{8\sqrt{2}}{3}\left( k'^3\Lambda_{c} \right)^{ - \frac{1}{2}}
\left[
\frac{1}{2}\sqrt{\frac{k'}{2}}\tau - \coth{\frac{1}{2}\sqrt{\frac{k'}{2}}\tau}
\right] .
\end{equation}
Then for $T > \frac{3}{4}k'$ we have
\begin{equation}
R(\tau) = \frac{1}{\Lambda_{c}^{\frac{1}{2}} T(\tau)} =
\frac{4}{3k'}\Lambda_{c}^{ - \frac{1}{2}}
\tanh^2{\frac{1}{2}\sqrt{\frac{k'}{2}}\tau} ,
\end{equation}
and
\begin{equation}
t( \tau) = \frac{ 8 \sqrt{2}}{3} \left( k'^3 \Lambda_{c} \right)^{ -
\frac{1}{2}} \left[ \frac{1}{2} \sqrt{ \frac{k'}{2}} \tau - \tanh{ \frac{1}{2}
\sqrt{ \frac{k'}{2}} \tau} \right] .
\end{equation}
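A numerical consistency check of the deparametrization (2.2), $dt = R\,d\tau$, for the pair (3.11)-(3.12) can also be made. The Python sketch below assumes illustrative values $k' = 1$, $\Lambda_{c} = 1$:

```python
import numpy as np

# Check dt/dtau = R(tau) for the asymptotic solution (3.11)-(3.12).
kp, Lc = 1.0, 1.0                                  # assumed illustrative values
a = 0.5 * np.sqrt(kp / 2.0)
tau = np.linspace(0.1, 5.0, 50001)

R = (4.0 / (3.0 * kp)) / np.sqrt(Lc) * np.tanh(a * tau) ** 2         # (3.11)
t = (8.0 * np.sqrt(2.0) / 3.0) / np.sqrt(kp**3 * Lc) * (
    a * tau - np.tanh(a * tau))                                      # (3.12)

dt_dtau = np.gradient(t, tau)                      # numerical derivative
# Drop the one-sided endpoint estimates of np.gradient:
assert np.allclose(dt_dtau[1:-1], R[1:-1], rtol=1e-4)
```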
\vspace{.6cm}
\section{SCALAR FIELD COSMOLOGIES}
\vspace{.6cm}
\setcounter{equation}{0}
In this section we try to give an alternative interpretation of the exotic
matter appearing in the Friedman equation (2.1), in terms of scalar fields with
appropriate potentials. The motivation is that we are still looking for new
candidates for the dark matter in cosmology, and scalar fields existing at the
present era of the evolution are good candidates. Such an interpretation seems
to give more freedom in imposing the observational constraints than
string-like-matter and wall-like-matter \cite{MCD}. Our procedure is the
reverse of the standard procedure for dealing with scalar fields: instead of
starting with an explicit potential to derive the equation of state, we start
with the equations of state for string-like-matter and wall-like-matter
(cf. Section 2, formula (2.1)) in order to derive the exact potentials
\cite{BAR}. The energy density and pressure for two mutually noninteracting
scalar fields minimally coupled to gravity can be written as
\begin{eqnarray}
\rho_{i} = \frac{1}{2}\dot{\varphi}_{i}^2 + V_{i}(\varphi_{i}) ,\\
p_{i} = \frac{1}{2}\dot{\varphi}_{i}^2 - V_{i}(\varphi_{i}) ,
\end{eqnarray}
where $i = 1,2$. If, apart from this, we admit a barotropic fluid with the
equation of state $p = (\nu - 1)\rho$, we obtain the Friedman equation (2.1)
rewritten in the form \cite{WE}
\begin{equation}
3 \left( \frac{\dot{R}}{R} \right)^2 = 3H^2 = \rho +
\frac{1}{2}\dot{\varphi}_{1}^2
+ \frac{1}{2}\dot{\varphi}_{2}^2 + V_{1}(\varphi_{1}) + V_{2}(\varphi_{2})
- \frac{k}{R^2} ,
\end{equation}
and the wave equations for both fields
\begin{eqnarray}
\ddot{\varphi}_{1} + 3H\dot{\varphi}_{1} + V_{1}'(\varphi_{1}) = 0 ,\\
\ddot{\varphi}_{2} + 3H\dot{\varphi}_{2} + V_{2}'(\varphi_{2}) = 0 ,
\end{eqnarray}
where $(\ldots)^{\cdot} \equiv \frac{\partial}{\partial t}$ and
$(\ldots)^{'} \equiv \frac{\partial}{\partial \varphi}$. Following Barrow and
Saich (1993) we assume the simple hypothesis that the kinetic and potential
energies of the $\varphi_{1}, \varphi_{2}$ fields are proportional, so
\begin{eqnarray}
\alpha_{1}V_{1}(\varphi_{1}) = \frac{1}{2}\dot{\varphi}_{1}^2 ,\\
\alpha_{2}V_{2}(\varphi_{2}) = \frac{1}{2}\dot{\varphi}_{2}^2 ,
\end{eqnarray}
with $\alpha_{1}, \alpha_{2} =$ const. If we use the conformal time (2.2),
then we can write down the solutions of (4.4)-(4.5) under the conditions
(4.6)-(4.7) in the following way
\begin{equation}
\varphi_{i, \tau} = \Delta_{i} R^{\left( 1 - \frac{3 \alpha_{i}}{1 +
\alpha_{i}}
\right)} ,
\end{equation}
where $\Delta_{i}$ are constants of integration.
The rewritten Friedman equation (2.1) becomes ($j = 1,2,3$)
\begin{equation}
\left( \frac{dR}{d\tau} \right)^2 = C_{j} R^{ - 3 \nu_{j} + 4} + \frac{1}{2}
\left( 1 + \frac{1}{\alpha_{i}} \right) \Delta_{i}^2
R^{\left( 4 - \frac{ 6 \alpha_{i}}{1 + \alpha_{i}} \right)} - k R^2 .
\end{equation}
Comparing (2.1) with (4.9) we realize that in order to exchange
string-like-matter and wall-like-matter for scalar fields we have to take
\cite{D89}
\begin{eqnarray}
\nu_{1} & = & 0 , C_{1} = \frac{\Lambda}{3} \nonumber ,\\
\nu_{2} & = & 1 , C_{2} = C_{m} \nonumber ,\\
\nu_{3} & = & \frac{4}{3} , C_{3} = C_{r} \nonumber ,\\
\alpha_{1} & = & \frac{1}{2} , C_{s} = \frac{3}{2}\Delta_{1}^2 \Rightarrow
\Delta_{1} = \sqrt{\frac{2}{3}\left( k' - k \right)} ,\\
\alpha_{2} & = & \frac{1}{5} , C_{w} = 3\Delta_{2}^2 \Rightarrow
\Delta_{2} = \sqrt{\frac{\beta\Lambda_{c}^{\frac{1}{2}}}{3}} =
\sqrt{\frac{C_{w}}{3}} .
\end{eqnarray}
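The identifications above follow from simple exponent arithmetic in (4.8)-(4.9). The following Python sketch verifies it with exact rational arithmetic:

```python
from fractions import Fraction as F

# For alpha_1 = 1/2 the scalar-field term in (4.9) scales as R^2 (renormalizing
# k into k' = k - C_s); for alpha_2 = 1/5 it scales as R^3 (wall-like-matter).
def exponent(alpha):          # R-exponent 4 - 6 alpha / (1 + alpha) in (4.9)
    return 4 - 6 * alpha / (1 + alpha)

def coefficient(alpha):       # prefactor (1/2)(1 + 1/alpha) of Delta_i^2
    return F(1, 2) * (1 + 1 / alpha)

assert exponent(F(1, 2)) == 2 and coefficient(F(1, 2)) == F(3, 2)   # C_s = (3/2) Delta_1^2
assert exponent(F(1, 5)) == 3 and coefficient(F(1, 5)) == 3         # C_w = 3 Delta_2^2

# Correspondingly, the exponent 1 - 3 alpha/(1 + alpha) in (4.8):
assert 1 - 3 * F(1, 2) / (1 + F(1, 2)) == 0        # phi_1 linear in tau (4.12)
assert 1 - 3 * F(1, 5) / (1 + F(1, 5)) == F(1, 2)  # phi_2' ~ R^(1/2)
```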
In order to obtain the asymptotic solution (3.9) the exact scalar
fields should be
\begin{equation}
\varphi_{1} = \Delta_{1}\tau + \varphi_{01}
\end{equation}
for $\alpha_{1} = \frac{1}{2}$ and
\begin{equation}
\varphi_{2} = \ln \left[ \varphi_{02} \left| \cosh{\frac{1}{2}\sqrt{\frac{k'}
{2}}\tau} \right| \right]^{ 2 \Delta_{2} \left( 3k'\Lambda_{c}^{\frac{1}{2}}
\right)^{ - \frac{1}{2}}}
\end{equation}
for $\alpha_{2} = \frac{1}{5}$, with $\varphi_{01}, \varphi_{02} =$ const. and
$\Delta_{1}, \Delta_{2}$ given by (4.10)-(4.11). We can thus write down the
required solutions for the scalar fields as
\begin{eqnarray}
\varphi_{1} & = & \sqrt{\frac{2}{3} \left( k' - k \right)}\tau + \varphi_{01}
,\\
\varphi_{2} & = & \ln \left[ \varphi_{02} \left|
\cosh{\frac{1}{2}\sqrt{\frac{k'}
{2}}\tau} \right| \right]^{\frac{4}{9}\sqrt{k'}} ,
\end{eqnarray}
with the associated potentials
\begin{eqnarray}
V_{1}(\tau) & = & \frac{\Delta_{1}^2}{R^2} = \frac{3}{8} \left( k' - k \right)
k'^2\Lambda_{c}\coth^2{\frac{1}{2}\sqrt{\frac{k'}{2}}\tau} ,\\
V_{2}(\tau) & = & \frac{5}{2} \frac{\Delta_{2}^2}{R} = \frac{15}{16\sqrt{2}}
\Lambda_{c}^{\frac{3}{4}} \coth{\frac{1}{2}\sqrt{\frac{k'}{2}}\tau} ,
\end{eqnarray}
with cosmic time $t(\tau)$ given by (3.10). The potentials as functions
of the scalar fields are (Fig.8)
\begin{eqnarray}
V_{1}(\varphi_{1}) & = & \frac{3}{8} \left( k' - k \right) k'^2 \Lambda_{c}
\coth^2{ \left[ \frac{1}{4} \sqrt{\frac{3k'}{k' - k}} \left( \varphi_{1}
- \varphi_{01} \right) \right]} ,\\
V_{2}(\varphi_{2}) & = & \frac{15}{16\sqrt{2}}k'^2\Lambda_{c}^{\frac{3}{4}}
\coth{ \left[ \cosh^{-1} {\left( \varphi_{02}^{-1} \exp{ \left(
\frac{9\varphi_{2}}{4\sqrt{k'}} \right)} \right)}
\right]} .
\end{eqnarray}
Of course one could also obtain exact potentials for the exact models (3.6) and
(3.11), but they do not oscillate either. The general solution for
oscillating models is given in terms of elliptic functions by (2.10) (or (2.13)
if $\alpha = 0$) and the exact potentials could be calculated using Eq.(4.8).
As another important example we consider the elementary oscillating model of
\cite{KAR}.
According to (2.2) the Friedman equation (4.9) in terms of cosmic time
$t(\tau)$ is
\begin{equation}
\left( \frac{dR}{dt} \right)^2 = C_{j} R^{ - 3 \nu_{j} + 2} + \frac{1}{2}
\left( 1 + \frac{1}{\alpha_{i}} \right) \Delta_{i}^2
R^{\left( 2 - \frac{ 6 \alpha_{i}}{1 + \alpha_{i}} \right)} - k .
\end{equation}
The exact solution of \cite{KAR} for the case without matter and radiation is
given by (2.26); the values of the constants $\alpha_{1}$ and $\alpha_{2}$ are
given by (4.10)-(4.11), and the resulting fields are
\begin{equation}
\varphi_{1}(t) = \varphi_{01} + \sqrt{\frac{2}{3} \left( 1 - \frac{k}{k'}
\right)}
\arctan{\left[ \frac{ C_{w} \tan{\frac{1}{2}t\sqrt{ - \frac{\Lambda}{3}}} + A}
{\sqrt{ - \frac{4}{3} \Lambda k'}} \right]} ,
\end{equation}
but it reduces to a constant if there are no strings (i.e. if $k' = k = +1$),
and
\begin{equation}
\varphi_{2}(t) = - \frac{2\Lambda}{3} \sqrt{\frac{C_{w}}{3}}
\int \frac{dt}{\sqrt{C_{w} + A \sin{t \sqrt{ - \frac{\Lambda}{3}}}}} =
\frac{4}{3} \sqrt{ - \Lambda C_{w}} {\cal P}^{-1} \left[ \tan{\frac{t}{2}
\sqrt{- \frac{\Lambda}{3}}} \right] ,
\end{equation}
where ${\cal P}^{-1}$ is the inverse function to the Weierstrass elliptic
${\cal P}$ function with invariants given by
\begin{eqnarray}
g_{2} & = & \frac{1}{3} \left( C_{w}^{2} - 4 \Lambda k' \right) ,\\
g_{3} & = & - \frac{1}{9} C_{w} \left( \frac{C_{w}^{2}}{3} + 4 \Lambda k'
\right) .
\end{eqnarray}
It is easy to check that (4.21) and (4.22) are oscillatory in t. According to
(4.6), (4.7), (4.10) and (4.11)
\begin{eqnarray}
V_{1}(t) & = & \sqrt{\frac{2}{3} \left( 1 - \frac{k}{k'} \right)}
\frac{C_{w}}{4\sqrt{k'}} \frac{1}{\cos^2{\frac{1}{2} t \sqrt{ -
\frac{\Lambda}{3}}}} \frac{ - \frac{4}{3} \Lambda k'}{\left[ C_{w}
\tan{\frac{1}{2} t \sqrt{ - \frac{\Lambda}{3}}} + A \right]^2 -
\frac{4}{3} \Lambda k'} ,\\
V_{2}(t) & = & \frac{2\Lambda^2}{3} \sqrt{\frac{C_{w}}{3}}
\frac{1}{C_{w} + A \sin{t \sqrt{ - \frac{\Lambda}{3}}}} .
\end{eqnarray}
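The extrema of $V_{2}(t)$ in (4.26) can be checked numerically. A Python sketch with illustrative, assumed values $\Lambda = -0.3$, $k' = 1$, $C_{w} = 0.8$ (satisfying (2.27), so that $C_{w} - A > 0$ and the denominator never vanishes):

```python
import numpy as np

# Illustrative (assumed) parameters; A and the oscillation frequency follow.
Lam, kp, Cw = -0.3, 1.0, 0.8
A = np.sqrt(Cw**2 + (4.0 / 3.0) * Lam * kp)
w = np.sqrt(-Lam / 3.0)

t = np.linspace(0.0, 2.0 * np.pi / w, 200001)      # one full period
V2 = (2.0 * Lam**2 / 3.0) * np.sqrt(Cw / 3.0) / (Cw + A * np.sin(w * t))

# V_2 oscillates between pref/(C_w + A) and pref/(C_w - A):
pref = (2.0 * Lam**2 / 3.0) * np.sqrt(Cw / 3.0)
assert np.isclose(V2.min(), pref / (Cw + A))
assert np.isclose(V2.max(), pref / (Cw - A))
```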
From (4.25) and (4.26) we realize that the potentials are oscillatory in $t$,
i.e. $V_{1}$ oscillates between $B(-\frac{4}{3}\Lambda k')/(C_{w} + A)$ and
$B(-\frac{4}{3}\Lambda k')/(C_{w} - A)$, where $B = 4^{-1}k'^{-
\frac{1}{2}}\sqrt{2/3(1 - k/k')}$, and $V_{2}$ oscillates between
$2\Lambda^{2}/3 \sqrt{C_{w}/3}/(C_{w} + A)$ and $2\Lambda^{2}/3
\sqrt{C_{w}/3}/(C_{w} - A)$. If we express the potentials explicitly in terms
of the fields we have
\begin{eqnarray}
V_{1}(\varphi_{1}) & = & \frac{k'^{-2}C_{w}^{-2}}
{32\sqrt{\frac{2}{3} \left( k' - k \right)}}
\left[ \frac{C_{w}^2 + \left[ \sqrt{ - \frac{4}{3} \Lambda k'}
\tan{\sqrt{\frac{3k'}{2 \left(k' - k \right)}}\left( \varphi_{1} - \varphi_{01}
\right)} - A \right]^2}{1 + \tan^2{\sqrt{\frac{3k'}{2 \left(k' - k \right)}}
\left( \varphi_{1} - \varphi_{01}\right)}} \right]^2 \\
V_{2}(\varphi_{2}) & = & \frac{2 \Lambda^{2}}{3} \sqrt{\frac{C_{w}}{3}}
\frac{1}{C_{w} + A \sin{\left\{ 2 \arctan{\left[ {\cal P} \left( \frac{4}{3}
\sqrt{ - \Lambda C_{w}} \varphi_{2} \right) \right]} \right\}}} .
\end{eqnarray}
The potential $V_{1}(\varphi_{1})$ oscillates between $V_{d} = - \frac{4}{3}
\Lambda k'$ and $V_{u} = 4C_{w}^2 + 4\Lambda k'$. A proof that
$V_{2}(\varphi_{2})$ also oscillates in $\varphi_{2}$ is more complicated.
The discussion involves the analysis of the discriminant $\Delta = g_{2}^{3} -
27g_{3}^{2}$, with $g_{2}$ and $g_{3}$ given by (4.23)-(4.24), and then an
analysis of the solutions similar to that given in Section 2 for the general
oscillatory solution, subject to the conditions (2.27). Finally, the result is
that $V_{1}(\varphi_{1})$ and $V_{2}(\varphi_{2})$ in (4.27)-(4.28) do
oscillate in $\varphi_{1}$ and $\varphi_{2}$.
These results, together with (4.18) and (4.19), suggest that, at least under
the assumption (4.6)-(4.7) of proportionality between kinetic and potential
energies, a monotonic solution for the scale factor $R(t)$ leads to a
monotonic dependence of the potential $V$ on the scalar field $\varphi$, while
an oscillating solution for $R(t)$ leads to an oscillating dependence of $V$
on $\varphi$.
\vspace{.6cm}
\section{DISCUSSION}
\vspace{.6cm}
In this paper we discussed a class of nonsingular oscillatory universes,
following Kardashev's \cite{KAR} idea of admitting a substantial fraction of
stable domain walls as the exotic matter, together with a negative cosmological
constant. In the context of the dark non-baryonic matter problem, both kinds of
fluids can serve as dark matter candidates. The former acts as a source of
repulsive gravity, while the latter acts as a source of attraction. The balance
between these two sources allows the universe to fall into oscillations. In
fact, this situation is very similar to the case of the Einstein Static
Universe, where there is a balance between nonrelativistic positive-pressure
matter and a repulsive positive cosmological constant. According to our
analysis, the Einstein static models exist as a result of a balance between the
exotic matter and the attractive negative cosmological constant. Unlike the
original Einstein Static Universe, these models seem to be mechanically stable,
as they lie at the bottom of the associated mechanical potential (2.7), but
their stability should be examined more carefully.
Since the value of the negative cosmological constant can be reduced to be
very close to zero for oscillatory solutions (cf. Section 2), the most severe
problem of compatibility with observations refers to the exotic matter,
especially to domain walls \cite{TUR}. Because of that, we proposed a
replacement of the exotic matter by scalar fields, which may serve as dark
matter candidates as well. We inverted the standard procedure: first we had
the equation of state, and then we derived the exact potentials \cite{BAR}.
The potentials we studied seem to share the properties of the solutions for the
scale factor: monotonic solutions for $R(t)$ (cf.(3.9)) give monotonic
potentials (cf.(4.18)-(4.19)), and oscillatory solutions (cf.(2.26)) give
oscillatory potentials (cf.(4.27)-(4.28)).
It is worth emphasizing that the paper is based on many simplifications, and
the question is what the properties of oscillatory solutions are in more
physically realistic situations. Firstly, in the discussion of oscillatory
solutions in Section 2 we have considered neither a non-isotropic geometry nor
a non-isotropic fluid description, for instance for string-like-matter and
wall-like-matter.
Secondly, in the scalar-field interpretation of Section 4 we have assumed a
very simple proportionality relation between the kinetic and potential
energies, the minimal coupling of the fields to gravity, and the lack of
interaction between them. Of course all these assumptions might not necessarily
apply, and the validity of the presented results should be reexamined with
respect to these points.
In particular, future work could consider some non-isotropic cosmologies
\cite{FI1,FI2} and non-minimally coupled fields \cite{FU1,FU2}, but these
approaches will certainly be much more difficult to treat analytically.
Also, one can discuss oscillating quasi-static universes
in the context of the maximum entropy analysis given by Gibbons
\cite{GIB1,GIB2}.
Finally, we should mention one feature our results share with those of Petry
\cite{P3} in his conformal gravity theory, namely, that in both theories
nonsingular oscillating solutions appear for a negative cosmological
constant. How far these results carry over to other gravity theories is
a matter for further consideration.
\begin{center}
{\bf Acknowledgments}
\end{center}
The author wishes to thank John Barrow, Arne Larsen, Jerzy Stelmach and
David Wands for careful reading of the manuscript and helpful suggestions.
\pagebreak
\frenchspacing
\section{Introduction}
The Galactic center, located at a distance of 8 kpc \citep{rei93} (1$\arcsec$ corresponds to 0.038~pc), is our unique laboratory to study the interactions between a supermassive black hole (4~$\times$~10$^{6}$ M$_\odot$, \citealt{ghe05,sch05}) and its surrounding environment. This region is composed of a very dense and warm interstellar medium (ISM), mostly condensed in Giant Molecular Clouds (GMCs). These GMCs have different characteristics from those found in the Galactic disk: in the Galactic center, the GMCs are warmer, denser, and much more turbulent. The dynamical center of the Milky Way is occupied by a nonthermal compact radio source, the supermassive black hole Sgr~A*. Surrounding this source, three arcs of ionized gas are present (the western arc, the northern arm and the extended bar, \citealt{rob93}), forming the minispiral or Sgr~A~West. Both Sgr~A* and the minispiral are surrounded by a ring-like structure, the circumnuclear disk (CND). The CND is composed of a mixture of neutral atomic and molecular gas and dust \citep{gen85,mez89}. The CND has an inner radius of $\sim$2~pc \citep{gat84,gen85,jac93,chr05} and it extends out to $\sim$~5~pc \citep{har85,gen85}. It is tilted with respect to the Galactic plane ($\sim$~70$\arcdeg$) and its main motion is rotation around Sgr~A* \citep{gat84,gen85,lug86,gus87}. The inner cavity that the CND encloses is devoid of dust \citep{bec82}, but it is occupied by ionized gas (the minispiral) \citep{gat84}, neutral molecular gas \citep{har85,her02} and a large amount of neutral atomic gas \citep{jac93}. Also, inside the cavity, occupying the central parsec, stars in different evolutionary stages have been found \citep{pau06}. The stars are orbiting Sgr~A*
and they could have been formed in-situ \citep{pau06} or spiraled inwards after formation \citep{gur05}.
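The quoted angular scale is simple arithmetic: at 8 kpc, 1$\arcsec$ subtends $8000~\mathrm{pc}/206265 \approx 0.039$ pc. A minimal Python check:

```python
import math

# At a distance of 8 kpc, 1 arcsec subtends D * (1'' in radians) ~ 0.038 pc.
D_pc = 8.0e3                                   # 8 kpc in parsecs
arcsec_in_rad = math.pi / (180.0 * 3600.0)     # = 1/206265 rad
scale_pc = D_pc * arcsec_in_rad
assert abs(scale_pc - 0.038) < 0.001           # ~ 0.0388 pc per arcsec
```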
\cite{gus87} previously detected the clumpiness of the CND, calculating a dynamical lifetime of 10$^{4}$~--~10$^{5}$~yr. However, later studies by \cite{jac93}, \cite{shu04} and \cite{chr05} estimate that the clump density is $\sim$~(3~--~4)~$\times$~10$^{7}$~cm$^{-3}$. This density would allow the clumps to withstand the tidal shear from Sgr~A*, increasing the lifetime of the CND to 10$^{7}$~yr.
\subsection{Molecular studies}
The Galactic center has been studied numerous times in different molecular gas tracers.
\cite{coi99,coi00,mcg01,her02,her05} studied the emission from four different ammonia rotation inversion transitions (NH$_3$(1,1), (2,2), (3,3) and (6,6)) using the Very Large Array (VLA). Ammonia is a moderately high-density ($\sim$~10$^{4}$~cm$^{-3}$), and high-temperature (23 to 412~K, (1,1) to (6,6) transitions) tracer. These studies showed that the colder gas (NH$_3$(1,1) and (2,2)) is highly extended in this region, showing no strong emission in the CND. However, NH$_3$(3,3) (energy above ground $\sim$~125~K) is more concentrated towards the center, proving to be much more useful for tracing the CND, although still not found any closer to the supermassive black hole than the CND. NH$_3$(6,6), on the other hand (tracing material at $\sim$~412~K), is found very close to Sgr~A*, predominantly inside the CND, well within the inner cavity, while it is much fainter further away from the CND. Therefore, very dense and warm molecular gas is located in the inner 1.5~pc of the Galaxy.
Studies of other high-density tracers have also been carried out. In particular, HCN has proved to be a very useful molecular gas tracer in the Galactic center environment, providing important results regarding the structure of the CND. \cite{gus87} studied HCN(1-0) emission using the Hat Creek Interferometer, obtaining a 10$\arcsec$ resolution. The emission from this transition was detected along the CND, with the emission in the southern part stronger than the emission detected in the rest of the CND. Also, a gap was found in the eastern part of the CND. High-resolution results were also achieved by \cite{wri01} using the Berkeley-Illinois-Maryland Array (BIMA). However, \cite{wri01} noted that self-absorption affects HCN(1-0) (and HCO$^{+}$(1-0), which is closely correlated) due to the intervening molecular material along the line of sight to the Galactic center. This problem was also addressed by \cite{chr05}, who published their HCN(1-0) data from the Owens Valley Radio Observatory (OVRO) millimeter interferometer, with a 5$\arcsec$ resolution. In these latest results, the HCN emission is detected along the CND and also in various narrow structures outside of it, such as the linear filament. However, because of the self-absorption problem, the use of higher transitions, which will be less sensitive to the cooler and more diffuse gas along the line of sight, should lead to improvements.
HCN(3-2) emission was detected by \cite{jac93} using the IRAM 30-m telescope and by \cite{mar95} using the 15-m James Clerk Maxwell Telescope (JCMT). In both cases, emission is detected along the CND, but it is remarkably stronger towards the south of the CND. The detection of HCN(4-3) emission (which traces material at $\sim$~25~K, and has a critical density value of $\sim$~10$^{8}$~cm$^{-3}$; \citealt{cho00,tak07}) in the Galactic center region was also reported by \cite{mar95} using the JCMT, achieving a 15$\arcsec$ resolution. The HCN(4-3) emission is also stronger towards the south of the CND, with very weak emission towards the north. Moreover, a dynamical model of a rotating torus developed by this group supported the idea that the northern and the southern parts of the CND are independent structures (each of them with a different inclination, e.g. a warped disk), following a common rotation pattern around Sgr~A*. The self-absorption problem is much less evident for these higher-excitation transitions, consistent with the a priori expectation that absorption is produced by lower excitation material along the line of sight.
While these higher HCN lines are much more reliable tracers for studying the structure of the CND, higher resolution has not been available until now. The Submillimeter Array (SMA) is the first interferometer equipped with 350~GHz receivers, and our maps image the higher-excitation material in the CND.
CS has also been studied in the Galactic center in various transitions ((2-1), (3-2), (5-4) and (7-6); \citealt{ser89,ser92}). However, no previous results on (7-6) have been reported in the CND. Our study images the distribution of CS(7-6) emission (which traces material at $\sim$~49~K and has a critical density of $\sim$~10$^{7}$~cm$^{-3}$; \citealt{cho00,tak07}) in the central 4~pc of the Milky Way.
The NH$_3$ studies allow a determination of kinetic temperatures, while the HCN and CS studies allow estimates of the H$_2$ densities. Together, they define the high-excitation environment of galactic nuclei.
\section{Observations}
Observations of the HCN(4-3) ($\nu$~=~354.5054759 GHz) and CS(7-6) ($\nu$~=~342.8830000 GHz) transitions were made with the Submillimeter Array \citep{ho04} in the compact configuration on 2005 July 1 and 7 and August 13 and in the subcompact configuration on 2007 May 5. We produced a 25-pointing mosaic (figure \ref{pointings.fig}) covering a $\sim$~2$\arcmin$~$\times$~2$\arcmin$ area, which encompasses Sgr~A*, the minispiral and the CND. The central position of the mosaic was at $\alpha_{J2000.0}$~=~17$^h$45$^m$40.00$^s$, $\delta_{J2000.0}$~=~-29$\arcdeg$00$\arcmin$26.60$\arcsec$, and the rest of the pointings were Nyquist sampled at half-beam spacings, effectively increasing the sensitivity of the mosaic (the primary beam at these frequencies is 36$\arcsec$). The data consisted of two 2~GHz bandwidths (upper sideband and lower sideband, USB and LSB, respectively) divided into 24 spectral windows, each composed of 128 channels with a velocity resolution of 0.7~km~s$^{-1}$. HCN(4-3) was observed in the USB and CS(7-6) in the LSB. The velocity coverage was from -180~km~s$^{-1}$ to 1490~km~s$^{-1}$ for the USB, centered on the HCN(4-3) line at {\it v}$_{LSR}$~=~0~km~s$^{-1}$, and from -1605~km~s$^{-1}$ to 129~km~s$^{-1}$ for the LSB, centered on the CS(7-6) line at {\it v}$_{LSR}$~=~0~km~s$^{-1}$. We averaged the data to 5~km~s$^{-1}$ channels, since molecular lines in the vicinity of Sgr~A* are very broad (FWHM~$\approx$~100~km~s$^{-1}$; \citealt{har85}); the degraded velocity resolution was therefore still adequate to resolve the lines. The continuum subtraction was performed in the uv plane. The total integration time per pointing was 30 minutes, but because of the distribution of the pointings, the effective integration time for the central 1.5$\arcmin$ covering the CND was 90 minutes.
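As a consistency check on the spectral setup quoted above, the per-channel velocity resolution follows from the radio Doppler relation $\Delta v = c\,\Delta\nu/\nu$. The sketch below assumes the standard SMA correlator chunk width of 104~MHz per spectral window, a value not stated in the text:

```python
# Sanity check of the quoted 0.7 km/s channel width (sketch; assumes the
# standard SMA correlator chunk width of 104 MHz per spectral window).
C_KMS = 299792.458           # speed of light [km/s]
NU_HCN43_MHZ = 354505.4759   # HCN(4-3) rest frequency [MHz]

chan_width_mhz = 104.0 / 128.0                # ~0.8125 MHz per channel
dv = C_KMS * chan_width_mhz / NU_HCN43_MHZ    # ~0.69 km/s, i.e. ~0.7 km/s

# After averaging to 5 km/s bins, a ~100 km/s FWHM line is still sampled
# by ~20 points, so the lines remain well resolved.
points_per_fwhm = 100.0 / 5.0
```

Under this assumed chunk width, the computed channel width reproduces the quoted 0.7~km~s$^{-1}$ to within a few percent.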
The phase calibrator was 1733-130 (NRAO~530), with a flux value of 0.84~Jy for the first two tracks and 0.79~Jy for the third and fourth tracks (the fourth track in the subcompact array). Also, during the fourth track we used 1751+096 as a phase calibrator as well, with a flux value of 1.75~Jy. The fluxes were determined by the SMA project in the immediate period around the experiment.
The data were calibrated using the MIR software package developed for the Owens Valley Radio Observatory (OVRO) and modified for the SMA, and the imaging was done using both the Multichannel Image Reconstruction Image Analysis and Display (MIRIAD) package and the NRAO Astronomical Image Processing System (AIPS). To produce the dirty map we used the task invert in MIRIAD with the systemp option, which decreases the weight of visibilities with abnormally high system temperatures. Afterwards, we subtracted the continuum using line-free channels from both sides of the line for HCN(4-3), but only from one side for CS(7-6), since the latter line had to be placed close to the edge of the passband in order to observe both molecular lines simultaneously. We cleaned the map using the task mossdi2 in MIRIAD, cleaning down to the 1.5~$\sigma$ level. The integrated intensity maps were produced using the MOMNT task in AIPS, with minimum flux cutoffs of 0.5~Jy and 0.3~Jy for the HCN(4-3) and the CS(7-6) data, respectively. The flux cutoff was chosen based on the noise threshold to suppress the noise contribution, since in a simple summation over all channels the noise adds up and narrow line features are effectively diluted. Natural weighting of the uv data produced an image with a synthesized beam of 4.6$\arcsec$~$\times$~3.0$\arcsec$ at a position angle of 4.1$\arcdeg$ for the HCN(4-3) emission, and of 4.5$\arcsec$~$\times$~3.1$\arcsec$ at a position angle of -4.4$\arcdeg$ for CS(7-6). The final RMS per channel is 0.3~Jy~beam$^{-1}$ for HCN(4-3) and 0.2~Jy~beam$^{-1}$ for CS(7-6), and the RMS sensitivities achieved in the corresponding integrated maps are 8.4 and 3.3~Jy~beam$^{-1}$~km~s$^{-1}$, respectively.
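The noise-suppression strategy described above (a per-channel flux cutoff applied before summation, rather than a straight sum over all channels) can be sketched as follows. The data cube here is synthetic and the array shapes hypothetical; only the cutoff (0.5~Jy for HCN(4-3)) and the 5~km~s$^{-1}$ channel width mirror the values quoted above:

```python
import numpy as np

def moment0_with_cutoff(cube, dv, cutoff):
    """Integrated-intensity (moment-0) map that sums only channels above a
    flux cutoff, so that line-free channels contribute no noise.

    cube   : (nchan, ny, nx) array of flux densities [Jy/beam]
    dv     : channel width [km/s]
    cutoff : minimum per-channel flux to include [Jy/beam]
    """
    masked = np.where(cube >= cutoff, cube, 0.0)
    return masked.sum(axis=0) * dv   # [Jy/beam km/s]

# Synthetic cube: 0.1 Jy/beam noise plus a 3-channel, 1 Jy/beam line at one pixel
rng = np.random.default_rng(0)
cube = rng.normal(0.0, 0.1, size=(40, 8, 8))
cube[18:21, 4, 4] += 1.0

mom0 = moment0_with_cutoff(cube, dv=5.0, cutoff=0.5)
```

With the cutoff at 5$\sigma$ of the synthetic noise, essentially only the line channels survive: the line pixel integrates to roughly $3 \times 1~\mathrm{Jy} \times 5$~km~s$^{-1}$ $\approx$~15~Jy~km~s$^{-1}$, while empty pixels stay near zero.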
The compact-north (for low declination sources) configuration of the SMA provides a maximum projected baseline of 91~m and a minimum of 11~m, while the subcompact configuration has a maximum projected baseline of 25~m and a minimum of 6~m. The combination of data from both configurations improves the detection of the extended emission, as the inner uv plane is better sampled, while also providing better resolution thanks to the longer baselines. The data obtained using only the compact configuration show a better resolution, but the negative bowls due to the missing extended emission are quite severe. Ultimately, the goal would be to combine these data with single-dish data in order to further suppress the missing-flux problems that are still evident in our maps.
\section{HCN(4-3)}
We detected the extended emission of HCN(4-3) within 2~pc of Sgr~A* (figure \ref{hcn4_3.fig}). The southern part of the CND is clearly detected, tracing the southwest lobe and the southern extension (using the same nomenclature as \cite{chr05}), while the emission from the northern part is sparser and more scattered.
\cite{mar95} observed a similar distribution of HCN(4-3) emission using the James Clerk Maxwell Telescope (JCMT) (15$\arcsec$ spatial resolution), detecting the strongest emission to the south. Thanks to our higher-resolution image, we can determine directly that the emission is very clumpy around Sgr~A*, i.e., the CND is composed of an array of blobs, in a necklace-like manner, especially towards the north, as was first noted by \cite{gus87}. Furthermore, the clumpy appearance is highly irregular in terms of size scale, intensity, spacing and distance to Sgr~A*. We compare the HCN(4-3) flux density detected with the SMA to the single-dish flux density detected with the JCMT. After smoothing the SMA data to match the JCMT resolution, we find that the interferometer detects 61\% of the emission detected with the single dish over an area that covers 10 JCMT beams. The missing flux density would be spread over many SMA synthesized beams, which are approximately 16 times smaller in area than the JCMT beam (i.e. 161 SMA synthesized beams). The SMA data smoothed to the JCMT resolution (figure \ref{smooth.fig}), however, show a very striking similarity to the JCMT emission map (figure 4 in \citealp{mar95}). This result suggests that the extended emission is a very low-level contribution to the final result (21~Jy~km~s$^{-1}$ per SMA synthesized beam, which is at the level of the first contour in figure \ref{hcn4_3.fig}, 3$\sigma$) and, most importantly, that both the smoothed SMA and the JCMT emission maps are dominated by the clumps.
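The beam bookkeeping behind this comparison can be reproduced in a few lines. This is a sketch assuming Gaussian beams, for which the solid angle scales as the product of the FWHM axes, so all constant factors cancel in the ratio:

```python
# Beam-area bookkeeping behind the SMA vs. JCMT flux comparison (sketch;
# assumes Gaussian beams, so solid angle ~ theta_maj * theta_min and the
# constant factors cancel in the ratio).
jcmt_fwhm = 15.0              # JCMT beam FWHM [arcsec]
sma_maj, sma_min = 4.6, 3.0   # SMA synthesized beam FWHM axes [arcsec]

area_ratio = jcmt_fwhm**2 / (sma_maj * sma_min)   # ~16 SMA beams per JCMT beam

detected_fraction = 0.61      # fraction of single-dish flux recovered by the SMA
n_jcmt_beams = 10             # area of the comparison region, in JCMT beams
n_sma_beams = n_jcmt_beams * area_ratio           # ~160 SMA synthesized beams

# The 39% missing flux, spread over ~160 beams, is a small per-beam fraction
missing_per_beam = (1.0 - detected_fraction) / n_sma_beams
```

The per-beam share of the missing flux comes out at a fraction of a percent of the total, consistent with the low-level extended contribution quoted above.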
It is widely accepted that the overall motion of the CND is rotation around Sgr~A*, as we have previously mentioned. We plot the spectra at different peaks (clumps) along the CND to show the kinematics (figure \ref{spectra_hcn43.fig}). We also plot spectra at ``empty'' locations, where no emission was found, close to peculiar-looking spectra, to check whether the negative bowls (i.e. the lack of short spacings in our images, since extended structures are not recorded) have affected the emission and, therefore, the final result.
We find that spectra A and Q show the largest velocity shifts, with spectrum A redshifted and spectrum Q blueshifted. Spectrum A peaks at $\sim$~105~--~110~km~s$^{-1}$ and spectrum Q at $\sim$~-105~--~-110~km~s$^{-1}$, which agree with the values previously reported by \cite{gus87,jac93,mar95} and \cite{chr05}. The remainder of the features peak at intermediate velocities, in an orderly manner, consistent with a rotation pattern in the CND. With an average radial velocity of 110~km~s$^{-1}$ and a 1.5~pc radius, if we assume that the velocity is purely rotational, the rotation period would be $\sim$~8~$\times$10$^{4}$~yr. We can extract much more information from the study of the spectra. In particular, there is a group of features worth examining closely. First, spectrum H is slightly different from the rest. It does not seem to follow the rotation pattern and has a different linewidth, being narrower than the other spectra. These characteristics suggest that this feature may be the ``70~km~s$^{-1}$ cloud'', first reported by \cite{gus87}. It seems to be near the CND, but has a non-negligible radial velocity component ($\sim$~50~km~s$^{-1}$; \citealt{jac93}), much larger than those of the rest of the clumps, whose predominant motion is rotation. At the same time, spectrum H has a shock-like appearance, with a steep blue side and a non-gaussian red tail. Clump E also shows a remarkably different spectrum, with two velocity components, roughly equal in intensity, and an overall broader profile than nearby clumps. Clump E lies to the north of the minispiral northern arm (figure \ref{hcn43_cont.fig}), believed to be interacting with the CND and creating a gap where the ionized gas is infalling towards Sgr~A* \citep{wri01}. In fact, \cite{chr05} consider clump E part of the {\it CND northern arm}.
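The rotation period quoted above is simply the orbit circumference divided by the rotation speed; a minimal check, under the same assumption of purely rotational motion:

```python
import math

PC_KM = 3.0857e13        # kilometres per parsec
YR_S = 3.156e7           # seconds per year

def rotation_period_yr(radius_pc, v_rot_kms):
    """Orbital period of circular rotation, in years."""
    circumference_km = 2.0 * math.pi * radius_pc * PC_KM
    return circumference_km / v_rot_kms / YR_S

# CND values quoted in the text: r = 1.5 pc, v = 110 km/s
period = rotation_period_yr(1.5, 110.0)   # ~8.4e4 yr, matching ~8 x 10^4 yr
```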
\cite{jac93} reported the detection of neutral gas also associated with the minispiral northern arm as far as 3~pc from the dynamical center of the Milky Way, consistent with a scenario of gas infalling through the CND gap. Clump G, located in close proximity to clump E, also shows a broad, two-velocity-component profile. Clump G is part of what \cite{chr05} called the {\it linear filament}, a ridge of gas that appears to connect the {\it western streamer} and the CND \citep{mcg01}, though this might be a projection effect. On the other hand, we observe a very peculiar (and broad) double-peak spectrum in clump N, which also seems to be in the path of the minispiral (figure \ref{hcn43_cont.fig}). \cite{ser85} reported the western arc of the minispiral to have the same velocity field as the CND, suggesting a connection between the molecular ring and Sgr~A West. However, clump N might also be affected by a different process. When comparing spectrum M with spectrum N, we observe that both show two ``dips'' at roughly the same velocities ($\sim$~-60~km~s$^{-1}$, $\sim$~10~km~s$^{-1}$), even though spectrum M was taken a little farther from the region apparently influenced by the minispiral. Also, spectrum L shows that there are strong negative bowls in the proximity of clump N because of the lack of short spacings, so these spectra are probably affected by this problem. We will return to this spectrum later.
Clump DD does show a very clear broad, two-velocity-component profile, which may be related to the interaction with the minispiral. Clump DD is located at the end of the extended bar of Sgr~A West (figure \ref{hcn43_cont.fig}). We could be observing behavior similar to that detected for clump E. Absorption is probably not the explanation for this feature, given its uniqueness among the nearby clumps: we would have expected the same profile in the rest of the adjacent clumps (such as C, D and AA), but that is not the case. We have not found strong negative bowls in the proximity of clump DD, so the lack of short spacings is probably not affecting the line profile of this clump.
As \cite{jac93} and \cite{mar95} previously remarked, we do detect emission in the eastern and northeastern parts of the CND, a section that is heavily affected by absorption by foreground cold clouds when using a lower J transition line \citep{gus87}. The emission peaks detected in this region (D and DD) are indeed strong, demonstrating that the higher temperature gas in this region can be discerned once the cold gas along the line of sight is suppressed.
Spectra BB and CC show very narrow profiles, especially spectrum BB. They are located outside of the main CND structure, in what we will call the {\it northeast arm}, therefore farther from the supermassive black hole. The greater distance from Sgr~A* implies that the gravitational pull from the center will be less important, and the narrower linewidths may reflect a weaker interaction.
Spectrum K shows a peculiar profile as well. Observing the profile of spectrum L, taken nearby, we infer that clump K is affected by the lack of short spacings in the data. A similar effect seems to be present in clump Z, as is easily seen when comparing with spectrum Y.
Clumps U, W and Q are probably also affected by the negative bowls (see spectra T, V and O), which are much stronger in the southern part of the CND, indicating that the loss of extended emission is more significant in that region than in the northern part, where only clump A seems to be affected.
Spectrum R seems to be especially affected by the short-spacings problem, resulting in a very broad but not very strong spectrum. In summary, the HCN(4-3) data undersample the extended emission. This is especially noticeable in the southern part of the CND. However, this problem does not prevent the very strong detection of this molecular tracer along the CND and even farther away, in the linear filament and the {\it northeast arm}.
Comparing our data with the HCN(1-0) data from the Owens Valley Radio Observatory by \cite{chr05} (figure \ref{hcn4_3_hcn.fig}; both maps convolved to the same resolution and integrated over the same velocity range), we can see that both transitions coincide in tracing the southern part of the CND, and even some parts of the northern lobe, but HCN(1-0) is stronger to the north compared to HCN(4-3). This result suggests that the northern and the southern parts of the CND have different excitation conditions, with the southern part warmer than the northern part. The strongest HCN(1-0) peak in the {\it southwest lobe} does not coincide with the strongest peak in HCN(4-3). The lower-excitation line appears to be stronger towards the tip of the CND, where the most blueshifted material is detected, while HCN(4-3) is strongest north of that position, in what we have called clump N (figure \ref{spectra_hcn43.fig}). However, HCN(1-0) at the position of clump N is affected by self-absorption \citep{wri01}, so the difference in gas distribution could be due to this effect.
We overplot the spectra at the locations of various clumps along the CND for HCN(4-3) and HCN(1-0) from \cite{chr05} (figure \ref{hcn10_spec.fig}; convolved to the same resolution). We confirm that HCN(1-0) suffers from strong self-absorption, a problem especially noticeable in clumps K, N, U, X and W. Clump N seems affected in both transitions, but HCN(1-0) suffers a much more dramatic ``dip'' (i.e. a sharp absorption feature within the emission profile) than HCN(4-3). The velocity at which the absorption affects the HCN(1-0) line ($\sim$~0~km~s$^{-1}$, produced by a well known cold cloud complex, the ``local gas''; \citealt{gus87}) is slightly different from the velocity at which we find the dip in the HCN(4-3) spectrum. Also, the nearby clump K shows a similarly non-gaussian profile, but the rest of the clumps are mostly unaffected. We also overplot the spectra at the location of Sgr~A* (figure \ref{hcn10_spec.fig}), where we can observe the absorption features in the HCN(1-0) data noted by \cite{chr05}, which are absent from the HCN(4-3) data. Therefore, HCN(4-3) is probably not affected by absorption (and, if it is, not enough to explain the line profiles), but by the lack of short spacings, a problem that can be solved by combining the interferometric data with single-dish data. HCN(1-0) could be missing extended emission as well, since the OVRO data have not been combined with single-dish data. Therefore, both transitions are probably affected by the short-spacings problem, but only in the case of (1-0) does the self-absorption seem worrisome.
The ratio of the two transitions (figure \ref{ratio43_10.fig}, where the HCN(1-0) contours have been overplotted to better explain the ratio) confirms that the southern part of the CND is more highly excited than the northern part. We have calculated the ratio using only the points with flux greater than or equal to 3$\sigma$; where the flux was lower, we have taken 3$\sigma$ as the minimum value. The reasoning behind this treatment of the data was to assess the properties of the northern part of the CND. From figure \ref{hcn4_3_hcn.fig} we can see that the emission from HCN(4-3) in the northern lobe is fairly extended. Since no emission from HCN(1-0) was observed at that same location, the ratio cannot be calculated there without assuming an upper limit; otherwise, comparing the more excited molecular gas closer to the center with the less excited gas in the northern part of the CND would not have been possible. We have, however, not applied the same rule to the absorption features, leaving them blank, hence the void at the position of Sgr~A*. The missing-flux problem does introduce some bias into the ratio; however, it is typically not severe and affects only localized spots. The overplotting of the HCN(1-0) integrated emission map in figure \ref{ratio43_10.fig} demonstrates that, except for the {\it southern extension}, most of the regions with a high ratio correspond to the peripheral regions where the HCN(1-0) flux value has been artificially increased. This means that in the hot regions on the northeast side of the CND, where the HCN emission becomes fainter, the gas must be even more excited than the ratio map is able to show. Where both HCN(1-0) and HCN(4-3) have been detected, for example in the {\it northeast lobe}, the ratio takes one of the lowest values along the CND. The ratio map therefore shows the {\it southern extension}, as a whole, to be more excited than any other part of the CND.
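The construction of the ratio map described above can be sketched as follows. This is our reading of the procedure (both maps floored at 3$\sigma$, so that undetected HCN(1-0) yields a lower limit on the ratio, and absorption features blanked); all array names and values are hypothetical:

```python
import numpy as np

def line_ratio_map(num, den, sigma_num, sigma_den, absorption_mask=None):
    """HCN(4-3)/HCN(1-0)-style ratio map.

    Pixels below 3*sigma in either map are floored at 3*sigma, so that
    regions with (4-3) emission but no detected (1-0) yield a lower limit
    on the ratio instead of a blank. Absorption features are left blank.
    """
    num_f = np.maximum(num, 3.0 * sigma_num)
    den_f = np.maximum(den, 3.0 * sigma_den)
    ratio = num_f / den_f
    if absorption_mask is not None:
        ratio = np.where(absorption_mask, np.nan, ratio)
    return ratio

# Hypothetical 2x2 example: detected/detected, detected/undetected,
# undetected/detected, and one absorption pixel (blanked)
num = np.array([[10.0, 10.0], [0.5, 10.0]])
den = np.array([[5.0, 0.5], [5.0, 5.0]])
mask = np.array([[False, False], [False, True]])
ratio = line_ratio_map(num, den, sigma_num=0.5, sigma_den=0.5,
                       absorption_mask=mask)
```

With 3$\sigma$~=~1.5 in these hypothetical units, the detected/detected pixel gives the true ratio (2.0), the detected/undetected pixel gives a floored lower limit (10/1.5), and the absorption pixel stays blank.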
However, the absence of a detection in the eastern part of the CND cannot rule out the presence of hotter gas in the region, which could possibly be more diffuse. The large ratios in other parts of the CND, as in the {\it southwest lobe}, are interspersed among regions with smaller ratios, consistent with heating at the boundaries of clumps. Although the entire southern part of the CND is affected by self-absorption (figure \ref{hcn10_spec.fig}), this is an interesting result, since it suggests that the {\it southern extension} is formed in its entirety by more excited gas than other parts of the CND. The question arises, then: why is the southeastern part of the CND more excited? What is the heating mechanism? Shocks? Radiation? We will address these questions later, with the help of an extra comparison tool, a high-temperature molecular tracer, NH$_3$. At the same time, the ratio map, especially in the northern part of the CND, suggests that the ratio increases as the gas approaches the center, i.e. the inner part of the CND is more excited than the outer part. \cite{har85} suggested that the UV radiation from the nuclear cluster (which is located in a cavity devoid of dust, and therefore transparent to the UV radiation) could be responsible for the heating of the molecular gas composing the CND. The inner edge of the molecular ring would be heated by this radiation, but as the distance from the center increases, the radiation starts to be absorbed by the dust that also composes the ring. Thus, the outer parts of the CND would be less affected by this radiation. Finally, it is interesting to remark that the location of clump E, where we found a broad double-peak profile (figures \ref{spectra_hcn43.fig} and \ref{hcn10_spec.fig}), shows a large ratio value. This result suggests that clump E has a higher excitation than the surrounding environment; the interaction with the northern arm of the minispiral may be a possible mechanism.
A detailed kinematic study of the velocity distribution in the minispiral may be needed to fully elucidate this situation.
When comparing the HCN(4-3) emission map with emission maps produced by NH$_3$ ((3,3) and (6,6), \cite{mcg01} and \cite{her02}, respectively) we can gain a better understanding of the spatial distribution of the high-density molecular gas in the inner 2~pc of the Galactic center. NH$_3$ has a lower critical density (10$^{5}$~cm$^{-3}$) as compared to HCN, but traces gas at higher temperatures. Both NH$_3$(3,3) and (6,6) have remarkably different distributions in the central region of the Milky Way as shown by \cite{mcg01} and \cite{her02,her05}.
A comparison of HCN(4-3) to NH$_3$(3,3) (figure \ref{hcn_33.fig}; HCN(4-3) smoothed to match the NH$_3$(3,3) resolution) shows that the emission distributions from these two molecular lines are poorly correlated. However, both lines trace the western side of the CND and show the strongest peak along the CND at the same position on the {\it southwest lobe}. HCN(4-3) and NH$_3$(3,3) emission coincide in the {\it eastern arm} as well. It is worth noting that the {\it southern streamer} as traced by NH$_3$(3,3) (at $\alpha_{J2000.0}$~=~17$^h$45$^m$43$^s$ and $\delta_{J2000.0}$~=~-29$\arcdeg$01$\arcmin$40$\arcsec$) seems to connect to the eastern side of the CND as traced by HCN(4-3). \cite{her05} reported that no kinematic connection could be made between the {\it southern streamer} and the CND, but at the same time, it was clear that there was a change in the NH$_3$(3,3) emission when (at least in projection) it reached the CND. Therefore, the spatial alignment at the CND might only be a projection effect along the line of sight, but a physical connection could not be completely ruled out because of the resolution of the NH$_3$(3,3) data. We do indeed observe the same trend. The emission traced by NH$_3$(3,3) and HCN(4-3) in the eastern part does not overlap, but it seems that the emission from NH$_3$(3,3) does become weaker in the vicinity of the CND. At the present sensitivity, the kinematic and structural information still cannot elucidate whether there is a connection between the {\it southern streamer} and the CND.
The comparison of HCN(4-3) and NH$_3$(6,6) (figure \ref{hcn_66.fig}) shows a good correspondence in the eastern and northern parts of the CND, and even in the {\it northeast arm}. However, we can see that NH$_3$(6,6) is detected inside the central cavity of the CND, apparently approaching Sgr~A*, while HCN(4-3) is only detected along the CND. This result suggests that the molecular gas that composes the CND is in general colder and perhaps denser than the gas that is located inside of the cavity.
By overplotting the HCN(4-3)/HCN(1-0) ratio and the NH$_3$(6,6) integrated emission we can make an interesting comparison (figure \ref{66_ratio.fig}). We observe that the southeastern part of the CND is located south of the strongest peak detected in NH$_3$(6,6). It appears that the material traced by NH$_3$(6,6) is ``following'' the path marked by the ratio towards Sgr~A*. Since the large value of the ratio in the southeastern part of the CND indicates a larger amount of highly-excited material than of low-excited material, the majority of the molecular gas in that region must be warmer. Because NH$_3$(6,6) traces very warm gas and penetrates from the eastern side of the CND towards the supermassive black hole, the southeastern side of the CND could be becoming warmer as it flows northwest, towards Sgr~A*.
At the same location where the gas traced by NH$_3$(6,6) heads for Sgr~A*, the value of the ratio changes radically, as if suddenly the level of excitation in the gas tracing the CND drops. At that location HCN(1-0) could be slightly affected by self-absorption (spectrum Z in figure \ref{hcn10_spec.fig}), but the very low value of the ratio indicates that the amount of HCN(4-3) is also low, otherwise the ratio would have been large. The question remains as to why the material is more highly excited precisely in the {\it southern extension}. When plotting the HCN(4-3)/HCN(1-0) ratio with the NH$_3$(3,3) integrated intensity map from \cite{mcg01} (figure \ref{33_ratio.fig}), we observe that the region where the {\it southern streamer} seems to reach the CND coincides with the location of the northernmost part of the high ratio, where NH$_3$(6,6) becomes stronger before heading northwest. Consequently, in terms of projection, there is a location in the southeastern part of the CND where the {\it southern streamer} traced by NH$_3$(3,3), the strongest peak in the NH$_3$(6,6) emission and the edge of the high-ratio area all coincide. The {\it southern streamer} may impact upon the CND producing destabilized material which infalls towards the supermassive black hole. This material may become so highly excited that HCN(4-3) energy levels may be depopulated so that a higher HCN transition will be needed to trace the infalling gas.
A comparison of the spectra of the HCN(4-3) and NH$_3$ data is needed to determine whether the material traced by these transitions is the same. To this end, we take the spectra at various locations (figures \ref{33_spec.fig} and \ref{66_spec.fig}). From figure \ref{33_spec.fig} we find that the material tracing the {\it southwest lobe} and the {\it northeast arm} (peaks 1 and 3) kinematically coincides in both HCN(4-3) and NH$_3$(3,3), but the same cannot be said about peak 2, located on the region where the material from the ``20 km~s$^{-1}$ GMC'' approaches the CND. However, we do detect a small peak around 20~km~s$^{-1}$, weaker than the peak detected around -50~km~s$^{-1}$, but nonetheless present, which indicates the detection of material from the {\it southern streamer}. We observe a similar situation in figure \ref{66_spec.fig} when comparing HCN(4-3) and NH$_3$(6,6). The same material is detected by HCN(4-3) and NH$_3$(6,6) in the {\it southwest lobe} and the {\it northeast arm} (peaks 1 and 4) but not in the regions where the material approaches Sgr~A* (peaks 2 and 3). Therefore, the material detected in the northern and western parts of the CND by HCN(4-3) and NH$_3$(3,3) and (6,6) coincides, indicating that the denser and colder gas is heavily mixed with the more diffuse and warmer gas at those locations. However, the material detected in the eastern part of the CND appears to be very different, with the gas approaching the black hole warmer and more diffuse than the gas tracing the CND, and a possible interaction between the gas from the {\it southern streamer} and the gas forming the CND in the southeasternmost part of the ring. These results seem to support the previously noted conclusion, in which the material in the {\it southern extension} is warmed and pushed towards the dynamical center of the Milky Way by the action of the gas coming from the ``20 km~s$^{-1}$ GMC''.
\section{CS(7-6)}
Our correlator setup sampled the CS(7-6) line simultaneously. Because of the locations of the two lines, CS(7-6) had to be placed at the edge of the passband, and the velocity coverage for this line was not as broad, extending from -150 to 128~km~s$^{-1}$. As previously noted, this CS transition traces gas at an even higher temperature than the observed HCN line, but at a slightly lower density. We clearly detect and resolve the CS(7-6) emission in the southern part of the CND; it is much weaker towards the northern part, although it is clearly detected in the {\it northeast arm} (figure \ref{cs7_6.fig}). This result is consistent with the HCN results in suggesting that the northern and the southern parts of the CND have different excitation levels, since the warmer gas is absent from the north.
The results for the CS(7-6) and HCN(4-3) lines show that these two molecular lines correlate quite well with each other (figure \ref{cs7_6_hcn4_3.fig}). The emission peaks coincide in the southern part of the CND, and even in the few places where CS(7-6) is found in the northern part. Also, if we compare the spectra at the same positions (figures \ref{spectra_hcn43.fig} and \ref{cs_spectra.fig}; CS(7-6) spectra plotted in the same velocity range as the HCN(4-3) spectra), we observe that both molecular lines have similar line profiles (FWHM~$\approx$~35~km~s$^{-1}$), suggesting that they are tracing the same material. However, there are some differences between the two molecular tracers. First, we observe that HCN(4-3) is much stronger in the {\it southern extension} than CS(7-6). While both molecular tracers seem equally strong along the {\it southwest lobe}, even coinciding in the distribution of the gas (stronger in the northern part of the lobe, at the location of clump N, than in the southern part), CS(7-6) is obviously weaker in the {\it southern extension}. This result would suggest that the {\it southern extension} is composed of denser but colder gas than the {\it southwest lobe}. However, we noted in the previous section that the {\it southern extension} is highly excited. Also, because of the strong detection of NH$_3$(6,6) in the northern region of the {\it southern extension}, towards Sgr~A*, we acknowledge the presence of warm material in the area. Therefore, the combination of the result drawn in the previous section and the picture presented by the weak emission of CS(7-6) in the {\it southern extension}, which traces warmer but less dense material than HCN(4-3), suggests a complicated morphology in the eastern part of the CND: the gas is denser and colder towards the south, and becomes much more diffuse and warmer heading north. This point will be discussed in detail later.
Another difference between the integrated emission from both molecular lines is that clump E in CS(7-6) does not show a double-peak profile, unlike its counterpart in HCN(4-3). The detection is much weaker, which could account for the lack of coincidence.
Spectrum N shows two velocity components, consistent with the result in HCN(4-3). The line profile, however, is not as wide. Clump N in CS(7-6) is located a little farther west than in HCN(4-3). Since we have considered the interaction with the minispiral as probably affecting the profile of clump N in HCN(4-3), it could be that the greater distance from clump N in CS(7-6) to Sgr~A West, and a consequently weaker interaction, is responsible for the narrower profile (compare figures \ref{cs_cont.fig} and \ref{hcn43_cont.fig}). To check this conjecture, we extracted a spectrum closer to the inner side of the CND, 4.4$\arcsec$ east and 0.8$\arcsec$ south of clump N in CS(7-6) (at the location of clump N in HCN(4-3)), and overplotted the two spectra (figure \ref{clumpN.fig}). The broadening of the profile of the spectrum closer to the minispiral is remarkable. We conclude that the interaction with the minispiral is probably affecting the line profile. At the same time, the absorption features detected in these spectra can be caused by the negative bowls: the spectra L and M, taken nearby, show that the lack of short spacings may well affect the spectra towards clump N.
Clump K presents a very surprising profile, unlike any other. It is clearly redshifted and very narrow, even more so than the ``70~km~s$^{-1}$ cloud'', which is barely detected as clump H; however, the negative bowls might be seriously affecting it, judging from spectrum L, taken nearby. Nonetheless, the profile is very unexpected and probably deserves a deeper study. The spectrum at the same position in HCN(4-3) (figure \ref{clumpK.fig}) shows a similar profile. Is the presence of the minispiral once again affecting the line profiles in the {\it southwest lobe}? Both spectra seem to suffer from absorption around +~50~km~s$^{-1}$, especially noticeable in the HCN(4-3) spectrum. However, this absorption feature is not present anywhere else in the CND.
Clump DD produces a very remarkable spectrum, with two clearly separated peaks. The spectrum at the same position in HCN(4-3) shows this profile as well. Some interaction could be involved at this location, since absorption does not seem to be responsible for the ``dip'' and the negative bowls detected in the vicinity, caused by the lack of short-spacings, are not prominent. As we mentioned in the previous section, interaction with the extended bar of the minispiral is the most probable explanation of this broad profile.
Spectrum R shows a double-peaked profile and, given the prominent negative bowls nearby (spectrum T), it is very likely that the missing short-spacings cause this double peak. The same problem probably also affects clumps FF, U, and W, judging from spectra T and V. Therefore, as in the case of HCN(4-3), the lack of short-spacings affects the data. However, because of the lack of single-dish observations of CS(7-6) in the Galactic center, we cannot calculate the amount of missing flux.
When we compare the CS(7-6) detected emission with the 6~cm continuum emission from \cite{yus87}, we observe that the gap in the CND in the CS(7-6) emission towards the north coincides with the position of the northern arm of the minispiral, with clumps E and F delimiting the gap to the west and clump D to the east (figure \ref{cs_cont.fig}).
The ratio between CS(7-6) and HCN(4-3) (calculated using only the pixels with flux~$\ge$~3$\sigma$) is useful for better understanding the distribution of the dense material (figure \ref{cs_ratio.fig}). We observe that the ratio increases towards the inner edge of the CND, especially in the western part. Coincidentally, the western arc of the minispiral is observed in the same region (figure \ref{cs_cont.fig}). Both CS(7-6) and HCN(4-3) have similarly high critical densities, with HCN(4-3) tracing slightly denser gas than CS(7-6), but the temperature traced by CS(7-6) is double that traced by HCN(4-3) (49~K vs. 25~K). We could interpret the ratio distribution as being governed mostly by temperature, although density should also be a factor. Therefore the molecular gas on the inner side of the ring is probably warmer and could be less dense. As mentioned before, the temperature increase can be due to the absorption of UV radiation emitted by the nuclear stellar cluster. High-ratio values found to the west, outside of the CND, can also be related to the presence of higher-temperature gas, as the {\it western streamer} is very clearly detected in NH$_3$(3,3) (tracing gas at $\sim$~125~K).
The comparison of CS(7-6) and NH$_3$(3,3) (figure \ref{cs7_6_33.fig}; CS(7-6) smoothed to match the NH$_3$ resolution; NH$_3$(3,3) data from \cite{mcg01}) shows that both molecular tracers coincide at the location of the strongest peak within the CND (as was also noted regarding HCN(4-3)) and the {\it northeast arm}. However, the coincidences stop there, with CS(7-6) and NH$_3$(3,3) having in general very different distributions in the region observed in both tracers. The NH$_3$(3,3) map by \cite{mcg01} is much larger than the CS(7-6) map, and it is not represented here in its entirety. The correlation between the high-ratio values that we found previously west of the CND and the {\it western streamer} cannot be checked in this figure because of the lower resolution of the NH$_3$(3,3) data, and the small clumps detected in CS(7-6) have been smoothed out.
When comparing CS(7-6) and NH$_3$(6,6), we observe that the eastern part of the CND and the {\it eastern arm} are the only regions where both molecular tracers overlap and are well correlated (figure \ref{cs7_6_66.fig}). The comparison is similar to that obtained with HCN(4-3) and NH$_3$(6,6), except for the northern part of the CND, which is barely detected in CS(7-6). Once again, we note that NH$_3$(6,6) is detected inside the central cavity, whereas CS(7-6) is limited to the CND. This result is consistent with the idea that the CND is better traced by higher-density and lower-temperature tracers, since only a minimal part of the CND is traced by NH$_3$, while CS (and HCN) are detected along most of the structure.
Similarly to the way we proceeded in the previous section, we overplot the NH$_3$(6,6) map and the CS(7-6)/HCN(4-3) ratio (the ratio convolved to the NH$_3$(6,6) resolution, figure \ref{66_ratiocs.fig}). Unlike in the case of the HCN(4-3)/HCN(1-0) ratio map, the CS(7-6)/HCN(4-3) ratio does not correlate with the NH$_3$(6,6) emission. The highest ratio values, as mentioned before, are found on the western side of the CND, overlapping with the western arc of the minispiral (visible at higher resolution, since the NH$_3$(6,6) resolution is not sufficient to observe such details). If the CS(7-6)/HCN(4-3) ratio traces the region with the highest-temperature, lowest-density combination, then the western part of the CND is warmer, less dense, or both. Since NH$_3$(6,6) is detected more strongly in the eastern part of the CND, it indicates the presence of high-temperature gas in that region. Therefore, the western part of the CND may be less dense than the eastern part, and the high ratio could in this case be a sign of lower density instead of higher temperature. The material detected by the high-temperature tracers might be unrelated to the material observed in high-density but lower-temperature tracers. This is indeed the case in the eastern part of the CND, as proved by the comparison of the HCN(4-3) and NH$_3$(6,6) spectra, shown previously in figure \ref{66_spec.fig}. However, the kinematic study indicates the coincidence of the material traced in the western part of the CND by both HCN(4-3) and NH$_3$(6,6). At the same time, the detection of the warmer gas is much weaker in the western part of the CND than in the eastern part. 
This indicates that the warmer and more diffuse gas is well mixed with the colder and denser gas in the western part, but not in the eastern part, where the amount of warmer gas is much larger, although not throughout the {\it southern extension}, where it is only detected in the northernmost region. The lack of strong HCN(1-0) emission in the {\it southern extension}, a transition tracing the same material as HCN(4-3) (figures \ref{hcn4_3_hcn.fig} and \ref{ratio43_10.fig}) but characterized by a lower temperature and critical density, indicates a higher temperature and density than in the {\it southwest lobe}. Furthermore, if a transition defined by a lower temperature and critical density, such as HCN(1-0), coincides in its spatial distribution in the southern part of the CND with a transition characterized by a higher temperature but also lower critical density, CS(7-6) (both compared to HCN(4-3)), then the {\it southern extension} is denser than the southwestern part of the CND. Since the material coming from the {\it southern streamer} approaches the CND in the region of the {\it southern extension}, the higher density of the latter structure may be related to the possible interaction mentioned previously between the CND and the {\it southern streamer}. Because of this interaction, the material forming the southeastern part of the CND may be undergoing compression, appearing denser than the adjacent region.
\section{Mass estimates}
In the previous section we noted the clumpy nature of the CND. As mentioned, \cite{jac93,shu04} and \cite{chr05} found that the different clumps (or cores) within the CND are dense enough to overcome the tidal shear produced by the central supermassive black hole, and therefore the CND might be a more stable structure than previously thought \citep{gus87}. We use the virial theorem to estimate the masses and densities required for a clump to be stable against tidal shear. We thus consider the clumps to be gravitationally bound against the internal gas motions, with uniform density, and assume that optical depths do not play a significant role in line-broadening \citep{roh00}. The virial mass is then
\begin{equation}
{\it M~=~250 \left(\frac {\Delta v_{1/2}}{km~s^{-1} }\right)^{2} \left(\frac {R}{pc}\right) (M_\odot)},
\label{eqn:virialmass}
\end{equation}
where $\Delta${\it v$_{1/2}$} is the FWHM velocity linewidth and {\it R} is the radius of the clump.
The results are displayed in tables \ref{tab:hcn} and \ref{tab:cs}. Table \ref{tab:hcn} shows the masses measured for the clumps marked in figure \ref{hcn4_3.fig} and table \ref{tab:cs} displays the results for the clumps in figure \ref{cs7_6.fig}.
The virial masses calculated by \cite{chr05} range from 2~$\times$~10$^{3}$ to 89~$\times$~10$^{3}$~M$_\odot$, while the results obtained by \cite{shu04} are slightly smaller (3~$\times$~10$^{3}$ to 45~$\times$~10$^{3}$~M$_\odot$). Our measurements are similar, although somewhat higher than these previous results (4~$\times$~10$^{3}$ to 595~$\times$~10$^{3}$~M$_\odot$). We have not calculated the masses for all the clumps, because some difficult non-Gaussian profiles (e.g., clump R) prevented us from measuring a linewidth. The mean virial density of a clump, assuming spherical symmetry, can be calculated with:
\begin{equation}
{\it \overline{\rho}~=~\frac {3M} {4 \pi R^{3}}}
\label{eqn:virialdensity}
\end{equation}
Therefore, the corresponding internal H$_2$ number density of the clump is:
\begin{equation}
{\it n_{H_2}~=~ \frac {3 M} { 4 \pi R^{3} m_{H_2}}}
\label{eqn:internaldensity}
\end{equation}
The critical density for a clump to survive the tidal shear from a 4~$\times$~10$^{6}$~M$_\odot$ black hole is calculated using the model from \citet{vol00}, which assumes the mass distribution in the inner region of the Milky Way to be described by a spherically symmetric approximation:
\begin{equation}
{\it M~=~4~\times~10^{6} + 1.6~\times~10^{6} (\frac{D}{pc})^{1.25} (M_\odot),}
\label{eqn:massdistribution}
\end{equation}
where {\it D} is the distance from the center of the Galaxy (i.e., the distance to Sgr~A*).
The critical density for a clump to be tidally stable is then:
\begin{equation}
{\it n_{H_2}~=~2.87~\times~10^{7} [(\frac{D}{pc})^{-3} + 0.4 (\frac{D}{pc})^{-1.75}](cm^{-3})}
\label{eqn:criticaldensity}
\end{equation}
For a distance of 1.5~pc, equation \ref{eqn:criticaldensity} gives a critical density of 1.4~$\times$~10$^{7}$~cm$^{-3}$. Therefore, we conclude that the clumps we have detected in both HCN(4-3) and CS(7-6) are tidally stable, as can be seen in tables \ref{tab:hcn} and \ref{tab:cs}.
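These numbers can be verified with a few lines of code. The following minimal Python sketch (the constants are copied from equations \ref{eqn:virialmass} and \ref{eqn:criticaldensity}; the function names are ours) reproduces the critical density quoted above:

```python
def virial_mass(dv_kms, R_pc):
    """Virial mass in M_sun, from FWHM linewidth (km/s) and clump radius (pc)."""
    return 250.0 * dv_kms**2 * R_pc

def critical_density(D_pc):
    """Critical H2 density (cm^-3) for tidal stability at distance D from Sgr A*."""
    return 2.87e7 * (D_pc**-3 + 0.4 * D_pc**-1.75)

# At the ~1.5 pc distance of the CND clumps:
print(f"{critical_density(1.5):.2e} cm^-3")  # ~1.4e7, as quoted in the text
```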
\cite{san98} reported a total ionized gas mass of the minispiral of 10$^{2}$~M$_\odot$, and \cite{jac93} measured a neutral gas mass in the northern arm of the minispiral of 3~$\times$~10$^{2}$~M$_\odot$. We can then assume a total minispiral mass of $\sim$~4~$\times$~10$^{2}$~M$_\odot$. At the same time, kinematic models by \cite{san98}, using an infalling gas velocity of half the rotation velocity (110~km~s$^{-1}$, as mentioned previously), were able to explain most of the characteristics of the minispiral. Therefore, the infall time should be $\sim$~3~$\times$~10$^{4}$~yr. Inflow models support the idea of the minispiral being formed by a cloud that lost angular momentum, probably through cloud-to-cloud collisions, and then fell towards the center. While infalling, the UV radiation from the nuclear stellar cluster dissociated the molecular gas and later ionized the neutral atomic gas \citep{jac93}. Consequently, if the material composing Sgr~A West comes directly from the CND, which has a total molecular mass of 1.3~$\times$~10$^{6}$~M$_\odot$ (calculated from the HCN(4-3) results), if the current mass of the minispiral takes 3~$\times$~10$^{4}$~yr to reach the center, and if a mere 10\% of the CND is stripped off and becomes part of the minispiral (as \cite{chr05} considered), then the lifetime of the CND is $\sim$~9~$\times$10$^{6}$~yr, much longer than its rotation period (8~$\times$~10$^{4}$~yr). Therefore, the CND is not a transient structure, since the time needed for the CND to disappear, ``swallowed'' by the inner cavity, is longer than the time needed to circle it. However, \cite{her02} found that the molecular gas approaching Sgr~A*, as traced by NH$_3$(6,6), does not follow the minispiral path. Consequently, the previous lifetime value may be an overestimate, since not all the molecular gas approaching the dynamical center of the Milky Way is contained in the minispiral. 
Nonetheless, if the CND is a non-transient structure, the clumpiness should be expected to disappear and the CND to become a homogeneous structure. The observations, however, do not show such homogeneity; instead, the clumps present a wide range of velocities. This velocity dispersion could result in interactions between the clumps (cloud-to-cloud collisions), with the more diffuse material spiraling inwards towards the center. Finally, recent studies by \cite{mar07} showed that the accretion rate of Sgr~A* is between 2~$\times$~10$^{-9}$ and 2~$\times$~10$^{-7}$~M$_\odot$~yr$^{-1}$. If the infall rate is 1.5~$\times$~10$^{-2}$~M$_\odot$~yr$^{-1}$ (considering for now only the material confined in Sgr~A West), there is clearly a surplus of material inside the cavity that is not accreting onto the black hole. The remaining material could be undergoing star formation. \cite{kra91} reported that 10$^{3}$--10$^{4}$~M$_\odot$ in the inner cavity became stars around 10$^{6}$ years ago. With the infall rate that we are considering, an accumulation of 10$^{4}$~M$_\odot$ will take $\sim$~7~$\times$~10$^{5}$~yr (and even less if we consider the material traced by NH$_3$(6,6)). Therefore, the material infalling towards Sgr~A* could be forming stars in the inner cavity, and not only accreting onto the black hole.
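The timescales in this section follow from simple ratios; as an illustration, the arithmetic can be laid out as below (a sketch using only the values quoted in the text):

```python
# Values quoted in the text.
M_minispiral = 4e2    # M_sun, ionized + neutral gas in Sgr A West
t_infall     = 3e4    # yr, infall time from the kinematic models
M_CND        = 1.3e6  # M_sun, total molecular mass from HCN(4-3)

infall_rate  = M_minispiral / t_infall       # ~1.3e-2 M_sun/yr
cnd_lifetime = 0.10 * M_CND / infall_rate    # 10% of the CND stripped off
t_accumulate = 1e4 / 1.5e-2                  # yr to pile up 1e4 M_sun of stars

print(f"infall rate : {infall_rate:.1e} M_sun/yr")
print(f"CND lifetime: {cnd_lifetime:.1e} yr")   # ~9e6 yr >> 8e4 yr rotation period
print(f"accumulation: {t_accumulate:.1e} yr")   # ~7e5 yr
```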
\section{Summary}
We have successfully detected HCN(4-3) and CS(7-6) within 2~pc of Sgr~A*, effectively tracing the CND. We demonstrate that the higher HCN transition minimizes the self-absorption problem observed in lower transitions \citep{gus87,wri01,chr05}, therefore providing a much more reliable sampling of the kinematics and structure of the CND.
The emission from both molecular tracers, HCN(4-3) and CS(7-6), appears in clumps, forming a ``necklace-like'' structure around the CND. The clumps have various sizes, from $\sim$~3$\arcsec$ to $\sim$~13$\arcsec$. In the case of HCN(4-3), the emission detected from these clumps amounts to 61\% of the single-dish flux detected by \cite{mar95}. Problems due to missing short spacings, however, continue to affect the observed kinematics. Nevertheless, when smoothing our data to a 15$\arcsec$ resolution to match the JCMT results from \cite{mar95}, we note the remarkable similarity between our smoothed HCN(4-3) emission map and the HCN(4-3) integrated emission map from \cite{mar95}. This result suggests that the extended emission does not contribute greatly and that the molecular gas is mostly concentrated in clumps. Therefore, the missing short-spacing problem probably does not seriously affect the morphological results.
Moreover, we confirm the stability of the CND based on the density of the clumps, which is large enough for them to overcome the tidal shear produced by the supermassive black hole. Furthermore, the lifetime of the CND is far longer than its rotation period. This result supports the conclusions of \cite{jac93,shu04} and \cite{chr05} that the CND is not a transient structure.
Both HCN(4-3) and CS(7-6) are found to be much more abundant in the southern part of the CND than in the northern part of the CND. In order to determine the geometrical distribution of the gas, the location of the colder and warmer gas, as well as the denser and more diffuse gas, and the implications of such distribution, line ratio measurements have been used. Our results indicate that the northern and the southern parts of the CND have different excitation levels, as well as different densities, with the southern part of the CND warmer and denser than the northern part. Also, comparing our results with those from NH$_3$, a high-density, high-temperature molecular tracer, we conclude that the molecular gas forming the CND is denser and colder than the molecular gas inside the inner cavity. More precisely, the southeastern part of the CND is denser than the southwestern part. However, the southeastern part seems to become more diffuse or warmer as the material heads northwest, approaching the supermassive black hole, as detected by NH$_3$(6,6). The comparison of the linewidths supports this conclusion, since both the NH$_3$(6,6) and HCN(4-3) present similar line profiles along the CND, kinematically tracing the same material, except in the region where the NH$_3$(6,6) emission is detected closer to Sgr~A*, where it shows a clear line-broadening effect, absent from the HCN(4-3) profiles. This result indicates a probable infall of the material forming the CND towards the dynamical center through the eastern part of the ring-like structure. However, the material detected inside the CND by the NH$_3$(6,6) is hotter and denser than the material forming the minispiral, and both structures appear to be unrelated, as indicated by \cite{her02}. 
At the same time, the detection of an interaction between the {\it southern streamer}, as traced by NH$_3$(3,3), and the CND indicates that the material forming the ring may be undergoing a compression process due to the material approaching the CND before the gas spirals inward towards the center. Therefore, the gas detected in the southeasternmost part of the CND may be pushed towards the northwest and heated in the process, explaining the lack of HCN detection.
In addition, we have detected a correlation between the minispiral and the CND. Line-broadening has been detected in the spectra of the clumps that spatially coincide with the arcs of the minispiral. We have observed this phenomenon in the eastern, western and northern parts of the CND. The western arc of the minispiral and the inner western part of the CND seem to overlap. Also, we have observed that the eastern end of the extended bar of the minispiral spatially coincides, at least in projection, with the location of a clump characterized by a broad spectrum. A similar situation has been observed regarding the northern part of the CND and the northern arm of the minispiral, where line-broadening has also been found in the spectra of the clumps seemingly located at the very end of the northern arm. Furthermore, we have detected a gap in the northern region of the CND, which the northern arm of Sgr~A West appears to be traversing.
These results suggest that the CND and the minispiral may be physically related. Sgr~A West could be gas which has been stripped from the CND. Alternatively, the spiral arms could be features infalling from beyond the CND. While more detailed kinematics are needed, we detect the influence of the minispiral on the western inner part, the northern part, and the eastern part of the CND, where we can see line-broadening in the vicinity of Sgr~A West.
Finally, we have observed that the inner edge of the CND seems more highly-excited than the outer part of the ring, as traced by the ratio of HCN(4-3)/HCN(1-0). Most probably the nuclear stellar cluster is responsible for the excitation of the inner side of the CND.
\acknowledgments
We thank M. Christopher for providing the HCN(1-0) data we have used for comparison. We also thank the referee, T. Wilson, for his helpful suggestions to improve the manuscript.
During the development of this study, M.M.-C. has been supported by a Smithsonian Institution Visiting Student Grant and an Academia Sinica Institute of Astronomy and Astrophysics Fellowship.
\section{Introduction}
Bitcoin has become one of the hottest buzzwords among investors and researchers. It is the first and most famous decentralized digital currency \cite{nakamoto2008bitcoin}, secured by cryptography (hence the term cryptocurrency). Unlike fiat currencies, which are usually issued by financial institutions, no centralized organization or country controls the issuance and operation of Bitcoin. Furthermore, because of decentralization, users in the Bitcoin system are anonymous. These two characteristics (i.e., decentralization and anonymity) have attracted many users to Bitcoin since its creation in 2009. It is estimated that there are more than 10 million users in the Bitcoin system \cite{burniske2017bitcoin}.
Since the famous ``Bitcoin Pizza Day'', when a programmer bought two pizzas with 10,000 BTC on May 22, 2010, Bitcoin has been exchanged for fiat currencies. Soon afterward, the Bitcoin exchange Mt. Gox was launched. By 2013, and before filing for bankruptcy protection in February 2014, Mt. Gox was the largest Bitcoin intermediary and the world's leading Bitcoin exchange \cite{feder2018impact}. Nowadays, there are more than 1,700 cryptocurrencies inspired by Bitcoin, and the daily transaction volume exceeds \$150 billion according to coinmarketcap.com at the time of writing.
The huge fluctuation of the exchange price of cryptocurrencies is an important reason for investors' participation. Figure \ref{fig_price} shows the Bitcoin price (i.e., the exchange rate between Bitcoin and the US dollar in this paper) from 2012/12 to 2015/6. During this period, the Bitcoin price rose sharply from about \$10/BTC to more than \$1,000/BTC and then fell back to below \$200/BTC. This extreme price fluctuation has also attracted many researchers seeking the determinant factors of the Bitcoin price. Four categories of factors have been discussed: 1) economic factors (e.g., the supply and demand of Bitcoin) \cite{buchholz2012bits}; 2) technical factors (e.g., hash rate and difficulty) \cite{kristoufek2015main}; 3) interest factors (through proxy variables such as Google Trends) \cite{kristoufek2013bitcoin}; and 4) other financial assets (e.g., gold, stocks). In addition, using principal component analysis (analogous to SVD), \cite{kondor2014inferring} indicates that the Bitcoin price has a strong correlation with the transactions on the blockchain ledger.
\begin{figure}[htbp]
\centering
\includegraphics[width=.48\textwidth]{price.pdf}
\caption{Bitcoin-USD exchange price at Bitstamp exchange, with the period being studied shaded.}\label{fig_price}
\end{figure}
However, these factors are discussed based on data from outside the exchanges. Because of the lack of supervision, a natural conjecture is that the extreme fluctuation may be related to market manipulation by the exchanges. This conjecture is hard to verify, as it is very difficult to obtain detailed trading data from a trading platform. Surprisingly, the transaction history of the once famous Bitcoin exchange Mt. Gox from April 2011 to November 2013 was leaked in the form of CSV files. These data provide a perfect opportunity to test the conjecture.
Verifying whether there is market manipulation and identifying possible manipulation patterns is urgent and of great importance, as plenty of investors dreaming of getting rich overnight are attracted to the market. The answer to this question will help investors recognize the potential risks and inform regulation. Based on the leaked data, a recent paper~\cite{gandal2018price} points out that the Mt. Gox exchange manipulated the Bitcoin price, building a regression model to identify the influence of the activities of some suspicious accounts on the price. We adopt a completely different method and obtain more results, including fake volume, price manipulation, and manipulation patterns.
Figure \ref{fig_frame} shows an overview of our analysis. We first verify the leaked data and remove many unreasonable records. Then, by comparing the transaction price with the disclosed Mt. Gox price from quandl.com, we find many abnormal transactions. Using these transactions, we divide the accounts into three categories: extremely high accounts (EHA), extremely low accounts (ELA), and normal accounts (NMA). Next, we construct the extremely high graph (EHG), extremely low graph (ELG), and normal graph (NMG) by treating accounts as nodes and transactions as edges. We conduct various graph-structure analyses on EHG, ELG, and NMG, such as node and edge classification, graph-cluster measurement, and degree distributions. This investigation leads to new observations and findings; for example, the abnormal accounts (i.e., EHA and ELA) might be controlled by the exchange and used to provide liquidity and fake volume for the exchange. Finally, by dividing the graphs into daily snapshots and reassembling them in a matrix, we extract base graphs through singular value decomposition (SVD). In doing so, we find that the abnormal accounts' transactions are strongly correlated with the Bitcoin price. Furthermore, we find many strange transaction patterns (such as self-loops, bi-directions, triangles, etc.) within the abnormal accounts. These patterns are considered evidence of market manipulation in the exchange.
\begin{figure}[htbp]
\centering
\includegraphics[width=.47\textwidth]{frame.png}
\caption{An overview of our analysis.}\label{fig_frame}
\end{figure}
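As an illustration of the decomposition step, the sketch below applies SVD to a stack of daily snapshots, each flattened into a row vector (a toy example with random data and a hypothetical 10-account graph; the real matrices are, of course, far larger):

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_accounts = 30, 10

# Hypothetical stack of daily snapshots: row t is the flattened
# (n_accounts x n_accounts) weighted adjacency matrix of day t.
snapshots = rng.random((n_days, n_accounts * n_accounts))

# SVD factors the stack into temporal weights (U*S) and "base graphs" (rows of Vt).
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)

k = 3  # keep the k dominant base graphs
base_graphs = Vt[:k].reshape(k, n_accounts, n_accounts)
daily_weights = U[:, :k] * S[:k]  # contribution of each base graph on each day

# Rank-k reconstruction; the relative error drops as k grows.
approx = daily_weights @ Vt[:k]
err = np.linalg.norm(snapshots - approx) / np.linalg.norm(snapshots)
print(base_graphs.shape, daily_weights.shape, round(err, 3))
```

Inspecting the dominant base graphs and their daily weights is what lets the temporal analysis relate recurring transaction structures to the price series.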
In summary, we make the following major contributions.
\begin{itemize}
\item To the best of our knowledge, this is the first study of market manipulation of cryptocurrency via graph analysis and SVD. We demonstrate the effectiveness of the method by applying it to the leaked Mt. Gox transaction data.
\item We obtain many new observations and findings by characterizing the activities of different accounts (i.e., static network analysis) and applying SVD to the daily snapshots of the graphs (i.e., temporal network analysis). These findings convince us that there are many market manipulation behaviors in the exchange.
\item We detect many market manipulation patterns that have never been reported in this area. These patterns are strong evidence of market manipulation and can help investors and regulators recognize the dark side of the market and its severity.
\end{itemize}
The rest of the paper is organized as follows. After introducing the data set in Section \ref{data}, we detail the static network analysis in Section \ref{static_analysis} and the temporal network analysis in Section \ref{sec_price_ana}. Finally, we provide some related works in Section \ref{relatedwork} and conclude the paper in Section \ref{conclusion}.
\section{Data Set}\label{data}
\begin{table*}
\caption{A segment of the leaked data.}
\centering
\begin{tabular}{ccccccccc}
\hline
Trade\_Id& Date& User\_Id& Type& Currency& Bitcoins& Money& User\_Country& User\_State\\
\hline
1380587338975940& 2013/10/1 0:28:58& 125439& buy& USD& 0.5& 71.69169& US& NC\\
1380587338975940& 2013/10/1 0:28:58& 295701& sell& USD& 0.5& 71.69169& CA& QC\\
1380739642844790& 2013/10/2 18:47:22& 609336& buy& USD& 0.26177217& 33.96631& US& PA\\
1380739642844790& 2013/10/2 18:47:22& 36865& sell& USD& 0.26177217& 33.96631& US& CA\\
\hline
\end{tabular}
\label{tab_seg}
\end{table*}
In early 2014, the transaction history of Mt. Gox from April 2011 to November 2013 was leaked in the form of CSV files. Table \ref{tab_seg} reports a segment of the leaked data recorded on 2013/10/01. Two rows with the same \emph{Trade\_Id} indicate a complete transaction from the seller (\emph{Type=sell}) to the buyer (\emph{Type=buy}). The volume of the transaction is recorded in \emph{Bitcoins} and the turnover in \emph{Money}; thus the real-time Bitcoin price at the moment of the transaction is \emph{Money/Bitcoins}. Each user has a unique identity (\emph{User\_Id}), with the FIPS location codes recorded in the country (\emph{User\_Country}) and state (\emph{User\_State}) fields. Some other attributes (e.g., transaction fees) are not included in the table, as they are not used in this study.
\textbf{Data Cleaning.} As there are many duplicate entries in the leaked data, we adopt a similar data-cleaning procedure to previous studies \cite{gandal2018price,feder2018impact}. Specifically, we use the combination of four key fields, date, user ID, type, and Bitcoins, to remove duplicated entries (de-duplication strategy 2 in \cite{feder2018impact}). After this step, we remove all single-row transactions to make sure that each transaction has the corresponding buyer and seller (i.e., is a completed transaction). Then, we remove all duplicated complete transactions. This narrows the data from approximately 18 million rows to 13.5 million rows (i.e., 6.7 million completed transactions). This method is stricter than that in \cite{gandal2018price}, as complete transactions with the same trade\_id are treated as duplicates. We adopt a stricter method in the hope of providing more reliable results.
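A pure-Python sketch of these cleaning steps follows (field names follow Table \ref{tab_seg}; the helper function and its exact filtering order are our simplification of the procedure described above):

```python
from collections import Counter

def clean(rows):
    """rows: dicts with Trade_Id, Date, User_Id, Type, and Bitcoins fields."""
    # Step 1: de-duplicate on the four key fields
    # (de-duplication strategy 2 of Feder et al.).
    seen, deduped = set(), []
    for r in rows:
        key = (r["Date"], r["User_Id"], r["Type"], r["Bitcoins"])
        if key not in seen:
            seen.add(key)
            deduped.append(r)
    # Step 2: keep only complete transactions, i.e. a Trade_Id that
    # appears exactly twice (expected: one buy row and one sell row).
    counts = Counter(r["Trade_Id"] for r in deduped)
    return [r for r in deduped if counts[r["Trade_Id"]] == 2]
```

For example, a duplicated buy row is dropped in step 1, and a trade with only one remaining row (no matching counterpart) is dropped in step 2.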
\textbf{Advantages.} The leaked Mt. Gox data has many advantages for understanding transaction behaviors in cryptocurrency and their influence on the price. First of all, Mt. Gox was the dominant exchange and Bitcoin the main cryptocurrency during this period, so analyzing the cryptocurrency market based on this data set is more reliable and representative. Second, these data are much more finely grained than data extracted from the blockchain, since most trading activity is recorded only in the exchange. Furthermore, users can be identified by their accounts in the leaked data, while it is hard to identify a user on the blockchain because of its anonymity mechanism.
\section{Static Network Analysis}\label{static_analysis}
\subsection{Account Classification}
Before delving deeper into the Mt. Gox leaked data, we check the Bitcoin exchange price of each transaction (i.e., Money/Bitcoins) to inspect whether it falls between the highest and lowest disclosed exchange prices on the same day. To this end, we first download all the Bitcoin exchange rates (BTC vs. USD) on Mt. Gox from quandl.com (we call this the \emph{reference price}). Then, we compare the exchange price of each transaction with the reference price. Surprisingly, we find some abnormal transactions with a very high or very low exchange price. For example, on 2013/08/30, a transaction (trade\_ID=1377875127221631) had an exchange price of \$49,338.4/BTC, and another transaction (trade\_ID=1377876535345547) had an exchange price of only \$0.81/BTC, whereas, on the same day, the highest and lowest exchange prices in the downloaded data were \$142.76/BTC and \$128.56/BTC, respectively.
These transactions are abnormal, as the exchange price is clearly out of the reasonable range. In order to distinguish the transaction behavior of different accounts and its influence on the price, we divide all the accounts into three categories: extremely high accounts (EHA), extremely low accounts (ELA), and normal accounts (NMA). As a first step, we apply a simple approach to identify an \emph{abnormal} transaction. Suppose the highest and lowest reference prices on day $t$ are $H_t$ and $L_t$; we regard a transaction with real-time price higher than $1.5\times H_t$ as an extremely high price transaction (EHT) and one with real-time price lower than $0.5\times L_t$ as an extremely low price transaction (ELT). Both kinds of transactions are referred to as abnormal transactions (ABTs). Please note that we use $(0.5\times L_t,1.5\times H_t)$ instead of $(L_t,H_t)$ to identify abnormal transactions because there are many exchanges (and thus many reference prices) at the same time, and we cannot be sure that the reference price is the real price of the exchange. However, the factors 0.5 and 1.5 are enough to exclude any normal transaction. Finally, an account is an EHA if it has at least one extremely high price transaction and an ELA if it has at least one extremely low price transaction. Both EHAs and ELAs are referred to as abnormal accounts (ABAs). Note that an account can be both an EHA and an ELA if it is involved in both an EHT and an ELT. An NMA is an account involved in no abnormal transactions; that is, all of its transactions are normal transactions (NMTs).
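The classification rule above can be written down directly. A schematic Python version follows (the function names and tuple layout are ours, and the daily reference highs and lows are assumed to be available from the downloaded data):

```python
def classify_tx(price, H_t, L_t):
    """Label one transaction given the day's reference high H_t and low L_t."""
    if price > 1.5 * H_t:
        return "EHT"   # extremely high price transaction
    if price < 0.5 * L_t:
        return "ELT"   # extremely low price transaction
    return "NMT"       # normal transaction

def classify_accounts(transactions, reference):
    """transactions: (account, day, price) tuples; reference: day -> (H, L).
    An account with any EHT is an EHA and with any ELT an ELA; EHAs and
    ELAs together form the abnormal accounts (ABA)."""
    labels = {}
    for account, day, price in transactions:
        H, L = reference[day]
        labels.setdefault(account, set()).add(classify_tx(price, H, L))
    return labels
```

For the 2013/08/30 examples above, `classify_tx(49338.4, 142.76, 128.56)` returns `"EHT"` and `classify_tx(0.81, 142.76, 128.56)` returns `"ELT"`.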
Table \ref{tab_stat_account} shows the number of accounts and of each kind of transaction for each category of accounts. Four observations can be made from the table: 1) there are 14916 abnormal accounts, which account for 12.5\% (14916/119343) of all the accounts (note that the number of ABAs is not the sum of the numbers of EHAs and ELAs, because some accounts belong to both categories); 2) the abnormal transactions (\#ABT) of ABAs account for 2.8\% ($\approx$194790/6775117) of all transactions; 3) the normal transactions among ABAs (3025992-194790=2831202) account for more than 41\% (2831202/6775117) of all transactions; and 4) the sum of the numbers of transactions (\#Tx) among ABAs and among NMAs is far less than the total number of transactions, so many transactions occurred between ABAs and NMAs.
Based on these observations, one can conclude that the abnormal transactions do not occur by accident (observation 2) and that the abnormal accounts behave normally most of the time (observation 3). Thus, the abnormal accounts must exist for some special purpose. One of the most likely purposes is providing liquidity (observation 4, Section \ref{graph_res}). Considering a recent trader's analysis of the cryptocurrency market, which reports that in some exchanges most of the disclosed trading volume is fake~\cite{fakevolume}, another possible purpose for these accounts is faking volume. Besides, price manipulation is also a likely purpose (Section \ref{sec_price_ana}). In fact, we find that the abnormal transactions are strongly correlated with the Bitcoin exchange price and that there are many abnormal patterns in the transactions.
\begin{table}
\caption{Statistics of Accounts and Transactions.}
\centering
\begin{tabular}{cccccc}
\hline
Category & \#accounts & \#Tx & \#ABT & \#EHT & \#ELT \\
\hline
EHA & 10702 & 1406850 & 179701 & 138743 & 40958 \\
ELA & 5835 & 2486807 & 85784 & 29737 & 56047 \\
ABA & 14916 & 3025992 & 194790 & 138743 & 56047 \\
NMA & 104427 & 812865 & 0 & 0 & 0 \\
\hline
All & 119343 & 6775117 & 194790 & 138743 & 56047 \\
\hline
\end{tabular}
\label{tab_stat_account}
\end{table}
\subsection{Graph Construction}\label{graph_construct}
As each transaction involves a buyer and a seller, we can easily construct a directed graph from the records by considering each account as a node. Specifically, we define the constructed graph $G$ as follows.
\textbf{Graph Definition.} $G=(V,E,w)$, where $V$ is a set of nodes representing users (denoted by user ID) in the leaked data, $E$ is a set of edges, each representing an \emph{ordered} pair of nodes, and $w$ is a function associating each edge with a weight. Each pair indicates that there was at least one transaction from user $u$ (seller) to user $v$ (buyer) in the whole dataset. $w:E\rightarrow \mathbb{R}_+$ maps each edge to a weight, namely the total amount of Bitcoin transferred along the edge by one or more transactions.
In the remainder of this paper, we use the terms \emph{account}, \emph{user} and \emph{node} interchangeably. To compare network characteristics, we construct three graphs according to the nodes' categories as follows:
\begin{itemize}
\item EHG. The graph whose nodes are all EHAs.
\item ELG. The graph whose nodes are all ELAs.
\item NMG. The graph whose nodes are all NMAs.
\end{itemize}
To construct the graphs we adopt the following steps. Since after data validation each complete transaction has both a buy and a sell record (with the same transaction ID), we first construct a set of tuples $(S,B,v,t,l)$ from every complete transaction, where $S$ and $B$ represent the seller and buyer (denoted by user ID), $v$ is the amount of the transaction in Bitcoin, $t$ is the transaction time and $l$ is a label indicating the category of the transaction (i.e., EHT, ELT or NMT). We call these \emph{transaction tuples}, as each tuple corresponds to a unique transaction. Based on the transaction tuples, the aforementioned graphs are easy to construct. For example, to construct the EHG, we select all the tuples in which both the seller and the buyer are EHAs and sum the $v$ entries grouped by $S$ and $B$. The generated new tuples $(S,B,v)$ then form the EHG. The other graphs are constructed in the same way, selecting different tuples according to the nodes' categories.
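A minimal sketch of this group-by-and-sum construction, assuming the transaction tuples are available as Python tuples (function and variable names are illustrative):

```python
from collections import defaultdict


def build_graph(transaction_tuples, members):
    """Aggregate tuples (S, B, v, t, l) into weighted directed edges,
    keeping only trades whose seller and buyer both belong to the
    given account category (`members` is a set of account IDs)."""
    weight = defaultdict(float)
    for S, B, v, t, l in transaction_tuples:
        if S in members and B in members:
            weight[(S, B)] += v  # sum v grouped by (S, B)
    return dict(weight)
```

Calling `build_graph(tuples, EHA_set)` yields the EHG's edge list with aggregated weights; passing the ELA or NMA set yields the ELG or NMG in the same way.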
\subsection{Graph Analysis} \label{graph_res}
This subsection investigates the constructed graphs using various graph-analysis metrics. Figure \ref{fig_all} shows the three graphs. The NMG contains more nodes, indicating that it is more sparsely connected (note that we select 5,000 edges for each graph). We investigate the statistics and metrics in the following.
\begin{figure}[htbp]
\centering
\subfloat[EHG]{%
\includegraphics[width=.16\textwidth]{high5000.png}}\hfill
\vspace*{1mm}
\subfloat[ELG]{%
\includegraphics[width=.16\textwidth]{low5000.png}}\hfill
\subfloat[NMG]{%
\includegraphics[width=.16\textwidth]{normal5000.png}}\
\caption{Visualization of EHG, ELG, and NMG. For ease of illustration, we randomly select 5000 edges from each graph to draw the figure.}\label{fig_all}
\end{figure}
\vspace{0.5cm}
Table \ref{tab_stat} shows the statistics and metrics for each constructed graph. For comparison, we also construct the abnormal graph (i.e., the graph of all abnormal accounts, ABG) and the complete graph (i.e., the graph of all accounts, CG). In the following, we first introduce each statistic or metric and then detail the observations.
The number of nodes in each graph is the number of accounts in the corresponding category, in accordance with the statistics in Table \ref{tab_stat_account}. The only exception is that the number of nodes in the NMG is less than the number of NMAs, because some normal accounts interact only with abnormal accounts and are thus not included in the NMG.
\begin{table}[htbp]
\caption{Statistics of graphs.}
\centering
\begin{tabular}{cccccc}
\hline
graph & \# nodes & \# edges & clustering & avg. degree & avg. wgt. degree\\
\hline
EHG & 10702 & 212900 & 0.30& 19.89 & 505.43 \\
ELG & 5835 & 413881& 0.42 & 70.93 & 3107.68 \\
ABG & 14916 & 612885 & 0.31& 41.09 & 1439.04 \\
NMG & 86457 & 655882& 0.03& 7.59& 76.21 \\
CG & 119343 & 2682719 & 0.28& 22.48 & 426.54 \\
\hline
\end{tabular}
\label{tab_stat}
\end{table}
An edge in a graph indicates a ``channel'' between two accounts for buying or selling Bitcoin. As can be seen from the table, the number of edges in each graph is far less than the number of transactions, which means that many channels are used more than once. Another notable result is that the sum of the numbers of edges in the ABG and the NMG is far smaller than the number of edges in the CG. This indicates that many edges are channels between normal and abnormal accounts, and is evidence that the abnormal accounts provide liquidity in the exchange. The number of edges in the ABG is slightly larger than the sum of the numbers of edges in the EHG and ELG, since some edges connect EHAs with ELAs.
We report the clustering coefficients of all the graphs in column 4 of Table~\ref{tab_stat}. As can be seen, the clustering coefficients differ sharply among EHG, ELG, and NMG. The large clustering coefficients (i.e., 0.3 in EHG and 0.42 in ELG) reveal that if two abnormal accounts $A$ and $B$ both trade with abnormal account $C$, then $A$ and $B$ are very likely to trade with each other. In other words, the abnormal accounts tend to form triangles through transactions. Conversely, the clustering coefficient of the NMG is very small (i.e., 0.03), which reflects the normal situation: the probability of three normal accounts forming a triangle is very small. This result indicates that the abnormal accounts behave strangely and hints at the existence of market manipulation in the exchange.
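For reference, the (average) local clustering coefficient used here can be computed directly from an undirected adjacency structure; the following is a standard-library sketch of the metric, not the exact tool used in the original analysis:

```python
def avg_clustering(adj):
    """Average local clustering coefficient of an undirected graph,
    given as {node: set of neighbours}.  A node's coefficient is the
    fraction of its neighbour pairs that are themselves connected."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # coefficient taken as 0 for degree < 2
        links = sum(
            1 for u in nbrs for v in nbrs
            if u < v and v in adj.get(u, set())
        )
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)
```

A triangle yields 1.0 (every neighbour pair is connected), while a simple path yields 0.0, matching the intuition behind the EHG/ELG versus NMG comparison.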
The degree of a node is the number of edges connected to it; in our case, it is the number of accounts trading with that node. Figure~\ref{fig_degree_dis} shows the degree distributions of the three graphs, all of which approximately follow a power-law distribution, meaning that there are few large-degree nodes and many small-degree nodes. We estimate the exponents using the free statistical software R \cite{R} and the contributed package poweRlaw~\cite{poweRlaw}, and plot the fitted line $y\sim x^{-\alpha}$ for each distribution in red. The smaller the $\alpha$, the more variable the nodes' degrees. Thus, the abnormal accounts show less degree variability than the normal accounts, which may be because the abnormal accounts are controlled by the same organization.
\begin{figure}[htbp]
\centering
\subfloat[EHG]{%
\includegraphics[width=.16\textwidth]{high_degree.png}}\hfill
\vspace*{1mm}
\subfloat[ELG]{%
\includegraphics[width=.16\textwidth]{low_degree.png}}\hfill
\subfloat[NMG]{%
\includegraphics[width=.16\textwidth]{normal_degree.png}}
\caption{Degree distribution of EHG, ELG and NMG.}\label{fig_degree_dis}
\end{figure}
Columns 5 and 6 of Table \ref{tab_stat} show the average degree and the average weighted degree of the graphs. The large average degrees of the EHG and ELG indicate that the abnormal accounts trade more frequently than the normal accounts. The weighted degree is computed by taking the transaction volume (in Bitcoin) as the weight, so the average weighted degree represents the average transaction volume per edge. As can be seen, the average weighted degree of the ELG is far larger than that of the EHG; one possible reason is that the exchange prices of transactions in the ELG are relatively low, so the transaction volumes are large. Whatever the reason, an obvious fact remains: the average weighted degrees of the EHG and ELG are larger than that of the NMG, which means that edges between abnormal accounts carry more Bitcoin than edges between normal accounts.
Based on the results and analysis discussed above, we summarize the findings as follows:
\begin{itemize}
\item \textbf{Finding 1.} Some accounts (12.5\%) trade at very high or very low exchange prices in some of their transactions. We consider these accounts abnormal and under the control of the exchange for two reasons: 1) the abnormal transactions account for 2.8\% of all transactions, so they did not occur by accident; 2) such abnormal exchange prices could not plausibly be obtained by ordinary users.
\item \textbf{Finding 2.} Many seemingly normal transactions occurred between abnormal accounts ($>$ 41\%). There are two possible purposes for these transactions: 1) they are fake volume used to create an illusion of active trading; 2) they provide liquidity for the exchange.
\item \textbf{Finding 3.} The graphs of abnormal accounts have very large clustering coefficients. One possible reason is that these accounts are controlled by one organization, and thus their trades are not completely random.
\end{itemize}
These findings indicate that the exchange was likely involved in trading manipulation. As the exchange price is the key factor in trading, in the following section we discuss the possibility of price manipulation by the exchange.
\section{Temporal Network Analysis}\label{sec_price_ana}
As discussed above, the transaction networks of abnormal accounts (i.e., EHG and ELG) differ greatly from the NMG. We want to know whether these transactions are correlated with the Bitcoin price and what kinds of users and transactions (i.e., which graph structures) influence the Bitcoin price most. To this end, we compute daily snapshots of the graphs using a method similar to that of Section \ref{graph_construct}. To detect important changes in the graph structure, we compare successive snapshots of the graphs using singular value decomposition (SVD). The goal is to detect a set of base networks and to represent each day's snapshot as a linear combination of these base networks. Unlike in Section \ref{static_analysis}, in this section we focus on transaction data after 2012/12/01. There are several reasons for this choice. Firstly, the recent paper that demonstrates price manipulation at Mt. Gox uses the same transaction history \cite{gandal2018price}. Secondly, the Bitcoin price skyrocketed during this period. Thirdly, Mt. Gox was the main Bitcoin exchange during this period. Finally, most abnormal users and transactions (more than 60\%) are found after that date.
\subsection{Extract Base Networks}
To evaluate which networks influence the price most, we need to construct the daily snapshots of the three graphs: EHG$_t$, ELG$_t$ and NMG$_t$. We adopt the same process to construct each graph series. First of all, we construct the \emph{aggregate} network (e.g., EHG) based on tuples after 2012/12/01. Assume there are $n$ nodes and $L$ edges in the aggregate network; it can then be represented by an $n\times n$ weighted adjacency matrix $G$ with $L$ non-zero elements. We rearrange $G$ into a vector $g$ of length $L$ containing all the non-zero elements, which we call the \emph{edge-weight} vector. This vector describes the \emph{graph structure} of the aggregate network, as each element represents a possible edge and its weight. To construct the daily snapshot EHG$_t$ on day $t$, we recalculate the edge-weight vector $g_t$ (i.e., the graph structure on day $t$) from the transaction tuples of day $t$. Note that we do not change the ordering of the vector, so the $i$-th element of every edge-weight vector refers to the same edge; it may be zero if the edge does not exist on a particular day. For $T$ snapshots, we build the $T\times L$ graph time series matrix $X$ such that the $t$-th row of $X$ equals $g_t$. In this way, we obtain a matrix of $T$ samples, each representing a daily graph structure.
To account for the variation of the daily graph structure, we normalize $X$ such that the sum of each row equals 1, and then subtract the column averages from each column. As a result, both the row and column sums in the matrix will be zero. We compute the singular value decomposition of the matrix $X$:
\begin{equation}
X=U\Sigma V^T,
\end{equation}
where $U$ is a $T\times T$ unitary matrix, $\Sigma$ is a $T\times L$ diagonal matrix with non-negative values on the diagonal, and $V$ is an $L\times L$ unitary matrix. The non-negative values on the diagonal are the \emph{singular} values and are usually sorted in descending order. The left-singular vectors, contained in the columns of $U$, are a set of orthonormal eigenvectors of $XX^T$, and the right-singular vectors, contained in the columns of $V$, are a set of orthonormal eigenvectors of $X^TX$.
Since in this case $T<L$, there are only $T$ non-zero singular values. We denote the sorted singular values by $(\sigma_1,\cdots,\sigma_T)$, the left-singular vectors by $(\vec{u_1},\cdots,\vec{u_T})$ and the right-singular vectors by $(\vec{v_1},\cdots,\vec{v_T})$, where $\vec{u_i}$ and $\vec{v_i}$ are column vectors subject to the following equations:
\begin{equation}
\vec{u_i}^T\vec{u_j}=\vec{v_i}^T\vec{v_j}=\delta_{ij}.
\end{equation}
Given the special meaning of the matrix $X$, we can interpret the singular vectors and singular values as follows: 1) the right-singular vectors can be seen as \emph{base networks}, and the element $v_i(l)$ (i.e., the $l$-th element of the $i$-th right-singular vector) gives the weight of the $l$-th edge in the $i$-th base network; 2) the left-singular vectors account for the temporal variation of the base networks: the $t$-th element of $\vec{u_i}$ (denoted $u_i(t)$) gives the contribution of the $i$-th base network on day $t$; 3) the singular values $\sigma_i$, which are the square roots of the non-zero eigenvalues of both $X^TX$ and $XX^T$, indicate the overall importance of the $i$-th base network in approximating the whole matrix. Note that the singular values are sorted in decreasing order and thus give decreasing contributions to the result.
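The whole pipeline (row normalization, column centring, and SVD of the snapshot matrix) is a few lines of NumPy; the sketch below assumes the $T\times L$ matrix $X$ has already been built from the daily edge-weight vectors:

```python
import numpy as np


def base_networks(X):
    """Normalize each row (day) of the T x L snapshot matrix to sum to 1,
    subtract the column averages, and decompose the result with SVD.
    Rows of Vt are the base networks, columns of U give their daily
    contributions, and s holds the singular values in descending order."""
    Xn = X / X.sum(axis=1, keepdims=True)  # each day's weights sum to 1
    Xn = Xn - Xn.mean(axis=0)              # remove the column averages
    U, s, Vt = np.linalg.svd(Xn, full_matrices=False)
    return U, s, Vt
```

After the two normalization steps both the row and the column sums of the matrix are zero, as stated above, and the truncated decomposition reconstructs the normalized matrix exactly when all $T$ components are kept.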
\subsection{Detecting Graph Structural Changes}
As the (normalized) weight of the $l$-th edge in the daily graph structure on day $t$ can be written as:
\begin{equation}
x_{tl} = \sum_{i=1}^T \sigma_i u_i(t)v_i(l),
\end{equation}
to detect graph structural changes, we need to consider two terms: $\sigma_{i}$ (i.e., the importance of the $i$-th base network) and $u_i(t)$ (i.e., the contribution of the $i$-th base network on day $t$).
At first glance, we consider the daily influence of the first and most important base network (i.e., $u_1(t)$). We want to know the correlation between the variation of $u_1(t)$ and the fluctuation of the Bitcoin exchange price. As the range of the price is $(12, 1207)$, we adopt a simple transform to map most of the transformed prices into the interval $(0, 1)$. Specifically, we use the log transform $B(t)=\log_{1000}P(t)$, where $P(t)$ is the closing exchange price of Bitcoin on day $t$. Table \ref{tab_u1r} (left part) shows three commonly used correlation coefficients (Pearson, Spearman, and Kendall) between $u_1(t)$ and the log-transformed price $B(t)$. The results show that the daily variation of the first base network of the EHG and of the ELG has a very strong correlation with the Bitcoin exchange price, whereas for the NMG there is no correlation between the two variables. This indicates that the transactions between abnormal accounts have a great influence on the Bitcoin exchange price.
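The transform and the Pearson coefficient can be sketched with the standard library alone (SciPy's `pearsonr`, `spearmanr` and `kendalltau` would give the tabulated coefficients directly):

```python
import math


def log_price(p):
    """B(t) = log_1000 P(t), mapping prices in (12, 1207) roughly
    into the interval (0, 1)."""
    return math.log(p) / math.log(1000)


def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Applying `pearson` to $u_1(t)$ and the `log_price`-transformed series reproduces the left part of the table.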
\begin{table}
\caption{Correlation coefficients between the left-singular vectors of the network time series matrix and the Bitcoin exchange price.}
\centering
\begin{tabular}{c|ccc||ccc}
\hline
\multirow{2}{*}{Graph}& \multicolumn{3}{c||}{The 1st base network} & \multicolumn{3}{c}{The first 10 base networks}\\
\cline{2-7}
& $\rho_{\rm{P}}$& $\rho_{\rm{S}}$ & $\rho_{\rm{K}}$ & $\rho_{\rm{P}}$& $\rho_{\rm{S}}$& $\rho_{\rm{K}}$ \\
\hline
EHG& 0.56& 0.60& 0.44& \textbf{0.811} & \textbf{0.807}& 0.620\\
ELG& 0.58& \textbf{0.82}& 0.64 & \textbf{0.871}& \textbf{0.834} &0.652\\
NMG& 0.05& 0.15& 0.12 &0.239& 0.398 &0.289\\
\hline
\end{tabular}
\label{tab_u1r}
\end{table}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{sigularvalues.pdf}
\caption{Singular values in order of importance.}\label{sigularvalues}
\end{figure}
\vspace{0.5cm}
\begin{figure*}[thbp]
\centering
\subfloat[EHG]{%
\includegraphics[width=.32\textwidth]{high_price.pdf}}\hfill
\vspace*{1mm}
\subfloat[ELG]{%
\includegraphics[width=.32\textwidth]{low_price.pdf}}\hfill
\subfloat[NMG]{%
\includegraphics[width=.32\textwidth]{normal_price.pdf}}\
\caption{Approximate the log-transformed Bitcoin price with the linear combination of the selected base networks of EHG, ELG, and NMG.}\label{fit_price}
\end{figure*}
\begin{figure*}[thbp]
\centering
\subfloat[EHG]{%
\includegraphics[width=.31\textwidth]{high_fit.pdf}}\hfill
\vspace*{1mm}
\subfloat[ELG]{%
\includegraphics[width=.31\textwidth]{low_fit.pdf}}\hfill
\subfloat[NMG]{%
\includegraphics[width=.31\textwidth]{normal_fit.pdf}}\
\caption{The time-varying contribution $u_i(t)$ of the first four base networks.}\label{uit}
\end{figure*}
Motivated by this result, we want to know to what extent the log-transformed price can be estimated by a linear combination of the left-singular vectors, i.e.,
\begin{equation}\label{discompose}
B(t) \sim c_0 + \sum_{i=1}^N c_iu_i(t),
\end{equation}
where $c_0$ is the mean of $B(t)$ and $c_i$ can be computed as the dot product of $B(t)$ and $u_i(t)$.
As the left-singular vectors are orthonormal and span the $T$-dimensional linear space, $B(t)$ can be reconstructed exactly from the $u_i(t)$ when $N=T$. However, this is not what we want here: the purpose of this study is to identify a few important base networks and accounts that have a great influence on the Bitcoin price. To proceed, we first select the important base networks among those detected. We draw the scree plot of the singular values in Fig. \ref{sigularvalues}. As can be seen, the curve of the singular values clearly levels off to the right of the dotted line (i.e., after the 10th singular value). Thus, we select the first 10 base networks for the following analysis.
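The fit of Eq.~(\ref{discompose}) amounts to projecting $B(t)$ onto the selected orthonormal vectors; a sketch (assuming the $u_i$ are the columns of $U$ from the SVD, passed as plain lists; names illustrative):

```python
def fit_price(B, U_cols):
    """Approximate B(t) ~ c0 + sum_i c_i * u_i(t), where c0 is the mean
    of B and c_i is the dot product of B with the i-th (orthonormal)
    left-singular vector u_i."""
    n = len(B)
    c0 = sum(B) / n
    coeffs = [sum(b * u[t] for t, b in enumerate(B)) for u in U_cols]
    return [c0 + sum(c * u[t] for c, u in zip(coeffs, U_cols))
            for t in range(n)]
```

When $B$ itself lies in the span of the chosen (zero-mean) vectors plus a constant, the fit recovers it exactly; with only $N=10$ of the $T$ vectors the fit is an approximation, whose quality is what the correlation coefficients measure.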
Before analyzing the accounts in the selected base networks, we approximate $B(t)$ with the selected networks. To evaluate the quality of the fit, we calculate the correlation coefficients between the fitted price series and $B(t)$. The right part of Table \ref{tab_u1r} shows the results. Surprisingly, all three correlation coefficients are greatly enhanced compared with those of the first left-singular vector alone. In particular, the Pearson correlation coefficient between the ELG fit and $B(t)$ is 0.87, versus only 0.24 for the NMG. This great difference indicates a strong correlation between the abnormal accounts' transactions and the Bitcoin exchange price, which is strong evidence of price manipulation in Mt. Gox.
Figure \ref{fit_price} shows the trends of $B(t)$ and the fitted price. As can be seen, although the shape of the peak in April 2013 is missed, the trend of $B(t)$ is captured by the selected base networks of the EHG and ELG, whereas the base networks of the NMG fail to capture the trend.
To show the structural variation of the networks, we plot the time-varying contribution $u_i(t)$ of the first four base networks in Fig. \ref{uit}.
In most cases, $u_i(t)$ exhibits a few abrupt changes, partitioning the transaction history into separate time periods. The most notable abrupt changes occur in December 2012, when the Bitcoin exchange price was very smooth, and in November 2013, when the price was skyrocketing. During these two periods, the effects of the first four base networks of the EHG and ELG are both significant; the base networks of the NMG, however, show no distinct effect during the smooth period and an effect on only a few days of the skyrocketing period.
\begin{figure*}[htbp]
\centering
\subfloat[Self-Loop]{%
\label{Self-Loop}
\includegraphics[width=.26\textwidth,height=.26\textwidth]{20130207sample.png}}\hfill
\vspace*{1mm}
\subfloat[Unidirection]{%
\label{Unidirection}
\includegraphics[width=.26\textwidth,height=.26\textwidth]{20130815sample.png}}\hfill
\subfloat[Bidirection]{%
\label{Bi-direction}
\includegraphics[width=.26\textwidth,height=.26\textwidth]{20130414sample.png}}\
\subfloat[Triangle]{%
\label{Triangle}
\includegraphics[width=.26\textwidth,height=.26\textwidth]{20131025sample.png}}\hfill
\subfloat[Polygon]{%
\label{Polygon}
\includegraphics[width=.26\textwidth,height=.26\textwidth]{20130919sample.png}}\hfill
\subfloat[Star]{%
\label{Star}
\includegraphics[width=.26\textwidth,height=.26\textwidth]{20130912sample.png}}\
\caption{Some typical abnormal transaction patterns.}\label{tran_pattern}
\end{figure*}
\subsection{Abnormal transaction patterns}
As discussed above, the transactions between abnormal users have a strong correlation with the Bitcoin exchange price. A natural question is which edges (i.e., transactions), and thus which accounts, are the most influential, and whether the transactions show certain patterns during the period. To this end, based on the extracted 10 base networks, we further extract the top-10 ranked edges (by absolute weight) in each base network. We find only 44 distinct edges instead of the 100 maximally possible, involving a total of 28 accounts in the EHG. In the ELG, 57 edges and 46 accounts are found. We call these \emph{core abnormal accounts}.
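The selection of core edges can be sketched as follows, where each base network is one row of $V^T$ (one weight per edge) and `edge_ids` maps vector positions back to account pairs; names are illustrative:

```python
def core_edges(base_nets, edge_ids, k=10):
    """For each base network (a sequence of edge weights), take the k
    edges with the largest absolute weight and return the union over
    all base networks.  Duplicates collapse, so the result may hold
    fewer than k * len(base_nets) edges."""
    core = set()
    for weights in base_nets:
        top = sorted(range(len(weights)),
                     key=lambda l: abs(weights[l]), reverse=True)[:k]
        core.update(edge_ids[l] for l in top)
    return core
```

The overlap between the per-network top-10 lists is exactly why only 44 (resp. 57) distinct edges survive out of the 100 maximally possible.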
To identify special transaction patterns, we draw the daily subgraphs of the core abnormal accounts. We find many abnormal transaction patterns (i.e., market manipulation patterns) in the networks. To save space, we show only 6 typical patterns in Fig.~\ref{tran_pattern}. These subgraphs are all extracted from the ELG on different days. For clarity, we fix the layout of the graphs (i.e., the position of each account is the same in every graph) and mark the special patterns in red. The thickness of a line and the number beside a directed edge both indicate the number of transactions between the two accounts. We briefly describe the 6 patterns as follows:
\begin{itemize}
\item \textbf{Self-Loop.} A pattern in which an account trades with itself. In the subgraph for 2013/02/07 shown in Fig. \ref{Self-Loop}, account 231 made 749 transactions with itself. Self-loops are forbidden for normal accounts on any exchange, as they serve no economic purpose. Thus, a reasonable explanation is that the account belongs to the exchange and is used to inflate daily transaction volume or to manipulate the price.
\item \textbf{Unidirection.} The unidirectional pattern indicates more than one transaction from account \emph{A} to \emph{B}. Figure~\ref{Unidirection} shows a unidirectional pattern on 2013/08/15, where account 527332 made 322 sell transactions to account 231. It is possible for an account to sell Bitcoin to another account more than once; however, it is almost impossible for two normal accounts to trade with each other so many times on the same day.
\item \textbf{Bi-direction.} The bi-directional pattern, in which two accounts trade with each other many times, is a typical market manipulation behavior, especially when the two accounts are controlled by the same user. Figure \ref{Bi-direction} shows a bi-directional pattern on 2013/04/14, where account 144834 traded with account 231 more than 150 times.
\item \textbf{Triangle.} The triangle pattern indicates a triangle-like structure among three accounts. It can take various forms depending on the directions of the edges. Figure \ref{Triangle} shows a special form of the triangle pattern on 2013/10/25: the accounts form a loop through transactions (account 282004 $\rightarrow $71885 $\rightarrow $490089 $\rightarrow $282004).
\item \textbf{Polygon.} The polygon pattern is a more complicated pattern in which many accounts form a polygon-like \emph{group}, with each edge carrying more than one transaction. Figure \ref{Polygon} shows a quadrangle pattern on 2013/09/19; it appears that account 282004 sends Bitcoin to account 527332 through the ``bridge accounts'' 488195 and 231 in more than two hundred transactions.
\item \textbf{Star.} A star pattern has a core account that buys from or sells Bitcoin to many accounts. Figure~\ref{Star} shows a typical star, where account 282004 sells Bitcoin to accounts 488195, 490089, 527332 and 231.
\end{itemize}
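The simplest of these patterns can be flagged mechanically from one day's (seller, buyer) pairs; the following sketch detects self-loops and heavily used bidirectional pairs (the threshold and function names are illustrative):

```python
from collections import Counter


def flag_patterns(trades, min_count=2):
    """Flag self-loops and heavily used bidirectional pairs in a list
    of (seller, buyer) trades for a single day."""
    counts = Counter(trades)
    self_loops = {a for (a, b) in counts if a == b}
    bidirectional = {
        (a, b) for (a, b), c in counts.items()
        if a < b and c >= min_count
        and counts.get((b, a), 0) >= min_count
    }
    return self_loops, bidirectional
```

Triangle, polygon, and star patterns require a subgraph search rather than simple counting, but the same per-day edge multiset is the natural input.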
Generally speaking, it is not surprising for a transaction network to contain such structures, as transactions are random. In our case, however, it is highly implausible, because each edge represents far more than one transaction in a single day.
Thus, it seems quite possible that these accounts are controlled by a certain group and that these transactions serve special purposes.
Based on the results, we summarize the findings as follows:
\begin{itemize}
\item \textbf{Finding 4.} The daily fluctuations of the selected base networks of the EHG and ELG have a strong correlation with the Bitcoin exchange price. On the contrary, the daily fluctuations of the base networks of the NMG show no correlation with the price. This finding indicates that the transaction behavior of the abnormal accounts affects the fluctuation of the Bitcoin exchange price.
\item \textbf{Finding 5.} The trend of the Bitcoin exchange price can be captured by the selected base networks of the EHG and ELG. This means that the trend of the price can be predicted from transactions between abnormal accounts.
\item \textbf{Finding 6.} There are many unusual transaction patterns (e.g., self-loop, bi-direction, star) between abnormal accounts. These patterns imply that the accounts are controlled by the same group and are strong evidence of price manipulation.
\end{itemize}
\section{Related Work}\label{relatedwork}
Blockchain is a new technology with many research directions, attracting the interest of researchers from various fields \cite{shaoan,zheng2017overview}. Our research is related to previous work in two areas. The first is the study of the large fluctuations of the Bitcoin price. As mentioned above, many driving factors of the price have been identified. Since all the related data are time series, the most commonly used methods are time-series models such as the vector space model \cite{georgoulausing}, the vector error-correction model \cite{ciaian2016economics}, the ARDL bounds testing method \cite{bouoiyour2015does}, wavelet analysis \cite{kristoufek2015main}, and vector autoregression \cite{ciaian2016economics}.
Another related area is the study of blockchain data (i.e., the transaction ledger) for different topics. Because the blockchain data are publicly accessible and users are pseudonymous, a common topic is mining the blockchain data to reveal users' privacy \cite{Reidanalysisanonymitybitcoin2013, AndroulakiEvaluatinguserprivacy2013, AtheyBitcoinpricingadoption2016}. Owing to the relative lack of regulation, the blockchain ecosystem has become an area full of scams; thus, mining blockchain data to detect scams is also a critical topic. Recent studies on this topic cover Bitcoin-based scams \cite{VasekTherenofree2015}, smart-contract-based Ponzi schemes \cite{bartoletti2017dissecting, chen2018detecting}, money laundering \cite{moser2013inquiry}, and attacks \cite{Chen2018under}. See \cite{weilisurvey} for a full survey of this topic.
\section{Conclusion and Future Work}\label{conclusion}
We conducted a systematic study of the leaked Mt. Gox transaction data through graph analysis. By comparing the price of each transaction with the disclosed daily price, we identified many abnormal transactions and used them to divide the accounts into three categories. Based on this classification, we constructed three graphs (EHG, ELG, and NMG) and obtained many findings by analyzing them with various metrics. These findings convince us that there were many
market manipulation behaviors in the exchange. To reveal the relationship between these behaviors and the Bitcoin price, we reconstructed the graphs into daily graph series and reshaped them into matrices. By applying SVD to the matrices, we identified several important base networks. Inspecting these base networks, we found that the daily variation of the abnormal base networks is closely related to the Bitcoin price and exhibits many market manipulation patterns. Based on these findings, and considering that Bitcoin is dominant in the market, we propose to strengthen supervision of this market. In the future, we will conduct a more thorough study of the data to reveal the extent to which the market was affected and to discuss the changes in investor behavior under extreme price fluctuations.
\section*{Acknowledgment}
The work described in this paper was supported by the National Key Research and Development Program (2016YFB1000101), the National Natural Science Foundation of China (61722214, 11801595), the Pearl River S\&T Nova Program of Guangzhou (201710010046) and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (2016ZT06D211).
\bibliographystyle{IEEEtran}
\section{Introduction}
Space experiments designed to study cosmic ray fluxes at very high
energies must have, among other features, an excellent hadron to
electron discrimination. Electromagnetic calorimeters have a very good
intrinsic hadron rejection power based on the shower topology (depth
and lateral shape). Unfortunately, to be fully exploited, this
property requires calorimeters with a high granularity readout and
sufficient thickness to ensure
the full containment of the shower.
These features cannot be fully implemented on satellites and airborne
experiments in general due to weight constraints and power consumption
limitations.
The hadron to electron discrimination capabilities are consequently reduced.
A workaround solution was attempted
in the successful PAMELA experiment~\cite{PAMELA}. A neutron detector, consisting of a moderator and a set of \hetre
counters, was placed downstream of the
calorimeter. The aim was to exploit the neutron component of the
hadronic showers, much larger than in electromagnetic showers. This
neutron counter, despite its very low efficiency, has been
successfully used by the PAMELA collaboration for systematic effects
evaluation and for measuring the calorimeter efficiency.
The NEUCAL project aims to further pursue and expand the use of
neutron detectors coupled to calorimeters by introducing the new
technique of {\em active moderation}~\cite{elba}, that consists in
detecting the signal of neutrons while their energy degrades within
hydrogen-rich scintillators. The most promising time interval for the
detection of the neutron moderation signal is between $\sim10\nanos$ and $\sim100\nanos$ after the shower core, when the
peak of the neutron flux arrives.
In the hadronic showers neutrons within this time window are produced
by nuclear excitation processes and have energies of the order of $1\MeV$.
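As a rough cross-check of the quoted time window, a non-relativistic estimate (with an assumed flight path of about half a metre, a value not stated in the text) places MeV-scale neutrons squarely in the 10--100 ns range:

```python
import math

# Back-of-envelope check that ~1 MeV neutrons arrive within the quoted
# 10-100 ns window. Numbers here are illustrative assumptions.
c = 2.998e8                 # speed of light, m/s
m_n = 939.565               # neutron rest mass, MeV/c^2

def neutron_speed(E_kin_MeV):
    """Non-relativistic speed for the given kinetic energy."""
    return c * math.sqrt(2.0 * E_kin_MeV / m_n)

v = neutron_speed(1.0)      # ~1.4e7 m/s for a 1 MeV neutron

# Delay with respect to the relativistic shower core over an assumed
# ~0.5 m flight path between shower and NEUCAL scintillators.
path = 0.5                  # m (assumption, not from the text)
delay_ns = (path / v - path / c) * 1e9   # ~35 ns
```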
\section{NEUCAL prototype}
\label{sec:proto}
The NEUCAL prototype~\cite{como, vienna}, sketched in Fig.~\ref{fig:NeucalCAD}, consists of a three by three matrix of identical modules placed in a light-tight aluminum box with three shelves. Each module is made up of three slabs ($25\cm\times8\cm\times1\cm$) of a fast polyvinyl-toluene scintillator (EJ-230 by Eljen Technology), coupled through a common Plexiglas light guide to a fast fine-mesh photomultiplier (R5946 by Hamamatsu Photonics). For the sake of flexibility, only optical grease was used in the optical couplings. Five \hetre tubes (12NH25/1 by Canberra) are also placed on top of the central module.
The scintillators and the tubes were read out by fast digitizers capable
of recording up to $10\millis$ of data samples. Two similar boards
were used: one CAEN V1731 with $500\MSperSec$ capability but only 8
bit range over an input dynamics of $1\,{\rm Vpp}$; one CAEN V1720 with a more limited $250\MSperSec$
capability but 12 bit range over an input dynamics of $2\,{\rm
Vpp}$. Each board has eight input channels. For a better timing
precision one channel of each board was used to sample the trigger signal. Seven of the nine scintillator signals were sent to the $500\MSperSec$ board, while the two remaining scintillator signals and the five \hetre tubes signals were sent to the $250\MSperSec$ board. The readout was performed with a VME system.
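The trade-off between the two boards can be quantified by the size of one ADC count (LSB $=$ input range $/\,2^{\rm bits}$); the short helper below simply restates the specifications given above.

```python
# Amplitude resolution of the two digitizer boards described above:
# the LSB (smallest distinguishable voltage step) is Vpp / 2**bits.
def lsb_mV(vpp_V, bits):
    """Least significant bit of an ADC, in millivolts."""
    return vpp_V / 2**bits * 1e3

v1731_lsb = lsb_mV(1.0, 8)    # 500 MS/s board: ~3.9 mV per count
v1720_lsb = lsb_mV(2.0, 12)   # 250 MS/s board: ~0.49 mV per count

# Despite the larger input range, the 12-bit board resolves amplitudes
# 8x finer, which helps it cope with the large shower signals.
ratio = v1731_lsb / v1720_lsb
```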
\begin{figure}[t]
\hspace{0.01\textwidth}%
\includegraphics[width=0.55\textwidth]{fig/Neucal_CAD_crop.png}
\hspace{0.02\textwidth}%
\begin{minipage}[b]{0.41\textwidth}\caption{\label{fig:NeucalCAD}CAD open view of the NEUCAL detector prototype. The scintillators are in blue, and the five \hetre tubes (green) are placed on top of the central module.}
\end{minipage}
\end{figure}
\section{Testbeam setup and simulation}
In summer 2009 the performance of the NEUCAL prototype was measured at CERN on the SPS H4 line during a test of the prototype of the
Cream-2 tungsten-scintillator calorimeter ({\em CalW}\/)~\cite{cream2}. Data were collected for negative pions ($350\GeV$), positrons ($100\GeV$ and $150\GeV$), and muons ($150\GeV$);
the muon data are not used in the present analysis.
In all testbeam configurations, sketched in Fig.~\ref{fig:TBConf}, NEUCAL was placed downstream of CalW and of all other possible devices and/or absorbers. In configuration a), used with positron beams, the upstream material corresponds to $16\,{\rm X_0}$ and $0.6\,\lambda_{\rm I}$; in configurations b) and c), used with pion beams, the upstream material corresponds to $29\,{\rm X_0}$ and $1\,\lambda_{\rm I}$, and $25\,{\rm X_0}$ and $0.8\,\lambda_{\rm I}$, respectively. In all configurations the NEUCAL active volume was symmetrically positioned and orthogonal with respect to the beam axis.
The NEUCAL prototype and the testbeam setup have been accurately
modelled with Geant4~\cite{geant4,geant} to validate the results
against the Monte Carlo simulation. A sketch of the geometry of the
configuration c), as implemented in Geant4, is visible in Fig.~\ref{fig:TBGeant}. The Geant4 simulation of NEUCAL has been also cross-checked with Fluka~\cite{fluka1, fluka2} with respect to the single neutron response, as described in~\cite{como, vienna}.
\begin{figure}[t]
\begin{minipage}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{fig/TB_conf.pdf}
\caption{\label{fig:TBConf}Sketch of the three testbeam configurations used for the present analyses of the NEUCAL data.}
\end{minipage}
\hspace{0.01\textwidth}%
\begin{minipage}[b]{0.49\textwidth}
\includegraphics[width=0.95\textwidth]{fig/TB_G4_crop.png}%
\caption{\label{fig:TBGeant}Testbeam geometry corresponding to configuration c) as implemented in Geant4.}
\end{minipage}
\end{figure}
Comparisons of testbeam data with the Geant4 simulation are made only at the level of the energy deposition in scintillators and of the number of counts in \hetre tubes. In fact, no readout chain response is presently modelled in the simulation.
Two sets of simulated samples have been produced using different
physics lists suitable for the NEUCAL use case~\cite{geantphysref}:
\verb|QGSP_BERT_HP| ({\em QBERT}\/), with Bertini model and
Quark-Gluon String Precompound model to generate the final state for
hadron inelastic scattering respectively below and above $\sim10\GeV$;
\verb|QGSP_BIC_HP| ({\em QBIC}\/), similar to QBERT but
with the Binary Cascade model in place of the Bertini model for the
final state generation below $\sim10\GeV$. Both physics lists
include a high-precision model for low energy neutron
transportation.
The typical size of simulated samples is 20 thousand events for pions
and 80 thousand events for positrons.
\section{Results}
By default data were taken in zero suppression mode, implemented in the
digitizer boards, in order to reduce the event size and the readout time.
The signals up to one millisecond after the initial charged-particle
shower were then reconstructed offline to search for delayed neutron
interactions. The neutron signature is an isolated energy deposit in
one of the scintillator modules or a pulse in the \hetre tubes, while
a traversing charged particle will release energy on more than one
scintillator module~\cite{vienna}.
Unfortunately, the photomultipliers and the readout electronics often saturate because of the huge signal induced by the shower core, with different effects seen on data taken with and without zero suppression.
In zero suppression mode the readout hardware spoilt the data in the
time window close to the shower core as a consequence of reflections
seen at the digitizer output lasting for a few hundred nanoseconds.
In the few ten thousand events taken without zero suppression
the board with limited ADC range (V1731) still showed
slowly-recovering saturation effects.
These issues have been understood and fixed in view of the next tests. Eventually, despite the difficult conditions, the samples of Table~\ref{table:samples} have been used to produce the present results.
In the following, all plots report points corresponding to: pion sample data (\includegraphics[height=0.4\baselineskip]{fig/pidata.pdf}) and pion simulation predictions for QBERT (\includegraphics[height=0.7\baselineskip]{fig/pibert.pdf}) and QBIC (\includegraphics[height=0.7\baselineskip]{fig/pibic.pdf}); positron sample data (\includegraphics[height=0.4\baselineskip]{fig/posdata.pdf}) and positron simulation predictions for QBERT (\includegraphics[height=0.7\baselineskip]{fig/posbert.pdf}) and QBIC (\includegraphics[height=0.7\baselineskip]{fig/posbic.pdf}). All plots shown in this paper are normalized to the number
of events which develop a shower in the upstream material: almost all positron events
and $45\%$ to $60\%$ of pion events, depending on the configuration.
\begin{table}
\caption{\label{table:samples}Testbeam data samples used for each analysis. ZS stands for `zero suppression'.}
\begin{center}
\begin{tabular}{lllll}
\br
Analysis &Time Domain & Mode & $e^+$ sample & $\pi^-$ sample\\
\mr
\hetre counters & $0.1\micros$ -- $0.1\millis$ & ZS & 39k, $100\GeV$; conf. a) & 76k, $350\GeV$; conf. b) \\
Scint. `early' & $<1\micros$ & no ZS & 7k, $150\GeV$; conf. a) & 18k, $350\GeV$; conf. c) \\
Scint. `late' & $1\micros$ -- $1\millis$ & ZS & 39k, $100\GeV$; conf. a) & 76k, $350\GeV$; conf. b) \\
\br
\end{tabular}
\end{center}
\end{table}
The analysis of the traditional \hetre counters (see
Table~\ref{table:samples} for sample details) is an important
validation of the simulation environment. The results are summarized
in Fig.~\ref{fig:3He} where the number of pulses per event on all counters is plotted versus the logarithm of time, in microseconds. Both for data and simulation, the signal time is always referred to the arrival of the shower, taken as time zero.
The behaviour is well reproduced by the simulation, but significant differences in the absolute prediction exist for pions between QBIC and QBERT, observable also in the scintillator analyses reported below. These discrepancies will be investigated further when new datasets from future tests become available.
\begin{figure}[b]
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{fig/hitData_3He_LogNHitLogT_crop.png}
\caption{\label{fig:3He}Number of pulses per event of all \hetre counters as a function of the logarithm of time, in microseconds.\\}
\end{minipage}
\begin{picture}(0,0)
\put(-64,-17){\includegraphics[width=0.13\textwidth]{fig/legenda.png}}
\end{picture}
\hspace{0.01\textwidth}%
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{fig/hitData_LogELogT_side_crop.png}
\caption{\label{fig:late}Hit energy per event of one top-side
scintillator module
as a function of the logarithm of time, in microseconds, `late'
domain.}
\end{minipage}
\begin{picture}(0,0)
\put(394,87){\includegraphics[width=0.13\textwidth]{fig/legenda.png}}
\end{picture}
\end{figure}
The analyses of the scintillator modules data search for neutron interaction candidates ({\em hits}\/) defined as single isolated signals exceeding $0.3\GeV$ in energy. This definition is also implemented at the simulation level. All scintillator modules were previously calibrated in energy with cosmic muons~\cite{tiberio}.
Details of the used samples can be found in
Table~\ref{table:samples}. In particular, the analysis of the `early'
scintillator data ($<1\micros$) is performed on samples without zero
suppression; `late' domain analysis, from $1\micros$ to $1\millis$, relies upon data with zero suppression.
Data from scintillators are contaminated by several background sources, not modelled in the simulation, for which specific rejection criteria have been deployed. Signal reflections in zero suppression mode are rejected by vetoing specific time delays between subsequent hits; the signals due to off-trigger beam particles can be reduced by asking no hits from different modules in coincidence; spurious effects due to saturation phenomena are reduced by rejecting signals with abnormally long duration.
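A schematic version of such a hit selection might look as follows; the thresholds, field names, and veto windows are illustrative assumptions, not the actual analysis cuts.

```python
# Schematic offline hit selection mirroring the cuts described above.
# All thresholds and record fields are illustrative assumptions.
E_MIN = 0.3            # GeV, minimum isolated energy deposit
MAX_WIDTH = 50.0       # ns, reject abnormally long (saturated) pulses
DEAD_TIME = 20.0       # ns, veto window against signal reflections

def select_hits(pulses):
    """pulses: dicts with keys module, t (ns), energy (GeV), width (ns).
    Returns neutron-interaction candidates after the rejection cuts."""
    hits, last_t = [], {}
    for p in sorted(pulses, key=lambda p: p["t"]):
        # energy threshold and abnormal-duration (saturation) cut
        if p["energy"] < E_MIN or p["width"] > MAX_WIDTH:
            continue
        # reflection veto: drop pulses too close in time to the previous
        # accepted hit in the same module
        if p["t"] - last_t.get(p["module"], -1e9) < DEAD_TIME:
            continue
        # isolation: no accepted hit in a different module in coincidence
        if any(abs(h["t"] - p["t"]) < 1.0 and h["module"] != p["module"]
               for h in hits):
            continue
        hits.append(p)
        last_t[p["module"]] = p["t"]
    return hits
```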
An example of the results of the `late' analysis, from $1\micros$ to $1\millis$, is given in Fig.~\ref{fig:late} where the hit energy per event is plotted as a function of the logarithm of time, in microseconds, for one top-side module; all modules show similar behaviour.
This range is populated by depositions resulting from neutron captures
on nuclei and it is not directly interesting for the detection of the
neutron moderation. Nevertheless, the good overall agreement between
data and simulation is an indirect indication that neutron flux
estimations are under control. Capture signals could be used as a
complementary handle for neutron detection if they can be made
numerically significant on the single event.
An off-trigger contribution due to beam contamination can be observed in the positron
data for times greater than about $30\micros$ in Fig.~\ref{fig:late},
and also in Fig.~\ref{fig:3He}. This is due to the much greater intensity of the positron beam used in the test.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\textwidth]{fig/hitData_LogELogT_crop.png}
\end{center}
\caption{\label{fig:early}Hit energy per event as a function of the
logarithm of time, in microseconds, for positrons and pions from the nine NEUCAL scintillator modules, `early' time domain. Top row corresponds to the upstream module shelf with respect to the beam arrival direction. Same symbol conventions of Fig.~\ref{fig:3He} and Fig.~\ref{fig:late} apply.}
\end{figure}
The `early' time domain analysis ($<1\micros$), performed on data without zero suppression, is represented in Fig.~\ref{fig:early} where the hit energy per event is reported versus the logarithm of time, in microseconds, for all nine modules.
It is evident that the central modules suffer from shower-core contamination for both pions and positrons, especially at short times.
Side modules are less affected by this issue. Nevertheless, the agreement between data and simulation is again not satisfactory for positrons, with the exception of the two bottom-side modules. This is a saturation effect in the modules read out by the V1731 board, due to the huge signal generated by electromagnetic showers not fully contained by the upstream calorimeter.
The analysis at times before $200\nanos$, where most
of the neutron moderation signal is expected, is thus
restricted to the bottom-side modules that are not directly flooded by
the electromagnetic shower and that are read out by the board V1720,
which has the dynamic range to withstand the huge signal.
The hit energy per event for the bottom-side
modules as a function of time, now represented in linear scale, is shown in Fig.~\ref{fig:earlyBottomSide}.
The agreement with the
simulation is satisfactory, with the exception of the region below a few
tens of nanoseconds, where a significant shower contamination is still
present and cannot be further reduced due to the limitations of the
present setup. This plot demonstrates that the signal due to neutrons in
pion data is significantly larger than in positron data in the time
interval relevant for the NEUCAL application.
\begin{figure}[t]
\includegraphics[width=0.63\textwidth]{fig/hitData_LogELinT200_side_crop.png}
\begin{picture}(0,0)
\put(-100,120){\includegraphics[width=0.18\textwidth]{fig/legenda.pdf}}
\end{picture}
\hspace{0.01\textwidth}%
\begin{minipage}[b]{0.35\textwidth}\caption{\label{fig:earlyBottomSide}Hit energy per event as a function of time (linear scale) after the shower core.}
\end{minipage}
\end{figure}
\section{Conclusions}
NEUCAL is a detector designed to exploit the innovative ``active moderation'' technique in neutron counting. The final aim is the development of a light and compact device, suitable for space experiments, to help electromagnetic calorimeters in hadron to electron discrimination. The NEUCAL prototype has been tested with pion and positron showers, and the data compared with an accurate Geant4 simulation. The results show generally good agreement between data and simulation and are very promising in view of the device's application.
Further confirmation is expected from upcoming tests, which will
profit from the experience gained so far and will be organised with an improved, higher-performance setup.
\ack
The authors wish to express their gratitude to P~S~Marrocchesi, G~Bigongiari, and P~Maestro
for their support during data taking at CERN and in the analysis of the data.
\section*{References}
\section{\def\@secnumfont{\mdseries}\@startsection{section}{1}%
\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}%
{\normalfont\scshape\centering}}
\def\subsection{\def\@secnumfont{\bfseries}\@startsection{subsection}{2}%
{\parindent}{.5\linespacing\@plus.7\linespacing}{-.5em}%
{\normalfont\bfseries}}
\makeatother
\begin{document}
\baselineskip 15pt
\title{Image of Lie polynomial of degree $2$ evaluated on Nilpotent Lie algebra}
\author{Niranjan}
\address{ Indian Institute of Science Education and Research, Mohali, Punjab, India}
\email{[email protected], [email protected]}
\author{Shushma Rani$^{\ast}$}
\address{Indian Institute of Science Education and Research, Mohali, Punjab, India}
\email{[email protected], [email protected].}
\thanks{2020 Mathematics Subject Classification. 17A60,17B05,17B30\\
$^{*}$The corresponding author.}
\keywords{Multilinear Lie polynomials, Nilpotent Lie algebra, Frattini subalgebra, Isoclinism, Breadth}
\maketitle
\begin{abstract} We delineate the image of a multilinear Lie polynomial of degree $2$ evaluated on $L$, where $L$ is a finite-dimensional nilpotent Lie algebra over a field $k$ with $\dim L' \leq 4$.
\end{abstract}
\section{Introduction} In recent years, there has been keen interest in the famous Lvov-Kaplansky conjecture: \emph{The image of a multilinear polynomial in noncommutative variables over a field $K$ on the matrix algebra $M_n(K)$ is a vector space.} For more details, see \cite{serveyLKC2020}, \cite{image2}, \cite{imageliemon2017}, \cite{imglie}. A variation of the Lvov–Kaplansky conjecture was formulated in \cite{imglie}: \emph{The image of a multilinear Lie polynomial over a field $K$ on classical Lie algebras is a vector space}. The authors proved this result for multilinear Lie polynomials of degrees $3$ and $4$ on some classical Lie algebras.
A nonzero multilinear Lie polynomial in two variables is nothing but a nonzero scalar multiple of the Lie bracket of these variables. For multilinear Lie polynomials of degree $2$, this problem was studied by Brown in \cite{brown1963}. He proved that every element of a split semi-simple Lie algebra $L$ is a Lie bracket, over any infinite field or any finite field of sufficiently large cardinality. For real semi-simple Lie algebras satisfying certain conditions, the same result was proved in \cite{dmitri2015}. On simple Lie algebras, this problem can be seen as the Lie-theoretic version of the famous Ore conjecture, which was proved recently in \cite{ore_2010}. \emph{Ore's conjecture: Every element of a finite simple group is a single commutator.}
On nilpotent Lie algebras, the image set of a nonzero multilinear Lie polynomial of degree $2$ is not necessarily a vector space. The smallest such nilpotent Lie algebra over a field $k$ of characteristic not equal to $2$ is $L_{6,21}(0)$; see Section \ref{sec6}. A natural way to study this question is therefore to impose constraints on the Lie algebra. We impose a constraint on the spanning set of the image, which is nothing but the derived subalgebra $L'=[L, L].$ Let us denote the image set of a multilinear Lie polynomial of degree $2$ on $L$ by $w(L).$ For a finite-dimensional nilpotent Lie algebra $L$ over $k$, the breadth of $x \in L$, denoted by $b(x)$, is the rank of the adjoint map $ad x\colon L \rightarrow L$ given by $y \mapsto [x,y].$ The breadth of a Lie algebra $L$, denoted by $b(L)$, is the maximum of the breadths of all elements of $L$, and the breadth type is a tuple consisting of all possible breadths of elements of $L$. The following theorems are the main results of our paper.
\begin{thm}\label{dimL'3} Let $L$ be a finite-dimensional nilpotent Lie algebra over field $k$ of characteristic not equal to $2$ such that $\dim L' \leq 3.$ Then $w(L)=L'.$
\end{thm}
\begin{thm}\label{thmA} Let $L$ be a nilpotent Lie algebra over the finite field $k=\mathbb{F}_q$ of odd characteristic with $\dim L'=4.$ Then $w(L) \neq L'$ if and only if one of the following holds:
\begin{itemize}
\item $L$ is $4$-step nilpotent with $\dim (L)=6 $ and $\dim (Z(L))=2.$
\item $L$ is $3$-step nilpotent with $\dim (L)=7 $ and $\dim (Z(L))=3.$
\item $L$ is $2$-step nilpotent with $\dim (L)=8 $ along with one of the following:
\begin{itemize}
\item Breadth type of $L$ is $(0, 1,3)$ or $(0,1, 2, 3)$.
\item Breadth type of $L$ is $(0,2,3)$ and $L$ does not possess any generating set $\{u_1,u_2,u_3,u_4\}$ such that $[u_1,u_2]=0=[u_3,u_4].$
\end{itemize}
\end{itemize}
Further, if $w(L) \neq L'$ then each element of $L'$ is the sum of at most two elements of $w(L).$
\end{thm}
The image set of the commutator word map on finite groups is a current topic of research \cite{group2005}, \cite{P-group2005}, \cite{manoj2021}. Our results are similar to those given for finite $p$-groups in \cite{P-group2005}, \cite{manoj2021}. Most of the arguments presented in the proof of Theorem \ref{thmA} are similar to \cite{manoj2021}.
Our paper is organized in the following way. In Section \ref{sec1}, we discuss some definitions and notation. In Section \ref{sec2}, we collect results that will be used in each of the subsequent sections. Section \ref{sec3} gives the proof of Theorem \ref{dimL'3}. Sections \ref{sec4}, \ref{sec5}, and \ref{sec6} treat the question under investigation for $2$-step, $3$-step, and $4$-step nilpotent Lie algebras, respectively. In the last section, Section \ref{sec7}, we discuss the proof of our main result, Theorem \ref{thmA}.
{\em Acknowledgments. The authors thank Tushar Kanta Naik for helpful discussions. The first author acknowledges the CSIR research grant: 09/947(0084)/2017-EMR-I. The second author acknowledges the CSIR research grant: 09/947(0082)/2017-EMR-I.}
\section{Preliminaries}\label{sec1}
We will assume $L$ is a finite-dimensional Lie algebra over field $k.$
\noindent
\begin{defn} A nonzero {\bf multilinear Lie polynomial} $w$ of degree $2$ is a polynomial over field $k$ that can be written in the form $$w(x,y)=c[x,y]$$
where $0 \neq c \in k.$
\end{defn}
\begin{defn} Let $L$ be a Lie algebra over the field $k$, then the lower central series of $L$ is defined as
$$ L^0 \supset L^1 \supset L^2 \supset \cdots \supset L^i \supset L^{i+1} \supset \cdots $$ where $L^0=L$, $L^1=L'=[L,L]$, and $L^i=[L, L^{i-1}].$ $L$ is said to be a nilpotent Lie algebra if this lower central series terminates at a finite step, that is, $L^n=0$ for some $n.$ If $n$ is the least non-negative integer with this property, i.e., $L^{n-1} \neq 0$, then $n$ is called the nilpotency class of $L$, or we say $L$ is {\bf $n$-step nilpotent}. For example, any abelian Lie algebra is $1$-step nilpotent.
\end{defn}
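As a concrete illustration of this definition, the $3$-dimensional Heisenberg algebra, realised here as strictly upper-triangular $3\times 3$ matrices (a standard realisation, though not one used in the text), has lower central series terminating after two steps:

```python
import numpy as np

# Lower central series of the 3-dimensional Heisenberg algebra with basis
# x = E_{12}, y = E_{23}, z = E_{13} and bracket [a, b] = ab - ba.
def E(i, j):
    m = np.zeros((3, 3))
    m[i, j] = 1.0
    return m

def bracket(a, b):
    return a @ b - b @ a

x, y, z = E(0, 1), E(1, 2), E(0, 2)

# L^1 = [L, L] is spanned by [x, y] = z; L^2 = [L, L^1] vanishes,
# so this Lie algebra is 2-step nilpotent.
L1 = bracket(x, y)                        # equals z
L2 = [bracket(a, L1) for a in (x, y, z)]  # all zero matrices
```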
From \cite{tower1973},\cite{marshall1967}, and \cite{ernest1970}, we recall some definitions and results that are required in our proofs.
\begin{defn} An element $x \in L$ is called a non-generator of $L$ if, whenever $S$ is a subset of $L$ such that $S$ and $x$ together generate $L$, then $S$ alone generates $L.$
\end{defn}
\begin{defn}
The {\bf Frattini subalgebra}, $F(L)$ of a Lie algebra, is defined as the intersection of the maximal subalgebras of $L.$
\end{defn}
\begin{rem} Any maximal subalgebra $M$ of a nilpotent Lie algebra $L$ is an ideal of $L.$ The Frattini subalgebra $F(L)$ of a nilpotent Lie algebra is equal to the derived subalgebra of $L$ and $F(L)$ is the set of non-generators of $L$. Further, the dimension of $L/L'$ is the minimal number of generators of $L.$
\end{rem}
\begin{defn}
Two Lie algebras $L_1$ and $L_2$ over field $k$ are said to be isoclinic, whenever there exist isomorphisms $\eta: L_1/Z(L_1) \longrightarrow L_2/Z(L_2)$ and $\tau: L_1' \longrightarrow L_2'$ such that if $$\eta(u_1+Z(L_1))= v_1+Z(L_2), \,\,
\eta(u_2+Z(L_1))= v_2+Z(L_2)$$ then $\tau([u_1,u_2])=[v_1,v_2]$. The pair $(\eta,\tau)$ is called isoclinism between $L_1$ and $L_2$.
\end{defn}
\begin{defn} A Lie algebra $L$ is said to be {\bf stem (or pure)} if $Z(L) \subseteq L'.$
\end{defn}
\begin{rem}
Each isoclinism family of finite-dimensional Lie algebra contains a stem Lie algebra of minimal dimension.
\end{rem}
\iffalse
\begin{rem}
For a Lie algebra $L_1$, it turns out that $L_1$ is \textbf{isoclinic} to $L_1\oplus A$ for any abelian Lie algebra $A.$ Furthermore, if a finite-dimensional Lie algebra $L_1$ of dimension $m$ is isoclinic to a stem Lie algebra $L_2$ of dimension $n$, then $L_1$ is isomorphic to $L_2 \oplus A$ for an abelian Lie algebra $A$ of dimension $ m-n .$
\end{rem}
\fi
\begin{defn}
A Lie algebra $L$ is said to be {\bf the central product} of $M$ and $N$, if $L=M+N$, where $M$ and $N$ are ideals of $L$ such that $[M, N]=0$ and $M \cap N \subseteq Z(L).$
\end{defn}
\begin{defn}
For a finite-dimensional nilpotent Lie algebra $L$ over $k$, the breadth of $x \in L$, denoted by $b(x)$, is the co-dimension of $C_L(x)$, the centralizer of $x$ in $L$; it can also be thought of as the rank of the adjoint map $ad x\colon L \rightarrow L$ given by $y \mapsto [x,y].$ The maximum of the breadths of all elements of $L$ is called the {\bf breadth of the Lie algebra} $L$, denoted by $b(L)$. For a Lie algebra $L$, the \textbf{breadth type} is a tuple consisting of all possible breadths of elements of $L$.
\end{defn}
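A small numerical sketch of this definition, again on the Heisenberg example (our own illustration, not from the text): the breadth of an element is the rank of the matrix of its adjoint map in a chosen basis.

```python
import numpy as np

# Breadth on the 3-dimensional Heisenberg algebra with basis (x, y, z),
# where [x, y] = z and z is central. The matrix of ad x in this basis
# sends y to z and kills x and z.
ad_x = np.array([[0, 0, 0],    # coefficient of x in [x, basis vector]
                 [0, 0, 0],    # coefficient of y
                 [0, 1, 0]])   # coefficient of z: [x, y] = z

ad_z = np.zeros((3, 3))        # z is central, so ad z = 0

b_x = np.linalg.matrix_rank(ad_x)   # breadth 1 for the non-central x
b_z = np.linalg.matrix_rank(ad_z)   # breadth 0 for the central z
# Hence the Heisenberg algebra has breadth type (0, 1) and b(L) = 1,
# consistent with dim L' = 1.
```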
\section{Results}\label{sec2}
The proofs of the following lemmas are straightforward.
\begin{lem}
Let $L_1$ and $L_2$ be two isoclinic finite-dimensional Lie algebras. Then $L_1$ and $L_2$ are of the same breadth type.
\end{lem}
\begin{lem}\label{prelem}
Let $L_1$ and $L_2$ be two isoclinic finite-dimensional nilpotent Lie algebras over field $k.$ Then $w(L_1)=L_1'$ if and only if $w(L_2)=L_2'.$
\end{lem}
In light of the above, for the classification of Lie algebras of a given breadth type, it is enough to consider stem Lie algebras of that breadth type. From now onwards, \textbf{we will consider only stem Lie algebras}.
\begin{lem}\label{maximal_nilpotency_class}
For a finite-dimensional Lie algebra $L$ over a field $k$ which is at least $4$-step nilpotent, $Z(L)\cap L'$ cannot be maximal in $L'.$
\end{lem}
\begin{proof}
Suppose, if possible, that $Z(L)\cap L'$ is maximal in $L'.$ Then $[L':Z(L)\cap L']=1$, so $L/ (Z(L)\cap L')$ is $2$-step nilpotent, which implies that $L^2 \subseteq Z(L)$, a contradiction to the given hypothesis.
\end{proof}
The proofs of the following three lemmas are straightforward.
\begin{lem}\label{prelem3}
Let $L$ be a finite-dimensional nilpotent Lie algebra over field $k=\mathbb{F}_q$ and $I \subseteq Z(L)\cap L'.$ If there exists $u_1, u_2, \cdots, u_n \in L$ such that $L'/I =\bigcup\limits_{i=1}^{n}[u_i+I, L/I]$ and $I \subseteq \bigcap\limits_{i=1}^{n}[u_i,L]$, then $L'= \bigcup\limits_{i=1}^{n}[u_i,L].$
\end{lem}
\begin{lem}\label{prelem4}
Let $L$, a finite-dimensional Lie algebra, be a direct sum of Lie algebras $L_1$, $L_2.$ Then $w(L)=L'$ if and only if $w(L_1)=L_1'$ and $w(L_2)=L_2'.$
\end{lem}
\begin{lem}\label{prelem5} Let $L$ be a finite-dimensional nilpotent Lie algebra over field $k$ and $I$ be an ideal of $L$ of dimension $1$ contained in $Z(L)$ such that $(L/I)'=w(L/I).$ Then each element of $L'$ is the sum of at most two elements of $w(L).$
\end{lem}
The following three results from \cite{misra2015} and \cite{ssk2021}, classify the breadths of Lie algebras.
\begin{thm}\label{breadth1_classify} From \cite[Theorem 2.3]{misra2015}, $b(L)=1$ if and only if $\dim L'=1.$
\end{thm}
\begin{thm}\label{breadth2_classify} From \cite[Theorem 3.1]{misra2015}, let $L$ be a finite-dimensional nilpotent Lie algebra over field $k$ of characteristic not equal to $2.$ Then $b(L)=2$ if and only if one of the following conditions holds:
\begin{itemize}
\item[(1)]$\dim L'=2$, or
\item[(2)]$\dim L'=3$ and $\dim (L/Z(L))=3.$
\end{itemize}
\end{thm}
\begin{thm}\label{breadth3_classify} From \cite[Theorem 3.1]{ssk2021}, let $L$ be a finite-dimensional nilpotent Lie algebra over $k=\mathbb{F}_q.$ Then $b(L)=3$ if and only if one of the following holds:
\begin{itemize}
\item[(i)] $\dim (L')=3$ and $[L:Z(L)] \geq 4.$
\item[(ii)] $\dim (L')\geq 4$ and $[L:Z(L)] = 4.$
\item[(iii)] $\dim (L')=4$ and there exists an ideal $I \subseteq Z(L)$ of $L$ with $\dim I=1$ and $[L/I:Z(L/I)] = 3.$
\end{itemize}
\end{thm}
\begin{rem}\label{remark} Let $L$ be a finite-dimensional nilpotent Lie algebra over $\mathbb{F}_q$ with $\dim(L')=4.$ Then by the above theorems it follows that $b(L) \geq 3.$ We will use this information throughout the following without reference.
\end{rem}
The following theorem shows that every finite-dimensional nilpotent Lie algebra with a $1$-dimensional derived subalgebra is a Heisenberg Lie algebra up to isoclinism.
\begin{thm}\label{heisenberg_thm} Let $L$ be an $n$-dimensional nilpotent Lie algebra over field $k$ with $b(L)=1.$ Then $L$ has a basis $\{x_1,y_1,x_2,y_2,\cdots,x_m,y_m,z_1, \cdots, z_{n-2m} \}$ with $[x_i,y_j]=\delta_{ij}z_1$ and $[z_j,L]=0.$
\end{thm}
\begin{lem}\label{ext_lemma} Let $L$ be a finite-dimensional nilpotent Lie algebra over field $k$ and $I$ be a central ideal of $L$ such that $[L/I:Z(L/I)]=3$, then
{\it{(a)}} If $\frac{L/I}{Z(L/I)}$ is a $3$-generator Lie algebra, then, apart from three generators $x,y,z$ (say) of $L$, we can choose all other generators $u_1, u_2, \cdots, u_n$, $n \geq 0$, so that $[u_i, L] \subseteq I.$ If $\dim I=1$, then $[u_i, L]=I.$ Furthermore, $L/I$ is $2$-step nilpotent.
{\it{(b)}} If $\frac{L/I}{Z(L/I)}$ is a $2$-generator Lie algebra, then, apart from two generators $x,y$ (say) of $L$, we can choose all other generators $u_1, u_2, \cdots, u_n$, $n \geq 0$, in such a way that $[u_i, L] \subseteq I.$ If $\dim I=1$, then $[u_i, L]=I.$
\end{lem}
\begin{proof}
Since $\dim \frac{L/I}{Z(L/I)}=3$, the minimal number of generators of $\frac{L/I}{Z(L/I)}$ can be $2$ or $3.$
{\it{(a)}} Suppose $\frac{L/I}{Z(L/I)}$ is a $3$-generator Lie algebra; say $\bar{x},\bar{y},\bar{z}$ are three generators that do not belong to $Z(L/I)$, and $\bar{u_1}, \bar{u_2}, \cdots, \bar{u_n}$, $n \geq 0$, are the other generators of $L/I$, which belong to $Z(L/I).$ We can choose representatives in $L$ of pre-images of each of these generators under the natural map $L \longrightarrow L/I.$ Since $I \subset Z(L) \subset L'$, we obtain $x,y,z, u_1, u_2, \cdots, u_n$, $n \geq 0$, as generators of $L$ such that $[u_i, L] \subseteq I.$ Observe that $u_i \notin Z(L)\subset L'$, as the $u_i$'s are generators of $L.$ Now $0 \neq [u_i,L]\subseteq I$ and $\dim I=1$ imply $[u_i,L]= I.$ Since $\frac{L/I}{Z(L/I)}$ is a $3$-generator Lie algebra of dimension $3$, we have $(\frac{L/I}{Z(L/I)})'=0$, i.e., $L'/I \subseteq Z(L/I)$, which implies that $L/I$ is $2$-step nilpotent.
{\it{(b)}} If $\frac{L/I}{Z(L/I)}$ is a $2$-generator Lie algebra, the proof follows by similar arguments.
\end{proof}
The following theorem reduces our study to Lie algebras of small dimension.
\begin{thm}\label{keyresult} Let $L$ be a finite-dimensional nilpotent Lie algebra over field $k= \mathbb{F}_q$ of odd characteristic, with $Z(L) \subseteq L'$, $\dim L'= 4$ and $b(L)=3.$ If $L$ is $3$-step nilpotent, then one of the following holds:
\begin{enumerate}
\item[(i)] There exists a $6$-dimensional ideal $M$ of $L$, generated by $2$ elements, such that $M'=L'$ and the nilpotency class of $M$ is the same as that of $L$. If $\dim L \geq 7$, then $L=M+N$ where $N$ is a Lie subalgebra of $L$ with $ \dim N' \leq 1.$ Further, if $N$ is non-abelian, it is isoclinic to the Heisenberg Lie algebra over $k.$
\item[(ii)] There exists a $5$-dimensional ideal $M$ of $L$, generated by $2$ elements, such that $M'\subset L'$ and the nilpotency class of $M$ is the same as that of $L$. If $\dim L \geq 7$, then $L$ is the central product of $M$ and a $2$-step nilpotent Lie subalgebra $N$ that is isoclinic to the Heisenberg Lie algebra over $k.$
\item[(iii)] There exists a $7$-dimensional ideal $M$ of $L$, generated by $3$ elements, such that $M'= L'$ and the nilpotency class of $M$ is the same as that of $L$. If $\dim L \geq 8$, then $L=M+N$ where $N$ is a Lie subalgebra with $ \dim N' \leq 1.$ Further, if $N$ is non-abelian, it is isoclinic to the Heisenberg Lie algebra over $k.$
\end{enumerate}
If $L$ is $4$-step nilpotent, then only $(i)$ holds.
\end{thm}
\begin{proof}
Since $b(L)=3$ and $\dim L'=4$, by Theorem \ref{breadth3_classify}, either $[L: Z(L)]=4$ or there exists a one-dimensional ideal $I$ of $L$ such that $[L/I:Z(L/I)] = 3.$
\noindent
If $[L: Z(L)]=4$, then the minimal number of generators of $L$ is $2,3$ or $4.$ If $L$ is a $4$-generator Lie algebra, then $L'=Z(L)$, i.e., $L$ is $2$-step nilpotent, which contradicts the given hypothesis.
If $L$ is a $2$-generator Lie algebra, then $L$ itself is $6$-dimensional and satisfies (i). If $L$ is a $3$-generator Lie algebra, then $L$ itself is $7$-dimensional and satisfies (iii). Further, $[L':Z(L)]=1$, so $L$ is at most $3$-step nilpotent by Lemma \ref{maximal_nilpotency_class}. But it cannot be $2$-step nilpotent, as otherwise $L' \subseteq Z(L).$ Hence, $L$ is $3$-step nilpotent.
Now consider the other case: there exists a $1$-dimensional ideal $I$ of $L$ with $[L/I:Z(L/I)] = 3.$ Then the minimal number of generators of $\frac{L/I}{Z(L/I)}$ is $2$ or $3.$ We discuss these cases below. First, assume that $L$ is $3$-step nilpotent. Then one of the following holds:
(a) $ I= L^2$; \qquad (b) $ I\subsetneq L^2$; \qquad (c) $ I \nsubseteq L^2.$
{\textbf{\it (a)}} Suppose $I= L^2$, i.e., $L/I$ is $2$-step nilpotent. Then $\frac{L/I}{Z(L/I)}$ is a $3$-generator Lie algebra. Indeed, if $\frac{L/I}{Z(L/I)}$ were a $2$-generator Lie algebra, then $L'/I=L'/L^2$ would be a $1$-generator Lie algebra, so $\dim L'/L^2=1$, and $\dim L^2= \dim I=1$ would imply $\dim L'=2$, contradicting the given hypothesis. By Lemma \ref{ext_lemma}, it follows that, apart from three generators $x,y,z$ of $L$, all other generators $u_1,u_2, \ldots, u_n$, $n \geq 0$, satisfy $[u_i,L] = I$ for all $1 \leq i \leq n.$
Set $M:=\gen{x,y,z}$ and $N:=\gen{u_1,u_2,\ldots, u_n}$, which are Lie subalgebras of $L.$ Observe that $\dim((M'+I)/I)=3$, as $L'/I=(M'+I)/I$. We claim that $I \subseteq M'.$ Since $[u_i,L]= I$ for all $1 \leq i \leq n$, the centralizer $C_L(u_i)$ is maximal in $L$. So $L' \subseteq C_L(u_i)$, as $L'=F(L)$ is the intersection of all maximal subalgebras of $L$.
Since $L^2=[L',L]=I \subseteq Z(L)$, any generator $v$ of $I$ can be written as
$$v=[\alpha_1w_1+\alpha_2w_2+\alpha_3w_3, \beta_1 x+\beta_2 y+\beta_3 z] \in L^2,$$ where $L'/I=\gen{w_1+I, w_2+I, w_3+I}$ with $w_1,w_2,w_3 \in M'$. Thus, $v \in [M',M] = M^2 \subseteq M'$, i.e., $I \subseteq M'$, and the claim is proved. Therefore $M'/I=L'/I$, i.e., $M'=L'.$ Thus, $M$ is $7$-dimensional and $3$-step nilpotent, since $0\neq v \in M^2$ gives $M^2 \neq 0.$ Since $N=\gen{u_1, \ldots, u_n}$ and $[u_i,L]= I$ for all $1 \leq i \leq n$, we get $N'\subseteq I$, i.e., $N$ is at most $2$-step nilpotent. Now $L' =M' \subseteq M$ and $N$ acts on $M$ via the adjoint map; $M$ is an ideal of $L$, $L=M+N$ and $[N,M]\subseteq I.$ If $N$ is non-abelian, then $N$ is isoclinic to the Heisenberg Lie algebra by Theorem \ref{heisenberg_thm}.
{\textbf{\it (b)}} If $I\subsetneq L^2$, then $L/I$ is $3$-step nilpotent and $[L/I: Z(L/I)] = 3$, so $\frac{L/I}{Z(L/I)}$ is a $2$-generator Lie algebra. By Lemma \ref{ext_lemma}, $L$ can be generated by $\{x, y, u_1,\ldots, u_n\}$ with $ [u_i, L] =I.$ Using the same arguments as in the preceding case, the required result follows by taking $M := \gen{x,y}$ and $ N:=\gen{u_1,\ldots,u_n}$, where $ \dim M =6.$
{\textbf{\it (c)}} If $I \nsubseteq L^2$, then $I \cap L^2 = 0$, as $\dim I=1.$ Thus, $L/I$ is also $3$-step nilpotent. By the same arguments as in the above case, $L/I$ is a $2$-generator Lie algebra and $L=\gen{x, y, u_1, \ldots, u_n}$ with $ [u_i, L] =I.$ Let $M_1 := \gen{x,y}.$ Observe that $L'/I=(M_1'+I)/I$ has dimension $3$, $\dim ((M_1 +I)/I) =5$, and $M_1$ and $L$ have the same nilpotency class. Since $M_1$ is a $2$-generator Lie algebra, $M_1'/ M_1^2$ is $1$-dimensional. Thus, $M_1$ cannot contain $I$, which implies $\dim M_1 =5.$ If $M_1 \subseteq C_L(u_i)$ for all $1 \leq i \leq n$, i.e., $[x,u_i] = 0 = [y,u_i]$ for all $1 \leq i \leq n$, then $N_1:=\gen{u_1,\ldots,u_n}$ with $N_1' =I$ is isoclinic to the Heisenberg Lie algebra over $k$, and hence $L$ is the central product of $M_1$ and $N_1.$ Therefore, $M=M_1$ and $N=N_1$ are the required Lie subalgebras. Now suppose $M_1 \nsubseteq C_L(u_i)$ for some $i$, i.e., $[u_i, M_1]\neq 0.$ Then $0 \neq [u_i, M_1]\subseteq [u_i, L]=I$ implies that $[u_i, M_1] = I$, and the Lie subalgebra $M :=\gen{x,y,u_i}$ of $L$ is $7$-dimensional.
One easily checks that $M$ and $N := \gen{u_1, \ldots ,u_{i-1},u_{i+1},\ldots,u_n}$ are the required Lie subalgebras of $L.$
Now, assume that $L$ is $4$-step nilpotent. Then either $I = L^3$ or $I \neq L^3.$ We claim that there is no $4$-step nilpotent Lie algebra $L$ with $I \neq L^3.$ If such an $L$ existed, then $L/I$ would be $4$-step nilpotent, which is not possible, as $(L/I)/ Z(L/I)$, being of dimension $3$, can be at most $3$-step nilpotent. Therefore, we can assume that $I = L^3$, which implies that $L/I$ is $3$-step nilpotent. Hence, $L$ can be generated by $\{x, y, u_1, \ldots, u_n\}$ such that $[u_i, L] =I$ by Lemma \ref{ext_lemma}. The required result follows by taking $M :=\gen{x,y}$ and $N:=\gen{u_1,\ldots,u_n}$, where $ \dim M= 6.$ This completes the proof of the theorem.
\end{proof}
\section{Proof of Theorem \ref{dimL'3}}\label{sec3}
\begin{proof} If $\dim L'$ is $1$ or $2$, then the result holds by Theorem \ref{breadth1_classify} and the bound $b(L)\leq \dim L'$. Now, assume that $\dim L'=3$.
Since $Z(L) \subseteq L'$, we have $b(L)=2$ or $3$, so the following two cases arise:
{\it{ Case(1)}} If $b(L)=3$ then there exists an element $x \in L$ such that $\operatorname{rank}(\operatorname{ad} x)=3$, i.e., $L'=[x,L]$. Thus $L'=w(L).$
{\it{ Case(2)}} If $b(L)=2$, then $\dim (L/Z(L))=3$ by Theorem \ref{breadth2_classify}. Also $[L':Z(L)] \leq [L:Z(L)]=3$. But $[L': Z(L)] \neq 2$ or $3$, since otherwise $L'=Z(L)$ or $L'=L$ respectively, which is not possible. So we have the following subcases. \medskip
{\it{subcase(2a)}} If $L'=Z(L)$, then $[L:L']=3$. Let $L=\gen {x,y,z}$; since $L^2=0$, i.e., $L$ is $2$-step nilpotent, $L'=\gen {[x,y],[x,z],[y,z]}$. For $\alpha,\beta,\gamma \in k$:
$$\begin{aligned}
&\text{If } \alpha \neq 0 \text{ then } \alpha[x,y]+ \beta [x,z]+\gamma [y,z]= [x-\frac{\gamma}{\alpha}z, \alpha y+ \beta z].\\
&\text{If } \alpha= 0 \text{ then }
\beta [x,z]+\gamma [y,z]= [z, -\beta x-\gamma y].
\end{aligned}$$
Thus, $w(L)=L'.$
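The two identities above are purely bilinear computations in a $2$-step nilpotent Lie algebra, so they can be checked mechanically. The following sketch (an illustration, not part of the proof; the formal model and the choice $q=5$ are ours) verifies both identities over $\mathbb{F}_5$ using formal basis commutators $e_{xy}, e_{xz}, e_{yz}$:

```python
from itertools import product

# Sketch: verify the two spanning identities of subcase (2a) over F_5.
# In a 2-step nilpotent Lie algebra on x, y, z the bracket is determined
# by the basis commutators exy, exz, eyz; brackets of commutators vanish.
p = 5
SIGN = {("x", "y"): ("exy", 1), ("y", "x"): ("exy", -1),
        ("x", "z"): ("exz", 1), ("z", "x"): ("exz", -1),
        ("y", "z"): ("eyz", 1), ("z", "y"): ("eyz", -1)}

def bracket(u, v):
    """Bracket of two linear combinations of x, y, z (dicts gen -> coeff)."""
    out = {"exy": 0, "exz": 0, "eyz": 0}
    for (a, ca), (b, cb) in product(u.items(), v.items()):
        if a != b:
            e, s = SIGN[(a, b)]
            out[e] = (out[e] + s * ca * cb) % p
    return out

for al, be, ga in product(range(p), repeat=3):
    target = {"exy": al, "exz": be, "eyz": ga}
    if al != 0:  # alpha[x,y]+beta[x,z]+gamma[y,z] = [x - (gamma/alpha)z, alpha*y + beta*z]
        lhs = bracket({"x": 1, "z": (-ga * pow(al, p - 2, p)) % p},
                      {"y": al, "z": be})
    else:        # beta[x,z]+gamma[y,z] = [z, -beta*x - gamma*y]
        lhs = bracket({"z": 1}, {"x": (-be) % p, "y": (-ga) % p})
    assert lhs == target
print("every element of L' is a single bracket over F_5")
```

The same loop passes for any odd prime in place of $5$.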
{ \it{subcase(2b)}} If $[L': Z(L)]=1$, then $L$ is at most $3$-step nilpotent by Lemma \ref{maximal_nilpotency_class}. But $L$ cannot be $1$- or $2$-step nilpotent, as then $L'=0$ or $L'=Z(L)$ respectively, which contradicts the hypotheses. So $L$ is $3$-step nilpotent. Moreover, $[L:L']=2$ implies $L=\gen {x,y}.$ Thus $L'=\gen {[x,y],L^2}$, so $\dim (L'/L^2)=1$ and $\dim L^2=2$, i.e., $L'=\gen {[x,y],[x,[x,y]],[y,[x,y]]}.$ Replacing $z$ with $[x,y]$ in the above calculation gives $w(L)=L'.$
\end{proof}
\section{$2$-step nilpotent Lie algebras}\label{sec4}
Observe that if $L$ is a $2$-step nilpotent Lie algebra with $\dim L' =4$, then $Z(L)=L'$ and $L$ is minimally generated by $4$ elements, i.e., $\dim L \geq 8.$ The following lemma settles our question for $8$-dimensional Lie algebras of this kind.
\begin{lem} \label{2-step_dim8}
Let $L$ be an $8$-dimensional, $2$-step nilpotent Lie algebra over $k=\mathbb{F}_q$, a field of odd characteristic, such that $\dim L'=4.$
{\it{(1)}} If the breadth type of $L$ is $(0, 1,3)$ or $(0,1, 2, 3)$, then $w(L) \neq L'.$
{\it{(2)}} Let $L$ be of breadth type $(0, 2, 3).$ Then $w(L) = L'$ if and only if $L$ admits a generating set $\{u_1, u_2, u_3, u_4\}$ such that $[u_1, u_2] = 0 = [u_3, u_4].$
{\it{(3)}} If the breadth type of $L$ is $(0, 3)$, then $w(L) = L'.$
Further, if $w(L) \neq L'$, then each element of $L'$ is a sum of at most two elements of $w(L).$
\end{lem}
\begin{proof}
{\it{(1)}} Since $L$ has an element of breadth $1$, say $w$, we can extend $\{w\}$ to a generating set $\{x, y, z, w\}$ of $L$ such that $L'=\langle [x,y], [x,z], [y,z], [z,w]\rangle.$ We claim that $[x,y]+[z,w] \notin w(L).$ Suppose, if possible, that $[x,y]+[z,w] \in w(L)$, i.e.,
$$[x,y]+[z,w] = [{\alpha_1}x +{\alpha_2}y+ {\alpha_3}z+{\alpha_4 }w , {\beta_1}x+{\beta_2}y+{\beta_3}z+{\beta_4}w],$$
where $\alpha_i, \beta_j \in k$ for $1 \le i, j \le 4.$ Comparing both sides, we get
\begin{eqnarray}
\beta_2 \alpha_1 - \alpha_2 \beta_1 & = & 1, \label{eqn1} \\
\beta_3 \alpha_1 - \alpha_3 \beta_1 &=& 0, \label{eqn3}\\
\beta_3 \alpha_2 - \alpha_3 \beta_2 &=& 0, \label{eqn2}\\
\beta_4 \alpha_3 - \alpha_4 \beta_3 &=&1.\label{eqn4}
\end{eqnarray}
If $\beta_3 \neq 0$, then $\alpha_2 = {\alpha_3 \beta_2}{\beta_3}^{-1} $ and $\alpha_1 = {\alpha_3\beta_1}{\beta_3}^{-1}$ by Equations \eqref{eqn2} and \eqref{eqn3} respectively, which contradicts Equation \eqref{eqn1}. So $\beta_3=0.$ From \eqref{eqn4}, we get $\alpha_3 \neq 0.$ Then \eqref{eqn2} and \eqref{eqn3} give $\beta_1=\beta_2=0$, which contradicts \eqref{eqn1}. Therefore, the above system has no solution, and hence $[x,y]+[z,w] \notin w(L).$
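The argument above is field-independent; as an illustration (not part of the proof; the choice $q=3$ is ours), the system \eqref{eqn1}--\eqref{eqn4} can also be confirmed to have no solution by exhaustive search over $\mathbb{F}_3$:

```python
from itertools import product

# Sketch: exhaustive confirmation over F_3 that the four equations have no
# common solution, so [x,y] + [z,w] is not a single bracket.
# v = (a1, a2, a3, a4, b1, b2, b3, b4)
p = 3
solutions = [
    v for v in product(range(p), repeat=8)
    if ((v[5] * v[0] - v[1] * v[4]) % p == 1      # beta2*alpha1 - alpha2*beta1 = 1
        and (v[6] * v[0] - v[2] * v[4]) % p == 0  # beta3*alpha1 - alpha3*beta1 = 0
        and (v[6] * v[1] - v[2] * v[5]) % p == 0  # beta3*alpha2 - alpha3*beta2 = 0
        and (v[7] * v[2] - v[3] * v[6]) % p == 1) # beta4*alpha3 - alpha4*beta3 = 1
]
assert solutions == []
print("no solutions over F_3")
```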
\medskip
{\it{(2)}} Suppose $L$ has a generating set $\{u_1,u_2,u_3,u_4\}$ with $[u_1,u_2]=0=[u_3,u_4]$. Then $L'=\gen{[u_1,u_3],[u_2,u_3],[u_1,u_4],[u_2,u_4]}$, and for $\alpha,\beta,\gamma,\delta \in k$,
$$\alpha[u_1,u_3]+\beta[u_2,u_3]+\gamma[u_1,u_4]+\delta[u_2,u_4]=[\gamma u_1+\delta u_2+u_3,-\alpha u_1-\beta u_2+u_4].$$ Hence $w(L)=L'$. For the converse, suppose, if possible, that $L$ does not possess any generating set $\{u_1,u_2,u_3,u_4\}$ with $[u_1,u_2]=0=[u_3,u_4]$. Since the breadth type of $L$ is $(0,2,3)$, there exists $x \in L$ with $b(x)=2$ and there exist other generators $y,z,w \in L$ such that $[x,y] \neq 0, \, [x,w] \neq 0$ and $[x,z]=0$. By our assumption, $[y,w]\neq 0$. Since $L$ has an element of breadth $3$, we may assume that $b(w)=3$, i.e., $[z,w] \neq 0$. Also $[y,z] \neq 0$, since otherwise $b(z)=1$, which is not possible for any element of $L$. Therefore, we can always choose a generating set $\gen{x,y,z,w}$ of $L$ in which $[x,z]=0$ is the only trivial commutator of generating elements. Observe that if $[z,w]$ cannot be written as a linear combination of the remaining basic commutators, then $[x,y]+[z,w] \notin w(L)$ by a calculation similar to that in part (1). So, we can assume that
$$[z, w]= {\lambda_1}[x, y]+ {\lambda_2}[y, z]+ {\lambda_3}[y, w]+ {\lambda_4}[x,w], \text{ where } \lambda_1,\lambda_2,\lambda_3,\lambda_4 \in k$$
\begin{equation}\label{cls2eqn1}
\implies [y,{-\lambda_1}x+{\lambda_2}z+{\lambda_3}w]+[w, {-\lambda_4}x+z]= 0.
\end{equation}
If $\lambda_3 \neq 0$, then the above equation can be written as $[y, {-\lambda_1}x + {\lambda_2}z + {\lambda_3}w] + {\lambda_3}^{-1} [{-\lambda_1}x + {\lambda_2}z + \lambda_3w, {-\lambda_4}x+z]= 0.$ Taking $w'= {-\lambda_1}x + {\lambda_2}z + {\lambda_3}w $, we get $[w', -y- {\lambda_4 \lambda_3^{-1}} x + {\lambda_3}^{-1}z]=0.$ Taking $y'=-y-{\lambda_4\lambda_3^{-1}}x+{\lambda_3}^{-1}z$, we get a generating set $\{ x,y',z,w'\}$ for $L$ such that $L'=\langle [x, y'],[y', z],[y', w'],[z,w'] \rangle$ and $[x, z]=0=[y',w']$, which contradicts our assumption. So, $\lambda_3=0$, and \eqref{cls2eqn1} reduces to $[y, {-\lambda_1}x+{\lambda_2}z ]+[w, z-{\lambda_4}x]= 0.$ Now, replace $z$ by $z' := z-{\lambda_4}x .$ An easy calculation gives
$$[y,{-\lambda_1}x+{\lambda_2}z]=[y,{-\lambda_1}x+{\lambda_2}(z'+{\lambda_4}x)]= [y,(\lambda_4\lambda_2 - \lambda_1)x+{\lambda_2}z'].$$
Equation \eqref{cls2eqn1} gives,
\begin{equation}\label{cls2eqn1a}
[y,(\lambda_4 \lambda_2 -\lambda_1) x + {\lambda_2}z'] + [w,z']=0.
\end{equation}
We claim that $\lambda_4 \lambda_2 - \lambda_1 \neq 0.$ Suppose $\lambda_4 \lambda_2 - \lambda_1=0.$ Then \eqref{cls2eqn1a} reduces to $[y,{\lambda_2} z'] + [w, z']=0,$ i.e., $[{\lambda_2}y+w, z']=0.$ Taking $w' := {\lambda_2}y+w$, we get a generating set $\{x, y, z', w' \}$ of $L$ such that $[x,z']=0=[w',z']$, which implies $b(z')=1$, contradicting the given hypothesis.
So we may assume $\lambda_4 \lambda_2- \lambda_1 \neq 0.$ Taking $x' := (\lambda_4 \lambda_2 - \lambda_1)x+{\lambda_2}z'$, we get a new generating set $\{ x', y, z', w \}$ such that $L'= \langle[x', y],[y, z'],[y, w],[x',w] \rangle$, $[x',z']=0$ and, by \eqref{cls2eqn1a}, $-[x', y]=[z', w].$ Now, we claim that ${\mu_1}[y,z']+{\mu_2}[x', w] \notin w(L)$ for some $\mu_1,\mu_2 \in k^*.$ Suppose, if possible, that ${\mu_1 }[y, z']+{\mu_2}[x',w] \in w(L)$ for all $\mu_1, \mu_2 \in k^*.$ Thus,
$${\mu_1}[y,z']+{\mu_2}[x',w]=[{\alpha_1}x'+{\alpha_2}y+{\alpha_3}z'+{\alpha_4 }w, {\beta_1}x' + {\beta_2}y + {\beta_3}z' + {\beta_4}w],$$
where $\alpha_i,\beta_j \in k$ for $1 \le i,j \le 4.$ Expanding the Lie bracket on the right hand side and equating terms on both sides, we get
\begin{eqnarray}
\alpha_1 \beta_4 - \alpha_4 \beta_1 &=& \mu_2,\label{eqn16}\\
\alpha_2 \beta_4 - \alpha_4 \beta_2 &=& 0, \label{eqn15}\\
\alpha_2 \beta_3 - \alpha_3 \beta_2 &=& \mu_1, \label{eqn14}\\
\alpha_1 \beta_2 - \alpha_2 \beta_1 - \alpha_3 \beta_4 + \alpha_4
\beta_3 & = & 0. \label{eqn13}
\end{eqnarray}
If $\alpha_2 = 0$, then $\alpha_3 \beta_2=-\mu_1$, $\alpha_4 = 0$ and $\alpha_1 \beta_4 = \mu_2$ by Equations \eqref{eqn14}, \eqref{eqn15} and \eqref{eqn16} respectively. Since $\alpha_1 \neq 0 \neq \alpha_3$, we get $\beta_2 = -\mu_1 \alpha_3^{-1}$ and $\beta_4 = \mu_2 \alpha_1^{-1}$, and Equation \eqref{eqn13} then gives $\mu_1 \alpha_1 \alpha_3^{-1}+\mu_2 \alpha_3 \alpha_1^{-1}=0.$ Thus, $(\alpha_1 \alpha_3^{-1})^2 = - \mu_1^{-1} \mu_2$, which fails if we choose $\mu_1,\mu_2 \in k^*$ such that $ -\mu_1^{-1}\mu_2$ is a non-square. So, we can assume that $\alpha_2 \neq 0.$ If $\alpha_4 = 0$, then \eqref{eqn15} implies that $\beta_4 =0$, which contradicts \eqref{eqn16}. So, finally assume that both $\alpha_2$ and $\alpha_4$ are nonzero. Solving the above equations, we get $\beta_4=\alpha_2^{-1} \alpha_4 \beta_2$, $\beta_1=\alpha_2^{-1}\alpha_1\beta_2-\alpha_4^{-1}\mu_2$ and $\beta_3=\alpha_2^{-1}\alpha_3 \beta_2-\alpha_4^{-2}\alpha_2\mu_2$. Substituting these values of the $\beta_i$'s in \eqref{eqn14} gives $-\mu_1\mu_2^{-1}=(\alpha_2\alpha_4^{-1})^{2}$, which again fails if we choose $\mu_1, \mu_2 \in k^*$ such that $-\mu_1 \mu_2^{-1}$ is a non-square.\medskip
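As an illustration of the choice of $\mu_1, \mu_2$ (the choices $q=3$ and $\mu_1=\mu_2=1$ are ours, for demonstration only): over $\mathbb{F}_3$ the element $-1$ is a non-square, and an exhaustive search confirms that the system \eqref{eqn13}--\eqref{eqn16} is then unsolvable:

```python
from itertools import product

# Sketch: over F_3, -1 is a non-square, so mu1 = mu2 = 1 is a valid choice
# in the argument above; the four equations then have no common solution.
p, mu1, mu2 = 3, 1, 1
# -mu1*mu2 = -1 = 2 is not a nonzero square mod 3
assert all((t * t) % p != (p - mu1 * mu2) % p for t in range(1, p))
solvable = any(
    (a1 * b4 - a4 * b1) % p == mu2
    and (a2 * b4 - a4 * b2) % p == 0
    and (a2 * b3 - a3 * b2) % p == mu1
    and (a1 * b2 - a2 * b1 - a3 * b4 + a4 * b3) % p == 0
    for a1, a2, a3, a4, b1, b2, b3, b4 in product(range(p), repeat=8)
)
assert not solvable
print("mu1*[y,z'] + mu2*[x',w] is not a single bracket over F_3")
```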
{\it{(3)}} Since $L$ has breadth type $(0,3)$, its presentation, given in \cite[Theorem 6.4]{riju2021}, is
\[ L = \langle x, y, z, w \mid [z, w]= -[x, y], [x, z] =-r [y, w]\rangle, \] where $r$ is any non-square in $k^*$. For any given $\lambda_i \in k, \, 1 \leq i \leq 4$, we must show that there exist $\alpha_i, \beta_i \in k$ such that $${\lambda_1}[x, y] +{\lambda_2}[y, z]+ {\lambda_3}[y, w]+ {\lambda_4}[x, w] = [{\alpha_1}x+{\alpha_2}y+{\alpha_3}z , \ {\beta_1}x+ {\beta_2}y+{\beta_3}z+{\beta_4}w].$$ Expanding the Lie bracket on the right-hand side and comparing the terms, we get
\begin{eqnarray}
\alpha_1 \beta_2 - \alpha_2 \beta_1 - \alpha_3 \beta_4 &=& \lambda_1,
\label{eqn17}\\
\alpha_2 \beta_3 - \alpha_3 \beta_2 &=& \lambda_2 , \label{eqn18}\\
\alpha_2 \beta_4 -r (\alpha_1 \beta_3 - \alpha_3 \beta_1) &=& \lambda_3,
\label{eqn19}\\
\alpha_1 \beta_4 & = &\lambda_4. \label{eqn20}
\end{eqnarray}
Taking the $\beta_i$'s as variables, we will show that the above system of equations has a solution. An easy calculation gives $\beta_4=\alpha_1^{-1}\lambda_4, \, \beta_3=\alpha_2^{-1}(\alpha_3\beta_2+\lambda_2), \, \beta_1=(r\alpha_3)^{-1}\lambda_3 -(r\alpha_1\alpha_3)^{-1}\alpha_2\lambda_4+(\alpha_2\alpha_3)^{-1}\alpha_1\lambda_2+\alpha_2^{-1}\alpha_1\beta_2$; putting these values in \eqref{eqn17}, we get
$$r \lambda_2 \alpha_1^2 + (\lambda_3 \alpha_2 + r \lambda_1 \alpha_3) \alpha_1 - \lambda_4(\alpha_2^2 - r \alpha_3^2) = 0,$$ which is a quadratic equation in $\alpha_1$. If the discriminant of this quadratic equation is either zero or a quadratic residue, then the above system of equations has a solution in the field $k$. The discriminant is \begin{equation}\label{eqn21}
(\lambda_3^2 + 4r\lambda_2 \lambda_4) \alpha_2^2 + 2r\lambda_1 \lambda_3 \alpha_2 \alpha_3 +
r^2(\lambda_1^2 - 4 \lambda_2 \lambda_4 ) \alpha_3^2,
\end{equation} which is of the form $f_1 \alpha_2^2 + f_2 \alpha_2 \alpha_3 + f_3 \alpha_3^2$, where $f_1,f_2,f_3 \in k.$ We can always find $\alpha_2, \alpha_3 \in k$ such that \eqref{eqn21} is either zero or a quadratic residue. Hence proved. \medskip
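The last step rests on the fact that a binary quadratic form over $\mathbb{F}_q$ always takes a value that is zero or a quadratic residue at some nonzero point. A quick exhaustive sanity check (illustrative only; the choice $q=7$ is ours):

```python
from itertools import product

# Sketch over F_7: every binary quadratic form f1*a^2 + f2*a*b + f3*b^2
# takes a value that is zero or a quadratic residue at some (a, b) != (0, 0),
# which is what the discriminant argument for the form above relies on.
p = 7
squares = {(t * t) % p for t in range(p)}  # {0, 1, 2, 4}: residues and zero
for f1, f2, f3 in product(range(p), repeat=3):
    assert any((f1 * a * a + f2 * a * b + f3 * b * b) % p in squares
               for a, b in product(range(p), repeat=2) if (a, b) != (0, 0))
print("every binary quadratic form over F_7 represents a square or zero")
```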
Further, let $I$ be any $1$-dimensional Lie subalgebra of $L'$. By Theorem \ref{dimL'3}, $(L/I)'=w(L/I)$. The required result now follows by Lemma \ref{prelem5}.
\end{proof}
\begin{lem} \label{cl2lem2}
Let $L$ be a finite-dimensional $2$-step nilpotent Lie algebra of dimension at least $9$ over $k=\mathbb{F}_q$, a field of odd characteristic, such that $\dim L'=4$ and $Z(L) =L'.$ Then $w(L)=L'.$
\end{lem}
\begin{proof}
Since $b(L) \leq \dim L'$, we have $b(L) \leq 4$. If $b(L)=4$, there is nothing to prove, so assume $b(L)=3$. By Theorem \ref{breadth3_classify}, there exists a one-dimensional ideal $I$ of $L$ such that $[L/I:Z(L/I)]=3$. Using arguments similar to those of Lemma \ref{ext_lemma}, we can assume $$L=\gen{x,y,z,u_1,u_2,\ldots, u_n}$$ where $[u_i,L]=I$ for $1 \leq i \leq n$, and $n \geq 2$ as $[L:L'] \geq 5$. Set $M=\gen{x,y,z}$. Observe that $M$ is a $6$-dimensional Lie subalgebra of $L$ with $\dim M'=3$. Therefore, $w(M)=M'$ by Theorem \ref{dimL'3}.
If $[u_i,M]=0$ for all $1 \leq i \leq n$, then, as $Z(L)=L'$, $L$ is a central product of $M$ and the $n$-generator Lie algebra $\gen{u_1, \ldots, u_n}$, which is isoclinic to a Heisenberg Lie algebra. Hence $w(L)=L'$ by Lemma \ref{prelem4}.
Now suppose $[u_i,M]=I$ for some $i \in \{1,\ldots,n\}$; re-indexing the set $\{u_j: 1\leq j \leq n\}$, we may assume $i=1$. For notational convenience, set $w:=u_1$. Since $C_L(w)$ is a maximal subalgebra of $L$, we can suitably modify the generators $x,y,z$ so that $L'=\gen{[x,y],[x,z],[y,z],[z,w]}$ with $[x,w]=0=[y,w]$ and $I=\gen{[z,w]}$. By a suitable modification of the $u_i$'s, we can assume that $[z,u_i]=0$ for all $2 \leq i \leq n$. For $\alpha,\beta, \gamma,\delta \in k$,
\[
\begin{aligned}
&\text{ If } \delta \neq 0, \text{ then } \alpha[x,y]+\beta[z,w]+\gamma[y,z]+\delta[x,z]=[\alpha\delta^{-1}y+z,-\gamma y-\delta x+\beta w] \\
&\text{ If } \delta=0, \gamma \neq 0 \text{ then }\alpha[x,y]+\beta[z,w]+\gamma[y,z]=[-\beta\gamma^{-1}w+y,\gamma z-\alpha x]
\end{aligned}\]
Now assume that $\delta=0=\gamma$, and set $N=\gen{u_2,\ldots,u_n}$. If $x \notin C_L(N)$, i.e., $[x,u_i]=t[z,w]$ for some $2 \leq i \leq n$ and $t \in k^*$, then $\alpha[x,y]+\beta[z,w]= [x,\alpha y+\beta t^{-1}u_i]$. If $x \in C_L(N)$ but $y \notin C_L(N)$, then $[y,u_i]=t[z,w]$ for some $2 \leq i \leq n$ and $t \in k^*$, so $\alpha[x,y]+\beta[z,w]= [y,-\alpha x+\beta t^{-1}u_i]$. If $x,y \in C_L(N)$ but $w \notin C_L(N)$, then $[w,u_i]=t[z,w]$ for some $2 \leq i \leq n$ and $t \in k^*$, so $\alpha[x,y]+\beta[z,w]= [x+w,\alpha y+\beta t^{-1}u_i]$. If $x,y,z,w \in C_L(N)$, then $n \geq 3$ and $N'=I$, since $[N,L]=I$. Thus $[u_i,u_j]=t[z,w]$ for some $2 \leq i,j \leq n$ and $t \in k^*$, so $\alpha[x,y]+\beta[z,w]= [x+u_i,\alpha y+\beta t^{-1}u_j]$. Hence $w(L)=L'$.
\end{proof}
\section{$3$-step nilpotent Lie algebras}\label{sec5}
Observe that if $L$ is a $3$-step nilpotent Lie algebra with $\dim L'=4$, then $L$ is minimally generated by at least $3$ elements, so $\dim L \geq 7.$ We will prove that for $\dim L=7$, $w(L)=L'$ if $\dim Z(L) \leq 2$, and $w(L) \neq L'$ otherwise. From now on, $\delta \in \{0, 1\}.$
\begin{lem} \label{p7lem1}
Let $L$ be a $7$-dimensional $3$-step nilpotent Lie algebra over field $k= \mathbb{F}_q$ of odd characteristic with $b(L)=3$, $\dim Z(L) \leq 2$ and $\dim L'=4.$ Then $w(L)=L'.$
\end{lem}
\begin{proof}By Theorem \ref{breadth3_classify} and Lemma \ref{ext_lemma}, $L$ is minimally generated by $3$ elements, say $x, y, z$. Set $C = C_L(L').$ We divide the proof into $4$ steps.
\noindent{\bf Step 1.} {\it If $\dim Z(L) = 1$ then $C=L'.$}
\begin{proof}
As $0 \neq L^2 \subseteq Z(L)$ and $\dim Z(L) = 1$, we have $Z(L)=L^2.$ Also $[L': L^2]=3$, so no non-zero element of the subspace $\left\langle [x, y], [x, z], [y,z] \right\rangle$ can lie in $L^2.$ Since $L' \subseteq C$, we have $\dim C \geq 4.$ If $\dim C=6$, then, without loss of generality, we can assume that $y,z \in C.$ Observe that $C_L(x)\cap L' \subseteq Z(L)$ and $\dim (C_L(x)\cap L') =3$, so $\dim Z(L) \geq 3$, a contradiction.
If $\dim C=5$, then, without loss of generality, we can assume that $z \in C.$ Observe that $C_L(x)\cap C_L(y)\cap L' \subseteq Z(L)$ and $\dim (C_L(x)\cap C_L(y)\cap L') =2$, so $\dim Z(L) \geq 2$, a contradiction. Thus, $\dim C \neq 5, 6$. Hence, $C =L'.$
\end{proof}
\noindent{\bf Step 2.} {\it If $\dim Z(L) = 1$, then $w(L)=L'.$}
\begin{proof}
By Step 1, we have $C =L'.$ Observe that $\bar L := L/ L^2$ is a $2$-step nilpotent Lie algebra of dimension $6$ on $3$ generators. Using the calculation of subcase (2a) in the proof of Theorem \ref{dimL'3} for $\bar L$ instead of $L$, we get, for $i \in k$, $$\bar L' = \bigcup \limits_{ \delta , i} \ [ {\delta}\bar x+i \bar z, \bar L].$$
We can interchange $x,y,z$ in the above equation because of symmetry. It is sufficient to show that $L^2 \subseteq \bigcap \limits_{ \delta , i} \ [ {\delta}u+i v, L]$ for some $u \neq v$ in $\{x, y, z\}$, where $i \in k$ such that $\delta$ and $i$ are not simultaneously zero. Firstly, suppose $[x,[x,y]] \neq 0$, i.e., $L^2=\gen{[x,[x,y]]}.$
If $[z,[x,y]] \neq 0$, then $[z,[x,y]] = \mu[x,[x,y]]$ for some $\mu \in k^*.$ Taking $z'= z-\mu x$, gives a new generating set $\{x,y,z' \}$ for $L$ such that $[z',[x,y]]=0$ and $[x,[x,y]] \neq 0.$ So we can always assume that $[z,[x,y]]=0.$ Since $z \notin C$, either $[z,[y,z]]$ or $[z,[x,z]]$ is non-trivial. Therefore, for $\delta=0,1$ and $i \in k$, not simultaneously zero, we can easily see that $L^2 \subseteq \bigcap\limits_{ \delta , i} \ [{\delta}x+i z, L].$
Now, let us assume that $[x,[x,y]]=0.$ Then at least one of $[x,[y,z]]$ and $[x,[x,z]]$ is not trivial. If $[y,[x,y]]=0$, then $[z,[x,y]] \neq 0.$ Therefore, for $\delta=0,1$ and $i \in k$ not simultaneously zero, we get $L^2 \subseteq \bigcap \limits_{ \delta , i} \ [{\delta}z+i x, L].$ If $[y,[x,y]] \neq 0$, then for $\delta=0,1$ and $i \in k$, not simultaneously zero, we can easily see that $L^2 \subseteq \bigcap \limits_{ \delta , i} \ [{\delta}y+i x, L]$.
\end{proof}
\noindent{\bf Step 3.} {\it If $\dim Z(L) = 2$ and $\dim L^2 = 1$, then $\dim C=5.$}
\begin{proof}
If $\dim C =6$, then, as observed above, $\dim Z(L) \geq 3$, a contradiction. Suppose $\dim C = 4$; then $C = L'.$ We can assume that $[y,z] \in Z(L).$ If not, then $[y, z] = r [x, y]+s [x, z]$ modulo $Z(L)$ for some $r, s \in k$, which gives $[y-s x, z+r x]=0$ modulo $Z(L).$ Thus, $y'= y-s x$ and $z'= z+r x$ give a new generating set $\{x, y', z'\}$ with the required property. At least one of $[x,[x, y]]$ and $[x, [x,z]]$ is nontrivial; otherwise $x \in C$. We can assume that $[x,[x, y]] \neq 0$, i.e., $L^2=\gen{[x,[x,y]]}$. Also $[z,[x,y]]=[y,[x,z]]$ by the Jacobi identity. If $[z,[x,y]]=0$, then $[z,[x,z]] \neq 0$ and $ [y,[x,y]] \neq 0$; otherwise $z,y \in C$. We can modify the generating set so that $L'=\gen{[x,y],[x,z],[y,z],[x,[x,y]]}$ and $[y,z] \in Z(L), \, [z,[x,y]]=[y,[x,z]]=0=[x,[x,z]], \, [z,[x,z]] \neq 0, \, [y,[x,y]] \neq 0$. Indeed, if $[x,[x,z]] \neq 0 $, then $[x,[x,z]]= \mu[z,[x,z]]$ for some $\mu \in k^*$, i.e., $[x-\mu z, [x,z]]=0$, and taking $x'=x-\mu z$ gives $\gen{x',y,z}$ as the required generating set. Now $ [y,[x,y]] \neq 0$ means $ [y,[x,y]]= \lambda [x,[x,y]]$ for some $\lambda \in k^*$, i.e., $y-\lambda x \in C$, which is not possible.
So now assume that $[z,[x,y]] \neq 0$. We claim that then $[z,[x,z]]=0=[y,[x,y]]$. Suppose not; if $[z,[x,z]]\neq 0$, then $[z,[x,z]]=\mu_1 [y,[x,z]]$ for some $\mu_1 \in k^*$, i.e., $[z-\mu_1 y,[x,z]]=0$, and taking $z-\mu_1 y$ in place of $z$ gives a generating set with $[z,[x,y]]=0$, which is not possible. If $[y,[x,y]] \neq 0$, then $[y-\mu_2 z, [x,y]]=0$ for some $\mu_2 \in k^*$, and taking $y-\mu_2 z$ in place of $y$ again gives a generating set with $[z,[x,y]]=0$. So the claim is proved. We can modify the generating set so that $L'=\gen{[x,y],[x,z],[y,z],[x,[x,y]]}$ and $[y,z] \in Z(L), \, [z,[x,y]] \neq 0$, i.e., $[y,[x,z]] \neq 0$, $[x,[x,z]]=0$, $[y,[x,y]]=0$ and $[z,[x,z]]=0.$ Thus $[z,[x,y]]=\alpha [x,[x,y]]$ for some $\alpha \in k^*$, i.e., $z-\alpha x \in C$, a contradiction. So $\dim C \neq 4$. \end{proof}
\noindent{\bf Step 4.} {\it If $\dim Z(L) = 2$ and $\dim L^2 = 1$, then $w(L) = L'.$}
\begin{proof}
By Step 3, we know that $\dim C = 5.$ Assume that $z \in C.$ We discuss the cases $[y,z] \in Z(L)$, $[x,z] \in Z(L)$ and $[y,z], [x,z] \notin Z(L)$ separately. Since $\bar L := L/L^2 $ is a $2$-step nilpotent Lie algebra, as explained in Step 2, to prove that $w(L) = L'$ it is sufficient to show that
$$L^2 \subseteq \bigcap \limits_{ \delta , i} \ [ {\delta}u+i v, L]$$
for some $u \neq v$ in $\{x, y, z\}$, where $\delta=0,1$ and $i \in k$ such that $\delta$ and $i$ are not simultaneously zero.
{\it Case (i).} If $[y,z] \in Z(L)$, then $[x, [y,z]]=[y, [x, z]]=[z,[x, y]]=0$, while $[x,[x, z]]\neq 0$ and $[y, [x, y]]\neq 0$. Thus, for $\delta=0,1$ and $i \in k$, not simultaneously zero, we can see that
$ L^2 = \bigcap \limits_{ \delta, i} \ [{\delta}x+i y, L'] \subseteq \bigcap \limits_{ \delta, i} \ [{\delta}x+i y, L].$ Hence $w(L)=L'$ by Lemma \ref{prelem3}.
{\it Case (ii).} Next, assume that $[x,z] \in Z(L).$ Then $[x, [x, y]]$ and $[y, [y, z]]$ are non-trivial elements and $[x, [y, z]]=0.$ Thus for $\delta=0,1$ and $i \in k$, not simultaneously zero, we can easily see that
$ L^2 = \bigcap \limits_{ \delta, i} \ [{\delta}y+ix, L'] \subseteq \bigcap \limits_{ \delta, i} \ [{\delta}y+ix, L].$ Hence $w(L)=L'$ by Lemma \ref{prelem3}.
{\it Case (iii).} Let $[y, z],[x, z] \notin Z(L).$ Then $[y, z]$ and $[x, z]$ cannot be scalar multiples of each other: if $[y, z] = \mu[x, z]$ for some $\mu \in k^*$, then taking $\{x, y-\mu x, z\}$ as a generating set with $[y-\mu x, z] \in Z(L)$ brings us back to Case (i). We can modify the generating set of $L$ to $\{x', y', z\}$ such that $[x', y'] \in Z(L)$ and $z \in C.$ If not, then modulo $Z(L)$, $[x, y] = \lambda_1 [x,z] + \lambda_2 [y, z]$ for some $\lambda_1, \lambda_2 \in k$, so $[ x+\lambda_2 z, y-\lambda_1 z]=0$ modulo $Z(L)$, and taking $x'= x+\lambda_2 z, \, y' =y-\lambda_1 z$ gives the required generating set.
Firstly, assume $[x',[x',z]]=0$ and $[y',[y',z]]=0.$ Then
$[x', [y',z]] = [y', [x',z]] \neq 0$ as otherwise $x',y' \in C$. Thus, for $\delta = 0, 1$ and $i \in k$, not simultaneously zero, we can easily check that
$$ L^2 = \bigcap \limits_{ \delta, i} \ [{\delta}x'+i y', L'] \subseteq \bigcap \limits_{ \delta, i} \ [{\delta}x'+iy', L].$$
Now, assume that $[y',[y',z]]$ or $[x',[x',z]]$ is nontrivial. We discuss only the case $[y',[y', z]] \neq 0$, as the other follows similarly. Without loss of generality, we can assume $[x',[y',z]]=[y',[x',z]]= 0.$ Indeed, if $[x',[y',z]] = s [y',[y',z]]$ for some $s \in k$, then $[x'-s{y'},[y',z]] =0$, and taking $\tilde{x}= x'-s{y'}$ gives a generating set $\{\tilde{x}, y', z\}$ for which $[y', [\tilde{x},z]]=[\tilde{x},[y',z]]=0$, $[\tilde{x},y'] \in Z(L)$, $z \in C$ and $[y',[y',z]] \neq 0.$ For any $ t, i \in k$ we have $t[\tilde{x}, [ \tilde{x} ,z]] =[\tilde{x}+iy' , t[\tilde{x},z]]$ and $t[y',[y', z]]=[y' , t[y',z ]].$ Thus, for $\delta = 0, 1$ and $i \in k$, not simultaneously zero, we get
$ L^2 = \bigcap \limits_{ \delta, i} \ [{\delta}\tilde x+i y', L'] \subseteq \bigcap \limits_{ \delta, i} \ [{\delta}\tilde x+i y', L].$ Hence $w(L)=L'$ by Lemma \ref{prelem3}.
\end{proof} Now, we discuss the case where $\dim Z(L) = \dim L^2 = 2.$ In this case, we claim that $b(L) = 4$. Suppose not, i.e., $b(L) = 3.$ Then by Theorem \ref{breadth3_classify} there exists an ideal $I$ of $L$ such that $[L/I : Z(L/I)] = 3.$ Since $Z(L) = L^2 $ has dimension $2$, the quotient $L/I$ is $3$-step nilpotent. So, assume that $z+I \in Z(L/I).$ Observe that $[(L/I)':(L/I)^2]=2$, which cannot happen, as $L/I$ possesses only two non-central generators. This proves the claim, and with it the lemma.
\end{proof}
The following lemma describes the case where $w(L) \neq L'$ for a $3$-step nilpotent Lie algebra of dimension $7.$
\begin{lem}\label{p7lem2} Let $k=\mathbb{R}$ or $k=\mathbb F_{q}$, a finite field of odd characteristic. Let $L$ be a $7$-dimensional, $3$-step nilpotent Lie algebra with $b(L)=3$, $\dim Z(L) = 3$ and $\dim L'=4.$
Then $w(L) \neq L'.$ Further, each element of $L'$ is a sum of at most two elements of $w(L).$
\end{lem}
\begin{proof}
Since $ \dim L/L'=3$, let $L = \gen{x, y, z}.$ Then $L'= \langle [x,y], [y,z], [x,z], L^2 \rangle.$ Also $\dim \frac{L'}{Z(L)}=1$, i.e., $L'/Z(L)= \gen{[x,y]}$ if $[x,y] \notin Z(L).$ Thus, $L^{2}=\gen{[x,[x,y]],[y,[x,y]]}$, so $\dim L^2 \leq 2.$ We can take $Z(L)= \langle [y,z], [x,z], L^2 \rangle.$ Thus, by the Jacobi identity, $z \in C_L(L').$ Now, we consider two cases, namely $\dim L^2 = 1$ and $\dim L^2 = 2.$
If $\dim L^2 = 1$, then we can modify the generating set $\{x, y, z\}$ so that $[x,[x,y]] \neq 0$ and $[y,[x,y]] = 0$; therefore $L^2 = \gen{[x,[x,y]]}.$ Observe that $[y,z]+[x,[x,y]] \notin w(L)$, by arguments similar to those used in part (1) of Lemma \ref{2-step_dim8}.
If $\dim L^2=2$, then $L^2= \langle [x,[x,y]],[y,[x,y]]\rangle.$ Observe that if neither $[x, z]$ nor $[y, z]$ lies in $L^2$, then one of them is a scalar multiple of the other modulo $L^2.$ So assume we have a generating set $\{x, y, z\}$ such that $[x, z] \in L^2.$ Let $[x,z] = \lambda_1[x,[x,y]] + \lambda_2[y,[x,y]] $ for some $\lambda_1,\lambda_2 \in k.$ Then $[x,z-\lambda_1[x,y]]=\lambda_2[y,[x,y]].$ Taking $z'=z-\lambda_1[x,y]$, we obtain a modified generating set $\{x, y, z'\}$ such that $[x,z']=\lambda_2[y,[x,y]]$ and $z' \in C_L(L').$ If $\lambda_2 =0$, then $[x,z'] = 0$; otherwise, replacing $z'$ by ${\lambda_2}^{-1}z'$, we may assume that $\lambda_2 = 1$, i.e., $[x,z']=[y,[x,y]].$
Suppose that, for given $\mu_1, \mu_2 \in k^*$, there exist $\alpha_i, \beta_i \in k$ such that
$${\mu_1}[y,z']+\mu_2[x,[x,y]] = [{\alpha_1}x+{\alpha_2}y+{\alpha_3}z'+{\alpha_4}[x,y], {\beta_1}x+{\beta_2}y+{\beta_3}z'+{\beta_4}[x,y]].$$
After opening the Lie brackets and comparing the terms on both sides, we get
\begin{eqnarray}
\beta_2 \alpha_1 - \alpha_2 \beta_1& =& 0 \label{numeric6}\\
\beta_3 \alpha_2 - \alpha_3 \beta_2 &=& \mu_1 \label{numeric7}\\
\lambda_2(\beta_3 \alpha_1 - \alpha_3\beta_1)+\beta_4 \alpha_2 - \alpha_4 \beta_2&=&0 \label{numeric8}\\
\beta_4 \alpha_1 - \alpha_4 \beta_1&=&\mu_2.\label{numeric9}
\end{eqnarray}
We have two cases: $\lambda_2=0$ and $\lambda_2=1.$ The case $\lambda_2=0$ is similar in its calculation to the previous paragraph, so we discuss the case $\lambda_2=1.$
If $\alpha_1=0$, then by \eqref{numeric9} both $\alpha_4$ and $\beta_1$ are non-zero, so \eqref{numeric6} gives $\alpha_2=0.$ By \eqref{numeric7} and \eqref{numeric9} we get $\beta_2=-\alpha_3^{-1}\mu_1$ and $\beta_1 = - \alpha_4^{-1} \mu_2$; putting these in \eqref{numeric8}, we get ${(\alpha_3 \alpha_4^{-1})}^2=-\mu_1 \mu_2^{-1},$ but we can choose $\mu_1, \mu_2 \in k^*$ such that $-\mu_1 \mu_2^{-1}$ is a non-square. Thus, the claim is proved in this case. If $\alpha_1, \alpha_2$ are both non-zero, then a calculation similar to that in part (3) of Lemma \ref{2-step_dim8} proves our claim.
\end{proof}
\begin{lem}\label{main1}
Let $L$ be a finite-dimensional $3$-step nilpotent Lie algebra such that $b(L)=3$, $\dim Z(L) = 3$, $\dim L'=4$ and $\dim L \geq 8$. Then $w(L)=L'.$
\end{lem}
\begin{proof}
As observed in the first paragraph of the proof of Lemma \ref{p7lem2}, $\dim L^2 \le 2$.
{\bf{Case(1)}} Let $\dim L^2 = 1$. Then there exists an ideal $I$ of $L$ such that $[L/I : Z(L/I)] = 3$ by Theorem \ref{breadth3_classify}. If $I \neq L^2$, then by Theorem \ref{keyresult} and Lemma \ref{ext_lemma} we can find a $2$-generator Lie subalgebra $M$ of $L$ such that $M'=L'$, which is not possible, as $\dim (L'/L^2) = 3.$ So $I = L^2$. By Theorem \ref{keyresult}, there exists a $3$-generator, $7$-dimensional Lie subalgebra $M$ of $L$ such that $M'=L'.$
If $\dim L=8$, then $L=\gen{x, y, z, w}$ with $M=\gen{x, y, z}$ and $[w, M] = I$ by Theorem \ref{keyresult}. If $\dim L \geq 9$, then, for some integer $n \ge 2$, $L = \gen{x, y, z, u_1, \ldots, u_n}$ with $M = \gen{x, y, z}$, $M' = L'$ and $[u_i, L] = I$ for $1 \le i \le n.$ We have the following two subcases:
{\it{ Subcase(1a)}} $[u_i, M] = I$ for some $i \in \{1,\ldots,n\}$.
{\it{ Subcase(1b)}} $[u_i, M] = 0$ for all $1 \le i \le n$.
Let us discuss them one by one.
{\it{ Subcase(1a)}} Let $[u_i, M] = I$ for some $i, \, 1 \le i \le n.$ Take $w:=u_i$ and $N=\gen{x,y,z,w}$. Then $N$ is an $8$-dimensional Lie subalgebra of $L$ with $N'=L'$ and $[w,M]=I$. So it is sufficient to study $N$, i.e., this case reduces to the former one where $\dim L=8$. If $0 \neq [x,[x,y]] \in L^{2}=I$, then take $I=\gen{[x,[x,y]]}$. We can modify the generating set for $M$ such that $L'= M'=\gen{ [x,y],[x,z],[y,z],[x,[x,y]]}$ with $[z,[x, y]] =0.$ Therefore $(L/I)'=\langle [\bar x, \bar y],[\bar x, \bar z],[\bar y, \bar z]\rangle$, where $\bar v = v+I$ for all $v \in L.$ Since $[w,M]=I$, we have $[x,w]=\lambda_1[x,[x,y]]$ for some $\lambda_1 \in k$, and hence $[x,w-\lambda_1[x,y]]=0.$ Replacing $w$ by $w-\lambda_1[x,y]$, we can assume that $[x,w]=0.$ If $[z, w] \neq 0$, then $[z, w] =\lambda_2 [x, [x, y]]$ for some $\lambda_2 \in k^*.$ Let $\mu_i \in k$, $1 \leq i \leq 4.$ Then
\[
\begin{aligned}
&\text{ If } \mu_1 \neq 0 \text{ then } {\mu_1}[\bar x,\bar y]+{\mu_2}[\bar y,\bar z]+{\mu_3}[\bar x,\bar z]=[\bar x-\mu_2{\mu_1}^{-1} \bar z, {\mu_1}\bar y+{\mu_3}\bar z]\\
&\text{ If } \mu_1= 0 \text{ then }{\mu_2}[\bar y,\bar z]+{\mu_3}[\bar x,\bar z]=[\bar z, -\mu_3 \bar x-\mu_2 \bar y]\\
\end{aligned}
\]
Further, \begin{eqnarray*}
{\mu_4} [x,[x,y]] &=& [x- \mu_2 {\mu_1}^{-1} z, {\mu_4}[x,y]].\\
{\mu_4} [x,[x,y]] &=& [z, {\mu_4}\lambda_2^{-1}w].
\end{eqnarray*}
We can easily check that for $\delta = 0, 1$ and $i \in k$,
$$L'/I = \bigcup \limits_{\delta, i} \ [{\delta}{\bar x}+ i \bar z, \bar L] \text{ and } I \subseteq \bigcap \limits_{\delta, i} \ [{\delta} x+i z, L],$$
where $i$ and $\delta$ are not simultaneously zero.
Hence, $w(L) = L'$ by Lemma \ref{prelem3}. Lastly, if $[z, w] =0$ then $[y , w] \neq 0.$ By similar arguments, the result holds for this case also.
{\it{ Subcase(1b)}} Let $[u_i, M] = 0$ for all $1 \le i \le n$ and $\dim L \geq 9$. Then $L$ is a central product of $M$ and $K := \gen{u_1, \ldots, u_n}.$ Here $K$ is non-abelian as $Z(L) \subseteq L'$. So there exist $u_i, u_j \in K$ for some $1 \leq i<j \leq n$ such that $I = \gen{[u_i, u_j]}$. Then $N:=\gen{x, y, z, w, v}$, where $w = u_i$ and $v = u_j$, is a $9$-dimensional subalgebra with $N'=L'.$ For $i \in k$, we can easily show that
$$N'/I = [\bar z+ \bar w, \bar N]\bigcup \limits_{i} \ [ \bar x+i \bar z+ \bar w, \bar N] \ $$
$$I \subseteq \ [z +w, N] \bigcap \limits_{ i} \ [ x +i z+ w, N] ,$$
where $\bar a = a+ I$ for $a \in N.$ Hence, $w(N) = N'$ by Lemma \ref{prelem3}.
{\bf{Case(2)}}{\it{ Let $\dim L^2 = 2.$}} We can show that $I \nsubseteq L^2$ as in the above case. Then, by Theorem \ref{keyresult}, either (i) there exists a $2$-generator, $5$-dimensional, $3$-step nilpotent Lie subalgebra $M$ such that $L$ is a central product of $M$ and $K$ with $\dim K' = 1$, or (ii) there exists a $3$-generator, $7$-dimensional, $3$-step nilpotent Lie subalgebra $M$ such that $M' = L'$ and $L=M+K$, where $K$ is an at most $2$-step nilpotent subalgebra of $L$. In case (i), $w(L) = L'$ by Lemma \ref{prelem4}. So assume (ii).
If $\dim L = 8$, then $L = \gen{x, y, z, w}$ such that $M = \gen{x, y, z}$ and $\gen{[w, M]} = I.$ If $\dim L \geq 9$ then, for some integer $n \ge 2$, $L = \gen{x, y, z, u_1, \ldots, u_n}$ such that $M = \gen{x, y, z}$ and $[u_i, L]= I$ for $1 \le i \le n.$
First, assume that $[u_i, M] = I$ for some $1 \le i \le n.$ Then the subalgebra $N:=\gen{x, y, z, w}$, where $w = u_i$, is $8$-dimensional such that $[w, M] = I$ and $N' = L'.$ As observed above, it is sufficient to study $N$, i.e., the case where $\dim L= 8.$
Using the arguments of Lemma \ref{p7lem2}, assume that $I= \gen{[y, z]}$,
$$M^2 = \gen{[x, [x, y]], [y, [x,y]]},$$
$[x, z] \in M^2$ and $z \in C_M(M').$ If $[y,w]=\lambda_1[y,z]$ for some $\lambda_1 \in k$, then $[y,w-\lambda_1 z]=0.$ Set $w'= w-\lambda_1 z.$ If $[z,w']=\lambda_2[y,z]$ for some $\lambda_2 \in k$, then $[z, w'+\lambda_2 y] =0.$ Replacing $w$ by $w'+\lambda_2 y$, we can assume that $[y, w] = [z, w] = 0.$ Let $[x,w] =\lambda_3[y,z]$ for some $\lambda_3 \in k^*.$ Using arguments from the preceding subcase (1a), it is not difficult to get
$$ L'/I = \bigcup \limits_{ \delta, i} \ [{\delta} \bar x+i {\bar y}, \bar L] \text{ and } I \subseteq \bigcap\limits_{ \delta, i} \ [{\delta} x+i y, L] ,$$
where $\bar a = a +I$ for $a \in L$ and $i \in k$ and $\delta = 0, 1$ such that $i$ and $\delta$ are not simultaneously zero. Hence, $w(L) = L'$ by Lemma \ref{prelem3}.
Now, consider the case $[u_i, M] = 0 \, \forall \, 1 \le i \le n.$ Here $L$ is a central product of $K := \gen{u_1, \ldots, u_n}$ and $M$. $K$ is non-abelian as $Z(L) \subseteq L'$. So $\exists \, u_i, u_j \in K$ for some $1 \leq i<j \leq n$ such that $I = \gen{[u_i, u_j]}$. Then $N:=\gen{x, y, z, w, v}$, where $w = u_i$ and $v = u_j$, is a $9$-dimensional subalgebra such that $N'=L'.$ For $i \in k$, we can easily see that
$$N'/I = \ [\bar y+\bar w,\bar N] \bigcup \limits_{i} \ [\bar x +i \bar y+ \bar w ,\bar N], $$
$$I \subseteq [y +w , N]\bigcap \limits_{ i} \ [x+i y+ w , N].$$ Thus, $w(N) = N'$.
\end{proof}
\section{$4$-step nilpotent Lie algebras}\label{sec6}
From \cite{graaf2007}, $L_{6,21}(0)$ and $L_{6,21}(\epsilon)$ with $\epsilon \neq 0$ are the only $4$-step nilpotent Lie algebras of breadth $3$ with $4$-dimensional derived subalgebra, up to isoclinism. We discuss these in the following lemma.
\begin{lem}\label{p6lem}
Let $L$ be a $6$-dimensional, $4$-step nilpotent Lie algebra over the field $k=\mathbb{F}_q$ of odd characteristic with $\dim L'=4.$ Then $w(L) = L'$ if and only if $\dim Z(L) = 1.$ Furthermore, if $w(L) \neq L'$, then each element of $L'$ is a sum of at most two elements of $w(L).$ More precisely, if $L=L_{6,21}(\epsilon)$ with $\epsilon \neq 0$, then $w(L) = L'$; if $L=L_{6,21}(0)$, then $w(L) \neq L'$ and each element of $L'$ is a sum of at most two elements of $w(L).$
\end{lem}
\begin{proof} Recall that $L_{6,21}(0)$ and $L_{6,21}(\epsilon)$ with $\epsilon \neq 0$ are the only $4$-step nilpotent Lie algebras (up to isoclinism) of dimension $6$ whose derived subalgebra has dimension $4.$ If $L=L_{6,21}(0)$, then its presentation is
\begin{equation*}
L= \left\lbrace u_1,u_2,u_3,u_4,u_5,u_6: \begin{aligned} &[u_1,u_2]=u_3,[u_1,u_3]=u_4,\\
&[u_1,u_4]=u_6, [u_2,u_3]=u_5
\end{aligned} \right\rbrace
\end{equation*}
Observe that $Z(L)=\left\langle u_5,u_6 \right\rangle$, i.e., $\dim Z(L)=2$, and $L$ is minimally generated by $u_1,u_2.$ So $[u_1,u_4]+[u_2,u_3] \notin w(L)$ by the same arguments used in Lemma \ref{2-step_dim8}, part (1). Thus, $w(L) \neq L'.$ Further, by applying Theorem \ref{dimL'3} and Lemma \ref{prelem5} to $L/I$, where $I=\gen{u_5}$, we can write every element of $L'$ as a sum of at most two elements of $w(L)$.
If $L=L_{6,21}(\epsilon)$, where $\epsilon \neq 0$, then it has presentation
\begin{equation*}
L_{6,21}(\epsilon) =\left\lbrace u_1,u_2,u_3,u_4,u_5,u_6:\begin{aligned} &[u_1,u_2]=u_3,[u_1,u_3]=u_4,[u_1,u_4]=u_6,\\ &[u_2,u_3]=u_5,[u_2,u_5]=\epsilon u_6
\end{aligned} \right\rbrace.
\end{equation*}
Here $Z(L)=\left\langle u_6 \right\rangle$, i.e., $\dim Z(L)=1$, $L$ is minimally generated by $u_1,u_2$, and $L' =\gen{u_3,u_4,u_5,u_6}$ with $L^3=\gen{u_6}.$
\[
\begin{aligned}
& {\begin{aligned} \text{If } \beta \neq 0 \text{ then } \alpha u_3+ \beta u_4+\gamma u_5
&=\alpha[u_1,u_2]+\beta [u_1,u_3]+\gamma [u_2,u_3]\\
&=[u_1+\gamma \beta^{-1}u_2, \alpha u_2+\beta u_3].
\end{aligned}}\\
&\text{If } \beta=0 \text{ then } \alpha u_3 +\gamma u_5 =\alpha[u_1,u_2]+\gamma [u_2,u_3]= [u_2,-\alpha u_1+\gamma u_3].
\end{aligned}
\]
\begin{eqnarray*}
\text{Hence } L'/L^3 &=& [\bar{u}_2, \bar{L}] \bigcup\limits_{\substack{{\beta \neq 0}\\{\gamma \in k}}} [\bar{u}_1 +\gamma \beta^{-1} \bar{u}_2, \bar{L}] \\
\text{and } L^3 &\subseteq& [u_2, L] \bigcap\limits_{\substack{{\beta \neq 0}\\{\gamma \in k}}} [u_1 +\gamma \beta^{-1} u_2, L].
\end{eqnarray*}Thus $w(L)=L'$ by Lemma \ref{prelem3}.
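In particular, the containment of $L^3$ can be verified directly: for any $\mu \in k$,
\[
\mu u_6 = [u_2, \mu\epsilon^{-1}u_5] \qquad \text{and} \qquad \mu u_6 = [u_1+\gamma\beta^{-1}u_2, \mu u_4],
\]
using $[u_2,u_5]=\epsilon u_6$, $[u_1,u_4]=u_6$ and $[u_2,u_4]=0.$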
\end{proof}\medskip
\begin{lem} \label{p7lem3}
Let $L$ be a $7$-dimensional, $4$-step nilpotent Lie algebra over the field $k=\mathbb{F}_q$ of odd characteristic with $b(L)=3$ and $\dim L'=4.$ Then $w(L)=L'.$
\end{lem}
\begin{proof}
Since $Z(L) \subseteq L'$, we have $\dim Z(L) \leq 4.$ \begin{enumerate}
\item[•] If $\dim Z(L)= 4$ then $L$ is a $2$-step nilpotent Lie algebra, which is not the case.
\item[•] If $\dim Z(L)= 3$ then $L$ is a $3$-step nilpotent Lie algebra, which is not the case.
\end{enumerate}
So $\dim Z(L) \leq 2.$ By Theorem \ref{keyresult}, there exists a $4$-step, $6$-dimensional nilpotent Lie subalgebra $M$ of $L$ such that $M' = L'.$ If $\dim Z(M)= 1$, then $w(M) = M' = L'$ by Lemma \ref{p6lem}, and thus $w(L) = L'.$ If $\dim Z(M) = 2$, then there exists an ideal $I$ of $L$ such that $[L/I:Z(L/I)]=3$ by Theorem \ref{breadth3_classify}. Take $L = \left\langle x, y, z \right\rangle$ such that $M = \left\langle x, y \right\rangle.$ Therefore, $I = L^3$ by the arguments given in the last paragraph of the proof of Theorem \ref{keyresult}. Thus, $\bar{L} := L/I = \left\langle \bar{x}, \bar{y}, \bar{z} \right\rangle $ is $3$-step nilpotent with $\bar{z} \in Z(\bar{L})$. So $[z, L] = I$ as $z \notin Z(L)$. Observe that $\bar{L}' = \left\langle [\bar x, \bar y], [\bar x, [\bar x, \bar y]], [\bar y, [\bar x, \bar y]]\right\rangle .$ By a calculation similar to that in Theorem \ref{dimL'3}, subcase (2c), we get $\bar{L}' = \bigcup \limits_{{\delta},{i}} \ [{\delta}{\bar{x}}+i{\bar{y}} , \bar L]$, where $i \in k$ and $\delta \in \{0, 1\}$ are not simultaneously zero.
Observe that $M$ is in the isoclinism class $L_{6,21}(0)$ of \cite{graaf2007}. So, we can assume that $I=\gen{[y, M^2]}$. Since $C_M(z)$ is maximal, we can modify $z$ such that $[y,z]=0$ and $I=\langle [x,z] \rangle.$ Then $I \subseteq \bigcap\limits_{\substack{{\delta \in \{0,1\}}\\{i \in k}}} [{\delta}x+iy , L]$, where $i$ and $\delta$ cannot be simultaneously zero. Hence, $w(L)=L'$ by Lemma \ref{prelem3}.
\end{proof}
\begin{lem}\label{4-step nilpotent} Let $L$ be a $4$-step nilpotent Lie algebra of dimension at least $7$ over field $k=\mathbb{F}_q$ of odd characteristic with $b(L)=3$, $\dim L'=4.$ Then $w(L)=L'.$
\end{lem}
\begin{proof}
Since $\dim L \geq 7$, $\dim L'=4$ and $Z(L) \subseteq L'$, we have $\dim Z(L)\leq 2$: if $\dim Z(L)=3$, then $[L': Z(L)]=1$ and, by Lemma \ref{maximal_nilpotency_class}, $L$ is at most $3$-step nilpotent, which contradicts the given hypothesis. By Theorem \ref{keyresult}, there exists a $2$-generator Lie subalgebra $M$ of $L$ such that $\dim M = 6$, $M' =L'$ and $L=M+K$ with $\dim K' \leq 1.$ Also $K = \gen{u_1, \ldots, u_n}$ for some $n \ge 1$ such that $[u_i, L] = I$ for $1 \le i \le n.$ If $[u_i, M] = I$ for some $1 \le i \le n$, then the subalgebra $L_1:=\gen{x, y, z}$, where $z = u_i$, is $7$-dimensional such that $L_1' = L'.$ Hence, $w(L) = L'$ by Lemma \ref{p7lem3}. Now assume that $[u_i, M] = 0$ for all $1 \le i \le n.$ Since $Z(L) \subseteq L'$, $K$ is non-abelian. Thus, $L$ is a central product of $M$ and $K.$
{\textbf{Case(1)} } If $\dim Z(L) = 1$ then $\dim Z(M)= 1$. So, $w(M) = M'$ by Lemma \ref{p6lem}. Hence, $w(L) =L'.$
{\textbf{Case(2)} } If $\dim Z(L) = 2$ then $K' = I$, and so $\exists \, u_i, u_j \in K$ for some $1 \leq i<j\leq n$ such that $I = \gen{[u_i, u_j]}$. Let $N:= \gen{x, y, z, w}$, where $z= u_i$ and $w = u_j.$ Then $L' =N'.$ Consider $\bar N := N/I.$ As we saw in the proof of Lemma \ref{p7lem3}, $(\bar{N})' = \gen{[\bar x, \bar y], [\bar x, [\bar x, \bar y]], [\bar y, [\bar x, \bar y]]}$.
We can easily show that, for $i, j \in k$, $(\bar{N})' = \bigcup \limits_{{i},{j}} \ [{i\bar{x}}+j {\bar{y}}+ \bar z, \bar N]$ and $I \subseteq \bigcap\limits_{{i},{j}} [i x+j y+ z, N].$ Hence, $w(N)=N'$ by Lemma \ref{prelem3}, and therefore $w(L)=L'.$
\end{proof}
\section{Proof of Theorem \ref{thmA}}\label{sec7}
\noindent {\it Proof of Theorem \ref{thmA}.} Let $L$ be a finite-dimensional nilpotent Lie algebra over $k$ such that $\dim L'=4$, and let $Z(L) \subseteq L'$. Observe that $L$ is at most $5$-step nilpotent. By Remark \ref{remark}, we have $b(L) \ge 3$, and observe that $\dim L \geq 6.$ If $b(L) = 4$, then $w(L) = L'.$ Therefore, we assume that $b(L) = 3.$ When $L$ is $2$-step nilpotent, the assertion follows from Lemmas \ref{2-step_dim8} and \ref{cl2lem2}. Now let $L$ be $3$-step nilpotent. There is no $3$-step nilpotent Lie algebra of dimension $6$ satisfying the given hypothesis; therefore, $\dim L \geq 7.$ If $\dim Z(L) \leq 2$, then $w(L) = L'$ by Lemma \ref{p7lem2}. If $\dim L \geq 8$ and $\dim Z(L) = 3$, then by Lemma \ref{main1} we have $w(L) = L'.$ Finally, if $\dim L = 7$ and $\dim Z(L) = 3$, then by Lemma \ref{p7lem2} we have $w(L) \neq L'.$ It only remains to deal with the cases where $L$ is $4$-step or $5$-step nilpotent.
Let $L$ be $4$-step nilpotent. If $\dim L = 6$, then it follows from Lemma \ref{p6lem} that $w(L) = L'$ if and only if $\dim Z(L)= 1.$ So assume that $\dim L \geq 7.$ Then $w(L)=L'$ by Lemma \ref{4-step nilpotent}.
Finally, assume that $L$ is $5$-step nilpotent. Our claim is that $b(L) = 4$. Suppose $b(L)\neq 4$, i.e., $b(L) = 3.$ Since $L/Z(L)$ is $4$-step nilpotent and $Z(L) \subseteq L'$, we have $\dim Z(L) =1.$ Hence, by Theorem \ref{breadth3_classify}, $I=Z(L)$ is the only choice such that $[L/I:Z(L/I)] =3.$ Taking $Z_2(L):=\{x \in L: [x,y] \in Z(L) \, \forall \, y \in L\}$, we get $[\bar L : \overline{Z_2(L)}] = 3$, where $\bar L= L/Z(L)$ and $\overline{Z_2(L)}= Z_2(L)/ Z(L)$, i.e., $\dim L/Z_2(L) = 3$, which is not true as $L/Z_2(L)$ is $3$-step nilpotent. Hence, our claim is proved, and the proof of the theorem is complete. \hfill $\Box$
\bibliographystyle{plain}
\section{Introduction}
A fundamental assumption in machine learning is that training and test data are \gls{IID}. This assumption ensures consistency results from statistical learning theory, meaning that the learning machine obtained from \gls{ERM} attains the lowest achievable risk as the sample size grows~\cite{Vapnik98:SLT,scholkopf2019}.
Unfortunately, a considerable amount of research and real-world applications in the past decades have provided staggering evidence against this assumption \cite{zhao2018,zhao2020,ren2019,taori2020} (see \textcite{damour2020} for case studies).
The violation of the \gls{IID} assumption is usually caused by a \gls{DS} and can result in inconsistent learning machines~\cite{sugiyama2012}, implying the loss of performance guarantee of machine learning models in the real world.
Therefore, to tackle \gls{DS}, recent work advocates for
\gls{DG} \cite{blanchard2011,muandet2013,li2017,li2018,zhou2021}. Generalization to entirely unseen domains is crucial for the robust deployment of models in practical applications, especially when new, unforeseeable domains emerge after model deployment. However, the most important question that \gls{DG} seeks to answer is how to identify the right \emph{invariance} that allows for generalization.
The contribution of this work is twofold. First, we advocate that real-world distributions are composed of smaller ``units'' called \emph{invariant elementary distributions} that remain invariant across different domains; see Section \ref{sec:ied} for a motivating example.
Second, we propose to implement this hypothesis through so-called \glspl{GDU}. Specifically, we develop a modular neural network layer that consists of \glspl{GDU}. Each \gls{GDU} learns an embedding of an individual elementary domain that allows us to express the domain similarities during training. For this purpose, we adopt the theoretical framework of \gls{RKHS} to retrieve a geometrical representation of each distribution in the form of a \gls{KME} without information loss \cite{Berlinet04:RKHS,Smola07Hilbert,sriperumbudur2010,Muandet17:KME-Review}. This representation accommodates methods based on analytical geometry to measure similarities between distributions. We show that these similarity measures can be learned and utilized to improve the generalization capability of deep learning models to previously unseen domains.
The remainder of this paper is organized as follows: The theoretical framework is laid out in Section~\ref{sec:content} with our modular \gls{DG} layer implementation shown in Section~\ref{sec:layer}. In Section~\ref{sec:related_work}, we outline related work. Our experimental evaluations are presented in Section~\ref{sec:experiments}. Finally, we discuss potential limitations of our approach and future work in Section~\ref{sec:conclusion}.
\begin{figure*}
\center
\begin{tikzpicture}[scale=0.6]
\pgfplotsset{scale=1}
\begin{axis}[name=Ax1,
every axis plot post/.append style={mark=none, domain=-1.5:6, samples=100,smooth, thick},
axis x line*=bottom,
axis y line*=left,
ymin=0]
\addplot[blue, ultra thick, dotted] {gauss(0,0.5)};
\addplot[red, ultra thick, dotted] {gauss(2.5, 1.75)};
\addplot[color=orange] {gauss_mix(2.5, 1.75, 0, 0.5, 1, 0.06)};
\end{axis}
\begin{axis}[scale=0.8,
name=Ax2,at={($(Ax1.north east)+(10,0)$)},
axis background/.style={fill=green!5},
anchor=north west,
axis on top=true,
axis line style={gray},
axis x line=middle,
axis y line=middle,
ymin=-0.5, ymax=0.5,
xmin=-0.5, xmax=0.5,
ticks=none
]
\end{axis}
\begin{axis}[name=Ax3,at={($(Ax2.north east)+(22,0)$)},
every axis plot post/.append style={mark=none, domain=-6:8, samples=100,smooth, thick},
axis x line*=bottom,
axis y line*=left,
ymin=0,
anchor=north west]
\addplot[blue, ultra thick, dotted] {gauss(0, 0.5)};
\addplot[red, ultra thick, dotted] {gauss(2.5, 1.75)};
\addplot[color=violet] {gauss_mix3(-2.5, 1.3, 0, 0.8, 2.5, 1.75, 1.5)};
\end{axis}
\foreach \Point [count=\S from 1] in { 0.38, 0.21, 0.19, 0.01, -0.07, 0.11, -0.01, 0.08, -0.22,
-0.03, 0.18, 0.14, -0.14, 0.28, 0.21, -0.01, -0.16, 0.34,
0.09, 0.39, 0.05, -0.06, -0.01, 0.08, 0.07, -0.45, 0.13,
0.05, -0.05, -0.01, -0.27, -0.07, -0.24, -0.1 , 0.28, -0.14,
0.34, -0.32, 0.12, -0.56, -0.14, -0.12, -0.33, -0.08, -0.1 ,
0.09, 0.07, -0.12, -0. , -0.15, 0.18, 0.04, 0.17, -0.01,
0.34, -0.18, -0.27, -0.2 , -0.01, -0.07, -0.39, -0.35, -0.09}{
\draw[thin, cyan!100!black!60] (\Point + 1.5, 0) -- (\Point + 1.5, 0.5);
}
\node[cyan] at (1.3, -0.8) {$V_1$};
\node[cyan] at (7.8, 5.2) (DOMAIN1){\textbullet};
\draw [thin, latex-latex, cyan, dashed] (1.5, 0.5) [bend left] to (DOMAIN1);
\node[cyan] at (DOMAIN1.north) {$\mu_{V_1}$};
\foreach \Point [count=\S from 1] in { 0.87, -2.25, 0.71, -0.66, -1.04, -0.15, 0.63, 1.46, 0.29,
0.23, 0.49, -0.13, -0.57, -1.08, 1.33, -0.41, -0.11, -0.05,
0.05, -0.73, -0.41, 0.35, 0. , 0.17, -1.79, -0.97, 1. ,
1.21, -0.33, -0.31, -0.28, -0.63, -0.25, 0.65, -0. , -0.64,
0.99, 0.72, -0.39, 1.06, -0.16, -0.53, -1.45, -0.25, 2.08,
-0.46, -0.25, -0.06, -0.3 , 1.36, -0.19, 0.02, -0.1 , 0.42,
-0.01, -0.24, -0.71, 0.48, -0.71, 0.59, 1.19, -0.21, 0.1 ,
}{
\draw[thin, magenta] (\Point + 3.5, 0) -- (\Point + 3.5, 0.5);
}
\node[magenta] at (11.5, 2.5) (DOMAIN2){\textbullet};
\node[magenta] at (DOMAIN2.south) {$\mu_{V_2}$};
\node[magenta] at (3.3, -0.8){$V_2$};
\draw [thin, latex-latex, magenta, dashed] (3.3, 0.5) [bend left] to (DOMAIN2);
\node[black, thin] at (16, 0.7)(XVAL) [above]{$x_i$};
\draw[thin, black] (16, 0.0) -- (XVAL);
\node[black] at (11.1, 5.2) (PHIX)[]{\textbullet};
\node[black] at (PHIX) [above] {$\phi(x_i)$};
\draw [black, dashed] (XVAL.south) -- (PHIX);
\draw [darkgray, thick, latex-latex, dotted] (PHIX) -- (DOMAIN1) node[midway, above]{$\beta_{i1}$};
\draw [darkgray, thick, latex-latex, dotted] (PHIX) -- (DOMAIN2) node[midway, left]{$\beta_{i2}$};
\node[black] at (3.5, -1.8) {\textbf{During training}};
\node[black] at (17.3, -1.8) {\textbf{During inference}};
\end{tikzpicture}
\caption{A visualization of the ``invariant elementary distribution (I.E.D.)'' assumption for domain generalization (DG): the observed data distributions (orange and violet) are composed of the same set of \emph{unobserved} elementary distributions (blue and red) that remain invariant across different domains. Hence, the first challenge during the training phase (left panel) is to extract these elementary distributions from the observed data (orange). The unobserved elementary distributions are represented by the elementary bases $V_1$ and $V_2$ (cyan and pink). The second challenge during the inference phase (right panel) is to create a weighted ensemble of learning machines that utilize the similarities between the embedding of the unseen observation $\phi(x_i)$ and the embeddings of these distributions $\mu_{V_1}$ and $\mu_{V_2}$ in the RKHS $\mathcal{H}$ (green rectangle) as weights $\beta_{i1}$ and $\beta_{i2}$.}\label{fig:concept}
\end{figure*}
\section{Domain Generalization with Invariant Elementary Distributions} \label{sec:content}
Let $\mathcal{X}$ and $\mathcal{Y}$ be the input and output space, with a joint distribution $\mathbb{P}$.
In the multi-source \gls{DG} setting, we are given a set of $D$ labeled source datasets $\lbrace\mathcal{D}^s_{i}\rbrace_{i=1}^{D}$ with $\mathcal{D}^s_{i} \subseteq \mathcal{X}\times \mathcal{Y}$.
Each of the source datasets is assumed to be \gls{IID} generated by a joint distribution $\mathbb{P}_i^{s}$ with support on $\mathcal{X}\times \mathcal{Y}$, henceforth denoted \emph{domain}. The set of probability measures with support on $\mathcal{X}\times \mathcal{Y}$ is denoted by $\mathcal{P}$. In general, we aim to minimize the empirical risk; see Section \ref{sec:reg} for details. Important notation is summarized in Table \ref{tab:notations}.
\subsection{Invariant Elementary Distributions}
\label{sec:ied}
A multi-source dataset $\mathcal{D}^s$ comprises the merged individual source datasets $\lbrace\mathcal{D}^s_{i}\rbrace_{i=1}^{D}$. Similar to \textcite{mansour2012}, \textcite{albuquerque2019}, and \textcite{hoffman2018a}, we assume that the distribution of the source dataset can be described as a convex combination $\mathbb{P}^{s} = \sum^{D}_{i=1} \alpha^s_{i} \mathbb{P}^s_{i}$, where $\alpha^s= (\alpha^s_1, \dots, \alpha^s_D)$ is an element of the probability simplex (i.e., \mbox{$ \alpha^s \in \Delta^{\tiny{D}} := \lbrace \alpha \in \R^D \,|\, \alpha_i \geq 0 \wedge \sum_{i=1}^{D}\alpha_{i}=1 \rbrace$}). In other words, $\alpha^s_i$ quantifies the contribution of each individual source domain to the combined source distribution.
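As a quick numerical illustration of this mixture assumption (not part of the proposed method; all names below are ours), drawing from $\mathbb{P}^{s} = \sum_{i} \alpha^s_{i} \mathbb{P}^s_{i}$ amounts to first sampling a domain index from $\alpha^s$ and then sampling from the selected domain:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(alphas, samplers, n):
    """Draw n samples from P^s = sum_i alpha_i * P_i: pick a component
    index with probability alpha_i, then sample from that component."""
    alphas = np.asarray(alphas, dtype=float)
    assert np.all(alphas >= 0) and np.isclose(alphas.sum(), 1.0)  # simplex
    idx = rng.choice(len(alphas), size=n, p=alphas)
    samples = np.array([samplers[i]() for i in idx])
    return samples, idx

# Two illustrative source domains (cf. the dotted curves in Figure 1).
samplers = [lambda: rng.normal(0.0, 0.5), lambda: rng.normal(2.5, 1.75)]
x, idx = sample_mixture([0.3, 0.7], samplers, 10_000)
```

The empirical component frequencies then recover $\alpha^s$, and the sample mean matches the mixture mean $\sum_i \alpha^s_i \, \mathbb{E}[\mathbb{P}^s_i]$.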
\begin{wraptable}{r}{0.5\textwidth}
\caption{Important notation}
\begin{tabular}{c l}\toprule
K & number of elementary distributions\\
M & number of elementary domain bases\\
N & number of basis vectors\\
$\mathbb{P}^s$ & combined multi-source distribution\\
$\mathbb{P}^s_i$ & $i$-th single-source distribution\\
$\mathbb{P}_i$ & $i$-th elementary distribution\\
$V_i$ & $i$-th domain basis \\
$v_j^i$ & $j$-th vector in $V_i$\\
$\alpha^s_i$ & coefficient for $\mathbb{P}^s_i$\\
$\alpha_i$ & coefficient for $\mathbb{P}_i$\\
$\beta_{ij}$ & coefficient for sample $x_i$ and $\mu_{V_j}$ \\ \bottomrule
\end{tabular}\label{tab:notations}
\vspace{-25pt}
\end{wraptable}
In contrast to \textcite{mansour2009,mansour2012,hoffman2018a}, we generalize their problem descriptions: we express the distribution of each domain as a convex combination of $K$ elementary distributions $\lbrace \mathbb{P}_{i} \rbrace^{K}_{i=1} \subset \mathcal{P}$, meaning that $\mathbb{P}^{s} = \sum^{K}_{i=1} \alpha_{i} \mathbb{P}_{i}$, where $ \alpha \in \Delta^{\tiny{K}}$. We assume the elementary distributions to be invariant across the domains. The advantage is that we can find an invariant subspace at a more elementary level, as opposed to considering the source domains themselves as a basis for all unseen domains.
\paragraph{Motivating example.} In this work, we postulate that the elementary domain bases are the invariant subspaces that allow us to generalize to unseen domains. In practice, the question arises if and when elementary domains evolve. Consider the practical case of classifying the outcome of virus infections based on electronic health records collected from multiple sources such as patients, cohorts, and medical centers. Naturally, several factors determining the trajectory such as gender, pre-existing diseases, and virus mutations can change simultaneously across these sources. While, to a certain degree, these common factors remain invariant across individuals, the contribution of each of these factors may differ between individuals. In terms of the assumptions made in our work, we model each of these factors with a corresponding elementary distribution $\mathbb{P}_{i}$. For a previously unseen individual we can then determine the coefficients $\alpha_i^s$ and therewith quantify the contribution of each factor.
\subsection{Kernel Mean Embedding of Distributions}
\label{sec:kme}
In this work, we leverage the \gls{KME} of distributions \cite{Berlinet04:RKHS,Smola07Hilbert,Muandet17:KME-Review} to discover the elementary distributions and evaluate similarities between them.
Let $\mathcal{H}$ be a reproducing kernel Hilbert space (\gls{RKHS}) of real-valued functions on $\X$ with a reproducing kernel $k: \X \times \X \rightarrow \R$ \cite{scholkopf2001}.
The \gls{KME} of a probability measure $ \mathbb{P} \in \mathcal{P}$ in the \gls{RKHS} $\mathcal{H}$ is defined by a mapping $\phi(\mathbb{P}) = \mu_{\mathbb{P}} := \int_{\X} k(\textbf{x}, \cdot ) \, d \mathbb{P}(\textbf{x})$.
We assume that the kernel $k$ is characteristic, i.e., the mapping $\mathbb{P} \mapsto \mu_{\mathbb{P}}$ is injective \cite{Fukumizu04:DRS,Sriperumbudur08injectivehilbert}.
Theoretically, this essential assumption ensures that there is no information loss when mapping the distribution into $\mathcal{H}$.
Given the samples $\lbrace x_1, \dots , x_n \rbrace$ generated \gls{IID} from $\mathbb{P}$, $\mu_{\mathbb{P}}$ can be approximated by the empirical \gls{KME} $\hat{\mu}_{\mathbb{P}} = (1/n)\sum_{i=1}^{n}k(x_i, \cdot) = (1/n)\sum_{i=1}^{n}\phi(x_i)$.
We refer non-expert readers to \textcite{Muandet17:KME-Review} for a thorough review on this topic.
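To make the empirical \gls{KME} concrete, the following sketch (our own illustration; the kernel choice and all names are assumptions) computes RKHS inner products between empirical embeddings via the kernel trick, $\langle \hat{\mu}_{\mathbb{P}}, \hat{\mu}_{\mathbb{Q}} \rangle_{\mathcal{H}} = \frac{1}{nm}\sum_{i,j} k(x_i, y_j)$, and the induced squared distance (the squared maximum mean discrepancy):

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian RBF kernel matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def kme_inner(X, Y, kernel=rbf):
    """<mu_P_hat, mu_Q_hat>_H = (1/(n m)) sum_ij k(x_i, y_j)."""
    return kernel(X, Y).mean()

def mmd2(X, Y, kernel=rbf):
    """Squared RKHS distance ||mu_P_hat - mu_Q_hat||_H^2 via the kernel trick."""
    return kme_inner(X, X, kernel) + kme_inner(Y, Y, kernel) - 2.0 * kme_inner(X, Y, kernel)
```

Samples from the same distribution yield an RKHS distance near zero, while samples from shifted distributions are far apart in $\mathcal{H}$.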
\vspace{-5pt}
\paragraph{Challenges.}
Figure~\ref{fig:concept} depicts two challenges that come with our assumption of elementary distributions.
First, since we do not have access to the samples from the hidden elementary distributions, the elementary \gls{KME} cannot be estimated directly from the samples at hand.
To overcome this challenge, we instead seek a proxy \gls{KME} $\mu_{V_i} := (1/N)\sum_{j=1}^{N} \phi(v_j^{i}) = (1/N)\sum_{j=1}^{N} k(v_j^{i}, \cdot)$ for each elementary \gls{KME} $\mu_{\mathbb{P}_i}$
from a domain basis $V_{i}$, where $V_{i} = \lbrace v_{1}^{i}, \ldots, v_{N}^{i} \rbrace \subseteq \X$ for all $i \in \lbrace 1, \dots, M \rbrace$.
Hence, the \gls{KME} $\mu_{V_i}$ can be interpreted as the \gls{KME} of the empirical probability measure $\hat{\mathbb{P}}_{V_i} = (1/N)\sum_{j=1}^{N} \delta_{v_j^i}$.
Here, we assume that $M=K$.
The sets $V_{i}$ are referred to as \emph{elementary domain bases}.
Intuitively, the elementary domain basis $V_{1},\ldots,V_{M}$ represents each elementary distribution by a set of vectors that mimic samples generated from the corresponding distribution.
In Figure~\ref{fig:concept}, $V_1$ and $V_2$ as well as their mapping in $\mathcal{H}$ visualize this first challenge.
The second challenge concerns learning the unknown similarity between a single sample $x_i$ and an elementary domain $V_j$, which we denote by $\beta_{ij}$. Exploiting the advantage of \glspl{KME}, namely that this challenge can be tackled from a geometrical viewpoint, we quantify similarities between \glspl{KME}. For example, in Figure~\ref{fig:concept}, the similarity between $\phi(x_i)$ and $\mu_{V_1}$ or $\mu_{V_2}$ could be quantified as their distance or angle. These similarity coefficients enable the learning machine to represent a convex combination of elementary domain-specific learning machines, commonly known as an ensemble.
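As an illustrative sketch of this geometric viewpoint (our own example; the actual learned similarity functions are introduced in Section~\ref{sec:layer}), the angle between $\phi(x_i)$ and $\mu_{V_j}$ can be computed purely through kernel evaluations and normalized into convex weights:

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian RBF kernel matrix between row-stacked points."""
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def cosine_similarity(x, V, sigma=1.0):
    """cos of the angle between phi(x) and mu_V in H, via the kernel trick:
    <phi(x), mu_V> = (1/N) sum_j k(x, v_j)."""
    inner = rbf(x, V, sigma).mean()
    norm_x = np.sqrt(rbf(x, x, sigma)[0, 0])  # equals 1 for the RBF kernel
    norm_V = np.sqrt(rbf(V, V, sigma).mean())
    return inner / (norm_x * norm_V)

def convex_weights(x, bases, sigma=1.0):
    """Normalize the per-basis similarities into weights on the simplex."""
    s = np.array([cosine_similarity(x, V, sigma) for V in bases])
    return s / s.sum()
```

A sample close to one elementary basis then receives almost all of the weight, while the weights always sum to one.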
\section{Domain Generalization Layer} \label{sec:layer}
This section aims to transfer the theoretical ideas presented in Section~\ref{sec:content} into a deep learning framework. For the purpose of implementation, let $x \in \R^{h \times w}$ denote the input data point and $h_{\xi}:\R^{h \times w}\rightarrow \R^{e}$ the \gls{FE} that maps the input into a low-dimensional representation $\Tilde{x} \in \R^e$. Then the prediction layer $g_{\theta}: \R^e \rightarrow \Y$ infers the label $y$. To tackle the \gls{DG} problem, we introduce a layer module called the \glsfirst{GDU}.
A \gls{GDU} consists of three main components: (1) a similarity function $\gamma: \mathcal{H} \times \mathcal{H} \rightarrow \R$ that is the same for all elementary domains, (2) an elementary basis $V_i$ and (3) a learning machine $f(\Tilde{x}, \theta_i)$ for each elementary domain $i \in \{1, \dots, M \}$.
The architecture of the layer proposed herein is depicted in Figure~\ref{fig:layerscheme}.
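A minimal numpy sketch of the resulting forward pass, $\hat{y} = \sum_{i=1}^{M}\beta_i \, f(\tilde{x}; \theta_i)$, is given below. It is a deliberate simplification (linear learning machines, an RBF-based similarity $\gamma$, softmax-normalized weights, no training loop); all names are ours:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

class GDULayer:
    """Sketch of the DG layer: M units, each holding an elementary basis V_i
    and a linear learning machine f(x; theta_i) = <x, theta_i>."""

    def __init__(self, bases, thetas, sigma=1.0):
        self.bases, self.thetas, self.sigma = bases, thetas, sigma

    def _kernel(self, x, y):
        x, y = np.atleast_2d(x), np.atleast_2d(y)
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma**2))

    def forward(self, x_tilde):
        # beta_i from the similarity gamma(phi(x), mu_{V_i}) = <phi(x), mu_{V_i}>
        sims = np.array([self._kernel(x_tilde, V).mean() for V in self.bases])
        beta = softmax(sims)
        # convex combination of the per-domain predictions
        preds = np.array([x_tilde @ theta for theta in self.thetas])
        return float(beta @ preds), beta
```

The output is always a convex combination of the per-domain predictions, with more weight on the units whose bases are closer to $\tilde{x}$ in $\mathcal{H}$.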
\begin{figure}[ht]
\centering
\begin{minipage}[h]{.4\textwidth}
\centering
\begin{tikzpicture}[x=1.0cm,y=0.4cm, scale=0.6, every node/.style={scale=0.6}]
\newcommand \shiftKME{-4}
\newcommand \shiftMATMUL{-1}
\newcommand \shiftSUM{1.5}
\newcommand \shiftOUT{3}
\newcommand \shiftLOSS{5}
\newcommand \shiftREG{7}
\newcommand{7}{7}
\newcommand{9}{9}
\newcommand{0}{0}
\newcommand{-2.5}{-2.5}
\newcommand{-7}{-7}
\node at (-1.5, -7) [circle, minimum size=1cm, draw, thick](SUM){$\Sigma$};
\draw (-0.9, 7+1) rectangle (-2.1, 7-1){};
\node [scale=1.6] at (-1.5, 7) {$\tilde{x}$};
\draw [] (-1.5, 7-1) -- (-1.5, 9-4) {};
\foreach \Point [count=\S from 1] in {-5}{
\draw (\Point-1, 0 cm +0.2cm) node[minimum height=0.8cm,minimum width=0.8cm, fill=green!10, draw=black, thick] (GDU) {$GDU_{\S}$};
\draw[] (0, 9-4) -- (\Point, 9-4 ){};
\draw[-] (\Point, 9-4) -- (\Point, 0 cm +1cm){};
\node at (\Point-1, 0 cm -0.8cm) [circle, minimum size=0.2cm, draw, thick](GDUOUT){$\beta_{1}$};
\draw [-] (GDU.south) -- (GDUOUT){};
\draw[->] (\Point, 0 cm +1cm) -| (GDU.north){};
\draw (\Point+1, 0 cm +0.2cm) node[minimum height=0.8cm,minimum width=0.8cm, draw=black, thick] (LM) {$f(\tilde{x}; \theta_{\S})$};
\node at (\Point+1, 0 cm -0.8cm) [circle, minimum size=0.2cm, draw, thick](LMOUT){$\hat{y}_{1}$};
\draw [-] (LM.south) -- (LMOUT){};
\draw[->] (\Point, 0 cm +1cm) -| (LM.north){};
\draw (\Point-2, 0 cm +1.5cm) rectangle (\Point +2 , 0 cm -2cm){};
\node at (\Point, 0 cm -1.5cm) [circle, minimum size=0.6cm, draw, thick](PROD){};
\draw[fill=black] (\Point, 0 cm -1.5cm) circle (2pt);
\draw [->] (LMOUT.south) |- (PROD.east){};
\draw [->] (GDUOUT.south) |- (PROD.west){};
\draw [->] (PROD.south) |- (SUM){};
}
\foreach \Point [count=\S from 1] in {-1.5}{
\draw[->, dashed] (\Point, 9-4) -- (\Point, 0 cm +1cm){};
\node [] at (\Point, 0 cm ) {$\cdots$};
\draw (\Point, 0 cm -1.5cm ) node[minimum height=0.6cm,minimum width=0.8cm] (DF) {$\cdots$};
\draw [-, dashed] (DF.south) |- (SUM.north){};
}
\newcommand{green!40!black!100}{green!40!black!100}
\foreach \Point [count=\S from 1] in {2.5}{
\draw (\Point-1, 0 cm +0.2cm) node[minimum height=0.8cm,minimum width=0.8cm, fill=green!10, draw=black, thick] (GDU) {$GDU_{M}$};
\draw[] (0, 9-4) -- (\Point, 9-4 ){};
\draw[-] (\Point, 9-4) -- (\Point, 0 cm +1cm){};
\node at (\Point-1, 0 cm -0.8cm) [circle, minimum size=0.2cm, draw, thick](GDUOUT){$\beta_{M}$};
\draw [-] (GDU.south) -- (GDUOUT){};
\draw[->] (\Point, 0 cm +1cm) -| (GDU.north){};
\draw (\Point+1, 0 cm +0.2cm) node[minimum height=0.8cm,minimum width=0.8cm, draw=black, thick] (LM) {$f(\tilde{x}; \theta_{M})$};
\node at (\Point+1, 0 cm -0.8cm) [circle, minimum size=0.2cm, draw, thick](LMOUT){$\hat{y}_{M}$};
\draw [-] (LM.south) -- (LMOUT){};
\draw[->] (\Point, 0 cm +1cm) -| (LM.north){};
\draw (\Point-2, 0 cm +1.5cm) rectangle (\Point +2 , 0 cm -2cm){};
\node at (\Point, 0 cm -1.5cm) [circle, minimum size=0.6cm, draw, thick](PROD){};
\draw[fill=black] (\Point, 0 cm -1.5cm) circle (2pt);
\draw [->] (LMOUT.south) |- (PROD.east){};
\draw [->] (GDUOUT.south) |- (PROD.west){};
\draw [->] (PROD.south) |- (SUM){};
}
\draw [->] (SUM.south) -- (-1.5, -7-4) {};
\end{tikzpicture}
\end{minipage}
\hspace{.12\textwidth}
\begin{minipage}[h]{.4\textwidth}
\centering
\begin{tikzpicture}[x=1cm,y=0.5cm, scale=0.6, every node/.style={scale=0.8}]
\newcommand \kmeX{0}
\newcommand \cellCornerLU{-3}
\newcommand \cellCornerRU{3}
\newcommand \cellTopY{-8}
\newcommand \cellBotY{-15}
\newcommand \lineShiftU{-2}
\newcommand \lineShiftL{-5}
\draw[dashed, fill=green!10, draw=black] (\cellCornerLU, \cellTopY ) rectangle (\cellCornerRU, \cellBotY );
\draw (\kmeX+1, \cellTopY + \lineShiftU) node[minimum height=1cm,minimum width=2.4cm, draw=black,fill=white](INNERPRODBOX) {};
\node at (\kmeX+1, \cellTopY + \lineShiftU) {$\gamma(\phi(\Tilde{x}), \mu_{V_{i}}) $};
\draw (\kmeX-2, \cellTopY + \lineShiftL) node[minimum height=0.8cm,minimum width=0.8cm, draw=black, fill=white](DOMAIN) {\small$V_i$};
\draw (\kmeX+1, \cellTopY + \lineShiftL) node[minimum height=0.8cm,minimum width=0.8cm, draw=black, fill=white](DOMAINBASIS) {\small$\mu_{V_i}$};
\node at (\kmeX-4.5, \cellTopY + \lineShiftU ) (KMEINPUT) {$\Tilde{x}$};
\draw[->, left] (DOMAIN.east) -- (DOMAINBASIS.west) node [midway, sloped, above, scale=1.2, color=black] {$\phi$};
\draw[->, left] (KMEINPUT.east) -- (INNERPRODBOX.west)node [midway, sloped, above, scale=1.2, color=black] {$\phi$};
\draw[->, left] (DOMAINBASIS.north) -- (INNERPRODBOX.south);
\draw[->, left] (INNERPRODBOX.east) -- (\kmeX+5.2, \cellTopY + \lineShiftU) node[midway, sloped, above] {$ \beta_{i}$};
\end{tikzpicture}
\end{minipage}
\caption{Visualization of the \gls{DG} layer (left panel) and its main component, the \gls{GDU} (right panel). The \gls{DG} layer consists of several \glspl{GDU} that represent the elementary distributions. During training, these \glspl{GDU} learn the elementary domain bases $V_1,\ldots,V_M$ that approximate these distributions.}
\label{fig:layerscheme}
\end{figure}
The process is as follows: the $j$-th \gls{GDU} takes $\Tilde{x}_i$ as input and yields $\beta_{ij}$ as output. The \gls{KME} of each domain basis $V_j$ is required in order to apply $\gamma$ and compute the similarity between $\Tilde{x}_i$ and $V_j$. These \glspl{KME} are obtained as $ \phi(V_j) := \mu_{V_j} = (1/N) \sum_{k=1}^{N} \phi(v_{k}^{j}) = (1/N) \sum_{k=1}^{N}k(v_{k}^{j}, \cdot)$. The task of the \gls{GDU} is therefore to allocate a coefficient $\beta_{ij}$ for each elementary domain based on a similarity function $\gamma$. The function $\gamma$ outputs the coefficients $\beta_{ij} = \gamma(\phi(\Tilde{x}_i), \mu_{V_j})$, which represent the similarity between the \glspl{KME} of the corresponding domain basis $V_j$ and of the input $\Tilde{x}_i$. Theoretically speaking, $\mu_{V_j}$ and the feature mapping $\phi(\Tilde{x}_i)$ are elements of the associated \gls{RKHS} $\mathcal{H}$, which allows us to evaluate similarities of non-linear features in a higher-dimensional feature space.
Each \gls{GDU} is then connected to a learning machine $f(\Tilde{x}_i, \theta_j)$ that yields an elementary domain-specific inference. The final prediction of the layer is then an ensemble of these learning machines $g_{\theta}(\Tilde{x}_i) = \sum_{j=1}^{M}\beta_{ij} f(\Tilde{x}_i, \theta_j)$ where $ \theta = (\theta_1, \ldots, \theta_M).$ In Figure~\ref{fig:layerscheme}, we give an overview of how data is processed and information is stored in the \gls{GDU}.
In summary, \glspl{GDU} leverage the \gls{IED} assumption and represent our algorithmic contribution: The elementary domain basis is stored as weights in the layer. Storing information as a weight matrix (i.e., domain memory) provides the computational advantage of allowing for the use of backpropagation to learn the elementary domain basis. Hence, we avoid the dependency on problem-adaptive methods (e.g., domain-adversarial training) and metadata (e.g., domain labels).
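As a concrete illustration, the forward pass of such a layer can be sketched in plain NumPy. This is a minimal sketch under simplifying assumptions: a Gaussian kernel, the raw \gls{RKHS} inner product $\langle \phi(\Tilde{x}), \mu_{V_j} \rangle_{\mathcal{H}}$ as a stand-in for the similarity function $H$, and fixed learning machines passed in as callables. All names are illustrative, not taken from the released implementation.

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    # Gaussian kernel k(a, b) = exp(-||a - b||^2 / (2 * sigma^2))
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / (2.0 * sigma**2))

def kme_inner(x, V, sigma=1.0):
    # <phi(x), mu_V>_H = (1/N) sum_k k(x, v_k), evaluated via the kernel trick
    return np.mean([rbf(x, v, sigma) for v in V])

def dg_forward(x, bases, learners, kappa=1.0, sigma=1.0):
    # beta_j: kernel-softmax similarity of x to each elementary domain basis V_j
    sims = np.array([kme_inner(x, V, sigma) for V in bases])
    beta = np.exp(kappa * sims) / np.sum(np.exp(kappa * sims))
    # ensemble prediction g(x) = sum_j beta_j * f_j(x)
    preds = np.array([f(x) for f in learners])
    return float(beta @ preds)
```

For an input close to the first basis, the kernel softmax concentrates the weight on the first learning machine, so the ensemble output is dominated by that machine's prediction.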
\subsection{Domain Similarity Measures}\label{sec:similarity}
For the similarity function $\gamma$, we consider two similarity measures $H(\phi(x), \mu_{V_{i}})$, namely the \gls{CS}~\cite{kim2019} and \gls{MMD}~\cite{borgwardt2006,gretton2012}. To ensure that the resulting coefficients $\beta_i$ lie on the probability simplex, we apply the kernel softmax function~\cite{gao2019} and interpret its output as the similarity between an observation $\Tilde{x}$ and an elementary domain basis $V_i$. We get
\begin{equation}
\beta_i = \gamma(\phi(\Tilde{x}), \mu_{V_{i}}) = \frac{\exp\!\big(\kappa H(\phi(\Tilde{x}), \mu_{V_{i}}) \big)}{\sum_{k=1}^{M}\exp\!\big(\kappa H(\phi(\Tilde{x}), \mu_{V_{k}}) \big)},
\end{equation}
where $\kappa > 0$ is a softness parameter for the kernel softmax. Geometrically speaking, these similarities correspond to the angle and distance of two \glspl{KME} in the \gls{RKHS} $\mathcal{H}$. The function $\phi$ maps the observation $\Tilde{x}$ and the domain basis $V_i$ into $\mathcal{H}$, meaning that $ \phi(x) =\mu_{\delta_{x}}$ is the \gls{KME} of a Dirac measure $\delta_x$ and $\phi(V_i) = \mu_{V_i}$.
\paragraph{\gls{CS}.} The \gls{CS} function
$H(\phi(x), \mu_{V_{i}}) = \frac{\langle \phi(x), \mu_{V_{i}} \rangle_{\mathcal{H}}}{\| \phi(x) \|_{\mathcal{H}} \| \mu_{V_i} \|_{\mathcal{H}}}\,$
is used as an angle-based similarity.
\paragraph{\gls{MMD}.} We consider the \gls{MMD} for calculating a distance-based similarity measure.
The distance is then given as $\| \phi(x) - \mu_{V_{i}} \|_{\mathcal{H}}$. Subsequently, the similarity function $H$ is the negative \gls{MMD}: $H(\phi(x), \mu_{V_{i}}) = -\| \phi(x) - \mu_{V_{i}} \|_{\mathcal{H}}$. The intuition behind the negative \gls{MMD} is to put higher weights on samples that are closer to the \gls{KME} of an elementary domain basis.
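Both similarity measures reduce to finite sums of kernel evaluations, since $\langle \phi(x), \mu_{V_i} \rangle_{\mathcal{H}} = (1/N)\sum_k k(x, v_k)$, $\|\mu_{V_i}\|^2_{\mathcal{H}} = (1/N^2)\sum_{k,l} k(v_k, v_l)$, and $\|\phi(x)\|^2_{\mathcal{H}} = k(x,x)$. A minimal sketch, assuming a Gaussian kernel (function names are illustrative):

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / (2.0 * sigma**2))

def similarity(x, V, mode="cs", sigma=1.0):
    # All RKHS quantities reduce to kernel evaluations:
    #   <phi(x), mu_V>_H = (1/N) sum_k k(x, v_k)
    #   ||mu_V||^2_H     = (1/N^2) sum_{k,l} k(v_k, v_l)
    #   ||phi(x)||^2_H   = k(x, x)
    inner = np.mean([rbf(x, v, sigma) for v in V])
    mu_norm2 = np.mean([[rbf(v, w, sigma) for w in V] for v in V])
    x_norm2 = rbf(x, x, sigma)  # = 1 for the Gaussian kernel
    if mode == "cs":
        # angle-based: cosine similarity in H
        return inner / np.sqrt(x_norm2 * mu_norm2)
    else:
        # distance-based: negative MMD, -||phi(x) - mu_V||_H
        return -np.sqrt(max(x_norm2 - 2.0 * inner + mu_norm2, 0.0))
```

An input identical to the basis points yields a \gls{CS} of $1$ and a negative \gls{MMD} of $0$; a distant input drives the \gls{CS} toward $0$ and the negative \gls{MMD} toward $-\sqrt{2}$ for a Gaussian kernel.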
\subsection{Projection-based Generalization}\label{sec:proj}
For classification tasks, we introduce an alternative approach to infer the $\beta_i$ coefficients that is based on the idea of kernel sparse coding \cite{gao2010,gao2013}. Herein, the goal is to find an approximate representation of each feature mapping $\phi(x_i)$ using the elements of a dictionary $\lbrace \mu_{V_{j}} \rbrace_{j=1}^M$, i.e., $\phi(x_i) \approx \sum_{j=1}^{M}\beta_{ij} \mu_{V_j}$. In contrast to the aforementioned approaches, an elementary domain \gls{KME} $\mu_{V_j}$ does not necessarily represent the \gls{KME} of an elementary distribution $\mu_{\mathbb{P}_j}$. Therefore, we present another approach that aims to find a set $\lbrace \mu_{V_{j}} \rbrace_{j=1}^M$ that permits $\mu_{\mathbb{P}^s}$ to be represented as a linear combination.
Since $\mathbb{P}$ is assumed to be a convex combination of elementary distributions, we can find a linear combination representing $\mu_{\mathbb{P}^s}$ in terms of the domain \glspl{KME} $\mu_{V_i}$, as long as $ \mu_{\mathbb{P}^s} \in \mathcal{H}_M := \text{span}\lbrace \mu_{V_i} \,|\, i = 1, \dots, M \rbrace$. The space $\mathcal{H}_M$ is a subspace of the actual \gls{RKHS} $\mathcal{H}$, which allows us to represent elements of $\mathcal{H}$ at least approximately in $\mathcal{H}_M$. By keeping $\mathcal{H}_M$ large, we gain more representative power. To make $\mathcal{H}_M$ as large as possible, we have to ensure that its spanning elements are linearly independent or, even better, orthogonal. Orthogonal \glspl{KME} ensure two desirable properties. First, pairwise orthogonal elements in $\mathcal{H}_M$ guarantee no redundancy. Second, orthogonal elements allow us to make use of the orthogonal projection, which geometrically yields the best approximation of $\phi(x)$ in $\mathcal{H}_M$. In other words, we can achieve the best possible approximation of the feature mapping by using its orthogonal components (see Proposition~\ref{proposition_normed}). The orthogonal projection is given by
\begin{align}
\Pi_{\mathcal{H}_M}: \mathcal{H} \rightarrow \mathcal{H}_M, \quad
\phi(x) \mapsto \sum_{i=1}^{M}\frac{\langle \phi(x), \mu_{V_i} \rangle_{\mathcal{H}}}{\| \mu_{V_i} \|_{\mathcal{H}}^2}\mu_{V_i}.
\end{align}
\begin{proposition} \label{proposition_normed}
Let $\mu_{\mathbb{P}}$ be the \gls{KME} of a given mixture distribution $\mathbb{P}$ with $\mu_{\mathbb{P}} \in \text{span} \lbrace \mu_{V_i} \,|\, i=1, \dots, M \rbrace$, where $\langle \mu_{V_{i}}, \mu_{V_{j}} \rangle_{\mathcal{H}} = 0$ for all $i \neq j$ (i.e., the \glspl{KME} of the elementary domain bases are pairwise orthogonal). Then $\sum_{i=1}^{M}\norm{\mu_{\mathbb{P}}-\beta_i\mu_{V_{i}}}^2$
is minimal if the coefficients are set to $\beta_{i}^{*} = \langle \mu_{\mathbb{P}}, \mu_{V_i} \rangle_{\mathcal{H}}/\| \mu_{V_i} \|_{\mathcal{H}}^2$.
\end{proposition}
Proposition~\ref{proposition_normed} can be used to obtain an approximation of $\mu_{\mathbb{P}}$ by projecting it into $\mathcal{H}_M$, i.e., $\mu_{\mathbb{P}} \approx \sum_{i=1}^{M}\beta_{i} \mu_{V_i}$ where $\beta_i = \langle \mu_{\mathbb{P}}, \mu_{V_i} \rangle_{\mathcal{H}}/\| \mu_{V_i} \|_{\mathcal{H}}^2$.
This best-approximation property is the main advantage of our assumption in Proposition \ref{proposition_normed} (i.e., having orthogonal \glspl{KME}) and thus a potential advantage of projection-based \gls{DG}. Appendix~\ref{sup:proof_proposition_normed} provides the proof of Proposition~\ref{proposition_normed}.
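In kernel terms, the projection coefficients again reduce to empirical kernel sums, since $\langle \phi(x), \mu_{V_i} \rangle_{\mathcal{H}}$ and $\| \mu_{V_i} \|^2_{\mathcal{H}}$ are averages of kernel evaluations. A minimal sketch, assuming a Gaussian kernel (names are illustrative):

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / (2.0 * sigma**2))

def projection_coefficients(x, bases, sigma=1.0):
    # beta_i = <phi(x), mu_{V_i}>_H / ||mu_{V_i}||_H^2 -- the optimal
    # coefficients when the domain KMEs are pairwise orthogonal
    betas = []
    for V in bases:
        inner = np.mean([rbf(x, v, sigma) for v in V])
        norm2 = np.mean([[rbf(v, w, sigma) for w in V] for v in V])
        betas.append(inner / norm2)
    return np.array(betas)
```

Unlike the softmax-based coefficients, these projections are not constrained to the probability simplex; an input far from every basis simply receives coefficients near zero.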
\subsection{Model Training}\label{sec:reg}
For model training, we adapt the \gls{DA} framework from \textcite{zhuang2021}. Thus, our learning objective function is formalized as $\mathcal{L}(g) + \lambda_{D} \Omega_{D}(\| g \|_{\mathcal{H}})$.
The goal of the training can be described in terms of the two components of this function.
Consider a batch of training data $\lbrace x_{1}, \dots, x_{b} \rbrace$, where $b$ is the batch size. During training, we minimize the task loss $\mathcal{L}(g)= \frac{1}{b} \sum_{i=1}^{b} \mathcal{L}(\hat{y}_{i}, y_{i}) = \frac{1}{b} \sum_{i=1}^{b} \mathcal{L}\big(\sum_{j=1}^{M}\gamma(\phi(x_i), \mu_{V_j}) f_{j}(x_{i}), y_{i}\big)$. In addition, our objective is that the model learns to distinguish between different domains. Thus, the regularization $\Omega_{D}$ is introduced to control the domain bases. In our case, we require the regularization $\Omega_{D}$ to ensure that the \glspl{KME} of the elementary domain bases are able to represent the \glspl{KME} of the elementary domains. Therefore, we minimize the \gls{MMD} between the feature mappings $\phi(x_i)$ and the associated representations $\sum_{j=1}^{M} \beta_{ij}\mu_{V_{j}}$, where $\beta_{ij} = \gamma(\phi(x_i), \mu_{V_j})$. Hence, the regularization $\Omega_{D} = \Omega_{D}^{OLS}$ is defined as $
\Omega_{D}^{OLS} \big( \| g \|_{\mathcal{H}} \big) = \frac{1}{b}\sum_{i=1}^{b} \| \phi(x_{i}) - \sum_{j=1}^{M} \beta_{ij}\mu_{V_{j}} \|_{\mathcal{H}}^2
$ (see Appendix~\ref{ap:detailsonreg} for details). The intuition is to represent each feature mapping $\phi(x_i)$ by the domain \glspl{KME} $\mu_{V_j}$; thus, we minimize the \gls{MMD} between the feature map and a combination of the $\mu_{V_j}$. The minimum of this regularization can be interpreted as the ordinary least-squares solution of a regression problem of $\phi(x_i)$ on the components of $\mathcal{H}_{M}$. In other words, we want to ensure that the bases $V_j$ are contained in the feature mappings $\phi(x_i)$.
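Because $\Omega_{D}^{OLS}$ lives in the \gls{RKHS}, it can be evaluated without explicit feature maps by expanding the squared norm into kernel evaluations: $\|\phi(x) - \sum_j \beta_j \mu_{V_j}\|^2_{\mathcal{H}} = k(x,x) - 2\sum_j \beta_j \langle \phi(x), \mu_{V_j}\rangle_{\mathcal{H}} + \sum_{j,l}\beta_j\beta_l\langle \mu_{V_j}, \mu_{V_l}\rangle_{\mathcal{H}}$. A minimal sketch assuming a Gaussian kernel (illustrative names, not the released implementation):

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / (2.0 * sigma**2))

def kme_inner_sets(A, B, sigma=1.0):
    # <mu_A, mu_B>_H = (1/(|A||B|)) sum_{a in A, b in B} k(a, b)
    return np.mean([[rbf(a, b, sigma) for b in B] for a in A])

def ols_regularizer(batch, betas, bases, sigma=1.0):
    # Omega_D^OLS = (1/b) sum_i || phi(x_i) - sum_j beta_ij mu_{V_j} ||_H^2,
    # expanded with the kernel trick into pairwise kernel evaluations.
    G = np.array([[kme_inner_sets(Vj, Vl, sigma) for Vl in bases] for Vj in bases])
    total = 0.0
    for x, b in zip(batch, betas):
        inner = np.array([np.mean([rbf(x, v, sigma) for v in V]) for V in bases])
        total += rbf(x, x, sigma) - 2.0 * b @ inner + b @ G @ b
    return total / len(batch)
```

When a feature mapping is perfectly reconstructed by the weighted domain \glspl{KME}, the penalty vanishes; otherwise it equals the average squared \gls{RKHS} reconstruction error.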
In the particular case of projection, we want the \glspl{KME} of the elementary domains to be orthogonal to ensure high expressive power. For this purpose, the additional term $\Omega^{\perp}_{D}$ is introduced to enforce the desired orthogonality. Considering a kernel function with $k(x, x) = 1$, orthogonality requires the Gram matrix $K_{ij} = \langle \mu_{V_i}, \mu_{V_j} \rangle_{\mathcal{H}}$ to be close to the identity matrix $I$. A variety of matrix regularization methods are available~\cite{xie2017,bansal2018}. A well-known method to ensure orthogonality is the \gls{SO} regularization $\Omega_{D}^{\perp} = \lambda \| K - I \|^2_{F}$~\cite{bansal2018}. As pointed out by \textcite{bansal2018}, the \gls{SRIP} and \gls{MC} regularizations can be promising alternatives to \gls{SO} and are thus additionally implemented in the \gls{DG} layer. Hence, in the case of projection, the regularization is given by $\Omega_{D} \big( \| g \|_{\mathcal{H}} \big) = \lambda_{OLS} \Omega^{OLS}_{D} \big( \| g \|_{\mathcal{H}} \big) + \lambda_{ORTH} \Omega_{D}^{\perp} \big( \| g \|_{\mathcal{H}} \big), ~\lambda_{OLS}, \lambda_{ORTH} \geq~0$.
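The Gram matrix $K$ of the domain \glspl{KME} is itself computable from pairwise kernel sums, $K_{ij} = (1/N^2)\sum_{k,l} k(v^{i}_{k}, v^{j}_{l})$, so the \gls{SO} penalty can be sketched as follows (Gaussian kernel assumed; names illustrative):

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / (2.0 * sigma**2))

def soft_orthogonality(bases, sigma=1.0):
    # Omega_D^perp = ||K - I||_F^2 with K_ij = <mu_{V_i}, mu_{V_j}>_H,
    # pushing the domain KMEs toward pairwise orthogonality.
    M = len(bases)
    K = np.array([[np.mean([[rbf(a, b, sigma) for b in Vj] for a in Vi])
                   for Vj in bases] for Vi in bases])
    return np.sum((K - np.eye(M)) ** 2)
```

Two identical bases incur the maximal off-diagonal penalty, while well-separated bases (for which the cross-kernel sums vanish) drive the penalty toward zero.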
Lastly, sparse coding is an efficient technique to find the smallest possible basis that recovers the data subject to a reconstruction error \cite{olshausen1997}. Such applications have yielded strong performance, for example in the field of computer vision \cite{lee2007,yang2009}. Kernel sparse coding transfers the reconstruction problem of sparse coding into $\mathcal{H}$ by using the mapping $\phi$; by applying a kernel function, the reconstruction error is quantified via the inner product \cite{gao2010,gao2013}. To ensure sparsity, we apply the $L_{1}$-norm to the coefficients~$\beta$ and add $ \Omega^{L_{1}}_D(\| \gamma \|) := \| \gamma(\phi(x_i), \mu_{V_j}) \|_{1}$ to the regularization term $\Omega_D$ with the corresponding coefficient $\lambda_{L_1}$. Appendix~\ref{ap:vis_DGLayer} gives a visual overview of the model training.
\section{Related Work} \label{sec:related_work}
\Gls{DG}, also known as \gls{OOD} generalization, is among the hardest problems in machine learning~\cite{blanchard2011,muandet2013,Arjovsky19:IRM}.
In contrast, \gls{DA}, which predates \gls{DG} and \gls{OOD} problems, deals with a slightly simpler scenario in which some data from the test distribution are available~\cite{ganin2015}.
Hence, based on the available data, the task is to develop learning machines that transfer knowledge learned in a source domain specifically to the target domain.
Approaches pursued in \gls{DA} can be grouped primarily into (1) discrepancy-based \gls{DA}~\cite{sun2016,peng2018,ben-david2010,fang2020,tzeng2014,long2015,baktashmotlagh2016}, (2) adversary-based \gls{DA}~\cite{tzeng2017,liu2016,ganin2015,long2018}, and (3) reconstruction-based \gls{DA}~\cite{bousmalis2016,hoffman2018,kim2017,yi2017,zhu2017,ghifary2014}.
In \gls{DA}, learning the domain-invariant components requires access to unlabeled data from the target domain.
Unlike problems in \gls{DA}, where the observed data from the test domains can be used to find the most appropriate invariant structures \cite{ben-david2010}, the lack thereof in \gls{DG} calls for postulating an invariant structure that enables \gls{OOD} generalization.
To enable generalization to unseen domains without any access to data from them, researchers have made significant progress in the past decade and developed a broad spectrum of methodologies \cite{zhou2021,zhou2021a,li2019,blanchard2011}. For a thorough review, see, e.g., \textcite{zhou2021,Wang21:DG}.
Existing works can be categorized into methods based on domain-invariant representation learning \cite{muandet2013,li2018,li2018b}, meta-learning \cite{li2018a,balaji2018}, and data augmentation \cite{zhou2020}, to name a few.
Another recent stream of research from a causal perspective includes invariant risk minimization~\cite{Arjovsky19:IRM}, invariant causal prediction~\cite{Peters16:ICP}, and causal representation learning~\cite{Schoelkopf21:CRL}.
The overall motivation here is to learn the representation that is robust to domain-specific spurious correlations. In other words, it is postulated that ``causal'' features are the right kind of invariance that will enable \gls{OOD} generalization.
Despite these successful applications, \gls{DG} remains a challenging open problem.
We differentiate our work from existing ones as follows.
First, we postulate the existence of a domain-invariant structure at the distributional level rather than at the level of the data representation, as is commonly assumed in \gls{DG}.
This is motivated by theoretical results~\cite{mansour2009,hoffman2018a}, stating that a distribution-weighted combination of source hypotheses represents the ideal hypothesis.
Furthermore, our distributional assumption, as we argued in Section \ref{sec:content}, generalizes previous work that proposes to use domain-specific knowledge to tackle the problem of \gls{DG} from a more elementary setting.
For example, approaches such as those of \textcite{piratla2020,monteiro2021} can be compared to our \glspl{GDU} as domain-specific predictors in the special case where each elementary domain represents a single source domain. However, \glspl{GDU} do not assume the existence of a single common classifier for all domains; instead, they provide a combination of multiple common classifiers shared between different source domains.
Second, we incorporate the \gls{IED} assumption directly into the design of our model architectures, as shown in Figure~\ref{fig:layerscheme}.
Designing effective model architectures for \gls{DG} has been largely neglected~\parencite[Sec.\ 4.1]{zhou2020}.
Third, we do not assume access to the domain labels, which can be difficult to obtain in practice~\cite{niu2017}.
As \gls{DG} methods that can deal with the absence of domain labels (e.g., \cite{huang2020, carlucci2019, li2018d}) are still scarce~\parencite[Sec.\ 4.2]{zhou2020},
we provide an easy-to-apply method to achieve \gls{OOD} generalization: \texttt{model.add(DGLayer())} in TensorFlow.
\section{Experiments}
\label{sec:experiments}
The goal of our experiments is to validate the proposed \gls{IED} assumption and the working principle of the \glspl{GDU}. Further, we conduct a benchmark experiment based on real-world datasets for \gls{DG}. We distinguish two modes of training the \gls{DG} layer: fine-tuning (FT), where we extract features using a pre-trained model, and end-to-end training (E2E), where the \gls{FE} and the \gls{DG} layer are jointly trained.\footnote{All source code is made available on GitHub: \url{https://github.com/im-ethz/pub-gdu4dg}.}
\subsection{Digits Experiment}
We use five publicly available digits image datasets, namely MNIST~\cite{lecun1998a}, MNIST-M~\cite{ganin2015a}, SVHN~\cite{netzer2011}, USPS, and Synthetic Digits (SYN)~\cite{ganin2015a}. For this experiment, we follow the experimental setup of \textcite{peng2019,feng2020,zhang2020,zhao2018}. The task is to classify digits between zero and nine.
In turn, each of these datasets is considered the target domain, inaccessible during training, while the remaining four serve as source domains. Details are given in Appendix~\ref{sec:appendix_digits5}. Table~\ref{tab:results_digits} summarizes the classification results for this experiment. Our \gls{DG} layer noticeably improves the mean accuracy and decreases the standard deviation in comparison to the \gls{ERM} and \gls{ERM} ensemble baselines, making the results more stable across the ten reported iterations.
\begin{table*}[!h]
\caption{Results of the digits experiment. All experiments were repeated ten times, and the mean (standard deviation) accuracy is reported. The best results according to the mean accuracy are highlighted in \textbf{bold}.}
\label{tab:results_digits}
\begin{center}
\begin{scriptsize}
\begin{sc}
\begin{tabular}{llccccc}
& & \multicolumn{1}{c}{\textbf{MNIST}} & \multicolumn{1}{c}{\textbf{MNIST-M}} & \multicolumn{1}{c}{\textbf{SVHN}} & \multicolumn{1}{c}{\textbf{USPS}} & \multicolumn{1}{c}{\textbf{SYN}} \\
\toprule
\multirow{2}{*}[-2pt]{\textit{\textbf{ERM}}} & \textit{Single} & \textit{97.98 (0.34)} & \textit{63.00 (3.20)} & \textit{70.18 (2.74)} & \textit{93.70 (1.74)} & \textit{83.62 (1.47)} \\
\cmidrule{2-7} & \textit{Ensemble} & \textit{98.21 (0.39)} & \textit{62.87 (1.50)} & \textit{72.01 (3.59)} & \textit{95.16 (0.89)} & \textit{83.80 (1.22)} \\
\midrule
& CS & 98.53 (0.16) & 68.55 (0.80) & 78.90 (1.41) & 95.83 (0.50) & 88.39 (0.82) \\
\cmidrule{2-7} \textbf{FT} & MMD & 98.60 (0.08) & 68.62 (0.70) & 79.20 (2.01) & \textbf{96.24 (0.71)} & 88.27 (0.41) \\
\cmidrule{2-7} & Projection & 98.57 (0.17) & 68.56 (0.91) & 79.34 (0.72) & 96.24 (0.71) & 88.58 (0.53) \\
\midrule
& CS & 98.62 (0.19) & \textbf{69.25 (0.61)} & \textbf{79.42 (1.27)} & 96.17 (0.52) & 87.92 (0.84) \\
\cmidrule{2-7} \textbf{E2E} & MMD & 98.58 (0.16) & 69.04 (0.83) & 79.20 (0.90) & 96.00 (0.44) & 88.18 (0.86) \\
\cmidrule{2-7} & Projection & \textbf{98.67 (0.12)} & 68.67 (0.98) & 78.56 (1.68) & 96.24 (0.77) & \textbf{88.77 (0.48)} \\
\bottomrule
\end{tabular}
\end{sc}
\end{scriptsize}
\end{center}
\end{table*}
\subsection{Ablation Study}
We chose the digits dataset to analyze each component of our \gls{DG} layer. We (a) discuss heuristics for choosing $\sigma$ and $M$ (Appendix \ref{sec:heuristics}), (b) vary $M$ and $N$ in Figure~\ref{fig:MN_ablation} and the strength of the regularization terms in Figures~\ref{fig:heatmapp_CS}, \ref{fig:heatmapp_MMD}, and \ref{fig:heatmapp_proj} to assess the sensitivity of the \gls{DG} layer to the choice of hyper-parameters (Appendix~\ref{sec:ablation_study}), (c) visualize the output of the \gls{FE} (Figure~\ref{fig:exp_digits_tsne}), and (d) interpret the learned elementary domains (Appendix \ref{sec:interpret_elemdom}). For (a), we cluster the output of the \gls{FE} using the k-means algorithm with different numbers of clusters $K$ and set the number of elementary domains $M$ to the $K^{*}$ yielding the best clustering. For $\sigma$, we resort to the median heuristic \cite{Muandet17:KME-Review}. Further, our ablation study (b) reveals stable results across different sets of hyper-parameters. While the layer is not sensitive to the choice of regularization strength, we recommend not omitting the regularization completely, although the computational expenses decrease without the orthogonal regularization. As an illustration in (c), we project the output of the \gls{FE} trained with a dense layer (\gls{ERM}) and with the \gls{DG} layer by t-SNE (t-distributed stochastic neighbor embedding). The \gls{DG}-trained \gls{FE} yields more concentrated and bounded clusters in comparison to the one trained by \gls{ERM}. For (d), the \gls{MMD} heatmap and t-SNE embeddings of the learned elementary and source domains in Figure \ref{fig:vis_elem_domains} indicate that the \gls{DG} layer captures distributional structures in the dataset.
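For completeness, the median heuristic used for $\sigma$ simply sets the kernel bandwidth to the median of the pairwise Euclidean distances in a (sub)sample of the extracted features. A minimal sketch (illustrative, not the released implementation):

```python
import numpy as np

def median_heuristic(X):
    # sigma is set to the median of all pairwise Euclidean distances in X
    X = np.asarray(X, dtype=float)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # keep each pair once (strict upper triangle), excluding zero self-distances
    return np.median(d[np.triu_indices(len(X), k=1)])
```

In practice, this is typically computed on a random subsample of the training features, since the pairwise distance matrix grows quadratically with the sample size.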
\subsection{ECG Experiment}
The PhysioNet/Computing in Cardiology Challenge 2020~\cite{pmid33176294,PhysioNet,physionet2} aims at identifying clinical diagnoses from 12-lead ECG recordings coming from 6 different databases. This publicly available pooled dataset contains 43,101 recordings sampled with various sampling frequencies and lengths. Each recording is labeled as having one or more of 24 cardiac abnormalities and, hence, the task is to perform a multi-label binary classification.
For our experiment, we iterate over the databases, taking one at a time as the test domain while utilizing the remaining 5 databases for training.
Performance was measured according to the original PhysioNet challenge score, a generalized intersection-over-union score in which partial credit is assigned to misdiagnoses that result in similar treatments or outcomes. The score is adjusted relative to a solution that always selects the normal/majority class and normalized to the perfect solution. The score can therefore take negative values, with a best possible score of 1.
Table~\ref{tab:results_ecgs} reports results for the ECG experiments. For this kind of real-world time-series data, we observe an improvement in mean score and a reduction in standard deviation over the \gls{ERM} and \gls{ERM} ensemble baselines across all generalization tasks. We attribute the poorer performance on the \emph{PTB} dataset to the fact that it contains considerably longer recordings than the other datasets (except for \emph{INCART}, which, however, contains only 75 samples) and has a higher sampling rate (1000 Hz vs. 500 Hz and 257 Hz). The negative challenge score for the \emph{PTB-XL} dataset is due to the presence of labels unobserved in the other datasets, as well as a considerably smaller amount of training data, since the \emph{PTB-XL} dataset comprises the majority of all samples (21,837 out of 43,101).
Appendix~\ref{sec:appendix_ecg} provides details for this experiment.
\begingroup
\setlength{\tabcolsep}{0.23em}
\begin{table*}[!h]
\caption{Results of the ECG experiment. All experiments were repeated five times, and the mean (standard deviation) challenge metric is reported. Higher is better. The best overall results according to the mean challenge metric are highlighted in \textbf{bold}.}
\label{tab:results_ecgs}
\begin{center}
\begin{scriptsize}
\begin{sc}
\begin{tabular}{p{16pt}p{34pt}cccccc}
& & \multicolumn{1}{c}{\textbf{CPSC}} & \multicolumn{1}{c}{\textbf{CPSC-Extra}} & \multicolumn{1}{c}{\textbf{INCART}} & \multicolumn{1}{c}{\textbf{PTB}} & \multicolumn{1}{c}{\textbf{PTB-XL}} & \multicolumn{1}{c}{\textbf{G12EC}} \\
\toprule
\multirow{2}{*}[-2pt]{\textit{\textbf{ERM}}} & \textit{Single} & \textit{0.0840 (0.0220)} & \textit{0.2715 (0.0270)} & \textit{0.2290 (0.0059)} & \textit{-8.8206 (0.3908)} & \textit{-0.3373 (0.0403)} & \textit{0.2011 (0.0015)} \\
\cmidrule{2-8}
& \textsc{\textit{Ensemble}} & \textit{0.1699 (0.0346)} & \textit{0.2488 (0.0079)} & \textit{0.2456 (0.0109)} & \textit{-8.9115 (0.1023)} & \textit{-0.4136 (0.0780)} & \textit{0.2079 (0.0161)} \\
\midrule
& CS & 0.1830 (0.0061) & 0.2950 (0.0035) & 0.1595 (0.0313) & -8.8802 (0.1069) & -0.1932 (0.0168) & 0.1853 (0.0036) \\
\cmidrule{2-8} {\textbf{FT}} & MMD & 0.1877 (0.0077) & 0.3011 (0.0035) & 0.2100 (0.0413) & -8.8082 (0.1458) & -0.1567 (0.0211) & 0.1919 (0.0036) \\
\cmidrule{2-8} & Projection & \textbf{0.1941 (0.0050)} & \textbf{0.3135 (0.0015)} & -0.1041 (0.0015) & -8.8817 (0.0478) & -0.2166 (0.0191) & 0.2409 (0.0042) \\
\midrule
& CS & 0.1067 (0.0170) & 0.2866 (0.0146) & \textbf{0.2539 (0.0289)} & -9.2947 (0.3004) & -0.1651 (0.0494) & 0.1927 (0.0080) \\
\cmidrule{2-8} \textbf{E2E} & MMD & 0.1034 (0.0143) & 0.2834 (0.0228) & 0.2398 (0.0257) & -9.0600 (0.3100) & -0.1433 (0.0293) & 0.1925 (0.0067) \\
\cmidrule{2-8} & Projection & 0.1411 (0.0269) & 0.2962 (0.0065) & -0.1467 (0.0513) & \textbf{-8.5904 (0.3310)} & \textbf{-0.0178 (0.0291)} & \textbf{0.2947 (0.0117)} \\
\bottomrule
\end{tabular}
\end{sc}
\end{scriptsize}
\end{center}
\end{table*}
\endgroup
\subsection{WILDS Benchmark}\label{sec:WILDSBench}
To challenge the \gls{IED} assumption and the generalization capabilities of the \gls{DG} layer, we use WILDS, a curated set of real-world experiments for benchmarking \gls{DG} methods \cite{koh2021wilds}.
We consider the following four datasets: \textit{Camelyon17}, \textit{RxRx1}, \textit{iWildCam}, and \textit{FMoW}, which represent the task of real-world \gls{DG}. Appendix \ref{sec:appendix_wilds} provides details on the datasets and experiments.
Following the WILDS benchmarking procedure \cite{koh2021wilds}, we compare our proposed \gls{DG} layer to the following baselines. First, empirical risk minimization (\gls{ERM}), which minimizes the average training loss over the pooled dataset.
Second, a group of \gls{DG} algorithms provided by the WILDS benchmark, namely Coral, Fish, IRM, and group DRO. The Coral algorithm introduces a penalty for differences in the means and covariances of the domains' feature distributions. The Fish algorithm achieves \gls{DG} by approximating an inter-domain gradient matching objective, i.e., maximizing the inner product between gradients from different domains \cite{shi2021}. Conceptually, Fish learns feature representations that are invariant across domains.
Invariant risk minimization (IRM) introduces a penalty for feature distributions with different optimal classifiers for each domain \cite{Arjovsky19:IRM}. The idea is to enable \gls{OOD} generalization by learning domain-invariant causal predictors.
Lastly, group \gls{DRO} explicitly minimizes the training loss on the worst-case domain \cite{sagawa2020,hu2018}.
We present our benchmarking results in Table \ref{table:wilds}. When reproducing the \gls{ERM} results, we noticed two deviations from the original results presented in \textcite{koh2021wilds}. For \textit{Camelyon17}, we achieved a better accuracy with a simpler \gls{FE}. For the remaining datasets, our \gls{ERM} achieved lower performance, although we used the same specifications as in \textcite{koh2021wilds}. Hence, we report our reproduced \gls{ERM} results as a baseline for comparison. As shown by \textcite{koh2021wilds}, \gls{ERM} remains one of the strongest baselines in real-world \gls{DG} problems.
Our method consistently increases the mean performance and decreases the standard deviation in comparison to \gls{ERM}. For \textit{RxRx1}, however, our method does not outperform the baseline.
In contrast to our digits and ECG experiments, the WILDS benchmark requires deeper \glspl{FE} such as ResNet-50 or DenseNet-121. We discuss the scalability of our \gls{DG} layer in Appendix~\ref{sec:appendixScalabilit}.
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}}
\begin{table*}[!h]
\caption{Results on WILDS benchmarking tasks. We compute the metrics following \textcite{koh2021wilds} and report the mean (standard deviation) across ten runs for Camelyon17 and three runs for the remaining. Best performances in comparison with our \gls{ERM} are highlighted in \textbf{bold}.}
\label{table:wilds}
\begin{center}
\begin{scriptsize}
\begin{sc}
\begin{tabular}{p{16pt}p{34pt}cccccP{40pt}}
& & \textbf{Camelyon17} & \textbf{RxRx1} & \multicolumn{2}{c}{\textbf{iWildCam}} & \multicolumn{2}{c}{\textbf{FMoW}} \\
&& \multirow{2}{*}{Avg Acc} & \multirow{2}{*}{Avg Acc} & \multirow{2}{*}{Avg Acc} & \multirow{2}{*}{Macro F1} & \multirow{2}{*}{Avg Acc} & Avg Worst-Region Acc \\
\toprule
\textit{\textbf{ERM}} & & \textit{77.8 (12.7)} & \textit{\textbf{26.9 (0.0)}} &\textit{67.0 (3.9)}&\textit{ 16.7 (0.6)} & \textit{46.1 (1.5)} & \textit{24.5 (0.9)} \\
\cmidrule{1-8} &CS & \textbf{79.2 (10.1)} & 25.3 (0.1) & 68.9 (1.6) & 16.5 (0.4) & 51.7 (0.2) & 30.4 (0.3)\\
\cmidrule{2-8}\textbf{FT} &MMD & 78.5 (11.9) & 25.3 (0.1) & 70.3 (3.2) & 16.9 (0.4) & \textbf{51.9 (0.7)} & \textbf{31.8 (0.9)}\\
\cmidrule{2-8} &Projection & 78.6 (11.9) & 25.1 (0.3) & \textbf{71.2 (2.6)} & \textbf{17.2 (0.6)} & 51.5 (0.2) & 30.2 (1.1) \\
\cmidrule{1-8} &CS & 76.7 (11.4) & 16.3 (0.2) & 65.6 (1.8) & 15.6 (1.0) & 47.0 (0.6) & 28.2 (0.7)\\
\cmidrule{2-8}\textbf{E2E} &MMD & 75.5 (12.8) & 16.2 (0.1) & 67.5 (2.6) & 15.7 (0.6) & 47.6 (2.1) & 27.1 (2.7)\\
\cmidrule{2-8} &Projection & 76.7 (6.3) & 13.9 (0.1) & 67.4 (0.2) & 15.4 (1.1) & 47.4 (0.9) & 28.6 (1.6)\\
\bottomrule \toprule
\multicolumn{8}{c}{\textit{Results reported by WILDS \cite{koh2021wilds}}} \\
\midrule
\multicolumn{2}{l}{\textbf{ERM}} & 70.3 (6.4) & 29.9 (0.4) &71.6 (2.5)& 31.0 (1.3) & 53.0 (0.6) & 33.7 (1.5) \\
\multicolumn{2}{l}{\textbf{CORAL}} & 59.3 (7.7) & 28.4 (0.3) &73.3 (4.3)& 32.8 (0.1) & 50.5 (0.4) & 31.7 (1.2) \\
\multicolumn{2}{l}{\textbf{Fish}} & 74.7 (7.1) & -- &64.7 (2.6)& 22.0 (1.8) & 51.8 (0.3) & 34.6 (0.2) \\
\multicolumn{2}{l}{\textbf{IRM}} & 64.2 (8.1) & 8.2 (1.1) &59.8 (3.7)& 15.1 (4.9) & 50.8 (0.1) & 30.0 (1.4) \\
\multicolumn{2}{l}{\textbf{Group DRO}} & 68.4 (7.3) & 23.0 (0.3) &72.7 (2.0)& 23.9 (2.1) & 52.1 (0.5) & 30.8 (0.8) \\
\bottomrule
\end{tabular}
\end{sc}
\end{scriptsize}
\end{center}
\end{table*}
\section{Conclusion and Discussions}
\label{sec:conclusion}
We introduced the \gls{IED} assumption, postulating that real-world distributions are composed of elementary distributions that remain invariant across different domains. This invariance thus enables knowledge transfer to unseen domains. Empirical results based on real-world data support the validity of the \gls{IED} assumption. In \textit{Camelyon17}, for example, we would expect similar subpopulations across hospitals, which may constitute elementary domains. In contrast, when the number of elementary domains becomes excessively large, as we suspect in the \textit{RxRx1} dataset, representing them becomes difficult. Further, we presented a modular neural network layer consisting of Gated Domain Units (GDUs) that leverage the \gls{IED} assumption. Our \glspl{GDU} can substantially improve the downstream performance of learning machines in real-world \gls{DG} tasks.
We expect the \gls{IED} assumption and \glspl{GDU} to inspire novel applications that tackle \gls{DG}, for example, adaptively increasing the number of elementary domains during learning until their distributional variance, as a measure of their heterogeneity, reaches a plateau.
\paragraph{Limitations.} A major limitation of our \gls{IED} assumption is the lack of theoretical evidence that it holds in practice. We aim to expand the scope of the theoretical understanding of the \gls{IED} assumption and the \glspl{GDU}.
In addition, the theoretical setting of \textcite{albuquerque2019} (i.e., each elementary domain represents a source domain) seems a promising starting point for extending their generalization guarantee to cases where our \gls{IED} assumption holds.
Second, our GDU layer induces additional computational overhead, since the regularization cost and model size increase with the number of elementary domains. Notably, our improvement is achieved with a relatively small number of elementary domains, indicating that substantially increased complexity is not an inevitable consequence of applying the \gls{DG} layer. Moreover, the improvements are not merely a consequence of increased model capacity, as the ensemble baseline shows.
\paragraph{Societal Impact.}
On the downside, our method may be used to target unknown populations or ethnicities with potentially harmful applications. For example, consider facial recognition in public spaces, where our approach can improve model performance for yet unseen populations, thereby allowing for more extensive surveillance as a whole. However, this impact is a general result of \gls{DG}.
\section*{References}
\printbibliography[heading=none]
\newpage
\section{Introduction}
\label{intro}
LS~I +61 303 is a Be X-ray binary with strong and variable radio emission. It has been suggested as the potential counterpart of the gamma-ray sources 2CG 135+01 and 3EG J0241+6103 (Gregory \& Taylor 1978, Kniffen et al. 1997). Massi et al. (2001) detected relativistic jets in the source, which was then included in the microquasar class. The existence of jets was confirmed by the observations presented by Massi et al. (2004). Very recently, LS~I +61 303 was detected by MAGIC, a very large atmospheric imaging Cherenkov telescope located at La Palma (Albert et al. 2006). This makes LS~I +61 303 the second high-energy microquasar detected by ground-based telescopes. Moreover, MAGIC found clear evidence for variability in the detected signal: the source was stronger at orbital phases 0.5--0.6, whereas the periastron passage occurs at phase 0.23.
At X-rays the source has been detected by XMM-Newton, Beppo-SaX and INTEGRAL (Sidoli et al. 2006, Chernyakova et al. 2006) showing variability on short timescales and a harder spectrum when the source was brighter (Sidoli et al. 2006).
The orbital parameters of LS~I +61 303 have been determined by Casares et al. (2005). The eccentricity of the system is 0.72$\pm$0.15 and the orbital inclination is $i\sim30\pm20$ deg. The nature of the compact object is not well-established. A low-mass black hole cannot be ruled out (Massi 2004). If a mass of $\sim$ 2.5 M$_{\odot}$ is assumed for the compact object, then the mass of the Be star should be $\sim 12$ M$_{\odot}$, for $i=30$ deg.
From a theoretical point of view, both accreting and non-accreting models have been proposed to explain the high-energy emission of LS~I+61 303. In the accreting scenario (e.g. Bosch-Ramon et al. 2006a) a relativistic jet is launched from the surroundings of the compact object. Strong magnetic fields are supported by an underluminous accretion disk, which is advection-dominated. High-energy emission can be produced in the jet by leptonic (e.g. Bosch-Ramon et al. 2006b, Gupta \& B\"ottcher 2006, Bednarek 2006a) or hadronic (Romero et al. 2003, 2005) interactions. In non-accreting models (Dubus 2006b, Chernyakova et al. 2006) particles are accelerated up to relativistic energies in the interacting region where a pulsar wind collides with the stellar wind. Inverse Compton (IC) cooling of electrons (Maraschi \& Treves 1981, Dubus 2006b) or $pp$ interactions (Chernyakova et al. 2006) could result in the production of high-energy photons.
A colliding wind model has been proposed as the mechanism producing the high-energy emission in the $\gamma$-ray binary PSR B1259-63, a system that contains a radio pulsar in an eccentric orbit around a Be star (Aharonian et al. 2005). In the case of LS~I+61 303, the existence of jets extending up to $\sim 400$ AU from the core seems to support an accretion/jet model (see the discussion in Mirabel 2006). Albert et al. (2006) favor a microquasar model and suggest that the detection of the high-energy signal after the periastron passage would support a leptonic origin for the emission. However, $\gamma$-ray propagation effects alone could be responsible for such a behavior (Bednarek 2006a).
In this paper we revisit the hadronic model for high-energy emission in LS~I+61 303 proposed by Romero et al. (2005) before the MAGIC detection. Using an improved accretion model and more sophisticated calculations for the opacity effects inside the binary system we show that a hadronic origin for the emission detected by MAGIC cannot be ruled out.
\section{The model}
\label{sec:1}
We consider gamma-ray production in hadronic interactions of relativistic protons from a jet launched close to the compact object and cold protons from the equatorial wind of the Be star (Romero et al. 2003, 2005).
The star has a slow equatorial wind that fits a density profile $\rho_{\rm w}(r)=\rho_0({r}/{R_*})^{-n}$, with $n=3.2$ and $\rho_0=5\times 10^{-11}$ g cm$^{-3}$ (Mart\'{\i} and Paredes 1995). The wind remains mainly near the equatorial plane, confined in a disk with half-opening angle $\varphi=15^\circ$, and an initial outflowing radial velocity $v_{{\rm r}0}\sim 5$ km s$^{-1}$ (Mart\'{\i} and Paredes 1995). The disk effective temperature close to the star is $T_{\rm disk}=17000$ K. The values of the parameters adopted in our model are listed in Table \ref{tab}.
\begin{table*}[t]
\caption[]{Model parameters}
\label{tab}
\centering
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
Parameter: description [units] & values \\[3pt]
\tableheadseprule\hline\noalign{\smallskip}
$M_{\star}$: stellar mass [$M_{\odot}$] & 12 \\
$R_{\star}$: stellar radius [$R_{\odot}$] & 10 \\
$M_{\rm BH}$: compact object mass [$M_{\odot}$] & 2.5\\
$e$: eccentricity & 0.72 \\
$i$: orbital plane inclination [$^\circ$] & 30\\
$\omega$: angle of periastron [$^\circ$] & 20 \\
$R_{\rm 0}$: initial jet radius [$R_{\rm Sch}$] & 5 \\
$z_0$: jet initial point in the compact object RF [$R_{\rm Sch}$] & 50 \\
$\chi$: jet semi-opening angle tangent & 0.1 \\
$B_{\rm eq}$: equipartition magnetic field at the base of the jet [G] & $10^{8}$ \\
$\alpha$: relativistic proton power-law index & 2.5 \\
$\gamma_p^{\rm min}$: lowest Lorentz factor of relativistic protons & 2 \\
$\gamma_p^{\rm max}$: highest Lorentz factor of relativistic protons & $\sim 10^7$\\
$\Gamma$: macroscopic jet Lorentz factor & 1.25\\
$T_{\star}$: stellar surface temperature [K] & $26000$ \\
$T_{\rm disk}$: circumstellar disk temperature [K] & 17000 \\
$R_{\rm disk}$: circumstellar disk outer radius [$R_{\star}$] & 12 \\
$R_{\rm in}$: radius of the circumstellar disk brightest region [$R_{\star}$] & 3 \\
$L_{\rm disk}$: circumstellar disk luminosity [erg s$^{-1}$]& $2\times 10^{37}$\\
$\varphi$: disk half-opening angle [$^\circ$]& 15 \\
$v_{\rm r0}$: radial velocity of the wind at the base [cm s$^{-1}$] & $3\times 10^5$\\
$v_{\varphi0}$: azimuthal velocity of the wind at the base [cm s$^{-1}$] & $1.13\times 10^7/\sin i$\\
$\rho_0$: density of wind at the base [g cm$^{-3}$]& $5 \times10^{-11}$\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table*}
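As an illustrative sketch (not part of the original model code), the equatorial wind density profile $\rho_{\rm w}(r)=\rho_0(r/R_*)^{-n}$ can be evaluated with the table values above; the constants below are the quoted $\rho_0$, $n$, and $R_\star=10\,R_\odot$:

```python
R_SUN = 6.96e10                 # solar radius [cm]
R_STAR = 10 * R_SUN             # stellar radius, from Table 1
RHO_0 = 5e-11                   # g cm^-3, wind density at the base
N_INDEX = 3.2                   # density power-law index

def wind_density(r_cm):
    """Equatorial wind density rho_w(r) = rho_0 * (r / R_*)**(-n)."""
    return RHO_0 * (r_cm / R_STAR) ** (-N_INDEX)

print(wind_density(R_STAR))       # 5e-11 at the base
print(wind_density(10 * R_STAR))  # ~3e-14 ten stellar radii out
```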
The $pp$ interactions can occur either because there is some mixing of the stellar wind with the jet matter or because some protons escape from the jet into the wind. The level of interaction is quantified through a ``mixing factor'', here assumed as $f_{\rm m}\propto v_{\rm rel}^{-1}\sim 0.1$. This phenomenological prescription accounts for a more efficient rejection of particles at larger relative velocities, which are thought to enhance the boundary effects between the jet and the wind\footnote{The problem of
matter exchange through the boundary layers of a relativistic jet is a
difficult one. Its proper treatment is far beyond the scope of this work.}. At phase $\phi=0.5$ the mixing factor is maximum, reaching $f_{\rm m}\sim 0.5$.
The jet is assumed to be in a steady state. We are not dealing here with the details of the launching mechanism, which is supposed to be related to magneto-centrifugal effects (Blandford \& Payne 1982). The hadronic jet power is a fraction of the accretion power: $L_{\rm jet}=q_{\rm jet} \dot{M}_{\rm accr} c^2$. The value of $q_{\rm jet}$ is taken as a free parameter. In order to reproduce the MAGIC observations we require $q_{\rm jet}\sim 0.1$.
The mass accretion rate $\dot{M}_{\rm accr}$ is strongly dependent on the relative velocity of the compact object with respect to the wind. To estimate the evolution of the accretion rate onto the black hole we have taken into account the azimuthal wind velocity $v_{\phi}\sim1.1\times10^7 (R_\star/r)/ \sin i$ cm s$^{-1}$ (Gregory \& Neish 2002, Casares et al. 2005) in addition to the radial velocity, $v_{\rm r} = 3\times 10^5 (r/R_*)^{n-2}$ cm s$^{-1}$. The Bondi-Hoyle accretion regime\footnote{We notice that the close approach of the stars during the periastron might induce a transient Roche-lobe overflow.} was considered to obtain the values of $\dot{M}_{\rm accr}$ shown in Figure 1. This is just a rough approximation used by several authors (Mart\'{\i} and Paredes 1995, Gregory \& Neish 2002). Close to the compact object there should be a transition to disk accretion, but the absence of disk signatures in the X-ray spectrum suggests that the accretion disk in LS~I+61 303 is small. We shall assume that changes in $\dot{M}_{\rm accr}$ are propagated to the jet on timescales much shorter than the orbital period.
\begin{figure}[h]
\centering
\includegraphics{Romero_fig1.eps}
\caption{Normalized mass accretion rate. At the periastron passage (phase 0.23) it is $\dot{M}_{\rm acc}=1.7\times10^{19}$ g s$^{-1}$.
}\label{Macc}
\end{figure}
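For orientation, the Bondi-Hoyle estimate $\dot{M}_{\rm accr}=4\pi G^2 M_{\rm BH}^2\rho_{\rm w}(r)/v_{\rm rel}^3$ can be sketched as below. This is a minimal illustration only: the relative velocity of $6\times10^7$ cm s$^{-1}$ is a representative guess, not a value fitted in this work.

```python
import math

G = 6.674e-8                  # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33              # g
R_STAR = 10 * 6.96e10         # stellar radius [cm], from Table 1
M_BH = 2.5 * M_SUN            # compact object mass
RHO_0, N_INDEX = 5e-11, 3.2   # wind base density [g cm^-3] and index

def bondi_hoyle_rate(r_cm, v_rel):
    """Mdot = 4 pi G^2 M_BH^2 rho_w(r) / v_rel^3 (spherical approximation)."""
    rho = RHO_0 * (r_cm / R_STAR) ** (-N_INDEX)
    return 4 * math.pi * G**2 * M_BH**2 * rho / v_rel**3

# At the periastron separation of ~2.56 R_* (quoted later in the text), a
# relative velocity of order 6e7 cm/s gives a rate comparable to Figure 1:
print(bondi_hoyle_rate(2.56 * R_STAR, 6e7))   # ~1.6e19 g/s
```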
\section{The very high-energy spectrum and light curve}
\label{sec:2}
In Figure \ref{c-luz} we show the gamma-ray luminosity expected from $pp\rightarrow pp+\pi^0$ and the subsequent neutral pion decays. The light curve is plotted for photon energies $\sim 200$ GeV, which is the lowest energy reported by MAGIC for LS~I +61 303. The differential $\gamma$-ray emissivity is calculated applying the $\delta$-function approximation (Aharonian \& Atoyan 2000). The jet is assumed to expand in a conical way (see parameters in Table \ref{tab}). The magnetic field ($B\leq 10^8$ G at the base of the jet) results from the equipartition condition (see Bosch-Ramon et al. 2006a) and then decreases with the distance as $\propto z^{-2}$. Notice that the field will change with the accretion rate and hence, with the orbital phase. Since hadronic energy losses are negligible, the maximum proton energy has been obtained at the jet formation point by equating the shock acceleration rate to the proton synchrotron rate. This leads to $\gamma_p=7\times 10^6$ at the base of the jet during the periastron passage. Photohadron interactions in the stellar field (the typical energy of the stellar photons is $\sim 1$ eV) are well below the threshold even for the most energetic protons. Such losses are possible with X-ray photons from the accretion disk, but the X-ray luminosity is low enough to neglect them in a first approximation. The differential distribution of relativistic protons has an index $\alpha=2.5$ in order to match the slope of the MAGIC spectrum. We note that the usual particle acceleration through the Fermi mechanism can lead to such a soft distribution for shocks with low Mach number. Actually, $\alpha$ could be time dependent, affected to some extent by the strong variations in the accretion rate.
There are two maxima in the light curve shown in Figure \ref{c-luz}, one at the periastron and the other around phase 0.5, when the relative velocity is minimum. The solid line shows the luminosity corrected by $\gamma \gamma$ absorption in the stellar and disk radiation fields. We have calculated the opacity in the anisotropic stellar field as in Dubus (2006a), taking into account angular effects and the finite size of the star. The orientation of the orbit is given by Casares et al. (2005). The circumstellar disk is modeled as a blackbody of $T_{\rm disk}=17000$ K for the inner region (up to an inner radius of 3 stellar radii) and then with an emissivity that goes as $\propto\rho_{\rm w}^2$ (Waters 1986, Bosch-Ramon et al. 2006b). The disk is truncated at 12 $R_\star$. The inner region of the disk is normalized to emit $L_{\rm disk}\sim 2 \times 10^{37}$ erg s$^{-1}$, which is a non-negligible fraction of the total thermal emission of the system (Casares et al. 2005). The total $\gamma \gamma$ cross-section has been considered for interactions with photons from the disk. The energy dependence of the optical depth from both the disk and stellar contributions is shown in Figure \ref{tau-E}. Figure \ref{tau-200} presents the variation of the total optical depth with the orbital phase for a photon energy of 200 GeV.
The spectral energy distribution (SED) from $pp$ interactions, estimated for different orbital phases, is shown in Figure \ref{spectros}. Close to the periastron the absorption turns out to be significant, and is mainly dominated by the circumstellar disk emission. On the other hand, at phase $\phi=0.53$ most of the radiation escapes from the source, making it detectable at this phase and not during the first accretion peak, when the ambient wind density is maximum. Notice that the more realistic modeling of the wind and the absorption changes these results from those previously presented by Romero et al. (2005)\footnote{Also, in the present case we adopted the usual convention of the orbital phase as proportional to time. In Romero et al. (2005) the orbital parameter was linearly related to the so-called true anomaly.}.
We have computed the high-energy $\gamma$-ray spectra resulting from cascades traversing the anisotropic stellar radiation field during the periastron passage. The star is characterized by a surface temperature $T_\star=26000$ K, and a stellar radius $R_\star=10$ R$_\odot$. Monte Carlo simulations were performed after developing a computational code based on the scheme outlined by
Protheroe (1986) and Protheroe et al. (1992). The $\gamma$-ray spectra produced through IC interactions were approximated as in Jones (1968), modified in a similar way to Bednarek (1997). We introduced the effects of the finite size of the star and the spatial variation of the ambient photon field density by considering the geometric configuration as in Dubus (2006a). The magnetic field in the cascade region, originating in the star, was assumed to be lower than 0.1 G, assuring in this way a strongly Compton-dominated regime.
We followed the cascades induced by the injected gamma-ray flux with the photon index $\alpha=2.5$ mentioned above, during the periastron passage (i.e., an orbital separation of $\sim 2.56\,R_\star$).
In Figure \ref{cascados} we present the results of the simulations. The one-dimensional cascade development is calculated in the anisotropic stellar field along the line of sight, forming a 30 deg angle with the jet axis. At phase 0.5 the opacity is not enough to sustain a local cascade (gamma-rays that escape in the direction of the star can produce cascades in the stellar photosphere, and some gamma-rays could, in principle, be redirected toward the observer, see Bednarek 2006a and 2006b). We notice that at the periastron passage the effect of the cascades is to produce a softer $\gamma$-ray spectrum with index $\alpha\sim 2.8$ in the MAGIC energy range. Such a feature should be detectable through larger exposures than those reported in Albert et al. (2006).
\begin{figure}
\centering
\includegraphics{Romero_fig2.eps}
\caption{Calculated $\gamma$-ray luminosity from hadronic interactions. The dashed line corresponds to the generated luminosity. In solid line, we show the luminosity corrected by absorption in the stellar and disk photon fields, according to the values of $\tau$ shown in Figure \ref{tau-200}.
}\label{c-luz}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[angle=-90,width=110mm]{Romero_fig3.eps}
\caption{Optical depth as function of the $\gamma$-ray photon energy. It was calculated for fixed values of the orbital phase $\phi$ indicated in the figure. The lower panel shows absorption by the stellar photon field, and the upper one, by the photon field of the circumstellar disk.
}\label{tau-E}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics{Romero_fig4.eps}
\caption{Optical depth for the propagation of $\gamma$-ray photons with energy $E=200$ GeV.
}\label{tau-200}
\end{figure}
\begin{figure}
\centering
\includegraphics{Romero_fig5.eps}
\caption{Estimated spectral energy distribution, affected by absorption, for different values of the orbital phase. In this plot $q_{\rm jet}=1$. Note that at $\phi=0.53$ (solid line) the expected signal should be stronger than at the periastron passage ($\phi=0.23$) for the range of energies covered by MAGIC. The actual cutoff for high-energy photons is around 1 PeV.}
\label{spectros}
\end{figure}
\begin{figure}
\centering \includegraphics{Romero_fig6.eps}
\caption{Emerging spectral energy distribution after the IC dominated cascade developed in the stellar photon field (the effect of the decretion disk of the Be star is not included). The straight line only indicates the slope of the injected spectrum, with photon index $\alpha= 2.5$ (which mimics the original proton spectral index). These results were obtained through Monte Carlo simulations, properly taking into account the geometric configuration. The curves were normalized with $q_{\rm jet}=1$ and the actual cutoff is around 1 PeV.}\label{cascados}
\end{figure}
\section{Discussion}
\label{sec:3}
The luminosity obtained at phase 0.5 implies that the constant coupling the jet kinetic power and the accretion luminosity should be $q_{\rm jet}\sim 0.1$ in order to explain the $7\times 10^{33}$ erg s$^{-1}$ inferred from the MAGIC detections. Powerful jets can be present in microquasars, as shown by Gallo et al. (2005) for the case of Cygnus X-1.
At energies below the break of the hadronic spectrum ($\sim$ GeV)
the emission of leptonic origin should change the shape of the SED yielding the EGRET spectrum. The hadronic interactions that produce neutral pions also generate relativistic electrons and positrons by the decay of charged $\pi$-mesons. We have not computed here the emission of such leptons, for we are concerned only with the very high energy range. Some estimations of the synchrotron luminosity of secondaries were presented in Romero et al. (2005).
Both leptonic and hadronic microquasar models for the gamma-ray emission in LS~I +61 303 can reproduce the main features of the observed gamma-ray emission. The detection of the source at phase 0.5 is a consequence of the opacity effects and is independent of the dominant radiative process in the jet. Even in hadronic models like the one presented here, near the periastron passage gamma-rays will initiate electromagnetic cascades with IC emission, hence the actual situation could be a mixture of radiative processes. More realistic simulations of the photospheric cascades should include the effects of circumstellar disk absorption.
In a pure leptonic model the radiative losses equating the acceleration rate fix the maximum energies of the $\gamma$-ray photons at $\sim 1$ TeV (Bosch-Ramon et al. 2006b). Detection of $\gamma$-ray emission of LS~I +61 303 at higher energies than those observed by MAGIC (e.g. at $E\geq 10$ TeV) could give support to a hadronic model, and to the presence of intense magnetic fields at the base of the jet.
A clear distinction about the nature of most of the primary gamma-rays can be established through the detection of neutrinos, which are produced only in hadronic models. In the context of our model, we expect around 3-4 muon neutrinos per squared km per year at Earth, so the source might be detectable by IceCube (Christiansen et al. 2006, see also the discussion in Torres \& Halzen 2006).
As we mentioned in the Introduction, an alternative model for the gamma-ray production in LS~I +61 303 involves an energetic pulsar which could generate a strong wind that would stop the accretion. Particles might be accelerated at the colliding wind region, producing gamma-rays through IC and $pp$ interactions (Dubus 2006b, Chernyakova et al. 2006). However, it is not clear how the morphological and variability properties of the system might be explained in this scenario. What is clear is that the high accretion rates and the observed low X-ray luminosity seem to exclude a scenario based upon an accreting pulsar, since there is no trace of the emission from the heated surface.
Future observations with MAGIC will help to detect the source close to the periastron and the spectral evolution along the orbit, providing in this way more constraints to the models.
\begin{acknowledgements}
We thank Valenti Bosch-Ramon, Wlodek Bednarek, E. Derishev, and Josep M. Paredes for discussions and an anonymous referee for valuable comments. This work was supported by grants PICT 03-13291, BID 1728/OC-AR (ANPCyT) and PIP 5375 (CONICET).
\end{acknowledgements}
\section*{Introduction}
\setlength\parindent{0em}
Zeta functions of various kinds, such as the Hurwitz zeta function, the Epstein zeta function and the Dirichlet $L$-function, are all-pervasive objects in modern mathematics, especially in analytic number theory, and the prototype among them is the famous Riemann zeta function. It is classically defined as the sum of the infinite series \cite{ref1,ref2,refEdwards}
\begin{equation}\label{eq:1Riemann}
\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s}
\end{equation}
with the complex variable \begin{math}s=\sigma+it\end{math}. In particular, the series converges if \begin{math}\sigma=\Re s>1\end{math}. We can extend $\zeta(s)$ from $s$ with $\Re s > 1$ to $s$ with \begin{math}\Re s>0, s\not=1\end{math} by the following formula
\begin{equation}\label{eq:2end}
\eta(s)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^s}=(1-2^{1-s})\zeta(s)
\end{equation}
where $\eta(s)$ is the Dirichlet eta function or alternating eta function.
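The relation \eqref{eq:2end} is easy to check numerically with plain partial sums; the following sketch (in Python, for illustration only) compares both sides at $s=3$:

```python
# Sanity check of eta(s) = (1 - 2**(1 - s)) * zeta(s) for real s > 1,
# using slowly converging partial sums (sufficient for a few digits).
def zeta_partial(s, terms=100000):
    return sum(1.0 / n**s for n in range(1, terms + 1))

def eta_partial(s, terms=100000):
    return sum((-1) ** (n + 1) / n**s for n in range(1, terms + 1))

s = 3.0
print(eta_partial(s), (1 - 2 ** (1 - s)) * zeta_partial(s))  # both ~0.9015427
```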
Historically, people prefer to study the closed forms of the Riemann zeta function at positive integer arguments, since those special values seem to dictate the properties of the objects they are associated with. In condensed matter physics, for instance, the famous Sommerfeld expansion, which is useful for the calculation of the particle number and internal energy of electrons, involves the Riemann zeta function at even integers\cite{refSMFexpan}, while the spin-spin correlation functions of the isotropic spin-$1/2$ Heisenberg model are expressed by $\ln2$ and Riemann zeta functions with odd integer arguments\cite{refHeisenberg}. It was, without doubt, a profound discovery of Euler in 1736 to solve the long-standing Basel problem\cite{ref3}
\begin{equation}\label{eq:1Eulersum}
\zeta(2)=\frac{{\pi}^2}{6}
\end{equation}
It is well known that for positive even integer arguments the Riemann zeta function can be expressed explicitly as\cite{ref6}
\begin{equation}\label{eq:1Beven}
\zeta(2n)=\frac{(-1)^{n+1}(2\pi)^{2n}}{2(2n)!}B_{2n}
\end{equation}
in terms of the Bernoulli numbers \begin{math}B_{n}\end{math}. On the contrary, however, an explicit formula for the Riemann zeta function at odd values is difficult, if not fundamentally impossible, to obtain. Euler himself once conjectured that $\zeta(2n+1)=c(n)\pi^{2n+1}$, where $c$ involves the irrational constant $\eta(1) = \ln 2$\cite{zeta3error}. This suggests that the Riemann zeta function at odd integers produces a recurrence relation that is self-recursive. Even up to now, the Riemann zeta function at positive odd integer arguments can only be expressed by series and integrals (see \eqref{zetafuncInt} and \eqref{eqZetaSeries} for details). One possible integral expression is\cite{ref7}
\begin{equation}\label{eq:1Bodd}
\zeta(2n+1)=\frac{(-1)^{n+1}(2\pi)^{2n+1}}{2(2n+1)!}\int_{0}^{1}B_{2n+1}(x)\cot({\pi}x)dx
\end{equation}
where \begin{math}B_{2n+1}(x)\end{math} are the Bernoulli polynomials. A relevant aspect is that, for the Riemann zeta function, the celebrated Goldbach-Euler theorem\cite{refamm} assumes the elegant form
\begin{equation}\label{eq:1frac}
\sum_{n=2}^{\infty}\textbf{frac}(\zeta(n))=1,
\end{equation}
where $\textbf{frac}(x)=x-[x]$ denotes the fractional part of the real number $x$. It turns out that
\begin{equation}\label{eq:1fracoddeven}
\sum_{n=1}^{\infty}\textbf{frac}(\zeta(2n))=\frac{3}{4}, \sum_{n=1}^{\infty}\textbf{frac}(\zeta(2n+1))=\frac{1}{4}.
\end{equation}
Indeed, the formulas \eqref{eq:1Beven} and \eqref{eq:1Bodd}, along with \eqref{eq:1fracoddeven}, do reveal some similarity between the values of the Riemann zeta function at even and odd arguments. Meanwhile, the calculation of the Riemann zeta function and related series is a hot topic in computational mathematics. The traditional methods are the Euler-Maclaurin formula and the Riemann-Siegel formula, and new algorithms have been developed in earnest ever since\cite{integral,series1,series2,refLima}. Typically, a particular numerical method is limited to a special domain. Therefore, when concentrating on the Riemann zeta function at odd integers, a special method should be constructed in view of the connection between the values of the Riemann zeta function at odd and even integers.
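As a concrete sanity check of \eqref{eq:1Bodd} at $n=1$, note that $B_3(x)=x(x-\tfrac12)(x-1)$ vanishes at $x=0,\tfrac12,1$, so the integrand $B_3(x)\cot(\pi x)$ is bounded on $[0,1]$ and a plain midpoint rule recovers Ap\'ery's constant $\zeta(3)$; a short sketch:

```python
import math

def b3(x):
    """B_3(x) = x^3 - (3/2) x^2 + x/2 = x (x - 1/2)(x - 1)."""
    return x**3 - 1.5 * x**2 + 0.5 * x

N = 100000   # midpoint-rule panels; the integrand is smooth on (0, 1)
integral = sum(b3((k + 0.5) / N) / math.tan(math.pi * (k + 0.5) / N)
               for k in range(N)) / N
zeta3 = (2 * math.pi) ** 3 / (2 * math.factorial(3)) * integral
print(zeta3)   # ~1.2020569 (Apery's constant)
```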
In this paper we mainly obtain a recurrence formula \eqref{eq:4recur} for the Riemann zeta function, based on which we construct an algorithm for the calculation of the Riemann zeta function at odd integers. In addition, numerical calculation shows that the algorithm reaches considerable accuracy already for small odd integer arguments, not to speak of larger ones. Quantitatively, the error bound behaves as $O(10^{-n})$, where $n$ is the argument.
\section*{Notations and Preliminaries}
We begin by recalling the definition of the Bernoulli polynomials \begin{math}B_{n}(x)\end{math} and their basic properties in a nutshell to render the paper essentially self-contained. The generating function of the Bernoulli polynomials \begin{math}B_{n}(x)\end{math} is \cite{ref1,ref2,refEdwards}
\begin{equation}\label{eq:2Ber_poly}
\frac{te^{tx}}{e^t-1}=\sum_{n=0}^{\infty}\frac{B_{n}(x)}{n!}t^n.
\end{equation}
Taking a derivative with respect to $x$ on both sides of \eqref{eq:2Ber_poly}, we find that
\begin{equation}\label{eq:2Bdiff}
B_{n}^{\prime}(x)=nB_{n-1}(x).
\end{equation}
Bernoulli polynomials can also be expressed explicitly from Bernoulli numbers
\begin{equation}\label{eq:2Num_Poly}
B_{n}(x)=\sum_{k=0}^{n}\left(\begin{array}{c}n\\k\end{array}\right)B_{k}x^{n-k}.
\end{equation}
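The explicit formula above, together with the standard recurrence $\sum_{k=0}^{n}\binom{n+1}{k}B_k=0$ for the Bernoulli numbers, can be sketched with exact rational arithmetic as follows:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n_max):
    """Bernoulli numbers B_0..B_{n_max} via sum_{k=0}^{n} C(n+1,k) B_k = 0."""
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

def bernoulli_poly(n, x):
    """B_n(x) = sum_{k=0}^{n} C(n,k) B_k x^{n-k}."""
    B = bernoulli_numbers(n)
    return sum(comb(n, k) * B[k] * Fraction(x) ** (n - k) for k in range(n + 1))

print(bernoulli_poly(2, 0))               # B_2(0) = B_2 = 1/6
print(bernoulli_poly(3, Fraction(1, 2)))  # B_3 has a root at x = 1/2
```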
For convenience, we introduce two kinds of reduced Bernoulli numbers(RBNs), one relates to the even-labeled Bernoulli numbers (denoted by $+$)
\begin{equation}\label{eq-2.5}
B_{n}^{+}=(-1)^{n+1}B_{2n},
\end{equation}
and another relates to the odd-labeled Bernoulli polynomials (denoted by $-$)
\begin{equation}\label{eq-2.6}
B_{n}^{-}=(-1)^{n+1}\int_{0}^{1}B_{2n+1}(x)\cot({\pi}x)dx.
\end{equation}
In this section we will demonstrate the asymptotic representation of the two kinds of RBNs in a uniform framework and establish their integral representation subsequently.
\subsection*{Asymptotic representations of RBNs}
The asymptotic expressions of Bernoulli polynomials at even and odd subscript are respectively\cite{ref11}
\begin{subequations}\label{eq3Bsim}
\begin{equation}\label{eq:3Bsima}
(-1)^{n+1}\frac{(2\pi)^{2n}}{2(2n)!}B_{2n}(x){\sim}\cos(2{\pi}x),
\end{equation}
\begin{equation}\label{eq:3Bsimb}
(-1)^{n+1}\frac{(2\pi)^{2n+1}}{2(2n+1)!}B_{2n+1}(x){\sim}\sin(2{\pi}x).
\end{equation}
\end{subequations}
Moreover, the Bernoulli polynomials can also be expressed in a stronger form based on Fourier sine and cosine series expansion\cite{ref12}
\begin{subequations}\label{eq3Bfourier}
\begin{equation}\label{eq:3Bfouriera}
B_{2n}(x)=(-1)^{n+1}\frac{2(2n)!}{(2\pi)^{2n}}\sum_{k=1}^{\infty}\frac{\cos(2{\pi}kx)}{k^{2n}},
\end{equation}
\begin{equation}\label{eq:3Bfourierb}
B_{2n+1}(x)=(-1)^{n+1}\frac{2(2n+1)!}{(2\pi)^{2n+1}}\sum_{k=1}^{\infty}\frac{\sin(2{\pi}kx)}{k^{2n+1}}.
\end{equation}
\end{subequations}
Obviously \eqref{eq3Bsim} is just a corollary of \eqref{eq3Bfourier}. Combining these results, we obtain the asymptotic behavior of the RBNs
\begin{subequations}
\begin{equation}\label{eq:3asy_Beven}
B_{n}^{+}=(-1)^{n+1}B_{2n}(0){\sim}\frac{2(2n)!}{(2\pi)^{2n}}
\end{equation}
\begin{equation}\label{eq:3asy_Bodd}
B_{n}^{-}=(-1)^{n+1}\int_{0}^{1}B_{2n+1}(x)\cot({\pi}x)dx{\sim}\frac{2(2n+1)!}{(2\pi)^{2n+1}}.
\end{equation}
\end{subequations}
where we use the fact that $\int_{0}^{1}\sin(2{\pi}x)\cot({\pi}x)dx=1$.
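The asymptotic relation \eqref{eq:3asy_Beven} is straightforward to probe numerically; in fact, by \eqref{eq:1Beven} the ratio $B_n^{+}/[2(2n)!/(2\pi)^{2n}]$ equals $\zeta(2n)$ exactly, so it approaches 1 rapidly. A short sketch:

```python
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli_numbers(n_max):
    """B_0..B_{n_max} via the recurrence sum_{k=0}^{n} C(n+1,k) B_k = 0."""
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

B = bernoulli_numbers(20)
ratios = []
for n in (2, 5, 10):
    bn_plus = (-1) ** (n + 1) * B[2 * n]     # B_n^+ = (-1)^{n+1} B_{2n}
    ratios.append(float(bn_plus) * (2 * pi) ** (2 * n) / (2 * factorial(2 * n)))
print(ratios)   # decreasing toward 1: ~[1.0823, 1.0010, 1.0000]
```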
\subsection*{Integral representations of RBNs}
Let us consider two auxiliary integrals\cite{ref6}
\begin{subequations}
\begin{equation}\label{eq:3.7a}
I_{c}(n,m)=\int_{0}^{1}B_{2n}(t)\cos(m{\pi}t)dt
\end{equation}
\begin{equation}\label{eq:3.7b}
I_{s}(n,m)=\int_{0}^{1}B_{2n+1}(t)\sin(m{\pi}t)dt
\end{equation}
\end{subequations}
where $m$ and $n$ are integers and \begin{math}n\geq1\end{math}. In particular, when \begin{math}n=1\end{math}, direct computation shows that
\begin{subequations}\label{eqIntsinecosine}
\begin{align}\label{eq:3Ic1}
I_{c}(1,m)&=\int_{0}^{1}\Big(t^2-t+\frac16\Big)\cos(m{\pi}t)dt \nonumber\\
&=\left\{
\begin{array}{cl}
0 &, m=1,3,5,\cdots\\
\frac{2!}{(m\pi)^2}&, m=2,4,6,\cdots
\end{array}
,
\right.
\end{align}
\begin{align}\label{eq:3Is1}
I_{s}(1,m)&=\int_{0}^{1}\Big(t^3-\frac32t^2+\frac12t\Big)\sin(m{\pi}t)dt \nonumber\\
&=\left\{
\begin{array}{cl}
0 &, m=1,3,5,\cdots\\
\frac{3!}{(m\pi)^3} &, m=2,4,6,\cdots
\end{array}
.
\right.
\end{align}
\end{subequations}
By virtue of \eqref{eq:2Bdiff}, integrating by parts twice readily yields
\begin{subequations}\label{eqIcsrec}
\begin{equation}\label{eq:3Icrec}
I_{c}(n,m)=-\frac{(2n)(2n-1)}{(m{\pi})^2}I_{c}(n-1,m)
\end{equation}
\begin{equation}\label{eq:3Isrec}
I_{s}(n,m)=-\frac{(2n+1)(2n)}{(m{\pi})^2}I_{s}(n-1,m).
\end{equation}
\end{subequations}
Combining \eqref{eqIntsinecosine} and \eqref{eqIcsrec} we find that
\begin{subequations}
\begin{equation}\label{eq:3.10a}
I_{c}(n,m)=\frac{(-1)^{n+1}(2n)!}{(m{\pi})^{2n}}
\end{equation}
\begin{equation}\label{eq:3.10b}
I_{s}(n,m)=\frac{(-1)^{n+1}(2n+1)!}{(m{\pi})^{2n+1}}
\end{equation}
\end{subequations}
hold if $m$ is even. Immediately, the integral representations of the two RBNs \begin{math}B_{n}^{+}\end{math} and \begin{math}B_{n}^{-}\end{math} are
\begin{subequations}
\begin{equation}\label{eq:3.11a}
B_{n}^{+}{\sim}2(-1)^{n+1}I_{c}(n,2)
\end{equation}
\begin{equation}\label{eq:3.11b}
B_{n}^{-}{\sim}2(-1)^{n+1}I_{s}(n,2).
\end{equation}
\end{subequations}
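The closed form \eqref{eq:3.10a} for even $m$ can be verified directly by quadrature, e.g. at $n=2$, $m=2$ with $B_4(t)=t^4-2t^3+t^2-\tfrac{1}{30}$:

```python
import math

def b4(t):
    """B_4(t) = t^4 - 2 t^3 + t^2 - 1/30."""
    return t**4 - 2 * t**3 + t**2 - 1.0 / 30.0

N = 100000   # midpoint rule; the integrand is smooth on [0, 1]
ic = sum(b4((k + 0.5) / N) * math.cos(2 * math.pi * (k + 0.5) / N)
         for k in range(N)) / N
expected = (-1) ** (2 + 1) * math.factorial(4) / (2 * math.pi) ** 4
print(ic, expected)   # both ~ -0.015399
```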
\section*{Algorithm to calculate the Riemann zeta function}
Mathematically, the Riemann zeta function is monotonically decreasing for real $s\geq2$: its values only fall, never rise, as $s$ increases. Besides, $\zeta(2)=\frac{\pi^2}{6}<2$ and $\zeta(s)\to1$ as $s\to+\infty$, thus $0<\zeta(s)-1<1$. Analogously, one can show that $0<\frac{1}{\eta(s)}-1<1$. For brevity we denote the reciprocal function as below
\begin{equation}\label{eq:4def}
\rho(s)\equiv\textbf{frac}\Big(\frac{1}{\eta(s)}\Big)=\frac{1}{\eta(s)}-1.
\end{equation}
Since we are chiefly concerned with the values of the Riemann zeta function at integer arguments, the asymptotic behavior of the ratio of the reciprocal function \eqref{eq:4def} at odd and even integers is of particular interest. Motivated by \eqref{eq:1frac} and \eqref{eq:1fracoddeven}, we demonstrate a formula, the so-called \textit{recurrence formula} (not in a strict sense, though), valid when the argument is a positive integer, and then construct an algorithm to compute the Riemann zeta function based on it.
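For illustration, the reciprocal function $\rho(s)$ of \eqref{eq:4def} can be evaluated directly from the alternating series for the Dirichlet eta function; the short Python sketch below (the truncation length is an arbitrary choice) also confirms numerically that $0<\rho(s)<1$:

```python
import math

def eta(s, terms=100000):
    """Dirichlet eta function via its alternating series; the truncation
    error is bounded by the first omitted term, 1/(terms+1)^s."""
    return sum((-1) ** (k + 1) / k ** s for k in range(1, terms + 1))

def rho(s, terms=100000):
    """Reciprocal function rho(s) = frac(1/eta(s)) = 1/eta(s) - 1 (eq. 4def)."""
    return 1.0 / eta(s, terms) - 1.0
```

For example, $\rho(2)=12/\pi^{2}-1\approx0.2159$, and $\rho$ decreases towards $0$ as $s$ grows.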
\subsection*{Demonstration of the recurrence formula}
\begin{Thm}
\textit{For positive integers $n$, the reciprocal function satisfies the recurrence relation
\begin{equation}\label{eq:4recur}
\lim_{n\rightarrow\infty}\frac{\rho(2n+1)}{\rho(2n)}=\frac12.
\end{equation}}
\end{Thm}
\begin{proof}
Using \eqref{eq:1Beven}, \eqref{eq:1Bodd} and the definition \eqref{eq:4def} of the reciprocal function, we have %
\begin{equation}\label{eq:4.4}
\frac{\rho(2n+1)}{\rho(2n)}=\frac{(2n+1)!-(2^{2n}-1){\pi}^{2n+1}B_{n}^{-}}{(2n)!-(2^{2n-1}-1){\pi}^{2n}B_{n}^{+}}\frac{B_{n}^{+}}{{\pi}B_{n}^{-}}\frac{2^{2n-1}-1}{2^{2n}-1}.
\end{equation}
Since the rightmost factor tends exactly to \begin{math}1/2\end{math} as $n$ grows large, it suffices to prove
\begin{equation}\label{eq:4.5}
\frac{(2n+1)!-(2^{2n}-1){\pi}^{2n+1}B_{n}^{-}}{(2n)!-(2^{2n-1}-1){\pi}^{2n}B_{n}^{+}}\frac{B_{n}^{+}}{{\pi}B_{n}^{-}}{\sim}1
\end{equation}
or equivalently
\begin{equation}\label{eq:4.6}
\frac{(2n+1)!}{{\pi}^{2n+1}}\frac{1}{B_{n}^{-}}-\frac{(2n)!}{{\pi}^{2n}}\frac{1}{B_{n}^{+}}{\sim}2^{2n-1}.
\end{equation}
Substituting the asymptotic formulae \eqref{eq:3asy_Beven} and \eqref{eq:3asy_Bodd} of the two kinds of RBNs immediately verifies \eqref{eq:4.6}, which completes the demonstration of the recurrence formula of the Riemann zeta function.
\end{proof}
As a matter of fact, the validity of \eqref{eq:4recur} extends straightforwardly from positive integers to positive real numbers. We can therefore obtain the asymptotic behavior of the Riemann zeta function as
\begin{equation}\label{zetaValueFromRecurrence}
\frac{1}{\zeta(s)}{\sim}\frac{2^{s-1}-1}{2^{2s-3}}\Big(\frac{2}{\zeta(2)}-1\Big)+\frac{2^{s-1}-1}{2^{s-1}}.
\end{equation}
which follows by using \eqref{eq:4recur}. The applications of \eqref{zetaValueFromRecurrence} are diverse; here we pick just one example relating to the prime number theorem. A positive integer $x$ is $s$-free if and only if, in the prime factorization of $x$, no prime number occurs more than $s-1$ times. Indeed, if \begin{math}Q(x,s)\end{math} denotes the number of $s$-free integers (e.g. 2-free integers being square-free integers) between 1 and $x$, one can show that\cite{refsFree}
\begin{equation}\label{eq:5Q}
Q(x,s)=\frac{x}{\zeta(s)}+O(\sqrt[s]{x}),
\end{equation}
therefore we find that the asymptotic density of $s$-free integers \begin{math}Q(x,s)/x{\sim}\frac{1}{\zeta(s)}\end{math} is nothing but \eqref{zetaValueFromRecurrence}.
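A quick numerical check of \eqref{zetaValueFromRecurrence}, and of the square-free density it implies via \eqref{eq:5Q}, can be sketched in Python. The reference values of $\zeta(s)$ are obtained here by plain truncated summation, an implementation convenience rather than part of the method:

```python
import math

def zeta_direct(s, terms=20000):
    """Reference zeta(s) by truncated summation; the tail is below
    terms^(1-s)/(s-1), ample for the comparisons below when s >= 3."""
    return sum(1.0 / k ** s for k in range(1, terms + 1))

def zeta_recurrence(s):
    """Approximate zeta(s) from the asymptotic relation (eq. zetaValueFromRecurrence)."""
    zeta2 = math.pi ** 2 / 6
    inv = ((2 ** (s - 1) - 1) / 2 ** (2 * s - 3)) * (2 / zeta2 - 1) \
        + (2 ** (s - 1) - 1) / 2 ** (s - 1)
    return 1.0 / inv

def squarefree_count(x):
    """Q(x, 2): number of square-free integers in [1, x], by sieving out
    every multiple of a perfect square d^2 >= 4."""
    free = [True] * (x + 1)
    d = 2
    while d * d <= x:
        for m in range(d * d, x + 1, d * d):
            free[m] = False
        d += 1
    return sum(free[1:])
```

Note that \eqref{zetaValueFromRecurrence} is exact at $s=2$ by construction, and the density of square-free integers up to $10^5$ agrees with $1/\zeta(2)$ to a few parts in $10^3$.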
Another intriguing issue is to what degree \eqref{zetaValueFromRecurrence} is able to reproduce the $\zeta$-values. Figure \ref{figError} is plotted to this end.
\begin{figure}[!htb]
\centerline{\includegraphics[width=8.0cm]{riemannLUOformulafig.eps}\hspace{4mm}}
\caption{\textit{Asymptotic behavior of Riemann zeta function}.~
The solid line represents the approximate values ($\zeta^{\bf{ap}}(s)$) obtained by \eqref{zetaValueFromRecurrence}, while the stars ``*'' represent the accurate values ($\zeta^{\bf{ac}}(s)$) when $s$ is an integer. The crosses ``$\times$'' in the inset indicate the base-10 logarithm of the absolute errors ($\epsilon(s)=\lg\big(\vert{\zeta^{\bf{ap}}(s)-\zeta^{\bf{ac}}(s)}\vert\big)$) at integers.}\label{figError}
\end{figure}
The fact that all the stars ``*'' lie on the solid curve indicates that \eqref{zetaValueFromRecurrence} may be a suitable candidate for the calculation of the Riemann zeta function. The anomalous slope between $s=3$ and $s=4$ in the inset, however, implies that a $\zeta$-value obtained from its nearest neighbors should be much more accurate. We therefore come up with a satisfactory proposal, which is postponed until the next subsection.
\subsection*{Basic ideas for the algorithm}
Abundant methods to evaluate $\zeta(2n)$ have appeared in the mathematical literature ever since Euler's seminal work. In contrast, an explicit formula for odd-argument $\zeta$-values remains an open problem, though some results shed light on it\cite{HeTX1,HeTX2}. By analogy with $\zeta(2n)$, several authors have established series and integral representations of $\zeta(2n+1)$, which, to some degree, illuminate the difficulty of evaluating $\zeta(2n+1)$ as opposed to $\zeta(2n)$. From the viewpoint of numerical methods, one natural way to construct an algorithm for the odd-argument Riemann zeta function is by virtue of the even-argument $\zeta$-values nearest to them. In the current paper, only the two nearest $\zeta$-values are taken into consideration, for simplicity. When $n$ is large enough, \eqref{eq:4recur} can be rewritten as
\begin{subequations}
\begin{equation}\label{eq:5up}
\rho^{l}(2n+1){\sim}\frac12\rho(2n)
\end{equation}
\begin{equation}\label{eq:5low}
\rho^{r}(2n+1){\sim}2\rho(2n+2)
\end{equation}
\end{subequations}
where $\rho^{l}(2n+1)$ and $\rho^{r}(2n+1)$ represent two different representations of the asymptotic behavior of $\rho(2n+1)$. On the face of it, one could use either of the formulae above to calculate the Riemann zeta function at odd integers. Considering, however, that these two formulae give the upper and lower bounds of the zeta-values at odd integers (see Theorem 2), we are led to combine them. There may exist a somewhat mysterious map from $\zeta(2n)$ and $\zeta(2n+2)$ to $\zeta(2n+1)$ that allows us to obtain approximate values of $\zeta(2n+1)$ with higher precision. Before moving on to another theorem, let us first give a proposition relating to the Dirichlet eta function.
\begin{Lem}\label{Olss}
\textit{If $n$ is a positive integer such that \begin{math}n \geq 1\end{math}, the two inequalities hold
\begin{equation}\label{eq:5ineq1}
\frac{4}{\eta(2n+2)}-\frac{1}{\eta(2n)}>3
\end{equation}
\begin{equation}\label{eq:5ineq2}
\eta(2n)>\frac{2^{2n-1}-2}{2^{2n-1}-1}
\end{equation}}
\end{Lem}
These two inequalities appear to be new; we have not encountered them in any literature or monograph. We do not give the details here, however, since the demonstration is rather elementary. The theorem below holds once we take advantage of Lemma 1.
\begin{Thm}
\textit{If $n$ is a positive integer such that \begin{math}n \geq 1\end{math}, the inequality holds
\begin{equation}\label{eq:4ineqchain}
\zeta^{l}(2n+1)>\zeta^{r}(2n+1)>1
\end{equation}}
where $\zeta^{l}(2n+1)$ and $\zeta^{r}(2n+1)$ correspond to $\rho^{l}(2n+1)$ and $\rho^{r}(2n+1)$ respectively.
\end{Thm}
Since the Riemann zeta function is monotonically decreasing, the exact value $\zeta(2n+1)$ lies between $\zeta^{l}(2n+1)$ and $\zeta^{r}(2n+1)$ for any given positive integer $n$. For the benefit of accuracy we regard the geometric mean of \eqref{eq:5up} and \eqref{eq:5low} as the approximate value of the reciprocal function $\rho(2n+1)$, namely
\begin{equation}\label{eq:5mean}
\rho(2n+1){\approx}\sqrt{\rho(2n)\rho(2n+2)}
\end{equation}
which is the most valuable ingredient of our algorithm.
The basic steps for the calculation of $\zeta(2n+1)$ are as follows. Firstly, $\rho(2n)$ and $\rho(2n+2)$ are calculated from \eqref{eq:1Beven}, \eqref{eq:2end} and \eqref{eq:4def} in sequence. Secondly, the value of $\rho(2n+1)$ is obtained in light of \eqref{eq:5mean}. Lastly, the ultimate aim, $\zeta(2n+1)$, is recovered from \eqref{eq:4def} and \eqref{eq:2end} in reverse. Our algorithm involves no loops of any kind; it reads like a formula, and we therefore refer to it as the \textit{direct formula method}.
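These steps admit a minimal Python sketch. For simplicity it generates Bernoulli numbers by the ordinary recursion (adequate for small $n$; see the remarks below concerning large arguments) and uses the identity $\eta(s)=(1-2^{1-s})\zeta(s)$, which we take to be the content of \eqref{eq:2end}:

```python
import math
from fractions import Fraction

def bernoulli(m):
    """Bernoulli numbers B_0..B_m (convention B_1 = -1/2), via the
    standard recursion sum_{j=0}^{k} C(k+1, j) B_j = 0."""
    B = [Fraction(1)]
    for k in range(1, m + 1):
        s = sum(math.comb(k + 1, j) * B[j] for j in range(k))
        B.append(Fraction(-1, k + 1) * s)
    return B

def zeta_even(n, B=None):
    """zeta(2n) from Euler's formula: zeta(2n) = (2 pi)^(2n) |B_2n| / (2 (2n)!)."""
    if B is None:
        B = bernoulli(2 * n)
    return (2 * math.pi) ** (2 * n) * float(abs(B[2 * n])) / (2 * math.factorial(2 * n))

def rho(s, z):
    """Reciprocal function rho(s) = 1/eta(s) - 1, with eta(s) = (1 - 2^(1-s)) zeta(s)."""
    return 1.0 / ((1.0 - 2.0 ** (1 - s)) * z) - 1.0

def zeta_odd(n):
    """Direct formula method for zeta(2n+1): take the geometric mean of
    rho(2n) and rho(2n+2), then invert the definition of rho."""
    B = bernoulli(2 * n + 2)
    r = math.sqrt(rho(2 * n, zeta_even(n, B)) * rho(2 * n + 2, zeta_even(n + 1, B)))
    s = 2 * n + 1
    return 1.0 / ((1.0 + r) * (1.0 - 2.0 ** (1 - s)))
```

For $n=1$ and $n=2$ this reproduces the $\zeta(3)$ and $\zeta(5)$ entries of Table \ref{tab1} within double precision.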
In order to start our method, we need some $\zeta$-values at even integers; for example, $\zeta(2)$ and $\zeta(4)$ are required to get $\zeta(3)$. We can obtain $\zeta(2n)$ through \eqref{eq:1Beven} systematically for small arguments. However, it is almost impossible to obtain Bernoulli numbers by the ordinary recursive methods when the argument is large, so the values of $\zeta(2n)$ are then hard to come by. Many methods for computing Bernoulli numbers have been invented; David Harvey introduced an efficient multimodular algorithm\cite{bernmm} that makes it possible to obtain the Bernoulli number $B_n$ at $n=10^8$. One can also use the intrinsic function \textbf{Zeta}[$s$] in Mathematica, which is likewise based on an efficient algorithm. For convenience, our computational platform is therefore mainly Mathematica, and we regard those values as benchmarks.
\section*{Calculation of Riemann zeta function at odd integers}
The calculation of the Riemann zeta function plays an essential role in the study of number theory and associated subjects such as statistical physics and condensed matter physics. Various approaches have been proposed to accomplish this task\cite{integral,series1,series2,ref14}, especially for the evaluation of the zeta function at integer arguments or in the critical strip (for the computation of Riemann's zeros). Most of the available methods use integral forms of particular functions or recursive series forms. Quite recently, Babolian \textit{et al.} transformed $\zeta(s)$ into appropriate integral forms and introduced a method to compute the Riemann zeta function based on Gauss-Hermite and Gauss-Laguerre quadratures\cite{integral}; numerical results show that 20 points are capable of producing seven-decimal-place accuracy for small arguments. Besides, many rapidly converging series for $\zeta(2n+1)$ have been introduced by Srivastava in a review article\cite{series2} and by other authors\cite{series1,refLima}. In this section we first give some numerical examples to illustrate the accuracy of our method, then compare it with two selected methods to show that it is especially powerful for calculating $\zeta$-values at large odd integer arguments.
\subsection*{Numerical test and error bound of the algorithm}
We regard the $\zeta$-values obtained by Mathematica as benchmarks. The values of the Riemann zeta function at odd integers with $n=1,2,\cdots,10$ obtained by our method (approximate values) are presented in Table \ref{tab1}, together with the accurate values and the absolute errors.
\begin{table}[h!]
\caption{Comparison between accurate values $\zeta^{\bf{ac}}(2n+1)$ and approximate values $\zeta^{\bf{ap}}(2n+1)$.}\label{tab1}
\begin{tabular}{|c|c|c|r|}
\hline
$n$ &$\zeta^{\bf{ap}}(2n+1)$ &$\zeta^{\bf{ac}}(2n+1)$ &\multicolumn{1}{|c|}{Errors} \\
\hline
1 &1.201335874256 &1.202056903160 &-0.007210289040\\
\hline
2 &1.036972837734 &1.036927755143 &0.000045082590\\
\hline
3 &1.008365209797 &1.008349277382 &0.000015932415\\
\hline
4 &1.002011075857 &1.002008392826 &0.000002683031\\
\hline
5 &1.000494555053 &1.000494188604 &0.000000364486\\
\hline
6 &1.000122758824 &1.000122713348 &0.000000045476\\
\hline
7 &1.000030593607 &1.000030588236 &0.000000005371\\
\hline
8 &1.000007637815 &1.000007637198 &0.000000000617\\
\hline
9 &1.000001908283 &1.000001908213 &0.000000000070\\
\hline
10 &1.000000476941 &1.000000476933 &0.000000000008\\
\hline
\end{tabular}
\end{table}
Table \ref{tab1} tells us that taking the geometric mean, rather than either the upper or the lower bound (see Theorem 2), as the best estimate of the Riemann zeta function dramatically reduces the errors: satisfactory accuracy, such as twelve decimal places at the tenth odd argument, can be achieved. It is interesting that only Ap\'{e}ry's constant $\zeta(3)$ is slightly larger than the approximate value obtained by our method. It is also noteworthy that the errors present a staircase configuration, which implies that the error declines by about a factor of ten each time the argument $n$ increases by 1.
In table \ref{tab2} we present the absolute errors $ \epsilon(n)$ versus $n$, for the purpose of exploring the error bound when the argument $n$ is large enough.
\begin{table}[h!]
\caption{The errors of $\zeta(2n+1)$ based on our method.}\label{tab2}
\begin{tabular}{|l|l|l|l|}
\hline
\multicolumn{1}{|c|}{$n$} &\multicolumn{1}{|c|}{Errors} &\multicolumn{1}{|c|}{$n$} &\multicolumn{1}{|c|}{Errors}\\
\hline
$1\times10^2$ &$1.05\times10^{-97}$ &$1\times10^4$ &$1.04\times10^{-9544}$\\
\hline
$2\times10^2$ &$3.94\times10^{-193}$ &$2\times10^4$ &$3.92\times10^{-19087}$\\
\hline
$5\times10^2$ &$2.10\times10^{-479}$ &$5\times10^4$ &$2.08\times10^{-47714}$\\
\hline
$1\times10^3$ &$1.59\times10^{-956}$ &$1\times10^5$ &$1.56\times10^{-95426}$\\
\hline
$2\times10^3$ &$9.09\times10^{-1911}$ &$2\times10^5$ &$8.75\times10^{-190851}$\\
\hline
$5\times10^3$ &$1.70\times10^{-4773}$ &$5\times10^5$ &$1.55\times10^{-477123}$\\
\hline
\end{tabular}
\end{table}
It is clear from Table \ref{tab2} that the error is approximately of order $O(10^{-n})$. Using the least-squares method, we find that
\begin{equation}\label{eq:4errorbound}
\lg(\epsilon(n))=-0.9542n-1.6884.
\end{equation}
This formula suggests that when the argument of the Riemann zeta function is large, our algorithm is powerful enough to obtain the $\zeta$-values at odd integers.
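As a sanity check, the fitted model can be compared against two entries of Table \ref{tab2}; the comparison is carried out in $\log_{10}$ space, since numbers like $10^{-956}$ are far below the double-precision range:

```python
import math

def predicted_lg_error(n):
    """Least-squares error model (eq. 4errorbound): lg eps(n) = -0.9542 n - 1.6884."""
    return -0.9542 * n - 1.6884

# Two entries of Table 2, stored as log10 values to avoid floating-point underflow.
observed_lg = {100: math.log10(1.05) - 97, 1000: math.log10(1.59) - 956}
```

The fitted model matches both tabulated errors to within a fraction of a decimal digit.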
\subsection*{Comparison with existing methods}
In this subsection we compare our algorithm with two existing ones, namely the Gauss-Hermite quadrature (integral method, see \cite{integral}, Corollary 3.1) and a rapidly converging series (series method, see \cite{series2}, eq.~(3.30)). The Gauss-Hermite quadrature formula has the form\cite{integral}
\begin{equation}\label{GaussHermiteformula}
\int_{-\infty}^{\infty}f(x)e^{-x^2}{\rm{d}}x=\sum_{k=1}^{N}w_{k}f(x_k)+R_N
\end{equation}
where $x_k$ is one of the zeros of $H_N(x)$, the Hermite polynomial of degree $N$, and $w_k=-\frac{2^{N+1}N!\sqrt{\pi}}{H_N'(x_k)H_{N+1}(x_k)}$ is the corresponding weight. Here $R_N=\frac{N!\sqrt{\pi}}{2^{N}(2N)!}f^{(2N)}(\eta)$, $\eta\in(-\infty,\infty)$, is the error bound of the above integral. The Riemann zeta function can be transformed into\cite{integral}
\begin{equation}\label{zetafuncInt}
\zeta(s)=\frac{\int_{-\infty}^{\infty}\big(\vert x\vert^{2s-1}{\rm{e}}^{-x^2}/(1-{\rm{e}}^{-x^2})\big){\rm{d}}x}{\int_{-\infty}^{\infty}\vert x\vert^{2s-1}{\rm{e}}^{-x^2}{\rm{d}}x}
\end{equation}
whose numerator and denominator are of the form presented in \eqref{GaussHermiteformula}. Among all the series representations of Riemann zeta function, the series below
\begin{align}\label{eqZetaSeries}
&\zeta(2n+1)=\frac{(-1)^{n-1}(2\pi)^{2n}}{(2n)![2^{2n}(2n-3)-2n+1]}\cdot\nonumber\\
&\Big[\sum_{m=1}^{n-1}(-1)^m\binom{2n-1}{2m-2}\frac{(2m)!(2^{2m}-1)}{(2\pi)^{2m}}\zeta(2m+1)+2\sum_{k=0}^{\infty}\frac{\zeta(2k)}{(2k+2n-1)(k+n)(2k+2n+1)2^{2k}}\Big]
\end{align}
converges most rapidly as pointed out by H.M. Srivastava\cite{series2}. When $n=1$ for instance, the error bound $R_N^{\bf(s)}$ of the $N$-th partial sum of the infinite series in \eqref{eqZetaSeries} satisfies
\begin{align}\label{errorboundSeries}
&\vert R_N^{\bf(s)}\vert =\frac{4\pi^2}{15}\sum_{k=N+1}^{\infty}\frac{\zeta(2k)}{(2k+1)(k+1)(2k+3)4^{k}}\nonumber\\
& <\frac{4\pi^2}{15}\frac{\zeta(2N+2)}{(2N+3)(N+2)(2N+5)}\sum_{k=N+1}^{\infty}\frac{1}{4^{k}}\nonumber\\
& =\frac{4\pi^2}{45}\frac{1}{(2N+3)(N+2)(2N+5)(4^{N}-\frac{1}{2})}
\end{align}
where we have used the fact that $\zeta(s)<\frac{1}{1-2^{1-s}}$ since $\eta(s)<1$. If $N=25$, the error bound is $\vert R_{25}^{\bf(s)}\vert < 1.0\times10^{-20}$, which is superior to that of other rapid series, $\vert R_{25}\vert < 0.9\times10^{-18}$, as noted in \cite{series1,zeta3error}. In particular, when $N$ is larger than some typical value, the asymptotic behavior of \eqref{errorboundSeries} reads
\begin{equation}\label{errorboundSeriesAsym}
\lg(\vert R_N^{\bf(s)}\vert) \sim -2\lg2(N+1)-3\lg N.
\end{equation}
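Both the closed-form bound \eqref{errorboundSeries} and its asymptotic form \eqref{errorboundSeriesAsym} are easy to evaluate; the following sketch confirms the quoted bound at $N=25$:

```python
import math

def series_error_bound(N):
    """Closed-form tail bound (last line of eq. errorboundSeries)."""
    return (4 * math.pi ** 2 / 45) / ((2 * N + 3) * (N + 2) * (2 * N + 5) * (4 ** N - 0.5))

def series_error_bound_asym(N):
    """Asymptotic form (eq. errorboundSeriesAsym) of lg |R_N^(s)|."""
    return -2 * math.log10(2) * (N + 1) - 3 * math.log10(N)
```

At $N=25$ the closed form gives a bound just below $10^{-20}$, and the asymptotic estimate agrees with it to within a fraction of a decimal digit.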
The accuracy of the latter two methods relies on the number of zeros (denoted $N_1$) of the associated polynomial (here the Hermite polynomial) and on the number of terms (denoted $N_2$) of the partial sum of the infinite series, respectively. We set the two integers to the same value, $N_1=N_2=25$, since both methods are efficient at this setting, as has been established in the literature.
\begin{table}[h!]
\caption{Errors of three different methods for $\zeta(2n+1)$.}\label{tab3}
\begin{tabular}{|c|l|c|l|}
\hline
\multirow{2}{*}{$n$}
&\multirow{2}{*}{Integral method}
&Series method
&\multirow{2}{*}{Our method}\\
& &($\times10^{-20}$)&\\
\hline
3 &$2.42\times10^{-7}$ &$3.17434484$ &$1.50\times10^{-5}$\\
\hline
6 &$1.21\times10^{-10}$ &$3.14630746$ &$4.52\times10^{-8}$\\
\hline
9 &$4.31\times10^{-11}$ &$3.14592124$ &$4.99\times10^{-11}$\\
\hline
12 &$1.36\times10^{-12}$ &$3.14591532$ &$9.79\times10^{-14}$\\
\hline
15 &$7.40\times10^{-13}$ &$3.14591522$ &$1.35\times10^{-16}$\\
\hline
18 &$6.20\times10^{-13}$ &$3.14591522$ &$1.85\times10^{-19}$\\
\hline
21 &$8.46\times10^{-14}$ &$3.14591522$ &$2.54\times10^{-22}$\\
\hline
24 &$7.99\times10^{-15}$ &$3.14591522$ &$3.48\times10^{-25}$\\
\hline
27 &$7.77\times10^{-16}$ &$3.14591522$ &$4.78\times10^{-28}$\\
\hline
30 &$1.11\times10^{-16}$ &$3.14591522$ &$6.55\times10^{-31}$\\
\hline
\end{tabular}
\end{table}
The behaviors of the error bounds of the integral method and the series method, as can be seen from Table \ref{tab3}, are totally different. When the argument increases, the errors of the former decrease exponentially from a high level, while the latter remain at a nearly constant low level despite the variation of $n$. Our method exhibits the worst results for small arguments, but its errors decrease dramatically with increasing argument: it overtakes the integral method at $n=12$ and the series method at $n=21$, and is strictly superior to both thereafter. To reach the accuracy obtained by our method, the number of nodes and terms in the above two methods would have to be augmented greatly. In the series method, for instance, terms up to order $n$ in the infinite series would have to be included according to \eqref{eq:4errorbound} and \eqref{errorboundSeriesAsym}. Obviously, this is almost impossible to carry out within limited CPU time when $n$ is an astronomical number.
\section*{Conclusion}
In summary, we first introduced two kinds of reduced Bernoulli numbers (RBNs) and proved their asymptotic behaviors in a uniform framework; their series and integral representations are available at the same time. Moreover, we discovered and proved an original recurrence formula \eqref{eq:4recur} for the Riemann zeta function and constructed an algorithm based on it to evaluate the Riemann zeta function at odd integers. The idea of our method is quite simple, but it turns out to be a competent algorithm. The behavior of the error bound $\epsilon(n)$ is governed by $\lg(\epsilon(n))=-0.9542n-1.6884$, or $\epsilon(n)=O(10^{-n})$ approximately, which suggests that our method is especially suited to the calculation of $\zeta$-values at large odd integer arguments. Our results can therefore also serve as benchmarks to test the accuracy of other related algorithms. More work should be carried out, however, to improve the accuracy at small arguments in the future. Remarkably, the recurrence formula \eqref{eq:4recur} may act as a touchstone in the exploration of closed forms of the Riemann zeta function at positive integers, since it witnesses the connection between $\zeta$-values at odd and even integers.
\section*{Acknowledgements}
The authors would like to show their appreciation to Junesang Choi, Yong Lin and Changle Liu for some useful discussions, and express their thanks to Jinlin Liu and Jiurong Han for their suggestions. Especially, they wish to thank the anonymous referees of this paper for valuable suggestions which have improved the presentation of the paper.
\section{Introduction}
Research into 5d transition metal compounds has blossomed in recent years due to the potential for new electronic and magnetic phenomena to emerge from the interplay between crystal field effects, electronic correlations (Hubbard U) and strong spin-orbit coupling (SOC). Iridium is an appealing element in which to study these physics due to its tendency to adopt a 4+ oxidation state, where the ground electronic configuration 5d$^5$ is predicted to yield a {\it J}~=~$\frac{1}{2}$ electronic state in the presence of strong SOC \cite{MottStrongSOC}. Further, iridates are known to crystallize in numerous structure types, each with unique Ir-Ir connectivity, and this fact has made them an active area of study\cite{Yang-Kim,Na2IrO3Synthesis,Liu,BPhelan}. In aggregate, recent literature on the magnetic and electronic properties of iridates are unified by a desire to probe the influence of strong SOC and U on electronic behavior\cite{BalentsSOCReview}. Iridium--based honeycombs, in particular the {\it A}$_2$IrO$_3$ ({\it A}~=~Li/Na) family, have fallen under intense scrutiny following predictions that they might host a spin-liquid ground state\cite{Na2IrO3Prediction,XtalfieldandcorrelationA2IrO3,A2IrO3KitaevHeisenberg,Kimchi,Andrade,Rau,Lei,Katukuri}. Efforts to produce new iridium oxide honeycomb lattices have intensified\cite{ThinFilmHoneycomb,HarmonicHoneycomb, BaroudiCava}, but 4d and 5d materials containing continuous honeycomb connectivity remain scarce.
Here we report the synthesis and characterization of two new honeycomb iridates based on Ir$^{\rm 5+}$ (5d$^{\rm 4}$).
The first, NaIrO$_3$~is produced via~{\it chimie douce} oxidative deintercalation of sodium from Na$_2$IrO$_3$, which preserves the planar honeycomb network of edge-sharing IrO$_6$ octahedra. The second, Sr$_3$CaIr$_2$O$_9$, is a 2:1 ordered perovskite, which shares its structure with Sr$_3$CaRu$_2$O$_9$\cite{Poeppelmeier}, and consists of layers of IrO$_6$ octahedra forming a buckled honeycomb lattice. Magnetic susceptibility data collected on both compounds demonstrate weak temperature-independent magnetism, and resistivity measurements show semiconducting behavior consistent with hopping conductivity. These compounds are thus ideal to investigate the interplay between non-cubic crystal fields and SOC. By comparison with related 4d and 5d honeycomb compounds with partially filled t$_{\rm 2g}$ subshells, we find that d electron count correlates strongly with the magnetic behavior, and present a thorough structural analysis of Sr$_3$CaIr$_2$O$_9$~and Sr$_3$CaRu$_2$O$_9$~that ultimately suggests the correlation is driven by SOC.
\section{Results}
\subsection{Syntheses and Structures}
NaIrO$_3$~was prepared {\it via}~the oxidative deintercalation reaction: \begin{equation}\rm Na_2IrO_3+\frac{x}{2}~Br_2~\underset{Acetonitrile}{\longrightarrow}~(1-x)~Na_{2}IrO_3+x~NaIrO_3+x~NaBr.\end{equation} The 1:1 relationship between oxidant consumption and Na$^+$ deintercalation of equation 1 was confirmed by laboratory X-ray powder diffraction (XRPD) data collected on powders after vacuum evaporation of the solvent, which showed crystalline NaBr as the only side product. In contrast to the precursor material Na$_2$IrO$_3$~which degrades rapidly in laboratory air\cite{Krizan}, NaIrO$_3$~is stable under ambient conditions, as XRPD data showed no evidence of decomposition after several months of air exposure. However, NaIrO$_3$~degrades rapidly upon heating above 200$^{\circ}$C in air, as evidenced by the broadening of diffraction peaks and appearance of new reflections. Attempts at producing intermediate stoichiometries (i.e. Na$_{2-x}$IrO$_3$) were unsuccessful, and instead produced two-phase samples containing a mixture of Na$_2$IrO$_3$~and NaIrO$_3$. Further, no phase width was observed in NaIrO$_3$, as reactions performed with excess bromine produced only NaIrO$_3$~with no observable difference in lattice parameters or physical properties.
The structure of NaIrO$_3$~was solved in space group $P\bar{1}$ {\it via}~Rietveld refinement to laboratory XRPD data (Fig.~\ref{NaIrO3_rietveld}(a)). Final structural parameters are available in the SI. Solution of the structure in higher symmetry cells was prevented by the high degree of inter-plane stacking disorder, which arises due to the fact that there are several possible translations that can take place in the honeycomb plane from one layer to another, each of which generates a different stacking pattern at minimal energetic cost. Stacking disorder is a well known structural perturbation in many layered compounds, and evidence of this disorder can be plainly seen in the poorly-fit peak broadening and incorrect intensities present at low diffraction angles. Further evidence of this structural disorder is provided by synchrotron XRPD data (Fig.~\ref{NaIrO3_rietveld} (a, inset)), which show highly asymmetric $\langle$0 0 l$\rangle$ peaks--this may occur as a consequence of decomposition in the intense synchrotron beam. Using DIFFaX\cite{diffax}, we modeled the diffraction patterns for three distinct stacking variants. When viewed from perpendicular to the honeycomb plane, these stacking variants can be described based on which atoms in the honeycomb plane eclipse (sit directly above) the Na ion in the adjacent plane. The three primary stacking possibilities are: 1) Na-Ir eclipsed; 2) Na-Na fully eclipsed, which produces infinite Na channels; 3)~staggered, in which none of the atoms in the honeycomb plane eclipse those in the adjacent plane, which is the pattern adopted by the parent material Na$_2$IrO$_3$. The simulated diffraction patterns for these stacking arrangements, along with XRPD data collected on a sample of Na$_2$IrO$_3$~are shown in Fig.~\ref{NaIrO3_rietveld}(b). The observed XRPD pattern closely resembles the simulation obtained for the Na-Ir eclipsed variant, but with noticeable broadening and attenuation of several peaks.
A fourth pattern was simulated based on a model consisting of 95\% Na-Ir eclipsed stacking with a 5\% probability of a fully-eclipsed stacking fault. This model produces a good qualitative agreement with the raw XRPD data, suggesting that the fully eclipsed stacking variant may only be slightly less energetically favorable than the Na-Ir eclipsed variant. The presence of the fully eclipsed stacking fault suggests that further synthetic work may yield control over stacking order in this compound. For an in-depth discussion on stacking disorder in honeycomb iridates, see ref.~\cite{Wallace}. This is the second known structure with the formula NaIrO$_3$: a post-perovskite structure with the same formula is obtained when synthesized under high pressure and temperature \cite{NaIrO3_postPerov}. Neither structural polymorph of NaIrO$_3$~can be synthesized using conventional solid-state techniques, a fact that underscores the novelty and relative instability of the Ir$^{5+}$ oxidation state.
The structure of Sr$_3$CaIr$_2$O$_9$~was solved in space group {\it P}2$_{\rm 1}$/{\it c} (14) {\it via}~Rietveld refinement to neutron powder diffraction (NPD) and XRPD data (Fig.~\ref{srcairo_structure} (a)), and was found to be isostructural to Sr$_3$CaRu$_2$O$_9$\cite{Poeppelmeier}. Final structural parameters are available in the SI. The monoclinic structure of Sr$_3$CaIr$_2$O$_9$, visible in Fig.~\ref{srcairo_structure} (b), can be described as a 2:1 ordered perovskite with the formula A$_3$BB'$_2$O$_9$, where IrO$_6$ and CaO$_6$ octahedra share corners to form the perovskite lattice and Sr$^{2+}$ cations occupy the 12-fold coordinate A sites. In contrast to the Ba$_3$$M$Ir$_2$O$_9$ ({\it M} = Mg, Ca, Sc, Ti, Zn, Sr, Zr, Cd, In, etc.) family of 6H-perovskite-like structures, which host face-sharing Ir$_2$O$_9$ dimers\cite{Hinatsu}, the IrO$_6$ octahedra of Sr$_3$CaIr$_2$O$_9$~share corners to form a buckled honeycomb lattice in the b--c plane. While the structure of Sr$_3$CaIr$_2$O$_9$~may seem odd given the apparent wealth of other barium-based compounds with analogous stoichiometry, it is actually unsurprising that a conventional perovskite lattice is formed in this case due to the improved match in ionic radii between Sr and Ir. The same is true of Sr$_3$CaRu$_2$O$_9$, which also has a similar collection of Ba-based 6H-perovskite-type cousins. While many different perovskites containing iridium have been reported in addition to the widely studied Sr$_2$IrO$_4$, this seems to be a rare example of an iridium-based perovskite containing honeycomb connectivity. Furthermore, the corner-sharing bonding motif of the honeycomb lattice is the first of its kind in iridates, and thus opens a new avenue along which to search for new honeycomb materials that are free from stacking disorder and are air-stable.
The corefinement of the Sr$_3$CaIr$_2$O$_9$~structure to both XRPD and NPD data yielded stable oxygen positions that can be used to investigate the honeycomb connectivity in detail (Fig.~\ref{srcairo_structure} (b)). As a result of octahedral tilting, the Ir--O--Ir bond angles between nearest-neighbor sites on the honeycomb lattice are significantly less than the ideal 180$^{\circ}$ expected for an undistorted perovskite. The Ir--O bond lengths are also distorted: for the first iridium site, three adjacent Ir--O bonds are significantly shorter than the overall average of 1.99 \AA, while the opposite three are significantly longer. A similar ``three short, three long'' pattern is also observed to a lesser degree on the second iridium site, where the three short bonds form a plane that is roughly perpendicular to the plane formed by the longer three bonds. This pattern results in significant variations in the bridging Ir--O--Ir bond lengths: the shortest Ir--Ir distance (3.96(3) \AA) is formed via Ir--O bonds of very similar lengths (2.04(4) \& 2.05(3) \AA, respectively), while the longest (d$_{\rm Ir-Ir}$ = 4.01(1) \AA) is formed via Ir--O bonds of very disproportionate lengths (2.14(3) \& 1.92(4) \AA, respectively). These non-uniform exchange interactions result in an effective dimerization between Ir sites even though the distances between adjacent iridium sites on the honeycomb lattice are remarkably similar. These observations are consistent with a non-spherical perturbation to the d-electron states of Ir$^{5+}$, which will be discussed in detail in the next section.
\subsection{Physical Properties}
Fig.~\ref{honeycomb_magnetization} shows magnetic susceptibility data collected on NaIrO$_3$~and Sr$_3$CaIr$_2$O$_9$, along with linear fits to the $\chi_0$-corrected inverse susceptibilities (inset). In contrast to Na$_2$IrO$_3$, which has a magnetic moment of 2.0(1) $\mu_B$ per Ir site and a Weiss temperature $\theta_W$~=~-159(3)~K, honeycomb NaIrO$_3$~shows only a small temperature-independent susceptibility $\chi_0$~=~3.29$\times$10$^{-4}$~emu/mol~Ir in the high temperature regime and a Curie tail at low temperatures. A Curie-Weiss fit to the magnetic susceptibility data (Fig.~\ref{honeycomb_magnetization} (inset)) yields a Curie constant of C~=~4.3(1)$\times$10$^{-3}$ emu/mol Ir K, and a Weiss temperature $\theta_W$~=~0(2) K. Similar magnetic behavior is observed in Sr$_3$CaIr$_2$O$_9$~(Fig.~\ref{honeycomb_magnetization}). Sr$_3$CaIr$_2$O$_9$~was initially obtained as a phase-pure powder after 6 short heatings, and a Curie-Weiss fit to the magnetic susceptibility data collected on this sample (Fig.~\ref{honeycomb_magnetization} (inset)) yielded a temperature-independent susceptibility of $\chi_0$~=~2.6(1)$\times$10$^{-3}$~emu/mol~Ir, a Curie constant C~=~9.3(1)$\times$10$^{-3}$ emu/mol Ir K, and a Weiss temperature $\theta_W$~=~-8(5)~K. Optimization of the stoichiometry and heating schedule for Sr$_3$CaIr$_2$O$_9$~(see E.S.I.) yielded a sample whose diffraction peaks were significantly sharper and more intense, consistent with an improvement in crystallinity and homogeneity. This improvement resulted in a substantial reduction of C, $\theta_W$ and $\chi_0$ (4.6(1)$\times$10$^{-3}$ emu/mol Ir K, -8(5)~K, and 3.29$\times$10$^{-4}$~emu/mol~Ir, respectively). In all samples, the small observed magnetic moments are likely due to dilute magnetic impurities or orphan spins--roughly 1~\% of free {\it S}~=~$\frac{1}{2}$ spins in the bulk could account for the observed susceptibility.
Furthermore, the small Weiss temperatures observed in all datasets indicate that the magnetic electrons are non-interacting, which lends further weight to the argument for magnetic defects and impurities rather than intrinsic magnetism. These results are thus consistent with a {\it J}~=~0 state in the Ir$^{5+}$ metal centers of both compounds. Moreover, the magnetic moments observed for Ir$^{\rm 5+}$ in these compounds are consistent with what is observed in many other Ir$^{\rm 5+}$ compounds (Table 1).
The observed temperature-independent susceptibility is not due to delocalization of charge carriers to form a metal, as is the case in other iridates such as [Rb/K]Ir$_4$O$_8$\cite{Talanov,Schoop}. Fig.~\ref{resistivity} shows electrical resistivity data collected on polycrystalline bars of NaIrO$_3$~and Sr$_3$CaIr$_2$O$_9$. In both datasets, sample resistance increases on cooling, diverging rapidly below {\it T}$\sim$100 K, and exceeding the detection limit of the instrument below {\it T}$\sim$75 K in the case of NaIrO$_3$. A simple Arrhenius-like activation barrier did not yield a good fit to the resistivity data. Instead, the behavior of both NaIrO$_3$~and Sr$_3$CaIr$_2$O$_9$~is well described by a variable-range hopping model. The conductivity is consistent with hopping in two or three dimensions: a plot of ln$\rho$ vs. $T^{\rm-\frac{1}{3}}$ (Fig. 4(inset)) or ln$\rho$ vs. $T^{\rm-\frac{1}{4}}$ (not shown) yields a linear relationship. This is similar to what is found for Na$_2$IrO$_3$~and indeed many other 4d and 5d honeycombs, and consistent with the structure of these materials\cite{Na2IrO3Synthesis,Luo_Li2RhO3}.
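The variable-range hopping analysis rests on Mott's law $\rho=\rho_0\exp[(T_0/T)^{1/(d+1)}]$, so $\ln\rho$ is linear in $T^{-1/3}$ for two-dimensional hopping. A sketch of the linearization on synthetic data ($\rho_0$ and $T_0$ are illustrative assumptions, not fitted values):

```python
import numpy as np

def vrh_resistivity(T, rho0, T0, d):
    """Mott variable-range hopping: rho = rho0 * exp((T0/T)^(1/(d+1)))."""
    return rho0 * np.exp((T0 / T) ** (1.0 / (d + 1)))

T = np.linspace(80.0, 300.0, 100)
rho = vrh_resistivity(T, rho0=1e-2, T0=1e5, d=2)   # 2D hopping -> T^(-1/3) law

# Linearize: ln(rho) vs T^(-1/3) is a straight line with slope T0^(1/3)
x = T ** (-1.0 / 3.0)
slope, intercept = np.polyfit(x, np.log(rho), 1)
```

The recovered slope equals $T_0^{1/3}$ and the intercept equals $\ln\rho_0$, which is the content of the linear plots in Fig. 4 (inset).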
\section{Discussion}
Both NaIrO$_3$~and Sr$_3$CaIr$_2$O$_9$~are important new additions to a select few 4d and 5d compounds that exhibit honeycomb connectivity between MO$_6$ octahedra, tabulated in Table 2. For a given d electron count, there is little difference in observed magnetic moments between the 4d and 5d groups, despite the fact that both crystal field and SOC energy scales change significantly when moving down the periodic table. While the magnetic moments observed for the d$^5$ configurations are unsurprising given the expected magnetic moment of $\mu$$\sim$1.9$\mu_{B}$ for a {\it S}~=~$\frac{1}{2}$ system, the weak temperature-independent magnetism observed for both 4d$^4$ Ru$^{\rm 4+}$ and 5d$^4$ Ir$^{\rm 5+}$ is an unexpected result in the low-spin octahedral crystal field case, which should have 2 unpaired electrons and total spin {\it S}~=~1. Fig.~\ref{NRGscales}~shows three possible origins of a nonmagnetic state in an Ir$^{\rm 5+}$O$_6$ (Ru$^{\rm 4+}$O$_6$) octahedron. One possibility is that direct overlap between adjacent Ir sites generates new molecular orbitals, which removes the threefold degeneracy of the t$_{\rm 2g}$ manifold and results in a {\it S}~=~0 state, a scenario that has been proposed for Li$_2$RuO$_3$ based on the significant variation in Ru--Ru bond lengths\cite{Khomskii}. The second possibility is that distortions from perfect octahedral symmetry remove the degeneracy of the t$_{\rm 2g}$ manifold locally (i.e. a Jahn-Teller distortion), thus resulting in a completely filled, twofold degenerate ground state. Such distortions are thought to be highly important in the Ir$^{5+}$ perovskites [Sr/Ba]$_2$IrO$_4$, and may also be driven by extended crystal field effects (i.e. interactions beyond nearest-neighbor oxygen and iridium atoms)\cite{Bogdanov,Hozoi}.
The final possibility outlined here is that SOC is the strongest perturbation to the cubic crystal field in Ir$^{\rm 5+}$, which is consistent with electronic structure calculations performed by Phelan {\it et al.} on Sr$_x$La$_{11-x}$Ir$_4$O$_{24}$\cite{BPhelan}. While the computational route invariably leads to the conclusion that SOC produces a nonmagnetic state in Ir$^{\rm 5+}$, one can also reach this conclusion on paper using group theory. To understand how SOC produces a {\it J}~=~0 state in this case, one must generate appropriate term symbols for electronic states in the presence of SOC by referencing the double group for O$_h$ symmetry. In addition to the five irreducible representations of the standard character table for group O$_h$, the double group has six new irreducible representations ($\rm\Gamma_6^{+/-}$, $\rm\Gamma_7^{+/-}$, and $\rm\Gamma_8^{+/-}$)\cite{DresselhausBook}. By reference to the O$_h$ double group, one finds that SOC splits the $\rm\Gamma_{25}^+$ (t$_{\rm 2g}$) manifold into two sets of spin orbitals: a twofold degenerate set with $\rm\Gamma_7^+$ symmetry and a fourfold degenerate set with $\rm\Gamma_8^+$ symmetry. The $\rm\Gamma_{12}^+$ (e$_{\rm g}$) manifold is not split in the presence of SOC, but does acquire a new irreducible representation $\rm\Gamma_8^+$. Because orbitals from the original $\rm\Gamma_{25}^+$ and $\rm\Gamma_{12}^+$ manifolds share the same irreducible representations, they interact to form low- and high-energy pairs, akin to bonding and antibonding orbitals in a conventional MO diagram. The net result for the d$^4$ case is a completely filled $\rm\Gamma_8^+$ manifold, which gives rise to the {\it J}~=~0 state.
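The $\rm\Gamma_7^+$/$\rm\Gamma_8^+$ splitting can be illustrated in the single-particle picture, where the t$_{\rm 2g}$ manifold maps onto an effective $l=1$ angular momentum and SOC reduces to $\lambda\,\mathbf{L}\cdot\mathbf{S}$. Diagonalizing this operator in the six-dimensional t$_{\rm 2g}\otimes$spin space yields a doublet and a quartet, matching the $\rm\Gamma_7^+$ and $\rm\Gamma_8^+$ degeneracies (a sketch; $\lambda=1$ and the effective-$l$ treatment are illustrative, and the sign of the effective coupling decides which multiplet lies lower):

```python
import numpy as np

# Spin-1/2 operators
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# Effective l = 1 orbital operators in the |m = +1, 0, -1> basis
lp = np.array([[0, np.sqrt(2), 0], [0, 0, np.sqrt(2)], [0, 0, 0]], dtype=complex)
lx = 0.5 * (lp + lp.conj().T)
ly = -0.5j * (lp - lp.conj().T)
lz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# H = lambda * L.S on the six t2g x spin states (lambda = 1 for illustration)
lam = 1.0
H = lam * (np.kron(lx, sx) + np.kron(ly, sy) + np.kron(lz, sz))
evals = np.linalg.eigvalsh(H)
# j = 1/2 doublet at -lambda and j = 3/2 quartet at +lambda/2;
# the total splitting is 3*lambda/2
```

For the t$_{\rm 2g}$ shell the effective coupling carries an extra minus sign, placing the $\rm\Gamma_8^+$-like quartet below the $\rm\Gamma_7^+$-like doublet, so a d$^4$ configuration fills the quartet completely.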
While the first scenario (direct metal-metal bonding) is well supported for the case of Li$_2$RuO$_3$, it is not a likely explanation for our Ir$^{\rm 5+}$ compounds, as NaIrO$_3$~and Sr$_3$CaIr$_2$O$_9$~show similar magnetic properties despite having wildly different Ir-Ir connectivity and internuclear distances. Further information can be obtained by examining the Ir--O bond lengths and octahedral distortions, as they provide direct signatures of orbital degeneracy. Fig.~\ref{KDE} compares the Ir--O bond lengths present in Sr$_3$CaIr$_2$O$_9$~and Sr$_3$CaRu$_2$O$_9$~using Gaussian kernel density estimates and illustrations to provide a visual understanding of the distortions present in these octahedra. Both compounds exhibit similar "three short, three long" distortions of the two octahedral metal sites, as discussed in section 2.1. These distortions are distinctly asymmetric, and thus inconsistent with a Jahn--Teller effect. The asymmetry is also likely not due to extended crystal field effects, as both Ir sites have similar extended coordination spheres (proximity to Ca$^{\rm 2+}$ and Sr$^{\rm 2+}$ ions). Rather, the observed asymmetry is likely a direct consequence of SOC, which, due to the mixing of bare-ion orbital identities, gives rise to asymmetric (i.e. direction-dependent) exchange. The fact that the octahedral distortions are amplified in Sr$_3$CaIr$_2$O$_9$~compared to Sr$_3$CaRu$_2$O$_9$~provides further support for the influence of SOC. Future spectroscopic experiments can directly probe for the excited states predicted by this model, and the optical signatures should also respond to application of a magnetic field, thus yielding a powerful method of investigating SOC--driven physics in these compounds.
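The kernel density estimates used in Fig.~\ref{KDE} can be reproduced schematically with a Gaussian KDE; the bond lengths below are illustrative stand-ins for the refined values, and the bandwidth choice is an assumption:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Illustrative "three short, three long" Ir-O bond lengths (angstrom);
# stand-ins, not the refined crystallographic values
bonds = np.array([1.92, 1.93, 1.95, 2.04, 2.05, 2.14])

kde = gaussian_kde(bonds, bw_method=0.3)
grid = np.linspace(1.5, 2.6, 1101)
density = kde(grid)   # normalized probability density over bond length
```

Plotting `density` against `grid` gives the kind of smooth, visually comparable distribution used to contrast the Ir and Ru octahedra.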
\section{Conclusions}
Two new insulating honeycomb iridates, NaIrO$_3$~and Sr$_3$CaIr$_2$O$_9$, have been synthesized and characterized {\it via}~diffraction experiments, magnetometry, and resistivity. In both compounds, iridium exists in a 5+ oxidation state, yielding a 5d$^4$ electronic configuration. Magnetization measurements demonstrate that both compounds exhibit negligible magnetic susceptibility, implying that both of these compounds are in close proximity to either a {\it S}~=~0 or a {\it J}~=~0 magnetic state. Structural studies performed on Sr$_3$CaIr$_2$O$_9$~suggest SOC is the dominant energy scale in determining the ground magnetic state of Ir$^{\rm 5+}$ compounds. Further spectroscopic experiments on these compounds will shed light on the nature of the magnetism observed in iridates.
\section{Acknowledgements}
DCW and TMM gratefully acknowledge support from the David and Lucile Packard Foundation and the American Chemical Society Petroleum Research Fund. The Institute for Quantum Matter is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Material Sciences and Engineering under Grant No. DE-FG02-08ER46544. DCW and TMM are grateful to the NIST Center for Neutron Research for the neutron diffraction beam time obtained on BT-1.
\end{doublespace}
\newpage
\section{Introduction}
In the last decade, numerous exotic states have been observed in experiments, resulting in a renaissance of the study of hadron spectra. Among these exotic hadrons, some are unambiguously beyond the conventional $q\bar{q}$ or $qqq$ model, such as the charged heavy quarkonium-like states $Z_c$ and $Z_b$ \cite{Choi:2007wga,Aaij:2014jqa,Chilikin:2013tch,Chilikin:2014bkk,Ablikim:2013xfr, Ablikim:2013wzq,Ablikim:2013mio,Belle:2011aa,Adachi:2012cx,Esposito:2014hsa,Albaladejo:2015lob,Pilloni:2016obd} and the heavy pentaquark candidates $P_c(4380)$ and $P_c(4450)$~\cite{Aaij:2015tga}.
Various theoretical interpretations concerning the intrinsic structures of these exotic states have been proposed in the literature, such as the threshold effect~\cite{Szczepaniak:2015eza,Swanson:2014tra,Swanson:2015bsa,Bugg:2008wu, Guo:2015umn,Mikhasenko:2015vca,Liu:2015fea,Liu:2013vfa,Liu:2014spa,Liu:2015taa}, the tetraquark state~\cite{ Faccini:2013lda,Wang:2017lot,Ebert:2008kb,Patel:2014vua, Deng:2014gqa,Deng:2015lca,Zhao:2014qva,Chen:2010ze, Qiao:2013dda,Liu:2008qx, Maiani:2007wz}, the hadronic molecule state \cite{Meng:2007fu,Kang:2016ezb,Liu:2008xz, Cleven:2013sq,Wang:2013cya, Ding:2008mp,Aceti:2014uea,He:2015mja,Karliner:2015ina,Chen:2015ata,Zhang:2013aoa,Cui:2013yva,Wang:2013daa,Cui:2013vfa,Khemchandani:2013iwa,Lin:2017mtz}, and the hadro-quarkonium state \cite{Dubynskiy:2008mq,Alberti:2016dru,Li:2013ssa,Anwar:2018bpu}. We refer to Refs. \cite{Chen:2016qju,Guo:2017jvc,Esposito:2016noz, Ali:2017jda} for recent reviews of these studies.
An intriguing characteristic of those $XYZ$ states is that most of them are located close to two-particle thresholds, which inspires many theorists to regard the $XYZ$ states with this characteristic as the candidates of hadronic molecules, i.e., bound systems of two hadrons analogous to conventional nuclei.
In this work, we are interested in the charged heavy quarkonium-like states $Z_c(3900)$, $Z_c(4020)$, $Z_c(4430)$, $Z_b(10610)$, and $Z_b(10650)$. They lie in the vicinity of the $D^*\bar{D}$, $D^*\bar{D}^{*}$, $\bar{D}D^*(2S)$ (or $\bar D^* D(2S)$), $B^*\bar{B}$ and $B^*\bar{B}^*$ thresholds, respectively. Correspondingly, these $Z_c$ and $Z_b$ states $^{[1]}$\footnotetext[1]{If not stated otherwise, we use $Z_c$ ($Z_b$) to represent an arbitrary charged charmonium-like (bottomonium-like) state in the following sections. } can be regarded as hadronic molecules composed of these open-flavor meson pairs.
The decay patterns of $Z_c$ and $Z_b$ also show some interesting characteristics. Both the valence-quark contents and the spin-parity quantum numbers of $Z_c(3900)$ and $Z_c(4020)$ are the same. As hadronic molecule candidates, they should have similar decay patterns in the heavy quark limit \cite{Bondar:2011ev,Du:2016qcr,Voloshin:2012dk}. However, among the hidden-charm channels, the existence of $Z_c(3900)$ was only confirmed in the $J/\psi\pi$ invariant mass spectrum; the $Z_c(4020)$ was observed in the $h_c\pi$ channel and has a mild signal in the $\psi(2S)\pi$ channel, but no obvious signal is observed in the $J/\psi\pi$ channel \cite{ Ablikim:2013mio,Ablikim:2013wzq, Ablikim:2013xfr,Xiao:2013iha,Ablikim:2014dxl}. Structures around $4.02$ GeV and $3.9$ GeV have been observed in the $\psi(2S)\pi$ distributions, but the current experimental conclusion remains indefinite due to the complexity of the data \cite{Ablikim:2017oaf}. Besides, it is found that another charged state
$Z_c(4430)$ prefers to decay into $\psi(2S)\pi$ instead of $J/\psi\pi$~\cite{Choi:2007wga,Aaij:2014jqa,Chilikin:2013tch,Chilikin:2014bkk}. These observations challenge both the theoretical and experimental understanding of the intrinsic structures of exotic hadrons.
Under the molecular state ansatz, a nonrelativistic constituent quark model was introduced in Ref. \cite{Liu:2014eka} to estimate the decay amplitudes of $Z_c$ and $Z_b$, and the numerical results favored the molecular state assignments for $Z_c$ and $Z_b$ by comparing with experiments. But there are several theoretical uncertainties left in Ref. \cite{Liu:2014eka}, which may affect the numerical results significantly. For instance, it is not a good approximation to treat the pion meson, the lightest Nambu-Goldstone boson, as a nonrelativistic system. In addition, the relativistic effect of the light quark in the $Q\bar q$ system is supposed to be even larger than that in the $q\bar q$ mesons, and the wave-functions of charmed and bottom mesons obtained in the nonrelativistic quark model may not work very well. The wave-functions which reflect the long-distance behavior of hadronic molecules are also ignored in Ref.~\cite{Liu:2014eka}.
Taking into account that the scattering amplitude might be very sensitive to the potentials and some relevant spatial wave functions, in this work we attempt to use a relativized quark model to improve the results. More decay channels, such as the one involving $P$-wave heavy quarkonium, will also be studied.
The article is arranged as follows: In Sec.~\ref{sec-model}, the relativized quark model and the quark-interchange model are introduced to describe the hadronic molecule decaying into one heavy quarkonium state and one light meson. The numerical results concerning the branching fraction ratios are displayed in Sec.~III. A summary is given in Sec.~IV.
\section{The Model}\label{sec-model}
\subsection{The relativized quark model}\label{QM}
The relativized quark model is employed in our calculation due to its success in describing both the heavy and light meson spectra \cite{Godfrey:1985xj}.
For the quark-quark interaction $i({p}_i)j({p}_j)\rightarrow i({p'}_i)j({p'}_j)$, the explicit effective Hamiltonian in the momentum space reads
\begin{eqnarray}
\label{w3}
&&H_{I_{ij}}(q)= \frac{\lambda_i}{2}\frac{\lambda_j}{2} \sum_a V^{{ij}}_a =\frac{{\lambda}_{i}}{2} \frac{{\lambda}_{j}}{2}\left [V_{c}(q)+V_{l}(q)+V_{h}(q)+V_{so}(q)+V_{t}(q)\right],
\end{eqnarray}
where the $p_{i(j)}$ and $p'_{i(j)}$ are the momenta of the quark $i(j)$ in the initial and final states. The ${\lambda}$'s are the Gell-Mann matrices acting in color space; for an antiquark, ${\lambda}$ is replaced by $-{\lambda}^*$. The $V_c$, $V_l$, $V_h$, $V_{so}$ and $V_t$ represent the one-gluon-exchange (OGE) Coulomb-like interaction, linear confinement interaction, hyperfine interaction, spin-orbit interaction, and tensor interaction, respectively. Their explicit forms are
\begin{eqnarray}
\label{potential}
V_c(q)&=&\sum_k{\omega^{\frac{1}{2}}_{ij}}\frac{4\pi\alpha_ke^{-\frac{q^2}{4\tau^2_{kij}}}}{q^2}{\omega^{\frac{1}{2}}_{ij}}, \nonumber \\
V_l(q)&=&\frac{6\pi b}{q^4}e^{-q^2/{4\sigma_{ij}^2}}, \nonumber \\
V_h(q)&=&-\sum_k{\rho_{ij}^{1+\frac{1}{2}\epsilon_{const}}}\frac{8\pi\alpha_ke^{-\frac{q^2}{4\tau^2_{kij}}}}{3m_im_j}\mathbf{s}_i\cdot\mathbf{s}_j{\rho_{ij}^{1+\frac{1}{2}\epsilon_{const}}},\nonumber \\
V^G_{so}(q)&=&\sum_k\frac{4\pi\alpha_ke^{-\frac{\mathbf{q}^2}{4\tau^2_{kij}}}}{q^2}\Big[\rho^{1+\frac{1}{2}\epsilon_{so(v)}}_{ii}{\frac{i(\mathbf{q}\times\mathbf{P}_i)\cdot{\mathbf{s}_i}}{2m^2_i}}{\rho^{1+\frac{1}{2}\epsilon_{so(v)}}_{ii}}-\rho^{1+\frac{1}{2}\epsilon_{so(v)}}_{jj}{\frac{i(\mathbf{q}\times\mathbf{P}_j)\cdot\mathbf{s}_j}{2m^2_j}}{\rho^{1+\frac{1}{2}\epsilon_{so(v)}}_{jj}}\nonumber \\
&-&\rho^{1+\frac{1}{2}\epsilon_{so(v)}}_{ij}{\frac{i(\mathbf{q}\times\mathbf{P}_j)\cdot\mathbf{s}_i-i(\mathbf{q}\times\mathbf{P}_i)\cdot\mathbf{s}_j}{m_im_j}}{\rho^{1+\frac{1}{2}\epsilon_{so(v)}}_{ij}}\Big],\nonumber \\
V^l_{so}(q)&=&-\frac{6\pi b}{q^4}e^{-q^2/{4\sigma_{ij}^2}}\Big[\rho^{1+\frac{1}{2}\epsilon_{so(s)}}_{ii}{\frac{i(\mathbf{q}\times\mathbf{P}_i)\cdot{\mathbf{s}}_i}{2m^2_i}}{\rho^{1+\frac{1}{2}\epsilon_{so(s)}}_{ii}}-\rho^{1+\frac{1}{2}\epsilon_{so(s)}}_{jj}{\frac{i(\mathbf{q}\times\mathbf{{P}}_j)\cdot\mathbf{s}_j}{2m^2_j}}{\rho^{1+\frac{1}{2}\epsilon_{so(s)}}_{jj}}\Big],\nonumber \\
V_{t}(q)&=&{\rho_{ij}^{1+\frac{1}{2}\epsilon_{tens}}}\frac{4\pi\sum_k\alpha_ke^{-\frac{q^2}{4\tau^2_{kij}}}}{m_im_jq^2}[({\mathbf{s}}_i\cdot{\mathbf{q}})( {\mathbf{s}}_j\cdot{\mathbf{q}} )-\frac{q^2}{3} \mathbf{s}_i\cdot\mathbf{s}_j]{\rho_{ij}^{1+\frac{1}{2}\epsilon_{tens}}},
\end{eqnarray}
with $\mathbf{q}=\mathbf{p}_{i(j)}-\mathbf{p}'_{i(j)}$, and $\mathbf{P}_{i(j)}=\frac{\mathbf{p}_{i(j)}+\mathbf{p}'_{i(j)}}{2}$. The $\mathbf{s}_{i(j)}$ and $m_{i(j)}$ represent the spin operator and mass of the quark with index $i$($j$), respectively. The spin-orbit interaction is divided into two parts, i.e., $V_{so}=V^G_{so}+V^l_{so}$, where the superscripts $G$ and $l$ indicate the interactions arising from the OGE and the linear confinement potentials, respectively.
The $\alpha(q^2)$ is the running coupling constant calculated in perturbative QCD and parametrized by three Gaussian functions as follows to simplify the numerical calculation,
\begin{eqnarray}
\alpha(q^2)=\frac{12\pi}{(33-2N_f)\ln(q^2/\Lambda^2)}\approx \sum^{3}_{k=1}\alpha_ke^{-q^2/4{\gamma}^2_k},
\end{eqnarray}
where $\Lambda$ denotes the QCD scale and $\Lambda=200$ MeV. $N_f$ is the number of quark flavors that satisfy $4m^2_f<q^2$ ($m_f$ is the quark mass), and $k$ denotes the index of the Gaussian function. The $\alpha_k$ is the coefficient of the $k$th Gaussian function in the parametrization, and $\gamma_{k}$ is its width parameter.
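The accuracy of the three-Gaussian parametrization can be checked with the parameters of Table \ref{par}: near $q=2$ GeV (where $N_f=3$) the Gaussian sum agrees with the perturbative form at the percent level, while at $q\to 0$ it saturates at $\alpha_s=\sum_k\alpha_k=0.60$ instead of diverging. A numerical sketch:

```python
import numpy as np

# Table 1 parameters
alpha_k = np.array([0.25, 0.15, 0.20])
gamma_k = np.array([0.5, 1.58, 15.81])   # GeV
Lam = 0.2                                # GeV
Nf = 3                                   # light flavors active near q = 2 GeV

def alpha_pert(q2):
    """One-loop perturbative running coupling (q2 in GeV^2)."""
    return 12.0 * np.pi / ((33.0 - 2.0 * Nf) * np.log(q2 / Lam**2))

def alpha_gauss(q2):
    """Three-Gaussian parametrization with the Table 1 coefficients."""
    return float(np.sum(alpha_k * np.exp(-q2 / (4.0 * gamma_k**2))))
```

For example, at $q^2=(2~\text{GeV})^2$ both forms give $\alpha\approx 0.30$, while `alpha_gauss(0.0)` returns the saturation value 0.60.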
Compared with the nonrelativistic quark model employed in Ref.~\cite{Liu:2014eka}, two factors
\begin{eqnarray}
\omega_{ij}=1+\frac{p_ip_j}{E_iE_j}~~\text{and}~~\rho_{ij}=\frac{m_im_j}{E_iE_j},
\end{eqnarray}
are introduced to describe the dependence of the potentials on the momenta of the interacting quarks. Moreover, a smearing function ${\sigma^3_{ij}\over \pi^{3/2}}e^{-\sigma^2_{ij}r^2}$ is introduced to account for the nonlocal effect, since the interactions depend on both $\mathbf{P}_{i(j)}$ and $\mathbf{q}$. The relevant parameters in Eq.~(\ref{potential}) read
\begin{eqnarray}
\label{w42}
\sigma^2_{ij}&=&\sigma^2_0(\frac{1}{2}+\frac{1}{2}(\frac{4m_im_j}{(m_i+m_j)^2})^4)+s^2(\frac{2m_im_j}{m_i+m_j})^2,\nonumber\\
\tau^{-2}_{kij}&=&{\gamma_k}^{-2}+{\sigma_{ij}}^{-2}.
\end{eqnarray}
The values of all the parameters are determined by fitting the meson mass spectra and are listed in Table \ref{par}.
\begin{table}
\caption{The values of the parameters. }\label{par}
\begin{tabular}{cccccccccc}
\toprule[1pt]\toprule[1pt]
$m_{u(d)}$ {(}GeV{)} & $m_{c}$ {(}GeV{)} & $m_{b}$ {(}GeV{)} & $b$ {(}$\text{GeV}^{2}${)} & $\epsilon_{const}$ & $\epsilon_{so(v)}$ & $\epsilon_{so(s)}$ & $\epsilon_{tens}$ & $\sigma_{0}$ {(}GeV{)} & $s$\tabularnewline
$0.220$ & $1.628$ & $4.977$ & $0.18$ & $-0.168$ & $-0.035$ & $0.055$ & $0.025$ & $1.80$ & $1.55$\tabularnewline
$\alpha_{s}$ & $\alpha_{1}$ & $\alpha_{2}$ & $\alpha_{3}$ & $\gamma_{1}$ (GeV) & $\gamma_{2}$ (GeV)& $\gamma_{3}$ (GeV)& $\text{\ensuremath{\Lambda}}$ (MeV) & & \tabularnewline
$0.60$ & $0.25$ & $0.15$ & $0.20$ & $0.5$ & $1.58$ & $15.81$ & 200 & & \tabularnewline
\bottomrule[1pt]\bottomrule[1pt]
\end{tabular}
\end{table}
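For orientation, the smearing widths can be evaluated from the Table \ref{par} parameters for the relevant quark pairs. The sketch below assumes the original Godfrey--Isgur convention $\sigma_{ij}^2=\sigma_0^2(\ldots)+s^2(\ldots)^2$ with a dimensionless $s$:

```python
import numpy as np

# Table 1 parameters (masses in GeV)
m = {"q": 0.220, "c": 1.628, "b": 4.977}
sigma0, s = 1.80, 1.55
gamma_k = np.array([0.5, 1.58, 15.81])   # GeV

def sigma_ij(mi, mj):
    """Godfrey-Isgur smearing width (sigma_ij^2 convention assumed)."""
    mu4 = (4.0 * mi * mj / (mi + mj) ** 2) ** 4
    red2 = (2.0 * mi * mj / (mi + mj)) ** 2
    return np.sqrt(sigma0**2 * (0.5 + 0.5 * mu4) + s**2 * red2)

def tau_kij(mi, mj):
    """Combined Gaussian ranges, tau_kij^-2 = gamma_k^-2 + sigma_ij^-2."""
    return 1.0 / np.sqrt(gamma_k**-2.0 + sigma_ij(mi, mj)**-2.0)
```

Numerically, $\sigma_{q\bar q}\approx 1.83$ GeV while $\sigma_{c\bar c}\approx 3.10$ GeV, and each $\tau_{kij}$ is necessarily smaller than both $\gamma_k$ and $\sigma_{ij}$.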
The relevant spectra calculated in the relativized quark model are displayed in Tables \ref{massspectrumD} and \ref{massspectrumB} in Appendix A, where the nonrelativistic quark model results and the experimental data are also listed for comparison. For the heavy quarkonium, the relativistic effects can be neglected due to the large masses of the heavy quarks. The mass spectra match the experimental data well in both the nonrelativistic and relativized quark models. However, the mass spectra of the open-flavor mesons in the relativized quark model fit the experimental data much better than those in the nonrelativistic quark model, especially for the radially excited states. It indicates that the relativistic effects are important in the open-flavor meson regime. For the light mesons, the relativistic effects are also not negligible. In Ref. \cite{Godfrey:1985xj}, the authors showed that the mass spectra of the light mesons and their excitations are well reproduced in the relativized quark model. Therefore one can also expect that the relevant decay amplitudes calculated in the relativized quark model are more reliable than those in the nonrelativistic quark model.
\subsection{The quark-interchange model}\label{sec2}
The exotic heavy quarkonium-like states $Z_c(3900)$, $Z_c(4020)$, $Z_b(10610)$ and $Z_b(10650)$ are generally supposed to be the hadronic molecules composed of $\bar DD^{\ast}+c.c.$, $\bar D^{\ast}D^{\ast}$, $\bar{B}B^{\ast}+c.c.$ and $\bar{B}^{\ast}B^{\ast}$, respectively. This is mainly because their masses are close to the thresholds of the corresponding components. However, the hadronic molecule interpretations are not well established yet. To understand these exotic states better, it is necessary to study their properties from various aspects. The strong decay modes of a hadron usually have close connections with its intrinsic structure.
For a hadronic molecule, we can describe its strong decays in terms of the near threshold scattering between the two hadron components.
We consider the meson-meson scattering process
\begin{eqnarray}
A(q\bar Q)+B(Q\bar q)\rightarrow C(q\bar q) +D(Q\bar Q),
\end{eqnarray}
where $q$ ($\bar q$) and $Q$ ($\bar Q$) are the light and heavy quarks (antiquarks) in the mesons. To calculate the amplitude at the quark level, we employ the Barnes-Swanson quark-interchange model introduced in Refs. \cite{Ackleh:1991dy,Wong:2001td,Barnes:2003dg}.
In this model, the meson-meson scattering amplitudes are evaluated at Born order with the interquark Hamiltonian, which is decomposed as
\begin{eqnarray}\label{prior-post}
H=\sum^4_{i=1}\sqrt{{\mathbf{p}}_i ^2+m^2_i}+\sum_{i<j}H_{I_{ij}}=H^0_{AB}+H^I_{AB}=H^0_{CD}+H^I_{CD},
\end{eqnarray}
where $H^0$ is the Hamiltonian of two free mesons, $H^I_{AB}$ ($H^I_{CD}$) represents the interactions between the mesons $A$ and $B$ ($C$ and $D$). For a molecular state decaying into a heavy quarkonium and a light meson, the heavy quark and antiquark in the initial open-flavor mesons have to form the final heavy quarkonium state, therefore the short-range interactions are expected to play the dominant role in such decays. The molecular state wave function can account for part of the long-range effects, which will be discussed later. In Ref.~\cite{Capstick:1986bm}, the three-quark interactions in the baryons are treated perturbatively. Similarly, we do not take into account the three-quark and four-quark interactions in this work.
According to Eq. (\ref{prior-post}), we obtain the ``Prior" and the ``Post" $T$-matrix elements as illustrated in Fig.~\ref{prior} and Fig.~\ref{post}, respectively. Their difference is referred to as the ``Prior-Post" ambiguity, which introduces the uncertainty to the decay widths and is expected to vanish if all of the pertinent wave functions are precise solutions of $H^0$ \cite{Ackleh:1991dy}. In this work, we take the average values of the ``Prior" and ``Post" decay widths to calculate the decay ratios.
\begin{figure}[tb]
\centering
\includegraphics[width=0.5\hsize]{prior.eps}\\
\caption{Prior diagram of scattering process $AB\rightarrow CD$. The curly line denotes the interactions between the quarks. }\label{prior}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.5\hsize]{post.eps}\\
\caption{Post diagram of scattering process $AB\rightarrow CD$.}\label{post}
\end{figure}
At the quark level, the amplitude for a hadronic molecule $Z_c$ ($Z_b$) decaying into a charmonium (bottomonium) via emitting a light meson reads
\begin{eqnarray}
\label{w6}
T^{J}_{J_z} &=& \langle [\Psi_C(q\bar q)\Psi_D(Q \bar Q)\varphi_{CD}^{rel}(q\bar q, Q\bar Q)]^{J'}_{J'_z}|{\sum_{i<j} H_{I_{ij}} }| [\Psi_A(q \bar Q) \Psi_B(Q \bar q) \varphi_{AB}^{rel}]^J_{J_z}\rangle \nonumber\\
&=&I_{\text{flavor}}\times I_{\text{color}} \times I_{\text{spin-space}}, \nonumber \\
\Psi&=& \phi_c\otimes \phi_f\otimes \varphi,\nonumber \\
I_{\text{flavor}} &=& \langle \phi_f(C)\phi_f(D)|\phi_f(A)\phi_f(B)\rangle , \nonumber \\
I_{\text{color}}&=&\langle \phi_c(C)\phi_c(D)|\frac{\lambda_i}{2}\frac{\lambda_j}{2} |\phi_c(A)\phi_c(B)\rangle , \nonumber \\
I_{\text{spin-space}}&=&\delta_{J_zJ'_z}\langle [\varphi_C\varphi_D\varphi_{CD}^{rel}]^{J'}_{J'_z}|{\sum_{i<j}\sum_a V^{{ij}}_a }| [\varphi_A \varphi_B\varphi_{AB}^{rel}]^J_{J_z}\rangle,
\end{eqnarray}
where $J$ $(J')$ and $J_z$ $(J'_z)$ are the total angular momentum and its $z$-component of the initial (final) state. $\Psi$ is the meson wave function obtained in the relativized quark model in Sec. \ref{QM}. It is composed of the $\phi_f$, $\phi_c$, and $\varphi$, which represent the flavor, color, and spin-space wave functions$^{[2]}$\footnotetext[2]{In this paper we work in the momentum space to calculate the amplitude. It is of course also feasible to work in the coordinate space.}, respectively. Correspondingly, the $T$-matrix element is factored into the product of three matrix elements $I_{\text{flavor}}$, $I_\text{{color}}$, and $I_{\text{spin-space}}$. In the flavor space, the $I_{\text{flavor}}$ cancels out when we calculate the branching fraction ratios of the molecular states decaying into the ground and radially excited heavy quarkonium states. The $I_\text{{color}}$ takes $\frac{4}{9}$ for $qq$ and $\bar q \bar q$, and $-\frac{4}{9}$ for $q \bar q$ interactions, respectively.
The $\varphi_{AB}^{rel}$ ($\varphi_{CD}^{rel}$) represents the relative wave function of the $AB$ ($CD$) system in the momentum space. We assume the $Z_c$ ($Z_b$) state with $J^P=1^+$ to be an $S$-wave molecule and neglect the contributions from the higher orbital excitations. Then, a Gaussian wave function is introduced to approximately describe $\varphi_{AB}^{rel}$:
\begin{eqnarray}
\label{ww7}
&&\varphi_{AB}^{rel}(\mathbf P_A)=\frac{1}{\pi^{3/4}\beta^{3/2}}\text{Exp}[-\frac{{\mathbf P^2_A}}{2\beta^2}],\nonumber \\
&& r_0={\langle r^2 \rangle}^{\frac{1}{2}}=\sqrt{3\over 2}{1\over \beta}, ~~P_0={ \langle P^2_A \rangle}^{\frac{1}{2}}=\sqrt{3\over 2}{ \beta},
\end{eqnarray}
where $\mathbf P_A$ is the c.m. momentum of the constituent meson $A$, and $\beta$ is related to the root mean square radius $r_0$ and momentum $P_0$ of the $Z_c$ ($Z_b$) state. The $r_0$-dependence of the branching fraction ratios is discussed in the next section.
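The relations in Eq. (\ref{ww7}) between $\beta$, $r_0$, and $P_0$ can be verified numerically; the value of $\beta$ below is only a representative molecular-scale choice:

```python
import numpy as np

beta = 0.4           # GeV; a representative molecular-scale choice (assumption)
hbar_c = 0.19733     # GeV fm, for converting r0 to fm

# Radial grid for |phi_rel(P)|^2 = pi^(-3/2) beta^-3 exp(-P^2/beta^2)
dP = 1e-4
P = np.arange(1, 60001) * dP                       # up to 6 GeV = 15 beta
phi2 = np.exp(-P**2 / beta**2) / (np.pi**1.5 * beta**3)

norm = np.sum(4.0 * np.pi * P**2 * phi2) * dP      # should be 1
P2 = np.sum(4.0 * np.pi * P**4 * phi2) * dP        # <P_A^2> = (3/2) beta^2
P0 = np.sqrt(P2)

r0 = np.sqrt(1.5) / beta    # Fourier-conjugate rms radius from the text
r0_fm = r0 * hbar_c
```

For $\beta=0.4$ GeV one finds $P_0=\sqrt{3/2}\,\beta\approx 0.49$ GeV and $r_0\approx 0.6$ fm, a typical molecular-scale size.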
The $I_{\text{spin-space}}$ in Eq. (\ref{w6}) is the matrix element in the spin and spatial space and reads
\begin{eqnarray}
\label{w7}
\small
I_{\text{spin-space}}&=&\Big\langle [\varphi_C\varphi_D\varphi^{rel}_{CD}]^{J'}_{J_z'}|{ H_{I_{ij}} }| [\varphi_A \varphi_B\varphi^{rel}_{AB}]^J_{J_z}\Big\rangle \nonumber\\
&=&\sum_a\sum_{ij}\Big \langle \left[[{{(\varphi_C\chi_C)^{J_C}}(\varphi_D\chi_D)^{J_D}}]^{J_{CD}} (\varphi^{rel}_{CD})^{L_{CD}}\right]_{J'_z}^{J'} |
V^{ij}_a|\, \left [{(\varphi_A\chi_A)^{J_A}(\varphi_B\chi_B)^{J_B}}\varphi^{rel}_{AB}\right ]_{J_z}^{J} \Big \rangle \nonumber\\
&=&\sum_a\sum_{ij}{\sum_{S,L,{S',L',{L''}}}}\delta_{JJ'}\delta_{J_zJ'_z}\mathscr{W}^{S,L}_{S',L',L''} (-1)^{J+S+L''}
\left \{
\begin{array}{c c c}
S' & S & t \\
L & L'' & J \\
\end{array}
\right \}
\nonumber\\
& \times & \left\langle \,\left[(\Phi_C \Phi_D)^{L'} {\varphi^{rel}_{CD}}^{L_{CD}}\right]^{L''} | |f(q^2)v^t(\mathbf q)||\,
\Big (\Phi_A\Phi_B\Phi_{AB} \Big)^{L} \right\rangle
\nonumber\\
& \times &
\Big\langle (\chi_C\chi_D)^{S'}|| v^t(\mathbf s)||(\chi_A\chi_B)^{S}\Big\rangle,
\end{eqnarray}
where
\begin{eqnarray}
\mathscr{W}^{S,L}_{S',L',L''} &=&(-1)^{{L_{CD}}+J_{CD} +S'+L''}\hat{S} \hat{L} \hat{J_A} \hat{J_B}\hat{S'} \hat{L'} \hat{J_C} \hat{J_D}\hat{L''}\hat{J_{CD}} \nonumber\\
&\times & \left \{
\begin{array}{c c c}
S_A & S_B & S \\
L_A & L_B & L \\
J_A & J_B & J
\end{array}
\right \}\left \{
\begin{array}{c c c}
S_C & S_D & S' \\
L_C & L_D & L' \\
J_C & J_D & J_{CD}
\end{array}
\right \}\left \{
\begin{array}{c c c}
L_{CD} & L' & L'' \\
S' & J' & J_{CD} \\
\end{array}
\right \},
\end{eqnarray}
with $\hat X\equiv\sqrt{2X+1}$. The $\Phi$ and $\chi$ represent the spatial and spin wave functions of pertinent mesons, respectively. For the meson $M$ ($M=A$, $B$, $C$, and $D$), $S_{M}$, $L_M$ and $J_M$ denote its spin, orbital angular momentum, and total angular momentum, respectively.
In the $S$-wave molecular state, the $J_A$ and $J_B$ couple into the total angular momentum $J$. In the final state, the $J_C$ couples with $J_D$ to form the intermediate angular momentum $J_{CD}$. Then, the coupling between $J_{CD}$ and the relative orbital angular momentum $L_{CD}$ leads to the total angular momentum $J'$. Via the spin rearrangement, we decompose the $J$ $(J')$ into the total spin $S$ $(S')$ and the orbital angular momentum $L$ $(L'')$ of the initial (final) state with the coefficients $\mathscr{W}^{S,L}_{S',L',L''}$. The notations $|(\chi_A\chi_B)^{S} \rangle$ and $|(\chi_C\chi_D)^{S'} \rangle$ denote that the $S_{A}$ couples with $S_B$ into $S$ and $S_C$ couples with $S_D$ into $S'$, respectively. The $ |(\Phi_A\Phi_B\Phi_{AB} )^{L} \rangle$ represents that $L_A$ and $L_B$ couple into the angular momentum $L$. The notation $|\left[(\Phi_C \Phi_D)^{L'} {\varphi^{rel}_{CD}}^{L_{CD}}\right]^{L''} \rangle$ represents the coupling of the orbital angular momenta: $L_C$ and $L_D$ couple into $L'$, which then couples with $L_{CD}$ into the total orbital angular momentum $L''$.
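The recoupling coefficients $\mathscr{W}^{S,L}_{S',L',L''}$ are built from Wigner $9j$ and $6j$ symbols, and the unitarity of the underlying $LS$ recoupling provides a useful numerical check. A sketch using sympy for a hypothetical pair with $(S_A,L_A)J_A=(1,1)1$ and $(S_B,L_B)J_B=(1,0)1$ coupled to $J=1$ (the quantum numbers are illustrative):

```python
from sympy import sqrt
from sympy.physics.wigner import wigner_9j

# Hypothetical meson pair (illustrative quantum numbers)
SA, LA, JA = 1, 1, 1
SB, LB, JB = 1, 0, 1
J = 1

def recouple(Stot, Ltot):
    """<(SA LA)JA (SB LB)JB; J | (SA SB)Stot (LA LB)Ltot; J>."""
    pref = sqrt((2*Stot + 1) * (2*Ltot + 1) * (2*JA + 1) * (2*JB + 1))
    return pref * wigner_9j(SA, LA, JA, SB, LB, JB, Stot, Ltot, J)

# Unitarity of the recoupling: the squared overlaps sum to 1
total = sum(recouple(Stot, Ltot)**2
            for Stot in range(abs(SA - SB), SA + SB + 1)
            for Ltot in range(abs(LA - LB), LA + LB + 1))
```

The sum over all allowed $(S,L)$ pairs of the squared overlaps equals 1, as required for a unitary change of coupling scheme.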
We also decompose the $V^{ij}_a$ in Eq. (\ref{potential}) into the spin and momentum space by rewriting it as $V^{ij}_a=f(q^2)v^t(\mathbf s)v^t(\mathbf q)$, where the $v^t(\mathbf s)$ ($v^t(\mathbf q)$) denote the tensor operator of order $t$ in the spin (momentum) space, and the $f(q^2)$ is the scalar part of the potential. The detailed calculations of the spin-space factor $I_\text{spin-space}$ are discussed in the following sections.
\section{Numerical results}\label{sec4}
\subsection{$S$-wave decays $Z_c\to \psi(nS)\pi$ and $Z_b\to \Upsilon(nS)\pi$}\label{sec-numerical}
We define the branching fraction ratios as
\begin{eqnarray}
\label{swaveratio}
R_{2}^{Z_c}=\frac{\Gamma({Z_c} \rightarrow \psi(2S) \pi)}{{\Gamma({Z_c}\rightarrow J/\psi \pi) }}, ~R_{2}^{Z_b}=\frac{\Gamma({Z_b}\rightarrow \Upsilon(2S)\pi)}{ \Gamma({Z_b} \rightarrow \Upsilon(1S) \pi) },~R_{3}^{Z_b}=\frac{\Gamma({Z_b}\rightarrow \Upsilon(3S)\pi)}{ \Gamma({Z_b} \rightarrow \Upsilon(1S) \pi) }.
\end{eqnarray}
Some of the ratios have been measured in experiments, although with large uncertainties. We assume that the charged heavy quarkonium-like states $Z_c(3900)$, $Z_c(4020)$ and $Z_c(4430)$ are hadronic molecules composed of $D^*\bar D$, $D^*\bar D^*$, and $\bar D D^*(2S)$ or $\bar D^* D(2S)$, respectively. To test whether these assumptions are reasonable, we calculate the ratios defined in Eq. (\ref{swaveratio}) by employing the quark models introduced in Section \ref{sec-model}.
As illustrated in Eq. (\ref{potential}) and Eq. (\ref{w7}), the spin-orbit and tensor potentials contain a vector operator $v^{1}(\mathbf q)$ and a tensor operator $v^{2}(\mathbf q)$, respectively. They do not contribute to the $S$-wave decays
because of $\langle L''=0||v^{1,2}(\mathbf q)||L=0\rangle=0$. The spin and spatial operators in the Coulomb-like, linear confinement, and hyperfine interactions are scalars, so these potentials do contribute to the $S$-wave decays. Eq. (\ref{w7}) is simplified as
\begin{eqnarray}
\label{ww8}
I_\text{spin-space}=\langle\Phi_{C}\Phi_{D}{\varphi^{rel}_{CD}}|f(q^2)|\Phi_{A}\Phi_{B} {\varphi^{rel}_{AB}}\rangle\langle[\chi_{C}(q\bar q)\chi_{D}(\bar Q Q)]_{S_z}^{S}|v(\mathbf{s})|[\chi_{A}(q\bar Q)\chi_{B}(Q\bar q)]_{S_z}^{S}\rangle,
\end{eqnarray}
where we have used $v^{0}(\mathbf q)=1$ and omitted all the orbital angular momenta since they are $0$. The spin operator is $v(\mathbf s)=1$ or $\mathbf{s}_i\cdot\mathbf{s}_j$. We calculate the spin matrix elements using spin rearrangement and list the results in Table \ref{ss}.
\begin{table}[htbp]
\caption{The matrix elements $\langle\left [\chi_C\chi_D\right]^{S}_{S_z}|{\mathbf{s}_i \cdot \mathbf{s}_j} |\,
\left [ \chi_A\chi_B \right]^{S}_{S_z} \, \rangle$ and $\langle \left [\chi_C\chi_D\right]^{S}_{S_z} |\mathbf{1}\,|
\left [ \chi_A\chi_B \right]^{S}_{S_z} \, \rangle$. The results of T1 (T2) are the same in the prior and post diagrams. $S$ and $S_z$ denote the total spin of the state and its $z$-component. $[S_A, S_B]^S$ denotes that $S_A$ and $S_B$ couple to the total spin $S$. }\label{ss}
\begin{tabular}{c|cccccc|c}
\toprule[1pt]\toprule[1pt]
\multicolumn{7}{c|}{$\langle\left [\chi_C\chi_D\right]^{S}_{M}|{\mathbf{s}_i \cdot \mathbf{s}_j} |\,
\left [ \chi_A\chi_B \right]^{S}_{M} \, \rangle$} &$\langle \left [\chi_C\chi_D\right]^{S}_{M} |\mathbf{1}\,|
\left [ \chi_A\chi_B \right]^{S}_{M} \, \rangle$\\
\midrule[1pt]
$[S_A,S_B]^S-[S_C,S_D]^{S}$& C1-prior & C2-prior &C1-post & C2-post & T1& T2 & All diagrams \\
\midrule[1pt]
$[0,1]^1-[0,1]^1$ & $-\frac{3}{8}$ & $\frac{1}{8}$ & $-\frac{3}{8}$ & $\frac{1}{8}$ & $-\frac{1}{8}$ &$\frac{3}{8}$& $\frac{1}{2}$ \\
$[1,1]^1-[0,1]^1$ & $-\frac{3}{4\sqrt{2}}$ & $\frac{1}{4\sqrt{2}}$ & $\frac{1}{4\sqrt{2}}$ & $\frac{1}{4\sqrt{2}}$ &$-\frac{1}{4\sqrt{2}}$&$-\frac{1}{4\sqrt{2}}$ & $\frac{1}{\sqrt{2}}$ \\
$[0,1]^1-[1,1]^1$ & $\frac{1}{4\sqrt{2}}$ & $\frac{1}{4\sqrt{2}}$& $-\frac{3}{4\sqrt{2}}$ & $\frac{1}{4\sqrt{2}}$ &$-\frac{1}{4\sqrt{2}}$&$-\frac{1}{4\sqrt{2}}$ & $\frac{1}{\sqrt{2}}$ \\
$[1,1]^1-[1,1]^1$ & $0$ & $0$ & $0$ & $0$ & $-\frac{1}{2}$ &$\frac{1}{2}$ & $0$ \\
\bottomrule[1pt]\bottomrule[1pt]
\end{tabular}
\end{table}
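The identity-operator column of Table \ref{ss} is fixed by angular-momentum recoupling alone, so it can be verified independently. The following sketch (a cross-check written for this purpose, not the code used to produce the table) constructs the four-quark spin states explicitly from Clebsch--Gordan coefficients and computes the overlaps for $S=S_z=1$:

```python
from itertools import product
from math import sqrt

# Quark labels: 1 = q, 2 = Qbar, 3 = Q, 4 = qbar.
# Initial basis: A = (12), B = (34); final basis: C = (14), D = (23).

def cg_half(S, M, m1, m2):
    """<1/2 m1, 1/2 m2 | S M> for two spin-1/2 constituents."""
    if m1 + m2 != M:
        return 0.0
    if S == 1:
        return 1.0 if abs(M) == 1 else 1.0 / sqrt(2.0)
    if S == 0 and M == 0:
        return (1.0 if m1 > 0 else -1.0) / sqrt(2.0)
    return 0.0

def cg_pair(SA, mA, SB, mB):
    """<SA mA, SB mB | S=1, Sz=1> for the integer pair spins needed here."""
    if (SA, SB) == (1, 1):
        return {(1, 0): 1.0, (0, 1): -1.0}.get((mA, mB), 0.0) / sqrt(2.0)
    if (SA, SB) == (0, 1):
        return 1.0 if (mA, mB) == (0, 1) else 0.0
    if (SA, SB) == (1, 0):
        return 1.0 if (mA, mB) == (1, 0) else 0.0
    return 0.0

def amp(S1, S2, ma, mb, mc, md):
    """Amplitude of |ma mb mc md> in |[(pair1)S1 (pair2)S2] S=1, Sz=1>."""
    return sum(cg_half(S1, mA, ma, mb) * cg_half(S2, mB, mc, md)
               * cg_pair(S1, mA, S2, mB)
               for mA in (-1, 0, 1) for mB in (-1, 0, 1))

def overlap(SA, SB, SC, SD):
    """<[(14)SC (23)SD] S=1 | 1 | [(12)SA (34)SB] S=1> (identity operator)."""
    return sum(amp(SC, SD, m1, m4, m2, m3) * amp(SA, SB, m1, m2, m3, m4)
               for m1, m2, m3, m4 in product((0.5, -0.5), repeat=4))

print(overlap(0, 1, 0, 1))  # 0.5
print(overlap(1, 1, 0, 1))  # 0.707... = 1/sqrt(2)
print(overlap(1, 1, 1, 1))  # 0.0 (up to rounding)
```

The three printed values reproduce the entries $\frac{1}{2}$, $\frac{1}{\sqrt 2}$ and $0$ of the ``All diagrams'' column.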
The space factors $I_{\text{space}}\equiv\langle\Phi_{C}\Phi_{D}({{\varphi^{rel}_{CD}}})^{L_{CD}}_{m_{CD}}|f(q^2)|\Phi_{A}\Phi_{B}\varphi^{rel}_{AB}\rangle$ are the overlap integrals of the wave functions and the interaction potentials. Their explicit forms are
\begin{eqnarray}
\label{w8}
I_{\text{space}}^{\text{C1-Prior}} &=& \int d\Omega_{\mathbf{P}_{C}}\int d^{3}\mathbf{P}_{A}\int d^{3}\mathbf{p}\int d^{3}\mathbf{q}\, \Phi_{C}^{*}(\mathbf{q}+\mathbf{p}/2-2\mathbf{P}_{C})\Phi_{D}^{*}(\mathbf{q}-\mathbf{p}/2-\mathbf{P}_{C}-2\mathbf{P}_{A})\nonumber\\
&\times&Y^{L_{CD}*}_{m_{CD}}(\Omega_{\mathbf{P}_C}) \Phi_{A}(\mathbf{q}-\mathbf{p}/2-a \mathbf{P}_{A})\Phi_{B}(\mathbf{q}-\mathbf{p}/2-a \mathbf{P}_{A}-2\mathbf{P}_{C}) \Phi_{AB}(\mathbf{P}_{A}) f(q^2) ,\nonumber \\
I_{\text{space}}^{\text{C2-Prior}} &=& \int d\Omega_{\mathbf{P}_{C}}\int d^{3}\mathbf{P}_{A}\int d^{3}\mathbf{p}\int d^{3}\mathbf{q}\,\Phi_{C}^{*}(\mathbf{q}+\mathbf{p}/2+\mathbf{P}_{C}-2\mathbf{P}_{A})\Phi_{D}^{*}(\mathbf{q}-\mathbf{p}/2+\mathbf{P}_{C})\nonumber \\
&\times& Y^{L_{CD}*}_{m_{CD}}(\Omega_{\mathbf{P}_C})\Phi_{A}(\mathbf{q}-\mathbf{p}/2-b \mathbf{P}_{A})\Phi_{B}(\mathbf{q}-\mathbf{p}/2-b \mathbf{P}_{A}+2\mathbf{P}_{C}) \Phi_{AB}(\mathbf{P}_{A})f(q^2),\nonumber \\
I_{\text{space}}^{\text{C1-Post}} &=& \int d\Omega_{\mathbf{P}_{C}}\int d^{3}\mathbf{P}_{A}\int d^{3}\mathbf{p}\int d^{3}\mathbf{q}\,\Phi_{C}^{*}(\mathbf{q}+\mathbf{p}/2-\mathbf{P}_{C})\Phi_{D}^{*}(\mathbf{q}+\mathbf{p}/2+\mathbf{P}_{C}-2\mathbf{P}_{A})\nonumber\\
&\times& Y^{L_{CD}*}_{m_{CD}}(\Omega_{\mathbf{P}_C})\Phi_{A}(\mathbf{q}-\mathbf{p}/2-a \mathbf{P}_{A})\Phi_{B}(\mathbf{q}+\mathbf{p}/2-a \mathbf{P}_{A}-2\mathbf{P}_{C})\Phi_{AB}(\mathbf{P}_{A})f(q^2),\nonumber\\
I_{\text{space}}^{\text{C2-Post}} &=& \int d\Omega_{\mathbf{P}_{C}}\int d^{3}\mathbf{P}_{A}\int d^{3}\mathbf{p}\int d^{3}\mathbf{q}\, \Phi_{C}^{*}(\mathbf{q}-\mathbf{p}/2+\mathbf{P}_{C}-2\mathbf{P}_{A})\Phi_{D}^{*}(\mathbf{q}-\mathbf{p}/2+\mathbf{P}_{C})\nonumber\\
&\times &Y^{L_{CD}*}_{m_{CD}}(\Omega_{\mathbf{P}_C})\Phi_{A}(\mathbf{q}-\mathbf{p}/2-b \mathbf{P}_{A})\Phi_{B}(\mathbf{q}+\mathbf{p}/2-b \mathbf{P}_{A}+2\mathbf{P}_{C}) \Phi_{AB}(\mathbf{P}_{A})f(q^2),\nonumber\\
I_{\text{space}}^{\text{T1}} &=& \int d\Omega_{\mathbf{P}_{C}}\int d^{3}\mathbf{P}_{A}\int d^{3}\mathbf{p}\int d^{3}\mathbf{q}\,\Phi_{C}^{*}(\mathbf{q}+\mathbf{p}/2-\mathbf{P}_{C})\Phi_{D}^{*}(\mathbf{q}-\mathbf{p}/2-\mathbf{P}_{C}-2\mathbf{P}_{A})\nonumber\\
&\times&Y^{L_{CD}*}_{m_{CD}}(\Omega_{\mathbf{P}_C}) \Phi_{A}(\mathbf{q}-\mathbf{p}/2-a \mathbf{P}_{A})\Phi_{B}(\mathbf{q}+\mathbf{p}/2-a \mathbf{P}_{A}-2\mathbf{P}_{C})\Phi_{AB}(\mathbf{P}_{A}) f(q^2),\nonumber\\
I_{\text{space}}^{\text{T2}} &=& \int d\Omega_{\mathbf{P}_{C}}\int d^{3}\mathbf{P}_{A}\int d^{3}\mathbf{p}\int d^{3}\mathbf{q}\,\Phi_{C}^{*}(\mathbf{q}-\mathbf{p}/2+\mathbf{P}_{C}-2\mathbf{P}_{A})\Phi_{D}^{*}(\mathbf{q}-\mathbf{p}/2+\mathbf{P}_{C})\nonumber\\
&\times&Y^{L_{CD}*}_{m_{CD}}(\Omega_{\mathbf{P}_C})\Phi_{A}(\mathbf{q}-\mathbf{p}/2-b\mathbf{P}_{A})\Phi_{B}(\mathbf{q}+\mathbf{p}/2-b \mathbf{P}_{A}+2\mathbf{P}_{C}) \Phi_{AB}(\mathbf{P}_{A})f(q^2),\nonumber
\end{eqnarray}
where
\begin{eqnarray}
a=\frac{m_q}{m_q+m_Q}~~,~~ b=\frac{m_Q}{m_q+m_Q},
\end{eqnarray}
$Y_{m_{CD}}^{L_{CD}}(\Omega_{\mathbf{P}_C})$ is the spherical harmonic,
$\mathbf{P}_A$ ($\mathbf{P}_C$) is the c.m. momentum of meson $A$ ($C$), and $m_q$ ($m_Q$) is the light (heavy) quark mass. The integral of each diagram arising from the linear confinement potential is divergent, but the singular parts cancel out exactly when the four ``Post'' or ``Prior'' diagrams are summed; this cancellation originates from the different signs of the color factors of the diagrams. More details are given in Appendix B.
The $r_{0}$-dependence of the branching fraction ratios is displayed in Figs.~\ref{zc} and \ref{zb}. The ratios clearly increase with larger $r_{0}$, which corresponds to broader molecular wave functions. The wave function of a state with radial quantum number $n$ contains $n-1$ nodes, and the interaction potentials also contain nodes. When $r_0$ is small enough, these nodes lie outside the integration region; the exotic state then prefers to decay into the ground-state heavy quarkonium by emitting a light meson, favored by the phase space, and the decay ratio is smaller than $1$. As $r_0$ increases, the nodes of the potential and of the radially excited states may enter the integration region. In the decay into the ground-state heavy quarkonium, the parts of the integral before and after the potential node interfere destructively, while in the decay into the radially excited heavy quarkonium, the nodes in the wave functions interfere with those in the potentials, which may enhance the decay amplitude. Thus, even with a smaller phase space, an exotic state may decay more easily into a radially excited heavy quarkonium. Broader molecular wave functions include more of these interference effects, so the ratio increases with larger $r_0$. When $r_0$ is large enough, only the tails of the wave functions enter the integration region and barely affect the numerical results, and the decay ratios tend to be stable.
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\hsize]{zc.eps}\\
\caption{The $r_0$-dependence of the branching fraction ratios for $Z_c(3900)$, $Z_c(4020)$ and $Z_c(4430)$ decaying into $J/\psi\pi$ and $\psi(2S)\pi$.} \label{zc}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\hsize]{zb.eps}\\
\caption{The $r_0$-dependence of the branching fraction ratios for $Z_b(10610)$ and $Z_b(10650)$ decaying into $\Upsilon(nS)\pi$, $h_b(1P)\pi$ and $h_b(2P)\pi$.} \label{zb}
\end{figure}
The formation of hadronic molecules is usually assumed to be dominated by the long-range interactions between the components, for instance the one-pion exchange potential. For a shallow bound hadronic molecule with mass $M$ composed of particles $A$ and $B$, $r_0$ is estimated to be
\begin{eqnarray}
r_0=\sqrt{\frac{1}{2\mu E_B}},
\end{eqnarray}
where $\mu=\frac{m_{A}m_{B}}{m_{A}+m_{B}}$ is the reduced mass of the constituent hadrons and $E_{B}=m_A+m_B-M$ is the binding energy of the molecule. For the $Z_c(3900)$, $Z_c(4020)$, $Z_c(4430)$, $Z_b(10610)$ and $Z_b(10650)$ states, which are located above the corresponding thresholds, we still use this equation to estimate their sizes, with $E_B$ defined as $|m_A+m_B-M|$. The results are listed in Table \ref{size}. With these values of $r_0$, we calculate the $S$-wave decay ratios and list them in Table \ref{rel}.
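As a quick numerical illustration of this estimate, the sketch below evaluates $r_0=1/\sqrt{2\mu E_B}$ for the $Z_b(10610)$ as a $B^*\bar B$ molecule. The masses are approximate PDG-like inputs in MeV (assumptions for this illustration; the table entries follow from the central mass values adopted in the text, which may differ slightly):

```python
from math import sqrt

HBARC = 197.327  # hbar*c in MeV*fm, used to convert 1/MeV to fm

def molecule_size(mA, mB, M):
    """r0 = 1/sqrt(2*mu*E_B) in fm, with E_B = |mA + mB - M| (masses in MeV)."""
    mu = mA * mB / (mA + mB)       # reduced mass of the two constituents
    E_B = abs(mA + mB - M)         # (absolute) binding energy
    return HBARC / sqrt(2.0 * mu * E_B)

# Z_b(10610) as a B* Bbar molecule (approximate masses in MeV).
m_B, m_Bstar, M_Zb = 5279.3, 5324.7, 10607.2
print(molecule_size(m_B, m_Bstar, M_Zb))  # ~1.5 fm, close to the table entry
```

A binding energy of only a few MeV thus already yields a spatially extended state of order 1.5 fm.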
\begin{table}
\caption{The sizes of the molecular states with the central values of the masses used in the estimation.}\label{size}
\begin{tabular}{ccccccc}
\toprule[1pt]\toprule[1pt]
& $Z_{c}(3900)$ & $Z_{c}(4020)$ & $Z_{c}(4430)(D^{*}\bar{D}(2S))$ & $Z_{c}(4430)(D^{*}(2S)\bar{D})$ & $Z_{b}(10610)$ & $Z_{b}(10650)$\tabularnewline
$r_{0}$ {[}fm{]} & $0.9$ & $1.7$ & $0.5$ & $3$ & $1.6$ & $1.6$\tabularnewline
\bottomrule[1pt]\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{table}
\caption{The $S$-wave decay ratios obtained with the $r_0$ values listed in Table \ref{size}. The experimental data are from Refs. [71, 72]. $R^{Z_c(4430)}$ and $R^{Z_c(4430)}_{2}$ represent the decay ratios of the $Z_c(4430)$ composed of $D^*\bar D(2S)$ and $D^*(2S)\bar D$, respectively. ``...'' denotes that the corresponding experimental result is absent. }\label{rel}
\begin{center}
\begin{tabular}{ccccccccc}
\toprule[1pt]\toprule[1pt]
& $R_{2}^{Z_{c}(3900)}$ & $R_{2}^{Z_{c}(4020)}$ & $R^{Z_{c}(4430)}$ & $R_2^{Z_{c}(4430)}$ & $R_{2}^{Z_{b}(10610)}$ & $R_{3}^{Z_{b}(10610)}$ & $R_{2}^{Z_{b}(10650)}$ & $R_{3}^{Z_{b}(10650)}$\tabularnewline
Theory & $0.2$ & $1.1$ & $0.1$ & $0.7$ & $3.4$ & $0.8$ & $4.4$ & $1.6$\tabularnewline
Experiment & $...$ & $...$ & {$\sim10$} & {$\sim10$} & $6.75\pm2.56$ & $4.00\pm1.67$ & $8.12\pm4.20$ & $9.53\pm4.80$\tabularnewline
\bottomrule[1pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
$R_2^{Z_c(3900)}$ is much smaller than 1, indicating that the branching fraction of $Z_c(3900)$ into $J/\psi\pi$ is much larger than that into $\psi(2S)\pi$.
Interestingly, $R_2^{Z_c(4020)}$ is around 1. When $r_{0}=1.5$ fm, we find $|T(Z_c(3900)\to \psi(2S) \pi)/T(Z_c(3900)\to J/\psi \pi)|\sim 1.8$ and $|T ( Z_c(4020)\to \psi(2S) \pi)/T( Z_c(4020)\to J/\psi \pi)|\sim 2.5$. This implies that both the $D^*\Bar{D}$ and $D^*\Bar{D}^*$ molecules couple more strongly to $\psi(2S)\pi$ than to $J/\psi \pi$. The smaller partial width $\Gamma(Z_c(3900)\to \psi(2S)\pi)$ results from the smaller phase space of this channel, since the partial width is sensitive to the final-state momentum. The $Z_c(3900)$ is observed in the $J/\psi\pi$ invariant mass spectrum, which is consistent with our prediction that the ratio $R_2^{Z_c(3900)}$ is much smaller than 1.
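The phase-space argument can be made concrete with the standard two-body decay momentum $p=\sqrt{\lambda(M^2,m_1^2,m_2^2)}/(2M)$, where $\lambda$ is the K\"all\'en function. The masses below are approximate illustrative inputs in MeV, not values fitted in this work:

```python
from math import sqrt

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2 * a * b - 2 * b * c - 2 * c * a

def decay_momentum(M, m1, m2):
    """Final-state c.m. momentum for M -> m1 + m2 (all masses in MeV)."""
    return sqrt(kallen(M * M, m1 * m1, m2 * m2)) / (2.0 * M)

# Approximate masses in MeV (illustrative, roughly PDG values).
M_ZC3900, M_JPSI, M_PSI2S, M_PI = 3887.0, 3096.9, 3686.1, 139.6

p_jpsi = decay_momentum(M_ZC3900, M_JPSI, M_PI)    # ~700 MeV
p_psi2s = decay_momentum(M_ZC3900, M_PSI2S, M_PI)  # ~140 MeV
print(p_jpsi, p_psi2s)
```

The final-state momentum in $J/\psi\pi$ is roughly five times that in $\psi(2S)\pi$, which is why the larger coupling to $\psi(2S)\pi$ does not translate into a larger partial width.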
In the $e^+e^-\to \psi(2S)\pi^+\pi^- $ process, an obvious resonance-like structure around $4.03$ GeV is observed in the $\psi(2S)\pi^\pm$ invariant mass spectrum for data at the c.m. energy $\sqrt{s}=4.416$ GeV \cite{Ablikim:2017oaf}. This structure can be identified as the $Z_c(4020)$. The resonance-like structure around $3.9$ GeV can also be seen in $\psi(2S)\pi^\pm$ distributions at some c.m. energies, but this structure could also arise from the reflection effect of the other structure around $4.03$ GeV in the Dalitz plot. Due to the complexities of the Dalitz plots for the $e^+e^-\to \psi(2S)\pi^+\pi^- $ process at different c.m. energies,
the BESIII collaboration did not give a definite conclusion in their paper and claimed that their fit cannot describe the data well \cite{Ablikim:2017oaf}. The experimental ratios $R_2^{Z_c(3900)}$ and $R_2^{Z_c(4020)}$ are thus still unknown.
The mass of the $Z_c(4430)$ is close to the threshold of $\bar DD^*(2S)$ or $\bar D(2S)D^*$, and its favored quantum numbers are $J^P=1^+$. Because of these properties, the $Z_c(4430)$ has been identified as a molecular state composed of $\bar DD^*(2S)$ or $\bar D(2S)D^*$. We display its strong decay ratios for different $r_0$ in the two configurations in Fig. \ref{zc}.
We find that the decay ratio is smaller than $1$, far below the ratio $\sim 10$ estimated in experiments.
Without introducing any other dynamical mechanism, this result implies that the assignment of a pure $\bar DD^*(2S)$ or $\bar D(2S)D^*$ hadronic molecule for the $Z_c(4430)$ is not favored. The ratio $R_2^{Z_c(4430)}$ calculated in this paper differs from that estimated in the naive nonrelativistic quark model \cite{Liu:2014eka}, which shows the model sensitivity of the numerical results. This sensitivity can be partly ascribed to the uncertainties of the relevant wave functions.
As listed in Tables \ref{massspectrumD} and \ref{massspectrumB}, the relativized quark model reproduces the charmed and bottomed meson spectra much better than the nonrelativistic model. Thus, the relativized quark model is more suitable in describing the hadronic molecule decays discussed in this paper.
We list the theoretical values of $R_{2,3}^{Z_b}$ in Table \ref{rel}. The calculated ratios $R_2^{Z_b(10610)}$ and $R_2^{Z_b(10650)}$ approximately fall within the ranges of experimental values, but the theoretical ratios $R_3^{Z_b(10610)}$ and $R_3^{Z_b(10650)}$ significantly deviate from the experimental central values. However, one should also notice that the uncertainties of the experimental data are still quite large, and
the estimated ratios $R_3^{Z_b(10610)}$ and $R_3^{Z_b(10650)}$ are still of the same order as the experimental values.
As a relatively weak argument, these theoretical results to some extent can support the assumptions of identifying $Z_b(10610)$ and $Z_b(10650)$ as the $B^*\Bar{B}$ and $B^*\Bar{B}^*$ molecules, respectively.
\subsection{$P$-wave decays $Z_b \to h_b(nP) \pi$}\label{sec4b}
For the decays $Z_{c(b)}\to h_{c(b)}(nP)\pi$, there is a $P$-wave orbital excitation between the two hadrons in the final state. Since the masses of the $Z_c(3900)$ and $Z_c(4020)$ are expected to lie below the $h_c(2P)\pi$ threshold, we do not discuss the corresponding ratios for the $Z_c$ states. For the two $Z_b$ states decaying into $h_b(1P)\pi$ and $h_b(2P)\pi$, we define the branching fraction ratio
\begin{eqnarray}
\label{pwaveratio}
\tilde{R}_2^{Z_b}=\frac{\Gamma({Z_b} \rightarrow h_b(2P) \pi)}{{\Gamma({Z_b}\rightarrow h_b(1P)\pi })}.
\end{eqnarray}
In the decay process, the total spin $S=1$ of the initial state flips into $S'=0$ in the final state, while the initial orbital angular momentum $L=0$ changes into $L''=1$. Since $\langle 1||v^{0,2}(\mathbf{q})||0\rangle=0$, the OGE Coulomb-like, linear confinement, hyperfine and tensor potentials do not contribute. For the spin-orbit potential, the spin operator $v^1(\mathbf s)=\mathbf{s}_i$ is a vector. The reduced matrix element for $\mathbf{s}_q$ is
\begin{eqnarray}
\label{w18}
&&\left \langle \, \left [\chi_C(q\bar q)\chi_D(Q\bar Q)\right]^{S'}||\mathbf{s}_q||\,
\left [ \chi_A(q\bar Q)\chi_B(Q\bar q) \right]^{S} \right\rangle\nonumber\\
&&=\sum_{S_{14},S_{23}}(-1)^{S_D+S_B-2s_Q-s_{\bar q}-s_{\bar Q}}\hat{S}_A\hat{S}_B\hat{S}_{14}\hat{S}_{23}\left \{
\begin{array}{c c c}
s_q & s_{\bar Q} & S_A \\
s_{\bar q} & s_Q & S_B \\
S_{14} &S_{23} & S
\end{array}
\right \} \delta_{S_D,S_{23}}(-1)^{S+S_C+S_{13}-1}\hat{S}\hat{S'}\nonumber\\
&&\times \left \{
\begin{array}{c c c}
S_{14} & S_{23} & S \\
S' & 1 & S_C \\
\end{array}
\right \}(-1)^{S_{14}}\hat{S}_{14}\hat{S}_C\left \{\begin{array}{c c c}
S_C & 1 & S_{14} \\
1/2 & 1/2 & 1/2 \\
\end{array}
\right \}\sqrt{s_q(s_q+1)(2s_q+1)},
\end{eqnarray}
where $s_{q}$ ($s_{\bar q}$) and $s_{Q}$ ($s_{\bar Q}$) are the spins of the light and heavy quarks (antiquarks), respectively, while $S_{14}$ and $S_{23}$ represent the spins of the light-quark pair and the heavy-quark pair in the initial state, respectively.
The calculations of the reduced matrix elements for $\mathbf{s}_{\bar q}$, $\mathbf{s}_Q$ and $\mathbf{s}_{\bar Q}$ are similar. We list the results in Table \ref{wt3}.
\begin{table}[htbp]
\caption{$\langle \, \left [\chi_C\chi_D\right]^{S'}||\mathbf{s}_q||\,
\left [ \chi_A\chi_B \right]^{S} \, \rangle$ in Eq. (\ref{w18}). $S$ and $S'$ denote the total spin of the initial and final states, respectively.}\label{wt3}
\begin{center}
\begin{tabular}{ccccccc}
\toprule[1pt]\toprule[1pt]
$[S_A,S_B]^S-[S_C, S_D]^{S'}$ &$s_q$ & $s_{\bar Q}$ & $s_Q$ & $s_{\bar q}$ \\
\midrule[1pt]
$[0,1]^1-[0,0]^0$ & $-\frac{\sqrt{3}}{4}$ & $\frac{\sqrt{3}}{4}$ &$-\frac{\sqrt{3}}{4}$ &$\frac{\sqrt{3}}{4}$ \\
$[1,1]^1-[0,0]^0$ & $\frac{\sqrt{3}}{2\sqrt{2}}$ & $\frac{\sqrt{3}}{2\sqrt{2}}$ & $-\frac{\sqrt{3}}{2\sqrt{2}}$ &$-\frac{\sqrt{3}}{2\sqrt{2}}$ \\
$ [0,1]^1-[1,1]^1$ & $-\frac{1}{4}$ & $\frac{1}{4}$ & $\frac{3}{4}$ &$-\frac{3}{4}$ \\
$[1,1]^1-[1,1]^1$ & $\frac{1}{2\sqrt{2}}$ & $\frac{1}{2\sqrt{2}}$ &-$\frac{1}{2\sqrt{2}}$ &- $\frac{1}{2\sqrt{2}}$ \\
\bottomrule[1pt]\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
For the spatial part, the reduced matrix element satisfies the relation
\begin{eqnarray}
\label{br}
C_{LL_{z};1\mu}^{L''L_{z}''} I_{\text{space}}&=&{C_{LL_{z};1\mu}^{L''L_{z}''}} \left \langle\left[(\Phi_{C}\Phi_{D})^{L'}\Phi_{CD}^{L_{CD}}\right]^{L''}||f(q){v^{t}(q)}||[\Phi_{A}\Phi_{B}\Phi_{AB}]^{L}\right \rangle \nonumber \\
&=&\sqrt{2L''+1}C_{L'L'_{z},L_{CD}m_{CD}}^{L''L_{z}''}\langle\Phi_{C}(\Phi_{D})_{m_{D}}^{L_{D}}(\varphi_{CD}^{rel})_{m_{CD}}^{L_{CD}}|f(q)v(\mathbf{q})_{\mu}^{1}|\Phi_{A}\Phi_{B}\Phi_{AB}\rangle.
\end{eqnarray}
For the decay $Z_b\rightarrow h_b \pi$, one has $L=L_z=0$, $t=1$, and $L_{CD}=L'=L''=1$. The calculation of $ I_{\text{space}}$ is similar to that in Eq. (\ref{w8}).
The $r_{0}$-dependence of the ratio $\tilde{R}_2^{Z_b}$ is illustrated in Fig. \ref{zb}, which shows that $\tilde{R}_2^{Z_b}$ increases with larger $r_{0}$. The $Z_b(10610)$ and $Z_b(10650)$ prefer to decay into the $h_b(2P)\pi$ channel when $r_{0}$ is larger than $1.0$ fm and $1.7$ fm, respectively.
We list the numerical results for $r_0=1.6$ fm in Table \ref{numericalresultsp}. Our results are larger than those in Ref.~\cite{Cleven:2011gp} and fall in the range of the experimental results.
\begin{table}
\caption{The $P$-wave decay ratios for $r_0=1.6$ fm. The experimental data come from Ref.~\cite{Garmash:2015rfd}. }
\label{numericalresultsp}
\begin{tabular}{ccccccc}
\toprule[1pt]\toprule[1pt]
& $\tilde{R}_2^{Z_b(10610)}$ & $\tilde{R}_2^{Z_b(10650)}$\\
Theory & $2.1$ & $1.0$ \\
Ref.~\cite{Cleven:2011gp} & $0.21$ & $0.27$ \\
Experimental data & $1.43\pm0.85$ & $1.84\pm0.95$ \\
\bottomrule[1pt]\bottomrule[1pt]
\end{tabular}
\end{table}
\section{Summary} \label{sec5}
In this work, we assume that the $Z_c$ and $Z_b$ states are hadronic molecules composed of open-flavor mesons. In the framework of the relativized quark model and the quark-interchange model,
we calculate the branching fraction ratios of the $Z_c$ ($Z_b$) states decaying into ground-state and radially excited charmonia (bottomonia) by emitting a pion. These ratios can be compared with the experimental data and are useful for judging whether the molecular assignment for the corresponding $Z_c$ or $Z_b$ state is reasonable. Our calculations indicate that the $Z_c(3900)$ and $Z_c(4020)$ couple more strongly to $\psi(2S)\pi$ than to $J/\psi\pi$. However, constrained by the phase space, the partial width $\Gamma(Z_c(3900)\to J/\psi\pi)$ is much larger than $\Gamma(Z_c(3900)\to \psi(2S)\pi)$, which is consistent with the current experimental observations. The explicit values of $R_2^{Z_c(3900)}$ and $R_2^{Z_c(4020)}$ still need to be checked by future experiments. The value of $R_2^{Z_c(4430)}$ calculated in this relativized quark model is much smaller than the experimental estimates in Refs. \cite{Choi:2007wga,Aaij:2014jqa,Chilikin:2013tch,Chilikin:2014bkk}, which does not favor the assumption of identifying the $Z_c(4430)$ as a pure $\bar{D}D^*(2S)$ or $\bar D^* D(2S)$ molecule. The ratios $R_2^{Z_b}$ and $R_3^{Z_b}$ are approximately consistent with the experimental estimates. Besides, the calculated $P$-wave decay ratio $\Gamma({Z_b} \to h_b(2P) \pi) / \Gamma({Z_b} \to h_b(1P) \pi)$ also approximately falls within the range of experimental values, which implies that the $B^*\Bar{B}$/$B^*\Bar{B}^*$ molecule assignment for $Z_b(10610)$/$Z_b(10650)$ is favorable.
It should be stressed that our calculations are based on the assumption that the $Z_c$ and $Z_b$ states are hadronic molecules, and we use Gaussian distribution functions to describe their relative wave functions. This simple ansatz for the molecular wave functions will inevitably introduce some uncertainties into the numerical results. Fortunately, we notice that the decay ratios are not very sensitive to the free parameter $r_{0}$ of the wave functions.
The theoretical framework used in this work will be helpful in revealing the underlying structures of some exotic states. It is also promising that the predictions based on this framework can be checked in the near future with the huge data samples accumulated by the BESIII, LHCb, Belle and Belle II collaborations.
\subsection*{Acknowledgments}
We are grateful for helpful discussions with Yan-Rui Liu, Jia-Jun Wu, and Yuan Song. We also thank Prof. Ulf-G. Mei{\ss}ner for a careful reading and helpful suggestions. This work is supported by the National
Natural Science Foundation of China (NSFC) under Grants No.11575008 and No.
11621131001, by the National Key Basic Research Program of
China (2015CB856700), by the NSFC and Deutsche Forschungsgemeinschaft (DFG) through
funds provided to the Sino--German Collaborative Research Center ``Symmetries and the Emergence of Structure in QCD'' (NSFC Grant No.~11621131001,
DFG Grant No.~TRR110).
\section*{Appendix}
\subsection{The mass spectra}
In the relativized quark model, the kinetic energy term is replaced by the relativistic form $E_i=\sqrt{m_i^2+\mathbf{p}_i^2}$. We calculate the mass spectra of the heavy mesons and the heavy quarkonia. The mass spectra of the mesons involved in this work are listed in Tables \ref{massspectrumD} and \ref{massspectrumB}.
\begin{table}
\caption{Mass spectra of the charmed mesons. $M^R_{th}$, $M^{NR}_{th}$, and $M_{exp}$ are the mass spectra in the relativized quark model, the nonrelativistic quark model \cite{Liu:2014eka}, and in experiments \cite{Agashe:2014kda}, respectively. }\label{massspectrumD}
\begin{tabular}{ccccccccccccccc}
\toprule[1pt]\toprule[1pt]
& $D$ & $D^*$ &$D(2S)$ & $D^*(2S)$ & $J/\psi$ & $\psi(2S)$ & $h_c(1P)$ & $h_c(2P)$ &$\chi_{c0}(1P)$ &$\chi_{c1}(1P)$ &$\chi_{c2}(1P)$ \\
$M^{R}_{th}$ [GeV] & 1.873 & 2.038 & 2.582 & 2.645 & 3.091 & 3.679 &3.515 &3.956 &3.443 &3.508 &3.548 \\
$M^{NR}_{th}$ [GeV]& 1.920 & 1.993 & 2.711 & 2.769 & 3.089 & 3.701 &--&-- &-- &-- &-- \\
$M_{exp}$ [GeV] & 1.865 & 2.010 & 2.539 & 2.612 & 3.097 & 3.686 &3.525 &-- &3.414 &3.511 &3.556 \\
\bottomrule[1pt]\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{Mass spectra of the bottom mesons. $M^R_{th}$, $M^{NR}_{th}$, and $M_{exp}$ are the mass spectra in the relativized quark model, the nonrelativistic quark model \cite{Liu:2014eka}, and in experiments \cite{Agashe:2014kda}, respectively.} \label{massspectrumB}
\begin{tabular}{cccccccccccccc}
\toprule[1pt]\toprule[1pt]
& $B$ & $B^*$ &$B_1$ &$B^{\ast}_1$ &$\Upsilon(1S)$ & $\Upsilon(2S)$ & $\Upsilon(3S)$ &$h_b(1P)$ &$h_b(2P)$ &$\chi_{b0}(1P)$ &$\chi_{b1}(2P)$ &$\chi_{b2}(1P)$ \\
$M^R_{th}$ [GeV] &5.310&5.369&5.905&5.934&9.466&10.010&10.359 &9.881 &10.251 &9.847 &9.876 &9.896 \\
$M^{NR}_{th}$ [GeV]& 5.387 & 5.411 & 5.748 &-& 9.471 &9.944 &10.347 &-- &-- &-- &-- &-- \\
$M_{exp}$ [GeV] & 5.279 & 5.325 &- &-& 9.460 & 10.023 & 10.355 &9.899&10.260&9.859&9.893&9.912 \\
\bottomrule[1pt]\bottomrule[1pt]
\end{tabular}
\end{table}
\subsection{$I_{\text{space}}$}
The linear confinement potential $V_{l}$ contributes to the $S$-wave decay amplitudes. In Eq. (\ref{w8}), the $I_{\text{space}}$ relevant to $V_l$ is
\begin{eqnarray}
\label{w14}
&&\int d^3\mathbf{q}\,e^{-\frac{\mu}{2}(\mathbf{q}-\mathbf{q}_0)^2}V_{l}=6\pi b\int d^3\mathbf{q}\, e^{-\frac{\mu}{2}(\mathbf{q}-\mathbf{q}_0)^2}\frac{e^{-\frac{q^2}{4\sigma^2_{ij}}}}{q^4}\nonumber\\
&&~~~~~~=-6\pi b(2\pi)^{3/2}\sqrt{z}\,e^{-\frac{\mu q^2_0}{2}}\,{}_1F_1\!\left(-\frac{1}{2},\frac{3}{2};\frac{\mu^2 q^2_0}{2z}\right)+6{\pi}b(2\pi)e^{-\frac{\mu q^2_0}{2}}e^{-\frac{zq^2}{2}}\frac{2}{q}\Big|_{q\rightarrow0},\nonumber
\end{eqnarray}
where $z=\mu+\frac{1}{2{\sigma_{ij}}^2}$, and $q_0$ and $\mu$ are parameters related to the momenta and masses of the mesons in the initial and final states; their explicit forms can be found in Ref. \cite{Wong:2001td}. When $q=0$, one also has $q_0=0$. The divergent terms in the Prior or Post diagrams cancel out exactly due to the color factors.
For the $P$-wave decays, the spin-orbit potential $V^{G,l}_{so}$ contributes; it is factorized as $f(q)\frac{(\mathbf{q}\times\mathbf{P}_i)\cdot \mathbf{s}_i}{m^2}$. The corresponding $I_{\text{space}}$ is
\begin{eqnarray}
\label{w19}
&&I_{\text{space}}\sim \int d\mathbf{q}e^{{-\frac{\mu}{2}(\mathbf{q}-\mathbf{q}_0)^2}}f(q){q}_{\mu}\nonumber\\
&&=\frac{1}{\mu}\frac{\partial}{\partial q^{\mu}_0}\int d\mathbf{q}
e^{ -\frac{\mu}{2} (\mathbf{q}-\mathbf{q}_0)^2}f(q)+{q}_{0\mu}\int d{\mathbf{q}}e^{ -\frac{\mu}{2} (\mathbf{q}-\mathbf{q}_0)^2}f(q).\nonumber
\end{eqnarray}
The divergences arising from the two integrals are
\begin{eqnarray}
\label{w20}
&&\frac{1}{\mu}\frac{\partial}{\partial {q}^{\mu}_0}e^{-\frac{\mu q^2_0}{2}}[\frac{\sinh(\mu qq_0)}{\mu q_0q^2}+\frac{\cosh(\mu q_0q)}{q}]|_{q\rightarrow0}+{q}_{0{\mu}}e^{-\frac{\mu q^2_0}{2} }[\frac{\sinh(\mu qq_0)}{\mu q_0q^2}+\frac{\cosh(\mu q_0q)}{q}]|_{q\rightarrow0}\nonumber\\
&&=\frac{1}{\mu}e^{-\frac{\mu q^2_0}{2} }[\frac{\cosh(\mu qq_0)}{ q^2_0q}+\frac{\mu \sinh(\mu q_0q)}{q_0}-\frac{\sinh(\mu q_0q)}{\mu q^3_0q^2}]|_{q\rightarrow0}.\nonumber
\end{eqnarray}
At $q=0$, the two singular parts cancel out.
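This cancellation can also be checked numerically by evaluating the bracketed combination in the expression above at small $q$, with the illustrative choice $\mu=q_0=1$ (an assumption made purely for this check):

```python
from math import cosh, sinh

# Bracketed combination from the spin-orbit divergence check:
#   cosh(mu*q*q0)/(q0^2 q) + mu*sinh(mu*q0*q)/q0 - sinh(mu*q0*q)/(mu*q0^3 q^2)
# Individually, the first and third terms diverge as 1/q for q -> 0,
# but their difference stays finite (here it tends to zero like (4/3) q).
mu, q0 = 1.0, 1.0  # illustrative values only

def bracket(q):
    return (cosh(mu * q * q0) / (q0**2 * q)
            + mu * sinh(mu * q0 * q) / q0
            - sinh(mu * q0 * q) / (mu * q0**3 * q**2))

for q in (1e-2, 1e-4, 1e-6):
    print(q, bracket(q))  # values shrink linearly with q
```

The combination remains bounded as $q\to 0$ even though each term separately blows up, confirming the cancellation of the singular parts.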
\section{Introduction}
The gravitational instability of collisionless matter in a cosmological
framework is usually studied within the Newtonian approximation,
which basically consists in neglecting terms higher than the first in
metric perturbations around a matter--dominated Friedmann--Robertson--Walker
(FRW) background, while keeping non--linear density and velocity perturbations.
This approximation is usually thought to produce accurate
results over a wide range of cosmological scales, namely on
scales much larger than the Schwarzschild radius of collapsing bodies and
much smaller than the Hubble horizon scale, where the peculiar gravitational
potential $\varphi_g$, divided by the square of the speed of light $c^2$ to
obtain a dimensionless quantity, remains much less than unity, while the peculiar
matter flow never becomes relativistic.
To be more specific, the Newtonian approximation
consists in perturbing only the time--time component of the FRW
metric tensor by an amount $2\varphi_g/c^2$, where $\varphi_g$
is related to the matter density fluctuation $\delta$ via the
cosmological Poisson equation,
$\nabla_x^2 \varphi_g ({\vec x},\tau) = 4
\pi G a^2(\tau) \varrho_b(\tau) \delta({\vec x}, \tau)$,
where $\varrho_b$ is the background matter density,
$a(\tau)$ the appropriate FRW scale--factor and $\tau$ the conformal time.
The fluid dynamics is then usually studied in Eulerian coordinates by
accounting for mass conservation
and using the cosmological version of the Euler equation for a
self--gravitating pressureless fluid to close the system.
To motivate the use of this ``hybrid approximation'', which
deals with perturbations of the matter and the geometry at a different
perturbative order, one can either formally expand the correct
equations of General Relativity (GR) in inverse powers of the speed of light
or simply notice that the peculiar gravitational potential is strongly
suppressed with respect to the matter perturbation by the square of the ratio
of the perturbation scale $\lambda$ to the Hubble radius $r_H= c H^{-1}$
($H$ being the Hubble constant):
$\varphi_g/c^2 \sim \delta ~(\lambda / r_H)^2$.
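To get a feel for the size of this suppression, one can insert illustrative numbers into the scaling relation; the Hubble constant below ($H_0=70\;{\rm km\,s^{-1}\,Mpc^{-1}}$) is an assumed value for this estimate, not one taken from the text:

```python
# Order-of-magnitude estimate of the suppression
#   phi_g / c^2 ~ delta * (lambda / r_H)^2
C_KM_S = 2.99792458e5   # speed of light [km/s]
H0 = 70.0               # assumed Hubble constant [km/s/Mpc]
r_H = C_KM_S / H0       # Hubble radius [Mpc], roughly 4300 Mpc

def potential_over_c2(delta, lam_mpc):
    """Dimensionless peculiar potential for a perturbation of amplitude
    delta on a comoving scale lam_mpc (in Mpc)."""
    return delta * (lam_mpc / r_H) ** 2

# Even a fully non-linear (delta ~ 1) perturbation on a 10 Mpc scale
# produces a tiny metric perturbation:
print(potential_over_c2(1.0, 10.0))  # ~5e-6
```

This is why the metric can be treated as weakly perturbed even where the density contrast is large.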
Such a simplified approach, however, already fails in producing an accurate
description of the trajectories of relativistic particles, such as photons.
Neglecting the relativistic perturbation of the space--space components
of the metric, which in the so--called longitudinal gauge is just
$-2\varphi_g/c^2$, would imply a mistake by a factor of two in
well--known effects such as the Sachs--Wolfe, Rees--Sciama and
gravitational lensing. The level of accuracy not only depends on the peculiar
velocity of the matter producing the spacetime curvature, but also on the
nature of the particles carrying the signal to the observer.
Put in these terms, it may appear that the only relativistic correction required
to the usual Eulerian Newtonian picture is that of writing the metric tensor
in the ``weak field'' form (e.g. Peebles 1993)
\begin{equation}
ds^2 = a^2(\tau) \biggl[ - \biggl(1 + {2\varphi_g \over c^2} \biggr) ~c^2
d\tau^2 + \biggl(1 - {2\varphi_g \over c^2} \biggr) ~d l^2 \biggr] \;.
\end{equation}
As we are going to show, this is not the whole story.
It is well--known in fact that
the gravitational instability of aspherical perturbations (which is the generic
case) leads to the formation of very anisotropic structures whenever pressure
gradients can be neglected (e.g. Shandarin et al. 1995 and references therein).
Matter first flows in almost two--dimensional structures called pancakes,
which then merge and fragment to eventually form one--dimensional filaments
and point--like clumps.
During the process of pancake formation the matter density, the shear
and the tidal field formally become infinite along evanescent
two--dimensional configurations corresponding to caustics; after this
event a number of highly non--linear phenomena, such as vorticity
generation by multi--streaming, merging, tidal disruption and
fragmentation, occur.
Most of the pathology of the caustic formation process, such as the local
divergence of the density, shear and tide, and the formation of multi--stream
regions, is just an artifact of extrapolating the pressureless
fluid approximation beyond
the point at which pressure gradients and viscosity become important.
In spite of these limitations, however, it is generally believed that
the general anisotropy of the collapse configurations, either pancakes or
filaments, is a generic feature of cosmological structures originated through
gravitational instability, which would survive even in the presence of a
collisional component.
This simple observation shows the inadequacy of the standard Newtonian
paradigm, according to which the lowest scale at which the approximation can
be reasonably applied is set by the amplitude of the gravitational potential
and is given by the Schwarzschild radius of the collapsing body, which is
negligibly small for any relevant cosmological mass scale.
What is completely missing in this criterion is the role of the shear which
causes the presence of non--scalar contributions to the metric perturbations.
A non--vanishing shear component is in fact an unavoidable feature of
realistic cosmological perturbations and affects the dynamics
in (at least) three ways, all related to non--local effects, i.e. to the
interaction of a given fluid element with the environment.
First, at the lowest perturbative order the shear is related to the
tidal field generated by the surrounding material by a simple proportionality
law. Second, it is related to a {\em dynamical} tidal induction: the
modification
of the environment forces the fluid element to modify its shape and density.
In Newtonian gravity, this is an {\em action--at--a--distance} effect, which
starts to manifest itself in second--order perturbation theory as an
inverse--Laplacian contribution to the velocity potential (e.g. Catelan et al.
1995).
Third, and most important here, a non--vanishing shear field leads to the
generation of a traceless and divergenceless metric perturbation which can be
understood as gravitational radiation emitted by non--linear perturbations.
This contribution to the metric perturbations is statistically
small on cosmologically interesting scales, but it becomes relevant whenever
anisotropic (with the only exception of exactly one--dimensional) collapse
takes place. In the Lagrangian picture such an effect
already arises at the post--Newtonian (PN) level.
Note that the latter two effects are only detected if one
allows for non--scalar perturbations in physical quantities. Contrary to a
widespread belief, in fact, choosing scalar perturbations in the initial
conditions is not enough to prevent tensor modes from arising beyond the linear
regime in a GR treatment. Genuinely tensorial perturbations are dynamically
generated by the gravitational instability of initially scalar perturbations,
independently of the initial presence of gravitational waves.
This point is very clearly displayed in the GR Lagrangian second--order
perturbative approach. The pioneering work in this field is by
Tomita (1967), who calculated the gravitational waves
$\pi^\alpha_{~\beta}$ emitted by
non--linearly evolving scalar perturbations in an Einstein--de Sitter
background, in the synchronous gauge. Matarrese, Pantano \& Saez
(1994a,b) obtained an equivalent result but with a different formalism
in comoving and synchronous coordinates.
Recently a number of different approaches to
relativistic effects in the non--linear dynamics of cosmological
perturbations have been proposed. Matarrese, Pantano \& Saez (1993) proposed
an algorithm based on neglecting the magnetic part of the Weyl tensor
in the dynamics, obtaining strictly local fluid--flow evolution equations,
i.e. the so--called ``silent universe''. This formalism, however, cannot be
applied to cosmological structure formation {\em inside} the horizon,
where the non--local tidal
induction cannot be neglected, i.e. the magnetic Weyl tensor
$H^\alpha_{~\beta}$ is non--zero, with the exception of highly specific initial
configurations (Matarrese et al. 1994a; Bertschinger \& Jain 1994;
Bruni, Matarrese \& Pantano 1995a; the dynamical role of $H^\alpha_{~\beta}$
was also discussed by Bertschinger \& Hamilton 1994 and Kofman \& Pogosyan
1995).
Rather, the silent--universe description is probably relevant to the
non--linear dynamics of an irrotational
fluid {\em outside} the (local) horizon (Matarrese et al. 1994a,b).
One possible application (Bruni, Matarrese \& Pantano 1995b), is in fact
connected to the {\em Cosmic No--hair Theorem}.
Matarrese \& Terranova (1995) followed the more ``conservative'' approach of
expanding the Einstein and continuity equations in inverse powers of the
speed of light, which defines a Newtonian limit and, at the next
order, post--Newtonian corrections. Their approach differs from previous ones
in the gauge choice: they used synchronous and comoving coordinates,
which is why it can be called a Lagrangian approach.
Several related approaches have been proposed in the literature.
A PN approximation was followed by Futamase
(1991) to describe the dynamics of a clumpy universe.
Tomita (1991) used non--comoving coordinates in
a PN approach to cosmological perturbations.
Shibata \& Asada (1995) recently developed a PN approach to cosmological
perturbations, also using non--comoving coordinates.
Kasai (1995) analyzed the non--linear
dynamics of dust in the synchronous and comoving gauge.
\section{Method}
We consider a pressureless fluid with vanishing vorticity. Using
synchronous and comoving coordinates, the line--element reads
\begin{equation}
ds^2 = a^2(\tau)\big[ - c^2 d\tau^2 + \gamma_{\alpha\beta}({\vec q}, \tau)
dq^\alpha d q^\beta \big] \;,
\end{equation}
where we have factored out the scale--factor of the isotropic FRW solutions.
By subtracting the isotropic Hubble--flow, we introduce a {\em peculiar
velocity--gradient tensor}
$\vartheta^\alpha_{~\beta} = {1 \over 2} \gamma^{\alpha\gamma}
{\gamma_{\gamma\beta}}'$,
where primes denote differentiation with respect to $\tau$.
This tensor allows us to write Einstein's
equations in a cosmologically convenient form.
The energy constraint reads
\begin{equation}
\vartheta^2 - \vartheta^\mu_{~\nu} \vartheta^\nu_{~\mu} + 4 {a' \over a}
\vartheta + c^2 \bigl( {\cal R} - 6 \kappa \bigr) = 16 \pi G a^2
\varrho_b \delta \;,
\end{equation}
where ${\cal R}^\alpha_{~\beta}(\gamma)$ is the
conformal Ricci curvature of the three--space with metric
$\gamma_{\alpha\beta}$; for the background FRW solution
$\gamma^{FRW}_{\alpha\beta} = (1 + {\kappa\over 4} q^2)^{-2}
\delta_{\alpha\beta}$, one has ${\cal R}^\alpha_{~\beta}(\gamma^{FRW})
= 2 \kappa \delta^\alpha_{~\beta}$.
We also introduced the density contrast
$\delta \equiv (\varrho - \varrho_b) /\varrho_b$.
The momentum constraint reads
\begin{equation}
\vartheta^\alpha_{~\beta||\alpha} = \vartheta_{,\beta} \;.
\end{equation}
The double vertical bars denote covariant derivatives in the
three--space with metric $\gamma_{\alpha\beta}$.
Finally, after replacing the density from the energy constraint and
subtracting the background contribution, the extrinsic curvature evolution
equation becomes
\begin{equation}
{\vartheta^\alpha_{~\beta}}' + 2 {a' \over a} \vartheta^\alpha_{~\beta} +
\vartheta \vartheta^\alpha_{~\beta} + {1 \over 4}
\biggl( \vartheta^\mu_{~\nu} \vartheta^\nu_{~\mu} - \vartheta^2 \biggr)
\delta^\alpha_{~\beta} + {c^2 \over 4} \biggl[ 4 {\cal R}^\alpha_{~\beta}
- \bigl( {\cal R} + 2 \kappa \bigr) \delta^\alpha_{~\beta} \biggr]
= 0 \;.
\end{equation}
The Raychaudhuri equation for the evolution of the
{\em peculiar volume--expansion scalar} $\vartheta$ reads
\begin{equation}
\vartheta' + {a' \over a} \vartheta + \vartheta^\mu_{~\nu} \vartheta^\nu_{~\mu}
+ 4 \pi G a^2 \varrho_b \delta =0 \;.
\end{equation}
The main advantage of this formalism is that there is only one dimensionless
(tensor) variable in the equations, namely the spatial metric tensor
$\gamma_{\alpha\beta}$. The density contrast is not an independent variable,
as it can be written in the form
\begin{equation}
\delta({\vec q}, \tau) = (1 + \delta_0({\vec q})) \bigl[\gamma({\vec q}, \tau)/
\gamma_0 ({\vec q}) \bigr]^{-1/2} - 1 \;,
\end{equation}
where $\gamma \equiv {\rm det} ~\gamma_{\alpha\beta}$.
\section{Results and conclusions}
The method is then based on a $1/c^2$ expansion of the equations above, which
first of all leads to a new, purely Lagrangian, derivation of
the Newtonian approximation (Matarrese \& Terranova 1995). One of the most
important results in this
respect is that we obtained a simple expression for the
Lagrangian metric; exploiting the vanishing of the spatial
curvature in the Newtonian limit we were able to write it in terms of the
displacement vector ${\vec S}({\vec q}, \tau) = {\vec x}({\vec q},\tau) -
{\vec q}$, from the Lagrangian coordinate ${\vec q}$ to the Eulerian
one ${\vec x}$ of each fluid element (e.g. Buchert 1995 and references
therein), namely
\begin{equation}
d s^2 = a^2(\tau) \biggl[ - c^2 d \tau^2 + \delta_{AB}
\biggl(\delta^A_{~\alpha} + {\partial S^A({\vec q}, \tau)
\over \partial q^\alpha} \biggr)
\biggl(\delta^B_{~\beta} + {\partial S^B({\vec q}, \tau)
\over \partial q^\beta} \biggr) \biggr] \;.
\end{equation}
A straightforward application of this formula is related to the
Zel'dovich approximation.
The spatial metric is that of Euclidean space in time--dependent
curvilinear coordinates, consistent with the intuitive notion
of the Lagrangian picture in the Newtonian limit.
Read this way, the complicated equations of Newtonian gravity in the
Lagrangian picture become much easier: one just has to deal with the spatial
metric tensor and its derivatives.
The displacement vector is then completely fixed by solving the Raychaudhuri
equation together with the momentum constraint in the $c \to \infty$ limit.
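For instance, in the Zel'dovich approximation the displacement grows with the linear growth factor and the density contrast follows from the Jacobian of the map ${\vec q} \to {\vec x}$. A minimal one-dimensional numerical sketch (the potential, amplitude, and growth factor below are illustrative assumptions, not values from the text):

```python
import numpy as np

# 1D Zel'dovich-type sketch: x = q + S(q), with S = -D dPhi_v/dq and
# 1 + delta = 1 / det(delta_ab + dS_a/dq_b)  (here a 1x1 "determinant").
n = 512
q = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
D = 0.3                      # assumed growth factor (illustrative)
phi = np.cos(q)              # assumed initial velocity potential
S = D * np.sin(q)            # S = -D dphi/dq

x = q + S                    # Eulerian position of each fluid element
J = 1.0 + np.gradient(S, q)  # deformation Jacobian
delta = 1.0 / J - 1.0        # density contrast from mass conservation
```

Here the analytic Jacobian is $1 + D\cos q$, so overdensities form where the flow converges ($\cos q < 0$).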
Next, we can consider the post--Newtonian corrections to the metric and
write equations for them. In particular, we can derive a
simple and general equation for the gravitational waves $\pi_{\alpha\beta}$
emitted by non--linear structures described by Newtonian gravity. The
result can be expressed both in Lagrangian and Eulerian coordinates. In the
latter case one has,
\begin{equation}
\nabla^2_x \pi_{AB} = \Psi^{(E)}_{v,AB} + \delta_{AB} \nabla_x^2 \Psi_v^{(E)}
+ 2 \biggl( \bar \vartheta \bar \vartheta_{AB} -
\bar \vartheta_{AC}
\bar \vartheta^C_{~~B} \biggr) \;,
\end{equation}
with capital latin labels $A,B, \dots = 1,2,3$ indicating Eulerian
coordinates and $\nabla_x^2 \Psi_v^{(E)} = - \frac{1}{2}
( \bar \vartheta^2 - \bar \vartheta^A_{~B} \bar \vartheta^B_{~A} )$,
which generally allows a simple derivation of $\pi_{AB}$, given the
(gradients of the) velocity potential,
$\bar \vartheta_{AB} = \partial^2 \Phi_v/\partial x^A \partial x^B$,
by a convolution in Fourier space.
These formulae allow one to calculate the
amplitude of the gravitational--wave modes in terms of the velocity
potential, which in turn can be deduced from observational data on
the radial peculiar velocities of galaxies.
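As an illustration, the Fourier-space inversion described above can be sketched numerically on a periodic grid. This is a hypothetical implementation (the grid size, box length, and test potential are assumptions, not taken from the text), solving $\nabla^2_x \pi_{AB}$ for the source built from a given $\Phi_v$:

```python
import numpy as np

def solve_pi_ab(phi_v, box):
    """Solve nabla^2 pi_AB = Psi_,AB + delta_AB nabla^2 Psi
    + 2(tr(th) th_AB - th_AC th_CB) on a periodic grid,
    given the velocity potential Phi_v."""
    n = phi_v.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    kvec = np.stack([kx, ky, kz])
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0            # placeholder; the zero mode is zeroed below
    phi_k = np.fft.fftn(phi_v)

    # theta_AB = d^2 Phi_v / dx^A dx^B via spectral differentiation
    theta = np.array([[np.real(np.fft.ifftn(-kvec[a] * kvec[b] * phi_k))
                       for b in range(3)] for a in range(3)])
    tr = np.trace(theta)

    # nabla^2 Psi_v = -(1/2)(theta^2 - theta_AB theta_BA)
    lap_psi = -0.5 * (tr ** 2 - np.einsum("ab...,ba...->...", theta, theta))
    psi_k = -np.fft.fftn(lap_psi) / k2
    psi_k[0, 0, 0] = 0.0

    pi = np.empty_like(theta)
    for a in range(3):
        for b in range(3):
            psi_ab = np.real(np.fft.ifftn(-kvec[a] * kvec[b] * psi_k))
            src = (psi_ab + (a == b) * lap_psi
                   + 2.0 * (tr * theta[a, b]
                            - np.einsum("c...,c...->...",
                                        theta[a], theta[:, b])))
            pi_k = -np.fft.fftn(src) / k2
            pi_k[0, 0, 0] = 0.0
            pi[a, b] = np.real(np.fft.ifftn(pi_k))
    return pi
```

By construction $\nabla^2 \Psi_v$ equals minus half the quadratic invariant, so the trace of the source vanishes and the recovered $\pi_{AB}$ is traceless.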
In the standard case, where the cosmological perturbations form
a homogeneous and isotropic random field, we can obtain a heuristic
perturbative estimate of their amplitude in terms of the
{\em rms} density contrast and of the ratio of the typical perturbation scale
$\lambda$ to the Hubble radius $r_H=c H^{-1}$. One simply has
$\pi_{rms} / c^2 \sim \delta_{rms}^2 (\lambda / r_H )^2$.
This effect gives rise to a stochastic background of
gravitational waves which gets a non--negligible amplitude in
the so--called {\em extremely--low--frequency} band
(e.g. Thorne 1995), around $10^{-14}$ -- $10^{-15}$ Hz.
We can roughly estimate that the present--day closure density of this
gravitational--wave background is
\begin{equation}
\Omega_{gw}(\lambda) \sim \delta_{rms}^4
\biggl( {\lambda \over r_H} \biggr)^2 \;.
\end{equation}
In standard scenarios for the formation of structure in the universe,
the typical density contrast on scales
$1$ -- $10$ Mpc implies that $\Omega_{gw}$ is about $10^{-5}$ --
$10^{-6}$. We might speculate that such a background would give rise to
secondary CMB anisotropies on intermediate angular scales: a sort of
{\em tensor Rees--Sciama effect}. This issue will be considered in
more detail elsewhere.
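A back-of-the-envelope evaluation of the closure-density estimate above reproduces the quoted range (the values of $\delta_{rms}$ and $H_0$ below are illustrative assumptions):

```python
# Order-of-magnitude check of Omega_gw ~ delta_rms^4 (lambda / r_H)^2.
c_km_s = 2.998e5                 # speed of light [km/s]
H0 = 70.0                        # assumed Hubble constant [km/s/Mpc]
r_H = c_km_s / H0                # Hubble radius, roughly 4300 Mpc

# (scale lambda [Mpc], assumed rms density contrast at that scale)
cases = [(1.0, 3.0), (10.0, 1.0)]
for lam, delta_rms in cases:
    omega_gw = delta_rms ** 4 * (lam / r_H) ** 2
    print(f"lambda = {lam:4.1f} Mpc : Omega_gw ~ {omega_gw:.1e}")
```

Both scales give $\Omega_{gw}$ of order $10^{-6}$--$10^{-5}$, consistent with the range quoted above.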
The previous PN formula also applies to isolated structures, where
the density contrast can be much higher than the {\em rms} value,
and shear anisotropies play a fundamental role. A calculation of
$\pi_{\alpha\beta}$ in the case of a homogeneous ellipsoid
showed that the PN tensor modes become
dominant, compared to the Newtonian contributions to the metric tensor,
during the late stages of collapse, and possibly even in a
shell--crossing singularity. It is
important to stress that this effect generally contradicts the standard
paradigm that the smallest scale for the applicability of the
Newtonian approximation is set by the Schwarzschild radius of the object.
Such a critical scale is indeed only relevant for nearly spherical collapse,
whereas this effect becomes important if the collapsing structure
strongly deviates from sphericity.
\section{Introduction}
Galaxy mergers have long been suggested to be associated with a variety of observational phenomena, including Ultra-Luminous Infrared Galaxies (ULIRGs), quasars, post-starburst galaxies, Active Galactic Nuclei (AGN), and Submillimeter Galaxies (SMGs). In certain galaxy evolution models, these objects may correspond to different phases of galaxy interactions \citep{hop06}. Galaxy mergers have a significant impact on galaxy morphology, kinematics, stellar masses, and star formation histories, as well as on the masses of central black holes. Recent studies of the galaxy luminosity function (LF) and stellar mass function (SMF) reveal that the number and stellar mass densities of galaxies on the red sequence have increased by a factor of 2 to 3 since $z \sim 1$, while those of blue star-forming galaxies have remained roughly constant over the same period \citep{fab07,bel07}, suggesting a migration of galaxies from the blue cloud to the red sequence. One plausible mechanism responsible for this transformation is the merging of two blue galaxies, so-called `wet mergers' (gas-rich mergers), followed by subsequent mergers among red-sequence galaxies, so-called `dry mergers' (gas-poor mergers) \citep{fab07}. Therefore, in order to better understand how present-day massive galaxies were assembled and how galaxies have evolved, we need improved constraints on the abundance of mergers and on where they occur. There are three aspects of galaxy interactions that I focus on in this talk: the absolute galaxy merger rates, the influence of mergers in triggering star formation, and the environment of merging galaxies.
\section{Evolution of merger rates}
There are essentially two ways to identify interacting systems. One is to select close pairs using the projected separation and redshift information. The advantage of using close pairs is that the galaxies are still well separated and their properties have not yet been strongly influenced by the merger. This allows us to select interacting systems based on the properties of their progenitors, such as their masses (or luminosities) and colors. The drawback is that such systems are not necessarily close in physical space, owing to the difficulty of separating the Hubble flow from peculiar velocities. It is therefore necessary to calibrate the true merger fraction among the selected galaxy pairs. The other approach relies on morphological classification. While visual identification of mergers seems relatively robust, it becomes more challenging at higher redshifts and is very time-consuming for large surveys. A more recently popular alternative is to use non-parametric measurements such as CAS (concentration, asymmetry, and clumpiness) together with Gini and M$_{20}$, where Gini measures the distribution of flux among a galaxy's pixels and M$_{20}$ is the second-order moment of the galaxy's light. It has been shown that combinations of the above parameters can effectively identify galaxies in middle- and late-stage mergers \citep{con03,lot04,lot08}. However, it is not entirely clear to which types of mergers (major vs. minor) these measurements are most sensitive.
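As an illustration of the non-parametric measurements mentioned above, the Gini coefficient and M$_{20}$ can be computed from a pixelated image roughly as follows. This is a simplified sketch (flux-weighted centroid, no segmentation map, and the brightest-20\%-of-the-flux convention assumed), not the exact pipeline of the cited studies:

```python
import numpy as np

def gini(flux):
    """Gini coefficient of the pixel flux distribution."""
    f = np.sort(np.abs(np.asarray(flux).ravel()))
    n = f.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * f) / (f.mean() * n * (n - 1))

def m20(image):
    """M20: log ratio of the second-order moment of the brightest pixels
    holding 20% of the flux to the total second-order moment."""
    y, x = np.indices(image.shape)
    f = image.ravel().astype(float)
    xc = np.sum(x.ravel() * f) / f.sum()      # flux-weighted centroid
    yc = np.sum(y.ravel() * f) / f.sum()
    mom = f * ((x.ravel() - xc) ** 2 + (y.ravel() - yc) ** 2)
    order = np.argsort(f)[::-1]               # brightest pixels first
    bright = np.cumsum(f[order]) <= 0.2 * f.sum()
    bright[0] = True                          # always keep the brightest pixel
    return np.log10(np.sum(mom[order][bright]) / np.sum(mom))
```

A centrally concentrated source yields a high Gini value and a strongly negative M$_{20}$, while a uniform image gives Gini $\approx 0$.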
Using the aforementioned two approaches, enormous progress has been made in studying the time evolution of merger rates. The evolution of the merger fraction is conventionally parameterized by a power law of the form $(1+z)^{m}$. The reported values of $m$ are diverse (see Lin et al. 2008 and references therein). This discrepancy arises from many factors, including how the merger samples are selected, what redshift ranges are probed, and what types of galaxies (luminosity, stellar mass, color, morphology, etc.) are included in the samples. For example, it has been shown that the galaxy merger fraction depends strongly on galaxy luminosity and perhaps halo mass \citep{lin04,pat08,ste09,dom09}, and its redshift evolution also depends on stellar mass \citep{con06}. Furthermore, the value of $m$ is a strong function of galaxy spectral type as well. A few studies have revealed that dry mergers are not rare and could be an important mechanism for building up the masses of early-type galaxies \citep{van05,bel06}. In order to investigate the role of dry mergers in the evolution of galaxies and when they become important, we classify candidate wet, dry, and mixed mergers using the colors of close pairs in the DEEP2 Redshift Survey. We find that blue galaxy pairs show slightly faster evolution, with $m = 1.27\pm0.35$, while red-red pairs and red-blue pairs possess negative slopes, $m = -0.92\pm0.59$ and $-1.52\pm0.42$ respectively. Although a simple density-evolution argument, based on the $(1+z)^{3}$ scaling of the physical density, would suggest $m = 3$, the departure of $m$ from 3 is mainly due to the clustering of galaxies, as well as the changing blue and red galaxy populations. With certain assumptions on the merger time scale and the merger probability of kinematic pairs, we are able to investigate the relative roles of wet, dry, and mixed mergers. As shown in Fig. 
\ref{figNmratio}, it is found that the relative fraction of wet mergers decreases with time, while those of dry and mixed mergers have kept growing since $z \sim 1$ \citep{lin08}. These results indicate that dry mergers play an important role at low redshift in assembling the mass of red-sequence galaxies, whereas wet mergers are more important at earlier epochs. Similar conclusions have also been drawn in studies of VVDS data \citep{der09} and of GOODS data \citep{bun09}.
Despite the difficulty of extending merger rate measurements beyond $z \sim 1$, owing to the need for deep near-infrared data, attempts to probe the merger rate up to $z \sim 3$ have recently been made by several groups \citep{con03,rya08,blu09}. Exploring merger rates at $z \sim 2-3$ is essential because this period roughly coincides with the peak of the cosmic star formation rate, quasar activity, and submillimeter sources. Based on limited samples, it has been suggested that merger activity at $z \sim 2$ could be more frequent than at $z \sim 1$ by a factor of 2-4. However, whether this trend continues to higher redshift, plateaus, or peaks at $z \sim 2$ is still debatable. This will be a rich area for future studies using HST/WFC3 and powerful ground-based near-infrared spectroscopy.
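The conventional $(1+z)^{m}$ parameterization can be fit by linear least squares in log space; a sketch with invented merger fractions (illustrative values, not data from the surveys cited):

```python
import math

# Hypothetical merger fractions f(z); the numbers below are invented
# for illustration and are NOT survey measurements.
z    = [0.2, 0.5, 0.8, 1.1]
frac = [0.020, 0.028, 0.035, 0.042]

# Fit f = f0 (1+z)^m, i.e. ln f = ln f0 + m ln(1+z).
x = [math.log(1.0 + zi) for zi in z]
y = [math.log(fi) for fi in frac]
xbar, ybar = sum(x) / len(x), sum(y) / len(y)
m = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
     / sum((xi - xbar) ** 2 for xi in x))
f0 = math.exp(ybar - m * xbar)
print(f"best-fit slope m = {m:.2f}, normalization f0 = {f0:.3f}")
```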
\begin{figure}
\centerline{\rotatebox{270}{\includegraphics[width=7.0cm]{linl1.eps}}}
\caption{Fraction of major mergers for wet (open triangles), dry (open circles), and
mixed mergers (solid circles) as a function of redshift. The symbols represent results from the DEEP2, TKRS, CNOC2
and MGC surveys. The three curves show the semi-analytical predictions of Sp-Sp (solid line), E-E (dashed line), and E-Sp (dash-dotted line) mergers by
\citet{kho03} but for a field-like environment (this figure is taken from Lin et al. 2008). \label{figNmratio}}
\end{figure}
\section{Triggered star formation in interacting galaxies}
Hydrodynamic simulations of galaxy mergers that assume density-dependent star formation predict bursts of star formation after the first pass of a galaxy encounter \citep{mih96,bar04,cox06}. The idea that interactions between two gas-rich galaxies are effective in triggering star formation, however, is not new. Using local samples, Larson and Tinsley (1978) demonstrated that interacting systems have bluer colors and a larger scatter in the (U - B, B - V ) diagram than normal galaxies, indicating recent bursts of star formation in the interacting galaxies. Another piece of evidence supporting tidally-triggered star formation comes from far-infrared selected samples: most ULIRGs in the local universe are found to be strongly interacting/merging systems (Borne 1999). However, ULIRGs are an extreme population of starbursting galaxies. The fact that they are mostly associated with mergers does not imply that galaxy interactions necessarily induce high levels of star formation. The key to constraining the strength of induced star formation in merging systems is to have large statistical samples of interacting galaxies and to compare their star formation activities with those of isolated galaxies. Such exercises have been performed in studies from the CfA2, 2dF, and SDSS surveys \citep{bar00,lam03,nik04}. These works found an anti-correlation between the star formation efficiency and kinematic parameters such as the separations and velocity differences of pairs, supporting star formation triggered by close encounters. It is worth noting that the level of SF enhancement seen in very close pairs is no greater than a factor of 2 compared to widely separated pairs and control samples, even after restricting the samples to late-type mergers. 
In other words, the efficiency of inducing star formation via galaxy interaction is not as dramatic as one would expect from studies on ULIRGs or HyperLIRGs.
Now, what happens for galaxy interactions at higher redshift? There is consensus that high-redshift galaxies are probably more gas-rich, and their structures could differ from local samples (e.g., they could be closer to bulgeless). If so, would that have a significant impact on the activity during mergers? To explore this issue, we looked for interacting systems, including close pairs and morphologically merging galaxies, in the EGS field (one of the DEEP2 fields), where we have Spitzer MIPS 24 $\mu$m observations \citep{lin07}. We measured the total infrared luminosity converted from 24 $\mu$m fluxes and the stellar masses for both interacting systems and field galaxies over the redshift range $0.1 < z < 1.1$. We find that while the control samples form a well-defined sequence in the relation of $L_{IR}$ and $M_{*}$, close pairs and morphologically merging galaxies tend to lie at the upper end of this sequence. It is worth mentioning that $L_{IR}$ increases toward higher redshifts for a given stellar mass range. At $z \sim 1$, control-sample galaxies can also exceed the ULIRG threshold without interactions as long as their stellar masses are large enough, in contrast to the case at very low redshifts, where mergers seem to be the only possible mechanism for boosting star formation up to the ULIRG level. Statistically, we find that at $z\sim 1$, 7.1 $\pm$ 4.3\% of interacting galaxies with $M_{*}$ $> 2\times10^{10}$ are ULIRGs, compared to 2.6 $\pm$ 0.7\% of the control sample. If we plot the star formation rate efficiency versus pair separation, we again see an anti-correlation between these two parameters (Fig. \ref{fig2}). On average, we find the overall SFR enhancement during interactions is close to a factor of 2, and we do not find evolution of this enhancement over the redshift range $0.1 < z < 1.1$. 
However, we note that there is a large spread in SFR efficiency among interacting galaxies at a given separation, indicating that the starbursting efficiency also depends on other factors. This could be because we are sampling different stages of mergers, or because of variations in the internal properties of these interacting systems. If we interpret the spread in SFR seen in our sample as mostly attributable to different merger stages, our results imply that the overall amount of merger-enhanced star formation is probably too small to consume all the available gas within a couple of Gyr.
There is still room for improvement in the above studies. In order to decipher whether it is the internal properties of galaxies or the merger stage that dominates the scatter in the SFR of interacting galaxies, it would first be desirable to divide the samples by various parameters, such as gas content, mass ratio, and orbital configuration. The second step is to develop the ability to classify interacting systems into sequential merger stages. In addition to having larger samples, the development of techniques that can map the various stages of mergers will allow detailed comparisons with model predictions and also help to improve our understanding of mergers.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=6.0cm]{linl2.eps}
\caption{The specific IR luminosity $L_{IR}/M_{*}$ as a function of
projected separation for kinematic galaxy pairs (circles) and
merging galaxies (triangles) in three redshift bins.
Merging galaxies are assigned to $r_{p}$ $\sim5$ $h^{-1}$kpc , which is the
minimum separation of kinematic close pairs. Galaxies with
$L_{IR}$ greater than $10^{12}$ $L_{\odot}$ (ULIRGs) are shown
with larger symbols. Distributions of $L_{IR}/M_{*}$ in control pairs are
also shown along the right-hand axes for comparison. Data points
at $L_{IR}/M_{*}$ = 0 refer to those sources with no 24 $\mu$m detection. The
number within the parenthesis in the bottom panel denotes the
$L_{IR}/M_{*}$ of that data point. Close pairs ($r_{p}$ $<$ 50 $h^{-1}$kpc ) and
merging galaxies are found to have higher median $L_{IR}/M_{*}$ and wider
spread of $L_{IR}/M_{*}$
than those of wide pairs ($r_{p}$ $>$ 50 $h^{-1}$kpc ) and control pairs (see
Table 1) (taken from Lin et al. 2007). \label{fig2}}
\end{center}
\end{figure}
\section{Environment of mergers}
As discussed in \S 2, dry mergers become more frequent at later times, mostly due to the increasing population of red galaxies toward low redshift. On the other hand, there exists a well-known color-density relation \citep{hog04,coo06}: the fraction of red galaxies is greater in high-density regions than in low-density regions. How the growth of red-sequence galaxies relates to the environment, and what role mergers play in building up such a color-density relation, remain outstanding questions.
Another reason to study the environment of mergers is to separate out environmental effects on intrinsic galaxy properties when investigating triggered star formation. \citet{bar07} demonstrated that most close pairs reside in more massive halos than field galaxies. Given that galaxy properties are tied to the environment, direct comparisons of star formation rates between paired and isolated galaxies may be contaminated by environmental effects. Using constraints derived from cosmological N-body simulations, it is possible to apply isolation criteria to observed close pairs that are suitable for studies of tidally-triggered star formation.
It has recently been shown that dry mergers preferentially occur in groups and clusters. For example, \citet{tra08} suggested that dry mergers are an important process in building up massive galaxies in the cores of galaxy groups/clusters, based on a study of four X-ray luminous groups at intermediate redshift ($z \sim 0.4$); \citet{mci08} also found that in the SDSS sample the frequency of mergers between luminous red galaxies (LRGs) is higher in groups and clusters than in the overall LRG population by a factor of $2-9$. Yet quantitative measurements of wet, dry, and mixed merger rates as a function of environment are still lacking. One way to probe the connection between mergers and the formation of red galaxies is to compare the environment distribution of mergers with that of post-starburst galaxies, the so-called `K+A' or `E+A' galaxies \citep{dre83}. These K+A galaxies are suggested to be the transition phase between star-forming galaxies and dead red-sequence galaxies, and hence could be the direct progenitors of early-type red galaxies. In Fig. \ref{KA}, we show the overdensity distribution $(1 + \delta_{3})$ \citep{coo06} of K+A galaxies found in the DEEP2 sample \citep{yan09} versus that of wet, dry, and mixed mergers, where $(1 + \delta_{3})$ is the projected surface density relative to the mean density at a given redshift \citep{lin09}. It can be seen that the distributions of mixed and dry mergers alone are distinct from that of K+A galaxies, whereas wet mergers and wet plus mixed mergers are distributed more similarly to K+A galaxies \citep{lin09}. This serves as indirect evidence of a link between mergers involving at least one gas-rich galaxy and the formation of K+A galaxies. Quantitative comparisons await larger samples of both K+A galaxies and merger systems.
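One simple quantitative comparison of such overdensity distributions is a two-sample Kolmogorov-Smirnov test; a sketch with invented log-overdensity samples (purely illustrative, not the DEEP2 measurements):

```python
import random

def ks_statistic(a, b):
    """Two-sample KS statistic: the maximum distance between the two
    empirical cumulative distribution functions."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        v = min(a[i], b[j])
        while i < len(a) and a[i] == v:   # advance past ties in a
            i += 1
        while j < len(b) and b[j] == v:   # advance past ties in b
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

# Invented samples standing in for the merger classes and K+A galaxies:
random.seed(42)
wet = [random.gauss(0.1, 0.5) for _ in range(200)]
dry = [random.gauss(0.6, 0.5) for _ in range(200)]
ka  = [random.gauss(0.1, 0.5) for _ in range(200)]
print(ks_statistic(wet, ka), ks_statistic(dry, ka))
```

In this toy setup the dry-merger sample, drawn from a shifted distribution, is much farther from the K+A sample than the wet-merger one.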
\begin{figure}
\centerline{\rotatebox{90}{\includegraphics[width=7.5cm]{linl3.eps}}}
\caption{The overdensity $(1 + \delta_{3}) $ distribution of wet (upper left panel), dry (lower-left panel), mixed (upper-right panel), and wet+mixed mergers (lower-right). The distributions of K+A galaxies are presented as solid curves while those of mergers are shown as shaded areas \citep{lin09}. \label{KA}}
\end{figure}
\section{Summary}
I have argued that studies of merger rates, of the effectiveness of tidally-triggered star formation, and of the host environments of mergers are the three keys to pinning down the role of mergers in the evolution history of galaxies. Although our understanding of these issues has improved dramatically over the last few years, several pieces are still missing from the whole picture. For example, under what conditions will wet mergers lead to the quenching of star formation and hence to K+A galaxies? What is the outcome of mixed mergers? What is the merger rate beyond redshift one? How well can we separate the contributions of minor mergers from those of major mergers? And finally, what is the role of AGN during galaxy encounters? Both detailed studies of individual interacting systems and statistical results based on future larger and deeper surveys of merging galaxies, together with improved numerical modeling incorporating more sophisticated physics, will provide the knowledge essential to resolving these issues.
\acknowledgements
I thank the DEEP2 team and D. Patton for their contributions to several works presented here, E. Barton and D. McIntosh for their helpful discussions, and the conference organizers for a wonderful meeting.
\section{\label{sec:background}Introduction}
\noindent Graphene exhibits massless Dirac Fermions \cite{Wallace,Geim, Neto08, Raza_book} with semi-metallic behavior, for which the collective carrier modes in the form of plasmons have been a topic of study \cite{Andersen, Sarma, Rana, Gangadharaiah, Jablan, Mishchenko}. When graphene is patterned in the form of an armchair graphene nanoribbon (acGNR) \cite{Nakada96, Brey06, Son, Raza08, Raza08_ac_prb}, two important deviations occur. First, acGNRs develop a band gap, and second, the dispersions are no longer linear, so the electrons and holes behave as massive Dirac Fermions.
While acGNRs of atomic width $N$ with $mod(N,3)=0,1$ exhibit significant band gap opening irrespective of the theoretical model, acGNRs with $mod(N,3)=-1$ have zero band gap and massless dispersion within the continuum and the first nearest-neighbor $p_z$-orbital tight binding (1nn pzTB) models \cite{Nakada96, Brey06, Saito, dra11a}. One has to use more detailed methods, such as density functional theory (DFT) \cite{Son}, extended H\"uckel theory (EHT) \cite{Raza08, Raza08_ac_prb}, or a beyond-1nn TB model, to obtain a more detailed band structure. Although there are quantitative differences amongst these methods, qualitatively they converge upon massive Dirac Fermions with a band gap opening for $mod(N,3)=-1$ acGNRs. The band gaps predicted by EHT for these acGNRs are of the order of a few tens of meV for extremely narrow ribbons, and decrease as the width of the nanoribbon increases \cite{Raza08_ac_prb}.
\begin{figure}[htb]
\centerline{\includegraphics[width=8.5cm]{acGNR5.eps}}
\caption{(Color online) Ball and stick model of the hydrogenated acGNR$5$. x- (y-) is the longitudinal (transverse) direction. Atomic visualization is done by H\"uckel-NV \cite{HuckelNV}.}
\label{fig:acGNR5geometry}
\end{figure}
In this paper, we examine the plasmon dispersion in intrinsic and extrinsic acGNR with atomic width $N = 5$ (acGNR5) by using a third-nearest-neighbor (3nn) pzTB model, benchmarked with EHT, within the random phase approximation (RPA) as discussed in Sec. II. We discuss the plasmon dispersion results in Sec. III followed by the conclusions.
\section{\label{sec:model}Theoretical Model}
The unit cell for a hydrogen passivated acGNR$5$ is highlighted in Fig. \ref{fig:acGNR5geometry} that contains 10 carbon and 4 hydrogen atoms. The unit vector is given as $\vec{a} = d \widehat{x} = 3 a_{cc} \widehat{x}$, where $a_{cc} = 1.42 \mathrm{\AA}$ is the carbon bond length. The pzTB Hamiltonian of the unit cell is a $10 \times 10$ matrix containing 3nn couplings. We transform the real-space Hamiltonian to the reciprocal space $H(k)$ to calculate
the eigenvalues $E_i(k)$ and eigenfunctions $c_i^{(\alpha)} (k)$ for the eigenstate $i=1,2,\ldots,10$, where $i$ is the band index and $\alpha$ represents the atomic location. The band index ranges from $i=1 \; (i=\overline{1})$ corresponding to the lowest-lying conduction (highest-lying valence) band and $i=5 \; (i=\overline{5})$ corresponding to the highest-lying conduction (lowest-lying valence) band.
One finds that the electron-hole symmetry is broken due to finite 2nn and 3nn couplings.
The band structure for acGNR$5$ is shown in Fig. \ref{fig:bandstructure}. The 3nn tight-binding parameters ($E_0$, $t_0$, $t_1$, $t_2$) obtained by fitting the acGNR5 nanoribbon to EHT \cite{Raza08_ac_prb} are reported in Table \ref{tab:tbparams}. The parameters are obtained by fitting the top three valence bands and bottom three conduction bands of the EHT data to the 3nn pzTB band structure at 51 k-points uniformly spaced across the Brillouin zone. The fit uses a least-squares algorithm, and no geometric relaxation of the bond lengths is incorporated. This set of hopping parameters agrees well with Ref. \cite{wohlthat} for the $t_0$ and $t_1$ parameters. However, the 3nn hopping parameter ($t_2$) we report is significantly smaller, owing to the smaller gap predicted by EHT \cite{Raza08, Raza08_ac_prb} compared with DFT results \cite{Son}.
\begin{figure}[htb]
\centerline{\includegraphics[width=8.5cm]{Figure2_1.eps}}
\caption{Carrier dispersion for acGNR5 calculated using the 3nn tight binding Hamiltonian with hopping parameters given in Table \ref{tab:tbparams}.
Panel (a) shows the complete 10-band structure, and panels (b) and (c) show progressively more detail in the dispersion near the gap of $E_g \approx 64$ meV over two different ranges of momentum.
In addition to the gap, asymmetry between the conduction and valence bands has been introduced by the non-zero 2nn and 3nn coupling parameters.}
\label{fig:bandstructure}
\end{figure}
The carrier dispersion of the $i=\overline{1}, \, 1$ (valence, conduction) bands shows a finite gap of $E_g \approx 64$ meV, representing a massive Dirac Fermion system with a dispersion
characterized by the relation:
\begin{equation}
E_{ki} = \pm \sqrt{(m_{0} v_{Fi}^2)^2 + (\hbar v_{Fi} k)^2}
\label{eq:diracfermion}
\end{equation}
where $v_{Fi}$ is the Fermi velocity for the $i=\overline{1}, \, 1$ (valence, conduction) bands, and the $+ \, (-)$ sign is chosen for the conduction (valence) band. The band gap $E_g$ corresponds to a relativistic rest mass of the massive Dirac Fermion system of $m_0=E_g/(2 v_{Fi}^2)$.
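As a quick numerical illustration of Eq. (\ref{eq:diracfermion}), the sketch below evaluates the massive Dirac dispersion. The gap $E_g \approx 64$ meV is the value quoted above; the Fermi velocity is the $v_{F1}$ value quoted later for Fig. \ref{fig:velocity}, used here purely as an illustrative input.

```python
import math

# Massive Dirac fermion dispersion, Eq. (1): a minimal numerical sketch.
# E_g ~ 64 meV is quoted in the text; v_F1 is the value quoted for Fig. 4.
HBAR = 6.582119569e-16   # reduced Planck constant, eV*s
E_G = 0.064              # band gap, eV
V_F = 8.33219e5          # Fermi velocity, m/s
M0_VF2 = E_G / 2.0       # rest-mass energy m0*vF^2 = Eg/2, eV

def dirac_energy(k, band=+1):
    """E_k = +/- sqrt((m0 vF^2)^2 + (hbar vF k)^2); band=+1 conduction, -1 valence."""
    return band * math.sqrt(M0_VF2 ** 2 + (HBAR * V_F * k) ** 2)

# At k = 0 the conduction band edge sits at +Eg/2 = 32 meV above midgap.
print(dirac_energy(0.0))
```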
\begin{figure}[htb]
\centerline{\includegraphics[width=8.5cm]{overlapintegralfamily.eps}}
\caption{3nn tight-binding overlap integral computed for (a) the interband transition between conduction and valence bands,
and for (b) the intraband transition of the conduction band along the line $k^\prime = k+q$.
In each panel, a family of 12 curves is shown for $0 \le q \le q_{max}$ where $q_{max}=11 \, \Delta q$, in steps of $\Delta q$ where $\Delta q = \pi/(800 \, d)$.
In (a) the overlap for $q=0$ is identically 0, and the overlap for $q=q_{max}$ has a maximum of approximately 0.66,
whereas in (b) the overlap for $q=0$ is identically 1, and the overlap for $q=q_{max}$ has a minimum of approximately 0.76.
The vertical bars show the bounds $-q_{max} \le k < 0$, corresponding to the region
where $| \langle \overline{1}; k^\prime = k+q_{max} | 1; k \rangle | = 1$ in the continuum model.
Outside of this range, $| \langle \overline{1}; k^\prime=k+q_{max} | 1;k \rangle | = 0$ for the continuum model.
The conduction-conduction band has the opposite symmetry in the continuum model.}
\label{fig:overlaps}
\end{figure}
\begin{table}[tp]
\centering
\caption{3nn tight-binding parameters for the acGNR5 nanoribbon obtained using a fit to EHT \cite{Raza08_ac_prb} data.}
\begin{tabular}{c r l}
\hline
$E_0$ & 0.11031 eV& onsite energy\\
$t_0$ & -2.69341 eV& 1nn hopping parameter\\
$t_1$ & 0.02201 eV& 2nn hopping parameter\\
$t_2$ & -0.03225 eV& 3nn hopping parameter\\
\hline
\end{tabular}
\label{tab:tbparams}
\end{table}
To compute the plasmon dispersion in the random phase approximation (RPA) for the nanoribbon with 3nn Hamiltonian, we follow the procedure outlined in Ref. \cite{dra11a}.
Due to large energy differences and small electronic wavefunction overlap integrals at $q \approx 0$ for both 1nn pzTB in Ref. \cite{dra11a} and 3nn pzTB in this paper, we use a two-band dielectric function including only the $i=1$ conduction and $i=\overline{1}$ valence bands to study the plasmon dispersion relation. However, some of the details of the 3nn pzTB model are different, which we discuss next.
In the RPA expression for the interband polarizability, electronic wavefunction overlap integrals between states in the two bands at momenta $k$ and $k^\prime = k+q$, where $q$ is the plasmon momentum, play a significant role. The polarizability is written as,
\begin{align}
\Pi_{mn} (q, \omega) &= \lim_{\eta \rightarrow 0} \frac{g_s}{L_x} \times \nonumber \\
&\sum\limits_k \frac{f(E_{k m}) - f(E_{k^\prime n} )}{E_{k m} - E_{k^\prime n } + \hbar \omega + i \hbar \eta}| \langle n; k^{\prime} | m; k \rangle |^2
\label{eq:chi}
\end{align}
where $m$ and $n$ are band indices, $g_s = 2$ is the spin degeneracy, $L_x$ is the sample length,
$k$ is the momentum of the initial state, $k^\prime = k+q$ is the momentum of the final state,
and $f(E) = 1/[1+e^{(E-\mu)/k_B T}]$ is the Fermi-Dirac distribution function with chemical potential $\mu$, Boltzmann constant $k_B$, and temperature $T$. $\hbar$ is the reduced Planck constant and $\eta$ is a small positive number. We consider intrinsic acGNRs with the chemical potential $\mu = 0$.
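The sum in Eq. (\ref{eq:chi}) can be evaluated directly on a discrete $k$-grid. The sketch below does so for a toy pair of massive-Dirac bands and a constant overlap; the band models, grid, and numerical values are illustrative assumptions, not the fitted 3nn pzTB bands.

```python
import numpy as np

HBAR = 6.582119569e-16   # reduced Planck constant, eV*s

def fermi(E, mu, kT):
    """Fermi-Dirac occupation; at kT = 0 this is a step function."""
    if kT == 0:
        return (E <= mu).astype(float)
    return 1.0 / (1.0 + np.exp((E - mu) / kT))

def polarizability(ks, q, omega, E_m, E_n, overlap2,
                   mu=0.0, kT=0.0, eta=1e10, Lx=1.0, gs=2):
    """Discrete-k evaluation of Pi_mn(q, w), Eq. (2)."""
    Ekm, Ekn = E_m(ks), E_n(ks + q)
    num = fermi(Ekm, mu, kT) - fermi(Ekn, mu, kT)
    den = Ekm - Ekn + HBAR * omega + 1j * HBAR * eta
    return (gs / Lx) * np.sum(num / den * overlap2(ks, q))

# Toy massive-Dirac bands and a constant overlap (illustrative only).
cond = lambda k: np.sqrt(0.032 ** 2 + (HBAR * 8.3e5 * k) ** 2)
val = lambda k: -cond(k)
unit = lambda k, q: np.ones_like(k)

ks = np.linspace(-1e8, 1e8, 2001)
# Intraband Pi_11 vanishes for intrinsic (mu = 0) ribbons at T = 0,
# since all conduction states are empty and occupations cancel.
pi_cc = polarizability(ks, 1e6, 1e13, cond, cond, unit)
```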
In Fig. \ref{fig:overlaps}, we illustrate several of these overlap integrals as functions of $k$. It should be noted that the overlap integral is no longer confined to the region defined by $\mathrm{sign}(k \, k^\prime) = -1$, as it is in the continuum model. Rather, the overlap integral is non-zero well beyond this range. However, direct transitions at $q=0$ are still forbidden.
The broadening of the overlap integral beyond the hard boundaries of the continuum model
indicates that there is a coupling to free-carrier states for collective modes at all nonzero $q$, and as a result, plasmons in this system are Landau damped.
Following Ref. \cite{dra11a}, we calculate the longitudinal dielectric function for acGNRs in the RPA. In the RPA, the dielectric matrix for acGNRs can be written as \cite{Zupanovic}:
\begin{equation}
\epsilon_{ijmn} (q, \omega) = \delta_{im} \delta_{jn} - v_{ijmn} (q) \: \Pi_{mn} (q, \omega)
\label{eq:rpa}
\end{equation}
where $v_{ijmn} (q)$ is the Coulomb matrix element in one dimension, $\Pi_{mn} (q, \omega)$ is the polarizability of the acGNRs, and $i$, $j$, $m$, and $n$ are the band indices. Non-trivial solutions to the field equations require:
\begin{equation}
\det{[\epsilon_{ijmn} (q, \omega)]} = 0
\label{eq:disprel}
\end{equation}
\emph{Intrinsic Plasmons:}
In the two-band approximation for intrinsic acGNRs at $T = 0$, the self-polarizabilities of the $i=\overline{1}, \, 1$ bands vanish: $\Pi_{\overline{1}\overline{1}} (q, \omega) = \Pi_{11} (q, \omega) = 0$. Further, symmetries in the acGNRs require \cite{Fertig,Zupanovic} that
the Coulomb matrix elements satisfy $v_{\overline{1},1,\overline{1},1} (q) = v_{\overline{1},1,1,\overline{1}} (q) = v_{1,\overline{1},1,\overline{1}} (q) = v_{1,\overline{1},\overline{1},1} (q)$. This simplifies Eq. \ref{eq:disprel}, giving the dispersion relation of the collective (plasmon) state in the two-band approximation:
\begin{equation}
1-v_{\overline{1},1,\overline{1},1} (q) \, [ \Pi_{\overline{1} 1} (q,\omega) + \Pi_{1 \overline{1}} (q, \omega) ] = 0
\label{eq:twobanddisp}
\end{equation}
We compute the Coulomb matrix elements $v_{\overline{1}, 1, \overline{1}, 1} (q)$ as described in Ref. \cite{dra11a} using
the $p_z$-orbital wavefunction localization parameter $w=1 \mathrm{\AA}$ \cite{Raza11}.
Solving Eq. \ref{eq:twobanddisp} gives the dispersion relation for the collective modes (plasmons) in the acGNR.
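Numerically, Eq. (\ref{eq:twobanddisp}) is solved at each $q$ by locating a zero crossing of the dielectric function in $\omega$. A generic bisection sketch, with a toy dielectric function standing in for the RPA expression, looks like:

```python
def find_root(f, w_lo, w_hi, tol=1e-10):
    """Bisection for f(w) = 0, assuming f changes sign on [w_lo, w_hi]."""
    f_lo = f(w_lo)
    assert f_lo * f(w_hi) < 0, "root not bracketed"
    while w_hi - w_lo > tol:
        mid = 0.5 * (w_lo + w_hi)
        if f_lo * f(mid) <= 0:
            w_hi = mid
        else:
            w_lo, f_lo = mid, f(mid)
    return 0.5 * (w_lo + w_hi)

# Toy dielectric function 1 - 4/w^2 with a zero at w = 2, standing in for
# 1 - v(q)[Pi + Pi] of Eq. (4); the real calculation supplies the RPA form.
eps_toy = lambda w: 1.0 - 4.0 / (w * w)
w_plasmon = find_root(eps_toy, 1.0, 10.0)
```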
\emph{Extrinsic Plasmons:}
The dispersion relation for plasmons in extrinsic acGNR
can also be obtained from Eq. \ref{eq:disprel} in the two-band approximation. For a chemical potential $\mu$ in the $i=1$ conduction band at $T=0$, states with momenta between $-k_f \le k \le k_f$
where $E_{k_f 1} = \mu$ are filled, and states outside of this range are empty.
For the extrinsic case $\Pi_{11} (q, \omega)$ is no longer 0, and we write the plasmon dispersion relation as:
\begin{align}
\left ( 1 - v_{\overline{1},1,\overline{1},1} (q) \, [\Pi_{\overline{1}1} (q, \omega) + \Pi_{1\overline{1}} (q, \omega)] \right ) \nonumber \\
\times \left ( 1 - v_{1,1,1,1} (q) \, \Pi_{11} (q, \omega) \right )= 0
\label{eq:extrinsicdisp}
\end{align}
Plasmons for negative chemical potentials $\mu$ will exhibit similar behavior.
\begin{figure}[htb]
\centerline{\includegraphics[width=8.5cm]{dispextrinsic5.3.eps}}
\caption{Extrinsic plasmon dispersion relations for acGNR$5$. Dispersion curves are calculated for a range of chemical potentials $\mu$.}
\label{fig:dispsum}
\end{figure}
\section{\label{sec:results}Discussion of Results}
\begin{figure}[htb]
\centerline{\includegraphics[width=8.5cm]{velocity.eps}}
\caption{Group velocity of extrinsic plasmons as a function of the chemical potential $\mu$ in the $q \rightarrow 0$ limit.
The solid points are calculated from the dispersion relation data presented in Fig. \ref{fig:dispsum}. The dashed curve
is calculated using the analytic model for $\lim_{q \rightarrow 0} v_g (\mu)$ discussed in the text
with $v_{1,1,1,1} (0) / (2 e^2 / \epsilon_0) = 11.1294$ and $v_{F1} = 8.33219 \times 10^5$ m/s.}
\label{fig:velocity}
\end{figure}
\emph{Intrinsic Plasmons:}
The intrinsic plasmon obtained using our formalism exhibits an onset threshold in both the $q$ and $E$ dimensions. The $q$ threshold can be understood with
the data presented in Fig. \ref{fig:overlaps}(a).
For small values of $q$, the overlap integral is nearly zero. Because the polarizabilities $\Pi_{\overline{1}1} (q, \omega)$ and $\Pi_{1\overline{1}} (q, \omega)$ are proportional to this overlap,
the dielectric function never crosses zero, and so no collective mode exists. As the overlap grows, the dielectric function eventually crosses zero and the intrinsic
plasmon dispersion appears. The threshold in $E$ arises because the polarizabilities are not large enough to cause a zero crossing at small $E$. As the plasmon
energy increases above the bottom of the conduction band, the resonant enhancement of the polarizabilities produces a zero crossing.
Because we are interested in plasmons in the $q \rightarrow 0$ limit, we do not consider the intrinsic case further.
\emph{Extrinsic Plasmons:}
Dispersion relations for plasmons in extrinsic acGNR5 computed using the tight-binding formalism described above are also plotted
in Fig. \ref{fig:dispsum} for several values of the chemical potential $\mu > E_g /2$, corresponding to geometrically spaced values of $k_F$.
From these results, it can be readily observed that the dispersion curves have a $q \sqrt{v_{1,1,1,1} (q )}$ character for values of the chemical potential within a few meV
of the band edge ($\mu \gtrsim E_g/2$).
Further, as the chemical potential increases, the dispersion relation is observed to asymptotically approach a limit which corresponds to
the plasmon dispersion in a massless Dirac fermion system.
It is interesting to analyze the behavior of the extrinsic plasmon group velocity in the $q \rightarrow 0$ limit as a function of the chemical potential $\mu$.
In this limit, the interband polarizabilities $\Pi_{1\overline{1}} (q, \omega)= \Pi_{\overline{1}1} (q, \omega) =0$ because the interband overlap
integral $\langle \overline{1}; k|1; k \rangle = 0$ (see Fig. \ref{fig:overlaps}(a)).
Further, the intraband overlap integral $\langle 1; k | 1; k \rangle = 1$ in this limit (see Fig. \ref{fig:overlaps}(b)).
As a result, the intraband polarizability becomes:
\begin{equation}
\Pi_{11} (q, \omega) = \frac{g_s}{L_x} \sum\limits_k \frac{2 \Delta E}{(\hbar \omega)^2}
\end{equation}
where $\Delta E = E_{k+q,1} - E_{k,1}$.
Thus, as $q \rightarrow 0$, the dielectric function becomes:
\begin{equation}
1-v_{1,1,1,1} (q) \, \Pi_{11} (q, \omega) = 0
\label{eq:simpledielectricfunction}
\end{equation}
Solving Eq. \ref{eq:simpledielectricfunction} for $\omega$, the plasmon group velocity in the $q \rightarrow 0$ limit can then be written:
\begin{equation}
v_g(k_F) = \left [\lim_{q \rightarrow 0} \left (\frac{2 \, v_{1,1,1,1} (q)}{\hbar^2 q^2} \int\limits_{-k_F}^{k_F} \Delta E \, dk \right ) \right ]^{1/2}
\label{eq:velocity}
\end{equation}
where the integral is taken over the filled states between $-k_F$ and $k_F$.
Substituting the relationship between chemical potential and Fermi wavenumber, $k_F = \sqrt{\mu^2 - (m_0 v_{F1}^2)^2}/(\hbar v_{F1})$, into
the analytic result for Eq. \ref{eq:velocity} gives the group velocity
as a function of chemical potential $v_g (\mu)$,
which is shown as a dashed curve in Fig. \ref{fig:velocity}.
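The substitution step uses the $\mu$-to-$k_F$ relation quoted above. A minimal sketch, taking $E_g \approx 64$ meV from the text and the $v_{F1}$ value given in Fig. \ref{fig:velocity}, is:

```python
import math

HBAR = 6.582119569e-16   # reduced Planck constant, eV*s
V_F1 = 8.33219e5         # m/s, value quoted in Fig. 4
M0VF2 = 0.064 / 2.0      # rest-mass energy m0*vF1^2 = Eg/2, eV

def fermi_wavenumber(mu):
    """k_F = sqrt(mu^2 - (m0 vF1^2)^2) / (hbar vF1); mu in eV, k_F in 1/m."""
    assert mu >= M0VF2, "mu must lie at or above the conduction band edge"
    return math.sqrt(mu ** 2 - M0VF2 ** 2) / (HBAR * V_F1)

# k_F vanishes exactly at the band edge and grows with the chemical potential.
print(fermi_wavenumber(M0VF2))
```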
\section{\label{sec:summary}Conclusions}
In summary, we have computed the plasmon dispersion for an acGNR5 nanoribbon using a 3nn tight-binding model.
This nanoribbon represents a massive Dirac Fermion system.
Hopping parameters for the model were obtained by fitting the 3nn band structure to band data obtained from an EHT
calculation. The intrinsic plasmon dispersion relation obtained exhibits a threshold in both $q$ and $E$.
The extrinsic plasmon dispersion relation obtained follows the $q \sqrt{V (q )}$ dependence expected in 1D systems for values of the chemical potential near the band edge ($\mu \gtrsim E_g/2$),
and the dispersion relation asymptotically approaches one corresponding to a massless Dirac fermion system as the chemical potential $\mu$ increases.
Good agreement between the group velocity of these plasmons in the $q \rightarrow 0$ limit and an analytic model based on the behavior of the polarizabilities as $q \rightarrow 0$
is obtained.
Finally, we note that some damping of these plasmons may be expected to occur from plasmon scattering to free electron states due to the nature of the
relevant overlap integrals.
\begin{acknowledgments}
DRA acknowledges partial support for this work from the National Institutes of Health.
\end{acknowledgments}
\section{Introduction}
\vspace{-1mm}
Smart meters (SMs) provide advanced monitoring of consumer energy usage, thereby enabling optimized management and control of electricity distribution systems~\cite{ipakchi2009grid}. Unfortunately, the data collected by SMs can reveal information about consumers' activities.
For instance, an individual's energy usage pattern may leak information about the times at which they run individual appliances~\cite{hart1992nonintrusive}.
Two approaches have been proposed to tackle the privacy threat posed by such information leakage.
One strategy involves manipulating user data before sending it to the utility provider (UP)~\cite{6007070}; this approach improves privacy at the cost of reduced operational insight for the UP.
The other strategy employs rechargeable batteries at each consumer site to try to decouple energy usage from energy requests~\cite{kalogridis2010privacy}; allowing devices to run off of either the battery or the UP and allowing the battery to charge at times of both activity and inactivity improves privacy at the cost of introducing individual batteries and, potentially, increasing consumer costs
(e.g., if energy is requested when it best conceals the consumers' usage without regard to the energy bill).
This paper investigates the latter approach.
Understanding the privacy implications of any strategy requires an appropriate privacy metric.
A variety of metrics are used to study privacy in energy distribution systems. These include statistical distance metrics~\cite{kalogridis2010privacy}, differential privacy~\cite{6847974}, distortion metrics~\cite{giaconi2018joint}, and information metrics like mutual information, which can be applied under a variety of assumptions on users' energy, including i.i.d.~\cite{varodayan2011smart, kalogridis2010privacy, 6003811, tan2012smart, gomez2013privacy}, stationary~\cite{6102315,65eac443f7d6420a9bb100e3a77b70a6}, and first-order time-homogeneous Markov random processes~\cite{7536745}; see~\cite{GDH_SPM_18} for a comprehensive review.
Alternative privacy metrics such as maximal leakage~\cite{7460507} have operational descriptions and relate to information measures like Sibson mutual information; its generalization, maximal $\alpha$-leakage~\cite{LKSP_CORR_18}, establishes additional relationships to Arimoto mutual information, mutual information, and Renyi entropy~\cite{7460507,LKSP_CORR_18}. Many of these measures can be understood as measures of an adversary's ability to gain insight into an unknown random variable $X$ by observing $Y$,
with measures differing only in the loss functions they use to quantify that insight~\cite{LKSP_CORR_18}.
We here use mutual information to measure privacy both because its interpretation in terms of an adversary that minimizes log-loss with respect to an evolving soft-decision model~\cite{LKSP_CORR_18} is well-matched to the evolving nature of energy distribution over time and because mutual information provides a useful bridge to adjacent fields such as hypothesis testing~\cite{PV_TIT_95}, estimation~\cite{SV_TIT_05}, and learning~\cite{XR_CORR_17}.
Since user energy consumption may be non-stationary, we seek privacy guarantees that apply across general random process models of energy consumption.
Moreover, given that
no battery can store unlimited energy, we impose finite capacity bounds on batteries. We therefore model the energy management unit (EMU) as a deterministic finite-state channel. We then adapt the Ahlswede-Kaspi coding strategy proposed for permuting channels~\cite{ahlswede1987optimal} to the SM privacy setting. This work generalizes the battery policy proposed in \cite{AE_SGC_17} by including the price of the energy requested from the grid and minimizing information leakage subject to a bound on the resulting energy bill.
We denote vectors by bold letters, e.g. $\vect{x}$, and random variables by uppercase letters, e.g. $X$. The operator $\norm{\cdot}$ denotes the sum over vector elements, e.g. $\norm{\vect{x}}=\sum_i x_i$. Intervals on the integers are denoted by double brackets, e.g. $\interval{a}{b}=\{a, a+1, \ldots, b-1, b\}$. The $n$-fold Cartesian product of the interval is denoted by $\interval{a}{b}^n=\interval{a}{b}\times\ldots\times \interval{a}{b}$. Given a vector $\vect{x}$ of size $n$ and a set of indices ${\cal A}\subseteq\interval{1}{n}$, we denote by $\vect{x}_{{\cal A}}$ the vector $\vect{x}_{{\cal A}} = \{x_i:i\in{\cal A}\}$. The support of the probability distribution $P_X$ is denoted by ${\hbox{supp}}(P_X)$, and the positive part operator is ${(a)}^+=\max(0,a)$.
\section{Energy Management System with a Finite Battery Model}
Figure 1 depicts an energy management system and the random processes therein. The privacy guarantee is defined in terms of the information leakage from the user to the provider, and the task of the EMU is to choose a battery policy that minimizes the leakage while satisfying the operation and cost constraints. Formal definitions follow.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{system}
\caption{Energy Management System with Finite Battery Model}
\label{fig:system}
\end{figure}
We model user energy consumption as a discrete-time random process $\randomvect{x}^n$ on alphabet ${\cal X}^n = \interval{0}{\alpha}^n$. The random variable $\random{x}_i$ describes the energy consumed by the user at time step $i$ with $i=0,1,...,n-1$. For simplicity of exposition we assume ${\cal X}\subseteq\mbox{\bb Z}$; the results generalize to arbitrary discrete alphabets. We use $P_{\randomvect{x}^n}\in{\cal P}_{\randomvect{x}^n}$ to denote the energy consumption pattern distribution, where ${\cal P}_{\randomvect{x}^n}$ is a fixed family of such distributions.
Since user energy consumption profiles tend to exhibit non-stationarities~\cite{kalogridis2010privacy},
${\cal P}_{\randomvect{x}^n}$ may contain non-stationary random processes.
The EMU maps consumption sequence $\randomvect{x}^n\in{\cal X}^n$ to a request sequence $\randomvect{y}^n\in{\cal Y}^n$ using a battery policy $P_{\randomvect{y}^n|\randomvect{x}^n}$ that is not allowed to vary with $\randomvect{x}^n$; random variable $\random{y}_i$ describes the energy requested from the UP at time step $i=0,1,...,n-1$. We again focus on integer random variables (${\cal Y}\subseteq\mbox{\bb Z}$) for simplicity. We require ${\cal Y}\supseteq{\cal X}$ so that the UP can satisfy the user's energy consumption even when no battery is available. We allow ${\cal Y}$ to contain negative values to model scenarios where users can sell energy back to the grid.
To be considered feasible, battery policy $P_{\randomvect{y}^n|\randomvect{x}^n}$ must create a request sequence that meets the energy demands of the user and does not request energy it cannot use or store. Let $\beta$ denote the finite capacity of a given battery (in energy units) and $\random{s}_i$ denote the amount of energy stored in that battery, the ``energy state,'' at time $i$. Then $\random{s}_{i}$ takes values in ${\cal S}=\interval{0}{\beta}$ and is governed by the charging dynamics
\bes \label{eq:battery_filling}
\random{s}_{i} = s_0+ \sum_{k=0}^{i-1} \random{y}_{k} - \sum_{k=0}^{i-1} \random{x}_{k},
\end{IEEEeqnarray}
where $s_0 \in {\cal S}$ is the initial battery state.
A power outage occurs when $\random{s}_{i} + { \random{y}_{i} - \random{x}_{i} } < 0$; energy is wasted when $\random{s}_{i} + { \random{y}_{i} - \random{x}_{i} } > \beta$.
Under this model, the battery resembles a box, energy units resemble balls that can be inserted (stored) and removed (consumed), and the set ${\cal Y}^n(s_0,\vect{x})$ of feasible requests, defined formally below, contains all sequences of insertions and removals allowed by the box. This feasibility constraint resembles \cite[Eq.~2.4]{ahlswede1987optimal} from the work of Ahlswede and Kaspi; this link is studied in~\cite{AE_SGC_17}.
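A direct way to make the box analogy concrete is to simulate the charging dynamics (\ref{eq:battery_filling}) and test feasibility in the sense of Definition \ref{def:feasible_set}. The sketch below is an illustrative check with toy sequences, not part of the coding scheme developed later:

```python
def battery_states(s0, x, y):
    """Battery states s_1..s_n from the recursion s_{i+1} = s_i + y_i - x_i."""
    states, s = [], s0
    for xi, yi in zip(x, y):
        s += yi - xi      # outage if s < 0, wasted energy if s > beta
        states.append(s)
    return states

def is_feasible(s0, x, y, beta):
    """y belongs to Y^n(s0, x) iff every state stays in [0, beta]."""
    return all(0 <= s <= beta for s in battery_states(s0, x, y))

# A full battery (beta = 2) covers early demand; later requests refill it.
ok = is_feasible(2, [2, 0, 1], [0, 1, 2], beta=2)    # feasible
bad = is_feasible(0, [1, 0], [0, 1], beta=2)         # outage at the first step
```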
\begin{definition} \label{def:feasible_set}
Given a battery with initial state $s_0\in{\cal S}$ and capacity $\beta$, the {\it set of feasible energy requests} for energy consumption sequence $\vect{x}\in{\cal X}^n$ is
\bes \label{eq:feasible_energy_requests}
{\cal Y}^n(s_0,\vect{x}) \stackrel{\Delta}{=} \{ \vect{y}\in{\cal Y}^n : s_{i} \in \interval{0}{\beta}\ \ \forall\ i\in\interval{0}{n}\}, \IEEEeqnarraynumspace
\end{IEEEeqnarray}
where $s_i$ follows the charging dynamics (\ref{eq:battery_filling}) under request sequence $\vect{y}$.
The {\it set of feasible battery policies} is
\bes \label{eq:omega_policies}
\Omega(s_0)\stackrel{\Delta}{=} \{ P_{\randomvect{y}^n|\randomvect{x}^n} : {\hbox{supp}}( P_{\randomvect{y}^n|\randomvect{x}^n=\vect{x}}) \subseteq {\cal Y}^n(s_0,\,\vect{x})\ \ \forall \ \vect{x}\in{\cal X}^n \}. \IEEEeqnarraynumspace \medmuskip=0mu \thinmuskip=-1mu \thickmuskip=1mu \nulldelimiterspace=-1pt \scriptspace=0pt
\end{IEEEeqnarray}
\end{definition}
Our aim in feasible policy design is to minimize information leakage subject to a constraint on policy cost. Towards this end, we next define our measures of information leakage (where privacy is high when information leakage is low) and cost.
We measure a battery policy's information leakage by its worst-case performance.
\begin{definition} \label{def:privacy_measure}
The {\it information leakage} of policy $P_{\randomvect{y}^n|\randomvect{x}^n}$ is
\bes \label{eq:privacy_measure}
\bar{{\cal I}}(P_{\randomvect{y}^n|\randomvect{x}^n})=\max_{P_{\randomvect{x}^n}\in{\cal P}_{\randomvect{x}^n}} \frac{1}{n} I(\randomvect{x}^n;\randomvect{y}^n).
\end{IEEEeqnarray}
\end{definition}
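For a concrete single-step ($n=1$) illustration of Definition \ref{def:privacy_measure}, the sketch below computes $I(X;Y)$ for two hand-picked toy policies. The distributions are illustrative only, and the constant policy ignores battery feasibility; neither is an optimal policy.

```python
import math

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits from marginal p_x and channel p(y|x), given as dicts."""
    p_xy = {(x, y): px * pyx
            for x, px in p_x.items()
            for y, pyx in p_y_given_x[x].items()}
    p_y = {}
    for (x, y), p in p_xy.items():
        p_y[y] = p_y.get(y, 0.0) + p
    return sum(p * math.log2(p / (p_x[x] * p_y[y]))
               for (x, y), p in p_xy.items() if p > 0)

p_x = {0: 0.5, 1: 0.5}
identity = {0: {0: 1.0}, 1: {1: 1.0}}   # request mirrors consumption
constant = {0: {1: 1.0}, 1: {1: 1.0}}   # always request the maximum

leak_identity = mutual_information(p_x, identity)   # full leakage: H(X) = 1 bit
leak_constant = mutual_information(p_x, constant)   # perfect privacy: 0 bits
```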
We measure the cost of a policy $P_{\randomvect{y}^n|\randomvect{x}^n}$ as the difference between the user's energy bill under that policy and the user's energy bill under the feasible battery policy that minimizes the energy bill. (Under this definition, cost can be negative only for infeasible policies.) To calculate energy bills, we model the energy market price as a deterministic sequence, $\vect{m}\in\mbox{\bb R}^n$. Under this definition, the cost of an energy request sequence $\vect{y}$ is $\vect{m}^T\vect{y}$. We assume that the market price is constant over each of $K$ blocks of time. The price and duration of the $k$-th block, $k=0, 1, \ldots, K-1$, are $m_k$ and $l_k$, respectively, giving
\bes \label{eq:market_price_model}
\vect{m} = (\underbrace{ m_0, \ldots,m_0}_{l_0} ,\underbrace{ m_1,\ldots ,m_1}_{l_1}, \ldots, \underbrace{ m_{K-1}, \ldots,m_{K-1}}_{l_{K-1}}). \IEEEeqnarraynumspace
\end{IEEEeqnarray}
\begin{definition} \label{def:cost}
Consider an EMU with battery capacity $\beta$, initial state $s_0\in{\cal S}$, and market price $\vect{m}$. The {\it system cost of energy consumption sequence $\vect{x}\in{\cal X}^n$ under battery policy $P_{\randomvect{y}^n|\randomvect{x}^n}$} is
\bes
g(\randomvect{y}^n, \vect{x}) = \mathbbm{E}_{P_{\randomvect{y}^n|\randomvect{x}^n=\vect{x}}}[\vect{m}^T\randomvect{y}^n - \vect{m}^T\vect{y}^*(\vect{x})],
\end{IEEEeqnarray}
where $\vect{y}^*(\vect{x}) = \argmin_{\vect{y}\in{\cal Y}^n(s_0, \vect{x})} \vect{m}^T\vect{y}$. For any $\Delta\geq 0$, the set of {\it feasible $\Delta$-affordable battery policies} is
\bes \label{eq:extended_policies}
\Gamma(\Delta) \stackrel{\Delta}{=} \left\{ P_{\randomvect{y}^n|\randomvect{x}^n}\in\Omega(s_0) : g(\randomvect{y}^n, \vect{x}) \leq \Delta\ \ \forall\ \vect{x}\in{\cal X}^n \right\}\!. \IEEEeqnarraynumspace
\end{IEEEeqnarray}
\end{definition}
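The reference bill $\vect{m}^T\vect{y}^*(\vect{x})$ in Definition \ref{def:cost} can be found by brute force when the alphabets are small. The sketch below enumerates ${\cal Y}^n$ and keeps the cheapest feasible request; the alphabets, prices, and battery size are toy choices for illustration:

```python
import itertools

def battery_ok(s0, x, y, beta):
    """Feasibility per Definition 1: all battery states stay in [0, beta]."""
    s = s0
    for xi, yi in zip(x, y):
        s += yi - xi
        if not 0 <= s <= beta:
            return False
    return True

def min_bill_request(s0, x, m, y_alphabet, beta):
    """Brute-force y*(x) = argmin m^T y over feasible request sequences."""
    best, best_cost = None, float("inf")
    for y in itertools.product(y_alphabet, repeat=len(x)):
        if battery_ok(s0, x, y, beta):
            cost = sum(mi * yi for mi, yi in zip(m, y))
            if cost < best_cost:
                best, best_cost = y, cost
    return best, best_cost

# With price m = (1, 3), buying early and storing in the battery is cheapest.
y_star, bill = min_bill_request(0, [0, 1], [1, 3], [0, 1], beta=1)
```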
Finally, the privacy-cost function defines the optimal tradeoff between privacy and cost achievable by feasible battery policies.
\begin{definition} \label{def:privacy_guarantee}
Given an EMU with battery capacity $\beta$, initial state $s_0$ and market price $\vect{m}$, the {\it privacy cost function} is defined, for each $\Delta\geq0$, as
\bes \label{eq:privacy_guarantee}
{\cal I}(\Delta)\stackrel{\Delta}{=} \min_{P_{\randomvect{y}^n|\randomvect{x}^n}\in\Gamma(\Delta)} \bar{{\cal I}}(P_{\randomvect{y}^n|\randomvect{x}^n}).
\end{IEEEeqnarray}
\end{definition}
To bound ${\cal I}(\Delta)$, we adapt techniques developed by Ahlswede and Kaspi~\cite{ahlswede1987optimal} from channel capacity to privacy-cost. While the resulting solution employs a non-causal battery policy, detailed analysis of~\cite{ahlswede1987optimal} shows that knowing just $\beta+1$ time steps ahead suffices to achieve optimality, where $\beta$ is the battery capacity. Thus, we envision practical implementations that rely on consumption predictions. This approach also provides insight on what prediction capabilities are needed.
\section{Geometry of the Feasible Sets}
\subsection{Shared Output Sequences}
\newcommand{\vect{y}_{\!{\cal A}}}{\vect{y}_{\!{\cal A}}}
Lemma \ref{le:shared_request_iff} characterizes a necessary and sufficient condition under which a set ${\cal A}$ of input pairs $(s_0,\vect{x})$ shares a common feasible output sequence $\vect{y}_{\!{\cal A}}$. Such shared output sequences are good for privacy since a UP that sees $\vect{y}_{\!{\cal A}}$ cannot distinguish which input pair $(s_0,\vect{x})\in{\cal A}$ caused it. Conversely, when two inputs $(s_0,\vect{x}), (\hat{s}_0,\hat{\vect{x}})$ share no feasible output $\vect{y}_{\!{\cal A}}$, the EMU cannot hide from the UP which pair caused the request. The following measure of distance
is useful for that analysis.
\begin{definition}
The distance between two input pairs $(s_0,\vect{x}), (\hat{s}_0,\hat{\vect{x}})\in{\cal S}\times{\cal X}^n$ is defined as
\bes
d_n\Big( (s_0,\vect{x}), (\hat{s}_0,\hat{\vect{x}}) \Big) = \!\!\!\max_{i\in\interval{0}{n-1}} \Big| \left(s_0 - \norm{\vect{x}^i} \right)- \left( \hat{s}_0 - \norm{\hat{\vect{x}}^i} \right) \Big|.\! \IEEEeqnarraynumspace \medmuskip=1mu \thinmuskip=0mu \thickmuskip=2mu \nulldelimiterspace=-1pt \scriptspace=0pt
\end{IEEEeqnarray}
\end{definition}
Lemma \ref{le:shared_request_iff} shows that the distance between input pairs determines the existence of a shared feasible output $\vect{y}$.
The result emphasizes the central role that battery capacity $\beta$ plays in privacy.
\begin{lemma} \label{le:shared_request_iff}
Let ${\cal A}$ denote a subset of the input pair alphabet ${\cal S}\times{\cal X}^n$. The following two statements are equivalent.
a) The distance between every two pairs $(s_0,\vect{x}),(\hat{s}_0,\hat{\vect{x}})\in{\cal A}$ is less than or equal to the capacity of the battery, i.e.
\bes \label{eq:shared_request_a}
d_n\Big( (s_0,\vect{x}), (\hat{s}_0,\hat{\vect{x}}) \Big) \leq \beta \textrm{ for all } (s_0,\vect{x}), (\hat{s}_0,\hat{\vect{x}})\in {\cal A}. \IEEEeqnarraynumspace
\end{IEEEeqnarray}
\quad b) All sequences in ${\cal A}$ share a feasible request $\vect{y}_{\!{\cal A}}$, i.e.
\bes \label{eq:shared_request_b}
\vect{y}_{\!{\cal A}} \in \bigcap_{(s_0,\vect{x})\in {\cal A} } {\cal Y}^n(s_0,\vect{x})
\end{IEEEeqnarray}
\end{lemma}
\begin{proof}
Let the sequence $\vect{y}_{\!{\cal A}}$ be such that for all $i$:
\bes
\norm{\vect{y}_{\!{\cal A}}^i} = -\min_{(s_0,\vect{x})\in {\cal A} } (s_0 - \norm{\vect{x}^{i}}).
\end{IEEEeqnarray}
Thus, for any $(\hat{s}_0,\hat{\vect{x}})\in{\cal A}$, the battery state at time $i+1$ is
\bes \label{eq:iff_achievability}
s_{i+1} = (\hat{s}_0 - \norm{\hat{\vect{x}}^{i}}) - \min_{(s_0,\vect{x})\in {\cal A} } (s_0 - \norm{\vect{x}^{i}}). \IEEEeqnarraynumspace \medmuskip=2mu \thinmuskip=1mu \thickmuskip=3mu
\end{IEEEeqnarray}
Now $d_n\big( (s_0,\vect{x}), (\hat{s}_0,\hat{\vect{x}}) \big) \leq \beta$ implies that $s_{i+1} \in \interval{0}{\beta}$ for all $i$, so $\vect{y}_{\!{\cal A}}$ is a feasible sequence.
The converse follows since for any sequence $\vect{y}$ and any two input pairs $(s_0,\vect{x}),(\hat{s}_0,\hat{\vect{x}})\in{\cal A}$ such that $d_n\big( (s_0,\vect{x}), (\hat{s}_0,\hat{\vect{x}}) \big) > \beta$, the absolute difference between the corresponding battery states at some time step $i$ satisfies
\bes
\big|s_{i+1}-\hat{s}_{i+1}\big| = \left|(s_0 - \norm{\vect{x}^{i}}) - (\hat{s}_0 - \norm{\hat{\vect{x}}^{i}})\right|>\beta.
\end{IEEEeqnarray}
Thus $s_{i+1}$ and $\hat{s}_{i+1}$ cannot both belong to ${\cal S}=\interval{0}{\beta}$.
\end{proof}
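The proof is constructive, so it translates directly into code. The sketch below computes $d_n$ (taking $\norm{\vect{x}^i}$ as the sum of the first $i{+}1$ consumptions, an indexing assumption) and builds the shared request $\vect{y}_{\!{\cal A}}$ from its cumulative sums exactly as in the proof:

```python
def prefix_sums(x):
    """Running sums ||x^i|| over the first i+1 entries (indexing assumption)."""
    out, s = [], 0
    for xi in x:
        s += xi
        out.append(s)
    return out

def d_n(pair_a, pair_b):
    """Distance between input pairs (s0, x) and (t0, z) per the definition."""
    (s0, x), (t0, z) = pair_a, pair_b
    return max(abs((s0 - cx) - (t0 - cz))
               for cx, cz in zip(prefix_sums(x), prefix_sums(z)))

def shared_request(pairs):
    """Build y with ||y^i|| = -min_{(s0,x)} (s0 - ||x^i||), as in the proof."""
    n = len(pairs[0][1])
    sums = [prefix_sums(x) for _, x in pairs]
    y, prev = [], 0
    for i in range(n):
        tot = -min(s0 - sums[j][i] for j, (s0, _) in enumerate(pairs))
        y.append(tot - prev)
        prev = tot
    return y

# Two toy pairs at distance 2 (= beta) share the single request y = [1, 0].
pairs = [(0, [1, 0]), (1, [0, 1])]
```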
\subsection{Cardinality bounds}
Building on Lemma~\ref{le:shared_request_iff}, Theorem~\ref{th:covering} gives an upper bound on the number of distinguishable input pairs $(s_0,\vect{x}^n)\in{\cal S}_0\times{\cal X}^n$, where ${\cal S}_0\subseteq{\cal S}$ is the set of possible initial battery states. The result is derived by building a covering $\{{\cal A}_i\}$ of ${\cal S}_0\times{\cal X}$ such that all input pairs in each ${\cal A}_i$ share a common feasible request $\vect{y}_i$. The result shows that the minimal time $\lambda\stackrel{\Delta}{=} \floor{ {(\beta+1)}/{\alpha} }$ needed to fully discharge a battery of capacity $\beta$ under maximal consumption $\alpha\stackrel{\Delta}{=}\max{\cal X}$ is a central parameter in the construction of privacy preserving battery policies. The proof is inspired by the code construction presented by Ahlswede and Kaspi \cite[Proposition 1]{ahlswede1987optimal}.
\begin{theorem} \label{th:covering}
Let the input alphabet be ${\cal S}_0\times{\cal X}^n$, with $\overline{{\cal S}_0}$ and $\underline{{\cal S}_0}$ denoting the maximum and minimum values of ${\cal S}_0$, respectively. There exists a set of request sequences ${\cal V}^n({\cal S}_0) \subseteq {\cal Y}^{n}$ such that
\bes
\log \big|{\cal V}^n({\cal S}_0) \big| \leq \ceil{ \frac{n- \floor{(\beta+\underline{{\cal S}_0}-\overline{{\cal S}_0})/\alpha} }{\lambda} }.
\end{IEEEeqnarray}
Moreover, for every input pair $(s_0,\vect{x}) \in {\cal S}_0 \times {\cal X}^{n}$, at least one sequence $\vect{v}\in{\cal V}^n({\cal S}_0)$ is feasible, that is
\bes
{\cal Y}^n(s_0,\vect{x}) \cap {\cal V}^n({\cal S}_0) \not= \emptyset.
\end{IEEEeqnarray}
\end{theorem}
\begin{proof}
At time step $i$, the value of $s_0 - \norm{\vect{x}^{i}}$ for any input pair $(s_0,\vect{x})\in{\cal S}_0\times{\cal X}^{i}$ with ${\cal X} = \interval{0}{\alpha}$ is bounded by
\bes \label{eq:zbound}
\underline{{\cal S}_0}- i\alpha \leq s_0 - \norm{\vect{x}^{i}} \leq \overline{{\cal S}_0}.
\end{IEEEeqnarray}
At time step $l = \floor{(\beta+\underline{{\cal S}_0}-\overline{{\cal S}_0})/\alpha}$, the distance between any two input pairs $(s_0,\vect{x}),(\hat{s}_0,\hat{\vect{x}})\in{\cal S}_0\times{\cal X}^{l}$ is bounded by
\bes
d_l\Big( (s_0,\vect{x}), (\hat{s}_0,\hat{\vect{x}}) \Big) \leq \overline{{\cal S}_0} - (\underline{{\cal S}_0}- l\alpha) \leq \beta.
\end{IEEEeqnarray}
Therefore, Lemma \ref{le:shared_request_iff} guarantees the existence of a request $\vect{y}_0$ that is feasible for every input pair in ${\cal S}_0\times{\cal X}^{l}$.
Following a similar reasoning, consider the set of possible input pairs during the subsequent $\lambda$ time steps, i.e. ${\cal S}\times{\cal X}^\lambda$ with ${\cal S} = \interval{0}{\beta}$. Define a cover of the input alphabet, ${\cal S}\times{\cal X}^\lambda\subseteq \left({\cal A}_1 \bigcup {\cal A}_2\right)$, with subsets given by
\bes
{\cal A}_1 = \left\{ (s_0,\vect{x}) \in {\cal S}\times{\cal X}^\lambda : s_0 - \norm{\vect{x}} \in \interval{0}{\beta} \right\}, \IEEEeqnarraynumspace
\end{IEEEeqnarray}
and
\bes
{\cal A}_2 = \left\{ (s_0,\vect{x}) \in {\cal S}\times{\cal X}^\lambda : s_0 - \norm{\vect{x}} \in \interval{-\lambda\alpha}{-1} \right\}. \IEEEeqnarraynumspace
\end{IEEEeqnarray}
Note ${\cal A}_1 \bigcup {\cal A}_2$ contains all sequences in ${\cal S}\times{\cal X}^\lambda$ as (\ref{eq:zbound}) implies that $s_0 - \norm{\vect{x}} \in \interval{-\lambda \alpha }{\beta}$.
The distance between any two input pairs in ${\cal A}_i$ with $i=1,2$ is bounded by $\beta$. Therefore, by Lemma \ref{le:shared_request_iff}, there exists a shared feasible sequence $\vect{y}_i$ for all pairs in ${\cal A}_i$.
Setting $\kappa = \ceil{(n-l)/\lambda}$ and
\bes
\label{eq:req_secs}
{\cal V}^n({\cal S}_0) = \{\vect{y}_0\} \times \underbrace{ \{\vect{y}_1,\vect{y}_2\} \times ... \times \{\vect{y}_1,\vect{y}_2\} }_{\kappa} \medmuskip=1mu \thinmuskip=0mu \thickmuskip=2mu \nulldelimiterspace=-1pt \scriptspace=0pt \IEEEeqnarraynumspace
\end{IEEEeqnarray}
completes the proof.
\end{proof}
To map input pairs $(s_0,\vect{x})$ to energy requests in ${\cal V}^n({\cal S}_0)$, it suffices to forecast, at the start of each block of length $\lambda$, whether the battery will deplete during the current block, i.e. $s_0 - \norm{\vect{x}^\lambda} \lessgtr 0$.
In \cite{AEE_ARXIV_19}, it is shown that the upper bound in Theorem \ref{th:covering} is tight.
The construction of the set of request sequences given by (\ref{eq:req_secs}) describes the forecasting capabilities required to implement optimal battery policies.
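For concreteness, the block-based policy implied by (\ref{eq:req_secs}) can be sketched in a few lines. The following toy simulation is ours: the parameter values, the variable names, and the choice of the all-zeros and all-$\alpha$ block requests as the shared feasible requests for ${\cal A}_1$ and ${\cal A}_2$ are one valid instantiation, not necessarily the sequences used in the proof.

```python
BETA, ALPHA = 2, 1                      # toy battery capacity and peak consumption
LAM = (BETA + 1) // ALPHA               # lambda: minimal full-discharge time

def block_policy(s0, x):
    """Per lambda-block, pick one of two shared requests: all-zeros when the
    battery survives the block (set A_1), all-ALPHA when it would deplete (A_2)."""
    assert len(x) % LAM == 0
    s, y = s0, []
    for b in range(0, len(x), LAM):
        block = x[b:b + LAM]
        deplete = s - sum(block) < 0    # forecast at the start of the block
        req = [ALPHA] * LAM if deplete else [0] * LAM
        for xi, yi in zip(block, req):
            s = s + yi - xi
            assert 0 <= s <= BETA, "infeasible battery state"
        y.extend(req)
    return y
```

With $\beta=2$ and $\alpha=1$ (so $\lambda=3$), any consumption trace is mapped to one of at most two request patterns per block, in line with the cardinality bound of Theorem \ref{th:covering}.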
\begin{theorem} \label{th:packing}
Let $s_0$ denote the state of the battery at time $0$, and let ${\cal S}_l$ denote the possible states of the battery at time $l\in\interval{0}{n}$. Then there exists a set ${\cal W}^l(\{s_0\},{\cal S}_l) \subseteq {\cal X}^l$ with cardinality
\bes \label{eq:packing}
\left|{\cal W}^l(\{s_0\},{\cal S}_l)\right| \geq 2^{\hat{\kappa}} \ceil{\frac{l\alpha-\hat{\kappa} \ceil{\lambda}\alpha}{|{\cal S}_l|}},
\end{IEEEeqnarray}
and
\bes
{\cal Y}^l(s_0,\vect{w},{\cal S}_l) \cap {\cal Y}^l(s_0,\hat{\vect{w}},{\cal S}_l) = \emptyset,
\end{IEEEeqnarray}
for any distinct $\vect{w},\hat{\vect{w}}$ in ${\cal W}^l(\{s_0\},{\cal S}_l)$, $\lambda = (\beta+1)/\alpha$ and $\hat{\kappa} = \max(0,\floor{l/\ceil{\lambda}-1})$.
\end{theorem}
\begin{proof}
We prove the result by constructing a set of sequences that satisfies the conditions of the theorem. The construction is done by concatenation of $\hat{\kappa}$ blocks of length $\ceil{\lambda}$ and one block of length $l-\hat{\kappa}\ceil{\lambda}$, i.e.
\bes
{\cal W}^l(\{s_0\},{\cal S}_l) = \underbrace{ {\cal W}^{\ceil{\lambda}} \times ... \times {\cal W}^{\ceil{\lambda}}}_{\hat{\kappa}} \times {\cal W}^{l - \hat{\kappa}\ceil{\lambda}}_{{\cal S}_l}. \IEEEeqnarraynumspace
\end{IEEEeqnarray}
Let the alphabet defining the first $\hat{\kappa}$ blocks be ${\cal W}^{\ceil{\lambda}}=\{\vect{w}',\vect{w}''\}$, where $\vect{w}'$ and $\vect{w}''$ are any sequences in ${\cal X}^{\ceil{\lambda}}$ such that $\norm{\vect{w}'} = 0$ and $\norm{\vect{w}''} = \ceil{\lambda}\alpha$. This implies that
\bes
d\Big((s_0,\vect{w}'),(s_0,\vect{w}'')\Big) = |\norm{\vect{w}''}-\norm{\vect{w}'}| = \ceil{\lambda}\alpha > \beta. \IEEEeqnarraynumspace
\end{IEEEeqnarray}
Therefore, by Lemma \ref{le:shared_request_iff}, no output sequence is shared between the input pairs $(s_0,\vect{w}')$ and $(s_0,\vect{w}'')$, i.e
\bes
{\cal Y}^{\ceil{\lambda}}(s_0,\vect{w}')\cap{\cal Y}^{\ceil{\lambda}}(s_0,\vect{w}'') = \emptyset.
\end{IEEEeqnarray}
Thus, the input sequence $\vect{w}_{\vect{y}}\in\{\vect{w}',\vect{w}''\}$, and the initial battery state of the second block $s_{\ceil{\lambda}} = s_0-\norm{\vect{w}_{\vect{y}}}+\norm{\vect{y}}$ are uniquely determined by the output sequence $\vect{y}$. The argument above can be applied recursively for the first $\hat{\kappa}$ blocks.
Following a similar reasoning, let the alphabet defining the last block be given by ${\cal W}^{l-\hat{\kappa}\ceil{\lambda}}=\{\vect{w}_0,\vect{w}_1,...,\vect{w}_N\}$ with $N=\floor{{(l\alpha-\hat{\kappa}\ceil{\lambda}\alpha})/{|{\cal S}_l|}}$ and each $\vect{w}_i$ a sequence in ${\cal X}^{l-\hat{\kappa}\ceil{\lambda}}$ such that $\norm{\vect{w}_i} = i|{\cal S}_l|$. Consequently, for any given $\vect{y}$, only one sequence $\vect{w}_j$ satisfies the constraint $s_l = s_{\hat{\kappa}\ceil{\lambda}}-\norm{\vect{w}_j}+\norm{\vect{y}}\in{\cal S}_l$. This completes the proof.
\end{proof}
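The disjointness underlying this construction can be verified by brute force for toy parameters. The following sketch is ours (names and values included); it restricts outputs to $\interval{0}{\alpha}$, in the spirit of Lemma \ref{le:outputset} below.

```python
from itertools import product

BETA, ALPHA = 2, 1
CEIL_LAM = 3                            # ceil(lambda) with lambda = (BETA + 1) / ALPHA

def feasible_outputs(s0, x):
    """All request sequences over {0, ..., ALPHA} that keep the battery state
    within [0, BETA] at every step, i.e. the feasible set Y(s0, x)."""
    out = set()
    for y in product(range(ALPHA + 1), repeat=len(x)):
        s, ok = s0, True
        for xi, yi in zip(x, y):
            s += yi - xi
            if not 0 <= s <= BETA:
                ok = False
                break
        if ok:
            out.add(y)
    return out

# Two consumption blocks whose norms differ by CEIL_LAM * ALPHA > BETA share no output.
w1, w2 = (0,) * CEIL_LAM, (ALPHA,) * CEIL_LAM
assert not (feasible_outputs(1, w1) & feasible_outputs(1, w2))
```

Any two blocks whose norms differ by more than $\beta$ admit no common feasible output, which is what makes each block of ${\cal W}^{\ceil{\lambda}}$ recoverable from the output sequence.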
\subsection{Impact of the output alphabet on the information leakage}
In the following, we characterize the impact of the output alphabet on the information leakage ${\cal I}(\Delta)$. In particular, we show that the information leakage does not increase when the policy operates with a constrained output alphabet ${\cal Y}_c$. Lemma \ref{le:outputset} shows that it is possible to remove extreme values, i.e. $y_i \not \in \interval{0}{\alpha}$, while retaining the feasibility of the sequence $\vect{y}\in{\cal Y}(s_0,\vect{x})$.
\begin{lemma} \label{le:outputset}
Let two output alphabets ${\cal Y}_c^n$ and ${\cal Y}^n$ be such that $\interval{0}{\alpha}^n \subseteq {\cal Y}_c^n \subseteq {\cal Y}^n \subseteq \mbox{\bb Z}^n$. Then there exists a function $F_n: {\cal Y}^n \to {\cal Y}_c^n$ such that for any $(s_0,\vect{x})\in{\cal S}\times{\cal X}^n$ and $\vect{y}\in{\cal Y}^n(s_0,\vect{x})$ it holds that
\bes
F_n(\vect{y}) \in {\cal Y}^n_c(s_0,\vect{x}).
\end{IEEEeqnarray}
\end{lemma}
\begin{proof}
We first define the set of functions $\lbrace h_i\rbrace_{i=1}^n$ that will yield the construction of $F_n$. For each function $h_i$ with $i\in\interval{1}{n}$ set $d_i\in\interval{0}{(y_i-\alpha)^+}$ and define $h_i: {\cal Y}^n \to {\cal Y}_c^n$ as
\bes
h_i(\vect{y}) &=
\begin{cases}
\vect{y}+d_i(\vect{e}_{i+1}-\vect{e}_{i}),& \text{when } i\in\interval{1}{n-1}\\
\vect{y}-d_i\vect{e}_{i},& \text{otherwise}.\\
\end{cases}
\end{IEEEeqnarray}
That is, the function $h_i$ reallocates the purchase of $d_i$ units of energy from time step $i$ to the next time step $i+1$.
Note that when this occurs on the last time step, i.e. when $i=n$, the excess energy request is not reallocated but removed from the sequence.
\newcommand\sy{\vect{s}}
\newcommand\shy{\tilde{\vect{s}}}
Let $\sy\in{\cal S}^{n+1}$ be the sequence of battery states induced by the feasible sequence $\vect{y}$. By the battery charging dynamics (\ref{eq:battery_filling}), the sequence of battery states induced by $\tilde{\vect{y}} = h_i(\vect{y})$ is given by
\bes
\shy = \sy-d_i\vect{e}_{i+1},
\end{IEEEeqnarray}
with $d_i\in\interval{0}{(y_i-\alpha)^+}$. Note that
\bes
\shy_{i+1} = \sy_{i+1}-d_i \leq \sy_{i+1},
\end{IEEEeqnarray}
and since $x_i\leq\alpha\leq y_i-d_i$
\bes
\shy_{i+1} = \shy_{i} + (y_i-d_i)-x_i \geq \sy_i.
\end{IEEEeqnarray}
As $\sy\in{\cal S}^{n+1}$, this implies that $\shy\in{\cal S}^{n+1}$, i.e. $h_i(\vect{y})$ is feasible.
The above argument shows that any excess energy request can be reallocated to the next time step. A similar argument shows that any excess, i.e. $y_i > \alpha$, can be reallocated to the previous time step. Furthermore, any excess energy selling, i.e. $y_i<0$, can be reallocated to the next and previous time steps without impacting the feasibility of the energy request. A recursive application of the arguments above yields the existence of the function $F_n$ constructed as
\bes
F_n(\vect{y})=h_n\circ h_{n-1}\cdots \circ h_1 (\vect{y}),
\end{IEEEeqnarray}
so that $F_n(\vect{y}) \in {\cal Y}_c(s_0,\vect{x})$.
\end{proof}
The lemma above shows that battery policies operating over an output alphabet whose maximum energy request matches the peak energy consumption of the user, i.e. ${\cal Y}=\interval{0}{\alpha}$, suffice to guarantee feasibility.
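A minimal sketch of the composed map $F_n=h_n\circ\cdots\circ h_1$ for the purchase-excess case is given below. The maximal choice $d_i=(y_i-\alpha)^+$ and the names are ours; negative requests would need the symmetric backward pass described in the proof.

```python
ALPHA = 1                               # toy peak consumption

def F(y):
    """Compose h_1, ..., h_n with d_i = (y_i - ALPHA)^+: each step's excess
    purchase is pushed to the next step, and dropped at the final step."""
    y = list(y)
    for i in range(len(y)):
        d = max(y[i] - ALPHA, 0)        # d_i = (y_i - alpha)^+
        y[i] -= d
        if i + 1 < len(y):
            y[i + 1] += d               # reallocate; at i = n-1 the excess is removed
    return y
```

Choosing $d_i$ maximal clamps every nonnegative request into $\interval{0}{\alpha}$ in a single forward pass.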
\begin{lemma} \label{le:ordering0}
Let the output alphabet ${\cal Y}$ contain the input interval ${\cal X} = \interval{0}{\alpha}$, then
\bes
{\cal I}_{{\cal X}}(\infty) = {\cal I}_{{\cal Y}}(\infty).
\end{IEEEeqnarray}
\end{lemma}
\begin{proof}
Lemma \ref{le:outputset} states the existence of a function ${F_n}: {\cal Y}^n \to {\cal Y}_c^n$ such that if $P_{Y^n|X^n}\in\Gamma(\infty)$ then $F_n\circ P_{Y^n|X^n}\in\Gamma_{{\cal X}}(\infty)$. The function ${F_n}$ induces the Markov chain
\begin{IEEEeqnarray}{rCl}
\randomvect{x}^n \to \randomvect{y}^n \to {F}_n(\randomvect{y}^n).
\end{IEEEeqnarray}
Therefore $I(\randomvect{x}^n;{F}_n(\randomvect{y}^n)) \leq I(\randomvect{x}^n;\randomvect{y}^n)$ by the data processing inequality. The converse follows by noting that $\Gamma_{{\cal X}}(\infty) \subseteq \Gamma_{{\cal Y}}(\infty)$. This completes the proof.
\end{proof}
However, in general the function $F_n$ does not preserve the price paid for the energy, as $\vect{y}$ and $F_n(\vect{y})$ may yield different energy bills. The following lemma identifies the conditions that guarantee that the energy bill does not change after the application of $F_n$.
\begin{lemma} \label{le:outputset2}
Define the output alphabet ${\cal Y}_c^n = \interval{-{\beta}/{\underline{l}}}{{\beta}/{\underline{l}} + \alpha}^n$, where $\underline{l} = \min_k l_k$ and $l_k$ is the length of the $k$-th market price period as defined in (\ref{eq:market_price_model}). Consider a $\Delta$-feasible battery policy $P_{Y^n|X^n}\in\Gamma(\Delta)$. Then there exists a function $\widehat{F}_n: {\cal Y}^n \to {\cal Y}_c^n$ such that $\widehat{F}_n\circ P_{Y^n|X^n}\in\Gamma_c(\Delta)$.
\end{lemma}
\begin{proof}
Note that the battery charging dynamics (\ref{eq:battery_filling}) determine the state of the battery at the market transition times $t_{k+1}=t_k+l_k$ with $k=1, \ldots, K$ as
\bes
s_{t_{k+1}}=s_{t_k}-\norm{\vect{x}^{l_k}}+\norm{\vect{y}^{l_k}},
\end{IEEEeqnarray}
where $s_{t_k}\in\interval{0}{\beta}$ and $\norm{\vect{x}^{l_k}} \in \interval{0}{l_k\alpha}$. Therefore, when $\norm{\vect{y}^{l_k}} \in \interval{-\beta}{\beta+l_k\alpha}$, the battery state $s_{t_{k+1}}$ takes values on $\interval{0}{\beta}$ for any value of the previous state $s_{t_{k}}$. This concludes the proof.
\end{proof}
The lemma above shows that the resulting output sequence $\widehat{F}_n(\vect{y})$ does not depend on the input pair $(s_0,\vect{x})$ and instead depends only on the original output sequence $\vect{y}$. This insight leads to the following result.
Lemma \ref{le:ordering} shows that the privacy cost function ${\cal I}( \Delta)$ does not vary when the EMU operates with a constrained output alphabet ${\cal Y}_c$.
This result is consistent with prior results reported for privacy based on hypothesis testing \cite[Theorem 1]{li2015privacy} and multi-user scenarios \cite[Theorem 2]{gomez2015smart}.
\begin{lemma} \label{le:ordering}
Define output alphabet ${\cal Y}_c^n = \interval{-{\beta}/{\underline{l}}}{{\beta}/{\underline{l}} + \alpha}^n$ where $\underline{l} = \min_k l_k$ and $l_k$ is the length of the $k$-th market price period as defined in (\ref{eq:market_price_model}). Let ${\cal I}(\Delta)$ and ${\cal I}_c(\Delta)$ represent the privacy-cost functions under output alphabets ${\cal Y}^n$ and ${\cal Y}_c^n$ for any output alphabet ${\cal Y}^n\supset{\cal Y}_c^n $. Then
\bes
{\cal I}_c( \Delta ) = {\cal I}( \Delta ).
\end{IEEEeqnarray}
\end{lemma}
\begin{proof}
Let $\Gamma(\Delta)$ and $\Gamma_c(\Delta)$ denote the set of feasible $\Delta$-affordable battery policies under output alphabets ${\cal Y}^n$ and ${\cal Y}_c^n$. It follows from \cite{AEE_ARXIV_19} that a function ${F}: {\cal Y}^n \to {\cal Y}_c^n$ exists such that if $P_{Y^n|X^n}\in \Gamma(\Delta)$ then $F\circ P_{Y^n|X^n}\in \Gamma_c(\Delta)$. Noting that the function ${F}$ induces the Markov chain
\bes
\randomvect{x}^n \to \randomvect{y}^n \to {F}(\randomvect{y}^n)
\end{IEEEeqnarray}
yields $I(\randomvect{x}^n;{F}_n(\randomvect{y}^n)) \leq I(\randomvect{x}^n;\randomvect{y}^n)$ by the data processing inequality.
The converse follows by noting that $\Gamma_c(\Delta) \subseteq \Gamma(\Delta)$.
\end{proof}
We note that the proof for the existence of the function $F$ presented in \cite{AEE_ARXIV_19} requires forecasting of $\underline{l}$ time steps ahead.
\section{Universal privacy bounds}
In the following, we bound the information leakage given in Definition \ref{def:privacy_guarantee}. We first study the case for which only the feasibility constraint is imposed.
\begin{theorem} \label{th:IOmega}
The privacy cost function ${\cal I}(\infty)$ is bounded by
\bes
\frac{1}{n} \floor{\frac{n}{\ceil{\lambda}}} \leq {\cal I}(\infty) \leq \frac{1}{n} \ceil{\frac{n-\floor{\beta/\alpha}}{\lambda}},
\end{IEEEeqnarray}
where $\lambda=(\beta+1)/\alpha$.
\end{theorem}
\begin{proof}
\emph{Upper bound.} Theorem \ref{th:covering} shows the existence of a set ${\cal V}^n(\{s_0\})$ with cardinality bounded by
\bes
\log |{\cal V}^n(\{s_0\})| \leq \ceil{ \frac{n- \floor{(\beta+s_0-s_0)/\alpha} }{\lambda} } = \ceil{\frac{n-\floor{\beta/\alpha}}{\lambda}}, \IEEEeqnarraynumspace \medmuskip=1mu \thinmuskip=0mu \thickmuskip=2mu \nulldelimiterspace=-1pt \scriptspace=0pt
\end{IEEEeqnarray}
such that the intersection ${\cal V}^n(\{s_0\})\cap{\cal Y}(s_0,\vect{x})$ is not empty for every input pair $(s_0,\vect{x})$. Letting the output $\randomvect{y}^n$ take values in ${\cal V}^n(\{s_0\})\cap{\cal Y}(s_0,\vect{x})$ completes the proof.
\emph{Lower bound.} Theorem \ref{th:packing} shows that there exists a set ${\cal W}^n = {\cal W}^n(\{s_0\},{\cal S})$ with cardinality bounded by
\bes
\log |{\cal W}^n| \geq \floor{\frac{n}{\ceil{\lambda}}},
\end{IEEEeqnarray}
such that no two sequences in ${\cal W}^n$ share a common output sequence, i.e. $H(\randomvect{w}^n|\randomvect{y}^n) = 0$. Letting $\randomvect{w}^n$ take uniformly distributed values over ${\cal W}^n$ completes the proof.
\end{proof}
Note that for a sampling period $T_0$ and a maximum power consumption $\hat{w} = \alpha/T_0$, the total amount of information leaked during a time interval $T=n T_0$ is bounded by
\bes
n{\cal I}(\infty) \leq \floor{\frac{n}{{\lambda}}} = \floor{\frac{T/T_0}{(\beta+1)/(\hat{w}T_0)}} = \floor{\frac{T\hat{w}}{\beta+1}}.
\end{IEEEeqnarray}
Thus the upper bound is independent of the sampling period $T_0$, i.e., sampling periods shorter than $T_0=(\beta+1)/\hat{w}$ do not increase the privacy guarantee ${\cal I}(\infty)$. For integer values of $\lambda$, both bounds in Theorem \ref{th:IOmega} coincide, providing the exact value of the privacy guarantee $n{\cal I}(\infty)=\floor{{n}/{{\lambda}}}$. Consequently, the step behaviour of the privacy guarantee as $n$ increases is not an artifact of the tools used in this paper, but the real behaviour of the system.
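The coincidence of the two bounds for integer $\lambda$ is easy to check numerically. The following short sketch is ours (function and variable names included); it evaluates the two sides of Theorem \ref{th:IOmega} scaled by $n$.

```python
from math import ceil, floor

def leakage_bounds(n, beta, alpha):
    """Lower and upper bounds on n * I(infinity) from the theorem:
    floor(n / ceil(lam)) and ceil((n - floor(beta / alpha)) / lam)."""
    lam = (beta + 1) / alpha
    return floor(n / ceil(lam)), ceil((n - floor(beta / alpha)) / lam)

assert leakage_bounds(48, 2, 1) == (16, 16)   # integer lambda: bounds coincide
assert leakage_bounds(10, 2, 2) == (5, 6)     # lambda = 1.5: a gap remains
```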
Theorem \ref{th:I_non_stingy_user} presents our main result, where we bound the information leakage for arbitrary cost constraints $\Delta$. The proof proceeds by constructing a battery policy that combines two components for every request sequence. One of the components guarantees the feasibility constraint, while the other guarantees the cost constraint.
\newcommand{\max_{{\cal P}_{\randomvect{x}^n}}}{\max_{{\cal P}_{\randomvect{x}^n}}}
\newcommand{\randomvect{s}_{\!\,\omega}}{\randomvect{s}_{\!\,\omega}}
\newcommand{\randomvect{s}_{\!\,\gamma}}{\randomvect{s}_{\!\,\gamma}}
\newcommand{\randomvect{v}^n_{\!\omega}}{\randomvect{v}^n_{\!\omega}}
\newcommand{\randomvect{v}^n_{\!\gamma}}{\randomvect{v}^n_{\!\gamma}}
\newcommand{\hat{\randomvect{s}}_{\!\,\omega}}%{\randomvect{s}^K}{\hat{\randomvect{s}}_{\!\,\omega}
\newcommand{\hat{\randomvect{s}}_{\!\,\gamma}}%{\hat{\randomvect{s}}^K}{\hat{\randomvect{s}}_{\!\,\gamma}
\begin{theorem} \label{th:I_non_stingy_user}
Consider an EMU with battery capacity $\beta$, initial state $s_0$, market price $\vect{m}$, and output alphabet ${\cal Y}^n$ satisfying ${\cal Y}^n_c \subseteq {\cal Y}^n$ with ${\cal Y}^n_c$ defined in Lemma \ref{le:ordering}, then
\bes
\label{eq:upper_bound}
{\cal I}( \Delta ) \leq {\cal I}( \infty ) + {\cal I}_\Gamma(\Delta),
\end{IEEEeqnarray}
where
\bes \label{eq:multiletter}
{\cal I}_\Gamma(\Delta)= \min_{P_{\hat{\randomvect{s}}_{\!\,\gamma}}%{\hat{\randomvect{s}}^K|\hat{\randomvect{s}}_{\!\,\omega}}%{\randomvect{s}^K} \in \Gamma_\omega(\Delta)} \max_{P_{\hat{\randomvect{s}}_{\!\,\omega}}%{\randomvect{s}^K}\in{\cal P}_{\hat{\randomvect{s}}_{\!\,\omega}}%{\randomvect{s}^K}} \frac{1}{n} I(\hat{\randomvect{s}}_{\!\,\gamma}}%{\hat{\randomvect{s}}^K-\hat{\randomvect{s}}_{\!\,\omega}}%{\randomvect{s}^K;\hat{\randomvect{s}}_{\!\,\omega}}%{\randomvect{s}^K). \IEEEeqnarraynumspace
\end{IEEEeqnarray}
Here $\hat{\randomvect{s}}_{\!\,\omega}}%{\randomvect{s}^K$ and $\hat{\randomvect{s}}_{\!\,\gamma}}%{\hat{\randomvect{s}}^K$ are random processes in $\interval{0}{\beta}^K$ with joint distribution determined by
\bes \label{eq:gammaprime}
\Gamma_\omega(\Delta) = \left\{ P_{\hat{\randomvect{s}}_{\!\,\gamma}}%{\hat{\randomvect{s}}^K|\hat{\randomvect{s}}_{\!\,\omega}}%{\randomvect{s}^K} : \mathbbm{E}(\hat{\randomvect{s}}_{\!\,\gamma}}%{\hat{\randomvect{s}}^K\bm{\delta}) \leq \Delta - \beta\norm{(\bm{\delta})^+} \right\}\!, \IEEEeqnarraynumspace
\end{IEEEeqnarray}
where $ \bm{\delta}\in\mbox{\bb Z}^K$ denotes the vector of market price differences, with entries given by $\bm{\delta}_0=-m_0$, $\bm{\delta}_k = m_{k-1}-m_{k}$ for $k=1, 2, \ldots, K-1$ and $ \bm{\delta}_K=m_{K-1}$.
\end{theorem}
\begin{proof}
We prove the result for ${\cal Y}^n = \mbox{\bb Z}^n$; Lemma \ref{le:ordering} generalizes the proof for every ${\cal Y}^n$ satisfying ${\cal Y}_c^n \subseteq {\cal Y}^n$.
%
The proof follows by dividing the optimization process into two steps.
In the first step, we present a battery policy $\omega$ such that the resulting request sequence $\randomvect{v}^n_\omega$ satisfies the power outage and energy waste constraints, i.e., $\omega \in \Omega(s_0)$ as defined in (\ref{eq:omega_policies}).
Such policies are discussed in Theorem \ref{th:IOmega}.
In the second step, we define a random vector $\randomvect{v}^n_\gamma$ such that $\randomvect{y}^n = \randomvect{v}^n_\omega + \randomvect{v}^n_\gamma$ also satisfies the cost constraints.
Specifically, we set
\bes
\randomvect{v}^n_{\!\gamma} = \sum_{t\in{\cal T}} \Big( (\vect{e}_{t} - \vect{e}_{t+1}) (\randomvect{s}_{\!\,\gamma} - \randomvect{s}_{\!\,\omega} )_{t} \Big), \label{eq:v_gamma}
\end{IEEEeqnarray}
where ${\cal T}$ denotes the ordered set of time steps at which a market transition takes place, i.e., ${\cal T}=\{~\!0,~\!l_0,~\!l_0\!+\!l_1,\ldots,~\!n\!-\!1\}$.
This implies that
\bes
g(\randomvect{y}^n, \vect{x}) &= \mathbbm{E}[ (\randomvect{s}_{\!\,\gamma})_{\cal T}\bm{\delta} +\vect{m}^T\vect{x} - \vect{m}^T\vect{y}^*(\vect{x}) ] \label{eq:gs}\\
&= \mathbbm{E}[(\randomvect{s}_{\!\,\gamma})_{\cal T}\bm{\delta}] + \beta\norm{(\bm{\delta})^+}, \label{eq:gs2}
\end{IEEEeqnarray}
where (\ref{eq:gs}) follows by (\ref{eq:v_gamma}) and the battery charging dynamics (\ref{eq:battery_filling}), and (\ref{eq:gs2}) follows by noting that ${\cal Y}^n = \mbox{\bb Z}^n$. Selecting the transformation $\gamma$ determining $(\randomvect{s}_{\!\,\gamma})_{\cal T}$ from the set described in (\ref{eq:gammaprime}) yields
\bes
I(\randomvect{x}&^n;\randomvect{y}^n) \leq I(\randomvect{x}^n;\randomvect{v}^n_{\!\omega}) + I(\randomvect{x}^n;\randomvect{v}^n_{\!\gamma}|\randomvect{v}^n_{\!\omega})\\
&= I(\randomvect{x}^n;\randomvect{v}^n_{\!\omega}) + H(\randomvect{v}^n_{\!\gamma}|\randomvect{v}^n_{\!\omega}) - H(\randomvect{v}^n_{\!\gamma}|\randomvect{v}^n_{\!\omega},\randomvect{x}^n,\randomvect{s}_{\!\,\omega}) \IEEEeqnarraynumspace \label{eq:condional}\\
&= I(\randomvect{x}^n;\randomvect{v}^n_{\!\omega}) + H(\randomvect{s}_{\!\,\gamma}-\randomvect{s}_{\!\,\omega}|\randomvect{v}^n_{\!\omega}) - H(\randomvect{s}_{\!\,\gamma}-\randomvect{s}_{\!\,\omega}|\randomvect{s}_{\!\,\omega}) \IEEEeqnarraynumspace \label{eq:condional2} \\
&\leq I(\randomvect{x}^n;\randomvect{v}^n_{\!\omega}) + I(\randomvect{s}_{\!\,\gamma}-\randomvect{s}_{\!\,\omega};\randomvect{s}_{\!\,\omega}),
\end{IEEEeqnarray}
where (\ref{eq:condional}) follows as $\randomvect{x}^n$ and $\randomvect{v}^n_{\!\omega}$ determine $\randomvect{s}_{\!\,\omega}$ by the battery charging dynamics (\ref{eq:battery_filling}); (\ref{eq:condional2}) follows by (\ref{eq:v_gamma}) and noting that $\randomvect{s}_{\!\,\gamma}-\randomvect{s}_{\!\,\omega}$ is independent of $\randomvect{v}^n_{\!\omega}$ and $\randomvect{x}^n$ given $\randomvect{s}_{\!\,\omega}$. Thus
\bes
n&{\cal I}( \Delta )
%
= \!\!\!\min_{P_{\randomvect{y}^n|\randomvect{x}^n} \in \Gamma(\Delta) } \max_{{\cal P}_{\randomvect{x}^n}} I(\randomvect{x}^n;\randomvect{y}^n)\\
%
&\leq \!\!\!\min_{ \gamma \in \Gamma_\omega(\Delta) } \min_{\omega\in\Omega(s_0)} \max_{{\cal P}_{\randomvect{x}^n}} \Big(I(\randomvect{x}^n;\randomvect{v}^n_{\!\omega}) + I(\randomvect{s}_{\!\,\gamma}-\randomvect{s}_{\!\,\omega};\randomvect{s}_{\!\,\omega}) \Big) \IEEEeqnarraynumspace \\
%
%
%
&\leq\!\!\! \min_{\omega\in\Omega(s_0)} \max_{{\cal P}_{{\randomvect{x}}^n}}~\! I({\randomvect{x}}^n;{\randomvect{v}}^n_{\!\omega}) + \!\!\!\min_{ \gamma \in \Gamma_\omega(\Delta) } \max_{{\cal P}_{\randomvect{s}_\omega}} ~\! I(\randomvect{s}_{\!\,\gamma}-\randomvect{s}_{\!\,\omega};\randomvect{s}_{\!\,\omega}). \medmuskip=2mu \thinmuskip=1mu \thickmuskip=3mu \IEEEeqnarraynumspace \label{eq:IOmega_IGamma}
\end{IEEEeqnarray}
This completes the proof.
\end{proof}
While direct computation of the information leakage in (\ref{eq:privacy_guarantee}) relies on finding an $n$-dimensional joint distribution satisfying $\Gamma(\Delta)$, the bound presented in
(\ref{eq:upper_bound})
relies on a $K$-dimensional distribution and the simplified version of $\Gamma(\Delta)$ defined in (\ref{eq:gammaprime}). This significantly eases the computation of the information leakage as described in Section \ref{sec:numerical}.
\newcommand{\Delta_{\max}}{\Delta_{\max}}
Note also that (\ref{eq:multiletter}) implies that ${\cal I}_\Gamma(0) \leq {K}/{n} \log_2 (\beta+1)$ and ${\cal I}_\Gamma(\Delta)= 0$ for any $\Delta \geq \Delta_{\max}$ with $ \Delta_{\max} = \beta\|\bm{\delta}\|_1 -\beta m_0$.
Interestingly, a time-sharing argument presented in \cite{AEE_ARXIV_19} yields
\bes \label{eq:singleletter}
{\cal I}(\Delta) \leq \frac{1}{n}\ceil{\frac{n-\floor{\beta/\alpha}}{\lambda}} + \left(1-\frac{\Delta}{\Delta_{\max}}\right)^{\!\!+} \frac{K}{n} \log_2 (\beta+1). \IEEEeqnarraynumspace \medmuskip=2mu \thinmuskip=1mu \thickmuskip=3mu
\end{IEEEeqnarray}
\subsection{Worst-case consumption process}
\begin{theorem} \label{co:selling}
Let the output alphabet ${\cal Y}^n$ satisfy ${\cal Y}^n_c\subseteq{\cal Y}^n$ with ${\cal Y}^n_c$ defined in Lemma \ref{le:outputset2}. Then the privacy guarantee ${\cal I}(\Delta)$, as given in Definition \ref{def:privacy_guarantee}, is bounded by
\bes
{\cal I}(\infty) + ({\cal I}_{\Gamma}^{{l'}} - \gamma)^+ \leq {\cal I}(\Delta),
\end{IEEEeqnarray}
where ${l_k'} = l_k-\floor{l_k/\ceil{\lambda}-1}^+$, ${\cal I}_{\Gamma}^{{l'}}$ is defined by Definition \ref{def:market_capacity} and $\gamma = \sum {l_k'}/\lambda$.
\end{theorem}
\begin{proof}
We prove the result by constructing a random process $\randomvect{w}^n$ that achieves the lower bound. Let the input alphabet ${\cal W}^n\subseteq{\cal X}^{n}$ be divided according to the market price partitioning, i.e. ${\cal W}^n={\cal W}^{l_1}\times{\cal W}^{l_2}\times...\times{\cal W}^{l_K}$, where each set ${\cal W}^{l_k}$ is divided in two, i.e. ${\cal W}^{l_k} = {\cal W}_{\Omega,k} \times {\cal W}_{\Gamma}^{l_k-\kappa_k\ceil{\lambda}}$ with ${\cal W}_{\Omega,k}\subseteq{\cal X}^{\kappa_k\ceil{\lambda}}$ and $\kappa_k = \floor{l_k/\ceil{\lambda}-1}^+$. Letting the random processes $\randomvect{w}^n_{\Omega}$ and $\randomvect{w}_{\Gamma}^n$ take values in ${\cal W}^n_{\Omega}$ and ${\cal W}^n_{\Gamma}$ implies that
\begin{IEEEeqnarray}{ll}
I(\randomvect{w}^n;\randomvect{y}^n) &= \sum_{k=1}^K I(\randomvect{w}_{\Omega}^{l_k};\randomvect{y}^n|\randomvect{w}^{l_k})
+ \sum_{k=1}^K I(\randomvect{w}_{\Gamma}^{l_k};\randomvect{y}^n|\randomvect{w}^{l_k}), \IEEEeqnarraynumspace \medmuskip=1mu \thinmuskip=0mu \thickmuskip=2mu \nulldelimiterspace=-1pt \scriptspace=0pt
\end{IEEEeqnarray}
by the chain rule. For the first term, it holds that
\begin{IEEEeqnarray}{ll}
I(\randomvect{w}_{\Omega}^{l_k};\randomvect{y}^n|\randomvect{w}^{l_k}) &= H(\randomvect{w}_{\Omega}^{l_k}|\randomvect{w}^{l_k}) - H(\randomvect{w}_{\Omega}^{l_k}|\randomvect{w}^{l_k},\randomvect{y}^n) \IEEEeqnarraynumspace \\
&= \floor{\frac{l_k}{\ceil{\lambda}}-1}^+,
\end{IEEEeqnarray}
since by Theorem \ref{th:packing} it holds that $\randomvect{w}^{l_k}_{\Omega}$ is uniquely determined by $\randomvect{w}^{l_k}$ and $\randomvect{y}^n$.
For the second term, it holds that
\begin{IEEEeqnarray}{ll}
\min_{P_{\randomvect{y}^n|\randomvect{x}^n}\in\Pi({\cal Y},\Delta)} \max_{P_{\randomvect{x}^n}\in{\cal P}_{\randomvect{x}^n}} \sum_{k=0}^{K} I(\randomvect{w}_{\Gamma}^{l_k};\randomvect{y}^n|\randomvect{w}^{l_k}) &= {\cal I}_{\Gamma}^{{l_k}}.
\end{IEEEeqnarray}
This completes the proof.
\end{proof}
\section{Numerical results} \label{sec:numerical}
\begin{figure}[t]
\centering
\resizebox {0.95\columnwidth} {!} {
\input{./new_bounds.tikz}
}
\caption{Upper and lower bounds on the privacy cost function as a function of the privacy budget.}
\label{fig:new_bounds}
\end{figure}
In this section, we numerically assess the upper bounds on the privacy cost described in Theorem \ref{th:IOmega} and Theorem \ref{th:I_non_stingy_user}. For comparison purposes, we also include the lower bounds on the privacy cost given in \cite{AEE_ARXIV_19}. We model the market price after the UK Economy 7 tariff, where users are charged an off-peak price of 0.071 \pounds/kWh within a 7-hour block and a peak price of 0.152 \pounds/kWh otherwise \cite{ukEconomy7}. We assume the user has an LG Chem RESU 6.5 battery with a capacity of $4.2$ kWh and a peak power of 4.2 kW. For simplicity we match the users' maximum power consumption to the peak power of the battery, i.e., $4.2$ kW \cite{GDH_SPM_18}. The SM sends the UP integrated energy readings every 30 min following UK specifications for SMs \cite{GDH_SPM_18}. Thus, we set the time elapsed between time steps $i$ and $i+1$ to 30 min. Defining 2.1 kWh as 1 unit of energy yields the following parameters in our system model: battery capacity $\beta= 4.2\;\textnormal{kWh}/2.1\;\textnormal{kWh} = 2$; maximum consumption between time steps $\alpha= 4.2\;\textnormal{kW} \times 0.5\;\textnormal{h}/2.1\;\textnormal{kWh} = 1$; market lengths $l_0 = 7\;\textnormal{h} /0.5\;\textnormal{h} = 14$ and $l_1= 17\;\textnormal{h}/0.5\;\textnormal{h} = 34$; and corresponding market prices of $m_0=0.152\;\pounds/\textnormal{kWh} \times 2.1\;\textnormal{kWh} = 0.3192$ \pounds\; and $m_1=0.071\;\pounds/\textnormal{kWh} \times 2.1\;\textnormal{kWh} = 0.1491$ \pounds\; per unit of energy.
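The unit conversions above and the quoted figure of roughly $0.4$ bits can be reproduced directly. The sketch below is ours (all variable names are our own); the last line evaluates the right-hand side of (\ref{eq:singleletter}) at $\Delta=0$ with $n=48$ and $K=2$.

```python
from math import ceil, floor, log2

UNIT = 2.1                                   # kWh per energy unit
beta  = 4.2 / UNIT                           # battery capacity  -> 2 units
alpha = 4.2 * 0.5 / UNIT                     # peak per-step use -> 1 unit
l0, l1 = int(7 / 0.5), int(17 / 0.5)         # market periods    -> 14 and 34 steps
m0, m1 = 0.152 * UNIT, 0.071 * UNIT          # prices per unit of energy

n, K = 48, 2                                 # one day at 30 min; two price periods
lam = (beta + 1) / alpha
i_zero = ceil((n - floor(beta / alpha)) / lam) / n + (K / n) * log2(beta + 1)
```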
Figure \ref{fig:new_bounds} depicts the bounds on the privacy cost ${\cal I}(\Delta)$ for different values of the system cost $\Delta$ and initial battery state $s_0 = 0$ during a one day period, i.e. $n=24\;\textnormal{h}/0.5\;\textnormal{h} = 48$. Following (\ref{eq:singleletter}), when the user does not wish to increase the system cost for privacy, the privacy cost is bounded by ${\cal I}(0) \leq 0.4\;\textrm{bits}$.
For large values of the system cost $\Delta$
the cost constraint is always satisfied, i.e. ${\cal I}_\Gamma(\Delta)=0$, and the privacy leakage is governed by the feasibility constraints.
\ifCLASSOPTIONcaptionsoff
\fi
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
Let $(\Om,\cF,\dbP)$ be a complete probability space on which a one-dimensional standard Brownian motion $W=\{W(t);0\les t<\i\}$ is defined, with $\dbF=\{\cF_t\}_{t\ges0}$ being the natural filtration of $W$ augmented by all the $\dbP$-null sets in $\cF$. Let $\xi$ be a (random) payoff at some future time $T$ of a certain European type contingent claim, and $c(\cd)$ be a consumption rate. Following \cite{Duffie-Epstein 1992}, we let $Y(\cd)$ solve the following equation:
\bel{BSDE1}Y(t)=\dbE_t\[\xi+\int_t^T\(f(c(s),Y(s))+A(Y(s))Z(s)^2\)ds\],\q t\in[0,T],\ee
hereafter, $\dbE_t[\,\cd\,]=\dbE[\,\cd\,|\,\cF_t]$ is the conditional expectation operator, and $f:\dbR\times\dbR\to\dbR$ is a given map, called the {\it aggregator},
$$Z(t)^2={d\over dt}\lan Y\ran(t),$$
with $t\mapsto\lan Y\ran(t)$ being the quadratic variation process of $Y(\cd)$, and $A(Y(t))$ is called the {\it variance multiplier}. The process $Y(\cd)$ so defined is called a {\it recursive utility process} (which has also been called a {\it stochastic differential utility process}) of the payoff $\xi$ and the consumption rate $c(\cd)$. The main feature of such a process $Y(\cd)$ is that the current value $Y(t)$ depends on the future values $Y(s)$, $t<s\les T$ of the process. This notion was first introduced by Duffie and Epstein \cite{Duffie-Epstein 1992} in 1992.
It is easy to see that $(Y(\cd),Z(\cd))$ solves \rf{BSDE1} if and only if it is an adapted solution to the following backward stochastic differential equation (BSDE, for short):
\bel{BSDE2}Y(t)=\xi+\int_t^Tg(s,Y(s),Z(s))ds-\int_t^TZ(s)dW(s),\q t\in[0,T],\ee
with
\bel{g=z2}g(s,y,z)=f(c(s),y)+A(y)z^2.\ee
Thanks to the discovery of the relation between \rf{BSDE1} and \rf{BSDE2}, recursive utility process was later extended to the adapted solution of general BSDEs (see \cite{El Karoui-Peng-Quenez 1997,Lazrak-Quenez 2003,Lazrak 2004}).
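The correspondence can be sketched in two formal steps (a formal computation, assuming enough integrability for the stochastic integral to be a true martingale):

```latex
% From \rf{BSDE2}, dY(t) = -g(t,Y(t),Z(t))\,dt + Z(t)\,dW(t), so that
{d\over dt}\lan Y\ran(t)=Z(t)^2.
% Taking \dbE_t[\,\cd\,] in the integrated form of \rf{BSDE2} removes the
% stochastic integral; substituting \rf{g=z2} for g then recovers \rf{BSDE1}:
Y(t)=\dbE_t\[\xi+\int_t^T\(f(c(s),Y(s))+A(Y(s))Z(s)^2\)ds\].
```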
\ms
Now, suppose that instead of $\xi$ we have an $\cF_T$-measurable process $\psi(t)$, not necessarily $\dbF$-adapted, called a {\it position process} (see \cite{Riedel 2004} for a study of discrete-time cases). It could also be called an {\it anticipated wealth flow process}. For example, it could be an anticipated dividend process of a stock (which depends on the uncertain performance of the company), anticipated mortgage payments received by a bank (with an uncertainty of default or prepayment), anticipated claim payments of an insurance policy, the random maintenance costs of an owned facility, etc. The feature of such a process is that at time $t$, the actually anticipated value of the process is not $\cF_t$-measurable. To ``calculate'' the recursive utility for such a process at the current time $t$, mimicking \rf{BSDE1}, we might formally solve the following BSDE:
\bel{BSDE4}Y(t;r)=\psi(t)+\int_r^Tg(s,Y(t;s),Z(t;s))ds-\int_r^TZ(t;s)dW(s),
\qq r\in[t,T],\ee
with the current time $t$ being a parameter. Intuitively, $Y(t;r)$ should represent the utility of the process $\psi(\cd)$ at a future time $r$, estimated/predicted at the current time $t$. Therefore, the utility at the current time $t$ should be given by $Y(t;t)$. However, by taking $r=t$ in the above, we obtain
\bel{BSDE5}Y(t;t)=\psi(t)+\int_t^Tg(s,Y(t;s),Z(t;s))ds-\int_t^TZ(t;s)dW(s),\q t\in[0,T],\ee
which is not an equation for the process $t\mapsto Y(t;t)$, since $Y(t;s)$ appears on the right-hand side. A careful observation shows that $Y(t;r)$ obtained through \rf{BSDE4} is time-inconsistent in the following sense: if everything were ideal, the value $Y(t;r)$, which is supposed to be the utility of the process $\psi(\cd)$ at a future time $r$ estimated/predicted at the current time $t$, would be equal to $Y(r;r)$, the realized utility at the future time $r$. But there is little hope for this in general. In other words, $t\mapsto Y(t;t)$ determined by a family of BSDEs as above does not seem to be a good description of the recursive utility process for the position process $\psi(\cd)$.
\ms
Suggested by \rf{BSDE4}--\rf{BSDE5}, we propose the following modified equation:
\bel{BSVIE1}Y(t)=\psi(t)+\int_t^Tg(s,Y(s),Z(t,s))ds-\int_t^TZ(t,s)dW(s),\qq t\in[0,T].\ee
Note that the above modification simply forces $Y(t;s)=Y(s;s)$ in \rf{BSDE5} and then renames $Y(t;t)$ as $Y(t)$. The advantage of this modification is that as long as a solution $(Y(\cd),Z(\cd\,,\cd))$ of \rf{BSVIE1} exists, $Y(\cd)$ is time-consistent. Then $Y(\cd)$ could serve as a good description of the recursive utility for the process $\psi(\cd)$ (with a suitably chosen aggregator $g(s,y,z)$). However, a couple of natural questions arise: (i) is there a convincing mathematical justification for the model \rf{BSVIE1}? (ii) by forcing $Y(t;s)=Y(s;s)$, is the resulting equation \rf{BSVIE1} well-posed? For question (i), we will sketch a convincing argument in the appendix at the end of the paper, justifying our modification; we will borrow some ideas from the study of time-inconsistent optimal control problems (\cite{Yong 2012}). For question (ii), it turns out that \rf{BSVIE1} is nothing but a so-called {\it backward stochastic Volterra integral equation} (BSVIE, for short), which has been studied since the early 2000s in various settings, and the current paper is a continuation of those investigations. With the well-posedness of \rf{BSVIE1} (see below for details), the map $t\mapsto Y(t)$ will be called an {\it equilibrium recursive utility process} of $\psi(\cd)$. Interestingly, the mathematical justification presented in the appendix also explains the word ``equilibrium''.
\ms
BSVIEs have been studied since 2002 (\cite{Lin 2002}); let us now elaborate on them a little more. Let
$$ g:[0,T]^2\times\dbR\times\dbR\times\dbR\times\Om \to\dbR,\qq\psi:[0,T]\times\Om \to\dbR$$
be two given random fields. We consider the following BSVIE:
\bel{bsvie-II} Y(t)=\psi(t) + \int_t^T g(t,s,Y(s),Z(t,s),Z(s,t))ds - \int_t^T Z(t,s)dW(s),\q t\in[0,T]. \ee
By an {\it adapted solution} to BSVIE \rf{bsvie-II}, we mean an $(\dbR\times\dbR)$-valued random field $(Y,Z)=\{(Y(t)$, $Z(t,s));0\les s,t\les T\}$ such that
\begin{enumerate}[(i)]
\item $Y(\cd)$ is $\dbF$-progressively measurable (not necessarily continuous),
\item for each fixed $0\les t\les T$, $Z(t,\cd)$ is $\dbF$-progressively measurable, and
\item equation \rf{bsvie-II} is satisfied in the usual It\^{o} sense for Lebesgue measure almost every $t\in[0,T]$.
\end{enumerate}
Condition (ii) implies that for any $t\in[0,T)$, the random variable $Z(t,s)$ is $\cF_s$-measurable for any $s\in[t,T]$.
In \rf{bsvie-II}, $g$ and $\psi$ are called the {\it generator} and the {\it free term}, respectively.
Let us point out that in this paper, we only study BSVIEs with $Y(\cd)$ being one-dimensional.
The case of higher-dimensional $Y(\cd)$ is significantly different in general, and will be investigated in the near future.
On the other hand, the assumption that the Brownian motion $W(\cd)$ is one-dimensional is merely for convenience of presentation.
\ms
When $Z(s,t)$ is absent, \rf{bsvie-II} reduces to the form
\bel{bsvie-I}Y(t)=\psi(t)+\int_t^Tg(t,s,Y(s),Z(t,s))ds-\int_t^TZ(t,s)dW(s),\q t\in[0,T], \ee
which is a natural extension of BSDEs, and is a little more general than \eqref{BSVIE1} since $g$ depends on both $t$ and $s$. BSVIEs of form \rf{bsvie-I}, referred to as {\it Type-I BSVIEs}, were first studied by Lin \cite{Lin 2002}, followed by several other researchers: Aman and N'Zi \cite{Aman-N'Zi 2005}, Wang and Zhang \cite{Wang-Zhang 2007}, Djordjevi\'{c} and Jankovi\'{c} \cite{Djordjevic-Jankovic 2013,Djordjevic-Jankovic 2015}, and Hu and {\O}ksendal \cite{Hu 2018}.
\ms
BSVIEs of the form \rf{bsvie-II} (containing $Z(s,t)$) were first introduced by Yong \cite{Yong 2006,Yong 2008}, motivated by the study of optimal control for forward stochastic Volterra integral equations (FSVIEs, for short). We call \rf{bsvie-II} a {\it Type-II BSVIE} to distinguish it from Type-I BSVIEs.
A remarkable feature of Type-II BSVIE \rf{bsvie-II} is that its adapted solution, defined similarly to that of Type-I BSVIEs, might not be unique, due to the lack of restrictions on the term $Z(s,t)$ (with $0\les t\les s\les T$).
Motivated by the natural form of the adjoint equation in the Pontryagin-type maximum principle, Yong \cite{Yong 2008} introduced the notion of {\it adapted M-solutions}: a pair $(Y(\cd),Z(\cd\,,\cd))$ is called an adapted M-solution to \rf{bsvie-II} if, in addition to (i)--(iii) stated above, the following condition is also satisfied:
\bel{M-solution} Y(t)=\dbE[Y(t)]+\int_0^tZ(t,s)dW(s),\q\ae~t\in[0,T],~\as \ee
Under usual Lipschitz conditions, well-posedness was established in \cite{Yong 2008} for the adapted M-solutions to Type-II BSVIEs of form \rf{bsvie-II}.
This important development has triggered extensive research on BSVIEs and their applications. For instance, Anh, Grecksch and Yong \cite{Anh-Grecksch-Yong 2011} investigated BSVIEs in Hilbert spaces; Shi, Wang and Yong \cite{Shi-Wang-Yong 2013} studied the well-posedness of BSVIEs containing mean-fields (of the unknowns); Ren \cite{Ren 2010} and Wang and Zhang \cite{Wang-Zhang p} discussed BSVIEs with jumps; Overbeck and R\"oder \cite{Overbeck-Roder p} developed a theory of path-dependent BSVIEs; numerical aspects were considered by Bender and Pokalyuk \cite{Bender-Pokalyuk 2013}; relevant optimal control problems were studied by Shi, Wang and Yong \cite{Shi-Wang-Yong 2015},
Agram and {\O}ksendal \cite{Agram-Oksendal 2015}, Wang and Zhang \cite{Wang-Zhang 2017}, and Wang \cite{Wang 2018}; and Wang and Yong \cite{Wang-Yong 2015} established various comparison theorems for both adapted solutions and adapted M-solutions to BSVIEs in multi-dimensional Euclidean spaces.
\ms
Recently, inspired by the Four-Step Scheme in the theory of forward-backward stochastic differential equations (FBSDEs, for short) (\cite{Ma-Yong 1999}) and the time-inconsistent stochastic optimal control problems (\cite{Yong 2012}), Wang and Yong \cite{Wang-Yong 2018} established a representation of adapted solutions to Type-I BSVIEs and adapted M-solutions to Type-II BSVIEs in terms of the solution to a system of (non-classical) partial differential equations and the solution to a (forward) stochastic differential equation.
\ms
We point out that in all the above-mentioned works on BSVIEs, the generator $g(t,s,y,z,z')$ of the BSVIE \rf{bsvie-II} satisfies a uniform Lipschitz condition in $(y,z,z')$, so that the generator has at most linear growth in $(z,z')$.
However, when the generator $g(s,y,z)$ of BSVIE \eqref{BSVIE1} is given by \eqref{g=z2}, it has quadratic growth in $z$. Hence, a theory needs to be established for BSVIEs whose generators $g(t,s,y,z,z')$ grow quadratically in $z$; such equations are called quadratic BSVIEs (QBSVIEs, for short, when the quadratic growth of the generator in $z$ needs to be emphasized). We point out that at the moment, we are unable to handle the case where $z'\mapsto g(t,s,y,z,z')$ is quadratic; moreover, there seems to be little motivation for that case.
\ms
Recall that for BSDE \rf{BSDE2}, when $(y,z)\mapsto g(s,y,z)$ satisfies a uniform Lipschitz condition, with $g(\cd\,,0,0)$ being $L^p$-integrable (for some $p>1$), for any $\cF_T$-measurable $L^p$-integrable random variable $\xi$, it admits a unique adapted solution $(Y(\cd),Z(\cd))$ (\cite{Pardoux-Peng 1990, Ma-Yong 1999, Yong-Zhou 1999}), which could be called a recursive utility process for $\xi$. On the other hand, when $z\mapsto g(s,y,z)$ has at most quadratic growth, the BSDE \rf{BSDE2} is called a {\it quadratic BSDE} (QBSDE, for short).
In 2000, Kobylanski \cite{Kobylanski 2000} established the well-posedness of QBSDEs with bounded $\xi$. Since then, efforts have been made to relax the assumptions on the generator as well as on the terminal value $\xi$. Among the relevant works, we would like to mention Briand and Hu \cite{Briand-Hu 2006,Briand-Hu 2008},
Hu and Tang \cite{Hu-Tang 2016}, Briand and Richou \cite{Briand-Richou 2017}, and Zhang \cite[Chapter 7]{Zhang 2017}. Further, BSDEs with superquadratic growth were investigated by Delbaen, Hu and Bao \cite{Delbaen-Hu-Bao 2011}, where some general negative results concerning well-posedness can be found. Therefore, one can say that the theory of recursive utility for a terminal payoff $\xi$ has reached a fairly mature stage.
\ms
The purpose of this paper is to establish the well-posedness of QBSVIEs under certain conditions. The method introduced by Yong \cite{Yong 2008} and techniques found in Briand--Hu \cite{Briand-Hu 2006, Briand-Hu 2008} will be combined and further developed. In addition, a comparison theorem for adapted solutions of Type-I QBSVIEs will be established. Consequently, equilibrium recursive utility processes and continuous-time equilibrium dynamic risk measures will be investigated. See Yong \cite{Yong 2007} and Wang--Yong \cite{Wang-Yong 2015}, Agram \cite{Agram 2018} for some earlier works. See also Di Persio \cite{Di Persio 2014} for stochastic differential utility, and Kromer--Overbeck \cite{Kromer-Overbeck 2017} for dynamical capital allocation by means of BSVIEs.
\ms
The rest of this paper is organized as follows.
In \autoref{Preliminaries}, we introduce some preliminary notations and definitions, and present some lemmas which are of frequent use in the sequel.
\autoref{I-BSVIE} is devoted to the study of existence and uniqueness of adapted solutions for Type-I QBSVIEs, and \autoref{II-BSVIE} is devoted to the study of existence and uniqueness of adapted M-solutions for Type-II QBSVIE.
A comparison theorem for adapted solutions to Type-I QBSVIEs \rf{bsvie-I} will be established in \autoref{Comparison-thm}, and an application of Type-I BSVIEs to continuous-time equilibrium dynamic risk measures will be presented in \autoref{application}. Some concluding remarks will be collected in \autoref{remarks}.
Finally, a mathematical justification of the BSVIE model is sketched in the appendix.
\section{Preliminaries}\label{Preliminaries}
For $0\les a<b\les T$, we denote by $\cB([a,b])$ the Borel $\si$-field on $[a,b]$
and define the following sets:
\begin{alignat*}{3}
\D[a,b] &\deq\big\{(t,s)\bigm|a\les t\les s\les b\big\}, \q& \D^c[a,b] &\deq\big\{(t,s)\bigm|a\les s<t\les b\big\},\\
[a,b]^2 &\deq\big\{(t,s)\bigm|a\les t,s\les b\big\}=\D[a,b]\cup\D^c[a,b], \q& \D^*[a,b] &\deq\cl{\D^c[a,b]}.
\end{alignat*}
Note that $\D^*[a,b]$ is slightly different from $\D^c[a,b]$, the complement of $\D[a,b]$ in $[a,b]^2$: both $\D[a,b]$ and $\D^*[a,b]$ contain the diagonal segment $\{(t,t)\bigm|a\les t\les b\}$, whereas $\D^c[a,b]$ does not. In the sequel we shall deal with various spaces of functions and processes, which we collect here
for the convenience of the reader:
\begin{align*}
L^1(a,b)&\ts=\Big\{h:[a,b]\to\dbR~|~h(\cd)~\hb{is $\cB([a,b])$-measurable, }\int_a^b|h(s)|ds<\i\Big\},\\
L^\i_{\cF_b}(\Om)
&\ts=\Big\{\xi:\Om\to\dbR~|~\xi~\hb{is $\cF_b$-measurable and bounded}\Big\},\\
L^\i_{\cF_b}(a,b)
&\ts=\Big\{\f:[a,b]\times\Om\to\dbR~|~\hb{$\f(\cd)$ is $\cB([a,b])\otimes\cF_b$-measurable and bounded}\Big\},\\
L_\dbF^2(a,b)
&\ts=\Big\{\f:[a,b]\times\Om\to\dbR~|~\f(\cd)~\hb{is $\dbF$-progressively measurable, }\dbE\int_a^b|\f(s)|^2ds<\i \Big\},\\
L^\i_\dbF(a,b)
&\ts=\Big\{\f(\cd)\in L^2_\dbF(a,b)\bigm|\hb{$\f(\cd)$ is bounded}\Big\},\\
L_\dbF^2(\Om;C[a,b])
&\ts=\Big\{\f:[a,b]\times\Om\to\dbR~|~\f(\cd)~\hb{is continuous, $\dbF$-adapted, }\dbE\big[\ds\sup_{a\les s\les b}|\f(s)|^2\big]<\i \Big\},\\
L_\dbF^\i(\Om;C[a,b])
&\ts=\Big\{\f(\cd)\in L^2_\dbF(\Om;C[a,b])\bigm|\ds\sup_{a\les t\les b}|\f(t)|\in L^\i_{\cF_b}(\Om)\Big\},\\
L_{\cF_b}^\i(\Om;C^U[a,b])
&\ts=\Big\{\f(\cd)\in L_{\cF_b}^\i(a,b)\bigm|\hbox{there exists a modulus of continuity $\rho:[0,\i)\to[0,\i)$}\\
&\ts\hp{=\Big\{\ } \hbox{such that}~ |\f(t)-\f(s)|\les \rho(|t-s|),~t,s\in[a,b],~\as\Big\},\\
L_\dbF^2(\D[a,b])
&\ts=\Big\{\f:\1n\D[a,b]\1n\times\1n\Om\1n\to\1n\dbR~|~\f(t,\cd)~\hb{is $\dbF$-progressively measurable on $[t,b]$, }\ae~t\1n\in\1n[a,b],\\
&\ts\hp{=\Big\{\ }\qq\qq\qq\qq\qq \dbE\int_a^b\int_t^b|\f(t,s)|^2dsdt<\i \Big\},\\
L_\dbF^2([a,b]^2)
&\ts=\Big\{\f:\1n[a,b]^2\1n\times\1n\Om\1n\to\1n\dbR~|~\f(t,\cd)~\hb{is $\dbF$-progressively measurable on $[a,b]$, }\ae~t\1n\in\1n[a,b],\\
&\ts\hp{=\Big\{\ }\qq\qq\qq\qq\qq\dbE\int_a^b\int_a^b|\f(t,s)|^2dsdt<\i \Big\},\\
\cH^2_\D[a,b] &\ts= L_\dbF^2(a,b)\times L_\dbF^2(\D[a,b]),\qq\cH^2[a,b]\ts= L_\dbF^2(a,b)\times L_\dbF^2([a,b]^2).
\end{align*}
Now, we recall the definitions of adapted solutions and adapted M-solutions for Type-I BSVIE \rf{bsvie-I} and Type-II BSVIE \rf{bsvie-II}, respectively (see \cite{Yong 2008}).
\begin{definition}\rm (i) A pair of processes $(Y(\cd),Z(\cd,\cd))\in\cH^2_\D[0,T]$ is called an {\it adapted solution} of BSVIE \rf{bsvie-I} if \rf{bsvie-I} is satisfied in the usual It\^o sense for Lebesgue measure almost every $t\in[0,T]$.
\ms
(ii) A pair of processes $(Y(\cd),Z(\cd,\cd))\in\cH^2[0,T]$ is called an {\it adapted solution} of BSVIE \rf{bsvie-II}
if \rf{bsvie-II} is satisfied in the usual It\^{o} sense for Lebesgue measure almost every $t\in[0,T]$. Further, it is called an {\it adapted M-solution} of BSVIE \rf{bsvie-II} on $[r,T]$ if, in addition, the following holds:
\bel{M1-solution}
Y(s)=\dbE_r[Y(s)]+\int_r^sZ(s,t)dW(t),\q\ae~ s\in[r,T].
\ee
Here, we recall that $\dbE_r[\,\cd\,]=\dbE[\,\cd\,|\,\cF_r]$.
\end{definition}
Let $\cM^2[r,T]$ be the set of all $(y(\cd),z(\cd,\cd))\in\cH^2[r,T]$ satisfying \rf{M1-solution}. Clearly, $\cM^2[r,T]$ is a closed subspace of $\cH^2[r,T]$.
Further, for any $(y(\cd),z(\cd,\cd))\in\cM^2[r,T]$, we have
$$\dbE|y(s)|^2=\dbE\big|\dbE_r[y(s)]\big|^2+\dbE\int_r^s|z(s,t)|^2
dt\ges\dbE\int_r^s|z(s,t)|^2dt,\q\ae~s\in[r,T].$$
It follows that
\begin{align*}
\|(y(\cd),z(\cd,\cd))\|^2_{\cH^2[r,T]}&\equiv\dbE\[\int_r^T|y(s)|^2ds
+\int_r^T\int_r^T|z(s,t)|^2dtds\]\\
&=\dbE\[\int_r^T|y(s)|^2ds+\int_r^T\int_r^s|z(s,t)|^2dtds+\int_r^T
\int_s^T|z(s,t)|^2dtds\]\\
&\les\dbE\[2\int_r^T|y(s)|^2ds+2\int_r^T\int_s^T|z(s,t)|^2dtds\]\\
&\equiv 2\|(y(\cd),z(\cd,\cd))\|_{\cM^2[r,T]}^2\les 2\|(y(\cd),z(\cd,\cd))\|^2_{\cH^2[r,T]},
\end{align*}
which implies that $\|\cd\|_{\cM^2[r,T]}$ is a norm on $\cM^2[r,T]$ equivalent to $\|\cd\|_{\cH^2[r,T]}$.
\ms
Next, we recall the following definition (see \cite{Kazamari 1994} for relevant details).
\begin{definition} \rm
A uniformly integrable $\dbF$-martingale $M=\{M(t):0\les t\les T\}$ with $M(0)=0$ is called a {\it BMO martingale} on $[0,T]$ if
$$ \|M(\cd)\|^2_{{\rm BMO}(0,T)}\deq\sup_{\tau\in\sT[0,T]}\left\|\dbE_\t\big[|M(T)-M(\t)|^2\big]\right\|_\i<\i,$$
where $\sT[0,T]$ is the set of all $\dbF$-stopping times $\t$ valued in $[0,T]$.
\end{definition}
Sometimes, the norm $\|\cd\|_{{\rm BMO}(0,T)}$ is written as $\|\cd\|_{{\rm BMO}_\dbP(0,T)}$, indicating the dependence on the probability $\dbP$.
\ms
Next, let $X=\{X_t,\cF_t;0\les t\les T\}$ be a measurable, adapted process satisfying
$$\dbP\[\int_0^T|X(s)|^2ds<\i \]=1.$$
Recall the {\it Dol\'eans-Dade exponential} of $X$:
\bel{Girsanov-E}
\cE\{X\}_t\deq e^{\int_0^tX(s)dW(s)-{1\over2}\int_0^t|X(s)|^2ds},\q t\in[0,T],
\ee
and define a probability measure $\cl\dbP$ on $\cF_T$ by
\bel{Girsanov-P}d\cl\dbP=\cE\{X\}_{\1n_T}d\dbP.
\ee
Then, we have the following lemma, which is a combination of Girsanov's theorem (see Karatzas--Shreve \cite{Karatzas-Shreve 2012} for a proof) and a result of Kazamaki \cite{Kazamari 1994}.
\begin{lemma}\label{lemma-Girsanov} \sl If $t\mapsto\int_0^tX(s)dW(s)$ is a BMO martingale on $[0,T]$, then $\cE\{X\}_t$ is a uniformly integrable martingale and the process $\cl W=\{\cl W(t),\cF_t\bigm|0\les t\les T\}$ defined by
\bel{lemma-Girsanov-tiW}
\cl W(t)\deq W(t)-\int^t_0 X(s)ds,\q0\les t\les T
\ee
is a standard Brownian motion on $(\Om,\cF_T,\cl\dbP)$.
\end{lemma}
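As a simple illustration of \autoref{lemma-Girsanov} (an elementary example, not needed in the sequel), take the constant process $X(s)\equiv\th$ for some $\th\in\dbR$. Then $t\mapsto\int_0^tX(s)dW(s)=\th W(t)$ is a BMO martingale on $[0,T]$ with $\|\th W(\cd)\|^2_{{\rm BMO}(0,T)}\les\th^2T$, and
$$\cE\{X\}_t=e^{\th W(t)-{\th^2\over2}t},\qq t\in[0,T],$$
is a uniformly integrable martingale; under the probability $\cl\dbP$ defined by \rf{Girsanov-P}, the process $\cl W(t)=W(t)-\th t$, $t\in[0,T]$, is a standard Brownian motion.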
Next, we introduce the following spaces. Let $0\les a< b< c\les T$, and
\begin{align*}
&\cl{\rm BMO}(a,b)=\Big\{\f:[a,b]\times\Om\to\dbR~\big|~\f(\cd)\in L_\dbF^2(a,b), \\
&\q \qq\qq\qq\qq \|\f(\cd)\|^2_{\cl{\rm BMO}(a,b)}\deq\sup_{\t\in\sT[a,b]}\Big\|\dbE_\t\[\int_\t^b|\f(s)|^2ds\,
\]\Big\|_\i<\i \Big\},\\
& \cl{\rm BMO}(\D[a,b])=\Big\{\f:\D[a,b]\times\Om\to\dbR~\big|~\f(\cd,\cd)\in L_\dbF^2(\D[a,b]), \\
&\q \qq\qq\qq\qq \|\f(\cd,\cd)\|^2_{\cl{\rm BMO}\big(\D[a,b]\big)}\deq\esssup_{t\in[a,b]}\sup_{\t\in\sT[t,b]}\,\Big\|\,\dbE_\t
\[\int_\t^b|\f(t,s)|^2ds\]\Big\|_\i<\i \Big\},\\
& \cl{\rm BMO}\big([a,b]\times[b,c]\big)=\Big\{\f:[a,b]\times[b,c]\times\Om\to\dbR~\big|~\f(\cd,\cd)\in L_\dbF^2([a,b]\times[b,c]), \\
&\q \qq\qq\qq\qq \|\f(\cd,\cd)\|^2_{\cl{\rm BMO}([a,b]\times[b,c])}\deq\esssup_{t\in[a,b]}\sup_{\t\in\sT[b,c]}\,\Big\|\,\dbE_\t
\[\int_\t^c|\f(t,s)|^2ds\]\Big\|_\i<\i \Big\}.
\end{align*}
We note that for $\f(\cd)\in\cl{\rm BMO}(a,b)$, if we let $\f(s)\equiv 0$ for $s\in[0,a)$, then $s\mapsto\int_0^s\f(r)dW(r)$, $0\les s\les b$, is a BMO martingale on $[0,b]$.
Similarly, for $\f(\cd\,,\cd)\in\cl{\rm BMO}(\D[a,b])$, if we let $\f(t,s)\equiv 0$ for $s\in[0,t)$, then $s\mapsto\int_0^s\f(t,r)dW(r)$, $0\les s\les b$, is a BMO martingale on $[0,b]$ for almost all $t\in[a,b)$. The situation for $\cl{\rm BMO}\big([a,b]\times[b,c]\big)$ is similar. The following lemma plays a basic role in our subsequent arguments. We refer the reader to \cite[Theorem 3.3]{Kazamari 1994} for the proof and further details.
\begin{lemma}\label{lemma-BMO} \sl
For any $K>0$, there exist constants $c_1,c_2>0$, depending only on $K$, such that for any BMO martingale $M(\cd)$ and any one-dimensional BMO martingale $N(\cd)$ with $\|N(\cd)\|_{{\rm BMO}(0,T)}\les K$, we have
$$
c_1\|M(\cd)\|_{{\rm BMO}_{\dbP}(0,T)}\les\|\cl M(\cd)\|_{{\rm BMO}_{\cl \dbP}(0,T)}\les c_2\|M(\cd)\|_{{\rm BMO}_{\dbP}(0,T)},
$$
where $\cl M(\cd)\deq M(\cd)-\lan M, N\ran(\cd)$ and $d\cl\dbP=\cE\{N(\cd)\}_{\1n_T}d\dbP$.
\end{lemma}
We now consider the following BSDE:
\bel{pre-bsde-1-d}
Y(t)=\xi+\int_t^T f(s,Y(s),Z(s))ds-\int_t^T Z(s)dW(s),\q t\in[0,T].
\ee
Let us introduce the following hypothesis.
\begin{taggedassumption}{(A0)}\label{A0} \rm Let the generator $f:[0,T]\times\dbR\times\dbR\times\Om\to\dbR$ be $\cB([0,T]\times \dbR\times\dbR)\otimes\cF_T$-measurable
such that $s\mapsto f(s,y,z)$ is $\dbF$-progressively measurable for all $(y,z)\in \dbR\times\dbR$. There exist constants $\b$, $\g$, $L$ and a function $h(\cd)\in L^1(0,T)$ such that
\begin{align}
\label{|f|}|f(s,y,z)|\les h(s)+\b|y|+{\g\over 2}|z|^{2},\q (s,y,z)\in[0,T]\times\dbR\times \dbR;\\
\nn|f(s,y_1,z_1)-f(s,y_2,z_2)|\les L|y_1-y_2|+L(1+|z_1|+|z_2|)|z_1-z_2|,\\
\label{|f-f|}\qq\qq\qq\qq\qq\qq (s,y_i,z_i)\in[0,T]\times\dbR\times\dbR,~i=1,2.
\end{align}
\end{taggedassumption}
\begin{lemma}\label{lemma-briand-hu} \sl
Let {\rm\ref{A0}} hold. Then, for any $\xi\in L^\i_{\cF_T}(\Om)$, BSDE \rf{pre-bsde-1-d} admits a unique adapted solution $(Y(\cd),Z(\cd))\in L_\dbF^\i(\Om;$ $C[0,T])\times\cl{\rm BMO}(0,T)$. Moreover,
\bel{lemma-briand-hu-main*}
e^{\g|Y(t)|}\les\dbE_t\[e^{\g e^{\b(T-t)}|\xi|+\g\int_t^T|h(s)|e^{\b(s-t)}ds}\].
\ee
\end{lemma}
\begin{proof} \rm By \cite[Theorem 7.3.3]{Zhang 2017}, BSDE \rf{pre-bsde-1-d} admits a unique adapted solution $(Y(\cd),Z(\cd))\in L^\infty_\dbF(\Om;C[0,T])\times L^2_\dbF(0,T)$. Then, by \cite[Theorem 7.2.1]{Zhang 2017}, we see that the adapted solution $(Y(\cd),$ $Z(\cd))\in L^\infty_\dbF(\Om;C[0,T])\times\cl{\rm BMO}(0,T)$. Further, by \cite[Proposition 1]{Briand-Hu 2008}, we have inequality \rf{lemma-briand-hu-main*}.
\end{proof}
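As a quick illustration of the estimate \rf{lemma-briand-hu-main*} (an elementary special case), take $f(s,y,z)={\g\over2}|z|^2$, for which {\rm\ref{A0}} holds with $\b=0$, $h(\cd)\equiv0$ and $L={\g\over2}$. Then \rf{lemma-briand-hu-main*} reduces to the entropic-type bound
$$e^{\g|Y(t)|}\les\dbE_t\[e^{\g|\xi|}\],\qq\hbox{i.e.,}\qq |Y(t)|\les{1\over\g}\ln\dbE_t\[e^{\g|\xi|}\],\qq t\in[0,T],$$
which will be seen again in the explicit example of the next section.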
\section{Adapted Solution to Type-I QBSVIE}\label{I-BSVIE}
In this section, we will establish the existence and uniqueness of adapted solutions to Type-I QBSVIEs. For convenience, we may simply write ``BSVIE'' instead of ``Type-I QBSVIE''. First, let us look at the following simple example.
\begin{example}\rm
Consider the one-dimensional BSVIE:
\bel{ex-3-1} Y(t)=\psi(t)+\int_t^T{Z(t,s)^2\over 2}ds-\int_t^TZ(t,s)dW(s),\q t\in[0,T], \ee
where $\psi(\cd)\in L^\i_{\cF_T}(0,T)$, and $W(\cd)$ is a one-dimensional standard Brownian motion. In order to solve equation \rf{ex-3-1}, we introduce a family of BSDEs parameterized by $t\in[0,T]$:
\bel{ex-3-2} \eta(t,s)=\psi(t)+\int_s^T {\z(t,r)^2\over 2}dr-\int_s^T\z(t,r)dW(r),\q s\in[t,T]. \ee
By \autoref{lemma-briand-hu}, BSDE \rf{ex-3-2} admits a unique adapted solution
$(\eta(t,\cd),\z(t,\cd))\in L_\dbF^\i(\Om;C[t,T])\times\cl{\rm BMO}(t,T)$.
Let
$$ Y(t)=\eta(t,t) ~\hbox{and} ~Z(t,s)=\zeta(t,s),\q (t,s)\in\D[0,T], $$
then
$$ Y(t)=\psi(t)+\int_t^T {Z(t,s)^2\over 2}ds-\int_t^T Z(t,s)dW(s),\q t\in[0,T], $$
which implies that $(Y(\cd),Z(\cd,\cd))$ is an adapted solution to BSVIE \rf{ex-3-1}.
The uniqueness of the adapted solution to BSVIE \rf{ex-3-1} follows from \autoref{thm-exist-unique-no-y} below. Moreover, the first component $Y(\cd)$ of the unique adapted solution to BSVIE \rf{ex-3-1} can be written explicitly:
\bel{Y-ex}
Y(t)=\ln\{\dbE[e^{\psi(t)}|\cF_t]\},\q t\in[0,T].
\ee
\end{example}
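The expression \rf{Y-ex} can be verified directly (a sketch): for fixed $t$, applying It\^o's formula to $s\mapsto e^{\eta(t,s)}$ and using \rf{ex-3-2}, the $ds$-terms cancel:
$$d_se^{\eta(t,s)}=e^{\eta(t,s)}\Big[-{\z(t,s)^2\over2}ds+\z(t,s)dW(s)\Big]+{1\over2}e^{\eta(t,s)}\z(t,s)^2ds=e^{\eta(t,s)}\z(t,s)dW(s).$$
Hence $s\mapsto e^{\eta(t,s)}$ is a (uniformly integrable, since $\eta(t,\cd)$ is bounded) martingale on $[t,T]$, so that
$$e^{Y(t)}=e^{\eta(t,t)}=\dbE_t\[e^{\eta(t,T)}\]=\dbE_t\[e^{\psi(t)}\],$$
which is exactly \rf{Y-ex}.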
Clearly, from the expression \rf{Y-ex}, we see that as long as
$$\sup_{t\in[0,T]}\dbE\[e^{\psi(t)}\]<\infty,$$
a standard approximation argument shows that BSVIE \rf{ex-3-1} still admits an adapted solution with $Y(\cd)$ given by \rf{Y-ex}. A general exploration in this direction will be carried out elsewhere.
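For instance (a hypothetical free term, used only for illustration), take $\psi(t)\equiv aW(T)$ for a constant $a\in\dbR$. This $\psi(\cd)$ is unbounded, but
$$\sup_{t\in[0,T]}\dbE\[e^{\psi(t)}\]=\dbE\[e^{aW(T)}\]=e^{a^2T/2}<\i,$$
and \rf{Y-ex} can be evaluated in closed form via the conditional Gaussian moment generating function $\dbE[e^{a(W(T)-W(t))}|\cF_t]=e^{a^2(T-t)/2}$:
$$Y(t)=\ln\dbE\[e^{aW(T)}\bigm|\cF_t\]=aW(t)+{a^2\over2}(T-t),\qq t\in[0,T],$$
with the corresponding $Z(t,s)\equiv a$.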
\ms
From the above example, we see that BSVIE \rf{ex-3-1} can be fully characterized by the family of BSDEs \rf{ex-3-2}. The main reason is that the generator of equation \rf{ex-3-1} is independent of $y$. This suggests that we first consider a special case of Type-I QBSVIE \rf{bsvie-I}.
\subsection{A special case}
Consider the following BSVIE:
\bel{bsvie-no-y}
Y(t)=\psi(t)+\int_t^T g(t,s,Z(t,s))ds-\int_t^T Z(t,s)dW(s),
\ee
where the generator $g:\D[0,T]\times\dbR\times\Om\to\dbR$ and the free term $\psi:[0,T]\times\Om\to\dbR$ are given maps.
We adopt the following assumption concerning $g(\cd)$, which is comparable to \ref{A0}.
\begin{taggedassumption}{(A1)}\label{A1} \rm Let the generator $g:\D[0,T]\times \dbR\times \Om\to\dbR$ be $\cB(\D[0,T]\times \dbR)\otimes\cF_T$-measurable such that $s\mapsto g(t,s,z)$ is $\dbF$-progressively measurable on $[t,T]$, for all $(t,z)\in [0,T)\times\dbR$. There exist two constants $\g$, $L$ and a function $h(\cd)\in L^1(0,T;\dbR)$ such that
\begin{align*}
&|g(t,s,z)|\les h(s)+{\g\over 2}|z|^{2},\q (t,s,z)\in\D[0,T]\times \dbR;\\
&|g(t,s,z_1)-g(t,s,z_2)|\les L(1+|z_1|+|z_2|)|z_1-z_2|,\q (t,s,z_i)\in\D[0,T]\times\dbR,~i=1,2.
\end{align*}
\end{taggedassumption}
\noindent Now, we state the following existence and uniqueness result for BSVIE \rf{bsvie-no-y}.
\begin{theorem}\label{thm-exist-unique-no-y} \sl
Let {\rm\ref{A1}} hold. Then for any $\psi(\cd)\in L^\i_{\cF_T}(0,T)$, BSVIE \rf{bsvie-no-y} admits a unique adapted solution $(Y(\cd),Z(\cd,\cd))\in L^\i_\dbF(0,T)\times\cl{\rm BMO}(\D[0,T])$.
\end{theorem}
\begin{proof}
We first show the existence of the adapted solution to BSVIE \rf{bsvie-no-y}.
Consider the following BSDEs parameterized by $t\in[0,T]$:
\bel{eta-zeta-no-y}
\eta(t,s)=\psi(t)+\int_s^T g(t,r,\z(t,r))dr-\int_s^T\z(t,r)dW(r),\q s\in[t,T].
\ee
For almost all $t\in[0,T]$, under {\rm\ref{A1}}, \autoref{lemma-briand-hu} implies that
BSDE \rf{eta-zeta-no-y} admits a unique adapted solution $(\eta(t,\cd),\z(t,\cd))\in L_\dbF^\i(\Om;C[t,T])\times\cl{\rm BMO}(t,T)$.
Let
$$Y(t)=\eta(t,t),\q Z(t,s)=\z(t,s),\q (t,s)\in\D[0,T],$$
then $(Y(\cd),Z(\cd,\cd))\in L^\i_\dbF(0,T)\times\cl{\rm BMO}(\D[0,T])$ and
$$
Y(t)=\psi(t)+\int_t^Tg(t,s,Z(t,s))ds-\int_t^T Z(t,s)dW(s),\q t\in[0,T],
$$
which implies that $(Y(\cd),Z(\cd,\cd))$ is an adapted solution for BSVIE \rf{bsvie-no-y}.
\ms
The uniqueness follows from the comparison theorem below (\autoref{thm-comparison-no-y}).
\end{proof}
Consider the following BSVIEs: For $i=1,2$,
\bel{bsvie-no-y-com}
Y_i(t)=\psi_i(t)+\int_t^T g_i(t,s,Z_i(t,s))ds-\int_t^T Z_i(t,s)dW(s),\q t\in[0,T].
\ee
We have the following comparison theorem.
\begin{theorem}\label{thm-comparison-no-y} \sl
Let $g_1(\cd)$ and $g_2(\cd)$ satisfy {\rm\ref{A1}}, and let $\psi_1(\cd),\psi_2(\cd)\in L^\i_{\cF_T}(0,T)$. For $i=1,2$, let $(Y_i(\cd),Z_i(\cd,\cd))\in L^\i_\dbF(0,T)\times\cl{\rm BMO}(\D[0,T])$ be the adapted solution of the corresponding BSVIE \rf{bsvie-no-y-com}. Suppose that
\bel{psi-g-no-y-com}
\psi_1(t)\les\psi_2(t),\q g_1(t,s,z)\les g_2(t,s,z),\q \as,~\ae~(t,s,z)\in\D[0,T]\times\dbR.
\ee
Then we have
\bel{com-no-y}
Y_1(t)\les Y_2(t),\q\as,~\ae~t\in[0,T].
\ee
In particular, if $g_1(\cd)=g_2(\cd)$ and $\psi_1(\cd)=\psi_2(\cd)$, the comparison implies the uniqueness of the adapted solution to BSVIE \rf{bsvie-no-y}.
\end{theorem}
\begin{proof}
We note that
\begin{align}
\nn Y_1(t)-Y_2(t)&=\psi_1(t)-\psi_2(t)+\int_t^T\left[g_1(t,s,Z_1(t,s))-g_2(t,s,Z_2(t,s))\right]ds\\
\label{com-no-y-y1-y2} &{\hp =}\qq-\int_{t}^{T}\left[Z_1(t,s)-Z_2(t,s)\right]dW(s).
\end{align}
By the second condition in {\rm\ref{A1}}, we can define a process $\th(\cd\,,\cd)$ such that
\begin{align}
&\th(t,s)=0,\q(t,s)\in\D^*[0,T];\\
&|\th(t,s)|\les C(1+|Z_1(t,s)|+|Z_2(t,s)|),\q (t,s)\in\D[0,T];\\
\label{com-no-y-beta}&g_1(t,s,Z_1(t,s))-g_1(t,s,Z_2(t,s))
=\big[Z_1(t,s)-Z_2(t,s)\big]\th(t,s),\q(t,s)\in\D[0,T].
\end{align}
Hereafter, $C>0$ stands for a generic constant which may differ from line to line. Since $Z_1(\cd\,,\cd),Z_2(\cd\,,\cd)\in\cl{\rm BMO}(\D[0,T])$, we have $\th(t,\cd)\in\cl{\rm BMO}(0,T)$ for almost all $t\in[0,T]$. Hence, by \autoref{lemma-Girsanov}, for almost all $t\in[0,T]$, the process $W(t;\cd)$ defined by
\bel{W(t,s)}
W(t;s)\deq W(s)-\int_0^s\th(t,r)dr,\q s\in[0,T]
\ee
is a Brownian motion on $[0,T]$ under the equivalent probability measure $\cl{\dbP}_t$ defined by
$$d\cl{\dbP}_t\deq\cE\{\th(t,\cd)\}_{\1n_T}d\dbP. $$
The corresponding expectation is denoted by $\dbE^{\bar\dbP_t}$. Thus, by \rf{com-no-y-y1-y2} and \rf{W(t,s)}, we have
\begin{align*}
Y_1(t)-Y_2(t)&=\psi_1(t)-\psi_2(t)+\int_t^T\left[g_1(t,s,Z_2(t,s))-g_2(t,s,Z_2(t,s))\right]ds\\
&\hp{=\ }-\int_t^T\left[Z_1(t,s)-Z_2(t,s)\right]dW(t;s).
\end{align*}
Taking the conditional expectation under $\cl{\dbP}_t$ on both sides of the above equation, and then using \rf{psi-g-no-y-com}, we have
\begin{align*}
Y_1(t)-Y_2(t)&=\dbE^{\bar\dbP_t}_t\[\psi_1(t)-\psi_2(t)+\int_t^T\left[g_1(t,s,Z_2(t,s))
-g_2(t,s,Z_2(t,s))\right]ds\]\les 0,\q\as
\end{align*}
Hence, \rf{com-no-y} follows.
\end{proof}
\begin{remark}
\rm \autoref{thm-exist-unique-no-y} and \autoref{thm-comparison-no-y} are both concerned with BSVIE \rf{bsvie-no-y},
a very special case of Type-I BSVIE \rf{bsvie-I}, in which the generator $g(\cd)$ is independent of the variable $y$. This makes BSVIE \rf{bsvie-no-y} much easier to handle. Nevertheless, \autoref{thm-exist-unique-no-y} and \autoref{thm-comparison-no-y} serve as a crucial bridge to the proofs of the results for general Type-I BSVIEs.
\end{remark}
\subsection{The general case}
In this subsection, we will consider the following Type-I BSVIE:
\bel{bsvie-1-d}
Y(t)=\psi(t)+\int_t^Tg(t,s,Y(s),Z(t,s))ds-\int_t^TZ(t,s)dW(s),\qq t\in[0,T].
\ee
We first introduce the following assumption, which is comparable to \ref{A0}.
\begin{taggedassumption}{(A2)}\label{A2}\rm
Let the generator $g:\D[0,T]\times\dbR\times\dbR\times\Om\to\dbR$ be $\cB(\D[0,T]\times\dbR\times\dbR)\otimes\cF_T$-measurable such that $s\mapsto g(t,s,y,z)$ is $\dbF$-progressively measurable on $[t,T]$ for all $(t,y,z)\in [0,T]\times\dbR\times\dbR$. There exist two constants $L$ and $\g$ such that:
\begin{align*}
&|g(t,s,y,z)|\les L(1+|y|)+{\g\over 2}|z|^2,\q\forall (t,s,y,z)\in\D[0,T]\times\dbR\times\dbR;\\
&|g(t,s,y_1,z_1)-g(t,s,y_2,z_2)|\les L\big\{|y_1-y_2|+(1+|z_1|+|z_2|)|z_1-z_2|\big\},\\
&\qq\qq\qq\qq\qq\qq\q\,~\forall (t,s,y_i,z_i)\in\D[0,T]\times\dbR\times\dbR,~i=1,2.
\end{align*}
\end{taggedassumption}
At the same time, we introduce the following additional assumption which will be used to establish a better regularity for the adapted solutions.
\begin{taggedassumption}{(A3)}\label{A3}\rm
Let $g:[0,T]^2\times\dbR\times\dbR\times\Om\to\dbR$ be measurable such that for every $(t,y,z)\in[0,T]\times\dbR\times\dbR$, $s\mapsto g(t,s,y,z)$ is $\dbF$-progressively measurable. There exists a modulus of continuity $\rho:[0,\i)\to[0,\i)$ (a continuous and monotone increasing function with $\rho(0)=0$) such that
\begin{align*}
|g(t,s,y,z)-g(t',s,y,z)|\les\rho(|t-t'|)(1+|y|+|z|^2),\q \forall~t,t',s\in[0,T],~(y,z)\in\dbR\times\dbR.
\end{align*}
\end{taggedassumption}
Note that in \ref{A3}, the generator $g(t,s,y,z)$ is defined for $(t,s)$ in the square domain $[0,T]^2$ instead of the triangle domain $\D[0,T]$, and the uniform continuity of the map $t\mapsto g(t,s,y,z)$ (uniform for $(s,y,z)$ in any bounded set) is assumed. Now, we state the main result of this subsection.
\begin{theorem}\label{thm-bsvie-1-d-exist-unique} \sl
Let {\rm\ref{A2}} hold. Then for any $\psi(\cd)\in L^\infty_{\cF_T}(0,T)$, BSVIE \rf{bsvie-1-d} admits a unique adapted solution $(Y(\cd),Z(\cd,\cd))\in L^\i_{\dbF}(0,T)\times \cl{\rm BMO}(\D[0,T])$.
\end{theorem}
We will prove \autoref{thm-bsvie-1-d-exist-unique} by means of the contraction mapping theorem. For any $(U(\cd),V(\cd,\cd))\in L^\i_{\dbF}(0,T)\times\cl{\rm BMO}(\D[0,T])$, consider the following BSVIE:
\bel{bsvie-1-d-uv}
Y(t)=\psi(t)+\int_t^Tg(t,s,U(s),Z(t,s))ds-\int_t^TZ(t,s)dW(s),\q t\in[0,T].
\ee
By \autoref{thm-exist-unique-no-y}, BSVIE \rf{bsvie-1-d-uv} admits a unique adapted solution $(Y(\cd),Z(\cd,\cd))\in L^\i_\dbF(0,T)\times\cl{\rm BMO}$ $(\D[0,T])$. Thus, the map
\bel{1-d-gamma}
\G(U(\cd),V(\cd,\cd))\deq(Y(\cd),Z(\cd,\cd)),\q(U(\cd),V(\cd,\cd))\in L^\i_{\dbF}(0,T)\times\cl{\rm BMO}(\D[0,T])
\ee
is well-defined. In order to prove \autoref{thm-bsvie-1-d-exist-unique}, we present the following lemma.
\begin{lemma}\label{le-l-d-Gamma-e-b} \sl
Let {\rm\ref{A2}} hold and $\e\in(0,{1\over2L}]$. Then for any $\psi(\cd)\in L^\i_{\cF_T}(0,T)$, the map $\G(\cd,\cd)$ defined by \rf{1-d-gamma} satisfies the following:
\bel{le-1-d-Gamma-e-b-main}
\G(\cB_\e)\subseteq\cB_\e,
\ee
where $\cB_\e$ is defined by the following:
\bel{1-d-b-e}\ba{ll}
\ns\ds\cB_\e\deq\Big\{(U(\cd),V(\cd,\cd))\in L^\i_{\dbF}(T-\e,T)\times
\cl{\rm BMO}(\D[T-\e,T])\bigm|\\
\ns\ds\qq\qq\qq\qq\|U(\cd)\|_{L^\i_\dbF(T-\e,T)}\les2\|\psi(\cd)\|_\i+1,
\q\|V(\cd\,,\cd)\|^2_{\cl{\rm BMO}(\D[T-\e,T])}\les A\Big\},
\ea\ee
with
$$A={2\over\g^2}e^{\g\|\psi(\cd)\|_\i}+{1\over\g}e^{2(\g+1)
\|\psi(\cd)\|_\i+\g+2}.$$
\end{lemma}
\begin{proof}
For any $(U(\cd),V(\cd,\cd))\in \cB_\e $, consider a family of BSDEs (parameterized by $t\in[0,T]$):
\bel{le-l-d-Gamma-e-b-1}
\eta(t,s)=\psi(t)+\int_s^Tg(t,r,U(r),\z(t,r))dr-\int_s^T\z(t,r)dW(r),\q s\in[t,T].
\ee
Note that $U(\cd)$ is bounded.
For almost all $t\in[T-\e,T]$, by \autoref{lemma-briand-hu}, the above BSDE admits a unique adapted solution $(\eta(t,\cd),\z(t,\cd))\in L^\i_\dbF(\Om;C[t,T])\times\cl{\rm BMO}(t,T)$. Let
\bel{le-1-d-Gamma-e-b-yz}
Y(t)=\eta(t,t),\q Z(t,s)=\z(t,s),\q (t,s)\in\D[T-\e,T].
\ee
Then, by \autoref{thm-exist-unique-no-y}, $(Y(\cd),Z(\cd,\cd))\in L^\i_{\dbF}(T-\e,T)\times\cl{\rm BMO}(\D[T-\e,T])$ is the unique adapted solution to BSVIE \rf{bsvie-1-d-uv} on $[T-\e,T]$. The rest of the proof is divided into two steps.
\ss
{\bf Step 1:} {\it Estimate of $\|Y(\cd)\|_\i$. }
\ms
For BSDE \rf{le-l-d-Gamma-e-b-1}, by \ref{A2}, we have
$$|g(t,r,U(r),\z)|\les L\big(1+|U(r)|\big)+{\g\over2}|\z|^2.$$
Thus, noting that $\e\in(0,{1\over 2L}]$, by \autoref{lemma-briand-hu} with $h(s)=L(1+|U(s)|)$, the same $\g$, and $\b=0$, we have
\bel{le-l-d-Gamma-e-b-9}\ba{ll}
\ns\ds e^{\g|\eta(t,s)|}\les\dbE_s\[e^{\g \big(|\psi(t)|+L\int_s^T(1+|U(r)|)dr\big)}\]\les e^{\g\big[\|\psi(\cd)\|_\i+L\e\big(1+\|U(\cd)\|_{L^\i_\dbF(T-\e,T)}\big)\big]}\\
\ns\ds\qq\q~\les e^{\g(2\|\psi(\cd)\|_\i+1)},\q T-\e\les t\les s\les T,\ea\ee
which is equivalent to
\bel{le-l-d-Gamma-e-b-9.1}|\eta(t,s)|\les2\|\psi(\cd)\|_\i+1,\q T-\e\les t\les s\les T.\ee
Consequently, noting $Y(t)=\eta(t,t)$, one has
$$\|Y(\cd)\|_{L^\i_\dbF(T-\e,T)}\les2\|\psi(\cd)\|_\i+1.$$
{\bf Step 2:} {\it Estimate of $\|Z(\cd\,,\cd)\|^2_{\cl{\rm BMO}(\D[T-\e,T])}$.}
\ms
Define
\bel{le-l-d-Gamma-e-b-2}\p(y)\deq\g^{-2}\big(e^{\g|y|}-\g|y|-1\big);\q y\in\dbR. \ee
Then, we have
\bel{le-l-d-Gamma-e-b-3}
\p'(y)=\g^{-1}[e^{\g|y|}-1]\hb{sgn}(y),\q
\p''(y)=e^{\g|y|},\ee
which leads to $\p''(y)=\g|\p'(y)|+1$. Applying It\^o's formula to $s\mapsto\p(\eta(t,s))$, we have
\begin{align}
\nn &\p(\psi(t))-\p(\eta(t,s))\\
\label{le-l-d-Gamma-e-b-4} &\q=-\int_s^T \p'(\eta(t,r))g(t,r,U(r),\z(t,r))dr+{1\over 2}\int_s^T \p''(\eta(t,r))|\z(t,r)|^2dr\\
\nn &\q\hp{=\ } +\int_s^T\p'(\eta(t,r))\z(t,r)dW(r),\q s\in[t,T].
\end{align}
Taking conditional expectations on both sides of \rf{le-l-d-Gamma-e-b-4} and using {\rm\ref{A2}}, we have
\begin{align*}
&\p(\eta(t,s))+{1\over 2}\dbE_s\[\int_s^T\p''(\eta(t,r))|\z(t,r)|^2dr\]\\
&~\les\p(\|\psi(\cd)\|_\i)+L\dbE_s\[\int_s^T|\p'(\eta(t,r))|\big(1+|U(r)|\big)dr\]
+{\g\over2}\dbE_s\[\int_s^T|\p'(\eta(t,r))|\,|\z(t,r)|^2dr\].
\end{align*}
Combining this with \rf{le-l-d-Gamma-e-b-3} and noting that ${1\over2}\p''(y)-{\g\over2}|\p'(y)|={1\over2}$, one obtains
\bel{le-l-d-Gamma-e-b-5}\p(\eta(t,s))+{1\over 2}\dbE_s\[\int_s^T|\z(t,r)|^2dr\]\les\p(\|\psi(\cd)\|_\i)+L\dbE_s\[
\int_s^T|\p'(\eta(t,r))|(1+|U(r)|)dr\].\ee
Then, since $\p(\eta(t,s))\ges0$, we may drop this term to get
\begin{align*}
&\dbE_s\[\int_s^T|Z(t,r)|^2dr\]\les2\p(\|\psi(\cd)\|_\i)+2L\dbE_s\[\int_s^T|\p'(\eta(t,r))|(1+|U(r)|)dr\]\\
&~\les{2\over\g^2}e^{\g\|\psi(\cd)\|_\i}+{2L\over\g}\e e^{\g(2\|\psi(\cd)\|_\i+1)}e^{2(\|\psi(\cd)\|_\i+1)}
\les{2\over\g^2}e^{\g\|\psi(\cd)\|_\i}+{1\over\g}e^{2(\g+1)\|\psi(\cd)\|_\i+\g+2}.
\end{align*}
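In the last chain of inequalities, the following elementary estimates were used (together with $2L\e\les1$):
$$|\p'(y)|\les{1\over\g}e^{\g|y|},\qq 2\p(\|\psi(\cd)\|_\i)\les{2\over\g^2}e^{\g\|\psi(\cd)\|_\i},\qq
2\|\psi(\cd)\|_\i+2\les e^{2(\|\psi(\cd)\|_\i+1)},$$
the last inequality following from $x\les e^x$, $x\ges0$.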
Hence,
\bel{Z<A}\|Z(\cd\,,\cd)\|^2_{\cl{\rm BMO}(\D[T-\e,T])}\les{2\over\g^2}e^{\g\|\psi(\cd)\|_\i}+{1\over\g}e^{2(\g+1)
\|\psi(\cd)\|_\i+\g+2}=A.\ee
This proves our claim. \end{proof}
The next result is concerned with the local solution of BSVIE \rf{bsvie-1-d}.
\begin{proposition}\label{pro-l-d-Gamma-e-c} \sl
Let {\rm\ref{A2}} hold and the map $\G(\cd\,,\cd)$ be defined by \rf{1-d-gamma}.
Then there is $\e>0$ such that $\Gamma(\cd\,,\cd)$ is a contraction on $\mathcal{B}_\e$, where $\mathcal{B}_\e$ is defined by \rf{1-d-b-e}.
This implies that BSVIE \rf{bsvie-1-d} admits a unique adapted solution on $[T-\e,T]$.
\end{proposition}
\begin{proof}
Let $\e\in(0,{1\over2L}]$. For any $(U(\cd),V(\cd\,,\cd)),(\wt U(\cd),\wt V(\cd\,,\cd))\in\cB_\e$, set
\bel{pro-l-d-Gamma-e-c-1}
(Y(\cd),Z(\cd,\cd))=\G(U(\cd),V(\cd\,,\cd))\q\hbox{and}\q(\wt Y (\cd),\wt Z(\cd\,,\cd))=\G(\wt U(\cd),\wt V(\cd\,,\cd));
\ee
that is,
\begin{align}
\label{pro-l-d-Gamma-e-c-2} \eta(t,s)&=\psi(t)+\int_s^T g(t,r,U(r),\z(t,r))dr-\int_s^T\z(t,r)dW(r),\\
\label{pro-l-d-Gamma-e-c-3} \wt\eta(t,s)&=\psi(t)+\int_s^T g(t,r,\wt U(r),\wt\z(t,r))dr-\int_s^T\wt\z(t,r)dW(r),
\end{align}
and
\bel{pro-l-d-Gamma-e-c-4}
Y(t)=\eta(t,t),~\wt Y (t)=\wt\eta(t,t),\q Z(t,r)=\z(t,r),~\wt Z(t,r)=\wt\z(t,r).
\ee
By \autoref{le-l-d-Gamma-e-b}, both $(Y(\cd),Z(\cd\,,\cd))$ and $(\wt Y(\cd),\wt Z(\cd\,,\cd))$ belong to $\cB_\e$. By \ref{A2}, for almost all $t\in[T-\e,T]$, we can define a process $\th(t,\cd)$ (in the standard measurable way) such that:
\begin{align}
\label{pro-l-d-Gamma-e-c-5-1} &\th(t,s)=0, \q(t,s)\in[T-\e,T]\times[0,t],\\
\label{pro-l-d-Gamma-e-c-5-2} &|\th(t,s)|\les L(1+|\z(t,s)|+|\wt\z(t,s)|),\q (t,s)\in\D[T-\e,T],\\
\label{pro-l-d-Gamma-e-c-5-3} &g(t,s,\wt U(s),\z(t,s))-g(t,s,\wt U(s),\wt\z(t,s))=[\z(t,s)-\wt\z(t,s)]\th(t,s).
\end{align}
Note that $(Y(\cd),Z(\cd\,,\cd)),(\wt Y(\cd),\wt Z(\cd\,,\cd))\in\cB_\e$ with $Z(\cd\,,\cd)=\z(\cd\,,\cd)$ and $\wt Z(\cd\,,\cd)=\wt\z(\cd\,,\cd)$. Thus, by \rf{pro-l-d-Gamma-e-c-5-1}--\rf{pro-l-d-Gamma-e-c-5-2},
\begin{align}
\nn\|\th(\cd,\cd)\|^2_{\cl{\rm BMO}(\D[T-\e,T])}&\les 3L^2T+3L^2\|\z(\cd,\cd)\|^2_{\cl{\rm BMO}(\D[T-\e,T])}+3L^2\|\wt\z(\cd,\cd)\|^2_{\cl{\rm BMO}(\D[T-\e,T])}\\
&\les3L^2T+6L^2A.
\end{align}
Thus, for almost all $t\in[T-\e,T]$, the process $s\mapsto\int_0^s\th(t,r)dW(r)$, $0\les s\les T$, is a BMO martingale and
\bel{pro-1-d-Gamma-e-c-6}
\left\|\int_0^\cd\th(t,r)dW(r)\right\|^2_{{\rm BMO}(0,T)}\les3L^2T+6L^2A.
\ee
By \autoref{lemma-Girsanov}, $W(t;\cdot)$ defined by
\bel{pro-l-d-Gamma-e-c-7}
W(t;s)\deq W(s)-\int_0^s\th(t,r)dr,\q s\in[0,T]
\ee
is a Brownian motion on $[0,T]$ under the equivalent probability measure $\cl{\dbP}_t$, which is defined by
\bel{pro-l-d-Gamma-e-c-8}
d\cl{\dbP}_t\deq\cE\{\th(t,\cd)\}_{\1n_T}d\dbP.
\ee
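Here $\cE\{\th(t,\cd)\}$ stands for the Dol\'eans--Dade stochastic exponential:
$$\cE\{\th(t,\cd)\}_s=\exp\Big(\int_0^s\th(t,r)dW(r)-{1\over2}\int_0^s|\th(t,r)|^2dr\Big),\q s\in[0,T].$$
Since $\int_0^\cd\th(t,r)dW(r)$ is a BMO martingale by \rf{pro-1-d-Gamma-e-c-6}, this stochastic exponential is a uniformly integrable martingale, so that $\cl{\dbP}_t$ is indeed a well-defined probability measure.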
Denote the expectation under $\bar\dbP_t$ by $\dbE^{\bar\dbP_t}$.
Combining \rf{pro-l-d-Gamma-e-c-2}, \rf{pro-l-d-Gamma-e-c-3}, and \rf{pro-l-d-Gamma-e-c-5-3}--\rf{pro-l-d-Gamma-e-c-7},
we have
\begin{align}
\nn &\eta(t,s)-\wt\eta(t,s)+\int_s^T[\z(t,r)-\wt\z(t,r)]d W(t;r)\\
\label{pro-l-d-Gamma-e-c-9}&~=\int_s^T\left[g(t,r,U(r),\z(t,r))-g(t,r,\wt U(r),\z(t,r))\right]dr.
\end{align}
Squaring both sides of the above equation and taking the conditional
expectation with respect to $\bar\dbP_t$, we have (noting
$T-\e\les t\les s\les T$)
\begin{align}
\nn &|\eta(t,s)-\wt\eta(t,s)|^2+\dbE^{\bar\dbP_t}_s\[\int_s^T
|\z(t,r)-\wt\z(t,r)|^2dr\]\\
\nn &~=\dbE^{\bar\dbP_t}_s\Big\{\[\int_s^T\(g(t,r,U(r),\z(t,r))-g(t,r,\wt U(r),\z(t,r))\)dr\]^2\Big\}\\
\label{pro-l-d-Gamma-e-c-10}&~\les\dbE^{\bar\dbP_t}_s\Big\{\[\int_s^T
\(L|U(r)-\wt U(r)|\)dr\]^2\Big\}\\
\nn &~\les L^2(T-t)^2\|U(\cd)-\wt U(\cd)\|^2_{L^\i_\dbF(T-\e,T)}\les L^2\e ^2\|U(\cd)-\wt U(\cd)\|^2_{L^\i_\dbF(T-\e,T)}.
\end{align}
Let $s=t$, by \rf{pro-l-d-Gamma-e-c-4} and \rf{pro-l-d-Gamma-e-c-10}, we have
\bel{pro-l-d-Gamma-e-c-11}
\|Y(\cd)-\wt Y(\cd)\|^2_{L^\i_\dbF(T-\e,T)}\les L^2\e^2\|U(\cd)-\wt U(\cd)\|^2_{L^\i_\dbF(T-\e,T)}.
\ee
Also, by \rf{pro-l-d-Gamma-e-c-4}, \rf{pro-l-d-Gamma-e-c-10}, \rf{pro-1-d-Gamma-e-c-6}, and \autoref{lemma-BMO}, there is a constant $C$ (which depends on $\|\psi(\cd)\|_\i$ but not on $t$) such that
\begin{align}
\nn &\sup_{s\in[t,T]}\dbE_s\[\int_s^T|Z(t,r)-\wt Z(t,r)|^2dr\]= \sup_{s\in[t,T]}\dbE_s\[\int_s^T|\z(t,r)-\wt \z(t,r)|^2dr\]\\
\label{pro-l-d-Gamma-e-c-12}&\q\les C\sup_{s\in[t,T]}\dbE^{\bar\dbP_t}_s\[\int_s^T|\z(t,r)-\wt \z(t,r)|^2dr\]\les CL^2\e^2\|U(\cd)-\wt U(\cd)\|^2_{L^\i_\dbF(T-\e,T)}.
\end{align}
Thus,
\bel{pro-l-d-Gamma-e-c-13}
\|Z(\cd,\cd)-\wt Z(\cd,\cd)\|^2_{\cl{\rm BMO}(\D[T-\e,T])}\les CL^2\e^2\|U(\cd)-\wt U(\cd)\|^2_{L^\i_\dbF(T-\e,T)}.
\ee
Combining \rf{pro-l-d-Gamma-e-c-11}--\rf{pro-l-d-Gamma-e-c-13}, we see that for some small $\e>0$, the map $\G(\cd\,,\cd)$ is a contraction on the set $\cB_\e$. Hence, BSVIE \rf{bsvie-1-d} admits a unique adapted solution on $[T-\e,T]$.
\end{proof}
Let us make some comments on the above local existence of the unique adapted solution.
\vskip-1cm
\setlength{\unitlength}{.01in}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\begin{picture}(230,240)
\put(0,0){\vector(1,0){170}}
\put(0,0){\vector(0,1){170}}
\put(110,0){\line(0,1){150}}
\put(150,0){\line(0,1){150}}
\put(0,110){\line(1,0){150}}
\put(0,150){\line(1,0){150}}
\thicklines
\put(0,0){\color{red}\line(1,1){150}}
\put(122,135){\makebox(0,0){$\textcircled{\small 1}$}}
\put(55,130){\makebox(0,0){$\textcircled{\small 2}$}}
\put(-10,150){\makebox(0,0)[b]{$\scriptstyle T$}}
\put(150,-12){\makebox(0,0)[b]{$\scriptstyle T$}}
\put(-15,105){\makebox(0,0)[b]{$\scriptstyle T-\e$}}
\put(105,-12){\makebox(0,0)[b]{$\scriptstyle T-\e$}}
\put(180,-5){\makebox{$t$}}
\put(0,180){\makebox{$s$}}
\put(35,80){\makebox(0,0){$\scriptstyle\D[0,T-\e]$}}
\put(75,35){\makebox(0,0){$\scriptstyle\D^*[0,T-\e]$}}
\end{picture}
\bs
\centerline{(Figure 1)}
\bs
\no We have seen that $(Y(s),Z(t,s))$ is defined for $(t,s)\in\D[T-\e,T]$, the region marked $\textcircled{\small 1}$ in the above figure. Now, for any $t\in[0,T-\e]$, we can rewrite our Type-I BSVIE as follows:
\bel{BSVIE(0,T-e)} Y(t)=\psi^{T-\e}(t)+\int_t^{T-\e}g(t,s,Y(s),Z(t,s))ds-\int_t^{T-\e}
Z(t,s)dW(s),\q t\in[0,T-\e],\ee
where
\bel{psi(T-e)}\psi^{T-\e}(t)=\psi(t)+\int_{T-\e}^Tg(t,s,Y(s),Z(t,s))ds
-\int_{T-\e}^TZ(t,s)dW(s),\q t\in[0,T-\e].\ee
If $\psi^{T-\e}(\cd)\in L^\i_{\cF_{T-\e}}(0,T-\e)$, then \rf{BSVIE(0,T-e)} is a BSVIE on $[0,T-\e]$. However, unlike the case of BSDEs, having $(Y(s),Z(t,s))$ defined on $\D[T-\e,T]$ does not yet determine $\psi^{T-\e}(t)$ for $t\in[0,T-\e]$: on the right-hand side of \rf{psi(T-e)}, although $Y(s)$ with $s\in[T-\e,T]$ has already been determined, $Z(t,s)$ has not been defined for $(t,s)\in[0,T-\e]\times[T-\e,T]$, the region marked $\textcircled{\small 2}$ in the above figure, which is needed to define $\psi^{T-\e}(t)$. Moreover, we need $\psi^{T-\e}(t)$ to be $\cF_{T-\e}$-measurable (not just $\cF_T$-measurable). Hence, \rf{psi(T-e)} is actually a {\it stochastic Fredholm integral equation} (SFIE, for short) that has to be solved to determine $\psi^{T-\e}(t)$, $t\in[0,T-\e]$.
\ms
Now we are in a position to prove \autoref{thm-bsvie-1-d-exist-unique}.
\ms
\it \textbf{Proof of \autoref{thm-bsvie-1-d-exist-unique}}. \rm The proof will be divided into three steps.
\ss
{\bf Step 1:} {\it Estimate of $|Y(\cdot)|^2$.}
\ms
For given $\psi(\cd)\in L^\infty_{\cF_T}(0,T)$, we can find a constant $\wt C>0$ such that $\|\psi(\cd)\|_\i^2\les\wt C$ and (by \ref{A2})
\bel{thm-bsvie-1-d-exist-unique-1}
|2xg(t,s,y,0)|\les\wt C+\wt C|x|^2+\wt C|y|^2,\q\forall (t,s,x,y)\in\D[0,T]\times\dbR\times\dbR.
\ee
Let us consider the following (integral form of) ordinary differential equation:
\bel{thm-bsvie-1-d-exist-unique-2}
\a(t)=\wt C+\int_t^T\wt C\a(s)ds+\int_t^T\wt C[\a(s)+1]ds,\q t\in[0,T].
\ee
It is easy to see that the unique solution to the above ordinary differential equation is given by
$$\a(t)=\(\wt C+{1\over2}\)e^{2\wt C(T-t)}-{1\over2},\q t\in[0,T],$$
which is a (continuous) decreasing function. Thus,
$$\|\psi(\cd)\|_\i^2\les\wt C=\a(T)\les\a(0).$$
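For the reader's convenience, the formula for $\a(\cd)$ can be verified directly: differentiation gives
$$\a'(t)=-2\wt C\(\wt C+{1\over2}\)e^{2\wt C(T-t)}=-\wt C\a(t)-\wt C[\a(t)+1],\q \a(T)=\wt C,$$
which is exactly the differential form of \rf{thm-bsvie-1-d-exist-unique-2}; in particular, $\a'(t)\les0$, confirming that $\a(\cd)$ is decreasing.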
By \autoref{pro-l-d-Gamma-e-c}, there exists an $\e>0$ (depending on $\|\psi(\cd)\|_\i$) such that $\G(\cd\,,\cd)$ defined by \rf{1-d-gamma} is a contraction on $\cB_\e$. Therefore, a Picard iteration sequence converges to the unique adapted solution $(Y(\cd),Z(\cd,\cd))$ of the BSVIE on $[T-\e,T]$. Namely, if we define:
\bel{thm-bsvie-1-d-exist-unique-4}
\left\{\begin{aligned}
(Y^0(\cd),Z^0(\cd,\cd))&=0,\\
(Y^{k+1}(\cd),Z^{k+1}(\cd,\cd))&=\G(Y^k(\cd),Z^k(\cd,\cd)),\q k\ges0;
\end{aligned}\right.\ee
that is,
\begin{align*}
&(Y^0(\cd),Z^0(\cd,\cd))=0,\\
&\eta^{k+1}(t,s)=\psi(t)+\int_s^T g(t,r,Y^k(r),\z^{k+1}(t,r))dr-\int_s^T\z^{k+1}(t,r)dW(r),\\
&Y^{k+1}(t)=\eta^{k+1}(t,t),\q Z^{k+1}(t,s)=\z^{k+1}(t,s),\q (t,s)\in\D[T-\e,T],
\end{align*}
then
\bel{thm-bsvie-1-d-exist-unique-5}
\lim_{k\to\i}\|(Y^k(\cd),Z^k(\cd,\cd))-(Y(\cd),Z(\cd,\cd))\|_{
L^\i_\dbF(T-\e,T)\times\cl{\rm BMO}(\D[T-\e,T])}=0.
\ee
Next, for almost all $t\in[T-\e,T]$, similar to \rf{pro-l-d-Gamma-e-c-5-2}, \rf{pro-l-d-Gamma-e-c-5-3}, \rf{pro-l-d-Gamma-e-c-7}, and \rf{pro-l-d-Gamma-e-c-8},
there exists a process $\th^{k+1}(t,\cd)$ such that
\bel{thm-bsvie-1-d-exist-unique-6}
g(t,r,Y^k(r),\z^{k+1}(t,r))-g(t,r,Y^k(r),0)=\z^{k+1}(t,r)\th^{k+1}(t,r),
\ee
and
\bel{thm-bsvie-1-d-exist-unique-7}
W^{k+1}(t;s)\deq W(s)-\int_0^s\th^{k+1}(t,r)dr,\q s\in[0,T]
\ee
is a Brownian motion on $[0,T]$ under the corresponding equivalent probability measure $\dbP^{k+1}_t$ defined by
$$d\dbP_t^{k+1}=\cE\{\th^{k+1}(t,\cd)\}_{\1n_T}d\dbP.$$
For simplicity, we denote $\dbP^{k+1}_t$ by $\dbP^{k+1}$ here, suppressing the subscript $t$. The corresponding expectation is denoted by $\dbE^{k+1}$.
It follows that
\begin{align}
\nn\eta^{k+1}(t,s)&=\psi(t)+\int_s^Tg(t,r,Y^k(r),\z^{k+1}(t,r))dr-\int_s^T\z^{k+1}(t,r)dW(r)\\
\label{thm-bsvie-1-d-exist-unique-8} &=\psi(t)+\int_s^Tg(t,r,Y^k(r),0)dr-\int_s^T\z^{k+1}(t,r)dW^{k+1}(t;r).
\end{align}
Applying the It\^o formula to the map $s\mapsto|\eta^{k+1}(t,s)|^{2}$ and taking conditional expectation $\dbE^{k+1}_\t=\dbE^{k+1}[\,\cd\,|\,\cF_\t]$ for any $\t\in[T-\e,s]$, by \rf{thm-bsvie-1-d-exist-unique-1}, we have
\begin{align}
\nn& \dbE_\t^{k+1}\[|\eta^{k+1}(t,s)|^2\]+\dbE_\t^{k+1}\[\int_s^T|\z^{k+1}
(t,r)|^2dr\]\\
\label{thm-bsvie-1-d-exist-unique-9}&~=\dbE_\t^{k+1}\[|\psi(t)|^2\]
+\dbE_\t^{k+1}\[\int_s^T2\eta^{k+1}(t,r)g(t,r,Y^k(r),0)dr\]\\
\nn&~\les\wt C+\wt C\int_s^T\dbE_\t^{k+1}\[|\eta^{k+1}(t,r)|^2\]dr+\wt C\int_s^T\Big\{\dbE_\t^{k+1}\[|Y^{k}(r)|^2\]+1\Big\}dr.
\end{align}
We now prove the following inequality by induction:
\bel{thm-bsvie-1-d-exist-unique-10}
|Y^k(t)|^2\les\a(t),\q t\in[T-\e,T],\q\hbox{for any}~k\ges0.
\ee
In fact, by \rf{thm-bsvie-1-d-exist-unique-4}, it is obvious that $|Y^0(t)|^2=0\les\a(t)$.
Suppose $|Y^k(t)|^2\les\a(t)$ for all $t\in[T-\e,T]$; then
\bel{thm-bsvie-1-d-exist-unique-11}
\dbE_\t^{k+1}\[|\eta^{k+1}(t,s)|^2\]\les\wt C+\wt C\int_s^T\dbE_\t^{k+1}\[|\eta^{k+1}(t,r)|^2\]dr+\wt C\int_s^T[\a(r)+1]dr.
\ee
In light of \rf{thm-bsvie-1-d-exist-unique-2}, by the comparison theorem of ordinary differential equations, we have
\bel{thm-bsvie-1-d-exist-unique-12}
\dbE_\t^{k+1}\[|\eta^{k+1}(t,s)|^2\]\les\a(s).
\ee
Taking $\t=s$ and then letting $s=t$, we obtain
\bel{thm-bsvie-1-d-exist-unique-13}
|Y^{k+1}(t)|^2\les\a(t),\q t\in[T-\e,T].
\ee
Thus, by induction, \rf{thm-bsvie-1-d-exist-unique-10} holds. Then by \rf{thm-bsvie-1-d-exist-unique-5}, we have
\bel{thm-bsvie-1-d-exist-unique-14}
|Y(t)|^{2}\les\a(t),\q t\in[T-\e,T].
\ee
\ss
{\bf Step 2:} {\it A related stochastic Fredholm integral equation is solvable on $[0,T-\e]$.}
\ms
We now solve SFIE \rf{psi(T-e)} on $[0,T-\e]$. Let us introduce a family of BSDEs parameterized by $t\in[0,T-\e]$:
\bel{thm-bsvie-1-d-exist-unique-16}
\eta(t,s)=\psi(t)+\int_s^T g(t,r,Y(r),\z(t,r))dr-\int_s^T\z(t,r)dW(r),\q s\in[T-\e,T].
\ee
By \autoref{lemma-briand-hu}, the above BSDE admits a unique adapted solution $(\eta(t,\cd),\z(t,\cd))$ on $[T-\e,T]$.
Noting \rf{thm-bsvie-1-d-exist-unique-14} and arguing as in \rf{thm-bsvie-1-d-exist-unique-12}, we have
\bel{thm-bsvie-1-d-exist-unique-171}
|\eta(t,s)|^2\les\a(s),\q s\in[T-\e,T].
\ee
Similar to \rf{Z<A}, we have
\bel{thm-bsvie-1-d-exist-unique-172}
\esssup_{t\in[0,T-\e]}\|\z(t,\cd)\|^2_{\cl{\rm BMO}([T-\e,T])}<\i.
\ee
Setting $\psi^{T-\e}(t)=\eta(t,T-\e)$ and $Z(t,s)=\z(t,s)$,
we have $(\psi^{T-\e}(\cd),Z(\cd,\cd))\in L^\i_{\cF_{T-\e}}(0,T-\e)\times\cl{\rm BMO}([0,T-\e]\times[T-\e,T])$,
and $(\psi^{T-\e}(\cd),Z(\cd,\cd))$ is a solution to SFIE \rf{psi(T-e)}. Moreover, by \rf{thm-bsvie-1-d-exist-unique-171}, we have
\bel{thm-bsvie-1-d-exist-unique-18}
|\psi^{T-\e}(t)|^2=|\eta(t,T-\e)|^2\les\a(T-\e)\les\a(0),\q t\in[0,T-\e].
\ee
Next, we prove that the solution to SFIE \rf{psi(T-e)} is unique.
Let
$$\ba{ll}
\ns\ds(\psi^{T-\e}(\cd),Z(\cd,\cd)),~(\wt\psi^{T-\e}(\cd),\wt Z(\cd,\cd))\in L^\i_{\cF_{T-\e}}(0,T-\e)\times\cl{\rm BMO}([0,T-\e]\times[T-\e,T])\ea$$
be two solutions to SFIE \rf{psi(T-e)}. Then
\begin{align}
\label{thm-bsvie-1-d-exist-unique-19}\psi^{T-\e}(t)-\wt\psi^{T-\e}(t)&
=\int_{T-\e}^T\[g(t,s,Y(s),Z(t,s))-g(t,s,Y(s),\wt Z(t,s))\]ds\\
\nn&\hp{=\ }-\int_{T-\e}^T\[Z(t,s)-\wt Z(t,s)\]dW(s),\q t\in[0,T-\e].
\end{align}
For almost all $t\in[0,T-\e]$, similar to \rf{pro-l-d-Gamma-e-c-5-2}, \rf{pro-l-d-Gamma-e-c-5-3}, \rf{pro-l-d-Gamma-e-c-7}, and \rf{pro-l-d-Gamma-e-c-8}, there is a process $\wt\th(t,\cd)$ such that:
\bel{thm-bsvie-1-d-exist-unique-20}
g(t,s,Y(s),Z(t,s))-g(t,s,Y(s),\wt Z(t,s))=[Z(t,s)-\wt Z(t,s)]\wt\th(t,s),
\ee
and
\bel{thm-bsvie-1-d-exist-unique-21}
\cl W (t;s)\deq W(s)-\int_0^s\wt\th(t,r)dr,\q s\in[0,T]
\ee
is a Brownian motion on $[0,T]$ under the corresponding equivalent probability measure $\cl\dbP_t$. The corresponding expectation is denoted by $\dbE^{\bar\dbP_t}$. Combining \rf{thm-bsvie-1-d-exist-unique-19}--\rf{thm-bsvie-1-d-exist-unique-21}, we have
\bel{thm-bsvie-1-d-exist-unique-22}
\psi^{T-\e}(t)-\wt\psi^{T-\e}(t)=-\int_{T-\e}^T\[Z(t,s)-\wt Z(t,s)\]d\cl W(t;s),\q t\in[0,T-\e].
\ee
Taking the conditional expectation $\dbE^{\bar\dbP_t}_{T-\e}[\,\cd\,]\equiv\dbE^{\bar\dbP_t}[\,\cd\,|\,
\cF_{T-\e}]$ on both sides of equation \rf{thm-bsvie-1-d-exist-unique-22}, we have
\bel{thm-bsvie-1-d-exist-unique-23}
\dbE^{\bar\dbP_t}_{T-\e}\[\psi^{T-\e}(t)-\wt\psi^{T-\e}(t)\]
=0, \q t\in[0,T-\e].
\ee
Note that $\psi^{T-\e}(t)$ is $\cF_{T-\e}$-measurable for any $t\in[0,T-\e]$. It follows that
\bel{thm-bsvie-1-d-exist-unique-24}
\psi^{T-\e}(t)=\wt\psi^{T-\e}(t),\q\as,~t\in[0,T-\e].
\ee
By \rf{thm-bsvie-1-d-exist-unique-22}--\rf{thm-bsvie-1-d-exist-unique-24}, we have
\bel{thm-bsvie-1-d-exist-unique-25}
\int_{T-\e}^T\left[Z(t,s)-\wt Z(t,s)\right]d\cl W(t;s)=0,\q t\in[0,T-\e],
\ee
which implies
\bel{thm-bsvie-1-d-exist-unique-26}
Z(t,s)=\wt Z(t,s),\q\as,~(t,s)\in[0,T-\e]\times[T-\e,T].
\ee
Combining \rf{thm-bsvie-1-d-exist-unique-24}--\rf{thm-bsvie-1-d-exist-unique-26}, SFIE \rf{psi(T-e)} admits a unique solution.
\ss
{\bf Step 3:} {\it Complete the proof by induction.}
\ms
Combining Steps 1 and 2, we have uniquely determined
\bel{thm-bsvie-1-d-exist-unique-27}\left\{\begin{aligned}
Y(t),\q &\q t\in[T-\e,T],\\
Z(t,s),&\q (t,s)\in \D[T-\e,T]\bigcup\([0,T-\e]\times[T-\e,T]\).
\end{aligned}\right.\ee
Now, we consider BSVIE \rf{BSVIE(0,T-e)} on $[0,T-\e]$.
By \rf{thm-bsvie-1-d-exist-unique-18}, we see that the above procedure can be repeated. We point out that $\a(\cd)$ is introduced to uniformly control the terminal states $\psi^{T-\e}(\cd)$, $\psi^{T-2\e}(\cd)$, etc. We can then use induction to finish the proof of the existence and uniqueness of the adapted solution to BSVIE \rf{bsvie-1-d}.$\hfill\qed$
\ms
\begin{remark}\rm When the terminal condition $\psi(\cd)$ is bounded, the well-posedness of QBSVIE \rf{bsvie-1-d} is established by \autoref{thm-bsvie-1-d-exist-unique}. If $\psi(\cd)$ is unbounded, its unboundedness brings some essential difficulties in establishing the solvability of QBSVIE \rf{bsvie-1-d}, which we are currently unable to overcome. We hope to return to this in future publications.
\end{remark}
We now establish better regularity for the adapted solution to BSVIEs under the additional condition \ref{A3}.
\begin{theorem}\label{thm-bsvie-exist-unique-c} \sl
Let {\rm\ref{A2}}--{\rm\ref{A3}} hold. Then for any $\psi(\cd)\in L^\i_{\cF_T}(\Om;C^U[0,T])$, BSVIE \rf{bsvie-1-d} admits a unique adapted solution $(Y(\cd),Z(\cd,\cd))\in L_\dbF^\i(\Om;C[0,T])\times\cl{\rm BMO}(\D[0,T])$.
\end{theorem}
\begin{proof} Without loss of generality, let us assume that
$$|\psi(t')-\psi(t)|\les\rho(|t-t^\prime|),\q \forall~t,t'\in[0,T],$$
with the same modulus of continuity $\rho(\cd)$ given in \ref{A3}.
\ms
By \autoref{thm-bsvie-1-d-exist-unique}, BSVIE \rf{bsvie-1-d} admits a unique adapted solution $(Y(\cd),Z(\cd,\cd))\in L_\dbF^\i(0,T)\times \cl{\rm BMO}(\D[0,T])$. We just need to prove that $Y(\cd)\in L_\dbF^\i(\Om;C[0,T])$, i.e., $Y(\cd)$ is continuous. Consider the following family of BSDEs (parameterized by $t\in[0,T]$):
\bel{thm-a-con-1}
\eta(t,s)=\psi(t)+\int_s^Tg(t,r,Y(r),\z(t,r))dr-\int_s^T\z(t,r)dW(r),
\q s\in[0,T].
\ee
By \autoref{lemma-briand-hu}, for any $t\in[0,T]$, BSDE \rf{thm-a-con-1} admits a unique adapted solution $(\eta(t,\cd),\z(t,\cd))\in L_\dbF^\i(\Om;C[0,T])\times\cl{\rm BMO}(0,T)$.
By \autoref{thm-bsvie-1-d-exist-unique}, we have $Y(t)=\eta(t,t)$, $Z(t,s)=\z(t,s)$ for any $(t,s)\in\D[0,T]$.
Now, let $0\les t<t'\les T$. Similar to \rf{pro-l-d-Gamma-e-c-5-2}, \rf{pro-l-d-Gamma-e-c-5-3}, \rf{pro-l-d-Gamma-e-c-7}, and \rf{pro-l-d-Gamma-e-c-8}, there is a process $\th(t,t';\cd)$ such that
\bel{thm-a-con-2}
g(t',s,Y(s),\z(t,s))-g(t',s,Y(s),\z(t',s))=[\z(t,s)-\z(t',s)]\th(t,t';s),
\ee
and
\bel{thm-a-con-3}
W (t,t^\prime;s)\deq W(s)-\int_0^s\th(t,t^\prime;r)dr,\q s\in[0,T]
\ee
is a Brownian motion on $[0,T]$ under the corresponding equivalent probability measure $\dbP_{t,t'}$.
The corresponding expectation is denoted by $\dbE^{\dbP_{t,t'}}$.
Combining \rf{thm-a-con-1}, \rf{thm-a-con-2}, and \rf{thm-a-con-3}, we have
\begin{align*}
\eta(t,s)-\eta(t^\prime,s)&=\psi(t)-\psi(t^\prime)-\int_s^T[\z(t,r)-\z(t^\prime,r)]dW(t,t^\prime;r)\\
&{\hp=\ } +\int_s^T[g(t,r,Y(r),\z(t,r))-g(t^\prime,r,Y(r),\z(t,r))]dr.
\end{align*}
Taking conditional expectation $\dbE_s^{\dbP_{t,t'}}[\,\cd\,]\equiv\dbE_s^{\dbP_{t,t'}}[\,\cd\,|\cF_s]$ on the both sides of the above equation, we have
\begin{align*}
\eta(t,s)-\eta(t',s)&=\dbE_s^{\dbP_{t,t'}}\[\psi(t)-
\psi(t')+\int_s^T\(g(t,r,Y(r),\z(t,r))
-g(t',r,Y(r),\z(t,r))\)dr\].
\end{align*}
Combining this with {\rm\ref{A3}}, by \autoref{lemma-BMO}, we have
\begin{align*}
|\eta(t,s)-\eta(t',s)|&\les\dbE_s^{\dbP_{t,t'}}\[|\psi(t)
-\psi(t')|+\int_s^T|g(t,r,Y(r),
\z(t,r))-g(t',r,Y(r),\z(t,r))|dr\]\\
&\les \rho(|t-t^\prime|)+\rho(|t-t^\prime|)\dbE_s^{\dbP_{t,t'}}\[\int_s^T(1+|Y(r)|+|\z(t,r)|)dr\]\\
& \les C (1+\|Y(\cd)\|_{L_{\dbF}^\i(0,T)}) \rho(|t-t^\prime|)+C\rho(|t-t^\prime|)\dbE_s^{\dbP_{t,t'}}\[\int_s^T|\z(t,r)|^2dr\]\\
& \les C (1+\|Y(\cd)\|_{L_{\dbF}^\i(0,T)}) \rho(|t-t^\prime|)+C\rho(|t-t^\prime|)\|\z(t,\cd)\|^2_{\cl{\rm BMO}_{\dbP_{t,t'}}(t,T)}\\
& \les C (1+\|Y(\cd)\|_{L_{\dbF}^\i(0,T)}) \rho(|t-t^\prime|)+C\rho(|t-t^\prime|)\|\z(t,\cd)\|^2_{\cl{\rm BMO}_{\dbP}(t,T)}\\
& \les C (1+\|Y(\cd)\|_{L_{\dbF}^\i(0,T)}+\|\z(\cd,\cd)\|^2_{\cl{\rm BMO}_{\dbP}(\D[0,T])}) \rho(|t-t^\prime|)\\
&= C (1+\|Y(\cd)\|_{L_{\dbF}^\i(0,T)}+\|Z(\cd,\cd)\|^2_{\cl{\rm BMO}_{\dbP}(\D[0,T])}) \rho(|t-t^\prime|),
\end{align*}
where $C>0$ is a generic constant (which could be different from line to line).
This leads to
$$\lim_{|t-t'|\to0}\[\sup_{s\in[0,T]}|\eta(t,s)-\eta(t',s)|\]=0,~\as $$
On the other hand, since $\eta(t,\cd)\in L_\dbF^\i(\Om;C[0,T])$ for any $t\in[0,T]$, one has
\bel{state}\lim_{|s-s'|\to 0}|\eta(t,s)-\eta(t,s')|=0,\q\forall t\in[0,T],~\as\ee
Combining the above two limits with the triangle inequality, we conclude that $(t,s)\mapsto\eta(t,s)$ is jointly continuous, i.e.,
$$\lim_{(t',s')\to(t,s)}|\eta(t',s')-\eta(t,s)|=0,\q\forall(t,s)\in[0,T]^2,~\as$$
Consequently, $t\mapsto\eta(t,t)=Y(t)$ is continuous.
\end{proof}
\section{Adapted M-solution to Type-II QBSVIE}\label{II-BSVIE}
We now consider the following one-dimensional Type-II QBSVIE:
\bel{bsvie-m} Y(t)=\psi(t)+\int_t^Tg(t,s,Y(s),Z(t,s),Z(s,t))ds-\int_t^T Z(t,s)dW(s),\qq t\in[0,T]. \ee
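Recall that an adapted M-solution of \rf{bsvie-m} is an adapted solution $(Y(\cd),Z(\cd,\cd))$ which, in addition, satisfies the martingale representation property
$$Y(t)=\dbE[Y(t)]+\int_0^tZ(t,s)dW(s),\q t\in[0,T],$$
so that $Z(t,s)$ is determined also for $0\les s<t\les T$; this is the representation used repeatedly in the proof below.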
Since $Z(s,t)$ appears in the generator $g(\cd)$, we shall consider the adapted M-solution. Let us first introduce the following assumption:
\begin{taggedassumption}{(A4)}\label{A4} \rm
Let the generator $g:\D[0,T]\times\dbR\times\dbR\times\dbR\times \Om\to\dbR$ be $\cB(\D[0,T]\times\dbR\times\dbR\times\dbR)\otimes\cF_T$-measurable
such that $s\mapsto g(t,s,y,z,z')$ is $\dbF$-progressively measurable on $[t,T]$ for all $(t,y,z,z')\in [0,T]\times\dbR\times\dbR\times\dbR$.
There exist two constants $L,\g>0$ such that:
\begin{align*}
&|g(t,s,y,z,z')|\les L(1+|y|)+{\g\over 2}|z|^2,\q\forall (t,s,y,z,z')\in\D[0,T]\times\dbR\times\dbR\times\dbR;\\
&|g(t,s,y_1,z_1,z_1')-g(t,s,y_2,z_2,z_2')|\les L\(|y_1-y_2|+(1+|z_1|+|z_2|)|z_1-z_2|+|z_1'-z_2'|\),\\
&\qq\qq\qq\qq\qq\qq\qq\qq\qq~\forall (t,s,y_i,z_i,z_i')\in\D[0,T]\times\dbR\times\dbR\times\dbR,~ i=1,2.
\end{align*}
\end{taggedassumption}
Note that in \ref{A4}, we have assumed that $z'\mapsto g(t,s,y,z,z')$ is bounded. This allows us to use the results for Type-I QBSVIEs; therefore, the following result can be regarded as a byproduct of the results for Type-I QBSVIEs from the previous section. The case in which $z'\mapsto g(t,s,y,z,z')$ is allowed to be unbounded seems to be more difficult and might be treated in our future investigations. Now, here is the main result of this section.
\begin{theorem}\label{thm-bsvie-m-exist-unique} \sl
Let {\rm\ref{A4}} hold. Then for any $\psi(\cd)\in L^\i_{\cF_T}(0,T)$, Type-II QBSVIE \rf{bsvie-m} admits a unique adapted M-solution $(Y(\cd),Z(\cd,\cd))\in\cM^2[0,T]\bigcap \lt(L^\i_{\dbF}(0,T)\times \cl{\rm BMO}(\D[0,T])\rt)$.
\end{theorem}
\begin{proof}
For any $(y(\cd),z(\cd,\cd))\in \cM^2[0,T]$, consider the following BSVIE:
\bel{bsvie-m-uv}
Y(t)=\psi(t)+\int_t^Tg(t,s,Y(s),Z(t,s),z(s,t))ds-\int_t^TZ(t,s)dW(s),\q t\in[0,T].
\ee
In light of {\rm\ref{A4}}, by \autoref{thm-bsvie-1-d-exist-unique},
BSVIE \rf{bsvie-m-uv} admits a unique adapted solution $(Y(\cd),Z(\cd,\cd))\in L_\dbF^\i(0,T)\times\cl{\rm BMO}(\D[0,T])$.
Determine $Z(s,t)$, $(t,s)\in\D[0,T]$, by the martingale representation theorem, i.e.,
$$Y(s)=\dbE[Y(s)]+\int_0^sZ(s,t)dW(t),\q s\in[0,T].$$
This means that BSVIE \rf{bsvie-m-uv} admits a unique adapted M-solution $(Y(\cd),Z(\cd,\cd))\in\cM^2[0,T]$.
Thus the map
\bel{m-gamma}
\wt\G(y(\cd),z(\cd,\cd))\deq(Y(\cd),Z(\cd,\cd)),\qq (y(\cd),z(\cd,\cd))\in \cM^2[0,T]
\ee
is well-defined.
In order to prove BSVIE \rf{bsvie-m} admits a unique adapted M-solution, we need to prove that $\wt\G(\cd\,,\cd)$ has a fixed point in $\cM^2[0,T]$. The proof is divided into two steps.
\ms
{\bf Step 1.} There is an $\e>0$ such that $\wt\G(\cd,\cd)$ is a contraction on $\cM^2[T-\e,T]$ and hence BSVIE \rf{bsvie-m} admits a unique adapted M-solution on $[T-\e,T]$.
\ms
For any $(y(\cd),z(\cd,\cd)),(\wt y(\cd),\wt z(\cd,\cd))\in\cM^2[T-\e,T]$, with $\e>0$ to be determined, set
\bel{m-gamma-yz}
(Y(\cd),Z(\cd,\cd))=\wt\G(y(\cd),z(\cd,\cd)),\q(\wt Y(\cd),\wt Z(\cd\,,\cd))=\wt\G(\wt y(\cd),\wt z(\cd,\cd));
\ee
that is, for $t\in[T-\e,T]$,
\begin{align}
\label{bsvie-m-gamma-yz1} Y(t)&=\psi(t)+\int_t^Tg(t,s,Y(s),Z(t,s),z(s,t))ds-\int_t^TZ(t,s)dW(s),\\
\label{bsvie-m-gamma-yz2} \wt Y(t)&=\psi(t)+\int_t^Tg(t,s,\wt Y(s),\wt Z(t,s),\wt z(s,t) )ds-\int_t^T\wt Z(t,s)dW(s),
\end{align}
and
\begin{align}
\label{bsvie-m-gamma-m1}Y(s)&=\dbE[Y(s)|\cF_{T-\e}]+\int_{T-\e}^s Z(s,t)dW(t),~s\in[T-\e,T],\\
\label{bsvie-m-gamma-m2} \wt Y(s)&=\dbE[\wt Y(s)|\cF_{T-\e}]+\int_{T-\e}^s \wt Z(s,t)dW(t),~s\in[T-\e,T].
\end{align}
Similar to \autoref{le-l-d-Gamma-e-b}, noting that $z'\mapsto g(t,s,y,z,z')$ is bounded, there is an $\e>0$ such that $\wt\G(y(\cd),z(\cd\,,\cd))\in\cB_\e$ for any $(y(\cd),z(\cd\,,\cd))\in\cM^2[T-\e,T]$, where $\cB_\e$ is defined by \rf{1-d-b-e}.
Thus, we have
\bel{(Y,Z),(Y,Z)}
(Y(\cd),Z(\cd\,,\cd)),~(\wt Y(\cd),\wt Z(\cd\,,\cd))\in\cB_\e.
\ee
By {\rm\ref{A4}}, for any $t\in[T-\e,T]$, there is a process $\th(t,\cd)$ such that:
\begin{align}
\label{th-m-Gamma-beta1} & \th(t,s)=0, \q t\in[T-\e,T],~s
\in[0,t],\\
\label{th-m-Gamma-beta2} & |\th(t,s)|\les L(1+|Z(t,s)|+|\wt Z (t,s)|),\q(t,s)\in\D[T-\e,T],\\
\nn & g(t,s,\wt Y(s),Z(t,s),\wt z(s,t))-g(t,s,\wt Y(s),\wt Z(t,s),\wt z(s,t))\\
\label{th-m-Gamma-beta3} &~=[Z(t,s)-\wt Z(t,s)]\th(t,s),~\q(t,s)\in\D[T-\e,T].
\end{align}
Similar to \rf{pro-1-d-Gamma-e-c-6}, we have
\bel{th-m-beta-bmo}
\|\th(\cd,\cd)\|^2_{\cl{\rm BMO}(\D[T-\e,T])}\les3L^2T+6L^2A.
\ee
For almost all $t\in[T-\e,T]$, by \autoref{lemma-Girsanov}, $W(t;\cdot)$ defined by
\bel{th-m-w}
W(t;s)\deq W(s)-\int_0^s\th(t,r)dr,\q s\in[0,T]
\ee
is a Brownian motion on $[0,T]$ under the equivalent probability measure $\cl{\dbP}_t$,
which is defined by
\bel{th-m-p}
d\cl{\dbP}_t\deq\cE\{\th(t,\cd)\}_{\1n_T}d\dbP.
\ee
The corresponding expectation is denoted by $\dbE^{\bar\dbP_t}$.
Combining \rf{bsvie-m-gamma-yz1}--\rf{bsvie-m-gamma-yz2} and \rf{th-m-Gamma-beta3}--\rf{th-m-w}, we have
\begin{align}
\nn & Y(t)-\wt Y(t)+\int_t^T [Z(t,s)-\wt Z(t,s)]dW(t;s)\\
\label{th-m-yz-tiyz}&~ =\int_t^T\left[g(t,s,Y(s),Z(t,s),z(s,t))-g(t,s,\wt Y(s),Z(t,s),\wt z(s,t))\right]ds.
\end{align}
Squaring both sides and then taking the conditional expectation $\dbE_t^{\bar\dbP_t}[\,\cd\,]=\dbE^{\bar\dbP_t}[\,\cd\,|\,\cF_t]$, we have
\begin{align}
\nn & |Y(t)-\wt Y(t)|^2+\dbE_t^{\bar\dbP_t}\[\int_t^T|Z(t,s)-\wt Z(t,s)|^2ds\]\\
\nn &~=\dbE_t^{\bar\dbP_t}\[\int_t^T\(g(t,s,Y(s),Z(t,s),z(s,t))
-g(t,s,\wt Y(s),Z(t,s),\wt z(s,t))\)ds\]^2\\
\label{th-m-yz-tiyz-cond}&~\les L^2\dbE_t^{\bar\dbP_t}\[\int_t^T\(|Y(s)-\wt Y(s)|+|z(s,t)-\wt z(s,t)|\)ds\]^2.
\end{align}
By $(Y(\cd),Z(\cd,\cd)),(\wt Y(\cd),\wt Z(\cd,\cd))\in\cB_\e$ and \autoref{lemma-BMO}, there is a constant $C>0$ (which depends on $\|\psi(\cd)\|_\i$ but not on $t$) such that
\begin{align}
\nn &|Y(t)-\wt Y(t)|^2+\dbE_t\[\int_t^T|Z(t,s)-\wt Z(t,s)|^2ds\]\\
\nn&~\les C\dbE_t\[\int_t^T\(|Y(s)-\wt Y(s)|+|z(s,t)-\wt z(s,t)|\)ds\]^2\\
\label{th-m-yz-tiyz-contra}&~\les C(T-t)\dbE_t\[\int_t^T\(|Y(s)-\wt Y(s)|^2+|z(s,t)-\wt z(s,t)|^2\)ds\].
\end{align}
Thus, integrating the above on $[T-\e,T]$, we obtain
\begin{align}
\nn &\dbE\int_{T-\e}^T|Y(t)-\wt Y(t)|^2dt+\dbE\int_{T-\e}^T\int_t^T|Z(t,s)-\wt Z(t,s)|^2dsdt\\
\label{th-m-yz-tiyz-in-contra}&~\les C\e\dbE\int_{T-\e}^T\int_t^T\left[|Y(s)-\wt Y(s)|^2+|z(s,t)-\wt z(s,t)|^2\right]dsdt,
\end{align}
with a possibly different constant $C>0$. By the variation of constants formula, we obtain
\begin{align}
\nn &\dbE\int_{T-\e}^T|Y(t)-\wt Y(t)|^2dt+\dbE\int_{T-\e}^T\int_t^T|Z(t,s)-\wt Z(t,s)|^2dsdt\\
\label{th-m-yz-tiyz-in-contra1}&~\les C\e\dbE\int_{T-\e}^T\int_t^T |z(s,t)-\wt z(s,t)|^2dsdt\les C\e\dbE\int_{T-\e}^T|y(t)-\wt y(t)|^2dt.
\end{align}
The constant $C$ appearing above is generic (it depends only on the constants $L$, $\g$, $T$, and $\|\psi(\cd)\|_\i$, and is independent of $\e>0$). Therefore, when $\e$ is small enough, $\wt\G(\cd,\cd)$ is a contraction on $\cM^2[T-\e,T]$. Consequently, BSVIE \rf{bsvie-m} admits a unique adapted M-solution on $[T-\e,T]$.
Further, by \rf{(Y,Z),(Y,Z)}, the unique adapted M-solution $(Y(\cd),Z(\cd,\cd))$ also belongs to $L^\i_{\dbF}(T-\e,T)\times \cl{\rm BMO}(\D[T-\e,T])$.
\ms
\vskip-1cm
\setlength{\unitlength}{.01in}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\begin{picture}(230,240)
\put(0,0){\vector(1,0){170}}
\put(0,0){\vector(0,1){170}}
\put(110,0){\line(0,1){150}}
\put(150,0){\line(0,1){150}}
\put(0,110){\line(1,0){150}}
\put(0,150){\line(1,0){150}}
\thicklines
\put(0,0){\color{red}\line(1,1){150}}
\put(122,137){\makebox(0,0){$\textcircled{\small 1}$}}
\put(55,130){\makebox(0,0){$\textcircled{\small 2}$}}
\put(135,120){\makebox(0,0){$\textcircled{\small 3}$}}
\put(130,55){\makebox(0,0){$\textcircled{\small 4}$}}
\put(-10,150){\makebox(0,0)[b]{$\scriptstyle T$}}
\put(150,-12){\makebox(0,0)[b]{$\scriptstyle T$}}
\put(-15,105){\makebox(0,0)[b]{$\scriptstyle T-\e$}}
\put(105,-12){\makebox(0,0)[b]{$\scriptstyle T-\e$}}
\put(180,-5){\makebox{$t$}}
\put(0,180){\makebox{$s$}}
\put(35,80){\makebox(0,0){$\scriptstyle\D[0,T-\e]$}}
\put(75,35){\makebox(0,0){$\scriptstyle\D^*[0,T-\e]$}}
\end{picture}
\bs
\centerline{(Figure 2)}
\bs
The above procedure determined $Y(t)$ for $t\in[T-\e,T]$, and determined
$Z(t,s)$ for $(t,s)\in\D[T-\e,T]$ (the region marked $\textcircled{\small 1}$ in the above figure) by using Type-I BSVIEs, and for $(t,s)\in\D^*[T-\e,T]$ (the region marked $\textcircled{\small 3}$ in the above figure) by using the martingale representation theorem.
\ms
{\bf Step 2.} BSVIE \rf{bsvie-m} admits a unique adapted M-solution on $[0,T]$.
\ms
By Step 1, BSVIE \rf{bsvie-m} admits a unique adapted M-solution on $[T-\e,T]$. For almost every $s\in[T-\e,T]$, we have $\dbE_{T-\e}[Y(s)]\in L^2_{\cF_{T-\e}}(\Om)$; thus, by the martingale representation theorem, there is a unique $Z(\cd,\cd)\in L^2(T-\e,T;L^2_\dbF(0,T-\e))$ such that:
\bel{thm-m-z'}
\dbE_{T-\e}[Y(s)]=\dbE[Y(s)]+\int_0^{T-\e}Z(s,t)dW(t),\q s\in[T-\e,T].
\ee
Hence, we have uniquely determined $(Y(t),Z(t,s))$ for $(t,s)\in[T-\e,T]\times[0,T]$ (the region marked $\textcircled{\small 1}$, $\textcircled{\small 3}$ and $\textcircled{\small 4}$) and the following is well-defined:
\bel{thm-m-gs}
g^{T-\e}(t,s,z)=g(t,s,Y(s),z,Z(s,t)),\q (t,s)\in[0,T-\e]\times[T-\e,T].
\ee
Note that $[0,T-\e]\times[T-\e,T]$ is the region marked $\textcircled{\small 2}$ in the above Figure 2. Now, consider the following SFIE:
\bel{thm-m-sfie}
\psi^{T-\e}(t)=\psi(t)+\int_{T-\e}^Tg^{T-\e}(t,s,Z(t,s))ds -\int_{T-\e}^TZ(t,s)dW(s), \q t\in[0,T-\e].
\ee
Similar to the Step 2 of the proof of \autoref{thm-bsvie-1-d-exist-unique}, SFIE \rf{thm-m-sfie} admits a unique solution $(\psi^{T-\e}(\cd),Z(\cd,\cd))$
on $[0,T-\e]\times[T-\e,T]$ and the following estimate holds:
\bel{them-m-psi-l}
|\psi^{T-\e}(t)|^2\les\a(0),\q t\in[0,T-\e],
\ee
where $\a(\cd)$ solves an equation similar to \rf{thm-bsvie-1-d-exist-unique-2}. The above has uniquely determined
\bel{thm-m-y-z-psi}\left\{\begin{aligned}
Y(t), &\q t\in[T-\e,T],\\
Z(t,s),&\q (t,s)\in \([T-\e,T]\times[0,T]\)\bigcup\([0,T-\e]\times[T-\e,T]\).
\end{aligned}\right.\ee
Now, we consider
\bel{thm-m-bsvie-[T-2e,T-e]}
Y(t)=\psi^{T-\e}(t)+\int_t^{T-\e}g(t,s,Y(s),Z(t,s),Z(s,t))ds-\int_t^{T-\e}Z(t,s)dW(s)
\ee
on $[0,T-\e]$. Since $\psi^{T-\e}(\cd)\in L^\i_{\cF_{T-\e}}(0,T-\e)$, \rf{thm-m-bsvie-[T-2e,T-e]} is a BSVIE on $[0,T-\e]$.
The above procedure can then be repeated. Since the step-length $\e>0$ can be kept fixed, an induction argument completes the proof.
\end{proof}
\section{A Comparison Theorem for Type-I BSVIEs}\label{Comparison-thm}
Consider the following BSVIEs: For $i=1,2$,\rm
\bel{bsvie-1-d-comparison}
Y^i(t)=\psi^i(t)+\int_t^T g^i(t,s,Y^i(s),Z^i(t,s))ds-\int_t^TZ^i(t,s)dW(s),\q t\in[0,T].
\ee
We assume that the generators $g^i(\cd)$, $i=1,2$ of BSVIEs \rf{bsvie-1-d-comparison} satisfy \ref{A2}. Then by \autoref{thm-bsvie-1-d-exist-unique}, BSVIE \rf{bsvie-1-d-comparison} admits a unique adapted solution
$(Y^i(\cd),Z^i(\cd,\cd))\in L^\i_{\dbF}(0,T)\times\cl{\rm BMO}(\D[0,T])$ for any $\psi^i(\cd)\in L^\i_{\cF_T}(0,T)$.
In order to study the comparison theorem of the solutions to BSVIE \rf{bsvie-1-d-comparison}, we introduce the following BSVIE:
\bel{bsvie-1-d-comparison-bar}
\bar Y(t)=\bar\psi(t)+\int_t^T \bar g(t,s,\bar Y(s),\bar Z(t,s))ds-\int_t^T\bar Z(t,s)dW(s),\q t\in[0,T],
\ee
where the generator $\bar g(\cd)$ also satisfies \ref{A2}.
Further, we adopt the following assumption.
\begin{taggedassumption}{(C)}\label{C}\rm
Let the generator $\bar g:\D[0,T]\times\dbR\times\dbR\times\Om\to\dbR$ satisfy that $y\mapsto \bar{g}(t,s,y,z)$ is nondecreasing for any $(t,s,z)\in\D[0,T]\times\dbR$.
\end{taggedassumption}
We present the comparison theorem for BSVIE \rf{bsvie-1-d-comparison} now.
\begin{theorem}\label{thm-comparison} \sl
Let $g^1(\cd),g^2(\cd)$ and $\bar g(\cd)$ satisfy {\rm\ref{A2}} and let $\bar g(\cd)$ satisfy {\rm\ref{C}}. Suppose
\bel{g<g}g^1(t,s,y,z)\les\bar g(t,s,y,z)\les g^2(t,s,y,z),\q\forall (y,z)\in\dbR\times\dbR,~\as,~\ae~(t,s)\in\D[0,T].\ee
Then for any $\psi^1(\cd),\psi^2(\cd)\in L^\i_{\cF_T}(0,T)$ satisfying
\bel{thm-comparison-c2}
\psi^1(t)\les\psi^2(t),~\as,~\ae~t\in[0,T],
\ee
the corresponding unique adapted solutions $(Y^i(\cd),Z^i(\cd,\cd))$, $i=1,2$ of BSVIEs \rf{bsvie-1-d-comparison} satisfy
\bel{thm-comparison-c3}
Y^1(t)\les Y^2(t),\q\as,~\ae~t\in[0,T].
\ee
If, in addition, the generators $g^1(\cd)$, $g^2(\cd)$ and $\bar g(\cd)$ satisfy {\rm\ref{A3}}, and
\begin{align}
g^1(t,s,y,z)\les\bar g(t,s,y,z)\les g^2(t,s,y,z),\qq\forall (t,y,z)\in[0,T]\times\dbR\times\dbR,~\as,~\ae~s\in[0,T],
\end{align}
then for any $\psi^1(\cd),\psi^2(\cd)\in L^\i_{\cF_T}(\Om;C^U[0,T])$ satisfying
\bel{thm-comparison-c-c2}
\psi^1(t)\les\psi^2(t),\q t\in[0,T],~\as,
\ee
the corresponding unique adapted solutions $(Y^i(\cd),Z^i(\cd,\cd))$, $i=1,2$ of BSVIEs \rf{bsvie-1-d-comparison} satisfy
\bel{thm-comparison-c-c3}
Y^1(t)\les Y^2(t),\q t\in[0,T],\q\as
\ee
\end{theorem}
\begin{proof} Let $\bar\psi(\cd)\in L^\i_{\cF_T}(0,T)$ be such that
\bel{thm-comparison-p1}
\psi^1(t)\les\bar\psi(t)\les\psi^2(t),\q\as,~\ae~t\in[0,T].
\ee
Without loss of generality, let
\bel{thm-comparison-psiL}
\|\psi(\cd)\|_\i\les L,
\ee
where $\psi(\cd)=\psi^1(\cd),\psi^2(\cd),\bar\psi(\cd)$.
By \autoref{thm-bsvie-1-d-exist-unique}, BSVIE \rf{bsvie-1-d-comparison} admits a unique adapted solution
$(Y^1(\cd), Z^1(\cd,\cd))\in L^\i_{\dbF}(0,T)\times\cl{\rm BMO}(\D[0,T])$ for $i=1$. Set $\wt Y_0(\cd)= Y^1(\cd)$ and consider
\bel{thm-comparison-p2}
\wt Y_1(t)=\bar\psi(t)+\int_t^T \bar g(t,s,\wt Y_0(s),\wt Z_1(t,s))ds-\int_t^T \wt Z_1(t,s)dW(s),\q t\in[0,T].
\ee
By \autoref{thm-exist-unique-no-y}, there is a unique adapted solution $(\wt Y_1(\cd),\wt Z_1(\cd,\cd))\in L^\i_{\dbF}(0,T)\times\cl{\rm BMO}(\D[0,T])$ to the above BSVIE. By \rf{g<g}, we have
\bel{thm-comparison-p3}
g^1(t,s,\wt Y_0(s),z)\les\bar g(t,s,\wt Y_0(s),z),\q\forall z\in\dbR,~\as,~\ae~(t,s)\in\D[0,T].
\ee
Combining this and \rf{thm-comparison-p1}, by \autoref{thm-comparison-no-y}, for almost all $t\in[0,T]$, there exists a measurable set $\Om_t^1\subseteq\Om$ satisfying $\dbP(\Om_t^1)=0$ such that
\bel{thm-comparison-p4}
\wt Y_0(t)=Y^1(t)\les\wt Y_1(t),~\om\in\Om\backslash\Om_t^1,~\ae~t\in[0,T].
\ee
Next, we consider the following BSVIE
\bel{thm-comparison-p5}
\wt Y_2(t)=\bar\psi(t)+\int_t^T \bar g(t,s,\wt Y_1(s),\wt Z_2(t,s))ds-\int_t^T \wt Z_2(t,s)dW(s),\q t\in[0,T].
\ee
Let $(\wt Y_2(\cd),\wt Z_2(\cd,\cd))$ be the unique solution to the above equation.
Since $y\mapsto\bar g(t,s,y,z)$ is nondecreasing, by \rf{thm-comparison-p4}, we have
\bel{thm-comparison-p6}
\bar g(t,s,\wt Y_0(s),z)\les \bar g(t,s,\wt Y_1(s),z),\q\forall z\in\dbR,~\as,~\ae~(t,s)\in\D[0,T].
\ee
Similar to the above, for almost every $t\in[0,T]$, there exists a measurable set $\Om_t^2\subseteq\Om$ satisfying $\dbP(\Om_t^2)=0$ such that
\bel{thm-comparison-p7}
\wt Y_1(t)\les \wt Y_2(t),~\om\in\Om\backslash\Om_t^2,~\ae~t\in[0,T].
\ee
By induction, we can construct a sequence $(\wt Y_k(\cd),\wt Z_k(\cd,\cd))$ and $\Om^k_t$ satisfying $\dbP(\Om^k_t)=0$ such that
\bel{thm-comparison-p8}
\wt Y_{k+1}(t)=\bar\psi(t)+\int_t^T \bar g(t,s,\wt Y_k(s),\wt Z_{k+1}(t,s))ds-\int_t^T \wt Z_{k+1}(t,s)dW(s),\q t\in[0,T],
\ee
and
\bel{thm-comparison-p9}
Y^1(t)=\wt Y_0(t)\les \wt Y_1(t)\les \wt Y_2(t)\les\cds,\q\om\in\Om\setminus\(\bigcup_{k\ges 1}\Om^k_t\),~\ae~t\in[0,T].
\ee
Note that $\dbP\big(\bigcup_{k\ges 1}\Om^k_t\big)=0$.
We may assume that
\bel{thm-compariosn-p26}
|\psi(t)|\les\a(0),\q t\in[0,T],
\ee
where $\psi(\cd)=\psi^1(\cd),\psi^2(\cd),\bar\psi(\cd)$ and $\a(\cd)$ solves an ODE of form \rf{thm-bsvie-1-d-exist-unique-2}.
By \autoref{pro-l-d-Gamma-e-c}, there is an $\e>0$ such that $\wt Y_k(\cd)$ is Cauchy in $L^\i_{\dbF}(T-\e,T)$ and
\bel{thm-comparison-p10}
\lim_{k\to\i}\|\wt Y_k(\cd)-\bar Y(\cd)\|_{L^\i_{\dbF}(T-\e,T)}=0.
\ee
Combining \rf{thm-comparison-p9} and \rf{thm-comparison-p10}, we have
\bel{thm-comparison-p11}
Y^1(t)\les\bar Y(t),\q \as,~\ae~t\in[T-\e,T].
\ee
Next, consider the following SFIEs:
\begin{align}
\label{thm-comparison-p12} \psi^{1,T-\e}(t)&=\psi^1(t)+\int_{T-\e}^Tg^1(t,s,Y^1(s),Z^1(t,s))ds-\int_{T-\e}^T Z^1(t,s)dW(s),\q t\in[0,T-\e];\\
\label{thm-comparison-p13}\bar\psi^{T-\e}(t)&=\bar\psi(t)+\int_{T-\e}^T\bar g(t,s,\bar Y(s),\bar Z(t,s))ds-\int_{T-\e}^T\bar Z(t,s)dW(s), \q t\in[0,T-\e].
\end{align}
Similar to Step 2 in \autoref{thm-bsvie-1-d-exist-unique},
the above SFIEs \rf{thm-comparison-p12} and \rf{thm-comparison-p13} admit unique solutions
$(\psi^{1,T-\e}(\cd),Z^1(\cd,\cd))$, $(\bar\psi^{T-\e}(\cd),\bar Z(\cd,\cd))\in L^\i_{\cF_{T-\e}}(0,T-\e)\times \cl{\rm BMO}([0,T-\e]\times[T-\e,T])$, respectively.
Similar to \rf{thm-bsvie-1-d-exist-unique-18}, we have
\bel{thm-comparison-p14}
|\psi^{1,T-\e}(t)|\les\a(0),\q|\bar\psi^{T-\e}(t)|\les\a(0),\q t\in[0,T-\e].
\ee
For almost all $t\in[0,T-\e]$, similar to \rf{pro-l-d-Gamma-e-c-5-2}--\rf{pro-l-d-Gamma-e-c-5-3} and \rf{pro-l-d-Gamma-e-c-7}--\rf{pro-l-d-Gamma-e-c-8}, there is a process $\th(t,\cd)$ such that:
\bel{thm-comparison-p15}
g^1(t,s,Y^1(s),Z^1(t,s))-g^1(t,s,Y^1(s),\bar Z(t,s))=\big[Z^1(t,s)-\bar Z(t,s)\big]\th(t,s),
\ee
and
\bel{thm-comparison-p16}
W(t;s)\deq W(s)-\int_0^s\th(t,r)dr,\q s\in[0,T]
\ee
is a Brownian motion on $[0,T]$ under the corresponding equivalent probability measure $\bar\dbP_t$.
The corresponding expectation is denoted by $\dbE^{\bar\dbP_t}$.
Combining \rf{thm-comparison-p12}--\rf{thm-comparison-p13} and \rf{thm-comparison-p15}--\rf{thm-comparison-p16}, we have
\begin{align}
\nn& \psi^{1,T-\e}(t)-\bar\psi^{T-\e}(t)\\
\nn&~=\psi^1(t)-\bar\psi(t) +\int_{T-\e}^T\big[g^1(t,s,Y^1(s),\bar Z(t,s))-\bar g(t,s,\bar Y(s),\bar Z(t,s))\big]ds\\
\label{thm-comparison-p17}&~\hp{=\ } -\int_{T-\e}^T \big[Z^1(t,s)-\bar Z(t,s)\big]dW(t;s), \q t\in[0,T-\e].
\end{align}
Since $y\mapsto\bar g(t,s,y,z)$ is nondecreasing for any $(t,s,z)\in\D[0,T]\times\dbR$, by \rf{thm-comparison-p11}, we have
\bel{thm-comparison-p25}
\bar g(t,s, Y^1(s),z)\les\bar g(t,s,\bar Y(s),z),\q (t,s,z)\in[0,T]\times[T-\e,T]\times\dbR.
\ee
Taking the conditional expectation $\dbE^{\bar\dbP_t}_t[\,\cd\,]\equiv\dbE^{\bar\dbP_t}[\,\cd\,|\,\cF_t]$ on both sides of \rf{thm-comparison-p17}, by \rf{g<g}, \rf{thm-comparison-p25} and \rf{thm-comparison-p11}, we have
\begin{align}
\nn& \psi^{1,T-\e}(t)-\bar\psi^{T-\e}(t)\\
\nn&~=\dbE_t^{\bar\dbP_t}
\[\psi^1(t)-\bar\psi(t)+\int_{T-\e}^T\big[g^1(t,s,Y^1(s),\bar Z(t,s))-\bar g(t,s,\bar Y(s),\bar Z(t,s))\big]ds\]\\
\label{thm-comparison-p18}&~\les\dbE_t^{\bar\dbP_t}\[\psi^1(t)-\bar
\psi(t)+\int_{T-\e}^T\big[g^1(t,s,Y^1(s),
\bar Z(t,s))-\bar g(t,s,Y^1(s),\bar Z(t,s))\big]ds\]\\
\nn&~\les0,\q t\in[0,T-\e].
\end{align}
Now, we consider the following BSVIEs:
\begin{align}
\label{thm-comparison-p19} y^1(t)&=\psi^{1,T-\e}(t)+\int_t^{T-\e}g^1(t,s,y^1(s),z^1(t,s))ds-\int_t^{T-\e}z^1(t,s)dW(s),\q t\in [0,T-\e];\\
\label{thm-comparison-p20}\bar y(t)&=\bar \psi^{T-\e}(t)+\int_t^{T-\e}\bar g(t,s,\bar y(s),\bar z(t,s))ds-\int_t^{T-\e}\bar z(t,s)dW(s),\q t\in[0,T-\e].
\end{align}
By \autoref{thm-bsvie-1-d-exist-unique}, the above equations \rf{thm-comparison-p19}, \rf{thm-comparison-p20} admit unique solutions
$(y^1(\cd),z^1(\cd,\cd))$, $(\bar y(\cd),\bar z(\cd,\cd))\in L^\i_\dbF(0,T-\e)\times\cl{\rm BMO}(\D[0,T-\e])$, respectively.
By the Step 3 in the proof of \autoref{thm-bsvie-1-d-exist-unique}, we have
\begin{align}
y^1(t)=Y^1(t),~z^1(t,s)=Z^1(t,s),\q (t,s)\in\D[0,T-\e]; \\
\bar y(t)=\bar Y(t),~\bar z(t,s)=\bar Z(t,s),\q (t,s)\in\D[0,T-\e].
\end{align}
Hence, by induction, we have
\bel{thm-comparison-p22}
Y^1(t)\les\bar Y(t),\q\as,~\ae~t\in[0,T].
\ee
Similarly, we can prove that
\bel{thm-comparison-p23}
\bar Y(t)\les Y^2(t),\q\as,~\ae~t\in[0,T].
\ee
Thus, the inequality \rf{thm-comparison-c3} holds.
\ms
Next, by what we have proved, for every fixed $t\in[0,T]$,
\bel{thm-comparison-c-p1}
Y^1(t)\les Y^2(t),\q\as
\ee
Let $\{t_k\}_{k\ges 1}\subseteq[0,T]$ be an enumeration of the rational numbers in $[0,T]$. For any fixed $t_k$, by \rf{thm-comparison-c-p1}, there is an $\Om_{t_k}\subseteq\Om$ satisfying $\dbP(\Om_{t_k})=0$ such that:
\bel{thm-comparison-c-p2}
Y^1(t_k)\les Y^2(t_k),\qq\om\in\Om\backslash\Om_{t_k}.
\ee
Let $\wt\Om=\bigcup_{k\ges 1}\Om_{t_k}$; then $\dbP(\wt\Om)=0$.
By \rf{thm-comparison-c-p2}, we have
\bel{thm-comparison-c-p3}
Y^1(t)\les Y^2(t),\q t\in\{t_k\}_{k\ges1},~\om\in\Om\backslash\wt\Om.
\ee
By \autoref{thm-bsvie-exist-unique-c}, there is a $\bar\Om\subseteq\Om$ satisfying $\dbP(\bar\Om)=0$ such that $Y^i(\cd\,,\om)$, $i=1,2$, are continuous for any $\om\in\Om\backslash\bar\Om$.
For any fixed $\om\in\Om\backslash(\wt\Om\cup\bar\Om)$, by \rf{thm-comparison-c-p3}, we have
\bel{thm-comparison-c-p4}
Y^1(t,\om)\les Y^2(t,\om),\q t\in\{t_k\}_{k\ges 1}.
\ee
Since $Y^i(\cd,\om)$, $i=1,2$, are continuous on $[0,T]$ and $\{t_k\}_{k\ges 1}$ is dense in $[0,T]$, we have
\bel{thm-comparison-c-p5}
Y^1(t,\om)\les Y^2(t,\om),\q t\in[0,T].
\ee
Since $\dbP(\wt\Om\cup\bar\Om)=0$, we conclude
\bel{thm-comparison-c-p6}
Y^1(t)\les Y^2(t),\q t\in[0,T],~\as
\ee
This completes the proof. \end{proof}
\section{Continuous-Time Equilibrium Dynamic Risk Measures}\label{application}
We have seen the so-called equilibrium recursive utility process in the introduction, which serves as an important motivation for studying BSVIEs. In this section, we look at another closely related application of BSVIEs.
\ms
Static risk measures have been studied by many researchers; among them, we mention Artzner--Delbaen--Eber--Heath \cite{Artzner-Delbaen-Eber-Heath 1999} and F\"ollmer--Schied \cite{Follmer-Schied 2002}, and the references cited therein. For discrete-time dynamic risk measures, we mention Riedel \cite{Riedel 2004} and Detlefsen--Scandolo \cite{Detlefsen-Scandolo 2005}, and the references cited therein.
\ms
We now look at continuous-time dynamic risk measures. Any $\xi\in L^\i_{\cF_T}(\Om)$ represents the payoff of a certain European-type contingent claim at the maturity time $T$. Following El Karoui--Peng--Quenez \cite{El Karoui-Peng-Quenez 1997}, we introduce the following definition.
\begin{definition}\label{dynamic risk} \rm A map $\rho:[0,T]\times L^\i_{\cF_T}(\Om)\to\dbR$ is called a {\it dynamic risk measure} if the following are satisfied:
\begin{itemize}
\item[(i)] (Adaptiveness) For any $\xi\in L^\i_{\cF_T}(\Om)$, $t\mapsto\rho(t;\xi)$ is $\dbF$-adapted;
\item[(ii)] (Monotonicity) For any $\xi,\bar\xi\in L^\i_{\cF_T}(\Om)$ with $\xi\ges\bar\xi$, one has $\rho(t;\xi)\les\rho(t;\bar\xi)$,
for all $t\in[0,T]$;
\item[(iii)] (Translation Invariance) For any $\xi\in L^\i_{\cF_T}(\Om)$ and $c\in\dbR$, $\rho(t;\xi+c)=\rho(t;\xi)-c$.
\end{itemize}
Further, $\rho$ is said to be {\it convex} if the following holds:
\begin{itemize}
\item[(iv)] (Convexity): $\xi\mapsto\rho(t;\xi)$ is convex;
\end{itemize}
and $\rho$ is said to be {\it coherent} if the following are satisfied:
\begin{itemize}
\item[(v)] (Positive Homogeneity): For any $\xi\in L^\i_{\cF_T}(\Om)$ and $\l\ges0$, $\rho(t;\l\xi)=\l\rho(t;\xi)$;
\item[(vi)] (Subadditivity): For any $\xi,\bar\xi\in L^\i_{\cF_T}(\Om)$, $\rho(t;\xi+\bar\xi)\les\rho(t;\xi)+\rho(t;\bar\xi)$.
\end{itemize}
\end{definition}
Each item in the above definition admits a natural interpretation. For example, (ii) means that of two gains, the dominantly larger one carries a smaller risk; (vi) means that combining two investments does not increase the risk.
The following is a combination of the results from \cite{El Karoui-Peng-Quenez 1997} and \cite{Kobylanski 2000} (see also \cite{Briand-Hu 2006}, \cite{Briand-Hu 2008}, \cite{Briand-Richou 2017}).
\begin{proposition} \sl Let $g:[0,T]\times\dbR\to\dbR$ be measurable such that $z\mapsto g(t,z)$ is convex and grows at most quadratically. Then for any $\xi\in L^\i_{\cF_T}(\Om)$, the following BSDE:
\bel{BSDE*}Y(t)=-\xi+\int_t^Tg(s,Z(s))ds-\int_t^TZ(s)dW(s),\q t\in[0,T],\ee
admits a unique adapted solution $(Y(\cd),Z(\cd))\equiv (Y(\cd\,;\xi),Z(\cd\,;\xi))$. Let $\rho:[0,T]\times L^\i_{\cF_T}
(\Om)\to\dbR$ be defined by the following:
$$\rho(t,\xi)=Y(t;\xi),\q (t,\xi)\in[0,T]\times L^\i_{\cF_T}(\Om).$$
Then $\rho$ is a dynamic convex risk measure.
\end{proposition}
One of the most interesting examples is the following.
$$Y(t)=-\xi+\int_t^T{1\over2\g}|Z(s)|^2ds-\int_t^TZ(s)dW(s),\q t\in[0,T].$$
The above admits a unique adapted solution $(Y(\cd),Z(\cd))$, and
$$\rho(t,\xi)\equiv Y(t)=\g\ln\dbE\Big[e^{-{\xi\over\g}}\Bigm|\cF_t\Big]\deq e_{\g,t}(\xi),\q t\in[0,T],$$
is called a {\it dynamic entropic risk measure} for $\xi$.
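For the reader's convenience, we sketch the (standard) verification of this closed-form expression. Let $M(t)\deq\dbE\big[e^{-{\xi\over\g}}\big|\cF_t\big]$, which is a martingale; by the martingale representation theorem, $dM(t)=\varphi(t)dW(t)$ for some adapted process $\varphi(\cd)$. Applying It\^o's formula to $Y(t)=\g\ln M(t)$ and setting $Z(t)\deq\g\varphi(t)/M(t)$, we obtain
$$dY(t)=\g\,{dM(t)\over M(t)}-{\g\over2}\,{\varphi(t)^2\over M(t)^2}\,dt
=Z(t)dW(t)-{1\over2\g}|Z(t)|^2dt,\q t\in[0,T].$$
Integrating from $t$ to $T$ and using $Y(T)=-\xi$ recovers the BSDE above.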
\bs
Now, if we have an $\cF_T$-measurable wealth flow process $\psi(\cd)$ instead of just a terminal payoff $\xi$, then formally the corresponding dynamic risk should be measured via the following parameterized BSDE:
$$Y(t,r)=-\psi(t)+\int_r^Tg(s,Y(t,s),Z(t,s))ds-\int_r^TZ(t,s)dW(s), \q(r,t)\in\D[0,T],$$
and the current dynamic risk should be $Y(t,t)$. But, as in the introduction, simply taking $r=t$ in the above leads to the following:
$$Y(t,t)=-\psi(t)+\int_t^Tg(s,Y(t,s),Z(t,s))ds-\int_t^TZ(t,s)dW(s), \q t\in[0,T],$$
which is not a closed-form equation for the pair $(Y(t,t),Z(t,s))$ of processes. As we indicated in the introduction, $Y(t,r)$ above has a hidden {\it time-inconsistent} nature, whereas one expects a dynamic risk measure to be time-consistent: the value of the risk today (for a process $\psi(\cd)$) should match the one expected yesterday. Therefore, it is natural to use BSVIEs to describe/measure the dynamic risk of the process $\psi(\cd)$. We now make this precise.
\ms
We call $\psi(\cd)\in L^\i_{\cF_T}(0,T)$ a position process (a name borrowed from \cite{Riedel 2004}); $\psi(t)$ represents the total (nominal) value at the current time $t$ of a certain portfolio, which might combine some (say, European-type) contingent claims (maturing at time $T$, and thus usually only $\cF_T$-measurable), some current cash flows (such as dividends to be received and premia to be paid), and positions in stocks, mutual funds, bonds, and so on.
Thus, the position process $\psi(\cd)$ is merely $\cF_T$-measurable (not necessarily $\dbF$-adapted). Now, mimicking Definition \ref{dynamic risk}, we introduce the following.
\begin{definition}\label{dynamic-risk-measures}\rm
A map $\rho:[0,T]\times L^\i_{\cF_T}(0,T)\to L^\i_{\dbF}(0,T)$ is called an {\it equilibrium dynamic risk measure} if the following hold:
\ms
(i) (Past Independence) For any $\psi_1(\cd),\psi_2(\cd)\in L^\i_{\cF_T}(0,T)$, if
$$\psi_1(s)=\psi_2(s),\q\as,~\ae~s\in[t,T],$$
for some $t\in[0,T)$, then
$$\rho(t;\psi_1(\cd))=\rho(t;\psi_2(\cd)),\q\as$$
(ii) (Monotonicity) For any $\psi_1(\cd),\psi_2(\cd)\in L^\i_{\cF_T}(0,T)$, if
$$\psi_1(s)\les\psi_2(s), \q\as,~\ae~s\in[t,T],$$
for some $t\in[0,T)$, then
$$\rho(s;\psi_1(\cd))\ges\rho(s;\psi_2(\cd)),\q\as,~s\in[t,T].$$
(iii) (Translation Invariance) There exists a deterministic integrable function $r(\cd)$ such that for any $\psi(\cd)\in L^\i_{\cF_T}(0,T)$,
$$\rho(t;\psi(\cd)+c)=\rho(t;\psi(\cd))-ce^{\int_t^Tr(s)ds},\q\as,~t\in[0,T].$$
\no Further, $\rho$ is said to be {\it convex} if the following holds:
\ms
(iv) (Convexity) For any $\psi_1(\cd),\psi_2(\cd)\in L^\i_{\cF_T}(0,T)$ and $\l\in[0,1]$,
$$\rho(t;\l\psi_1(\cd)+(1-\l)\psi_2(\cd))\les\l\rho(t;\psi_1(\cd))
+(1-\l)\rho(t;\psi_2(\cd)),\q\as,~t\in[0,T].$$
\no And $\rho$ is said to be {\it coherent} if the following are satisfied:
\ms
(v) (Positive Homogeneity) For any $\psi(\cd)\in L^\i_{\cF_T}(0,T)$ and $\l>0$,
$$\rho(t;\l\psi(\cd))=\l\rho(t;\psi(\cd)),\q\as,~t\in[0,T].$$
(vi) (Subadditivity) For any $\psi_1(\cd),\psi_2(\cd)\in L^\i_{\cF_T}(0,T)$,
$$\rho(t;\psi_1(\cd)+\psi_2(\cd))\les\rho(t;\psi_1(\cd))
+\rho(t;\psi_2(\cd)),\q\as,~t\in[0,T].$$
\end{definition}
The word ``equilibrium'' indicates the time-consistency of the risk measure $\rho$, which is a modification of the naive one. A similar situation arises in the study of time-inconsistent optimal control problems (see \cite{Yong 2012}). The meaning of each item is similar to the static case. In (iii), the function $r(\cd)$ is the riskless interest rate.
\ms
Let us now look at the following Type-I BSVIE:
\bel{appli-absvie}
Y(t)=-\psi(t)+\int_t^Tg(t,s,Y(s),Z(t,s))ds-\int_t^TZ(t,s)dW(s),\q t\in[0,T].
\ee
We have the following result.
\begin{proposition}\label{risk-measure-rho-trans} \sl
Let the generator be given by
$$g(t,s,y,z)\equiv r(s)y+g_0(t,s,z),\q (t,s,y,z)\in\D[0,T]\times\dbR\times\dbR,$$
satisfying {\rm\ref{A2}}, where $r(\cd)$ is a non-negative deterministic function. Then the following are true:
\begin{enumerate}[\rm(i)]
\item The map $\psi(\cd)\mapsto\rho(t;\psi(\cd))$ is translation invariant.
\item If $z\mapsto g_0(t,s,z)$ is convex, then so is $\psi(\cd)\mapsto\rho(t;\psi(\cd))$.
\item If $z\mapsto g_0(t,s,z)$ is positively homogeneous and sub-additive, then so is $\psi(\cd)\mapsto\rho(t;\psi(\cd))$.
\end{enumerate}
\end{proposition}
By \autoref{thm-comparison}, the proof of \autoref{risk-measure-rho-trans}
is very similar to that of \cite[Corollary 3.4, Proposition 3.5]{Yong 2007}; we omit it here. By \autoref{risk-measure-rho-trans}, we can construct a large class of equilibrium dynamic risk measures by choosing a suitable generator $g(\cd)$ of BSVIE \rf{appli-absvie}. More precisely, we have the following result.
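To illustrate item (i) of \autoref{risk-measure-rho-trans}, note that $E(t)\deq e^{\int_t^Tr(s)ds}$ satisfies $E(t)=1+\int_t^Tr(s)E(s)ds$, $t\in[0,T]$. Hence, if $(Y(\cd),Z(\cd,\cd))$ is the adapted solution of \rf{appli-absvie} associated with $\psi(\cd)$, then for any $c\in\dbR$,
$$Y(t)-cE(t)=-\big[\psi(t)+c\big]+\int_t^T\big[r(s)\big(Y(s)-cE(s)\big)+g_0(t,s,Z(t,s))\big]ds-\int_t^TZ(t,s)dW(s),\q t\in[0,T],$$
so that, by uniqueness, $\rho(t;\psi(\cd)+c)=\rho(t;\psi(\cd))-ce^{\int_t^Tr(s)ds}$.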
\begin{theorem}\label{risk-measure-main} \sl
Let the generator $g(t,s,y,z)\equiv r(s)y+g_0(t,s,z)$, $(t,s,y,z)\in\D[0,T]\times\dbR\times\dbR$, satisfy {\rm\ref{A2}},
where $r(\cd)$ is a non-negative deterministic function. If
$z\mapsto g_0(t,s,z)$ is convex, then $\psi(\cd)\mapsto\rho(t;\psi(\cd))$ is an equilibrium dynamic convex risk measure. If $z\mapsto g_0(t,s,z)$ is positively homogeneous and sub-additive, then $\psi(\cd)\mapsto\rho(t;\psi(\cd))$ is an equilibrium dynamic coherent risk measure.
\end{theorem}
From \autoref{risk-measure-rho-trans}, the proof of the above result is obvious. According to the above results, we can construct examples of equilibrium dynamic risk measures by choosing $g_0(t,s,z)$. If
$$g_0(t,s,z)=\bar g(t,s)|z|,\qq\bar g(t,s)\ges0,$$
then it is sub-additive and positively homogeneous in $z$, and the
corresponding equilibrium dynamic risk measure is coherent. If
$$g_0(t,s,z)=\bar g(t,s)\sqrt{1+|z|^2},\qq\bar g(t,s)\ges0,$$
then it is convex in $z$, and the corresponding equilibrium dynamic risk measure is convex. If
$$g_0(t,s,z)=\bar g(t,s)|z|^2,\qq\bar g(t,s)\ges0,$$
then one has an entropy-type equilibrium dynamic risk measure.
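These structural properties in $z$ are elementary to check; as an illustration (not part of the paper's analysis, with an arbitrary non-negative constant standing in for $\bar g(t,s)$ at a fixed $(t,s)$), the following short Python script verifies them numerically at randomly sampled points:

```python
import math
import random

GBAR = 0.7  # illustrative stand-in for a fixed value of gbar(t, s) >= 0

def g0_coherent(z):
    # g0(t, s, z) = gbar(t, s) * |z|: positively homogeneous and sub-additive
    return GBAR * abs(z)

def g0_convex(z):
    # g0(t, s, z) = gbar(t, s) * sqrt(1 + |z|^2): convex in z
    return GBAR * math.sqrt(1.0 + z * z)

random.seed(0)
for _ in range(1000):
    z1, z2 = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    lam = random.uniform(0.0, 1.0)
    # positive homogeneity: g0(lam * z) = lam * g0(z) for lam >= 0
    assert abs(g0_coherent(lam * z1) - lam * g0_coherent(z1)) < 1e-12
    # sub-additivity: g0(z1 + z2) <= g0(z1) + g0(z2)
    assert g0_coherent(z1 + z2) <= g0_coherent(z1) + g0_coherent(z2) + 1e-12
    # convexity along the segment between z1 and z2
    mid = lam * z1 + (1.0 - lam) * z2
    assert g0_convex(mid) <= lam * g0_convex(z1) + (1.0 - lam) * g0_convex(z2) + 1e-12
print("all sample checks passed")
```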
\section{Concluding Remarks}\label{remarks}
The recursive utility process (or stochastic differential utility process) and dynamic risk measures for a terminal payoff can be described by the adapted solutions to suitable BSDEs. For an $\cF_T$-measurable position process $\psi(\cd)$, instead of the terminal payoff $\xi$, one could also try to find a recursive utility process and/or dynamic risk measure. One possibility is again to use BSDEs. However, one immediately finds that the resulting processes (recursive utility or dynamic risk measure) are of a time-inconsistent nature. Type-I BSVIEs turn out to be a proper tool for describing them, which serves as one of the major motivations for studying BSVIEs. Recall from \cite{Yong 2006, Yong 2008} that the mathematical extension of BSDEs and the optimal control of forward stochastic Volterra integral equations are two further motivations. To meet the needs of equilibrium recursive utility processes and equilibrium dynamic risk measures, we have to allow the generator of the BSVIE to have quadratic growth in $Z(t,s)$. In this paper, we have developed a theory of Type-I QBSVIEs, including well-posedness, regularity, and a comparison theorem. As a byproduct, we have also obtained the well-posedness of Type-II QBSVIEs. A theory of equilibrium recursive utility and equilibrium dynamic risk measures is then successfully established with the results on Type-I QBSVIEs.
\bs
\no\bf Acknowledgement. \rm The authors would like to thank the two anonymous referees for their constructive comments, which led to the current version of the paper.
\section{Introduction}
During the last two decades, a series of theoretical and experimental studies of particle production in heavy ion collisions (HICs) at
Relativistic Heavy Ion Collider (RHIC) and Large Hadron Collider (LHC) energies has been performed. These results provided us with various
sources of information on properties of the hot and dense matter (Quark Gluon Plasma) formed in these collisions.
Several issues still remain open; these are mainly related to the description of nuclear effects in the initial-state
formation before it interacts with a nuclear target, as well as to parton propagation in a nuclear medium.
In this context, phenomenological studies of hard processes in proton-nucleus ($pA$) collisions can provide us with
additional quantitative information about various nuclear effects expected also in HICs. This can help us to disentangle
the medium effects of different types and constrain their relative magnitudes and contributions \cite{salgado}.
A key feature of the Drell-Yan (DY) process is the absence of final-state interactions and fragmentation associated
with energy loss or absorption phenomena. For this reason, the DY process can be considered a very clean probe
of Initial State Interaction (ISI) effects \cite{peng}. In practice, this process can be used as a convenient tool in studies
of Quantum Chromodynamics (QCD) at high energies, in particular, of the saturation effects expected to determine the initial conditions
in hadronic collisions, as well as of the initial-state energy loss due to the projectile quark propagating in the nuclear medium before
it experiences a hard scattering.
In the present paper, we study the DY process on nuclear targets at high energies using the color dipole approach
\cite{k95,bhq97,kst99,krt01,dynuc,rauf,gay,pkp,Basso,Basso_pp}, which is known to give predictions for the DY cross section
as precise as those of the Next-to-Leading-Order (NLO) collinear factorization framework and naturally allows one to include coherence effects in nuclear collisions.
Moreover, the color dipole formalism provides a straightforward generalisation of the DY process description from proton-proton
to proton-nucleus collisions and is thus suitable for studies of nuclear effects directly accessing the impact-parameter dependence
of nuclear shadowing and nuclear broadening -- critical information which is not available in the parton model.
In contrast to the conventional parton model, where the dilepton production
process is typically viewed as parton annihilation in the center-of-mass (c.m.)
frame, in the color dipole approach, operating in the target rest frame,
the same process looks like bremsstrahlung of a $\gamma^*/Z^0$ boson
off a projectile quark. In $pA$ collisions in the high energy limit,
the projectile quark probes a dense gluonic field in the target,
and nuclear shadowing leads to a nuclear modification of the
transverse momentum distribution of the DY production cross section.
The onset of shadowing effects is controlled by the coherence length,
which can be interpreted as the mean lifetime of $\gamma^*/Z^0$-quark
fluctuations, and is given by
\begin{equation}
\label{eq-cl}
l_c = \frac{1}{x_2 m_N}
\frac
{(M_{l\bar{l}}^2 + p_T^2)(1-\alpha)}
{ \alpha(1-\alpha) M_{l\bar{l}}^2 + \alpha^2 m_f^2 + p_T^2} \,,
\end{equation}
where $M_{l\bar{l}}$ is the dilepton invariant mass, $p_T$ its transverse momentum, $m_N$ the nucleon mass, and $m_f$ the projectile quark mass. Moreover, $\alpha$ is the fraction
of the light-cone momentum of the projectile quark carried out by the gauge boson. As demonstrated in Fig. \ref{fig:LCL}, in the RHIC and LHC kinematic regions, the coherence length exceeds the nuclear radius $R_A$,
$l_c \gtrsim R_A$, which implies that
the long coherence length (LCL) limit can be safely used
in practical calculations of the DY cross section in $pA$ collisions.
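To illustrate the magnitudes entering this argument, the following standalone Python sketch evaluates Eq.~(\ref{eq-cl}) and compares the result with the rough nuclear radius estimate $R_A\simeq 1.12\,A^{1/3}$ fm. The kinematic inputs, the leading-order relation $x_{1,2}=\sqrt{M_{l\bar{l}}^2+p_T^2}\,e^{\pm y}/\sqrt{s}$, and a negligible quark mass are illustrative assumptions, not values fixed in the text:

```python
import math

HBARC = 0.1973  # GeV * fm; converts a length in GeV^-1 to fm

def coherence_length(x2, M, pT, alpha, m_f=0.0, m_N=0.938):
    """Coherence length l_c (fm) of the gamma*/Z0-quark fluctuation, Eq. (eq-cl)."""
    num = (M ** 2 + pT ** 2) * (1.0 - alpha)
    den = alpha * (1.0 - alpha) * M ** 2 + alpha ** 2 * m_f ** 2 + pT ** 2
    return HBARC * num / (x2 * m_N * den)

def x2_lo(M, pT, y, sqrt_s):
    """Leading-order target momentum fraction for a boson of mass M at rapidity y."""
    return math.sqrt(M ** 2 + pT ** 2) * math.exp(-y) / sqrt_s

# Illustrative point: M = 6 GeV dilepton at midrapidity, pT = 1 GeV, LHC energy
sqrt_s, M, pT, y, alpha = 5020.0, 6.0, 1.0, 0.0, 0.5
x2 = x2_lo(M, pT, y, sqrt_s)
lc = coherence_length(x2, M, pT, alpha)
R_Pb = 1.12 * 208.0 ** (1.0 / 3.0)  # rough radius of a lead nucleus, ~6.6 fm
print(f"x2 = {x2:.2e}, l_c = {lc:.0f} fm, R_Pb = {R_Pb:.1f} fm")
```

For such inputs the coherence length comes out orders of magnitude larger than the lead radius, consistent with the LCL regime shown in Fig.~\ref{fig:LCL}.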
Besides the quark
shadowing effects naturally accounted for
in the dipole picture, one should also take into account the nuclear effects
due to multiple rescattering of initial-state projectile partons (ISI effects) in the medium
before a hard scattering. The latter are important close to the kinematic limits,
e.g. at large Feynman $x_F\to 1$ and $x_T=2 p_T/\sqrt{s}\to 1$ ($\sqrt{s}$ being the
collision energy in the c.m. frame), due to restrictions imposed by energy conservation.
In the present paper, we take into account also non-linear QCD effects,
which are amplified in nuclear collisions and related to multiple scatterings
of the higher Fock states containing gluons in the dipole-target interactions.
They generate the gluon shadowing effects effective at small Bjorken $x$
in the target and large rapidity values.
\begin{figure}[t]
\large
\begin{center}
\scalebox{0.7}{\includegraphics{852-Mean_CL-200.eps}}
\scalebox{0.7}{\includegraphics{851-Mean_CL-5020.eps}}
\caption{(Color online)
The mean coherence length $l_c$ of the DY reaction
in $pA$ collisions at RHIC and LHC energies
for different dilepton rapidities and invariant mass ranges.}
\label{fig:LCL}
\end{center}
\end{figure}
\normalsize
In our study, all the basic ingredients for the DY nuclear production
cross section (such as the dipole cross section parameterisations
and Parton Distribution Functions (PDFs)) have been determined from other processes.
Consequently, our predictions are parameter-free and should be considered as
an important test for the onset of distinct nuclear effects.
Note that the nuclear DY process mediated by a virtual photon has already been
studied within the color dipole framework by several authors
(see e.g. Refs.~\cite{dynuc,rauf,gay}). However,
the results of this paper represent a further step,
updating and improving the previous analyses in the literature
and providing new predictions for the transverse momentum,
dilepton invariant mass, and rapidity distributions
of the nuclear DY production cross section
at RHIC and LHC energies, as well as comparisons with the most recent data.
Besides, the effects of quantum coherence at large energies, including
the gluon shadowing as a leading-twist shadowing correction, as well as
the additional contribution of the $Z^0$ boson and $\gamma^*/Z^0$ interference,
are incorporated. Moreover, the impact of effective
initial-state energy loss effects on the DY nuclear production cross section
is studied for the first time. We also investigate nuclear effects providing
a detailed analysis of the azimuthal correlation between
the produced DY pair and a forward pion, taking into
account the $Z^0$ boson contribution
in addition to the virtual photon,
thus generalising the results presented in Ref.~\cite{stastody}.
This paper is organized as follows. In the Section~\ref{formalism},
we present a brief overview of gauge boson production in the color dipole framework.
Moreover, we discuss in detail the saturation effects, gluon shadowing
and initial-state energy loss effects included in the analysis.
Section~\ref{res} is devoted to predictions for the dilepton invariant mass,
rapidity and transverse momentum distributions of the DY nuclear production
cross sections in comparison with the available data.
The onset of various nuclear effects is estimated in the LCL limit
and the predictions for the nucleus-to-nucleon ratio
$R_{pA}=\sigma_{pA}/(A\,\sigma_{pp})$\footnote{Here $A$ represents the atomic mass
number of the nuclear target.} of the DY production
cross sections are presented.
The latter can be verified in the future by experiments at RHIC and LHC.
Furthermore, the azimuthal correlation function between the produced dilepton
and a pion is evaluated for $pA$ collisions at RHIC and LHC for different
dilepton invariant masses including the high-mass region.
Finally, in Section~\ref{conc} we summarise our main conclusions.
\section{Drell-Yan process in hadron-nucleus collisions}
\label{formalism}
\subsection{DY nuclear cross section}
The color dipole formalism is formulated in the target rest frame,
where the process of DY pair production can be viewed as radiation
of gauge bosons $G^*=\gamma^*/Z^0$ by a projectile quark
(see e.g. Refs.~\cite{Basso_pp,pkp}). Assuming only the lowest $|qG^*\rangle$
Fock component, the cross section for the inclusive gauge boson
production with invariant mass $M_{l\bar{l}}$ and transverse momentum $p_T$
can be expressed in terms of the projectile quark (antiquark)
densities $q_f$ ($\bar{q}_f$) at momentum fraction $x_q$
and the quark-nucleus cross section as follows (see e.g. Refs.~\cite{dynuc,Basso_pp}),
\begin{eqnarray}
\label{eq:gb_cs}
\frac{d \sigma (pA\rightarrow G^* X)}{\, \mathrm{d}^2 p_T\, d\eta} =
J(\eta, p_T)\,\frac{x_1}{x_1 + x_2}\,
\sum_f\sum_{\lambda_G=L,T}\,
\int\limits_{x_1}^1
\frac{d \alpha}{\alpha^2}\,
\bigl [
\, q_f(x_q,\mu_F^2) + \bar{q}_{{f}}(x_q,\mu_F^2)
\bigr ]\,
\frac{d\sigma^f_{\lambda_G} (qA \rightarrow qG^*X)}{d(\ln\alpha)\, d^2p_T}\, ,
\end{eqnarray}
where
\begin{equation}
J(\eta, p_T)\equiv \frac{dx_F}{d\eta} =
\frac{2}{\sqrt{s}} \sqrt{M_{l\bar{l}}^2 + p_T^2}\, \cosh(\eta)
\end{equation}
is the Jacobian of the transformation between the Feynman variable $x_F = x_1 - x_2$
and the pseudorapidity $\eta$ of the virtual gauge boson
$G^*$, $x_q = x_1/\alpha$, where $\alpha$ is the fraction
of the light-cone momentum of the projectile quark carried out by the gauge boson,
and $\mu_F^2=p_T^2+(1-x_1)M_{l\bar{l}}^2$ is the factorization scale in the quark PDFs.
As in Ref. \cite{Basso_pp}, we take $\mu_F\simeq M_{l\bar{l}}$ for simplicity.
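The explicit form of $J(\eta,p_T)$ follows from the leading-order light-cone kinematics of the boson, in which its pseudorapidity is identified with its rapidity (a standard approximation at these scales):
\begin{equation}
x_{1,2}=\frac{\sqrt{M_{l\bar{l}}^2+p_T^2}}{\sqrt{s}}\,e^{\pm\eta}
\quad\Longrightarrow\quad
x_F=x_1-x_2=\frac{2}{\sqrt{s}}\sqrt{M_{l\bar{l}}^2+p_T^2}\,\sinh(\eta),
\end{equation}
and differentiating $x_F$ with respect to $\eta$ reproduces the expression above.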
The transverse momentum distribution in Eq.~(\ref{eq:gb_cs}) of the gauge boson $G^*$
bremsstrahlung in quark-nucleus interactions
can be obtained by a generalization of the well-known formulas
for the photon bremsstrahlung from Refs.~\cite{dynuc,rauf,kst99}.
Then the corresponding differential cross section for a given incoming
quark of flavour $f$ reads,
\begin{eqnarray}
\label{ptdistcc}
\frac{d\sigma^f_{T,L} (qA \rightarrow qG^*X)}{d(\ln\alpha)\,d^2p_T}
& = &
\frac{1}{(2\pi)^2}\, \sum_\text{quark pol.}
\int d^2\rho_1\,d^2\rho_2\,
\exp\bigl [i\,{\bf p}_T \cdot ({\bm\rho}_1 - {\bm\rho}_2)\bigr ]\,
\Psi^{\cal{V-A}}_{T,L}(\alpha,{\bm\rho}_1,m_f)\,
\Psi^{\cal{V-A},*}_{T,L}(\alpha,{\bm\rho}_2,m_f) \nonumber \\
&\times &
\frac{1}{2}\bigl [ \sigma_{q\bar{q}}^A(\alpha {\bm\rho}_1,x_2) +
\sigma_{q\bar{q}}^A(\alpha {\bm\rho}_2,x_2) -
\sigma_{q\bar{q}}^A(\alpha|{\bm\rho}_1- {\bm\rho}_2|,x_2)\bigr]\,,
\end{eqnarray}
where $x_2 = x_1 - x_F$ and ${\bm\rho}_{1,2}$ are
the quark-$G^*$ transverse separations in the total radiation amplitude
and its conjugated counterpart, respectively.
Assuming that the projectile quark is unpolarized,
the vector $\Psi^{\cal{V}}$ and axial-vector $\Psi^{\cal{A}}$ wave functions
in Eq.~(\ref{ptdistcc}) are not correlated such that
\begin{eqnarray}
\label{Psi2}
& &
\sum_\text{quark pol.} \Psi^{\cal{V-A}}_{T,L}(\alpha,{\bm\rho}_1,m_f)\,
\Psi^{\cal{V-A},*}_{T,L}(\alpha,{\bm\rho}_2,m_f) =
\nonumber \\
& = &
\Psi^{\cal{V}}_{T,L}(\alpha,{\bm\rho}_1,m_f)\,
\Psi^{\cal{V},*}_{T,L}(\alpha,{\bm\rho}_2,m_f) +
\Psi^{\cal{A}}_{T,L}(\alpha,{\bm\rho}_1,m_f)\,
\Psi^{\cal{A},*}_{T,L}(\alpha,{\bm\rho}_2,m_f)\,,
\end{eqnarray}
where the averaging over the initial and summation
over final quark helicities is performed and
the quark flavour dependence comes
only via the projectile quark mass $m_f$.
The corresponding wave functions
$\Psi_{T,L}^{\cal{V-A}}(\alpha,{\bm\rho})$ can be found in Ref.~\cite{pkp}.
Our goal is to evaluate the DY production cross section
in $pA$ collisions at high energies and a large mass number $A$
of the nuclear target.
This regime is characterised by a limitation
on the maximum phase-space parton density that can be reached
in the hadron wave function (parton saturation) \cite{hdqcd}.
The transition between the linear and non-linear regimes
of the QCD dynamics is typically specified by a characteristic
energy-dependent scale, the saturation scale $Q_{s}^2$.
Such saturation effects are expected to be amplified
in nuclear collisions, since the nuclear saturation scale $Q_{s,A}^2$
is enhanced with respect to
the nucleon one, $Q_{s,p}^2$, by roughly a factor of $A^{1/3}$.
In general, the dipole-nucleus cross section
$\sigma_{q\bar{q}}^A(\rho,x)$ can be written
in terms of the forward dipole-nucleus scattering amplitude
${\cal N}^A (\rho, x, \mbox{\boldmath $b$})$ as follows,
\begin{eqnarray}
\sigma_{q\bar{q}}^A(\bm\rho,x) = 2\,\int d^2\mbox{\boldmath $b$}\, {\cal N}^A (\bm\rho, x, \mbox{\boldmath $b$})\,.
\label{sigda}
\end{eqnarray}
At high energies, the evolution of $\mathcal{N}^A(\bm\rho,x,\mbox{\boldmath $b$})$
in rapidity $Y = \ln (1/x)$ is given, for example, within
the Color Glass Condensate (CGC) formalism \cite{CGC}, in terms
of an infinite hierarchy of equations known as
the Balitsky-JIMWLK equations \cite{BAL,CGC},
which reduces in the mean-field approximation
to the Balitsky-Kovchegov (BK) equation \cite{BAL,kov}.
In recent years, several groups have studied
the solution of the BK equation taking into account
the running coupling corrections to the evolution kernel.
However, these analyses have assumed the translational
invariance approximation, which implies that
$\mathcal{N}^A(\rho, x,\mbox{\boldmath $b$}) = \mathcal{N}^A(\rho,x)\,S(\mbox{\boldmath $b$})$ and
$\sigma_{q\bar{q}}(\bm\rho, x) = \sigma_0\,\mathcal{N}(\rho, x)$,
where $\mathcal{N}(\rho, x)$ is the partial dipole amplitude on a nucleon
and $\sigma_0$ is the normalization of the dipole cross section
fitted to the data. In essence, these analyses disregard the impact-parameter dependence.
Unfortunately, the impact-parameter dependent numerical solutions
of the BK equation are very difficult to obtain \cite{stastobk}.
Moreover, the choice of the impact-parameter profile
of the dipole amplitude entails intrinsically nonperturbative physics,
which is beyond the QCD weak coupling approach of the BK equation.
In what follows, we explore an alternative path and
employ the available phenomenological models, which explicitly
incorporate an expected $b$-dependence of the scattering amplitude.
\subsection{Models for the dipole cross section}
As in our previous studies
\cite{erike_ea2,vmprc,vic_erike,babi,Diego,Diego2},
we work in the LCL limit and employ the model
initially proposed in Ref.~\cite{kopeliovich-lcl}
which includes the impact parameter dependence
in the dipole-nucleus amplitude and describes
the experimental data on the nuclear structure function
(for more details, see Ref.~\cite{armesto,erike_ea2}).
In particular, this model enables us to incorporate
the shadowing effects via a simple
eikonalization of the standard dipole-nucleon
cross section $\sigma_{q\bar q}(\bm\rho,x)$ such that the forward
dipole-nucleus amplitude in Eq.~(\ref{sigda}) is given by
\begin{eqnarray}
{\cal N}^A (\bm\rho,x,\mbox{\boldmath $b$}) =
1-\exp\left(-\frac{1}{2}\,T_A(\mbox{\boldmath $b$})\,\sigma_{q\bar{q}}(\bm\rho,x)\right) \,,
\label{Na}
\end{eqnarray}
where $T_A(\mbox{\boldmath $b$})$ is the nuclear profile (thickness)
function, which is normalized to the mass number $A$
and reads
\begin{eqnarray}
T_A(\mbox{\boldmath $b$}) =
\int_{-\infty}^{\infty} \rho_A(\mbox{\boldmath $b$},z) dz\, .
\end{eqnarray}
Here $\rho_A(\mbox{\boldmath $b$},z)$ represents
the nuclear density function defined at the impact parameter $\mbox{\boldmath $b$}$
and the longitudinal coordinate $z$. In our calculations we
used realistic parametrizations of $\rho_A(\mbox{\boldmath $b$},z)$ from
Ref.~\cite{saxon}.
The eikonal formula (\ref{Na}), based upon the Glauber-Gribov
formalism \cite{gribov},
resums the multiple elastic rescatterings
of the $q\bar{q}$ dipole in a nucleus in the high-energy limit.
The eikonalisation procedure is justified in the LCL regime, where
the transverse separation $\rho$ of partons in the multiparton
Fock state of the projectile
is frozen during propagation through the nuclear matter,
so that such states become eigenstates of the scattering matrix.
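The eikonalization is easy to sketch numerically. The pure-Python example below is illustrative only: the Woods-Saxon radius and diffuseness used for $^{208}$Pb are typical textbook values, not necessarily those of Ref.~\cite{saxon}. It builds $T_A(b)$ normalized to the mass number $A$ and evaluates Eqs.~(\ref{sigda}) and (\ref{Na}):

```python
import math

A, R_WS, A_WS = 208, 6.62, 0.546     # 208Pb; radius and diffuseness in fm

def rho_ws(r):
    """Unnormalized Woods-Saxon profile."""
    return 1.0 / (1.0 + math.exp((r - R_WS) / A_WS))

# Normalize the density to the mass number: int d^3r rho_A(r) = A.
_dr = 0.01
RHO0 = A / (sum(4.0 * math.pi * ((i + 0.5) * _dr)**2 * rho_ws((i + 0.5) * _dr)
                for i in range(2500)) * _dr)   # central density, fm^-3

def T_A(b, zmax=25.0, dz=0.05):
    """Nuclear thickness T_A(b) = int dz rho_A(sqrt(b^2 + z^2)), in fm^-2."""
    return 2.0 * sum(RHO0 * rho_ws(math.hypot(b, (i + 0.5) * dz))
                     for i in range(int(zmax / dz))) * dz

def sigma_qq_A(sigma_N, bmax=20.0, db=0.1):
    """Eikonal dipole-nucleus cross section (fm^2, for sigma_N in fm^2):
    sigma^A = 2 int d^2b [1 - exp(-T_A(b) sigma_N / 2)]."""
    return 2.0 * sum(2.0 * math.pi * ((j + 0.5) * db)
                     * (1.0 - math.exp(-0.5 * T_A((j + 0.5) * db) * sigma_N))
                     for j in range(int(bmax / db))) * db
```

In the dilute limit one recovers $\sigma^A \to A\,\sigma_{q\bar q}$, while for a typical dipole cross section of tens of mb the Glauber-Gribov shadowing keeps $\sigma^A$ well below $A\,\sigma_{q\bar q}$.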
For the numerical analysis of the nuclear DY observables,
we need to specify a reliable parametrisation for
the dipole-proton cross section.
In recent years, several groups have constructed a number
of viable phenomenological models based on saturation physics
and fits to the HERA and RHIC data (see e.g.
Refs.~\cite{GBW,iim,kkt,dhj,Goncalves:2006yt,buw,kmw,agbs,Soyez2007,bgbk,kt,ipsatnewfit,amirs}).
As in our previous study of the DY process in $pp$ collisions
\cite{Basso_pp},
in order to estimate theoretical uncertainty in our analysis,
in what follows,
we consider several phenomenological models for the dipole cross
section $\sigma_{q\bar{q}}$ which take into account the DGLAP evolution
as well as the saturation effects.
The first one is the model proposed in Ref.~\cite{bgbk},
where the dipole cross section is given by
\begin{equation}
\sigma_{q\bar{q}}(\bm\rho, x) = \sigma_0\,
\left[1-\exp\left( - \frac{\pi^2}{\sigma_0\,N_c}\,\rho^2\,
\alpha_s(\mu^2)\,xg(x, \mu^2)\right)\right]\,,
\label{bgbk}
\end{equation}
where $N_c=3$ is the number of colors, $\alpha_s(\mu^2)$
is the strong coupling constant at $\mu$ scale,
which is related to the dipole size
$\rho$ as $\mu^2=C/\rho^2 + \mu_0^2$
with $C$, $\mu_0$ and $\sigma_0$ parameters fitted
to the HERA data. Moreover, in this model
the gluon density evolves according to DGLAP equation \cite{dglap}
accounting for gluon splittings only,
\begin{equation}
\frac{\partial xg(x,\mu^2)}{\partial \ln \mu^2 } =
\frac{\alpha_s(\mu^2)}{2\pi} \int_x^1 dz\,
P_{gg}(z) \frac{x}{z} g\Big(\frac{x}{z}, \mu^2\Big)\,,
\label{dglap}
\end{equation}
where the gluon density at initial scale $\mu_0^2$ is parametrized as
\cite{bgbk}
\begin{equation}
xg(x,\mu_0^2) = A_g x^{-\lambda_g} (1-x)^{5.6}\,.
\end{equation}
The set of best fit values of the model parameters reads:
$A_g = 1.2$, $\lambda_g = 0.28$, $\mu_0^2 = 0.52$ GeV$^{2}$,
$C = 0.26$ and $\sigma_0 = 23$ mb.
In what follows we denote by BGBK the predictions
for the DY observables obtained using Eq.~(\ref{bgbk}) as an input
in calculations of the dipole-nucleus scattering amplitude.
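To illustrate the structure of Eq.~(\ref{bgbk}), the following toy Python sketch evaluates the BGBK dipole cross section with the fitted parameters above, with two simplifications that we flag explicitly: the gluon density is frozen at its initial-scale form (the DGLAP evolution of Eq.~(\ref{dglap}) is omitted) and a one-loop running coupling with an assumed $\Lambda^2 = 0.04$ GeV$^2$ is used:

```python
import math

MB_TO_GEV2 = 1.0 / 0.3894            # 1 mb = 2.568 GeV^-2
SIGMA0 = 23.0 * MB_TO_GEV2           # fitted normalization, GeV^-2
AG, LAMG, MU02, C, NC = 1.2, 0.28, 0.52, 0.26, 3

def alpha_s(mu2, nf=3, lam2=0.04):
    """One-loop running coupling; Lambda^2 = 0.04 GeV^2 is an assumption."""
    return 12.0 * math.pi / ((33.0 - 2.0 * nf) * math.log(mu2 / lam2))

def xg(x):
    """Gluon density frozen at the initial scale (DGLAP evolution omitted)."""
    return AG * x**(-LAMG) * (1.0 - x)**5.6

def sigma_bgbk(rho, x):
    """BGBK dipole cross section, Eq. (bgbk); rho in GeV^-1, result in GeV^-2."""
    mu2 = C / rho**2 + MU02
    expo = math.pi**2 * rho**2 * alpha_s(mu2) * xg(x) / (SIGMA0 * NC)
    return SIGMA0 * (1.0 - math.exp(-expo))
```

The sketch reproduces the two limits of the model: color transparency, $\sigma_{q\bar q}\propto\rho^2$ (up to the slow drift of $\alpha_s\,xg$ with the scale), for small dipoles, and saturation at $\sigma_0$ for large ones.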
The model proposed in Ref.~\cite{bgbk} was generalised
in Ref.~\cite{kmw} in order to take into
account the impact parameter dependence of
the dipole-proton cross section and to describe
the exclusive observables at HERA. In this model,
the corresponding dipole-proton cross section is given by
\begin{eqnarray}
\sigma_{q\bar{q}}(\bm\rho, x) =
2\,\int d^2b_p\,\left[1-\exp\left(- \frac{\pi^2}{2N_c}\,
\rho^2\,\alpha_s(\mu^2)\, xg(x,\mu^2)T_G({\bf b}_p)\right)\right]
\label{ipsat}
\end{eqnarray}
with the DGLAP evolution of the gluon distribution given
by Eq.~(\ref{dglap}). The Gaussian impact parameter dependence
is given by
$T_G({\bf b}_p)=(1/2\pi B_G)\,\exp(-b_p^2/2 B_G)$,
where $B_G$ is a free parameter extracted from the $t$-dependence of
the exclusive electron-proton ($ep$) data.
The parameters of this model were updated in
Ref.~\cite{ipsatnewfit} by fitting to the recent high precision
HERA data \cite{heradata} providing the following values:
$A_g = 2.373$, $\lambda_g = 0.052$, $\mu_0^2 = 1.428$ GeV$^{2}$,
$B_G = 4.0$ GeV$^{-2}$ and $C = 4.0$.
Hereafter, we will denote as IP-SAT the resulting predictions
obtained using Eq.~(\ref{ipsat})
as an input in calculations of ${\cal N}^A$, Eq.~(\ref{Na}).
For comparison with previous results in the literature,
we also consider the Golec-Biernat--W\"usthoff (GBW) model \cite{GBW}
based upon a simplified saturated form
\begin{equation}
\label{gbw}
\sigma_{q\bar{q}}(\bm\rho,x) =
\sigma_0\,\left(1 - e^{-\frac{\rho^2Q_s^2(x)}{4}}\right)
\end{equation}
with the saturation scale
\begin{equation}
Q_s^2(x) = Q_0^2\left( \frac{x_0}{x} \right)^\lambda \,,
\label{satsca1}
\end{equation}
where the model parameters $Q_0^2 = 1$ GeV${}^2$,
$x_0 = 4.01 \times 10^{-5}$, $\lambda = 0.277$ and $\sigma_0 = 29$ mb
were obtained from the fit to the DIS data accounting for a contribution
of the charm quark.
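The GBW form is simple enough to evaluate directly; the short Python sketch below (illustrative only) uses the quoted fit parameters:

```python
import math

SIGMA0_MB, Q02, X0, LAM = 29.0, 1.0, 4.01e-5, 0.277   # GBW fit parameters

def Qs2(x):
    """GBW saturation scale, Eq. (satsca1), in GeV^2."""
    return Q02 * (X0 / x)**LAM

def sigma_gbw(rho, x):
    """GBW dipole cross section, Eq. (gbw); rho in GeV^-1, result in mb."""
    return SIGMA0_MB * (1.0 - math.exp(-rho**2 * Qs2(x) / 4.0))
```

For $\rho^2 \ll 4/Q_s^2$ this reduces to the color-transparency behaviour $\sigma_{q\bar q}\approx \sigma_0\,\rho^2 Q_s^2/4$, while large dipoles saturate at $\sigma_0$.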
Finally, we also consider the running coupling solution
of the BK equation for the partial dipole
amplitude obtained in Ref.~\cite{bkrunning}
using the GBW model as an initial condition such that
$\sigma_{q\bar{q}}^p(\bm\rho,x) = \sigma_0\,{\cal{N}}^p(\bm\rho,x)$ where
the normalisation $\sigma_0$ is fitted to the HERA data.
\subsection{Gluon shadowing corrections}
In the LHC energy range
the eikonal formula for the
LCL regime, Eq.~(\ref{Na}), is not exact.
Besides the lowest $|qG^*\rangle$
Fock state, where $G^*=\gamma^*/Z^0$,
one should include also the higher Fock
components containing gluons, e.g. $|qG^*\,g\rangle$, $|qG^*\,gg\rangle$, etc.
They cause an additional suppression known as the gluon shadowing (GS).
The high LHC energies activate the coherence effects
also for these gluon fluctuations, which
are heavier and consequently have a shorter coherence length than the
lowest Fock component $|qG^*\rangle$.
The corresponding suppression factor $R_G$, defined as the ratio
of the gluon densities in a nucleus and a nucleon,
was derived
in Ref.~\cite{kopeliovich-gs}
using the Green function technique
through the calculation of
the inelastic correction $\Delta\sigma_{tot}(q\bar{q}g)$ to the total
cross section $\sigma_{tot}^{\gamma^*\,A}$, related to the creation
of a $|q\bar{q}\,g\rangle$ intermediate Fock state
\begin{eqnarray}
R_G(x,Q^2,\mbox{\boldmath $b$}) \equiv \frac{x g_A(x,Q^2,\mbox{\boldmath $b$})}{A\cdot x g_p(x,Q^2)}
\approx 1 - \frac{\Delta\sigma_{tot}(q\bar{q}g)}{\sigma_{tot}^{\gamma^*A}} \,.
\end{eqnarray}
The GS corrections are included in the calculations by replacing
$\sigma_{q \bar q}^N(\bm\rho, x) \rightarrow
\sigma_{q \bar q}^N(\bm\rho, x)\,R_G(x,Q^2,\mbox{\boldmath $b$})$.
They lead to additional nuclear suppression
in production of DY pairs at small Bjorken
$x=x_2$ in the target. In Fig.~\ref{fig:rg} (left panel) we present our results for the $x$-dependence of the ratio $R_G(x,Q^2,\mbox{\boldmath $b$})$ for different values of the impact parameter $b$. As expected, the magnitude of the shadowing corrections decreases at large values of $b$. In the right panel we present our predictions for the $b$-integrated nuclear ratio $R_G(x,Q^2)$ for different values of the hard scale $Q^2$. This figure shows a rather weak onset of GS, in agreement with
the NLO global analyses of
DIS data \cite{nlo-dis}. The weak $Q^2$ dependence
of GS demonstrates that GS is a leading-twist effect,
with $R_G(x,Q^2)$ approaching unity only very slowly
(logarithmically) as $Q^2\rightarrow\infty$.
\begin{figure}[h!]
\large
\begin{center}
\scalebox{0.7}{\includegraphics{803-RG-Ceff-bdep-208.eps}}
\scalebox{0.7}{\includegraphics{802-RG-Ceff-208.eps}}
\caption{(Color online) Left panel: The $x$-dependence of the ratio
$R_G(x,Q^2,\mbox{\boldmath $b$})$ for different values of the impact parameter. Right panel: The $x$-dependence of the $b$--integrated ratio
$R_G(x,Q^2)$ for distinct values of the hard scale $Q^2$.}
\label{fig:rg}
\end{center}
\end{figure}
\normalsize
\subsection{Effective energy loss}
The effective initial-state energy loss (ISI effects)
is expected to noticeably suppress the nuclear cross
section towards the kinematical limits,
\[
x_L = \frac{2p_L}{\sqrt{s}}\rightarrow1\,,
\qquad x_T = \frac{2p_T}{\sqrt{s}}\rightarrow1 \,.
\]
Correspondingly, a proper variable which controls
this effect is $\xi = \sqrt{x_L^2 + x_T^2}$. The magnitude of suppression
was evaluated in Ref.~\cite{kopeliovich-isi}.
It was found within the Glauber approximation
that each interaction in the nucleus
leads to a suppression factor $S(\xi)\approx 1-\xi$.
Summing up over the multiple initial state interactions in a $pA$ collision
at impact parameter $b$, one arrives at a nuclear ISI-modified PDF
\begin{equation}
\label{eq-ISI}
q_{f}(x,Q^2) \Rightarrow q_{f}^A(x,Q^2,b) =
C_v \, q_{f}(x,Q^2)\,
\frac{e^{-\xi \sigma_{\rm eff}T_A(b)}-e^{-\sigma_{\rm eff}T_A(b)}}
{(1-\xi)(1-e^{-\sigma_{\rm eff}T_A(b)})} \,.
\end{equation}
Here, $\sigma_{\rm eff}=20$~mb is the effective hadronic
cross section controlling the multiple interactions.
The normalisation factor $C_v$ is fixed by the Gottfried
sum rule (for more details, see Ref.~\cite{kopeliovich-isi}).
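The suppression factor multiplying the projectile quark PDF in Eq.~(\ref{eq-ISI}) is straightforward to evaluate. The Python sketch below (illustrative; the normalization $C_v$ fixed by the Gottfried sum rule is omitted) exposes its limiting behaviour:

```python
import math

SIGMA_EFF = 2.0                      # effective cross section, 20 mb = 2 fm^2

def xi_var(xL, xT):
    """Variable controlling the ISI suppression, xi = sqrt(x_L^2 + x_T^2)."""
    return math.hypot(xL, xT)

def isi_factor(xi, TA):
    """ISI suppression factor of Eq. (eq-ISI) without C_v; TA in fm^-2."""
    s = SIGMA_EFF * TA
    return ((math.exp(-xi * s) - math.exp(-s))
            / ((1.0 - xi) * (1.0 - math.exp(-s))))
```

The factor tends to unity for $\xi \to 0$ and falls monotonically towards the kinematical limit $\xi \to 1$, the more steeply the larger the thickness $T_A(b)$.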
It was found that such an additional nuclear suppression
due to the ISI effects represents an energy-independent
feature common to all reactions
experimentally studied so far, with any leading particle
(hadrons, Drell-Yan dileptons, charmonium, etc.).
In particular, such a suppression was indicated at midrapidity,
$y=0$, and at large $p_T$ by the PHENIX data \cite{phenix-isi-dAu}
on $\pi^0$ production in central $dAu$ collisions and
on direct photon production in central $AuAu$ collisions
\cite{phenix-isi-AuAu}, where no shadowing is expected
since the corresponding Bjorken $x=x_2$ in the target is large.
Besides large $p_T$-values,
the same mechanism of nuclear attenuation is effective also
at forward rapidities (large Feynman $x_F$), where we expect
a much stronger onset of nuclear suppression
as was demonstrated by the BRAHMS and STAR data \cite{rhic-isi-forw}.
In our case, we predict that the ISI effects induce a
significant suppression
of the DY nuclear cross section at large dilepton $p_T$,
dilepton invariant mass and at forward rapidities as one can see
in the next Section.
\section{Results}
\label{res}
In what follows, we present our predictions for the DY pair production
cross section in the process $pA\rightarrow \gamma^*/Z^0 \rightarrow l \bar l$
obtained within the color dipole formalism
and taking into account the medium effects discussed in the previous
Section.
Following Ref.~\cite{GBW}, we take the quark masses to be
$m_u = m_d = m_s = 0.14$ GeV, $m_c = 1.4$ GeV and $m_b = 4.5$ GeV.
Moreover, we take the factorisation scale $\mu_F$ defined above
to be equal to the dilepton invariant mass, $M_{l\bar{l}}$,
and employ the CT10 NLO parametrisation
for the projectile quark PDFs \cite{ct10}
(both sea and valence quarks are included).
As was demonstrated in Refs.~\cite{Basso_pp,Basso:2015lua},
the DY predictions in $pp$ collisions at high energies show
little sensitivity to the choice of PDF parameterisation,
so we do not vary the projectile quark PDFs.
\begin{figure}[h!]
\large
\begin{center}
\scalebox{0.7}{\includegraphics{100-pPb-deta-M60_120-5020-data.eps}}
\scalebox{0.7}{\includegraphics{300-pPb-dpT-M60_120-5020-data.eps}} \\
\scalebox{0.7}{\includegraphics{400-pPb-deta-M60_120-5020-data.eps}}
\scalebox{0.7}{\includegraphics{600-pPb-dpT-M60_120-5020-data.eps}}
\caption{(Color online) The dipole model predictions for the DY
nuclear cross sections at large dilepton invariant masses
compared to the recent experimental data from ATLAS and CMS
experiments \cite{cms_data_pA,atlas_data_pA}
at c.m. collision energy $\sqrt{s}=5.02$ TeV.
The predictions obtained
for several parameterisations of the dipole cross section
described in the text are shown in the top panels while
the effects of the gluon shadowing and the initial-state energy loss
are demonstrated in the bottom panels.}
\label{fig:data}
\end{center}
\end{figure}
\normalsize
In Fig.~\ref{fig:data} we compare our predictions for the DY nuclear
cross section with available LHC data \cite{cms_data_pA,atlas_data_pA}
for large invariant dilepton masses, $60 < M_{l\bar{l}} < 120$ GeV,
taking into account the saturation effects.
In the top panels, we test the predictions of
various models for the dipole cross section comparing them
with the experimental data for the rapidity and
transverse momentum distributions of the DY
production cross sections in $pA$ collisions.
As was already verified in Ref.~\cite{Basso_pp}
for DY production in $pp$ collisions,
the dipole approach works fairly well in description of
the current experimental data at high energies.
In particular, the BGBK model provides a consistent prediction
describing the data on the rapidity distribution
quite well in the full kinematical range.
In the bottom panels of Fig.~\ref{fig:data}, we took the BGBK model
and considered the impact of gluon shadowing corrections
as well as the initial-state effective energy loss (ISI effects),
Eq.~(\ref{eq-ISI}).
In the range of large dilepton invariant masses
concerned, the gluon shadowing corrections are rather small
since the corresponding Bjorken $x=x_2$ in the target becomes large.
On the other hand, the ISI effects significantly
modify the behaviour of the rapidity distribution
at large $\eta > 2$. Unfortunately, due to the large error bars,
the current data cannot yet verify the predicted strong onset of
the ISI effects.
In the case of the transverse momentum distribution
for large invariant masses and $0 \le \eta \le 2$,
the impact of both the gluon shadowing
and the ISI effects is negligible.
\begin{figure}[h!]
\large
\begin{center}
\scalebox{1.0}{\includegraphics{901-combine-dM-pAu-200.eps}}
\caption{(Color online)
The dilepton invariant mass dependence of the nucleus-to-nucleon ratio,
$R_{pA} = \sigma^{\rm DY}_{pA}/(A \cdot \sigma^{\rm DY}_{pp})$,
of the DY production cross sections for
c.m. energy $\sqrt{s}=0.2$ TeV corresponding to RHIC experiments.}
\label{fig:mass_rhic}
\end{center}
\end{figure}
\normalsize
\begin{figure}[h!]
\large
\begin{center}
\scalebox{1.0}{\includegraphics{902-combine-dM-pPb-5020.eps}}
\caption{(Color online) The dilepton invariant mass dependence of the
nucleus-to-nucleon ratio,
$R_{pA} = \sigma^{\rm DY}_{pA}/(A \cdot \sigma^{\rm DY}_{pp})$,
of the DY production cross sections for
c.m. energy $\sqrt{s}=5.02$ TeV corresponding to LHC experiments.}
\label{fig:mass_lhc}
\end{center}
\end{figure}
\normalsize
In order to quantify the impact of the nuclear effects,
in what follows, we estimate the invariant mass, rapidity
and transverse momentum dependence
of the nucleus-to-nucleon ratio
of the DY production cross sections (nuclear modification factor),
$R_{pA} = \sigma^{\rm DY}_{pA}/(A \cdot \sigma^{\rm DY}_{pp})$,
considering the DY process at RHIC ($\sqrt{s}=0.2$ TeV) and LHC
($\sqrt{s}=5.02$ TeV) energies.
The color dipole predictions for the DY production cross section
in $pp$ collisions have been discussed in detail
in Ref.~\cite{Basso_pp}. For consistency,
the numerator and denominator of the nuclear modification factor
are evaluated within the same model for the dipole
cross section as an input.
In Fig.~\ref{fig:mass_rhic} we present our predictions
for the dilepton invariant mass dependence of the ratio
$R_{pA}(M_{l\bar{l}})$ at RHIC considering both central and
forward rapidities.
In the top panels, we show that the dipole model predictions
are almost insensitive to the parameterisations used to treat
the dipole-proton interactions. The magnitude of the saturation effects
decreases at large dilepton invariant masses and increases at forward rapidities.
Such a behaviour is expected, since at smaller $M_{l\bar{l}}$ and
at larger $\eta$ one probes smaller values of the Bjorken-$x_2$ variable
in the target.
In the bottom panels of Fig.~\ref{fig:mass_rhic},
we present the predictions taking into account also
the GS corrections and ISI effects.
As was mentioned above we predict a weak
onset of GS corrections
at central rapidities whereas GS leads to
a significant suppression in the forward region.
Besides, as expected, the impact of the GS effects decreases
with $M_{l\bar{l}}$ due to the rise of the Bjorken $x_2$-values.
In contrast, the ISI effects cause
a strong nuclear suppression
at large $M_{l\bar{l}}$ and/or $\eta$.
This behaviour is also well understood since large
dilepton invariant masses and/or rapidities correspond
to large Feynman $x_F$ leading to a stronger onset
of ISI effects as follows from Eq.~(\ref{eq-ISI}).
A similar behaviour has been predicted for the LHC energy
range as is shown in Fig.~\ref{fig:mass_lhc} where the impact
of saturation and GS effects is even more pronounced.
\begin{figure}[h!]
\large
\begin{center}
\scalebox{1.0}{\includegraphics{903-combine-deta.eps}}
\caption{(Color online)
The pseudorapidity dependence of the nucleus-to-nucleon ratio, $R_{pA}(\eta)$,
of the DY production cross sections at RHIC and LHC energies for two ranges
($5 < M_{l\bar{l}} < 25$ GeV) and ($60 < M_{l\bar{l}} < 120$ GeV) of dilepton invariant mass.}
\label{fig:rapidity}
\end{center}
\end{figure}
\normalsize
In Fig.~\ref{fig:rapidity} we present our predictions
for rapidity dependence of the nucleus-to-nucleon ratio, $R_{pA}(\eta)$,
of the DY production cross sections at RHIC and LHC energies
considering two ranges, ($5 < M_{l\bar{l}} < 25$ GeV)
and ($60 < M_{l\bar{l}} < 120$ GeV), of dilepton invariant mass.
We would like to emphasize that the onset
of saturation effects reduces $R_{pA}(\eta)$ at large rapidities
and has a larger impact in the small invariant mass range.
For large invariant masses, we predict a reduction
of $\approx 10 \%$ in the $R_{pPb}$ ratio at LHC energy.
At RHIC energy we predict a weak onset of GS effects
even at large $\eta > 3$.
In contrast to RHIC energy range,
at the LHC the GS effects lead to a significant additional suppression,
modifying thus the ratio $R_{pPb}$ especially at small dilepton invariant masses
and large rapidity values.
On the other hand, the onset of the ISI effects
is rather strong for both RHIC and LHC kinematic regions,
and becomes even stronger at forward rapidities for both
invariant mass ranges. This makes the phenomenological
studies of the rapidity dependence of $R_{pA}$ ideal for constraining
such effects.
\begin{figure}[h!]
\large
\begin{center}
\scalebox{1.0}{\includegraphics{904-combine-dpT-pAu-200.eps}}
\caption{(Color online)
The transverse momentum dependence of the nucleus-to-nucleon ratio
of the DY production cross sections, $R_{pA}(p_T)$,
for the dilepton invariant mass
range $5 < M_{l\bar{l}} < 25$ GeV at $\sqrt{s}=0.2$ TeV and $\eta=0,1$.}
\label{fig:pt_rhic}
\end{center}
\end{figure}
\normalsize
\begin{figure}[h!]
\large
\begin{center}
\scalebox{1.0}{\includegraphics{905-combine-dpT-pPb-smallM.eps}}
\caption{(Color online)
The transverse momentum dependence of the nucleus-to-nucleon ratio
of the DY production cross sections, $R_{pA}(p_T)$,
for the dilepton invariant mass
range $5 < M_{l\bar{l}} < 25$ GeV at $\sqrt{s}=5.02$ TeV and $\eta=0,2,4$.}
\label{fig:pt_lhc_small}
\end{center}
\end{figure}
\normalsize
Fig.~\ref{fig:pt_rhic} shows our predictions for the transverse momentum dependence
of the nuclear modification factor, $R_{pA}(p_T)$,
for the invariant mass range
$5 < M_{l\bar{l}} < 25$ GeV at RHIC c.m. energy $\sqrt{s}=0.2$ TeV
and two distinct pseudorapidity values $\eta=0$
and $\eta=1$.
At large transverse momenta, the role
of the saturation effects is negligibly small
and can be important only at small $p_T \le 2$ GeV.
Similarly, the GS effects are almost irrelevant at RHIC energies.
However, Fig.~\ref{fig:pt_rhic} clearly demonstrates
a strong onset of ISI effects causing a significant suppression at large $p_T$,
where no coherence effects are expected. In accordance with Eq.~(\ref{eq-ISI})
and in comparison with $\eta=0$,
we predict stronger ISI effects at forward rapidities as is depicted
in Fig.~\ref{fig:pt_rhic} for $\eta=1$. Since the coherence effects
are largely eliminated in this regime, the study of the DY process at large $p_T$ in $pA$ collisions
at RHIC is a very convenient tool for the investigation of net ISI effects.
On the other hand, at LHC energies (see Fig.~\ref{fig:pt_lhc_small})
the manifestation of the saturation and GS effects rises at forward rapidities
and becomes noticeable for $p_T \le 10$ GeV.
As was already mentioned for RHIC energies,
the ISI effects cause a significant attenuation
at large transverse momenta and forward
rapidities, although no substantial suppression is expected
in the DY process
due to the absence of final-state interactions, energy loss or absorption.
For these reasons, a study
of the ratio $R_{pA}(p_T)$ at the LHC, especially at large $p_T$
and in the small invariant mass range, is very effective for constraining the
ISI effects.
\begin{figure}[h!]
\large
\begin{center}
\scalebox{1.0}{\includegraphics{905-combine-dpT-pPb-largeM.eps}}
\caption{(Color online) The transverse momentum dependence
of the nucleus-to-nucleon ratio
of the DY production cross sections, $R_{pA}(p_T)$,
for the dilepton invariant mass
range $60 < M_{l\bar{l}} < 120$ GeV at $\sqrt{s}=5.02$ TeV and $\eta=0,2,4$.}
\label{fig:pt_lhc_large}
\end{center}
\end{figure}
\normalsize
In order to reduce the contribution of coherence effects (gluon shadowing, CGC) in the LHC kinematic
region one should go to the range of large dilepton invariant masses
as is shown in Fig.~\ref{fig:pt_lhc_large}. Here we
present our predictions for the ratio $R_{pPb}(p_T)$ at the LHC
c.m. collision energy $\sqrt{s}=5.02$ TeV for
the range $60 < M_{l\bar{l}} < 120$ GeV and several values of $\eta=0,2,4$.
According to expectations we have found that
the saturation and GS effects turn out
to be important only at small $p_T$ and large $\eta$.
Such an elimination of the coherence effects at
larger dilepton invariant masses
simultaneously causes
a stronger onset of the ISI effects,
as one can see by comparing Fig.~\ref{fig:pt_lhc_large}
with Fig.~\ref{fig:pt_lhc_small}.
For this reason, an investigation of net ISI effects
at large $M_{l\bar{l}}$
does not require such high $p_T$ and rapidity values,
which allows one to obtain experimental data with higher
statistics and, consequently, smaller error bars.
Fig.~\ref{fig:pt_lhc_large} demonstrates again
a large nuclear suppression in the forward region ($\eta = 4$) over an
extended range of the dilepton transverse momenta.
Consequently, such an analysis of the DY nuclear cross section at
forward rapidities by e.g. the LHCb Collaboration can be
very useful to probe the ISI effects experimentally.
\begin{figure}[h!]
\large
\begin{center}
\scalebox{0.7}{\includegraphics{lhcpA_nuclei}}
\caption{(Color online) The correlation function $C(\Delta \phi)$
for the associated DY pair and pion production
in $pA$ collisions at the LHC ($\sqrt{s}=5.02$ TeV)
for different mass numbers $A$.}
\label{fig:cor1}
\end{center}
\end{figure}
\normalsize
\begin{figure}[h!]
\large
\begin{center}
\scalebox{0.7}{\includegraphics{rhicpA_200_lowmass_y2_5}}\\
\scalebox{0.7}{\includegraphics{lhcpA_5000_lowmass_y4}}\\
\scalebox{0.7}{\includegraphics{lhcpA_5000_highmass_y4}}
\caption{(Color online) The correlation function $C(\Delta \phi)$
for the associated DY pair and pion production
in $pPb$ collisions at RHIC ($\sqrt{s}=0.2$ TeV)
and LHC ($\sqrt{s}=5.02$ TeV) energies and different values
of the dilepton invariant mass. }
\label{fig:cor2}
\end{center}
\end{figure}
\normalsize
Finally, let us discuss the azimuthal correlation between the DY pair and a forward pion produced in $pA$ collisions
taking into account the $Z^0$ boson contribution in addition to the virtual photon as well as the saturation effects.
As was discussed earlier in Refs.~\cite{stastody,amir,Basso_pp}, the dilepton-hadron correlations can serve
as an efficient probe of the initial state effects. The gauge boson radiation off the projectile quark has a back-to-back
correlation in the transverse momentum. However, the multiple scatterings of the quark in a high-density gluonic system
imply that it acquires a transverse momentum comparable with the saturation scale. As a consequence, the intrinsic
angular correlations are expected to be suppressed, with the suppression being directly related to the magnitude of
the saturation scale. As the saturation scale is strongly dependent on the nuclear mass number, we expect
the effects predicted in Ref.~\cite{Basso_pp} for the DY process in $pp$ collisions to be amplified in the $pA$ case.
Considering the $G^*=\gamma^*/Z^0$ boson as a trigger particle, the corresponding correlation function can be written as
\begin{eqnarray}
C(\Delta \phi) =
\frac{ 2\pi\, \int_{p_T, p_T^h > p_T^{\rm cut}} dp_T p_T \;
dp_T^h p_T^h \;
\frac{d \sigma(pA \to h G^* X)}{d Y d y_h d^2p_T d^2p_T^h }}
{\int_{p_T > p_T^{\rm cut}} dp_T p_T \;
\frac{d \sigma(pA\rightarrow G^* X)}{dY d^2 p_T} }\,,
\label{corr}
\end{eqnarray}
where $p_T^{\rm cut}$ is the experimental lower cut-off on the transverse momenta of the resolved $G^*$ (or dilepton)
and the hadron $h$, and $\Delta \phi$ is the azimuthal angle between them. The differential cross sections entering the numerator and denominator
of $C(\Delta \phi)$ have been derived for $pp$ collisions in Ref.~\cite{Basso_pp} taking into account both the $\gamma^*$ and $Z^0$ boson
contributions and can now be directly generalised to $pA$ collisions by accounting for the nuclear dependence of the saturation scale.
We refer to Ref.~\cite{Basso_pp} for details of the differential cross sections. The main input in the calculation of the correlation function
is the unintegrated gluon distribution $F(x_g,k^g_T)$, where $x_g$ and $k^g_T$ are the momentum fraction and transverse momentum
of the target gluon, which is directly associated to the description of the QCD dynamics in the high energy limit \cite{hdqcd}. As demonstrated
in Ref.~\cite{stastody}, the correlation function for $\Delta \phi \approx \pi$ is determined by the low-$k^g_T$ behaviour of the unintegrated
gluon distribution which is strongly associated with the saturation effects. Since in this regime the current parametrizations for $F(x_g,k^g_T)$
are similar, the resulting predictions for $C(\Delta \phi \approx \pi)$ are almost model independent. In order to compare our predictions with
those presented in Refs. \cite{stastody,Basso_pp}, in what follows we study the correlation function $C(\Delta \phi)$ taking
the unintegrated gluon distribution (UGDF) in the following form
\begin{equation}
F(x_g,k^g_T) = \frac{1}{\pi Q_{s,A}^2(x_g) }\, e^{-{k^g_T}^2/Q_{s,A}^2(x_g)} \,,
\end{equation}
where
$Q_{s,A}^2 (x) = A^{1/3} c(b)\,Q_{s,p}^2(x)$
is the saturation scale and $Q_{s,p}^2(x)$ is given by Eq. (\ref{satsca1}).
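As a cross-check of the Gaussian ansatz, the Python sketch below (illustrative; the centrality factor is set to $c(b)=1$ and the GBW parameters of Eq.~(\ref{satsca1}) are reused) verifies that $F(x_g,k^g_T)$ is normalized to unity, $\int d^2k^g_T\,F = 1$:

```python
import math

Q02, X0, LAM = 1.0, 4.01e-5, 0.277   # GBW saturation-scale parameters

def Qs2_p(x):
    """Proton saturation scale, Eq. (satsca1), in GeV^2."""
    return Q02 * (X0 / x)**LAM

def ugdf(x_g, kT, A, c_b=1.0):
    """Gaussian UGDF with Q_{s,A}^2 = A^{1/3} c(b) Q_{s,p}^2;
    c(b) = 1 is a toy choice for the centrality factor."""
    QsA2 = A**(1.0 / 3.0) * c_b * Qs2_p(x_g)
    return math.exp(-kT**2 / QsA2) / (math.pi * QsA2)
```

A larger $A$ broadens the $k^g_T$ distribution (and lowers its value at $k^g_T=0$), which is the origin of the stronger angular decorrelation expected in $pA$ collisions.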
In the numerical analysis, the CT10 NLO parametrization \cite{ct10}
for the parton distributions and the Kniehl-Kramer-Potter (KKP)
fragmentation function $D_{h/f}(z_h,\mu_F^2)$ of a quark
to a neutral pion \cite{kkp} have been used. Moreover, we assume
that the minimal transverse momenta ($p_T^{\rm cut}$)
of the gauge boson $G^*$ and the pion $h=\pi$ in Eq.~(\ref{corr})
are the same and equal to 1.5 and 3.0 GeV for RHIC and LHC energies, respectively.
As in our previous study \cite{Basso_pp}, we assume
that the factorisation scale is given by the dilepton
invariant mass, i.e. $\mu_F = M_{l\bar{l}}$.
The analysis of the correlation function in $pp$ collisions performed in Ref.~\cite{Basso_pp} has demonstrated that an increase of
the saturation scale at large rapidities implies a larger value for the transverse momentum carried by the low-$x$ gluons in the target
which generates the decorrelation between the back-to-back jets. Since the magnitude of the saturation scale is amplified by
the factor $A^{1/3}$ in nuclear collisions we should also expect a similar effect in $pA$ collisions. In particular, the double-peak
structure of $C(\Delta \phi)$ in the away-side dilepton-pion angular correlation function predicted to be present in $pp$ collisions \cite{Basso_pp}
should also occur in the $pA$ case. As discussed in detail in Refs.~\cite{stastody,amir,Basso_pp}, this double peak in the region
where $\Delta \phi \approx \pi$ is directly associated to the interplay between the local minimum of the $h+G$ cross section
for gluon $k^g_T = |\vec{p}_T + \vec{p}_{Tq}| \rightarrow 0$, where $\vec{p}_T$ ($\vec{p}_{Tq}$) is the transverse momentum of the gauge boson (quark),
and the two maxima of the cross section when $k^g_T \rightarrow Q_s$. Therefore, the double-peak structure is sensitive to the magnitude
of the saturation scale as well. In Fig.~\ref{fig:cor1} we present our predictions for the correlation function $C(\Delta \phi)$ of the associated DY pair and pion
in $pA$ collisions at LHC energies, considering different values of the atomic mass number. We notice that larger values of $A$ imply a stronger
smearing of the back-to-back scattering pattern and suppress the away-side peak in the $\Delta \phi$ distribution. This behaviour is expected since
in high energy collisions the produced parton on average carries an intrinsic transverse momentum of the order of the saturation scale, which increases for larger $A$.
Such an increase in $Q_s$ washes away the intrinsic back-to-back correlations. Moreover, at larger $Q_s$ one observes that the single-particle inclusive cross section
in the denominator of Eq.~(\ref{corr}) is enhanced while the two-particle correlated cross section (in the numerator of Eq.~(\ref{corr})) is suppressed.
As a consequence, $C(\Delta \phi)$ decreases with an increase of the saturation scale.
Our predictions for RHIC and LHC energies and $pPb$ collisions are presented in Fig.~\ref{fig:cor2} considering small and large dilepton invariant masses.
Our results for small invariant masses, shown in the upper and middle panels, agree with those presented in Refs.~\cite{stastody,Basso_pp}. On the other hand,
our predictions for the correlation function at large invariant masses (lower panel) are at variance with the results obtained in Ref.~\cite{Basso_pp} for $pp$ collisions.
We also predict a double-peak structure for large invariant masses in $pPb$ collisions. As discussed before, in $pA$ collisions the saturation scale is amplified
by a factor $A^{1/3}$, implying larger values for the average transverse momentum acquired by the quark in its multiple scatterings off the target. Moreover,
the typical transverse momentum of the produced particles in $pA$ collisions at $\sqrt{s} = 5.02$ TeV is smaller than in $pp$ collisions at $\sqrt{s} = 14$ TeV.
As a consequence, the impact of gluonic interactions on the produced quark is larger in $pPb$ than in $pp$ collisions. This implies a certain imbalance of the back-to-back
photon-quark jets also for large invariant masses in $pA$ collisions, washing out the intrinsic correlations and thus generating the double-peak structure observed
in Fig.~\ref{fig:cor2}. The away-side peak is strongly suppressed at forward rapidities and the double-peak structure is present in the kinematic range probed
by RHIC and the LHC. Consequently, we believe that our predictions can be compared with future experimental analyses. If experimentally confirmed, this decorrelation
and the double-peak structure are important probes of underlying saturation physics.
A more elaborate study of the double-peak structure in the correlation function requires a multidimensional numerical analysis of the $C(\Delta \phi)$ function
at different pion and dilepton rapidities, transverse momenta and dilepton invariant masses, as well as experimental cuts. Besides, it would be instructive to investigate
how the ISI, GS and coherence effects influence this function in various models for unintegrated gluon distributions. These questions are the subject of a separate,
larger project, which can be planned for the future provided that the corresponding experimental data become available.
\section{Summary}
\label{conc}
In this paper, we carried out an extensive phenomenological analysis of the inclusive DY $\gamma^*/Z^0 \to l\bar l$ process
in $pA$ collisions within the color dipole approach. In particular, the inclusion of the $Z^0$ contribution enabled us to study
for the first time the impact of the nuclear effects at large invariant dilepton masses. In contrast to hadron production,
the DY reaction in $pA$ collisions is a very effective tool for the study of nuclear effects, since no final-state interactions,
such as energy loss or absorption, are expected. For this reason, the DY process represents a direct and clean probe
of the initial-state medium effects, not only in $pA$ interactions but also in heavy-ion collisions.
The analysis of the DY process off nuclei in different kinematic regions allows us to investigate the magnitude of particular nuclear effects.
In this paper, the contributions of the saturation, gluon shadowing (GS) and initial-state energy loss (ISI) effects to DY observables
were estimated considering the kinematical range probed at RHIC and the LHC. The corresponding predictions for the dilepton invariant mass and
transverse momentum differential distributions have been compared with available data at the LHC and a reasonable
agreement was found. Moreover, the invariant mass, rapidity and transverse momentum dependencies of the nucleus-to-nucleon
ratio of production cross sections, $R_{pA} = \sigma^{\rm DY}_{pA}/(A \cdot \sigma^{\rm DY}_{pp})$, were estimated.
Our results demonstrated that the ratio $R_{pA}$ is strongly modified by the GS and ISI effects. In particular, we found that both GS and ISI effects
cause a significant suppression in DY production. While the GS effects dominate at small Bjorken-$x$ in the target, the ISI effects
(in accordance with Eq.~(\ref{eq-ISI})) become effective at large transverse momenta $p_T$ and invariant masses $M_{l\bar{l}}$
of dilepton pairs as well as at large Feynman $x_F$ (or forward rapidities). Consequently, at forward rapidities in some kinematic regions at the LHC
one can investigate only a mixture of both (GS and ISI) effects, even at large $p_T$ values. In contrast to other inclusive processes, the advantage
of the DY reaction lies in the elimination of the GS-ISI mixing through the reduction of coherence effects at larger values of the dilepton invariant mass.
An investigation of nuclear suppression at large $p_T$ then represents a clear manifestation of net ISI effects even at forward rapidities, as
is demonstrated in Fig.~\ref{fig:pt_lhc_large}. Thus, such a study of nuclear suppression at large dilepton invariant masses, transverse momenta and rapidities,
especially at LHC energies, favours the DY process as an effective tool for the investigation of ISI effects.
Besides, we have analysed the correlation function $C(\Delta \phi)$ in azimuthal angle $\Delta \phi$ between the produced dilepton and a forward pion
which results from the fragmentation of a projectile quark radiating the virtual gauge boson. The corresponding observable has been studied at various energies
in $pA$ collisions in both the low and high dilepton invariant mass ranges as well as at different rapidities of final states. We found a characteristic
double-peak structure of the correlation function around $\Delta \phi \simeq \pi$ at various dilepton mass values and for a very forward pion.
Our results indicated that a measurement of the correlation function at different energies at RHIC and LHC can be useful to probe underlying
dynamics by setting even stronger constraints on saturation physics. Finally, our results have demonstrated that the study of the DY reaction
in $pA$ collisions is ideal to probe the nuclear effects expected to be present at high energies and large nuclei.
\section*{Acknowledgements}
E.B. is supported by CAPES and CNPq (Brazil), contract numbers 2362/13-9
and 150674/2015-5. V.P.G. has been supported by CNPq, CAPES and FAPERGS, Brazil.
R.P. is supported by the Swedish Research Council, contract number 621-2013-428.
J.N. and M.K. are partially supported by the grant 13-20841S of the Czech Science Foundation
(GA\v CR) and by the Grant M\v SMT LG15001. J.N. is supported by the Slovak Research and
Development Agency APVV-0050-11 and by the Slovak Funding Agency, Grant 2/0020/14.
\bibliographystyle{unsrt}
\section{Introduction} \label{sec:intro}
\begin{figure*}
\centering
\begin{subfigure}{\columnwidth}
\tikzset{roundrect/.style={
rectangle,
rounded corners,
minimum size=6mm,
draw=black,
}}
\centering
\begin{tikzpicture}[thick,auto, >=Stealth, node distance=2cm,scale=0.95, every node/.style={scale=0.95}]
\node[roundrect, align=center] (oracle) at (0,0) {\textbf{Oracle}};
\node[roundrect, below right of=oracle, xshift=1.2cm, align=center] (classifier) {\textbf{Predictor} \\ {\footnotesize BNN}};
\node[roundrect, below left of=oracle, xshift=-1.2cm, align=center] (guide) {\textbf{Guide}\\{\footnotesize Acquisition Score}\\ {\footnotesize (heuristic)}};
\coordinate[above=of classifier, xshift=-0.9cm,yshift=-2cm] (corner);
\draw [->] (oracle) edge [out=0, "{\footnotesize label}"] (classifier)
(classifier) edge [out=190,in=350, "{\footnotesize prediction uncertainty}"] (guide)
(guide) edge [in=180, near start, "{\footnotesize chosen sample}"] (oracle);
\end{tikzpicture}
\caption{Active learning pipeline.}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\centering
\tikzset{roundrect/.style={
rectangle,
rounded corners,
minimum size=6mm,
draw=black,
}}
\begin{tikzpicture}[thick,auto, >=Stealth, node distance=2cm,scale=0.85, every node/.style={scale=0.85}]
\node[roundrect, align=center] (oracle) at (0,0) {\textbf{Oracle}};
\node[roundrect, black!30!red, below left of=oracle, xshift=-0.5cm, align=center] (guide) {\textbf{Guide}\\ {\footnotesize Policy BNN}};
\node[roundrect, black!30!red, below of=guide, align=center, font=\footnotesize] (state) {Bulk filter\\ (heuristic)};
\node[roundrect, right of=state, xshift=2cm, align=center] (predict) {\textbf{Predictor}\\ \footnotesize BNN};
\draw [->] (oracle) edge [out=0, in=90, "{\footnotesize label}"] (predict)
(predict) edge [out=200, in=340, "{\footnotesize prediction uncertainty}"] (state)
(state) edge [out=100, black!30!red, in=260] node[align=center, black!30!red, swap, font=\scriptsize] {probabilistic\\ environment\\ state} (guide)
(guide) edge [out=90, in=180] node[align=center, pos=0.5, swap, font=\scriptsize] {chosen\\ sample} (oracle)
(oracle) edge [out=340, black!30!red, in=0] node[align=center, black!30!red, pos=0.6, font=\footnotesize] {reward} (guide);
\end{tikzpicture}
\caption{Our method.}
\end{subfigure}
\caption{The proposed pipeline. The standard active learning pipeline is summarized as the interplay between three parts. \emph{(a)} An \emph{oracle} provides a set of labeled data for a \emph{predictor} (here a BNN) to learn on. It in turn provides predictive uncertainties to the \emph{guide}, a usually fixed, hard-coded acquisition function, which in turn communicates to the oracle which points to label next, restarting the cycle.
\emph{(b)} We replace the fixed acquisition function with a policy BNN that learns with a probabilistic state and reinforcement feedback from the oracle how to optimally choose the next points (new parts in red). It is thus able to adapt itself flexibly to the data set at hand.}\label{fig:figureone}
\end{figure*}
The active learning setup consists of a base predictor that chooses the order in which the data points are to be labeled using an acquisition function. Contrary to the {\it tabula rasa} ansatz of the present deep learning age, the state of the art in active learning has continued to rely on hand-designed acquisition functions to rank the unlabeled sample space. Different studies observed different acquisition functions to perform optimally for specific applications after evaluating various choices. The critical fact is that active learning is meant to address applications where data labeling is extremely costly, and it is not possible to know the ideal acquisition function for a given application a priori. Once an acquisition function is chosen and active learning has been performed based on it, the labeling budget is already exhausted, leaving no possibility for another try with an alternative acquisition function. This limitation can only be circumvented by adapting the acquisition function to the data during the active learning process, using feedback from the impact of the previous labeling rounds on model fit. For real-world scenarios to which active learning is applicable, learning the acquisition function as well is thus not only an option driven by practical concerns such as the avoidance of handcrafting effort, but also an absolute necessity stemming from epistemic limitations.
The acquisition functions in active learning are surrogate models that map a data point to a value that encodes the expected contribution of observing its label to model fit. The founding assumption of the active learning setup is that evaluating the acquisition score of a data point is substantially cheaper than acquiring its ground-truth label. Hence, the acquisition functions are expected to be both computationally cheap and maximally accurate in detecting most information-rich regions of the sample space. These competing goals are typically addressed by information-theoretic heuristics.
Possibly the most frequently used acquisition heuristic is {\it Maximum Entropy Sampling}, which assigns the highest score to the data point for which the predictor reports the highest entropy (i.e.~uncertainty). This criterion builds on the assumption that the most valuable data point is the one the model is maximally unfamiliar with. While maximally intuitive, this method remains agnostic to knowledge from the current model fit about how much the new label can impact the model uncertainty. Another heuristic with comparable reception, {\it Bayesian Active Learning by Disagreement (BALD)}~\cite{houlsby2012collaborative}, benefits from this additional information by maximizing the mutual information between the predictor output and the model parameters.
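The two heuristics can be contrasted directly in code. The sketch below is a hypothetical helper, not code from this paper: it computes both scores from a stack of stochastic forward passes, with BALD as the mutual information, i.e.~the entropy of the mean predictive minus the mean per-sample entropy.

```python
import numpy as np

def acquisition_scores(probs):
    """Compute MaxEnt and BALD scores from stochastic forward passes.
    probs: array of shape (S, N, C) -- S samples of the predictive
    distribution for N candidate points and C classes.
    Returns two arrays of shape (N,)."""
    eps = 1e-12
    mean_p = probs.mean(axis=0)                                    # (N, C)
    # Maximum Entropy Sampling: entropy of the mean predictive.
    maxent = -(mean_p * np.log(mean_p + eps)).sum(axis=1)
    # BALD: mutual information = H[mean predictive] - mean of entropies.
    exp_ent = -(probs * np.log(probs + eps)).sum(axis=2).mean(axis=0)
    return maxent, maxent - exp_ent
```

Note how the two can disagree: a point where all passes are uncertain in the same way has high MaxEnt but zero BALD, whereas confidently disagreeing passes score high on both.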
A second major vein of research approaches the active learning problem from a geometric instead of an uncertainty based perspective, e.g.~via selection of a core-set~\cite{sener2017active}.
None of the aforementioned heuristics has a theoretical superiority that is sufficient to rule out all other options. Maxent strives only to access unexplored data regions. BALD performs the same by also taking into account the expected effect of the newly explored label on the uncertainty of the model parameters. While some studies argue in favor of BALD due to this additional information it enjoys~\cite{srinivas2012information}, others prefer to avoid this noisy estimate drawn from an under-trained model~\cite{qiu2017maximum}.
This paper presents a data-driven method that alleviates the consequences of the unsolved acquisition function selection problem. As prediction uncertainty is an essential input to acquisition heuristics, we choose a deep Bayesian Neural Net (BNN) as our base predictor. In order to acquire high-quality estimates of prediction uncertainty with an acceptable computational cost, we devise a deterministic approximation scheme that can both effectively train a deep BNN and calculate its posterior predictive density following a chain of closed-form operations. Next, we incorporate all the probabilistic information provided by the BNN predictions into a novel state design, which brings about another full-scale probability distribution. This distribution is then fed into a probabilistic policy network, which is trained by reinforcement feedback collected from every labeling round in order to inform the system about the success of its current acquisition function. This feedback fine-tunes the acquisition function, bringing about improved performance in the subsequent labeling rounds.
Figure~\ref{fig:figureone} depicts the workflow of our method.
We evaluate our method on three benchmark vision data sets from different domains and complexities: MNIST for images of handwritten digits, FashionMNIST for greyscale images of clothes, and CIFAR-10 for colored natural images. We observe in our experiments that the policy net is capable of inventing an acquisition function that outperforms all the handcrafted alternatives if the data distribution permits. In the rest of the cases, the policy net converges to the best-performing handcrafted choice, which varies across data sets and is unknown prior to the active learning experiment.
\section{The Model}\label{sec:model}
Our method consists of two major components: a predictor and a policy net guiding the predictor by learning a data set specific acquisition function.
As the predictor, described in Section~\ref{ssec:bnn}, we use a BNN, whose posterior predictive density we use to distill the system state. The policy net, another BNN, takes this state as input to decide which data points to request labels for next.\footnote{ For illustrative purposes we rely on a Central Limit Theorem approach to efficiently marginalize the weights of these BNNs. In general any approach to training a BNN which provides trustworthy predictive uncertainties could be used.} We describe this second part of the pipeline in Section~\ref{ssec:agent}. Since we introduce a reinforcement learning based method for active learning, we refer to it as \emph{Reinforced Active Learning (RAL)}.
\subsection{Predictor: Bayesian Neural Network}\label{ssec:bnn}
Let $\mathcal{D} = \{(\mathbf{x}_n, \mathbf{y}_n)_{n=1}^N\}$ be a data set consisting of $N$ tuples of feature vectors $\mathbf{x}_n \in \mathds{R}^m$ and labels $\mathbf{y}_n \in \{0,1\}^C$ for a $C$ dimensional binary output label. Parameterizing an arbitrary neural network $f(\cdot)$ with parameters $\mathbf{w}$ following some prior $p(\mathbf{w})$, we assume the following probabilistic model
\begin{equation}
\mathbf{w} \sim p(\mathbf{w}),\quad\mathbf{y}|\mathbf{x}, \mathbf{w} \sim \prod_c^C\mathcal{B}er\big (y_c|\Phi(f_c(\mathbf{x};\mathbf{w})\big),
\end{equation}
where $f_c$ is the $c$th output channel of the net, $\Phi(u)=\int_{-\infty}^u \mathcal{N}(x|0,1) dx$, and $\mathcal{B}er(\cdot|\cdot)$ is a Bernoulli distribution.
The calculation of the posterior predictive%
\begin{equation}
p({\bf y}^* | {\bf x}^*, \mathcal{D}) = \int p({\bf y}^* | {\bf x}^*, \mathbf{w}) p(\mathbf{w} | \mathcal{D}) d\mathbf{w}
\end{equation}
involves the calculation of the posterior distribution on the latent variables, which can be accomplished by Bayes rule
\begin{equation}
p(\mathbf{w} | \mathcal{D} ) = \dfrac{p({\bf Y}|{\bf X},\mathbf{w}) p(\mathbf{w})}{\int p({\bf Y}|{\bf X},\mathbf{w}) p(\mathbf{w}) d\mathbf{w}},
\end{equation}
for $\mathbf{X} = \{\mathbf{x}_1,...,\mathbf{x}_N\}$ and $\mathbf{Y} = \{\mathbf{y}_1,...,\mathbf{y}_N\}$.
As this is intractable in general, we require approximate inference techniques. We aim for
high-quality prediction uncertainties. A sampling-based approach is not practical for vision-scale applications where neural nets with a large parameter count are being used. Instead we use variational inference (VI). In order to keep the calculations maximally tractable while benefiting from the stochasticity of $\mathbf{w}$, we formulate a normal mean-field variational posterior (which could be generalized):
\begin{equation}
q_\theta(\mathbf{w}) = \prod_i \mathcal{N}(w_i|\mu_i, \sigma_i^2), \label{eq:var_dist}
\end{equation}
where the tuple $({\mu}_{i},{\sigma}_{i}^2)$ represents the variational parameter set for weight $w_i$ of the network $f(\cdot)$ and ${\theta = \{(\mu_i, \sigma_i^2)_i\}}$. VI approximates the true intractable posterior by optimizing $\theta$ to minimize the Kullback-Leibler (KL) divergence between $q_\theta(\mathbf{w})$ and $p(\mathbf{w} | {\bf X}, {\bf Y} )$, which boils down to minimizing the negative evidence lower bound
\begin{align}
\mathcal{L}_\mathrm{class}(\theta; \mathcal{D}) &= -\sum_{n=1}^N \mathds{E}_{q_\theta(\mathbf{w})}\big[ \log p({\bf y}_n | f({\bf x}_n; \mathbf{w}) )\big]\nonumber\\
&\qquad\qquad + \mathrm{KL}\big(q_\theta(\mathbf{w})||p(\mathbf{w})\big).
\end{align}
In this formula, the first term on the r.h.s.~maximizes the data fit (i.e.~minimizes the reconstruction loss), while the second term penalizes divergence of the posterior from the prior, imposing the Occam's razor principle on the preferred solution.
The modeler has control on the model families of both $q_\theta(\mathbf{w})$ and $p(\mathbf{w})$. Hence, choosing the prior $p(\mathbf{w})$ suitably to the normally distributed $q_\theta(\mathbf{w})$ assures an analytically tractable solution for the $\mathrm{KL}(\cdot||\cdot)$ term.\footnote{We use $p(w_i) = \mathcal{N}(w_i|0, \alpha^{-1})$ with a fixed precision $\alpha$.} However, the data fit term involves a nonlinear neural net, which introduces difficulties for keeping the calculation of the expectations tractable. A major issue is that we need to differentiate this term with respect to the variational parameters $\theta$, which appear in the density $q_\theta(\mathbf{w})$ with respect to which the expectation is taken. This problem is overcome by the reparameterization trick~\cite{kingma2013auto}, which re-formulates $q_\theta(\mathbf{w})$ as a sampling step from a parameter-free distribution and a deterministic variable transformation.
\begin{align}
\mathcal{L}_\mathrm{class}(\theta; \mathcal{D}) &= -\sum_{n=1}^N \mathds{E}_{p(\varepsilon)}\big[ \log p({\mathbf{y}}_n | f(\mathbf{x}_n; \mathbf{w}=\mu + \sigma \varepsilon))\big]\nonumber\\
&\qquad\qquad+ \mathrm{KL}\big(q(\mathbf{w})||p(\mathbf{w})\big),
\end{align}
where the variational parameters $\theta$ now appear only inside the expectation term, so we can take the gradient of the loss with respect to them and approximate the integral in the expectation by Monte Carlo sampling. Although this provides an unbiased estimator of the exact gradient, it perturbs the global variables of a highly parameterized system and therefore has prohibitively high variance. The remedy is to postpone sampling one step further.
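As a toy numerical check of the reparameterization trick (illustrative code, not part of the model), take the objective $E_q[w^2]$ with $q=\mathcal{N}(\mu,\sigma^2)$: writing $w=\mu+\sigma\varepsilon$ gives the per-sample gradient $2w$ with respect to $\mu$, whose sample mean recovers the exact gradient $2\mu$.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_mu_reparam(mu, sigma, n_samples=200_000):
    """Reparameterized estimate of d/dmu E_{w~N(mu,sigma^2)}[w^2].
    With w = mu + sigma*eps and eps ~ N(0,1), d(w^2)/dmu = 2w,
    so the estimator is the sample mean of 2w (exact value: 2*mu)."""
    eps = rng.standard_normal(n_samples)
    w = mu + sigma * eps
    return (2.0 * w).mean()
```

The estimator is unbiased, but its variance grows with $\sigma$; this is the variance problem that motivates the local-variable sampling described next.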
Let the pre-activation and feature map of $j$th neuron of layer $l$ for data point $n$ be $b_{njl}$ and $h_{njl}$, respectively. We then have
\begin{equation}
w_{ijl} \sim \mathcal{N}(w_{ijl}|\mu_{ijl},{\sigma}_{ijl}^2),\quad b_{njl} = \sum_{i=1}^{I_{l-1}} w_{ijl} h_{n i {l-1} },
\end{equation}
as a repeating operation at every layer transition within a BNN.\footnote{The same line of reasoning directly applies to convolutional layers where the sum on $b_{njl}$ is performed in a sliding window fashion.} As $h_{n i {l-1} }$ is the sampling output of layer $l-1$, it is a deterministic input to layer $l$. Consequently, $b_{njl}$ is a weighted linear sum of $I_{l-1}$ independent normal random variables, which is another normal random variable with
\begin{equation}
b_{njl} \sim \mathcal{N} \Bigg (b_{njl} \Bigg | \sum_{i=1}^{I_{l-1}} \mu_{ijl} h_{n i {l-1} }, \sum_{i=1}^{I_{l-1}} \sigma_{ijl}^2 h_{n i {l-1} }^2 \Bigg ).
\end{equation}
We now take separate samples for the local variables, further reducing the estimator variance stemming from the Monte Carlo integration. This is known as Variational Dropout~\cite{kingma2015variational}, as the process performed for the expected log-likelihood term is equivalent to Gaussian dropout with rate ${\sigma}_{ijl}^2/\mu_{ijl}^2$ for weight $w_{ijl}$.
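For a single fully connected layer, this local-reparameterization step can be sketched as follows (an illustrative NumPy sketch, not the paper's implementation; names and shapes are assumptions): pre-activations are sampled directly from their induced normal distribution instead of sampling a weight matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

def layer_local_reparam(h, mu, sigma):
    """One layer of Variational Dropout via local reparameterization:
    rather than sampling a weight matrix, sample the pre-activations
    b ~ N(h @ mu, (h**2) @ sigma**2) directly.
    h: (N, I) incoming features; mu, sigma: (I, J) weight moments."""
    m = h @ mu                      # pre-activation means
    v = (h ** 2) @ (sigma ** 2)    # pre-activation variances
    return m + np.sqrt(v) * rng.standard_normal(m.shape)
```

Sampling per pre-activation rather than per weight decorrelates the noise across data points in a minibatch, which is the source of the variance reduction.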
\subsubsection{Fast Dropout and the CLT Trick}
Fast Dropout~\cite{wang2013fast} has been introduced as a technique to perform Gaussian dropout without taking samples. The technique builds on the observation that $b_{njl}$ is essentially a random variable that consists of a sheer sum of a provisionally large number of other random variables. This is a direct call to the Central Limit Theorem (CLT) that transforms the eventual distribution into a normal density, which can be trivially modeled by matching the first two moments
\begin{align*}
p(b_{njl}) &\approx \mathcal{N}(b_{njl} | \phi_{njl}, \lambda_{njl}^2),\quad\text{where}\\
\phi_{njl} &= \mathds{E} \Bigg[ \sum_{i=1}^{I_{l-1}} w_{ijl} h_{n i {l-1} } \Bigg ]= \sum_{i=1}^{I_{l-1}} \mathds{E}[w_{ijl}] \mathds{E}[ h_{ni{l-1}} ],\\
\lambda_{njl}^2 &= \mathrm{var} \Bigg[ \sum_{i=1}^{I_{l-1}} w_{ijl} h_{n i {l-1} } \Bigg ] \\
&= \sum_{i=1}^{I_{l-1}} \mathrm{var}[h_{n i {l-1} }] \mathds{E}[w_{ijl}]^2 + \mathrm{var}[w_{ijl}] \mathds{E}[ h_{n i {l-1} }^2 ]. %
\end{align*}
Here, $\mathds{E}[w_{ijl}]=\mu_{ijl}$ and $\mathrm{var}[w_{ijl}]=\sigma_{ijl}^2$, as determined in Equation~\ref{eq:var_dist}. We also require the first two moments of $h_{n i {l-1}}=r(b_{n i {l-1}})$, for which it suffices to solve
\begin{align*}
\mathds{E}[h_{n i {l-1}}] &= \int r(b_{n i {l-1} }) p(b_{n i {l-1} }) d b_{n i {l-1} }, \\
\mathds{E}[h_{n i {l-1}}^2] &= \int r(b_{n i {l-1} })^2 p(b_{n i {l-1} }) d b_{n i {l-1} }.
\end{align*}
These two are analytically tractable for standard choices of activation functions, such as when $r(\cdot)$ is the ReLU activation and $p(b_{n i {l-1} })$ is a normal distribution~\cite{frey1999variational}. Note that
$b_{n i {l-1} }$ is either the linear activation of the input layer, a weighted sum of normals, hence another normal, or a hidden layer, which
will then similarly follow CLT and therefore already be approximated as a normal. Hence, the above pattern repeats throughout the entire network, allowing a tight closed-form approximation of the analytical solution. Below, we refer to this method as the {\it CLT trick}.
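For the ReLU case, $r(u)=\max(0,u)$ with $b\sim\mathcal{N}(m,v)$ and $s=\sqrt{v}$, these two integrals have the standard closed forms $\mathds{E}[h]=m\,\Phi(m/s)+s\,\phi(m/s)$ and $\mathds{E}[h^2]=(m^2+v)\,\Phi(m/s)+m\,s\,\phi(m/s)$, where $\phi$ is the standard normal density. A minimal sketch:

```python
import math

def relu_moments(m, v):
    """First two moments of h = max(0, b) for b ~ N(m, v), as needed to
    propagate the CLT approximation through a ReLU layer."""
    s = math.sqrt(v)
    z = m / s
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # its density
    return m * Phi + s * phi, (m * m + v) * Phi + m * s * phi
```

Alternating this moment map with the linear-layer moment matching above propagates a normal approximation through the whole network in closed form.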
\subsubsection{Closed-Form Uncertainty Estimation with BNNs}
Fast Dropout uses the aforementioned CLT trick only for implementing dropout. Here we extend the same method to perform variational inference by minimizing a deterministic loss, i.e.~avoiding Monte Carlo sampling altogether. Even though the CLT trick has previously been used mainly for expectation propagation, its direct application to variational inference has not been investigated prior to our work. Furthermore, the state of the art in deep active learning relies on test-time dropout~\cite{gal17a}, which is computationally prohibitive. Speeding up this process requires parallel computing on the {\it end-product}, hence imposing additional costs on the user of the model that cannot be addressed at production time. A thus far overlooked aspect of the CLT trick is that it also allows closed-form calculation of the posterior predictive density. Once training is over, we get a factorized surrogate for our posterior. Plugging this surrogate into the predictive density formula, for a new observation ${\bf x}^*$ we get
\begin{align}
p(&y_c^*|\mathbf{x}^*, \mathcal{D}) \approx \int \mathcal{B}er\big(y_c^* | \Phi(f_c(\mathbf{x}^*; \mathbf{w}) ) \big) q_\theta(\mathbf{w}) d\mathbf{w}\nonumber\\
&\approx \int \mathcal{B}er\big(y_c^* | \Phi(f_c^*) \big) \mathcal{N}\Big(f_c^*|g^L_c(\mathbf{x}^*),h^L_c(\mathbf{x}^*)\Big) d f_{c}^*\nonumber\\
&= \mathcal{B}er\Bigg (y_c^* \Bigg | \Phi \Bigg( \dfrac{g^L_c(\mathbf{x}^*)}{ \sqrt{h^L_c(\mathbf{x}^*)+1}} \Bigg ) \Bigg),\label{eq:pred}
\end{align}
where the functions $g^L_c(\mathbf{x}^*)$ and $h^L_c(\mathbf{x}^*)$ encode the cumulative map from the input layer to the moments of the top-most layer after repetitive application of the CLT trick across the layers.\footnote{Once $g_c^L(\mathbf{x}^*)$ and $h_c^L(\mathbf{x}^*)$ are computed one could also choose a categorical likelihood and approximate the integral via sampling.} With $p(y^*_c|\mathbf{x}^*, \mathcal{D})$ tightly approximated by an analytical calculation of a known distributional form, its high-order moments are readily available for downstream tasks, active learning in our case.
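The last step of Equation~\ref{eq:pred} uses the standard probit-Gaussian identity $\int \Phi(f)\,\mathcal{N}(f|g,h)\,df = \Phi\big(g/\sqrt{1+h}\big)$; the resulting closed-form predictive probability can be sketched in one line (illustrative code):

```python
import math

def predictive_prob(g, h):
    """Closed-form Bernoulli parameter  int Phi(f) N(f|g, h) df
    = Phi(g / sqrt(h + 1)), where g and h are the mean and variance
    of the top-layer activation."""
    return 0.5 * (1.0 + math.erf(g / math.sqrt(h + 1.0) / math.sqrt(2.0)))
```

Note that a larger output-layer variance $h$ pulls the probability toward $0.5$, which is exactly the uncertainty signal the acquisition step consumes.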
\subsection{Guide: The Policy Net}\label{ssec:agent}
As opposed to the standard active learning pipeline, our method is capable of adapting its acquisition scheme to the characteristics of individual data distributions. Differently from earlier work on data-driven label acquisition, our method can perform the adaptation {\it on the fly}, i.e.~while the active learning labeling rounds take place. This adaptiveness is achieved within a reinforcement learning framework, where a policy net is trained by rewards observed from the environment.
We denote the collection of unlabeled points by $\mathcal{D}_u$ and the labeled ones by $\mathcal{D}_l$. The variables $N_u$ and $N_l$ denote the number of data points in each respective case.
\paragraph{State.} In active learning, the label acquisition process takes place on the entire unlabeled sample set. However, a feasible reinforcement learning setup necessitates a condensed state representation. To this end, we first rank the unlabeled sample set by an information-theoretic heuristic, namely the maximum entropy criterion. As such heuristics assign similar scores to samples of similar character, consecutive samples in the ranking inevitably have high correlation. In order to break this correlation and enhance diversity, we follow the ranking from the top and pick every $K$th sample until we collect $M$ samples $\{ {\bf x}_1, \cdots, {\bf x}_M \}$. We adopt the related term from the Markov Chain Monte Carlo sampling literature and refer to this process as {\it thinning}. Feeding these samples into our predictor BNN (Equation~\ref{eq:pred}), we attain a posterior predictive density estimate for each and distill the state of the unlabeled sample space in the following distributional form:
\begin{equation}
S \sim \prod_{c=1}^C \prod_{m=1}^M \mathcal{N}\left (g_c^L(\mathbf{x}_m), h_c^L(\mathbf{x}_m)\right ),
\end{equation}
where $g_c^L(\cdot)$ and $h_c^L(\cdot)$ are the mean and variance of the activation of an output neuron, calculated as in Equation~\ref{eq:pred}.
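The entropy-ranking and thinning step can be sketched as follows (a hypothetical helper; tensor shapes and parameter names are assumptions, not the paper's code):

```python
import numpy as np

def build_state(mean_probs, g, h, M=10, K=5):
    """Distill the unlabeled pool into the compact state: rank by
    predictive entropy, thin by keeping every K-th sample from the top,
    and return the output-layer moments of the M survivors.
    mean_probs, g, h: arrays of shape (N, C)."""
    eps = 1e-12
    ent = -(mean_probs * np.log(mean_probs + eps)).sum(axis=1)
    order = np.argsort(-ent)       # indices by descending entropy
    picked = order[::K][:M]        # thinning: every K-th, top M
    return picked, g[picked], h[picked]
```

The returned moment pairs parameterize the $C\cdot M$-dimensional normal state $S$ fed to the policy net.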
\paragraph{Action.} At each labeling round, a number of data points are sampled from the set $\{\mathbf{x}_{1},...,\mathbf{x}_{M}\}$ according to the probability masses assigned to them by the present policy.
\paragraph{Reward.} The straightforward reward would be the performance of the updated predictor on a separate validation set. This, however, clashes with the constraints imposed on us by the active learning scenario. The motivating assumption is that labels are valuable and scarce, so it is not feasible to construct a separate labeled validation set large enough to give a good estimate of the desired test set performance for the policy net to calculate rewards. In our preliminary experiments, we have observed that merging the validation set with the existing training set and performing active learning on the remaining sample set consistently provides a steeper learning curve than keeping a validation set for reward calculation. Hence, we abandon this option altogether. Instead, we propose a novel reward signal
\begin{equation}
R = R_\text{improv} + R_\text{div},
\end{equation}
consisting of the two components detailed below.
The first component $R_\text{improv}$ assesses the improvement in data fit of the chosen point once it has been labeled. From a Bayesian perspective, a principled measure of model fit is the marginal likelihood. For a newly labeled pair $(\mathbf{x},\mathbf{y})$ the reward is
\begin{align}
R_\text{improv} &= \prod_{c=1}^C\int \mathcal{B}er\big (y_c|\Phi(f_c(\mathbf{x}; \mathbf{w}))\big ) q_\text{new}(\mathbf{w}) d\mathbf{w}\nonumber\\
&\quad-\prod_{c=1}^C\int \mathcal{B}er\big(y_c|\Phi(f_c(\mathbf{x}; \mathbf{w}))\big) q_\text{old}(\mathbf{w}) d\mathbf{w},
\end{align}
where $q_\text{old}(\cdot), q_\text{new}(\cdot)$ are our respective variational posteriors before and after training with the new point. The second component, $R_\text{div}$, encourages diversity across the chosen labels throughout the whole labeling round:
\begin{equation}
R_\text{div} = \frac{\# \text{unique labels requested}}{\# \text{label requests in this episode}}.
\end{equation}
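Both reward components can be sketched as follows. The sketch assumes the marginalized predictive Bernoulli means are already available as plain probability vectors (\texttt{probs\_new} and \texttt{probs\_old} are hypothetical names), which collapses the variational integrals above into products of Bernoulli likelihoods:

```python
import numpy as np

def diversity_reward(requested_labels):
    """R_div: fraction of unique class labels among all labels
    requested in the current episode (list of int class ids)."""
    return len(set(requested_labels)) / len(requested_labels)

def improvement_reward(y_onehot, probs_new, probs_old):
    """R_improv sketch: difference of Bernoulli likelihoods of the new
    label under the updated vs. previous posterior predictive.
    `probs_*` stand in for the marginalized predictive Bernoulli
    means, a simplification of the integrals in the text."""
    lik_new = np.prod(np.where(y_onehot == 1, probs_new, 1 - probs_new))
    lik_old = np.prod(np.where(y_onehot == 1, probs_old, 1 - probs_old))
    return lik_new - lik_old
```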
\paragraph{Policy net.}
The policy net $\pi(\cdot)$ is a second BNN parameterized by $\phi\sim p(\phi)$. Unlike the classifier, which takes deterministic data points as input, the policy net takes the state $S$, which follows a $C\cdot M$-dimensional normal distribution. Feeding such a stochastic input into our deterministic inference scheme is straightforward: we use the first two moments of the state during the first moment-matching round. The output of the policy net, in turn, parameterizes an $M$-dimensional categorical distribution over possible actions. To benefit fully from the BNN character of the policy and to marginalize over $\phi$, we again propagate moments as for the classifier and first compute $M$ binary probabilities for taking action $\tilde{a}_m$ at time point $t$
\begin{equation}
p(\tilde{a}_m^t) = \mathds{E}_{q(\phi)}\left[\mathcal{B}er\big(\tilde{a}_m^t|\Phi(\pi_m(S_t;\phi))\big)\right],
\end{equation}
and finally normalize them to choose the action $A_t$ via
\begin{equation*}
A_t \sim \mathcal{C}\!\mathit{at}(\mathbf{a}^t), \qquad\text{where~~} a^t_m = \tilde{a}^t_m\Big/{\textstyle \sum_j} \tilde{a}^t_j,
\end{equation*}
and $\mathcal{C}\!\mathit{at}(\cdot)$ is a Categorical distribution.
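The normalization and sampling of the action can be sketched as follows (a hypothetical helper, assuming the marginalized Bernoulli probabilities $p(\tilde{a}_m^t)$ are given):

```python
import numpy as np

def choose_action(bernoulli_probs, rng=None):
    """Normalize per-action Bernoulli probabilities into a categorical
    distribution and sample an action index, mirroring the policy-net
    output described above. `bernoulli_probs` holds p(a_m) for each of
    the M candidate points (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    a = np.asarray(bernoulli_probs, dtype=float)
    a = a / a.sum()                 # renormalize into a categorical
    return rng.choice(len(a), p=a)  # sample A_t ~ Cat(a)
```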
\paragraph{Algorithm.} We adopt the episodic version of the standard REINFORCE algorithm~\cite{williams1992simple} to train our policy net $\pi_\phi(\cdot|S_t)$, using a moving average over all past rewards as the baseline. A labeling episode consists of choosing a sequence of points to be labeled (with a discount factor of $\gamma=0.95$), after which the BNN is retrained and the policy net takes one update step. Our method iterates between labeling episodes, training the policy net $\pi$, and training the BNN $f$ until the labeling budget is exhausted. The pseudocode of our method is given in Algorithm~\ref{alg:pseudo}.
\begin{algorithm}[tb]
\SetAlgoLined
\KwIn{$\mathcal{D} = \{\mathcal{D}_u, \mathcal{D}_l\}$, labeling budget, state size~$M$, policy $\pi_\phi$, net $f_\theta$, episode length $T$}
\BlankLine
\tcp{Train an initial net}
train $f_\theta$ on $\mathcal{D}_l$ as described in Section~\ref{ssec:bnn}\\
\While{budget available}{
\tcp{The labeling episode}
generate state distribution $S_0$ from $\mathcal{D}_u$\\
\For{$t\in {1,...,T}$}{
sample $A_t$ via $\pi_\phi(S_{t-1})$\\
$\mathcal{D}_l \leftarrow \mathcal{D}_l \cup\{\text{data point selected via $A_t$}\}$\\
$\mathcal{D}_u \leftarrow \mathcal{D}_u \backslash\{\text{data point selected via $A_t$}\}$\\
generate state distribution $S_t$ from $\mathcal{D}_u$
}
\tcp{Update the agent and net}
train $f_\theta$ on $\mathcal{D}_l$\\
compute rewards $R_\text{div}, R_\text{improv}$ and returns $G_t$\\
update $\phi$ via gradient descent on $G_t\nabla_\phi \left(\log\pi_\phi (A_t|S_t) + \mathrm{KL}\big(q(\phi)||p(\phi)\big)\right)$
}
\caption{The RAL training procedure}\label{alg:pseudo}
\end{algorithm}
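The return computation behind the policy update in Algorithm~\ref{alg:pseudo} can be sketched as follows (the moving-average baseline enters as a scalar; the helper name is illustrative):

```python
def discounted_returns(rewards, gamma=0.95, baseline=0.0):
    """Episodic REINFORCE helper: compute baseline-subtracted
    discounted returns G_t for one labeling episode. The moving-average
    baseline from the text is passed in as a scalar here."""
    G, out = 0.0, []
    for r in reversed(rewards):       # accumulate from the episode end
        G = r + gamma * G
        out.append(G - baseline)      # variance-reduced return
    return out[::-1]                  # back to chronological order
```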
\section{Experiments}\label{sec:experiments}
As RAL is the first method to adapt its acquisition function while active learning takes place, its natural reference model is the standard active learning setup with a fixed acquisition heuristic.\footnote{see \url{github.com/manuelhaussmann/ral} for a reference pytorch implementation of the proposed model.}
We choose the two most established information-theoretic heuristics: Maximum Entropy Sampling (Maxent) and BALD. Gal~\textit{et~al.}~\shortcite{gal17a} already demonstrated how BNNs (in their case with fixed Bernoulli dropout) provide an improved signal to acquisition functions compared to using predictive uncertainty from deterministic nets.
We use our own BNN formulation both for RAL and for these baseline acquisition functions, giving them access to the same closed-form predictive uncertainty and ensuring maximal comparability by keeping the architecture and training procedure identical across all methods.
For Maxent one selects the point that maximizes the predictive entropy,
\begin{align*}
\argmax_{(x,y) \in \mathcal{D}_u}~&\mathrm{H}[p(y|x, \mathcal{D}_l)], \qquad\text{where}\\
\mathrm{H}[p(y|x, \mathcal{D}_l)]&=-\sum_{c=1}^C p(y=c|x,\mathcal{D}_l)\log p(y=c|x, \mathcal{D}_l),
\end{align*}
while BALD chooses the point that maximizes the expected reduction in posterior entropy, or equivalently
\begin{align*}
\argmax_{(x,y) \in \mathcal{D}_u} H[p(y|x,\mathcal{D}_l)] - \mathds{E}_{p(\mathbf{w}|\mathcal{D}_l)}\big[H[p(y|x, \mathbf{w})]\big]. %
\end{align*}
We compute the maximum entropy as well as the first of the two BALD terms in closed form, and calculate the second BALD term via sampling.
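Both criteria can be sketched directly from predictive class probabilities; for BALD, the sketch assumes one probability vector per weight sample, matching the sampling-based estimate of the second term:

```python
import numpy as np

def predictive_entropy(probs):
    """Maxent score: entropy of the (marginalized) posterior predictive
    class distribution; one row per candidate point."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def bald_score(prob_samples):
    """BALD sketch: predictive entropy minus expected conditional
    entropy, estimated from weight samples along axis 0 (illustrative,
    mirroring the sampling-based second term in the text)."""
    mean_p = prob_samples.mean(axis=0)
    expected_cond = predictive_entropy(prob_samples).mean(axis=0)
    return predictive_entropy(mean_p) - expected_cond
```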
We also include random sampling as a baseline, which is rather competitive on our kind of data, and evaluate on three data sets to show the adaptability of the method to the problem at hand.
\paragraph{Experimental Details.}
To evaluate the performance of the proposed pipeline,
we take as the predictor a standard LeNet5-sized model (two convolutional layers of 20 and 50 channels and two linear layers of 500 and 10 neurons) and as the guide a policy net consisting of two layers with 500 hidden neurons.
We use three different image classification data sets to simulate varying difficulty while keeping the architectures and hyperparameters fixed. MNIST serves as a simple data set of greyscale digits; FashionMNIST is a more difficult data set of greyscale clothing items; and CIFAR-10, which requires the classification of colored natural images, is a very difficult data set given the small classifier depth.
The active learning assumption that labels are scarce and expensive also entails that a large separate validation set for evaluating and fine-tuning hyperparameters is not feasible. We therefore keep all hyperparameters fixed to common defaults from the literature; both nets are optimized via Adam~\cite{kingma2014adam} with its suggested hyperparameters.
The predictor is trained for 30 epochs between labeling rounds (labeling five points per round), while the policy net gets one update step after each round. To avoid costly retraining from scratch after each labeling round, the predictor net is initialized to the parameters learned in the previous round, with a linearly decreasing learning rate after each round.
In each experiment the state is constructed by ranking the unlabeled data according to their predictive entropy and then taking every twentieth point until $M=50$ points are collected. Since all three data sets pose a ten-class classification problem, the result is a $500$-dimensional normal distribution as the input to the policy network. We stop after having collected 400 points, starting from an initial set of 50 data points.
\paragraph{Results.} We summarize the results in Table~\ref{tab:results}. RAL learns to adapt itself to the data set at hand and outperforms the baselines in every case. Note that our central goal in these experiments is to evaluate the relative performance of RAL and the baselines, not the absolute performance. For a real-world application one would use deeper architectures for more complex data sets, incorporate networks pretrained on similar labeled data sets, and use data augmentation to make maximal use of the small labeled set. Further benefits would come from using semi-supervised information, e.g.\ by assigning pseudo-labels to data points to which the classifier assigns high predictive certainty~\cite{wang2017cost}. Such approaches would significantly improve classifier performance for all models, but since they would blur the contribution of the respective acquisition functions, we consciously ignore them here.
Note that although RAL uses a thinned Maxent ranking to generate its state, it can improve upon that strategy in every case. An ablation study showed that while the thinning process can improve over the plain Maxent in some settings if one were to use it as a fixed strategy, it is not sufficient to explain the final performance difference between RAL and Maxent.
REINFORCE owes its success here to the bulk filtering step, which substantially simplifies the RL problem by removing a large portion of the search space. The simplified problem can therefore be solved within a small number of episodes. More interactions with the environment would certainly bring further improvement, at the expense of increased labeling cost. We present here only a proof of concept showing that feedback-free AL can be improved upon even within limited interaction rounds. Further algorithmic improvements, such as applying TRPO~\cite{schulman2015trust} or PPO~\cite{schulman2017proximal} in place of vanilla REINFORCE, are worth investigating as future work.
\begin{table}
\centering
\begin{adjustbox}{max width=0.95\columnwidth}
\begin{tabular}{crrr}
\toprule
& \multicolumn{1}{c}{\textsc{MNIST}} & \multicolumn{1}{c}{\textsc{FashionMNIST}} & \multicolumn{1}{c}{\textsc{CIFAR-10}} \\ \midrule
Random & $10.41 \pm 2.28$ & $24.64 \pm 0.48$ & $69.78 \pm 0.69$ \\
Maxent & $8.61 \pm 1.25$ & $25.72\pm 1.28$ & $69.80 \pm 0.32$ \\
BALD & $6.91 \pm 0.23$ & $26.85\pm 0.49$ & $69.69 \pm 1.69$ \\
RAL (ours) & $\mathbf{6.81 \pm 0.99}$ & $\mathbf{23.69\pm 0.73}$ & $\mathbf{68.96 \pm 1.03}$ \\ \bottomrule
\end{tabular}
\end{adjustbox}
\caption{{Results.} Average final error ($\pm$ one standard deviation over five runs) after labeling 400 points.}\label{tab:results}
\end{table}
\section{Related Work}\label{sec:relwork}
The gold standard in AL has long been hard-coded, hand-designed acquisition heuristics (see~\cite{settles2012active} for a review).
A first extension is not to limit oneself to a single heuristic, but to learn how to choose between multiple ones, e.g.~by a bandit algorithm~\cite{baram2004online,chu2016can,hsu2015active} or a Markov decision process~\cite{ebert2012ralf}. However, this still suffers from being limited to existing heuristics.
A further step to gain more flexibility is to formulate the problem as a meta-learning approach. The general idea~\cite{fang2017learning,konyushkova17,pang2018meta} is to use a set of labeled data sets to learn a general acquisition function that can either be applied as-is to the target data set or fine-tuned on a sufficiently similar set of labeled data. Our approach differs from these attempts insofar as we learn the acquisition function based solely on the target data distribution while the data is being labeled.
If we take the scarcity of labels seriously, we cannot allow ourselves the luxury of a separate large validation set for adapting a general heuristic. Moreover, a method relying on such a validation set could not outperform the simple alternative of letting acquisition functions that need no separate set train on that data instead: as long as little labeled data is available, the gain from learning on extra data tends to outweigh the benefit of a sophisticated acquisition function, and as soon as data becomes more abundant, the effectiveness of any active learning method drops sharply. We therefore exclude such methods from our comparative analysis.
A related area is the field of metareasoning~\cite{callaway2017learning}, where an agent has to learn which computations to request under a limited computational budget.
Alongside the sampling-based alternatives for BNN inference, which are already abundant and standardized~\cite{blundell2015weight,gal2016dropout,kingma2015variational,louizos2017bayesian,molchanov2017variational}, deterministic inference techniques are also emerging. While direct adaptations of expectation propagation are the earliest such methods~\cite{Gast_2018_CVPR,hernandez2015probabilistic}, they have not yet seen widespread adoption due to their relative instability in training. This instability arises because EP provides no convergence guarantees, so an update may either improve or deteriorate the model fit even on the training data. Variational inference, in contrast, maximizes a lower bound on the log-marginal likelihood.
Early studies exist on deterministic variational inference of BNNs~\cite{kandemir2018sampling,wu2018fixing}. However, neither quantifies the uncertainty quality by using the posterior predictive of their models for a downstream application.
Earlier work that performs active learning with BNNs does exist~\cite{hernandez2015probabilistic,gal17a,pmlr-v80-depeweg18a}. However, all of these studies use hard-coded acquisition heuristics.
Our state construction method that forms a normal distribution from the posterior predictives of data points shortlisted by a bootstrap acquisition criterion is novel for the active learning setting. Yet, it has strong links to model-based reinforcement learning methods that propagate uncertainties through one-step predictors along the time axis~\cite{deisenroth2011pilco}.
\section{Conclusion}
We introduce a new reinforcement-learning-based method for learning a labeling criterion. It learns how to choose points in parallel with the labeling process itself, instead of requiring large, already labeled subsets for offline training beforehand. We achieve this by formulating the classification net, the policy net, and the state probabilistically. We demonstrate its ability to adapt to a variety of qualitatively different data sets, performing on par with or outperforming handcrafted heuristics. In future work, we plan to extend the policy net with a density estimator that models the input data distribution, so that it can also take the underlying geometry into account, making it less dependent on the quality of the probabilities.
\newpage
\bibliographystyle{named}
\section{Attention-Based Models}
\label{sec:attention_models}
We denote the set of speech utterances, suitably parameterized into feature
vectors as: ${\mathbf x} = ({\mathbf x}_1, {\mathbf x}_2, \cdots, {\mathbf x}_T)$, where ${\mathbf x}_i \in {\mathbb R}^{d}$, and the
corresponding ground-truth label sequence as: ${\mathbf y}^{*} = (y^{*}_0, y^{*}_1,
y^{*}_2, \cdots, y^{*}_{L+1})$, where $y^{*}_i \in {\mathcal G}$ (graphemes, in this
work).
We assume that the set of labels, ${\mathcal G}$, contains two special
labels, ${\left<\texttt{sos}\right>}$ and ${\left<\texttt{eos}\right>}$, which denote the start and the end of the sentence,
respectively, such that $y^{*}_0 = {\left<\texttt{sos}\right>}$ and $y^{*}_{L+1}={\left<\texttt{eos}\right>}$.
\begin{figure}
\centering
\includegraphics[width=0.4\columnwidth]{attention_model}
\caption{The attention-based model defines a probability distribution over the
next label, conditioned on the history of previous predictions:
$P({\mathbf y}_u|y_{u-1}, \cdots, y_0, {\mathbf x})$.}
\label{fig:attention_model}
\end{figure}
An attention-based model~\cite{ChanJaitlyLeEtAl15} consists of three components:
an \emph{encoder network} which maps input acoustic vectors into a higher-level
representation, an \emph{attention model} which summarizes the output of the
encoder based on the current state of the decoder, and a \emph{decoder network}
which models an output distribution over the next target conditioned on the
sequence of previous predictions: $P({\mathbf y}_u|y^{*}_{u-1}, y^{*}_{u-2}, \cdots,
y^{*}_0, {\mathbf x})$.
The model is depicted in Figure~\ref{fig:attention_model}.
The encoder network consists of a deep recurrent neural network which receives
as input the sequence of acoustic feature vectors, ${\mathbf x}$, and computes a
sequence of encoded features, ${\mathbf{h}}^\text{enc} = ({\mathbf{h}}^\text{enc}_1, \cdots,
{\mathbf{h}}^\text{enc}_T)$, and is analogous to an acoustic model in a traditional ASR
system.
The decoder network - which is analogous to the pronunciation and language
modeling components in a traditional ASR system - consists of a deep recurrent
neural network which is augmented with an attention
mechanism~\cite{BahdanauChoBengio15}.
The decoder network predicts a single label at each step, conditioned on the
history of previous predictions.
At each prediction step, the attention mechanism summarizes the encoded features
based on the decoder state to compute a context vector, ${\mathbf{c}}_{u}$, as described
in Section~\ref{sec:multi-headed-attention}.
The attention model thus corresponds to the component of a traditional ASR
system which learns the alignments between the input acoustics and the output
labels.
This context vector is input to the decoder along with the previous label,
$y^{*}_{u-1}$.
The final decoder layer produces a set of logits which are input to a softmax
layer which computes a distribution over the set of output labels: $P({\mathbf y}_{u} |
y^{*}_{u-1}, \cdots, y^{*}_0={\left<\texttt{sos}\right>}, {\mathbf x})$.
\subsection{Multi-headed Attention}
\label{sec:multi-headed-attention}
The attention mechanism used in the present work differs from our previous
work~\cite{PrabhavalkarRaoSainathEtAl17} in two important ways: firstly, we
replace dot-product attention~\cite{ChanJaitlyLeEtAl15} with additive
attention~\cite{BahdanauChoBengio15} which we find to be more stable; secondly,
we use multiple, independent attention heads~\cite{VaswaniShazeerParmarEtAl17}
allowing the model to simultaneously attend to multiple locations in the
input utterance, which we find to significantly improve model
performance.
More specifically, we denote the recurrent hidden state of the decoder network
after predicting $u-1$ labels as ${\mathbf{h}}^\text{att}_{u-1}$.
The model employs $M$ independent attention heads, each of which computes
attention values, $\beta^{i}_{t, u} \in {\mathbb R}$, for $1 \leq i \leq M$, $1 \leq t
\leq T$:
\begin{equation}
\beta^{i}_{t, u} = \mathbf{u}^i \tanh(W^{i} {\mathbf{h}}^\text{att}_{u-1} + V^i{\mathbf{h}}^\text{enc}_t) \label{eq:additive-attention}
\end{equation}
The individual attention values are then transformed into soft attention weights
through a softmax operation, and used to compute a summary of the encoder
features, ${\mathbf{c}}^{i}_u$:
\begin{equation}
\alpha^{i}_{t, u} = \frac{\exp(\beta^{i}_{t,
u})}{\sum_{s=1}^{T}\exp(\beta^{i}_{s, u})} \quad \quad
{\mathbf{c}}^{i}_{u} = \sum_{t=1}^{T} \alpha^{i}_{t,u} Z^{i}{\mathbf{h}}^\text{enc}_t
\end{equation}
\noindent The matrices $V^{i}, W^{i}, \text{and } Z^{i}$ and the vector,
$\mathbf{u}^i$, are parameters of the model.
Finally, the overall context vector is computed by concatenating together the
individual summaries: ${\mathbf{c}}_u = [{\mathbf{c}}^{1}_u; {\mathbf{c}}^{2}_u; \cdots; {\mathbf{c}}^{M}_u]$.
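The per-head computation above can be sketched in a few lines. Shapes and parameter names (\texttt{u}, \texttt{W}, \texttt{V}, \texttt{Z}) follow the equations, while the function itself is an illustrative sketch rather than the training implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_head_additive_attention(h_att, h_enc, params):
    """Sketch of multi-headed additive attention.
    h_att: decoder state (d_att,); h_enc: encoder features (T, d_enc);
    params: list of per-head (u, W, V, Z) arrays matching the equations."""
    contexts = []
    for u, W, V, Z in params:
        beta = np.tanh(h_att @ W.T + h_enc @ V.T) @ u  # attention values (T,)
        alpha = softmax(beta)                          # soft attention weights
        contexts.append(alpha @ (h_enc @ Z.T))         # per-head summary c^i
    return np.concatenate(contexts)                    # c = [c^1; ...; c^M]
```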
\subsection{Training and Inference}
\label{sec:ce-loss}
Most attention-based models are trained by optimizing the cross-entropy (CE)
loss function, which maximizes the log-likelihood of the training data:
\begin{equation}
{\mathcal L}_\text{CE} = \sum_{({\mathbf x}, {\mathbf y}^{*})} \sum_{u=1}^{L+1} -\log P(y^{*}_u | y^{*}_{u-1}, \cdots,
y^{*}_0={\left<\texttt{sos}\right>}, {\mathbf x})
\end{equation}
\noindent where we always input the ground-truth label sequence during
training (i.e., we do not use scheduled sampling~\cite{BengioVinyalsJaitlyEtAl15}).
Inference in the model is performed using a beam-search
algorithm~\cite{SutskeverVinyalsLe14}, where the model's predictions are fed
back until the model outputs the ${\left<\texttt{eos}\right>}$ symbol, which
indicates that inference is complete.
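This feedback loop can be illustrated with a minimal beam search; \texttt{step\_fn} is a hypothetical callback standing in for one decoder step, returning next-label log-probabilities given a prefix:

```python
import math

def beam_search(step_fn, sos, eos, beam_size, max_len):
    """Minimal beam-search sketch. `step_fn(prefix)` is a hypothetical
    callback returning {label: log_prob} for the next label; decoding
    stops when every surviving beam has emitted `eos`."""
    beams = [((sos,), 0.0)]
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for label, lp in step_fn(prefix).items():
                candidates.append((prefix + (label,), score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates[:beam_size]:
            (finished if prefix[-1] == eos else beams).append((prefix, score))
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[1])[0]
```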
\section{Minimum Word Error Rate Training of Attention-based Models}
\label{sec:embr}
In this section we describe how an attention-based model can be trained to
minimize the expected number of word errors, and thus the word error rate.
We denote by ${\mathcal W}({\mathbf y}, {\mathbf y}^{*})$ the number of word errors in a hypothesis, ${\mathbf y}$,
relative to the ground-truth sequence, ${\mathbf y}^{*}$.
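The quantity ${\mathcal W}({\mathbf y}, {\mathbf y}^{*})$ is the word-level edit distance, which can be computed with standard dynamic programming; the sketch below assumes hypotheses are given as space-separated strings:

```python
def word_errors(hyp, ref):
    """W(y, y*): word-level edit distance (substitutions, insertions,
    deletions) between a hypothesis and the ground truth, via the
    standard Levenshtein dynamic program over word lists."""
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i                          # deletions
    for j in range(len(r) + 1):
        d[0][j] = j                          # insertions
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # delete
                          d[i][j - 1] + 1,          # insert
                          d[i - 1][j - 1] + cost)   # substitute/match
    return d[len(h)][len(r)]
```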
In order to minimize word error rates on test data, we consider as our loss
function, the \emph{expected number of word errors over the training set}:
\begin{equation}
\label{eq:embr}
{\mathcal L}_\text{werr} ({\mathbf x}, {\mathbf y}^{*}) = \mathbb{E}[{\mathcal W}({\mathbf y}, {\mathbf y}^{*})] = \sum_{{\mathbf y}} P({\mathbf y}|{\mathbf x}) {\mathcal W}({\mathbf y}, {\mathbf y}^{*})
\end{equation}
Computing the loss in~\eqref{eq:embr} exactly is intractable since it
involves a summation over all possible label sequences.
We therefore consider two possible approximations which ensure tractability:
\emph{approximating the expectation in~\eqref{eq:embr} with
samples}~\cite{SakShannonRaoEtAl17, Shannon17}, or restricting the summation to
an N-best list as is commonly done during sequence-training for
ASR~\cite{Povey03}.
\subsection{Approximation By Sampling}
We can approximate the expectation in~\eqref{eq:embr} using an empirical
average over samples drawn from the model~\cite{Shannon17}:
\begin{small}
\begin{equation}
\label{eq:embr-sampling}
{\mathcal L}_\text{werr}({\mathbf x}, {\mathbf y}^{*}) \approx {\mathcal L}^\text{Sample}_\text{werr}({\mathbf x}, {\mathbf y}^{*}) =
\frac{1}{N} \sum_{{\mathbf y}_i \sim P({\mathbf y}|{\mathbf x})} {\mathcal W}({\mathbf y}_i, {\mathbf y}^{*})
\end{equation}
\end{small}
\noindent where ${\mathbf y}_i$ are $N$ samples drawn from the model distribution.
Critically, the gradient of the expectation in~\eqref{eq:embr-sampling} can be
itself be expressed as an expectation, which allows it to be approximated using
samples~\cite{Shannon17}:
\begin{footnotesize}
\begin{align}
\nabla {\mathcal L}^\text{Sample}_\text{werr} ({\mathbf x}, {\mathbf y}^{*})
&= \sum_{{\mathbf y}} P({\mathbf y}|{\mathbf x}) \left[{\mathcal W}({\mathbf y}, {\mathbf y}^{*}) - \mathbb{E}[{\mathcal W}({\mathbf y},
{\mathbf y}^{*})]\right] \nabla \log P({\mathbf y}|{\mathbf x}) \nonumber \\
&\approx \frac{1}{N} \sum_{{\mathbf y}_i \sim P({\mathbf y}|{\mathbf x})}
[{\mathcal W}({\mathbf y}_i, {\mathbf y}^{*}) - \widehat{{\mathcal W}}] \nabla \log P({\mathbf y}_i|{\mathbf x}) \label{eq:embr-grad-final}
\end{align}
\end{footnotesize}
\noindent where we exploit the fact that
$\mathbb{E}[\nabla \log P({\mathbf y}|{\mathbf x})] = 0$, and $\widehat{{\mathcal W}} = \frac{1}{N}
\sum_{i=1}^{N} {\mathcal W}({\mathbf y}_i, {\mathbf y}^{*})$ is the average number of word errors over the
samples.
Subtracting $\widehat{{\mathcal W}}$ serves to reduce the variance of the gradient
estimates, and is important to stabilize training~\cite{Shannon17}.
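The resulting per-sample gradient weights can be sketched as follows; \texttt{sample\_embr\_weights} is a hypothetical helper returning the coefficient $[{\mathcal W}({\mathbf y}_i,{\mathbf y}^{*}) - \widehat{{\mathcal W}}]/N$ that multiplies each $\nabla \log P({\mathbf y}_i|{\mathbf x})$:

```python
def sample_embr_weights(word_errs):
    """Variance-reduced weights for the sampling approximation: sample
    i contributes (W_i - mean(W)) / N as the coefficient of its
    log-probability gradient (illustrative helper)."""
    n = len(word_errs)
    w_bar = sum(word_errs) / n              # average word errors W-hat
    return [(w - w_bar) / n for w in word_errs]
```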
\subsection{Approximation Using N-best Lists}
One of the potential disadvantages of the sampling-based approach is that a
large number of samples might be required in order to approximate the
expectation well.
However, since the probability mass is likely to be concentrated on the top-N
hypotheses, it is reasonable to approximate the loss function by restricting the
sum to just the top N hypotheses.
We note that this is typically done in traditional discriminative sequence
training approaches as well, where the summation is restricted to paths in a
lattice~\cite{Kingsbury09, VeselyGhoshalBurgetEtAl13}.
Denote by $\text{Beam}({\mathbf x}, N) = \{{\mathbf y}_1, \cdots, {\mathbf y}_N\}$, the set of N-best
hypotheses computed using beam-search decoding~\cite{SutskeverVinyalsLe14} for
the input utterance ${\mathbf x}$, with a beam-size, $N$.
We can then approximate the loss function in~\eqref{eq:embr} by assuming that
the probability mass is concentrated on just the N-best hypotheses, as follows:
\begin{equation}
\label{eq:embr-nbest}
{\mathcal L}^\text{N-best}_\text{werr} ({\mathbf x}, {\mathbf y}^{*}) = \sum_{{\mathbf y}_i \in \text{Beam}({\mathbf x}, N)}
\widehat{P}({\mathbf y}_i|{\mathbf x}) \left[{\mathcal W}({\mathbf y}_i, {\mathbf y}^{*}) - \widehat{{\mathcal W}}\right] \nonumber
\end{equation}
\noindent where $\widehat{P}({\mathbf y}_i|{\mathbf x}) = \frac{P({\mathbf y}_i|{\mathbf x})}{\sum_{{\mathbf y}_i \in
\text{Beam}({\mathbf x}, N)} P({\mathbf y}_i|{\mathbf x})}$ represents the distribution re-normalized over
just the N-best hypotheses, and $\widehat{{\mathcal W}}$ is the average number of word
errors over the N-best hypotheses, which is applied as a form of variance
reduction, since it does not affect the gradient.
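A minimal sketch of this N-best loss, assuming the beam's log-probabilities and word-error counts are given (function and argument names are illustrative):

```python
import math

def nbest_embr_loss(log_probs, word_errs):
    """N-best approximation sketch: renormalize model scores over the
    beam and weight baseline-subtracted word errors by the resulting
    posterior, mirroring the loss above."""
    m = max(log_probs)
    p = [math.exp(lp - m) for lp in log_probs]   # numerically stable
    z = sum(p)
    p_hat = [pi / z for pi in p]                 # renormalized P-hat
    w_bar = sum(word_errs) / len(word_errs)      # average word errors
    return sum(ph * (w - w_bar) for ph, w in zip(p_hat, word_errs))
```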
\subsection{Initialization and Training}
\label{sec:embr-init-train}
Based on the two schemes for approximating the expected word error rate, we can
define two possible loss functions:
\begin{equation}
{\mathcal L}^\text{Sample} = \sum_{({\mathbf x}, {\mathbf y}^{*})} {\mathcal L}^\text{Sample}_\text{werr}({\mathbf x}, {\mathbf y}^{*}) + \lambda {\mathcal L}_\text{CE}
\end{equation}
\begin{equation}
{\mathcal L}^\text{N-best} = \sum_{({\mathbf x}, {\mathbf y}^{*})} {\mathcal L}^\text{N-best}_\text{werr}({\mathbf x}, {\mathbf y}^{*}) + \lambda {\mathcal L}_\text{CE}
\end{equation}
In both cases, we interpolate with the CE loss function using a hyperparameter
$\lambda$ which we find is important to stabilize training (See
Section~\ref{sec:results}).
We note that interpolation with the CE loss function is similar to the
f-smoothing approach~\cite{SuLiYuEtAl13} in ASR.
Training the model directly to optimize ${\mathcal L}^\text{Sample}$ or ${\mathcal L}^\text{N-best}$
with random initialization is hard, since the model is not directly provided
with the ground-truth label sequence.
Therefore, we initialize the model with the parameters obtained after
CE training.
\section{Experimental Setup}
\label{sec:experiments}
The proposed approach is evaluated by conducting experiments on a mobile
voice-search task.
Models are trained on the same datasets as in our previous
works~\cite{PrabhavalkarRaoSainathEtAl17, PrabhavalkarSainathBoEtAl2017}.
The training set consists of $\sim$15M hand-transcribed anonymized utterances
extracted from Google voice-search traffic ($\sim$12,500 hours).
In order to improve robustness to noise, multi-style training data (MTR) are
constructed by artificially distorting training utterances with reverberation
and noise drawn from environmental recordings of daily events and from YouTube
using a room simulator, where the overall SNR ranges from 0-30dB with an average
SNR of 12dB~\cite{KimMisraChinEtAl17}.
Model hyperparameters are tuned on a development set of $\sim$12.9K utterances ($\sim$63K
words) and results are reported on a set of $\sim$14.8K utterances ($\sim$71.6K
words).
The acoustic input is parameterized into 80-dimensional log-Mel filterbank
features extracted over the 16kHz frequency range, computed with a 25ms window
and a 10ms frame shift.
Following~\cite{SakSeniorRaoEtAl15}, three consecutive frames are stacked
together, and every third stacked frame is presented as input to the encoder.
The same frontend is used for all models reported in this work.
Two attention-based models are trained in this work, differing only in the
structure of the encoder network: the first model (Uni-LAS) uses 5 layers of
1,400 uni-directional LSTM cells~\cite{HochreiterSchmidhuber97}, whereas the
second model (Bidi-LAS) uses 5 layers of 1,024 bi-directional LSTM
cells~\cite{SchusterPaliwal97} (i.e., 1,024 cells in the forward and backward
directions, for each layer).
The decoder network of both models consists of two layers of 1,024 LSTM cells in
each layer.
Both models use multi-headed attention as described in
Section~\ref{sec:multi-headed-attention} with $M=4$ attention heads.
Models are trained to output a probability distribution over grapheme symbols:
the 26 lower-case letters \texttt{a-z}, the numerals \texttt{0-9}, punctuation
symbols (\texttt{,'!}, etc.), and the special symbols ${\left<\texttt{sos}\right>}$ and ${\left<\texttt{eos}\right>}$.
All models are trained using the Tensorflow
toolkit~\cite{AbadiAgarwalBarhamEtAl15}, with asynchronous stochastic gradient
descent (ASGD)~\cite{RechtReWrightEtAl11} using the Adam
optimizer~\cite{KingmaBa15}.
\section{Introduction}
\label{sec:intro}
There has been growing interest in the automatic speech recognition (ASR)
community in building end-to-end trained, sequence-to-sequence models which
directly output a word sequence given input speech frames, without requiring
explicit alignments between the speech frames and labels.
Examples of such approaches include the recurrent neural network transducer
(RNN-T)~\cite{Graves12, GravesMohamedHinton13}, the recurrent neural aligner
(RNA)~\cite{SakShannonRaoEtAl17}, attention-based
models~\cite{ChanJaitlyLeEtAl15, BahdanauChorowskiSerdyukEtAl15}, and
connectionist temporal classification (CTC)~\cite{GravesFernandezGomezEtAl06}
with word-based targets~\cite{SoltauLiaoSak17}.
Such approaches are motivated by their simplicity: since these models directly
output graphemes, word-pieces~\cite{SchusterNakajima12}, or words, they do not
require expertly curated pronunciation dictionaries; since they can be trained
to directly output normalized text, they do not require separate modules to map
recognized text from the spoken to the written domain.
In our recent work, we have shown that such approaches are comparable to
traditional state-of-the-art speech recognition
systems~\cite{RaoSakPrabhavalkar17, PrabhavalkarRaoSainathEtAl17}.
Most sequence-to-sequence models (e.g.,~\cite{ChanJaitlyLeEtAl15}) are typically
trained to optimize the cross-entropy (CE) loss function, which corresponds to
improving log-likelihood of the training data.
During inference, however, model performance is commonly measured using
task-specific criteria, not log-likelihood: e.g., word error rate (WER) for
ASR, or BLEU score~\cite{PapineniRoukosWardEtAl02} for machine translation.
Traditional ASR systems account for this mismatch through discriminative
sequence training of neural network acoustic models (AMs)~\cite{Kingsbury09,
VeselyGhoshalBurgetEtAl13} which fine-tunes a cross-entropy trained AM with
criteria such as state-level minimum Bayes risk (sMBR) which are more closely
related to word error rate.
In the context of sequence-to-sequence models, there have been a few previous
proposals to optimize task-specific losses.
In their seminal work, Graves and Jaitly~\cite{GravesJaitly14} minimize expected
WER of an RNN-T model by approximating the expectation with samples drawn from
the model.
This approach is similar to the edit-based minimum Bayes risk (EMBR) approach
proposed by Shannon, which was used for minimum expected WER training of
conventional ASR systems~\cite{Shannon17} and the recurrent neural
aligner~\cite{SakShannonRaoEtAl17}.
An alternative approach is based on reinforcement learning, where the label
output at each step can be viewed as an \emph{action}, so that the task of
learning consists of learning the \emph{optimal policy} (i.e., optimal output
label sequence) which results in the greatest expected reward (lowest expected
task-specific loss).
Ranzato et al.~\cite{RanzatoChopraAuliEtAl16} apply a variant of the REINFORCE
algorithm~\cite{Williams92} to optimize task-specific losses for summarization
and machine translation.
More recently Bahdanau et al.~\cite{BahdanauBrakelLoweEtAl17} use an
actor-critic approach, which was shown to improve BLEU scores for machine
translation.
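To make the score-function (REINFORCE) gradient concrete, the following toy sketch — our own illustration, unrelated to the cited systems — optimizes a one-step categorical "policy" over three output labels, using the policy's own expected reward as a variance-reducing baseline:

```python
import numpy as np

# Toy REINFORCE: reward is 1 for label 0 (lowest task loss), 0 otherwise.
# Update: lr * (reward - baseline) * grad log p(action), where the baseline
# is p[0], the current expected reward (action-independent, hence unbiased).
rng = np.random.default_rng(1)
logits = np.zeros(3)
lr = 0.5
for _ in range(2000):
    p = np.exp(logits - logits.max())
    p /= p.sum()
    a = rng.choice(3, p=p)
    reward = 1.0 if a == 0 else 0.0
    grad_logp = -p.copy()
    grad_logp[a] += 1.0            # gradient of log p(a) w.r.t. the logits
    logits += lr * (reward - p[0]) * grad_logp
p = np.exp(logits - logits.max())
p /= p.sum()
# The policy concentrates its mass on the rewarded label.
```

In the sequence setting, the "action" is an entire label sequence and the (negative) reward is its task-specific loss, but the estimator has the same shape.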
In the present work, we consider techniques to optimize attention-based
sequence-to-sequence models in order to directly minimize WER.
Our proposed approach is similar to~\cite{GravesJaitly14, Shannon17} in that we
approximate the expected WER using hypotheses from the model.
We consider both the use of sampling-based approaches~\cite{GravesJaitly14,
Shannon17} as well as approximating the loss over N-best lists of recognition
hypotheses as is commonly done in ASR (e.g.,~\cite{Povey03}).
However, unlike Sak et al.~\cite{SakShannonRaoEtAl17} we find that the process
is more effective if we approximate the expectation using N-best hypotheses
decoded from the model using beam-search~\cite{SutskeverVinyalsLe14} rather than
sampling from the model (see Section~\ref{sec:results1}).
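The N-best variant of such an expected-WER loss can be sketched as follows. This is a minimal illustration in our own notation, not the exact implementation; renormalizing the model scores over the list and subtracting the mean error as a baseline are standard choices in this family of methods:

```python
import numpy as np

def word_errors(ref, hyp):
    """Word-level Levenshtein (edit) distance between two transcripts."""
    r, h = ref.split(), hyp.split()
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0] = np.arange(len(r) + 1)
    d[0, :] = np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1, j - 1] + (r[i - 1] != h[j - 1])
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, sub)
    return int(d[-1, -1])

def mwer_nbest_loss(log_probs, hyps, ref):
    """Expected number of word errors over an N-best list: renormalize
    the model scores over the list, and subtract the mean error so that
    hypotheses are scored relative to the list average."""
    p = np.exp(log_probs - np.max(log_probs))
    p = p / p.sum()
    errs = np.array([word_errors(ref, h) for h in hyps], dtype=float)
    return float(np.dot(p, errs - errs.mean()))
```

With a uniform distribution over the list the loss is zero; shifting probability mass toward higher-error hypotheses makes it positive, which is what the gradient penalizes.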
We apply the proposed techniques on an English mobile voice-search task, to
optimize grapheme-based models, with uni- and bi-directional encoders, where we
find that we can improve WER by up to 8.2\% relative to a CE-trained baseline
model.
Minimum word error rate training allows us to train grapheme-based
sequence-to-sequence models which are comparable in performance to a strong
state-of-the-art context-dependent (CD) phoneme-based speech recognition
system~\cite{SeniorSakdeChaumontQuitryEtAl15}.
The organization of the rest of the paper is as follows.
We describe the particular attention-based model used in this work in
Section~\ref{sec:attention_models} and describe the proposed approach for
minimum WER training of attention models in Section~\ref{sec:embr}.
We describe our experimental setup and our results in
Sections~\ref{sec:experiments} and~\ref{sec:results}, respectively, before
concluding in Section~\ref{sec:conclusions}.
\section{Related Work}
\label{sec:related-work}
Discriminative sequence training of acoustic models is a well-studied problem in
the context of traditional ASR systems.
Examples of such approaches include maximum mutual information
(MMI)~\cite{BahlBrowndeSouzaEtAl86}, minimum phone error (MPE)~\cite{Povey03},
and the state-level minimum Bayes risk (sMBR)~\cite{KaiserHorvatKacic00,
PoveyKingsbury07}.
State-of-the-art ASR systems are typically first trained to optimize either a
cross-entropy or CTC criterion before discriminative sequence training to
optimize criteria such as sMBR~\cite{Kingsbury09, VeselyGhoshalBurgetEtAl13,
SakVinyalsHeigoldEtAl14}.
\section{Results}
\label{sec:results}
We investigate the impact of various hyperparameters, and the choice of
approximation scheme by conducting detailed
experiments on the uni-directional LAS model.
Results on the bi-directional LAS model, along with a comparison to a
traditional CD-phone based state-of-the-art system are deferred until
Section~\ref{sec:results2}.
\subsection{Comparison of loss functions: ${\mathcal L}^\text{Sample}$ and
${\mathcal L}^\text{N-best}$}
\label{sec:results1}
\begin{figure*}
\centering
\begin{subfigure}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{nwerr_samp}
\caption{Expected number of word errors on held-out set computed using~\eqref{eq:embr} when
optimizing ${\mathcal L}^\text{Sample}$ as number of samples, $N$, varies.}
\label{fig:nwerr_samp}
\end{subfigure}
~
\begin{subfigure}[t]{0.31\textwidth}
\includegraphics[width=\textwidth]{wer_samp}
\caption{Word error rates on held-out set when optimizing ${\mathcal L}^\text{Sample}$ as a function
of the number of samples, $N$.}
\label{fig:wer_samp}
\end{subfigure}
~
\begin{subfigure}[t]{0.31\textwidth}
\includegraphics[width=\textwidth]{wer_nbest}
\caption{Word error rates on held-out set when optimizing ${\mathcal L}^\text{N-best}$ as a function of the depth of the
N-best list, $N$.}
\label{fig:wer_nbest}
\end{subfigure}
\caption{Metrics computed on held-out portion of the training set when
optimizing loss functions ${\mathcal L}^\text{Sample}$ and
${\mathcal L}^\text{N-best}$,
described in Section~\ref{sec:embr-init-train}.}
\label{fig:metrics}
\end{figure*}
Our first set of experiments evaluates the effectiveness of approximating the
expected number of word errors using samples (i.e., optimizing ${\mathcal L}^\text{Sample}$)
versus the approximation using N-best lists (i.e., optimizing
${\mathcal L}^\text{N-best}$), as described in Section~\ref{sec:embr-init-train}.
Our observations are illustrated in Figure~\ref{fig:metrics}, where we plot
various metrics on a held-out portion of the training data.
As can be seen in Figure~\ref{fig:nwerr_samp}, optimizing the sample-based
approximation, ${\mathcal L}^\text{Sample}$, reduces the expected number of word errors by
$\sim$50\% after training, with performance appearing to improve as the number
of samples, $N$, used in the approximation increases.
Unlike~\cite{SakShannonRaoEtAl17}, however, as can be seen in
Figure~\ref{fig:wer_samp}, the WER for the top-hypothesis computed using beam
search does not improve, but instead degrades as a result of training.
We hypothesize that this is a result of the mismatch between the beam-search
decoding procedure, which focuses on the head of the distribution during each
next-label prediction, and the sampling procedure, which also considers
lower-probability paths~\cite{RanzatoChopraAuliEtAl16}.
As illustrated in Figure~\ref{fig:wer_nbest}, optimizing ${\mathcal L}^\text{N-best}$
(i.e., using the N-best list-based approximation) significantly improves WER by
about 10.4\% on the held-out portion of the training set.
Further, performance seems to be similar even when just the top four hypotheses
are considered during the optimization.
\begin{figure}
\centering
\includegraphics[width=0.75\columnwidth]{wer_lambda_nbest}
\caption{Word error rates on held-out portion of training set when
optimizing ${\mathcal L}^\text{N-best}$, as a function of the CE-loss interpolation
weight $\lambda$, when using $N=4$ hypotheses in the N-best list.}
\label{fig:wer_lambda_nbest}
\end{figure}
As a final note, we find that it is important to also interpolate with the CE
loss function during optimization (i.e., setting $\lambda > 0$).
This is illustrated for the case where we optimize ${\mathcal L}^\text{N-best}$ using
$N=4$ hypotheses in the N-best list in Figure~\ref{fig:wer_lambda_nbest}.
\subsection{Improvements from Minimum WER Training for LAS Models}
\label{sec:results2}
\begin{table}
\centering
\begin{tabular}{|c||c||c|}
\hline
System & WER(\%) & Rescored WER(\%) \\
\hline
\hline
Bi-LAS & 7.2 & 6.6 \\
+MWER (${\mathcal L}^\text{N-best})$ & 6.9 & 6.2 \\
\hline
\hline
Uni-LAS & 8.1 & 7.3 \\
+MWER (${\mathcal L}^\text{N-best}) $ & 7.5 & 6.7 \\
\hline
\hline
CD-phone (CE + sMBR) & 7.5 & 6.7 \\
\hline
\end{tabular}
\caption{WERs on the test set after minimum WER training for uni- and
bi-directional LAS models. The proposed procedure improves WER by up
to 8.2\% relative to the CE-trained baseline system.}
\label{tbl:results}
\end{table}
We present results after expected minimum WER training (MWER) of the uni- and
bi-directional LAS models described in Section~\ref{sec:experiments} in
Table~\ref{tbl:results}, where we set $N=4$ and $\lambda=0.01$.
We report results after directly decoding the models to produce grapheme
sequences using a beam-search decoding with 8 beams (column 2) as well as after
rescoring the 8-best list using a very large 5-gram language model (column 3).
For comparison, we also report results using a traditional state-of-the-art low
frame rate (LFR)~\cite{PundakSainath16} CD-phone based system, which uses an
acoustic model composed of four layers of 1,024 uni-directional LSTM cells,
followed by one layer of 768 uni-directional cells.
The model is first trained to optimize the CE loss function, followed by
discriminative sequence training to optimize the state-level minimum Bayes risk
(sMBR) criterion~\cite{Kingsbury09}.
The model is decoded using a pruned, first-pass, 5-gram language model, which
uses a vocabulary of millions of words, as well as an expert-curated
pronunciation dictionary.
As before, we report results both before and after second-pass lattice
rescoring.
As can be seen in Table~\ref{tbl:results}, when decoded without second-pass rescoring (i.e.,
end-to-end training), MWER training improves performance of the uni- and
bi-directional LAS systems by 7.4\% and 4.2\% respectively.
The gains after MWER training are even larger after second-pass rescoring,
improving the baseline uni- and bi-directional LAS systems by 8.2\% and 6.1\%,
respectively.
Finally, we note that after MWER training the grapheme-based uni-directional LAS
system matches the performance of a state-of-the-art traditional
CD-phoneme-based ASR system.
\section{Conclusions}
\label{sec:conclusions}
We described a technique for training sequence-to-sequence systems to optimize
the expected test error rate, which we applied to attention-based systems.
Unlike~\cite{SakShannonRaoEtAl17}, we find that sampling-based approximations
are not as effective as approximations based on using N-best decoded hypotheses.
Overall, we find that the proposed approach allows us to improve WER by up to
8.2\% relative.
We find that the proposed techniques allow us to train grapheme-based
sequence-to-sequence models which match performance with a traditional
CD-phone-based state-of-the-art system on a voice-search task, which when viewed
jointly with our previous works~\cite{PrabhavalkarRaoSainathEtAl17,
RaoSakPrabhavalkar17} adds further evidence to the effectiveness of
sequence-to-sequence modeling approaches.
\section{Introduction}
Cosmic magnetism is usually explained by magnetohydrodynamic dynamo theory \cite{Brandenburg2005},
which is, however, only an amplification mechanism that still requires
some initial seed field. There have been many speculations about the
origin of the seed field, but consensus is still lacking \cite{Widrow2002,Raychaudhuri1972}.
One possible mechanism is a radiation induced drag force on electrons
in rotating astrophysical objects. This idea was evidently first proposed
by Cattani and Sacchi \cite{Cattani1966} and later has been applied
to different astrophysical conditions and objects \cite{Harrison1970,MishustinRuzmaikin1972,HinataDaneshvar1984,Walker1988,Contopoulos1998,Contopoulos2006,Contopoulos2015,Bisnovatyi-Kogan1977,Bisnovatyi-Kogan2002,AndoDoiSusa2010,Langer2003,Durrive2015,Gopal2005}.
However, none of these studies took into account kinetic effects.
It is shown here that predictions for these generated magnetic fields
can be significantly higher when kinetic effects are taken into account.
In the presence of existing magnetic fields, these kinetic
effects can enhance the generated magnetic fields by orders of magnitude.
A rotating astrophysical object is subject to asymmetric incoming
radiation, which exerts the Poynting-Robertson drag force on electrons
in the (toroidal) rotation direction; this leads to a poloidal magnetic
field. Within a fluid framework, this can be modeled by including
an additional term into the equation for the magnetic field dynamics:
\begin{equation}
\frac{\partial\mathbf{B}}{\partial t}=-\frac{c}{e}\nabla\times\mathbf{f}_{rad}.
\end{equation}
There are two ways in which kinetic effects modify the effective radiation
force. First, the Poynting-Robertson force on an individual electron
depends on the absorbed power, which is, generally speaking, different
for the electrons of different energies; usually the more energetic electrons
absorb more power. Thus, to get the effective radiation force on the
electron fluid, one needs to average the force for each electron over
the absorbed power. Second, toroidal current can be driven even without
toroidal momentum injection just by asymmetrically heating electrons.
Indeed, by heating electrons we increase their energy and since the
collision frequency in plasma is energy dependent ($\propto v^{-3}$)
the toroidal drag force due to Coulomb collisions is going to be asymmetric,
resulting in a net toroidal current \cite{FischBoozer1980}.
To simplify the problem and underscore the influence of the kinetic
effects, we consider a slab geometry, where the parallel direction
corresponds to the toroidal direction of the original rotating object
(see Fig.~\ref{fig01}). Namely, we consider two parallel and possibly
magnetized (in the parallel direction) plasma slabs that move relative
to each other with velocity $\bar{\beta}$ (velocities are measured
in the units of $c$). We label the upper slab slab 1 and the lower
slab slab 2.
\begin{figure}[]
\includegraphics[scale=0.33]{fig01}
\setlength{\belowcaptionskip}{-10pt}
\caption{Parallel radiating and absorbing slabs of plasma, immersed in different magnetic fields at different temperatures, in relative parallel motion.
\label{fig01}}
\end{figure}
\setlength{\belowcaptionskip}{0pt}
The paper is organized as follows: In Sec.~II we derive the efficiency of current generation through the Poynting-Robertson effect.
In Sec.~III, using kinetic rather than fluid theory, we show how the efficiency of the current generation through radiation effects can be much enhanced when there is a seed magnetic field already present and when kinetic effects are considered. We consider, in Sec.~IIIA, the case of blackbody emission and cyclotron absorption. In Sec.~IIIB, we consider the case of cyclotron emission and cyclotron absorption, where not only can the currents driven be driven much more effectively, but there is even the curious effect that the current in adjacent differentially moving plasma can be either in the same direction or in opposite directions. In Sec.~IV, we summarize and discuss our findings.
\section{The Poynting-Robertson effect}
Consider an electron that moves with velocity $\beta_{\Vert}$ and
emits isotropic radiation in its own reference frame. Imagine that
this electron also absorbs external radiation, which is isotropic
in its own reference frame moving with parallel velocity $\beta_{s}=-\bar{\beta}$.
Conservation of energy and momentum then gives
\noindent
\begin{align}
& mc\left(\gamma\dot{\beta}_{\parallel}+\dot{\gamma}\beta_{\parallel}\right)+\dot{p}_{\parallel}^{ems}=\dot{p}_{\parallel}^{abs},\\
& mc^{2}\dot{\gamma}=P^{abs}-P^{ems},
\end{align}
\noindent where $P^{abs}$ is the absorbed power, $p_{\parallel}^{abs}$
is the absorbed parallel momentum, $P^{ems}$ is the emitted power,
and $p_{\parallel}^{ems}$ is the emitted parallel momentum.
The time derivative of the wave momentum is determined by the power
delivered by the wave $\dot{\mathbf{p}}^{wave}=\left(\mathbf{k}/\omega\right)P^{wave}$.
Using the Lorentz transformation we can express it as $\dot{p}^{ems}=\left(\beta_{\parallel}/c\right)P^{ems},$
$\dot{p}^{abs}=\left(\beta_{s}/c\right)P^{abs}.$ Inserting these
expressions into the energy-momentum equations we find that electron
parallel velocity satisfies
\begin{equation}
\dot{\beta}_{\parallel}=-\frac{P^{abs}}{\gamma mc^{2}}(\beta_{\parallel}-\beta_{s}).
\end{equation}
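Explicitly, substituting $\dot{p}_{\parallel}^{ems}$ and $\dot{p}_{\parallel}^{abs}$ into the momentum equation and eliminating $P^{ems}=P^{abs}-mc^{2}\dot{\gamma}$ via the energy equation gives
\begin{align*}
mc\left(\gamma\dot{\beta}_{\parallel}+\dot{\gamma}\beta_{\parallel}\right)
 & =\frac{\beta_{s}}{c}P^{abs}-\frac{\beta_{\parallel}}{c}\left(P^{abs}-mc^{2}\dot{\gamma}\right)\\
 & =-\frac{P^{abs}}{c}\left(\beta_{\parallel}-\beta_{s}\right)+mc\dot{\gamma}\beta_{\parallel},
\end{align*}
so the $mc\dot{\gamma}\beta_{\parallel}$ terms cancel, leaving the drag equation above.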
We see that the electron experiences drag by absorbing the external
radiation. This effect is called the Poynting\textendash Robertson
effect. It is a relativistic effect by its very nature, although it does
not require that the relative velocity between absorber and emitter be relativistic.
In the reference frame of an electron, this drag force can be simply
interpreted as a momentum transfer from asymmetric external radiation
to an electron. However, in the reference frame of the external radiation
source, there is no parallel momentum injection; there is only an
energy increase, which makes the electron heavier relativistically.
Since the total parallel momentum must be conserved, the electron must
slow down. Notice that, without external radiation, there would be
no radiation drag force (if we define force as the cause of velocity
change rather than momentum), which is consistent with the well-known
fact that isotropically radiating charge conserves its parallel velocity
\cite{Zheleznyakov1996}. It should be emphasized that here, by absorption,
we mean a generalized process of wave-particle interaction; for example,
it can denote Thomson scattering. While electrons experience radiation
drag, ions are almost unaffected by radiation and hence current is
generated.
Let us estimate crudely the current drive efficiency in this case.
Assume that the parallel velocity is randomized during
the inverse collision time $\nu^{-1}$ and use the effective electron-electron
and electron-proton Coulomb collision frequency $\nu=6\Gamma/\beta^{3}$
(see Refs. \cite{Bornatici1995,FidoneGranataJohner1988,Fisch1981}),
where $\Gamma=\omega_{p}^{4}\ln\varLambda/4\pi nc^{3}$.
Then, after averaging over the Maxwell distribution, we find:
\begin{equation}
\frac{j_{\parallel}}{P_{V}^{abs}}=\frac{\left\langle \beta^{3}\right\rangle }{15}\bar{\beta}\approx0.43\beta_{th}^{3}\bar{\beta}.\label{eq:eff_PR}
\end{equation}
\noindent Here and later the current drive efficiency is expressed
in the units of $e/\Gamma mc$ except for Eq.~(\ref{eq:eff_incremental}).
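The Maxwellian average in Eq.~(\ref{eq:eff_PR}) can be checked numerically. The following verification script (our own, with $\beta_{th}=1$) evaluates $\left\langle \beta^{3}\right\rangle /15$ by direct quadrature over the Maxwellian speed distribution:

```python
import numpy as np

# <beta^3>/15 over a Maxwellian speed distribution with beta_th = 1:
# f(v) dv is proportional to v^2 exp(-v^2/2) dv.
v = np.linspace(0.0, 20.0, 200_001)
w = v**2 * np.exp(-v**2 / 2.0)          # unnormalized Maxwellian weight
mean_v3 = (v**3 * w).sum() / w.sum()    # grid spacing cancels in the ratio
coeff = mean_v3 / 15.0
# coeff is approximately 0.426, i.e. the 0.43 beta_th^3 prefactor above.
```

The exact value is $\left\langle \beta^{3}\right\rangle /15=8/(15\sqrt{\pi/2})\,\beta_{th}^{3}\approx0.43\,\beta_{th}^{3}$.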
\section{Kinetic formulation}
The kinetic theory of current drive by external radiation is well
developed and experimentally demonstrated \cite{Fisch1987}. This
theory has been advanced to accommodate the need for efficient non-inductive
toroidal current generation required for the successful operation
of commercial tokamaks. The theory formulates the efficiency of current
generation as the ratio of the driven current density to the absorbed
power density \cite{Fisch1981}:
\begin{equation}
\frac{j_{\parallel}}{P_{V}^{abs}}=-\frac{e}{mc}\left[\frac{n_{\parallel}}{\nu}+\frac{\beta_{\parallel}}{\beta}\frac{\partial}{\partial\beta}\left(\frac{1}{\nu}\right)\right].\label{eq:eff_incremental}
\end{equation}
\noindent The first term in Eq.~(\ref{eq:eff_incremental}) can be
associated with the Poynting-Robertson drag, while the second is due
to asymmetric heating. The second term arises because the collision
frequency $\nu$ depends sensitively on the electron energy. It leads
to the electron cyclotron current drive effect in tokamaks
\cite{FischBoozer1980}. The radiative transfer dynamo effect in astrophysical
contexts is not much different from the situation described above.
The major difference is that the radiation driving current is set
up naturally rather than controlled.
Equation (\ref{eq:eff_incremental}) gives the non-relativistic efficiency
of the current driven by a very narrow radiation band that affects
only a small region in velocity space. To calculate the efficiency
for arbitrary incoming radiation, one averages Eq.~(\ref{eq:eff_incremental})
over the power density absorbed per unit frequency per unit solid angle per
$d^{3}\boldsymbol{\beta}$. The absorbed power density is given by
\begin{equation}
P_{V}^{abs}=\iint d\omega d\Omega\alpha_{\omega\Omega}I_{\omega\Omega},\label{eq:P_abs}
\end{equation}
\noindent where $I_{\omega\Omega}$ is the incoming electromagnetic
energy flux density per unit frequency per solid angle and $\alpha_{\omega\Omega}$
is the absorption coefficient (true absorption plus stimulated emission).
Due to the principle of detailed balance the absorption coefficient
can be expressed through the emissivity of an individual electron
$\eta_{\omega\Omega}\left(\mathbf{p}\right)$ \cite{Trubnikov1963,BornaticiCanoDeBarbieriEngelman1983}:
\begin{equation}
\alpha_{\omega\Omega}=-\frac{8\pi^{3}c^{2}}{n_{r}^{2}\omega^{2}}\int d^{3}\mathbf{p}\eta_{\omega\Omega}\left(\mathbf{p}\right)\left(\frac{\partial f}{\partial\varepsilon}+\frac{n_{\parallel}}{c}\frac{\partial f}{\partial p_{\parallel}}\right),\label{eq:coef_abs}
\end{equation}
\noindent where $n_{r}$ is the ray refractive index; we use the
tenuous-plasma approximation and assume $n_{r}\approx1$. Here $n_{\parallel}$
is the wave parallel refractive index, which we take to be simply $n_{\parallel}=\cos\theta$.
Thus, we can calculate the current drive efficiency for a specific
type of the absorption mechanism determined by $\eta_{\omega\Omega}\left(\mathbf{p}\right)$
and for a given external radiation spectrum $I_{\omega\Omega}$.
Although the two slabs, in both emitting and absorbing radiation, form
a coupled system, to obtain the efficiency to linear order in the power
transferred, each slab may be considered to see fixed radiation from the
other slab.
We first argue that it is the current drive efficiency that determines large-scale
magnetic field generation in optically thick plasma, for which the
effect is maximized. For optically thick plasma, the incoming radiation
flux $I=\iint d\omega d\Omega I_{\omega\Omega}$ is absorbed over
the characteristic distance $R=\alpha^{-1}$, where $\alpha=P_{V}^{abs}/I$
is the characteristic absorption coefficient. Ampere's law gives $B\cdot2\pi r\approx\left(4\pi/c\right)j_{\parallel}Rh$,
where $h$ is the height of the plasma disk, so the large-scale equilibrium
magnetic field at distance $r$ outside the plasma is proportional
to the current drive efficiency:
\begin{equation}
B\approx\frac{2}{c}\frac{h}{r}\left(\frac{j_{\parallel}}{P_{V}^{abs}}\right)I.
\end{equation}
\noindent Kinetic effects change the current drive efficiency and
thus the equilibrium field, but they hardly affect the time to reach equilibrium $t_{eq}$, the so-called ``L/R time''.
That time is still determined
by the Spitzer conductivity, since the full distribution function is
equally pushed by an electric field \cite{Fisch1985}. The time to reach equilibrium $t_{eq}$ greatly exceeds the age of the universe, and so the actual value of the magnetic seed is determined at some characteristic time $t_{seed}\ll t_{eq}$, and
it increases to the same extent as the efficiency increases (see Fig.~\ref{fig02}).
\begin{figure}[]
\includegraphics[width=8.6cm]{fig02}
\caption{\label{fig02}Schematic picture of the magnetic field growth. Magnetic field grows approximately linearly with time until it saturates at equilibrium value $B_{eq}$. Kinetic effects increase $B_{eq}$ to the same extent as they increase the current drive efficiency, but they hardly change the time to reach equilibrium $t_{eq}$. The actual value of the magnetic seed is determined at some characteristic time $t_{seed}\ll t_{eq}$, and it increases to the same extent as the efficiency increases.}
\end{figure}
\subsection{Blackbody incoming radiation and cyclotron absorption}
To take one example, let us assume that the incoming radiation from
the first slab is blackbody:
\begin{equation}
I_{\omega\Omega}=\frac{\omega^{2}}{8\pi^{3}c^{2}}\frac{T_{1}}{\bar{\gamma}\left(1+\bar{\beta}\cos\theta\right)}.
\end{equation}
\noindent If the plasma were already immersed in a parallel magnetic field,
then one of the absorption mechanisms would be synchrotron absorption.
In the non-relativistic case, it is determined by the emissivity \cite{Bekefi1966}:
\begin{equation}
\eta_{\omega\Omega}\left(\boldsymbol{\beta}\right)=\frac{e^{2}\beta_{\perp}^{2}\omega^{2}}{4\pi c}\left(1+\cos^{2}\theta\right)\delta\left[\omega_{c2}-\omega\left(1-\beta_{\Vert}\cos\theta\right)\right].\label{eq:cyclotron_emis}
\end{equation}
After some algebra it is easy to show that the current drive efficiency
in this case is
\begin{equation}
\frac{j_{\parallel}}{P_{V}^{abs}}=-\frac{\left\langle \beta_{\perp}^{2}\beta^{3}I_{2}\left(\beta_{\parallel}\right)\right\rangle +3\left\langle \beta_{\perp}^{2}\beta\beta_{\parallel}I_{1}\left(\beta_{\parallel}\right)\right\rangle }{6\left\langle \beta_{\perp}^{2}I_{1}\left(\beta_{\parallel}\right)\right\rangle },\label{eq:eff}
\end{equation}
where the averaging is over the initial distribution that
is taken to be a Maxwellian, and the following integrals are introduced:
\begin{align}
& I_{1}\left(\beta_{\parallel}\right)=\int_{-1}^{1}dx\frac{1+x^{2}}{\left(1-\beta_{\Vert}x\right)^{3}\left(1+\bar{\beta}x\right)},\\
& I_{2}\left(\beta_{\parallel}\right)=\int_{-1}^{1}dx\frac{x\left(1+x^{2}\right)}{\left(1-\beta_{\Vert}x\right)^{3}\left(1+\bar{\beta}x\right)}.
\end{align}
\noindent Keeping only the terms of the order $O(\beta_{\Vert}^{2}),$
$O\left(\bar{\beta}^{2}\right),$ $O\left(\beta_{\Vert}\bar{\beta}\right)$
we find:
\begin{equation}
\frac{j_{\parallel}}{P_{V}^{abs}}=\frac{\left\langle \beta_{\perp}^{2}\beta^{3}\right\rangle +9\left\langle \beta_{\perp}^{2}\beta\beta_{\parallel}^{2}\right\rangle }{15\left\langle \beta_{\perp}^{2}\right\rangle }\bar{\beta}\approx2.4\beta_{th}^{3}\bar{\beta}.\label{eq:eff_cyclotron_1}
\end{equation}
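The Maxwellian averages in Eq.~(\ref{eq:eff_cyclotron_1}) can be verified by Monte Carlo. This is our own check script, with $\beta_{th}=1$ and the parallel direction taken along $x$:

```python
import numpy as np

# Monte Carlo check of (<bperp^2 b^3> + 9 <bperp^2 b bpar^2>) / (15 <bperp^2>)
# for a Maxwellian with beta_th = 1: each component is a unit normal.
rng = np.random.default_rng(0)
v = rng.normal(size=(2_000_000, 3))
b2 = (v**2).sum(axis=1)                 # beta^2
b = np.sqrt(b2)
bpar2 = v[:, 0]**2                      # parallel component squared
bperp2 = b2 - bpar2
num = (bperp2 * b**3).mean() + 9.0 * (bperp2 * b * bpar2).mean()
coeff = num / (15.0 * bperp2.mean())
# coeff is approximately 2.4 within Monte Carlo noise.
```

The exact value is $\approx2.38\,\beta_{th}^{3}$, confirming the factor of $\sim6$ enhancement over the fluid estimate of Eq.~(\ref{eq:eff_PR}).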
If we ignored the second term in the numerator of Eq.~(\ref{eq:eff_cyclotron_1})
and also did not account for $\beta_{\perp}^{2}$ in the absorption,
then the efficiency would be given by Eq.~(\ref{eq:eff_PR}), i.e.,
correspond to the case of Thomson scattering analyzed through fluid
theory.
From comparison of Eq.~(\ref{eq:eff_cyclotron_1}) and Eq.~(\ref{eq:eff_PR}),
we see that for cyclotron absorption the inclusion of the kinetic
effects boosts the generated current by a factor of 6. This is not
a huge change, though it is not insignificant either. For reference,
the fluid estimates for the galactic magnetic field are about $10^{-19}\:\textrm{G}$
\cite{Widrow2002}, while the estimates for the required lower bound
on the seed galactic field is about $10^{-14}\:\textrm{G}$ \cite{KulsrudZweibel2008}.
The cyclotron absorption mechanism requires some parallel (toroidal) magnetic
field to already be present in order to work. However, we see that the efficiency
(\ref{eq:eff_cyclotron_1}) is independent of the magnetic field,
so it seems that we can get a poloidal magnetic field from a very small
toroidal field, generated, for example, by the Biermann battery effect
\cite{Biermann1950}. This works only when all the incoming radiation
is absorbed within the plasma, so that the effective absorption length
is less than the characteristic size of the system. For blackbody
incoming radiation flux and cyclotron absorption the effective absorption
coefficient is
\begin{equation}
\alpha=\frac{4}{3\pi}\frac{k_{B}^{4}}{c^{3}\sigma_{SB}T_{1}^{3}}\omega_{p2}^{2}\omega_{c2}^{2},
\end{equation}
or $\alpha\approx10^{-20+n+2b-3k}\:\textrm{cm\ensuremath{^{-1}}}$
for $T_{1}/k_{B}=10^{k}\:\textrm{K}$, $n_{2}=10^{n}\:\textrm{cm\ensuremath{^{-3}}}$,
and $B_{2}=10^{b}\:\textrm{G}$. If we take typical protogalactic
values $T_{1}/k_{B}=10^{4}\:\textrm{K}$ and $n=1\:\textrm{cm\ensuremath{^{-3}}}$,
then for $B_{2}=10^{-20}\:\textrm{G}$, which can realistically be produced
by the Biermann battery, the effective absorption length becomes $R\sim10^{71}\:\textrm{cm}$,
which is much larger than the characteristic size of the system. Thus,
the cyclotron absorption mechanism cannot be responsible for the generation
of the galactic seed field. However, it might be a very effective mechanism
of poloidal magnetic field generation in already highly magnetized
objects such as neutron stars.
\subsection{Cyclotron incoming radiation and absorption}
So far we have considered blackbody incoming radiation.
One can expect that if the incoming radiation were narrower
in $k_{\parallel}$, then its absorption would be more asymmetric
in the parallel velocity of the second slab, which would result in
an enhanced efficiency.
To investigate this, consider the case where each of the slabs is immersed in an axial magnetic field, though the respective magnetic fields are not necessarily of the same strength.
Suppose that cyclotron radiation is emitted by an optically thin surface layer of depth $L$.
The incoming flux is then given by \cite{Bekefi1966}
\begin{equation}
I_{\omega\Omega}=\frac{n_{1}Le^{2}\beta_{th1}}{4\pi\sqrt{2\pi}c}\frac{\omega}{\left|\cos\theta\right|}\left(1+\cos^{2}\theta\right)e^{-\frac{\left(\frac{\omega-\omega_{c1}}{\omega\cos\theta}+\bar{\beta}\right)^{2}}{2\beta_{th1}^{2}}}.
\end{equation}
The current drive efficiency for cyclotron absorption has the same
form as Eq.~(\ref{eq:eff}), but with the following definition of
$I_{1}$, $I_{2}$:
\begin{align}
& I_{1}\left(\beta_{\parallel}\right)=\left(\int_{-\infty}^{-\left|a\right|}dx+\int_{\left|a\right|}^{\infty}dx\right)\frac{1}{\left|x\right|^{3}}\frac{\left(x^{2}+a^{2}\right)^{2}}{\left(x-a\beta_{\Vert}\right)^{2}}e^{-\frac{\left(x+b\right)^{2}}{2}},\\
& I_{2}\left(\beta_{\parallel}\right)=\left(\int_{-\infty}^{-\left|a\right|}dx+\int_{\left|a\right|}^{\infty}dx\right)\frac{a}{x^{4}}\frac{\left(x^{2}+a^{2}\right)^{2}}{\left(x-a\beta_{\Vert}\right)^{2}}e^{-\frac{\left(x+b\right)^{2}}{2}},
\end{align}
where $a=(\omega_{c2}-\omega_{c1})/\omega_{c2}\beta_{th1}\equiv\bigtriangleup\omega_{c}/\omega_{c2}\beta_{th1}$,
and $b=\left(\omega_{c1}/\omega_{c2}\right)(\beta_{\Vert}/\beta_{th1})+\bar{\beta}/\beta_{th1}$.
The first term in the numerator of Eq.~(\ref{eq:eff}) with $I_{1}$
and $I_{2}$ defined above is due to direct parallel momentum injection
and so depends on the sign of $\bigtriangleup\omega_{c}$, while the
second is due to asymmetric heating. Since now absorption is localized
in velocity space and most of the absorbed power goes into heating
rather than giving a parallel push, the second term completely dominates,
and the efficiency becomes essentially independent of the sign of
$\bigtriangleup\omega_{c}$. There are two qualitatively different
cases that produce current of different sign: $\left|a\right|\ll1$
(positive current) and $\left|a\right|\gtrsim1$ (negative current).
From a numerical treatment it appears that for a wide range of parameters
the efficiency is approximately given by
\begin{equation}
\frac{\left|j_{\parallel}\right|}{P_{V}^{abs}}\sim10^{2}\beta_{th}^{2}\bar{\beta}.
\end{equation}
This efficiency is larger than that for blackbody radiation by a factor of
$10^{2}/\beta_{th}$, which for $T/k_{B}\approx10^{4}\:\textrm{K}$ is $\sim10^{5}$.
Therefore, at least in the case when the plasma already possesses
some toroidal magnetic field, one can expect the generated poloidal
magnetic field to be orders of magnitude larger than the previous
estimates based on the fluid theory.
These results can be understood from the following qualitative picture.
The non-relativistic cyclotron resonance condition for an electron
of slab 2 moving with velocity $\beta_{\Vert2}$ to absorb the radiation
emitted by an electron of slab 1 moving with velocity $\beta_{\parallel1}$
is
\begin{equation}
\omega_{c1}-k_{\Vert}c\left(\beta_{\Vert2}-\beta_{\Vert1}\right)=\omega_{c2}.\label{eq:res_condition}
\end{equation}
\noindent Here we use $k_{\parallel}$ corresponding to the reference
frame where the electron of slab 1 is stationary, and velocities $\beta_{\Vert1}$,
$\beta_{\Vert2}$ are measured in the frame where slab 2 is stationary.
\begin{figure}[]
\subfloat[\label{fig03_a}]{\includegraphics[width=0.24\textwidth]{fig03a}}
\subfloat[\label{fig03_b}]{\includegraphics[width=0.24\textwidth]{fig03b}}
\caption{\label{fig03}(a) $\omega_{c1}\simeq\omega_{c2}$: electrons of slab
2 with negative velocities around $-\bar{\beta}$ (left blue region)
interact with a large number of electrons of slab 1 (left red region),
while the symmetric electrons of slab 2 with positive velocities around
$\bar{\beta}$ (right blue region) interact with a small number of electrons
of slab 1 (right red region). (b) $\omega_{c1}\protect\neq\omega_{c2}$:
electrons of slab 2 with negative velocities around $-\bar{\beta}$
(left blue region) have a large number of electrons of slab 1 in the
non-absorption window (left white region) and thus absorb less energy
than the symmetric electrons of slab 2 with positive velocities around
$\bar{\beta}$ (right blue region), which have a small number of electrons
of slab 1 in the non-absorption window (right white region).}
\end{figure}
If $\left|a\right|\ll1$, then essentially $\omega_{c1}\simeq\omega_{c2}$
and the resonance condition is $k_{\Vert}=0$ or $\beta_{\Vert2}=\beta_{\Vert1}$.
The former condition does not depend on the velocities and cannot
lead to an asymmetry, while the latter leads to asymmetric
absorption. Indeed, the electrons with positive parallel velocity
$\beta_{\Vert2}\approx\bar{\beta}$ absorb less power than the electrons
with negative parallel velocity $\beta_{\Vert2}\approx-\bar{\beta}$,
because the latter are in resonance with a much larger number of electrons
in slab 1. Thus, the electrons with negative parallel velocities will
experience a weaker Coulomb drag force than the electrons with positive
velocities, resulting in a positive current. This situation is shown
in Fig.~\ref{fig03}\subref{fig03_a}.
If $\left|a\right|\gtrsim1$, then the magnetic fields differ, $\omega_{c1}\neq\omega_{c2}$,
and the resonance condition (\ref{eq:res_condition})
implies that the electrons of slab 2 with velocity $\beta_{\Vert2}$
will resonantly interact with the electrons of slab 1 with parallel
velocities satisfying
\begin{equation}
\begin{cases}
\beta_{\Vert1}<\beta_{\Vert2}-\left|\triangle\omega_{c}\right|/\omega_{c1},\\
\beta_{\Vert1}>\beta_{\Vert2}+\left|\triangle\omega_{c}\right|/\omega_{c1}.
\end{cases}\label{eq:interaction_vals}
\end{equation}
\noindent Thus there is a window in the absorption for each electron.
The electrons of slab 2 with negative parallel velocity around $-\bar{\beta}$
will have a larger number of electrons of slab 1 in this window and
thus will receive less power than the electrons of slab 2 around $\bar{\beta}$.
The result is a negative current. This situation is shown in Fig.~\ref{fig03}\subref{fig03_b}.
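Solving (\ref{eq:res_condition}) for the wavenumber gives $k_{\Vert}c=(\omega_{c1}-\omega_{c2})/(\beta_{\Vert2}-\beta_{\Vert1})$, and the window (\ref{eq:interaction_vals}) follows once one requires $\left|k_{\Vert}\right|\le\omega_{c1}/c$ for radiation emitted near the cyclotron frequency of slab 1. A minimal Python sketch of this equivalence (the bound on $k_{\Vert}$ is the only assumption):

```python
def resonance_possible(b1, b2, wc1, wc2):
    """Can an electron of slab 2 (velocity b2) resonantly absorb radiation
    emitted by an electron of slab 1 (velocity b1)?  We solve
    wc1 - k*c*(b2 - b1) = wc2 for k*c and require |k*c| <= wc1
    (radiation emitted near the cyclotron frequency of slab 1)."""
    dwc = wc1 - wc2
    if dwc == 0.0:
        return True            # k = 0 satisfies the condition
    if b1 == b2:
        return False           # no finite k can bridge the frequency gap
    return abs(dwc / (b2 - b1)) <= wc1

def in_window(b1, b2, wc1, wc2):
    """Restatement of the window: resonance fails exactly when b1 lies
    within a band of half-width |dwc|/wc1 around b2."""
    return abs(b1 - b2) < abs(wc1 - wc2) / wc1
```

Checking `resonance_possible` against `in_window` on a grid of velocities confirms that the two formulations agree.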
Notice that one gets the same efficiency, but with a different sign, for
the current driven in the first slab. For blackbody incoming radiation,
this current would always flow in the opposite direction, but, interestingly,
for cyclotron radiation, it is possible to have a situation in which the
currents in both slabs flow in the same direction. Indeed, since the
parameter $a$ depends on temperature, it can have different values
corresponding to the two different regimes ($\left|a\right|\ll1$ and
$\left|a\right|\gtrsim1$) in each slab. Thus, we reach the surprising result
that, for a differentially rotating plasma disk immersed in a toroidal
magnetic field and with a temperature gradient, it is possible that a toroidal
current will be self-consistently generated in the same direction throughout the disk.
\begin{figure}[]
\includegraphics[width=8.6cm]{fig04}
\caption{\label{fig04}Absorbed power density per electron as a function of
parallel velocity for $\beta_{th}=0.01$, $\bar{\beta}=0.05$ and
different values of $a$: $a=0.01$ (solid blue), $a=0.1$ (dotted
red), $a=1.0$ (dashed orange), $a=3.0$ (dash-dotted green).}
\end{figure}
Figure \ref{fig04} shows the absorbed power per electron as a function
of the parallel velocity for $\beta_{th}=0.01$, $\bar{\beta}=0.05$
and four different values of the parameter $a$. We can clearly see that
the absorption is asymmetric. For $\left|a\right|\ll1$ the situation
is basically analogous to the case of equal magnetic fields shown
in Fig.~\ref{fig03}\subref{fig03_a}, in which the electrons with negative
parallel velocities (around $-\bar{\beta}$) absorb more power, resulting
in a positive current. In contrast, for $\left|a\right|\gtrsim1$ there
is a dip in the absorption for the electrons moving with negative
velocities, resulting in a negative current (see Fig.~\ref{fig03}\subref{fig03_b}).
We can also see that, as the difference between magnetic fields of
the slabs increases, i.e.,\ as the parameter $\left|a\right|$ increases,
the total absorbed power decreases rapidly. Thus, in the limit $\left|a\right|\gg1$,
the efficiency estimate should be viewed with caution, because the total
absorbed power density is negligible and the radiation has to pass through
a very large volume of plasma to be fully absorbed.
\section{Conclusion}
The generation of cosmic magnetic fields due to radiation transfer
can be significantly larger when one takes into account kinetic effects
rather than simply relying on fluid theory. In the case of cyclotron
radiation, namely, when an ambient magnetic field already exists, the
increase in the fields perpendicular to the ambient field can be orders
of magnitude larger when kinetic effects are considered. Curiously, in
the case of an inhomogeneous field, it
is possible to generate these perpendicular fields such that the current
produced within two differentially traveling, radiating, and absorbing
slabs is in the same direction, an effect that would not be captured
in the fluid theory.
The formalism advanced here shows how to deal with a radiative process that is kinetic by its very nature.
It is expected that this formalism can be applied to various areas of astrophysics where
radiative kinetic effects might be important, for example, to radiative magnetic
reconnection \cite{Uzdensky2016}. It is also hoped that the present approach might help improve the accuracy of the currently widely used astrophysical radiative transfer codes, which, to the best of our knowledge, exist only
in hydrodynamic versions (see, for example, Refs.~\cite{ShaneStoneJiang2012,ZEUSIII}).
\section*{Acknowledgement}
This work is supported by DOE Contract No.~DE-AC02-09CH1-1466.
\bibliographystyle{apsrev4-1}
\section{Introduction}
Associahedra are polytopes with importance in many areas of combinatorics. Among other things, the skeleton of the $(n-1)$-dimensional associahedron $\mathcal{A}_n$ represents the \emph{rotation graph} of binary search trees (BSTs) on $n$ keys. More precisely, each vertex of $\mathcal{A}_n$ represents a BST, and each edge represents a rotation (definitions of BSTs and rotations can be found in standard textbooks). The diameter of $\mathcal{A}_n$ is known to be precisely $2n-6$ when $n > 10$~\cite{SleatorEtAl1988,Pournin2014}; that is, any BST can be transformed into any other BST with at most $2n-6$ rotations, and this is tight.
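For small $n$, the rotation graph of BSTs is easy to explore by brute force. The following Python sketch (our own illustration, with BSTs encoded as nested tuples) recovers, e.g., the pentagon for $n = 3$; the exact $2n-6$ formula, of course, only applies for $n > 10$, far beyond what brute force reaches comfortably.

```python
from collections import deque

def all_bsts(lo, hi):
    """All BSTs on keys lo..hi, as nested tuples (key, left, right)."""
    if lo > hi:
        return [None]
    return [(r, L, R)
            for r in range(lo, hi + 1)
            for L in all_bsts(lo, r - 1)
            for R in all_bsts(r + 1, hi)]

def neighbors(t):
    """All trees reachable from t by a single rotation."""
    if t is None:
        return
    k, L, R = t
    if L is not None:                       # right rotation at this node
        lk, ll, lr = L
        yield (lk, ll, (k, lr, R))
    if R is not None:                       # left rotation at this node
        rk, rl, rr = R
        yield (rk, (k, L, rl), rr)
    for L2 in neighbors(L):                 # rotations inside the subtrees
        yield (k, L2, R)
    for R2 in neighbors(R):
        yield (k, L, R2)

def rotation_diameter(n):
    """Diameter of the rotation graph of BSTs on n keys (BFS from every tree)."""
    best = 0
    for s in all_bsts(1, n):
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in neighbors(u):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best
```

The number of trees produced by `all_bsts(1, n)` is the $n$-th Catalan number, matching the vertex count of $\mathcal{A}_n$.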
In this paper, we study a generalization of associahedra called \emph{graph associahedra}. Graph associahedra were originally defined using \emph{tubings}~\cite{CarrDevadoss2004}. We use an equivalent definition based on \emph{search trees on graphs} (STGs).
While the key space of a BST is a linearly ordered set, the key space of an STG is a graph. Formally, given a connected graph $G = (V,E)$, an STG on $G$ is a rooted tree that can be constructed as follows. Choose a vertex $r \in V$ as the root. Then, recursively create STGs on the connected components of $G \setminus r$, and add them to the STG as children of $r$. Rotations on STGs can be defined similarly as for BSTs (more details in \cref{sec:prelims}). Search trees on graphs have been used in various contexts under different names (see, e.g., \cite[Section 2.2]{CardinalEtAl2018b}).
Given a connected graph $G$ on $n$ vertices, Carr and Devadoss~\cite{CarrDevadoss2004} defined the graph associahedron $\mathcal{A}(G)$ as an $(n-1)$-dimensional polytope such that the skeleton of $\mathcal{A}(G)$ is isomorphic to the rotation graph of the STGs on $G$. Since STGs on the path with $n$ vertices correspond to BSTs on $n$ nodes, we obtain the conventional associahedron when $G$ is a path.
For search trees $T_1$ and $T_2$ on a graph $G$, let the \emph{rotation distance} $d(T_1, T_2)$ be the minimum number of rotations required to transform $T_1$ into $T_2$. The diameter $\delta(\mathcal{A}(G))$ of $\mathcal{A}(G)$ is the maximum rotation distance between two search trees on $G$.
Manneville and Pilaud~\cite{MannevillePilaud2015} showed that $\max\{2n-18, m\} \le \delta(\mathcal{A}(G)) \le \binom{n}{2}$ for every connected graph $G$ with $n$ vertices and $m$ edges. Moreover, the diameter of graph associahedra is monotone under the addition of edges. Both bounds are asymptotically tight. For example, conventional associahedra ($G$ is a path) and cyclohedra ($G$ is a cycle) have linear diameter, and permutohedra ($G$ is the complete graph) have diameter $\binom{n}{2}$.
In this paper, we consider the case where $G$ is a \emph{caterpillar tree}. A caterpillar tree (or simply \emph{caterpillar}) is a tree consisting of a path and some number of leaves that are adjacent to the path.
The choice of that path is not unique\footnote{Either the ends of the path are leaves, or there is a leaf attached to an end of the path which could be considered part of the path.}, but we assume that any considered caterpillar consists of a distinguished path called the \emph{spine} and any number of leaves, called \emph{legs}. Our results are not significantly affected by the choice of the spine.
We determine the diameter of every caterpillar associahedron up to a constant factor. This involves the Shannon entropy of the ``leg distribution'', which we now properly define. Let $G$ be a caterpillar tree with $n$ spine vertices $s_1, s_2, \dots, s_n$, let $s_i$ be adjacent to $m_i$ leg vertices, and let $m = m_1 + m_2 + \dots + m_n$ be the total number of leg vertices. Then
\begin{align*}
H(G) = H(m_1, m_2, \dots, m_n) = \sum_{i \in [n], m_i > 0} \frac{m_i}{m} \log\left(\frac{m}{m_i}\right).
\end{align*}
For simplicity of presentation, we write $H'(\cdot) = H(\cdot) + 1$. We are now ready to state our main result.
\begin{restatable}{theorem}{restateCatLB}\label{p:main}
Let $G$ be a caterpillar tree with $n$ spine vertices and $m$ leg vertices. Then $\delta(\mathcal{A}(G)) \in \Theta( n + m \cdot H'(G) )$.
\end{restatable}
Notably, if $m = n$ and each spine node is adjacent to one leg node, then $\delta(\mathcal{A}(G)) \in \Omega(n \log n)$.
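The quantities in \cref{p:main} are straightforward to compute; a minimal Python sketch (we assume base-2 logarithms here, which only affects the constants hidden in the $\Theta$-bound):

```python
from math import log2

def leg_entropy(ms):
    """H(m_1,...,m_n) = sum over i with m_i > 0 of (m_i/m) * log(m/m_i)."""
    m = sum(ms)
    return sum(mi / m * log2(m / mi) for mi in ms if mi > 0)

def diameter_estimate(ms):
    """The quantity n + m * H'(G) from the theorem (up to constant factors)."""
    n, m = len(ms), sum(ms)
    return n + m * (leg_entropy(ms) + 1.0)
```

For one leg per spine node ($m = n$), `leg_entropy` gives $\log n$, so the estimate is $\Theta(n \log n)$; if all legs sit at a single spine vertex, the entropy vanishes and the estimate is linear.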
Our proofs make use of techniques from the design of \emph{optimal BSTs}. A connection between rotations in BSTs and rotations in search trees on caterpillars is not surprising -- caterpillars are similar to paths, after all. However, we show a connection to \emph{queries} to BSTs. Essentially, the leg nodes in search trees on caterpillars can be seen as queries to the BST on the spine nodes. For our upper bound (\cref{sec:upper}), we use the fact that an optimal \emph{static} BST for an input distribution $X$ has amortized query cost $H'(X)$~\cite{Mehlhorn1975}. For our lower bound (\cref{sec:lower}), we use \emph{Wilber's first lower bound} \cite{Wilber1989}, which bounds the performance of \emph{dynamic} BSTs on a certain input sequence. We show that it also bounds the rotation distance between certain search trees on a caterpillar. Finally, we show that Wilber's first lower bound is asymptotically equal to $H'(X)$ if the input distribution $X$ is fixed, but the order of queries is worst possible. Note that this also implies that dynamic BSTs cannot beat an optimal static BST on any distribution if the ordering is worst possible. Kujala and Elomaa~\cite{KujalaElomaa2008} previously showed that this is true even if the ordering is random, but they did not use Wilber's bound.
\paragraph{Related work.}
Improved bounds on $\delta(\mathcal{A}(G))$ are known if $G$ belongs to certain graph classes. Pournin~\cite{Pournin2014a} showed that $\delta(\mathcal{A}(G)) \approx 2.5n$ if $G$ is the cycle on $n$ vertices. Cardinal, Langerman, and Pérez-Lantero~\cite{CardinalEtAl2018b} showed that $\delta(\mathcal{A}(G)) \in \mathcal{O}(n \log n)$ if $G$ is a tree on $n$ vertices, and that this bound is tight if $G$ has the form of a balanced binary tree.
Recently, Cardinal, Pournin, and Valencia-Pabon~\cite{CardinalEtAl2021} showed that $\delta(\mathcal{A}(G)) \in \mathcal{O}( \mathrm{td}(G) \cdot n )$, where $\mathrm{td}(G)$ is the \emph{treedepth}\footnote{The treedepth $\mathrm{td}(G)$ can be defined as the minimum height of a search tree on $G$.} of $G$, and that this bound is attained by \emph{trivially perfect graphs}. Using the relationship between treedepth and \emph{treewidth},
this extends the $\mathcal{O}(n \log n)$ upper bound to graphs with bounded treewidth. They also showed that this bound is tight for graphs of \emph{pathwidth two} (which have treewidth at most two, but are not necessarily trees). For the definitions of treewidth and pathwidth, we refer to \cite{CardinalEtAl2021}. Our \cref{p:main} shows that the $\mathcal{O}(n \log n)$ bound is tight already for caterpillars, which are both trees and have pathwidth \emph{one} (in fact, caterpillars are precisely the graphs of pathwidth one).
We do not consider \emph{queries} to STGs in this paper. In the case where $G$ is a tree, some results from BSTs have been carried over. Bose, Cardinal, Iacono, Koumoutsos, and Langerman~\cite{BoseEtAl2019} presented an $\mathcal{O}(\log \log n)$-competitive search tree algorithm based on \emph{tango trees} for BSTs~\cite{DemaineEtAl2007}. Berendsohn and Kozma~\cite{BerendsohnKozma2022} described a variant of Splay trees \cite{SleatorTarjan1985}, and a polynomial-time approximation scheme for the optimal static search tree on a given tree for a given input distribution. Notably, it is still unknown whether an optimal static search tree on a tree can be found in polynomial time.
Berendsohn and Kozma~\cite{BerendsohnKozma2022} also showed that if we only consider a subset of search trees on a tree $G$ called $k$-cut trees, then the maximum rotation distance between two STGs is linear. A special case of $k$-cut trees are \emph{Steiner-closed trees}, which play a central role in the results of \cite{BoseEtAl2019} and \cite{BerendsohnKozma2022}.
\section{Preliminaries}\label{sec:prelims}
In this paper, we consider (simple and undirected) graphs on the one hand, and (rooted) search trees on the other. We call the vertices of search trees \emph{nodes}. In both cases, we denote by $V(\cdot)$ the set of vertices or nodes and by $E(\cdot)$ the set of edges.
Let $G$ be a graph. We denote the subgraph of $G$ induced by $U \subseteq V(G)$ by $G[U]$. For $v \in V(G)$, we write $G \setminus v = G[V(G) \setminus\{v\}]$.
Let $T$ be a rooted tree and $x \in V(T)$. For a node $x$, $T_x$ denotes the subtree of $T$ consisting of $x$ and all its descendants. The \emph{depth} of $x$ is the number of nodes in the path from the root of $T$ to $x$, and is denoted by $\mathrm{depth}_T(x)$.
\paragraph{Queries in binary search trees.}
In the \emph{dynamic BST model}, we are given a starting BST $S$ on $[n]$ and a sequence $\sigma$ of access queries. Each access query specifies a node $i \in [n]$. We start each query with a pointer at the root, and are required to move the pointer to the node $i$ to satisfy the query. To this end, we are allowed to move the pointer to the parent or a child of the node it is currently pointing at, or to execute a rotation involving that node. Let $\mathrm{OPT}(S, \sigma)$ denote the minimum number of pointer moves and rotations needed to serve $\sigma$. We charge a pointer move at the start of each query, when the pointer is moved to the root, so each query has cost at least one.
Since the rotation distance between two BSTs is $\mathcal{O}(n)$, we can always replace the starting BST $S$ by a different one at the cost of $\mathcal{O}(n)$. If the access sequence is long enough, this cost is insignificant; therefore, let us define $\mathrm{OPT}(\sigma) = \min_S \mathrm{OPT}(S, \sigma)$. For each BST $S$, we have $\mathrm{OPT}(S, \sigma) \le \mathrm{OPT}(\sigma) + \mathcal{O}(n)$.
It is not known how to compute or approximate $\mathrm{OPT}(S, \sigma)$ or the associated sequence of operations efficiently. However, a number of algorithms have been conjectured to be \emph{instance-optimal}, i.e., to serve every access sequence $\sigma$ with a cost of $\mathcal{O}(\mathrm{OPT}(\sigma) + n)$, most notably \emph{Splay}~\cite{SleatorTarjan1985} and \emph{Greedy}~\cite{Lucas1988,Munro2000}. We emphasize that Splay is an \emph{online} algorithm, i.e., it serves each query independently of future queries, and that Greedy can be made online~\cite{DemaineEtAl2009} with only a constant-factor overhead. It is currently unknown whether any online algorithm can approximate the offline optimum $\mathrm{OPT}(\sigma)$ within a constant factor; this is the subject of the \emph{dynamic optimality conjecture}.
There are several lower bounds known for $\mathrm{OPT}(\sigma)$. In this paper, we use \emph{Wilber's first lower bound}~\cite{Wilber1989}, which we define and discuss in \cref{sec:wilber_bst}.
If we do not allow rotations, then we can only move the pointer down until we hit the queried node. Thus, the minimum cost of serving $\sigma = (x_1, x_2, \dots, x_m)$ in $S$ is clearly $\sum_{i=1}^m \mathrm{depth}_S(x_i)$. A BST $S$ minimizing this quantity is called an \emph{optimal static BST}. Note that only the frequencies of the elements in $\sigma$ affect the static cost, not the order. For a BST $S$ on $[n]$ and element frequencies $m_1, m_2, \dots, m_n \in \mathbb{N}_0$, define $$\mathrm{cost}(S, m_1, m_2, \dots, m_n) = \sum_{i=1}^n m_i \cdot \mathrm{depth}_S(i).$$ Let $\text{OPT-ST}(m_1, m_2, \dots, m_n)$ be the minimum $\mathrm{cost}(S, m_1, m_2, \dots, m_n)$ over all possible BSTs $S$.
It is possible to compute an optimal static BST in $\mathcal{O}(n^2)$ time~\cite{Knuth1971}. Mehlhorn showed that $\frac{1}{m}\text{OPT-ST}(m_1, m_2, \dots, m_n)$ is within a factor of two of the Shannon entropy.
\begin{lemma}[{\cite{Mehlhorn1975}}]\label{p:opt-st-entropy}
Let $X = (m_1, m_2, \dots, m_n)$ be a sequence of nonnegative integers, and let $m = m_1 + m_2 + \dots + m_n$. Then
\begin{align*}
\frac{1}{2}H(X) \cdot m \le \text{OPT-ST}(X) \le (2H(X) + 2) \cdot m = 2H'(X) \cdot m.
\end{align*}
\end{lemma}
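For concreteness, $\text{OPT-ST}$ can be computed by the standard interval dynamic program in $\mathcal{O}(n^{3})$ time (Knuth's technique~\cite{Knuth1971} improves this to $\mathcal{O}(n^{2})$). A Python sketch, with the root at depth $1$ as above:

```python
def opt_static_bst(ms):
    """min over BSTs S of sum_i m_i * depth_S(i), root at depth 1.
    Interval DP: cost(i,j) = weight(i,j) + min_r (cost(i,r-1) + cost(r+1,j)),
    since choosing root r adds one level of depth to every key in [i, j]."""
    n = len(ms)
    pre = [0]                              # prefix sums of the frequencies
    for mi in ms:
        pre.append(pre[-1] + mi)
    cost = {}
    def c(i, j):
        return cost[(i, j)] if i <= j else 0   # empty range costs nothing
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            weight = pre[j + 1] - pre[i]
            cost[(i, j)] = weight + min(c(i, r - 1) + c(r + 1, j)
                                        for r in range(i, j + 1))
    return cost[(0, n - 1)]
```

For uniform frequencies this recovers the balanced-tree cost; for skewed frequencies the optimum stays within the entropy sandwich of \cref{p:opt-st-entropy}.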
\paragraph{Search trees on graphs.}
Let $G$ be a connected graph, and let $T$ be a rooted tree such that $V(T) = V(G)$. Let $r$ be the root of $T$ and let $c_1, c_2, \dots, c_k$ be the children of $r$ in $T$. Then $T$ is a \emph{search tree on $G$} if
\begin{enumerate}[(a)]
\itemsep0pt
\item $G \setminus r$ consists of precisely $k$ connected components $C_1, C_2, \dots, C_k$ such that $V(T_{c_i}) = V(C_i)$ for each $i \in [k]$; and
\item $T_{c_i}$ is a search tree on $C_i$ for each $i \in [k]$.
\end{enumerate}
Let $T$ be a search tree on $G$, let $p \in V(G)$, let $c$ be a child of $p$ in $T$, and let $g$ be the parent of $p$ in $T$, if $p$ is not the root. A \emph{rotation} of the edge $(p,c)$ makes $c$ the parent of $p$ and a child of $g$ (or the root), and accordingly redistributes children so that the result is still an STG. More precisely, it (1) makes $c$ a child of $g$ if $p$ is not the root, and otherwise makes $c$ the root; (2) makes $p$ a child of $c$; and (3) makes each child $x$ of $c$ a child of $p$ for which $V(T_x)$ contains both a vertex adjacent to $p$ and a vertex adjacent to $c$. See \cref{fig:rotation} for an illustration. It can be checked that the rooted tree resulting from a rotation is indeed an STG. It is also easy to see that each STG on $G$ can be rotated into every other STG on $G$, e.g., by rotating the correct element to the root and then recursing on the subtrees.
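The recursive definition above translates directly into a checker. A minimal Python sketch (graph given as an adjacency dictionary, search tree as a root plus a children dictionary; the encoding is our own choice for illustration):

```python
def components(adj, verts):
    """Connected components of the subgraph induced on `verts`."""
    left, comps = set(verts), []
    while left:
        stack, comp = [next(iter(left))], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(u for u in adj[v] if u in left)
        left -= comp
        comps.append(comp)
    return comps

def is_search_tree(adj, root, children):
    """Check conditions (a) and (b): the child subtrees of each node must
    match the connected components left after removing that node."""
    def subtree(x):
        s = {x}
        for c in children.get(x, ()):
            s |= subtree(c)
        return s
    def check(r, verts):
        comps = {frozenset(c) for c in components(adj, verts - {r})}
        kids = list(children.get(r, ()))
        subs = [frozenset(subtree(c)) for c in kids]
        if len(subs) != len(comps) or set(subs) != comps:
            return False
        return all(check(c, set(s)) for c, s in zip(kids, subs))
    return subtree(root) == set(adj) and check(root, set(adj))
```

On a path, this check reduces to the usual BST property: every valid BST on the path passes, while a root with two children in the same remaining component fails.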
\begin{figure}
\centering
\begin{tikzpicture}[xscale=1, yscale=0.75]
\small
\node[component] (A1) at (0,0) {$A_j$};
\node[] (ARem) at (0,1+0.1) {$\vdots$};
\node[component] (A2) at (0,2) {$A_1$};
\node[vertex] (p) at (1,1) {};
\node[above=1mm] at (p) {$p$};
\node[component] (B1) at (2,0) {$B_k$};
\node[] (BRem) at (2,1+0.1) {$\vdots$};
\node[component] (B2) at (2,2) {$B_1$};
\node[vertex] (c) at (3,1) {};
\node[above=1mm] at (c) {$c$};
\node[component] (C1) at (4,0) {$C_\ell$};
\node[] (CRem) at (4,1+0.1) {$\vdots$};
\node[component] (C2) at (4,2) {$C_1$};
\draw[] (A1) -- (p) -- (B1) -- (c) -- (C1);
\draw[] (A2) -- (p) -- (B2) -- (c) -- (C2);
\end{tikzpicture}
\hspace{5mm}
\begin{tikzpicture}[sibling distance = 9mm, level distance = 13mm, child anchor = north]
\footnotesize
\node[vertex] (p) {}
child{ node[triangle] (A1) {\strut$A_1$} }
child{ node[triangle] (A2) {\strut$A_j$} }
child{ node[vertex] (c) {}
child{ node[triangle] (B1) {\strut$B_1$} }
child{ node[triangle] (B2) {\strut$B_k$} }
child{ node[triangle] (C1) {\strut$C_1$} }
child{ node[triangle] (C2) {\strut$C_\ell$} }
};
\draw (p) -- ($(p)+(0,0.3)$);
\node[right=0.5mm] at (p) {$p$};
\node[above right] at (c) {$c$};
\node at ($(A1)!0.5!(A2)+(0,0.2)$) {$\dots$};
\node at ($(B1)!0.5!(B2)+(0,0.2)$) {$\dots$};
\node at ($(C1)!0.5!(C2)+(0,0.2)$) {$\dots$};
\end{tikzpicture}
\hspace{5mm}
\begin{tikzpicture}[sibling distance = 9mm, level distance = 13mm, child anchor = north]
\footnotesize
\node[vertex] (c) {}
child{ node[vertex] (p) {}
child{ node[triangle] (A1) {\strut$A_1$} }
child{ node[triangle] (A2) {\strut$A_j$} }
child{ node[triangle] (B1) {\strut$B_1$} }
child{ node[triangle] (B2) {\strut$B_k$} }
}
child{ node[triangle] (C1) {\strut$C_1$} }
child{ node[triangle] (C2) {\strut$C_\ell$} };
\draw (c) -- ($(c)+(0,0.3)$);
\node[left=0.5mm] at (c) {$c$};
\node[above left] at (p) {$p$};
\node at ($(A1)!0.5!(A2)+(0,0.2)$) {$\dots$};
\node at ($(B1)!0.5!(B2)+(0,0.2)$) {$\dots$};
\node at ($(C1)!0.5!(C2)+(0,0.2)$) {$\dots$};
\end{tikzpicture}
\caption{An STG rotation. \emph{(left)}~A graph $G$ with two vertices $p$ and $c$ that split $G$ into $j + k + \ell$ components. Each line represents one or more edges. \emph{(center)}~The subtree $T_p$ in an STG $T$ on $G$. \emph{(right)}~The result of the rotation $(p,c)$ in $T$.}\label{fig:rotation}
\end{figure}
\paragraph{Projections of STGs.} Both of our proofs make use of a concept defined by Cardinal, Langerman, and Pérez-Lantero \cite{CardinalEtAl2018b}.
Let $T$ be a search tree on a graph $G$, and let $v$ be a leaf vertex in $G$. Note that $v$ has at most one child in $T$.
We define $T \setminus v$ to be the following search tree on $G \setminus v$, obtained by \emph{pruning} $v$: If $v$ has no children, simply remove it. If $v$ has a parent $p$ and a child $c$, remove $v$ and make $c$ a child of $p$. If $v$ is the root of $T$ and has a child $c$, then remove $v$ and make $c$ the root.
If $G$ is a tree, then we can obtain every subgraph of $G$ by progressively removing leaves. Accordingly, if $T$ is a search tree on a tree $G$, and $U \subseteq V(G)$ is a set of vertices such that $G[U]$ is connected, then we can define the \emph{projection} of $T$ onto $U$, written $T[U]$, as the search tree obtained by progressively pruning the vertices in $V(G) \setminus U$. It is easy to see that the order of pruning does not matter.
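Pruning translates directly into code. A minimal Python sketch (STG encoded as a root plus a children dictionary; the representation is our own choice), following the three cases of the definition:

```python
def prune(root, children, v):
    """Prune vertex v (a leaf of G, so v has at most one child in T).
    Mutates `children`; returns the (possibly new) root."""
    kids = children.pop(v, [])
    assert len(kids) <= 1
    c = kids[0] if kids else None
    if v == root:
        return c                      # the child (if any) becomes the root
    p = next(u for u, cs in children.items() if v in cs)
    if c is None:
        children[p] = [x for x in children[p] if x != v]   # just remove v
    else:
        children[p] = [c if x == v else x for x in children[p]]  # splice c in
    return root
```

A projection $T[U]$ is then obtained by pruning the vertices of $V(G) \setminus U$ in any leaf-first order; as noted above, the order does not matter.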
The main utility of projections lies in the following lemma, which essentially states that projections onto $U$ are only affected by rotations between nodes in $U$.
\begin{lemma}[{\cite{CardinalEtAl2018b}}]\label{p:proj}
Let $T$ be a search tree on a tree $G$, let $U \subseteq V(G)$ such that $G[U]$ is connected, let $(x, y)$ be an edge of $T$, and let $T'$ be the tree obtained by rotating $(x,y)$. If $x, y \in U$, then $T'[U]$ is the STG obtained when rotating $(x,y)$ in $T[U]$. Otherwise, $T'[U] = T[U]$.
\end{lemma}
\section{Search trees on caterpillars}\label{sec:structure}
Let $n \in \mathbb{N}_+$ and $m_1, m_2, \dots, m_n \in \mathbb{N}_0$, and write $m = \sum_{i=1}^n m_i$. We define the caterpillar $C(m_1, m_2, \dots, m_n)$ with $n$ spine vertices and $m$ leg vertices as follows. Its spine consists of the vertices $s_1, s_2, \dots, s_n$, in that order. Additionally, for each $i \in [n]$ and $j \in [m_i]$, there is a leg vertex $\ell_{i,j}$ that is adjacent to $s_i$. Clearly, every caterpillar can be constructed this way.
Let $T$ be a search tree on $G = C(m_1, m_2, \dots, m_n)$, and let $x \in V(T)$. We call $x$ a \emph{leg node} if it corresponds to a leg vertex, and a \emph{spine node} if it corresponds to a spine vertex. We denote nodes in $T$ in the same way we denote vertices in $G$, i.e., we write $\ell_{i,j}$ for leg nodes and $s_i$ for spine nodes.
We call a leg node \emph{bound} if it has no children, and \emph{free} otherwise.
\begin{observation}\label{p:stc-structure}
Let $T$ be a search tree on $C(m_1, m_2, \dots, m_n)$. Consider a leg node $\ell_{i,j}$. If $\ell_{i,j}$ has no children, then $s_i$ is its parent. Otherwise, $\ell_{i,j}$ has exactly one child, and $s_i$ is a descendant of $\ell_{i,j}$.
\end{observation}
Define the \emph{spine BST} $\mathrm{bst}(T)$ as the projection of $T$ onto the spine vertices of $G$ (see \cref{fig:stcs} for an example). Since the spine vertices form a path, $\mathrm{bst}(T)$ indeed corresponds to a binary search tree. By \cref{p:proj}, each rotation between two spine nodes in $T$ corresponds to a BST rotation in $\mathrm{bst}(T)$. However, the converse is clearly not true in general, since two neighboring nodes $u, v$ in $\mathrm{bst}(T)$ might have leg nodes between them in $T$, in which case a rotation of $u,v$ in $\mathrm{bst}(T)$ cannot be applied to $T$. Call an edge $(p,c)$ of $\mathrm{bst}(T)$ \emph{light} if $(p,c)$ is also an edge in $T$, and \emph{heavy} otherwise. Essentially, as long as we restrict ourselves to light edges, we can apply BST restructuring algorithms to $T$. This will be useful to prove our upper bound. We further observe that rotations between spine nodes do not affect the parents of leg nodes.
\begin{observation}\label{p:spine-rot}
Let $T$ be a search tree on a caterpillar, let $T'$ arise from a rotation between spine nodes in $T$, and let $\ell$ be a leg node. If $\ell$ is the root of $T$, then $\ell$ is also the root of $T'$. Otherwise, the parent of $\ell$ in $T'$ is the same as in $T$.
\end{observation}
\begin{figure}
\newcommand{6mm}{6mm}
\newcommand{5mm}{5mm}
\newcommand{3mm}{3mm}
\centering
\begin{tikzpicture}[xscale=0.5,yscale=0.7]
\small
\node[vertex] (s1) at (0,0) {};
\node[vertex] (s2) at (0,-1) {};
\node[vertex] (s3) at (0,-2) {};
\node[vertex] (s4) at (0,-3) {};
\node[vertex] (s5) at (0,-4) {};
\node[left] at (s1) {\strut$s_1$};
\node[left] at (s2) {\strut$s_2$};
\node[left] at (s3) {\strut$s_3$};
\node[left] at (s4) {\strut$s_4$};
\node[left] at (s5) {\strut$s_5$};
\draw (s1) -- (s2) -- (s3) -- (s4) -- (s5);
\node[leg] (l11) at (1,0.3) {};
\node[leg] (l12) at (1,-0.3) {};
\node[leg] (l31) at (1,-2) {};
\node[leg] (l41) at (1,-3) {};
\node[leg] (l51) at (1,-4+0.3) {};
\node[leg] (l52) at (1,-4-0.3) {};
\node[right] at (l11) {\strut$\ell_{1,1}$};
\node[right] at (l12) {\strut$\ell_{1,2}$};
\node[right] at (l31) {\strut$\ell_{3,1}$};
\node[right] at (l41) {\strut$\ell_{4,1}$};
\node[right] at (l51) {\strut$\ell_{5,1}$};
\node[right] at (l52) {\strut$\ell_{5,2}$};
\draw (l11) -- (s1) -- (l12);
\draw (l31) -- (s3);
\draw (l41) -- (s4);
\draw (l51) -- (s5) -- (l52);
\end{tikzpicture}
\hspace{3mm}
\begin{tikzpicture}[
sibling distance = 6mm, level distance = 5mm,
level 2/.style={sibling distance=2*6mm},
level 3/.style={sibling distance=6mm}
]
\small
\node[leg] (l31) {}
child { node[vertex] (s2) {}
child { node[leg] (l11) {}
child { node[leg] (l12) {}
child { node[vertex] (s1) {} }
}
}
child { node[vertex] (s4) {}
child { node[vertex] (s3) {} }
child { node[leg] (l41) {} }
child { node[leg] (l51) {}
child { node[vertex] (s5) {}
child { node[leg] (l52) {} }
}
}
}
};
\node[left] at (s1) {\strut$s_1$};
\node[right] at (s2) {\strut$s_2$};
\node[below=-1mm] at (s3) {\strut$s_3$};
\node[right] at (s4) {\strut$s_4$};
\node[right] at (s5) {\strut$s_5$};
\node[left] at (l11) {\strut$\ell_{1,1}$};
\node[left] at (l12) {\strut$\ell_{1,2}$};
\node[right] at (l31) {\strut$\ell_{3,1}$};
\node[below=-1mm] at (l41) {\strut$\ell_{4,1}$};
\node[right] at (l51) {\strut$\ell_{5,1}$};
\node[right] at (l52) {\strut$\ell_{5,2}$};
\end{tikzpicture}
\hspace{3mm}
\begin{tikzpicture}[sibling distance = 6mm, level distance = 5mm]
\small
\node[vertex] (s2) {}
child { node[vertex] (s1) {} }
child { node[vertex] (s4) {}
child { node[vertex] (s3) {} }
child { node[vertex] (s5) {} }
};
\node[left] at (s1) {\strut$s_1$};
\node[right] at (s2) {\strut$s_2$};
\node[left] at (s3) {\strut$s_3$};
\node[right] at (s4) {\strut$s_4$};
\node[right] at (s5) {\strut$s_5$};
\node at (0, -2.5) {};
\end{tikzpicture}
\hspace{3mm}
\begin{tikzpicture}[sibling distance = 6mm,
level 1/.style={level distance = 3.5mm},
level 7/.style={level distance = 5mm}
]
\small
\node[leg] (l51) {}
child { node[leg] (l41) {}
child { node[leg] (l11) {}
child { node[leg] (l52) {}
child { node[leg] (l12) {}
child { node[leg] (l31) {}
child { node[vertex] (s2) {}
child { node[vertex] (s1) {} }
child { node[vertex] (s4) {}
child { node[vertex] (s3) {} }
child { node[vertex] (s5) {} }
}
}
}
}
}
}
};
\node[left] at (s1) {\strut$s_1$};
\node[right] at (s2) {\strut$s_2$};
\node[below=-1mm] at (s3) {\strut$s_3$};
\node[right] at (s4) {\strut$s_4$};
\node[right] at (s5) {\strut$s_5$};
\node[right] at (l11) {\strut$\ell_{1,1}$};
\node[right] at (l12) {\strut$\ell_{1,2}$};
\node[right] at (l31) {\strut$\ell_{3,1}$};
\node[right] at (l41) {\strut$\ell_{4,1}$};
\node[right] at (l51) {\strut$\ell_{5,1}$};
\node[right] at (l52) {\strut$\ell_{5,2}$};
\end{tikzpicture}
\hspace{3mm}
\begin{tikzpicture}[sibling distance = 6mm, level distance = 5mm,
level 1/.style={sibling distance=2*6mm},
level 2/.style={sibling distance=6mm}
]
\small
\node[vertex] (s2) {}
child { node[vertex] (s1) {}
child { node[leg] (l11) {} }
child { node[leg] (l12) {} }
}
child { node[vertex] (s4) {}
child { node[vertex] (s3) {}
child { node[leg] (l31) {} }
}
child { node[leg] (l41) {} }
child { node[vertex] (s5) {}
child { node[leg] (l51) {} }
child { node[leg] (l52) {} }
}
};
\node[left] at (s1) {\strut$s_1$};
\node[right] at (s2) {\strut$s_2$};
\node[above] at (s3) {\strut$s_3$};
\node[right] at (s4) {\strut$s_4$};
\node[right] at (s5) {\strut$s_5$};
\node[below=-1mm] at (l11) {\strut$\ell_{1,1}$};
\node[below=-1mm] at (l12) {\strut$\ell_{1,2}$};
\node[below=-1mm] at (l31) {\strut$\ell_{3,1}$};
\node[below=-1mm] at (l41) {\strut$\ell_{4,1}$};
\node[below=-1mm] at (l51) {\strut$\ell_{5,1}$};
\node[below=-1mm] at (l52) {\strut$\ell_{5,2}$};
\end{tikzpicture}
\caption{\emph{(left)} The caterpillar $G = C(2,0,1,1,2)$.
\emph{(center left)}~An STG $T$ on $G$.
\emph{(center)}~The spine BST $S = \mathrm{bst}(T)$.
\emph{(center right)}~The STG $A( S, \pi)$ with $\pi = (\ell_{3,1}, \ell_{1,2}, \ell_{5,2}, \ell_{1,1}, \ell_{4,1}, \ell_{5,1})$.
\emph{(right)}~The STG $B(S)$.}\label{fig:stcs}
\end{figure}
\paragraph{Special STGs.}
Before proceeding with the proofs, we define two useful kinds of STGs where the spine BST has only light edges. Let $G = C(m_1, m_2, \dots, m_n)$ be a caterpillar, let $S$ be a BST on the spine nodes of $G$, and let $\pi$ be an ordering of the leg nodes of $G$. Define $A(S, \pi)$ to be the unique search tree on $G$ such that $\mathrm{bst}(A(S, \pi)) = S$, each leg node is above each spine node (i.e., the leg nodes form a path at the top of the tree), and the order of the leg nodes from \emph{bottom to top} is $\pi$. Define $B(S)$ to be the unique search tree on $G$ such that all leg nodes are bound and $\mathrm{bst}(B(S)) = S$. Clearly, every search tree $T$ on $G$ without free leg nodes is equal to $B(\mathrm{bst}(T))$. \Cref{fig:stcs} shows examples.
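Both families are easy to construct explicitly. A minimal Python sketch (spine nodes numbered $1,\dots,n$, legs encoded as tuples `('l', i, j)`, BSTs given as a root plus a children dictionary; the encoding is our own):

```python
def B(spine_root, spine_children, ms):
    """B(S): every leg l_{i,j} is bound, i.e. a childless child of s_i."""
    children = {i: list(spine_children.get(i, [])) for i in range(1, len(ms) + 1)}
    for i, mi in enumerate(ms, start=1):
        children[i] += [('l', i, j) for j in range(1, mi + 1)]
    return spine_root, children

def A(spine_root, spine_children, ms, pi):
    """A(S, pi): the legs form a path above S, ordered bottom-to-top by pi."""
    children = {i: list(spine_children.get(i, [])) for i in range(1, len(ms) + 1)}
    below = spine_root
    for leg in pi:                   # pi[0] is the bottom-most leg
        children[leg] = [below]
        below = leg
    return below, children           # the last leg in pi becomes the root
```

In both constructions the spine BST is exactly $S$, and in $B(S)$ every leg is bound by construction.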
\section{Upper bound}\label{sec:upper}
Fix a caterpillar $G = C(m_1, m_2, \dots, m_n)$. We first consider only search trees without free leg nodes.
\begin{lemma}\label{p:upper-all-bound}
Let $T_1, T_2$ be STGs on $G$ without free leg nodes. Then, $T_1$ can be transformed into $T_2$ with $\mathcal{O}(n)$ rotations.
\end{lemma}
\begin{proof}
Let $S_1 = \mathrm{bst}(T_1)$ and $S_2 = \mathrm{bst}(T_2)$. As stated in the introduction, there is a sequence of at most $\mathcal{O}(n)$ rotations that transforms $S_1$ into $S_2$. We simply apply these rotations to $T_1$. For this to be well-defined, we need to show that we never (attempt to) rotate a heavy edge. Since $T_1$ has no free leg nodes, all edges in $S_1$ are light, so the first rotation goes through. Furthermore, a rotation between two spine nodes can never change a leg node from bound to free (or vice versa). Thus, by induction, after each rotation, all leg nodes are still bound, so we can apply the next rotation.
\end{proof}
The above lemma provides us with a ``core'' of the caterpillar associahedron with linear diameter. In the following, we show that the rotation distance from any search tree to \emph{some} STG without free leg nodes is $\mathcal{O}(n + m H'(G))$. By the triangle inequality, this means that the rotation distance between any two STGs is at most $2 \cdot \mathcal{O}(n + m H'(G)) + \mathcal{O}(n) = \mathcal{O}(n + m H'(G))$, and thus we have the upper bound of \cref{p:main}.
We first show how to reduce the problem to the case that $T = A(S, \pi)$ for some BST $S$ and some leg node ordering $\pi$. For this, the following lemma is useful.
\begin{lemma}\label{p:bst-restr-rot}
Let $T$ be a BST. There exists a sequence of $\mathcal{O}(n)$ rotations on $T$ such that
\begin{enumerate}[(i)]
\itemsep0pt
\item every rotation involves only nodes at depth at most 3; and
\item every spine node becomes the root of $\mathrm{bst}(T)$ at some point.
\end{enumerate}
\end{lemma}
\begin{proof}
We start by rotating $T$ into the \emph{right path}, where no node has a left child. Cleary~\cite{Cleary2002} showed that this is possible using $\mathcal{O}(n)$ rotations at the root or its right child. Then, repeatedly rotate the root with its right child, until the root has no right child. This way, each node becomes the root at least once. Clearly, the rotations used only involve the root, its children, and its grandchildren.
\end{proof}
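Both phases of this argument are easy to simulate. The sketch below is a hypothetical illustration with our own \texttt{Node} encoding: phase 1 uses the standard tree-to-vine routine in place of Cleary's root-restricted rotations (so it matches the $\mathcal{O}(n)$ rotation count, but not property (i) verbatim), and phase 2 rotates the root with its right child until every node has appeared at the root.

```python
# Illustration only (assumed Node encoding, not taken from the paper).

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(u):
    """Rotate the edge between u and its left child; returns the new root."""
    v = u.left
    u.left, v.right = v.right, u
    return v

def rotate_left(u):
    """Rotate the edge between u and its right child; returns the new root."""
    v = u.right
    u.right, v.left = v.left, u
    return v

def to_right_path(root):
    """Phase 1: flatten the tree into the right path (no left children).
    Returns (new root, number of rotations used)."""
    rotations, attach, cur = 0, None, root
    while cur is not None:
        if cur.left is not None:
            cur = rotate_right(cur)
            rotations += 1
            if attach is None:
                root = cur
            else:
                attach.right = cur
        else:
            attach, cur = cur, cur.right
    return root, rotations

def root_order(root):
    """Phase 2: left-rotate at the root until it has no right child.
    Returns the keys in the order in which they appear at the root."""
    order = []
    while root is not None:
        order.append(root.key)
        root = rotate_left(root) if root.right is not None else None
    return order
```

On a right path with $n$ nodes, phase 2 performs $n-1$ rotations and visits the keys in increasing order.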
\begin{lemma}\label{p:upper-reduction}
Let $T$ be an arbitrary search tree on $G$. Then $T$ can be transformed into some $A(S,\pi)$ using $2m + \mathcal{O}(n)$ rotations.
\end{lemma}
\begin{proof}
An STG has the form $A(S,\pi)$ if and only if each leg node has no spine ancestor. The basic idea of the proof is to apply rotations between spine nodes to eventually bring each spine node to the root (of the spine BST). At any point, if the current root of the spine BST has a leg node as a child, rotate it with that leg node. We refer to the block of such rotations performed between two consecutive spine rotations as a $\texttt{cleanup}$ step. By \cref{p:spine-rot}, every leg node is transported to the top this way.
The problem with this approach is that we can only rotate light edges in the spine BST, and the only edges that we know must be light are the edges between the BST root and its children. However, if we extend our $\texttt{cleanup}$ step to consider leg nodes that are somewhat deeper in the tree (that is, nodes with two spine ancestors instead of just one), we guarantee that all BST edges \emph{near} the BST root are light. This allows us to apply \cref{p:bst-restr-rot} to bring each spine node to the root. We now describe the sequence of rotations more formally, starting with the $\texttt{cleanup}$ step.
Let $T'$ be the current STG. Let $\texttt{cleanup}(T')$ be the following sequence of rotations: As long as there is a leg node $\ell$ with a spine parent $p$ and at most two spine ancestors, rotate $(p,\ell)$. Arbitrarily resolve conflicts. Let $T''$ be the STG after applying $\texttt{cleanup}$. Clearly, no spine node $s$ with $\mathrm{depth}_{\mathrm{bst}(T'')}(s) \le 2$ has a leg node child, so all edges in $\mathrm{bst}(T'')$ involving the root or its children are light. Moreover, each leg node that is touched by $\texttt{cleanup}$ is rotated at most twice, and afterwards has no spine node ancestors.
Let $\mathcal{X}$ be the sequence of rotations obtained by applying \cref{p:bst-restr-rot} to $\mathrm{bst}(T)$. We first apply $\texttt{cleanup}$ to $T$, then apply the spine rotations in $\mathcal{X}$, with a $\texttt{cleanup}$ step after each spine rotation. Since each rotation in $\mathcal{X}$ involves either the root of the spine BST or one of its children, the rotation is applied to a light edge. Thus, the whole sequence can indeed be applied to $T$.
The number of rotations is at most $2m + \mathcal{O}(n)$. Indeed, since no rotation between spine nodes can change the parent of a leg node (see \cref{p:spine-rot}), each leg node is only touched in a single $\texttt{cleanup}$ step (where it is rotated above all spine nodes), and only twice in that $\texttt{cleanup}$ step. The length of $\mathcal{X}$ is $\mathcal{O}(n)$ by \cref{p:bst-restr-rot}.
Finally, we show that the final tree is indeed of the form $A(S, \pi)$. For this, it suffices to show that each leg node that has at least one spine ancestor in $T$ is touched in some $\texttt{cleanup}$ step. Suppose that such a leg node $\ell$ is not involved in a $\texttt{cleanup}$ step. Without loss of generality, let the parent $p$ of $\ell$ be a spine node. By \cref{p:spine-rot}, since $\ell$ is not touched in a $\texttt{cleanup}$ step and we never rotate between leg nodes, $p$ stays the parent of $\ell$ throughout the sequence of rotations. However, then $p$ will be the root of $\mathrm{bst}(T)$ at some point, so $\ell$ is rotated upwards by the next $\texttt{cleanup}$ step, a contradiction.
\end{proof}
It remains to show how to transform $A(S,\pi)$ into an STG without free leg nodes.
\begin{lemma}\label{p:A_S_pi}
Let $T = A(S,\pi)$. Then there is a sequence of $\mathcal{O}(n + H'(G) \cdot m)$ rotations that transforms $T$ into a search tree without free leg nodes.
\end{lemma}
\begin{proof}
Since every edge in $\mathrm{bst}(T)$ is light, we can first transform $\mathrm{bst}(T)$ into an arbitrary BST $S'$ using $\mathcal{O}(n)$ rotations. We will later specify $S'$. Let $T' = A(S', \pi)$ be the resulting STG. Now pick the lowest leg node $\ell_{i,j}$ in $T'$, and rotate it down until it is bound (i.e., a child of $s_i$). Clearly, this requires $\mathrm{depth}_{\mathrm{bst}(T')}(s_i) = \mathrm{depth}_{S'}(s_i)$ rotations. Repeat this until all leg nodes are bound.
The total number of rotations is
\begin{align*}
\mathcal{O}(n) + \sum_{i=1}^n m_i \cdot \mathrm{depth}_{S'}(s_i).
\end{align*}
The sum is precisely $\mathrm{cost}(S', m_1, m_2, \dots, m_n)$, the cost of \emph{accessing} each $i \in [n]$ with frequency $m_i$ in the static BST $S'$. As such, if we choose $S'$ to be the optimal static BST for these frequencies, we need $\mathcal{O}(n) + \text{OPT-ST}(m_1, m_2, \dots, m_n) \le \mathcal{O}(n) + 2 H'(G) \cdot m$ rotations, by \cref{p:opt-st-entropy}.
\end{proof}
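The quantity $\mathrm{cost}(S', m_1, m_2, \dots, m_n)$ appearing in this proof is straightforward to compute. Here is a minimal sketch, assuming a dictionary encoding of $S'$ of our own choosing (each node maps to its pair of children) and the convention that the root has depth 1, matching the rotation count in the proof.

```python
# Sketch (assumed encoding): child[u] = (left, right), with None for a
# missing child; freq[i-1] is the access frequency m_i of node i.

def static_cost(child, root, freq):
    """Returns the sum over i of m_i * depth(i), where depth(root) = 1."""
    total, stack = 0, [(root, 1)]
    while stack:
        u, d = stack.pop()
        total += freq[u - 1] * d
        for c in child[u]:
            if c is not None:
                stack.append((c, d + 1))
    return total
```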
\Cref{p:upper-all-bound,p:upper-reduction,p:A_S_pi} together imply the upper bound in \cref{p:main}.
In the proof of \cref{p:A_S_pi}, we essentially treat the leg nodes as queries to our optimal static BST, where a leg node $\ell_{i,j}$ queries the spine node $s_i$. Rotating the leg nodes down is akin to moving down the pointer in the static BST model. Here, the pointer always points at the parent of the one leg node that has a spine node parent.
Observe that we can similarly implement the dynamic BST model as rotations transforming $A(S, \pi)$ into a search tree without free leg nodes, simply by allowing spine node rotations (BST rotations) in between leg node rotations (pointer moves). If the dynamic BST algorithm wants to rotate the single heavy edge in the spine BST of our STG, we have to move the leg node out of the way (and back afterwards), but this only adds a constant-factor overhead. Thus we obtain a generalization of the dynamic BST model, where we can start processing queries before finishing previous ones (although the way ``pointers'' work in this model is not very intuitive).
Let $\sigma = \sigma(\pi)$ be the sequence of spine nodes obtained by replacing every leg node in $\pi$ by its adjacent spine node.
Our observations imply that transforming $A(S,\pi)$ into an STG without free leg nodes requires no more than $\mathcal{O}(\mathrm{OPT}(S, \sigma))$ rotations, and \cref{p:A_S_pi} essentially uses the fact that $\mathrm{OPT}(S, \sigma) \le \text{OPT-ST}(m_1, m_2, \dots, m_n)$. In the next section, we show that \emph{Wilber's first lower bound}~\cite{Wilber1989} for $\mathrm{OPT}(S, \sigma)$ also holds for our generalized model.
\section{Lower bound}\label{sec:lower}
We start by defining a variant of Wilber's first lower bound and proving that it is equal to the Shannon entropy of the query distribution in the worst case (up to a constant factor). Then, we show that it also bounds the rotation distance between $A(S, \sigma)$ and $B(S)$ if $\sigma$ is the worst-case ordering and $S$ is a suitable search tree.
\subsection{Wilber's first lower bound for binary search trees}\label{sec:wilber_bst}
Let $S$ be a binary search tree on $n$ nodes, let $\sigma = (x_1, x_2, \dots, x_m)$ be a sequence of queries, and let $u$ be a node of $S$. Then we define $\lambda(S, u, \sigma)$ as follows. If $u$ has at most one child, then $\lambda(S,u,\sigma) = 0$. Otherwise, let $v, w$ be the children of $u$ and write $A = V(S_u) = \{u\} \cup V(S_v) \cup V(S_w)$. Let $\sigma\restr{A}$ be the sequence obtained from $\sigma$ by removing all elements not in $A$. Now $\lambda(S,u,\sigma)$ is defined as the number of times the sequence $\sigma\restr{A}$ switches between an element of $V(S_v)$, an element of $V(S_w)$, and $u$. More formally, $\lambda(S,u,\sigma)$ is the number of pairs of adjacent values $x,y$ in $\sigma$ such that neither $x,y \in V(S_v)$, nor $x,y \in V(S_w)$, nor $x = y = u$. Let $\Lambda(S,\sigma) = \sum_{u \in V(S)} \lambda(S,u,\sigma)$.
For convenience, define $\lambda'(S,u,\sigma)$ as $\lambda(S,u,\sigma)$ plus the number of occurrences of $u$ in $\sigma$, and let $\Lambda'(S,\sigma) = \sum_{u \in V(S)} \lambda'(S,u,\sigma) = \Lambda(S,\sigma) + m$.
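The quantities $\lambda$ and $\Lambda'$ can be computed directly from the definition. The following sketch assumes a dictionary encoding \texttt{child[u] = (left, right)} of the BST; the encoding and function names are ours, not the paper's.

```python
# Sketch (assumed encoding): child[u] = (left, right), None for no child.

def subtree(child, u):
    """Returns the set of keys in the subtree rooted at u."""
    if u is None:
        return set()
    l, r = child[u]
    return {u} | subtree(child, l) | subtree(child, r)

def lam(child, u, sigma):
    """lambda(S, u, sigma): block switches of sigma restricted to S_u."""
    l, r = child[u]
    if l is None or r is None:
        return 0  # u has at most one child
    L, R = subtree(child, l), subtree(child, r)
    restr = [x for x in sigma if x in L or x in R or x == u]
    same = lambda x, y: (x in L and y in L) or (x in R and y in R) or x == y == u
    return sum(1 for x, y in zip(restr, restr[1:]) if not same(x, y))

def Lambda_prime(child, sigma):
    """Lambda'(S, sigma) = sum of lambda over all nodes, plus |sigma|."""
    return sum(lam(child, u, sigma) for u in child) + len(sigma)
```

For example, on the BST with root 2 and children 1, 3, the sequence $(1,3,1,3)$ alternates three times at the root.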
It is known that $\mathrm{OPT}(S, \sigma) \in \Omega(\Lambda'(S, \sigma))$. This is not tight in general~\cite{ChalermsookEtAl2020,LecomteWeinstein2020}.
Still, Wilber~\cite{Wilber1989} showed that if $\sigma$ is the \emph{bit reversal permutation}, then $\Lambda'(S, \sigma) \in \Theta(n \log n)$ for all $S$. This bound is already matched by a balanced static tree, so, on that sequence, Wilber's bound is tight and dynamic BSTs do not perform better than balanced trees. We now generalize this result to arbitrary distributions.
\begin{lemma}\label{p:wilber-static-opt}
Let $n \in \mathbb{N}_+$, let $m_1, m_2, \dots, m_n \in \mathbb{N}_0$, and let $m = \sum_{i=1}^n m_i$.
Then there is a BST $S$ on $[n]$ and a sequence $\sigma$ of length $m$ where each $i \in [n]$ occurs precisely $m_i$ times, such that $\Lambda'(S,\sigma) \ge \frac{1}{2}H(m_1, m_2, \dots, m_n)$.
\end{lemma}
\begin{proof}
We recursively construct a BST $S_{p,q}$ on the interval $[p,q]$, where $1 \le p \le q \le n$, and in the end set $S = S_{1,n}$. The construction is essentially the approximately optimal static BST construction due to Mehlhorn~\cite{Mehlhorn1975}.
Fix $p$ and $q$. Let $k = \sum_{j=p}^q m_j$ be the total frequency of the nodes in $S_{p,q}$, and, for each $i$ with $p \le i \le q$, let $a_i = \sum_{j=p}^{i-1} m_j$ and $b_i = \sum_{j=i+1}^q m_j$.
We claim that there exists an $i \in [p,q]$ such that $m_i + \min(a_i,b_i) \ge \frac{k}{2}$. Suppose not. Then, since $m_i + a_i + b_i = k$, for each $i$ we have either (1)~$m_i + a_i < \frac{k}{2} < b_i$ or (2)~$m_i + b_i < \frac{k}{2} < a_i$. Clearly, (2) cannot hold for $i = p$, and likewise (1) cannot hold for $i = q$. Let $i$ be the highest index where (1) holds. Then $i < q$, and $m_i + a_i < b_i = m_{i+1} + b_{i+1} < a_{i+1} = m_i + a_i$, a contradiction.
Choose $i \in [p,q]$ such that $m_i + \min(a_i,b_i) \ge \frac{k}{2}$. Make $i$ the root of $S_{p,q}$, and attach the recursively constructed subtrees $S_{p,i-1}$ and $S_{i+1,q}$ as the left and right child to it (for $p' > q'$, we let $S_{p',q'}$ be the empty BST).
Let $c(p,q) = \sum_{j=p}^q m_j \cdot \mathrm{depth}_{S_{p,q}}(j)$. Intuitively, $c(p,q)$ is the cost of accessing the relevant nodes within $S_{p,q}$. We now recursively construct a sequence $\sigma_{p,q}$ such that $c(p,q) \le 2 \Lambda'(S_{p,q}, \sigma_{p,q})$.
First, if $p = q$, then let $\sigma_{p,q}$ simply consist of $m_p$ times the element $p$. Clearly, $c(p,q) = m_p = \Lambda'(S_{p,q}, \sigma_{p,q})$. Otherwise, let $i$ be the root of $S_{p,q}$. We construct $\sigma_{p,q}$ by combining $\sigma_{p,i-1}$, $\sigma_{i+1,q}$, and the $m_i$ occurrences of the element $i$ as follows. Start with the $m_i$ occurrences of $i$, then alternate between $\sigma_{p,i-1}$ and $\sigma_{i+1,q}$ for as long as possible, and finally append the remaining elements from either $\sigma_{p,i-1}$ or $\sigma_{i+1,q}$. Since $\sigma_{p,i-1}$ has length $a_i$, and $\sigma_{i+1,q}$ has length $b_i$, we have $\lambda'(S_{p,q}, i, \sigma_{p,q}) \ge m_i + \min(a_i, b_i) \ge \frac{k}{2}$.
Clearly, if $j \in [p,i-1]$, then $\mathrm{depth}_{S_{p,q}}(j) = 1 + \mathrm{depth}_{S_{p,i-1}}(j)$, and similarly for elements $j \in [i+1,q]$ in the right subtree. Thus, by induction,
\begin{align*}
c(p,q) &= m_i + a_i + c(p,i-1) + b_i + c(i+1,q)\\
&\le k + 2\Lambda'(S_{p,i-1}, \sigma_{p,i-1}) + 2\Lambda'(S_{i+1,q}, \sigma_{i+1,q})\\
&\le 2 \lambda'(S_{p,q}, i, \sigma_{p,q}) + 2\Lambda'(S_{p,i-1}, \sigma_{p,i-1}) + 2\Lambda'(S_{i+1,q}, \sigma_{i+1,q}) \le 2\Lambda'(S_{p,q}, \sigma_{p,q}).
\end{align*}
Now let $S = S_{1,n}$ and $\sigma = \sigma_{1,n}$. We have $c(1,n) \le 2 \Lambda'(S,\sigma)$, and by \cref{p:opt-st-entropy}, we know that $c(1,n) \ge \text{OPT-ST}(m_1, m_2, \dots, m_n) \ge H(m_1, m_2, \dots, m_n)$. This concludes the proof.
\end{proof}
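The recursive construction in this proof can be sketched in a few lines. The encoding is ours; the sketch chooses the root $i$ so that $m_i + \min(a_i, b_i)$ is at least half the total frequency of the interval, following Mehlhorn's weight-balancing rule, and interleaves the two recursively built subsequences.

```python
# Sketch (assumed encoding): freq[i-1] = m_i; builds S_{p,q} into the dict
# child and returns (root of S_{p,q}, sigma_{p,q}).

def build(freq, p, q, child):
    if p > q:
        return None, []
    total = sum(freq[p - 1:q])  # total frequency of the interval [p, q]
    # choose a root i with m_i + min(a_i, b_i) >= total / 2
    for i in range(p, q + 1):
        a = sum(freq[p - 1:i - 1])  # a_i = m_p + ... + m_{i-1}
        b = sum(freq[i:q])          # b_i = m_{i+1} + ... + m_q
        if freq[i - 1] + min(a, b) >= total / 2:
            break
    left, sl = build(freq, p, i - 1, child)
    right, sr = build(freq, i + 1, q, child)
    child[i] = (left, right)
    # sigma_{p,q}: m_i copies of i, then alternate the two subsequences,
    # then append whatever remains of the longer one
    sigma = [i] * freq[i - 1]
    for x, y in zip(sl, sr):
        sigma += [x, y]
    longer = sl if len(sl) > len(sr) else sr
    sigma += longer[min(len(sl), len(sr)):]
    return i, sigma
```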
\subsection{Wilber's lower bound for rotation distance}
We now show that the rotation distance between $A(S, \pi)$ and $B(S)$ is at least $\frac{1}{2} \Lambda'(S, \sigma(\pi))$, where $\sigma(\pi)$ is defined as in \cref{sec:upper}, by replacing the leg nodes in $\pi$ with their adjacent spine nodes. Our proof is based on~\cite[Lemma 8]{CardinalEtAl2018b}.
\begin{lemma}\label{p:wilber-stg}
Let $G = C(m_1, m_2, \dots, m_n)$ be a caterpillar, let $S$ be a BST on the spine nodes of $G$, and let $\pi$ be an ordering of the leg nodes of $G$.
Then, transforming $A(S,\pi)$ into $B(S)$ requires at least $\frac{1}{2}\Lambda'(S, \sigma(\pi))$ rotations.
\end{lemma}
\begin{proof}
Write $T = A(S, \pi)$, $T' = B(S)$, $\sigma = \sigma(\pi)$, and let $r$ be the root of $S$. Let the set $D$ consist of $r$ and its adjacent legs, i.e., if $r = s_i$, then $D = \{s_i\} \cup \{\ell_{i,j} \mid j \in [m_i]\}$. Suppose $r$ has two children $u$ and $v$. Then $G \setminus D$ has two connected components, one consisting of the spine nodes $V(S_u)$ and all adjacent legs, and the other consisting of $V(S_v)$ and all adjacent legs. Call the former $E$ and the latter $F$. If $r$ has only one child $u$, let $E$ consist of $V(S_u)$ and all adjacent legs, and let $F = \emptyset$. If $r$ has no children, let $E = F = \emptyset$. Note that $D,E,F$ form a partition of $V(G)$.
We first consider the rotations within each of the three sets. By \cref{p:proj}, we can simply sum up the number of rotations required to transform $T[D]$, $T[E]$, and $T[F]$ into $T'[D]$, $T'[E]$, $T'[F]$, respectively.
$T[D]$ consists of the spine node $r$ and $m_r$ free leg nodes, and $T'[D]$ consists of $r$ and $m_r$ bound leg nodes. Thus, we need $m_r$ rotations to make all leg nodes bound.
If $E \neq \emptyset$, observe that $T[E] = A(S_u, \pi\restr{E})$ and $T'[E] = B(S_u)$, so we need at least $\frac{1}{2}\Lambda'(S_u, \sigma\restr{E}) = \frac{1}{2}\Lambda'(S_u, \sigma)$ rotations by induction. If $F \neq \emptyset$, we similarly get a lower bound of $\frac{1}{2}\Lambda'(S_v, \sigma)$.
We now show that there are at least $\frac{1}{2}\lambda(S,r,\sigma)$ other rotations. If $\lambda(S,r,\sigma) = 0$, this is trivially true, so suppose $\lambda(S,r,\sigma) > 0$ and thus $E \neq \emptyset$.
Define the \emph{alternation number} of a path $P$ in a search tree on $G$ as the number of edges $(x,y)$ in $P$ such that $x$ and $y$ are in different parts of the partition $D,E,F$. Define the alternation number $\mathrm{alt}(T^*)$ of a search tree $T^*$ on $G$ as the maximum alternation number among all paths starting at the root in $T^*$. Observe that $\mathrm{alt}(T') = 1$, and that $\mathrm{alt}(T) \ge \lambda(S,r,\sigma)+1$, since the leg nodes in $T$ have $\lambda(S,r,\sigma)$ alternations by definition, and there is one more alternation from $r$ to $E \neq \emptyset$.
We now show how rotations can affect the alternation number. Consider a rotation between the nodes $x$ and $y$, and a node $z$. The path from the root to $z$ before and after the rotation may only differ if it contains $x$ or $y$ (or both), and only in one of the following ways:
\begin{itemize}
\itemsep0pt
\item $x$ is inserted before $y$, or $y$ is inserted before $x$.
\item $x$ is deleted before $y$, or $y$ is deleted before $x$.
\item $x$ and $y$ are swapped (and are neighbors).
\end{itemize}
It is easy to see that if $x,y \in D$, or $x,y \in E$, or $x,y \in F$, then the rotation $(x,y)$ does not affect the alternation number; otherwise, the alternation number can change by at most two. This means that we need at least $\frac{1}{2}|\mathrm{alt}(T) - \mathrm{alt}(T')| \ge \frac{1}{2}\lambda(S,r,\sigma)$ rotations not within one of the sets $D$, $E$, or $F$. The total number of rotations is thus at least (setting $\Lambda'(S',\sigma) = 0$ if $S'$ is empty):
\begin{align*}
m_r + \frac{1}{2}\Lambda'(S_u, \sigma) + \frac{1}{2}\Lambda'(S_v, \sigma) + \frac{1}{2}\lambda(S,r,\sigma) \ge \frac{1}{2}\Lambda'(S, \sigma).\tag*{\qedhere}
\end{align*}
\end{proof}
\Cref{p:wilber-static-opt,p:wilber-stg} together imply that $\delta(\mathcal{A}(G)) \ge \frac{1}{4} H(G) \cdot m$. As mentioned in the introduction, Manneville and Pilaud~\cite{MannevillePilaud2015} proved that $\delta(\mathcal{A}(G)) \in\Omega(m+n)$. This concludes the proof of the lower bound.
\section{Conclusion}
In this paper, we determined the diameter of each caterpillar associahedron up to a constant factor, revealing a surprising connection to searching in static and dynamic binary search trees. In particular, transforming $A(S, \pi)$ into $B(S)$ via rotations can be seen as a generalization of serving the access sequence $\sigma(\pi)$ in a dynamic BST. The number of rotations required is between Wilber's first lower bound $\Lambda(S, \sigma(\pi))$ and $\mathrm{OPT}(S, \sigma(\pi))$, raising the question of whether other lower bounds for $\mathrm{OPT}$ hold in our generalized model, or whether it perhaps matches $\Lambda$ or $\mathrm{OPT}$. Results in this direction could give new insight into the dynamic BST model.
\paragraph{Acknowledgment.} The author would like to thank L\'aszl\'o Kozma for helpful discussions and suggestions.
\section{Introduction}
This article gives an analogue of the linear series construction for projective toric orbifolds. We start with a finite collection of line bundles $\mathscr{L}$ and use a quiver of sections, as defined by Craw-Smith \cite{Craw-Smith}, to package the sections of line bundles in $\mathscr{L}$ efficiently. For our ambient space, we introduce moduli stacks $\mathcal{M}_\theta(Q,\div)$ of quiver representation-like objects and produce rational maps $\psi_\theta: \mathcal{X} \dashrightarrow \mathcal{M}_\theta(Q, \div)$. As a further application, we apply our construction to orbifolds $[\AA^n/G]$, where $G$ is a finite abelian subgroup of $\textup{GL}(n,\ensuremath{\Bbbk})$, recovering the orbifold from the corresponding McKay quiver. We then adapt our construction to describe a finite sequence of GIT wall crossings between $[\AA^n/G]$ and $G$-Hilb$(\AA^n)$ for $G \subset \textup{SL}(n,\ensuremath{\Bbbk})$ where $n \leq 3$.
Motivated by Olsson-Starr \cite{Olsson-Starr} among others, Kresch \cite{Kresch} introduced the notion of a projective Deligne-Mumford stack. A Deligne-Mumford stack $\mathcal{X}$ is said to be projective if it has a projective coarse moduli space $X$ and a generating sheaf. One may think of a generating (or $\pi$-very ample) sheaf as a very ample sheaf relative to the morphism $\pi: \mathcal{X} \rightarrow X$, or more loosely, as a sheaf that allows one to lift the projectivity of $X$ to $\mathcal{X}$. Despite the importance of linear series in the theory of algebraic varieties, an appropriate analogue for algebraic stacks does not exist. The naive extension to algebraic stacks is not satisfactory. Indeed, requiring a line bundle to be both very ample on the coarse moduli space and $\pi$-very ample is too restrictive; the stack must be an algebraic space, forcing all stabilizers to be trivial.
One could sidestep this issue by considering sections of more than one line bundle. In the case where $\mathcal{X}$ has cyclic stabilizers, the approach adopted by Abramovich-Hassett \cite{AbramovichHassett} uses sections of tensor powers of a single line bundle $L$ to produce closed immersions $\mathcal{X} \hookrightarrow \mathbb{P}(\bigoplus_{j=n} ^m \Gamma(\mathcal{X}, L^{\otimes j}))$ for some $n,m \in \mathbb{N}$. For $\mathcal{X}$ a toric orbifold, this article gives an alternative stacky analogue to the linear series construction that generalizes the Abramovich-Hassett construction. Our construction puts no constraints on the stabilizers. Moreover, the efficiency of the quiver of sections allows for ambient spaces of dimensions smaller than those appearing in the Abramovich-Hassett construction (cf. Example \ref{p112ex} and Example \ref{AHex1}).
Our construction is not limited to projective stacks. In fact for $G \subset \textup{GL}(n, \ensuremath{\Bbbk})$ a finite abelian group, it recovers the stack quotient $[\AA^n / G]$ from the McKay quiver. Under some constraints on $G$, Craw-Ishii \cite{CrawIshii} show that the McKay quiver allows us to move between projective crepant resolutions of $\AA^n/G$ by a finite sequence of wall crossings. When $G \subset \textup{SL}(n,\ensuremath{\Bbbk})$, one may think of the stack $[\AA^n/ G]$ as a noncommutative crepant resolution of $\AA^n / G$. This is because its coordinate ring, as defined by Chan-Ingalls \cite{ChanIngalls}, is a noncommutative crepant resolution of $\AA^n/G$ in the sense of Van den Bergh \cite{NoncommCrepant}. Therefore it is natural to ask whether one can introduce a quiver theoretic construction that allows us to move between a crepant resolution of $\AA^n/G$, say Nakamura's $G$-Hilb$(\AA^n)$, and the stack $[\AA^n/ G]$ by crossing finitely many walls. A slight adaptation of our construction gives an affirmative answer to this question, putting $G$-Hilb$(\AA^n)$ and $[\AA^n/G]$ on the same footing.
We now summarize the contents of the paper in more detail. Motivated by the natural labelling of the quiver of sections on a toric variety by torus-invariant divisors, we define the notion of a labelled quiver. A {\em labelled quiver} is a quiver $Q$ along with a map of sets $l:Q_1 \rightarrow \mathbb{Z}^d$. Naively, one wishes to define a `representation' of a labelled quiver as a representation of the underlying quiver for which any two paths with the same label are represented by the `same' linear map. Forcing this on linear maps representing two paths that have the same labels but do not share the same head and tail introduces equations that are not homogeneous with respect to the change-of-basis action on the representation space. We bypass this issue by introducing new `homogenizing' parameters to the representation space, which homogenize every equation of paths induced by the labels. A {\em refined representation} of a labelled quiver $(Q,l)$ is a representation of the underlying quiver, together with a choice of nonzero homogenizing parameters. Given a refined representation $W$ and weight $\theta \in K_0(\ensuremath{\Bbbk} Q \text{-mod})^\vee$, we define a notion of $\theta$-stability on $W$ following King \cite{King}. For a weight $\theta \in K_0(\ensuremath{\Bbbk} Q \text{-mod})^\vee$ defined by a character $\chi_\theta$ of $\textup{PGL}(\alpha)$ (the group acting faithfully on the refined representation space), the main result of Section \ref{Refine} relates GIT $\chi_\theta$-stability to $\theta$-stability.
\begin{thm3.8}
Let $\chi_\theta$ be a character of $\textup{GL}(\alpha)$ and $\theta$ the corresponding element of $K_0(\ensuremath{\Bbbk} Q \text{-mod})^\vee$. A refined quiver representation $W$ is $\theta$-semistable (resp. $\theta$-stable) if and only if the corresponding point in $\mathcal{R}(Q, l, \alpha)$ is $\chi_\theta$-semistable (resp. $\chi_\theta$-stable) with respect to action of $\textup{GL}(\alpha)$.
\end{thm3.8}
\noindent This allows us to introduce families of $\theta$-semistable refined representations, which in turn enables us to define moduli stacks $\mathcal{M}_\theta(Q, l, \alpha)$ of refined representations, given some dimension vector $\alpha$. The stacks $\mathcal{M}_\theta(Q, l, \alpha)$ form the ambient stacks in our construction.
Now take $\mathcal{X}$ to be a projective toric orbifold. For a given collection of line bundles $\mathscr{L}$ on $\mathcal{X}$, we use techniques very similar to those in \cite{Craw-Smith} to define a labelled quiver of sections $(Q,\div)$ and give a rational map $\psi_\theta: \mathcal{X} \dashrightarrow \mathcal{M}_\theta(Q,\div, \alpha)$ where $\alpha = (1,\ldots,1)$. As in the classical linear series case, when $\psi_\theta$ is a morphism the tautological line bundles on $\mathcal{M}_\theta(Q,\div, \alpha)$ pull back to recover the collection $\mathscr{L}$. Checking whether or not there exists a stability condition $\theta$ for which $\psi_\theta$ is a morphism can be tedious; hence we introduce a sufficient condition that is straightforward to check. We also explicitly describe the image of $\psi_\theta$ and address the question of representability of the morphism $\psi_\theta$. Let $\mathscr{L}_{\textup{bpf}}$ denote the collection of line bundles \[\{L_i^\vee \otimes L_j\, |\, L_i, L_j \in \mathscr{L} \text{ and } L_i^\vee \otimes L_j \text{ is base-point free}\}.\] Then we show the following:
\begin{thm4.12}
If $\textup{rank}(\mathbb{Z} \mathscr{L}) = \textup{rank}(\mathbb{Z} \mathscr{L}_\textup{bpf})$ then $\mathscr{L}$ is base-point free, i.e.\ there exists a generic stability condition $\theta$ such that $\psi_\theta: \mathcal{X} \rightarrow \mathcal{M}_\theta(Q,\div)$ is a morphism.
\end{thm4.12}
\begin{thm4.21}
A morphism $\psi_{\theta}$ is representable \iff $\bigoplus_{j=1}^rL_j$ is $\pi$-ample.
\end{thm4.21}
For $\mathcal{X}$ a toric orbifold, the Abramovich-Hassett construction may be recovered.
\begin{rmk4.7}
Given a polarizing line bundle $L$ on $\mathcal{X}$, the Abramovich-Hassett construction for $n=0$, is recovered by applying our machinery to the collection \[\mathscr{L} = (\mathcal{O}_\mathcal{X}, L, L \otimes L^{\otimes 2}, \ldots, L^{\otimes m(m+1)/2})\] and if necessary, working with an `incomplete' quiver of sections. An incomplete quiver of sections is a quiver of sections where not all torus-invariant sections contribute to paths in the quiver, analogous to an incomplete linear series.
\end{rmk4.7}
We then apply this technology to the McKay quiver associated to a finite abelian subgroup of $\textup{GL}(n, \ensuremath{\Bbbk})$. After showing that every refined representation of the labelled McKay quiver $(Q, \div)$ is $\theta$-stable and deducing $\psi_\theta$ is a morphism for any given stability condition $\theta$, we show that $\psi_\theta: \AA^n/G \rightarrow \mathcal{M}_\theta(Q, \div)$ is a closed immersion. We tweak the GIT construction of $\mathcal{M}_\theta(Q, \div)$ by allowing the homogenizing parameters to be zero and examine a substack cut out by an ideal defined naturally from the labels of $Q$. By studying the GIT chamber decomposition, we observe that certain chambers define semistable loci in which every homogenizing parameter is nonzero, enabling us to recover the stack $[\AA^n/G]$. We also show that in the semistable locus of a second chamber the homogenizing variables are completely determined by the variables corresponding to the arrows and are therefore redundant. This recovers the Craw-Maclagan-Thomas \cite{CMT1} construction of the coherent component Hilb$^G(\AA^n)$ of Nakamura's $G$-Hilbert scheme. Using the results of Ito-Nakamura \cite{ItoNakamura} and Nakamura \cite{Nakamura} we have the following results.
\begin{thm5.4}
For finite abelian $G \subset \textup{GL}(n,\ensuremath{\Bbbk})$, there exist generic stability conditions $\chi_{\theta_1}, \chi_{\theta_2} \in \textup{PGL}(\alpha)^\vee$, such that \[[\AA^n/G] \cong [\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}})^{ss}_{\theta_1} / \textup{PGL}(\alpha)] \,\text{ and }\, \textup{Hilb}^G(\AA^n) \cong [\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}})^{ss}_{\theta_2} / \textup{PGL}(\alpha)].\]
\end{thm5.4}
\begin{coro5.5}
For $n \leq 3$ and finite abelian $G\subset \textup{SL}(n, \ensuremath{\Bbbk})$, there exist generic stability conditions $\chi_{\theta_1}, \chi_{\theta_2} \in \textup{PGL}(\alpha)^\vee$, such that \[ [\AA^n/G] \cong [\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}})^{ss}_{\theta_1} / \textup{PGL}(\alpha)] \,\text{ and }\, G\textup{-Hilb}(\AA^n) \cong [\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}})^{ss}_{\theta_2} / \textup{PGL}(\alpha)].\]
\end{coro5.5}
The paper is organized as follows. We assemble some background material on quivers and toric orbifolds in Section \ref{Back}. In Section \ref{Refine}, we define the ambient stacks in our construction. We define our stacky analogue of the classical linear series construction in Section \ref{Quiver}. Finally in Section \ref{McKay}, we apply the machinery from Section \ref{Quiver} to the McKay quiver.
\subsubsection*{Conventions and Notation} The symbol $\ensuremath{\Bbbk}$ will be reserved for an algebraically closed field of characteristic 0. All objects and maps are defined over $\ensuremath{\Bbbk}$ unless stated otherwise. The symbol $\mathbb{N}$ will be reserved for the nonnegative integers. Our semigroups contain units. For a finite set $C$ we use $\mathbb{Z} C$ to denote the free abelian group generated by $C$ and $\mathbb{N} C$ to denote the free abelian semigroup generated by $C$. For an abelian group $G$ we write $G_\mathbb{Q}:= G \otimes_\mathbb{Z} \mathbb{Q}$. For us, a geometric point of a stack $\mathcal{X}$ is a morphism $\textup{Spec}(\ensuremath{\Bbbk}) \rightarrow \mathcal{X}$. We use $\mathbb{P}(w_1, \ldots, w_n)$ to denote the weighted projective stack with weights $w_1,\ldots,w_n$.
\subsubsection*{Acknowledgments}
Firstly, I would like to express my gratitude to my Ph.D. supervisor Alastair Craw for introducing me to this problem and for the countless discussions that shaped this project. I would like to thank Alastair King for many stimulating discussions and for his ideas on generalizing refined representations beyond the case $\alpha = (1,\ldots,1)$. I am indebted to Barbara Fantechi and {\'E}tienne Mann for teaching me about stacks and answering numerous stacky questions. I would also like to thank Ewan Morrison, Alexander Quintero V\'{e}lez, Dorothy Winn and the rest of office 522 for all the helpful discussions we had. Many thanks are due to the referee and my thesis examiners for useful comments and suggestions. I also acknowledge the financial support of the University of Glasgow.
\section{Background}\label{Back}
\subsection{Quivers}\label{Quivers}
Most of this subsection is borrowed from Section 2.2 of \cite{Craw-Smith} and is included here for completeness.
A quiver $Q$ is specified by two finite sets $Q_0$ and $Q_1$, whose elements are called vertices and arrows respectively, together with two maps $h, t \colon Q_1 \rightarrow Q_0$ indicating the vertices at the head and tail of each arrow. We assume throughout that $Q$ is connected. A nontrivial path in $Q$ is a sequence of arrows $p = a_1 \dotsb a_m$ with $h(a_{k}) = t(a_{k+1})$ for $1 \leq k < m$. We set $t(p) = t(a_{1})$ and $h(p)= h(a_m)$. Each $i \in Q_0$ gives a trivial path $e_i$ where $t(e_i) = h(e_i) = i$. The path algebra $\ensuremath{\Bbbk} Q$ is the $\ensuremath{\Bbbk}$-algebra whose underlying $\ensuremath{\Bbbk}$-vector space has a basis consisting of paths in $Q$; the product of two basis elements is the basis element defined by concatenation of the paths when they are composable, and zero otherwise. A cycle is a path $p$ in which $t(p) = h(p)$. A quiver is acyclic if it contains no cycles. A vertex is a source if it is not the head of any arrow, and a quiver is rooted if it has a unique source.
The vertex space $\mathbb{Z}^{Q_0}$ is the free abelian group generated by the vertices and the arrow space $\mathbb{Z}^{Q_1}$ is the free abelian group generated by the arrows. We write $\mathbb{N}^{Q_0}$ and $\mathbb{N}^{Q_1}$ for the subsemigroups generated by the basis elements of $\mathbb{Z}^{Q_0}$ and $\mathbb{Z}^{Q_1}$. The incidence map $\textup{inc}: \mathbb{Z}^{Q_1} \rightarrow \mathbb{Z}^{Q_0}$ is defined by $\textup{inc}(\mathbf{e}_a) = \mathbf{e}_{h(a)} - \mathbf{e}_{t(a)}$. The weight lattice $\textup{Wt}(Q)$ is the image of $\textup{inc}$, that is, the sublattice consisting of those elements $\theta = \sum_{i\in Q_0} \theta_i \mathbf{e}_i \in \mathbb{Z}^{Q_0}$ for which $\sum_{i \in Q_0} \theta_i = 0$.
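For readers who wish to compute with these objects, the incidence map is simply an integer matrix whose columns are the vectors $\mathbf{e}_{h(a)} - \mathbf{e}_{t(a)}$. The following minimal Python sketch (illustrative only; the vertex numbering, arrow encoding and function name are our own choices, not notation from the paper) builds this matrix and checks that its image lies in $\textup{Wt}(Q)$, i.e.\ that every column sums to zero.

```python
# Illustrative sketch (our own encoding, not notation from the paper):
# the incidence map inc: Z^{Q_1} -> Z^{Q_0} as an integer matrix.

def incidence_matrix(num_vertices, arrows):
    """Rows indexed by vertices, columns by arrows (t, h); the column of an
    arrow a is e_{h(a)} - e_{t(a)}."""
    cols = []
    for (t, h) in arrows:
        col = [0] * num_vertices
        col[h] += 1
        col[t] -= 1
        cols.append(col)
    return [[cols[j][i] for j in range(len(arrows))]
            for i in range(num_vertices)]

# the quiver of sections of P(1,1,2) appearing later in this paper:
# arrows a1, a2: 0 -> 1, a3, a4: 1 -> 2, a5: 0 -> 2
arrows = [(0, 1), (0, 1), (1, 2), (1, 2), (0, 2)]
inc = incidence_matrix(3, arrows)

# every column sums to zero, so the image of inc lies in Wt(Q)
assert all(sum(inc[i][j] for i in range(3)) == 0 for j in range(5))
```

The same matrix reappears when we verify the $\mathbb{P}(1,1,2)$ computation at the end of the paper.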
A representation $\overline{W} = (W_i, w_a)$ of $Q$ consists of a vector space $W_i$ for each $i \in Q_0$ and a linear map $w_a \colon W_{t(a)} \rightarrow W_{h(a)}$ for each $a \in Q_1$. The dimension vector of $\overline{W}$ is the integer vector $(\dim W_{i}) \in \mathbb{N}^{Q_0}$. A map between representations $\overline{W} = (W_i, w_a)$ and $\overline{W}' = (W_i', w_a')$ is a family $\xi_{i} \colon W_i^{\,} \rightarrow W_i'$ for $i \in Q_0$ of linear maps that are compatible with the structure maps, that is $w_a' \circ\xi_{t(a)} = \xi_{h(a)} \circ w_a$ for all $a \in Q_1$. With composition defined componentwise, we obtain the abelian category of representations of $Q$ denoted rep$_\ensuremath{\Bbbk}(Q)$. This category is equivalent to the category $\ensuremath{\Bbbk} Q$-mod of finitely generated left modules over the path algebra.
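The compatibility condition defining a map of representations is easy to test numerically. The sketch below (a toy encoding of our own; `is_morphism` and the chosen matrices are hypothetical, not taken from \cite{Craw-Smith}) stores a representation as a dictionary of matrices and verifies the intertwining relation $w_a' \circ \xi_{t(a)} = \xi_{h(a)} \circ w_a$ on the Kronecker quiver.

```python
# Illustrative sketch (our encoding, not from the paper): a quiver
# representation as a dictionary of matrices, and a check of the
# compatibility condition  w'_a o xi_{t(a)} = xi_{h(a)} o w_a.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_morphism(arrows, w, w_prime, xi):
    """arrows: list of (t, h); w, w_prime: arrow index -> matrix;
    xi: vertex -> matrix of the candidate morphism."""
    return all(mat_mul(w_prime[a], xi[t]) == mat_mul(xi[h], w[a])
               for a, (t, h) in enumerate(arrows))

# Kronecker quiver, two arrows 0 -> 1; representations with alpha = (1, 2)
arrows = [(0, 1), (0, 1)]
w = {0: [[1], [0]], 1: [[0], [1]]}            # W_0 = k, W_1 = k^2
xi = {0: [[1]], 1: [[0, 1], [1, 0]]}          # swap the coordinates of W_1
w_swapped = {0: [[0], [1]], 1: [[1], [0]]}
assert is_morphism(arrows, w, w_swapped, xi)  # xi intertwines w with w_swapped
```

Swapping the two coordinates of $W_1$ interchanges the two structure maps, exactly as the intertwining relation predicts.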
Each $\theta \in \textup{Wt}(Q)$ defines a stability notion for representations of $Q$. A representation $\overline{W}$ is $\theta$-semistable if, for every proper, nonzero subrepresentation $\overline{W}' \subset \overline{W}$, we have $\sum_{i \in Q_0} \theta_i \cdot \dim(W_i') \geq 0$. The notion of $\theta$-stability is obtained by replacing $\geq$ with $>$. For a given dimension vector $\alpha \in \mathbb{N}^{Q_0}$, a family of $\theta$-semistable quiver representations over a connected scheme $S$ is a collection of rank $\alpha_i$ locally free sheaves $\mathcal{W}_i$ together with morphisms $\mathcal{W}_{t(a)} \rightarrow \mathcal{W}_{h(a)}$ for every $a\in Q_1$ such that the induced representation over each point of $S$ is $\theta$-semistable. When every $\theta$-semistable representation is $\theta$-stable and the dimension vector is primitive, this moduli problem is representable by a scheme $\mathcal{M}_\theta(Q, \alpha)$; see Proposition 5.3 in \cite{King}.
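To see the stability inequality in action, here is a hedged computational sketch. It relies on a simplifying assumption not made in the text: for $\alpha = (1,\ldots,1)$ and a representation all of whose maps $w_a$ are nonzero, subrepresentations correspond exactly to subsets $S \subset Q_0$ that are closed under arrows ($t(a) \in S$ forces $h(a) \in S$), so $\theta$-stability can be checked by enumerating vertex subsets.

```python
from itertools import combinations

def is_theta_stable(num_vertices, arrows, theta):
    """theta-stability for alpha = (1,...,1) and a representation whose maps
    are all nonzero; subrepresentations = arrow-closed vertex subsets."""
    assert sum(theta) == 0                     # theta lies in Wt(Q)
    for r in range(1, num_vertices):           # proper, nonzero subsets
        for S in combinations(range(num_vertices), r):
            S = set(S)
            if any(t in S and h not in S for (t, h) in arrows):
                continue                       # not closed: no subrepresentation
            if sum(theta[i] for i in S) <= 0:  # strict inequality fails
                return False
    return True

# Kronecker quiver: two vertices, two arrows 0 -> 1
assert is_theta_stable(2, [(0, 1), (0, 1)], [-1, 1])
assert not is_theta_stable(2, [(0, 1), (0, 1)], [1, -1])
```

For the Kronecker quiver the only arrow-closed proper subset is $\{1\}$, so stability holds precisely when $\theta_1 > 0$.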
\subsection{Toric orbifolds}
Toric Deligne-Mumford stacks were first introduced by Borisov-Chen-Smith in \cite{BCS}; later, Fantechi-Mann-Nironi \cite{FMN} gave an equivalent definition analogous to the classical definition of a toric variety. In this article we will only be concerned with toric orbifolds, that is, smooth toric Deligne-Mumford stacks whose generic stabilizer is trivial, or equivalently, stacks whose dense Deligne-Mumford torus is an algebraic torus $T$. We begin by introducing the Fantechi-Mann-Nironi approach and then discuss the Borisov-Chen-Smith approach.
A toric orbifold is a smooth separated Deligne-Mumford stack $\mathcal{X}$ together with an open immersion $\iota: T \hookrightarrow \mathcal{X}$ with dense image such that the action of $T$ on itself extends to an action of $T$ on $\mathcal{X}$. Toric orbifolds can also be defined using stacky fans. A stacky fan is a triple $\mathbf{\Sigma} := (N, \Sigma, \beta)$, where $N$ is a finitely generated free abelian group, $\Sigma$ is a rational simplicial fan in $N_\mathbb{Q}$ with $d$ rays that span $N_\mathbb{Q}$, denoted $\rho_1, \ldots, \rho_d \in \Sigma(1)$, and $\beta: \mathbb{Z}^d \rightarrow N$ is a morphism of groups for which $\beta(\mathbf{e}_i) \otimes 1$ lies on the ray $\rho_i$ in $N_\mathbb{Q}$. The toric orbifold associated to a stacky fan is constructed as follows. Let $\mathbb{Z}^{\Sigma(1)}:= (\mathbb{Z}^d)^\vee$ and consider the exact sequence
\begin{equation}\label{Coxsq}
\xymatrix@C=1.3cm{N^\vee \ar[r]^-{\beta^\vee} & \mathbb{Z}^{\Sigma(1)} \ar[r]^-{\deg} & \text{Coker}(\beta^\vee) \ar[r]& 0.}
\end{equation}
The group $T = \textup{Hom}(\text{Coker}(\beta^\vee), \ensuremath{\Bbbk}^\times)$ has a natural action on $\AA^{\Sigma(1)}$ induced by the inclusion $T \subset (\ensuremath{\Bbbk}^\times)^{\Sigma(1)}$. For a cone $\sigma \in \Sigma$, let $\widehat{\sigma}$ denote the set of one-dimensional cones in $\Sigma$ not contained in $\sigma$ and set $x^{\widehat{\sigma}} = \prod_{\rho \in \widehat{\sigma}} x_\rho$. The Cox unstable locus is then $\mathbb{V}(B_\mathcal{X})$, where
\begin{equation}\label{Coxunstable}
B_\mathcal{X}:= \Big\langle x^{\widehat{\sigma}} \in \ensuremath{\Bbbk}[ x_{\rho} \, | \, \rho \in \Sigma(1)] \,\,\Big | \,\,\sigma \in \Sigma\Big\rangle.
\end{equation}
The stack $\mathcal{X}_\mathbf{\Sigma}$ associated to the stacky fan is \[\Bigg[ \frac{\AA^{\Sigma(1)} \setminus \mathbb{V}(B_\mathcal{X})}{T}\Bigg].\] The group of line bundles $\textup{Pic}(\mathcal{X}_{\mathbf{\Sigma}})$ is given by $\textup{Hom}(T, \ensuremath{\Bbbk}^\times) \cong \text{Coker}(\beta^\vee)$ and the group of torus-invariant divisors of $\mathcal{X}_\mathbf{\Sigma}$ is given by $\mathbb{Z}^{\Sigma(1)}$. Given any toric orbifold $\mathcal{X}$ there exists a stacky fan $\mathbf{\Sigma}$ such that $\mathcal{X}_\mathbf{\Sigma} \cong \mathcal{X}$; see Theorem 7.23 in \cite{FMN}. A toric orbifold is projective, in the sense of Kresch \cite{Kresch}, if its coarse moduli space is projective (cf. Corollary 5.4 of \cite{Kresch}).
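As a concrete illustration (a worked example of our own; the particular matrix $\beta$ is our choice of presentation, not one appearing in \cite{BCS} or \cite{FMN}): the weighted projective stack $\mathbb{P}(1,1,2)$ arises from the stacky fan with $N = \mathbb{Z}^2$, rays through $(1,0)$, $(-1,-2)$ and $(0,1)$, and $\beta$ the matrix with these columns. The weights of the $\ensuremath{\Bbbk}^\times$-action on $\AA^3$ are recovered from the relation among the rays, i.e.\ from $\ker(\beta)$; the sketch below computes this kernel over $\mathbb{Q}$ by a small Gaussian elimination.

```python
from fractions import Fraction

# Hedged example (an illustrative stacky-fan presentation chosen by us):
# P(1,1,2) from N = Z^2 with beta(e_1) = (1,0), beta(e_2) = (-1,-2),
# beta(e_3) = (0,1); the weights come from the relation among the rays.

def nullspace(rows):
    """Basis of the rational kernel of a small matrix, via Gauss elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    ncols = len(m[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in (c for c in range(ncols) if c not in pivots):
        v = [Fraction(0)] * ncols
        v[free] = Fraction(1)
        for row, c in zip(m, pivots):
            v[c] = -row[free]
        basis.append(v)
    return basis

beta = [[1, -1, 0],
        [0, -2, 1]]               # columns are beta(e_i) in N = Z^2
(ker,) = nullspace(beta)          # one-dimensional kernel
weights = [int(x / ker[0]) for x in ker]
print(weights)                    # the weights [1, 1, 2]
```

The relation $1\cdot(1,0) + 1\cdot(-1,-2) + 2\cdot(0,1) = (0,0)$ is exactly the statement that the degrees of the Cox coordinates are $(1,1,2)$.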
\section{Moduli of refined quiver representations}\label{Refine}
The goal of this section is to define an ambient space for our stacky analogue of the linear series construction. A quiver of sections of a toric orbifold is naturally labelled by torus-invariant divisors (see Section \ref{Quiver}). Motivated by this, we define the notion of a labelled quiver of which our quivers of sections are examples. Then we define the notion of a refined quiver representation of a labelled quiver and study moduli stacks of the aforementioned objects.
\begin{defn}
A {\em labelled quiver} $(Q, l)$ is a connected finite quiver $Q$ along with a free abelian group $\mathbb{Z}^d$ for some $d\in \mathbb{N}$ and a map of sets $l: Q_1 \rightarrow \mathbb{Z}^d$.
\end{defn}
Abusing notation, we also use $l$ to denote the {\em labelling map} $\mathbb{Z}^{Q_1} \rightarrow \mathbb{Z}^d$ induced by $l$. Let $R$ denote the image of $\ker(l)$ under the incidence map and consider the following commutative diagram,
\begin{equation}
\begin{split}
\xymatrix{
\text{ker}(l) \ar[r] \ar[d] & R:= \text{inc(ker}(l)) \ar[d]\\
\mathbb{Z}^{Q_1} \ar[r]^-{\text{inc}} \ar[d]_{l} & \textup{Wt}(Q)\\
\mathbb{Z}^{d}.}
\end{split}
\end{equation}
Given a quiver $Q$, a representation $(W_i, w_a)$ of $Q$ and an element $\b = \sum_{i \in Q_0} \b_i \mathbf{e}_i \in \mathbb{Z}^{Q_0}$ define
\begin{equation*}
\textup{det}_{\b}\, W:= \bigotimes_{i\in Q_0}(\det W_i)^{\otimes \b_i}.
\end{equation*}
Here we use the convention $W_i ^{\otimes -1} := W_i^{\vee}$.
Pick a basis $\mathfrak{B}$ of $R$.
\begin{defn} \label{rerep}
A {\em refined representation $W$ of a labelled quiver} $(Q,l)$ consists of a finite dimensional representation $\overline{W}:=(W_i, w_a)$ of $Q$ together with an isomorphism $f_\b: \ensuremath{\Bbbk} \rightarrow \text{det}_{\b}\,W$ for every $\b \in \mathfrak{B}$. The {\em dimension vector} of a refined representation $W= (W_i, w_a, f_\b)$ is the integer vector $(\text{dim}(W_i))_{i \in Q_0}$.
We say that two refined representations $W = (W_i, w_a, f_\b)$, $W' = (W_i', w_a', f'_\b)$ are {\em isomorphic} if there exist isomorphisms of vector spaces $\gamma_i: W_i \rightarrow W_i'$ for every vertex $i \in Q_0$ such that $\gamma_{h(a)} \circ w_a \circ \gamma_{t(a)}^{-1} = w_a'$ for all $a\in Q_1$ and $\gamma_\b \circ f_\b = f'_\b$ for all $\b \in \mathfrak{B}$, where $\gamma_\b: \text{det}_{\b}\,W \rightarrow \text{det}_{\b}\,W'$ is the isomorphism induced by the isomorphisms $\gamma_i$.
\end{defn}
\begin{rmk}
\begin{enumerate}
\item [i)] The independence of the choice of basis $\mathfrak{B}$ will be addressed in Remark \ref{indep}.
\item [ii)] Refined representations and their moduli may be defined without appealing to a labelling map $l$; the crucial ingredient is the subgroup $R \subset \textup{Wt}(Q)$. Given a quiver $Q$ and an arbitrary subgroup $K \subset \textup{Wt}(Q)$ with a choice of basis $\mathfrak{B}_K$, one may define {\em $K$-refined representations of $Q$} to be finite-dimensional representations $\overline{W}$ of $Q$ together with isomorphisms $f_\b: \ensuremath{\Bbbk} \rightarrow \text{det}_{\b}\,W$ for every $\b \in \mathfrak{B}_K$. All the definitions and results in this section may be lifted to this setting. With the immediate applications in mind, we restrict ourselves to subgroups $R$ arising from a labelling map $l$.
\end{enumerate}
\end{rmk}
Fix $\alpha:= (\alpha_i) \in \mathbb{N}^{Q_0}$ and, for each $i\in {Q_0}$, let $W_i$ be a vector space of dimension $\alpha_i$. Let $\ensuremath{\Bbbk}[z_\b\,|\, \b \in \mathfrak{B}]$ denote the coordinate ring of the vector space $\bigoplus_{\b \in \mathfrak{B}}\text{det}_{\b}\, W$. The isomorphism classes of refined representations of $(Q, l)$ are in one-to-one correspondence with the orbits in the refined representation space
\begin{equation*}
\mathcal{R}(Q, l, \alpha) := \bigg( \bigoplus_{a \in Q_1} \textup{Hom}(W_{t(a)}, W_{h(a)}) \oplus \bigoplus_{\b \in \mathfrak{B}}\text{det}_{\b}\, W \bigg) \setminus \mathbb{V}\Big(\prod_{\b \in \mathfrak{B}} z_\b\Big)
\end{equation*}
of the symmetry group
\begin{equation*}
\textup{GL}(\alpha) := \prod_{i \in Q_0} \textup{GL}(W_i)
\end{equation*}
under the change of basis action. Note that $\textup{GL}(\alpha)$ contains the diagonal one-parameter subgroup $\Delta = \{(\lambda \cdot 1,\ldots,\lambda\cdot 1): \lambda \in \ensuremath{\Bbbk}^\times\}$ acting trivially and define $\textup{PGL}(\alpha) := \textup{GL}(\alpha) / \Delta$.
We note that the characters of $\textup{GL}(\alpha)$ are given by
\begin{equation*}
\chi_\theta(g) = \prod_{i \in Q_0} \det(g_i)^{\theta_i}
\end{equation*}
for $\theta= \sum_i \theta_i \mathbf{e}_i \in \mathbb{Z}^{Q_0}$ and that every character of $\textup{GL}(\alpha)$ is of the form $\chi_\theta$ for some $\theta \in \mathbb{Z}^{Q_0}$. As the diagonal $\Delta \subset \textup{GL}(\alpha)$ acts trivially on $\mathcal{R}(Q, l, \alpha)$ we are interested in characters $\chi_\theta$ that satisfy $\sum_i \theta_i \alpha_i =0$.
It is convenient to identify $\mathbb{Z}^{Q_0}$, and hence $\textup{GL}(\alpha)^\vee$, with a subgroup of the Grothendieck group $K_0(\ensuremath{\Bbbk} Q \text{-mod})$ as follows. Let $\overline{W}=(W_i, w_a)$ be a representation of $Q$; implicitly using the equivalence $\ensuremath{\Bbbk} Q \text{-mod} \cong \text{rep}_\ensuremath{\Bbbk}(Q)$, set $\theta(\overline{W}) = \sum_i \theta_i \dim W_i$ and observe that this assignment is additive on short exact sequences.
We introduce some notation before the next definition. Let $M$ be a module over a ring $A$ and let $M_\bullet$ be a proper filtration of $M$ (that is, a filtration in which at least one term is a nonzero proper submodule of $M$) given by
\begin{equation*}
0 \subsetneq M_1 \subset \ldots \subset M_{n-1} \subsetneq M_n=M.
\end{equation*}
For $\theta \in K_0(A\text{-mod})^\vee$ define $\theta(M_\bullet)= \sum_{j=1}^{n-1} \theta(M_j)$.
\begin{defn} \label{stab}
Let $\theta \in K_0(\ensuremath{\Bbbk} Q \text{-mod})^\vee$. A refined representation $W$ is {\em $\theta$-semistable} if $\theta(W) = 0$ and $\theta(W_\bullet)\geq 0$ for every proper filtration $W_\bullet$ of the $\ensuremath{\Bbbk} Q$-module $\overline{W}$ that satisfies $\b(W_\bullet) = 0$ for every $\b \in \mathfrak{B}$.
The notion of {\em $\theta$-stability} is obtained by replacing $\geq$ with $>$.
\end{defn}
We introduced the notion of $\theta$-semistability to be able to make sense of families of refined representations and use the term `moduli stack'. In practice, we are interested primarily in functions $\theta \in K_0(\ensuremath{\Bbbk} Q \text{-mod})^\vee$ coming from characters of $\textup{GL}(\alpha)$. The next result allows us to check $\theta$-semistability by checking GIT semistability with respect to $\chi_\theta$.
\begin{thm}\label{thetaGIT}
Let $\chi_\theta$ be a character of $\textup{GL}(\alpha)$ and $\theta$ the corresponding element of $K_0(\ensuremath{\Bbbk} Q \text{-mod})^\vee$. A refined quiver representation $W$ is $\theta$-semistable (resp. $\theta$-stable) if and only if the corresponding point in $\mathcal{R}(Q, l, \alpha)$ is $\chi_\theta$-semistable (resp. $\chi_\theta$-stable) with respect to action of $\textup{GL}(\alpha)$.
\end{thm}
\begin{proof}
We begin by pinning down the one-parameter subgroups $\lambda$ of $\textup{GL}(\alpha)$ for which $\lim_{t \rightarrow 0} (\lambda(t)\cdot W)$ exists. Write $\mathcal{R}(Q, l, \alpha) \cong \mathcal{R}(Q, \alpha) \times (\ensuremath{\Bbbk}^\times)^\mathfrak{B}$ and $\pi_1, \pi_2$ for the first and second projection respectively. The limit $\lim_{t \rightarrow 0} (\lambda(t)\cdot W)$ exists \iff $\lim_{t \rightarrow 0} (\lambda(t)\cdot \pi_1(W))$ and $\lim_{t \rightarrow 0} (\lambda(t)\cdot \pi_2(W))$ exist. By the discussion preceding Proposition 3.1 of \cite{King}, $\lim_{t \rightarrow 0} (\lambda(t)\cdot \pi_1(W))$ exists if and only if $\lambda$ defines a $\mathbb{Z}$-filtration, $W_\bullet$, of the $\ensuremath{\Bbbk} Q$-module $\pi_1(W) = \overline{W}$,
\begin{equation*}
\ldots \subset W_{n-1} \subset W_n \subset W_{n+1} \subset \ldots
\end{equation*}
for which $W_n = 0$ for $n \ll 0$ and $W_n = \overline{W}$ for $n \gg0$. Now consider $\lim_{t \rightarrow 0} (\lambda(t)\cdot \pi_2(W))$. The one-parameter subgroup $\lambda$ defines a $\mathbb{Z}$-grading on the coordinate ring $\ensuremath{\Bbbk}[z_{\b}, z_\b^{-1} \, | \, \b \in \mathfrak{B}]$ of $(\ensuremath{\Bbbk}^\times)^\mathfrak{B}$. The limit $\lim_{t \rightarrow 0} (\lambda(t)\cdot \pi_2(W))$ exists if and only if the variables $z_\b$ and $z_\b^{-1}$ are simultaneously non-negatively graded. Notice that this holds precisely when they are zero graded, that is when $\langle \chi_\b, \lambda\rangle =0$ for every $\b \in \mathfrak{B}$. Therefore, for $\lambda$ and $W$ as above, $\lim_{t \rightarrow 0} (\lambda(t)\cdot W)$ exists \iff $\lambda$ gives a $\mathbb{Z}$-filtration $(W_n)_{n \in \mathbb{Z}}$ of the quiver representation $\pi_1(W) =\overline{W}$ and $\langle \chi_\b, \lambda\rangle =0$, for every $\b \in \mathfrak{B}$.
Now assume $W$ is $\theta$-semistable. Take $\lambda$ to be a one-parameter subgroup for which the limit $\lim_{t \rightarrow 0} (\lambda(t)\cdot W)$ exists. By the discussion preceding Proposition 3.1 of \cite{King}, one may associate a filtration $W_\bullet$ to $\lambda$ such that $\langle \chi_\theta, \lambda\rangle = \theta(W_\bullet)$. By assumption, this implies $\langle \chi_\b, \lambda\rangle = \b(W_\bullet) = 0$ for all $\b \in \mathfrak{B}$. Since $W$ is $\theta$-semistable we have $\langle \chi_\theta, \lambda\rangle = \theta(W_\bullet) \geq 0$. GIT semistability of $W$ then follows from Mumford's numerical criterion, see Proposition 2.5 of \cite{King}.
Next assume $W \in \mathcal{R}(Q, l, \alpha)$ is $\chi_\theta$-semistable. By the fact that $\Delta$ acts trivially we have $\langle \chi_\theta, \Delta\rangle = \theta(W) = 0$. Let $W_\bullet$ be a proper filtration satisfying the conditions of Definition \ref{stab}. By the discussion preceding Proposition 3.1 of \cite{King}, there exists a one-parameter subgroup $\lambda$ for which the associated filtration is $W_\bullet$. By assumption we have that $\b(W_\bullet) = \langle \chi_\b, \lambda\rangle = 0$ for every $\b \in \mathfrak{B}$, so $\lim_{t \rightarrow 0} (\lambda(t)\cdot W)$ exists. Mumford's numerical criterion gives $\theta(W_\bullet) = \langle \chi_\theta, \lambda\rangle \geq 0$, as required.
\end{proof}
\begin{defn}\label{moduli}
For $\chi_\theta \in \textup{PGL}(\alpha)^\vee \subset \textup{GL}(\alpha)^\vee$, let $\mathcal{R}(Q, l, \alpha)_\theta^{ss}$ denote the open subscheme of $\mathcal{R}(Q, l, \alpha)$ parametrizing the $\theta$-semistable refined representations. The {\em moduli stack of $\theta$-semistable refined representations} is the stack quotient
\begin{equation*}
\mathcal{M}_\theta(Q, l, \alpha):= [\mathcal{R}(Q, l, \alpha)_\theta^{ss} / \textup{PGL}(\alpha)].
\end{equation*}
\end{defn}
\begin{rmk} \label{indep}
The definition of $\mathcal{M}_\theta(Q, l, \alpha)$ depends a priori on a choice of basis $\mathfrak{B} = \{\b_1, \ldots, \b_m\}$ of $R$. However, any alternative basis $\mathfrak{B}'$ gives an isomorphic stack. Indeed, given $W=(W_i, w_a, f_\b) \in \mathcal{R}(Q, l, \alpha)$, write $\b' = n_1 \b_1 + \cdots + n_m \b_m$ and define \[f_{\b'} :\ensuremath{\Bbbk} \cong \ensuremath{\Bbbk}^{\otimes n_1}\otimes \cdots \otimes \ensuremath{\Bbbk}^{\otimes n_m} \rightarrow (\text{det}_{\b_1} \,W)^{\otimes n_1}\otimes \cdots \otimes\, (\text{det}_{\b_m} \,W)^{\otimes n_m} \cong \text{det}_{\b'} \,W\] for every $\b' \in\mathfrak{B}'$. The assignment $(W_i, w_a, f_\b) \mapsto (W_i, w_a, f_{\b'})$ gives an equivariant isomorphism from $\mathcal{R}(Q, l, \alpha)$ defined using $\mathfrak{B}$ to $\mathcal{R}(Q, l, \alpha)$ defined using $\mathfrak{B}'$, under which semistable points are sent to semistable points. This follows from the fact that semistability depends only on the subgroup $R \subset \textup{Wt}(Q)$ and the factor $(W_i,w_a)$ of $W$: the factor $(W_i, w_a)$ is not altered by the proposed isomorphism, and checking $\b(W_\bullet)=0$ on basis elements $\b \in \mathfrak{B}$ is equivalent to checking $r(W_\bullet)=0$ on every element $r \in R$. Hence the equivariant isomorphism above defines an isomorphism of the resulting stacks $\mathcal{M}_\theta(Q, l, \alpha)$.
This is not to say that the choice of basis is unimportant. It only becomes unimportant when we insist that the linear maps $f_\b: \ensuremath{\Bbbk} \rightarrow \text{det}_{\b}\,W$ are isomorphisms. Indeed, let $\b' = -\b$. Then, given a linear map $f_\b: \ensuremath{\Bbbk} \rightarrow \text{det}_{\b}\,W$, there exists a natural linear map $(f_\b)^\vee: \text{det}_{\b'}\,W \cong (\text{det}_{\b}\,W)^\vee \rightarrow \ensuremath{\Bbbk}^\vee \cong \ensuremath{\Bbbk}$. If $f_\b$ is an isomorphism then we define $f_{\b'}:= (f_\b^\vee)^{-1}$; otherwise there is no natural definition of $f_{\b'}$. In Section 5, we will allow the maps $f_\b$ to be zero and will have to choose a basis carefully.
\end{rmk}
Given a quiver $Q$, a family of quiver representations $(\mathcal{W}_i, {w_a})$ over a base scheme $S$ and an element $\b = \sum_{i \in Q_0} \b_i \mathbf{e}_i \in \mathbb{Z}^{Q_0}$ define
\begin{equation*}
\textup{det}_{\b}\, \mathcal{W}:= \bigotimes_{i\in Q_0}(\det \mathcal{W}_i)^{\otimes \b_i}.
\end{equation*}
Here we use the convention $\mathcal{W}_i ^{\otimes -1} := \mathcal{W}_i^{\vee}$.
To justify the term `moduli stack' in Definition \ref{moduli} we must give a suitable notion of families over schemes for which the moduli stack is $\mathcal{M}_\theta(Q, l, \alpha)$. One could define a family of refined representations over a scheme $S$ to be a refined representation of $(Q, l)$ in the category of locally free $\mathcal{O}_S$-modules, that is, Definition \ref{fam} without the isomorphism of line bundles $\mathcal{O}_S \rightarrow \text{det}_{\theta_\Delta}\,\mathcal{W}$. However, this would imply that $\Delta$ is a subgroup of the automorphism group of every object. This gives stacks that are unsuitable for our applications, as they do not admit closed immersions from Deligne-Mumford stacks.
If $\alpha$ is primitive, that is, if the greatest common divisor of its components is 1, we can alter the definition of a family to sidestep this issue. We do this by adding an extra nonzero parameter to $\mathcal{R}(Q, l, \alpha)$ on which $\Delta$ acts with weight 1. This amounts to finding a character $\theta_\Delta \in \mathbb{Z}^{Q_0}$ for which $\langle \theta_\Delta, \Delta \rangle =\sum_i \theta_i \alpha_i= 1$; such a $\theta_\Delta$ exists precisely when $\alpha$ is primitive. For the rest of the section fix a primitive dimension vector $\alpha$ and pick $\theta_\Delta \in \mathbb{Z}^{Q_0}$ such that $\langle \theta_\Delta, \Delta \rangle = 1$.
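The existence claim here is B\'ezout's identity: $\sum_i \theta_i \alpha_i = 1$ has an integer solution precisely when $\gcd(\alpha_i) = 1$. A minimal sketch (our illustration only; `theta_delta` is a hypothetical helper, not notation from the paper) produces such a character with the extended Euclidean algorithm.

```python
# Sketch (our illustration, not the paper's construction): for a primitive
# dimension vector alpha, produce theta_Delta with sum_i theta_i*alpha_i = 1
# via the extended Euclidean algorithm (Bezout's identity).

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def theta_delta(alpha):
    """theta with <theta, alpha> = gcd(alpha); this equals 1 iff alpha is primitive."""
    theta = [0] * len(alpha)
    g, theta[0] = alpha[0], 1
    for i in range(1, len(alpha)):
        g, x, y = ext_gcd(g, alpha[i])
        theta = [x * t for t in theta]
        theta[i] = y
    return theta

alpha = (2, 3, 4)                                   # primitive: gcd = 1
theta = theta_delta(alpha)
assert sum(t * a for t, a in zip(theta, alpha)) == 1
```

For a non-primitive $\alpha$ the same procedure returns a character pairing to $\gcd(\alpha)$, never to 1, matching the claim above.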
\begin{defn}\label{fam}
A {\em flat family of refined representations of $(Q, l)$} over a connected scheme $S$ is a collection of rank $\alpha_i$ locally free sheaves $\mathcal{W}_i$ for $i \in Q_0$, together with a choice of morphisms $\mathcal{W}_{t(a)} \rightarrow \mathcal{W}_{h(a)}$ for $a \in Q_1$, isomorphisms of line bundles $\mathcal{O}_S \rightarrow \text{det}_{\b}\,\mathcal{W}$ for $\b \in \mathfrak{B}$ and an isomorphism of line bundles $\mathcal{O}_S \rightarrow \text{det}_{\theta_\Delta}\,\mathcal{W}$.
\end{defn}
\begin{prop}
The stack $\mathcal{M}_\theta(Q,l, \alpha)$ is the moduli stack of families of $\theta$-semistable refined representations of $(Q, l)$.
\end{prop}
\begin{proof}
First we identify the nonzero elements of $\text{det}_{\theta_\Delta}\,W$ with $\ensuremath{\Bbbk}^\times$; the group $\textup{GL}(\alpha)$ acts on $\text{det}_{\theta_\Delta}\,W$ by change of basis. Consider the stack quotient $[\mathcal{R}(Q, l, \alpha)^{ss}_\theta \times \ensuremath{\Bbbk}^\times / \textup{GL}(\alpha)]$. We claim that this represents the moduli problem defined by Definition \ref{fam}. An object in $[\mathcal{R}(Q, l, \alpha)^{ss}_\theta \times \ensuremath{\Bbbk}^\times / \textup{GL}(\alpha)](S)$ is a principal $\textup{GL}(\alpha)$-bundle $\mathcal{P}:= \bigoplus_{i\in Q_0} \mathcal{P}_i$ over $S$ with a $\textup{GL}(\alpha)$-equivariant morphism $\mathcal{P} \rightarrow \mathcal{R}(Q, l, \alpha)^{ss}_\theta \times \ensuremath{\Bbbk}^\times$. Define $\mathcal{W}_i$ to be the vector bundle with fibre $W_i$ corresponding to $\mathcal{P}_i$. Let $(U_j)_{j \in J}$ be an open cover of $S$ that trivializes each $\mathcal{P}_i$. For $j \in J$, an equivariant morphism $U_j \times \textup{GL}(\alpha) \rightarrow \mathcal{R}(Q, l, \alpha)^{ss}_\theta \times \ensuremath{\Bbbk}^\times$ is determined by the image of the identity fibre and so is determined by a morphism $U_j \rightarrow \mathcal{R}(Q, l, \alpha)^{ss}_\theta \times \ensuremath{\Bbbk}^\times$. This morphism in turn defines a section of the vector bundle $U_j \times \textup{Hom}(W_{t(a)}, W_{h(a)})$ for every $a \in Q_1$, a nowhere-vanishing section of $U_j \times \text{det}_{\b}\,W$ for every $\b \in \mathfrak{B}$ and a nowhere-vanishing section of $U_j \times \text{det}_{\theta_\Delta}\,W$. Since these sections come from a globally defined map $\mathcal{P} \rightarrow \mathcal{R}(Q, l, \alpha)^{ss}_\theta \times \ensuremath{\Bbbk}^\times$, they glue to give the required family over $S$. Similarly, a family over $S$ defines an object of $[\mathcal{R}(Q, l, \alpha)^{ss}_\theta \times \ensuremath{\Bbbk}^\times / \textup{GL}(\alpha)](S)$.
Morphisms of families correspond naturally to morphisms of objects of $[\mathcal{R}(Q, l, \alpha)^{ss}_\theta \times \ensuremath{\Bbbk}^\times / \textup{GL}(\alpha)](S)$.
The choice of $\theta_\Delta$ implies that $\Delta$ acts with weight one on the space of isomorphisms from $\ensuremath{\Bbbk}$ to $\text{det}_{\theta_\Delta}\,W$. For every element $(W, t) \in \mathcal{R}(Q, l, \alpha)^{ss}_\theta \times \ensuremath{\Bbbk}^\times$ there exists a unique element of $\Delta$ that acts on $(W,t)$ to give $(W,1)$. The subgroup of $\textup{GL}(\alpha)$ that fixes the $\ensuremath{\Bbbk}^\times$ component is isomorphic to $\textup{PGL}(\alpha)$, so we have a stack isomorphism
\begin{equation*}
[\mathcal{R}(Q, l, \alpha)^{ss}_\theta \times \ensuremath{\Bbbk}^\times / \textup{GL}(\alpha)] \cong [\mathcal{R}(Q, l, \alpha)^{ss}_\theta / \textup{PGL}(\alpha)] = \mathcal{M}_\theta(Q, l, \alpha)
\end{equation*}
as required.
\end{proof}
The choice of $\theta_\Delta$ may seem ad hoc at the moment, but a natural choice presents itself in our applications, see Remark \ref{conva}.
\section{Quivers of sections}\label{Quiver}
In this section we introduce a generalization of the classical linear series construction to projective toric orbifolds. Starting with a collection of line bundles $\mathscr{L} = (L_0, L_1,\ldots, L_r)$ on a projective toric orbifold $\mathcal{X}$, we produce a labelled quiver $(Q, \div)$ and a rational map $\mathcal{X} \dashrightarrow \mathcal{M}_\theta(Q,\div, \alpha)$ with $\alpha =(1,\ldots,1)$. We then study certain properties of this rational map. Throughout this section we assume that all our stacks $\mathcal{X}$ are projective toric orbifolds; we also fix the dimension vector to be $\alpha:= (1,\ldots,1)$ and drop it from the notation.
We now extend the definition of a quiver of sections, as defined by Craw-Smith in the beginning of Section 3 of \cite{Craw-Smith}, to toric orbifolds; we reproduce their definitions and adopt their conventions. Let $\mathscr{L}: = (L_0, L_1,\ldots, L_r)$ be a list of distinct line bundles on a stack $\mathcal{X}$. A $T_\mathcal{X}$-invariant section $s \in \Gamma(\mathcal{X}, L_j \otimes L_i^\vee)$ is {\em irreducible} if it cannot be written as a product of two nonzero sections $s' \in \Gamma(\mathcal{X}, L_k \otimes L_i^\vee)$ and $s'' \in \Gamma(\mathcal{X}, L_j \otimes L_k^\vee)$ for $L_k \in \mathscr{L}$. The {\em quiver of sections} associated to $\mathscr{L}$ is a quiver $Q$ in which the vertices $Q_0 = \{0, \ldots, r\}$ correspond to the line bundles of $\mathscr{L}$ and the arrows from $i$ to $j$ correspond to the irreducible $T_\mathcal{X}$-invariant sections in $\Gamma(\mathcal{X}, L_j \otimes L_i^\vee)$.
Since each arrow $a \in Q_1$ corresponds to a $T_\mathcal{X}$-invariant section $s \in \Gamma(\mathcal{X}, L_j \otimes L_i^\vee)$, we define a map $\div: Q_1 \rightarrow \mathbb{Z}^{\Sigma(1)}$ by sending $a$ to the corresponding divisor in $\mathbb{Z}^{\Sigma(1)}$. We call the labelled quiver $(Q, \div)$ the {\em labelled quiver of sections} of $\mathscr{L}$.
\begin{conventions}\label{conv}
Let $(Q, \div)$ be the labelled quiver of sections corresponding to $\mathscr{L} = (L_0, \ldots, L_r)$.
\begin{itemize}
\item[(a)] By definition, $(Q, \div)$ only depends on the line bundles $L_j \otimes L_i^\vee$ where $L_i, L_j \in \mathscr{L}$. Consequently, for any line bundle $L'$ on $\mathcal{X}$, we have $(Q, \div) = (Q', \div)$ where $(Q', \div)$ is a quiver of sections associated to $\mathscr{L}' = (L_0 \otimes L', \ldots, L_r \otimes L').$ To eliminate this redundancy, we will assume that $L_0 = \mathcal{O}_\mathcal{X}$.
\item[(b)] We will assume that $\Gamma(\mathcal{X}, L_i) \neq 0$ for every $L_i \in \mathscr{L}$. This implies that $Q$ is connected and rooted at $0 \in Q_0$.
\end{itemize}
\end{conventions}
\begin{rmk}\label{conva}
For labelled quivers of sections of line bundles, Convention \ref{conv}(a) fixes $\theta_\Delta = (1, 0,\ldots, 0)$.
\end{rmk}
Keeping the notation of Section \ref{Quivers}, define $\textup{pic}: \textup{Wt}(Q) \rightarrow \textup{Pic}(\mathcal{X})$ by $\theta = \sum_{i \in Q_0} \theta_i \mathbf{e}_i \mapsto \bigotimes_{i \in Q_0}L_i^{\otimes \theta_i}$ and let $\deg: \mathbb{Z}^{\Sigma(1)} \rightarrow \textup{Pic}(\mathcal{X})$ be the homomorphism in short exact sequence (\ref{Coxsq}). We then have the following commutative diagram
\begin{equation}\label{com1}\begin{split}
\xymatrix @C =1.5cm @R=1.3cm { \mathbb{Z}^{Q_1} \ar@{->>}[r]^-{\text{inc}} \ar[d]_{\text{div}} & \text{Wt}(Q) \ar[d]^{\text{pic}} \\
\mathbb{Z}^{\Sigma(1)} \ar@{->>}[r]^-{\text{deg}} & \text{Pic}(\mathcal{X}).}
\end{split}
\end{equation}
The subgroup $R$ is by definition the image under inc of the kernel of div, so diagram (\ref{com1}) restricts to the following commutative diagram
\begin{equation}\label{com2}\begin{split}
\xymatrix @C =1.5cm @R=1.3cm { \,\,R\,\,\ar@{^{(}->}[r]^\iota \ar[d]_{0} & \text{Wt}(Q) \ar[d]^{\text{pic}} \\
\mathbb{Z}^{\Sigma(1)} \ar@{->>}[r]^-{\text{deg}} & \text{Pic}(\mathcal{X}).}
\end{split}
\end{equation}
Define a $\textup{Wt}(Q)$-grading of the semigroup algebra $\ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R]$ by assigning the monomial $y^{u}z^{v} \in \ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R]$ degree $\textup{inc}(u) + \iota(v)$. This grading induces the change of basis action of $\textup{PGL}(\alpha)$ on $\mathcal{R}(Q, \div) \cong \textup{Spec}(\ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R])$. On the other hand, the map $\deg$ gives the $\textup{Pic}(\mathcal{X})$-grading of $\ensuremath{\Bbbk}[\mathbb{N}^{\Sigma(1)}]$ that arises from the short exact sequence (\ref{Coxsq}).
By construction $\div( \mathbb{N}^{Q_1}) \subset \mathbb{N}^{\Sigma(1)}$, so the map $\div \oplus 0: \mathbb{N}^{Q_1} \oplus R \rightarrow \mathbb{N}^{\Sigma(1)}$ induces a map of semigroup algebras $\Psi: \ensuremath{\Bbbk} [\mathbb{N}^{Q_1} \oplus R] \rightarrow \ensuremath{\Bbbk}[ \mathbb{N}^{\Sigma(1)}]$, which in turn defines a morphism $\Psi^*$ from $\AA^{\Sigma(1)}$ to $\mathcal{R}(Q, \div)$. This morphism is equivariant with respect to the actions of the groups $\textup{Hom}(\textup{Pic}(\mathcal{X}), \ensuremath{\Bbbk}^\times)$ and $\textup{PGL}(\alpha) \cong \textup{Hom}(\textup{Wt}(Q),\ensuremath{\Bbbk}^\times)$ on $\AA^{\Sigma(1)}$ and $\mathcal{R}(Q, \div)$ because the diagrams (\ref{com1}) and (\ref{com2}) commute. Thus for any $\theta \in \textup{Wt}(Q)$, $\Psi$ induces a rational map
\begin{equation*}
\psi_{\theta}: \mathcal{X} \dashrightarrow \mathcal{M}_\theta(Q, \div).
\end{equation*}
The rational map $\psi_\theta$ is a morphism of stacks \iff the inverse image under $\Psi^*$ of the $\theta$-unstable locus of $\mathcal{R}(Q, \div)$ is contained in the Cox unstable locus $\mathbb{V}(B_\mathcal{X})$, as defined in (\ref{Coxunstable}) (see Perroni's work on morphisms of toric stacks \cite{Perroni}).
We say a character $\chi_\theta \in \textup{PGL}(\alpha)^\vee$ is {\em generic} if every $\chi_\theta$-semistable point is $\chi_\theta$-stable.
\begin{defn} \label{bpf}
Take $\mathcal{X}$ as above. A collection of line bundles $\mathscr{L} = (\mathcal{O}_\mathcal{X}, L_1, \ldots, L_r)$ is {\em base-point free} if there exists generic $\chi_\theta \in \textup{PGL}(\alpha)^\vee$ for which $\psi_{\theta}: \mathcal{X} \dashrightarrow \mathcal{M}_\theta(Q, \div)$ is a morphism.
\end{defn}
\begin{rmk}
In the case where $\chi_\theta$ is not generic, $\mathcal{M}_\theta(Q,\div)$ is not a Deligne-Mumford stack. Furthermore, it rarely has a coarse moduli space. We insist that $\chi_\theta$ be generic to avoid such ambient stacks.
\end{rmk}
For $L \in \textup{Pic}(\mathcal{X})$ let $(s_0, \ldots, s_n)$ be a basis of $\Gamma(\mathcal{X}, L)$ and let $\varphi_{|L|}: \mathcal{X} \dashrightarrow \mathbb{P}(\Gamma(\mathcal{X}, L)^\vee)$ be the rational map taking $x \in \mathcal{X}$ to $(s_0(x), \ldots, s_n(x)) \in \mathbb{P}(\Gamma(\mathcal{X}, L)^\vee)$. We say that $L$ is {\em base-point free} if $\varphi_{|L|}$ is a morphism. It can be shown, using the universal property of the coarse moduli space, that every base-point free line bundle on a stack can be pulled back from a base-point free line bundle on the coarse moduli space.
\begin{lemma}
Let $\mathcal{X}$ be a projective toric orbifold. A nontrivial line bundle $L$ on $\mathcal{X}$ is base-point free \iff the collection $\mathscr{L} = (\mathcal{O}_\mathcal{X}, L)$ is base-point free.
\end{lemma}
\begin{proof}
The map $\textup{pic}$ takes the basis vector $\mathbf{e}_1-\mathbf{e}_0$ of $\textup{Wt}(Q)\cong \mathbb{Z}$ to $L$. This implies that $\ker(\textup{pic})$ is trivial: otherwise $L^{\otimes n} \cong \mathcal{O}_\mathcal{X}$ for some $n>0$, contradicting the projectivity of $\mathcal{X}$. Now $R$ is a subgroup of $\ker(\textup{pic})$ and is therefore trivial, so $\mathcal{M}_\theta(Q,\div)$ coincides with $\mathcal{M}_\theta(Q)$. Since $Q$ is acyclic, the only chamber in $\textup{Wt}(Q)_\mathbb{Q} \cong \mathbb{Q}$ is $\mathbb{Q}_{>0}$; taking $\theta$ in this chamber gives $\mathcal{M}_\theta(Q) \cong \mathbb{P}(\Gamma(\mathcal{X}, L))$, from which the claim follows.
\end{proof}
\begin{ex}\label{p112ex}
Let $\mathcal{X} = \mathbb{P}(1,1,2)$ and $\mathscr{L} = (\mathcal{O}_\mathcal{X}, \mathcal{O}(1), \mathcal{O}(2))$. The labelled quiver of sections of $\mathscr{L}$ is given by the quiver in Figure \ref{p112}, with the labelling map $\div: \mathbb{Z}^{5} \rightarrow \mathbb{Z}^{\Sigma(1)} \cong \mathbb{Z}^{3}$ defined by the matrix
\begin{equation*}\begin{pmatrix}
1 & 0 & 1& 0&0 \\ 0&1&0&1&0 \\ 0&0&0&0&1
\end{pmatrix}.\end{equation*}
\begin{figure}[h!]
\centering
\begin{equation*}
\entrymodifiers={++[o][F-]}
\xymatrix @C=3pc{0 \ar@<0.8ex>[r]|{a_1} \ar@<-0.8ex>[r]|{a_2} \ar@/_-1.5pc/ [rr]^{a_5} & 1 \ar@<0.8ex>[r]|{a_3} \ar@<-0.8ex>[r]|{a_4} & 2}
\end{equation*}\caption{A quiver of sections for $\mathbb{P}(1,1,2)$.}\label{p112}
\end{figure}
\noindent Since $\mathbf{e}_{a_1} - \mathbf{e}_{a_3}$ and $\mathbf{e}_{a_2} - \mathbf{e}_{a_4}$ generate $\ker(\div)$, the element $\mathbf{e}_0 -2 \mathbf{e}_1+\mathbf{e}_2 \in \textup{Wt}(Q)$ generates $R$. The map $\Psi^*: \AA^3 \longrightarrow \AA^5 \times \ensuremath{\Bbbk}^\times$ sends $(x_1,x_2,x_3)$ to $(x_1,x_2,x_1,x_2,x_3,1)$. Write $\AA^{5} \times \ensuremath{\Bbbk}^\times \cong \textup{Spec}(\ensuremath{\Bbbk}[y_{a_1},\ldots, y_{a_5}, z_f^{\pm}])$. For $\theta = -3\mathbf{e}_0+2\mathbf{e}_1+ \mathbf{e}_2 \in \textup{Wt}(Q)$ the $\theta$-unstable locus is the vanishing locus of the ideal \[B_\theta:= \Big \langle y^uz^v \in \ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus \mathbb{Z}]\,\Big|\, \textup{inc}(u) + \iota(v) =\theta\Big\rangle.\] The set $(\textup{inc} \oplus\iota)^{-1}(\theta) \cap (\mathbb{N}^{Q_1} \oplus \mathbb{Z})$ contains the elements corresponding to the Laurent monomials $$y_{a_1}^4z_f, y_{a_2}^4z_f, y_{a_3}^4z_f^{-3}, y_{a_4}^4z_f^{-3}, y_{a_5}^2z_f^{-1}.$$ Therefore $\mathbb{V}(B_\theta) =\mathbb{V}(y_{a_1},\ldots,y_{a_5})$. Hence, \[\mathcal{M}_\theta(Q, \div) \cong [(\AA^{5} \times \ensuremath{\Bbbk}^\times \setminus (\{0\}\times \ensuremath{\Bbbk}^\times)) \,/\, (\ensuremath{\Bbbk}^\times)^2] \cong \mathbb{P}(1,1,1,1,2).\] Thus $\Psi^*$ induces the map \[\psi_\theta: \mathbb{P}(1,1,2) \xrightarrow{\hspace*{0.7cm}} \mathbb{P}(1,1,1,1,2)\] that sends $(x_1,x_2,x_3)$ to $(x_1,x_2,x_1,x_2,x_3).$
\end{ex}
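The generator of $R$ in Example \ref{p112ex} can be verified directly: applying the incidence map to the two generators of $\ker(\div)$ gives
\begin{equation*}
\textup{inc}(\mathbf{e}_{a_1} - \mathbf{e}_{a_3}) = (\mathbf{e}_1 - \mathbf{e}_0) - (\mathbf{e}_2 - \mathbf{e}_1) = -(\mathbf{e}_0 - 2\mathbf{e}_1 + \mathbf{e}_2) = \textup{inc}(\mathbf{e}_{a_2} - \mathbf{e}_{a_4}),
\end{equation*}
so both generators have the same image in $\textup{Wt}(Q)$ and $R$ is the rank-one subgroup generated by $\mathbf{e}_0 - 2\mathbf{e}_1 + \mathbf{e}_2$.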
\begin{ex}\label{AHex1}
Let $\mathcal{X} = \mathbb{P}(1,1,2)$, $\mathscr{L} = (\mathcal{O}_\mathcal{X}, \mathcal{O}(1), \mathcal{O}(3))$. It can be shown that for $\theta = -2\mathbf{e}_0 + \mathbf{e}_1 + \mathbf{e}_2$ the moduli stack $\mathcal{M}_\theta(Q, \div)$ is isomorphic to $\mathbb{P}(1,1,2,2,2,2)$ and \[\psi_\theta: \mathbb{P}(1,1,2) \longrightarrow \mathbb{P}(1,1,2,2,2,2)\]
is a morphism that sends $(x_1, x_2,x_3)$ to $(x_1,x_2,x_1^2,x_1x_2,x_2^2,x_3).$
\end{ex}
\begin{rmk}\label{AHex}
Example \ref{AHex1} recovers the Abramovich-Hassett \cite{AbramovichHassett} construction for $\mathcal{X} = \mathbb{P}(1,1,2)$ with polarizing line bundle $L= \mathcal{O}(1)$ and natural numbers $n=0$ and $m=2$. More generally, given a polarizing line bundle $L$ on $\mathcal{X}$, one may recover the Abramovich-Hassett construction when $n=0$ by applying our machinery to the collection \[\mathscr{L} = (\mathcal{O}_\mathcal{X}, L, L^{\otimes 3}, \ldots, L^{\otimes m(m+1)/2}),\] working, if necessary, with an `incomplete' quiver of sections. An incomplete quiver of sections is a quiver of sections in which not all torus-invariant sections contribute to paths in the quiver, analogous to an incomplete linear series.
\end{rmk}
Let $L$ be a base-point free line bundle on a variety $X$. The linear series construction gives a morphism $\varphi_{|L|}: X \rightarrow \mathbb{P}(\Gamma(X,L)^\vee)$, under which the pullback of the tautological line bundle on $\mathbb{P}(\Gamma(X,L)^\vee)$ is $L$. The following proposition gives an analogous result.
\begin{prop}
Let $\mathscr{L} = (\mathcal{O}_\mathcal{X}, L_1, \ldots, L_r)$ be base-point free. The pullbacks of the tautological bundles on $\mathcal{M}_\theta(Q, \div)$ via $\psi_\theta$ form the collection $\mathscr{L}$.
\end{prop}
\begin{proof}
The group $\textup{GL}(\alpha) \cong (\ensuremath{\Bbbk}^\times)^{r+1}$ acts on $\text{det}_{\theta_\Delta}\,W \cong \textup{Spec}(\ensuremath{\Bbbk}[y_{\theta_\Delta}])$ (cf. Remark \ref{conva}) by $(t_0, \ldots, t_r) \cdot y_{\theta_\Delta} = t_0 \cdot y_{\theta_\Delta}$. So the subgroup fixing nonzero $y_{\theta_\Delta}$ is given by $G_{\theta_\Delta}:= \{(t_0, \ldots, t_r) \in \textup{GL}(\alpha) \, |\, t_0 =1\}$. Restricting the projection map $\textup{GL}(\alpha) \twoheadrightarrow \textup{PGL}(\alpha)$ to $G_{\theta_\Delta}$ we get an isomorphism $\textup{PGL}(\alpha) \cong G_{\theta_\Delta}$.
The tautological line bundles of $\mathcal{M}_\theta(Q, \div) \cong [(\mathcal{R}(Q, \div)^{ss}_\theta \times \ensuremath{\Bbbk}^\times) / \textup{GL}(\alpha)]$ are given by the standard basis elements of $\mathbb{Z}^{Q_0} \cong \textup{GL}(\alpha)^\vee$. Under the isomorphism of stacks $[(\mathcal{R}(Q, \div)^{ss}_\theta \times \ensuremath{\Bbbk}^\times) / \textup{GL}(\alpha)] \cong [\mathcal{R}(Q, \div)^{ss}_\theta / G_{\theta_\Delta}]$, the pullbacks of the tautological line bundles are given by the images of the basis elements of $\mathbb{Z}^{Q_0}$ under the map dual to the inclusion $G_{\theta_\Delta} \hookrightarrow \textup{GL}(\alpha)$; under the isomorphism $\textup{PGL}(\alpha) \cong G_{\theta_\Delta}$ these are mapped to the elements $0, \mathbf{e}_1-\mathbf{e}_0, \ldots, \mathbf{e}_r-\mathbf{e}_0 \in \textup{Wt}(Q)$.
For $\eta \in \textup{Wt}(Q)$, the pullback to $\mathcal{X}$ of the line bundle on $[\mathcal{R}(Q, \div)^{ss}_\theta / \textup{PGL}(\alpha)]$ associated to $\eta$ is given by $\textup{pic}(\eta)$. Therefore the pullbacks of the tautological line bundles $0, \mathbf{e}_1-\mathbf{e}_0, \ldots, \mathbf{e}_r-\mathbf{e}_0 \in \textup{Wt}(Q)$ to $\mathcal{X}$ are the line bundles $\mathcal{O}_\mathcal{X}, L_1, \ldots, L_r$, as required.
\end{proof}
We now work towards a condition on a collection $\mathscr{L}$ that guarantees it is base-point free.
\begin{lemma}\label{tor}
Let $\mathcal{X}$ be a projective toric orbifold and $L_1, \ldots,L_n \in \textup{Pic}(\mathcal{X})$ be such that every $L_i$, for $1\leq i\leq n$, is base-point free. Given a section $s$ of $L:=L_1 \otimes \cdots \otimes L_n$, there exists $m \in \mathbb{N}$ such that $s^m$ is in the image of the multiplication map \[\mu_m: \Gamma(\mathcal{X},L_1)^{\otimes m} \otimes_\ensuremath{\Bbbk} \cdots \otimes_\ensuremath{\Bbbk} \Gamma(\mathcal{X},L_n)^{\otimes m} \rightarrow \Gamma(\mathcal{X}, L^{\otimes m}).\]
\end{lemma}
\begin{proof}
Let $\ensuremath{\Bbbk}[x_0,\ldots,x_n]$ be the Cox ring of $\mathcal{X}$ and let $\mu: \Gamma(\mathcal{X}, L_1) \otimes \cdots \otimes \Gamma(\mathcal{X},L_n) \rightarrow \Gamma(\mathcal{X},L)$ be the multiplication map. The line bundles $L_i$ are base-point free; they are therefore pullbacks of line bundles on the underlying coarse moduli space, which is a toric variety, and so correspond to polytopes $P_{L_i}$. The polytope $P_L$ corresponding to $L$ is given by $P_{L_1} + \cdots + P_{L_n}$ (see page 69 of Fulton \cite{Fulton}). While the lattice points of $P_L$ are not, in general, sums of lattice points of the polytopes $P_{L_i}$, the vertices of $P_L$ are given by sums of vertices of the $P_{L_i}$ (see page 11 of Sturmfels \cite{Sturmfels}). Therefore the sections corresponding to the vertices of $P_L$ lie in $\text{im}(\mu)$. Since the vanishing locus of the sections corresponding to the vertices of $P_L$ equals that of the sections of $L$, the vanishing locus of the sections in $\text{im}(\mu)$ equals that of the sections of $L$. Now let $s \in \Gamma(\mathcal{X}, L)$; then by Hilbert's Nullstellensatz there exists $m \in \mathbb{N}$ such that $s^m$ lies in the ideal generated by $\text{im}(\mu)$, as required.
\end{proof}
For a collection of line bundles $\mathscr{L} = (\mathcal{O}_\mathcal{X}, L_1, \ldots, L_r)$ on $\mathcal{X}$, define
\begin{equation*}
\mathscr{L}_{\text{bpf}}:= \{L_i^\vee \otimes L_j\, |\, L_i, L_j \in \mathscr{L} \text{ and } L_i^\vee \otimes L_j \text{ is base-point free}\}
\end{equation*}
and let $\textup{pic}_\mathbb{Q}: \textup{Wt}(Q)_\mathbb{Q} \rightarrow \textup{Pic}(\mathcal{X})_\mathbb{Q}$ be $\textup{pic} \otimes \text{id}$.
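To illustrate the definition, take $\mathcal{X} = \mathbb{P}(1,1,2)$ and $\mathscr{L} = (\mathcal{O}_\mathcal{X}, \mathcal{O}(1), \mathcal{O}(2))$ as in Example \ref{p112ex}. The bundle $\mathcal{O}(1)$ is not base-point free, since its sections $x_1, x_2$ both vanish at the point $(0:0:1)$, while $\mathcal{O}(2)$ is base-point free, since the sections $x_1^2, x_1x_2, x_2^2, x_3$ have no common zero. The differences $L_i^\vee \otimes L_j$ are the bundles $\mathcal{O}(k)$ for $-2 \leq k \leq 2$, of which only $\mathcal{O}_\mathcal{X}$ and $\mathcal{O}(2)$ are base-point free, so
\begin{equation*}
\mathscr{L}_{\text{bpf}} = \{\mathcal{O}_\mathcal{X}, \mathcal{O}(2)\} \quad \text{and} \quad \textup{rank}(\mathbb{Z}\mathscr{L}) = \textup{rank}(\mathbb{Z}\mathscr{L}_{\text{bpf}}) = 1.
\end{equation*}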
\begin{lemma}\label{lem}
If $\textup{rank}(\mathbb{Z} \mathscr{L}) = \textup{rank}(\mathbb{Z} \mathscr{L}_{\textup{bpf}})$ then $\ker(\textup{pic}_\mathbb{Q}) \subset R_\mathbb{Q}$.
\end{lemma}
\begin{proof}
In this proof we use additive notation for the binary operation on the Picard group to avoid confusion with $- \otimes \mathbb{Q}$.
Let $\omega = \sum_{i\in Q_0} \omega_i \otimes q_i \in \ker(\textup{pic}_\mathbb{Q})$, pick $n \in \mathbb{N}$ sufficiently large so that $n\, \omega = \sum_{i \in Q_0} n_i\, \omega_i \otimes 1$ for $n_i \in \mathbb{Z}$, and set $\lambda := \sum_{i \in Q_0} n_i \,\omega_i$. Since $R_\mathbb{Q}$ is a vector subspace of $\textup{Wt}(Q)_\mathbb{Q}$, we have $\omega \in R_\mathbb{Q}$ \iff $n \,\omega \in R_\mathbb{Q}$ for any $n \in \mathbb{Q} \setminus \{0\}$. Therefore it suffices to show $n \,\omega = \lambda \otimes 1 \in R_\mathbb{Q}$; that is, there exists an element $\tau \in \mathbb{Z}^{Q_1} \otimes \mathbb{Q}$ such that $\textup{inc}_\mathbb{Q}(\tau) =\lambda \otimes 1$ and $\div_\mathbb{Q}(\tau) = 0$.
Take the basis $E:= \{\mathbf{e}_1 - \mathbf{e}_0, \ldots, \mathbf{e}_r-\mathbf{e}_0\}$ of $\textup{Wt}(Q)$ and write $\lambda$ as a difference of positive and negative parts, that is, write $\lambda = \lambda_+ - \lambda_-$ for $\lambda_+, \lambda_- \in \mathbb{N} E$ without cancellation. Let $L_\pm = \textup{pic}(\lambda_\pm)$. The fact that $\lambda \otimes 1 \in \ker(\textup{pic}_\mathbb{Q})$ implies $L_+ \otimes 1 = L_- \otimes 1$ and the rank assumption gives us
\begin{equation}\label{q}
L_+ \otimes 1 = L_- \otimes 1 = \sum L_{b_i} \otimes q_i \quad \text{for } L_{b_i} \in \mathscr{L}_\text{bpf} \text{ and } q_i \in \mathbb{Q}.
\end{equation}
We may enlarge $n$ to ensure that each $q_i \in \mathbb{Z}$. Rearrange equation (\ref{q}) to get
\begin{equation}\label{+}
(L_+ \otimes 1) + \bigg(\sum_{q_i <0} -q_i L_{b_i} \otimes 1\bigg) = \sum_{q_i>0} q_i L_{b_i} \otimes 1
\end{equation}
\begin{equation}\label{-}
(L_- \otimes 1) + \bigg(\sum_{q_i <0} -q_i L_{b_i} \otimes 1\bigg) = \sum_{q_i>0} q_i L_{b_i} \otimes 1.
\end{equation}
Fix a section of each of the following line bundles: $L_+$, $L_-$ and $L_{b_i}$ for which $q_i<0$ (in turn fixing a section of $\sum_{q_i < 0} -q_i L_{b_i}$). Using equations (\ref{+}) and (\ref{-}), this fixes sections $s_\pm$ of $\sum_{q_i>0} q_i L_{b_i} + L_{t\pm}$ for some torsion line bundles $L_{t\pm}$. Without loss of generality we assume $L_{t\pm} =0$, otherwise multiply $n$ in the beginning of the proof by the orders of $L_{t\pm}$.
Since the incidence map is onto $\textup{Wt}(Q)$, there exist elements $\tau_1 \in \mathbb{Z}^{Q_1}$ and $\tau_2 \in \mathbb{Z}^{Q_1}$ such that $\textup{inc}(\tau_1) = \lambda_+$ and $\textup{inc}(\tau_2)= \lambda_-$. By Lemma \ref{tor} there exist $m_\pm \in \mathbb{N}$ such that $s_\pm^{m_\pm}$ are products of sections of the line bundles $L_{b_i}$ for which $q_i>0$. By definition of the quiver of sections $Q$, every section of a line bundle in $\mathscr{L}_\text{bpf}$ gives rise to a path in the quiver, so there exist $\tau_\pm \in \mathbb{Z}^{Q_1}$ such that $\div(\tau_\pm)= s_\pm^{m_\pm}$. Define
\begin{equation*}
\tau:= (\tau_1 \otimes 1) - \bigg(\tau_+ \otimes \frac{1}{m_+}\bigg) - (\tau_2 \otimes 1) + \bigg(\tau_- \otimes \frac{1}{m_-}\bigg).
\end{equation*}
We have that $\textup{inc}_\mathbb{Q}(\tau) = \lambda \otimes 1$ because $\textup{inc}_\mathbb{Q}(\tau_+ \otimes \frac{1}{m_+}) = \textup{inc}_\mathbb{Q}(\tau_- \otimes \frac{1}{m_-})$. We also have that $(\tau_1 \otimes 1) - (\tau_+ \otimes \frac{1}{m_+})$ and $(\tau_2 \otimes 1) - (\tau_- \otimes \frac{1}{m_-})$ map via div to the section of the line bundle $\sum_{q_i < 0} -q_i L_{b_i}$ fixed above, and hence $\div_\mathbb{Q}(\tau) = 0$, as required.
\end{proof}
The following example highlights that tensoring with $\mathbb{Q}$ in the statement of Lemma \ref{lem} is necessary.
\begin{ex}
Consider the $(\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z})$-grading of $\ensuremath{\Bbbk}[x_1,x_2,x_3,x_4]$ given by:
\begin{equation*}
\deg(x_1) = (1, 0, 0);\,\, \deg(x_2)=(1,1,0);\,\, \deg(x_3)= (1,0,1);\,\, \deg(x_4)=(1,1,1)
\end{equation*}
and let $(\ensuremath{\Bbbk}^\times \times\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}) \curvearrowright \AA^4$ be the corresponding action. Take \[\mathcal{X} = [(\AA^4\setminus \{0\}) / (\ensuremath{\Bbbk}^\times \times\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z})]\] and
\begin{equation*}
\mathscr{L} = (\mathcal{O}, \mathcal{O}(1,0,0), \mathcal{O}(1,1,0), \mathcal{O}(1,0,1), \mathcal{O}(1,1,1),\mathcal{O}(2,0,0)).
\end{equation*}
We have that $\mathcal{O}(2,0,0)$ is base-point free and so is an element of $\mathscr{L}_\text{bpf}$; therefore $\textup{rank}(\mathbb{Z} \mathscr{L}) = \textup{rank}(\mathbb{Z} \mathscr{L}_{\text{bpf}})$. However, it can be shown that $\ker(\textup{pic}) \nsubseteq R$.
\end{ex}
\begin{thm}\label{prop}
If $\textup{rank}(\mathbb{Z} \mathscr{L}) = \textup{rank}(\mathbb{Z} \mathscr{L}_\textup{bpf})$ then $\mathscr{L}$ is base-point free.
\end{thm}
\begin{proof}
Let $\theta \in \textup{Wt}(Q)$. The associated character $\chi_\theta \in \textup{PGL}(\alpha)^\vee$ gives a morphism of stacks $\mathcal{X} \rightarrow \mathcal{M}_\theta (Q, \div)$ \iff the inverse image of the $\theta$-unstable points of $\mathcal{R}(Q, \div)$ is contained in $\mathbb{V}(B_\mathcal{X})$. Replacing $\theta$ by a higher multiple if necessary, we may assume that the $\theta$-unstable locus in $\mathcal{R}(Q, \div)$ is precisely the vanishing locus of the monomial ideal
\begin{equation*}
B_\theta:= \Big\langle\, y^uz^v \in \ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R] \, \Big | \, \textup{inc}(u) + \iota(v) = \theta\,\Big\rangle
\end{equation*}
and its inverse image $(\Psi^*)^{-1} ( \mathbb{V}(B_\theta)) \subset \AA^{\Sigma(1)}$ is the vanishing locus of the monomial ideal
\begin{equation*}
\div B_\theta:= \Big\langle\, x^{\div((u,v))} \in \ensuremath{\Bbbk}[\mathbb{N}^{\Sigma(1)}] \, \Big | \, \textup{inc}(u) + \iota(v) = \theta\,\Big\rangle.
\end{equation*}
Now let $L:= \textup{pic}(\theta)$ and define
\begin{equation*}
B_L:= \Big\langle\, x^\nu \in \ensuremath{\Bbbk}[\mathbb{N}^{\Sigma(1)}] \, \Big| \, \deg(\nu)=L\,\Big\rangle.
\end{equation*}
Given $\chi_\theta$ for which $L= \textup{pic}(\theta) \in \mathscr{L}_{\textup{bpf}}$, pick $m$ big enough that the $m\theta$-unstable locus is cut out by $B_{m\theta}$. The line bundle $L$ lies in $\mathscr{L}_{\textup{bpf}}$, so for every $\nu \in \mathbb{N}^{\Sigma(1)}$ with $\deg(\nu)= L$ there exists an element $\rho_s \in \mathbb{Z}^{Q_1}$ such that $\div((\rho_s,0))=\nu$; hence $B_L = \div B_\theta$. The ideal $\div B_{m\theta}$ is contained in $B_{L^{\otimes m}}$ and the vanishing locus of $\div B_{m\theta}$ is contained in that of $\div B_\theta$. Therefore \[\mathbb{V}(B_{L^{\otimes m}}) \subset \mathbb{V}(\div B_{m \theta}) \subset \mathbb{V}(\div B_\theta) = \mathbb{V}(B_L).\] Since $\textup{pic}(m\theta)=L^{\otimes m}$ and $L$ is base-point free, Lemma \ref{tor} implies $\mathbb{V}(B_L) = \mathbb{V}(B_{L^{\otimes m}})$, and therefore $\mathbb{V}(\div B_{m \theta}) = \mathbb{V}(\div B_\theta)$. The line bundle $L$ is base-point free, so $\mathbb{V}(B_L) \subset \mathbb{V}(B_\mathcal{X})$ and hence $(\Psi^*)^{-1} (\mathbb{V}(B_{m\theta})) \subset \mathbb{V}(B_\mathcal{X})$. Hence the rational map $\psi_{\theta}: \mathcal{X} \dashrightarrow \mathcal{M}_\theta(Q, \div)$ is in fact a morphism of stacks.
It remains to show that we may pick a generic character for which $\psi_{\theta}: \mathcal{X} \rightarrow \mathcal{M}_\theta(Q, \div)$ is a morphism.
Let $S := \{\mathbf{e}_j - \mathbf{e}_i \in \textup{Wt}(Q) \,| \, \textup{pic}(\mathbf{e}_j - \mathbf{e}_i)\text{ is base-point free} \}$ and let $\sigma \subset \textup{Wt}(Q)_\mathbb{Q}$ be the cone generated by the elements of $S$ and $R$. Since the generators of the cone map under $\textup{pic}$ to base-point free line bundles, any $\theta$ in $\sigma$ gives a morphism $\psi_{\theta}: \mathcal{X} \rightarrow \mathcal{M}_\theta(Q, \div).$ We claim that $\sigma \subset \textup{Wt}(Q)_\mathbb{Q}$ is top-dimensional. The vector space $\textup{Wt}(Q)_\mathbb{Q}$ is isomorphic to $\text{ker}(\textup{pic}_\mathbb{Q}) \oplus \text{im}(\textup{pic}_\mathbb{Q})$. The image of $\textup{pic}_\mathbb{Q}$ is generated by $\mathscr{L}$, and the rank assumption then implies that the elements of $\mathscr{L}_\text{bpf}$ also generate $\text{im}(\textup{pic}_\mathbb{Q})$. We have $\textup{pic}(S) = \mathscr{L}_\text{bpf}$. In addition, Lemma \ref{lem} gives us that $\ker(\textup{pic}_\mathbb{Q}) \subset R_\mathbb{Q}$; therefore the elements of $\sigma$ span $\textup{Wt}(Q)_\mathbb{Q}$, which proves the claim. So one may pick a generic $\theta$ in the interior of $\sigma$ that gives a well-defined morphism; hence $\mathscr{L}$ is base-point free.
\end{proof}
The next example gives a base-point free collection $\mathscr{L}$ that does not satisfy the rank condition $\textup{rank}(\mathbb{Z} \mathscr{L}) = \textup{rank}(\mathbb{Z} \mathscr{L}_\text{bpf})$.
\begin{ex}
Let $\mathcal{X} = \mathbb{P}(1,2,3)$ and $\mathscr{L} = (\mathcal{O}_\mathcal{X}, \mathcal{O}(1), \mathcal{O}(2), \mathcal{O}(3))$. Note that $\mathscr{L}_{\text{bpf}}$ contains no nontrivial line bundle. It can be shown that given $\theta \in \mathbb{N}(\mathbf{e}_1-\mathbf{e}_0) \oplus \mathbb{N}(\mathbf{e}_2-\mathbf{e}_0) \oplus \mathbb{N}(\mathbf{e}_3-\mathbf{e}_0)$, $\mathcal{M}_\theta(Q, \div) \cong \mathbb{P}(1,1,1,2,2,3)$ and we have a morphism
\[\psi_\theta: \mathbb{P}(1,2,3) \xrightarrow{\hspace*{0.7cm}} \mathbb{P}(1,1,1,2,2,3)\] that sends $(x_1, x_2,x_3)$ to $(x_1,x_1,x_1,x_2,x_2,x_3)$.
\end{ex}
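A direct check shows why $\mathscr{L}_{\text{bpf}}$ contains no nontrivial bundle here. Identifying sections with weighted monomials, the only differences $L_i^\vee \otimes L_j$ with nonzero sections are $\mathcal{O}(1)$, $\mathcal{O}(2)$ and $\mathcal{O}(3)$, with
\begin{equation*}
\Gamma(\mathcal{O}(1)) = \langle x_1 \rangle, \qquad \Gamma(\mathcal{O}(2)) = \langle x_1^2,\, x_2 \rangle, \qquad \Gamma(\mathcal{O}(3)) = \langle x_1^3,\, x_1x_2,\, x_3 \rangle.
\end{equation*}
The sections of $\mathcal{O}(1)$ vanish along $x_1 = 0$, those of $\mathcal{O}(2)$ have the common zero $(0:0:1)$, and those of $\mathcal{O}(3)$ the common zero $(0:1:0)$, so none of these bundles is base-point free.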
Given a base-point free collection of line bundles $\mathscr{L}$, the next proposition explicitly describes the image of $\psi_\theta$. Let $I_\mathscr{L} \subset \ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R]$ be the ideal
\begin{equation}\label{IL}
I_\mathscr{L} := \Big\langle y^{u_1}z^{v_1} - y^{u_2}z^{v_2} \, | \, \div(u_1-u_2) = 0, \textup{inc}(u_1-u_2)+\iota(v_1-v_2)=0 \Big\rangle.
\end{equation}
\begin{prop}\label{image}
Let $\mathscr{L}$ be a base-point free collection of line bundles on $\mathcal{X}$ and $\theta \in \textup{Wt}(Q)$ be such that $\psi_\theta$ is a morphism. Then the image of $\psi_\theta$ is given by $[(\mathbb{V}(I_\mathscr{L}) \setminus \mathbb{V}(B_\theta)) / \textup{PGL}(\alpha)] \subset \mathcal{M}_\theta(Q,\div)$.
\end{prop}
\begin{proof}
The image of the map from $\AA^{\Sigma(1)}$ to $\AA^{Q_1} \times (\ensuremath{\Bbbk}^\times)^R$ induced by the semigroup homomorphism $\div \oplus 0: \mathbb{N}^{Q_1} \oplus R \rightarrow \mathbb{N}^{\Sigma(1)}$ is given by the vanishing locus of the toric ideal
\begin{equation}
I:= \Big\langle y^{u_1}z^{v_1} - y^{u_2}z^{v_2} \in \ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R] \, \Big| \, \div(u_1-u_2) = 0 \Big\rangle.
\end{equation}
For any element $y^{u_1}z^{v_1} - y^{u_2}z^{v_2} \in \ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R]$, its $\textup{Wt}(Q)$-grade is defined to be $\textup{inc}(u_1-u_2)+\iota(v_1-v_2)$. Therefore $I_\mathscr{L} \subset \ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R]$ is defined to give the $\textup{Wt}(Q)$-homogeneous part of $I$. We conclude that the image of $\psi_\theta$ is given by $$[(\mathbb{V}(I_\mathscr{L}) \setminus \mathbb{V}(B_\theta)) / \textup{PGL}(\alpha)].$$
\end{proof}
\begin{rmk}
Let $\theta_1, \theta_2 \in \textup{Wt}(Q)$ be generic and such that $\psi_{\theta_1}$ and $\psi_{\theta_2}$ are morphisms. Their images then avoid the unstable loci $\mathbb{V}(B_{\theta_1})$ and $\mathbb{V}(B_{\theta_2})$. Since the only difference between the two morphisms is the unstable locus of the target, the image of $\psi_{\theta_1}$ is isomorphic to that of $\psi_{\theta_2}$.
\end{rmk}
Now we investigate representability of the morphism $\psi_\theta$. Let $\pi: \mathcal{X} \rightarrow X$ be the map to the coarse moduli space $X$ of $\mathcal{X}$. We recall a useful definition from Nironi \cite{Nironi}.
\begin{defn}[Def. 2.2, \cite{Nironi}]
A locally free sheaf $\mathcal{V}$ on $\mathcal{X}$ is {\em $\pi$-ample} if for every geometric point of $\mathcal{X}$ the representation of the stabilizer group at that point is faithful.
\end{defn}
\begin{thm}\label{rep}
Let $\mathscr{L}= (\mathcal{O}_\mathcal{X}, L_1, \ldots, L_r)$ be a base-point free collection of line bundles. Then $\psi_{\theta}$ is representable \iff $\bigoplus_{j=1}^rL_j$ is $\pi$-ample.
\end{thm}
\begin{proof}
By Lemma 2.3.9 of \cite{AbramovichHassett}, $\psi_{\theta}$ is representable \iff the map $g: \textup{Aut}(x) \rightarrow \textup{Aut}(\psi_{\theta}(x))$ is injective for every geometric point $x \in \mathcal{X}$. The map $g$ fits into the following commutative diagram
\begin{equation}\label{com4}\begin{split}
\xymatrix@C =1.3cm @R=1.3cm{\text{Aut}(x) \ar[r]^-{g} \ar@{^{(}->}[d] & \text{Aut}(\psi_\theta(x)) \ar@{^{(}->}[d] \\ \textup{Hom}(\textup{Pic}(\mathcal{X}), \ensuremath{\Bbbk}^\times) \ar[r]^{\textup{pic}^\vee} & \textup{Hom}(\textup{Wt}(Q), \ensuremath{\Bbbk}^\times).}
\end{split}\end{equation}
Here $\textup{pic}^\vee$ denotes the map given by applying the functor $\textup{Hom}(-,\ensuremath{\Bbbk}^\times)$ to $\textup{pic}$. We claim that the representation of $\textup{Aut}(x)$ given by $\bigoplus_{j=1}^rL_j$ is the composite \begin{equation}\label{blah} \textup{Aut}(x) \hookrightarrow \textup{Hom}(\textup{Pic}(\mathcal{X}), \ensuremath{\Bbbk}^\times) \xrightarrow{ \textup{pic}^\vee } \textup{Hom}(\textup{Wt}(Q), \ensuremath{\Bbbk}^\times).\end{equation} Indeed, take the basis $\{\mathbf{e}_i - \mathbf{e}_0 \in \textup{Wt}(Q) \,|\, i= 1,\ldots, r\}$ of $\textup{Wt}(Q)$, giving an isomorphism $ \textup{Hom}(\textup{Wt}(Q), \ensuremath{\Bbbk}^\times) \cong (\ensuremath{\Bbbk}^\times)^r$. Evaluating at $\mathbf{e}_i-\mathbf{e}_0 \in \textup{Wt}(Q)$ gives a map $\textup{Hom}(\textup{Wt}(Q), \ensuremath{\Bbbk}^\times) \rightarrow \ensuremath{\Bbbk}^\times$. By definition of the $\textup{Hom}$-functor, the composite \[\textup{Hom}(\textup{Pic}(\mathcal{X}), \ensuremath{\Bbbk}^\times) \xrightarrow{ \textup{pic}^\vee } \textup{Hom}(\textup{Wt}(Q), \ensuremath{\Bbbk}^\times) \rightarrow \ensuremath{\Bbbk}^\times\] is given by evaluating at $\textup{pic}(\mathbf{e}_i - \mathbf{e}_0) = L_i$ and is therefore the representation induced by the line bundle $L_i$. This proves the claim. So the composite (\ref{blah}) is injective for every geometric point $x \in \mathcal{X}$ precisely when $\bigoplus_{j=1}^rL_j$ is $\pi$-ample. Commutativity of (\ref{com4}) gives that (\ref{blah}) is injective \iff $g$ is injective, as required.
\end{proof}
For $\mathcal{X}$ a toric orbifold, let $\pi: \mathcal{X} \rightarrow X$ be the map to the coarse moduli space. The group homomorphism given by the pullback functor, $\pi^* \colon \textup{Pic}(X) \rightarrow \textup{Pic}(\mathcal{X})$, identifies $\textup{Pic}(X)$ with a subgroup of $\textup{Pic}(\mathcal{X})$. We will abuse notation and use $\textup{Pic}(X) \subset \textup{Pic}(\mathcal{X})$ to denote this subgroup.
\begin{coro}
Let $\mathscr{L}$ be a base-point free collection of line bundles on $\mathcal{X}$. If $\mathscr{L}$ generates $\textup{Pic}(\mathcal{X}) / \textup{Pic}(X)$ then $\psi_\theta$ is representable.
\end{coro}
\begin{proof}
This follows from the fact that elements of $\textup{Pic}(X)$ give trivial representations of $\textup{Aut}(x)$ for every geometric point $x\in \mathcal{X}$.
\end{proof}
The following example shows that for a given base-point free collection $\mathscr{L}$, representability of $\psi_\theta$ is weaker than $\mathscr{L}$ generating $\textup{Pic}(\mathcal{X}) / \textup{Pic}(X)$.
\begin{ex}
Take $N= \mathbb{Z}$ and let $\Sigma$ be the fan associated to the toric variety $\mathbb{P}^1$ with rays $\rho_+ := \mathbb{Q}_{\geq 0}$ and $\rho_-:= \mathbb{Q}_{\leq 0}$. Let $\beta: \mathbb{Z}^{\Sigma(1)} \rightarrow \mathbb{Z}$ take $\mathbf{e}_{\rho_{\pm}}$ to $\pm2$. For $\mathbf{\Sigma} = (N, \Sigma, \beta)$ take $\mathcal{X} = \mathcal{X}_{\mathbf{\Sigma}}$. First note that $\textup{Pic}(\mathcal{X}) \cong \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ and let $\mathscr{L}= (\mathcal{O}_\mathcal{X}, \mathcal{O}(2,1), \mathcal{O}(4,0))$. For $\theta = -3\mathbf{e}_0 + 2\mathbf{e}_1+\mathbf{e}_2 \in \textup{Wt}(Q)$ the collection $\mathscr{L}$ gives a representable morphism $\psi_\theta: \mathcal{X} \rightarrow \mathbb{P}(1,1,2,2)$ taking $(x,y) \in \AA^{\Sigma(1)}$ to $(xy, xy, x^4, y^4)$.
\end{ex}
\begin{ex}
Every example in this section defines a representable morphism. For an example that does not, take $\mathcal{X} = \mathbb{P}(1,1,2)$ and $\mathscr{L}=(\mathcal{O}, \mathcal{O}(2))$.
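To see this, note that the stabilizer $\mu_2$ at the stacky point $(0:0:1) \in \mathbb{P}(1,1,2)$ acts on the fibre of $\mathcal{O}(k)$ with weight $k \bmod 2$, so its action on the fibre of $\mathcal{O}(2)$ is trivial:
\begin{equation*}
(-1) \cdot s = (-1)^2\, s = s \qquad \text{for } s \in \mathcal{O}(2)\big|_{(0:0:1)}.
\end{equation*}
This representation of the stabilizer is not faithful, so $\mathcal{O}(2)$ is not $\pi$-ample and Theorem \ref{rep} shows that $\psi_\theta$ is not representable.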
\end{ex}
We conclude the section by comparing our construction to that of Craw-Smith \cite{Craw-Smith} in the case where $\mathcal{X}=X$ is a toric variety. Let $\mathscr{L} = (\mathcal{O}_X, L_1, \ldots, L_r)$ be a collection of base-point free line bundles on a toric variety $X$ and let $\vartheta =-r \mathbf{e}_0+ \mathbf{e}_1+\cdots+\mathbf{e}_r$; then Craw and Smith use the commutative diagram (\ref{com1}) to produce a morphism $\varphi_{|\mathscr{L}|}: X \rightarrow \mathcal{M}_\vartheta(Q)$.
\begin{prop}
Let $\mathcal{X} = X$ be a toric variety and $\mathscr{L}$ be a collection of base-point free line bundles on $X$. Then the image of $\psi_{\vartheta}$ is isomorphic to that of $\varphi_{|\mathscr{L}|}$.
\end{prop}
\begin{proof}
We have the following commutative diagram:
\begin{equation}\begin{split}
\label{comdia}
\xymatrix @R=1cm @C=2.5cm {\mathbb{Z}^{Q_1} \ar[r]^-{\text{inc}}\ar[d]_-{(\text{id},0)} & \text{Wt}(Q) \ar[d]^{\text{id}} \\
\mathbb{Z}^{Q_1} \oplus R \ar[r]^{\text{inc} \oplus \iota} \ar[d]_{\text{div}\oplus 0} & \text{Wt}(Q) \ar[d]^{\textup{pic}} \\
\mathbb{Z}^{\Sigma(1)} \ar[r]^-{\text{deg}} & \text{Pic}(X).}\end{split}
\end{equation}
The maps of semigroups $\mathbb{N}^{Q_1} \xrightarrow{(\text{id}, 0)} \mathbb{N}^{Q_1} \oplus R \xrightarrow{\div\oplus 0} \mathbb{N}^{\Sigma(1)}$ give maps of semigroup algebras $\ensuremath{\Bbbk}[\mathbb{N}^{Q_1}] \rightarrow \ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R] \rightarrow \ensuremath{\Bbbk}[\mathbb{N}^{\Sigma(1)}]$. After applying the functor Spec these give morphisms
\begin{align*}
\AA^{\Sigma(1)} \xrightarrow{\hspace{0.2cm}\Psi^*\hspace{0.2cm}} \AA^{Q_1} \times (\ensuremath{\Bbbk}^\times)^{R} \xrightarrow{\hspace{0.2cm}\pi\hspace{0.2cm}} \AA^{Q_1}.
\end{align*}
The morphism $\Psi^*$ is induced by the semigroup map $\mathbb{N}^{Q_1} \oplus R \xrightarrow{\div\oplus 0} \mathbb{N}^{\Sigma(1)}$, so its image lies in the subvariety $\AA^{Q_1} \times \{(1,\ldots,1)\} \subset \AA^{Q_1} \times (\ensuremath{\Bbbk}^\times)^R$. On the other hand, the morphism $\pi$ is just the projection to the first factor; therefore the image of $\Psi^*$ is isomorphic to the image of $\pi \circ \Psi^*$.
The commutativity of diagram (\ref{comdia}) implies that $\Psi^*$ is equivariant with respect to the actions of the groups $\textup{Hom}(\textup{Pic}(X), \ensuremath{\Bbbk}^\times)$ and $\textup{Hom}(\textup{Wt}(Q), \ensuremath{\Bbbk}^\times)$ induced by $\deg$ and $\textup{inc}$, similarly $\pi$ is equivariant. So the maps $\Psi^*$ and $\pi$ give rise to rational maps:
\begin{equation*}
\xymatrix{X \ar@{-->}[r]^-{\psi_\vartheta} & \mathcal{M}_\vartheta(Q,\div) \ar@{-->}[r]^-\pi &\mathcal{M}_\vartheta(Q)}.
\end{equation*}
The composite $\pi \circ \psi_\vartheta$ is equal to the rational map $\varphi_{|\mathscr{L}|}$, and since $\mathscr{L}$ is a collection of base-point free line bundles, Corollary 4.2 of \cite{Craw-Smith} implies it is a morphism. By virtue of Theorem \ref{prop} we have that $\psi_\vartheta$ is a morphism. Now, by definition, $\psi_\vartheta$ and $\varphi_{|\mathscr{L}|}$ descend from $\Psi^*$ and $\pi \circ \Psi^*$ respectively, and since $\Psi^*$ and $\pi \circ \Psi^*$ have isomorphic images, the images of $\psi_\vartheta$ and $\varphi_{|\mathscr{L}|}$ are isomorphic.
\end{proof}
\section{Application to the McKay correspondence}\label{McKay}
In this section, we apply the construction of the previous section to toric quotient singularities. For $G \subset \textup{GL}(n, \ensuremath{\Bbbk})$ a finite abelian group and $(Q, \div)$ the labelled McKay quiver, we construct a closed immersion $[\AA^n / G] \hookrightarrow \mathcal{M}_\theta(Q, \div)$ for any $\theta \in \textup{Wt}(Q)$. In the case where $G \subset \textup{SL}(n, \ensuremath{\Bbbk})$ and $n \leq 3$, we alter the construction of $\mathcal{M}_\theta(Q, \div)$ to obtain a GIT problem in which one generic stability condition gives $[\AA^n / G]$ and another gives $G$-Hilb$(\AA^n)$. We assume $\alpha = (1,\ldots,1)$ throughout this section and drop it from the notation.
Take $n \in \mathbb{N}$ and let $G$ be a finite abelian subgroup of $\textup{GL}(n, \ensuremath{\Bbbk})$ with no quasireflections. We may assume that $G$ is contained in the subgroup $(\ensuremath{\Bbbk}^\times)^n$ of invertible diagonal matrices in $\textup{GL}(n, \ensuremath{\Bbbk})$. Line bundles on $[\AA^n/G]$ are given by $G$-equivariant line bundles on $\AA^n$, which in turn are determined by $G$-equivariant isomorphisms $\mathcal{O}_{\AA^n \times G} \rightarrow \mathcal{O}_{\AA^n \times G}$. From this it follows that the Picard group of $[\AA^n/G]$ is naturally isomorphic to the group of characters $G^\vee$. With these preparations, take \[\mathscr{L} = (\mathcal{O}_{\AA^n} \otimes \rho \,|\, \rho \in G^\vee).\] Then the labelled quiver of sections $(Q,\div)$ of $\mathscr{L}$ coincides with the McKay quiver; see the beginning of Section 4.1 of \cite{CQV}.
From now on we will use the isomorphism $\textup{Pic}([\AA^n / G]) \cong G^\vee$ tacitly. In much the same way as we have commutative diagrams (\ref{com1}) and (\ref{com2}) we have
\begin{equation} \label{com5} \begin{split}
\xymatrix @C =1.5cm @R=1.3cm { \mathbb{Z}^{Q_1} \ar@{->>}[r]^-{\text{inc}} \ar[d]_{\text{div}} & \text{Wt}(Q) \ar[d]^{\text{pic}} & \,\,R\,\, \ar@{^{(}->}[r]^{\iota} \ar[d]_{0} & \textup{Wt}(Q) \ar[d]^{\textup{pic}} \\
\mathbb{Z}^n \ar[r]^-{\text{deg}} & G^\vee & \mathbb{Z}^{Q_1} \ar[r]^-{\deg} & G^\vee.}
\end{split}\end{equation}
As in Section \ref{Quiver}, the semigroup morphism $\div \oplus 0: \mathbb{N}^{Q_1} \oplus R \rightarrow \mathbb{N}^n$ gives a morphism $\Psi^*:\AA^n \rightarrow \AA^{Q_1} \times (\ensuremath{\Bbbk}^\times)^R$. The commutativity of (\ref{com5}) gives that $\Psi^*$ is equivariant with respect to the actions of $G$ on $\AA^n$ and $\textup{Hom}(\textup{Wt}(Q), \ensuremath{\Bbbk}^\times)$ on $\AA^{Q_1} \times (\ensuremath{\Bbbk}^\times)^R$. Given a $\theta \in \textup{Wt}(Q)$ this gives a rational map \[\psi_\theta: [\AA^n/G] \dashrightarrow \mathcal{M}_\theta(Q, \div).\]
From now on we identify the lattice $\textup{Wt}(Q)$ with the lattice $\{\theta \in \mathbb{Z}^{Q_0} \, |\, \theta_0 =0\}$ whose basis is $\{\mathbf{e}_{\rho} \,|\, \rho \in G^\vee \setminus \{0\}\}$.
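For instance, take $G = \{\pm 1\} \subset \textup{SL}(2, \ensuremath{\Bbbk})$ acting on $\AA^2$, with $G^\vee = \{\rho_0, \rho_1\}$. The McKay quiver has two arrows $a_x, a_y: \rho_0 \rightarrow \rho_1$ and two arrows $b_x, b_y: \rho_1 \rightarrow \rho_0$, labelled by the coordinates $x$ and $y$. The kernel of $\div$ is generated by $\mathbf{e}_{a_x} - \mathbf{e}_{b_x}$ and $\mathbf{e}_{a_y} - \mathbf{e}_{b_y}$, and
\begin{equation*}
\textup{inc}(\mathbf{e}_{a_x} - \mathbf{e}_{b_x}) = \textup{inc}(\mathbf{e}_{a_y} - \mathbf{e}_{b_y}) = 2\,\mathbf{e}_{\rho_1},
\end{equation*}
so $R = 2\mathbb{Z}\,\mathbf{e}_{\rho_1}$. Since $\textup{pic}(\mathbf{e}_{\rho_1}) = \rho_1$ has order two in $G^\vee$, this subgroup is exactly $\ker(\textup{pic})$, in agreement with the equality $R = \ker(\textup{pic})$ established in the proof below.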
\begin{prop}\label{clsdimm}
For any $\chi_\theta \in \textup{PGL}(\alpha)^\vee$, \[\psi_\theta: [\AA^n/G] \dashrightarrow \mathcal{M}_\theta(Q, \div)\] is a closed immersion.
\end{prop}
\begin{proof}
We begin by studying the $\theta$-semistable points. By definition of the quiver of sections, a path $p$ from $\rho_0$ to $\rho$ corresponds to a section $s \in \textup{Hom}(\rho_0, \rho)$. Then there exists a path $p'$ from $\rho'$ to $\rho \otimes \rho'$ arising from the same section $s \in \textup{Hom}(\rho', \rho' \otimes \rho)$, that is, $\div(p) = \div(p')$. Note that $\textup{inc}(p) = \mathbf{e}_\rho $ and $\textup{inc}(p')= \mathbf{e}_{\rho' \otimes \rho} - \mathbf{e}_{\rho'}$. Since $\div(p') - \div(p) =0$ we have that $ -\mathbf{e}_\rho - \mathbf{e}_{\rho'} + \mathbf{e}_{\rho\otimes \rho'} \in R$. Since $\ker(\textup{pic})$ is generated by elements of the form $-\mathbf{e}_\rho- \mathbf{e}_{\rho'} + \mathbf{e}_{\rho \otimes \rho'}$, this shows $\ker(\textup{pic}) \subset R$. The commutativity of the diagrams (\ref{com5}) gives $R \subset \ker(\textup{pic})$ and therefore $R = \ker(\textup{pic})$.
The image of $\textup{pic}$ is a torsion $\mathbb{Z}$-module, so $R = \ker(\textup{pic})$ $\mathbb{Q}$-spans $\textup{Wt}(Q)$ and hence any basis $\mathfrak{B}$ of $R$ $\mathbb{Q}$-spans $\textup{Wt}(Q)$. Now let $W$ be a refined representation and let $\theta \in \textup{Wt}(Q)$. We claim that $W$ is $\theta$-semistable. Indeed, let $W_\bullet$ be a $\ensuremath{\Bbbk} Q$-module filtration satisfying the conditions in Definition \ref{stab} and write $\theta$ as a $\mathbb{Q}$-linear combination of elements of $\mathfrak{B}$. Then since $\b(W_\bullet)=0$ for every $\b \in \mathfrak{B}$, we have $\theta(W_\bullet)=0$. This in particular implies that the $\theta$-unstable locus is empty. It follows at once that the rational map $\psi_\theta$ is a morphism \[\psi_\theta: [\AA^n/ G] \longrightarrow \mathcal{M}_\theta(Q, \div)\] for any $\theta \in \textup{Wt}(Q)$. Let $I_\mathscr{L}$ be the ideal of $\ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R]$ defined in (\ref{IL}). Again, after noting that the $\theta$-unstable locus is empty, an argument similar to that of Proposition \ref{image} gives that the image of $\psi_\theta$ is $[\mathbb{V}(I_\mathscr{L}) / \textup{PGL}(\alpha)]$. It remains to show that $[\AA^n /G]$ is isomorphic to $[\mathbb{V}(I_\mathscr{L}) / \textup{PGL}(\alpha)]$. Consider $\ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R] / I_\mathscr{L}$ and multiply the generators $y^{u_1}z^{v_1} - y^{u_2}z^{v_2}$ of $I_\mathscr{L}$ by the units $z^{-v_1}$ to get an alternative set of generators $y^{u_1}-y^{u_2}z^{v_2-v_1}$. Then $I_\mathscr{L}$ is given by \[ \Big \langle y^{u_1}-y^{u_2}z^v \in \ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R] \,\Big|\, \div(u_1 - u_2) =0, \textup{inc}(u_1 -u_2)- \iota(v) =0 \Big\rangle. \]
Pick $a_1, \ldots, a_n \in Q_1$ such that $\div(a_i)$ is the $i$th basis element of $\mathbb{Z}^n$. Since every arrow in the McKay quiver is labelled by a basis element of $\mathbb{Z}^n$ the kernel of $\div$ is generated by differences $\mathbf{e}_{a_i'} -\mathbf{e}_{a_i}$ with $\div(a_i') = \div(a_i)$. By definition of $R$, for every generator $\mathbf{e}_{a_i'} -\mathbf{e}_{a_i}$ of $\ker(\div)$ there exists $v' \in R$ such that $\textup{inc}(\mathbf{e}_{a_i} -\mathbf{e}_{a_i'}) = v'$. Therefore $I_\mathscr{L}$ is generated by elements of the form $y_{a_i'} - y_{a_i}z^{v'}$. This implies that for every $a \in Q_1$ not in the list $a_1, \ldots, a_n$, the monomial $y_a$ is equivalent in the quotient $\ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R] / I_\mathscr{L}$ to a product of elements in $\ensuremath{\Bbbk}[y_{a_1}, \ldots, y_{a_n}] \otimes \ensuremath{\Bbbk}[R]$. Our choice of $a_1, \ldots, a_n$ implies that $\mathbb{Z} \mathbf{e}_{a_1} \oplus \cdots \oplus \mathbb{Z}\mathbf{e}_{a_n}$ maps injectively into $\mathbb{Z}^n$ and so $\ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus R] / I_\mathscr{L} \cong \ensuremath{\Bbbk}[y_{a_1}, \ldots, y_{a_n}] \otimes \ensuremath{\Bbbk}[R]$. Therefore $[\mathbb{V}(I_\mathscr{L}) / \textup{PGL}(\alpha)] \cong [\AA^n \times (\ensuremath{\Bbbk}^\times)^R / \textup{PGL}(\alpha)]$.
We note that we may always fix the $(\ensuremath{\Bbbk}^\times)^R$ component to 1. Now, the characters of the subgroup of $\textup{PGL}(\alpha)$ fixing the $(\ensuremath{\Bbbk}^\times)^R$ component are given by $\textup{Wt}(Q) / R$. The map $\textup{pic}$ is surjective onto $G^\vee$ and its kernel is given by $R$, so that $\textup{Wt}(Q) / R \cong G^\vee$. Hence the aforementioned subgroup is naturally isomorphic to $G$. Consequently, we have stack isomorphisms \[ [\mathbb{V}(I_\mathscr{L}) / \textup{PGL}(\alpha)] \cong [\AA^n \times (\ensuremath{\Bbbk}^\times)^R / \textup{PGL}(\alpha)] \cong [\AA^n \times \{1\} /G] \cong [\AA^n /G].\] This completes the proof.
\end{proof}
We recall the definitions of $G$-Hilb$(\AA^n)$ and Hilb$^G(\AA^n)$. Take $n\in \mathbb{N}$ and $G$ as above. Following Reid \cite{GHilb}, define $G$-Hilb$(\AA^n)$ to be the fine moduli space of $G$-invariant subschemes of $\AA^n$ whose coordinate ring is isomorphic to $\ensuremath{\Bbbk}[G]$ as a $\ensuremath{\Bbbk}[G]$-module. Although the scheme $G$-Hilb($\AA^n$) is reducible in general, it has a distinguished irreducible component Hilb$^G(\AA^n)$ birational to $\AA^n/G$, see Ito-Nakamura \cite{ItoNakamura}. When $n \leq 3$ and $G \subset \textup{SL}(n,\ensuremath{\Bbbk})$, the scheme $G$-Hilb$(\AA^n)$ is smooth and isomorphic to Hilb$^G(\AA^n)$; furthermore, the map $\tau: G\text{-Hilb}(\AA^n) \rightarrow \AA^n/G$, sending a subscheme to the orbit supporting it, is a crepant resolution of $\AA^n/G$, see \cite{ItoNakamura} and Nakamura \cite{Nakamura}.
Proposition \ref{clsdimm} allows us to recover the stack $[\AA^n / G]$ from the McKay quiver. Craw-Maclagan-Thomas \cite{CMT1} show that the distinguished component Hilb$^G(\AA^n)$ of $G$-Hilb$(\AA^n)$ can also be recovered from the labelled McKay quiver. Indeed, by Proposition 5.2 of \cite{CMT1}, Hilb$^G(\AA^n)$ is the subvariety of \[\mathcal{M}_\vartheta(Q) = \big(\AA^{Q_1}\big) ^{ss} _\vartheta\, /\, \textup{PGL}(\alpha)\] cut out by the ideal \[I_Q:= \Big\langle y^{u_1} - y^{u_2} \, \Big|\, \div(u_1 - u_2) = 0, \,\textup{inc}(u_1 -u_2) = 0 \Big\rangle.\] We proceed to relate our construction to that of Hilb$^G(\AA^n)$ by defining a GIT problem in which $[\AA^n /G]$ and Hilb$^G(\AA^n)$ are separated by a finite series of wall-crossings. We begin by carefully picking a basis $\mathfrak{B}$ of $R$.
Write $G^\vee$ as a direct sum of cyclic groups $\bigoplus_{j=1}^mH_j$ and take $\rho_j$ a generator of $H_j$. Define
\begin{equation*}
\overline{\mathfrak{B}}:= \Big\{-\mathbf{e}_{\rho_j} -\mathbf{e}_{\rho'\otimes \rho_j^{-1}} + \mathbf{e}_{\rho'} \in \textup{Wt}(Q) \,|\, \forall\,1\leq j\leq m, \,\, \rho' \in G^\vee \setminus \{\rho_1, \ldots, \rho_m\}\Big\}.
\end{equation*}
\begin{lemma}
The set $\overline{\mathfrak{B}}$ generates the lattice $R \subset \textup{Wt}(Q)$.
\end{lemma}
\begin{proof}
For notational purposes, we write the group operation on $G^\vee$ additively in this proof. First we show that
\begin{equation*}
\widetilde{\mathfrak{B}}:= \Big\{-\mathbf{e}_{\rho_j} -\mathbf{e}_{\rho' - \rho_j} + \mathbf{e}_{\rho'} \in \textup{Wt}(Q) \,|\,\, \forall\,1\leq j\leq m,\,\,\, \rho' \in G^\vee\Big\}
\end{equation*}
generates $R$. Let $\rho = \sum_{j} \gamma_j \rho_j$ and without loss of generality assume $\gamma_j >0$. Since
\begin{equation*}
\sum_{1 \leq \kappa_j \leq \gamma_j} -\mathbf{e}_{\rho_j} - \mathbf{e}_{\rho' - (\kappa_j -1)\rho_j -\rho_j} +\mathbf{e}_{\rho' - (\kappa_j -1)\rho_j} = -\gamma_j \mathbf{e}_{\rho_j} - \mathbf{e}_{(\rho' - \gamma_j \rho_j)} +\mathbf{e}_{\rho'}
\end{equation*}
we deduce that $(\sum_j -\gamma_j \mathbf{e}_{\rho_j}) + \mathbf{e}_{\rho'}$ is an element of $\mathbb{N} \widetilde{\mathfrak{B}}$. Moreover, for $\rho' = \sum_{j} \gamma_j \rho_j$ and $\rho'' = \sum_{j} \gamma_j' \rho_j$, we have that $-\mathbf{e}_{\rho'} - \mathbf{e}_{\rho''} + \mathbf{e}_{\rho' +\rho''}$ is equal to
\begin{equation*}
\Big(\Big(\sum_j \gamma_j \mathbf{e}_{\rho_j}\Big) - \mathbf{e}_{\rho'}\Big) +\Big(\Big(\sum_j \gamma_j' \mathbf{e}_{\rho_j}\Big) - \mathbf{e}_{\rho''}\Big) + \Big(\Big(\sum_j -(\gamma_j+\gamma_j') \mathbf{e}_{\rho_j}\Big) + \mathbf{e}_{\rho' + \rho''}\Big)
\end{equation*}
showing that $-\mathbf{e}_{\rho'} - \mathbf{e}_{\rho''} + \mathbf{e}_{\rho'+\rho''} \in \mathbb{Z} \widetilde{\mathfrak{B}}$. Therefore $\widetilde{\mathfrak{B}}$ generates $R = \ker(\textup{pic})$.
Take $-\mathbf{e}_{\rho_j} - \mathbf{e}_{\rho_{j'} - \rho_j} + \mathbf{e}_{\rho_{j'}}$ for $1 \leq j, j' \leq m$ and let $|\rho_j|$ denote the order of $\rho_j$. Then we have
\begin{equation*}
\sum_{0\leq \kappa \leq |\rho_j|- 2} -\mathbf{e}_{\rho_j} - \mathbf{e}_{\kappa \rho_j + \rho_{j'}} +\mathbf{e}_{(\kappa+1)\rho_j + \rho_{j'}} = (1- |\rho_j|) \mathbf{e}_{\rho_j} - \mathbf{e}_{\rho_{j'}} + \mathbf{e}_{\rho_{j'} - \rho_j}
\end{equation*}
which along with the fact that $|\rho_j|\, \mathbf{e}_{\rho_j} \in \mathbb{Z} \overline{\mathfrak{B}}$ shows that $\overline{\mathfrak{B}}$ generates $R$.
\end{proof}
\begin{rmk}\label{P}
Note that for any $j$ as above, the $\mathbf{e}_{\rho_j}$ coefficient of elements of $\mathbb{N} \overline{\mathfrak{B}}$ is non-positive. This will prove crucial in the proof of the theorem below.
\end{rmk}
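As an illustration of $\overline{\mathfrak{B}}$ and of Remark \ref{P} (a worked example we add here), consider the smallest cyclic case with $m = 1$.

```latex
% Worked example (our addition): G^v = <rho_1> of order 3, so m = 1 and
% G^v \ {rho_1} = {rho_0, 2 rho_1}; recall e_{rho_0} = 0 in Wt(Q).
For $G^\vee \cong \mathbb{Z}/3$ the set $\overline{\mathfrak{B}}$ consists of
\[
  -\mathbf{e}_{\rho_1} - \mathbf{e}_{2\rho_1} \quad (\rho' = \rho_0),
  \qquad
  -2\,\mathbf{e}_{\rho_1} + \mathbf{e}_{2\rho_1} \quad (\rho' = 2\rho_1).
\]
These two vectors span a sublattice of index
$\lvert (-1)(1) - (-1)(-2) \rvert = 3 = |G^\vee|$ in
$\textup{Wt}(Q) \cong \mathbb{Z}^2$, which is exactly $R = \ker(\textup{pic})$,
and their $\mathbf{e}_{\rho_1}$ coefficients are negative, as in Remark \ref{P}.
```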
Fix a basis $\mathfrak{B} \subset \overline{\mathfrak{B}}$ of $R$. We have the following diagram
\begin{equation}\label{com6}\begin{split}
\xymatrix@C=1.5cm@R=1cm{\mathbb{N}^{Q_1} \oplus \mathbb{N} \mathfrak{B} \ar[r]^-{\textup{inc} \oplus \iota} \ar[d]_{\div} & \textup{Wt}(Q) \\ \mathbb{N}^n}
\end{split}\end{equation}
The semigroup homomorphism $\textup{inc} \oplus \iota$ induces a $\textup{Wt}(Q)$-grading on $\ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus \mathbb{N} \mathfrak{B}]$ and hence an action of $\textup{PGL}(\alpha)$ on $\AA^{Q_1} \times \AA^\mathfrak{B}$. Define $I_{\mathscr{L}, \mathfrak{B}}$ to be the $\textup{Wt}(Q)$-homogeneous ideal of $\ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus \mathbb{N} \mathfrak{B}]$ given by
\begin{equation*}
I_{\mathscr{L}, \mathfrak{B}} := \Big\langle y^{u_1}z^{v_1} - y^{u_2}z^{v_2} \, \Big| \, \div(u_1-u_2) = 0, \textup{inc}(u_1-u_2)+\iota(v_1-v_2)=0 \Big\rangle.
\end{equation*}
For $\theta \in \textup{Wt}(Q)$ we consider the stack quotient $[\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}}) ^{ss} _\theta / \textup{PGL}(\alpha)]$.
\begin{thm}
There exist generic stability conditions $\chi_{\theta_1}, \chi_{\theta_2} \in \textup{PGL}(\alpha)^\vee$ such that \[[\AA^n/G] \cong [\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}})^{ss}_{\theta_1} / \textup{PGL}(\alpha)] \,\text{ and }\, \textup{Hilb}^G(\AA^n) \cong [\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}})^{ss}_{\theta_2} / \textup{PGL}(\alpha)].\]
\end{thm}
\begin{proof}
Because of Proposition \ref{clsdimm}, to establish the first isomorphism it suffices to find $\theta_1$ for which $\mathbb{V}(I_{\mathscr{L} , \mathfrak{B}})^{ss}_{\theta_1} = \mathbb{V}(I_{\mathscr{L}})_{\theta_1}^{ss}$. The cone $\mathbb{Q}_{\geq 0} \mathfrak{B} \subset \textup{Wt}(Q)_\mathbb{Q}$ is top dimensional, so we may pick a generic $\theta_1 \in \mathbb{N} \mathfrak{B}$. Replacing $\theta_1$ by a higher multiple if necessary, we may assume that the $\chi_{\theta_1}$-unstable locus in $\AA^{Q_1} \times \AA^\mathfrak{B}$ is given by the vanishing locus of the ideal
\begin{equation*}
B_{\theta_1}:= \Big\langle\, y^uz^v \in \ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus \mathbb{N} \mathfrak{B}] \, \Big| \, \textup{inc}(u) + \iota(v) = \theta_1 \,\Big\rangle.
\end{equation*}
We claim that for any monomial $y^uz^v \in B_{\theta_1}$ there exists $u' \in \mathbb{N}^{Q_1}$ such that $y^{u'}z^{\theta_1} - y^uz^v \in I_{\mathscr{L}, \mathfrak{B}}$. Since $\theta_1 \in \mathbb{N} \mathfrak{B}$, we have $\textup{inc}(u) = \theta_1 - \iota(v) \in R$. The commutative diagrams (\ref{com5}) imply that $\div(u)$ is a torus-invariant section of the trivial line bundle. Take $u' \in \mathbb{N}^{Q_1} $ to be a cycle in the quiver of sections corresponding to $\div (u)$ and note that $\textup{inc}(u')=0$. We then have that $\div(u-u') = 0$ and $\textup{inc}(u-u') + \iota(v-\theta_1) =0$, as claimed. From this it follows that any point for which $z^{\theta_1} =0$ is unstable. Now, since $\theta_1$ lies in the interior of $\mathbb{Q}_{\geq 0}\mathfrak{B}$, any point for which $z_\b = 0$ for some $\b \in \mathfrak{B}$ is unstable. That is, $\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}})^{ss}_{\theta_1} := \mathbb{V}(I_{\mathscr{L}, \mathfrak{B}}) \setminus \mathbb{V}(B_{\theta_1}) \subset \AA^{Q_1} \times (\ensuremath{\Bbbk}^\times)^\mathfrak{B}$ and hence \[\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}})^{ss}_{\theta_1} = \mathbb{V}(I_\mathscr{L})^{ss}_{\theta_1}.\]
Now take a generic $\theta_2 \in \textup{Wt}(Q)$ in the interior of the top-dimensional cone $\Theta:= \mathbb{Q}_{\geq 0}\{\mathbf{e}_\rho \,|\, \rho \in G^\vee\setminus\{0\}\}$. Once again, taking a higher multiple if necessary we may assume that the $\chi_{\theta_2}$-unstable locus in $\AA^{Q_1} \times \AA^\mathfrak{B}$ is given by the vanishing locus of the ideal
\begin{equation*}
B_{\theta_2}:= \Big\langle\, y^uz^v \in \ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus \mathbb{N} \mathfrak{B}] \, \Big| \, \textup{inc}(u) + \iota(v) = \theta_2 \,\Big\rangle.
\end{equation*}
If we set
\begin{equation*}
B_{\theta_2}':= \langle\, y^u \in \ensuremath{\Bbbk}[\mathbb{N}^{Q_1}] \, | \, \textup{inc}(u) = \theta_2\,\rangle
\end{equation*}
then the vanishing locus of $B_{\theta_2}'$ is equal to that of $B_\vartheta$, since $\theta_2$ and $\vartheta$ lie in the same chamber. Let $y^uz^v \in B_{\theta_2}$ and take $y^{u'}$ to be the unique monomial in $\ensuremath{\Bbbk}[\mathbb{N}^{Q_1}] / I_Q$ for which $\div(u) =\div(u')$ and $\textup{inc}(u') = \theta_2$. We next show that
\begin{equation}\label{iso}
\bigg(\frac{\ensuremath{\Bbbk}[\mathbb{N}^{Q_1} \oplus \mathbb{N} \mathfrak{B}]}{I_{\mathscr{L}, \mathfrak{B}}}\bigg)_{y^uz^v} \cong \bigg(\frac{\ensuremath{\Bbbk}[\mathbb{N}^{Q_1}]}{I_Q}\bigg)_{{y^{u'}}}.
\end{equation}
Remark \ref{P} gives that $v$ has a non-positive coefficient for each basis element $\mathbf{e}_{\rho_j}$. Since $\textup{inc}(u) = \theta_2 - \iota(v)$ and $\theta_2$ is in the interior of $\Theta$, $\textup{inc}(u)$ has a strictly positive coefficient for each basis element $\mathbf{e}_{\rho_j}$. Write $u = u_1 + \cdots + u_m + u''$ for $u_j, u'' \in \mathbb{N}^{Q_1}$ satisfying $\textup{inc}(u_j) = \mathbf{e}_{\rho_j}$. We have that $y^{u} = y^{u_1}\cdots y^{u_m} y^{u''}$ and therefore in the localization above the monomials $y^{u_j}$ are invertible. Take an arbitrary element $\b:= -\mathbf{e}_{\rho_j} - \mathbf{e}_{\rho'} + \mathbf{e}_{\rho_j \otimes \rho'}$ of $\mathfrak{B}$. Then there exists a path $p_j$ from $\rho'$ to $\rho' \otimes \rho_j$ with label $\div(u_j) \in \textup{Hom}(\rho', \rho'\otimes\rho_j)$. Let $u_j' \in \mathbb{N}^{Q_1}$ be the element determined by $p_j$. We then have $\div(u_j - u_j')=0$ and $\textup{inc}(u_j - u_j') + \iota(\b) =0$, which implies that $z_\b y^{u_j}- y^{u_j'} \in I_{\mathscr{L},\mathfrak{B}}$. Since $y^{u_j}$ is invertible in the localization we may replace $z_\b y^{u_j}- y^{u_j'}$ by $z_\b- y^{u_j'-u_j}$, thereby eliminating $z_\b$ for every $\b\in \mathfrak{B}$. Next, consider a general generator $y^{u_1}z^{v_1} - y^{u_2}z^{v_2}$ of $I_{\mathscr{L},\mathfrak{B}}$, eliminate the monomials $z^{v_1}, z^{v_2}$ and multiply by the invertible elements $y^{u_j}$ to get a polynomial. Since every $z_\b$ is replaced by some $y^{u_j'-u_j}$ for which $\div(u_j - u_j')=0$, the resulting polynomial will be in $I_Q$. After noting that the construction above enables us to write $y^uz^v$ as a monomial in $\ensuremath{\Bbbk}[\mathbb{N}^{Q_1}]$, we have the isomorphism (\ref{iso}).
The isomorphism (\ref{iso}) allows us to conclude that \[\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}})^{ss}_{\theta_2} \cong \mathbb{V}(I_Q)^{ss}_{\theta_2}.\] Consequently $[\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}})_{\theta_2}^{ss} / \textup{PGL}(\alpha)] \cong [\mathbb{V}(I_Q)_{\theta_2}^{ss} / \textup{PGL}(\alpha)]$. Now $[\mathbb{V}(I_Q)_{\theta_2}^{ss} / \textup{PGL}(\alpha)]$ is a substack of the representable stack $\mathcal{M}_{\theta_2}(Q)$ cut out by the homogeneous ideal $I_Q$ and is therefore the variety $\mathbb{V}(I_Q)_{\theta_2}^{ss} / \textup{PGL}(\alpha)$. Proposition 5.2 of \cite{CMT1} then gives Hilb$^G(\AA^n) \cong [\mathbb{V}(I_Q)_{\theta_2}^{ss} / \textup{PGL}(\alpha)]$, completing the proof. \end{proof}
\begin{coro}
For $n \leq 3$ and $G\subset \textup{SL}(n, \ensuremath{\Bbbk})$ finite abelian, there exist generic stability conditions $\chi_{\theta_1}, \chi_{\theta_2} \in \textup{PGL}(\alpha)^\vee$ such that \[ [\AA^n/G] \cong [\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}})^{ss}_{\theta_1} / \textup{PGL}(\alpha)] \,\text{ and }\, G\textup{-Hilb}(\AA^n) \cong [\mathbb{V}(I_{\mathscr{L}, \mathfrak{B}})^{ss}_{\theta_2} / \textup{PGL}(\alpha)].\]
\end{coro}
\begin{proof}
The case $n=1$ is trivial, as $G \subset \textup{SL}(1,\ensuremath{\Bbbk})$ is then the trivial group. For $n=2$, the work of Ito-Nakamura \cite{ItoNakamura} gives $G\textup{-Hilb}(\AA^n) \cong \textup{Hilb}^G(\AA^n)$; for $n=3$, Nakamura \cite{Nakamura} shows the same isomorphism. The corollary now follows from the theorem above.
\end{proof}
\begin{rmk}
The careful choice of basis $\mathfrak{B}$ was made with $G\textup{-Hilb}(\AA^n)$ in mind. Moving between $[\AA^n/G]$ and another crepant resolution via wall-crossings may require a different choice of basis.
\end{rmk}
\addcontentsline {toc} {section} {Bibliography}
\bibliographystyle {plain}
\section{Introduction}
\begin{figure*}
\centering
\begin{minipage}{0.5\hsize}
\centering
\includegraphics[width = 0.65\hsize]{./Tripple_schematics_column.pdf}
\end{minipage}%
\begin{minipage}{0.5\hsize}
\includegraphics[width = \hsize]{./COrad_v2v1_radius_obs.pdf}
\end{minipage}
\caption{\label{fig:data}
Disk structures as proposed by \citet{Banzatti2018} (left) and
CO vibrational ratio $v2/v1$ and emitting radius from near-infrared CO spectra of Herbig stars used for comparison to the models in this work (right), see details in Section \ref{sec:data}). The three groups from \citet{Banzatti2018}, are shown in different colors: group II in magenta, high-NIR group I in green, low-NIR group I in blue. Disks where $F_\mathrm{NIR}$ is not available are marked in black. We mark two regions that will be used for comparison with models: disks that have CO inside 5 AU, which show a low vibrational ratio (group II and high-NIR group I, \textit{bottom left corner}), and disks that have CO only outside of 5 AU, which show high vibrational ratios (low-NIR group I, \textit{top right corner}) .
}
\end{figure*}
\begin{table*}[]
\centering
\caption{Median values for Herbig groups from \citet{Banzatti2018}, plus notes from imaging studies}
\centering
\begin{tabular}{l c c c c l l}
\hline
\hline
Herbig group & $F_\mathrm{NIR}$ & $v2/v1$ & $R_\mathrm{CO}$ (AU) & log(Fe/H) & Dust structures from imaging & Refs \\
\hline
Group II& 0.16 & 0.12 & 3 & -4.4 & no large inner cavities, some substructures $< 5$~AU & (1) \\
Group I (high-NIR) & 0.27 & 0.05 & 5 & -4.6 & 40--100~AU cavities, (misaligned) inner disks $< 10$~AU & (2) \\
Group I (low-NIR) & 0.08 & 0.27 & 18 & -5.2 & 15--50~AU cavities, no significant inner disks & (3) \\
\hline
\end{tabular}
\tablefoot{References: (1) e.g. \citet{Menu2015, Huang2018, Isella2018} ; (2) e.g. \citet{Pinilla2018, Stolker2016, Avenhaus2017, Tang2017, diFolco2009, Boehler2018, Benisty2017}; (3) \citet{vanderPlas2017,Fedele2017, White2018, Pinilla2018} }
\label{tab:Av_vals}
\end{table*}
Planetary systems are thought to be built within proto-planetary disks of gas and dust around young stars. How these disks transition from the initial gas-rich remnants of star formation to the solid-body dominated debris disks and planetary systems is still an open question. While most disks seem to go through a quick dispersal process \citep{Cieza2007, Currie2009}, there is a subset of disks that goes through a prolonged period where dust and gas are (partially) depleted in the inner disk while a large reservoir of mass remains at larger radii: the so-called transition disks \citep{Maaskant2013, Garufi2017, vanderMarel2018}. It is thought that in these disks the cavity is formed either through giant planet formation or X-ray/UV photoevaporation \citep[for reviews, see][]{Owen2016, Ercolano2017}. These processes cause different distributions of gas and dust in the inner disk.
To distinguish these scenarios, we have to study the inner disk. CO rovibrational lines are good tracers of the inner disk, because they are strong lines that originate only from warm, dense gas. The upper level energies for the first vibrationally excited level are around 3000 K, so these lines will only be excited in environments with high temperatures ($\gtrsim$ 300 K). Furthermore, CO is a very stable molecule and is thus expected to survive even in regions where there is little dust to shield the gas from UV photons \citep[e.g.][]{Bruderer2013}. Finally, the transitions are strong, so only small columns of excited CO are needed to produce bright lines, making CO columns as low as $10^{16}$ cm$^{-2}$ easily detectable if the excitation conditions are right.
Observing the fundamental CO lines around 4.7 $\mu$m allows for the simultaneous measurement of rovibrational line fluxes from the first ($v1$) and second ($v2$) excited states. The $v2/v1$ line flux ratio carries information on the excitation conditions of the gas. The high resolving powers of spectrographs such as Keck-NIRSPEC \citep{McLean1998}, VLT-CRIRES \citep{Kaufl2004}, and IRTF-CSHELL and now iSHELL \citep{Rayner2016} make velocity-resolved observations of CO line profiles possible \citep[e.g.][]{Najita2003, Blake2004, Thi2005, Brittain2007, Pontoppidan2008, SalykCO2011, Brown2013, Banzatti2015, Brittain2018}. As the emission is expected to come from a Keplerian disk, the width of the line, once coupled with disk inclination and stellar mass, can be used to estimate the CO emitting radii for different gas velocities, thereby obtaining information on the spatial distribution of CO gas in inner disks.
For disks around T-Tauri stars, CO rovibrational lines have been used to probe molecular gas within inner disk dust cavities \citep{Pontoppidan2008, Pontoppidan2011}, and to propose an inside-out clearing scenario for gas and dust \citep{Banzatti2015}. In their sample of T-Tauri disks, CO shows a lower vibrational temperature with decreasing linewidth (and hence increasing emitting radius). This fits well with the expectation that the gas temperature decreases with distance from the star. Furthermore, the variation in CO emitting radii can be explained by varying inner molecular gas disk radii, indicating that the inner edge of the molecular disk is not set by the sublimation of dust but is carved by another process such as planet formation or photoevaporation.
\begin{figure*}
\centering
\includegraphics[width = \hsize]{./COlines_obs.pdf}
\caption{Selection of stacked CO line profiles from observed spectra (Section \ref{sec:data}). The $v1 $ lines are shown in black, $v2 $ lines in blue. Gaps visible in some line profiles are due to telluric absorption. Disk inclinations are between 20 and 50 deg for all these objects. In HD 31648, the $R_\mathrm{CO}$ is taken for the broad component defined by the line wings.
}
\label{fig:obslineprofiles}
\end{figure*}
Herbig disks, instead, behave very differently and show an inverse relation between linewidth and vibrational excitation \citep{Banzatti2015}. This was attributed to UV-fluorescence becoming more important for stars with higher continuum UV fluxes when the thermal excitation becomes less efficient at larger disk radii \citep{Brittain2007, Brown2013, Banzatti2015}. However, full thermo-chemical models suggest that UV fluorescence is not the dominant excitation mechanism for the $v=1$ and $v=2$ levels of CO \citep{Thi2013, HeinBertelsen2014}. This is supported by the observed rovibrational excitation diagrams that only show a strong difference between rotational and vibrational temperatures for levels $v =3$ and higher, indicating that only for the higher vibrational levels, UV pumping is important \citep{vanderPlas2015}.
Furthermore, based on their SEDs, Herbigs are divided into two groups. Disks with strong far-infrared emission relative to their mid-infrared emission are classified as group I or "flared" disks. Disks with strong mid-infrared emission in comparison to their far-infrared emission are classified as group II or "settled" disks \citep{Meeus2001}.
An evolutionary sequence between these groups was inferred, with group I disks being the precursors of group II disks \citep{Dullemond2004}. However, it is the group I disks that are often observed to have a large ($> 10$ AU) cavity in either scattered light or sub-mm imaging, implying that they are unlikely to be the precursors of the group II disks, which are generally less massive and do not show any cavity \citep{Maaskant2013, Garufi2017}.
Inner disk ($\lesssim 5$ AU) tracers of both gas and dust add interesting pieces to this puzzle. Table~\ref{tab:Av_vals} reports median values for three inner disk tracers (near-infrared excess, $F_\mathrm{NIR}$; CO vibrational ratio, $v2/v1$; CO emitting radius, $R_{\ce{CO}}$) as well as the median stellar surface Fe abundance \citep{Kama2015} used to identify three groups of Herbig disks in \cite{Banzatti2018}; here we also add notes on the presence of disk structures from imaging studies.
Group II disks exhibit a narrow range of intermediate values for the near-infrared excess, $F_\mathrm{NIR} = L_{1.2-4.5 \mu\mathrm{m}}/L_\star$ \citep{Garufi2017}. All of the group II disks show broad CO 4.7 $\mu$m rovibrational lines indicating that CO is emitting from small radii. These tracers together indicate that both molecular gas and dust are present and abundant at small distances from the star \citep[$\lesssim 5$ AU;][]{vanderPlas2015, Banzatti2018}.
The group I disks, instead, remarkably split into two very distinct groups \citep{Banzatti2018}. Some of them have very high near-infrared excesses (high-NIR), higher than the group II sources, and only moderately broad CO rovibrational lines. The rest of the group I disks have low near-infrared excesses (low-NIR) and the narrowest CO rovibrational lines. Group I disks thus, while all having dust cavities imaged at larger radii, seem to show a marked dichotomy in their inner disks between those that have abundant gas and dust in the inner few AU, and those that do not, without a gradient of situations in between. While these groups do not show any segregation in terms of mass accretion rates \citep{Banzatti2018}, stellar elemental abundances show that the low-NIR group I disks are depleted in Fe compared to all of the other sources (Table~\ref{tab:Av_vals}). This suggests that the stars in low-NIR group I disks accrete gas that is depleted in dust relative to the ISM gas-to-dust ratio of 100:1, presumably because dust is trapped at larger radii in the disk \citep{Kama2015}.
In this work we focus on CO rovibrational emission, and in particular the observed trends between the radius and excitation of CO emission and the NIR excess (Fig.~\ref{fig:data}), to expand our growing understanding of inner disk structure and evolution in Herbigs.
Specifically, we aim to explain the dichotomy between low vibrational ratios coming from gas within $< 5$ AU and the high vibrational ratios coming from larger radii. The observational dataset from \cite{Banzatti2017,Banzatti2018}, briefly presented in Sec.~\ref{sec:data}, is used for comparison and validation of the models. In Sec.~\ref{sec:Slabmodelling} the vibrational excitation of CO will be studied through simple slab models. Full thermo-chemical models using Dust And LInes \citep[DALI,][]{Bruderer2012,Bruderer2013} for different physical structures will be presented and analysed in Sec.~\ref{sec:DALI}. The implications will be discussed in Sec.~\ref{sec:discussion} and our conclusion will be summarized in Sec.~\ref{sec:conclusion}.
\section{Data overview}
\label{sec:data}
The CO emission lines adopted in this work for comparison to the models are taken from the compilation included in \citet{Banzatti2017,Banzatti2018}, based on spectra originally presented in \citet{Pontoppidan2011b,Brown2012,Banzatti2015a, vanderPlas2015,Banzatti2018}. The data consist of high resolution ($R \sim $75,000--100,000) spectra of CO rovibrational emission around 4.7 $\mu$m for 20 Herbig Ae stars and 3 F stars, taken with the CRIRES instrument on the Very Large Telescope (VLT) of the European Southern Observatory \citep[ESO; ][]{Kaufl2004} and iSHELL on the NASA Infrared Telescope Facility \citep[IRTF; ][]{Rayner2016}. The spectrum of HD 142666 is taken from a previous survey \citep{Blake2004,SalykCO2011} done with Keck-NIRSPEC \citep[R $\sim$ 25,000;][]{McLean1998}. The two parameters we focus on in this work, the CO vibrational ratio, $v2/v1$, and a characteristic emitting radius, are measured from stacked line profiles as explained in \citet{Banzatti2015}.
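As a rule of thumb (a sketch we add here, using only the resolving powers quoted above), a spectral resolving power $R$ corresponds to a velocity resolution $\Delta v = c/R$, so $R \sim$ 75,000--100,000 resolves structure at the 3--4 km s$^{-1}$ level, well below typical Keplerian linewidths in these disks.

```python
C_KMS = 299_792.458  # speed of light in km/s

def velocity_resolution_kms(resolving_power):
    """Velocity resolution dv = c/R for a spectrograph of resolving power R."""
    return C_KMS / resolving_power

# R ~ 75,000-100,000 (CRIRES/iSHELL) gives dv ~ 3-4 km/s,
# while R ~ 25,000 (NIRSPEC) gives dv ~ 12 km/s.
```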
In brief, the vibrational ratio $v2/v1$ is measured from the line flux ratio between lines around the $v2\,P(4)$ line ($v' = 2, J' = 3 \rightarrow v'' = 1, J'' = 4$) and around the $v1\,P(10)$ line ($v' = 1, J' = 9 \rightarrow v'' = 0, J'' = 10$). The choice of these specific lines is driven by the spectral coverage of the observations, and by the need to use unblended lines \citep[see details in][]{Banzatti2015}. The vibrational flux ratio between the $v2\,P(4)$ and $v1\,P(10)$ line is used as a proxy for the vibrational ratio between the $v2$ and $v1$ levels.
The vibrational ratio depends on the lines used in the comparison: even lines of matching $J$ level show a variation of up to 50\% in the vibrational ratio. The ratio of the $v2\,P(4)$ and $v1\,P(10)$ lines lies within the range of values obtained by using matching $J$ levels and is thus a good proxy for the vibrational ratio \citep[see Appendix A in][]{Banzatti2015}.
A characteristic emitting radius is estimated from the half width at half maximum (HWHM) of the line profile, assuming Keplerian rotation and using literature values for the disk inclination and the stellar mass. As better measurements of disk inclinations have become available over time for some disks, estimates of CO radii have changed accordingly; the error-bars in Fig.~\ref{fig:data} reflect the uncertainties in the disk inclinations.
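This HWHM-to-radius conversion is simple enough to sketch. The snippet below (our illustration, with hypothetical input values rather than numbers from the survey) assumes circular Keplerian orbits, so the HWHM traces the projected orbital velocity $v_K \sin i$.

```python
import math

G_GRAV = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
AU = 1.495978707e11  # astronomical unit [m]

def keplerian_radius_au(hwhm_kms, incl_deg, mstar_msun):
    """Characteristic CO emitting radius (in AU) from a line's HWHM,
    assuming the HWHM traces the projected Keplerian speed v_K * sin(i)."""
    v_kep = hwhm_kms * 1e3 / math.sin(math.radians(incl_deg))  # m/s
    return G_GRAV * mstar_msun * M_SUN / v_kep**2 / AU

# Illustrative numbers (hypothetical): a HWHM of 14 km/s around a
# 2 M_sun star seen at i = 30 deg places the emitting gas near 2 AU.
```

Note that the inferred radius scales as $\sin^2 i$, so uncertainties in the disk inclination propagate quadratically into the radius error bars.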
Figure~\ref{fig:data} shows these parameters and their trend as discussed above, namely that the vibrational ratio is larger when CO emission comes from larger disk radii.
Figure~\ref{fig:obslineprofiles} shows a selection of CO line profiles, chosen to span the full range of CO emitting radii and vibrational ratios for the three groups of disks in Fig.~\ref{fig:data}. The broader lines (i.e. smaller CO emitting radii) have low $v2/v1$, and can have flat or double-peaked line profiles. HD 31648 is the only exception that clearly shows two velocity components, as commonly found for T-Tauri stars \citep{Bast2011,Banzatti2015}.
This combination of broad wings and strong peak indicates that the emitting area of the CO rovibrational lines spans a large range of radii (see more in Section \ref{sec:DALI}). In this analysis, for HD 31648 we take the CO radius as indicated by the broad component, defined by the broad line wings. The narrower lines (i.e. larger CO emitting radii) often show a single peak profile indicative of a more extended emitting area, but in some cases they clearly show a double peak profile, indicative of an emitting region that is confined to a narrower ring.
In addition, we use CO line fluxes as measured in \citet{Banzatti2017}, which we scale to a common distance of 150 pc for comparison with the model. The near-infrared excess is measured between 1.2 and 4.5 $\mu$m \citep{Garufi2017,Banzatti2018}. Table \ref{tab:Av_vals} shows the median values for these parameters for the three groups of Herbig disks, as reported in \citet{Banzatti2018}. Spectra and individual measurements can be found in the original references reported in this section.
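The scaling to a common distance is just the inverse-square law; a minimal sketch (function name ours):

```python
def flux_at_150pc(observed_flux, distance_pc):
    """Rescale an observed flux to the common distance of 150 pc,
    using the inverse-square dependence of flux on distance:
    F_150 = F_obs * (d / 150 pc)**2."""
    return observed_flux * (distance_pc / 150.0) ** 2

# A source twice as far away as 150 pc has its flux multiplied by 4
# to compensate for the inverse-square dilution.
```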
\section{Slab modelling of the vibrational ratio}
\label{sec:Slabmodelling}
To be able to infer the physical conditions of the CO emitting regions we have to look at the CO line formation process. To compute the strength of a line one needs to know both the chemical and physical state of the gas. Physics and chemistry are strongly intertwined: the temperature, density and radiation field influence the chemistry, while the chemical abundances influence the heating and cooling of the gas, changing the temperature. Various thermo-chemical models have been developed to solve this coupled problem \citep[e.g.][]{Woitke2009, Bruderer2012, Bruderer2013, Du2014}. However, before diving into the full problem, we first study the line formation of CO in a more controlled setting.
The line formation of CO will be studied using two different types of slab models. First the behaviour of a slab of CO with fixed excitation will be studied analytically; this will reveal the effects of the optical depth and excitation on the vibrational ratios. Afterwards RADEX models \citep{RADEX} will be used to study non-LTE effects. These RADEX models will be used to constrain the physical conditions of the CO rovibrational emitting regions.
\subsection{Analytical line ratios}
\subsubsection{Methods}
\begin{figure}
\includegraphics[width=\hsize]{./Analytic_flux_ratio_no_dust.pdf}
\caption{\label{fig:Analytic_ratio} CO vibrational ratio, $v2/v1$, for different temperatures and columns from the analytic model. The green line shows the $\tau = 1$ conditions for the $v1$ line. The white line shows $v2/v1 = 0.2$, which is the value that differentiates low and high vibrational ratio sources. }
\end{figure}
In the case of a mono-thermal slab of CO that is in LTE with an excitation temperature $T_{\mathrm{ex}}$, the continuum subtracted peak surface brightness can be computed by:
\begin{equation}
\label{eq:CO_I}
I(u, l) = \left(B\left(T_{\mathrm{ex}}, \nu(u, l)\right) - B\left(T_\mathrm{back}, \nu(u, l)\right)\right) \times \left( 1 - e^{-\tau(u, l)}\right),
\end{equation}
where
\begin{equation}
\label{eq:CO_tau}
\tau(u, l) = \frac{g(u)}{g(l)} \frac{c^3 A(u,l)}{8 \pi \nu^3(u,l)}
\frac{N_\mathrm{CO} \left(1 - \exp\left[-\frac{h\nu(u,l)}{kT_{\mathrm{ex}}}\right]\right)}{\sqrt{2\pi}\sigma_v Z(T_{\mathrm{ex}})} g(l)\exp\left[-\frac{E(l)}{kT_{\mathrm{ex}}}\right].
\end{equation}
In Eq.~\ref{eq:CO_I}, $I(u, l)$ is the continuum-subtracted line peak intensity, $B(T, \nu)$ is the Planck function at temperature $T$ and frequency $\nu$, $T_\mathrm{back}$ is the radiation temperature of the background and $\tau(u, l)$ is the line peak opacity. In Eq.~\ref{eq:CO_tau}, $g(n)$ is the degeneracy of rovibrational level $n$, $A(u,l)$ is the Einstein $A$ coefficient of the transition between rovibrational levels $u$ and $l$, $N_\mathrm{CO}$ is the CO column, $\sigma_v$ is the thermal linewidth, $Z(T_\mathrm{ex})$ is the rovibrational partition function of CO, $E(n)$ is the energy of state $n$ above the rovibrational ground state, and $c$, $h$ and $k$ are the speed of light, the Planck constant and the Boltzmann constant, as usual.
The $v1\,P(10)$ and the $v2\,P(4)$ lines are used as proxies for the stacked $v1$ and $v2$ lines from the observations (Section \ref{sec:data}). Under the current assumptions, the peak line intensity depends only on the excitation temperature, the total column and the background radiation temperature. This last parameter drops out when looking at line ratios (assuming that $T_\mathrm{back}$ does not vary significantly over the frequency range).
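Eqs.~(\ref{eq:CO_I}) and (\ref{eq:CO_tau}) are easy to evaluate numerically. In the sketch below the line parameters (frequencies, Einstein $A$ values, level energies) are rough illustrative numbers rather than HITRAN values, the partition function is a rigid-rotor approximation, and the peak opacity is written with the dimensionally consistent prefactor $c^3A/(8\pi\nu^3)$ for a velocity-space linewidth; the output should therefore be trusted only qualitatively:

```python
import numpy as np

h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # CGS constants

def planck(T, nu):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def tau_peak(N_CO, T_ex, nu, A_ul, g_u, g_l, E_l, sigma_v, Z):
    """Peak line opacity for LTE level populations and a Gaussian
    profile of velocity dispersion sigma_v (cf. Eq. for tau)."""
    return (g_u / g_l) * c**3 * A_ul / (8 * np.pi * nu**3) \
        * N_CO * -np.expm1(-h * nu / (k * T_ex)) \
        / (np.sqrt(2 * np.pi) * sigma_v * Z(T_ex)) \
        * g_l * np.exp(-E_l / (k * T_ex))

def intensity(N_CO, T_ex, T_back, line, sigma_v, Z):
    """Continuum-subtracted peak intensity (cf. Eq. for I)."""
    nu, A_ul, g_u, g_l, E_l = line
    tau = tau_peak(N_CO, T_ex, nu, A_ul, g_u, g_l, E_l, sigma_v, Z)
    return (planck(T_ex, nu) - planck(T_back, nu)) * -np.expm1(-tau)

hc = h * c  # converts wavenumbers (cm^-1) to erg
# (nu [Hz], A_ul [s^-1], g_u, g_l, E_l [erg]); A values are placeholders
v1_P10 = (2150.9 * c, 20.0, 19, 21, 212.3 * hc)           # v=1-0 P(10)
v2_P4 = (2131.6 * c, 40.0, 7, 9, (2143.3 + 38.6) * hc)    # v=2-1 P(4)

Z = lambda T: k * T / (hc * 1.93)   # rigid-rotor partition fn, B = 1.93 cm^-1
sigma_v = 5.5e4                     # cm/s, roughly thermal at 1000 K

def v2v1(N_CO, T_ex=1000., T_back=100.):
    return (intensity(N_CO, T_ex, T_back, v2_P4, sigma_v, Z)
            / intensity(N_CO, T_ex, T_back, v1_P10, sigma_v, Z))
```

For $T_\mathrm{ex} = 1000$ K this gives $v2/v1 \ll 0.2$ in the optically thin regime and a ratio approaching unity once both lines saturate, reproducing the qualitative behaviour of Fig.~\ref{fig:Analytic_ratio}.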
\subsubsection{Results}
The peak line intensity ratio of the $v1$ and $v2$ lines is shown in Fig.~\ref{fig:Analytic_ratio} for a range of temperatures and CO columns. At columns smaller than $10^{17}$ cm$^{-2}$ both lines are optically thin and, as such, there is no trend of the line ratio with column. At high column densities the line ratio converges to $g_2(u)A_2(u,l)/g_1(u)A_1(u,l)$, which is $\sim 1.01$ for the lines under consideration, for almost any temperature ($T > 200$ K).
The green line shows where the $ v1 $ line becomes optically thick. An increase in CO column to the right of this line no longer elicits a linear response in the line flux. As the $ v2$ flux still increases linearly with the column, this increases the line ratio. If the column gets big enough the $ v2$ line also gets optically thick and the line ratio tends to unity.
The speed at which this happens with increasing column strongly depends on the population of the upper level of the $v2$ transition. At temperatures above 2000 K, the column at which the $v1$ line becomes optically thick increases, due to the lower fractional population in the lower rotational levels of the $v1$ line.
Observed line ratios coming from within 5 AU are generally below 0.2, so the conditions that match these observations can be read off directly from Fig.~\ref{fig:Analytic_ratio}. If both lines are optically thin, an excitation temperature below $\sim${}$2000$ K yields low line ratios. At columns above $10^{17}$ cm$^{-2}$, a line ratio of 0.2 requires progressively lower temperatures with increasing column, down to $\sim${}$190$ K at a CO column of $10^{22}$ cm$^{-2}$.
All disks with $R_\mathrm{CO} > 5$ AU have line ratios between 0.2 and 0.5. For these high line ratios, Fig.~\ref{fig:Analytic_ratio} shows that, as expected, a high line ratio can be due to either a high temperature or a large column. For low columns, temperatures between $2000$ and $6000$ K are needed to produce the right line ratios. Above a column of $10^{17}$ cm$^{-2}$, progressively lower temperatures lead to the observed line flux ratios.
\begin{figure*}
\begin{minipage}{0.5\hsize}
\includegraphics[width=\hsize]{{./Radex_W_0.01_no_dust_v1}.pdf}
\end{minipage}%
\begin{minipage}{0.5\hsize}
\includegraphics[width=\hsize]{{./Radex_W_0.3_no_dust_v1}.pdf}
\end{minipage}
\caption{\label{fig:RADEX_001_ratio}\label{fig:RADEX_03_ratio} CO vibrational ratio, $v2/v1$, for different temperatures and columns from the RADEX models using a 750 K radiation field with a dilution factor $W$ of 0.01 (\textit{left}) and 0.3 (\textit{right}). The area between the blue and white lines shows where both the vibrational ratio and the $v1$ flux of the low vibrational ratio sources are reproduced. For the $v1$ flux an emitting area with a radius of 5 AU is assumed. If a smaller emitting area is assumed the blue lines would shift in the direction of the blue arrows. High vibrational ratio sources can either be explained by gas with a high column ($N \gtrsim 10^{18}$ cm$^{-2}$) or a high temperature ($T > 2000$ K).}
\end{figure*}
\begin{figure}
\includegraphics[width = \hsize]{./Observations_extra.pdf}
\caption{\label{fig:ratvsrad_obs} CO vibrational ratio versus the inferred radius of emission for observational data (grey points) and analytic (\textit{left}) and RADEX (\textit{right}) model results (coloured lines). For the RADEX models, two different assumptions for the radiation field are shown: the weakly irradiated ($W = 0.01$, \textit{dashed}) and strongly irradiated ($W = 0.3$, \textit{dotted}) cases. For the highest column only the LTE model and the weakly irradiated RADEX model are shown. The density for the RADEX models is $10^{12}$ cm$^{-3}$. }
\end{figure}
\subsubsection{Discussion}
The vibrational ratio from these lines can be expressed as a vibrational excitation temperature. However, this only represents the excitation of the gas if both lines are optically thin.
CO rovibrational lines get optically thick at CO columns of $10^{16}$--$10^{18}$ cm$^{-2}$ (Fig.~\ref{fig:Analytic_ratio}). Assuming a dust mass opacity of $2 \times 10^3$ cm$^{2}$ g$^{-1}$ \citep[][small grains]{Bruderer2015} and a gas-to-dust ratio of 100 gives a \ce{H} column of $\sim 10^{22}$ cm$^{-2}$ before the dust gets optically thick at 4.7 $\mu$m. For a canonical CO abundance of $10^{-4}$, this allows CO columns up to $10^{18}$ cm$^{-2}$ to be detected, and thus optically thick lines to be generated above the dust photosphere. Higher CO columns are possible if grains have grown beyond 1 $\mu$m or if the dust is depleted with respect to the gas in the emitting layer. Analysis of \ce{^{13}CO} lines suggests that columns of $10^{19}$ cm$^{-2}$ are not uncommon for the sources with a high vibrational ratio \citep{vanderPlas2015}. For these columns the high $v2/v1$ ratios can be explained with a temperature between 300 and 500 K.
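The column estimate above amounts to a few lines of arithmetic, using the opacity and abundance values quoted in the text (the factor 1.4 for helium is our assumption):

```python
m_H = 1.67e-24     # g, hydrogen atom mass
kappa_dust = 2e3   # cm^2 per gram of dust at 4.7 micron (small grains)
g2d = 100          # gas-to-dust mass ratio
x_CO = 1e-4        # canonical CO abundance

Sigma_dust = 1.0 / kappa_dust           # dust column (g cm^-2) at tau_dust = 1
N_H = g2d * Sigma_dust / (1.4 * m_H)    # H column above the dust photosphere
N_CO = x_CO * N_H                       # corresponding maximum CO column
```

This gives $N_\mathrm{H}$ of order $10^{22}$ cm$^{-2}$ and hence CO columns of order $10^{18}$ cm$^{-2}$, as quoted in the text.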
Equation~(\ref{eq:CO_I}) does not include the absorption of the line by dust grains, nor the emission of hot dust in the region where $\tau_\mathrm{dust} < 1$. These contributions would lower the line flux, by absorbing line photons and increasing the continuum level. These effects are more pronounced at low line opacities, and so affect the $v2$ line more strongly than the $v1$ line. As such, the line ratios are overestimated.
In the case of an added dust contribution, the dust opacity sets a maximum to the CO column that can be seen, while the dust emission sets the background temperature. The CO column under consideration is thus only the CO column above the dust photosphere ($\tau_{\mathrm{dust,} 4.7 \mu\mathrm{m}} \lesssim 1$).
One critical assumption of this analysis is that both lines are formed in the same region of the disk, either under one set of conditions or under a range of conditions, each of which gives rise to a line ratio similar to the region average. If the $v1$ and $v2$ lines were coming from different regions of the disk, this would show up as significantly different line shapes. As the line ratios are determined on the broad component that is seen in both the $v2$ and $v1$ lines, we can be confident that this assumption holds.
\subsection{RADEX models}
Previously we have assumed a fixed excitation, parametrised by an excitation temperature. Here the excitation processes are included explicitly by calculating the level populations from the balance between collisions, spontaneous emission and vibrational pumping. If no continuum opacity is assumed, the parameter space is four dimensional: the CO gas column, the kinetic temperature of the gas, the collision partner density (here taken to be \ce{H2} \citep{Yang2010}\footnote{Results for H as collision partner are similar, but as the collisional rate coefficients are about an order of magnitude larger than those for \ce{H2}, the results for similar densities are shifted more towards LTE \citep{Song2015,Walker2015}.}) and the radiation field. For the geometry, a slab that is illuminated from one side is assumed. This configuration is representative of the surface layers of proto-planetary disks, where the infrared continuum photons interacting with the gas are not along the same line of sight as the observations. The pumping radiation intensity is parametrised as a 750 K black body diluted by a factor ($W$) between 0.0001 and 0.3, representative of a region at $\sim$100 times the radius of the 4.7 $\mu$m continuum emitting region and of a region very close to the continuum emitting region, respectively.
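The interplay between collisions and radiative pumping that RADEX solves level by level can be illustrated with a toy two-level (ground and $v=1$) system; the Einstein $A$ and collisional rate coefficient below are order-of-magnitude placeholders, not the actual CO values:

```python
import numpy as np

h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # CGS constants

def planck(T, nu):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def two_level_ratio(T_kin, n_coll, W, nu=2143. * 2.998e10,
                    A=30., q_ul=3e-15, T_rad=750.):
    """Steady-state upper/lower population ratio of a two-level system
    pumped by a diluted blackbody J_nu = W * B_nu(T_rad).
    A [s^-1] and q_ul [cm^3 s^-1] are illustrative placeholders."""
    J = W * planck(T_rad, nu)
    B_ul = A * c**2 / (2 * h * nu**3)              # stimulated emission
    B_lu = B_ul                                    # equal statistical weights
    C_ul = q_ul * n_coll                           # downward collision rate
    C_lu = C_ul * np.exp(-h * nu / (k * T_kin))    # detailed balance
    return (B_lu * J + C_lu) / (A + B_ul * J + C_ul)
```

At high densities the ratio approaches the Boltzmann value at $T_\mathrm{kin}$; far below the critical density ($\sim A/q_{ul}$, here $10^{16}$ cm$^{-3}$) the excitation is instead set by the radiation field, so a larger $W$ directly boosts the vibrational excitation, as in the $W = 0.3$ panel of Fig.~\ref{fig:RADEX_001_ratio}.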
Figure~\ref{fig:RADEX_001_ratio} shows the line ratio of the $v2$ and $v1$ lines from the RADEX models for different densities. This shows that for columns below $10^{17}$ cm$^{-2}$ line ratios below 0.1 are the norm. Only at high density ($> 10^{14}$ cm$^{-3}$) and high temperature ($>1300$ K) is the ratio boosted above 0.1; this is similar to the results from the analytic analysis. For $W = 0.01$, the low density results show the expected subthermal excitation of CO, leading to lower line ratios compared to the LTE case. In contrast, in the $W = 0.3$ case there is a stronger contribution from excitation by infrared photons. This contribution is strongest at low temperatures, where collisional excitation rates for the vibrational transitions are lowest.
\subsection{LTE vs non-LTE}
The line ratios for CO columns of $10^{17}$, $10^{18}$ and $10^{19}$ cm$^{-2}$ are plotted in Fig.~\ref{fig:ratvsrad_obs} for both the analytical and RADEX models. For these curves the temperature is assumed to scale as:
\begin{equation}
\label{eq:temp}
T(R_\mathrm{CO}) = 1500 \left(\frac{0.4\ \mathrm{AU}}{R_\mathrm{CO}}\right)^{1/2},
\end{equation}
which is approximately the dust equilibrium temperature around a star of $30\,L_\odot$. It is clear that, for these conditions, the LTE models can only explain the vibrational ratios at small radii, and then only at columns $< 10^{18}$ cm$^{-2}$.
The non-LTE RADEX models do somewhat better. With a \ce{H2} density of $10^{12}$ cm$^{-3}$, the RADEX models can reproduce the relatively low line ratios at radii smaller than 5 AU at larger columns than the analytical model. The RADEX models can also reproduce the trend in the observed data points in Fig.~\ref{fig:ratvsrad_obs}, and with a strong enough radiation field, or high enough column, they can also reproduce the absolute line ratios. This indicates that high temperatures or strongly out-of-LTE excitation cause the high vibrational ratios at $R_\mathrm{CO} > 5$ AU.
Taking into account that the infrared radiation field decreases with radius, Fig.~\ref{fig:ratvsrad_obs} implies that the \ce{CO} column responsible for the emission needs to increase with $R_\mathrm{CO}$.
\subsection{Absolute fluxes}
\subsubsection{Low vibrational ratios in the inner disk}
To further constrain the conditions of the emitting gas, it is useful to compare the absolute fluxes to the observations. First the sources with low vibrational ratios and small CO emitting radii in the lower left corner of Fig.~\ref{fig:data} are investigated. Rescaling the Herbig line fluxes from \cite{Banzatti2017} to a common distance of 150 pc leads to a range in fluxes between $4\times10^{-15}$ and $2 \times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$ for the $ v1 $ line and $3 \times 10^{-16}$ and $4 \times 10^{-13}$ erg s$^{-1}$ cm$^{-2}$ for the $ v2$ line.
For these sources, the line width implies an emitting radius smaller than 5 AU. Assuming, as a conservative case, that the flux comes from the full inner 5 AU, it is possible to select conditions that reproduce both the correct $v1$ line flux and the correct line ratio. These conditions are confined between the blue and white lines in Fig.~\ref{fig:RADEX_001_ratio}.
The low vibrational ratios and line fluxes can be reproduced with CO columns between $10^{14}$--$10^{19}$ cm$^{-2}$. More confined emitting areas would increase the lower limit of the possible columns. Temperatures between 400 and 1300 K are most likely if the CO excitation is dominated by collisions. If IR vibrational pumping dominates, higher gas temperatures are still consistent with the low vibrational ratios.
\subsubsection{High vibrational ratios at larger radii}
\label{ssc:high_vib_radex}
\begin{figure}
\centering
\includegraphics[width = \hsize]{./Outerdisk_conditions.pdf}
\caption{Parameters that can reproduce the observed CO rovibrational fluxes and line ratios for sources with $R_\mathrm{CO} > 5$ AU, as a function of the assumed emitting area. Two solution branches are found: a low temperature (\textit{left}) and a high temperature (\textit{right}) branch. Different colours show models with different strengths of the infrared radiation field. In the second solution branch, the radiation field does not impact the solutions significantly. }
\label{fig:RADEX_flux_ratio_match}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width = \hsize, page=1]{./Schematics_all.pdf}
\caption{Summary of physical condition constraints from the RADEX models on the emitting regions of the CO rovibrational lines. Fig.~\ref{fig:Cartoon_inner} shows a version of this figure updated with the results of the full disk modelling. The constraints derived here are used to guide the DALI modelling. Regions are not shown to scale. The right panel shows solution \#1 from Fig.~\ref{fig:RADEX_flux_ratio_match} as observations of \ce{^{13}CO} ro-vibrational lines indicate the presence of large columns of \ce{CO} \citep{vanderPlas2015}. }
\label{fig:Schem_after_radex}
\end{figure*}
To extract the physical conditions in the emitting regions for the disks with a high vibrational ratio at large radii (low NIR group I disks), the observed fluxes and vibrational ratios were compared with the predicted fluxes from a grid of RADEX models. As the emitting area for these disks is harder to estimate than for sources with a low vibrational ratio at small radii, the emitting area was left as a free parameter. As the CO is coming from large radii, the near-infrared radiation field is expected to be weak in the CO emitting area of these sources. Therefore, weaker radiation fields ($W$ = 0.0001 and 0.01) are used in the RADEX modelling. Figure~\ref{fig:RADEX_flux_ratio_match} shows the conditions that lead to a vibrational ratio between 0.2 and 0.5 and total integrated $v1$ fluxes between $3 \times 10^{-15}$ and $5\times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$ (normalised to 150 parsec), the range of observed values for low NIR group I sources.
Within the RADEX models there are two families of solutions. For clarity these solution families have been split in Fig.~\ref{fig:RADEX_flux_ratio_match}. One solution family is characterised by low temperatures ($< 300$ K) and very high column densities ($>10^{18}$ cm$^{-2}$); the other has high temperatures ($> 2000$ K) and low column densities ($< 10^{14}$ cm$^{-2}$). In the low temperature family of solutions, the high line ratio comes primarily from the large columns of gas. The density is virtually unconstrained at small emitting areas and the lowest radiation fields. To allow for a large emitting area, very high densities are needed ($>10^{13}$ cm$^{-3}$); such densities are not reasonable at 10 AU, especially not in the disk surface. In the $W = 0.0001$ case, the radiation field does not dominate the excitation. The low temperature branch is also sensitive to the IR continuum radiation field. A stronger IR field moves the solutions to smaller emitting surface areas and higher densities, as excitation conditions are more easily met and the line surface brightness increases. Strong IR radiation fields are not expected to be produced by local dust, so a local pumping field with $W = 0.0001$ is preferred over $W = 0.01$.
In the high temperature family the excitation of CO is dominated by collisions with the gas. At these high temperatures both vibrational states are easily populated by collisions, and the ratio in which they are populated is similar to the line ratio that is seen. As long as the density is above $10^{11}$ cm$^{-3}$, the result is density independent. Because the lines are optically thin, the line flux is set by the total number of CO molecules, giving rise to a degeneracy between surface area and column. The solution is independent of the radiation field assumed.
\subsection{Physical conditions in the CO emitting region}
\label{ssc:physical_cond_slab}
The slab models provide important constraints on the physical conditions of the gas producing the observed CO rovibrational emission. In Fig.~\ref{fig:Schem_after_radex} these constraints have been put in the context of simple disk geometries. Modest temperatures ($\lesssim 1000$ K) and columns below $10^{18}$ cm$^{-2}$ are needed to explain the low vibrational ratios at small $R_\mathrm{CO}$. These columns are most likely present in the surface layers of a dust-rich inner disk and imply gas-to-dust ratios smaller than 1000, assuming that the dust is optically thick at 4.7 $\mu$m. The modest temperatures needed indicate that at these small radii, the temperature of the CO emitting gas cannot be more than a factor $\sim 2$ higher than the dust temperature, as gas that is hotter than twice the dust temperature would easily reach 1500 K, especially within the inner AU. This would create higher vibrational ratios than measured. In the next section, the constraints from the slab models will be used as guidance for the thermo-chemical modelling, and the constraints will be updated with the results from the full disk modelling.
To explain the high vibrational ratios coming from large radii a large gap in molecular gas is needed. As these sources also have low near-infrared continuum emission and gaps have been imaged in many of them \citep[e.g.][]{Garufi2017}, a gap devoid of most of the gas and all the dust is assumed. In the case of a large dust gap, the CO column in the cavity needs to be very low, on average lower than $10^{14}$ cm$^{-2}$. If the column were higher, then the $v1$ flux from within 5 AU would be strong enough to be detected. Alternatively, it can be estimated that the surface area of optically thick CO gas within the cavity needs to be $\lesssim 0.25 $AU$^2$.
Two families of solutions have been found from the RADEX models that fit both the line strengths and the line ratios. The first solution is shown in the left panel of Fig.~\ref{fig:Schem_after_radex} and needs low temperatures and high columns. This solution is preferred, as fits of the rotational diagrams of rovibrational lines of \ce{^{12}CO} and \ce{^{13}CO} for disks with a high vibrational ratio \citep{vanderPlas2015} favour large columns ($N_\mathrm{CO} \approx 10^{19}$ cm$^{-2}$) and moderate temperatures (300--500 K). To be able to probe these large columns, local gas-to-dust ratios above 100 in the CO emitting regions are necessary, with many solutions needing gas-to-dust ratios of 10000.
The increase in vibrational line ratio with emitting radius thus seems to be an effect of the increase in the gas-to-dust ratio of the CO emitting area, with CO coming from gas whose temperature is coupled to the dust for both the low and the high $v2/v1$ sources. This indicates that the process that clears the inner disk of gas in the high $v2/v1$ sources clears out the dust as well, confining it to larger radii than the gas. This is what would be expected for a dust trap, and is in line with the low metallicity measured in the accreting material of these sources \citep{Kama2015}. We will discuss these scenarios in Section \ref{sec:discussion}.
\section{DALI modelling}
\label{sec:DALI}
\subsection{Model setup}
\begin{table}
\caption{\label{tab:All_mod_param} Fiducial parameters}
\begin{tabular}{l c c}
\hline
\hline
Parameter & Symbol& Value \\
\hline
Stellar Luminosity & &30 $L_\odot$ \\
Stellar Mass & & $ 2.5 M_\odot$ \\
Effective Temperature & & $10000$ K \\
Sublimation radius & $R_\mathrm{subl}$ & 0.4 AU \\
Critical radius &$R_c$ & 50 AU \\
Disk outer radius & $R_\mathrm{out}$ & 500.0 AU \\
Gas surface density at $R_c$ & $\Sigma_c$ & 60 g cm$^{-2}$ \\
Surface density power law slope & $\gamma$ & 1 \\
Disk opening angle & $h_c $ & 0.1 \\
Disk flaring angle & $\psi$ & 0.25 \\
PAH abun. rel. to ISM & $x_\mathrm{PAH,\ ISM}$ & $10^{-20}$ \\
Large dust fraction& & 0.9 \\
Large dust settling& & 0.1 \\
Disk inner radius & $R_\mathrm{in}$ & 0.4 -- 15 AU \\
Gas-to-dust ratio & $\Delta_\mathrm{g-d}$ & 10 -- 10000 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[width = \hsize]{./Schematic_disk_mono.pdf}
\caption{\label{fig:Schem_mono} Schematic representation of the surface density in the monolithic models. $R_\mathrm{in}$ is the same for gas and dust and is varied between 0.4 and 15 AU, while $\Delta_\mathrm{g-d}$ is varied between 10 and 10000.}
\end{figure}
Armed with an understanding of which conditions reproduce the observations, we now run a set of DALI models. A set of Herbig disk models is computed, with the inner edge, that is, the innermost radius at which gas and dust are present ($R_\mathrm{in}$), varied from the classical sublimation radius at 0.4 AU up to 15 AU (see Fig.~\ref{fig:Schem_mono}). Table~\ref{tab:All_mod_param} shows the parameters assumed for the model disks. The gas and dust surface densities are given by:
\begin{equation}
\begin{split}
\Sigma_{\mathrm{gas}} &= \Delta_\mathrm{g-d} \Sigma_{\mathrm{dust}} \\
\Sigma_{\mathrm{dust}} &= \frac{\Sigma_c}{100} \left(\frac{R}{R_c}\right)^{-\gamma} \exp{\left[-\left(\frac{R}{R_c}\right)^{2-\gamma}\right]},
\end{split}
\end{equation}
where the gas-to-dust ratio is varied between 10 and 10000.
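For reference, this density structure can be sketched as follows (radii in AU, fiducial values from Table~\ref{tab:All_mod_param}):

```python
import numpy as np

def Sigma_dust(R, Sigma_c=60., R_c=50., gamma=1.0):
    """Tapered power-law dust surface density [g cm^-2], R in AU."""
    return Sigma_c / 100. * (R / R_c)**(-gamma) * np.exp(-(R / R_c)**(2. - gamma))

def Sigma_gas(R, g2d=100., **kwargs):
    """Gas surface density for a given gas-to-dust ratio."""
    return g2d * Sigma_dust(R, **kwargs)
```

At the critical radius the dust surface density is $\Sigma_c/100 \times e^{-1}$, and the exponential taper takes over beyond $R_c$.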
The thermo-chemical modelling is done with the code DALI \citep{Bruderer2012, Bruderer2013}. The standard setup is used except for a few changes that will be highlighted where relevant. The dust temperature is calculated using Monte Carlo radiative transfer. The gas temperature, chemical composition and molecular excitation are calculated self-consistently. For the thermo-chemical calculation both the \ce{CO} and \ce{H2O} molecular models have been expanded. For CO, five vibrational levels (up to $v = 4$), each with 41 rotational levels (up to $J = 40$), are included, with level energies, line positions and Einstein $A$ coefficients taken from the HITRAN database \citep{Rothman2013}. Collisional rate coefficients for collisions between CO and \ce{H2} \citep{Yang2010} and \ce{H} \citep{Song2015, Walker2015} are included. The full molecule model is described in Appendix~\ref{app:CO_mol}. The molecule model for \ce{H2O} has been expanded to include vibrational lines, as these could be important for cooling in the regions where \ce{CO} is emitting. For \ce{H2O} the rovibrational datafiles from LAMDA\footnote{Leiden Atomic and Molecular DAtabase, \url{http://home.strw.leidenuniv.nl/~moldata/} \citep{Schoier2005}} are used \citep{Tennyson2001, Barber2006, Faure2008}. The line profiles are extracted for the CO $v2$ and $v1$ transitions using the raytracer described in \cite{Bruderer2012}. For the ray tracing a disk inclination of 45$^{\circ}$ and a distance of 150 parsec are used.
The extracted line profiles are then convolved to match a resolving power of $R = 100000$, and noise is added to achieve a similar signal-to-noise as in the observations ($\sim 200$). From these line profiles the emitting radius (from the line width) and vibrational line ratio are extracted using the same method as used for observational data by \cite{Banzatti2015}.
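This post-processing step can be sketched as follows; the kernel truncation and noise handling are simplifications of the actual pipeline:

```python
import numpy as np

def observe(v, flux, R=100000, snr=200, seed=0):
    """Convolve a model line profile (v in km/s, uniform grid) with a
    Gaussian instrumental kernel of FWHM c/R, then add Gaussian noise."""
    dv = v[1] - v[0]
    sigma = (2.998e5 / R) / (2. * np.sqrt(2. * np.log(2.)))  # km/s
    kern_v = np.arange(-4. * sigma, 4. * sigma + dv, dv)
    kern = np.exp(-0.5 * (kern_v / sigma)**2)
    kern /= kern.sum()                       # preserve the line flux
    conv = np.convolve(flux, kern, mode="same")
    rng = np.random.default_rng(seed)
    return conv + rng.normal(0., conv.max() / snr, conv.size)
```

The convolved, noisy profile is then analysed in the same way as the observations, so that the extracted $R_\mathrm{CO}$ and $v2/v1$ are directly comparable.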
For some models the gas temperature and chemistry are not calculated self-consistently. These are the LTE models in Fig.~\ref{fig:ratvsrad_mono} and the $T_\mathrm{gas} = T_\mathrm{dust}$ model in Appendix~\ref{app:excitationtest}. In these models the gas temperature is set equal to the dust temperature as calculated by the dust radiative transfer, and the CO abundance is parametrised by:
\begin{equation}
\label{eq:CO_param}
x_{\ce{CO}} = 10^{-4} \times \frac{A_V}{1 + A_V},
\end{equation}
with $A_V$ the visual extinction as calculated from the continuum radiative transfer. For large $A_V$ the CO abundance converges to the canonical value of $10^{-4}$, at $A_V < 1$ the CO abundance is decreased from the canonical value to mimic the effects of photo-dissociation. The CO abundance globally agrees well with the CO abundance from the thermo-chemical model. These simplified models have been run in LTE conditions and in non-LTE by explicitly calculating the excitation ($T_\mathrm{gas} = T_\mathrm{dust}$ model).
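The abundance parametrisation itself is a one-liner, and its limiting behaviour follows immediately:

```python
def x_CO(A_V):
    """Parametrised CO abundance versus visual extinction: approaches the
    canonical 1e-4 at high A_V, and drops roughly linearly with A_V below
    A_V ~ 1 to mimic photodissociation."""
    return 1e-4 * A_V / (1. + A_V)
```

For example, at $A_V = 1$ the abundance is half the canonical value, while deep in the disk it saturates at $10^{-4}$.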
\subsection{Model results}
\subsubsection{$v1$ line flux and $v2/v1$ ratio}
\label{ssc:Mono_models}
\begin{figure}
\includegraphics[width = \hsize]{./COrad_vs_Fluxrad_double_main_LF.pdf}
\caption{\label{fig:ratvsrad_mono} $v1$ line flux (\textit{top}) and vibrational ratio of CO (\textit{bottom}) versus the inferred radius of emission for observational data and DALI model results. Lines connect the dots in order of inner model radius. Labels indicate the gas-to-dust ratio for the thermo-chemical models; the LTE model also has a gas-to-dust ratio of 100. The model with the largest cavity has the largest CO radius. The dust surface density is kept constant for models of different gas-to-dust ratios. Due to missing data, not every source with a vibrational ratio in the lower panel also has a line flux in the upper panel. Clearly none of these models reproduce the trends in the data.}
\end{figure}
Results of our fiducial model, with a gas-to-dust ratio of 100, are plotted in Fig.~\ref{fig:ratvsrad_mono} as the black points. The vibrational ratio from these models shows exactly the opposite trend from the data. The model vibrational ratio is roughly flat with a value around 0.4 for CO radii less than 2 AU, while at larger radii the line ratio decreases.
The line-to-continuum ratios and line fluxes for the models are generally too high (Fig.~\ref{fig:ratvsrad_mono}, top). At small CO emitting radii line fluxes are consistent with the highest observed fluxes, at large CO emitting radii the model line fluxes are a factor $\sim$ 10 higher than the average flux.
The flux is dominated by optically thick lines coming from the inner edge of the model. The gas temperature in the emitting region is higher than $\sim 600$ K in all models. This means that the CO rovibrational lines are emitted at wavelengths longer than the peak of the relevant Planck function. As a result, the line flux of these optically thick lines scales linearly with the gas temperature. The gas temperature in the emitting regions decreases slowly with increasing inner model radius and is almost constant for models with $R_\mathrm{in}$ between 1.4 and 10 AU. While the total area of the inner edge scales as $A \propto R_\mathrm{in}^{2 + \psi}$, not all of this area contributes to the emission. The emission is dominated by two rings, at the top and bottom of the inner edge wall. These rings are situated in the region where the dust temperatures are higher than the midplane dust temperature and the CO excitation is still thermalised with the gas. In the models the vertical extent of this region increases slightly with radius, leading to faster than linear growth of the emitting area. Coupled with the constant or slowly declining temperature with radius, this leads to a roughly linear relation between inner model radius and CO $v1$ flux.
The effect of the gas-to-dust ratio on the vibrational ratio is also shown in Fig.~\ref{fig:ratvsrad_mono} (bottom). All models show a similar trend: with increasing CO radius, the $v2$/$v1 $ drops. Models with an increased gas-to-dust ratio produce higher $v2 $/$v1 $ line ratios. This is due to the larger column of gas that can emit, leading to more optically thick lines, driving up the $v2 $ lines compared to the $v1 $ lines. Even so, the models with the lowest gas-to-dust ratio still have a $v2$/$v1$ ratio larger than most of the observed disks for emitting radii of less than $\sim 4$ AU. For models with the smallest cavities the line becomes undetectable at gas-to-dust ratios lower than 10. With increasing gas-to-dust ratio, there is also an increasing $v1$ line flux, indicating that the emitting area is getting larger.
The LTE models in Fig.~\ref{fig:ratvsrad_mono} show a very different behaviour from the thermo-chemical models. This is fully due to the LTE assumption, because the parametrisation of the CO abundances and the assumption of coupled gas and dust temperatures have only very small effects on the line ratios (see Appendix~\ref{app:excitationtest}). The LTE models consistently have lower vibrational ratios than the fiducial models, because neither IR pumping nor excitation by self-absorption is included. Together these processes explain the different vibrational ratios between the fiducial and LTE models. The LTE models with small cavities have vibrational ratios and CO emitting radii that are consistent with the observations. For the disks with larger cavities, the LTE models cannot come close to the observations, indicating that non-LTE processes are definitely important for gas at large radii. The effects of infrared pumping and UV pumping have also been studied in Appendix~\ref{app:excitationtest}, but removing IR pumping and including UV pumping only has marginal effects on the excitation, and neither can explain the discrepancy between the data and the models. The LTE models consistently show line fluxes that are in good agreement with the data.
That the LTE models seem to do so well, certainly for the low $v2/v1$ sources, is puzzling. The LTE assumption only holds for the CO rovibrational lines if the local gas density is above $\sim 10^{16}$ cm$^{-3}$. These high densities are only expected near the disk midplane and not at the disk surface.
Beyond 5 AU there are non-LTE models that overlap with the data, and there is a suggestion that higher gas-to-dust ratios are needed to explain the increase in vibrational ratio with increasing CO emitting radius. However, the fluxes of these models are high, $10^{-15}$ W m$^{-2}$ at 150 pc, about a factor of 300 higher than most observed high vibrational line ratio sources. The DALI models do not show a hot, tenuous layer of CO such as would be needed for solution \#2 of Fig.~\ref{fig:RADEX_flux_ratio_match}. They show instead that with high gas-to-dust ratios, and thus high CO columns, the right vibrational ratios can be reproduced, consistent with solution \#1 of Fig.~\ref{fig:RADEX_flux_ratio_match}.
\subsubsection{Line profiles}
\label{ssc:LineProfs}
\begin{figure*}
\sidecaption
\includegraphics[width = 12cm]{./new_Lineprofile_lib_5.pdf}
\caption{\label{fig:lineprofiles_sub} Normalised model line profiles for the $v1$ (black) and the $v2$ (blue) lines for a subset of the models at the native resolution of the model, $R = 10^6$. The text on the left of each panel denotes the model set. The top right corner of each panel denotes the inner radius of the model. The vertical bar in the bottom right of each panel shows 0.03 (top two rows) or 0.3 (bottom row) of the continuum flux density. The two models marked with ``*'' match both $R_\mathrm{CO}$ and $v2/v1$ for a subset of the data. All lines are modelled assuming a 45 degree inclination. No noise has been added to these lines; noise-like features in the line profiles are due to the sampling of the DALI grid.}
\end{figure*}
As shown in Fig.~\ref{fig:ratvsrad_mono}, the extracted line ratios and emitting areas of most models are not able to explain the observed behaviour, especially the low line ratios in the inner disk. It is thus necessary to take a closer look at the predicted line profiles and compare them with the observed line profiles (Fig.~\ref{fig:obslineprofiles}) in search of an explanation for this mismatch.
The line profiles for subsets of DALI models are shown in Fig.~\ref{fig:lineprofiles_sub} (line profiles for all models are shown in App.~\ref{app:lineprofs}).
The models with small holes ($< 2$ AU) show a clear difference between the full thermo-chemical models and the LTE models. The full DALI models consistently show a two-component line structure: a broad, nearly top-hat component that is present in both the $v1$ and the $v2$ lines, and a more strongly peaked component that is very weak in the $v2$ line. This second component compares well to the line profile of HD 31648 in Fig.~\ref{fig:obslineprofiles}.
The total line flux and the line ratio are seen to increase with increasing gas-to-dust ratio. Furthermore, the $v1 $ line profile gets narrower with increasing gas-to-dust ratio, consistent with the emitting area getting larger for higher gas-to-dust ratios.
None of the observations show the broad plateau-like feature that is present in our model line profiles with small $R_\mathrm{in}$ ($< 2$ AU). This indicates that the inner rim of the model disk needs to be adapted to fit the data; in particular, the $v2$ flux from the inner disk wall needs to be strongly reduced. The LTE models show that low vibrational ratios are produced if the gas, dust and CO excitation are thermalised. To thermalise the CO excitation, densities in the emitting area of more than $\sim 10^{16}$ cm$^{-3}$ are necessary; this is an increase in density of about 4 orders of magnitude compared to the current density of the inner disk wall. Another option would be to lower the \ce{CO} abundance in the inner rim regions by at least 4 orders of magnitude, removing most of the contribution of the inner rim to both the $v1$ and $v2$ lines.
The line profiles from models with an inner radius of 10 AU generally show a narrow double peaked profile in both lines, indicating that the directly irradiated inner edge contributes most of the flux in both transitions. This is consistent with the very steep line profiles without low-level wings seen from disks with a high vibrational ratio (e.g. IRS 48, Fig.~\ref{fig:obslineprofiles}). The line ratio strongly depends on the gas-to-dust ratio in the disk surface: high gas-to-dust ratios lead to higher vibrational ratios as the $v1$ line opacity increases. Higher gas-to-dust ratios also lead to larger $v1$ fluxes. The LTE models with large cavities have no detectable $v2$ emission. They are, however, the only models whose $v1$ line flux is within the observed range; the non-LTE models overpredict the flux.
Comparison of the line profiles in Fig.~\ref{fig:obslineprofiles} and Fig.~\ref{fig:lineprofiles_sub} indicates that for observed disks with low vibrational ratios, the line profiles can be well reproduced by models that have a small cavity radius, except that these models add a plateau-like contribution to the line profile from the inner disk. This indicates that emission from the disk surface agrees with the observed line profiles and line ratios. This is consistent with the analytical and RADEX analyses, which predict low vibrational ratios for disk-surface conditions.
Using the spatial information in the model image cube, the emission was decomposed into a disk surface and a disk inner rim component. Fig.~\ref{fig:sep_cont_gd10000} shows the original (continuum subtracted) and decomposed line profiles for the model. The line profile cleanly separates into a broad, high line ratio component coming from within 0.63 AU and a narrowly peaked, low line ratio component from the rest of the disk. This suggests that the models strongly overestimate the flux coming from the inner rim. The implications of this will be discussed in Sec.~\ref{ssc:dis_inner}.
\subsection{Disk surface emission}
\label{sec:Sep_inner_rim}
\begin{figure}
\includegraphics[width=\hsize]{./Spectrum_seperate_gd_10000.pdf}
\caption{\label{fig:sep_cont_gd10000} Line profiles for the models with an inner cavity of 0.6 AU and a gas-to-dust ratio of 10000. In the right-hand panel, the contributions from the inner rim and the disk surface are shown separately.}
\end{figure}
\begin{table*}
\centering
\caption{\label{tab:subbed_params} Model variations for the models with subtracted edge contributions.}
\begin{tabular}{l c c | c | c c c | c c c | c c c}
\hline
\hline
\multicolumn{3}{c}{Inner radius variation} & & \multicolumn{3}{c}{g/d = 100} & \multicolumn{3}{|c}{g/d = 1000} & \multicolumn{3}{|c}{g/d = 10000} \\
& $R_\mathrm{in}$ (AU) & & $F_\mathrm{NIR}$ &$v2/v1$&$R_\mathrm{CO}$ (AU) & $F_{v1}$\tablefootmark{a}& $v2/v1$&$R_\mathrm{CO}$ (AU) & $F_{v1}$\tablefootmark{a} &$v2/v1$&$R_\mathrm{CO}$ (AU)& $F_{v1}$\tablefootmark{a}\\
\hline
\# 1. & 0.4 & &0.14 & 0.03 & 2.3 & 4.6 & 0.05 & 2.4 & 9.7 & 0.33 & 3.7 & 20.9 \\
\# 2. & 0.6 & & 0.16 & 0.02 & 2.8 & 4.9 & 0.04 & 3.2 & 10.4 & 0.32 & 5.4 & 21.8 \\
\# 3. & 1.35 & & 0.18 & 0.04 & 15.2 & 5.5 & 0.03 & 20.7 & 12.1 & 0.18 & 4.5 & 23.8 \\
\hline
\multicolumn{3}{c|}{Outer radius variation} & & \multicolumn{3}{c}{g/d = 100} & \multicolumn{3}{|c}{ } & \multicolumn{3}{|c}{g/d = 10000} \\
& $R_\mathrm{out}$ (AU)& & $F_\mathrm{NIR}$ &$v2/v1$&$R_\mathrm{CO}$ (AU)& $F_{v1}$\tablefootmark{a} & & & &$v2/v1$&$R_\mathrm{CO}$ (AU)& $F_{v1}$\tablefootmark{a}\\
\hline
\# 1. & 3 & & 0.13 & 0.18 & 1.4 & 2.0 & & & & 0.32 & 1.5 & 7.0 \\
\# 2. & 5 & & 0.13 & 0.12 & 2.5 & 2.6 & & & & 0.25 & 2.3 & 9.0 \\
\# 3. & 8 & & 0.14 & 0.07 & 3.2 & 3.0 & & & & 0.42 & 1.1 & 10.4 \\
\# 4. & 500 & & 0.14 & 0.03 & 2.3 & 4.6 & & & & 0.33 & 3.7 & 20.9 \\
\hline
\multicolumn{3}{c|}{Flaring variation} & & \multicolumn{3}{|c}{g/d = 100} & \multicolumn{3}{|c}{ } & \multicolumn{3}{|c}{g/d = 10000} \\
& h (rad) & $\psi$ & $F_\mathrm{NIR}$ & $v2/v1$&$R_\mathrm{CO}$ (AU)& $F_{v1}$\tablefootmark{a} & & & &$v2/v1$&$R_\mathrm{CO}$ (AU)& $F_{v1}$\tablefootmark{a}\\
\hline
\# 1. & 0.02 & 0.0 & 0.07 & 0.47 & 0.8 & 0.2 & & & & 0.35 & 0.7 & 1.4 \\
\# 2. & 0.1 & 0.0 & 0.45 & 0.18 & 1.4 & 3.5 & & & & 0.67 & 0.80 & 16.6 \\
\# 3. & 0.02 & 0.25 & 0.03 & 0.08 & 12.2 & 0.4 & & & & 0.03 & 2.5 & 5.9 \\
\# 4. & 0.1 & 0.25 & 0.14 & 0.03 & 2.3 & 4.6 & & & & 0.33 & 3.7 & 20.9 \\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{$v1$ line flux ($\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$)}}
\end{table*}
\begin{figure*}
\sidecaption
\includegraphics[width=12cm]{./tripple_double_subbed.pdf}
\caption{\label{fig:subtracted_ratios} $v1$ flux (\textit{top}) and CO vibrational ratio (\textit{bottom}) versus the inferred radius of emission for observational data and model results. The contribution from the inner edge has been subtracted from the model spectra before analysis. Models show variation in inner radius (\textit{left panel}), variation in outer radius (\textit{middle panel}) and variation in flaring and disk height (\textit{right panel}). In the left and middle panels, lines connect points in increasing order of the parameter varied. In the right panel, lines connect models with the same flaring angle, so the difference between connected points shows the effect of a change in thickness of the disk. Table~\ref{tab:subbed_params} lists the parameters varied for these models. Models with a gas-to-dust ratio of 100 and 1000 are better at reproducing the vibrational ratio than the models with gas-to-dust ratios of 10000. Variation in the vibrational ratio can be reproduced by variations in the disk structure, but no single parameter explains all the variation.}
\end{figure*}
The removal of the line contribution from the inner rim makes it possible to make a direct comparison between observations and the flux from the disk surface. We restrict ourselves to model disks with a small inner cavity size ($< 1.5$ AU), as for these radii the vibrational ratio is most strongly overpredicted in the models. The inner rim region from which the line emission is removed originally produces $\sim 40\%$ of the $v1$ flux and $\sim$90\% of the $v2$ flux. This region also accounts for $\sim$90\% of the 4.7 $\mu$m continuum flux in the model. As before, different inner disk radii and gas-to-dust ratios are studied. On top of that, for models with an inner radius of 0.4 AU and gas-to-dust ratios of 100 and 10000, the outer radius, vertical scale height and flaring are also varied. Table~\ref{tab:subbed_params} gives an overview of the varied parameters and model results. Figure~\ref{fig:subtracted_ratios} compares results of the DALI models without a contribution of the inner rim to the observed data.
By isolating the emission from the disk surface, low vibrational ratios can be obtained at small CO radii. Increasing the gas-to-dust ratio increases the vibrational ratio and the $v1$ line flux, while only slightly increasing the CO emitting radius. Increasing the inner cavity radius to more than 1 AU causes the CO emitting radius to increase beyond 10 AU for gas-to-dust ratios of 100 and 1000. No sources with such a narrow CO line and a low vibrational ratio are seen.
Truncating the outer disk by removing all material beyond a radius of 8, 5 or 3 AU moves the emission inward and generally increases the vibrational ratio, because the emission has less contribution from larger radii and colder gas. The more truncated disks also have lower $v1$ fluxes, while the NIR continuum emission is not reduced compared to their full disk counterparts. As expected, a more flared disk has emission from further out, and is vibrationally colder, than a geometrically flatter disk. Lowering the scale height moves the emission further out for a non-flared disk, while for flared disks the emitting radius is reduced.
Overall, Fig.~\ref{fig:subtracted_ratios} shows that emission from the disk surface, especially with gas-to-dust ratios of 100 or 1000, can match the observed CO line fluxes and vibrational ratios at small radii. Varying the disk inner radius alone cannot explain the full extent of the data. Restricting the emitting region, in this case by truncating the disk, or changing the vertical structure of the inner disk helps in reproducing the spread in vibrational ratio and CO radius. This indicates that rovibrational CO emission is tracing substructures in the inner disk surface.
Comparing the model line profiles (Fig.~\ref{fig:lineprofiles_sub}) with the observed line profiles (Fig.~\ref{fig:obslineprofiles}) reveals that there is only one disk that is matched well with a full, flared disk (HD 31648, also known as MWC 480). All other line profiles are better matched with a very flat or even truncated model. The ubiquity of emission at large radii in the models, but not in the data, implies that the inner disk structure of the observed disks is different from the smooth, flared geometry assumed in the model.
\subsection{$T_\mathrm{gas} \approx T_\mathrm{dust}$}
The removal of the inner rim for the small $R_\mathrm{in}$ models (Sec.~\ref{sec:Sep_inner_rim}) and the lower temperatures in the rounded models with large $R_\mathrm{in}$ (Appendix~\ref{app:outerdisk}) allow us to reproduce line widths, line strengths and vibrational ratios. All these models have in common that the gas and dust temperatures in the emitting area are similar, with 20\% temperature differences in the surface layers of the disks with small holes and differences below 50\% for the inner walls of disks with large cavities. Conversely, models that overpredicted the flux or vibrational ratio generally had gas temperatures that were at least twice as high as the dust temperature.
These results seem to contradict the results of \citet{Bruderer2012}, who modelled the pure rotational high-$J$ CO lines in HD 100546, a low-NIR group I source in our sample. Bruderer et al. find that they need a gas temperature that is significantly higher than the dust temperature to explain the $v=0$ high-$J$ CO rotation diagram. However, the emitting areas for the high-$J$ and rovibrational CO lines are not the same. The high-$J$ lines come from the surface of the outer disk, while the rovibrational CO lines come from the cavity wall. This difference in emitting region is due to the difference in critical density of the transitions. The critical density of the CO rovibrational lines is around $10^{15}$ cm$^{-3}$, while the $v=0, J = 32-31$ transition has a critical density around $10^{7}$ cm$^{-3}$. The CO rovibrational lines thus come from denser ($\sim 10^{10}$ cm$^{-3}$), better thermalised gas than the high-$J$ CO lines, which can be effectively emitted from the more tenuous, thermally decoupled surface layers.
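The critical-density argument above can be made concrete with the order-of-magnitude estimate $n_\mathrm{crit} = A_{ul}/k_{ul}$. A minimal sketch, where the Einstein-$A$ values and collisional rate coefficients are illustrative round numbers (assumptions, not values taken from the models):

```python
# Order-of-magnitude critical densities n_crit = A_ul / k_ul for the two
# types of CO transitions discussed in the text. All coefficients below
# are illustrative round numbers, not values used in the actual models.

A_rovib = 2e1    # s^-1, CO v=1-0 rovibrational Einstein A (order of magnitude)
k_rovib = 2e-14  # cm^3 s^-1, assumed vibrational quenching rate by H2

A_rot = 2e-3     # s^-1, CO v=0 J=32-31 Einstein A (order of magnitude)
k_rot = 2e-10    # cm^3 s^-1, assumed rotational de-excitation rate

n_crit_rovib = A_rovib / k_rovib  # ~1e15 cm^-3
n_crit_rot = A_rot / k_rot        # ~1e7 cm^-3
```

With these numbers the rovibrational lines need densities some eight orders of magnitude higher than the high-$J$ rotational lines to thermalise, which is why the two sets of lines probe different layers of the disk.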
The thermo-chemical models by \citet{Thi2013} show a slightly stronger gas-dust temperature decoupling in the inner 10 AU at the CO emitting layer with the gas temperature being 2--3 times higher than the dust temperature. The temperature of the CO emitting layer in \citet{Thi2013} is still within the 400--1300 K range. Further testing will have to be done to see if this hotter layer can also reproduce the low vibrational ratios that are observed.
In T-Tauri disks, models of mid-infrared \ce{H2O} observations have invoked a decoupling of gas and dust temperatures high in the disk atmosphere to explain \textit{Spitzer} observations \citep[e.g.][]{Meijerink2009}. In these models this decoupling happens at densities below $10^{9}$ cm$^{-3}$, which is lower than the density of the gas that produces most of the CO rovibrational lines, $> 10^{10}$ cm$^{-3}$. Our models also show a strong decoupling of gas and dust temperatures ($T_\mathrm{gas} > 3 \times T_\mathrm{dust}$) in this layer, but no CO rovibrational lines are emitted from there.
Observations of optically thin CO rovibrational lines, i.e. high-$J$ \ce{^{12}CO} and CO isotopologue lines, can be used to directly probe the gas temperatures predicted here. The high-$J$ \ce{^{12}CO} lines will most likely be more sensitive to the hotter, upper or inner layers of the disk atmosphere, so a higher gas temperature would be inferred from these lines compared to the \ce{^{13}CO} and possibly \ce{C^{18}O} rovibrational lines.
\begin{table*}[]
\centering
\caption{Summary of physical constraints from modelling results. }
\label{tab:constraints}
\begin{tabular}{l | c c c}
\hline
\hline
CO emission & Conditions & Model & Comments \\
\hline
\multirow{3}{*}{} & $T < 1500$ K, $N_\mathrm{CO} < 10^{18}$ cm$^{-2}$ & Slab LTE & Fig.~\ref{fig:Analytic_ratio} \\
& $T = 400 - 1300$ K, $10^{14} <N_\mathrm{CO} < 10^{18}$ cm$^{-2}$ & RADEX & Fig.~\ref{fig:RADEX_001_ratio}\\
$v2/v1 < 0.2$ and & No CO in dust free gas & Slab LTE, RADEX & Sec.~\ref{ssc:physical_cond_slab} \\
$R_\mathrm{CO} < 5$ & g/d$ \lesssim 1000$ & DALI & Fig.~\ref{fig:subtracted_ratios} \\
& No CO at the inner rim & DALI & Sec.~\ref{sec:Sep_inner_rim} and Fig.~\ref{fig:subtracted_ratios}\\
& Radially constrained emitting area & DALI & Sec.~\ref{sec:Sep_inner_rim} and Figs.~\ref{fig:obslineprofiles},~\ref{fig:lineprofiles}~and~\ref{fig:lineprofiles_sub}\\
\hline
& $T \lesssim 300 $K, $N_\mathrm{CO} > 10^{18}$ cm$^{-2}$ \textbf{or}& \multirow{2}{*}{Slab LTE, RADEX} & \multirow{2}{*}{Figs.~\ref{fig:Analytic_ratio},~\ref{fig:RADEX_001_ratio}~and~\ref{fig:RADEX_flux_ratio_match}} \\
$v2/v1 > 0.2$ and & $T \gtrsim 1000 $K, $N_\mathrm{CO} < 10^{14}$ cm$^{-2}$& & \\
$R_\mathrm{CO} > 5$ & g/d$ > 10000$ & DALI & Figs.~\ref{fig:ratvsrad_mono}~and~\ref{fig:lowflux_highvib}\\
& Rounded edge, $T_\mathrm{gas}< 300$ K, $N_\mathrm{CO} > 10^{20}$ cm$^{-2}$ & DALI & Sec.~\ref{app:outerdisk} \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width = \hsize]{./Final_schematics.pdf}
\caption{\label{fig:Cartoonend}
Typical line profiles, simulated images and proposed disk structures for four types of disks identified in the Herbig sample. Near-infrared continuum and CO emitting areas are shown in red and blue respectively. The simulated images show the velocity integrated CO $v1$ line flux. These images are discussed in more detail in Sec.~\ref{ssc:futureobs}. The disk structures are updated versions of those shown in Fig.~\ref{fig:data}.}
\end{figure*}
\section{Discussion}
\label{sec:discussion}
Our modelling results show that we can reproduce the observed CO emission with low vibrational ratios at small radii versus high vibrational ratios at large radii (Fig.~\ref{fig:data}) under different and separate conditions. Low vibrational ratios measured at small disk radii require a CO column below $10^{18}$ cm$^{-2}$ and a temperature between 400--1300 K. These conditions naturally occur in the denser ($>10^9$ cm$^{-3}$) surface layers of the disk. Emission from a dust-free inner region, and from an inner disk rim directly irradiated by the star, is ruled out based on the high $v2/v1$ that would be produced under these conditions, which is not observed. Line velocity profiles indicate that most group II disks have an emitting area that is radially narrower than what a flared disk model produces. Thus, flared group II disks should be rare, in agreement with the currently accepted paradigm.
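As a sanity check on the quoted temperature range, the vibrational ratio of optically thin gas in LTE can be estimated from a Boltzmann factor over the $v=2\rightarrow1$ energy spacing, multiplied by an assumed Einstein-$A$ ratio $A(2\text{--}1)/A(1\text{--}0)\approx2$ (a harmonic-oscillator scaling, not a fitted value):

```python
import numpy as np

HC_OVER_K = 1.4388  # cm K, hc/k_B
DE_21 = 2117.0      # cm^-1, CO E(v=2) - E(v=1) for omega_e = 2170 cm^-1
                    # and omega_e x_e = 13.3 cm^-1 (standard CO constants)

def lte_thin_v2_v1(T, a_ratio=2.0):
    """Optically thin LTE estimate of the v2/v1 flux ratio:
    a Boltzmann population factor times an assumed A(2-1)/A(1-0) ratio."""
    return a_ratio * np.exp(-HC_OVER_K * DE_21 / T)
```

For $T = 400$--1300 K this gives ratios of $\sim10^{-3}$--0.2, consistent with the low observed values, while $T \gtrsim 2000$ K pushes the thin-LTE ratio above 0.4.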
High vibrational ratios measured at large disk radii instead require an inner disk region strongly depleted in CO ($N_{\ce{CO}} < 10^{14}$ cm$^{-2}$), i.e. the region that would otherwise produce the high-velocity CO line wings that are not observed in these spectra. In the emitting region at larger radii, where CO is still present, the gas must be cold ($< 300$ K) and CO columns must be high ($> 10^{20}$ cm$^{-2}$). To provide these conditions, a high gas-to-dust ratio is necessary ($> 10000$), coupled with a density structure that allows for efficient cooling. Models with a midplane number density and column density that increase with radius are able to match the line flux, vibrational ratio and $R_\mathrm{CO}$. The large columns and low gas temperatures are consistent with \ce{^{13}CO} observations \citep{vanderPlas2015}. These constraints on the disk physical structure, and the models from which they are derived, are summarized in Table~\ref{tab:constraints}.
Figure~\ref{fig:Cartoonend} shows four representative CO line profiles, simulated images, and cartoons of the disk structures proposed to produce the observed emission as based on the combination of this and previous analyses. In Sec.~\ref{ssc:dis_inner} and Sec.~\ref{ssc:dis_outer} we will link these structures to the physical and chemical processes proposed to produce them.
Three different disk structures for the low vibrational ratios at small radii are shown in Figure~\ref{fig:Cartoonend}. Two of these apply to group II disks. They are divided into structures with a compact (upper left) and extended (lower left) rovibrational CO emitting area. CO line profiles and infrared excess show that abundant gas and dust are present within $\sim 5$~AU, so any existing disk cavities must be smaller, or the dust extends to the sublimation radius. The stellar abundances for these sources are consistent with solar Fe abundances, so transport of dust to the star is relatively unhindered \citep{Kama2015}. The NIR and CO flux are not emitted from the same region: the IR flux mostly comes from the inner dust edge that is directly irradiated by the star, while the CO must only be emitted from the disk surface. The major difference between the two group II structures is that those with compact CO emission must have a non-flared geometry confining the CO emitting region, possibly with inner disk substructures shadowing the rest of the disk, further confining the emission. The one group II disk with extended CO emission (HD 31648) needs to have a more flared geometry or have a slow molecular disk wind, analogous to those seen in T-Tauris \citep{Pontoppidan2011, Brown2013}. Based on this sample we conclude that both molecular disk winds and flared group II disks should be very rare. In the case of a flared disk, the size of the emitting area is measurable by spectro-astrometry on 8-meter class telescopes or IFU spectroscopy on 30-meter class telescopes.
The third structure giving rise to low vibrational ratios at small radii is that of the high-NIR group I disks (upper right). These disks have large cavities (typically the largest found in Herbigs, see Table \ref{tab:Av_vals}), but they have a residual inner dust disk/belt that produces the high near-infrared flux. These disks therefore have a gap between the inner and outer disk. CO emission comes from the inner disk, again from the disk surface, in order to produce the very low vibrational ratios measured in the data. The near-infrared flux, higher than in the group II disks, must instead come from a larger emitting area than in the group II disks, possibly from both the inner edge and the surface of the inner disk. The solar Fe abundance in the surface layers of the star is an independent indicator of accretion from a still gas- and dust-rich inner disk \citep{Kama2015}, possibly implying efficient filtration of small dust from the outer disk to the inner disk.
The high vibrational ratios at large radii can all be explained with a single structure (bottom right in Figure~\ref{fig:Cartoonend}). These disks have an inner cavity that is strongly reduced in CO surface density. These inner cavities seem to be on average smaller in size than those imaged in high-NIR group I disks (see Table \ref{tab:Av_vals}). The rovibrational CO emission comes from the cavity wall, i.e. the inner edge of the outer disk, which must be rich in molecular gas but strongly depleted in dust. This structure, combined with the low metallicity of the material that is accreted onto the stellar surface \citep{Kama2015}, indicates that the dust is efficiently trapped at some radius larger than $R_\mathrm{CO}$ in these disks. The most appropriate explanation for these deep gas cavities and dust traps currently seems to be that they are caused by giant planets and not by photo-evaporation or dead-zones \citep[see also][]{vanderMarel2016}.
\subsection{Implications for sources with low $v2/v1$ at small radii}
\label{ssc:dis_inner}
\subsubsection{No CO at the inner rim}
\begin{figure*}
\centering
\includegraphics[width = \hsize, page = 2]{./Schematics_all.pdf}
\caption{Sketches of the upper right quadrant of a disk cross section showing the preferred configuration of the CO emitting region in the case of low $R_\mathrm{CO}$ and low $v2/v1$ (group II and group I high NIR, \textit{left}) and large $R_\mathrm{CO}$ and high $v2/v1$ (group I low NIR, \textit{right}). This figure is an update to Fig.~\ref{fig:Schem_after_radex} including the DALI results. Relevant radial scales for the inner and outer edge are shown in the bottom left and right corners of each sketch.
}
\label{fig:Cartoon_inner}
\end{figure*}
The good match to the observed line profiles for models without a contribution from the inner disk edge (Sec.~\ref{sec:Sep_inner_rim}) indicates that CO is not present within or around the dust sublimation radius in any of these disks. The left plot in Figure~\ref{fig:Cartoon_inner} shows the proposed structure of the inner disk of a group II source as inferred from the data. The dust disk in this case reaches to the dust sublimation radius, forming an inner dust rim. Our modelling suggests that, for Herbig disks, the gas gets heated to high enough temperatures ($>3000$ K) near the sublimation radius to keep the gas atomic (see App.~\ref{app:thermdiss} for a discussion of the chemistry), at least until the gas is hidden under the dust infrared photosphere. This is not seen in the thermo-chemical models; however, the gas and dust temperatures in this region are very uncertain. Dust sublimation impacts both the dust temperature structure and the gas temperature, by changing the composition of the gas.
At larger radii the gas can cool enough to become molecular above the dust infrared photosphere. The expectation is that the phase change from atomic to molecular gas also induces a strong extra cooling effect, lowering the gas temperature to the dust temperature. This seems to be the most appropriate scenario to produce the low vibrational ratios measured at such inner disk radii.
This scenario assumes that the dust disk extends all the way to the dust sublimation radius. However, even if there is a small inner hole in the dust disk, the radiation field should still be strong enough to heat the gas to temperatures above 3000 K and keep the gas atomic at the inner edge of the dust cavity, at least within the small inner cavities that have been explored above in Fig.~\ref{fig:subtracted_ratios}.
The CO abundance structure proposed in our modelling, with the low columns and temperatures and the absence of molecular gas within the inner dust rim, also naturally produces very weak or no CO overtone ($\Delta\nu = 2$) emission. This is consistent with a large survey of 2 $\mu$m CO emission towards Herbig AeBe stars, with detection rates as low as $7\%$ \citep{Ilee2014}.
The dissociation of \ce{H2} and \ce{CO} as proposed here should leave a large ($N_{\ce{H}} \gtrsim 10^{18}$ cm$^{-2}$), hot ($T> 3000$ K) atomic or ionised reservoir around the dust inner radius. Velocity or spatially resolved atomic lines, such as can now be measured with near-IR interferometry \citep[e.g.][ and with VLTI-GRAVITY, \citet{Gravity2017}]{Eisner2014, GarciaLopez2015}, can thus be used to test whether \ce{CO} and \ce{H2} are indeed being dissociated around the inner edge of the disk.
\subsubsection{Group II disks have more mass within 5 AU than group~I high-NIR sources}
Both the group II disks and the high-NIR group I disks show low vibrational ratios at small radii, with the latter showing larger radii and lower vibrational ratios on average (Table \ref{tab:Av_vals}). At the same time these group I disks have a higher NIR excess than the group II disks: this is thought to be due to a vertically more extended dust structure in the inner disk \citep[see e.g. discussions in][]{Maaskant2013,Banzatti2018}.
A vertically more extended structure will lead to lower gas densities in the surface layers of the disk. A large population of small grains is needed to populate the tenuous surface layers and convert stellar flux into the observed bright NIR flux. These conditions naturally lead to larger $R_\mathrm{CO}$ and lower vibrational ratios.
In fact, low densities slow down the chemical formation of CO and observable abundances of CO are thus only produced at lower UV fluxes, further from the star. Furthermore, a larger population of small grains has a higher NIR opacity per unit mass of gas, so the visible column of CO is smaller than for group II disks. A lower gas density also helps in lowering the excitation in the $v =2$ state (Fig.~\ref{fig:RADEX_001_ratio}), thereby lowering the vibrational ratio and providing a good explanation for the measured difference from the group II disks.
Source-specific modelling of the rovibrational lines of CO and its isotopologues, fitting the full rovibrational sequence using non-LTE models, can be used to further constrain the density in the inner regions of these disks. Furthermore, if the grains are indeed small, and the dust mass in the inner disk is low, the continuum might be optically thin at ALMA wavelengths, allowing for inner disk mass and grain size measurements.
The radial extent of CO emission in the high-NIR group I objects is harder to estimate than for the group II objects as the observed CO lines are intrinsically narrower. Spectro-astrometric measurements of HD 142527 indicate that the CO emission extends up to $\sim 5$ AU, twice the measured emitting radius \citep{Pontoppidan2011}, and nicely matching an inner dust belt detected by \citet{Avenhaus2017}. A narrow emitting region would imply that CO only emits in the inner dust disk and is absent in the disk gap. None of the high NIR group I sources show the double peak structure expected from such a narrow emitting ring. This could be due to emission from the outer disk cavity wall and surface filling up the centre of the observed spectral line, but higher spectral and spatial resolution observations are needed to confirm this scenario.
\subsubsection{Inner disk CO emission is confined by substructures}
\begin{figure}
\centering
\includegraphics[width = \hsize]{{./submm_vs_CO_HD142666_2.0}.pdf}
\caption{Radial intensity cuts for the sub-millimeter dust from \citet{Huang2018} and the radial intensity as inferred from the CO rovibrational line profile of HD 142666. Vertical dashed lines show the maximum of the CO and sub-millimeter dust intensity. The CO emission is clearly contained within the bright sub-millimeter ring at 6 AU. The inset on the top right shows the observed line profile (black) and the fitted profile (blue). The vertical red line shows the inner edge of the dust disk at $\sim1$~AU as inferred from IR interferometry \citep{Schegerer2013}.}
\label{fig:HD142666_comp}
\end{figure}
One disk in this sample provides exceptional insight into the inner dust and gas structures: HD 142666. Its $R_\mathrm{CO}$ of $\sim3$~AU is relatively large for a group II object, and a bright ring at $\sim6$~AU has recently been found in sub-millimeter dust continuum images taken at very high angular resolution with ALMA \citep{Andrews2018, Huang2018}. Figure~\ref{fig:HD142666_comp} shows the comparison of the inner part of the sub-millimeter radial continuum intensity profile with the CO radial emission profile. The CO radial emission profile was derived by fitting a flat Keplerian disk intensity model to the observed line profile. The observed line profile and the flat disk fit can be seen in the inset in Fig.~\ref{fig:HD142666_comp}.
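The flat Keplerian intensity fit used here can be sketched as follows. The stellar mass, inclination, and power-law intensity index below are illustrative placeholders, not the fitted parameters for HD 142666:

```python
import numpy as np

V_KEP_1AU = 29.78  # km/s, Keplerian speed at 1 AU around 1 Msun

def keplerian_line_profile(r_in, r_out, mstar=1.6, incl_deg=45.0, p=-1.5,
                           nr=400, nphi=720, vmax=30.0, nv=601):
    """Line profile of a geometrically flat Keplerian disk with a
    power-law intensity I(r) ~ r**p (radii in AU, mstar in Msun).
    All parameter defaults are illustrative placeholders."""
    r = np.linspace(r_in, r_out, nr)
    phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
    rr, ph = np.meshgrid(r, phi)
    # Line-of-sight velocity of each disk surface element
    vlos = V_KEP_1AU * np.sqrt(mstar / rr) \
        * np.sin(np.radians(incl_deg)) * np.cos(ph)
    weight = rr**p * rr  # intensity times the r dr dphi area element
    edges = np.linspace(-vmax, vmax, nv)
    prof, _ = np.histogram(vlos.ravel(), bins=edges, weights=weight.ravel())
    vcen = 0.5 * (edges[1:] + edges[:-1])
    return vcen, prof / prof.max()
```

The peaks of the resulting double-horned profile sit at $\pm v_\mathrm{K}(R_\mathrm{out})\sin i$, which is how the observed line profile constrains the outer edge of the emitting region.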
The CO rovibrational emission is confined within the inner edge of the sub-millimeter dust ring, indicating that the process that is producing this sub-millimeter ring also confines the CO rovibrational emission. This could happen if the bright sub-millimeter ring traces a vertically extended dust structure that shadows the disk beyond, preventing CO emission from larger radii.
Our fit to the CO line profile for HD 142666 shows that the inner 1 AU of the disk is devoid of emission (Figure~\ref{fig:HD142666_comp}). This would be consistent with IR interferometric observations that report an inner disk radius of 1 AU \citep{Schegerer2013}, significantly larger than the dust sublimation radius of 0.3 AU expected from the stellar luminosity. This indicates that a small cavity has formed in this disk; similar cavities could also be present in the other group II disks that have $R_\mathrm{CO} \gtrsim 2 R_\mathrm{subl}$. These small cavities should still allow for efficient transport of both gas and dust (in a $\sim$ 100--1 ratio) from the inner disk edge to the star, as the accretion rates are normal and the stellar abundances for these sources are close to solar \citep{Kama2015,Banzatti2018}. The lower sub-mm intensity implies a lower dust surface density at 1 AU. If we assume there is no continuous build-up of material around the sub-millimeter ring, then the lower surface density in the inner disk gap should be accompanied by an increase in the velocity of the accretion flow. This would be consistent with an inner dead-zone edge, possibly due to the thermal ionisation of alkali metals \citep{Umebayashi1988}. Another option would be the presence of a giant planet within the small cavity with a saturated or leaky dust trap. In either case there would be a dust trap or traffic jam. This raises the tantalizing possibility that all group II disks with $R_\mathrm{CO} \gtrsim 1$ AU could have a bright sub-millimeter ring in the inner regions, as found in HD 142666 and HD 163296 \citep{Huang2018, Isella2018}.
\subsection{Implications for high $v2/v1$ at large radii}
\label{ssc:dis_outer}
\subsubsection{High gas-to-dust ratios by dust trapping}
The large CO columns needed to produce a high vibrational ratio at large radii indicate that around $R_\mathrm{CO}$ the gas surface density does not deviate strongly from what would be expected in absence of an inner cavity. To get the $v2$ lines bright enough, CO columns larger than $10^{20}$ cm$^{-2}$, and thus total $\ce{H}$ columns larger than $10^{24}$ cm$^{-2}$, are needed. This necessitates a drop in gas surface density of less than two orders of magnitude at $R_\mathrm{CO}$ if CO is at the canonical abundance of $10^{-4}$; alternatively, a CO abundance of more than $\sim10^{-6}$ is needed if the gas surface density is continuous within the dust cavity. The large CO column also implies that dust is underabundant by at least two orders of magnitude at $R_\mathrm{CO}$. The high gas-to-dust ratios necessary are likely not due to strong settling, as ALMA images of millimeter dust show cavities that are consistently larger than $R_\mathrm{CO}$, indicating that there is indeed a radial segregation \citep[e.g. HD 100546, HD 97048, IRS 48, HD169142; ][]{vanderMarel2016, vanderPlas2017, Fedele2017,Pinilla2018}. This is consistent with ALMA gas observations of transition disks, showing that also in the sub-millimeter, \ce{CO} cavities are smaller than dust cavities \citep{vanderMarel2013, vanderMarel2016}.
The steep line profiles and the lack of high velocity CO emission indicates that the CO column within $R_\mathrm{CO}$ is at least six orders of magnitude lower than the column at $R_\mathrm{CO}$ for the observed disks. As CO is hard to photodissociate, a drop in the total gas surface density in the cavity is necessary. A total gas surface density drop of 5 orders of magnitude between the dust ring and the CO poor cavity is needed to produce CO columns below $10^{14}$ cm$^{-2}$ \citep{Bruderer2013}. Observations of atomic oxygen or carbon could be used to measure gas depletion factors directly, constraining the depth of the cavity and thus the mass of a possible cavity-forming planet. Fig.~\ref{fig:Cartoon_inner}, right, illustrates the disk structure as reconstructed from modelling the CO rovibrational lines in low-NIR group I disks.
The combination of a large cavity in both gas and dust and of a radial segregation between gas and dust at the disk cavity wall fits well with what is expected for a giant planet carving an inner disk hole. A sufficiently massive planet can explain the gas depletion in the cavity together with the different cavity sizes in dust and gas, as a gas pressure maximum is created that traps dust at a slightly larger radius. Both photo-evaporation and dead zone models, instead, have problems creating transition disks that have an inner region rich in gas, but depleted in micron-sized grains \citep{Pinilla2012, Gorti2015, vanderMarel2015, vanderMarel2016, Pinilla2016}.
\subsubsection{Not all dust traps are equal}
In Sec.~\ref{ssc:dis_inner} we compared the high NIR group I disks to the group II disks on the basis of inner disk structures. However, at long wavelengths and larger radii these high NIR disks appear more similar to the rest of the group I sources showing cavities in sub-millimeter and scattered light imaging \citep[e.g.][]{Garufi2017}. One notable difference is that the inner disk in the high NIR group I sample seems to be misaligned with respect to the outer disk \citep[e.g.][]{Benisty2017,Banzatti2018}.
Due to this misalignment, the inner disk perhaps cannot shadow the cavity wall in the outer disk, and thus the CO emission in high NIR group I sources could in principle include a component similar to what is observed in low NIR group I disks. If so, there should be a narrow emission component with large $v2/v1$ at the center of the line. This could be the origin of the narrower $v2$ emission line observed in HD 135344B (Fig.~\ref{fig:obslineprofiles}), but even at the current high spectral resolution it is not possible to distinguish a narrower central peak in the $v1$ line. Part of the problem may also be due to flux filtration by the narrow slit (0.2'' for VLT-CRIRES), where the signal from the outer disk will be diluted by any slit that includes a bright inner disk but excludes part of the outer disk \citep{HeinBertelsen2014}.
Another possibility is that the outer disk in the high NIR group I systems may not have as high a gas-to-dust ratio as the rest of the group I systems. A normal gas-to-dust ratio would quench the $v2$ emission, with $v1$ emission from the gap outer edge filling in the line center and producing a narrow single peak. A close to ISM gas-to-dust ratio would imply that dust is not efficiently trapped at the gap outer edge. This could fit with a scenario in which the dusty inner disk and near solar abundances in the stellar atmosphere are replenished by dust from the outer disk. Mapping of the CO emission, either with multiple slit positions with VLT/CRIRES+ or observations with ELT/METIS integral field unit can determine the brightness and nature of the CO emission from the outer disk in high NIR group I sources (see Sec.~\ref{ssc:futureobs}).
\begin{figure*}
\centering
\includegraphics[width = \hsize]{./COv1log_double.pdf}
\caption{Simulated velocity integrated $v1 P(10)$ (top) and $v2 P(4)$ (bottom) line maps convolved to METIS resolution \citep{Brandl2014}. The colour scale is log-stretched between 0.1\% and 100 \% of the maximum of the $v1$ line flux.
The continuum has been subtracted before velocity integration. The contours show 0.1\%, 1\% and 10\% of the peak surface brightness. The disk geometries refer to those presented in Fig.~\ref{fig:Cartoonend}. The distance is assumed to be 150 parsec and the inclination is 45 degrees; the far side of the disk is in the north. For the ``flat'' geometry, a truncated gas disk (5 AU outer radius) is used. The high NIR group I image is constructed by combining two models: a truncated disk model with an inclination of 30 degrees, and a disk with a 40 AU hole and a strong dust trap (and thus strong $v2$ emission) with an inclination of 45 degrees. No interactions between the inner and outer disk have been taken into account.}
\label{fig:Metis_log}
\end{figure*}
\subsection{Comparison to T-Tauri disks: distribution of UV flux matters}
In several respects, disks around Herbig AeBe stars are analogous to T-Tauri disks. In CO rovibrational emission, however, they exhibit a very different behaviour \citep{Banzatti2015,Banzatti2018}. While the T-Tauri disks have a decreasing $v2/v1$ with increasing $R_\mathrm{CO}$, the Herbig disks show the opposite trend. This implies a significant difference in the distribution of molecular gas between Herbig and T-Tauri inner disks. This is also seen in the line profiles, since many of the T-Tauri disks have a two-component CO profile: if both components originate from the Keplerian disk, it means that the CO emitting region is more radially extended than in the Herbig disks.
This can be explained if the T-Tauri disks have CO emission from the inner rim (and from within the inner rim), while in Herbig disks CO is dissociated by high temperatures in these regions as explained above.
Both Herbigs and highly accreting T-Tauris have strong UV fields. The energy distribution as a function of wavelength is very different, however. The UV field of Herbig stars is dominated by the continuum coming from the stellar surface. For T-Tauri stars, on the other hand, most of the UV comes from the accretion shocks and is emitted in emission lines, especially Lyman-$\alpha$ \citep[e.g.][]{France2014}. Neither CO nor \ce{H2} can be photo-dissociated by Lyman-$\alpha$ photons. As such, the photo-dissociation of CO and \ce{H2} is much more efficient around Herbig stars. If hydrogen is mainly in atomic form, the formation of other molecules such as \ce{CO2} and \ce{H2O} is significantly slowed down, as both molecules need the \ce{OH} radical for their formation. This radical forms from \ce{H2 + O -> OH + H} and can be destroyed by \ce{OH + H -> H2 + O}. Only with abundant \ce{H2} can enough \ce{OH} be produced and can \ce{OH} survive long enough to form \ce{CO2} and \ce{H2O}. This could explain the lack of \ce{H2O} and \ce{CO2} emission towards Herbig AeBe disks in comparison to T-Tauri disks \citep[e.g.][]{Pontoppidan2010,Banzatti2017}.
The higher broad-band optical-UV flux in Herbig systems can have a larger impact on the gas temperature compared to the line dominated T-Tauri spectrum as more power can be absorbed by atomic and molecular electronic transitions before the dust absorbs the radiation. Finally, a larger fraction of the stellar flux can generate photo-electrons upon absorption by the dust \citep{Spaans1994}. All these effects will heat the gas more in Herbig than in T-Tauri disks, increasing CO dissociation in the former.
\subsection{Predictions for future observations}
\label{ssc:futureobs}
ELT-METIS will be a first-generation instrument on the ELT \citep{Brandl2014}. It will be able to do diffraction limited imaging and IFU spectroscopy at 3--5 $\mu$m. The 39 meter mirror allows for a spatial resolution of $\sim$ 0.03'' at 4.7 $\mu$m in a 0.5'' by 1'' field of view. IFU spectroscopy will be possible at a resolving power of $R = 100000$. With these capabilities, ELT-METIS will be able to resolve CO rovibrational emission both spatially and in velocity in nearby Herbig disks. Fig.~\ref{fig:Metis_log} shows continuum subtracted, velocity integrated maps of the $v1 P(10)$ line. The four different disk structures proposed in Fig.~\ref{fig:Cartoonend} can be clearly distinguished in these images. Spatially resolving the emission will enable the study of asymmetries in the spatial distribution of CO and will help explain the single-peaked nature of the CO lines observed in the high-NIR group I disks.
The CO rovibrational ratio is a good tracer of large cavities in Herbig disks, with high ratios ($>0.2$) only coming from low-NIR group I disks, intermediate ratios (0.05 -- 0.2) coming from group II disks and very low ratios ($< 0.05$) coming from the high-NIR group I disks. This could be exploited in more distant and more massive star forming regions, for instance by observing multiple Herbigs within the field of view of the \textit{JWST}-NIRSPEC multi object spectrograph, providing an efficient classification of large numbers of Herbig sources, either for more detailed follow-up or for population studies.
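The three ratio ranges quoted above amount to a simple threshold scheme. The sketch below (our own illustrative Python, not part of any published pipeline; the function name and exact boundary handling are assumptions) encodes that classification.

```python
def classify_herbig(v2_v1):
    """Classify a Herbig disk from its CO vibrational ratio v2/v1,
    using the thresholds proposed in the text."""
    if v2_v1 > 0.2:
        return "low-NIR group I"      # large gas- and dust-depleted cavity
    elif v2_v1 >= 0.05:
        return "group II"             # emission from the inner disk surface
    else:
        return "high-NIR group I"     # bright (misaligned) inner disk
```

Applied to a multi-object survey, such a cut would give a quick first-pass classification before detailed follow-up.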
In the shorter term, ground based, high sensitivity observations can be used to constrain densities in the inner disk. In all disk models, CO emission is not in LTE. This results in strongly decoupled vibrational and rotational excitation temperatures. On top of this, most of the $v1$ lines are also optically thick, so there should be a break in the $v1$ rotational diagram at the point where the lines become optically thin. The position and sharpness of this break critically depend on the density of the emitting area. Source-by-source modelling of the CO rovibrational lines over a large number of $J$ levels ($J > 30$) can derive this density, from which the mass in the inner few AU can be constrained. The same should apply to T-Tauri disks, but as the CO line flux can have a contribution from within the sublimation radius, it will not be straightforward to measure the mass in the dust rich inner disk.
\section{Conclusions}
\label{sec:conclusion}
The goal of this work has been to find the physical conditions that can reproduce trends observed in inner disks of Herbig stars, in terms of the CO vibrational ratio $v2/v1$, the radius of CO emission (from the HWHM of the lines) and the NIR excess (Fig.~\ref{fig:data}).
We have studied the excitation and line profiles of CO rovibrational emission from disks around Herbig Ae stars using LTE and non-LTE slab models, as well as using the thermo-chemical model DALI. Our findings are collected in Figs.~\ref{fig:Cartoonend}~and~\ref{fig:Cartoon_inner} and our conclusions can be summarised as follows:
\begin{itemize}
\item \textit{Emission from the inner disk surface}: CO emission with $v2/v1 < 0.2$ at $R_\mathrm{CO} < 5$ AU is reproduced by conditions found in the inner disk surface. CO columns must be $\lesssim 10^{18}$ cm$^{-2}$ and gas-to-dust ratios $< 1000$. Gas and dust temperatures must be coupled and between 400 and 1300 K. Emission from and within the inner disk rim is ruled out on the basis of the measured low vibrational ratios. A scenario in which the gas around the dust sublimation radius is hot, $> 3000$ K, is preferred to explain the absence of CO at the inner rim. At these temperatures, reactions between CO and atomic H should produce a primarily atomic gas that could be observed by IR interferometry.
\item \textit{Emission from the cavity wall}: CO emission with $v2/v1 > 0.2$ at $R_\mathrm{CO} > 5$ AU is reproduced by conditions found in a cavity wall at large disk radii. CO columns must be $> 10^{18}$ cm$^{-2}$ and gas-to-dust ratios $> 10000$. Gas and dust temperatures must be coupled and below 300 K, indicating efficient cooling of the gas. Within $R_\mathrm{CO}$ the gas surface density drops by at least 5 orders of magnitude. A high gas surface density, rather than UV pumping, is the most likely reason for the bright $v2$ lines providing high $v2/v1$ ratios.
\item \textit{Substructures in inner disks}: The broad, flat-topped or double-peaked line profiles that most group II sources exhibit cannot be explained by a smooth, flared disk. The radial extent of the CO emission in these sources is restricted. Flat disk models work better in matching these line profiles, but they generally still have too much flux at large radii. The outermost emitting radius in these sources thus most likely traces some variation in vertical scale-height. This could be the case for HD 142666, where all the CO emission arises from within the first resolved sub-millimeter dust ring at 6 AU.
\item \textit{Dust trapping}: Small cavities in gas and dust are possible in the group II objects. The low vibrational ratios observed indicate that dust-free and molecular gas rich cavities are not present. If there are small cavities formed by planets in the sample, then they apparently do not create a very efficient dust trap.
This is in contrast with the low NIR group I disks that have high vibrational ratios. The large gas surface density drop and the dust poor gas necessary in these disks fit very well with predictions of giant planets producing a cavity and a strong dust trap. High NIR group I disks, instead, may have dust traps that allow for dust filtration from the outer to the inner disk, sustaining normal elemental accretion onto the star.
\item \textit{CO as a tracer of disk cavities and inner dust belts/disks}: rovibrational CO emission can be used to identify dust cavities also in absence of direct imaging, especially when disks are further away than 200~pc; the difference in CO lines as observed in high- and low-NIR group I disks moreover shows that, in disks with large cavities, CO emission is a good tracer for residual inner dust belts/disks that may be unseen in direct imaging.
\item \textit{Molecular gas within the sublimation radius}: The lack of broad, high vibrational ratio, CO emission in many of the observed sources puts strong constraints on the amounts of dust free molecular gas within the sublimation radius: $N_\mathrm{CO} < 10^{18}$ cm$^{-2}$. This upper limit is consistent with the non-detection of the CO overtone (v = 2-0) emission towards most Herbig AeBe disks.
\item \textit{Future METIS observations}: ELT-METIS will be able to resolve the emitting area of the CO rovibrational lines. These observations can further constrain the emitting area in group II disks. For low NIR group I disks METIS should find CO rovibrational rings within the scattered light and sub-millimeter dust cavities, while for high NIR group I disks the extent of molecular gas in the inner disk as well as the gas-to-dust ratio, and thus efficiency of dust trapping, in the outer disk can be measured.
\end{itemize}
\begin{acknowledgements}
We thank Antonio Garufi for helpful discussions on imaging of Herbig disks, Inga Kamp and Daniel Harsono for help with the CO collisional rate coefficients and Paul Molliere for providing the results to the chemical equilibrium calculations.
Astrochemistry in Leiden is supported by the Netherlands Research School for
Astronomy (NOVA).
This work is partly based on observations obtained with iSHELL under program 2016B049 at the Infrared Telescope Facility, which is operated by the University of Hawaii under contract NNH14CK55B with the National Aeronautics and Space Administration. This work is partly based on observations made with CRIRES on ESO telescopes at the Paranal Observatory under programs 179.C-0151, 093.C-0432, 079.C-0349, 081.C-0833, 091.C-0671. This work is partly based on observations obtained with NIRSPEC at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The observatory was made possible by the generous financial support of the W. M. Keck Foundation.
This project has made use of the SciPy stack \citep{Jones2001}, including NumPy
\citep{Oliphant2006} and Matplotlib \citep{Hunter2007}.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}\label{Introduction}
Recall that a partition $\kappa$ of a positive integer $n$ is a sequence of positive integers $k_1 \ge k_2 \ge \cdots \ge k_m$ with $m\ge 1$ whose sum is $n$. The number $m$ is called the length of $\kappa$, and $k_i$ is the $i$th largest part of $\kappa$. Let $\mathcal{P}_n$ denote the set of partitions of $n$ and $\mathcal{P}_n(m)$ the set of partitions of $n$ with length \emph{at most} $m$. Thus $1\le m \le n$ and $\mathcal{P}_n(n)=\mathcal{P}_n$.
The set of all partitions $\mathcal{P}=\cup_{n} \mathcal{P}_n$ is called the macrocanonical ensemble, the set of partitions of $n$, $\mathcal{P}_n=\cup_{m} \mathcal{P}_{n}(m)$, is called the canonical ensemble, and $\mathcal{P}_{n}(m)$ is the microcanonical ensemble. Integer partitions have a close relationship with statistical physics (\cite{BK37, VU37, AK46}). To be more precise, a partition $\kappa \in\mathcal{P}_n$ can be interpreted as an assembly of particles with total energy $n$. The number of particles is the length of $\kappa$; the number of particles with energy $l$ is equal to $\# \{ j: k_j=l\}.$ Thus $\mathcal{P}_n(m)$ is the set of configurations $\kappa$ with a given number of particles $m$. It is known that $\mathcal{P}_n(m)$ corresponds to the Bose-Einstein assembly (see section 3 in \cite{AK46} for a brief discussion). Therefore the asymptotic distribution of a probability measure on $\mathcal{P}_n(m)$ as $n$ tends to infinity is connected to how the total energy of the system is distributed among a given number of particles.
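The correspondence between partitions and particle configurations is easy to make concrete. The following sketch (our own illustrative Python, feasible only for small $n$) enumerates the microcanonical ensemble $\mathcal{P}_n(m)$ and reads off the occupation numbers $\#\{ j: k_j=l\}$.

```python
from collections import Counter

def partitions(n, m, largest=None):
    """Generate P_n(m): partitions of n into at most m parts,
    each as a non-increasing tuple (k_1, ..., k_len)."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    if m == 0:
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, m - 1, k):
            yield (k,) + rest

def occupation_numbers(kappa):
    """Particle counts of a configuration: #{j : k_j = l} for each energy l."""
    return Counter(kappa)
```

For example, $\mathcal{P}_4(2)$ consists of $(4)$, $(3,1)$ and $(2,2)$, and the configuration $(3,1,1)$ has two particles of energy 1 and one of energy 3.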
The most natural probability measure on $\mathcal{P}_n(m)$ is the uniform measure. The uniform measure on $\mathcal{P}_n(m)$ for $m=n$ has been well studied (see \cite{EL, Fristedt, Pittel}). However, for the other values of $m$, to the best of our knowledge, the whole picture is not clear yet. In the authors' previous paper \cite{JW-LB}, as a by-product of studying the eigenvalues of the Laplace-Beltrami operator defined on symmetric polynomials, the limiting distribution of $(k_1,\ldots,k_m)$ chosen uniformly from $\mathcal{P}_n(m)$ is derived for fixed integer $m$. This is one of the motivations for this paper. As a special case of a more general measure on $\mathcal{P}_n(m)$, we obtain the asymptotic joint distribution of $(k_1,\ldots,k_m)\in \mathcal{P}_n(m)$ under the uniform measure for $m\to \infty$ with $m=o(n^{1/3})$. It would be an intriguing question to understand the uniform measure on $\mathcal{P}_n(m)$ for all values of $m$. The limiting shape of the Young diagram corresponding to $\mathcal{P}_n(m)$ with respect to the uniform measure was studied in \cite{vershik96, VK85, VY03} and \cite{Petrov} for $m=n$ and for $m=c \sqrt{n}$, where $c$ is a positive constant.
Another important class of probability measures on $\mathcal{P}_n(m)$ is the Plancherel measure, or the more general $\alpha$-Jack measure; the Plancherel measure is the special case of the $\alpha$-Jack measure with $\alpha=1$. It is known that both the Plancherel measure (see \cite{BDJ, BOO, Jo,O2}, a survey by \cite{O1} and the references therein) and the $\alpha$-Jack measure (see for instance \cite{BO, Fulman, sho}) have a deep connection with random matrix theory.
In this paper, we consider two new probability measures on $\mathcal{P}_n(m)$, assuming either that $m$ is fixed or that $m$ tends to infinity with $n$. We investigate the asymptotic joint distributions of $(k_1,\ldots,k_m)$ as $n$ tends to infinity. We first introduce the probability measures on $\mathcal{P}_n(m)$ and present the main results in Sections \ref{sec:r1} and \ref{sec:r2}. The proofs are given in the remainder of the paper.
\subsection{Restricted geometric distribution}\label{sec:r1}
The first kind of random partition on $\mathcal{P}_n(m)$ is defined as follows: for $\kappa=(k_1,\ldots,k_m) \in \mathcal{P}_n(m)$, consider the probability measure
\begin{eqnarray}\label{eq:another}
P(\kappa) = c\cdot q^{k_1}
\end{eqnarray}
where $0<q<1$ and $c=c_{n,m}$ is the normalizing constant such that $\sum_{\kappa \in \mathcal{P}_n(m)} P(\kappa) =1$. We call this probability measure the \emph{restricted geometric distribution}. This probability measure favors the partitions $\kappa$ with the smallest possible largest part $k_1$. Thus we are concerned with the fluctuation of $k_1$ around $\lceil \frac{n}{m} \rceil$.
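For small $n$ and $m$, the measure (\ref{eq:another}) can be tabulated exactly by brute force. The sketch below (our own illustrative Python; the function names are ours) enumerates $\mathcal{P}_n(m)$ and normalizes the weights $q^{k_1}$.

```python
def partitions(n, m, largest=None):
    """All of P_n(m) as non-increasing tuples (brute force; small n only)."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    if m == 0:
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, m - 1, k):
            yield (k,) + rest

def restricted_geometric(n, m, q):
    """Exact restricted geometric measure: P(kappa) proportional to q**k_1."""
    support = list(partitions(n, m))
    weights = [q ** kappa[0] for kappa in support]
    c = 1.0 / sum(weights)  # normalizing constant c_{n,m}
    return {kappa: c * w for kappa, w in zip(support, weights)}
```

For $n=4$, $m=2$ and any $0<q<1$, the table confirms that the measure favors $(2,2)$, the partition with the smallest largest part, over $(3,1)$ and $(4)$.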
When $m$ is a fixed integer, the main result is the following.
\begin{theorem}\label{another_weight} For given $m\geq 2$, let $\kappa=(k_1,\ldots,k_m) \in \mathcal{P}_n(m)$ be chosen with probability $P(\kappa)$ as in (\ref{eq:another}). For a subsequence $n\equiv j_0$ (mod $m$), define $j=j_0$ if $1\le j_0 \le m-1$ and $j=m$ if $j_0=0$. Then as $n\to\infty$, $\big(k_1 -\lceil \frac{n}{m} \rceil, \ldots, k_m-\lceil \frac{n}{m} \rceil \big)$ converges to a discrete random vector with pmf
\begin{eqnarray*}
f(l_1, \cdots, l_m)=\frac{q^{l_1}}{\sum_{l=0}^{\infty} q^{l }\cdot |\mathcal{P}_{ m(l+1)-j}(m-1)|}
\end{eqnarray*}
for all integers $(l_1, \cdots, l_m)$ with $l_1 \ge 0$, $l_1\geq \cdots \geq l_m$ and $\sum_{i=1}^ml_i=j-m.$
\end{theorem}
From Theorem \ref{another_weight}, we immediately obtain the limiting distribution of the largest part $k_1$, which fluctuates around its smallest possible value $\lceil \frac{n}{m}\rceil$. As a consequence, the conditional distribution of $(k_2,\ldots,k_m)$ given the largest part $k_1$ is asymptotically a uniform distribution.
\begin{coro}\label{cor:first} Given $m\geq 2$, let $\kappa=(k_1,\ldots,k_m) \in \mathcal{P}_n(m)$ be chosen with probability $P(\kappa)$ as in (\ref{eq:another}). For a subsequence $n\equiv j_0$ (mod $m$), define $j=j_0$ if $1\le j_0 \le m-1$ and $j=m$ if $j_0=0$. Then as $n \to\infty$, we have $k_1 -\lceil \frac{n}{m}\rceil$ converges to a discrete random variable with pmf
\begin{eqnarray*}
f(l)=\frac{q^{l}\cdot |\mathcal{P}_{ ml+m-j}(m-1)|}{\sum_{l=0}^{\infty} q^{l }\cdot |\mathcal{P}_{ ml+m-j}(m-1)|},~~ l\geq 0.
\end{eqnarray*}
Furthermore, the conditional distribution
of $(k_2 -\lceil \frac{n}{m}\rceil, \ldots, k_m-\lceil \frac{n}{m}\rceil)$ given $k_1=\lceil \frac{n}{m}\rceil+l_1$ $(l_1 \ge 0)$ is asymptotically a uniform distribution on the set
$\big\{(l_2,\ldots,l_m)\in \mathbb{Z}^{m-1};\, l_1 \ge l_2 \ge \ldots \ge l_m \ \mbox{and}\ l_1 + \sum_{i=2}^m l_i = j-m \big\}.$
\end{coro}
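The limiting pmf in Corollary \ref{cor:first} involves only the restricted partition numbers $|\mathcal{P}_n(m)|$, which satisfy the standard recurrence $|\mathcal{P}_n(m)| = |\mathcal{P}_n(m-1)| + |\mathcal{P}_{n-m}(m)|$. The sketch below (our own illustrative Python, with the infinite series truncated at a finite $l_{\max}$) evaluates $f(l)$ numerically.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, m):
    """|P_n(m)|: number of partitions of n into at most m parts."""
    if n == 0:
        return 1
    if n < 0 or m == 0:
        return 0
    return p(n, m - 1) + p(n - m, m)

def limiting_pmf_k1(q, m, j, lmax=200):
    """Truncated limiting pmf f(l) of k_1 - ceil(n/m): the weight of l is
    q**l * |P_{m l + m - j}(m - 1)|, normalized over l = 0..lmax."""
    w = [q ** l * p(m * l + m - j, m - 1) for l in range(lmax + 1)]
    z = sum(w)
    return [x / z for x in w]
```

A simple sanity check: for $m=2$ and $j=2$ one has $|\mathcal{P}_{2l}(1)|=1$, so $f$ reduces to the geometric distribution with $f(0)=1-q$.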
We present the proofs of Theorem \ref{another_weight} and Corollary \ref{cor:first} in Section \ref{sec:anotherfix}.
When $m$ tends to infinity with $n$ and $m=o(n^{1/3})$, we consider the limiting distribution of the largest part $k_1$. The main result is that with proper normalization, the largest part $k_1$ converges to a normal distribution.
\begin{theorem}\label{surprise_result}
Given $q\in (0,1)$, let $\kappa=(k_1,\ldots,k_m) \in \mathcal{P}_n(m)$ be chosen with probability $P(\kappa)$ as in (\ref{eq:another}). Set $\lambda =-\log q>0.$ If $m=m_n\to\infty$ with $m=o(n^{1/3})$, then $\frac{1}{\sqrt{m}}(k_1 -\lceil \frac{n}{m}\rceil - \gamma m)$
converges weakly to $N(0, \sigma^2)$ as $n\to\infty$, where
\begin{eqnarray*}
\gamma=\frac{1}{\lambda^{2}}\int_0^{\lambda}\frac{t}{e^t-1}\,dt\ \ \mbox{and}\ \ \sigma^2=\frac{2}{\lambda^{3}}\int_0^{\lambda}\frac{t}{e^t-1}\,dt- \frac{1}{\lambda(e^{\lambda}-1)}>0.
\end{eqnarray*}
\end{theorem}
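The constants $\gamma$ and $\sigma^2$ are one-dimensional integrals of $t/(e^t-1)$ and are straightforward to evaluate numerically. The sketch below (our own code; a plain composite trapezoid rule) computes them for a given $\lambda = -\log q$.

```python
import math

def _integrand(t):
    # t / (e^t - 1), with the removable singularity at t = 0 filled in
    return 1.0 if t == 0.0 else t / math.expm1(t)

def gamma_sigma2(lam, n=20000):
    """Evaluate gamma and sigma^2 of the theorem by the trapezoid rule."""
    h = lam / n
    integral = 0.5 * (_integrand(0.0) + _integrand(lam))
    integral += sum(_integrand(i * h) for i in range(1, n))
    integral *= h
    gamma = integral / lam ** 2
    sigma2 = 2.0 * integral / lam ** 3 - 1.0 / (lam * math.expm1(lam))
    return gamma, sigma2
```

For instance, $q=1/2$ (so $\lambda=\log 2$) gives $\gamma \approx 1.212$ and $\sigma^2 \approx 2.054$, and one can verify $\sigma^2>0$ directly for any $\lambda>0$.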
The proof of Theorem \ref{surprise_result} is analytic and quite involved. We use the Laplace method to estimate the normalization constant $c=c_{n,m}$ in \eqref{eq:another}. The same analysis is applied to obtain the asymptotic distribution of the largest part $k_1$. Thanks to the Szekeres formula (see \eqref{marriage}) for the number of restricted partitions, we first approximate $c_{n,m}$ with an integral
$$c_{n,m} \approx C(m)\cdot\int \exp(m\psi(t))\, dt$$
for some function $\psi(t)$ that has a global maximum at $t_0>0$. Thus
$$\psi(t) \approx \psi(t_0) -\frac{1}{2} |\psi''(t_0)| (t-t_0)^2$$ and
\begin{eqnarray}\label{eq:nor}
c_{n,m}
&\approx& C(m) e^{m\psi(t_0)}\cdot\int \exp\Big(-\frac{1}{2}m |\psi''(t_0)| (t-t_0)^2\Big)\, dt.
\end{eqnarray}
The most significant contribution in the integral comes from the $t$ close to $t_0$. Indeed, the integral in \eqref{eq:nor} is reduced to a Gaussian integral as $n\to \infty$. We prove Theorem \ref{surprise_result} in Section \ref{sec:anotherinf}.
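The Laplace step can be checked on a toy example where the integral is known in closed form (this is not the $\psi$ of the actual proof, only an illustration). With $\psi(t)=\log t - t$, maximised at $t_0=1$ with $\psi(1)=-1$ and $\psi''(1)=-1$, the exact value of $\int_0^\infty e^{m\psi(t)}\,dt$ is $\Gamma(m+1)/m^{m+1}$, while the leading-order approximation is $e^{m\psi(t_0)}\sqrt{2\pi/(m|\psi''(t_0)|)}$:

```python
import math

def laplace_approx(psi_t0, d2psi_t0, m):
    """Leading-order Laplace approximation of int exp(m psi(t)) dt around
    an interior maximum t0: exp(m psi(t0)) * sqrt(2 pi / (m |psi''(t0)|))."""
    return math.exp(m * psi_t0) * math.sqrt(2 * math.pi / (m * abs(d2psi_t0)))

# Toy check: psi(t) = log t - t, with t0 = 1, psi(1) = -1, psi''(1) = -1;
# the exact integral over (0, inf) is Gamma(m + 1) / m**(m + 1).
m = 50
exact = math.gamma(m + 1) / m ** (m + 1)
approx = laplace_approx(-1.0, -1.0, m)
```

Already at $m=50$ the two values agree to a fraction of a percent, consistent with the $O(1/m)$ relative error of the leading-order term.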
It remains to consider the conditional distribution of $(k_2,\ldots,k_m)$ given the largest part $k_1$. It is convenient to work with $k_i=\lceil \frac{n}{m}\rceil + l_i$ for $1\le i \le m$. In view of Theorem \ref{surprise_result}, let $k_1 = \lceil \frac{n}{m}\rceil + l_1$ with $l_1=\gamma m + C\cdot\sqrt{m}$. Given $l_1$, $(l_2,\ldots,l_m)$ has a uniform distribution on the set $\{(l_2,\ldots,l_m)\in \mathbb{Z}^{m-1};\, l_1 \ge l_2 \ge \ldots \ge l_m \ \mbox{and}\ l_1 + \sum_{i=2}^m l_i = j-m \}$. We consider a linear transform
$(j_2,\ldots,j_m)=(l_1-l_2,\ldots,l_1-l_m)$. Since uniform distribution is preserved under linear transformations, $(j_2,\ldots,j_m)$ has the uniform distribution on the set
$\{(j_2,\ldots,j_m)\in \mathbb{N}^{m-1};\, j_m \ge \ldots \ge j_3\ge j_2 \ \mbox{and}\ \sum_{i=2}^m j_i = ml_1+m-j \}$.
In general, the problem is reduced to understanding the uniform distribution on the set
\begin{eqnarray*}
\{(\lambda_2,\ldots,\lambda_m) \in \mathbb{N}^{m-1};\, \lambda_2 \ge \ldots \ge \lambda_m \ge 0 \ \mbox{and}\ \sum_{i=2}^m \lambda_i = m l_1 \}.
\end{eqnarray*}
To the best of our knowledge, it is not even clear what the limiting joint distribution of a partition chosen uniformly from $\mathcal{P}_{m^2}(\gamma m)$ is as $m$ tends to infinity. We raise the following questions for future projects.
\begin{question}\label{qu1}
Given $q\in (0,1)$, let $\kappa=(k_1,\ldots,k_m) \in \mathcal{P}_n(m)$ be chosen with probability $P(\kappa)$ as in (\ref{eq:another}). Assume $m$ tends to infinity with $n$ and $m=o(n^{1/3})$. Determine the asymptotic joint distribution of $(k_2,\ldots,k_m)$ given $k_1$. Furthermore, what is the limiting distribution of $(k_1,k_2, \ldots,k_m)$ as $n$ tends to infinity?
\end{question}
We have considered the limiting distribution of $\kappa \in \mathcal{P}_n(m)$ chosen as in (\ref{eq:another}) for $m$ fixed as well as $m=o(n^{1/3})$. It is also interesting to investigate this probability measure for other ranges of $m$.
\begin{question}\label{qu2}
Given $q\in (0,1)$, let $\kappa=(k_1,\ldots,k_m) \in \mathcal{P}_n(m)$ be chosen with probability $P(\kappa)$ as in (\ref{eq:another}). Identify the asymptotic distribution of $\kappa$ for the entire range $1\le m \le n$.
\end{question}
\subsection{A generalized distribution}\label{sec:r2}
Next we consider a probability measure on $\mathcal{P}_n(m)$ by choosing a partition $\kappa=(k_1, \ldots, k_m) \vdash n$ with probability
\begin{eqnarray}\label{eq:general}
P_n (\kappa) = c\cdot f(\frac{k_1}{n},\ldots,\frac{k_m}{n})
\end{eqnarray}
where $c=c_{n,m}=\big({\sum_{(k_1,\ldots,k_m)\in\mathcal{P}_n(m)}} f(\frac{k_1}{n},\ldots,\frac{k_m}{n})\big)^{-1}$ is the normalizing constant and $f(x_1,\ldots,x_m)$ is defined on $\overline{\nabla}_{m-1}$, the closure of $\nabla_{m-1}$. Here $\nabla_{m-1}$ is the ordered $(m-1)$-dimensional simplex defined as
\begin{eqnarray*}
\nabla_{m-1}: = \Big\{ (y_1,\ldots,y_m) \in [0,1]^m; y_1> y_2 > \ldots > y_{m-1} > y_m \text{ and } y_m=1-\sum_{i=1}^{m-1}y_i\Big\}.
\end{eqnarray*}
We assume $f$ is a probability density function on $\nabla_{m-1}$ and is Lipschitz on $\overline{\nabla}_{m-1}$.
When $m$ is a fixed integer, we study the limiting joint distribution of the parts of $\kappa$ chosen as in \eqref{eq:general}. The main result is the following.
\begin{theorem}\label{thm:general}
Let $m \ge 2$ be a fixed integer. Assume $\kappa=(k_1,\ldots, k_m) \in \mathcal{P}_n(m)$ is chosen as in \eqref{eq:general}, where $f$ is a probability density function on ${\nabla}_{m-1}$ and $f$ is Lipschitz on $\overline{\nabla}_{m-1}$. Then $(\frac{k_1}{n},\ldots, \frac{k_m}{n})$ converges weakly to a probability measure $\mu$ with density function $f(y_1,\ldots,y_m)$ defined on $\nabla_{m-1}$.
\end{theorem}
From Theorem \ref{thm:general}, we immediately obtain convergence to several familiar limiting distributions.
We say $(X_1, \ldots, X_m)$ has the \emph{symmetric Dirichlet distribution} with parameter $\alpha>0$, denoted by $(X_1,\ldots,X_m) \sim \text{Dir}(\alpha)$, if the distribution has pdf
$$\frac{\Gamma(m\alpha)}{\Gamma(\alpha)^m} x_1^{\alpha-1} \cdots x_m^{\alpha-1}$$ on the $(m-1)$-dimensional simplex
$$W_{m-1}:=\Big\{ (x_1,\ldots,x_{m-1},x_m) \in [0,1]^m; \sum_{i=1}^m x_i =1\Big\}$$ and zero elsewhere. In particular, if $\alpha =1$, the induced measure \eqref{eq:general} is the uniform distribution on $\mathcal{P}_n(m)$.
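A symmetric Dirichlet vector and its decreasing order statistics, as appearing in the corollaries below, can be simulated via the standard construction: normalise $m$ independent $\mathrm{Gamma}(\alpha,1)$ variates and sort (our own illustrative sketch).

```python
import random

def dirichlet_order_stats(m, alpha, rng=random):
    """Decreasing order statistics (X_(1), ..., X_(m)) of a symmetric
    Dirichlet(alpha) vector, drawn via normalised Gamma variates."""
    g = [rng.gammavariate(alpha, 1.0) for _ in range(m)]
    s = sum(g)
    return sorted((x / s for x in g), reverse=True)
```

Each draw lies on the simplex $W_{m-1}$ and is sorted into the ordered simplex, matching the limit object of Corollary \ref{cor:dir}.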
\begin{coro}\label{cor:dir}
Let $m \ge 2$ be a fixed integer. Assume $\kappa=(k_1,\ldots, k_m) \in \mathcal{P}_n(m)$ is chosen as in \eqref{eq:general} with $f(x_1,\ldots,x_m)= c\cdot x_1^{\alpha-1} \cdots x_m^{\alpha-1}$, where $\alpha>2$ or $\alpha=1$ and $1/c=\int_{\nabla_{m-1}} x_1^{\alpha-1} \cdots x_m^{\alpha-1} \,dx_1\ldots dx_{m-1}$. Then
$$\Big(\frac{k_1}{n},\ldots, \frac{k_m}{n}\Big) \to (X_{(1)}, \ldots, X_{(m)})$$
weakly as $n\to\infty$, where $(X_{(1)}, \ldots, X_{(m)})$ are the decreasing order statistics of $(X_1,\ldots,X_m) \sim \text{Dir}(\alpha).$
\end{coro}
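The limit object in Corollary \ref{cor:dir} is straightforward to simulate. The following sketch (an illustration only, not part of the argument) draws the decreasing order statistics of $\text{Dir}(\alpha)$ via the standard construction that normalizes independent Gamma variables; the seed and parameter values are arbitrary choices.

```python
import random

def dirichlet_order_stats(m, alpha, rng):
    # Dir(alpha) on the simplex W_{m-1} via normalized independent
    # Gamma(alpha, 1) draws, returned in decreasing order:
    # the limit object (X_{(1)}, ..., X_{(m)}) of Corollary cor:dir.
    g = [rng.gammavariate(alpha, 1.0) for _ in range(m)]
    s = sum(g)
    return sorted((x / s for x in g), reverse=True)

rng = random.Random(2024)
sample = dirichlet_order_stats(5, 3.0, rng)
```

The output lies on the ordered simplex $\overline{\nabla}_{m-1}$: its entries are nonincreasing and sum to one.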
\begin{coro}\label{cor:sphere}
Let $m \ge 2$ be a fixed integer. Assume $\kappa=(k_1,\ldots, k_m) \in \mathcal{P}_n(m)$ is chosen as in \eqref{eq:general} with $f(x_1,\ldots,x_m)= c\cdot x_1^{\alpha-1} \cdots x_m^{\alpha-1}$, where $\alpha>2$ or $\alpha=1$ and $1/c=\int_{\nabla_{m-1}} x_1^{\alpha-1} \cdots x_m^{\alpha-1} \,dx_1\ldots dx_{m-1}$. Then
$$\left((\frac{k_1}{n})^{\alpha},\ldots, (\frac{k_m}{n})^{\alpha}\right) \to (x_1, \ldots, x_m)$$
as $n\to \infty$, where $(x_1, \ldots, x_m)$ has the uniform distribution on $$\{(y_1,\ldots,y_m)\in[0,1]^m; \sum_{i=1}^m y_i^{1/{\alpha}} = 1, y_1 \ge\ldots \ge y_m \},$$ or equivalently, $(x_1, \ldots, x_m)$ are the decreasing order statistics of the uniform distribution on $\{(y_1,\ldots,y_m)\in[0,1]^m; \sum_{i=1}^m y_i^{1/{\alpha}} = 1\}$.
\end{coro}
For the special case $\alpha =1$, that is, when $\kappa$ is chosen uniformly from $\mathcal{P}_n(m)$, the conclusion of Corollary \ref{cor:sphere} was first proved in \cite{JW-LB}. The proofs of Theorem \ref{thm:general}, Corollary \ref{cor:dir} and Corollary \ref{cor:sphere} are given in Section \ref{sec:generalfix}.
When $m$ grows with $n$, we establish the limiting distribution of random restricted partitions in $\mathcal{P}_n(m)$. Define
\begin{eqnarray*}
\nabla: = \Big\{ (y_1, y_2, \ldots) \in [0,1]^\infty;\ y_1\geq y_2 \geq \cdots\ \mbox{and}\ \sum_{i=1}^{\infty}y_i \le 1 \Big\}.
\end{eqnarray*}
Note that $\nabla_{m-1}$ can be viewed as a subset of
\begin{eqnarray*}
\nabla_{\infty}= \{(y_1,y_2,\ldots) \in [0,1]^{\infty};\ y_1\geq y_2 \geq \cdots\ \mbox{and} \sum_{i=1}^{\infty}y_i = 1 \}
\end{eqnarray*}
by natural embedding, and $\nabla$ is the closure of $\nabla_{\infty}$ in $[0,1]^{\infty}$ with the topology inherited from $[0,1]^{\infty}$. By Tychonoff's theorem, $\nabla_{m-1}$ and $\nabla$ are compact. Furthermore, both $\nabla_{m-1}$ and $\nabla$ are compact Polish spaces, and thus any probability measure on them is tight. Therefore, for probability measures $\{\mu_n\}_{n\ge 1}$ and $\mu$ on $\nabla$, $\mu_n$ converges to $\mu$ weakly if all the finite-dimensional distributions of $\mu_n$ converge to the corresponding finite-dimensional distributions of $\mu$.
\begin{theorem}\label{thm:general-infinity}
Let $m=o(n^{1/3}) \to \infty$ as $n\to\infty.$ Assume $\kappa=(k_1,\ldots, k_m) \in \mathcal{P}_n(m)$ is chosen with probability as in \eqref{eq:general} where $f$ is a probability density function on $\nabla_{m-1}$ and is Lipschitz on $\overline{\nabla}_{m-1}$. Let $(X_{m,1}, \cdots, X_{m,m})$ have density function
$f(y_1,\cdots,y_m)$ defined on $\nabla_{m-1}$. If $(X_{m,1}, \cdots, X_{m,m})$ converges weakly to $X$ defined on $\nabla$ as $n\to \infty$, then $(\frac{k_1}{n},\cdots, \frac{k_m}{n})$ converges weakly to $X$ as $n\to \infty$.
\end{theorem}
\begin{comment}
A natural probability measure defined on $\nabla$ is the \emph{Poisson-Dirichlet distribution} with parameter $\theta>0$(see Section 2.1 in \cite{Feng}). This distribution is defined as the limit of the decreasing order statistics of $(X_1,\ldots,X_m) \sim \text{Dir}(\frac{\theta}{m-1})$ as $m\to \infty$. In light of Theorem \ref{thm:general-infinity}, we can prove the analogue of Corollary \ref{cor:dir} for the case that $m$ depends on $n$.
\begin{coro}\label{cor:pos-dir}
Assume $m=o(n^{1/3}) \to \infty$ as $n\to\infty.$ Let $\kappa=(k_1,\ldots, k_m) \in \mathcal{P}_n(m)$ is chosen with as in \eqref{eq:general} where $f(x_1,\ldots, x_m)$ is a probability density function on $\nabla_{m-1}$ proportional to $x_1^{\alpha-1} \cdots x_m^{\alpha-1}$ with $\alpha= \frac{\theta}{m-1}$ for some $\theta>0$, then $(\frac{k_1}{n},\cdots, \frac{k_m}{n})$ converges weakly to $X$ that has the Poisson-Dirichlet distribution on $\nabla$ with parameter $\theta$. In particular, we obtain the marginal distribution of $k_1$. For any $0< x<1$,
\begin{eqnarray*}
P(\frac{k_1}{n} \le x) \to 1+ \sum_{k=1}^{\infty}\frac{(-\theta)^k}{k!} \int_{B_x^k} \frac{(1-\sum_{i=1}^k x_i)^{\theta-1}}{x_1x_2\cdots x_k}\, dx_1dx_2\cdots dx_k
\end{eqnarray*}
as $n\to \infty$, where $B_x^k = \{(x_1,\ldots,x_k);\ x < x_i < 1 \text{~and~} \sum_{i=1}^k x_i <1 \}.$
\end{coro}
\end{comment}
We prove Theorem \ref{thm:general-infinity} in Section \ref{sec:generalinf}. We have investigated the limiting distribution of $\kappa \in \mathcal{P}_n(m)$ chosen as in \eqref{eq:general} for both $m$ fixed and $m=o(n^{1/3})$. It would be interesting to understand the limiting distribution of $\kappa$ for other ranges of $m$. We leave this as an open question for future research.
\begin{question}
Let $\kappa=(k_1,\ldots,k_m) \in \mathcal{P}_n(m)$ be chosen with probability $P_n(\kappa)$ as in (\ref{eq:general}). Identify the asymptotic distribution of $\kappa$ for the entire range $1\le m \le n$.
\end{question}
{\bf Notation:} For $x\in \mathbb{R}$, the notation $\lceil x \rceil$ stands for the smallest integer greater than or equal to $x$, and $[x]$ denotes the largest integer less than or equal to $x$. We use $\mathbb{Z}$ to denote the set of all integers. For a set $A$, the notation $\#A$ or $|A|$ stands for the cardinality of $A$, and $c\cdot A=\{c\cdot a: a\in A\}$. Denote $\mathcal{P}_0(k)=1$ for convenience. For $f(n),g(n)>0$, we write $f(n)\sim g(n)$ if $\lim_{n\to \infty} f(n)/g(n)=1$.
\section{Proofs of restricted geometric distribution}\label{sec:another}
\subsection{Case I: $m$ is fixed}\label{sec:anotherfix}
We start with a lemma concerning the number of restricted partitions $\mathcal{P}_n(m)$ with the largest part fixed.
\begin{lemma}\label{lantern_wind} Let $l \geq 0$, $m\geq 2$ and $n\geq 1$ be integers. Set $j=m+n-m\lceil \frac{n}{m} \rceil$. Then $1\leq j \leq m$.
If $0\le l \le \frac{1}{m-1}(\frac{n}{m}-m)$, we have
\begin{eqnarray}\label{eq:newbaby}
\# \Big\{(k_1, k_2,\ldots,k_m) \in \mathcal{P}_n(m);\, k_1= \lceil \frac{n}{m} \rceil + l \Big\}= |\mathcal{P}_{ m(l+1)-j}(m-1)|;
\end{eqnarray}
If $ \frac{1}{m-1}(\frac{n}{m}-m) < l \le n-\lceil \frac{n}{m} \rceil$, we have
\begin{eqnarray}\label{eq:othervalues}
\# \Big\{(k_1, k_2,\ldots,k_m) \in \mathcal{P}_n(m);\, k_1= \lceil \frac{n}{m} \rceil + l \Big\} \le |\mathcal{P}_{ m(l+1)-j}(m-1)|.
\end{eqnarray}
\end{lemma}
\begin{proof} For $\kappa=(k_1,\ldots,k_m) \in \mathcal{P}_n(m)$, let us rewrite $k_i = \lceil \frac{n}{m} \rceil + l_i$ for $1\le i \le m$. By assumption, $l_1=l \ge 0$. Since $\kappa \vdash n$, we have $l_1 \ge l_2 \ge \ldots \ge l_m \geq -\lceil \frac{n}{m} \rceil$ and $l_1 + \sum_{i=2}^m l_i = n-m \lceil \frac{n}{m} \rceil = j-m$ by assumption. Therefore,
\begin{eqnarray*}
& & \#\Big\{(k_1, k_2,\ldots,k_m) \in \mathcal{P}_n(m);\, k_1= \lceil \frac{n}{m} \rceil + l_1 \Big\}\\
& = &\#\Big\{(l_2,\ldots,l_m)\in \mathbb{Z}^{m-1};\, l_1 \ge l_2 \ge \ldots \ge l_m \geq -\lceil \frac{n}{m}\rceil\ \mbox{and}\ l_1 + \sum_{i=2}^m l_i = j-m \Big\}\\
& = & \#\Big\{(j_2,\ldots,j_m)\in \mathbb{Z}^{m-1};\, l_1 +\lceil \frac{n}{m}\rceil \ge j_m \ge \ldots \ge j_2\geq 0\ \mbox{and}\ \sum_{i=2}^m j_i = m(l_1+1)-j \Big\}
\end{eqnarray*}
by the transform $j_i = l_1 - l_i$ for $2\le i \le m$.
Assume $0\le l \le \frac{1}{m-1}(\frac{n}{m}-m)$. If $j_m \ge \ldots \ge j_2\geq 0$ and $\sum_{i=2}^m j_i = m(l_1+1)-j$, then
\begin{eqnarray*}
j_m\leq \sum_{i=2}^m j_i = m(l_1+1)-j\leq m(l_1+1)\leq l_1+\lceil \frac{n}{m} \rceil
\end{eqnarray*}
by assumption, the notation $l_1=l$ and the fact $\lceil x\rceil\geq x$ for any $x\in \mathbb{R}$. It follows that the left hand side of (\ref{eq:newbaby}) is identical to
\begin{eqnarray*}
& & \#\Big\{(j_2,\ldots,j_m)\in \mathbb{Z}^{m-1};\, j_m \ge \ldots \ge j_2\geq 0\ \mbox{and}\ \sum_{i=2}^m j_i = m(l_1+1)-j \Big\}\\
&=& |\mathcal{P}_{ m(l+1)-j}(m-1)|.
\end{eqnarray*}
For $ \frac{1}{m-1}(\frac{n}{m}-m) < l \le n-\lceil \frac{n}{m} \rceil$, the upper bound \eqref{eq:othervalues} follows directly from the definitions of the two sets.
\end{proof}
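As a sanity check (an aside, not part of the proof), the counting identity \eqref{eq:newbaby} can be verified by brute force for small $n$ and $m$. The sketch below follows the proof's convention that parts may equal zero, i.e.\ $\mathcal{P}_n(m)$ counts partitions of $n$ into at most $m$ parts; the function names are ad hoc.

```python
from math import ceil

def p_at_most(n, m):
    # |P_n(m)|: partitions of n into at most m parts
    # (equivalently, into parts of size <= m, by conjugation)
    if n == 0:
        return 1
    if n < 0 or m == 0:
        return 0
    return p_at_most(n, m - 1) + p_at_most(n - m, m)

def lhs(n, m, l):
    # number of (k_1 >= ... >= k_m >= 0) with sum n and k_1 = ceil(n/m) + l
    K = ceil(n / m) + l
    def rec(rem, parts, cap):
        if parts == 0:
            return 1 if rem == 0 else 0
        return sum(rec(rem - v, parts - 1, v)
                   for v in range(min(cap, rem), -1, -1))
    return rec(n - K, m - 1, K)

def rhs(n, m, l):
    # |P_{m(l+1)-j}(m-1)| with j = m + n - m*ceil(n/m), as in the lemma
    j = m + n - m * ceil(n / m)
    return p_at_most(m * (l + 1) - j, m - 1)
```

For instance, with $n=20$, $m=3$ the admissible range is $0\le l\le 1$, and both sides agree ($1$ and $3$ respectively).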
Now we are ready to present the proof of Theorem \ref{another_weight}.
\begin{proof}[Proof of Theorem \ref{another_weight}] First, it is easy to check that for the subsequence
$n\equiv j_0$ (mod $m$), if we define $j=j_0$ if $1\le j_0 \le m-1$ and $j=m$ if $j_0=0$, then $j=m+n-m\lceil \frac{n}{m} \rceil$.
Set
\begin{eqnarray}\label{glad_carpet}
M_n=\Big[\frac{1}{m-1}(\frac{n}{m}-m)\Big].
\end{eqnarray}
We first estimate the normalizing constant $c$ in (\ref{eq:another}).
\begin{eqnarray*}
1&=& \sum_{\kappa \in \mathcal{P}_n(m)} P(\kappa) = c\cdot \sum_{k_1 =\lceil \frac{n}{m} \rceil}^n \sum_{(k_1, k_2,\ldots,k_m) \vdash n} q^{k_1} \\
&=& c\cdot \sum_{l=0}^{n-\lceil \frac{n}{m} \rceil} q^{\lceil \frac{n}{m} \rceil +l }\sum_{(\lceil \frac{n}{m} \rceil+l,k_2,\ldots,k_m) \vdash n} 1.
\end{eqnarray*}
Indeed, as $n$ tends to infinity,
\begin{eqnarray*}
& &\sum_{l=0}^{n-\lceil \frac{n}{m} \rceil} q^{\lceil \frac{n}{m} \rceil +l }\sum_{(\lceil \frac{n}{m} \rceil+l,k_2,\ldots,k_m) \vdash n} 1 \\
& &\sim \sum_{l=0}^{M_n} q^{\lceil \frac{n}{m} \rceil +l }\sum_{(\lceil \frac{n}{m} \rceil+l,k_2,\ldots,k_m) \vdash n} 1.
\end{eqnarray*}
By Lemma \ref{lantern_wind},
\begin{eqnarray*}
\frac{\sum_{l=M_n+1}^{n-\lceil \frac{n}{m} \rceil} q^{\lceil \frac{n}{m} \rceil +l }\sum_{(\lceil \frac{n}{m} \rceil+l,k_2,\ldots,k_m) \vdash n} 1}{\sum_{l=0}^{M_n} q^{\lceil \frac{n}{m} \rceil +l }\sum_{(\lceil \frac{n}{m} \rceil+l,k_2,\ldots,k_m) \vdash n} 1}
&\le& \frac{\sum_{l=M_n+1}^{n-\lceil \frac{n}{m} \rceil} q^l \cdot |\mathcal{P}_{ m(l+1)-j}(m-1)|}{\sum_{l=0}^{M_n} q^l \cdot |\mathcal{P}_{ m(l+1)-j}(m-1)|}\\
&\sim& \frac{\sum_{l=M_n+1}^{n-\lceil \frac{n}{m} \rceil} q^l \binom{ml+m-j-1}{m-2} }{\sum_{l=0}^{M_n} q^l \binom{ml+m-j-1}{m-2}},
\end{eqnarray*}
where the last step follows from \eqref{eq:size}. Note that the series $\sum_{s=1}^{\infty} s^{m-2} q^s$ converges for $0<q<1$. We have
\begin{eqnarray*}
\frac{\sum_{l=M_n+1}^{n-\lceil \frac{n}{m} \rceil} q^l \binom{ml+m-j-1}{m-2} }{\sum_{l=0}^{M_n} q^l \binom{ml+m-j-1}{m-2}} =O\Big( \frac{\sum_{l=M_n+1}^{n-\lceil \frac{n}{m} \rceil} q^l l^{m-2}}{\sum_{l=0}^{M_n} q^l l^{m-2}}\Big) = o(1).
\end{eqnarray*}
Therefore, one obtains the normalizing constant
\begin{eqnarray}\label{Great_bug}
c\sim \frac{1}{ q^{\lceil \frac{n}{m} \rceil} \sum_{l=0}^{M_n} q^{l } \cdot |\mathcal{P}_{ m(l+1)-j}(m-1)|}.
\end{eqnarray}
Now we study the limiting joint distribution of the parts $$(k_1, k_2, \ldots, k_m) = (\lceil \frac{n}{m} \rceil + l_1, \lceil \frac{n}{m} \rceil + l_2, \ldots, \lceil \frac{n}{m} \rceil + l_m).$$
First, we claim that it is enough to consider $l_1$ to be any fixed integer from $\{ 0, 1, 2, \ldots\}$. Indeed, for any $L=L(n) \to \infty$ as $n \to \infty$, it follows from \eqref{eq:size} that
\begin{eqnarray*}
P(k_1 \ge \lceil \frac{n}{m} \rceil + L)&=&\sum_{l=L}^{M_n} P(k_1 = \lceil \frac{n}{m} \rceil + l)\\
&=& \sum_{l=L}^{M_n}c \cdot q^{\lceil \frac{n}{m} \rceil + l} |\mathcal{P}_{ ml+m-j}(m-1)|\\
&\sim& c\cdot q^{\lceil \frac{n}{m} \rceil} \sum_{l=L}^{M_n} \frac{\binom{ml+m-j-1}{m-2}}{(m-1)!} q^{l}.
\end{eqnarray*}
Plugging in the normalizing constant $c_{n,m}$ and letting $L \to \infty$, we have
\begin{eqnarray*}
P(k_1 \ge \lceil \frac{n}{m} \rceil + L) &=& O\Big(\frac{\sum_{l=L}^{M_n} l^{m-2} q^{l}}{\sum_{l=0}^{M_n} q^{l } \cdot |\mathcal{P}_{ ml+m-j}(m-1)|}\Big)\\
&=& o(1),
\end{eqnarray*}
as $n \to \infty$. The last equality follows from the fact that the series $\sum_{s=1}^{\infty} s^{m-2} q^s$ converges for $0<q<1$.
Likewise, we have as $n$ tends to infinity,
\begin{eqnarray}\label{eq:constant}
c \sim q^{-\lceil \frac{n}{m} \rceil} \frac{1}{\sum_{l=0}^{\infty} q^l \cdot |\mathcal{P}_{ ml+m-j}(m-1)|}.
\end{eqnarray}
Therefore, for any given $l_1 = 0, 1, 2, \ldots$, we conclude that
\begin{eqnarray*}
& & P\Big(k_1 = \lceil \frac{n}{m} \rceil + l_1, k_2 = \lceil \frac{n}{m} \rceil + l_2, \ldots, k_m=\lceil \frac{n}{m} \rceil + l_m\Big)\\
&= &c \cdot q^{\lceil \frac{n}{m} \rceil + l_1}\to \frac{q^{l_1}}{\sum_{l=0}^{\infty} q^{l } \cdot |\mathcal{P}_{ ml+m-j}(m-1)|}.
\end{eqnarray*}
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:first}]
By Theorem \ref{another_weight}, it is enough to consider $k_1 = \lceil \frac{n}{m}\rceil + l$ for $l \in \{0,1,2,\ldots\}$ in the limiting distribution. From \eqref{eq:another}, Lemma \ref{lantern_wind} and \eqref{eq:constant},
\begin{eqnarray*}
P(k_1 = \lceil \frac{n}{m}\rceil + l) &=& c\cdot q^{\lceil \frac{n}{m}\rceil + l} \sum_{(\lceil \frac{n}{m} \rceil+l,k_2,\ldots,k_m) \vdash n} 1\\
&=& c\cdot q^{\lceil \frac{n}{m}\rceil + l} \cdot |\mathcal{P}_{ ml+m-j}(m-1)|\\
&\to & \frac{q^{l}\cdot |\mathcal{P}_{ ml+m-j}(m-1)|}{\sum_{l=0}^{\infty} q^{l }\cdot |\mathcal{P}_{ ml+m-j}(m-1)|}
\end{eqnarray*}
as $n\to \infty$.
Furthermore, since
\begin{eqnarray*}
&&P(k_2 -\lceil \frac{n}{m}\rceil=l_2,\ldots, k_m -\lceil \frac{n}{m}\rceil=l_m ~|~ k_1 -\lceil \frac{n}{m}\rceil=l_1) \\
& &\quad = \frac{P(k_1 -\lceil \frac{n}{m}\rceil=l_1, k_2 -\lceil \frac{n}{m}\rceil =l_2,\ldots, k_m -\lceil \frac{n}{m}\rceil=l_m)}{P(k_1 -\lceil \frac{n}{m}\rceil=l_1)}\\
& & \quad \sim \frac{q^{l_1}\big/\sum_{l=0}^{\infty} q^{l}\cdot|\mathcal{P}_{ml+m-j}(m-1)|}{q^{l_1}\cdot|\mathcal{P}_{ml_1+m-j}(m-1)|\big/\sum_{l=0}^{\infty} q^{l}\cdot|\mathcal{P}_{ml+m-j}(m-1)|} = \frac{1}{|\mathcal{P}_{ ml_1+m-j}(m-1)|}
\end{eqnarray*}
as $n\to \infty$, it follows immediately that the conditional distribution
of $(k_2 -\lceil \frac{n}{m}\rceil, \ldots, k_m-\lceil \frac{n}{m}\rceil)$ given $k_1=\lceil \frac{n}{m}\rceil+l_1$ $(l_1 \ge 0)$ is asymptotically uniform on the set
$\big\{(l_2,\ldots,l_m)\in \mathbb{Z}^{m-1};\, l_1 \ge l_2 \ge \ldots \ge l_m \ \mbox{and}\ l_1 + \sum_{i=2}^m l_i = j-m \big\}.$ This completes the proof.
\end{proof}
\subsection{Case II: $m$ tends to infinity and $m=o(n^{1/3})$}\label{sec:anotherinf}
The Szekeres formula (see \citet{Sz1, Sz2}; see also \citet{Can} and \cite{Romik}) states that
\begin{eqnarray}\label{marriage}
|\mathcal{P}_{n}(k)| \sim \frac{f(u)}{n}e^{\sqrt{n} g(u)}
\end{eqnarray}
uniformly for $k\ge n^{1/6}$, where $u=k/\sqrt{n}$, and
\begin{eqnarray}
& & f(u)=\frac{v}{2^{3/2}\pi u}\big(1-e^{-v}-\frac{1}{2}u^2e^{-v}\big)^{-1/2}; \label{snow_gone}\\
& & g(u)=\frac{2v}{u}-u\log (1-e^{-v}),\label{spring_come}
\end{eqnarray}
with $v=v(u)$ determined implicitly by
\begin{eqnarray}\label{blue_star}
u^2=\frac{v^2}{\int_0^v\frac{t}{e^t-1}\,dt}.
\end{eqnarray}
We start with a technical lemma that will be used in the proof of Theorem \ref{surprise_result} later.
\begin{lemma}\label{London} Let $\lambda>0$ be given. Define $\psi(t)= \frac{g(t)}{t}-\frac{\lambda}{t^2}$ for $t>0.$ Then
\begin{eqnarray*}
t_0:=\frac{\lambda}{(\int_0^{\lambda}\frac{t}{e^t-1}\,dt)^{1/2}}\ \ \mbox{satisfies}\ \ \psi''(t_0) = -\frac{2\lambda(e^{\lambda}-1)}{t_0^4(e^{\lambda}-1-\frac{1}{2}t_0^2)} <0.
\end{eqnarray*}
Further, $\psi'(t_0)=0$, $\psi(t)$ is strictly increasing on $(0, t_0]$ and strictly decreasing on $[t_0, \infty)$.
\end{lemma}
\begin{proof} Trivially, the function $\frac{t}{e^t-1}=(\sum_{i=1}^{\infty}\frac{t^{i-1}}{i!})^{-1}$ is positive and decreasing in $t\in (0, \infty).$
It follows that $v=v(u)>0$ for all $u\in (0, \infty)$ and
$$\frac{v^2}{u^2} = \int_0^v\frac{t}{e^t-1}\,dt > \frac{v^2}{e^v-1}.$$
Thus $e^v - 1- u^2 >0$. In particular,
\begin{eqnarray}\label{bad_tree}
e^v-1-\frac{1}{2}u^2 >0.
\end{eqnarray}
Differentiating \eqref{blue_star} with respect to $u$, we get
$$2v\cdot v' = 2u \int_0^v\frac{t}{e^t-1}\,dt + u^2 \frac{v\cdot v'}{e^v-1}.$$ This implies that
$\frac{v'}{e^v-1} = \frac{2v'}{u^2} - \frac{2v}{u^3}$, or equivalently,
\begin{eqnarray}\label{eye_oil}
v'= \frac{v}{u} + \frac{uv}{2(e^v-1 - \frac{1}{2}u^2)}.
\end{eqnarray}
Consequently,
$v'=v'(u)>0$ for all $u>0$, and thus $v(u)$ is strictly increasing on $(0, \infty)$. Taking the derivative of $g(u)$ in (\ref{spring_come}) and using (\ref{blue_star}) and (\ref{eye_oil}), we see
\begin{eqnarray}\label{xenix}
& & g'(u)= -\log(1-e^{-v});\\
& & g''(u)= -\frac{v' e^{-v}}{1-e^{-v}} = -\frac{v/u}{e^v-1-\frac{1}{2}u^2}.\nonumber
\end{eqnarray}
Therefore
\begin{eqnarray}\label{Beijing}
(\frac{g(u)}{u})'= \frac{ug'(u)-g(u)}{u^2}
\end{eqnarray}
and
\begin{eqnarray*}
(\frac{g(u)}{u})'' &= &\frac{g''(u)}{u}-2\frac{g'(u)}{u^2} + 2\frac{ g(u)}{u^3} \\
&=& \frac{v}{u^4} \Big( 4- \frac{u^2}{e^v-1-\frac{1}{2}u^2} \Big).
\end{eqnarray*}
With the above preparation, we now study $\psi(t)$ (we switch the variable ``$u$'' to ``$t$'').
\begin{eqnarray}
\psi''(t)&=&(\frac{g(t)}{t} -\frac{\lambda}{t^2})'' \nonumber\\
&=& \frac{v}{t^4} \Big( 4- \frac{t^2}{e^v-1-\frac{1}{2}t^2} \Big) - \frac{6\lambda}{t^4} \nonumber\\
&=& \frac{1}{t^4} \Big( 4v-6\lambda- \frac{v \cdot t^2}{e^v-1-\frac{1}{2}t^2}\Big). \label{teeth_appointment}
\end{eqnarray}
The assertions in (\ref{xenix}) and (\ref{Beijing}) imply
\begin{eqnarray*}
\Big(\frac{g(t)}{t}\Big)'=\frac{-t^2\log(1-e^{-v})-tg(t)}{t^3}=-\frac{2v}{t^3}.
\end{eqnarray*}
Thus, $\psi'(t)=\frac{2(\lambda-v)}{t^3}$, and the stationary point $t_0$ of $\psi(t)$ satisfies $v(t_0)=\lambda$. Since $v(u)$ is strictly increasing, this implies that $\psi(t)$ is strictly increasing on $(0, t_0]$ and strictly decreasing on $[t_0, \infty)$. It is not difficult to see from (\ref{blue_star}) that
\begin{eqnarray*}
t_0=\frac{\lambda}{(\int_0^{\lambda}\frac{t}{e^t-1}\,dt)^{1/2}}.
\end{eqnarray*}
Plug this into (\ref{teeth_appointment}) to get
\begin{eqnarray*}
\psi''(t_0) &=& -\frac{1}{t_0^4}\Big( 2\lambda +\frac{\lambda \cdot t_0^2}{e^{\lambda}-1-\frac{1}{2}t_0^2}\Big)\\
& = & -\frac{2\lambda(e^{\lambda}-1)}{t_0^4(e^{\lambda}-1-\frac{1}{2}t_0^2)}<0
\end{eqnarray*}
by (\ref{bad_tree}).
\end{proof}
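Lemma \ref{London} can also be checked numerically (an illustrative sketch with ad hoc tolerances, not part of the argument): compute $v(u)$ from \eqref{blue_star} by bisection, form $\psi$, and verify that $t_0$ is a stationary local maximum.

```python
import math

def I(v, steps=400):
    # Simpson's rule for \int_0^v t/(e^t - 1) dt; the integrand tends to 1 at t = 0
    h = v / steps
    f = lambda t: 1.0 if t == 0 else t / math.expm1(t)
    s = f(0.0) + f(v)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

def v_of_u(u):
    # invert u^2 = v^2 / I(v); v -> v / sqrt(I(v)) is increasing (see the proof)
    lo, hi = 1e-9, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid / math.sqrt(I(mid)) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def psi(t, lam):
    v = v_of_u(t)
    g = 2.0 * v / t - t * math.log(-math.expm1(-v))  # g(u) from (spring_come)
    return g / t - lam / t ** 2

lam = 1.0
t0 = lam / math.sqrt(I(lam))  # the stationary point from Lemma London
```

Numerically, $v(t_0)=\lambda$, the central difference of $\psi$ vanishes at $t_0$, and $\psi$ decreases on either side, consistent with the lemma.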
\begin{proof}[Proof of Theorem \ref{surprise_result}] Let $M_n=\Big[\frac{1}{m-1}(\frac{n}{m}-m)\Big]$ as in (\ref{glad_carpet}). The assumption $m=o(n^{1/3})$ implies
\begin{eqnarray}\label{beckey}
\lim_{n\to\infty}\frac{M_n}{m}=\infty.
\end{eqnarray}
Similar to (\ref{Great_bug}), we first claim that the normalizing constant satisfies
\begin{eqnarray*}
c\sim \frac{1}{ q^{\lceil \frac{n}{m} \rceil} \sum_{l=0}^{M_n} q^{l } \cdot |\mathcal{P}_{ m(l+1)-j}(m-1)|}.
\end{eqnarray*}
Indeed, from Lemma \ref{lantern_wind},
\begin{eqnarray*}
\frac{1}{c} &=& \sum_{l=0}^{n-\lceil \frac{n}{m} \rceil} q^{\lceil \frac{n}{m} \rceil +l }\sum_{(\lceil \frac{n}{m} \rceil+l,k_2,\ldots,k_m) \vdash n} 1 \\
&=&\sum_{l=0}^{M_n} q^{\lceil \frac{n}{m} \rceil +l } \cdot |\mathcal{P}_{ m(l+1)-j}(m-1)| + \sum_{l=M_n+1}^{n-\lceil \frac{n}{m} \rceil} q^{\lceil \frac{n}{m} \rceil +l } \sum_{(\lceil \frac{n}{m} \rceil+l,k_2,\ldots,k_m) \vdash n} 1
\end{eqnarray*}
and
\begin{eqnarray*}
\sum_{l=M_n+1}^{n-\lceil \frac{n}{m} \rceil} q^{\lceil \frac{n}{m} \rceil +l } \sum_{(\lceil \frac{n}{m} \rceil+l,k_2,\ldots,k_m) \vdash n} 1
&\le& \sum_{l=M_n+1}^{n-\lceil \frac{n}{m} \rceil} q^{\lceil \frac{n}{m} \rceil +l } \cdot |\mathcal{P}_{ m(l+1)-j}(m-1)|\\
&=& \sum_{l=M_n+2}^{n-\lceil \frac{n}{m} \rceil+1} q^{\lceil \frac{n}{m} \rceil +l } \cdot |\mathcal{P}_{lm-j}(m-1)|.
\end{eqnarray*}
Observe that $|\mathcal{P}_{lm-j}(m-1)| \leq |\mathcal{P}_{lm}(lm)| \leq e^{K\sqrt{lm}}$ for some constant $K>0$ by the Hardy--Ramanujan formula. Therefore,
\begin{eqnarray*}
\sum_{l=M_n+1}^{n-\lceil \frac{n}{m} \rceil} q^{\lceil \frac{n}{m} \rceil +l } \sum_{(\lceil \frac{n}{m} \rceil+l,k_2,\ldots,k_m) \vdash n} 1
&\le& q^{\lceil \frac{n}{m} \rceil} \sum_{l=M_n}^{\infty} e^{-\lambda l + K\sqrt{lm}}\\
&\le& q^{\lceil \frac{n}{m} \rceil} \sum_{l=M_n}^{\infty} e^{-\lambda l/2 } \le q^{\lceil \frac{n}{m} \rceil} \frac{e^{-\lambda M_n/2 }}{1-e^{-\lambda/2}}\\
&=& o(\sum_{l=0}^{M_n} q^{\lceil \frac{n}{m} \rceil +l } \cdot |\mathcal{P}_{ m(l+1)-j}(m-1)|)
\end{eqnarray*}
for $n$ sufficiently large.
Hence, up to a factor of $1+o(1)$, which we suppress below, we have
\begin{eqnarray*}
P\big(k_1 = \lceil \frac{n}{m}\rceil + l\big)
= \frac{q^{l} |\mathcal{P}_{ m(l+1)-j}(m-1)|}{ \sum_{l=0}^{M_n} q^{l } \cdot |\mathcal{P}_{ m(l+1)-j}(m-1)|}
\end{eqnarray*}
for $l=0,1,2,\cdots, M_n$, where $j=m+n-m\lceil \frac{n}{m}\rceil$ and $1\leq j \leq m$. Thus,
\begin{eqnarray}\label{aha}
P\big(k_1 \leq \lceil \frac{n}{m}\rceil + m\xi\big)
= \frac{\sum_{l=1}^{[m\xi]+1} q^{l} \cdot |\mathcal{P}_{ lm-j}(m-1)|}{\sum_{l=1}^{M_n+1} q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)|}
\end{eqnarray}
for any $\xi\geq 0$.
In the following, we first apply a fine analysis to estimate the denominator
$$\sum_{l=1}^{M_n+1} q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)|.$$
We divide the range of summation into five parts: $1\le l \le c m$, $Cm\le l \le M_n$, $cm\leq l < \gamma m -\sqrt{m}\log m$, $\gamma m+\sqrt{m}\log m < l \leq Cm$ and $\gamma m -\sqrt{m}\log m \leq l \leq \gamma m +\sqrt{m}\log m$, for suitable constants $c,C>0$ and $\gamma = t_0^{-2}$ (recall $t_0$ from Lemma \ref{London}). The dominant contribution to the sum comes from the range $\gamma m -\sqrt{m}\log m \leq l \leq \gamma m +\sqrt{m}\log m$; the other ranges are negligible. The estimate for the numerator is similar.
{\it Step 1: Two rough tails are negligible}. First, by the Hardy--Ramanujan formula, there exists a constant $K>0$ such that
\begin{eqnarray*}
|\mathcal{P}_{lm-j}(m-1)| \leq |\mathcal{P}_{lm}(lm)| \leq e^{K\sqrt{lm}}
\end{eqnarray*}
for $l\geq 1$ as $n$ is large. Set $\lambda=-\log q>0$. It follows that
\begin{eqnarray}\label{pig}
\sum_{l=Cm}^{M_n+1}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)| \leq \sum_{l=Cm}^{\infty} e^{-\lambda l+K\sqrt{ml}}\leq \sum_{l=Cm}^{\infty} e^{-\lambda l/2} \leq \frac{1}{1-e^{-\lambda/2}}
\end{eqnarray}
for all $l\geq (\frac{4K^2}{\lambda^2})m$, which is satisfied if $C>\frac{4K^2}{\lambda^2}.$ Similarly, for the same $K$ as above,
\begin{eqnarray}\label{brain_head}
\sum_{l=1}^{cm}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)| & \leq & \sum_{l=1}^{cm} q^{l } \cdot |\mathcal{P}_{[cm^2]}(m)| \nonumber\\
&\leq & (cm)\cdot|\mathcal{P}_{[cm^2]}(m)| \leq (cm) \cdot e^{\sqrt{c}Km}
\end{eqnarray}
for all $c>0$ as $n$ is sufficiently large.
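The Hardy--Ramanujan bound invoked above holds with the classical explicit constant $K=\pi\sqrt{2/3}$ (any larger constant also works). As an illustrative aside (not part of the argument), one can confirm $|\mathcal{P}_n(n)|\le e^{\pi\sqrt{2n/3}}$ for small $n$ via Euler's pentagonal number recurrence:

```python
import math

def partition_counts(N):
    # p(0..N) via Euler's pentagonal number theorem:
    # p(n) = sum_{k>=1} (-1)^{k-1} [ p(n - k(3k-1)/2) + p(n - k(3k+1)/2) ]
    p = [0] * (N + 1)
    p[0] = 1
    for n in range(1, N + 1):
        total, k, sign = 0, 1, 1
        while k * (3 * k - 1) // 2 <= n:
            total += sign * p[n - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= n:
                total += sign * p[n - k * (3 * k + 1) // 2]
            k += 1
            sign = -sign
        p[n] = total
    return p

p = partition_counts(200)
```

Here $p[n]=|\mathcal{P}_n(n)|$ is the ordinary partition function; the bound $p(n)<e^{\pi\sqrt{2n/3}}$ holds for every $n\ge 1$.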
In the rest of the proof, the variable $n$ will be hidden in $m=m_n$ and $j=j_n$. Keep in mind that $m$ is sufficiently large when we say ``$n$ is sufficiently large''.
We set two parameters
\begin{eqnarray}\label{bigc}
C=\max\{\frac{8K^2}{\lambda^2}, 2\gamma \};
\end{eqnarray}
\begin{eqnarray}\label{littlec}
c=\min\{\frac{\psi(t_0)^2}{16K^2}, \frac{\gamma}{2} \}.
\end{eqnarray}
{\it Step 2: Two refined tails are negligible}. Recall $t_0$ in Lemma \ref{London}. Define $\gamma=t_0^{-2}$ and
\begin{eqnarray}
& & \Omega_1=\{l\in \mathbb{N};\, cm\leq l < \gamma m -\sqrt{m}B\},\ \ \ \Omega_2=\{l\in \mathbb{N};\, \gamma m -\sqrt{m}B\leq l \leq \gamma m +\sqrt{m}B\},\nonumber\\
& & \Omega_3=\{l\in \mathbb{N};\, \gamma m+\sqrt{m}B < l \leq Cm\}, \label{soil_white}
\end{eqnarray}
with $B=\log m$, where $c\in (0, \gamma)$ and $C>\gamma$ by \eqref{littlec} and \eqref{bigc}. The limit in (\ref{beckey}) asserts that $\Omega_2 \subset \{1,2,\cdots, M_n\}$ as $n$ is large.
Then
\begin{eqnarray}\label{hayes}
\sum_{l=cm}^{Cm}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)|=\sum_{i=1}^3\sum_{l\in \Omega_i}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)|.
\end{eqnarray}
Easily,
\begin{eqnarray}\label{more_water}
\sum_{l\in \Omega_1\cup \Omega_3}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)| \leq \sum_{l\in \Omega_1\cup \Omega_3}q^{l } \cdot |\mathcal{P}_{lm}(m)|.
\end{eqnarray}
Taking $n=lm$ and $k=m$ in (\ref{marriage}), we get
\begin{eqnarray}\label{lake_five}
|\mathcal{P}_{lm}(m)| \sim \frac{f(u)}{lm}e^{\sqrt{lm} g(u)}
\end{eqnarray}
uniformly for all $cm\leq l \leq Cm$, where $u=\big(\frac{m}{l}\big)^{1/2}$. Notice
\begin{eqnarray*}
q^{l} \cdot |\mathcal{P}_{lm}(m)| \sim \frac{f(u)}{lm}e^{-\lambda l+\sqrt{lm} g(u)}.
\end{eqnarray*}
Consider the function $-\lambda x+\sqrt{x m}\cdot g\big((mx^{-1})^{1/2}\big)$
for $x\in [cm, Cm]$. Set $t=t_x=(mx^{-1})^{1/2}$. Then
\begin{eqnarray}
-\lambda x+\sqrt{x m}\cdot g\big((mx^{-1})^{1/2}\big)
&=& -\frac{\lambda m}{t^2}+m\frac{g(t)}{t} \nonumber\\
&= & m\Big(\frac{g(t)}{t}-\frac{1}{t^2}\lambda\Big).\label{car_ship}
\end{eqnarray}
By (\ref{snow_gone}) and (\ref{spring_come}), $f$ is a continuous function on $[C^{-1/2}, c^{-1/2}]$. Therefore, $\frac{f((ml^{-1})^{1/2})}{ml} =O(m^{-2})$ uniformly for all $l\in \Omega_1\cup \Omega_3$, which together with (\ref{more_water}) yields
\begin{eqnarray}
& & \sum_{l\in \Omega_1\cup \Omega_3}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)| \nonumber\\
& \leq & O\Big(\frac{1}{m^2}\Big)\sum_{l\in \Omega_1\cup \Omega_3}\exp\Big[m\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big] \nonumber\\
& \leq & O\Big(\frac{1}{m}\Big)\cdot\exp\Big[m\max_{l\in \Omega_1\cup \Omega_3}\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big]. \label{Road_far}
\end{eqnarray}
Now
\begin{eqnarray*}
\max_{l\in \Omega_1\cup \Omega_3}\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)
=\max_{l\in \Omega_1\cup \Omega_3}\Big\{\psi\Big(\sqrt{\frac{m}{l}}\Big)\Big\}.
\end{eqnarray*}
Evidently,
\begin{eqnarray*}
& & \Big\{\sqrt{\frac{m}{l}},\, l \in \Omega_1\Big\} \subset \Big[\Big(\frac{m}{\gamma m- \sqrt{m}\log m}\Big)^{1/2},\, \frac{1}{\sqrt{c}}\Big] \subset (t_0, \infty);\\
& & \Big\{\sqrt{\frac{m}{l}},\, l \in \Omega_3\Big\} \subset \Big[\frac{1}{\sqrt{C}}, \Big(\frac{m}{\gamma m+ \sqrt{m}\log m}\Big)^{1/2}\Big] \subset (0, t_0).
\end{eqnarray*}
Recall from Lemma \ref{London} that $\psi(t)= \frac{g(t)}{t}-\frac{\lambda}{t^2}$ is increasing on $(0, t_0]$ and decreasing on $[t_0, \infty).$ It follows that
\begin{eqnarray*}
& & \max_{l\in \Omega_1\cup \Omega_3}\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\\
& \leq & \max\Big\{\psi\Big(\frac{\sqrt{m}}{\sqrt{\gamma m- \sqrt{m}\log m}}\Big),\, \psi\Big(\frac{\sqrt{m}}{\sqrt{\gamma m+ \sqrt{m}\log m}}\Big)\Big\}.
\end{eqnarray*}
Notice
\begin{eqnarray*}
\Big(\frac{\sqrt{m}}{\sqrt{\gamma m\pm \sqrt{m}\log m}}-t_0\Big)^2
& = & \Big[\frac{1}{\sqrt{\gamma}}\Big(1\pm \frac{\log m}{\gamma\sqrt{m}}\Big)^{-1/2}-t_0\Big]^2\\
& = & \frac{(\log m)^2}{4\gamma^3m}(1+o(1)).
\end{eqnarray*}
By the Taylor expansion (\ref{shaoping}), established in Step 3 below, we see that
\begin{eqnarray*}
\psi\Big(\frac{\sqrt{m}}{\sqrt{\gamma m\pm \sqrt{m}\log m}}\Big)=\psi(t_0) -L\frac{(\log m)^2}{m} +O(m^{-3/2}(\log m)^3)
\end{eqnarray*}
as $n$ is large, where $L=\frac{|\psi''(t_0)|}{8\gamma^3}>0.$ Combining this with (\ref{Road_far}) yields
\begin{eqnarray*}
\frac{1}{\sqrt{m}}e^{-m\psi(t_0)}\sum_{l\in \Omega_1\cup \Omega_3}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)| \leq e^{-(L/2)(\log m)^2}
\end{eqnarray*} and thus
\begin{eqnarray} \label{air_exit}
\sum_{l\in \Omega_1\cup \Omega_3}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)| \leq \sqrt{m} e^{m\psi(t_0)-(L/2)(\log m)^2}
\end{eqnarray}
as $n$ is large.
{\it Step 3. The estimate of $\sum_{l\in \Omega_2}$}. Taking $n=lm-j$ and $k=m-1$ in (\ref{marriage}), we get
\begin{eqnarray*}
|\mathcal{P}_{ml-j}(m-1)| \sim \frac{f(u)}{ml-j}e^{\sqrt{ml-j}\, g(u)}
\end{eqnarray*}
uniformly for all $cm\leq l \leq Cm$ where $u=\frac{m-1}{\sqrt{lm-j}}$. By continuity,
\begin{eqnarray}\label{red_army}
\frac{f(u)}{ml-j} \sim t_0^2f(t_0)\cdot \frac{1}{m^2}
\end{eqnarray}
uniformly for all $l\in \Omega_2$. Consequently,
\begin{eqnarray}
& & \sum_{l\in \Omega_2}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)| \nonumber\\
&= & (1+o(1))\frac{t_0^2f(t_0)}{m^2}\sum_{l\in \Omega_2}\exp\Big\{-\lambda l+\sqrt{lm-j}\cdot g\Big(\frac{m-1}{\sqrt{lm-j}}\Big)\Big\}\nonumber\\
& \sim & \frac{t_0^2f(t_0)}{m^2}e^{-\lambda j/m}\sum_{l\in \Omega_2}\exp\Big\{-\frac{\lambda(m-1)^2}{mt_l^2}+\frac{m-1}{t_l}g(t_l)\Big\}
\end{eqnarray}
by setting $t_x=(m-1)/\sqrt{mx-j}$ for $x\geq 2$ (recall $1\leq j \leq m$), and hence $x=\frac{j}{m}+\frac{(m-1)^2}{mt_x^2}$. It is easy to verify that
\begin{eqnarray}\label{donkey_rabbit}
\max_{l\in \Omega_2}|t_l-t_0|=O\Big(\frac{\log m}{\sqrt{m}}\Big)
\end{eqnarray}
as $n\to\infty$. We then have
\begin{eqnarray}\label{god_amen}
& & \sum_{l\in \Omega_2}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)| \nonumber\\
& \sim & \frac{t_0^2f(t_0)}{m^2}e^{\lambda t_0^{-2}-(\lambda j/m)}\sum_{l\in \Omega_2}\exp\Big\{(m-1)\big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\big)\Big\} \label{nice_novel}.
\end{eqnarray}
Recall Lemma \ref{London}. Since $\psi'(t_0)=0$ and $\psi''(t_0)<0$, it is seen from the Taylor expansion and (\ref{donkey_rabbit}) that
\begin{eqnarray}\label{shaoping}
\psi(t_x)=\psi(t_0) + \frac{1}{2}\psi''(t_0)(t_x-t_0)^2 + O(m^{-3/2}(\log m)^3)
\end{eqnarray}
uniformly for all $x \in \Omega_2$. It follows that
\begin{eqnarray*}
& & \sum_{l\in \Omega_2}\exp\Big[(m-1)\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big] \nonumber\\
& = & (1+o(1))\cdot e^{(m-1)\psi(t_0)}\sum_{l\in \Omega_2}\exp\Big[\frac{1}{2}\psi''(t_0)(t_l-t_0)^2m\Big].
\end{eqnarray*}
It is trivial to check that
\begin{eqnarray*}
\frac{m-1}{\sqrt{mx-j}}=\frac{m-1}{\sqrt{mx}} +\frac{j}{2\gamma^{3/2}m^2} + O\big(\frac{\log m}{m^2}\big)
\end{eqnarray*}
uniformly for all $x \in \Omega_2$. Therefore,
\begin{eqnarray*}
m\Big(\frac{m-1}{\sqrt{mx-j}}-t_0\Big)^2
& = & m\Big(\frac{m-1}{\sqrt{mx}}-t_0\Big)^2 + \frac{j}{\gamma^{3/2}m}\Big(\frac{m-1}{\sqrt{mx}}-t_0\Big) + O\big(\frac{\log m}{\sqrt{m}}\big)\\
& = & m\Big(\frac{m-1}{\sqrt{mx}}-t_0\Big)^2 + o(1)
\end{eqnarray*}
uniformly for all $x \in \Omega_2$ by (\ref{donkey_rabbit}). This tells us that
\begin{eqnarray}\label{peach1}
& & \sum_{l\in \Omega_2}\exp\Big[(m-1)\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big] \nonumber\\
& = & (1+o(1))\cdot e^{(m-1)\psi(t_0)}\sum_{l\in \Omega_2}\exp\Big[\frac{1}{2}\psi''(t_0)\Big(\frac{m-1}{\sqrt{ml}}-t_0\Big)^2m\Big].
\end{eqnarray}
Set $a_m=\gamma m -\sqrt{m}\log m$, $b_m=\gamma m +\sqrt{m}\log m$, $c_m=(m-1)/\sqrt{m}$ and
\begin{eqnarray}\label{definition_pizza}
\rho(x)=\exp\Big[\frac{1}{2}\psi''(t_0)\big(\frac{c_m}{\sqrt{x}}-t_0\big)^2m\Big]
\end{eqnarray}
for $x>0$. It is easy to check that there exists an absolute constant $C_1>0$ such that
\begin{eqnarray}\label{drone}
\rho(x)\leq e^{-C_1(\log m)^2}
\end{eqnarray}
for all $x\in (a_m, b_m)\backslash([a_m]+2, [b_m]-2)$. Hence
\begin{eqnarray}\label{lum3}
\int_{a_m}^{b_m}\rho(x)\,dx
&=&\Big(\sum_{l=[a_m]}^{[b_m]-1}\int_{l}^{l+1}\rho(x)\,dx\Big) + \epsilon_m \label{lum2}
\end{eqnarray}
where $|\epsilon_m| \leq e^{-C_1(\log m)^2}$ for large $m$. By the expression $\rho(x)=\exp\big[\frac{1}{2}\psi''(t_0)\big(\frac{c_m}{\sqrt{x}}-t_0\big)^2m\big]$, we get
\begin{eqnarray*}
\rho'(x)=-\frac{1}{2}\rho(x)\psi''(t_0)\Big(\frac{c_m}{\sqrt{x}}-t_0\Big)\frac{m {c_m}}{x^{3/2}}
\end{eqnarray*}
for $x>0$. Easily, $\frac{m {c_m}}{x^{3/2}}=O(1)$ and $\frac{c_m}{\sqrt{x}}-t_0=O(\frac{\log m}{\sqrt{m}})$ uniformly for all $[a_m]\leq x\leq [b_m].$ Thus,
\begin{eqnarray*}
|\rho'(x)|\leq \frac{(\log m)^2}{\sqrt{m}}\rho(x)
\end{eqnarray*}
for all $[a_m]\leq x\leq [b_m] $. Therefore, by integration by parts,
\begin{eqnarray*}
\Big|\int_{l}^{l+1}\rho(x)\,dx-\rho(l)\Big| = \Big|\int_{l}^{l+1}\rho'(x)(l+1-x)\,dx\Big|
& \leq & \int_{l}^{l+1}|\rho'(x)|\,dx\\
&\leq & \frac{(\log m)^2}{\sqrt{m}}\int_{l}^{l+1}\rho(x)\,dx
\end{eqnarray*}
as $m$ is sufficiently large. This, (\ref{drone}) and (\ref{lum3}) imply
\begin{eqnarray}\label{snow_heat}
& & \Big|\sum_{l\in \Omega_2}\rho(l)-\int_{a_m}^{b_m}\rho(x)\,dx\Big| \leq \frac{(\log m)^2}{\sqrt{m}}\Big(\int_{a_m}^{b_m}\rho(x)\,dx\Big) + e^{-C_1(\log m)^2}.
\end{eqnarray}
Set $\gamma_m=(\log m)\gamma^{-3/2}/2$. We see from (\ref{peach1}) and (\ref{definition_pizza}) that
\begin{eqnarray*}
\int_{a_m}^{b_m}\rho(x)\,dx &= & \frac{2c_m^2}{\sqrt{m}}\int_{-\gamma_m +o(1)}^{\gamma_m +o(1)}\Big(-\frac{u}{\sqrt{m}}+t_0\Big)^{-3}e^{\frac{1}{2}\psi''(t_0)u^2}\,du\\
& = & (1+o(1))\frac{2\sqrt{m}}{t_0^3}\int_{-\gamma_m}^{\gamma_m}e^{\frac{1}{2}\psi''(t_0)u^2}\,du\\
& = & (1+o(1))\frac{2\sqrt{m}}{t_0^3}\int_{-\infty}^{\infty}e^{\frac{1}{2}\psi''(t_0)u^2}\,du\\
& \sim & \sqrt{m}\cdot \frac{1}{t_0^3}\sqrt{\frac{8\pi}{|\psi''(t_0)|}}
\end{eqnarray*}
by making the transform $u=-\big(\frac{c_m}{\sqrt{x}}-t_0\big)\sqrt{m}$. Combining this, (\ref{peach1}) and (\ref{snow_heat}), we arrive at
\begin{eqnarray}\label{peach8}
e^{-(m-1)\psi(t_0)}\sum_{l\in \Omega_2}\exp\Big[(m-1)\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big]
& = & (1+o(1))\sum_{l\in \Omega_2}\rho(l)\nonumber\\
& \sim & \sqrt{m}\cdot \frac{1}{t_0^3}\sqrt{\frac{8\pi}{|\psi''(t_0)|}}
\end{eqnarray}
for sufficiently large $n$. This and (\ref{god_amen}) yield
\begin{eqnarray}
& & \sum_{l\in \Omega_2}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)| \nonumber\\
& \sim & \frac{t_0^2f(t_0)}{m^2}e^{\lambda t_0^{-2}-(\lambda j/m)}\sum_{l\in \Omega_2}\exp\Big\{(m-1)\big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\big)\Big\} \nonumber\\
& \sim & \frac{f(t_0)e^{\lambda t_0^{-2}-\psi(t_0)-(\lambda j/m)}}{t_0}\cdot \sqrt{\frac{8\pi}{|\psi''(t_0)|}}\cdot \frac{e^{m\psi(t_0)}}{m^{3/2}} \label{red_gross}
\end{eqnarray}
as $m\to\infty$.
{\it Step 4. Wrap-up of the denominator}. By the choice of $c$ in \eqref{littlec}, we have $\sqrt{c}\le (4K)^{-1}\psi(t_0)$ in (\ref{brain_head}). Therefore we get from (\ref{pig}) that
\begin{eqnarray}\label{ding}
\Big(\sum_{l=1}^{cm}+\sum_{l=Cm}^{M_n+1}\Big)q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)| \leq e^{\psi(t_0)m/2}
\end{eqnarray}
for large $n$. This and (\ref{hayes}) imply
\begin{eqnarray*}
\sum_{l=1}^{M_n+1}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)| = O\big(e^{\psi(t_0)m/2}\big)+\sum_{i=1}^3\sum_{l\in \Omega_i}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)|
\end{eqnarray*}
as $m\to\infty.$ This identity, together with (\ref{air_exit}) and (\ref{red_gross}), yields
\begin{eqnarray}\label{victory}
\sum_{l=1}^{M_n+1}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)|
\, \sim \, \frac{f(t_0)e^{\lambda t_0^{-2}-\psi(t_0)-(\lambda j/m)}}{t_0}\cdot \sqrt{\frac{8\pi}{|\psi''(t_0)|}}\cdot \frac{e^{m\psi(t_0)}}{m^{3/2}}
\end{eqnarray}
as $m\to\infty$.
{\it Step 5. Numerator}. We need to show
\begin{eqnarray*}
\lim_{n\to\infty}P\Big(\frac{1}{\sqrt{m}}\Big(k_1 -\lceil \frac{n}{m}\rceil - \frac{m}{t_0^2}\Big) \leq x\Big)= \frac{1}{\sqrt{2\pi}\, \sigma}\int_{-\infty}^xe^{-\frac{t^2}{2\sigma^2}}\,dt
\end{eqnarray*}
for every $x \in \mathbb{R}$, where $\sigma^2=\frac{4}{|\psi''(t_0)|\,t_0^6}$. Recall $\gamma=t_0^{-2}$. By (\ref{aha}),
\begin{eqnarray}\label{concrete_leave}
P\Big(\frac{1}{\sqrt{m}}\Big(k_1 -\lceil \frac{n}{m}\rceil - \frac{m}{t_0^2}\Big) \leq x\Big)=
\frac{\sum_{l=1}^{b_m'} q^{l} \cdot |\mathcal{P}_{ ml-j}(m-1)|}{\sum_{l=1}^{M_n+1} q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)|}
\end{eqnarray}
where $b_m'=[\gamma m + \sqrt{m}\,x]+1$. Recall that $\sqrt{c}\le (4K)^{-1}\psi(t_0)$ as before. It follows from (\ref{ding}) that
\begin{eqnarray}\label{quiet_water}
\sum_{l=1}^{cm}q^{l } \cdot |\mathcal{P}_{lm-j}(m-1)| \leq e^{\psi(t_0)m/2}
\end{eqnarray}
for large $n$. Let $\Omega_1$ and $\Omega_2$ be as in (\ref{soil_white}). Set $\Omega_2'=\{l\in \mathbb{N};\, \gamma m -\sqrt{m}\log m\leq l \leq b_m'\}.$ Notice that $\Omega_2'\subset \Omega_2$ for large $m$. By (\ref{air_exit}), (\ref{god_amen}) and (\ref{quiet_water}),
\begin{eqnarray}
& & \sum_{l=1}^{b_m'} q^{l} \cdot |\mathcal{P}_{ ml-j}(m-1)| \nonumber\\
&= & O\big(e^{\psi(t_0)m/2}\big)+ \sqrt{m}e^{m\psi(t_0)-(L/2)(\log m)^2}+\sum_{l\in \Omega_2'}q^{l} \cdot |\mathcal{P}_{ ml-j}(m-1)| \nonumber\\
& = & O\Big(\sqrt{m}\cdot e^{m\psi(t_0)-(L/2)(\log m)^2}\Big)+ \nonumber\\
& &~~~~~~~~~~~~~~~~\frac{t_0^2f(t_0)}{m^2}e^{\lambda t_0^{-2}-(\lambda j/m)}\sum_{l\in \Omega'_2}\exp\Big\{(m-1)\big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\big)\Big\} \label{pecan_eat}
\end{eqnarray}
as $m\to \infty$. Reviewing the derivation between (\ref{peach1}) and (\ref{peach8}) with $b_m$ replaced by $b_m'$, and using again the fact that $\Omega_2'\subset \Omega_2$ for large $m$, we have
\begin{eqnarray*}
& & e^{-(m-1)\psi(t_0)}\sum_{l\in \Omega_2'}\exp\Big[(m-1)\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big] \nonumber\\
& =& \int_{a_m}^{b_m'}\rho(x)\,dx + \epsilon_m+ O\big(\frac{\log m}{m^{1/4}}\big)
\end{eqnarray*}
where, as mentioned before, $a_m=\gamma m -\sqrt{m}\log m$ and $|\epsilon_m| \leq e^{-C_1(\log m)^2}$ for large $m$. Let us evaluate the integral above. In fact, from (\ref{definition_pizza}) we see that
\begin{eqnarray*}
\int_{a_m}^{b_m'}\rho(x)\,dx=\int_{a_m}^{b_m'}\exp\Big[\frac{1}{2}\psi''(t_0)\big(\frac{c_m}{\sqrt{x}}-t_0\big)^2m\Big]\,dx.
\end{eqnarray*}
Set $w=-\big(\frac{c_m}{\sqrt{x}}-t_0\big)\sqrt{m}$. Then
\begin{eqnarray*}
\int_{a_m}^{b_m'}\rho(x)\,dx & = & \frac{2c_m^2}{\sqrt{m}}\int_{-\gamma_m +o(1)}^{\frac{x}{2\gamma^{3/2}}+o(1)}\Big(-\frac{w}{\sqrt{m}}+t_0\Big)^{-3}e^{-\frac{1}{2}|\psi''(t_0)|w^2}\,dw\\
& = & (1+o(1)) \frac{2\sqrt{m}}{t_0^3}\int_{-\infty}^{\frac{x}{2\gamma^{3/2}}}e^{-\frac{1}{2}|\psi''(t_0)|w^2}\,dw\\
& = & (1+o(1)) \frac{\sqrt{m}}{t_0^3\gamma^{3/2}}\int_{-\infty}^{x}e^{-w^2/(2\sigma^2)}\,dw = (1+o(1))\sqrt{m} \int_{-\infty}^{x}e^{-w^2/(2\sigma^2)}\,dw
\end{eqnarray*}
where $\gamma_m=(\log m)\gamma^{-3/2}/2$ and $\sigma^2=\frac{4\gamma^3}{|\psi''(t_0)|}.$ Collect the assertions from (\ref{pecan_eat}) to the above to obtain
\begin{eqnarray*}
& & \sum_{l=1}^{b_m'} q^{l} \cdot |\mathcal{P}_{ ml-j}(m-1)|\\
&=& (1+o(1))\frac{t_0^2f(t_0)}{m^2}e^{\lambda t_0^{-2}-(\lambda j/m)}\cdot e^{(m-1)\psi(t_0)}\cdot \sqrt{m} \int_{-\infty}^{x}e^{-w^2/(2\sigma^2)}\,dw\\
&\sim & t_0^{2}f(t_0)\cdot e^{\lambda t_0^{-2}-\psi(t_0)-(\lambda j/m)}\cdot \frac{e^{m\psi(t_0)}}{m^{3/2}}\int_{-\infty}^{x}e^{-w^2/(2\sigma^2)}\,dw
\end{eqnarray*}
as $m\to\infty$. Join this with (\ref{victory}) and (\ref{concrete_leave}) to conclude that
\begin{eqnarray}
P\Big(\frac{1}{\sqrt{m}}\Big(k_1 -\lceil \frac{n}{m}\rceil - \frac{m}{t_0^2}\Big) \leq x\Big)\to \frac{1}{\sqrt{2\pi}\, \sigma}\int_{-\infty}^{x}e^{-w^2/(2\sigma^2)}\,dw
\end{eqnarray}
as $m\to\infty$. Notice that $\sigma^2=\frac{4}{|\psi''(t_0)|t_0^6}$. The proof is completed by using Lemma \ref{London} and the fact that $\gamma=t_0^{-2}$.
\end{proof}
\section{Proofs of the generalized distribution}\label{sec:general}
\subsection{Case I: $m$ is fixed}\label{sec:generalfix}
From \citet{EL}, we have
\begin{eqnarray}\label{eq:size}
|\mathcal{P}_n(m)| \sim \frac{\binom{n-1}{m-1}}{m!}
\end{eqnarray}
uniformly for $m=o(n^{1/3})$.
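As an illustrative numerical check of this asymptotic (not part of the argument; the helper names below are ours), one can compare the exact count of partitions of $n$ into exactly $m$ parts with $\binom{n-1}{m-1}/m!$ for fixed small $m$ and large $n$:

```python
from math import comb, factorial

def partitions_exact_parts(n, m):
    """Count partitions of n into exactly m positive parts via the
    standard recurrence p(n, m) = p(n-1, m-1) + p(n-m, m)."""
    # table[k][j] = number of partitions of k into exactly j parts
    table = [[0] * (m + 1) for _ in range(n + 1)]
    table[0][0] = 1
    for k in range(1, n + 1):
        for j in range(1, m + 1):
            table[k][j] = table[k - 1][j - 1] + (table[k - j][j] if k >= j else 0)
    return table[n][m]

def asymptotic(n, m):
    # the right-hand side of the displayed formula
    return comb(n - 1, m - 1) / factorial(m)

# for m = 3 and large n the ratio approaches 1
ratio = partitions_exact_parts(3000, 3) / asymptotic(3000, 3)
```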
\begin{proof}[Proof of Theorem \ref{thm:general}]
To prove the conclusion, it suffices to show that for any bounded and Lipschitz continuous function $\psi$ on $\overline{\nabla}_{m-1}$,
$$\mathbb{E}\left(\psi(\frac{k_1}{n},\ldots,\frac{k_m}{n}) \right) \to \mathbb{E}\left(\psi(x_1,\ldots,x_m) \right)$$
as $n$ tends to infinity.
By definition,
\begin{eqnarray}\label{eq:part}
\mathbb{E}\left(\psi(\frac{k_1}{n},\ldots,\frac{k_m}{n}) \right)
&=& \frac{\sum_{(k_1,\ldots,k_m) \in \mathcal{P}_n(m)} \psi(\frac{k_1}{n},\ldots,\frac{k_m}{n}) f(\frac{k_1}{n},\ldots,\frac{k_m}{n})}{\sum_{(k_1,\ldots,k_m) \in \mathcal{P}_n(m)} f(\frac{k_1}{n},\ldots,\frac{k_m}{n})}\\ \nonumber
&=& \frac{ n^{-(m-1)} \sum_{(k_1,\ldots,k_m) \in \mathcal{R}_n(m)}\psi(\frac{k_1}{n},\ldots,\frac{k_m}{n}) f(\frac{k_1}{n},\ldots,\frac{k_m}{n})}{n^{-(m-1)} \sum_{(k_1,\ldots,k_m) \in \mathcal{P}_n(m)} f(\frac{k_1}{n},\ldots,\frac{k_m}{n})}+\mathcal{E}_{n,m},
\end{eqnarray}
where the set
\begin{eqnarray*}
\mathcal{R}_n(m):= \{(k_1,\ldots,k_m) \vdash n; k_1 > \ldots >k_m >0\}
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{E}_{n,m}:=\frac{\sum_{(k_1,\ldots,k_m) \in \mathcal{P}_n(m)\setminus \mathcal{R}_n(m)} \psi(\frac{k_1}{n},\ldots,\frac{k_m}{n}) f(\frac{k_1}{n},\ldots,\frac{k_m}{n}) }{\sum_{(k_1,\ldots,k_m) \in \mathcal{P}_n(m)} f(\frac{k_1}{n},\ldots,\frac{k_m}{n})}.
\end{eqnarray*}
On the other hand,
\begin{eqnarray}\label{eq:gen}
\mathbb{E}(\psi(x_1,\ldots,x_m)) &=& \int_{\nabla_{m-1}} \psi(y_1,\ldots,y_m) f(y_1,\ldots,y_m) \,dy_1\ldots dy_{m-1}\\ \nonumber
&=& \frac{\int_{\nabla_{m-1}} \psi(y_1,\ldots,y_m) f(y_1,\ldots,y_m) \,dy_1\ldots dy_{m-1}}{\int_{\nabla_{m-1}} f(y_1,\ldots,y_m) \,dy_1\ldots dy_{m-1}}.
\end{eqnarray}
In order to compare \eqref{eq:part} and \eqref{eq:gen}, we divide the proof into a few steps.
{\it Step 1: Estimate of $|\mathcal{E}_{n,m}|$}. We claim that the term $\mathcal{E}_{n,m}$ is negligible as $n\to \infty$. We first estimate the size of $\mathcal{R}_n(m)$.
For any $(k_1, \cdots, k_m)\in \mathcal{R}_n(m) $, set $j_i = k_i-(m-i+1)$ for $1\le i \le m$. It is easy to verify that $j_{i-1} - j_i =k_{i-1}-k_{i}-1\ge 0$ for $2\le i \le m$. Thus $$j_1 + \cdots + j_{m} = n-\binom{m+1}{2}$$ and $j_1 \ge \cdots \ge j_m \ge 0$. Therefore,
$(j_1,\cdots,j_m) \in \mathcal{P}_{n-\binom{m+1}{2}}(m).$ Indeed, this transform is a bijection between $\mathcal{R}_n(m)$ and $\mathcal{P}_{n-\binom{m+1}{2}}(m)$, which implies $$|\mathcal{R}_n(m)|=|\mathcal{P}_{n-\binom{m+1}{2}}(m)|.$$
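This bijection is easy to verify by brute force for small cases. The sketch below is illustrative only (helper names are ours), and uses the convention, as in the map above, that the right-hand side counts partitions into at most $m$ parts, i.e. parts $j_i\ge 0$:

```python
def strict_partitions(n, m):
    """Partitions k_1 > ... > k_m > 0 of n, i.e. the set R_n(m)."""
    def gen(n, m, cap):
        if m == 0:
            if n == 0:
                yield ()
            return
        # the largest part of m distinct positive parts is at least m
        for p in range(min(n, cap), m - 1, -1):
            for rest in gen(n - p, m - 1, p - 1):
                yield (p,) + rest
    return list(gen(n, m, n))

def partitions_at_most(n, m):
    """Partitions of n into at most m positive parts."""
    def gen(n, m, cap):
        if n == 0:
            yield ()
            return
        if m == 0:
            return
        for p in range(min(n, cap), 0, -1):
            for rest in gen(n - p, m - 1, p):
                yield (p,) + rest
    return list(gen(n, m, n))

# |R_n(m)| = |P_{n - m(m+1)/2}(m)| in a range of small cases
for n in range(7, 25):
    for m in range(2, 5):
        if n - m * (m + 1) // 2 >= 0:
            assert len(strict_partitions(n, m)) == len(partitions_at_most(n - m * (m + 1) // 2, m))
```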
On the other hand, we know from \eqref{eq:size},
$$|\mathcal{P}_N(m)| \sim \frac{\binom{N-1}{m-1}}{m!}$$
as $N\to \infty$.
Thus by Stirling's formula,
\begin{eqnarray*}
\frac{|\mathcal{R}_n(m)|}{|\mathcal{P}_n(m)|} &\sim& \frac{\binom{n-\binom{m+1}{2}-1}{m-1} }{\binom{n-1}{m-1}}=\frac{(n-\binom{m+1}{2}-1)!(n-m)!}{(n-1)!(n-\binom{m+1}{2}-m)!} \\
&\sim& \frac{(n-\binom{m+1}{2})!(n-m)!}{n!(n-\binom{m+1}{2}-m)!}\\
&\sim& \frac{(1-\frac{m}{n})^{1/2}}{(1-\frac{m}{n-\binom{m+1}{2}})^{1/2}}\frac{(1-\frac{m}{n})^n (1-\frac{\binom{m+1}{2}}{n-m})^m}{(1-\frac{m}{n-\binom{m+1}{2}})^{n-\binom{m+1}{2}}}
\end{eqnarray*}
as $n\to \infty$. By assumption $m=o(\sqrt{n})$, we have $\frac{n-\binom{m+1}{2}}{m} \to \infty$ with $n$. Using the fact that $\lim_{N\to \infty} (1+\frac{x}{N})^N = \exp(x)$, we obtain
\begin{eqnarray*}
\frac{|\mathcal{R}_n(m)|}{|\mathcal{P}_n(m)|} \sim \exp\left(-\frac{m\binom{m+1}{2}}{n-m} \right).
\end{eqnarray*}
Thus, since $\frac{m\binom{m+1}{2}}{n-m}\sim\frac{m^3}{2n}\to 0$ as long as $m = o(n^{1/3})$,
\begin{eqnarray*}
&& |\mathcal{R}_n(m)| \sim |\mathcal{P}_n(m)|\\
&& |\mathcal{P}_n(m)\setminus \mathcal{R}_n(m)| = o(|\mathcal{P}_n(m)|)
\end{eqnarray*}
as $n\to \infty$.
Further, since $\int_{\nabla_{m-1}} f(y_1,\ldots,y_m) \, dy_1\ldots dy_{m-1}=1$, there exists a region $\mathcal{S}\subset \overline{\nabla}_{m-1}$
with measure $|\mathcal{S}| \ge \mu |\nabla_{m-1}|$ for some constant $\mu>0$ such that $f(y_1,\ldots,y_m)>c$ on $\mathcal{S}$ for some $c>0$. By the Lipschitz property of $f$, for sufficiently large $n$ we have $f(k_1/n,\ldots,k_m/n)> c_0>0$ for all $(k_1,\ldots,k_m)$ in a subset of $\mathcal{P}_n(m)$ whose cardinality is at least a small fraction of $|\mathcal{P}_n(m)|$. Also, since the functions $\psi$ and $f$ are bounded on $\nabla_{m-1}$, we conclude
\begin{eqnarray}\label{eq:error}
|\mathcal{E}_{n,m}| = O\left(\frac{|\mathcal{P}_n(m)\setminus \mathcal{R}_n(m)|}{|\mathcal{P}_n(m)|}\right) = o(1)
\end{eqnarray}
as $n\to \infty$, as long as $m = o(n^{1/3})$.
{\it Step 2: Compare the numerators of \eqref{eq:part} and \eqref{eq:gen}}. For convenience, denote
$$G(y_1,\ldots,y_{m-1}) = \psi(y_1,\ldots,y_{m-1},1-\sum_{i=1}^{m-1}y_i) f(y_1,\ldots,y_{m-1},1-\sum_{i=1}^{m-1}y_i).$$
Since $\psi, f$ are bounded and Lipschitz functions on $\overline{\nabla}_{m-1}$, it is easy to check that $G$ is also bounded and Lipschitz on $\overline{\nabla}_{m-1}$.
We can rewrite the numerator in \eqref{eq:part} as follows.
\begin{eqnarray*}
\mathcal{I}_1 &:=& \frac{1}{n^{m-1}} \sum_{\substack{k_1 > \ldots >k_m >0 \\ k_1+\ldots+k_m=n}} G(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}) \\
&=& \frac{1}{n^{m-1}} \sum_{(k_1,\ldots,k_{m-1}) \in \{1,\ldots,n\}^{m-1}} G(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}) I_{\mathcal{A}_n}\\
&=& \sum_{(k_1,\ldots,k_{m-1}) \in \{1,\ldots,n\}^{m-1}} \int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}} G(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}) I_{ \mathcal{A}_n } ~d y_1 \dots d y_{m-1},
\end{eqnarray*}
where $I_{ \mathcal{A}_n }$ is the indicator function of the set $\mathcal{A}_n$ defined by
\begin{eqnarray}\label{eq:An}
\mathcal{A}_n= \frac{1}{n} \Big\{ (k_1, \cdots, k_{m-1})\in \{1,\ldots,n \}^{m-1};\, \frac{k_1}{n} > \cdots > \frac{k_{m-1}}{n} > 1- \sum_{i=1}^{m-1} \frac{k_i}{n} > 0\Big\}.
\end{eqnarray}
Similarly,
\begin{eqnarray*}
\mathcal{I}_2 &:=& \int_{\nabla_{m-1}} G(y_1,\ldots,y_{m-1}) \,dy_1\ldots dy_{m-1} \\
&=& \int_{[0,1]^{m-1}} G(y_1,\ldots,y_{m-1})I_{\mathcal{A}} \,dy_1\ldots dy_{m-1}\\
&=& \sum_{(k_1,\ldots,k_{m-1}) \in \{1,\ldots,n\}^{m-1}} \int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}} G(y_1,\ldots,y_{m-1}) I_{ \mathcal{A} } ~d y_1 \dots d y_{m-1},
\end{eqnarray*}
where $I_{ \mathcal{A}}$ is the indicator function of the set $\mathcal{A}$ given by
\begin{eqnarray}\label{eq:A}
\mathcal{A}=\Big\{(x_1, \cdots, x_{m-1})\in [0,1]^{m-1};\, x_1 > \cdots > x_{m-1} > 1- \sum_{i=1}^{m-1} x_i \ge 0\Big\}.
\end{eqnarray}
Now we estimate the difference between the numerators in \eqref{eq:part} and \eqref{eq:gen}.
\begin{eqnarray*}
&& \mathcal{I}_1 - \mathcal{I}_2\\
& = & \sum_{(k_1,\ldots,k_{m-1}) \in \{1,\ldots,n\}^{m-1}} \int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}} \nonumber\\
& & \quad \quad \quad \quad \quad \quad \quad \quad \left( G(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}) I_{ \mathcal{A}_n } - G(y_1,\ldots,y_{m-1}) I_{ \mathcal{A} } \right) ~d y_1 \dots d y_{m-1}
\end{eqnarray*}
which is identical to
\begin{eqnarray*}
&&\sum_{(k_1,\ldots,k_{m-1}) \in \{1,\ldots,n\}^{m-1}} \int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}} \nonumber\\
& & \quad \quad \quad \quad \quad \quad \quad \left(G(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}) - G(y_1,\ldots,y_{m-1})\right) I_{ \mathcal{A}_n } ~d y_1 \dots d y_{m-1}\\
& & \quad \quad + \sum_{(k_1,\ldots,k_{m-1}) \in \{1,\ldots,n\}^{m-1}} \int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}} \nonumber\\
& & \quad \quad \quad \quad \quad \quad \quad \quad G(y_1,\ldots,y_{m-1}) \left( I_{ \mathcal{A}_n } - I_{ \mathcal{A} } \right)~d y_1 \dots d y_{m-1}\\
&& := \mathcal{S}_1 + \mathcal{S}_2.
\end{eqnarray*}
{\it Step 3: Estimate $\mathcal{S}_1$.} Since $G$ is Lipschitz, for $y_i \in [\frac{k_i-1}{n}, \frac{k_i}{n}]~(1\le i \le m-1)$,
\begin{eqnarray*}
|G(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}) - G(y_1,\ldots,y_{m-1})|
&\le& C \cdot \sqrt{\sum_{i=1}^{m-1}(y_i - \frac{k_i}{n})^2}\\
&\le& C \cdot \frac{\sqrt{m}}{n},
\end{eqnarray*}
for some constant $C$ depending only on the Lipschitz constant of $G$. Thus
\begin{eqnarray}\label{eq:S1}
|\mathcal{S}_1| &\le& \sum_{(k_1,\ldots,k_{m-1}) \in \{1,\ldots,n\}^{m-1}} \int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}} \nonumber\\
& & \quad \quad \quad \quad \quad \quad \quad |G(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}) - G(y_1,\ldots,y_{m-1})| ~d y_1 \dots d y_{m-1} \nonumber\\
&\le& C \cdot \frac{\sqrt{m}}{n} \Big(\frac{1}{n}\Big)^{m-1} n^{m-1} = \frac{C\sqrt{m}}{n}.
\end{eqnarray}
{\it Step 4: Estimate $\mathcal{S}_2$.} Since $G$ is bounded on $\overline{\nabla}_{m-1}$, $\|G\|_{\infty} :=\sup_{\mathbf{x}\in \overline{\nabla}_{m-1}} |G(\mathbf{x})| < \infty$ and thus
\begin{eqnarray}\label{eq:S2}
|\mathcal{S}_2| \le \|G\|_{\infty}\sum_{(k_1,\ldots,k_{m-1}) \in \{1,\ldots,n\}^{m-1}} \int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}} | I_{ \mathcal{A}_n } - I_{ \mathcal{A}}|~d y_1 \dots d y_{m-1}.
\end{eqnarray}
Now we control $| I_{ \mathcal{A}_n } - I_{ \mathcal{A}}|$ provided $\frac{k_i -1}{n} < y_i < \frac{k_i}{n}$ for $1\le i \le m-1$.
By definition,
\begin{eqnarray}\label{eq:id-An}
I_{ \mathcal{A}_n}= \left\{
\begin{array}{lr}
1 , \text{if}~\frac{k_1}{n} > \cdots > \frac{k_{m-1}}{n} > 1- \sum_{i=1}^{m-1} \frac{k_i}{n} > 0\\
0 , \text{otherwise}
\end{array}
\right.
\end{eqnarray}
and
\begin{eqnarray}\label{eq:id-A}
I_{ \mathcal{A}}= \left\{
\begin{array}{lr}
1 , \text{if}~y_1 > \cdots > y_{m-1} > 1- \sum_{i=1}^{m-1} y_i \ge 0\\
0 , \text{otherwise.}
\end{array}
\right.
\end{eqnarray}
Let $\mathcal{B}_n$ be a subset of $\mathcal{A}_n$ such that
\begin{eqnarray*}
\mathcal{B}_n= \mathcal{A}_n \cap \Big\{(k_1, \cdots, k_{m-1})\in \{1,\ldots,n\}^{m-1};\, \frac{k_{m-1}}{n} +\sum_{i=1}^{m-1} \frac{k_i}{n} > \frac{m}{n}+1\Big\}.
\end{eqnarray*}
Given $(k_1, \cdots, k_{m-1})\in \mathcal{B}_n$, for any
\begin{eqnarray}\label{red_book}
\frac{k_1 - 1}{n} < y_1 < \frac{k_1}{n}, \cdots, \frac{k_{m-1} - 1}{n} < y_{m-1} < \frac{k_{m-1}}{n},
\end{eqnarray}
it is easy to verify from (\ref{eq:id-A}) and (\ref{eq:id-An}) that $I_{\mathcal{A}}=1$. Hence,
\begin{eqnarray}
I_{\mathcal{A}_n}&=&I_{\mathcal{B}_n} + I_{\mathcal{A}_n\backslash\mathcal{B}_n} \nonumber\\
& \leq & I_{\mathcal{A}} + I_{\mathcal{A}_n \cap \{ k_{m-1} + \sum_{i=1}^{m-1}k_i \leq n+m \}} \nonumber\\
& = & I_{\mathcal{A}} + \sum_{j=n+1}^{n+m}I_{E_j}\label{spring_cold}
\end{eqnarray}
where
\begin{eqnarray*}
E_j&=&\Big\{(k_1, \cdots, k_{m-1})\in \{1,\ldots,n\}^{m-1};\, k_1>\ldots>k_{m-1}\ge 1, \\
& & \quad \quad \quad \quad \quad \quad\quad\quad\quad\quad\quad\quad\quad k_{m-1} + \sum_{i=1}^{m-1}k_i=j, \sum_{i=1}^{m-1} k_i <n\Big\}
\end{eqnarray*}
for $n+1 \leq j \leq m+n$. Let us estimate $|E_j|$. From the last two restrictions, we obtain $k_{m-1} >j-n$. Since $\sum_{i=1}^{m-1} k_i <n$ and $k_i > k_{m-1}$ for $1\le i \le m-2$, we have $j-n+1 \le k_{m-1} \le \frac{n}{m-1}$.
For each fixed $k_{m-1}$, the remaining parts $k_1> \ldots >k_{m-2}$ form an ordered positive integer solution of the linear equation $\sum_{i=1}^{m-2}k_i = j - 2k_{m-1}$, and thus
\begin{eqnarray*}
|E_{j}| \le \sum_{j-n+1 \le l \le \frac{n}{m-1}}^{} \frac{\binom{j-2l-1}{m-3}}{(m-2)!} \le \left(\frac{n}{m-1}+n-j \right)\frac{\binom{2n-j-3}{m-3}}{(m-2)!}.
\end{eqnarray*}
As a result, we obtain the crude upper bound
\begin{eqnarray}\label{eq:E}
\sum_{j=n+1}^{n+m}|E_j| \le \sum_{j=n+1}^{n+m} \left(\frac{n}{m-1}+n-j \right)\frac{\binom{2n-j-3}{m-3}}{(m-2)!}\le \frac{m\cdot n^{m-2}}{(m-1)!(m-3)!}.
\end{eqnarray}
On the other hand, consider a subset of $\mathcal{A}_n^c:=\{\frac{1}{n},\frac{2}{n}, \cdots, 1\}^{m-1}\backslash \mathcal{A}_n$ defined by
\begin{eqnarray*}
\mathcal{C}_n &=&
\frac{1}{n} \Big\{(k_1, \cdots, k_{m-1})\in \{1,\ldots,n\}^{m-1};\, \mbox{either}\ k_i\leq k_{i+1}-1\ \mbox{for some }
1\leq i \leq m-2,\\
& & \ \mbox{or}\
k_1+\cdots + k_{m-2} + 2k_{m-1} \leq n,\ \mbox{or}\ k_1+\cdots + k_{m-1} \geq m+n-1 \Big\}.
\end{eqnarray*}
Set $\mathcal{A}^c=[0,1]^{m-1}\backslash \mathcal{A}$. Given $(\frac{k_1}{n}, \cdots, \frac{k_{m-1}}{n})\in \mathcal{C}_n$, for any $k_i$'s and $y_i$'s satisfying (\ref{red_book}), it is not difficult to check that $I_{\mathcal{A}^c}=1$. Consequently,
\begin{eqnarray*}
I_{\mathcal{A}_n^c} &= & I_{\mathcal{C}_n} + I\Big\{(\frac{k_1}{n}, \cdots, \frac{k_{m-1}}{n})\in \mathcal{A}_n^c;\, k_i> k_{i+1}-1\ \mbox{for all }\ 1\leq i \leq m-2,\\
& & ~~~~~~~~~~~~ k_1+\cdots + k_{m-2} + 2k_{m-1} > n,\ \mbox{and}\ k_1+\cdots + k_{m-1} < m+n-1 \Big\}\\
& \leq & I_{\mathcal{A}^c} + I(\mathcal{D}_{n,m,1}) + I(\mathcal{D}_{n,m,2}),
\end{eqnarray*}
or equivalently,
\begin{eqnarray}\label{coca_warm}
I_{\mathcal{A}_n}
\geq I_{\mathcal{A}} - I(\mathcal{D}_{n,m,1}) - I(\mathcal{D}_{n,m,2}),
\end{eqnarray}
where
\begin{eqnarray*}
& &\mathcal{D}_{n,m,1}=\bigcup_{l=n}^{ n+m-2}\frac{1}{n}\big\{(k_1, \cdots, k_{m-1})\in \{1,\ldots,n\}^{m-1};\, \sum_{i=1}^{m-1}k_i=l, k_1 \ge \ldots \ge k_{m-1} \big\};\\
& & \mathcal{D}_{n,m,2}=\bigcup_{l=1}^{m-2}\frac{1}{n}\big\{(k_1, \cdots, k_{m-1})\in \{1,\ldots,n\}^{m-1};\, k_l=k_{l+1}, k_1 \ge \ldots \ge k_{m-1},\\
& & \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \sum_{i=1}^{m-1}k_i + k_{m-1} \ge n+1, \sum_{i=1}^{m-1} k_i \le n+m-2\big\}.
\end{eqnarray*}
By the definition of partitions and \eqref{eq:size}, we have the following bound on $|\mathcal{D}_{n,m,1}|$.
\begin{eqnarray}\label{eq:D1}
|\mathcal{D}_{n,m,1}| &\le& \sum_{l=n}^{n+m-2}|\mathcal{P}_l(m-1)| \sim \sum_{l=n}^{n+m-2} \frac{\binom{l-1}{m-2}}{(m-1)!}\nonumber\\
&\le & (m-1)\frac{\binom{n+m-2}{m-2}}{(m-1)!} \le \frac{(n+m-2)^{m-2}}{[(m-2)!]^2}
\end{eqnarray}
as $n\to \infty$.
The estimation of $|\mathcal{D}_{n,m,2}|$ follows the same argument as in (\ref{eq:E}). For the cases $m=3$ or $m=4$, it is easy to verify that $|\mathcal{D}_{n,m,2}|=O(n^{m-2})$. Now we assume $m\ge 5$. First, from the decreasing order of the $k_i$ and $\sum_{i=1}^{m-1}k_i \le n+m-2$,
we determine the range of $k_{m-1}$: $$1\le k_{m-1} \le \frac{n+m-2}{m-1}.$$ On the other hand, $n+1-2k_{m-1} \le \sum_{i=1}^{m-2}k_i \le n+m-2-k_{m-1}$. If $l\neq m-2$, from the restriction $k_l = k_{l+1}$, we see that $k_1 + \ldots+k_{l-1} + k_{l+2}+\ldots +k_{m-2} = s-2k_l$ gives
an ordered positive integer solution of the equation $j_1+\ldots+j_{m-4}=s-2k_l$, where $n+1-2k_{m-1} \le s \le n+m-2-k_{m-1}$. If $l=m-2$, then $k_1 + \cdots +k_{m-3} = s-2k_{m-1}$ and $n+1-3k_{m-1} \le s-2k_{m-1} \le n+m-2-2k_{m-1}$. Therefore, we have the following crude upper bound
\begin{eqnarray}\label{eq:D2}
|\mathcal{D}_{n,m,2}| &\le& \sum_{l=1}^{m-3}\sum_{k_{m-1}=1}^{\frac{n+m-2}{m-1}} \sum_{s=n+1-2k_{m-1}}^{n+m-2-k_{m-1}}\sum_{k_{m-1}\le k_l \le s/2} \frac{\binom{s-2k_l-1}{m-5}}{(m-4)!}\nonumber\\
&& \quad\quad\quad \quad + \sum_{k_{m-1}=1}^{\frac{n+m-2}{m-1}} \sum_{s=n+1-3k_{m-1}}^{n+m-2-2k_{m-1}} \frac{\binom{s-k_{m-1}-1}{m-4}}{(m-3)!}\nonumber\\
&=& O\left( \frac{n^3(m-3)}{m^2(m-4)!} \binom{n+m-6}{m-5} + \frac{n^2}{m^2(m-3)!} \binom{n+m-6}{m-4}\right)\nonumber\\
&=& O\left( \frac{n^2 (n+m)^{m-4} }{m (m-4)!(m-5)!}\right).
\end{eqnarray}
Joining (\ref{spring_cold}) and (\ref{coca_warm}), and assuming (\ref{red_book}) holds, we arrive at
\begin{eqnarray*}
|I_{\mathcal{A}_n}- I_{\mathcal{A}}|\leq I(\mathcal{D}_{n,m,1})+ I(\mathcal{D}_{n,m,2}) + \sum_{i=n+1}^{n+m}I_{E_i}.
\end{eqnarray*}
Observing that the $\mathcal{D}_{n,m,i}$'s and the $E_i$'s do not depend on the $x_i$'s, we obtain from (\ref{eq:S2}) that
\begin{eqnarray*}
|\mathcal{S}_2| &\leq & \|G\|_{\infty}\sum_{k_1 = 1}^n \cdots \sum_{k_{m-1}=1}^n \Big[\sum_{i=1}^2I(\mathcal{D}_{n,m,i}) + \sum_{i=n}^{n+m}I_{E_i}\Big]\int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}}1 ~d x_1 \dots d x_{m-1}\\
& = & \|G\|_{\infty} \Big(\sum_{i=1}^2|\mathcal{D}_{n,m,i}| + \sum_{i=n}^{n+m}|E_i|\Big)\cdot \frac{1}{n^{m-1}}.
\end{eqnarray*}
For $2\le m \le 4$,
$$|\mathcal{S}_2| = O(n^{-1}).$$
For $m\ge 5$, by \eqref{eq:E},\eqref{eq:D1} and \eqref{eq:D2},
\begin{eqnarray*}
|\mathcal{S}_2|&=&O\left( \frac{m\cdot n^{m-2}}{(m-1)!(m-3)!} + \frac{(n+m)^{m-2}}{[(m-2)!]^2}+\frac{n^2 (n+m)^{m-4} }{m (m-4)!(m-5)!} \right)\cdot\frac{1}{n^{m-1}}\\
&=& O\left( \frac{(1+\frac{m}{n})^m}{n} \right)
\end{eqnarray*}
as $n\to\infty.$
{\it Step 5: Difference between the expectations \eqref{eq:part} and \eqref{eq:gen}}. From {\it Step 3} and {\it Step 4}, we obtain the following bound on the difference between the numerators in \eqref{eq:part} and \eqref{eq:gen}:
\begin{eqnarray}\label{eq:diff}
|\mathcal{I}_1 - \mathcal{I}_2| \le |\mathcal{S}_1| + |\mathcal{S}_2| \le C_1\cdot \left(\frac{\sqrt{m}}{n} + \frac{(1+\frac{m}{n})^m}{n} \right)
\end{eqnarray}
as $n\to \infty$ for some constant $C_1$ depending only on the Lipschitz constants of $\psi$ and $f$ and the upper bounds of $\psi$ and $f$ on the compact set $\overline{\nabla}_{m-1}$.
Choosing $\psi$ to be the constant function $1$ on $\overline{\nabla}_{m-1}$, the same argument bounds the difference between the denominators in \eqref{eq:part} and \eqref{eq:gen}.
Finally, we compare the expectations \eqref{eq:part} and \eqref{eq:gen}. Since $m$ is fixed, by \eqref{eq:error} and \eqref{eq:diff},
\begin{eqnarray}\label{eq:compare}
|\mathbb{E}\left(\psi(\frac{k_1}{n},\ldots,\frac{k_m}{n}) \right) - \mathbb{E}\left(\psi(x_1,\ldots,x_m) \right)|
&=& O(\frac{\sqrt{m}}{n} + \frac{(1+\frac{m}{n})^m}{n}) + |\mathcal{E}_{n,m}|\\
&\to& 0 \nonumber
\end{eqnarray}
as $n\to \infty$.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:dir}]
By Theorem \ref{thm:general},
$$(\frac{k_1}{n},\ldots, \frac{k_m}{n}) \to (x_1, \ldots, x_m)\sim \mu$$
as $n \to \infty$, where $\mu$ has pdf
\begin{eqnarray}\label{eq:result}
g(y_1,\ldots, y_m) =\frac{ y_1^{\alpha-1} \cdots y_m^{\alpha-1}}{\int_{\nabla_{m-1}} y_1^{\alpha-1} \cdots y_m^{\alpha-1} \,dy_1\ldots dy_{m-1}}.
\end{eqnarray}
It suffices to show that the decreasing order statistics $(X_{(1)}, \ldots, X_{(m)})$ of $(X_1,\ldots,X_m) \sim \text{Dir}(\alpha)$ have the same pdf on $\nabla_{m-1}$. For any continuous function $\psi$ defined on $\nabla_{m-1}$, by symmetry,
\begin{eqnarray*}
&&\mathbb{E}\psi(X_{(1)}, \ldots, X_{(m)}) \\
&=& \int_{W_{m-1}} \psi(y_{(1)}, \ldots, y_{(m)})\mathbf{1}_{\{y_{(1)}\ge \ldots \ge y_{(m)} \}}\frac{\Gamma(m\alpha)}{\Gamma(\alpha)^m} y_1^{\alpha-1} \cdots y_m^{\alpha-1}\,dy_1\ldots dy_{m-1}\\
&=& \int_{W_{m-1}} \sum_{\sigma\in \mathcal{S}_m} \psi(y_{\sigma(1)}, \ldots, y_{\sigma(m)})\mathbf{1}_{\{y_{\sigma(1)}\ge \ldots \ge y_{\sigma(m)} \}} \frac{\Gamma(m\alpha)}{\Gamma(\alpha)^m} y_{\sigma(1)}^{\alpha-1} \cdots y_{\sigma(m)}^{\alpha-1}\,dy_1\ldots dy_{m-1}\\
&=& \int_{\nabla_{m-1}} \psi(y_1,\ldots,y_m) \frac{m!\Gamma(m\alpha)}{\Gamma(\alpha)^m} y_{1}^{\alpha-1} \cdots y_{m}^{\alpha-1}\,dy_1\ldots dy_{m-1}.
\end{eqnarray*}
Therefore, the pdf of $(X_{(1)}, \ldots, X_{(m)})$ is
\begin{eqnarray}\label{eq:ordered}
\frac{m!\Gamma(m\alpha)}{\Gamma(\alpha)^m} y_{1}^{\alpha-1} \cdots y_{m}^{\alpha-1}
\end{eqnarray}
on the set $\nabla_{m-1}$.
Similarly, since the density integrates to one, $$\int_{W_{m-1}}\frac{\Gamma(m\alpha)}{\Gamma(\alpha)^m} x_1^{\alpha-1} \cdots x_m^{\alpha-1}\,dx_1\ldots dx_{m-1}=1,$$ by symmetry we have
$$\int_{\nabla_{m-1}} y_1^{\alpha-1} \cdots y_m^{\alpha-1} \,dy_1\ldots dy_{m-1}=\frac{\Gamma(\alpha)^m}{m!\Gamma(m\alpha)}.$$
Comparing the above with \eqref{eq:ordered} and \eqref{eq:result}, we complete the proof.
\end{proof}
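As an illustrative numerical check of the last displayed identity in the case $m=2$ (not part of the proof; the helper names below are ours): the claim reduces to $\int_{1/2}^{1} y^{\alpha-1}(1-y)^{\alpha-1}\,dy=\frac{\Gamma(\alpha)^2}{2\,\Gamma(2\alpha)}$, which can be verified by quadrature.

```python
import math

def integrand(y, alpha):
    # the Dirichlet-type density factor for m = 2
    return y ** (alpha - 1) * (1 - y) ** (alpha - 1)

def midpoint_integral(alpha, n_steps=200000):
    # integrate over the ordered-simplex slice [1/2, 1] by the midpoint rule
    h = 0.5 / n_steps
    return h * sum(integrand(0.5 + (i + 0.5) * h, alpha) for i in range(n_steps))

def predicted(alpha):
    # Gamma(alpha)^m / (m! * Gamma(m * alpha)) with m = 2
    return math.gamma(alpha) ** 2 / (2 * math.gamma(2 * alpha))

for alpha in (1.0, 2.0, 3.5):
    assert abs(midpoint_integral(alpha) - predicted(alpha)) < 1e-6
```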
\begin{proof}[Proof of Corollary \ref{cor:sphere}]
By Theorem \ref{thm:general} or Corollary \ref{cor:dir},
$$(\frac{k_1}{n},\ldots, \frac{k_m}{n}) \to (Y_1, \ldots, Y_m)\sim \mu$$
as $n \to \infty$, where $\mu$ has pdf
$$\frac{m!\cdot \Gamma(\frac{m}{\alpha})}{\Gamma(\frac{1}{\alpha})^{m}}(y_1\ldots y_{m})^{\alpha-1}
$$
on $\nabla_{m-1}$ and zero elsewhere. Since $f(x)=x^{\alpha}$ is continuous, by the continuous mapping theorem,
$$\left( (\frac{k_1}{n})^{\alpha},\ldots, (\frac{k_m}{n})^{\alpha}\right) \to \left( Y_1^{\alpha}, \ldots, Y_m^{\alpha} \right)$$
as $n \to \infty$.
Now it suffices to show $\left( Y_1^{\alpha}, \ldots, Y_m^{\alpha} \right)$ has the uniform distribution on the set
$$\mathcal{U}_{m-1}=\{(x_1,\ldots,x_m)\in [0,1]^m;\sum_{i=1}^m x_i^{\frac{1}{\alpha}}=1, x_1\ge \ldots \ge x_m \}.$$
This can be seen by change of variables. For any continuous function $\psi$ defined on $\nabla_{m-1}$,
\begin{eqnarray*}
&&\mathbb{E}\psi(Y_1^{\alpha}, \ldots, Y_m^{\alpha}) \\
&=& \int_{\nabla_{m-1}} \psi(y_1^{\alpha}, \ldots, y_m^{\alpha})\frac{m!\cdot \Gamma(\frac{m}{\alpha})}{\Gamma(\frac{1}{\alpha})^{m}} y_1^{\alpha-1} \cdots y_m^{\alpha-1}\,dy_1\ldots dy_{m-1}\\
&=& \int_{\mathcal{U}_{m-1}} \psi(x_1, \ldots, x_m)\frac{m!\cdot \Gamma(\frac{m}{\alpha})}{\alpha^{m-1}\Gamma(\frac{1}{\alpha})^{m}} \,dx_1\ldots dx_{m-1}.
\end{eqnarray*}
In the last equality, we set $x_i=y_i^{\alpha}$ for $1\le i \le m$. Therefore, the pdf of $\left( Y_1^{\alpha}, \ldots, Y_m^{\alpha} \right)$ is constant on $\mathcal{U}_{m-1}$;
that is, the distribution is uniform on $\mathcal{U}_{m-1}.$ The proof is complete.
\end{proof}
\subsection{Case II: $m$ tends to infinity and $m=o(n^{1/3})$}\label{sec:generalinf}
Now we consider the case that $m$ depends on $n$. The formula \eqref{eq:size} holds as long as $m=o(n^{1/3})$.
Let $\mu$ and $\nu$ be two Borel
probability measures on a Polish space $S$ with the Borel $\sigma$-algebra $\mathcal{B}(S)$. Define
\begin{eqnarray}\label{cream}
\rho(\mu, \nu)
& = & \sup_{\| \varphi \|_L\leq 1}\left|\int_{S} \varphi(x)\, \mu(dx) -
\int_{S}\varphi(x)\, \nu(dx)\right|,
\end{eqnarray}
where $\varphi$ is a bounded Lipschitz function defined on
$S$ with $\|\varphi\|=\sup_{x\in S}|\varphi(x)|,$ and
$\|\varphi\|_L=\|\varphi\|+\sup_{x\ne y}|\varphi(x)-\varphi(y)|/|x-y|.$ It is known that $\mu_n$ converges to $\mu$ weakly if and only if $\lim_{n\to\infty}\int \varphi(x)\, \mu_n(dx) = \int \varphi(x)\, \mu(dx)$ for every bounded and continuous function $\varphi(x)$ defined on $\mathbb{R}^m$, and if and
only if $\lim_{n\to\infty}\rho(\mu_n, \mu)=0$; see, e.g., Chapter 11 from \citet{Dudley}.
Let $\{X_i, X_{n,i};\, n\geq 1,\, i\geq 1\}$ be random variables taking values in $[0,1]$. Set $X_n=(X_{n1}, X_{n2},\cdots)\in [0,1]^{\infty}$. If $X_{ni}=0$ for $i>m$, we simply write $X_n=(X_{n1},\cdots, X_{nm})$. We say that $X_n$ \emph{converges weakly} to $X:=(X_1, X_2, \cdots)$ as $n\to\infty$ if, for any $r\geq 1$, $(X_{n1},\cdots, X_{nr})$ converges weakly to $(X_1, \cdots, X_r)$ as $n\to\infty$. This convergence is the same as the weak convergence of random variables in $([0,1]^{\infty}, d)$, where
\begin{eqnarray}\label{Pepsi}
d(x,y)=\sum_{i=1}^{\infty}\frac{|x_i-y_i|}{2^i}
\end{eqnarray}
for $x=(x_1, x_2, \cdots)\in [0, 1]^{\infty}$ and $y=(y_1, y_2, \cdots)\in [0, 1]^{\infty}$. The topology generated by this metric is the same as the product topology.
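The metric in (\ref{Pepsi}) is straightforward to compute on truncated sequences; the short sketch below (helper name is ours, truncating the series at finitely many coordinates) also illustrates that $d\le 1$ on $[0,1]^{\infty}$.

```python
def d(x, y):
    """Truncated version of the metric d(x, y) = sum_i |x_i - y_i| / 2^i,
    with coordinates indexed from 1 as in the text."""
    return sum(abs(a - b) / 2 ** (i + 1) for i, (a, b) in enumerate(zip(x, y)))

# the distance between the all-zero and all-one sequences, truncated at
# k coordinates, is 1 - 2^{-k} <= 1
k = 10
dist = d([0.0] * k, [1.0] * k)
```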
\begin{theorem}\label{thm:infinity}
Let $m=m_n\to\infty$ as $n\to\infty.$ Assume $\kappa=(k_1,\ldots, k_m) \in \mathcal{P}_n(m)$ is chosen with probability as in \eqref{eq:general}. Let $(X_{m,1}, \cdots, X_{m,m})$ and $X=(X_1, X_2, \cdots)$ be random variables taking values in $\nabla_{m-1}$ and $\nabla$, respectively. If
\begin{eqnarray}\label{drink}
\sup_{\|\varphi\|_L\leq 1}\Big|E\varphi\big(\frac{k_1}{n}, \cdots, \frac{k_m}{n}\big) -
E\varphi(X_{m,1}, \cdots, X_{m,m})\Big| \to 0
\end{eqnarray}
as $n\to\infty$, and $(X_{m,1}, \cdots, X_{m,m})$ converges weakly to $X$ as $n\to\infty$, then $\big(\frac{k_1}{n}, \cdots, \frac{k_m}{n}\big)$ converges weakly to $X$ as $n\to\infty$.
\end{theorem}
\begin{proof} Given an integer $r\geq 1$, to prove the theorem it is enough to show that $\big(\frac{k_1}{n}, \cdots, \frac{k_r}{n}\big)$ converges weakly to $(X_1, \cdots, X_r)$ as $n\to\infty.$ Since $m=m_n\to\infty$ as $n\to\infty$, without loss of generality we assume $r<m$ in the rest of the discussion. For any random vector $Z$, let $\mathcal{L}(Z)$ denote its probability distribution. Review (\ref{cream}). By the triangle inequality,
\begin{eqnarray}
& & \rho\Big(\mathcal{L}\big(\frac{k_1}{n}, \cdots, \frac{k_r}{n}\big),\, \mathcal{L}\big(X_1, \cdots, X_r\big)\Big)\nonumber\\
& \leq & \rho\Big(\mathcal{L}\big(\frac{k_1}{n}, \cdots, \frac{k_r}{n}\big),\, \mathcal{L}\big(X_{m,1}, \cdots, X_{m,r}\big)\Big) + \rho\Big(\mathcal{L}\big(X_{m,1}, \cdots, X_{m,r}\big),\, \mathcal{L}\big(X_1, \cdots, X_r\big)\Big) \nonumber\\
& & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\label{tang}
\end{eqnarray}
For any function $\varphi(x_1, \cdots, x_r)$ defined on $[0, 1]^r$ with $\|\varphi\|_{L}\leq 1$, set $\tilde{\varphi}(x_1, \cdots, x_m)=\varphi(x_1, \cdots, x_r)$ for all $(x_1, \cdots, x_m)\in\mathbb{R}^m$. Then $\|\tilde{\varphi}\|_{L}\leq 1$. Condition (\ref{drink}) therefore implies that the first distance on the right-hand side of (\ref{tang}) goes to zero. Further, the assumption that $(X_{m,1}, \cdots, X_{m,m})$ converges weakly to $X$ implies that the second distance on the right-hand side of (\ref{tang}) also goes to zero. Hence the distance on the left-hand side goes to zero. The proof is completed.
\end{proof}
With Theorem \ref{thm:infinity} and the estimates in the proof of Theorem \ref{thm:general}, we obtain the proof of Theorem \ref{thm:general-infinity} immediately.
\begin{proof}[Proof of Theorem \ref{thm:general-infinity}]
Assume $\kappa=(k_1,\ldots, k_m) \in \mathcal{P}_n(m)$ is chosen with probability as in \eqref{eq:general}. In the proof of Theorem \ref{thm:general}, we have shown in \eqref{eq:compare} that
\begin{eqnarray*}
& & \sup_{\|\varphi\|_L \le 1}|\mathbb{E}\left(\varphi(\frac{k_1}{n},\ldots,\frac{k_m}{n}) \right) - \mathbb{E}\left(\varphi(x_1,\ldots,x_m) \right)|\\
&=& O(\frac{\sqrt{m}}{n} + \frac{(1+\frac{m}{n})^m}{n}) + |\mathcal{E}_{n,m}| \to 0.
\end{eqnarray*}
as $n\to \infty$. Recall from \eqref{eq:error} that $|\mathcal{E}_{n,m}| \to 0$ as long as $m=o(n^{1/3})$. Therefore, by Theorem \ref{thm:infinity}, we conclude that $(\frac{k_1}{n},\cdots, \frac{k_m}{n})$ converges weakly to $X$ as $n\to \infty$.
\end{proof}
\begin{comment}
\begin{proof}[Proof of Corollary \ref{cor:pos-dir}]
It is known that if $(X_{m,1}, \cdots, X_{m,m})$ is the decreasing order statistics of the symmetric Dirichlet distribution $\text{Dir}(\alpha)$ with $\alpha=\frac{\theta}{m-1}$, then $(X_{m,1}, \cdots, X_{m,m})$ converges weakly to the Poisson-Dirichlet distribution on $\nabla$ with parameter $\theta$. See for example \citet[Theorem 2.1]{Feng}. The conclusion follows from Theorem \ref{thm:general-infinity}. For the marginal distribution of $k_1$, it follows from the continuous mapping theorem and the marginal distribution for the first coordinate of the Poisson-Dirichlet distribution on $\nabla$ (see \citet[Theorem 2.5]{Feng}).
\end{proof}
\end{comment}
\bibliographystyle{apalike}
\section{Introduction}
Discovered in 1995 at the Tevatron $p\bar{p}$ collider by the CDF and D0 collaborations \cite{CDF,D0}, the top quark is the heaviest known particle in the Standard Model (SM) of particle physics. At the Large Hadron Collider (LHC) \cite{LHC}, top quarks are produced in pairs through the strong interaction and singly through electroweak processes in proton-proton collisions. With a mass of about 173 GeV, close to the electroweak symmetry breaking scale, measurements of top quark properties provide important tests of the Standard Model. The top quark has an extremely short lifetime and decays before hadronization, providing a unique opportunity to study the properties of a bare quark. Due to the high production rate of the top quark at the LHC, measurements of its properties are of particular interest.
In this context, a variety of analyses have been carried out by the ATLAS collaboration \cite{ATLAS} using data taken at centre-of-mass energies of $\sqrt{s}$ = 7 TeV and 8 TeV; some of the most recent results are discussed here.
\section{Charge asymmetry}
The difference in production rate between events with positive and negative absolute rapidity difference between top quarks and top antiquarks, known as the charge asymmetry ($A_{C}$) and defined as
\begin{equation}
A_{C} = \frac{N(\Delta |y| > 0) - N(\Delta |y| < 0)}{N(\Delta |y| > 0) + N(\Delta |y| < 0)} ,
\label{eq:AC}
\end{equation}
is one of the interesting features of \ensuremath{t\bar{t}\ } production in $pp$ collisions. The Standard Model at next-to-leading order (NLO) in Quantum Chromodynamics (QCD) predicts a charge asymmetry at the level of $A_{C} \sim 1\%$. On the other hand, several processes beyond the Standard Model (BSM) can alter $A_{C}$, either through anomalous vector or axial-vector couplings (e.g. axigluons) or via interference with SM processes, predicting different asymmetries.
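As a purely illustrative sketch (not ATLAS code; the toy $\Delta|y|$ sample below is simulated), the counting definition in Eq.~(\ref{eq:AC}) can be written in a few lines of Python:

```python
import random

def charge_asymmetry(delta_abs_y):
    """A_C = (N(D|y| > 0) - N(D|y| < 0)) / (N(D|y| > 0) + N(D|y| < 0)),
    computed from a sample of D|y| = |y_t| - |y_tbar| values; events with
    D|y| exactly zero do not enter either count, as in Eq. (1)."""
    n_pos = sum(1 for d in delta_abs_y if d > 0)
    n_neg = sum(1 for d in delta_abs_y if d < 0)
    return (n_pos - n_neg) / (n_pos + n_neg)

# Toy sample: Gaussian D|y| with a small positive shift, mimicking
# an asymmetry at the percent level.
random.seed(1)
sample = [random.gauss(0.01, 1.0) for _ in range(100_000)]
print(charge_asymmetry(sample))
```

In a real analysis the $\Delta|y|$ values would of course come from reconstructed and unfolded events rather than a Gaussian toy.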
One of the latest charge asymmetry measurements at ATLAS uses the full $\sqrt{s}$ = 8 TeV data set, corresponding to an integrated luminosity of 20.3 fb$^{-1}$, in the single-lepton final state (where one top quark decays to a leptonically decaying \textit{W} boson and the other to a hadronically decaying \textit{W} boson). The result was obtained inclusively and differentially as a function of the invariant mass, transverse momentum and longitudinal boost ($\beta_{Z}$) of the \ensuremath{t\bar{t}\ } system, requiring at least four jets, one high-$p_{T}$ lepton and missing transverse energy. The events are reconstructed via a kinematic likelihood fit method~\cite{klfitter}, and a Bayesian unfolding procedure is applied
to account for distortions due to acceptance and detector effects, leading to parton-level $A_{C}$ measurements. The inclusive measurement yields a value of $A_{C} =0.009 \pm 0.005$ (stat. + syst.)~\cite{CA}.
In addition, ATLAS performed the charge asymmetry measurement in the boosted topology \cite{CA_boost}, both inclusively and differentially, by reconstructing the hadronic hemisphere of the \ensuremath{t\bar{t}\ } event as one large-radius jet with radius parameter $R=1.0$ and $p_{T} > 300$ GeV. The boosted topology provides an accurate $A_{C}$ measurement as a function of the \ensuremath{t\bar{t}\ } invariant mass ($m_{\ensuremath{t\bar{t}\ }}$) in the TeV range through a more precise reconstruction of $m_{\ensuremath{t\bar{t}\ }}$ for events with highly boosted top quarks. The inclusive measurement for $m_{\ensuremath{t\bar{t}\ }} > 0.75$ TeV and $|\Delta |y|| < 2$ yields $A_{C}=(4.2 \pm 3.2)\%$, which is within one standard deviation of the SM expectation. Furthermore, the differential measurement as a function of the invariant mass of the \ensuremath{t\bar{t}\ } system disfavours the t-channel $W^{'}$ boson model \cite{AguilarSaavedra:2011hz} in the highest $m_{\ensuremath{t\bar{t}\ }}$ bin.
Figure \ref{fig:AC} shows the differential $A_{C}$ measurement in resolved and boosted topologies as a function of $m_{\ensuremath{t\bar{t}\ }}$. These measurements provide a constraint on extensions of the SM. The results of both measurements agree with the SM prediction.
\begin{figure}[h]
\begin{center}
\includegraphics[height=55mm]{figures/ca_diff_a.eps}
\includegraphics[height=56mm]{figures/ca_boosted_diff.eps}
\caption{The $A_{C}$ measurement as a function of $m_{\ensuremath{t\bar{t}\ }}$ in the resolved topology \cite{CA} (left), compared with predictions for the SM and for right-handed colour octets with masses below the \ensuremath{t\bar{t}\ } threshold and beyond the kinematic reach of current LHC searches, and in the boosted topology \cite{CA_boost} (right), compared with the SM prediction of the NLO calculation.}
\label{fig:AC}
\end{center}
\end{figure}
\section{Rare decays of the top quark}
Within the Standard Model, flavour changing neutral currents (FCNC) are forbidden at tree level and are heavily suppressed via the GIM mechanism \cite{GIM}. In contrast, many BSM models predict significant enhancements up to the level of experimental accessibility, making these top quark property measurements an area of high interest. Within this context, ATLAS performed various searches for FCNC processes, such as the recent searches for $\mathscr{B}(t \rightarrow qH)$ \cite{FCNC_qH} and $\mathscr{B}(t \rightarrow qZ)$ \cite{FCNC_qZ}, using \ensuremath{t\bar{t}\ } events produced in the full $\sqrt{s}$ = 8 TeV data set corresponding to an integrated luminosity of 20.3 fb$^{-1}$, with one top quark decaying through the FCNC mode and the other through the dominant SM mode ($t \rightarrow bW$). Only decays of the Higgs boson to $b\bar{b}$ and of the Z boson to charged leptons, together with leptonic \textit{W} boson decays, are considered. The final state of the top quark decay through the $tqH$ process is characterised by an isolated high transverse momentum lepton and at least four jets. The final state of top quark decays through the $tqZ$ process is characterised by three isolated charged leptons, at least two jets, and missing transverse momentum from the undetected neutrino.
For the $tqH$ process, results from other ATLAS searches with $H \rightarrow \gamma \gamma$ and $H \rightarrow W^{+}W^{-}, \tau^{+}\tau^{-}$ have been combined with $H \rightarrow b\bar{b}$ to obtain the most restrictive direct bounds on $tqH$ interactions measured so far. Figure \ref{fig:FCNC_comined} summarises the best fit for the individual searches as well as their combination.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=55mm]{figures/FCNC_combined_tHu.png}
\includegraphics[height=55mm]{figures/FCNC_combined_tHc.png}
\caption{The best-fit for the individual searches as well as their combination for (top) $\mathscr{B}(t \rightarrow Hu)$, assuming that $\mathscr{B}(t \rightarrow Hc) =0$ and (bottom) $\mathscr{B}(t \rightarrow Hc)$, assuming that $\mathscr{B}(t \rightarrow Hu) =0$ \cite{FCNC_qH}.}
\label{fig:FCNC_comined}
\end{center}
\end{figure}
No evidence for signal events above the background expectation is found, and the limits established for these FCNC processes are in agreement with the expected limits.
\section{Spin correlation}
Because of its extremely short lifetime, the top quark decays before hadronization, which implies that its spin information can be accessed from the angular distributions of its decay products. The degree of correlation between the spins of the top quark and the top antiquark is sensitive to the production mechanism. Moreover, many scenarios of physics beyond the Standard Model predict different spin correlations, e.g. models including axigluons, $W^{'}$ bosons, extra right-handed top-quark couplings, etc.
A recent measurement of the correlations between the polar angles of leptons from the decay of top quarks in \ensuremath{t\bar{t}\ } events in the helicity basis was carried out at the ATLAS experiment \cite{Spin}. The data set corresponds to an integrated luminosity of 4.6 fb$^{-1}$ at a centre-of-mass energy of $\sqrt{s}$ = 7 TeV, with candidate events selected in the dilepton topology (where both top quarks in the \ensuremath{t\bar{t}\ } event decay to a leptonically decaying \textit{W} boson) with large missing transverse momentum and at least two jets. The angles $\theta_{1}$ and $\theta_{2}$ between the charged leptons and the direction of motion of the parent quarks in the \ensuremath{t\bar{t}\ } rest frame are sensitive to the spin information, and the distribution of $\cos \theta_{1}\cdot\cos \theta_{2}$ is sensitive to the spin correlation between the top quark and the top antiquark. The events are reconstructed via the so-called topology reconstruction method, and the result is unfolded to parton level using a fully Bayesian unfolding algorithm. The unfolded distribution is in good agreement with the prediction from MC@NLO \cite{MC_NLO}, as displayed in Fig. \ref{fig:Spin}.
In terms of $A_{helicity}=(N_{like}-N_{unlike})/(N_{like}+N_{unlike})$, where $N_{like}$ ($N_{unlike}$) is the number of events where the spins of the top quark and the top antiquark are (anti-)parallel with respect to the helicity basis, the result yields a value of $A_{helicity}= 0.315 \pm 0.061(stat.) \pm 0.049(syst.)$, in good agreement with the NLO QCD prediction of $A_{helicity}= 0.31$ \cite{NLO_correlation}.
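Like $A_{C}$, the observable $A_{helicity}$ is a simple counting asymmetry. The sketch below uses hypothetical event counts, chosen only so that the resulting asymmetry lands near the measured central value:

```python
def a_helicity(n_like, n_unlike):
    """A_helicity = (N_like - N_unlike) / (N_like + N_unlike), where N_like
    (N_unlike) counts events with (anti-)parallel top and antitop spins
    with respect to the helicity basis."""
    return (n_like - n_unlike) / (n_like + n_unlike)

# Hypothetical counts giving an asymmetry of 0.315:
print(a_helicity(6575, 3425))
```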
\begin{figure}[ht]
\begin{center}
\includegraphics[height=85mm]{figures/Spin.eps}
\caption{The unfolded data distribution of $\cos \theta_{1}\cdot\cos \theta_{2}$. The predictions from the SM and from the MC@NLO sample without spin correlation are overlaid for comparison. A symmetric distribution around zero would indicate no spin correlation \cite{Spin}.}
\label{fig:Spin}
\end{center}
\end{figure}
\section{Conclusion}
Measuring top quark properties with high precision provides a powerful probe of the Standard Model and could potentially open a window onto physics beyond the Standard Model.
A variety of recent measurements of top quark properties at ATLAS are discussed. Inclusive and differential charge asymmetry measurements in the resolved and boosted topologies in the single-lepton final state at a centre-of-mass energy of $\sqrt{s}$ = 8 TeV are presented. The measurement in the boosted topology extended the reach of previous ATLAS and CMS \cite{CMS_CA} measurements to beyond 1 TeV and disfavours a t-channel $W^{'}$ boson model in the highest $m_{\ensuremath{t\bar{t}\ }}$ bin. However, no significant deviation from the SM expectation is observed. Since data statistics are the limiting factor in these measurements, analysing the LHC Run II data, with its higher centre-of-mass energy and luminosity, will be of great interest.
Furthermore, several searches for flavour changing neutral current processes were carried out, with no significant evidence for such processes found. An increased amount of data and improved measurement techniques will soon improve the current limits. The measurement of the correlations between the polar angles of leptons from the decay of the pair-produced top quark and top antiquark in the helicity basis in the dilepton final state at a centre-of-mass energy of $\sqrt{s}$ = 7 TeV confirms the spin correlation in \ensuremath{t\bar{t}\ } production and is in good agreement with the predictions of the Standard Model.
\bibliographystyle{JHEP}%
\section{Introduction}
Let $M$ be a (complete) orientable hyperbolic $3$-manifold. Up to isometry, we may identify $M$ with
$\HH^3/\Gamma$, where $\Gamma$ is a discrete,
torsion-free subgroup of $\isomplus(\HH^3)$, uniquely determined
up to conjugacy by the hyperbolic structure of $M$, and isomorphic to $\pi_1(M)$. In this setting we have the following definition:
\begin{definition} A {\it
Margulis number} for $M$ (or for $\Gamma$) is a positive real number $\mu$ such that the following condition holds:
\Claim\label{si si si}
If $P$ is a point of $\HH^3$ and $x,y$ are elements of $\Gamma$ such that $\max(d(P,x\cdot P),d(P,y\cdot P))<\mu$, then $x$ and $y$ commute.
\EndClaim
\end{definition}
Here, and throughout this paper, $d$ denotes hyperbolic distance on $\HH^3$.
I refer the reader to the introduction to \cite{finiteness} for general background discussion of Margulis numbers, including references to generalizations to the case of higher-dimensional or non-orientable hyperbolic manifolds. In \cite{finiteness} I pointed out that if $\Gamma\cong\pi_1(M)$ is non-abelian then there is an {\it optimal Margulis number} for $M$, denoted $\mu(M)$, characterized by the property that a given positive number $\mu$ is a Margulis number for $M$ if and only if $\mu\le\mu(M)$. I also discussed the Margulis Lemma, which implies that there is a constant which is a Margulis number for {\it every} orientable hyperbolic $3$-manifold. In this paper I will denote the largest such constant by $\mu_+(3)$.
In \cite{finiteness} I discussed the problem of finding lower bounds for $\mu(M)$, as $M$ varies over a prescribed class of orientable hyperbolic manifolds, and its significance for the problem of classifying finite-volume orientable hyperbolic $3$-manifolds. Results on this problem include Meyerhoff's lower bound of $0.104\ldots$ for $\mu_+(3)$, given in \cite{meyerhoff} (which should be compared with Marc Culler's upper bound of
$0.616\ldots$); the main result of \cite{hakenmarg}, which implies that $\mu(M)\ge0.286$ for any orientable hyperbolic Haken manifold $M$; the main result of \cite{finiteness}, which asserts that, up to isometry, there are at most finitely many orientable $3$-manifolds with $\mu(M)<0.29$; and Corollaries \ref{born free} and \ref{born to facebook} of this paper, which assert that if every subgroup of rank at most $2$ in $\pi_1(M)$ has infinite index---and in particular if $H_1(M;\QQ)$ has rank at least $3$, or if $M$ is closed and $H_1(M;\ZZ_p)$ has rank at least $4$ for some prime $p$---then $\mu(M)\ge\log3=1.09\ldots$. (These corollaries are well-known consequences of known results, and are included in the present paper for completeness. The essential ingredients are, first, the ``$\log3$ Theorem,'' which is deduced from the results of \cite{paradoxical}, \cite{accs}, \cite{agol}, \cite{cg}, \cite{NS}, and \cite{Oh}, and, second, some topological facts proved in \cite{js} and \cite{sw}.)
It is a standard observation that a lower bound for $\mu(M)$ forces a lower bound for the volume of $M$. Indeed, if $\vol M<\infty$, it is easy to deduce that the $\mu$-thick part of $M$ \cite[Chapter D]{bp} is non-empty, and hence that $M$ contains an isometric copy of a ball $B$ of radius $\mu/2$ in $\HH^3$. In particular, the volume of $B$ is a lower bound for $\vol M$. Various refinements of this estimate are also well known.
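To make the ball estimate concrete: the volume of a ball of radius $r$ in $\HH^3$ is given by the standard formula $\pi(\sinh 2r-2r)$, so a Margulis number $\mu$ yields an explicit lower volume bound. The short Python sketch below evaluates it, using Meyerhoff's value $\mu=0.104$ quoted above purely as a sample input:

```python
import math

def hyperbolic_ball_volume(r):
    """Volume of a ball of radius r in hyperbolic 3-space H^3:
    vol = pi * (sinh(2r) - 2r)."""
    return math.pi * (math.sinh(2.0 * r) - 2.0 * r)

# If mu is a Margulis number for M and vol M < infinity, then M contains
# an isometric copy of a ball of radius mu/2, so its volume bounds vol M
# from below.
mu = 0.104  # Meyerhoff's lower bound for mu_+(3)
print(hyperbolic_ball_volume(mu / 2))
```

For small $r$ the formula reduces, as it should, to the Euclidean value $\tfrac43\pi r^3$ up to higher-order terms.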
The theme of the present paper, exemplified by Theorems A and B below and their corollaries, is that for an orientable hyperbolic $3$-manifold $M$, certain {\it upper} bounds on $\mu(M)$ force {\it upper} bounds on the volume of $M$. From this one can deduce that certain upper bounds on $\mu(M)$ also force upper bounds on certain group-theoretical invariants of $\pi_1(M)$, such as its rank.
Of course, these results can be reinterpreted as saying that suitable lower bounds on the volume of $M$, or on such group-theoretical invariants as the rank of $\pi_1(M)$, imply certain lower bounds on $\mu(M)$. From this point of view, the paper can be seen as a contribution to a body of results, discussed above, that give lower bounds for $\mu(M)$ under various restrictions on $M$.
The following result, which gives a first illustration of how upper bounds on $\mu(M)$ force upper bounds on the volume of $M$, will be proved in the body of the paper as Theorem \ref{abs tract}.
\begin{TheoremA}\label{abs tract intro}
Let $\lambda$ be a positive real number strictly less than $\log3$. Then there is a constant $V_\lambda$ such that for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$ we have $\vol M\le V_\lambda$ (and in particular $\vol M<\infty$).
\end{TheoremA}
The following two corollaries to Theorem A will be proved in the body of the paper as
Corollaries \ref{synecdoche} and \ref{poor thing}.
\begin{simple corollary}\label{synecdoche intro}
Let $\lambda$ be a positive real number strictly less than $\log3$. Then there is a
natural number $d_\lambda$ such that for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$, the group $\pi_1(M)$ has a rank-$2$ subgroup of index at most $d_\lambda$.
\end{simple corollary}
\begin{simple corollary}\label{poor thing intro}
Let $\lambda$ be a positive real number strictly less than $\log3$. Then there is a
natural number $k_\lambda$ such that for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$, the group $\pi_1(M)$ has rank at most $k_\lambda$.
\end{simple corollary}
Note that in Theorem A and the two corollaries stated above, no explicit estimate is given for the constants $V_\lambda$, $d_\lambda$ and $k_\lambda$. As I shall now explain, explicit estimates can be obtained if we replace the assumption $\lambda<\log3$ by the stronger assumption that $\lambda<(\log3)/2$.
The following result will be proved in the body of the paper as Corollary \ref{boingo cuckoo}. (It is a corollary to a more technical result, Theorem \ref{what, no soap?}.)
\begin{TheoremB}
Let $\lambda$ be a positive real number strictly less than $(\log3)/2$. Then for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$ we have
$$\vol M< \lambda\bigg(6+\frac{880}{\log3-2\lambda}\log{1\over\log3-2\lambda}\bigg).$$
\end{TheoremB}
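For concreteness, the right-hand side of the inequality in Theorem B is easy to evaluate numerically. The function below is a direct transcription of the displayed bound (all logarithms natural); the sample value $\lambda=0.3$ is chosen only for illustration:

```python
import math

LOG3 = math.log(3.0)

def theorem_b_volume_bound(lam):
    """Right-hand side of the volume bound in Theorem B:
    lam * (6 + (880 / (log 3 - 2 lam)) * log(1 / (log 3 - 2 lam))),
    defined for 0 < lam < (log 3)/2."""
    assert 0.0 < lam < LOG3 / 2.0
    gap = LOG3 - 2.0 * lam
    return lam * (6.0 + (880.0 / gap) * math.log(1.0 / gap))

# Sample evaluation at lam = 0.3:
print(theorem_b_volume_bound(0.3))
```

Note that the bound blows up as $\lambda$ approaches $(\log3)/2$, and becomes negative (hence vacuously strong) for sufficiently small $\lambda$, in line with the remark following the theorem.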
To avoid confusion it may be worth pointing out that the right hand side of the inequality in the conclusion of Theorem B is negative if, say, $\lambda<0.04$. Thus in this case the theorem asserts that $\mu(M)$ cannot be less than $\lambda$. However, this is not new information, as Meyerhoff \cite{meyerhoff} has shown that $\mu_+(3)>0.1$, and indeed his result is used in the proof of Theorem B.
The following two corollaries to Theorem B will be proved in the body of the paper as
Corollaries \ref{more cuckoo} and \ref{mostly moxie}.
\begin{simple corollary}\label{more cuckoo intro}
Let $\lambda$ be a positive real number strictly less than $(\log3)/2$. Then
for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$, the group $\pi_1(M)$ has a rank-$2$ subgroup of index at most
$$\frac{\lambda}{V_0}\bigg(6+\frac{880}{\log3-2\lambda}\log{1\over\log3-2\lambda}\bigg).$$
\end{simple corollary}
\begin{simple corollary}\label{mostly moxie intro}
Let $\lambda$ be a positive real number strictly less than $(\log3)/2$. Then for every orientable hyperbolic $3$-manifold $M$ with
$\mu(M)<\lambda$, we have
$$\rank\pi_1(M)\le2+\log_2\bigg(\frac{\lambda}{V_0}\bigg(6+\frac{880}{\log3-2\lambda}\log{1\over\log3-2\lambda}\bigg)\bigg).$$
\end{simple corollary}
In my forthcoming paper \cite{cubic}, Theorem \ref{what, no soap?} will be combined with arguments invoking many other results---the $\log3$ theorem, the algebra of
congruence subgroups, and Beukers and Schlickewei's explicit form of Siegel and Mahler's finiteness theorem for solutions to the unit equation in number fields---to prove the following result:
\begin{simple theorem} Let $K$ be any number field, and let $D$ denote its degree. The number of (isometry classes of) closed, non-arithmetic hyperbolic $3$-manifolds which are $\ZZ_6$-homology $3$-spheres, have trace field $K$, and have optimal Margulis number less than $0.183$ is at most
$141\times 2^{24(D+1)}$.
\end{simple theorem}
(That the number of such isometry classes is finite follows from the main result of \cite{finiteness}. It is the explicit bound which is the content of the theorem above.)
The proof of Theorem A occupies Section \ref{abstract section}. The method of proof is to reduce the result to the $\log3$ Theorem (which I mentioned above) using relatively formal arguments based on standard results about algebraic and geometric convergence. The basic strategy is similar to the one used in \cite{finiteness}.
The proof of Theorem B is rather easily reduced to the case in which $\pi_1(M)$ is a two-generator group. In this case, the proof involves two steps. The first, which is carried out in Section \ref{short section}, consists of showing that an upper bound $\lambda<(\log3)/2$ on $\mu(M)$ forces an explicit upper bound on the minimal length of a non-trivial relation in the generators of $\pi_1(M)$; this step, which is embodied in Proposition \ref{first one}, is a refinement of the elementary packing arguments that are used, for example, in \cite{sw} to give an elementary proof that $(\log3)/2$ is a Margulis number for any hyperbolic $3$-manifold $M$ such that every two-generator subgroup of $\pi_1(M)$ is free.
The second step involved in proving Theorem B in the case in which $\pi_1(M)$ is a two-generator group is to show that an upper bound on the minimal length of a non-trivial relation in the generators of $\pi_1(M)$ forces an explicit upper bound on the volume of $M$. This bound is given by Proposition \ref{short to bounded}, the proof of which is the goal of Section \ref{short-bounded section}. The proof of Proposition \ref{short to bounded} is a refinement, in the two-generator case, of the argument used by Cooper in \cite{cooper} to show that the volume of a closed hyperbolic $3$-manifold is bounded by $\pi$ times the length of a presentation of its fundamental group. Adapting Cooper's method to giving a volume bound in terms of the length of a single relation rather than the length of an entire presentation involves
some ingredients that did not appear in \cite{cooper}.
One of these is an isoperimetric inequality which was proved by Agol and Liu as Lemma 4.4 of their paper \cite{A-L}, where they applied it to prove a result that is somewhat analogous to, but distinct from, Proposition \ref{short to bounded}.
The newest ingredient needed for the proof of Proposition \ref{short to bounded} is a deep and technical topological result, Lemma \ref{oh far out}. One of the results needed to prove Lemma \ref{oh far out}, Proposition \ref{wow what a variety of characters}, is a new application of the characteristic submanifold theory that seems to be of particular independent interest.
In Section \ref{concrete section} I assemble the results of Sections \ref{short section} and \ref{short-bounded section} to prove Theorem \ref{what, no soap?} and its various corollaries (including Corollary \ref{boingo cuckoo} which is Theorem B above).
Sections \ref{gen prelim} and \ref{3 prelim} are preliminary sections, devoted to assembling more or less well-known results that are used in the later sections and for which convenient references are not easy to find.
I am grateful to Michael Siler, Dick Canary and Marc Culler for a series of valuable discussions of the material in this paper. Siler pointed out an error in an earlier version of the paper; he called my attention to Lemma 4.4 of Agol and Liu's paper \cite{A-L}; and most importantly he made a suggestion that led me to the realization that $\pi$ could be replaced by $\min(\pi,\lambda)$ in Proposition \ref{short to bounded}, which gives an important improvement in Theorem \ref{what, no soap?}. Canary has painstakingly continued to educate me about algebraic and geometric convergence; he corrected my na\"\i ve ideas about the algebraic convergence arguments involved in the proof of Theorem \ref{abs tract}, and helped me to the correct argument and suitable references. Culler patiently listened to all the details of the material in the paper before they were written down, and clarified the proof of Proposition \ref{uneeda}.
\section{General preliminaries}\label{gen prelim}
Throughout this paper, $\log x$ will denote the natural logarithm of a positive number $x$, and $\#(X)$ will denote the cardinality of a finite set $X$.
In statements and arguments involving fundamental groups, I will suppress base points
whenever it is possible to do so without ambiguity. If $X$ is a path-connected space, I will often implicitly assume that $X$ is equipped with an unnamed (and arbitrary) base point $\star_X$, and write $\pi_1(X)$ for $\pi_1(X,\star_X)$. If $f:X\to Y$ is a map between path-connected spaces then $f_\sharp:\pi_1(X)\to\pi_1(Y)$ will mean the homomorphism from $\pi_1(X,\star_X)$ to $\pi_1(Y)=\pi_1(Y,\star_Y)$ which is obtained by composing the standard induced homomorphism $f_\sharp:\pi_1(X,\star_X)\to\pi_1(Y,f(\star_X))$ with the isomorphism from $\pi_1(Y,f(\star_X))$ to $\pi_1(Y,\star_Y)$ defined by an unspecified path from $f(\star_X)$ to $\star_Y$. Thus $f_\sharp$ is well-defined up to post-composition with inner isomorphisms of $\pi_1(Y)$. Many assertions about $f_\sharp$, such as the assertion that it is injective or surjective, are invariant
under post-composition with inner isomorphisms, and will be made without reference to a connecting path. Likewise, the image of an element or subgroup of $\pi_1(X)$ under $f_\sharp$ is well-defined up to conjugacy.
A path connected subset $A$ of a path connected space $X$ will be termed {\it $\pi_1$-injective} if the inclusion homomorphism $\pi_1(A)\to\pi_1(X)$ is injective.
The following two easy results from group theory will be needed later in the paper.
\Lemma\label{o hula who}
Let $F$ be a free group on a generating set $S$, let $Z$ be a cyclic subgroup of $F$, and let $k$ be a positive integer. Then there are at most $2k+1$ elements of $Z$ that can be expressed as words of length at most $k$ in the generating set $S$.
\EndLemma
\Proof
Let $t$ be a generator of $Z$. If $t=1$ the assertion is trivial. If $t\ne1$, there exist a reduced word $V$ in the generating set $S$ and a cyclically reduced word $W$ in $S$ such that the word $V\ast W\ast\overline V$ is reduced and represents $t$; here $\ast$ denotes concatenation of words and $\overline V$ denotes the inverse of the word $V$. For any non-zero integer $n$ the word $V\ast(\star^n W)\ast\overline V$ is reduced and represents $t^n$; here $\star^nW$ denotes the $|n|$-fold concatenation of $W$ or of $\overline W$, according to the sign of $n$. In particular, the unique reduced word representing $t^n$ has length at least $|n|$. If $t^n$ can be expressed as a word of length at most $k$ then the reduced word representing $t^n$ has length at most $k$, and hence $|n|\le k$. The conclusion follows.
\EndProof
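The count in Lemma \ref{o hula who} can be checked by machine in the rank-$2$ case. The following sketch (illustrative only, not part of the argument; capital letters denote inverse generators) freely reduces words over the free group on $a,b$ and counts the powers of a fixed non-trivial element whose reduced length is at most $k$.

```python
def reduce_word(w):
    # freely reduce a word over {a, A, b, B}, where A = a^{-1}, B = b^{-1}
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def power(t, n):
    # reduced word representing t^n (inverse = reverse and swap case)
    w = t * n if n >= 0 else (t.swapcase()[::-1]) * (-n)
    return reduce_word(w)

def count_short_powers(t, k):
    # number of powers t^n of reduced length at most k; since the reduced
    # word for t^n has length at least |n| when t != 1, scanning the range
    # |n| <= k + len(t) already catches every qualifying power
    return sum(1 for n in range(-(k + len(t)), k + len(t) + 1)
               if len(power(t, n)) <= k)
```

For $t=ab$ and $k=2$ exactly three powers qualify ($1$, $t$ and $t^{-1}$), comfortably below the bound $2k+1=5$.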
\Proposition\label{you peeked}
Suppose that $\tGamma$ is a finite-index subgroup of a finitely generated group $\Gamma$. Then
$$\rank\Gamma\le \rank\tGamma+\log_2[\Gamma:\tGamma].$$
\EndProposition
\Proof
Set $r=\rank\tGamma$, and fix a generating set $\tS$ for $\tGamma$ with $|\tS|=r$. Let $S\supset\tS$ be a finite generating set for $\Gamma$, chosen to have minimal cardinality among all generating sets for $\Gamma$ that contain $\tS$. Let us denote the distinct elements of $S-\tS$ by $x_1,\ldots,x_k$. For $0\le j\le k$, let $\Gamma_j$ denote the subgroup of $\Gamma$ generated by $\tS\cup\{x_1,\ldots,x_j\}$ (so that in particular $\Gamma_0=\tGamma$ and $\Gamma_k=\Gamma$). It follows from the minimality of $S$ that $\Gamma_{j-1}$ is a proper subgroup of $\Gamma_j$ for $j=1,\ldots,k$, and therefore $[\Gamma_j:\Gamma_{j-1}]\ge2$. Hence
$$[\Gamma:\tGamma]=\prod_{j=1}^k
[\Gamma_j:\Gamma_{j-1}]\ge2^k.$$
Using this, we find
$$\rank\Gamma\le|S|=r+k\le r+ \log_2[\Gamma:\tGamma].$$
\EndProof
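For free groups, the Nielsen--Schreier formula gives the exact rank of a finite-index subgroup, which yields a quick consistency check of Proposition \ref{you peeked}; the sketch below (illustrative only) compares the exact rank with the bound.

```python
import math

def schreier_rank(r, i):
    # exact rank of an index-i subgroup of a free group of rank r
    # (Nielsen-Schreier formula: rank = 1 + i*(r - 1))
    return 1 + i * (r - 1)

def rank_upper_bound(sub_rank, index):
    # the bound of the Proposition: rank(Gamma) <= rank(subgroup) + log_2(index)
    return sub_rank + math.log2(index)
```

For example, an index-$3$ subgroup of $F_2$ has rank $4$, and indeed $2\le 4+\log_2 3$.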
The following elementary fact from hyperbolic geometry will also be needed.
\Proposition\label{uneeda}
Let $T$ be a triangle in a hyperbolic space, and let $L$ denote the length of the shortest side of $T$. Then
$$ \area T<\min(\pi,L).$$
\EndProposition
\Proof
We may assume that $T\subset\HH^2$. Let $l$ denote a side of $T$ having length $L$, and let $\overline\HH^2$ denote the union of $\HH^2$ with the circle at infinity. There is a triangle $T'$ in $\overline\HH^2$ which contains $T$, has $l$ as a side, and has an ideal vertex opposite $l$. It is enough to prove that $\area T'<\min(\pi,L)$. Let us identify $\HH^2$ with the upper half-plane model in such a way that $l$ is an arc in the upper unit semicircle and the other two sides of $T'$ are vertical rays. Let $(x_1,y_1)$ and $(x_2,y_2)$ be the endpoints of $l$ in Cartesian coordinates, where $-1<x_1<x_2<1$. We have
$$\area T'=\int_{x_1}^{x_2}\int_{\sqrt{1-x^2}}^\infty\frac1{y^2}\,dy\,dx=\arcsin x_2-\arcsin x_1,$$
which is the Euclidean length of the arc $l$. This shows that $\area T'<\pi$. On the other hand, the hyperbolic length $L$ of $l$ is the integral over the arc $l$ of the hyperbolic length element, which is given by $ds/y$ where $ds$ is the Euclidean length element; since $y<1$ at all but at most one point of the arc $l$, the hyperbolic length of $l$ strictly exceeds its Euclidean length, so that $\area T'<L$.
\EndProof
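The two computations in this proof are easy to verify numerically. The sketch below (illustrative only) evaluates the area of $T'$ by the arcsin formula and the hyperbolic length of $l$ by integrating $ds/y$ along the arc; in the symmetric case $x_2=-x_1=1/2$ the length works out to $\log3$, and the strict inequality $\area T'<\min(\pi,L)$ can be observed directly.

```python
import math

def area_and_side_length(x1, x2, n=200000):
    # T' has an ideal vertex at infinity and side l on the unit semicircle
    # joining (x1, sqrt(1-x1^2)) to (x2, sqrt(1-x2^2)), with -1 < x1 < x2 < 1.
    # Integrating 1/y^2 over T' gives area = arcsin(x2) - arcsin(x1).
    area = math.asin(x2) - math.asin(x1)
    # Hyperbolic length of l: parametrize by angle, so ds = dtheta and
    # y = sin(theta); integrate dtheta/sin(theta) by the midpoint rule.
    t1, t2 = math.acos(x2), math.acos(x1)  # acos is decreasing, so t1 < t2
    h = (t2 - t1) / n
    L = sum(h / math.sin(t1 + (i + 0.5) * h) for i in range(n))
    return area, L
```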
\section{Three-manifold preliminaries}\label{3 prelim}
\Number\label{cat-a-gory}
When no category is specified, it will be understood that ``manifolds'' and ``submanifolds'' are smooth. At various points in the paper I will need to mention PL or real-analytic manifolds, and in these cases I will be explicit about the category in question.
As is customary in $3$-manifold topology, I will frequently quote results about $3$-manifolds that are proved in the PL category, and apply them in the smooth category, most often without explicitly mentioning the transition. In most cases this will be justified by the following facts:
\begin{enumerate}
\item \label{sunny side up}If $M$ is a smooth $n$-manifold and $V$ is a (possibly empty) smooth, properly embedded submanifold, then $M$ has a smooth triangulation with respect to which $V$ is a polyhedral subset.
\item\label{over easy} Any two smooth triangulations of a given smooth manifold determine the same PL structure up to PL homeomorphism.
\item\label{coddled} If $n\le3$, and if $M$ and $M'$ are smooth $n$-manifolds which are PL homeomorphic with respect to the PL structures defined by smooth triangulations, then $M$ and $M'$ are diffeomorphic.
\end{enumerate}
Of these facts, (\ref{over easy}), and the case of (\ref{sunny side up}) where $V=\emptyset$, are proved in \cite{whitehead}, and (\ref{coddled}) is included in \cite[Theorem 6.3]{munkres}. I have not located a direct reference for the case of
(\ref{sunny side up}) in which $V\not=\emptyset$, but it is a very special case of the main result of \cite{goresky}.
\EndNumber
In Section \ref{short-bounded section} I will use a somewhat different result about the interaction between the PL and smooth (or rather real-analytic) categories:
\Proposition \label{real analytic}
Suppose that $M$ is a real-analytic manifold and that $X_1,\ldots,X_n$ are semi-analytic subsets of $M$. Then $M$ admits a smooth triangulation with respect to which $X_1,\ldots,X_n$ are polyhedral subsets.
\EndProposition
\Proof
It is shown in \cite{grauert} that $M$ is real-analytically isomorphic to a real-analytic submanifold $M'$ of $\RR^N$ for some $N$. If we identify $M$ with $M'$, the sets
$M,X_1,\ldots,X_n$ become semi-analytic subsets of $\RR^N$. The main theorem
of \cite{graver} then asserts that $\RR^N$ admits a smooth triangulation with respect to which $M,X_1,\ldots,X_n$ are polyhedral subsets. The conclusion follows.
\EndProof
\Corollary\label{gruosso}
Let $M$ be a hyperbolic $n$-manifold, let $p:\HH^n\to M$ be a locally isometric covering map, and let $\delta_1,\ldots,\delta_k$ be closed hyperbolic simplices in $\HH^n$. Then
$M$ admits a smooth triangulation with respect to which $p(\delta_1),\ldots,p(\delta_k)$ are polyhedral subsets.
\EndCorollary
\Proof
Each $\delta_i$ may be subdivided into finitely many closed hyperbolic simplices which are embedded in $M$ under $p$. Hence we may assume without loss of generality that the $\delta_i$ are themselves embedded in $M$ under $p$. In this case the sets $p(\delta_1),\ldots,p(\delta_k)$ are clearly semi-analytic in the real analytic structure defined by the hyperbolic structure of $M$, and so the result follows from Proposition \ref{real analytic}.
\EndProof
I will say that an orientable $3$-manifold $M$ is {\it irreducible} if $M$ is connected and every (smooth) $2$-sphere in $M$ bounds a (smooth) ball in $M$.
\Lemma\label{plug}Let $X_0$ be a compact, connected, $3$-dimensional submanifold of an irreducible, orientable $3$-manifold $M$. Then there is a compact, irreducible, $3$-dimensional submanifold $X_1$ of $M$ such that $X_1\supset X_0$, and such that the inclusion homomorphism $\pi_1(X_0)\to\pi_1(X_1)$ is surjective. \EndLemma
\Proof Let $\calx$ denote the set of all compact $3$-dimensional submanifolds $X$ of $M$ such that $X\supset X_0$ and the inclusion homomorphism $\pi_1(X_0)\to\pi_1(X)$ is surjective. We have $X_0\in\calx$. Let $X_1\in\calx$ be chosen to have the smallest number of boundary components among all submanifolds in $\calx$. It suffices to show that $X_1$ is irreducible. If $S\subset\inter X_1$ is a $2$-sphere then $S$ bounds a ball $B\subset M$, since $M$ is irreducible. If $B\subset X_1$, there is nothing to prove. If $B\not\subset X_1$, then $X_1\cup B$ is a compact submanifold of $M$ containing $X_1$ and having fewer boundary components than $X_1$. It is clear that the inclusion homomorphism $\pi_1(X_1)\to\pi_1(X_1\cup B)$ is surjective, and hence that $X_1\cup B\in\calx$, a contradiction to minimality. \EndProof
\Definitions\label{simple def}
Let $M$ be an irreducible, orientable $3$-manifold.
Following the convention of \cite{hempel}, I will define an {\it incompressible surface} in $M$ to be a compact, properly embedded $2$-manifold in $M$ which is $\pi_1$-injective and is not a sphere or a boundary-parallel disk.
A closed (smooth) $2$-manifold $S$ in $\inter M$ is said to be {\it boundary-parallel} if $S$ is the frontier of a (smooth) submanifold $H$ of $M$ such that the pair $(H,S)$ is diffeomorphic to $(S\times[0,1],S\times\{1\})$. (The definition in the case of a properly embedded surface with non-empty boundary would be slightly trickier in the smooth category, but will not be needed in this paper.)
I will define a {\it Haken manifold} to be a compact, irreducible, orientable $3$-manifold which contains an incompressible surface. Note that according to this definition a $3$-ball is not a Haken manifold.
By an {\it essential disk} in the irreducible, orientable $3$-manifold $M$ I will mean a properly embedded disk whose boundary does not bound a disk in $\partial M$. I will say that $M$ is {\it boundary-irreducible} if $M$ contains no essential disk. It follows from the Loop Theorem \cite[p. 39, 4.2]{hempel} that $M$ is boundary-irreducible if and only if every component $T$ of $\partial M$ is $\pi_1$-injective in $M$. \EndDefinitions
\Lemma\label{it's parallel}
Let $N$ be an irreducible, orientable $3$-manifold, let $T\subset\inter N$ be a closed incompressible surface, and suppose that the inclusion map $T\to N$ is homotopic to a map of $T$ into $\partial N$. Then $T$ is boundary-parallel in $N$.
\EndLemma
\Proof
In the case where $N$ is compact this is included in \cite[Lemma 5.3]{waldhausen}. To prove it in the general case, note that since the inclusion map $i:T\to N$ is homotopic to some map $f$ of $T$ into $\partial N$, there is a compact subset $X_0$ of $N$, containing $T$ and $f(T)$, such that $i$ is homotopic to $f$ in $X_0$. After passing to a regular neighborhood we may assume that $X_0$ is a submanifold of $N$. According to Lemma \ref{plug}, there is a compact
irreducible submanifold $X_1$ of $N$ such that $X_1\supset X_0$. Applying the compact case of the lemma, with $X_1$ in place of $N$, we deduce that $T$ is boundary-parallel in $X_1$, and hence in $N$.
\EndProof
\Definition Let $M$ be an orientable hyperbolic $3$-manifold, and let $p:\HH^3\to M$ be a locally isometric covering map.
A {\it \cn} in $M$ is a subset $C$ of $M$ such that each component of $p^{-1}(C)\subset\HH^3$ is a horoball, and the image of the inclusion homomorphism $\pi_1(C)\to\pi_1(M)$ is a free abelian group of rank $2$.
\EndDefinition
\Proposition\label{hypersimple} Let $M$ be a hyperbolic $3$-manifold. Then
\begin{enumerate}
\item\label{it's irreducible}
$M$ is aspherical and irreducible.
\item\label{yes injective}
Every incompressible torus $T\subset M$ is the boundary of a
submanifold $C$ of $M$ which is closed as a subset of $M$, is diffeomorphic to $T^2\times[0,\infty)$, and has finite volume.
\item\label{not injective}
For every torus $T\subset M$ which is {\it not} incompressible, either
(a) $T$ is contained in a $3$-ball in $M$, or (b) $T$ is the boundary of a solid torus in $M$.
\end{enumerate}
\EndProposition
\Proof
Write $M=\HH^3/\Gamma$ where $\Gamma\subset\isomplus( \HH^3)$ is discrete and torsion-free. Let $p:\HH^3\to M$ denote the quotient map.
Since the universal covering $\HH^3$ of $M$ is contractible, $M$ is aspherical.
To prove that $M$ is irreducible, suppose that $S\subset M$ is a $2$-sphere. Then $S$ lifts to a $2$-sphere $\tS\subset\HH^3$. Since $\HH^3$ is diffeomorphic to $\RR^3$, it is irreducible by \cite[Theorem 1]{moise}, and so $\tS$ bounds a ball $\tB\subset\HH^3$. For any $\gamma\in\Gamma-\{1\}$, we have $\tS\cap\gamma\cdot\tS=\emptyset$. By the Brouwer fixed point theorem we cannot have $\tB\subset\gamma\cdot\tB$ or $\tB\supset\gamma\cdot\tB$, and since $\HH^3$ is non-compact we cannot have $\tB\cup\gamma\cdot\tB=\HH^3$. Hence
$\tB\cap\gamma\cdot\tB=\emptyset$ for every $\gamma\in\Gamma-\{1\}$. It follows that $B:=p(\tB)$ is a $3$-ball with boundary $S$, and the proof of (\ref{it's irreducible}) is complete.
To prove (\ref{yes injective}), consider a torus $T\subset M$ which is incompressible.
The
image of the inclusion homomorphism $\pi_1(T)\to\pi_1(M)$
is a rank-$2$ free abelian subgroup of $\pi_1(M)$ which is defined up to conjugacy, and corresponds to a rank-$2$ free abelian subgroup $X$ of $\Gamma$ which is also defined up to conjugacy. Since $\Gamma$ is discrete, $X$ must be parabolic. It is then a standard consequence of Shimizu's lemma (see for example \cite[Theorem 2.21]{series}) that there is a horoball $H\subset\HH^3$, precisely invariant under $\Gamma$, whose stabilizer $\Gamma_H$ contains $X$. In particular $\Gamma_H$ is non-cyclic, and since it is parabolic and torsion-free it must be a rank-$2$ free abelian group. Hence $C_0:=p(H)$ is a \cn.
After possibly replacing $C_0$ by a smaller \cn, we may assume that $C_0\cap T=\emptyset$. Since $X\le\Gamma_H$, and since $M$ is aspherical by (\ref{it's irreducible}), the inclusion map of $T$ into $M$ is homotopic in $M$ to a map whose image is contained in $C_0$. This inclusion map is therefore homotopic in $\overline{M-C_0}$ to a map whose image is contained in $\partial C_0$. It now follows from Lemma \ref{it's parallel}, applied with $N=\overline{M-C_0}$, that $T$ is boundary-parallel in $\overline{M-C_0}$; that is, there is a submanifold $P$ of $M$, diffeomorphic to $T^2\times[0,1]$, such that $\partial P=T\cup\partial C_0$ and $P\cap C_0=\partial C_0$. Since $C_0$ is diffeomorphic to $T^2\times[0,\infty)$, the submanifold $C:=C_0\cup P$, which is closed as a subset of $M$ and has boundary $T$, is also diffeomorphic to $T^2\times[0,\infty)$. Furthermore, since $C_0$ has finite volume and $P$ is compact, $C$ also has finite volume.
To prove (\ref{not injective}), consider a torus $T\subset M$ which is not incompressible.
In this case, by \cite[Lemma 6.1]{hempel}, there is a disk $D\subset\inter M$ such that $D\cap T=\partial D$, and $\partial D$ is a non-trivial, and hence non-separating, curve on $T$. Let $E$ be a $3$-ball containing $D$, such that $A:=E\cap T$ is an annular neighborhood of $\partial D$ in $T$, and $A\subset\partial E$. Then $S:=(T\cup\partial E)-\inter A$ is a $2$-sphere which must bound a $3$-ball $B\subset M$, since $M$ is irreducible by (\ref{it's irreducible}). We must have either $E\subset B$ or $E\cap B=\closure{(\partial E)-A}$. In the first case we have $T\subset B$, and (a) holds. In the second case, $J:=E\cup B$ is a solid torus since $M$ is orientable, and $\partial J=T$, so that (b) holds.
\EndProof
The following well-known consequence of the sphere theorem is included, for example, in \cite[Theorem 8.2]{epstein}:
\Proposition\label{sphere consequence}An irreducible $3$-manifold with infinite fundamental group is aspherical.\NoProof\EndProposition
The following two facts about $3$-manifolds are well-known, but I have supplied proofs for completeness.
\Proposition\label{solid-torus}Let $N$ be a compact, irreducible, orientable
$3$-manifold such that every component of $\partial N$
is a torus. If $N$ is boundary-reducible then it is a solid torus.
\EndProposition
\Proof I will prove the corresponding statement in the PL category (see \ref{cat-a-gory}).
Let $N$ be a compact, irreducible, orientable PL
$3$-manifold such that every component of $\partial N$
is a torus. If $N$ is boundary-reducible, it contains an
essential disk $D$. By hypothesis the component of $\partial N$
containing $\partial D$ is a torus $T$. If $E$ denotes a regular
neighborhood of $D$ in $N$, then the component of
$\partial(\closure{N-E})$ that meets $T$ is a $2$-sphere. By
irreducibility it follows that $\closure{N-E}$ is a ball, so that
$N$ is a solid torus.
\EndProof
\Proposition\label{core}
Suppose that $M$ is an irreducible, orientable $3$-manifold whose fundamental group is finitely generated. Then there is a compact, irreducible submanifold $N$ of $M$ such that the inclusion homomorphism $\pi_1(N)\to\pi_1(M)$ is an isomorphism.
\EndProposition
\Proof
According to \cite{core}, there is a compact submanifold $N_0$ of $M$
such that the inclusion homomorphism $\pi_1(N_0)\to\pi_1(M)$ is an
isomorphism. By Lemma \ref{plug}, there is a compact
irreducible submanifold $N$ of $M$ such that $N\supset N_0$, and the
inclusion homomorphism $\pi_1(N_0)\to\pi_1(N)$ is surjective. It
follows that the inclusion homomorphism $\pi_1(N)\to\pi_1(M)$ is an
isomorphism.
\EndProof
I will make use of the characteristic submanifold theory \cite{js}, \cite{johannson}. The information that I will need is summarized in the following statement:
\Proposition\label{absolutely}
Let $M$ be any Haken manifold. Then there is a Seifert-fibered manifold $\Sigma\subset\inter M$, such that each component of $\partial\Sigma$ is incompressible in $M$, and having the following property: if $f:T^2\to M$ is any map such that $f_\sharp:\pi_1(T^2)\to\pi_1(M)$ is injective, then $f$ is homotopic to a map $g:T^2\to\inter M$ such that $g(T^2)\subset\Sigma$.
\EndProposition
\Proof
According to the statement of the Characteristic Pair Theorem on page 138 of \cite{js}, and the discussion of the case $T=\emptyset$ following that statement, there is a Seifert-fibered manifold $\Sigma\subset\inter M$, such that each component of $\partial\Sigma$ is incompressible in $M$, and having the following property: if $\cals$ is any Seifert fibered space, and $F:\cals\to M$ is any map which is nondegenerate (in the sense defined on p. 55 of \cite{js}, taking $\calf=\calt=\emptyset$), then $F$ is homotopic to a map whose image is contained in $\Sigma$. Now if $f:T^2\to M$ is any map such that $f_\sharp:\pi_1(T^2)\to\pi_1(M)$ is injective, and if we let $q:T^2\times[0,1]\to T^2$ denote the projection to the first factor, it follows immediately from the definition that $f\circ q$ is a nondegenerate map of the Seifert fibered manifold $T^2 \times[0,1]$ into $M$. Hence $f\circ q$ is homotopic to a map of $T^2\times[0,1]$ into $M$ whose image is contained in $\Sigma$, and so $f$ is homotopic to a map $g:T^2\to\inter M$ such that $g(T^2)\subset\Sigma$.
\EndProof
(It may be shown that the submanifold $\Sigma$ given by Proposition \ref{absolutely} is unique up to ambient isotopy in $M$, but I will not need this fact. Note that although $\Sigma\subset\inter M$,
every torus component $T$ of $\partial M$ which is $\pi_1$-injective in $M$ is ``parallel'' to a component of $\partial\Sigma$. One may think of $\Sigma$ as an ``absolute'' characteristic submanifold of $M$ which ``carries essential singular tori,'' as distinguished from the ``relative'' characteristic submanifold which is defined only when $M$ is boundary-irreducible, and carries both essential singular tori and essential singular annuli.)
The next two results, Propositions \ref{jaco} and \ref{life is never simple}, are closely related to Theorem VI.4.1 of \cite{js}. They are both essentially topological results, although I have found it more convenient to state Proposition \ref{life is never simple} in the setting of hyperbolic $3$-manifolds.
\Proposition\label{jaco}
Let $N$ be a compact, irreducible, orientable $3$-manifold such that $\pi_1(N)$ has rank $2$ and is not free. Then each component of $\partial N$ is a torus.
\EndProposition
\Proof
If $N$ is closed, there is nothing to prove. If $\partial N\ne\emptyset$ then $N$ has the homotopy type of a connected finite CW complex $K$ of dimension at most $2$. We may take $K$ to have only one $0$-cell. Let $m$ and $n$ denote, respectively, the numbers of $1$-cells and $2$-cells of $K$. Then $\pi_1(K)$ has a presentation with $m$ generators and $n$ relations. By definition, the {\it deficiency} of this presentation is $m-n$. It is shown in \cite{magnus} that if $k$ is a positive integer, a finitely presented group that has rank $k$ and has a presentation of deficiency at least $k$ must be free. Since
$\pi_1(K)\cong\pi_1(N)$ has rank $2$ and is not free, the deficiency $m-n$ must be at most one. This gives
$$\frac12\chi(\partial N)=\chi(N)=\chi(K)=1-m+n\ge0.$$
Note that if some component of $\partial N$ were a sphere then by irreducibility $N$ would be a ball, which is impossible since $\pi_1(N)$ has rank $2$.
Thus $\partial N$ is a closed, orientable $2$-manifold with no sphere components and $\chi(\partial N)\ge0$. Hence every component of $\partial N$ is a torus.
\EndProof
\Proposition\label{life is never simple}
Let $M$ be a hyperbolic $3$-manifold of infinite volume. Then every two-generator non-abelian subgroup of $\pi_1(M)$ is free.
\EndProposition
\Proof
Let $X\le \pi_1(M)$ be a two-generator non-abelian subgroup. Then $X\cong\pi_1(\tM)$ for some covering space $\tM$ of $M$.
Since $M$ is irreducible by Assertion (\ref{it's irreducible}) of Proposition \ref{hypersimple}, it follows from \cite[p. 647, Theorem 3]{msy} that $\tM$ is irreducible.
By Proposition \ref{core}, there is a compact, irreducible submanifold $N$ of $\tM$ such that the inclusion homomorphism $\pi_1(N)\to\pi_1(\tM)$ is an isomorphism.
If $X\cong\pi_1(N)$ is not free, then it follows from Proposition \ref{jaco} that every component of $\partial N$ is a torus.
Since $\pi_1(N)\cong X$ is non-abelian by hypothesis, $N$ is not a solid torus; hence by Proposition \ref{solid-torus}, $N$ is boundary-irreducible. If $T$ is any component of $\partial N$, it follows that $T$ is $\pi_1$-injective in $N$, and therefore in $\tM$ as well. Hence by Assertion (\ref{yes injective}) of Proposition \ref{hypersimple}, $T$ is the boundary of a submanifold $C_T$ of $\tM$ which is closed as a subset of $\tM$, is diffeomorphic to $T^2\times[0,\infty)$, and has finite volume. Since $\pi_1(N)$ is non-abelian, and since the inclusion homomorphism $\pi_1(N)\to\pi_1(\tM)$ is in particular injective, we cannot have $C_T\supset N$ for any $T$. Hence we must have $C_T\cap N=T$ for each $T$, so that $\tM=N\cup\bigcup_TC_T$. But this implies that $\tM$ has finite volume; since $\tM$ covers the infinite-volume manifold $M$, this is a contradiction.
\EndProof
The following result has direct relevance to estimating Margulis numbers, and its corollaries were mentioned in the introduction.
\begin{proposition}\label{chained to the altar} Let $x$ and $y$ be non-commuting elements of
$\isomplus(\HH^3)$ such that $\langle x,y\rangle$ is discrete and torsion-free and has infinite covolume. Then for every $P\in\HH^3$ we have
$$\max(d(P,x\cdot P),d(P,y\cdot P))\ge\log3.$$
\end{proposition}
\Proof
Set $\Gamma=\langle x,y\rangle$. Proposition \ref{life is never simple}, applied to the hyperbolic $3$-manifold $M=\HH^3/\Gamma$, shows that $\Gamma$ is free. The conclusion now follows from the case $k=2$ of \cite[Theorem 4.1]{surgery}.
\EndProof
\Corollary\label{born free}
Let $M$ be an orientable hyperbolic $3$-manifold such that every subgroup of rank at most $2$ in $\pi_1(M)$ has infinite index. Then $\mu(M)\ge\log3=1.09\ldots$.
\EndCorollary
\Proof
Write $M=\HH^3/\Gamma$, where $\Gamma\le\isomplus(\HH^3)$ is discrete and torsion-free. Let $x$ and $y$ be non-commuting elements of $\Gamma$. The hypothesis implies that $\langle x,y\rangle$ has infinite index in $\Gamma$, and hence has infinite covolume. By Proposition \ref{chained to the altar}, it follows that
$\max(d(P,x\cdot P),d(P,y\cdot P))\ge\log3$
for every $P\in\HH^3$. Hence $\log3$ is a Margulis number for $M$.
\EndProof
\Corollary\label{born to facebook}
Let $M$ be an orientable hyperbolic $3$-manifold such that either $H_1(M;\QQ)$ has rank at least $3$, or
$M$ is closed and $H_1(M;\ZZ_p)$ has rank at least $4$ for some prime $p$. Then $\mu(M)\ge\log3$.
\EndCorollary
\Proof
If $H_1(M;\QQ)$ has rank at least $3$, it is clear that every subgroup of rank at most $2$ in $\pi_1(M)$ has infinite index. If
$M$ is closed and $H_1(M;\ZZ_p)$ has rank at least $4$ for some prime $p$, then by \cite[Proposition 1.1]{sw}, it is again true that every subgroup of rank at most $2$ in $\pi_1(M)$ has infinite index. Hence the result follows from Corollary \ref{born free}.
\EndProof
\section{An abstract bound for $\vol M$ when $\mu(M)<\log3$, and its consequences}
\label{abstract section}
Let $(\rho_n)_{n\ge1}$ be a sequence of representations of a group $X$ in $\isomplus(\HH^3)$. Recall that the sequence $(\rho_n)$ is said to {\it converge algebraically} to a representation $\rho_\infty$ of $X$ in $\isomplus(\HH^3)$ if we have $\rho_n(\gamma)\to\rho_\infty(\gamma)$ for every $\gamma\in X$.
Recall that a subgroup of $\isomplus(\HH^3)$ is said to be {\it elementary} if it has an abelian subgroup of finite index. According to \cite[Proposition 2.1]{finiteness}, every torsion-free, elementary, discrete subgroup of $\isomplus(\HH^3)$ is abelian.
I will give an explicit proof of the following fact, which is implicit in \cite{jor}.
\Lemma\label{last one i hope} Let $(\rho_n)_{n\ge1}$ be a sequence of representations of a finitely generated group $\Phi$ in $\isomplus(\HH^3)$ which converges algebraically to a representation $\rho_\infty$. Suppose that $\rho_n(\Phi)$ is discrete, non-elementary and torsion-free for each $n\in\NN$. Then there is a neighborhood $W$ of the identity in $\isomplus(\HH^3)$ such that $\rho_n(\Phi)\cap W=\{1\} $ for every $n\in\NN$.
\EndLemma
\Proof
If the conclusion is false, then after passing to a subsequence we may assume that there is a sequence of elements $(x_n)_{n\ge1}$ of $\Phi$ such that $\rho_n(x_n)\ne1$ for $n=1,2,\dots$ but $\rho_n(x_n)\to1$ as $n\to\infty$. Since $\rho_n(\Phi)$ is a non-elementary, torsion-free, discrete subgroup of $\isomplus(\HH^3)$, it is non-abelian by \cite[Proposition 2.1]{finiteness}, and it has trivial center. In particular $\rho_n(x_n)$, being non-trivial, is non-central in $\rho_n(\Phi)$ for each $n\in\NN$. Hence if $S$ is a finite generating set for $\Phi$, then for each $n\in\NN$ there is an element $s_n\in S$ such that $\rho_n(x_n)$ and $\rho_n(s_n)$ do not commute. Since $S$ is finite, we may assume after passing to a subsequence that the $s_n$ are all the same element of $S$, say $s$.
For each $n\in\NN$, the group $\Gamma_n:=\langle \rho_n(x_n),\rho_n(s)\rangle$ is contained in $\rho_n(\Phi)$, and is therefore discrete and torsion-free. Since $\rho_n(x_n)$ and $\rho_n(s)$ do not commute, $\Gamma_n$ is non-abelian, and is therefore non-elementary by \cite[Proposition 2.1]{finiteness}.
If $X_n$ and $Y_n$ are elements of $\zzle(\CC)$ representing
$\rho_n(x_n)$ and $\rho_n(s)$ respectively, it then follows from
Jorgensen's inequality \cite[Lemma 1]{jor} that
\Equation\label{troels knows}
|(\trace X_n)^2-4|+|\trace(X_nY_nX_n^{-1}Y_n^{-1})-2|\ge1
\EndEquation
for each $n\in\NN$. As $n\to\infty$ we have
$\rho_n(x_n)\to1$, while $\rho_n(s)\to\rho_\infty(s)$, where $\rho_\infty$ denotes the algebraic limit of the sequence $(\rho_n)$. After passing to a subsequence we may therefore assume that $X_n\to\pm I$ and that the sequence $(Y_n)$ has a limit. Hence $X_nY_nX_n^{-1}Y_n^{-1} \to I$. It follows that the left hand side of (\ref{troels knows}) converges to $0$ as $n\to\infty$ (since $\trace X_n\to\pm2$ and $\trace(X_nY_nX_n^{-1}Y_n^{-1})\to\trace I=2$), a contradiction.
\EndProof
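The way inequality (\ref{troels knows}) is used here can be illustrated numerically: for determinant-$1$ matrices, the left-hand side collapses to $0$ as $X$ approaches $\pm I$ while $Y$ stays fixed, which is exactly the contradiction exploited above. The sketch below (illustrative only, not part of the argument) computes the left-hand side directly.

```python
def mat_mul(A, B):
    # product of 2x2 matrices given as nested lists
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_inv(A):
    # inverse of a 2x2 matrix of determinant 1
    return [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]

def trace(A):
    return A[0][0] + A[1][1]

def jorgensen_lhs(X, Y):
    # |tr(X)^2 - 4| + |tr(X Y X^{-1} Y^{-1}) - 2|, the left-hand side of
    # Jorgensen's inequality; a value < 1 rules out <X, Y> being both
    # discrete and non-elementary
    comm = mat_mul(mat_mul(X, Y), mat_mul(mat_inv(X), mat_inv(Y)))
    return abs(trace(X) ** 2 - 4) + abs(trace(comm) - 2)
```

For the parabolic $X=\left(\begin{smallmatrix}1&t\\0&1\end{smallmatrix}\right)$ and $Y=\left(\begin{smallmatrix}2&1\\1&1\end{smallmatrix}\right)$ the left-hand side works out to exactly $t^2$, which tends to $0$ with $t$.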
The following result was stated in the introduction as Theorem A.
\begin{theorem}\label{abs tract}
Let $\lambda$ be a positive real number strictly less than $\log3$. Then there is a constant $V_\lambda$ such that for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$ we have $\vol M\le V_\lambda$ (and in particular $\vol M<\infty$).
\end{theorem}
\Proof
It follows immediately from Proposition \ref{chained to the altar} that if $M$ is any orientable hyperbolic $3$-manifold of infinite volume then $\log3$ is a Margulis number for $M$, so that
$\mu(M)\ge\log3>\lambda$.
Hence it suffices to show that there is a constant $V_\lambda$ such that for every orientable hyperbolic $3$-manifold $M$ with $\infty>\vol M>V_\lambda$ we have $\mu(M)\ge\lambda$.
We reason by contradiction. Assume there is a sequence $(M_n)_{n\ge1}$ of
orientable finite-volume hyperbolic $3$-manifolds such that $\vol M_n\to\infty$ and $\mu(M_n)<\lambda$, i.e. no $M_n$ admits $\lambda$ as a Margulis number.
For
each $n$ write $M_n=\HH^3/\Gamma_n$ for some torsion-free
discrete subgroup $\Gamma_n$ of $\isomplus(\HH^3)$.
Then, by definition, for each $n$ there exist non-commuting elements
$x_n,y_n\in\Gamma_n$ and a point $P_n\in\HH^3$ such that
$$\max(d(P_n,x_n\cdot P_n),d(P_n,y_n\cdot P_n))<\lambda.$$
After replacing each $\Gamma_n$ by a suitable conjugate of itself in
$\isomplus(\HH^3)$, we may assume that the $P_n$ are all the same point of $\HH^3$, which I will denote by $P$. Thus for each $n$ we have
\begin{equation}\label{maximillian}\max(d(P,x_n\cdot P),d(P,y_n\cdot P))<\lambda.\end{equation}
For each $n$, set $\tGamma_n:=\langle x_n,y_n\rangle$ and $\tM_n:=\HH^3/\tGamma_n$. Note that $\tGamma_n$ is discrete and torsion-free since $\Gamma_n$ is, and that $\tGamma_n$ is non-abelian---and hence non-elementary by \cite[Proposition 2.1]{finiteness}---since $x_n$ and $y_n$ do not commute.
Since $\lambda<\log3$, it follows from (\ref{maximillian}) and Proposition \ref{chained to the altar} that $\vol\tM_n<\infty$. On the other hand, $\tM_n$ covers $M_n$, and hence $\vol \tM_n\ge\vol M_n$. In particular, $\vol\tM_n\to\infty$.
It follows from (\ref{maximillian}) that the $x_n$ and $y_n$ lie in a compact subset of $\isomplus(\HH^3)$. Hence, after passing to a subsequence, we may assume that the sequences $(x_n)$ and $(y_n)$ converge in $\isomplus(\HH^3)$ to limits $x_\infty$ and $y_\infty$. Again by (\ref{maximillian}), we have
\begin{equation}\label{welsh}
\max(d(P,x_\infty\cdot P),d(P,y_\infty\cdot P))\le\lambda.
\end{equation}
For $1\le n\le\infty$ we define a representation $\rho_n$ of the rank-$2$ free group $F_2=\langle\xi,\eta\rangle$ by $\rho_n(\xi)=x_n$, $\rho_n(\eta)=y_n$. Thus $\rho_n(F_2)=\tGamma_n$ for each $n$. Since $\xi$ and $\eta$ generate $F_2$, and since $\rho_n(\xi)\to\rho_\infty(\xi)$ and $\rho_n(\eta)\to\rho_\infty(\eta)$ as $n\to\infty$, we have $\rho_n(\gamma)\to\rho_\infty(\gamma)$ for every $\gamma\in F_2$. By definition this means that the sequence $(\rho_n)$ converges algebraically to $\rho_\infty$.
Let $D$ denote the set of representations of $F_2$ in $\isomplus(\HH^3)$ whose images are discrete, torsion-free, and non-elementary.
According to \cite[Theorem 2.4]{finiteness} (a theorem essentially due to T. Jorgensen and P. Klein \cite{jk}), the limit of any algebraically convergent sequence of representations in $D$ is again in $D$. Hence $\rho_\infty\in D$. Thus $\tGamma_\infty:=\rho_\infty(F_2)=\langle x_\infty,y_\infty\rangle$ is a discrete group.
According to \cite[Proposition 3.8]{jm}, since $(\rho_n)$ converges algebraically, the sequence of discrete groups $(\tGamma_n) $ has a geometrically convergent subsequence (in the sense defined in \cite{jm}). Hence without loss of generality we may assume that $(\tGamma_n) $ converges geometrically to some discrete group $\h\Gamma_\infty$. It then follows, again from \cite[Proposition 3.8]{jm}, that
$\tGamma_\infty\le \h\Gamma_\infty$.
According to Lemma \ref{last one i hope}, there is a neighborhood $W$ of the identity in $\isomplus(\HH^3)$ such that $\tGamma_n\cap W=\{1\} $ for every $n\in\NN$. Let $E$ denote the set of all torsion-free subgroups $\Delta$ of $\isomplus(\HH^3)$ such that $\Delta\cap W=\{1\} $. (In particular each group in $E$ is discrete.) According to \cite[Theorem 1.3.1.4]{ceg}, $E$ is compact in the topology of geometric convergence. Since $\tGamma_n\in E$ for each $n\in\NN$, we have $\h\Gamma_\infty\in E$. In particular $\h\Gamma_\infty$ is torsion-free. We let $\h M_\infty$ denote the orientable hyperbolic $3$-manifold $\HH^3/\h\Gamma_\infty$.
Since $(\tGamma_n) $ converges geometrically to $\h\Gamma_\infty$, the sequence of orientable hyperbolic $3$-manifolds $(\tM_n)$ converges geometrically to $\h M_\infty$ in the sense of \cite[Chapter E]{bp}. If $\vol \h M_\infty$ were finite, it would then follow from \cite[Proposition E.2.5]{bp} that the sequence $(\vol\tM_n)$ had the finite limit $\vol \h M_\infty$, which contradicts
$\vol\tM_n\to\infty$. Thus $\vol\h M_\infty=\infty$. Since $\tGamma_\infty=\rho_\infty(F_2)$ is a subgroup of $\h\Gamma_\infty$, the manifold $\tM_\infty:=\HH^3/\tGamma_\infty$ covers $\h M_\infty$, and hence $\tM_\infty$ is a hyperbolic $3$-manifold of infinite volume. It therefore follows from Proposition \ref{chained to the altar} that
$$\max(d(P,x_\infty\cdot P),d(P,y_\infty\cdot P))\ge\log3.$$
But this contradicts (\ref{welsh}).
\EndProof
\bigskip
The following two corollaries of Theorem \ref{abs tract} were also pointed out in the introduction.
\Corollary\label{synecdoche}
Let $\lambda$ be a positive real number strictly less than $\log3$. Then there is a natural number $d_\lambda$ such that for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$, the group $\pi_1(M)$ has a rank-$2$ subgroup of index at most $d_\lambda$.
\end{corollary}
\Proof
Let $v$ denote the infimum of the volumes of all
hyperbolic $3$-manifolds; we have $v>0$, for example by
\cite[Theorem 1]{meyerhoff}. Let $V_\lambda$ be a positive real number having the property stated in Theorem \ref{abs tract}, and set
$$d_\lambda=\lfloor \frac{V_\lambda}v\rfloor.$$
Suppose that $M$ is an orientable hyperbolic $3$-manifold
such that $\mu(M)<\lambda$, i.e. such that $\lambda$ is not a Margulis number for $M$. Write $M=\HH^3/\Gamma$, where $\Gamma\le\isomplus(\HH^3)$ is discrete and torsion-free. Then by definition there exist a point $P\in\HH^3$ and non-commuting elements $x,y\in\Gamma$ such that
\Equation\label{Yes, AGAIN!}
\max(d(P,x\cdot P),d(P,y\cdot P))<\lambda.
\EndEquation
Set $\tGamma=\langle x,y\rangle\le\Gamma$ and $\tM=\HH^3/\tGamma$. Since $x$ and $y$ are non-commuting elements of $\tGamma$, it follows from (\ref{Yes, AGAIN!}) that $\lambda$ is not a Margulis number for $\tM$, i.e. that $\mu(\tM)<\lambda$. Hence Theorem \ref{abs tract} gives $\vol \tM\le V_\lambda<\infty$, and since $\tM$ covers $M$ we have $\vol M\le\vol\tM<\infty$. Since $\vol M\ge v$, we find that
$$[\Gamma:\tGamma]=\frac{\vol\tM}{\vol M}\le
\frac{V_\lambda}v.$$
It follows that $[\Gamma:\tGamma]\le d_\lambda$, so that $\Gamma\cong\pi_1(M)$ has a rank-$2$ subgroup of index at most $d_\lambda$.
\EndProof
\Corollary\label{poor thing}
Let $\lambda$ be a positive real number strictly less than $\log3$. Then there is a
natural number $k_\lambda$ such that for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$, the group $\pi_1(M)$ has rank at most $k_\lambda$.
\end{corollary}
\Proof
Let $d_\lambda$ be a natural number having the property stated in Corollary \ref{synecdoche}. Set $k_\lambda=2+\lfloor\log_2d_\lambda\rfloor$. If $M$ is an orientable hyperbolic $3$-manifold with $\mu(M)<\lambda$, then the group $\Gamma=\pi_1(M)$ has a rank-$2$ subgroup $\tGamma$ of index at most $d_\lambda$. According to Proposition \ref{you peeked}, we have
$$\rank\Gamma\le \rank\tGamma+\log_2[\Gamma:\tGamma]\le2+\log_2d_\lambda,$$
and hence, since the rank is an integer, $\pi_1(M)\cong \Gamma$ has rank at most $k_\lambda=2+\lfloor\log_2d_\lambda\rfloor$.
\EndProof
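The integer rounding in the preceding proof is harmless, since $\lfloor2+\log_2d\rfloor=2+\lfloor\log_2d\rfloor$ for every natural number $d$. The following Python sketch (illustrative only, not part of the argument) confirms this for small values of $d$:

```python
import math

def rank_bound_integer(d):
    """Largest integer consistent with the bound rank <= 2 + log2(d)."""
    return math.floor(2 + math.log2(d))

def k_lambda(d):
    """The constant k_lambda = 2 + floor(log2 d) used in the corollary."""
    return 2 + math.floor(math.log2(d))

# the two agree for every natural number d, so the rounding loses nothing
assert all(rank_bound_integer(d) == k_lambda(d) for d in range(1, 10000))
```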
\section{Margulis numbers and short relations}\label{short section}
\Notation\label{weird enough for ya?}
Let $\lambda$ be a real number with $0<\lambda<(\log3)/2$. Then for any sufficiently large positive integer $N$ we have
\Equation\label{you're so weird}
\frac{3^{N}-1}{4N+1}\ge2667(\sinh(2N\lambda+.104)-(2N\lambda+.104)).
\EndEquation
(The natural logarithm of the left hand side of (\ref{you're so weird}) is asymptotic to $N\log3$, whereas the natural logarithm of the right hand side is asymptotic to $2N\lambda<N\log3$.)
I shall denote by $N(\lambda)$ the smallest positive integer $N$ for which (\ref{you're so weird}) holds.
\EndNotation
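Since $N(\lambda)$ is defined only implicitly by (\ref{you're so weird}), a direct search makes the definition concrete. The following Python sketch (illustrative only, not part of the paper) tests the inequality for successive $N$; the constant $0.104$ is the Margulis number $\mu$ used in the proof of Proposition \ref{first one}:

```python
import math

MU = 0.104  # the universal Margulis number from Meyerhoff, used below

def inequality_holds(N, lam):
    """Inequality (you're so weird) for a given positive integer N."""
    t = 2 * N * lam + MU
    return (3**N - 1) / (4 * N + 1) >= 2667 * (math.sinh(t) - t)

def N_of(lam):
    """N(lambda): the smallest positive integer for which it holds.

    Terminates for 0 < lam < (log 3)/2, since the left hand side then
    eventually outgrows the right hand side.
    """
    N = 1
    while not inequality_holds(N, lam):
        N += 1
    return N
```

For instance, this search shows that $N(\lambda)$ is a modest two-digit integer for $\lambda$ near $0.2$.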
Here is one simple estimate of $N(\lambda)$:
\Proposition\label{nestimate}
Let $\lambda\in(0.1,(\log3)/2)$ be given, and set $\beta=1/((\log3)-2\lambda)$. Then
\Equation\label{real nestimate}
N(\lambda)<1+110\beta\log\beta.
\EndEquation
\EndProposition
\Proof
Since $\lambda>0.1$ we have $\beta>1/((\log3)-0.2)>1.11$. Hence the right hand side of (\ref{real nestimate}) is bounded below by $1+(110)(1.11)(\log 1.11)=13.7\ldots$. We may therefore assume without loss of generality that $N(\lambda)\ge14$.
Set $n=N(\lambda)-1\ge13$. From the definition of $N(\lambda)$ we have
\Equation\label{obversely weird}
\frac{3^{n}-1}{4n+1}<2667(\sinh(2n\lambda+.104)-(2n\lambda+.104)).
\EndEquation
Since in particular we have $n\ge3$, the left hand side of (\ref{obversely weird}) is bounded below by $3^n/5n$. The right hand side is obviously bounded above by $2667\exp(2n\lambda+.104)/2$. Hence
$$3^n\le\frac{2667}2\cdot5n\cdot\exp(2n\lambda+.104)<7400ne^{2n\lambda},$$
which upon taking logarithms and using the definition of $\beta$ gives
\Equation\label{poultroon}
n<\beta\log(7400n).
\EndEquation
Now suppose that (\ref{real nestimate}) is false, so that $n\ge110\beta\log\beta$. Define a function $g(x)$ for $x>0$ by $g(x)=x/\log(7400x)$. Then $g(x)$ is monotone increasing for $x>e/7400$, and since $n\ge110\beta\log\beta>12.7$, we have
$g(n)\ge g(110\beta\log\beta)$. With (\ref{poultroon}) this gives
$$\frac{110\beta\log\beta}{\log(7400\cdot 110\beta\log\beta)} \le \frac{n}{\log(7400n)}<\beta,$$
so that
$${110\log\beta}<\log7400+\log110+\log\beta+\log\log\beta<13.61+\log\beta+\log\log\beta,$$
i.e.
\Equation\label{hermione}
{109\log\beta}-\log\log\beta<13.61.
\EndEquation
On the other hand, the function $h(x):=109x-\log x$ is monotonically increasing for $x>1/109$. Since $\log\beta>\log 1.11=0.104\ldots>1/109$, we have
$${109\log\beta}-\log\log\beta=h(\log\beta)>h(\log1.11)=13.63\ldots,$$
which contradicts (\ref{hermione}).
\EndProof
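The purely numerical facts used in the preceding proof can be checked directly (an illustrative sketch, not part of the argument):

```python
import math

# beta > 1.11 whenever lambda > 0.1, since beta = 1/((log 3) - 2*lambda)
assert 1 / (math.log(3) - 0.2) > 1.11

# log 7400 + log 110 < 13.61, as used to derive (hermione)
assert math.log(7400) + math.log(110) < 13.61

# h(x) = 109x - log x satisfies h(log 1.11) = 13.63... > 13.61
def h(x):
    return 109 * x - math.log(x)

assert h(math.log(1.11)) > 13.63
```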
\begin{proposition}\label{first one}
Let
$M$ be an orientable hyperbolic $3$-manifold, and write $M=\HH^3/\Gamma$, where $\Gamma\le\isomplus(\HH^3)$ is discrete and torsion-free. Let $\lambda$ with $0<\lambda<(\log3)/2$ be given, let $P$ be a point of $\HH^3$, and let $x$ and $y$ be non-commuting elements of $\Gamma$ such
that $\max(d(P,x\cdot P),d(P,y\cdot P))<\lambda$. Then there is a reduced word $W$ in two letters, with $0<\length W\le 8N(\lambda)$, such that $W(x,y)=1$. (Here $N(\lambda)$ is defined by \ref{weird enough for ya?}.)
\end{proposition}
\Proof
Set $\mu=0.104$. I pointed out in the introduction that according to \cite{meyerhoff}, we have $\mu<\mu_+(3)$; that is, $\mu$ is a Margulis number for every orientable hyperbolic $3$-manifold. Set $N=N(\lambda)$.
Since $\mu$ is in particular a Margulis number for $M$, the elements $\gamma\in\Gamma$ such that $d(\gamma\cdot P,P)<\mu$ generate an abelian subgroup $C$ of $\Gamma$.
Let $F_2$ denote a free group on two (abstract) generators $\xi$ and $\eta$. We identify $F_2$ with the set of all reduced words in $\xi$ and $\eta$, so that $V(\xi,\eta)=V$ for every reduced word $V$. Let $\phi:F_2\to\Gamma$ denote the unique homomorphism such that $\phi(\xi)=x$ and $\phi(\eta)=y$; then $\phi(V)=V(x,y)$ for every reduced word $V$.
For every positive integer $n$, let $\calv_n\subset F_2$ denote the set of all reduced words of length at most $n$ in $\xi$ and $\eta$.
If $m$ and $n$ are positive integers, then for all $V\in\calv_m$ and $V'\in\calv_n$, we can concatenate $V$ and $V'$ and then reduce the resulting word to obtain a reduced word of length at most $m+n$ representing the product $VV'$ in $F_2$. Hence
\Equation\label{give up the chainu8}
\calv_m\calv_n\subset\calv_{m+n}\text{ for all }m,n>0.
\EndEquation
Note also that
\Equation\label{hu 8 the chain}
\calv_n^{-1}=\calv_n\text{ for every }n>0.
\EndEquation
For any $k>0$, the number of reduced words in $\xi$ and $\eta$ of length exactly $k$ is $4\cdot3^{k-1}$. Summing from $k=1$ to $k=n$, we deduce that
\Equation\label{sum like it dim}
\#(\calv_n)-1=2(3^{n}-1) \text{ for every }n\ge1.
\EndEquation
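As a sanity check (not part of the argument), the count (\ref{sum like it dim}) is easily confirmed by brute-force enumeration of reduced words. The Python sketch below uses an ad hoc encoding of $\xi^{\pm1},\eta^{\pm1}$ as the letters \texttt{x}, \texttt{X}, \texttt{y}, \texttt{Y}:

```python
LETTERS = ['x', 'X', 'y', 'Y']                      # xi, xi^-1, eta, eta^-1
INVERSE = {'x': 'X', 'X': 'x', 'y': 'Y', 'Y': 'y'}

def reduced_words(n):
    """All reduced words of length at most n, including the empty word."""
    words, frontier = [''], ['']
    for _ in range(n):
        # extend each word of maximal length by any letter that does not
        # cancel against its last letter
        frontier = [w + a for w in frontier for a in LETTERS
                    if not w or INVERSE[w[-1]] != a]
        words += frontier
    return words

# each length-k layer has 4 * 3^(k-1) words, so #(V_n) - 1 = 2*(3^n - 1)
for n in range(1, 8):
    assert len(reduced_words(n)) - 1 == 2 * (3**n - 1)
```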
I will assume that:
\Equation\label{o'galavant}
\phi(W)\ne1\text{ for every }W\in\calv_{ 8N}-\{1\}.
\EndEquation
Under the assumption (\ref{o'galavant}) I will derive a contradiction, thereby proving the proposition.
Assuming (\ref{o'galavant}), I claim:
\Equation\label{pogo shtick}
\#(\calv_N\cap\phi^{-1}(\gamma C))\le4N+1 \text{ for any left coset }\gamma C\text{ of }C\text{ in }\Gamma.
\EndEquation
To prove (\ref{pogo shtick}) I will first consider the special case in which $\phi^{-1}(C)\cap\calv_{2N}=\{1\}$. In this case, if $V$ and $V'$ are elements of $\calv_N\cap\phi^{-1}(\gamma C)$ for a given left coset $\gamma C$ of $C$ in $\Gamma$, we have $U:=V^{-1}V'\in\phi^{-1}(C)$; but by (\ref{give up the chainu8}) and (\ref{hu 8 the chain}) we have $U\in\calv_{2N}$. The assumption
$\phi^{-1}(C)\cap\calv_{2N}=\{1\}$ then implies that $U=1$ and hence that $V=V'$. Hence in this case we have $\#(\calv_N\cap\phi^{-1}(\gamma C))\le1$, which is stronger than (\ref{pogo shtick}).
Hence in proving (\ref{pogo shtick}) we may assume that
$\phi^{-1}(C)\cap\calv_{2N}\not=\{1\}$.
Of course we may also assume that $\calv_N\cap\phi^{-1}(\gamma C)\not=\emptyset$. Let us fix an element $U_0\ne1$ of $\phi^{-1}(C) \cap\calv_{2N}$ and an element $V_1$ of $\calv_N\cap\phi^{-1}(\gamma C)$. Let $\hC$ denote the centralizer of $U_0$ in $F_2$; then $\hC$ is cyclic, since $F_2$ is free and $U_0\ne1$.
Let us define an injective map $J:\calv_N\cap\phi^{-1}(\gamma C)\to F_2$ by $J(V)=V^{-1}V_1$. Since
$V_1\in\phi^{-1}(\gamma C)$, we have $J(V)\in\phi^{-1} (C)$ for every $V\in\calv_N\cap\phi^{-1}(\gamma C)$. Since $V_1\in\calv_N$, it follows from (\ref{give up the chainu8}) and (\ref{hu 8 the chain}) that $J(V)\in\calv_{2N}$ for every $V\in\calv_N\cap\phi^{-1}(\gamma C)$. Thus $J$ maps $\calv_N\cap\phi^{-1}(\gamma C)$ into $\calv_{2N}\cap\phi^{-1}(C)$.
Let $V\in\calv_N\cap\phi^{-1}(\gamma C)$ be given.
Since $U_0,J(V)\in\phi^{-1}(C)$, we have $\phi(J(V)U_0J(V)^{-1}U_0^{-1})=1$. Since $J(V)$ and $U_0$ belong to $\calv_{2N}$, it follows from (\ref{give up the chainu8}) and (\ref{hu 8 the chain}) that
$W:=J(V)U_0J(V)^{-1}U_0^{-1}\in\calv_{8N}$. Since $\phi(W)=1$, it follows from (\ref{o'galavant}) that $W=1$, i.e. that $J(V)$ commutes with $U_0$. This means that $J(V)\in\hC$.
Thus $J$ is an injection from $\calv_N\cap\phi^{-1}(\gamma C)$ to $\calv_{2N}\cap\hC$, and so $\#(\calv_N\cap\phi^{-1}(\gamma C))\le \#(\calv_{2N}\cap\hC)$. But according to Lemma \ref{o hula who}, applied with $k=2N$ and with $Z=\hC$, we have $\#(\calv_{2N}\cap\hC)\le4N+1$. Thus (\ref{pogo shtick}) is established.
Now let $\call$ denote the set of all left cosets of $C$ in $\Gamma$, and define a map $\psi:F_2\to\call$ by $\psi(V)=\phi(V)C$. We may then paraphrase (\ref{pogo shtick}) by saying that the fibers of the surjection $\psi|\calv_N:\calv_N\to\psi(\calv_N)$ have cardinality at most $4N+1$. If we set $r=\#(\psi(\calv_N))$, it follows that $r\ge\#(\calv_N)/(4N+1)$. Combining this with (\ref{sum like it dim}), we find that
\Equation\label{barnacles to you}
r>\frac{2(3^{N}-1)}{4N+1}.
\EndEquation
Let $b$ denote the open ball of radius $\mu/2$ centered at $P$, and let
$B$ denote the ball of radius $N\lambda+(\mu/2)$ centered at $P$. We have
\Equation\label{volare o o}
\vol b=\pi(\sinh(\mu)-\mu)= 0.000589\ldots
\EndEquation
and
\Equation\label{o o o o}
\vol B=\pi(\sinh(2N\lambda+\mu)-(2N\lambda+\mu)).
\EndEquation
Since
$d(x\cdot P, P)<\lambda$ and $d(y\cdot P, P)<\lambda$, and since $x$ and $y$ are isometries, we have
$d(\phi(V)\cdot P, P) =d(V(x,y)\cdot P, P)<N\lambda$
for every $V\in\calv_N$. It follows that
\Equation\label{if i had them i'd be king}
\phi(V)\cdot b\subset B\text{ for every }V\in\calv_N.
\EndEquation
According to the definition of $r$, there are elements $\gamma_1,\ldots,\gamma_r$ of $\phi(\calv_N)$ which represent distinct left cosets of $C$ in $\Gamma$. From (\ref{if i had them i'd be king}) we have
\Equation\label{faith and begorrah}
\gamma_i\cdot b\subset B\text{ for }i=1,\ldots,r.
\EndEquation
If $i$ and $j$ are distinct indices in $\{1,\ldots,r\}$ we have $\gamma_j^{-1}\gamma_i\notin C$, which by the definition of $C$ gives $
d(\gamma_i\cdot P,\gamma_j\cdot P)=
d(\gamma_j^{-1}\gamma_i\cdot P,P)\ge\mu$, so that
\Equation\label{whatcha doin' in disjoint}
\gamma_i\cdot b\cap \gamma_j\cdot b=\emptyset\text{ for all distinct indices }i\text{ and }j\text{ in }\{1,\ldots,r\}.
\EndEquation
From (\ref{faith and begorrah}) and (\ref{whatcha doin' in disjoint}) it follows that $r\vol b\le\vol B$. Combining this with (\ref{barnacles to you}), (\ref{volare o o}) and (\ref{o o o o}), we obtain
$$\begin{aligned}
\frac{2(3^{N}-1)}{4N+1}&< r\le\frac{\vol B}{\vol b}\cr&\le {\pi(\sinh(2N\lambda+\mu)-(2N\lambda+\mu))}/{0.000589}\cr
&<5334
(\sinh(2N\lambda+\mu)-(2N\lambda+\mu))
\end{aligned}
$$
which is a contradiction, since by definition (\ref{you're so weird}) holds with $N=N(\lambda)$.
\EndProof
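The two numerical constants in the preceding proof, namely the lower bound $0.000589$ for $\vol b$ in (\ref{volare o o}) and the factor $5334$ in the final display, can be checked directly (an illustrative sketch, not part of the argument):

```python
import math

MU = 0.104

def ball_volume(r):
    """Volume of a hyperbolic ball of radius r in H^3: pi*(sinh 2r - 2r)."""
    return math.pi * (math.sinh(2 * r) - 2 * r)

vol_b = ball_volume(MU / 2)          # = pi*(sinh(mu) - mu), cf. (volare o o)
assert 0.000589 < vol_b < 0.00059    # so dividing by 0.000589 overestimates
assert math.pi / 0.000589 < 5334     # the constant in the final display
```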
\section{From a short relation to a volume bound}\label{short-bounded section}
The main result of this section is Proposition \ref{short to bounded}. As I mentioned in the introduction, the basic method of proof of that proposition is due to Cooper \cite{cooper}. Among the several preliminary results needed to apply Cooper's method in the present situation, Lemma \ref{oh far out} is the deepest, while Proposition \ref{wow what a variety of characters} seems to be of particular independent interest.
\begin{lemma}\label{same old characters}Let $N$ and $Q$ be irreducible, orientable $3$-manifolds. Suppose that $\pi_1(N)$ is isomorphic to a subgroup of $\pi_1(Q)$ and that $ N$ is closed. Then either $Q$ is closed, or $N$ is simply connected. \end{lemma}
\Proof
First consider the case in which $\pi_1(N)$ is infinite. In this case $\pi_1(Q)$ is also infinite, and $Q$ and $N$ are aspherical by Proposition \ref{sphere consequence}. Fix a base point $q$ in $Q$ and a subgroup $J$ of $\pi_1(Q,q)$ isomorphic to $\pi_1(N)$, and let $(\tQ,\tq)$ denote the based covering of $(Q,q)$ determined by $J$. Since $N$ and $\tQ$ are aspherical and have isomorphic fundamental groups, they are homotopy-equivalent. If $ Q$ is not closed then $ \tQ$ is not closed, and hence $H_3(\tQ;\ZZ)=0$. But $H_3(N;\ZZ)\ne0$ since $N$ is closed, and we have a contradiction to the homotopy-equivalence of $\tQ$ and $N$. Hence in this case $ Q$ must be closed.
Now consider the case in which $\pi_1(N)$ is finite. In this case I will assume that $\pi_1(N)$ is non-trivial and show that $Q$ is closed, thus establishing the conclusion. The assumption that $\pi_1(N)$ is finite and non-trivial implies that $\pi_1(Q)$ has torsion, and so $Q$ cannot be aspherical \cite[Lemma 8.4]{epstein}. By
Proposition \ref{sphere consequence}, $\pi_1(Q)$ is finite.
We apply Proposition \ref{core}, letting $Q$ play the role of $M$ in that proposition. This gives a compact, irreducible submanifold $Q_0$ of $Q$ such that the inclusion homomorphism $\pi_1(Q_0)\to\pi_1(Q)$ is an isomorphism.
In particular, $\pi_1(Q_0)$ is finite, so that $H_1(Q_0;\QQ)=0$, and hence every component of $\partial Q_0$ is a sphere (for example by \cite[Proposition 2.2]{finiteness}).
Since $Q_0$ is irreducible it follows that either $\partial Q_0=\emptyset$ or $Q_0$ is a ball. In the latter case the hypothesis gives $\pi_1(N)=\{1\}$, a contradiction. Hence $\partial Q_0=\emptyset$, which implies that $Q=Q_0$ and that $Q$ is closed.
\EndProof
\begin{proposition}\label{wow what a variety of characters}Let $N$ and $Q$ be irreducible, orientable $3$-manifolds (possibly with boundary). Suppose that $N$ is compact, that $\pi_1(N)$ is non-cyclic and that every component of $\partial N$ is a torus. Let $h:N\to Q$ be a map such that $h_\sharp:\pi_1(N)\to\pi_1(Q)$ is injective. Then there exist a compact submanifold $Q_0$ of $Q$ such that every component of $\partial Q_0$ is a torus, and a map $h_0:N\to Q$ homotopic to $h$, such that $h_0(N)\subset Q_0$.
\end{proposition}
\Proof
First note that since $N$ is irreducible, $\pi_1(N)$ is non-cyclic, and every component of $\partial N$ is a torus, it follows from Proposition \ref{solid-torus} that $N$ is boundary-irreducible.
Now consider the case in which $N$ is closed. Note that since $h_\sharp$ is injective,
$\pi_1(N)$ is isomorphic to a subgroup of $\pi_1(Q)$. Since $\pi_1(N)$ is non-cyclic, it then follows from Lemma \ref{same old characters} that $Q$ is closed. Hence the conclusion holds in this case if we set $Q_0=Q$.
Next consider the case in which $\partial N\ne\emptyset$ and $Q$ is compact. Let us fix a Seifert-fibered space $\Sigma\subset\inter Q$ having the property stated in Proposition \ref{absolutely}. Let us denote by $J$ the union of all components $C$ of $\overline{Q-\Sigma}$ such that $\partial C\subset\partial \Sigma$.
In this case I will set $Q_0=J\cup\Sigma$, so that every component of $\partial Q_0$ is a torus, and I will construct a map $h_0:N\to Q$, homotopic to $h$, such that $h_0(N)\subset Q_0$.
Consider an arbitrary component $T$ of $\partial N$. By hypothesis $T$ is a torus. Since $N$ is boundary-irreducible, $T$ is $\pi_1$-injective in $N$. Since $h_\sharp:\pi_1(N)\to\pi_1(Q)$ is injective it follows that $(h|T)_\sharp:\pi_1(T)\to\pi_1(Q)$ is injective. It then follows from the property of $\Sigma$ stated in Proposition \ref{absolutely} that $h|T:T\to Q$ is homotopic to a map whose image is contained in $\Sigma$. Since this is true for each component $T$ of $\partial N$, it follows that $h|\partial N$ is homotopic to a map $g:\partial N\to Q$ such that $g(\partial N)\subset\inter\Sigma$. By the homotopy extension property of polyhedra, $g$ extends to a map from $N$ to $Q$ which is homotopic to $h$.
It follows from \cite[Lemma 6.5]{hempel} that we can choose an extension $h_1:N\to Q$ of $g$, homotopic to $h$, so that $h_1$ is transverse to $\partial \Sigma$ and so that each component of $h_1^{-1}(\partial\Sigma)$ is incompressible. Since $h_1(\partial N)=g(\partial N)\subset\inter\Sigma$, we have $h_1^{-1}(\partial\Sigma)\subset\inter N$.
I claim:
\Claim\label{talmudic} If $K$ is a component of $h_1^{-1}(\overline{Q-Q_0})$, then $h_1|K$ is homotopic rel $\partial K$ to a map $h_K$ with $h_K(K)\subset\partial\Sigma$. (In particular
$h_1(\partial K)\subset\partial\Sigma$.)
\EndClaim
To prove \ref{talmudic}, consider a component $K$ of $h_1^{-1}(\overline{Q-Q_0})$, and let $C$ denote the component of $\overline{Q-Q_0}$ containing $h_1(K)$.
Since $h_1$ is transverse to $\partial\Sigma$ and maps $\partial N$ into $\inter\Sigma$, we have $h_1(\partial K)\subset C\cap\partial\Sigma$. In particular $h_1|K:K\to C$ is a boundary-preserving map.
Every component of the frontier of $K$ in $N$ is a component of $h_1^{-1}(\partial\Sigma)$ and is therefore incompressible in $N$. Hence $K$ is $\pi_1$-injective in $N$.
On the other hand, $h_\sharp:\pi_1(N)\to\pi_1(Q)$ is injective by hypothesis, and since $h_1$ and $h$ are homotopic, $(h_1)_\sharp:\pi_1(N)\to\pi_1(Q)$ is also injective. This shows that $(h_1|K)_\sharp:\pi_1(K)\to\pi_1(Q)$ is injective, and so in particular
$(h_1|K)_\sharp:\pi_1(K)\to\pi_1(C)$ is injective.
Since $N$ is irreducible and boundary-irreducible, and since we have observed that the components of the frontier of $K$ in $N$ are incompressible, the manifold $K$ is also irreducible and boundary-irreducible. Since the components of $\partial N$ are tori and the components of the frontier of $ K$ are incompressible, $\partial K$ has no sphere components.
Since $\partial N\ne\emptyset$, we have $\partial K\ne\emptyset$. According to \cite[Lemma 6.7]{hempel}, it follows that $K$ is a Haken manifold. Since $K$ is boundary-irreducible it is not a solid torus. It now follows from
\cite[Theorem 13.6]{hempel} that every boundary-preserving map from $K$ to $C$ which induces an injection of fundamental groups is homotopic rel $\partial K$ either to a covering map or to a map whose image is contained in $\partial C$.
Suppose that
$h_1|K:K\to C$ is homotopic rel $\partial K$ to a covering map. Then in particular we have $h_1(\partial K)=\partial C$. Since we have seen that $h_1(\partial K)\subset C\cap\partial\Sigma$, this implies that $\partial C\subset\partial\Sigma$, which by the definition of $Q_0$ implies that $C\subset Q_0$. This is a contradiction, since
$C$ is a component of $\overline{Q-Q_0}$. Hence $h_1|K:K\to C$ is homotopic rel $\partial K$ to a map $h_K$ whose image is contained in $\partial C$. Since $h_1(\partial K)\subset C\cap\partial\Sigma$, it follows that we in fact have $h_K(K)\subset\partial\Sigma$. This proves \ref{talmudic}.
We may now define a map $h_0:N\to Q$ by letting $h_0$ agree with $h_1$ on $h_1^{-1}(Q_0)$, and setting $h_0|K=h_K$ for each component $K$ of $h_1^{-1}(\overline{Q-Q_0})$. It is immediate from the properties of the $h_K$ stated in \ref{talmudic} that $h_0$ is well-defined, is homotopic to $h_1$ (and hence to $h$), and maps $N$ into $Q_0$. This establishes the conclusion in this case.
There remains the case in which $\partial N\ne\emptyset$ and $Q$ is non-compact. In this case, fix a compact submanifold $R_0$ of $Q$ containing $h(N)$. By Lemma \ref{plug}, there is a compact
irreducible submanifold $R$ of $Q$ such that $R\supset R_0$. The
hypotheses of the proposition continue to hold if $Q$ is replaced by
$R$. As $R$ is compact, the case of the proposition already proved
gives a compact submanifold $Q_0$ of $R\subset Q$ such that every
component of $\partial Q_0$ is a torus, and a map $h_0:N\to R$
homotopic to $h$ in $R$ (and hence in $Q$), such that $h_0(N)\subset
Q_0$. \EndProof
Although it is convenient to state the following result in the hyperbolic setting, the proof is essentially topological.
\begin{lemma}\label{oh far out}Let $M$ be a hyperbolic $3$-manifold, let $K$ be a compact, path-connected space such that $\pi_1(K)$ has rank $2$ and is not free, and let $f:K\to M$ be a continuous map such that $f_\sharp:\pi_1(K)\to\pi_1(M)$ is surjective and $f(K)\subset M$ is a polyhedron with respect to some $C^1$ triangulation of $M$. Then for each component $C$ of $M-f(K)$, the image of the inclusion homomorphism $\pi_1(C)\to\pi_1(M)$ is abelian.
\end{lemma}
\Proof
We may assume that $\pi_1(M)$ is non-abelian, as otherwise the conclusion is obvious.
We fix a PL structure on $M$, defined by a $C^1$ triangulation, in which $f(K)$ is a polyhedron.
For the purpose of this proof, a compact polyhedron in $M$ will be termed ``small'' if it is contained in a $3$-ball in $M$. Let $R$ be a regular neighborhood of $f(K)$ in $M$. Let $Q$ denote the union of $R$ with all small components of $\overline{M-R}$. Then no component of
$\overline{M-Q}$ is small. If $S$ is any $2$-sphere in $\inter Q$, then $S$ bounds a $3$-ball $B\subset M$, since $M$ is irreducible by Assertion (\ref{it's irreducible}) of Proposition \ref{hypersimple}. If $B\not\subset Q$, then $B$ contains a component of
$\overline{M-Q}$, which by definition must be small, a contradiction; hence $B\subset Q$. This shows that $Q$ is irreducible.
Fix a base point $x\in K$, and set $q=f(x)\in f(K)\subset R\subset Q$. We may regard $f$ as a map from the based space $(K,x)$ to the based space $(Q,q)$. Let $I\le\pi_1(Q,q)$ denote the image of the homomorphism $f_\sharp:\pi_1(K,x)\to\pi_1(Q,q)$. Then $I$ determines a based covering space $p:(\tQ,\tq)\to(Q,q)$, and there is a unique lift $\tf:K\to\tQ$ such that $\tf(x)=\tq$.
If $i:Q\to M$ denotes inclusion, we have a commutative diagram
$$
\begin{xy}
(0,0)*+{\pi_1(K,x)}="a";(30,20)*+{\pi_1(\tQ,\tq)}="b";(30,0)*+{\pi_1(Q,q)}="c";
(60,0)*+++{\pi_1(M,q).}="d";
{\ar^{\tf_\sharp}"a";"b"}
{\ar^{f_\sharp}"a";"c"}
{\ar^{p_\sharp}"b";"c"}
{\ar^{i_\sharp}"c";"d"}
\end{xy}
$$
According to the hypothesis, $i_\sharp\circ f_\sharp$ is surjective, and hence by commutativity of the diagram, $i_\sharp\circ p_\sharp:\pi_1(\tQ,\tq)\to \pi_1(M,q)$ is surjective. Since $\pi_1(M)$ is non-abelian, it follows that $\pi_1(\tQ)$ is non-abelian, and in particular non-cyclic. On the other hand, the construction of $(\tQ,\tq)$ implies that
$\tf_\sharp$ is surjective. Thus the non-cyclic group $\pi_1(\tQ)$ is a homomorphic image of $\pi_1(K)$, which by hypothesis is a non-free group of rank $2$. It follows that $\pi_1(\tQ)$ is also a non-free group of rank exactly $2$.
Since $Q$ is irreducible, it follows from \cite[p. 647, Theorem 3]{msy} that $\tQ$ is irreducible.
We now apply Proposition \ref{core}, letting $\tQ$ play the role of $M$ in that proposition. This gives a compact, irreducible submanifold $N$ of $\tQ$ such that the inclusion homomorphism $\pi_1(N)\to\pi_1(\tQ)$ is an isomorphism. In particular, $\pi_1(N)$ is a non-free group of rank exactly $2$. Hence by Proposition \ref{jaco}, each component of $\partial N$ is a torus.
Set $h=p|N:N\to Q$. Since $p_\sharp:\pi_1(\tQ)\to\pi_1(Q)$ is injective, and since the inclusion homomorphism $\pi_1(N)\to\pi_1(\tQ)$ is an isomorphism, the homomorphism $h_\sharp:\pi_1(N)\to\pi_1(Q)$ is injective. We have seen that
$N$ and $Q$ are irreducible, orientable $3$-manifolds; that $N$ is compact; that $\pi_1(N)$ has rank $2$, and is in particular non-cyclic; and that every component of $\partial N$ is a torus. Thus all the hypotheses of Proposition \ref{wow what a variety of characters} hold. The latter proposition now gives a compact submanifold $Q_0$ of $Q$ such that every component of $\partial Q_0$ is a torus, and a map $h_0:N\to Q$ homotopic to $h$, such that $h_0(N)\subset Q_0$.
We have seen that $i_\sharp\circ p_\sharp:\pi_1(\tQ)\to\pi_1(M)$ is surjective. Since the inclusion homomorphism $\pi_1(N)\to\pi_1(\tQ)$ is an isomorphism, it follows that
$i_\sharp\circ h_\sharp=i_\sharp\circ (p|N)_\sharp :\pi_1(N)\to\pi_1(M)$ is surjective. Since the maps $h_0,h:N\to Q$ are homotopic, $i_\sharp\circ (h_0)_\sharp:\pi_1(N)\to\pi_1(M)$ is also surjective. Since $h_0(N)\subset Q_0$, the inclusion homomorphism $\pi_1(Q_0)\to\pi_1(M)$ is surjective.
To establish the conclusion of the lemma, it suffices to show that for every component $c$ of $\closure{M-R}$, the image of the inclusion homomorphism $\pi_1(c)\to\pi_1(M)$ is abelian. According to the definition of $Q$, any such $c$ is either a small component of $\overline{M-R}$, in which case the image of the inclusion homomorphism is trivial, or a component of $\overline{M-Q}$. In the latter case, $c$ is contained in a component $c_0$ of $\overline{M-Q_0}$, and I shall complete the proof by showing that the image of the inclusion homomorphism $\pi_1(c_0)\to\pi_1(M)$ is abelian. Let us fix any component $T$ of $\partial c_0$. Then $T$ is a component of $\partial Q_0$ and is therefore a torus. It follows from Assertions (\ref{yes injective}) and (\ref{not injective}) of Proposition \ref{hypersimple} that $T$ is the boundary of a $3$-dimensional submanifold $J$ of $M$, closed as a subset of $M$, such that the inclusion homomorphism $\pi_1(J)\to\pi_1(M)$ has abelian image. Since $Q_0$ is connected, we must have either $J=c_0$, in which case the required conclusion holds, or $J\supset Q_0$. The latter alternative would imply that the inclusion homomorphism $\pi_1(Q_0)\to\pi_1(M)$ has abelian image. This is impossible, as this inclusion homomorphism is surjective and $\pi_1(M)$ is non-abelian.
\EndProof
\begin{lemma}\label{eh golly you}Let $M$ be a hyperbolic $3$-manifold of finite volume, and let $C$ be a compact $3$-dimensional (smooth) submanifold of $M$. Suppose that the inclusion homomorphism $\pi_1(C)\to\pi_1(M)$ has abelian image.
Then
$$\vol C\le\frac12\area \partial C.$$
\end{lemma}
\Proof
This is essentially Lemma 4.4 of \cite{A-L}. The authors of \cite{A-L} assume $C$ to be ``PL'' in a sense that is not clear to me, but their proof can be read in the smooth category without change.
\EndProof
\begin{lemma}\label{jeweler you failed}Let $M$ be a hyperbolic $3$-manifold of finite volume. Let $X\subset M$ be a compact set which is a $2$-dimensional polyhedron with respect to some smooth triangulation of $M$. Suppose that for every component $C$ of $M-X$, the inclusion homomorphism $\pi_1(C)\to\pi_1(M)$ has abelian image.
Then
$$\vol M\le\area X.$$
\end{lemma}
\Proof
Let $m$ denote the number of cusps of $M$.
Let $\epsilon>0$ be given.
Since $M$ has finite volume, there are disjoint \cn s $V_1,\ldots,V_m$ in $M$ such that $M_0:=\overline{M-(V_1\cup\cdots\cup V_m)}$ is compact. Set $V=V_1\cup\cdots\cup V_m$. After replacing the $V_i$ by smaller \cn s if necessary, we may assume that each $V_i$ has volume less than $\epsilon/m$, that
each $\partial V_i$ has area less than $\epsilon/m$, and that the $V_i$ are disjoint from $X$. Hence $\vol V<\epsilon$, $\area\partial V<\epsilon$, and $X\subset\inter M_0$.
Since $X$ is a $2$-dimensional polyhedron with respect to the given smooth triangulation of $M$, there is an open neighborhood $U$ of $X$ in $\inter M_0$ with $\vol U<\epsilon$. Let $N\subset U$ be a smooth regular neighborhood of $X$ in the sense of \cite{hirsch}. We may choose $N$ in such a way that $\area\partial N<\epsilon+2\area X$.
Let $C_1,\ldots,C_n$ denote the components of $\overline{M_0-N}$. Then each $C_j$ is contained in a component of $M-X$, and hence the inclusion homomorphism $\pi_1(C_j)\to\pi_1(M)$ has abelian image.
According to Lemma \ref{eh golly you}, we have
$$
\vol C_j\le\frac12\area \partial C_j
$$
for $j=1,\ldots,n$. Hence
$$\begin{aligned}
\vol(\overline{M_0-N})&\le\frac12\area \partial(\overline{M_0-N})\cr
&=\frac12\area\partial M_0+\frac12\area\partial N.
\end{aligned}
$$
Since $\area\partial M_0=\area\partial V<\epsilon$ and $\area\partial N<\epsilon+2\area X$, it follows that
$$\vol(\overline{M_0-N})<\epsilon+\area X.$$
Now since $M=V\cup N\cup\overline{M_0-N}$, we have
$$\begin{aligned}
\vol M&=\vol V +\vol N+\vol(\overline{M_0-N})\cr
&<\epsilon+\epsilon+(\epsilon+\area X)\cr
&=3\epsilon+\area X.
\end{aligned}
$$
Since $\epsilon>0$ was arbitrary, it follows that $\vol M\le\area X$.
\EndProof
\begin{proposition}\label{short to bounded}
Let
$(M,\star)$ be a based orientable hyperbolic $3$-manifold such that $\pi_1(M,\star)$ is generated by two non-commuting elements $x$ and $y$. Let $\lambda>0$ be given, and suppose that $x$ and $y$ are represented by closed loops of length $<\lambda$ based at $\star$. Let $W$ be a non-trivial reduced word in two letters such that $W(x,y)=1$. Then
$$\vol M<(\length (W)-2)\min(\pi, \lambda).$$ (In particular, $M$ has finite volume.)
\end{proposition}
\Proof
If $M$ has infinite volume then Proposition \ref{chained to the altar} implies that the non-commuting elements $x$ and $y$ cannot both be represented by loops of length $<\log3$ based at $\star$. Hence the hypothesis implies that $\vol M<\infty$.
Since $x$ and $y$ do not commute, they are in particular both non-trivial elements of $\pi_1(M,\star)$. Since $\pi_1(M)$ is generated by $x$ and $y$, it is non-abelian and in particular non-cyclic. As the fundamental group of a hyperbolic manifold, it is also torsion-free, and hence $x$ and $y$ have infinite order.
I will regard $W$ as a word in two abstract generators $\xi$ and $\eta$. Set $n=\length W$ and write $W=\psi_1\cdots\psi_n$, where each $\psi_j$ has the form $\xi^{\pm1}$ or $\eta^{\pm1}$. We may also identify $W$ with a non-trivial element of the free group $F$ on the generators $\xi$ and $\eta$. Since $\pi_1(M)$ is generated by $x$ and $y$ and since $W(x,y)=1$, there is an epimorphism from $F/\langle\langle W\rangle\rangle$ to $\pi_1(M,\star)$ that maps $\xi$ to $x$ and $\eta$ to $y$.
If there is a non-trivial reduced word $W'$ in two letters such that $n':=\length W'<n$ and such that $W'(x,y)=1$, and if
$\vol M<(n'-2)\min(\pi, \lambda)$, then in particular
$\vol M<(n-2)\min(\pi, \lambda)$. Hence, arguing inductively, we may assume that $n$ is the minimal length of any non-trivial reduced word which gives a relation between $x$ and $y$.
If $n$ were at most $3$, either $F/\langle\langle W\rangle\rangle$ would be cyclic, which is impossible since $\pi_1(M)$ is non-cyclic, or the image of $\xi$ or $\eta$ in $F/\langle\langle W\rangle\rangle$ would have finite order, which is impossible since $x$ and $y$ have infinite order. Hence $n\ge4$.
The hypothesis implies that there are loops $\alpha,\beta:[0,1]\to M$ based at $\star$ such that $\alpha|(0,1)$ and $\beta|(0,1)$ are open geodesics of length $<\lambda$, and such that $[\alpha]=x$ and $[\beta]=y$. These geodesics are non-constant since $x$ and $y$ are non-trivial elements of $\pi_1(M,\star)$. Let $\calg$ be a graph with one vertex $v$ and two closed edges $a$ and $b$. Fix loops $\kappa _a$ and $\kappa _b$ in $\calg$, based at $v$, such that $\kappa_a|(0,1)$ and $\kappa_b|(0,1)$ are homeomorphisms of $(0,1)$ onto $\inter a$ and $\inter b$ respectively. Define a continuous map $\phi:\calg\to M$ by setting $\phi(v)=\star$, $\phi|\inter a=\alpha\circ \kappa _a^{-1}$ and $\phi|\inter b=\beta\circ \kappa _b^{-1}$.
Let us define a map $\omega:S^1\to \calg$ as follows. Fix $n$ points on $S^1$, labeled in counterclockwise order as $\zeta_0,\ldots,\zeta_{n-1}$. Set $\zeta_n=\zeta_0$. For $1\le j\le n$ let $A_j$ denote the arc which, when oriented counterclockwise, has $\zeta_{j-1}$ and $\zeta_j$ as its initial and terminal points respectively. Fix a homeomorphism $h_j:A_j\to[0,1]$ which maps $\zeta_{j-1}$ and $\zeta_j$ to $0$ and $1$ respectively. Set $\omega(\zeta_j)=\star$ for $0\le j\le n-1$. For $j=1,\ldots,n$, define $\omega|A_j$ to be $\kappa_a\circ h_j$, $\overline{\kappa_a}\circ h_j$, $\kappa_b\circ h_j$, or $\overline{\kappa_b}\circ h_j$, according to whether
$\psi_j$ is equal to $\xi$, $\xi^{-1}$, $\eta$ or $\eta^{-1}$, respectively.
Since $W(x,y)=1$, the map $\phi\circ \omega$ is homotopic to a constant. Hence if we define a CW complex $K$ by attaching a $2$-cell to $\calg$ via the attaching map $\omega$, then $\phi$ extends to a map $f:K\to M$. Let $c:D^2\to K$ denote the characteristic map for the $2$-cell of $K$. For each $j$ with $2\le j\le n-2$, let $\sigma_j$ denote the line segment in the Euclidean disk $D^2$ with endpoints $\zeta_0$ and $\zeta_j$. Set $\sigma_1=A_1$ and $\sigma_{n-1}=A_n$. Then for each $j$ with $1\le j\le n-1$, the topological arc $\sigma_j$ has endpoints $\zeta_0$ and $\zeta_j$. For $j=1,\ldots,n-1$, by precomposing $f\circ c|\sigma_j$ with a homeomorphism from $[0,1]$ to $\sigma_j$ that maps $0$ to $\zeta_0$ and $1$ to $\zeta_j$, we obtain a loop $\nu_j$ based at $\star$. We have $[\nu_j]=W'(x,y)$, where $W':=\psi_1\cdots\psi_j$ is a reduced word of length $j<n$. By our minimality assumption on $n$, it follows that $\nu_j$ is a homotopically non-trivial loop, and is therefore fixed-endpoint homotopic to a loop whose restriction to $(0,1)$ is a non-constant geodesic. Furthermore, each of the paths $\nu_1|(0,1)$ and $\nu_{n-1}|(0,1)$ is a (possibly orientation-reversing) reparametrization of either $\alpha|(0,1)$ or $\beta|(0,1)$, and is therefore a non-constant geodesic. Hence after modifying $f$ within its homotopy class rel $\calg$ we may assume that $\nu_j|(0,1)$ is a non-constant geodesic for $j=1,\ldots,n-1$.
The set $D^2-(\sigma_1\cup\cdots\cup\sigma_{n-1})$ has $n-2$ components. The closures of these components are topological disks, which we may label as $\tau_1,\ldots,\tau_{n-2}$, where $\partial \tau_j=\sigma_j\cup A_{j+1}\cup\sigma_{j+1}$.
Let $j$ be any index with $1\le j\le n-2$. The maps $f|\sigma_j$ and $f|\sigma_{j+1}$ are reparametrizations of the non-constant geodesic paths $\nu_j$ and $\nu_{j+1}$, while $f|A_{j+1}$ may be obtained from one of the non-constant geodesic paths $\alpha$ or $\beta$ by precomposing it with some homeomorphism from $A_{j+1}$ to $[0,1]$. Hence if $\tG_j$ is a lift of the map $f\circ c| \tau_j$ to $\HH^3$, then $\tG_j(\partial\tau_j)$ is the boundary of a triangle $T_j\subset\HH^3$. Since one of the sides of $T_j$ is a lift of either $\alpha$ or $\beta$, the length of the shortest side of $T_j$ is less than $\lambda$. Hence it follows from Proposition \ref{uneeda} that $\area T_j<\min(\pi,\lambda)$.
Since $M$ is aspherical, we may arrange after modifying $f$ by a homotopy rel $\calg\cup c(\sigma_2)\cup\cdots\cup c(\sigma_{n-2})$ that for each $j$ with $1\le j\le n-2$, the map $f\circ c|\tau_j$ admits a lift to $\HH^3$ which maps $\tau_j$ onto $T_j$.
Now by \ref{gruosso} we may fix a smooth triangulation of $M$ with respect to which each of the sets $\alpha([0,1])$, $\beta([0,1])$ and $f\circ c(\tau_j)=p\circ \tG_j(\tau_j)$ ($j=1,\ldots,n-2$) is polyhedral. It follows that the union $f(K)$ of these sets is also polyhedral. We have
$$\area (f\circ c(\tau_j))\le \area T_j<\min(\pi,\lambda).$$
Hence
\Equation\label{weird love}
\area f(K)\le\sum_{j=1}^{n-2}\area (f\circ c(\tau_j))<(n-2)\min(\pi,\lambda).
\EndEquation
According to the construction of $K$ we have $\pi_1(K)\cong F/\langle\langle W\rangle\rangle$, where, as above, $F$ denotes the free group on the generators $\xi$ and $\eta$. In particular, $\pi_1(K)$ has rank at most $2$. Furthermore, since $W$ is a non-trivial reduced word, $\pi_1(K)$ is not a free group of rank $2$. On the other hand, since $x$ and $y$ generate $\pi_1(M,\star)$, the map $f_\sharp:\pi_1(K,v)\to\pi_1(M,\star)$ is surjective. Since we have observed that $\pi_1(M)$ is non-cyclic, $\pi_1(K)$ is also non-cyclic; hence $\pi_1(K)$ has rank exactly $2$. Thus all the hypotheses of Lemma \ref{oh far out} are seen to hold, and it follows that for each component $C$ of $M-f(K)$, the image of the inclusion homomorphism $\pi_1(C)\to\pi_1(M)$ is abelian.
We may now apply Lemma \ref{jeweler you failed}, taking $X=f(K)$, to deduce that
\Equation\label{broke in a big way}
\vol M\le\area f(K).
\EndEquation
The required conclusion $\vol M<(n-2)\min(\pi,\lambda)$ follows immediately from (\ref{weird love}) and (\ref{broke in a big way}).
\EndProof
\section{A concrete bound for $\vol M$ when $\mu(M)<(\log3)/2$, and its consequences}\label{concrete section}
\begin{theorem}\label{what, no soap?}
Let $\lambda$ be a positive real number strictly less than $(\log3)/2$, and let $N(\lambda)$ be defined by \ref{weird enough for ya?}. Then for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$ we have
$$\vol M< \lambda\cdot (8N(\lambda)-2).$$
\end{theorem}
\Proof
Suppose that $M$ is an orientable hyperbolic $3$-manifold such that $\mu(M)<\lambda$, i.e. such that $\lambda$ is not a Margulis number for $M$. Let us write $M=\HH^3/\Gamma$, where $\Gamma\le\isomplus(\HH^3)$ is discrete and torsion-free. Then there exist a point $P\in\HH^3$ and
non-commuting elements $x$ and $y$ of $\Gamma$ such
that $\max(d(P,x\cdot P),d(P,y\cdot P))<\lambda$.
According to Proposition \ref{first one}, there is a
reduced word $W$ in two letters, with $0<\length W\le 8N(\lambda)$, such that $W(x,y)=1$.
Set $\tGamma=\langle x,y\rangle\le\Gamma$, and $\tM=\HH^3/\tGamma$.
Let $\star\in\tM$ denote the image of $P\in\HH^3$ under the quotient map $\HH^3\to\HH^3/\tGamma$. Then $(\HH^3,P)$ is a based covering space of $(\tM,\star)$. Under the natural identification of $\pi_1(\tM,\star)$ with the deck group $\tGamma$, the elements $x$ and $y$ are identified with generators of
$\pi_1(\tM,\star)$ which are represented by closed loops of length $<\lambda$ based at $\star$. Applying Proposition \ref{short to bounded}, with $\tM$ playing the role of $M$ in that proposition, we deduce that
$$\vol\tM<\lambda(\length (W)-2)\le \lambda(8N(\lambda)-2).$$ (In particular, $\tM$ has finite volume.)
Since $\tM$ covers $M$, it follows that
$$\vol M< \lambda(8N(\lambda)-2).$$
\EndProof
In the following corollary,
$V_0=0.94\ldots$ will denote the volume of the Weeks manifold \cite{weeks}.
\Corollary\label{I can't spell conniptualization}
Let $\lambda$ be a positive real number strictly less than $(\log3)/2$, and let $N(\lambda)$ be defined by \ref{weird enough for ya?}. Then
for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$, the group $\pi_1(M)$ has a rank-$2$ subgroup of index at most
$\lambda\cdot (8N(\lambda)-2)/V_0$.
\EndCorollary
\Proof
Suppose that $M$ is an orientable hyperbolic $3$-manifold
such that $\mu(M)<\lambda$, i.e. such that $\lambda$ is not a Margulis number for $M$. Write $M=\HH^3/\Gamma$, where $\Gamma\le\isomplus(\HH^3)$ is discrete and torsion-free. Then by definition there exist a point $P\in\HH^3$ and non-commuting elements $x,y\in\Gamma$ such that
\Equation\label{omg}
\max(d(P,x\cdot P),d(P,y\cdot P))<\lambda.
\EndEquation
Now $\tGamma:=\langle x,y\rangle$ is a non-abelian rank-$2$ subgroup of $\Gamma$; set $\tM:=\HH^3/\tGamma$.
Since $x$ and $y$ are non-commuting elements of $\tGamma$, it follows from (\ref{omg}) that $\lambda$ is not a Margulis number for $\tM$, i.e. that $\mu(\tM)<\lambda$. Hence by Theorem \ref{what, no soap?} we have $\vol \tM< \lambda\cdot (8N(\lambda)-2)$. (In particular, $\tM$ has finite volume.) On the other hand, it is shown in \cite{milley} that the Weeks manifold has minimal volume among all orientable hyperbolic $3$-manifolds. Hence $\vol M\ge V_0$. We therefore have
$$[\Gamma:\tGamma]=\frac{\vol\tM}{\vol M}<
\frac{\lambda\cdot (8N(\lambda)-2)}
{V_0}.$$
It follows that $\pi_1(M)\cong\Gamma$ has a rank-$2$ subgroup of index at most
$\lambda\cdot (8N(\lambda)-2)/
{V_0}$.
\EndProof
\Corollary\label{talk like a putz day}
Let $\lambda$ be a positive real number strictly less than $(\log3)/2$, and let $N(\lambda)$ be defined by \ref{weird enough for ya?}. Then for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$, we have
$$\rank\pi_1(M)\le2+\log_2(\lambda\cdot (8N(\lambda)-2)/V_0).$$
\EndCorollary
\Proof
Let $M$ be an orientable hyperbolic $3$-manifold with
$\mu(M)<\lambda$. According to Corollary \ref{I can't spell conniptualization}, $\pi_1(M)$ has a rank-$2$ subgroup $X$ of index at most
$\lambda\cdot (8N(\lambda)-2)/V_0$. According to Proposition \ref{you peeked}, we have
$$\rank\pi_1(M)\le \rank X+\log_2[\pi_1(M):X]\le2+
\log_2\bigg(\frac{\lambda\cdot (8N(\lambda)-2)}{V_0}\bigg).$$
\EndProof
I will conclude with three corollaries which follow immediately upon combining the earlier results of this section with the estimate for $N(\lambda)$ given by Proposition \ref{nestimate} and Meyerhoff's lower bound \cite{meyerhoff} of $0.104$ for $\mu_+(3)$. The first, Corollary \ref{boingo cuckoo}, was stated in the introduction as Theorem B, and the other two were presented as corollaries to Theorem B (although I will derive them formally from Corollaries \ref{I can't spell conniptualization} and \ref{talk like a putz day} above).
\Corollary\label{boingo cuckoo}
Let $\lambda$ be a positive real number strictly less than $(\log3)/2$. Then for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$ we have
$$\vol M< \lambda\bigg(6+\frac{880}{\log3-2\lambda}\log{1\over\log3-2\lambda}\bigg).$$
\EndCorollary
\Proof
Since $\lambda>\mu(M)$, we have in particular that $\lambda>\mu_+(3)>0.1$.
The assertion now follows from Theorem \ref{what, no soap?} and Proposition \ref{nestimate}.
\EndProof
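The bound of Corollary \ref{boingo cuckoo} is fully explicit and easy to evaluate numerically. A minimal sketch (assuming, as elsewhere in the paper, that $\log$ denotes the natural logarithm; the sample value $\lambda=0.15$ is arbitrary):

```python
import math

def volume_bound(lam):
    """Evaluate lambda * (6 + (880/(log 3 - 2 lambda)) * log(1/(log 3 - 2 lambda))),
    the volume bound of the corollary, for mu_+(3) < lam < (log 3)/2."""
    d = math.log(3) - 2 * lam
    return lam * (6 + (880 / d) * math.log(1 / d))

# Sample evaluation; the bound blows up as lam approaches (log 3)/2.
print(volume_bound(0.15))
```

As expected, the bound is increasing in $\lambda$ and diverges as $\lambda\to(\log3)/2$.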
\Corollary\label{more cuckoo}
Let $\lambda$ be a positive real number strictly less than $(\log3)/2$. Then
for every orientable hyperbolic $3$-manifold $M$ with $\mu(M)<\lambda$, the group $\pi_1(M)$ has a rank-$2$ subgroup of index at most
$$\frac{\lambda}{V_0}\bigg(6+\frac{880}{\log3-2\lambda}\log{1\over\log3-2\lambda}\bigg).$$
\EndCorollary
\Proof
Since $\lambda>\mu(M)$, we have in particular that $\lambda>\mu_+(3)>0.1$.
The assertion now follows from Corollary \ref{I can't spell conniptualization} and Proposition \ref{nestimate}.
\EndProof
\Corollary\label{mostly moxie}
Let $\lambda$ be a positive real number strictly less than $(\log3)/2$. Then for every orientable hyperbolic $3$-manifold $M$ with
$\mu(M)<\lambda$, we have
$$\rank\pi_1(M)\le2+\log_2\bigg(\frac{\lambda}{V_0}\bigg(6+\frac{880}{\log3-2\lambda}\log{1\over\log3-2\lambda}\bigg)\bigg).$$
\EndCorollary
\Proof
Since $\lambda>\mu(M)$, we have in particular that $\lambda>\mu_+(3)>0.1$.
The assertion now follows from Corollary \ref{talk like a putz day} and Proposition \ref{nestimate}.
\EndProof
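As a numerical illustration of the last two corollaries, the index and rank bounds can be evaluated directly. A minimal sketch (assuming the natural logarithm and the approximate value $V_0\approx0.9427$ for the volume of the Weeks manifold; $\lambda=0.15$ is an arbitrary sample value):

```python
import math

V0 = 0.9427  # approximate volume of the Weeks manifold

def index_bound(lam):
    """Corollary `more cuckoo': bound on the index of a rank-2 subgroup."""
    d = math.log(3) - 2 * lam
    return (lam / V0) * (6 + (880 / d) * math.log(1 / d))

def rank_bound(lam):
    """Corollary `mostly moxie': bound on the rank of pi_1(M)."""
    return 2 + math.log2(index_bound(lam))

print(index_bound(0.15), rank_bound(0.15))
```

For $\lambda=0.15$ this gives an index bound of about $40$ and a rank bound of about $7$.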
\bibliographystyle{plain}
\section{Introduction} \label{sec:Intro}
Advances in communication, networking and computing technologies offer
the opportunity to design dynamic, demand responsive and coordinated
transportation systems. Such a paradigm could potentially overcome the
challenging trade-off between cost of the service and the coverage of
the service faced by the traditional transportation systems. This
paper explores the idea of a coordinated, dynamic and demand
responsive feeder service for first and last mode connectivity in a
multi-modal transportation service. In particular, we consider a
macroscopic \emph{one-shot feeding} problem, in which the first and
last mode services have a single hard time window and a common
destination/origin respectively. This problem occurs in many scenarios
such as peak-hour single destination para-transit~\cite{RL-RM:2000},
freight transportation~\cite{MSS-etal:2014}, express courier
systems~\cite{CB-etal:2002}, evacuation in preparation of a natural
calamity~\cite{VC-RB-AB:2012} and management of events with a large
footfall.
\subsubsection*{Literature Review}
In the context of routing of transportation services, the vehicle
routing problem (V.R.P.) \cite{PT-DV:2002-book,
FF:2013-book,MWU:2017-book, HM:1989} assumes a depot from where one
or more vehicles are routed via locations where one or more types of
entities are picked up at their origin(s) and dropped off at their
destination(s). V.R.P. has many variations like capacitated
\cite{FF:2013-book}, multiple origin and destination
\cite{PT-DV:2002-book}, multiple time window \cite{MWU:2017-book,
PT-DV:2002-book} or simultaneous pick-up and delivery \cite{HM:1989}
etc. Sharing some similarities with V.R.P. are the ride sharing
problem~\cite{NA-AE-MS-XW:2012, JA-etal:2017, FYV-etal:2018,
CCT-CYC:2007} and dial-a-ride problem~\cite{JFC-GL:2007,
SCH-etal:2018, BL-DK-TVW-HAR:2016}. In these problems, the aim is to
match vehicles with the passengers while maximizing the operator's
profits in some time windows. Recently, there has also been growing interest in
routing problems with awareness of the demand and the fleet. For
example, ~\cite{KTS-NHD-DHL:2010, FM-etal:2016} optimally dispatch
taxis based on the location of the taxis and customer requests.
While many routing problems deal with discrete vehicles and discrete
entities to be transported, macroscopic models that deal with flows of
vehicles and volumes of demand and supply are also common. Though less
realistic than discrete models, they allow for computationally easier
solutions and greater scope for analysis and higher order planning. In
the context of demand anticipative mobility,
\cite{FR:2017,MS-etal:2018, MP-SLS-EF-DR:2012,RZ-MP:2016,
AV-WG-SS:2017} aim to match demand and supply by routing autonomous
vehicle flows and maximize throughput in the network through a
steady-state design of the load-balancing and routing flow rates. Much
of the literature on fleet routing was developed in the context of
uni-modal or single-hop transportation services. Though multi-modal
transportation has been extensively studied, dynamic, demand- and
supply-aware first and last mode service has not been sufficiently
studied~\cite{SS-NC:2016}.
\subsubsection*{Contributions}
In this paper, we consider the first-mode or \emph{feed-in} problem
and the last-mode or \emph{feed-out} problem, wherein all the demand
has a common destination and origin, respectively. In particular, we
take a macroscopic approach and pose a network flow problem. Given
feeder vehicle supply and customer demand volumes at the nodes of a
network, we pose the problem of maximizing the operator's profits by
pricing and coordinated routing of the feeder vehicles for
transporting the demand to the destination(s) in a fixed time window.
Our first contribution is a macroscopic model for the feed-in problem
of maximizing the operator's profits. In the setup, after setting the
prices, the problem reduces to a linear program. We obtain the optimal
prices based on the notion of value of time (V.o.T.) and the idea that
the perceived cost of the feeder service cannot be greater than the
perceived cost for the best alternate transportation. Our second
contribution is an offline (demand and supply independent) method that
reduces the computational complexity of the feed-in problem by
eliminating routes and the corresponding decision variables that would
never be used in an optimal solution. Our third contribution is the
supply optimization problem: finding the optimal supply distribution
for a given demand distribution and total supply volume. This lets us
compute the maximum profits
that the operator can earn for a given total supply volume. We also
provide the closed-form expression of the absolute maximum profits
that the operator can earn over all supply distributions for a given
demand distribution. Such a study is useful for planning and viability
studies of a first and last mode feeder service. The fourth
contribution of the paper is the establishment of an equivalence
between the optimal supply location problem and the last-mode or
feed-out problem. This equivalence enables us to utilize directly all
the analytical results and methods we developed for the feed-in and
supply optimization problems. The fifth contribution is a simple model
for obtaining necessary conditions on the best alternate
transportation for the viability of the proposed feeder
service. Finally, we illustrate our results and analysis through
several simulations. In the preliminary version~\cite{SG-PT:2019-ecc}
of this paper, we considered only the feed-in problem and supply
location problem. Here, we additionally provide results for the
feed-out problem and introduce a model for carrying out the analysis
on the viability of the feeder service. Moreover, here we also provide
all the proofs of the results and include additional simulation
results.
\subsubsection*{Organization}
The rest of the paper is organized as follows. We set up the
\emph{feed-in} problem in Section~\ref{sec:prob-setup} and in
Section~\ref{sec:opt-sol} we describe the properties of its optimal
solutions and the \emph{off-line route-set reduction} method. In
Section~\ref{sec:max-profits}, we discuss the supply location problem
and analyze maximum possible profits with a given total supply for a
given demand distribution. In Section~\ref{sec:feed-out} we propose
the \emph{feed-out} problem. We present a model of best alternate
transportation parameters utilised for pricing in
Section~\ref{sec:best-transport}, followed by simulation results in
Section~\ref{sec:results} and conclusions. We present the proofs of
all but the main theorems in appendices.
\subsubsection*{Notation}
We use ${\mathbb{Z}}$ and ${\mathbb{N}}$ for the set of integers and natural
numbers, respectively. We use $\intrangecc{a}{b}$ and
$\intrangeoc{a}{b}$ to denote $[a,b] \ensuremath{\operatorname{\cap}} {\mathbb{Z}}$ and
$(a,b] \ensuremath{\operatorname{\cap}} {\mathbb{Z}}$, respectively.
\section{Problem Setup} \label{sec:prob-setup}
In this section, we set up the coordinated feed-in problem using a
macroscopic formulation. We first describe the network or the graph
variables, then the decision variables and the constraints. Finally,
we summarize the full optimization problem, discuss specific
challenges and aspects of it that we seek to resolve and analyze.
\subsection{Graph Model and Routes} \label{subsec:graph-mod}
In this paper, we model the region for which the feeding service has
to be designed as a graph $G := (V,E)$, with $V$ and $E$ being the
set of nodes and edges in the graph, respectively. Each node $l\in V$
may represent an area or a locality. Each edge $(l,k)\in E$ represents
an abstract macroscopic link from node $l$ to node $k$. We assume that
at each node $l$ there is a certain demand volume, $d_l$, and a supply
volume of feeder vehicles, $S_l$. All the demand is for the
\emph{interchange} node, $I \in V$, where we want all flows from all
nodes in the graph $G$ to converge. To each edge $(l,k)\in E$, we
associate a per-unit flow traversal cost $\rho_{lk}$ and an average
commute time $t_{lk}$ between the nodes. Each of these parameters is
dependent on the background traffic flow, which we assume the operator
knows beforehand. The service-provider needs to cater to the demand
for interchange $I$ within a fixed \emph{destination-time} $T$, thus
making it a \emph{one-shot} problem.
\begin{remark}\longthmtitle{Supply \& demand distribution and graph
parameters}
We assume that the demand distribution $\{d_l\}$ and graph
parameters ($\rho_{lk},t_{lk}$ ) are known and fixed. The latter
assumption is justified if the change in the ambient traffic is
gradual compared to the destination time $T$. We also assume
initially that the supply distribution $\{S_l\}$ is given and can
not be optimized. Finally, we let the demand at node $I$ to be zero,
though $S_I\geq 0$, in general. \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol
\end{remark}
Next, we define a route $r := (V_r, E_r)$ as a \emph{walk} in
$G$. For a route $r$, $V_r$ is the sequence of nodes along the route,
with $V_r(j)$ being the $j^{\text{th}}$ node on the route $r$, whereas
$E_r(j)$ is the $j^{\text{th}}$ edge on the route $r$. Therefore,
\begin{align}
&\map{ V_r }{ \intrangecc{1}{n_r} }{ V }, \quad ( V_r(j), V_r(j+1)
) \in E , \label{eq:nodeset}
\\ & \map{ E_r }{ \intrangecc{1}{n_r-1} }{E }, \quad E_r(j) =
(V_r(j),V_r(j+1)), \label{eq:edgeset}
\end{align}
where $n_r$ is the number of nodes (possibly repeated) on the route
$r$. We label the \emph{origin} node of the route $r$ as $o_r\in V$,
that is $o_r = V_r(1)$. We are interested in routes with
\emph{destination} $D_r \in V$ as the interchange node, that is
$D_r:= V_r(n_r) = I$ and with traversal time less than $T$. We call
such routes \emph{feasible routes} and define the \emph{feasible
route set} as
\begin{align}\label{eq:rset}
\mathbf{R} := \left\{ r\ | \ t_r \leq T, \ D_r = I \right\} ,
\end{align}
where $\displaystyle t_r := \sum_{(l,k) \in E_r}t_{lk}$ is the
traversal time of the route $r$. Note that there exists a feasible
route that passes through a node $l$ if and only if there is a
feasible route $r$ with $o_r = l$. In general, there may be nodes
through which no feasible route passes; such nodes can be removed
from $G$ without loss of generality.
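On a small instance, the feasible route set $\mathbf{R}$ can be enumerated by a depth-first search that extends walks while the accumulated traversal time stays within $T$. A minimal illustrative sketch (the function name, graph, and edge times are hypothetical; the paper simply treats $\mathbf{R}$ as given):

```python
def feasible_routes(edges, t, I, T):
    """Enumerate all walks ending at the interchange I with traversal
    time at most T.  `edges` maps each node to its out-neighbors and
    `t` maps each edge (l, k) to its commute time t_lk."""
    routes = []

    def extend(walk, elapsed):
        node = walk[-1]
        if node == I and len(walk) > 1:
            routes.append(tuple(walk))   # record, but keep extending:
        for k in edges.get(node, []):    # routes may revisit I (legs)
            dt = t[(node, k)]
            if elapsed + dt <= T:
                extend(walk + [k], elapsed + dt)

    for origin in edges:
        extend([origin], 0.0)
    return routes

# Illustrative 3-node network with interchange 'I'.
edges = {'a': ['b', 'I'], 'b': ['I'], 'I': ['a']}
t = {('a', 'b'): 2.0, ('a', 'I'): 3.0, ('b', 'I'): 1.0, ('I', 'a'): 3.0}
routes = feasible_routes(edges, t, I='I', T=6.0)
```

Note that routes with secondary legs (such as walks through $I$ and back) appear automatically once $T$ is large enough.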
For large enough $T$, it is possible that some routes make multiple
visits to $I$. We call each trip to the interchange on a route a
\emph{leg of the route}. We denote the $i^\text{th}$ leg of route $r$ by
$r^i = (V_r^i, E_r^i)$ with $V_r^i$ and $E_r^i$ defined in the same
manner as $V_r$ and $E_r$ respectively. We refer to the first leg of a
route as its \emph{primary leg} and all subsequent legs as its
\emph{secondary legs}. Thus each route has a primary leg but may not
have a secondary leg. We identify the number of legs in a route $r$ by
$\theta_r \in {\mathbb{N}}$. If $\theta_r = 1$ then we say $r$ is a
\emph{simple route}. We define a \emph{cycle} in a leg of a route as
any sub-sequence of nodes in $V_r^i$ which starts and ends at the same
node before reaching $I$.
Considering routes with multiple legs is particularly useful when the
supply is not capable of meeting the demand in one go, in which case
feeders can drop-off passengers at the interchange $I$ and return to
serve more demand to $I$. Let $c_r^i > 0$ denote the per-unit
traversal cost on the $i^{\text{th}}$ leg of $r$. Then, the per-unit
traversal cost, $c_r > 0$, on a route $r$ is
\begin{align*}
c_r &:= \sum_{(l,k) \in E_r} \rho_{lk} = \sum_{i = 1}^{\theta_r}
\sum_{(l,k) \in E_r^i} \rho_{lk} =: \sum_{i = 1}^{\theta_r}
c_r^i .
\end{align*}
\subsection{Decision Variables and Constraints}
\label{subsec:constr}
Next, we introduce the decision variables and the constraints of the
problem. For each route $r \in \mathbf{R}$ we define \emph{feeder
volume}, $f_r$, as the volume of feeders which takes the route
$r$. Note that the route set $\mathbf{R}$ is an exhaustive set of all
feasible routes. Therefore, if a route $r$ makes multiple visits to
the interchange $I$, then there is a separate route in $\mathbf{R}$ for
each permutation of the secondary legs of route $r$. Thus, we assume
that the feeders $f_r$ traverse the full route $r$. We call $(r,i,l)$
a \emph{service tuple}, which identifies a passenger pick-up on node
$l$ on the $i^\text{th}$ leg of route $r$. Let \emph{allocation on a node}
$f_r^i(l)$ represent the volume of demand the operator intends to pick up
through service $(r,i,l)$. Then, the \emph{total allocation} on a node
$l$, $F_l$, is
\begin{equation}
\label{eq:Fldefn}
F_l := \sum_{r \vert l \in V_r} \sum_{i \vert l \in V_r^i} f_r^i(l) .
\end{equation}
Ideally, the total allocation at a node should be the demand that is
picked-up. However, if $F_l> d_l$ then in such a case the maximum
demand serviced can at-most be $d_l$. To identify such situations we
define $\tilde{F}_l$ as the \emph{total service} on a node $l$, with
$\tilde{f}_r^i(l)$ as the service offered on $(r,i,l)$. Then the service and
allocations are related as follows
\begin{subequations}\label{eq:service-constraints}
\begin{align}
&\tilde{f}_r^i(l) \leq f_r^i(l), \quad \forall\ r, i, l \label{eq:tfril}
\\ &\tilde{F}_l := \sum_{r,i,l}\tilde{f}_r^i(l) = \min \{F_l, d_l
\}, \quad \forall l \in V . \label{eq:fltilde}
\end{align}
\end{subequations}
These service constraints are economic in nature. We also have the
following physical constraints on the allocations and flows
\begin{subequations} \label{eq:constraints}
\begin{align}
&\sum_{l \in V_r^i}f_r^i(l) %
\leq f_r, \quad \forall \ i \in \intrangecc{ 1 }{ \theta_r }, \
\forall r \in \mathbf{R} \label{eq:legconstr}
\\
& \sum_{r \vert o_r = l} f_r %
\leq S_l, \quad \forall \ l \in V . \label{eq:supplyconstr}
\end{align}
\end{subequations}
The constraint~\eqref{eq:legconstr} is the \emph{allocation
constraint}, which ensures that the sum of all allocations in a leg
$i$ on a route $r$ is at-most $f_r$, the feeder volume on that route,
while~\eqref{eq:supplyconstr} is the \emph{supply constraint}, which
ensures that the sum of feeder volumes on all routes originating from
node $l$, is at most the supply $S_l$.
\subsubsection*{Pickup times}
We let $ t_r^i(l)$ be the \emph{pick-up time} for the service tuple
$(r,i,l)$. We assume that the next mode of transportation leaves at time
$T$ from the node $I$. Hence, the time spent on the first mode is
$T - t_r^i(l)$. We assume that $t_r^i(l)$ is the last possible pick-up time
for each service tuple $(r,i,l)$. This implies that the feeders should
leave their origin at the last possible time,
$(T-\sum_{(l,k) \in E_r}t_{lk})$. This is justified below after we
discuss pricing. Thus, we do not consider $t_r^i(l)$ as decision
variables for economizing notation and to ease exposition.
\subsubsection*{Pricing}
The last set of decision variables are the \emph{prices} $p_r^i(l)$ that
a unit volume of passengers pay for service on the tuple $(r,i,l)$. We
assume that the price $p_r^i(l)$ is at most the \emph{maximum viable
  price}, $\bar{p}_r^i(l)$, which is the maximum price that a customer
will pay for the service $(r,i,l)$:
\begin{equation}\label{eq:price-constr}
  p_r^i(l) \leq \bar{p}_r^i(l) .
\end{equation}
To model $\bar{p}_r^i(l)$, we utilise two concepts: \emph{value of
  time} (V.o.T.), which associates a monetary cost with travel time,
and \emph{perceived cost}. In particular, we let $\alpha$ be the
monetary value of unit time. Then, the perceived cost is
$M + \alpha \tau$ for a transportation service that takes $\tau$ units
of time and charges a monetary price $M$. For each node $l \in V$, we
let $g_l := \alpha \eta_l + \zeta_l$ be the perceived cost for the
\emph{best alternate transport}, which has a travel time $\eta_l$ and
has a price of $\zeta_l$. For the service $(r,i,l)$ to be viable, the
perceived cost of the feeder service should be less than or equal to
perceived cost for the best alternate transportation at node $l$, that
is
\begin{equation*}
p_r^i(l) + \alpha(T- t_r^i(l)) \leq \zeta_l + \alpha \eta_l=: g_l .
\end{equation*}
Thus, the maximum viable price for the service $(r,i,l)$ is
\begin{equation}
\bar{p}_r^i(l):= \zeta_l + \alpha \eta_l - \alpha(T- t_r^i(l)) =
g_l - \alpha(T- t_r^i(l)) . \label{eq:opt-pric}
\end{equation}
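The maximum viable price is thus a simple affine function of the pick-up time. A minimal sketch of~\eqref{eq:opt-pric} (all parameter values are illustrative):

```python
def max_viable_price(zeta_l, eta_l, alpha, T, t_pickup):
    """Maximum viable price for service (r, i, l):
    p_bar = g_l - alpha * (T - t_pickup), where g_l = zeta_l + alpha * eta_l
    is the perceived cost of the best alternate transport at node l."""
    g_l = zeta_l + alpha * eta_l
    return g_l - alpha * (T - t_pickup)

# A later pick-up (shorter time on the feeder) supports a higher price.
p_early = max_viable_price(zeta_l=5.0, eta_l=1.0, alpha=2.0, T=10.0, t_pickup=8.0)
p_late  = max_viable_price(zeta_l=5.0, eta_l=1.0, alpha=2.0, T=10.0, t_pickup=9.0)
```

This monotonicity in $t_r^i(l)$ is what justifies taking the last possible pick-up time for each service tuple.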
\subsection{Optimization Model}
Next we give a model for the revenues and the cost to the operator and
then we summarize the overall optimization problem from the operator's
point of view. We let the revenue from service $(r,i,l)$ be
$p_r^i(l) \tilde{f}_r^i(l)$, which is the product of the price for and the volume
of demand serviced by the service tuple $(r,i,l)$. The total revenue is
the sum of revenues from all the services $(r,i,l)$. We consider two
different types of costs incurred by the service-provider. First, we
let the travel cost for the volume of vehicles that take the route $r$
be $c_r f_r$, which is the product of travel cost per-unit flow and
the volume of vehicles that go on route $r$. Second, we consider the
\emph{operational costs} (which may include incentives or commissions
to the drivers and maintenance costs). We assume the operational cost
is one unit for every unit of allocation on a node. Thus with $f_r$,
$\tilde{f}_r^i(l)$, $\tilde{F}_l$, $p_r^i(l)$ and $f_r^i(l)$ as decision variables we
let the \emph{feed-in operator profit maximization problem} be
\begin{equation} \label{eq:optmodel}
\begin{aligned}
\max & \ J := \sum_{(r,i,l)} p_r^i(l)\tilde{f}_r^i(l) -\left( \sum_{r \in
\mathbf{R}}f_rc_r + \sum_{l\in V} F_l\right)
\\
\text{s.t.}& \quad \eqref{eq:Fldefn}-\eqref{eq:opt-pric},
\quad \ f_r , \ f_r^i(l)\geq 0 ,
\\
& \forall\ r \in \mathbf{R}, \ \forall\ i \in
\intrangecc{1}{\theta_r}, \ \forall\ l \in V_r .
\end{aligned}
\end{equation}
\begin{remark}[Maximum viable price is the optimal price]
\label{rem:optim-price}
  For any fixed $f_r^i(l)$, the total profit $J$ is a strictly increasing
  function of $p_r^i(l)$. Moreover, as long as
  $p_r^i(l) \leq \bar{p}_r^i(l)$, the price affects no other constraint or
  optimization variable. Therefore, the optimal price is
  $p_r^i(l) = \bar{p}_r^i(l)$.
\relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol
\end{remark}
Setting $p_r^i(l) = \bar{p}_r^i(l)$, the nonlinear optimization
problem~\eqref{eq:optmodel} can be reduced to a linear program.
\subsection{Optimal Allocations and Linear Program
Formulation} \label{subsec:opt-alloc}
From the structure of Problem~\eqref{eq:optmodel} and as a consequence
of Remark~\ref{rem:optim-price}, we show that the allocations $f_r^i(l)$
and passengers served $\tilde{f}_r^i(l)$ are the same, in all optimal
solutions. We present the proof of this result in
Appendix~\ref{prf:equivalentopt}.
\begin{lemma}\longthmtitle{Equivalence of optimal allocations and
optimal volume of passengers served}
\label{lem:equivalentopt}
In the model~\eqref{eq:optmodel}, for any optimal solution the
allocations and passengers served are the same, that is
$\tilde{f}_r^i(l) = f_r^i(l)$ and hence $\tilde{F}_l = F_l \leq d_l$. \qed
\end{lemma}
Given Lemma~\ref{lem:equivalentopt}, we use the terms \emph{allocation
on a node} and \emph{service at a node} interchangeably. Similarly,
we use the terms \emph{total allocation} at a node and \emph{total
service} at a node equivalently. Further, the original nonlinear
optimization problem~\eqref{eq:optmodel} reduces to the following
linear program.
\begin{equation}\label{eq:equivoptmodel}
\begin{aligned}
    & \max_{f_r, f_r^i(l)} \bar{J} := \sum_{(r,i,l)} \beta_r^i(l) f_r^i(l) - \sum_{r
      \in \mathbf{R}}f_rc_r
\\
& \text{s.t. } \eqref{eq:Fldefn},\eqref{eq:constraints}, \ F_l
\leq d_l, \ f_r ,\ f_r^i(l) \geq 0 , \ \forall (r,i,l) ,
\end{aligned}
\end{equation}
where $\beta_r^i(l)$ is the per-unit \emph{operator revenue} for the service
$(r,i,l)$, which we define as
\begin{equation}\label{eq:Nodeprofitability}
\beta_r^i(l) := \bar{p}_r^i(l) - 1 .
\end{equation}
The $-1$ in the above definition reflects the assumption that
operational costs are $1$ unit of money per unit of allocation. Thus,
$\beta_r^i(l)$ is the operator's revenue from a pick-up of unit demand.
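The reduced linear program can be prototyped directly with an off-the-shelf LP solver. The sketch below uses hypothetical data (one simple route $r$ with two pickup nodes $A$ and $B$ on its single leg) and simplifies the constraint structure to the leg constraint, a demand cap per node, and a cap on $f_r$ standing in for the supply constraint at the route's origin.

```python
# A minimal hypothetical instance of the feed-in profit LP.
from scipy.optimize import linprog

beta = {"A": 3.0, "B": 2.0}   # per-unit operator revenues beta_r^1(l)
c_r = 1.0                     # per-unit traversal cost of route r
d = {"A": 5.0, "B": 5.0}      # demand at each node
supply_cap = 10.0             # feeders available at the route's origin

# Decision vector x = [f_r, f_A, f_B]; linprog minimizes, so negate profit.
c = [c_r, -beta["A"], -beta["B"]]
A_ub = [[-1.0, 1.0, 1.0]]     # leg constraint: f_A + f_B <= f_r
b_ub = [0.0]
bounds = [(0, supply_cap), (0, d["A"]), (0, d["B"])]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
profit = -res.fun             # optimal profit J_bar
```

Here the solver fills the route to its supply cap ($f_r^* = 10$) and serves both nodes fully, for a profit of $3\cdot 5 + 2\cdot 5 - 1\cdot 10 = 15$.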
Starting with the formulation~\eqref{eq:equivoptmodel} we solve three
problems in this paper. First we reduce the size of the linear
program, and thereby computational complexity, with an offline
method. Then, we define the \emph{feed-in supply optimization
problem} which extends~\eqref{eq:equivoptmodel} by considering the
supply distribution as an optimization variable. Using this, we
  calculate the maximum profits for a given demand
  distribution. Finally, we propose the \emph{feed-out operator profit
  optimization problem} and analyse its properties along the lines of
  the above problems. Additionally, we present a simple model for
  generating the perceived costs $g_l$ for the best alternate
  transportation.
\section{Properties of Optimal Solutions and Off-line Route
  Elimination} \label{sec:opt-sol}
In this section, we discuss some properties of the optimal solutions
of the problem~\eqref{eq:equivoptmodel}. With these properties we
reduce the size of the problem by eliminating routes in the feasible
route set, $\mathbf{R}$, that would never be used in an optimal
solution irrespective of the demand and supply distributions.
\subsection{Properties of Optimal Solutions} \label{subsec:prop-opt-sol}
We start by describing the cases where the constraint
\eqref{eq:legconstr} must be active. The following result states that
the feeders on a route are allocated fully on each secondary leg. The proof is stated in Appendix~\ref{prf:legs}.
\begin{lemma}\label{lem:legs}\longthmtitle{No redundant feeders in
optimal solutions} In every optimal solution, the total allocation
on a secondary leg of a route $r$ is equal to the feeder volume on
that route, $f_r$. That is, in any optimal solution,
\begin{align}\label{eq:alloc-prop}
\sum_{l \in V_r^i} f_r^i(l) = f_r, \quad \forall i \in
\intrangecc{2}{\theta_r}, \quad \forall r \in \mathbf{R} .
\end{align}
\end{lemma}
Next, we present necessary conditions for a route to have non-zero
allocations in an optimal solution.
\begin{proposition}\longthmtitle{Necessary conditions for a route to
be used in an optimal solution}
\label{prop:necc-cond-opt-sol}
In an optimal solution of the feed-in problem
\eqref{eq:equivoptmodel},
if $f_r^* > 0$ for $r \in \mathbf{R}$ then the following necessarily
hold.
\begin{enumerate}[label=(\alph*),
ref=\ref{prop:necc-cond-opt-sol}(\alph*)]
\item\label{itm:necc-a} $f_r^i(l)^* > 0$ for some
$i \in \intrangecc{1}{\theta_r}$ and $l \in V_r^i$. Further, for
any $(r,i,l)$, if $f_r^i(l)^*>0$ then $\beta_r^i(l) \geq 0$.
\item\label{itm:necc-b} The route $r$ as a whole does not make a
loss, that is,
\begin{align*}
\sum_{i = 1}^{\theta_r} \sum_{l \in V_r^i} f_r^i(l)^* \beta_r^i(l) \geq f_r^*
c_r .
\end{align*}
\item\label{itm:necc-c} For each $i \in \intrangecc{2}{\theta_r}$,
there must exist an $l \in V_r^i$ such that $f_r^i(l)^* > 0$ and
$\beta_r^i(l) \geq c_r^i$.
\item\label{itm:necc-d} If $r$ is a simple route ($\theta_r = 1$)
    then there must exist at least one $l \in V_r$ such that
$f_r^1(l)^*>0$ and $\displaystyle \beta_r^1(l) \geq c_r$. \qed
\end{enumerate}
\end{proposition}
Proposition~\ref{prop:necc-cond-opt-sol}, proven in
Appendix~\ref{prf:necc-cond-opt-sol}, states that irrespective of the
supply and demand distributions, if a route is used in an optimal
solution then the following must hold for that route.
\begin{itemize}
\item There is a positive allocation on at least one node, and every
  node with a positive allocation returns a non-negative operator
  revenue.
\item The route, as a whole, does not make a loss.
\item No secondary leg of the route makes a loss; that is, each
  secondary leg has a pick-up whose operator revenue is no less than
  the per-unit traversal cost of that leg.
\item Every simple route used has at least one pick-up whose operator
  revenue covers the per-unit route cost $c_r$.
\end{itemize}
Using Proposition~\ref{prop:necc-cond-opt-sol}, we formulate an
off-line route reduction method in the next
subsection.
\subsection{Offline Route Elimination} \label{subsec:route-elim}
This subsection presents the \emph{reduced route set} for the feed-in
problem \eqref{eq:equivoptmodel}. This set is formed by pruning out
routes and the corresponding optimization variables that would have a
zero allocation in every optimal solution under every possible supply
and demand distributions. We obtain this by application of the
individual properties in Lemma~\ref{lem:legs} and
Proposition~\ref{prop:necc-cond-opt-sol}, after eliminating the
dependence on $f_r$ and $f_r^i(l)$. We first define the reduced route
set $\mathbf{\bar{R}}$, then show that any route with $f_r^*>0$ in every optimal
solution to the feed-in problem~\eqref{eq:equivoptmodel} belongs to
$\mathbf{\bar{R}}$.
\begin{align}
&w_r^i := \max_{l\in V_r^i}\{ \max\{ \beta_r^i(l), 0 \} \} - c_r^i, \quad
\mathbf{\bar{R}} := \mathbf{R}_1 \cup \mathbf{R}_2 \label{eq:Rset}
\\
&\mathbf{R}_1 := \left\{ r \in \mathbf{R} \ \vert \ \theta_r = 1, w_r^1
\geq 0\right\}\label{eq:R1}
\\
&\mathbf{R}_2 := \{ r \in \mathbf{R} \ \vert \ \theta_r >
1, \sum_{i = 1}^{\theta_r} w_r^i \geq 0, \
w_r^i \geq 0, \forall i > 1 \label{eq:R2}
\}
\end{align}
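The membership test defined by~\eqref{eq:Rset}--\eqref{eq:R2} is purely local route data, so it can be checked offline in a few lines. The sketch below uses our own hypothetical encoding: `beta[i]` lists the per-unit revenues $\beta_r^i(l)$ over the nodes of leg $i$, and `c[i]` is the leg cost $c_r^i$.

```python
# Offline route-elimination test: compute w_r^i per leg and check
# membership in R1 (simple routes) or R2 (multi-leg routes).

def leg_margin(betas, leg_cost):
    """w_r^i = max_l max(beta_r^i(l), 0) - c_r^i."""
    return max(max(b, 0.0) for b in betas) - leg_cost

def in_reduced_set(beta, c):
    theta = len(c)  # number of legs theta_r
    w = [leg_margin(beta[i], c[i]) for i in range(theta)]
    if theta == 1:  # R1: simple route must have w_r^1 >= 0
        return w[0] >= 0
    # R2: every secondary leg profitable and the route breaks even overall
    return all(wi >= 0 for wi in w[1:]) and sum(w) >= 0

# Two-leg route: secondary leg profitable, route breaks even -> kept
kept = in_reduced_set(beta=[[1.0, -2.0], [4.0]], c=[2.0, 3.0])
# Simple route whose best node cannot cover the route cost -> pruned
pruned = in_reduced_set(beta=[[1.5]], c=[2.0])
```

For the first route, $w_r^1 = 1 - 2 = -1$ and $w_r^2 = 4 - 3 = 1$, so the secondary-leg and break-even conditions of $\mathbf{R}_2$ both hold.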
\begin{theorem}\label{thm:algo-1}\longthmtitle{Optimal solutions to the feed-in problem
use only the routes from the reduced route set} \label{th:routeprune}
For the optimization problem~\eqref{eq:equivoptmodel}, every optimal
solution for every demand and supply distribution is guaranteed to
have $f_r^* = 0$ and consequently $f_r^i(l)^* = 0$ over all legs $i$ of
$r$, $\forall \ r \notin \mathbf{\bar{R}}$ .
\end{theorem}
\begin{proof}
  We prove this result by contradiction: suppose there exists an
  optimal solution such that $f_r^* > 0$ for some $r \notin \mathbf{\bar{R}}$. If
$\theta_r =1$, then it satisfies Proposition~\ref{itm:necc-d} and as
a consequence $\exists l \in V_r^1$ s.t.
$\beta_r^1(l) \geq c_r = c_r^1$, which implies
$w_r^1\geq0$. Therefore $r\in \mathbf{R}_1$. Now, if $\theta_r>1$,
then $r$ satisfies Proposition~\ref{itm:necc-c}. Hence
\begin{equation*}
w_r^i = \max_{l\in V_r^i}\{ \max\{ \beta_r^i(l), 0 \} \} - c_r^i \geq 0,
\quad \forall i>1 .
\end{equation*}
Further, $r$ must also satisfy Propositions~\ref{itm:necc-a}
and~\ref{itm:necc-b}, that is,
  \begin{align*}
    f_r^*c_r %
    &\leq \sum_{i = 1}^{\theta_r} \sum_{l \in V_r^i} f_r^i(l)^* \beta_r^i(l)
      \leq \sum_{i = 1}^{\theta_r} \sum_{l \in V_r^i} \max\{ \beta_r^i(l), 0
      \} f_r^i(l)^*
    \\ & \leq \sum_{i = 1}^{\theta_r} \max_{l \in V_r^i} \{\max
         \{\beta_r^i(l), 0 \}\}f_r^* ,
  \end{align*}
where we have used the fact that $f_r^i(l)^* > 0$ only if
$\beta_r^i(l) \geq 0$ for the second inequality and the third inequality
follows from~\eqref{eq:legconstr}. Hence, $r$ must satisfy
$\sum_{i = 1}^{\theta_r} w_r^i\geq 0$ which implies
$r\in \mathbf{R}_2$. Therefore, in either case,
$r \in \mathbf{R}_1\cup\mathbf{R}_2 = \mathbf{\bar{R}}$. This is a
contradiction.
\end{proof}
In the proof we do not explicitly check Proposition~\ref{itm:necc-a}
for the route, but one can verify that for all $r\in \mathbf{\bar{R}}$ there
exists a service with $\beta_r^i(l) \geq 0$ for which $f_r^i(l)^*>0$ is
possible under some supply and demand distribution. Thus, replacing
$\mathbf{R}$ with $\mathbf{\bar{R}}$ in Problem~\eqref{eq:equivoptmodel} causes no
\emph{approximation} or \emph{loss} of any optimal solutions.
\begin{remark} \longthmtitle{Reduced route set becomes constant as
  the destination time $T$ increases} There exists a time $T^*$
  such that for all $T \geq T^*$, the set $\mathbf{\bar{R}}$ is the same. This is
  because, even though the set $\mathbf{R}$ includes more and more
  routes as $T$ increases, the time from pick-up at a node to drop-off
  at the interchange $I$ cannot exceed a certain value if
  $\beta_r^i(l) \geq 0$ is to be maintained. Furthermore, even for
  pick-ups with higher pick-up times, higher costs would render them
  unprofitable. Thus $\mathbf{\bar{R}}$ does not contain such routes. This is
  particularly useful because the set $\mathbf{R}$ and
  problem~\eqref{eq:equivoptmodel} keep growing with $T$, whereas the
  size of~\eqref{eq:equivoptmodel} with $\mathbf{\bar{R}}$ instead of
  $\mathbf{R}$ does not grow for $T \geq T^*$. \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol
\end{remark}
\section{Feed-in Supply Optimization Problem}
\label{sec:max-profits}
Our focus so far has been to maximize the
operator's profits for the feed-in problem, given a supply and demand
distribution. However, an operator may also be interested in
``aligning'' the supply with the demand distribution so as to
increase profits. Thus, we next focus on the problem of optimizing the
supply distribution to maximize profits, given the demand distribution
and the total available supply $s$, where we assume that
$\{S_l\}$ are also optimization variables. Then, we let the
\emph{feed-in supply optimization problem} be
\begin{align}
  & \max_{S_l, f_r, f_r^i(l)} \bar{J} := \sum_{(r,i,l)} \beta_r^i(l) f_r^i(l) - \sum_{r \in
    \mathbf{R}}f_rc_r \notag
\\
& \text{s.t. } \eqref{eq:Fldefn}, \eqref{eq:constraints}, \ F_l \leq
d_l, \ \sum_{l \in V} S_l \leq s, \notag \
f_r ,\ f_r^i(l), \ S_l \geq 0
\\ &\quad \quad \quad \forall\ r \in \mathbf{R}, \ \ \forall\ i \in
\intrangecc{1}{\theta_r}, \ \forall\ l \in V_r
. \label{eq:supply-opt}
\end{align}
For analysing the maximum profit as a function of the total supply
$s$, we make the following assumptions.
\begin{enumerate}[label=\textbf{(A\arabic*)},
ref=\textbf{(A\arabic*)}]
\item \label{A:1} The demand distribution $\{ d_l \}_{l \in V}$ is
fixed. Also, $d_l > 0$ for each node $l \neq I$ and $d_I = 0$.
\item \label{A:2} For each node $l$ in the graph
$\exists\ r \in \mathbf{R}$ s.t. $o_r =l$, $\theta_r =1$ and
$\beta_r^1(l) - c_r^1 \geq 0$.
\end{enumerate}
\subsubsection{General Properties for Feed-in Supply
Optimization} \label{subsec:supply-optim-prop}
In the following proposition, we analyse the properties of the optimal
solutions of \emph{feed-in supply optimization problem}
\eqref{eq:supply-opt} and show that there is no loss of generality in
the Assumptions~\ref{A:1} and~\ref{A:2}. We present the proof in
Appendix~\ref{prf:supply-opt}.
\begin{proposition}\longthmtitle{Properties of optimal supply
distributions and allocations}
\label{prop:supply-opt}
In every optimal solution to the problem~\eqref{eq:supply-opt}, the
following hold:
\begin{enumerate}[label=(\alph*), ref=\ref{prop:supply-opt}(\alph*)]
  \item \label{itm:sup0} If $f_r^* >0$ then $r\in \mathbf{\bar{R}}$.
\item \label{itm:sup1} For each $r \in \mathbf{\bar{R}}$,
$f_r^1(o_r)^* = f_r^*$. Consequently,
\begin{align}\label{eq:no-redund-flow}
\sum_{l \in V_r^i} f_r^i(l)^* = f_r^*, \quad \forall i
\in \intrangecc{1}{\theta_r}, \quad \forall r \in \mathbf{\bar{R}} ,
\end{align}
and no route originating from $I$ is used.
\item \label{itm:sup2} For a route $r\in \mathbf{\bar{R}}$ with $f_r^*>0$,
$\beta_r^1(o_r)\geq c_r^1$. Consequently,
$\forall i \in \intrangecc{1}{\theta_r}$, $\exists\ l \in V_r^i$
such that $f_r^i(l)^*>0 $. Moreover, for $(r,i,l)$, $f_r^i(l)^* > 0$ only
if $\beta_r^i(l) \geq c_r^i$.
\item \label{itm:sup3} If a node $l$ does not satisfy the property
in~\ref{A:2} then $f_r^i(l)^* = 0$ for all $(r,i,l)$ that serve the
node $l$.
\item \label{itm:sup4} A route $r$ with a cycle in the first leg is
not used, that is $f_r^* = 0$.
\item \label{itm:sup5} If $\displaystyle s\leq \sum_{l \in V} d_l$ and \ref{A:2} holds then $\displaystyle \sum_{r \vert o_r = l} f_r^* = S_l \leq d_l, \
\forall l\in V$. \qed
\end{enumerate}
\end{proposition}
From Proposition~\ref{prop:supply-opt} we see that there is no loss of
generality in Assumptions~\ref{A:1} and~\ref{A:2}. This is because
if $d_l = 0$ for a node $l$ then, by Proposition~\ref{itm:sup1}, all
routes $r$ originating at $l$ have zero flow ($f_r = 0$) in every
optimal solution. Similarly, Proposition~\ref{itm:sup3} says that in
every optimal solution there is no allocation on nodes that violate
Assumption~\ref{A:2}.
With Proposition~\ref{itm:sup3} one
can eliminate the nodes that do not satisfy Assumption~\ref{A:2}. Also,
as a consequence of Propositions~\ref{itm:sup1} and~\ref{itm:sup2},
one can eliminate the route flow variables $f_r$ and remove routes without a
\emph{profitable} pick-up at their origins. With
Proposition~\ref{itm:sup4} we can eliminate every route with a cycle
in the first leg. Thus we construct the \emph{reduced route set} for~\eqref{eq:supply-opt}, $\mathbf{R}^-$, as
\begin{equation}\label{eq:reduced-fiso-rset}
\mathbf{R}^- := \{r \in \mathbf{\bar{R}}\vert o_r \neq I, \beta_r^1(o_r)\geq
c_r^1, \text{ no cycles in } r^1 \} .
\end{equation}
Further, using Proposition~\ref{prop:supply-opt}
and~\eqref{eq:no-redund-flow}, we can solve~\eqref{eq:supply-opt} with
strict equality in the constraints of~\eqref{eq:legconstr}
and~\eqref{eq:supplyconstr}. Thus, we can reduce~\eqref{eq:supply-opt}
to an optimization problem over decision variables $f_r^i(l)$, the
allocations, and $S_l$, the supply at a node. This elimination of the
variables $f_r$ leads to a significant reduction in the number of
optimization variables, specifically equal to the number of routes in
$\mathbf{R}^-$. As a result, we can express the supply optimization problem
as
\begin{align}\label{eq:max-profits}
&\max_{S_l,\ f_r^i(l) } \bar{J} = \sum_{(r,i,l)\vert r \in \mathbf{R}^-} (\beta_r^i(l)-c_r^i)f_r^i(l)
\\
& \text{s.t. } \eqref{eq:Fldefn}, \ F_l \leq d_l, \ \sum_{l \in
V_r^i}f_r^i(l) = f_r, \ \sum_{r \vert o_r = l} f_r = S_l, \ \sum_{l \in V}
S_l \leq s, \notag
\\
&f_r^i(l), \ S_l \geq 0, \ \forall\ r \in \mathbf{R}^-, \ \ \forall\ i \in
\intrangecc{1}{\theta_r}, \ \forall\ l \in V_r^i . \notag
\end{align}
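A minimal numerical sketch of the supply-split decision in~\eqref{eq:max-profits}: assume, hypothetically, two nodes each served by its own single-node simple route, so that $S_l = f_r$ and the leg constraints collapse, leaving only the demand caps and the total-supply budget.

```python
# Hypothetical feed-in supply optimization: split scarce supply s across
# two origins, each with its own simple route and per-unit margin.
from scipy.optimize import linprog

margin = [2.0, 1.0]   # (beta_r^1(o_r) - c_r^1) per route, hypothetical
demand = [5.0, 5.0]   # d_l at each origin
s = 7.0               # total supply, deliberately below total demand

c = [-m for m in margin]              # linprog minimizes, so negate margins
A_ub = [[1.0, 1.0]]                   # sum_l S_l <= s (here S_l = f_r)
b_ub = [s]
bounds = [(0.0, demand[i]) for i in range(2)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
profit = -res.fun
```

With supply scarce ($s = 7 < 10$), the solver saturates the higher-margin node first ($S_1^* = 5$) and sends the remainder to the other ($S_2^* = 2$), for a profit of $2\cdot 5 + 1\cdot 2 = 12$.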
This is a simpler problem to solve for a sequence of values of $s$
than \eqref{eq:supply-opt}. Also, as we show in
Section~\ref{sec:feed-out}, this formulation makes the feed-out
problem computationally simpler.
\subsubsection{Absolute Maximum Profits} \label{subsec:suf-suppl}
With the objective \eqref{eq:max-profits}, we can also analyse the
absolute maximum profits an operator can earn, over all supply
distributions, for a given demand distribution. Quantification of the
absolute maximum profits is useful for determining feasibility or
profitability of the service from the operator's perspective. To
arrive at the value of \emph{absolute maximum profits}, denoted by
$J_{\max}$ from here on, we assume that supply is sufficient, i.e.
$s\geq \sum_l d_l$. We also denote the set of simple routes
originating at $l$ as
\begin{equation}
\label{eq:simple-routes}
\srs{l} := \{ r \in \mathbf{R} \ \vert \ o_r = l, \ \theta_r = 1
\}
\end{equation}
and we denote the set of simple routes with the maximum rate of
profits for a pickup at $l$ by
\begin{equation} \label{eq:best-routes}
\Mc{R}(l) := \ensuremath{\operatorname{argmax}}_{r \in \srs{l} \cap \mathbf{R}^-}
\{\beta_r^1(l)-c_r \} .
\end{equation}
\begin{lemma} \label{lem:Rl-least-perceived-cost} For each node
$l \in V$, $\beta_r^1(l)-c_r > \beta_{\bar{r}}^i(l)-c_{\bar{r}}^i$
for all $r \in \Mc{R}(l)$, $\bar{r} \notin \Mc{R}(l)$ and
$i \in \intrangecc{1}{\theta_{\bar{r}}}$. Further,
$\forall r \in \Mc{R}(l)$ the perceived cost $(\alpha t_r +c_r)$ is
the least from node $l$. \qed
\end{lemma}
The lemma is proved in Appendix \ref{prf:Rl-least-perceived-cost}.
Next, we state the properties of optimal solutions
of~\eqref{eq:max-profits} for sufficient supply.
\begin{theorem}\label{th:suff-supplies}
\longthmtitle{Properties of optimizers under sufficient supply} If
$\displaystyle s\geq \sum_{l \in V} d_l$, then all optimal solutions
of~\eqref{eq:max-profits} satisfy
\begin{enumerate}[label = {(\alph*)},
ref=\ref{th:suff-supplies}(\alph*)]
\item \label{itm:thm6a} If $r \notin \Mc{R}(o_r)$ then $f_r^* =
0$. Further, for each $l \in V\setminus \{I\}$,
$\displaystyle F_l^* = \sum_{r \in \Mc{R}(o_r) } f_r^* = d_l$ and
$S_l^* \geq d_l $.
\item \label{itm:thm6c} The maximum profits over all supply
distributions is
\begin{align}\label{eq:Jmax}
      J_{\max} = \sum_{l\in V} d_l \max_{r \in \Mc{R}(l)}
\{\beta_r^1(l)-c_r^1 \} .
\end{align}
\end{enumerate}
\end{theorem}
\begin{proof}
\textbf{(a):} As $s \geq \sum_l d_l$, consider a solution where,
$\displaystyle \sum_{r \in \Mc{R}(l)}f_r^1(l) = d_l$, $f_r^i(l) = 0$
for all $r \notin \Mc{R}(l)$ for each $l \in V$, and
$\displaystyle S_l = \sum_{r \in \Mc{R}(l)}f_r^1(l)$,
$\forall\ l\neq I$, and $\displaystyle S_I = s - \sum_l d_l$. One
can verify that such a solution is feasible under
Assumption~\ref{A:2}. From Lemma~\ref{lem:Rl-least-perceived-cost},
we know that
$\beta_r^1(l)-c_r > \beta_q^i(l)-c_{q}^i$ for all
$r \in \Mc{R}(l)$ and $q \notin \Mc{R}(l)$. Then the structure of
the objective function~\eqref{eq:max-profits} implies that this
solution is also optimal.
  \textbf{(b):} Given part~(a), we see that the maximum
  profits must satisfy~\eqref{eq:Jmax}, in which the term indexed by
  $l$ corresponds to the profits from node $l$.
\end{proof}
This theorem gives the absolute maximum profits for a given demand
distribution over all supply distributions. The value of $J_{\max}$ is
easily computable from the maximum rates of profit of the simple
routes and the demand distribution.
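The computation of~\eqref{eq:Jmax} is a single weighted sum. A back-of-the-envelope sketch, with hypothetical demands and best simple-route margins:

```python
# Hypothetical check of J_max = sum_l d_l * max_{r in R(l)} (beta - c).
demand = {"A": 10.0, "B": 4.0}        # d_l per node
best_margin = {"A": 2.5, "B": 1.0}    # best simple-route margin per node

J_max = sum(demand[l] * best_margin[l] for l in demand)  # 10*2.5 + 4*1
```

With these numbers the absolute maximum profit is $25 + 4 = 29$, attained by routing $d_l$ units of supply through a route in $\Mc{R}(l)$ at each node.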
\section{One-Shot Feed-Out} \label{sec:feed-out}
In this section, we propose the \emph{one-shot feed-out} problem on
the lines of the \emph{one-shot feed-in} problem. The goal is to
drop-off passengers at different destinations from a single origin
within a single, fixed time window, $\hat{T}$. We assume a fixed
demand distribution $\{\hat{d}_l\}$, where $\hat{d}_l$ represents the demand
from the \emph{interchange} node $I$ to node $l \in V$. We also assume
that the total available supply is $s$ and concentrated at $I$ with
$S_l = 0$, $\forall l \neq I$. Hence, we are interested in routes with
$o_r =I$. We define the set of feasible routes as
\begin{equation*}
\mathbf{\hat{R}}:= \{r\vert \eqref{eq:nodeset}-\eqref{eq:edgeset},\ o_r = I, \
t_r \leq \hat{T} \} .
\end{equation*}
Let $\hat{f}_r$ denote the \emph{feed-out flow} for a route
$r\in \mathbf{\hat{R}}$, let $\hat{f}_r^i(l)$ represent the \emph{feed-out allocation} for a
service $(r,i,l)$, and let the \emph{total feed-out node allocation} for
node $l$ be denoted by $\hat{F}_l$. Then,
the constraints on these variables are
\begin{subequations}\label{eq:feedout_constr}
\begin{align}
\sum_{r \in \mathbf{\hat{R}}} \hat{f}_r %
&\leq s \label{eq:suppl_constr_2}
\\
\hat{F}_l := \sum_{r,i} \hat{f}_r^i(l) %
&\leq \hat{d}_l, \quad \forall l \in {V} \label{eq:dem_constr}
\\
    \sum_{l\in V_r^i} \hat{f}_r^i(l) %
    &\leq \hat{f}_r, \quad \forall i \in \intrangecc{1}{\theta_r}, \ \forall
      r \in \mathbf{\hat{R}} . \label{eq:leg-constr-fo}
\end{align}
\end{subequations}
These constraints are exactly analogous to the ones in the feed-in
problem. As in the feed-in problem, one could again demonstrate that
the demand serviced for any node is the same as the total feed-out
node allocation (see Lemma~\ref{lem:equivalentopt}). Thus, we ignore
the service variables.
We let $\hat{p}_r^i(l)$ be the \emph{feed-out price} a unit of
passengers pays for the service $(r,i,l)$. We assume an operational cost
of $1$ unit money per unit allocation. We can then define the
\emph{drop-off operator revenue} on the lines
of~\eqref{eq:Nodeprofitability} as
\begin{equation}\label{eq:drop_revenue}
\hat{\beta}_r^i(l) := \hat{p}_r^i(l) - 1 .
\end{equation}
Again, as in the feed-in problem, we can set the prices independent of
the flows and allocations. Thus with the optimization variables
$\hat{f}_r$, $\hat{f}_r^i(l)$ we can write the \emph{feed-out operator profit
maximization problem} as
\begin{equation}\label{eq:feedout-prob}
\begin{aligned}
\max_{\hat{f}_r, \hat{f}_r^i(l)}\hat{J} &= \sum_{r,i,l}\hat{\beta}_r^i(l) \hat{f}_r^i(l)-\sum_r \hat{f}_r
c_r
\\
\text{Subject to: } &\eqref{eq:feedout_constr}, \hat{f}_r, \hat{f}_r^i(l) \geq 0,
\quad \forall (r, i, l) .
\end{aligned}
\end{equation}
Now we fix the price $\hat{p}_r^i(l)$. We denote the drop-off time for
the service tuple $(r,i,l)$ by $\hat{t}_r^i(l)$. Let $\hat{\eta}_l$ and
$\hat{\zeta}_l$ be the best alternate transportation time and cost from
$I$ to $l$, respectively. Then the optimal price for a service
$(r,i,l)$ using perceived costs is
\begin{align}
\hat{p}_r^i(l)^* &= \alpha(\hat{\eta}_l- \hat{t}_r^i(l)) +\hat{\zeta}_l, \quad
\forall (r,i,l) . \label{eq:feedout_prices}
\end{align}
Further, analogous to the feed-in problem, the drop-off time $\hat{t}_r^i(l)$
should be as small as possible, since by~\eqref{eq:feedout_prices} a
later drop-off reduces the viable price. Hence,
we assume that the service starts at $t= 0$. This implies that
$\hat{t}_r^i(l)$ is the traversal time from $I$ to $l$ along the route $r$
with the drop-off on the service $(r,i,l)$.
\subsection{Equivalence to feed-in supply optimization problem}
In this subsection, we show that for the feed-out
problem~\eqref{eq:feedout-prob}, an equivalent supply optimization
feed-in problem exists with the same optimization value and related
optimizers. We first propose a set of supply distributions that always
contain an optimizer of the problem \eqref{eq:supply-opt}.
\begin{lemma}\label{lem:supply-set}
For total supply $s$, the set of supply distributions
\begin{equation}\label{eq:supply-set}
\Mc{S}(s) := \{\{S_l\} \vert S_I = \max\{0, s- \sum_{l \neq
I} d_l \} \}
\end{equation}
always contains an optimizer for the \emph{feed-in supply
optimization} problem defined in \eqref{eq:supply-opt}.
\qed
\end{lemma}
The proof of this lemma follows from Proposition~\ref{itm:sup5} and
Theorem~\ref{itm:thm6a}. Next, for the feed-out
problem~\eqref{eq:feedout-prob} on the graph
$\hat{G} =(\hat{V},\hat{E})$, we construct an equivalent feed-in
supply optimization problem with the following construction. We
express the construction through the following assumptions.
\begin{enumerate}[label=\textbf{(A\arabic*)},
ref=\textbf{(A\arabic*)}]
\setcounter{enumi}{2}
\item \label{A:3} Let the graph $G= (V, E)$ be such that
$V = \hat{V}$, edge $(l,k) \in \hat{E}$ iff $(k,l) \in E$ and
$(\hat{\rho}_{lk}, \hat{t}_{lk}) = (\rho_{kl}, t_{kl})$
$\forall (l,k) \in \hat{E}$.
\item \label{A:4} The supply distribution for \eqref{eq:supply-opt} is
  chosen from the supply set \eqref{eq:supply-set}. Also,
  $\{\hat{d}_l\}=\{d_l\}$.
\item \label{A:5} Let $T = \hat{T}$. Also, the best alternate travel
times and costs are the same, i.e. $\eta_l= \hat{\eta}_l $ and
$\zeta_l= \hat{\zeta}_l$.
\end{enumerate}
Now we give a mapping between the route sets $\mathbf{R}$ and $\mathbf{\hat{R}}$.
\begin{remark}\longthmtitle{Equivalent
graphs} \label{rem:equiv-routes} Given a route $r$ on graph
$\hat{G}$, let $ \phi_{\hat{G}G}(r) := \bar{r}$, a route in $G$ (defined by
\ref{A:3}) where $\theta_r = \theta_{\bar{r}}$,
$V_r(i)= V_{\bar{r}}(n_{\bar{r}}-i+1)$ and
$E_{\bar{r}}(n_{\bar{r}}-i)= (V_r(i+1),V_r(i)) $. Then,
$\forall\ r \in \mathbf{\hat{R}}$, $\exists \phi_{\hat{G}G}(r) = \bar{r}\in \mathbf{R}$ with
$c_r = c_{\bar{r}}$ and every service tuple $(r,i,l)$ of $r$ mapping
to $(\bar{r}, \theta_r-i+1, l)$. \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol
\end{remark}
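The mapping $\phi_{\hat{G}G}$ amounts to reversing the route's node sequence and renumbering its legs. The sketch below is our own illustration, not notation from the paper: a route is encoded as a list of legs, each a list of node labels, with consecutive legs sharing their boundary node.

```python
# Sketch of the route mapping phi: reverse a feed-out route and remap
# each service tuple (r, i, l) to (r_bar, theta_r - i + 1, l).

def reverse_route(legs):
    """Reverse a route given as a list of legs (lists of node labels)."""
    # Flatten to the ordered node sequence, then reverse it.
    nodes = [legs[0][0]] + [n for leg in legs for n in leg[1:]]
    rev = list(reversed(nodes))
    # Rebuild legs: leg i of r becomes leg theta - i + 1 of r_bar.
    sizes = [len(leg) - 1 for leg in reversed(legs)]  # edges per new leg
    out, pos = [], 0
    for size in sizes:
        out.append(rev[pos:pos + size + 1])
        pos += size
    return out

def map_service_leg(i, theta):
    """Service on leg i of r maps to leg theta - i + 1 of r_bar."""
    return theta - i + 1

legs = [["I", "A", "B"], ["B", "C"]]  # feed-out route I -> A -> B -> C
rev = reverse_route(legs)             # [["C", "B"], ["B", "A", "I"]]
```

The reversed route runs $C \to B \to A \to I$, with the original second leg becoming the first, matching $E_{\bar{r}}(n_{\bar{r}}-i)= (V_r(i+1),V_r(i))$.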
With this remark we can now show that $\hat{\beta}_r^i(l)$ and
$\beta_{\bar{r}}^{\theta_r-i+1}(l)$ are the same under the assumptions
given above.
\begin{lemma}\label{lem:equal_prices}
For the feed-out problem and the corresponding feed-in problem,
$\hat{\beta}_r^i(l)= \beta_{\bar{r}}^{\theta_r-i+1}(l)$, where $\bar{r} = \phi_{\hat{G}G}(r)$.\qed
\end{lemma}
The proof of this lemma is given in Appendix~\ref{prf:equal_prices}. Next, we show the equivalence of the two problems.
\begin{theorem}\longthmtitle{Equivalence of the feed-out problem and the
supply location feed-in problem} \label{prop:equivalence}
Under Assumptions~\ref{A:3}-\ref{A:5}, the feed-out problem defined
in \eqref{eq:feedout-prob} can be represented by an equivalent
supply optimization feed-in problem~\eqref{eq:supply-opt} with
$\hat{J}^*(s) = \bar{J}^*(s)$, $\forall s \geq 0$. Further, for all
optimal solutions
\begin{enumerate}[label=(\alph*),
ref=\ref{prop:equivalence}(\alph*)]
\item\label{itm:equiv-b}
$\hat{f}_r^i(l)^* = f_{\bar{r}}^{\theta_r-i+1}(l)^* $, $\forall (r,i,l)$ and
$\bar{r} = \phi_{\hat{G}G}(r)$,
\item\label{itm:equiv-c} $\hat{f}_r^* = f_{\bar{r}}^*$, $\bar{r} = \phi_{\hat{G}G}(r)$,
$\hat{F}_l^* = F_l^*$, $\forall\ l \in V$.
\end{enumerate}
\end{theorem}
\begin{proof}
Each pair of $r\in \mathbf{\hat{R}} $ and $\phi_{\hat{G}G}(r) = \bar{r} \in \mathbf{R}$
satisfy the relationship given in Remark \ref{rem:equiv-routes}.
Using Lemma~\ref{lem:equal_prices}, we see that
$\hat{\beta}_r^i(l) = \beta_{\bar{r}}^{\theta_r - i+1}(l)$ and given
$T = \hat{T}$ we conclude that the cost functions are equivalent,
i.e. $\bar{J} \equiv \hat{J}$ under the assumption
$\hat{f}_r^i(l) = f_{\bar{r}}^{\theta_r-i+1}(l)$. Hence, it is sufficient to
prove equivalence of the constraints in both problems.
  \emph{Leg Constraints}: In both problems, given flows $\hat{f}_r$ and
  $f_{\bar{r}}$, the constraint for leg $i$ of route $r$ in the
  \emph{feed-out} problem~\eqref{eq:leg-constr-fo} and the constraint
  for leg $\theta_r -i+1$ of route $\bar{r}$ in the feed-in supply
  optimization (see \eqref{eq:max-profits}) are equivalent.
\emph{Demand Constraints}: Constraint \eqref{eq:dem_constr} and Constraint $F_l\leq d_l $ are equivalent under the assumption that $d_l= \hat{d}_l$.
\emph{Supply Constraints}: To show this equivalence, we let
$\{\hat{S}_l\}_l $ be the supply distribution at the end of
\emph{feed-out}. We know that for any route $r$, the flow terminates
at $D_r$. Therefore, the final supply located at any node $l$ is
$\displaystyle \hat{S}_l = \sum_{r \vert D_r= l} \hat{f}_r$. Therefore,
Constraint \eqref{eq:suppl_constr_2} can be rewritten as
\begin{align*}
\sum_r\hat{f}_r = \sum_{l} \sum_{r\vert D_r = l }\hat{f}_r = \sum_l
\hat{S}_l \leq s ,
\end{align*}
  that is, $\hat{S}_l = \sum_{r \vert D_r= l} \hat{f}_r$ and
  $\sum_l \hat{S}_l \leq s$. Now, with the assumption that the supply
  distribution for \emph{feed-in} is chosen from the
  set~\eqref{eq:supply-set}, one can see that constraint
set~\eqref{eq:supply-set}, one can see that constraint
\eqref{eq:supplyconstr} along with $\sum_{l \in V} S_l \leq s$ are
equivalent to ones stated above as consequences of Proposition
\ref{itm:sup5} for $s\leq \sum_l d_l$ and Theorem \ref{itm:thm6a}
with Lemma \ref{lem:supply-set} for $s \geq \sum_l d_l$.
\end{proof}
Theorem~\ref{prop:equivalence} establishes the equivalence of the
feed-out problem on the graph $\hat{G}$ to the supply optimization
problem on the graph $G$, formed using \ref{A:3}. Thus, one may solve
either problem and obtain a solution to the feed-out problem. More
importantly, many of the properties and results of supply optimization
problem apply for the feed-out problem.
\subsection{Route Pruning and Absolute Maximum Profits}
Here, we state some properties of optimal solutions of feed-out
problem. One can prove them using the results for the feed-in and
supply optimization feed-in problems and by using the equivalence
stated in Theorem \ref{prop:equivalence}.
\begin{corollary}\label{corr:fo-necc}
\longthmtitle{Necessary conditions for a route to be used in an
optimal solution} In every optimal solution to the
problem~\eqref{eq:feedout-prob}, if $\hat{f}_r^*>0$, $r\in \mathbf{\hat{R}}$, then
\begin{enumerate}[label=(\alph*),
ref=\ref{corr:fo-necc}(\alph*)]
\item \label{corr:legs} For all legs of the route $r\in \mathbf{\hat{R}}$ we
have
\begin{align*}
\hat{f}_r^* = \sum_{l \in V_r^i}\hat{f}_r^i(l)^*, \quad \forall\ i \in
\intrangecc{1}{\theta_r} .
\end{align*}
\item\label{itm:fo-necc-a} $\hat{f}_r^i(l)^*>0$ for some
$i \in \intrangecc{1}{\theta_r}$ and $l \in V_r^i$. Also, if
$\hat{f}_r^i(l)^*>0$ for any $(r,i,l)$ then $\hat{\beta}_r^i(l)\geq 0$.
    \item\label{itm:fo-necc-b} The route, as well as each of its legs,
      is profitable, i.e.
\begin{align*}
&\sum_{i ,l}\hat{\beta}_r^i(l)\hat{f}_r^i(l)^*\geq \hat{f}_r^* c_r .
\end{align*}
Also, $\forall\ i \in \intrangecc{1}{\theta_r}$,
$\exists \ l \in V_r^i$ where $\hat{f}_r^i(l)^*>0$ with
$\hat{\beta}_r^i(l) \geq c_r^i > 0$.
\item\label{itm:fo-necc-c} The destination of the route is
profitable i.e.
$\hat{\beta}_r^{\theta_r}(D_r) \geq c_r^{\theta_r}$. Also,
$\hat{f}_r^{\theta_r}(D_r)^*=\hat{f}_r^*$.
    \item\label{itm:fo-necc-d} The route does not contain a cycle in the
      final leg and does not terminate at $I$. \qed
\end{enumerate}
\end{corollary}
Corollary~\ref{corr:fo-necc} states the necessary conditions for a
route to be utilised in some optimal solution. As was the case for
Propositions~\ref{prop:necc-cond-opt-sol} and~\ref{prop:supply-opt},
this corollary presents properties independent of the demand
distribution and the total supply. Thus, combining all properties
satisfied by routes used in any optimal solution, the reduced route
set $\mathbf{\hat{R}}^{-}$ is obtained as
\begin{equation} \label{eq:red-route-set-fo}%
\mathbf{\hat{R}}^{-} := \{r \in \mathbf{\hat{R}}\vert \bar{r} \in \mathbf{R}^-, \bar{r} = \phi_{\hat{G}G}(r) \}
\end{equation}
The route set $\mathbf{\hat{R}}^{-}$ contains all routes used in any optimal solution
to the feed-out problem \eqref{eq:feedout-prob}. Further, utilising
Corollary~\ref{corr:legs}, the equivalent reduced problem for
\eqref{eq:feedout-prob} is
\begin{equation}\label{eq:fo-obj}
\begin{aligned}
&\max_{\hat{f}_r^i(l) }\ \hat{J} = \sum_{r,i,l}(\hat{\beta}_r^i(l)-c_r^i) \hat{f}_r^i(l) \\
&\text{Subject to: }\sum_{l \in V_r^i} \hat{f}_r^i(l) = \hat{f}_r^{\theta_r}(D_r), \sum_{r,i} \hat{f}_r^i(l) \leq \hat{d}_l,
\\& \sum_{r \in \mathbf{\hat{R}}^{-}} \hat{f}_r^{\theta_r}(D_r)
\leq s, \ \hat{f}_r^i(l) \geq 0, \ \forall (r,i,l) .
\end{aligned}
\end{equation}
We next define the set of simple routes for a destination $l\in V$
that have the highest per-unit allocation profits.
\begin{equation} \label{eq:fo-simple-route} \hat{\Mc{R}}(l) :=
  \ensuremath{\operatorname{argmax}}_{r \in \mathbf{\hat{R}}^{-} \vert D_r = l, \ \theta_r = 1}\{\hat{\beta}_r^1(l)-c_r \} .
\end{equation}
Using this definition, we state some properties of solutions
to~\eqref{eq:fo-obj} that depend on the supply $s$ and also give the
maximum profits an operator can earn for the feed-out problem.
\begin{corollary}\label{corr:fo-optsol}
\longthmtitle{Dependence of optimal solutions on total supply} For
an optimal solution of \eqref{eq:fo-obj},
\begin{enumerate}[label=(\alph*), ref=\ref{corr:fo-optsol}(\alph*)]
\item \label{corr:fo-optsol-a} If $s\leq \sum_{l} \hat{d}_l$, then
$\sum_r \hat{f}_r^{\theta_r}(D_r)^* =s$.
\item \label{corr:fo-optsol-b} If $s\geq \sum_{l} \hat{d}_l$ and
$\hat{f}_r^{\theta_r}(D_r)^*>0$ then $r\in \hat{\Mc{R}}(l)$ with
$\displaystyle \sum_{r\in \hat{\Mc{R}}(l)} \hat{f}_r^{\theta_r}(D_r)^* = \hat{d}_l$.
\item \label{corr:fo-optsol-c} The absolute maximum profits over all
$s$ are
\begin{equation}\label{eq:fo-max-prof}
\hat{J}_{\max} = \sum_l \hat{d}_l\max_{r \in \mathbf{\hat{R}}^{-} \vert D_r = l,
  \ \theta_r = 1}\{\hat{\beta}_r^1(l)-c_r \} .
\end{equation}
\end{enumerate}
\end{corollary}
As one can see, $\hat{\Mc{R}}(l)$ has properties similar to those
established in Lemma \ref{lem:Rl-least-perceived-cost}. Also, the
absolute maximum profits are similar in nature to those of the
feed-in supply optimization problem.
\section{Best Alternate Transport and its Effect on%
the Reduced Route Set}
\label{sec:best-transport}
In this section, we present a simple model of the perceived costs for
the best alternate transportation $\{g_l\}$ and explore their effect
on the feasibility of the feeder service.
\subsubsection{Modelling the Perceived Costs of the Best Alternate
Transportation}
In order to systematically generate $g_l$ for each node $l \in V$, we
first assume that the best alternate transport available in the region
costs $b c_r$ and takes time $t_r$ along a route $r \in \mathbf{R}$. The
\emph{cost-factor} $b \geq 0$ signifies the cost to a passenger of the
best alternate transportation relative to the feeder service. For
simplicity, we assume it to be the same throughout the service
area. Then, we let
\begin{align}
&r(l,b)^* \in \ensuremath{\operatorname{argmin}}_{r \in \srs{l}} \{\alpha t_r + b
c_r\}\label{eq:best-alt-route} \\
&g_l(b) := \alpha \eta_l + \zeta_l , \ \eta_l = t_{r(l,b)^*}, \ \zeta_l %
= bc_{r(l,b)^*} , \label{eq:alt-perceived-cost}
\end{align}
where $r(l,b)^*$ is a route that the best alternate transport uses
from node $l$ to node $I$, while $\eta_l$, $\zeta_l$ and $g_l(b)$ are
respectively the travel time, the cost and the perceived cost of the
best alternate transport from node $l$ to node $I$.
\begin{remark}\longthmtitle{Effect of the cost-factor on best
alternate transportation}
\label{rem:alt-transport-b}
  For a fixed $T$, as there are finitely many routes, the perceived
  cost $g_l(b)$ is a piecewise-linear, increasing, concave and unique
  function of $b$ for each node $l \in V$. Further, at $b$, the slope
  of $g_l(b)$ equals $c_{r(l,b)^*}$ and the $g$-intercept is
  $\alpha t_{r(l,b)^*}$. Thus, $\eta_l$ and $\zeta_l$ are also unique for
  each $b$, except where the slope of $g_l(b)$ changes. For $b = 0$ and
  $b= \infty$, the routes $r(l,b)^*$ are the fastest and the cheapest,
  respectively. For any $b$, $r(l,b)^*$ is such that
  $t_{r(l,b)^*} \geq t_{r(l,0)^*}$ and
  $c_{r(l,b)^*}\geq c_{r(l,\infty)^*}$. \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol
\end{remark}
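The structure described in Remark~\ref{rem:alt-transport-b} follows from $g_l(b)$ being a pointwise minimum of the linear functions $b \mapsto \alpha t_r + b c_r$. The following sketch checks the claimed properties numerically on a hypothetical set of routes (the $(t_r, c_r)$ values are illustrative):

```python
# Illustrative routes from a node l to I: (t_r, c_r) = (travel time, cost).
# g_l(b) = min_r (alpha * t_r + b * c_r) is a minimum of linear functions
# of b, hence piecewise-linear, increasing and concave in b.
alpha = 0.5
routes = [(10.0, 6.0), (14.0, 3.0), (20.0, 1.0)]  # hypothetical data

def g(b):
    return min(alpha * t + b * c for t, c in routes)

def best_route(b):
    # r(l, b)^*: perceived-cost-minimising route at cost-factor b
    return min(routes, key=lambda rc: alpha * rc[0] + b * rc[1])

# At b = 0 the fastest route is chosen; for large b, the cheapest.
assert best_route(0.0) == (10.0, 6.0)
assert best_route(100.0) == (20.0, 1.0)

# Concavity check on a grid: g((b1+b2)/2) >= (g(b1)+g(b2))/2.
bs = [i * 0.1 for i in range(100)]
for b1, b2 in zip(bs, bs[2:]):
    assert g((b1 + b2) / 2) >= (g(b1) + g(b2)) / 2 - 1e-9
```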
\subsubsection{Viability of Feeder Service}
We first present a necessary condition on the value of $b$ for the
reduced route set $\mathbf{\bar{R}}$ to be non-empty and, as a result, for the
feeder service to be viable.
\begin{lemma}\longthmtitle{Necessary condition on $b$ for
viability of feeder service}\label{lem:viability}
If $g_l(b)$ is given by~\eqref{eq:alt-perceived-cost}, for each
$l \in V$, then the reduced route set $\mathbf{\bar{R}}$ is non-empty only if
$b>1$. \qed
\end{lemma}
We present the proof in Appendix~\ref{prf:viability}. Next, we check
for the existence of multi-legged routes in $\mathbf{\bar{R}}$ for a given $b$.
For the following proposition, we denote by $c^*(I,l)$ the cheapest
cost to go from $I$ to $l$ in the graph. Its proof is given in
Appendix~\ref{prf:multi-legged}.
\begin{proposition}\label{prop:multi-legged} \longthmtitle{Necessary
value of $b$ for $\mathbf{\bar{R}}$ to contain multi-legged routes} Consider
the following statements
\begin{enumerate}[label = {(\alph*)},
ref=\ref{prop:multi-legged}(\alph*)]
\item $\exists$ $r\in \mathbf{\bar{R}}$ such that $\theta_r>1$
\item \label{prop:int-route}$\exists$ $r\in \mathbf{\bar{R}}$ with $o_r = I$
and $\theta_r =1$
\item \label{prop:necc-suff-b-multi-legs}
$g_l(b)\geq g_l(1)+ c^*(I,l)+1$, for some $l \in V$, $l\neq I$
\item \label{prop:necc-b-multi-legs}
$\displaystyle b\geq\left(1 + \frac{1+ c^*(I,l) +
\alpha(t_{r(l,1)^*}-t_{r(l,\infty)^*})}{ c_{r(l,1)^*}}
\right) =: b_l^*$ for some $l \in V$, $l \neq I$.
\end{enumerate}
Then, (a) $\Rightarrow$ (b) $\Rightarrow$ (c) $\Rightarrow$
(d). Also, (c) $\Rightarrow$ (b) for sufficiently large $T$. \qed
\end{proposition}
With this proposition, given the graph $G$, the value of time $\alpha$
and the value of $b$, the operator can evaluate the viability of the
feeder service. Also, conditions~\ref{prop:necc-suff-b-multi-legs}
and~\ref{prop:necc-b-multi-legs}, for specific nodes $l$, may be
interpreted as necessary conditions for a traditional V.R.P. service
to that node to be viable. Note that we give two necessary conditions
in Proposition~\ref{prop:multi-legged}, namely parts~(c) and~(d),
because condition~\ref{prop:necc-suff-b-multi-legs} requires the
computation of $g_l(b)$ for each $b$. In comparison,
condition~\ref{prop:necc-b-multi-legs} is computationally simpler but
provides a weaker (lower) bound than the one in
condition~\ref{prop:necc-suff-b-multi-legs}.
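Condition~\ref{prop:necc-b-multi-legs} gives a threshold $b_l^*$ that can be computed directly from the route data. A sketch on hypothetical data (the routes and the value of $c^*(I,l)$ below are illustrative):

```python
# Hypothetical data for one node l: candidate routes (t_r, c_r) to I,
# the cheapest cost c*(I, l) from I back to l, and the V.o.T. alpha.
alpha = 0.5
routes_l_to_I = [(10.0, 6.0), (14.0, 3.0), (20.0, 1.0)]
c_star_I_l = 5.0  # cheapest cost to go from I to l (assumed)

def route_at(b):
    # r(l, b)^*: perceived-cost-minimising route at cost-factor b
    return min(routes_l_to_I, key=lambda rc: alpha * rc[0] + b * rc[1])

def b_threshold():
    """b_l^* from condition (d) of the multi-legged-routes proposition:
    b_l^* = 1 + (1 + c*(I,l) + alpha*(t_{r(l,1)*} - t_{r(l,inf)*})) / c_{r(l,1)*}."""
    t1, c1 = route_at(1.0)
    t_inf, _ = min(routes_l_to_I, key=lambda rc: rc[1])  # cheapest route
    return 1.0 + (1.0 + c_star_I_l + alpha * (t1 - t_inf)) / c1

print(b_threshold())
```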
\section{Results} \label{sec:results}
Since both the feed-in problem~\eqref{eq:equivoptmodel} and the
feed-out problem~\eqref{eq:fo-obj} are linear programs, we utilised
CVXPY \cite{SD-SB:2016} for the simulations.
\subsubsection{Simulation Setup}
We used the 24-node graph in Figure~\ref{fig:20_node} for the
simulations of the feed-in problems. The \emph{Interchange} node is
$I=23$ and the \emph{destination-time} is $T=30$. The feasible route
set $\mathbf{R}$ has 37283 routes with 274411 variables. We also note
that all nodes satisfy Assumption~\ref{A:2}. We assume the V.o.T. to
be $\alpha = 0.5$.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth, height=0.3\columnwidth ]{23_node}
  \caption{Graph for simulation of feed-in problems. Numbers in the
    circles represent node indices, an arrow between nodes $l$ and $k$
    indicates a directed edge from $l$ to $k$, and a line without an
    arrow between nodes $l$ and $k$ indicates a bi-directional
    edge. The tuple $(a,b)$ on the edge $(l,k)$ represents
    $(\rho_{lk}, t_{lk})$, and the Interchange node $I=23$ is marked
    with a square.}\label{fig:20_node}
\end{figure}
Given the cost-factor $b$, we utilised~\eqref{eq:alt-perceived-cost}
to generate the best alternate transportation time and cost, $\eta_l$,
$\zeta_l$ respectively, for each node $l$ and~\eqref{eq:opt-pric} to
generate prices for each service
tuple~$(r,i,l)$. Figure~\ref{fig:bvsnor} shows the number of routes in
$\mathbf{\bar{R}}$ as a function of $b$ (with a step size of 0.01) for the graph
in Figure~\ref{fig:20_node}.
\begin{figure}[h]
\centering \includegraphics[width=0.29\textwidth]{bvsnor_graph3.pdf}
  \caption{Number of routes in $\mathbf{\bar{R}}$ versus $b$ for
    $\alpha = 0.5$ and $T= 30$. Note that the number of routes is 0
    until $b=1$. }\label{fig:bvsnor}
\end{figure}
We note that the first route with origin $I$ and the first
multi-legged route in $\mathbf{\bar{R}}$ occur at $b =
1.71$. Conditions~\ref{prop:necc-suff-b-multi-legs}
and~\ref{prop:necc-b-multi-legs} give necessary lower bounds on $b$
for the existence of multi-legged routes in $\mathbf{\bar{R}}$ of $1.7027$ and
$1.692$, respectively. In each case, the origin of the multi-legged
route is the node $l = 0$. In Figure~\ref{fig:bvsnor}, we also see a
significant increase in the number of routes around
$b = 2.1$. This can be explained by the fact that $b_l^* \in (2,2.1)$
for 5 nodes. For the rest of the results we set $b=2.5$, for which
$\mathbf{\bar{R}}$ has 12219 routes and 45050 optimization variables.
\subsubsection{Feed-In and Comparison with V.R.P.}
\begin{figure}[h]
  \begin{subfigure}[b]{0.5\textwidth}
    \centering
    \includegraphics[width=0.75\textwidth]{Sim_results_as.pdf}
    \caption{}
    \label{fig:sim_results1}
  \end{subfigure}\\
  \begin{subfigure}[b]{0.5\textwidth}
    \centering
    \includegraphics[height=0.28\textwidth,width=0.65\textwidth]{NORts_new_grph.pdf}
    \caption{}
    \label{fig:sim_results3}
  \end{subfigure}
  \caption{Simulation results for the feed-in problem. \textbf{(a):}
    Maximum profits as a function of total supply $s$ (red line);
    maximum profits with all supply concentrated at $I$, as a function
    of $s$ (green line); and profits for different randomly chosen
    supply distributions, as a function of $s$ (blue scatter points).
    \textbf{(b):} Number of routes utilised for obtaining the maximum
    profits in (a). } \label{fig:Simulations}
\end{figure}
We simulated the \emph{feed-in} problem for a fixed demand profile
with demand $d_l$ drawn uniformly from $\intrangecc{0}{250}$. The
total demand was $\sum_l d_l = 2468$. Figure \ref{fig:Simulations}
shows the simulation results. We utilised
Proposition~\ref{prop:supply-opt} to generate the route set
$\mathbf{R}^-$, which had 6265 routes, and the resulting number of
optimization variables was 12695. In Figure \ref{fig:sim_results1},
the maximum profits for a given total supply (red line) are obtained
from Proposition~\ref{prop:supply-opt}. The maximum profits converge
to the absolute maximum profits, $J_{\max} = 105033.5$ (given by
Theorem \ref{th:suff-supplies}).
\ref{th:suff-supplies}). We also simulated an \emph{equivalent
macroscopic} V.R.P. by concentrating all supply at $I$, i.e.
$S_I = s$, $S_l = 0$ , $\forall l\neq I$. We observe in
Figure~\ref{fig:sim_results1} that the profits earned are far lower,
compared to that of any randomly chosen supply distributions. This is
explained by two elements - insufficient time-window and
cost-factor. Given $T=30$, 4 nodes do not have $r\in \mathbf{R}$ such
that $o_r = I$ and $l \in V_r$. Also given $b = 2.5$, only 16 of the
23 nodes satisfy Proposition~\eqref{prop:necc-suff-b-multi-legs},
implying at-least 7 nodes do not have $r \in \mathbf{\bar{R}}$ with $o_r = I$ and
$l \in V_r$. The necessary value of $b$ is $3.06$ for all nodes to
satisfy Proposition~\ref{prop:necc-suff-b-multi-legs}.
In Figure \ref{fig:sim_results3} we also see that the number of routes
used to generate the maximum profits generally increases with $s$,
though beyond a point it starts to reduce. We imposed the added
restriction that the supply distribution is chosen from the set
\eqref{eq:supply-set}, for use in the construction of an equivalent
feed-out problem.
\subsubsection{Equivalence of Feed-Out and Feed-In Supply Optimization}
We use the directed graph in Figure \ref{fig:20_node} and Assumptions
\ref{A:3}-\ref{A:5} to generate a feed-out problem from the feed-in
supply optimization problem. Using the route set $\mathbf{\hat{R}}^{-}$, we generate
the optimal profits for the same instances of total supply as before
and compare them with the maximum profits of the feed-in supply
optimization. The absolute error is in the range of $10^{-4}$, while
the maximum relative error is $5.66\times10^{-6}$, which is within
numerical tolerance given that the solver precision is $10^{-8}$ and
the number of variables is 12695. This verifies the equivalence of the
two problems.
\section{Conclusions}
In this paper, we proposed a problem of \emph{one-shot} coordination
of a first-mode feed-in service, wherein an operator seeks to maximize
its profits through routing and allocation, to transport a known
demand to a common destination on a network within a given fixed time
window. We solved the problem in a macroscopic setting where we
considered all supplies and demands as volumes. Using K.K.T. analysis,
we designed an offline (supply- and demand-independent) method that
reduces the complexity of the online (after supply and demand are
revealed) optimization. Then, we considered the feed-in supply
optimization problem, analysed its properties and computed the
absolute maximum profits that the operator can earn over all possible
supply distributions for a given demand distribution. We showed an
equivalence between the feed-in supply optimization problem and the
\emph{one-shot feed-out} problem, wherein the operator needs to drop
off people at their destinations from a common origin within a fixed
time window. This allows us to directly apply the results and
algorithms developed for the feed-in problem. Finally, we analysed the
limitations of a macroscopic V.R.P. in addressing first- or last-mile
connectivity. In particular, a traditional V.R.P. may not be an ideal
last-mile connectivity solution, and a mixed multi-origin
transportation model may be more viable.
Future work includes extension to a multiple time window problem, load
balancing of the supply in accordance with the anticipated demand
using the insights from the feed-in supply optimization problem,
extension to the scenario with uncertainty about supply and demand and
finally an integrated coordination of multiple modes of transportation.
\section{Acknowledgements}
We wish to thank Dr. Tarun Rambha of the Department of Civil
Engineering at the Indian Institute of Science (IISc), Bengaluru for
his valuable comments and suggestions.
\bibliographystyle{IEEEtran}
\section{Introduction}
A lattice $\mathcal{L} \subset \mathbb{Q}^n$ is the set of all integer linear combinations of some linearly independent basis vectors $\vec{b}_1,\ldots, \vec{b}_n \in \mathbb{Q}^n$.
The two central computational problems on lattices are the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). Given a lattice $\mathcal{L} \subset \mathbb{Q}^n$, the SVP is to find a shortest non-zero vector in $\mathcal{L}$. Given a lattice $\mathcal{L} \subset \mathbb{Q}^n$ and a target vector $\vec{t} \in \mathbb{Q}^n$, the CVP is to find a vector in $\mathcal{L}$ whose distance to $\vec{t}$ is minimal.
Algorithms for SVP and CVP, in both their exact and approximate versions, have found many diverse applications in computer science. They have been used to factor polynomials over the rationals~\cite{LLL82}, solve integer programming~\cite{Len83,Kan87,DPV11}, and break cryptographic schemes~\cite{Odl90,JS98,NS01}. And, over the past twenty years, a wide range of strong cryptographic primitives have been constructed with their security based on the \emph{worst-case} hardness of the approximate versions of these problems~\cite{Ajt96,MR07,GPV08, Gen09, Peikert09, Reg09,LPR10,BV11,BV14}.
Both problems are known to be hard, even to approximate to within the nearly polynomial factor of $n^{c/\log \log n}$ for some constant $c$~\cite{ABSS93,Ajt98,CN98,BS99,DKRS03,Mic01svp,Khot05svp,Micciancio12,HRsvp}. Indeed, CVP is in some sense \scarequotes{lattice complete} in that nearly all well-studied lattice problems are reducible to CVP via dimension-preserving (and approximation-factor-preserving) reductions. (See~\cite{Micciancio08} for a list of such problems.) In particular, a dimension-preserving reduction from SVP to CVP has long been known~\cite{GMSS99}. However, the best-known dimension-preserving reduction in the other direction only reduces $O(\sqrt{n})$-approximate CVP to SVP.
A powerful tool for studying lattices is the discrete Gaussian, the probability distribution $D_{\mathcal{L} - \vec{t}, s}$ that assigns to each vector $\vec{x} \in \mathcal{L} - \vec{t}$ probability proportional to its Gaussian mass, $e^{-\pi \length{\vec{x}}^2/s^2}$, for a lattice $\mathcal{L} \subset \mathbb{Q}^n$, shift vector $\vec{t} \in \mathbb{Q}^n$, and parameter $s > 0$. The discrete Gaussian and the closely related theta functions have been used to prove transference theorems on lattices~\cite{banaszczyk, Cai03}; to show that $\sqrt{n}$-approximate CVP and SVP are in co-NP~\cite{AharonovR04}; to embed flat tori in a Hilbert space with low distortion~\cite{HavivR13}; to solve the Bounded Distance Decoding Problem~\cite{LiuLM06}; and even in the study of the Riemann zeta function (e.g., in~\cite{BianePY01}).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.4 \textwidth]{DGS10}
\qquad
\includegraphics[width=0.4 \textwidth]{shiftedDGS}
\caption{\label{fig:DGS}
Two very different discrete Gaussian distributions in two dimensions. On the left is $D_{\ensuremath{\mathbb{Z}}^2, 10}$. On the right is $D_{\mathcal{L} - \vec{t},5}$, where $\mathcal{L}$ is spanned by $3\vec{e}_1$ and $\vec{e}_2/2$, and $\vec{t} = 3\vec{e}_1/2 + \vec{e}_2/4$ is a \scarequotes{deep hole.}}
\end{center}
\end{figure}
Note that the discrete Gaussian is concentrated on relatively short vectors. In particular, in the important special case when the discrete Gaussian is \emph{centered} so that $\vec{t} = \vec0$, $D_{\mathcal{L}, s}$ assigns higher weight to shorter lattice vectors. This suggests a connection between $D_{\mathcal{L}, s}$ and SVP. In the more general case, $D_{\mathcal{L} - \vec{t}, s}$ is concentrated on short vectors in the shifted lattice $\mathcal{L} - \vec{t}$. By translating this distribution by $\vec{t}$ (i.e., considering the distribution of $D_{\mathcal{L} - \vec{t}, s} + \vec{t}$), we obtain a distribution over the lattice that assigns higher weight to the vectors that are closest to $\vec{t}$, suggesting a connection between $D_{\mathcal{L} - \vec{t}, s}$ and CVP. As the parameter $s$ becomes lower, the distribution becomes more concentrated. Indeed, one can show that samples from $D_{\mathcal{L} - \vec{t}, s}$ (when suitably translated) yield $(1+\alpha \sqrt{n})$-approximate solutions to CVP when $s \approx \dist(\vec{t}, \mathcal{L})/\alpha$. (See Figure~\ref{fig:DGS} for two examples of the discrete Gaussian in two dimensions.)
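As a concrete illustration of the definition, the following sketch computes a (truncated) probability mass function of the one-dimensional discrete Gaussian $D_{\ensuremath{\mathbb{Z}} - t, s}$ by direct enumeration; the truncation radius is our own assumption, chosen large enough relative to $s$ that the neglected mass is negligible:

```python
import math
import random

def dgs_pmf(s, t, radius=30):
    """Probabilities of D_{Z - t, s} restricted to the points x = k - t
    with |x| <= radius; each point gets mass proportional to
    exp(-pi * x^2 / s^2). Truncation error is negligible for radius >> s."""
    pts = [k - t for k in range(math.ceil(t - radius), math.floor(t + radius) + 1)]
    w = [math.exp(-math.pi * x * x / (s * s)) for x in pts]
    Z = sum(w)
    return {x: wi / Z for x, wi in zip(pts, w)}

def dgs_sample(s, t, radius=30):
    pmf = dgs_pmf(s, t, radius)
    xs, ps = zip(*pmf.items())
    return random.choices(xs, weights=ps)[0]

pmf = dgs_pmf(s=2.0, t=0.0)
# Centered case: the distribution is symmetric, and shorter vectors
# receive higher mass.
assert abs(pmf[1.0] - pmf[-1.0]) < 1e-12
assert pmf[0.0] > pmf[1.0] > pmf[2.0]
```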
Largely because of its connection to other lattice problems, algorithms for discrete Gaussian sampling (DGS) have recently played an important role in computer science.
Gentry, Peikert, and Vaikuntanathan introduced a polynomial-time trapdoor algorithm for sampling from the discrete Gaussian with very high parameters $s$ in order to construct a secure signature scheme \cite{GPV08}.
And, many reductions between lattice problems use a DGS algorithm as a subroutine~\cite{Reg09,Pei10, MicciancioP13,BLPRS13}. But, these reductions also only work for very high parameters $s$. In particular, all previously known polynomial-time algorithms (even those with access to trapdoors and oracles) can only sample from $D_{\mathcal{L} - \vec{t},s}$ when $s$ is significantly above the \scarequotes{smoothing parameter} of the lattice, in which case the discrete Gaussian \scarequotes{looks like a continuous Gaussian distribution} in a certain precise sense that we do not define here. (See \cite{MR07} for the formal definition.)
In the past year, Aggarwal, Dadush, Regev, and Stephens-Davidowitz introduced an exponential-time algorithm for sampling from the discrete Gaussian with much lower parameters in order to solve exact SVP~\cite{ADRS15}, and \cite{ADS15} showed how to extend this result to CVP. These are the current fastest-known algorithms for SVP and CVP. In particular,~\cite{ADRS15} showed how to sample exponentially many vectors from the \emph{centered} discrete Gaussian for \emph{any} parameter $s$ in $2^{n+o(n)}$ time, which yields a solution to SVP. \cite{ADS15} extended this work to show how to sample many vectors from $D_{\mathcal{L} - \vec{t}, s}$ for very small parameters $s \approx \dist(\vec{t}, \mathcal{L})/2^{n}$, also in $2^{n+o(n)}$ time. Surprisingly, they showed how to use such an algorithm to construct a $2^{n+o(n)}$-time algorithm for CVP.\full{\footnote{It is easy to see that a discrete Gaussian sampler that works for any $\vec{t}$ and any $s$ is sufficient to solve CVP efficient. (We include a proof in Section~\ref{sec:CVPtoDGS} for completeness.) The difficulty in \cite{ADS15} is that the sampler only works for parameters $s$ greater than roughly $\dist(\vec{t}, \mathcal{L})/2^{n}$. While this minimum value is very small, this does not seem to be enough to efficiently solve exact CVP on its own. \cite{ADS15} manage to solve exact CVP in spite of this difficulty because their DGS algorithm outputs very many samples, which they use to recursively find an exact closest vector.}}{} (In Table~\ref{tab:DGS}, we summarize the previous known algorithms for discrete Gaussian sampling, together with the results of this work.)
\begin{table}
\begin{center}
\begin{tabular}{l l l l l }
& Shift & Parameter & Time & Notes \\
\hline
\hline
& Any $\vec{t}$ & $s \geq 2^{n \log n/\log \log n } \cdot \lambda_n$ & $\mathrm{poly}(n)$ & \cite{AKS01,GPV08} \\
& Any $\vec{t}$ & $s \geq \gamma \sqrt{\log n} \cdot \lambda_n$ & -- & Reduces to $\gamma$-approx. SVP \cite{GPV08, BLPRS13}.\\
& Any $\vec{t}$ & $s \gg \sqrt{n} \cdot \eta$ & -- & Quantum reduction to LWE \cite{Reg09}.\\
& Any $\vec{t}$ & $s \geq \sqrt{2} \cdot \eta$ & $2^{n/2 + o(n)}$ & Outputs $2^{n/2}$ samples~\cite{ADRS15}.\\
& Any $\vec{t}$ & $s \gtrsim 2^{-n/\log n}\dist(\vec{t}, \mathcal{L})$ & $2^{n+o(n)}$ & Outputs many samples~\cite{ADS15}.\\
* & Any $\vec{t}$ & Any $s$ & -- & Equivalent to CVP.\\
* & Any $\vec{t}$ & Any $s$ & $2^{n+o(n)}$ & Follows from equivalence and \cite{ADS15}. \\
\hline\hline
& $\vec{t} = \vec0$ & Any $s$ & $2^{n+o(n)}$ & Outputs $2^{n/2}$ samples~\cite{ADRS15}.\\
* & $\vec{t} = \vec0$ & Any $s$ & -- & Reduces to SVP.\\
\hline
\end{tabular}
\caption{\label{tab:DGS}Known results concerning the problem of sampling from $D_{\mathcal{L} - \vec{t}, s}$. Lines marked with a * are new results. We have omitted some constants. $\eta$ is the smoothing parameter, as defined in~\cite{MR07}, and $\lambda_n$ is the $n$th successive minimum. (They are related by $\eta/\sqrt{\log n} \lesssim \lambda_n \lesssim \sqrt{n} \cdot \eta$, where the upper bound is tight for the lattices that are relevant for cryptography. We also have $\dist(\vec{t}, \mathcal{L}) \leq \sqrt{n} \lambda_n/2$.)}
\end{center}
\end{table}
All of these results reflect the increasing prominence of discrete Gaussian sampling algorithms in computer science. However, they left open a natural question: what is the complexity of DGS itself? In particular, prior to this work, DGS was one of the only prominent lattice problems not known to be reducible to CVP via a dimension-preserving reduction. \full{(Another important example is the Lattice Isomorphism Problem.) }{}In fact, previously, there was simply no known algorithm that sampled from $D_{\mathcal{L} - \vec{t}, s}$ for an arbitrary shift $\vec{t}$ and parameter $s > 0$, and it was not even known whether sampling from the \emph{centered} distribution $D_{\mathcal{L}, s}$ could be reduced to a problem in NP. (Since DGS is a sampling problem, it technically cannot be placed directly in classes of decision problems or search problems like NP or FNP. But, we can still reduce it to such problems. See, e.g.,~\cite{Aaronson14} for a discussion of the complexity of sampling problems and their relationship to search problems.)
\subsection{Our results}
Our first main result is a dimension-preserving reduction from discrete Gaussian sampling to CVP. (See Theorem~\ref{thm:DGStoCVP}.) This immediately implies two important corollaries. First, together with the relatively straightforward reduction from CVP to DGS\full{ (see Section~\ref{sec:CVPtoDGS})}{}, this shows that CVP and DGS are equivalent via efficient dimension-preserving reductions. In particular, this suggests that the approach of \cite{ADS15} is in some (weak) sense the \scarequotes{correct} way to attack CVP, since we now know that any faster algorithm for CVP necessarily implies a similarly efficient discrete Gaussian sampler, and vice versa. Second, together with the result of \cite{ADS15}, this gives a $2^{n+o(n)}$-time algorithm for discrete Gaussian sampling that works for any parameter $s$ and shift $\vec{t}$, the first known algorithm for this problem.
Our second main result is a dimension-preserving reduction from \emph{centered} DGS to SVP. (See Theorem~\ref{thm:DGStoSVP}.) As we describe below, this result requires quite a bit more work, and we consider it to be more surprising, since, in a fixed dimension, an SVP oracle seems to be significantly weaker than a CVP oracle. In contrast to the CVP case, we know of no efficient reduction from SVP to centered DGS, and we do not even know whether centered DGS is NP-hard. (While \cite{ADRS15} use centered DGS to solve SVP, they require exponentially many samples to do so.) \full{We}{In the full version, we} present only a much weaker reduction from $\gamma$-approximate SVP to centered DGS for any $\gamma = \Omega(\sqrt{n/\log n})$. We also show that, for any $\gamma = o(\sqrt{n/\log n})$,
no \scarequotes{simple} reduction from $\gamma$-SVP to centered DGS will work. (See Section~\ref{sec:SVPtoDGS}.)
Finally, we note that our proofs do not make use of any unique properties of the discrete Gaussian or of the $\ell_2$ norm. We therefore show a much more general result: any distribution that is close to a weighted combination of uniform distributions over balls in some norm reduces to CVP in this norm. (See Section~\ref{sec:other}.) In particular, sampling from the natural $\ell_q$ analogue of the discrete Gaussian is equivalent to CVP in the $\ell_q$ norm, under efficient dimension-preserving reductions. We imagine that a similar result holds for SVP, but since we know of no application, we do not endeavor to prove such a result in the more difficult setting of SVP.
\subsection{Proof overview}
\full{We now provide a high-level description of our techniques.}{}
\paragraph{Reduction from DGS to CVP. }
Our basic idea is to sample from the discrete Gaussian $D_{\mathcal{L} - \vec{t}, s}$ in two natural steps. We first sample some radius $r$ from a carefully chosen distribution. We then sample a uniformly random point in $(\mathcal{L} - \vec{t}) \cap r B_2^n$. In particular, the distribution on the radius should assign probability to each radius $r$ that is roughly proportional to $e^{-\pi r^2/s^2} \cdot |(\mathcal{L} - \vec{t}) \cap rB_2^n| $. (See the proof of Theorem~\ref{thm:DGStoCVP} for the exact distribution.) So, in order to solve DGS, it suffices to (1) compute $|(\mathcal{L} - \vec{t}) \cap r B_2^n|$ for arbitrary $r$, and (2) sample a uniformly random point from $(\mathcal{L} - \vec{t}) \cap r B_2^n$.
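The decomposition underlying this reduction can be checked on a toy example. The sketch below works over $\mathcal{L} = \ensuremath{\mathbb{Z}}$ with exact distance shells in place of balls (a simplification of the radius distribution used in the actual proof), and verifies that sampling a radius with probability proportional to $e^{-\pi r^2/s^2}$ times the number of points at that distance, and then a uniform point at that distance, reproduces the discrete Gaussian:

```python
import math

def gaussian_mass(x, s):
    return math.exp(-math.pi * (x * x) / (s * s))

# Toy setting: lattice Z, shift t, points of Z - t within a cutoff radius R.
s, t, R = 1.5, 0.5, 12.0
points = [k - t for k in range(math.ceil(t - R), math.floor(t + R) + 1)]

# Direct pmf of the (truncated) discrete Gaussian D_{Z - t, s}.
Z = sum(gaussian_mass(x, s) for x in points)
direct = {x: gaussian_mass(x, s) / Z for x in points}

# Two-step pmf: first pick a distance shell r = |x| with probability
# proportional to (number of points at that distance) * e^{-pi r^2/s^2},
# then pick a point uniformly within the shell.
shells = {}
for x in points:
    shells.setdefault(abs(x), []).append(x)
two_step = {}
for r, xs in shells.items():
    p_shell = len(xs) * gaussian_mass(r, s) / Z
    for x in xs:
        two_step[x] = p_shell / len(xs)

assert all(abs(direct[x] - two_step[x]) < 1e-12 for x in points)
```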
We actually use the same technical tool to solve both problems: lattice sparsification, as introduced by Khot~\cite{Khot05svp} (though our analysis is more similar to that of Dadush and Kun~\cite{DK13} and \cite{cvpp}). Intuitively, sparsification allows us to sample a random sublattice $\mathcal{L}' \subset \mathcal{L}$ of index $p$ such that for any vector $\vec{x} \in \mathcal{L}$, we have $\Pr[\vec{x} \in \mathcal{L}'] \approx 1/p$. Suppose we could find a sublattice $\mathcal{L}'$ such that for the closest $N \approx p$ points to $\vec{t}$ in $\mathcal{L}$, we have $\Pr[\vec{x} \in \mathcal{L}'] = 1/p$, independently of the other points. Then, this would suffice for our two use cases. In particular, if the lattice has $N$ points in the ball of a given radius around $\vec{t}$, then $\mathcal{L}' - \vec{t}$ would have a point in this ball with probability very close to $N/p$. We can use a CVP oracle to approximate this probability empirically, and we therefore obtain a good approximation for the number of lattice points in any ball. (\full{We achieve an approximation factor of $1+1/f(n)$ for any $f(n) = \mathrm{poly}(n)$. }{}See Theorem~\ref{thm:counter}.) Similarly, if we know that the number of lattice points in a ball of radius $r$ around $\vec{t}$ is roughly $N$, then we can take $p = \mathrm{poly}(n) \cdot N$ and repeatedly sample $\mathcal{L}'$ until $\mathcal{L}'$ has a point inside the ball of radius $r$ around $\vec{t}$. The resulting point will be a nearly uniformly random sample from the lattice points in the ball of radius $r$ around $\vec{t}$. Combining these two operations allows us to sample from the discrete Gaussian using a CVP oracle, as described above. (See Theorem~\ref{thm:DGStoCVP}.)
Unfortunately, sparsification does not give us exactly this distribution. More specifically, sparsification works as follows. Given a prime $p$ and lattice basis $\ensuremath{\mathbf{B}}$, we sample $\vec{z} \in \ensuremath{\mathbb{Z}}_p^n$ uniformly at random and define the corresponding sparsified sublattice as
\begin{equation}
\label{eq:sparsesublattice}
\mathcal{L}' := \{\vec{x} \in \mathcal{L}\ : \ \inner{\vec{z}, \ensuremath{\mathbf{B}}^{-1}\vec{x}} \equiv 0 \bmod p \}
\; .
\end{equation}
Then, for any vector $\vec{x} \in \mathcal{L}$, we have $\Pr[\vec{x} \in \mathcal{L}'] = 1/p$ unless $\vec{x} \in p\mathcal{L}$ (in which case $\vec{x}$ is always in $\mathcal{L}'$). Unfortunately, even if we ignore the issue that points in $p\mathcal{L}$ do not behave properly, it is easy to see that these probabilities are not at all independent. For example, if $\vec{x} = \alpha \vec{y}$, then $\vec{x} \in \mathcal{L}'$ if and only if $\vec{y} \in \mathcal{L}'$. \full{And of course, more complex dependencies can exist as well. }{}Fortunately, we can get around this by using an idea from \cite{cvpp} (and implicit in \cite{DK13}). In particular, we can show that the probabilities are close to independent if we also shift the sublattice $\mathcal{L}'$ by a \scarequotes{random lattice vector} $\vec{w} \in \mathcal{L}$. I.e., while the distribution of the points in $\mathcal{L}' \cap (rB_2^n + \vec{t})$ might be very complicated, each point in $\mathcal{L} \cap (rB_2^n + \vec{t})$ will land in $\mathcal{L}' - \vec{w}$ with probability $\approx 1/p$, and their distributions are nearly independent. (See Theorem~\ref{thm:shiftedsparsification} for the precise statement.) Our CVP oracle makes no distinction between lattices and shifted lattices (we can just shift $\vec{t}$ by $\vec{w}$), so this solution suffices for our purposes.
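For intuition, the membership test in Eq.~\eqref{eq:sparsesublattice} can be checked exhaustively on a toy lattice. The sketch below takes $\mathcal{L} = \ensuremath{\mathbb{Z}}^n$ (so that $\ensuremath{\mathbf{B}}^{-1}$ is the identity) and verifies that a fixed $\vec{x} \notin p\mathcal{L}$ lies in $\mathcal{L}'$ for exactly a $1/p$ fraction of the choices of $\vec{z}$; the dependencies between different vectors discussed above are not addressed here:

```python
def sparsify_membership(x, z, p):
    """x is in L' iff <z, B^{-1} x> = 0 mod p; here L = Z^n, so B^{-1} = I."""
    return sum(zi * xi for zi, xi in zip(z, x)) % p == 0

# For a fixed x in Z^n \ pZ^n, Pr over uniform z in Z_p^n of x in L' is
# exactly 1/p: the map z -> <z, x> mod p is a surjective linear functional
# whenever x is nonzero mod p, so its kernel has size p^n / p.
p, n = 5, 3
x = (1, 2, 7)  # not in p * Z^n
hits = sum(
    sparsify_membership(x, z, p)
    for z in ((a, b, c) for a in range(p) for b in range(p) for c in range(p))
)
assert hits == p ** n // p  # exactly a 1/p fraction of the z's
```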
\paragraph{Reduction from centered DGS to SVP. } Our reduction from centered DGS to SVP uses the same high-level ideas described above, but the details are a bit more complicated. As in the CVP case, our primary tool is lattice sparsification, in which we choose a sparsified sublattice as in Eq.~\eqref{eq:sparsesublattice}. As before, we wish to control the distribution of the shortest vector in $\mathcal{L}'$, and we note that, ignoring degenerate cases, $\vec{x}$ is a shortest vector of $\mathcal{L}'$ if and only if $\vec{x} \in \mathcal{L}'$ and $\vec{y}_1, \ldots, \vec{y}_N \notin \mathcal{L}'$ where the $\vec{y}_i \in \mathcal{L}$ are the non-zero lattice vectors shorter than $\vec{x}$ (up to sign). However, as in the CVP case, this probability can be affected by linear dependencies. In the CVP case, we solved this problem by considering a random shift of $\mathcal{L}'$. But, this solution clearly does not work here because an SVP oracle simply \scarequotes{cannot handle} shifted lattices. We therefore have to deal explicitly with these dependencies.
The most obvious type of dependency occurs when $\vec{x}$ is not \emph{primitive}, so that $\vec{x} = \alpha \vec{y}_i$ for some integer $\alpha$ with $|\alpha| > 1$. In this case, there is nothing that we can do---$\vec{y}_i$ is shorter than $\vec{x}$ and $\vec{y}_i \in \mathcal{L}'$ if and only if $\vec{x} \in \mathcal{L}'$, so $\vec{x}$ will \emph{never} be a shortest non-zero vector in $\mathcal{L}'$. We are therefore forced to work only with primitive vectors (i.e., lattice vectors that are not a scalar multiple of a shorter lattice vector). Even if we only consider primitive vectors, it can still be the case that two such vectors are scalar multiples of each other mod $p$, i.e., $\vec{x} \equiv \alpha \vec{y}_i \bmod p\mathcal{L}$. \full{Luckily, we show that this can only happen if there are $\widetilde{\Omega}(p)$ primitive vectors shorter than $\vec{x}$ in the lattice, so that this issue does not affect the $\widetilde{\Omega}(p)$ shortest primitive vectors. (See Lemma~\ref{lem:nogoodnameforthislemma}.) We also show that higher-order dependencies (e.g., equations of the form $\vec{x} \equiv \alpha \vec{y}_i + \beta \vec{y}_j \bmod p\mathcal{L}$) have little effect. (See Lemma~\ref{lem:almostindependent}.)}{In the full version, we show that such issues can be overcome.} So, the shortest non-zero vector in the sparsified lattice will be distributed nearly uniformly over the $\widetilde{\Omega}(p)$ shortest primitive vectors in the original lattice. (See Theorem~\ref{thm:sparsification} and Proposition~\ref{prop:sparsifier} for the precise statement, which might be useful in future work.)
As in the CVP case, this suffices for our purposes. In particular, if there are $N$ \emph{primitive} lattice vectors in the ball of radius $r$ centered at the origin for $N \leq \tilde{O}(p)$, then there will be a non-zero vector in $\mathcal{L}' \cap rB_2^n$ with probability very close to $N/p$. With an SVP oracle, we can estimate this probability, and this allows us to approximate the number of primitive lattice vectors in a ball with very good accuracy. (See Theorem~\ref{thm:primcounter}.) And, the sparsification algorithm and SVP oracle also allow us to sample a primitive lattice vector in the ball of radius $r$ around the origin with nearly uniform probability, as in the CVP case. (See Lemma~\ref{lem:uniformsampler}.)
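The counting step can be made concrete. The sketch below (an illustration with arbitrary small parameters, not the reduction itself) takes $N$ pairwise non-parallel coordinate vectors mod $p$ and computes exactly the probability that at least one of them satisfies the sparsification condition; by inclusion-exclusion, this probability is guaranteed to lie between $N/p - \binom{N}{2}/p^2$ and $N/p$.

```python
from itertools import product

p, n = 53, 3
# points on the moment curve (1, i, i^2) mod p are pairwise non-parallel
vs = [(1, i % p, i * i % p) for i in range(1, 9)]
N = len(vs)

hit = 0
for z in product(range(p), repeat=n):
    if any(sum(zi * vi for zi, vi in zip(z, v)) % p == 0 for v in vs):
        hit += 1
pr = hit / p**n   # probability that the sparsified lattice retains some v_i
# inclusion-exclusion guarantees: N/p - C(N,2)/p^2 <= pr <= N/p
```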
Then, the same approach as before would allow us to use an SVP oracle to sample from the discrete Gaussian over the \emph{primitive} lattice vectors. In order to obtain the true discrete Gaussian, we first \scarequotes{add $\vec0$ in} by estimating the total Gaussian mass $\rho_s(\mathcal{L})$ and returning $\vec0$ with probability $1/\rho_s(\mathcal{L})$. Second, after sampling a primitive vector $\vec{x}$ using roughly the above idea, we sample an integer coefficient $z \in \ensuremath{\mathbb{Z}} \setminus \{ 0\}$ according to a one-dimensional discrete Gaussian (using an algorithm introduced by \cite{BLPRS13}) and output $z \vec{x}$. If we choose the primitive vector appropriately, we show that the resulting distribution is $D_{\mathcal{L}, s}$.\full{\footnote{Interestingly, the problem of sampling from the centered discrete Gaussian over the \emph{primitive} lattice vectors, or even just the discrete Gaussian over $\mathcal{L} \setminus \{ \vec0\}$ might be strictly harder than centered DGS. In particular, in Section~\ref{sec:SVPtoDGS}, we show a family of lattices for which $D_{\mathcal{L}, s}$ almost never returns a $o(\sqrt{n/\log n})$-approximate shortest vector. However, it is easy to see that the discrete Gaussian over the \emph{primitive} lattice vectors or even just over the lattice without $\vec0$ will output the shortest vector with overwhelming probability if the parameter $s$ is sufficiently small. Therefore, both of these sampling problems are actually polynomial-time equivalent to SVP, while we have some evidence to suggest that sampling from $D_{\mathcal{L}, s}$ is not. Indeed, we know of no application of centered DGS in which non-primitive vectors are actually desirable.}}{}
\subsection{Related work}
\label{sec:related}
\paragraph{DGS algorithms. } There are now many very different algorithms for sampling from the discrete Gaussian. (See Table~\ref{tab:DGS}.) The procedure of \cite{GPV08} (which was originally introduced by Klein in a different context~\cite{Klein00} and was later improved by Brakerski et al.~\cite{BLPRS13}) is a randomized variant of Babai's celebrated nearest plane algorithm \cite{Bab86}. It chooses the coordinates of a lattice vector in a given basis one-by-one by sampling from appropriate shifts of the $n$ one-dimensional Gaussians generated by the Gram-Schmidt orthogonalization of the basis vectors. Peikert showed a similar algorithm that uses the one-dimensional Gaussians generated by the basis vectors themselves instead of their Gram-Schmidt orthogonalizations~\cite{Peikert09}. This yields an elliptical discrete Gaussian, and Peikert convolves this with an elliptical continuous Gaussian in a clever way to obtain a spherical discrete Gaussian. Both of these algorithms are useful for building trapdoor primitives because they can sample with a lower parameter $s$ when given a shorter input basis.
From our perspective, the algorithms of \cite{Klein00, GPV08,BLPRS13} and \cite{Pei10} can be viewed as reductions from DGS with high parameters $s$ to approximate SVP, where a better approximation factor allows us to sample with a lower parameter $s$ by finding a better basis. And, Regev~\cite{Reg09} explicitly showed a \emph{quantum} reduction from DGS with large $s$ to a different lattice problem. Indeed, many reductions between lattice problems start by sampling vectors from $D_{\mathcal{L}, s}$ for some large $s$ using one of these algorithms and then using an oracle for some lattice problem to find small combinations of the samples whose average lies in the lattice (e.g.,~\cite{Reg09, MicciancioP13}). One can show that the distribution of the resulting average will be close to $D_{\mathcal{L}, s'}$ for some $s' < s$ (as long as certain conditions are met).
However, all of the above-mentioned algorithms only work above the smoothing parameter of the lattice because they incur error that depends on \scarequotes{how smooth} the distribution is. Recently, \cite{ADRS15} showed that the averages of pairs of vectors sampled from the centered discrete Gaussian will be distributed \emph{exactly} as discrete Gaussians with a lower parameter, as long as we condition on the averages lying in the lattice. They then showed how to choose such pairs efficiently and proved that this is sufficient to sample from any centered discrete Gaussian in $2^{n+o(n)}$ time---even for parameters $s$ below smoothing. \cite{ADS15} then extended this idea to arbitrary Gaussians (as opposed to just centered Gaussians) with very low parameters $s \gtrsim \dist(\vec{t}, \mathcal{L})/2^{n}$. In both cases, the sampler actually outputs exponentially many vectors from the desired distribution.
\paragraph{Sparsification. } The samplers in this work approach discrete Gaussian sampling in a completely different way. (Indeed, the author repeatedly tried and failed to modify the above techniques to work in our context.) Instead, as we described above, we use a new method of sampling based on lattice sparsification. This tool was originally introduced by Khot for the purposes of proving the hardness of approximating SVP~\cite{Khot05svp}. Khot analyzed the behavior of sparsification only on the specific lattices that arose in his reduction, which were cleverly designed to \scarequotes{behave nicely} when sparsified. Later, Dadush and Kun analyzed the behavior of sparsification over general lattices~\cite{DK13} and introduced the idea of adding a random shift to the target in order to obtain deterministic approximation algorithms for CVP in any norm. Dadush, Regev, and Stephens-Davidowitz used a similar algorithm to obtain a reduction from approximate CVP to the same problem with an upper bound on the distance to the lattice (and a slightly smaller approximation factor)~\cite{cvpp}. Our sparsification analysis in the CVP case is most similar to that of~\cite{cvpp}, though our reduction requires tighter analysis.
However, in the SVP case our analysis is quite different from that of prior work. In particular, we deal explicitly with primitive lattice vectors, which allows us to tightly analyze the behavior of sparsification without a random shift. This seems necessary for studying the distribution of the shortest vector of an arbitrary sparsified lattice, but prior work managed to avoid this by either working with a specific type of lattice or adding a random shift.
Our use case for sparsification is also novel. In all prior work, sparsification was used to \scarequotes{filter out annoying short vectors, leaving only desirable vectors behind.} We instead use it specifically to sample from the resulting distribution of the shortest or closest vector in the sparsified lattice. We suspect that this technique will have additional applications.
\paragraph{Dimension-preserving reductions. } More generally, this paper can be considered as part of a long line of work that studies the relationships between various lattice problems under dimension-preserving reductions. Notable examples include~\cite{GMSS99}, which showed that SVP reduces to CVP;~\cite{Micciancio08}, which gave a reduction from SIVP to CVP; and~\cite{LM09}, which showed the equivalence of uSVP, GapSVP, and BDD up to polynomial approximation factors. In particular, this work together with~\cite{Micciancio08} shows that exact SIVP, exact CVP, and DGS are all equivalent under dimension-preserving reductions. (See~\cite{latticereductions} for a summary of such reductions.)
\subsection{Directions for future work}
\paragraph{Centered DGS. } In this work, we completely characterize the complexity of arbitrary discrete Gaussian sampling by showing that it is equivalent to CVP under dimension-preserving reductions. But, the complexity of centered DGS is still unknown. This is therefore the most natural direction for future work. In particular, we show that centered DGS is no harder than SVP (and therefore no harder than NP), but our lower bound only shows that it is at least as hard as $\gamma$-approximate SVP for any $\gamma = \Omega(\sqrt{n/\log n})$. The decision version of SVP is not NP-hard for such high approximation factors unless the polynomial hierarchy collapses, so there is a relatively large gap between our lower and upper bounds. Indeed, for $\gamma = \Omega(\sqrt{n / \log n})$, the decision version of $\gamma$-approximate SVP is known to be in co-AM, and even in SZK~\cite{GG98}.\footnote{The search problem could still potentially be NP-hard for such high approximation factors without violating any widely believed complexity-theoretic conjectures. However, this seems unlikely.} We provide some (relatively weak) evidence to suggest that $\gamma = \Omega(\sqrt{n/\log n})$ is the best achievable approximation factor (see Section~\ref{sec:SVPtoDGS}), and we therefore ask whether centered DGS can be reduced to an easier problem---perhaps even the search variant of a problem in $\mathsf{NP} \cap \mathsf{co}\text{-}\mathsf{AM}$.
A related and arguably much more important question is whether there is an algorithm for centered DGS that is faster than the $2^{n+o(n)}$-time algorithm of~\cite{ADRS15}---perhaps a sampler that outputs only one sample, as opposed to exponentially many. Indeed,~\cite{ADRS15} discuss possible ways to improve their techniques to achieve a roughly $2^{n/2 + o(n)}$-time algorithm for centered DGS, and they make some progress towards this goal. It seems that entirely new techniques would be needed to achieve running times below $2^{n/2}$. Any algorithm with a substantially better constant in the exponent would be the asymptotically fastest algorithm to break nearly all lattice-based cryptographic schemes.
\paragraph{Reductions to approximate lattice problems. } We note that the sampling algorithm of~\cite{Klein00, GPV08, BLPRS13} and many of the DGS subroutines used in hardness proofs can be seen as dimension-preserving reductions from DGS with very high parameters to \emph{approximate} lattice problems. If one simply plugs an exact SVP solver into these reductions, they will still only work for very high parameters. (More specifically, these works can be seen as reducing DGS with $s \gtrsim \gamma \sqrt{\log n} \lambda_n(\mathcal{L})$ to $\gamma$-approximate SVP or SIVP.) Our reductions, on the other hand, can handle any parameter but only work with exact solvers.
We therefore ask if there are better reductions from DGS to \emph{approximate} lattice problems with a better lower bound on the parameter $s$ than the one obtained in~\cite{GPV08, BLPRS13}. Ideally, we would like a smooth trade-off between the approximation factor $\gamma$ and the lower bound on the parameter $s$ that matches our result that works for any $s$ in the exact case when $\gamma = 1$. But, any non-trivial improvement over~\cite{GPV08,BLPRS13} would be a major breakthrough. (A dimension-preserving reduction from DGS with parameter $s \gtrsim \sqrt{(\gamma - 1)/n} \cdot \dist(\vec{t},\mathcal{L} ) $ to $\gamma$-approximate CVP would show that the two problems are equivalent and therefore completely characterize DGS. Furthermore,~\cite{LiuLM06, cvpp} show that it actually suffices to handle cases when either $\dist(\vec{t}, \mathcal{L}) \gtrsim \sqrt{\log n/n} \cdot \lambda_1(\mathcal{L})$ or $s$ is above the smoothing parameter.)
Indeed, it is still plausible that we could obtain a dimension-preserving reduction from \emph{centered} DGS to $\gamma$-approximate SVP for some $1 < \gamma \lesssim \sqrt{n/\log n}$. A reduction with $\gamma = \Omega(\sqrt{n/\log n})$ would completely characterize the complexity of centered DGS, but it seems far out of reach. However, any non-trivial $\gamma > 1$ would be quite interesting. In fact, DGS is essentially equivalent to centered DGS above the smoothing parameter. (See, e.g.,~\cite[Section 5.4]{ADRS15}.) So, a result for centered DGS might also advance the study of arbitrary DGS above smoothing.
\section{Preliminaries}
For $\vec{x} \in \ensuremath{\mathbb{R}}^n$, we write $\length{\vec{x}}$ to represent the $\ell_2$ norm of $\vec{x}$. (Except for the last section, this is the only norm that we consider.) We write $r B_2^n$ to represent the (closed) ball of radius $r$ in $\ensuremath{\mathbb{R}}^n$, $r B_2^n := \{ \vec{x} \in \ensuremath{\mathbb{R}}^n\ : \ \length{\vec{x}} \leq r\}$. \full{We will make repeated use of the simple fact that $(1+1/\mathrm{poly}(n))^{1/C} = 1+1/\mathrm{poly}(n)$ for any constant $C$.}{Many of the proofs that are not included in this extended abstract can be found in the full version.}
\subsection{Lattices}
\full{A lattice $\mathcal{L}\subset \ensuremath{\mathbb{R}}^n$ is the set of all integer linear combinations of linearly independent vectors $\vec{b}_1, \ldots, \vec{b}_n \in \ensuremath{\mathbb{R}}^n$. The matrix $\ensuremath{\mathbf{B}} = (\vec{b}_1, \ldots, \vec{b}_n)$ is called a basis of the lattice. As the basis is not unique, we often refer to the lattice itself, as opposed to its representation by a particular basis.
}{}
We write $\lambda_1(\mathcal{L})$ for the length of a shortest non-zero vector in the lattice, and $\lambda_2(\mathcal{L})$ for the smallest $r > 0$ such that the lattice contains two linearly independent vectors of length at most $r$.
For any $\vec{t} \in \ensuremath{\mathbb{R}}^n$, we define
$
\dist(\vec{t}, \mathcal{L}) := \min_{\vec{x} \in \mathcal{L}} \length{\vec{x} - \vec{t}}$,
and the covering radius is then $\mu(\mathcal{L}) := \max_{\vec{t}} \dist(\vec{t}, \mathcal{L})$.
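For intuition, both quantities can be computed by brute-force enumeration for a small two-dimensional example (a sketch only; the coefficient range is chosen by hand to be large enough for this particular basis, and this is not an algorithm for general lattices):

```python
from itertools import product
from math import sqrt

def lattice_points(basis, bound):
    # all integer combinations with coefficients in [-bound, bound]
    dim = len(basis)
    for coeffs in product(range(-bound, bound + 1), repeat=dim):
        yield tuple(sum(c * b[j] for c, b in zip(coeffs, basis))
                    for j in range(len(basis[0])))

def lambda1(basis, bound=4):
    # length of a shortest non-zero lattice vector (within the enumeration range)
    return min(sqrt(sum(x * x for x in v))
               for v in lattice_points(basis, bound) if any(v))

def dist(t, basis, bound=4):
    # distance from the target t to the lattice (within the enumeration range)
    return min(sqrt(sum((x - ti) ** 2 for x, ti in zip(v, t)))
               for v in lattice_points(basis, bound))

basis = [(2, 0), (1, 2)]
l1 = lambda1(basis)        # the shortest non-zero vectors are +-(2, 0)
d = dist((0, 1), basis)    # the origin is a closest lattice vector to (0, 1)
```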
\full{We will need basic bounds on $\lambda_1(\mathcal{L})$ and $\mu(\mathcal{L})$ for rational lattices in terms of the bit length of the basis. (Many of our results are restricted to lattices and targets in $\mathbb{Q}^n$ entirely for the sake of bounds like this. We could instead work over the reals, provided that the chosen representation of real numbers leads to similar bounds.)}{}
\begin{lemma}
\label{lem:lambda1bitlength}
For any lattice $\mathcal{L} \subset \mathbb{Q}^n$ with basis $\ensuremath{\mathbf{B}} = (\vec{b}_1, \ldots, \vec{b}_n)$, let $m$ be a bound on the bit length of $\vec{b}_i$ for all $i$ in the natural representation of rational numbers. Then,
\[
2^{-nm} \leq \lambda_1(\mathcal{L}) \leq 2^{m}
\; ,
\]
and
\[
2^{-nm - 1} \leq \mu(\mathcal{L}) \leq n2^{m}
\; .
\]
\end{lemma}
\full{\begin{proof}
The first upper bound is trivial, as $\lambda_1(\mathcal{L}) \leq \length{\vec{b}_1} \leq 2^{m}$. For the lower bound, let $q_i$ be the minimal positive integer such that $q_i \vec{b}_i \in \ensuremath{\mathbb{Z}}^n$. Note that $q_i \leq 2^{m}$. Then, for any non-zero vector $\vec{x} \in \mathcal{L}$, we have $\vec{x} \cdot \prod_i q_i \in \ensuremath{\mathbb{Z}}^n \setminus \{ \vec0 \}$, so that $\length{\vec{x}} \cdot \prod_i q_i \geq 1$. Therefore, $\lambda_1(\mathcal{L}) \geq \prod_i q_{i}^{-1} \geq 2^{-n m}$.
Similarly, the lower bound on $\mu(\mathcal{L})$ is trivial, as $\mu(\mathcal{L}) \geq \lambda_1(\mathcal{L})/2 \geq 2^{-nm - 1}$. For the upper bound, we have $\mu(\mathcal{L}) \leq \sum \length{\vec{b}_i} \leq n2^{m}$.
\end{proof}}{}
\full{The following lemma is due to~\cite{BHW93}.
\begin{lemma}
\label{lem:weakKL}
For any lattice $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$ and $r > 0$,
\[
|\mathcal{L} \cap r B_2^n| \leq 1+ \Big(\frac{8r}{\lambda_1(\mathcal{L})} \Big)^{n}
\; .
\]
\end{lemma}}{}
\begin{corollary}
\label{cor:ballcountingbitlength}
For any lattice $\mathcal{L} \subset \mathbb{Q}^n$ with basis $(\vec{b}_1, \ldots, \vec{b}_n)$, $\vec{t} \in \mathbb{Q}^n$, and $r > 0$, let $m$ be a bound on the bit length of the $\vec{b}_i$ for all $i$ in the natural representation of rational numbers. Then,
\[
|(\mathcal{L} - \vec{t}) \cap rB_2^n| \leq 1 + (2+r)^{\mathrm{poly}(n,m)}
\; .
\]
\end{corollary}
\full{\begin{proof}
It suffices to bound $|\mathcal{L} \cap (r+\mu(\mathcal{L})) B_2^n|$. The result then follows by applying Lemma~\ref{lem:lambda1bitlength} and Lemma~\ref{lem:weakKL}.
\end{proof}}{}
\subsection{The discrete Gaussian distribution}
For $\vec{x} \in \ensuremath{\mathbb{R}}^n$ and $s > 0$, we write $\rho_s(\vec{x}) := e^{-\pi \length{\vec{x}}^2/s^2}$. For a discrete set $A \subset \ensuremath{\mathbb{R}}^n$, we write $\rho_s(A) := \sum_{\vec{x} \in A} \rho_s(\vec{x})$, and we define the discrete Gaussian distribution over $A$ with parameter $s$, $D_{A,s}$, as the distribution that assigns probability $\rho_s(\vec{x})/\rho_s(A)$ to each $\vec{x} \in A$. When $s = 1$, we omit it and simply write $\rho(\vec{x})$, $D_{\mathcal{L}}$, etc.
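For a finite set $A$, the distribution $D_{A,s}$ can be computed directly from the definition. The sketch below does this for a truncated window of $\ensuremath{\mathbb{Z}}$, which approximates $D_{\ensuremath{\mathbb{Z}},s}$ well for moderate $s$ since the discarded tail mass is negligible:

```python
from math import exp, pi

def rho(x, s=1.0):
    # Gaussian mass rho_s(x) = exp(-pi * ||x||^2 / s^2) of a single point
    return exp(-pi * sum(xi * xi for xi in x) / s**2)

def discrete_gaussian_pmf(A, s=1.0):
    # D_{A,s}: assigns probability rho_s(x) / rho_s(A) to each x in the finite set A
    total = sum(rho(x, s) for x in A)
    return {x: rho(x, s) / total for x in A}

# a truncated window of Z; for s = 2 this is a very good approximation of D_{Z, s}
A = [(z,) for z in range(-20, 21)]
pmf = discrete_gaussian_pmf(A, s=2.0)
```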
\full{Banaszczyk proved the following two useful bounds on the discrete Gaussian over lattices \cite{banaszczyk}.
\begin{lemma}
\label{lem:rhoLt}
For any lattice $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$, $s > 0$, and $\vec{t} \in \ensuremath{\mathbb{R}}^n$,
\[
\rho_s(\mathcal{L} - \vec{t}) \geq e^{-\pi \dist(\vec{t}, \mathcal{L})^2/s^2}\rho_s(\mathcal{L})
\; .
\]
\end{lemma}}{}
\begin{lemma}[{\cite[Lemma 2.13]{cvpp}}]
\label{lem:banaszczyk}
For any lattice $\mathcal{L}\subset\ensuremath{\mathbb{R}}^n$, $s > 0$, $\vec{t} \in \ensuremath{\mathbb{R}}^n$, and $r \geq1/\sqrt{2\pi}$,
\[
\Pr_{\vec{X} \sim D_{\mathcal{L} - \vec{t}, s}}[\length{\vec{X}} \geq r s\sqrt{n} ] < \frac{\rho_s(\mathcal{L})}{\rho_s(\mathcal{L} - \vec{t})}\big( \sqrt{2 \pi e r^2} \exp(-\pi r^2) \big)^n
\; .
\]
\end{lemma}
\full{With this, we derive a corollary similar to \cite[Corollary 2.7]{ADS15}.
\begin{corollary}
\label{cor:shiftedbanaszczyk}
For any lattice $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$, $s > 0$, $\vec{t} \in \ensuremath{\mathbb{R}}^n $, and $r \geq 1/\sqrt{2\pi}$,
\[
\Pr_{\vec{X} \sim D_{\mathcal{L} - \vec{t}, s}}[\length{\vec{X}}^2 \geq \dist(\vec{t}, \mathcal{L})^2 + r^2 s^2 n] < \big( \sqrt{2 \pi e r^{\prime 2}} \exp(-\pi r^{2}) \big)^n
\; ,
\]
where $r' := \sqrt{\dist(\vec{t}, \mathcal{L})^2/(s^2 n) + r^2}$.
In particular, if
\[r \geq 10\sqrt{\log \big(10 + \dist(\vec{t}, \mathcal{L})/(s \sqrt{n})\big)}
\; ,
\]
then
\[
\Pr_{\vec{X} \sim D_{\mathcal{L} - \vec{t}, s}}[\length{\vec{X}}^2 \geq \dist(\vec{t}, \mathcal{L})^2 + r^2 s^2 n] < e^{-r^2n}
\; .
\]
\end{corollary}
\begin{proof}
Combining the above two lemmas, we have
\begin{align*}
\Pr_{\vec{X} \sim D_{\mathcal{L} - \vec{t}, s}}[\length{\vec{X}}^2 \geq \dist(\vec{t}, \mathcal{L})^2 + r^2 s^2 n] &< e^{\pi \dist(\vec{t}, \mathcal{L})^2/s^2}\cdot \big( \sqrt{2 \pi e r^{\prime 2}} \exp(-\pi r^{\prime 2}) \big)^n \\
&= \big( \sqrt{2 \pi e r^{\prime 2}} \exp(-\pi r^{2}) \big)^n
\; ,
\end{align*}
as needed.
Now, suppose $r \geq 10\sqrt{\log \big(10 + \dist(\vec{t}, \mathcal{L})/(s \sqrt{n})\big)}$. We consider two cases. First, suppose $\frac{\dist(\vec{t}, \mathcal{L})}{s \sqrt{n}} < 1$. Then, we have
$
r^{\prime 2} < 2r^2 < \frac{e^{r^2}}{2\pi e}
$,
and the result follows. Otherwise, we have
\[
r^{\prime 2} = \frac{\dist(\vec{t}, \mathcal{L})^2}{s^2 n} \cdot (1+ r^2 s^2 n/\dist(\vec{t}, \mathcal{L})^2)< \frac{\dist(\vec{t}, \mathcal{L})^2}{s^2 n} \cdot \exp\Big(\frac{r^2 s^2 n}{\dist(\vec{t}, \mathcal{L})^2}\Big)
\;.
\]
So,
\begin{align*}
\sqrt{2 \pi e r^{\prime 2}} \exp(-\pi r^{2}) &< \frac{\dist(\vec{t}, \mathcal{L})}{s} \cdot \sqrt{2\pi e/n} \cdot \exp \Big( \frac{r^2 s^2 n}{2\dist(\vec{t}, \mathcal{L})^2} - \pi r^2\Big)\\
&< \frac{\dist(\vec{t}, \mathcal{L})}{s} \cdot \sqrt{2\pi e/n} \cdot e^{(1/2-\pi) r^2}\\
&< e^{- r^2} \; ,
\end{align*}
as needed.
\end{proof}}
{
\begin{corollary}
\label{cor:shiftedbanaszczyk}
For any lattice $\mathcal{L} \subset \mathbb{Q}^n$, $s > 0$, $\vec{t} \in \mathbb{Q}^n $, and $r\geq 10\sqrt{\log (10 + \dist(\vec{t}, \mathcal{L})/(s \sqrt{n}))}$,
\[
\Pr_{\vec{X} \sim D_{\mathcal{L} - \vec{t}, s}}[\length{\vec{X}}^2 \geq \dist(\vec{t}, \mathcal{L})^2 + r^2 s^2 n] < e^{-r^2n}
\; .
\]
\end{corollary}
}
\full{The following lemma is actually true for \scarequotes{almost all lattices,} in a certain precise sense that is outside the scope of this paper. (See, e.g.,~\cite{siegal45}.)
\begin{lemma}
\label{lem:randomlattice}
For any $n \geq 1$,
there is a lattice $\mathcal{L} \subset \mathbb{Q}^n$ such that for any $s > 0$, $\rho_s(\mathcal{L}) \geq 1+s^{n}$ and $\lambda_1(\mathcal{L}) > \sqrt{n}/10$.
\end{lemma}}
{}
\subsection{Lattice problems}
\full{
\begin{definition}
For any parameter $\gamma \geq 1$, $\gamma$-SVP (the Shortest Vector Problem) is the search problem defined as follows: The input is a basis $\ensuremath{\mathbf{B}}$ for a lattice $\mathcal{L} \subset \mathbb{Q}^n$. The goal is to output a lattice vector $\vec{x}$ with $0 < \length{\vec{x}} \leq \gamma \lambda_1(\mathcal{L})$.
\end{definition}
\begin{definition}
For any parameter $\gamma \geq 1$, $\gamma$-CVP (the Closest Vector Problem) is the search problem defined as follows: The input is a basis $\ensuremath{\mathbf{B}}$ for a lattice $\mathcal{L} \subset \mathbb{Q}^n$ and a target vector $\vec{t} \in \mathbb{Q}^n$. The goal is to output a lattice vector $\vec{x}$ with $\length{\vec{x} - \vec{t}} \leq \gamma \dist(\vec{t}, \mathcal{L})$.
\end{definition}
}
{
\begin{definition}
SVP (the Shortest Vector Problem) is the search problem defined as follows: The input is a basis $\ensuremath{\mathbf{B}}$ for a lattice $\mathcal{L} \subset \mathbb{Q}^n$. The goal is to output a lattice vector $\vec{x}$ with $ \length{\vec{x}} = \lambda_1(\mathcal{L})$.
\end{definition}
\begin{definition}
CVP (the Closest Vector Problem) is the search problem defined as follows: The input is a basis $\ensuremath{\mathbf{B}}$ for a lattice $\mathcal{L} \subset \mathbb{Q}^n$ and a target vector $\vec{t} \in \mathbb{Q}^n$. The goal is to output a lattice vector $\vec{x}$ with $\length{\vec{x} - \vec{t}} = \dist(\vec{t}, \mathcal{L})$.
\end{definition}
}
\full{We will mostly be interested in the exact case, when $\gamma = 1$, in which case we simply write SVP and CVP respectively. Note that there may be many shortest lattice vectors or closest lattice vectors to $\vec{t}$.}{}
\begin{definition}
For $\gamma \geq 1$ and $\eps \ge 0$, we say that a distribution $X$ is $(\gamma, \eps)$-close to a distribution $Y$ if there is another distribution $X'$ with the same support as $Y$ such that
\begin{enumerate}
\item the statistical distance between $X$ and $X'$ is at most $\eps$; and
\item for all $x$ in the support of $Y$,
$
\Pr[Y = x]/\gamma \leq \Pr[X' = x] \leq \gamma \Pr[Y = x]
$
.
\end{enumerate}
\end{definition}
\begin{definition}
For any parameters $\eps \geq 0$ and $\gamma \geq 1$, $(\gamma, \eps)$-DGS (the Discrete Gaussian Sampling problem) is defined as follows:
The input is a basis $\ensuremath{\mathbf{B}}$ for a lattice $\mathcal{L} \subset \mathbb{Q}^n$, a shift $\vec{t} \in \mathbb{Q}^n$, and a (rational) parameter $s > 0$. The goal is to output a vector whose distribution is $(\gamma, \eps)$-close to $D_{\mathcal{L} - \vec{t}, s}$.
\end{definition}
\begin{definition}
For any parameters $\eps \geq 0$ and $\gamma \geq 1$, $(\gamma, \eps)$-cDGS (the centered Discrete Gaussian Sampling problem) is defined as follows:
The input is a basis $\ensuremath{\mathbf{B}}$ for a lattice $\mathcal{L} \subset \mathbb{Q}^n$ and a (rational) parameter $s > 0$. The goal is to output a vector whose distribution is $(\gamma, \eps)$-close to $D_{\mathcal{L}, s}$.
\end{definition}
\full{DGS is typically defined with an additional parameter $\sigma \geq 0$, such that the algorithm only needs to output discrete Gaussian samples if $s > \sigma$. Since both of our reductions achieve $\sigma = 0$, we omit this parameter.}{}
\full{\subsection{Algorithms for one-dimensional Gaussians}
Brakerski, Langlois, Peikert, Regev, and Stehl\'e show how to efficiently sample from the one-dimensional discrete Gaussian $D_{\ensuremath{\mathbb{Z}} + c, s}$ for any $c \in \ensuremath{\mathbb{R}}$ and $s > 0$~\cite{BLPRS13}. For completeness, we describe a slightly modified version of their algorithm to sample from $D_{\ensuremath{\mathbb{Z}} \setminus \{ 0 \}, s}$.
\begin{lemma}
\label{lem:sampleZ}
There is an algorithm that samples from $D_{\ensuremath{\mathbb{Z}} \setminus \{ 0 \}, s}$ for any $s > 0$ in (expected) polynomial time.
\end{lemma}
\begin{proof}
We describe an algorithm that samples from $D_{\ensuremath{\mathbb{Z}}^+, s}$, which is clearly sufficient.
Let $Z := e^{-\pi/s^2} + \int_1^\infty e^{-\pi x^2/s^2} {\rm d} x$. The algorithm outputs $1$ with probability $e^{-\pi/s^2}/Z$. Otherwise, it samples $x$ from the one-dimensional continuous Gaussian with parameter $s$ restricted to the interval $(1,\infty)$. Let $y := \ceil{x}$. With probability $e^{-\pi(y^2-x^2)/s^2}$, the algorithm outputs $y$. Otherwise, it repeats.
On a single run of the algorithm, for any integer $z \geq 2$, the probability that the algorithm outputs $z$ is
\[
\frac{1}{Z} \cdot \int_{z-1}^z e^{-\pi x^2/s^2}\cdot e^{-\pi(z^2-x^2)/s^2} {\rm d}x = \frac{e^{-\pi z^2/s^2}}{Z}
\;.
\]
And, the probability that the algorithm outputs $1$ is of course $e^{-\pi/s^2}/Z$. So, the algorithm outputs the correct distribution.
It remains to bound the expected running time. After a single run, the algorithm outputs an integer with probability
\[
\frac{\rho_s(\ensuremath{\mathbb{Z}}^+)}{Z} = \frac{\rho_s(\ensuremath{\mathbb{Z}}^+)}{e^{-\pi/s^2} + \int_1^\infty e^{-\pi x^2/s^2} {\rm d}x } \geq \frac{1}{2}
\; .
\]
It follows that it runs in expected polynomial time.
\end{proof}
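The procedure above translates directly into code. The sketch below follows the proof step by step; the only added ingredient is that the restriction of the continuous Gaussian to $(1,\infty)$ is itself implemented by rejection, which is adequate for moderate $s$ but slow for very small $s$.

```python
import math
import random

def sample_Zplus(s, rng):
    """Sample from D_{Z^+, s} by the rejection procedure in the proof."""
    w1 = math.exp(-math.pi / s**2)
    # Z = e^{-pi/s^2} + integral_1^infty e^{-pi x^2/s^2} dx, computed via erfc
    Z = w1 + (s / 2) * math.erfc(math.sqrt(math.pi) / s)
    while True:
        if rng.random() < w1 / Z:
            return 1
        # continuous Gaussian with parameter s (std dev s / sqrt(2*pi)),
        # restricted to (1, infty) by rejection
        while True:
            x = rng.gauss(0.0, s / math.sqrt(2 * math.pi))
            if x > 1:
                break
        y = math.ceil(x)
        if rng.random() < math.exp(-math.pi * (y * y - x * x) / s**2):
            return y

rng = random.Random(1)
s = 2.0
samples = [sample_Zplus(s, rng) for _ in range(20000)]
freq1 = samples.count(1) / len(samples)  # rho_s(1) / rho_s(Z^+), about 0.91 for s = 2
```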
Furthermore, we will need to efficiently compute $\rho_s(\ensuremath{\mathbb{Z}} \setminus \{0 \})$ for arbitrary $s$. Brakerski et al. give a simple algorithm for this problem as well. (Here, we ignore the bit-level concerns of what it means to \scarequotes{efficiently compute} a real number, as this will not be an issue for us.)
\begin{claim}
\label{clm:computerhoZ}
There is an efficient algorithm that computes $\rho_s(\ensuremath{\mathbb{Z}} \setminus \{0 \})$.
\end{claim}}
{}
\full{
\subsection{\texorpdfstring{Lattice vectors mod $p$ and $\ensuremath{\mathbb{Z}}_p^n$}{Lattice vectors mod p and Zpn}}
\label{sec:sparseprelims}
Our primary technical tool will be lattice sparsification, in which we consider the sublattice
\[\mathcal{L}' := \{ \vec{x} \in \mathcal{L}\ :\ \inner{\vec{z}, \ensuremath{\mathbf{B}}^{-1}\vec{x}} \equiv 0 \bmod p \}
\; ,
\] where $p$ is some prime, $\vec{z} \in \ensuremath{\mathbb{Z}}_p^n$ is uniformly random, and $\ensuremath{\mathbf{B}}$ is a basis of the lattice $\mathcal{L} \subset \mathbb{Q}^n$. As such, we will need some lemmas concerning the behavior of lattice vectors mod $p\mathcal{L} $. We first simply note that we can compute $\mathcal{L}'$ efficiently.
\begin{claim}
\label{clm:efficientsparse}
There is a polynomial-time algorithm that takes as input a basis $\ensuremath{\mathbf{B}}$ for a lattice $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$, a number $p \in \ensuremath{\mathbb{Z}}^+$, and a vector $\vec{z} \in \ensuremath{\mathbb{Z}}_p^n$ and outputs a basis $\ensuremath{\mathbf{B}}'$ for
\[\mathcal{L}' := \{ \vec{x} \in \mathcal{L}\ :\ \inner{\vec{z}, \ensuremath{\mathbf{B}}^{-1}\vec{x}} \equiv 0 \bmod p \}
\; .
\]
\end{claim}
\begin{proof}
On input $\ensuremath{\mathbf{B}} = (\vec{b}_1,\ldots, \vec{b}_n)$, $p \in \ensuremath{\mathbb{Z}}^+$, and $\vec{z} = (z_1,\ldots, z_n) \in \ensuremath{\mathbb{Z}}_p^n$, if $\vec{z} = \vec0$, the algorithm simply outputs $\ensuremath{\mathbf{B}}$. Otherwise, we assume without loss of generality that $z_n \neq 0$ and that $z_n$ is invertible mod $p$ (recall that $p$ is prime in all of our applications). Since multiplying $\vec{z}$ by any non-zero scalar mod $p$ leaves $\mathcal{L}'$ unchanged, we may further assume that $z_n = 1$. The algorithm then computes $\ensuremath{\mathbf{B}}^{-T} = (\vec{b}_1^*, \ldots, \vec{b}_n^*)$. It sets
\[
\hat{\ensuremath{\mathbf{B}}} := \Big(\vec{b}_1^*, \ldots, \vec{b}_{n-1}^*, \frac{1}{p}\sum z_i \vec{b}_i^* \Big)
\; .
\]
Finally, it outputs $\ensuremath{\mathbf{B}}' := \hat{\ensuremath{\mathbf{B}}}^{-T}$.
A quick computation shows that $\hat{\ensuremath{\mathbf{B}}}$ has full rank and that $\ensuremath{\mathbf{B}}'$ is indeed a basis for $\mathcal{L}'$.
\end{proof}
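The construction can be carried out in exact rational arithmetic. The sketch below first scales $\vec{z}$ so that $z_n = 1$ (this leaves $\mathcal{L}'$ unchanged and assumes $z_n$ is invertible mod $p$, as when $p$ is prime) and then forms $\hat{\ensuremath{\mathbf{B}}}$ and $\ensuremath{\mathbf{B}}' = \hat{\ensuremath{\mathbf{B}}}^{-T}$:

```python
from fractions import Fraction

def mat_inv(M):
    """Exact inverse of a square matrix of Fractions via Gauss-Jordan elimination."""
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [x / d for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

def sparsified_basis(basis, p, z):
    """Basis of L' = {x in L : <z, B^{-1} x> = 0 mod p} via the dual-basis
    construction, assuming z[-1] is invertible mod p (e.g., p prime).
    basis: list of basis vectors b_1, ..., b_n."""
    n = len(basis)
    zinv = pow(z[-1], -1, p)
    z = [zi * zinv % p for zi in z]       # scale so that z_n = 1; L' is unchanged
    M = [[Fraction(basis[c][r]) for c in range(n)] for r in range(n)]  # columns = b_i
    dual = mat_inv(M)                      # row i of M^{-1} is the dual vector b_i^*
    w = [sum(Fraction(z[i]) * dual[i][r] for i in range(n)) / p for r in range(n)]
    H = [[dual[c][r] if c < n - 1 else w[r] for c in range(n)] for r in range(n)]
    return mat_inv(H)                      # rows of H^{-1} = columns of hat{B}^{-T}

# example: L = Z^3, p = 5, z = (2, 3, 1)
Bprime = sparsified_basis([(1, 0, 0), (0, 1, 0), (0, 0, 1)], 5, (2, 3, 1))
```

For the example above, the returned vectors satisfy $\inner{\vec{z}, \vec{x}} \equiv 0 \bmod 5$, and the index of the sublattice they generate is exactly $p = 5$.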
Since we will only be concerned with the coordinates of the vectors mod $p$, it will suffice to work over $\ensuremath{\mathbb{Z}}_p^n$.
\begin{lemma}
\label{lem:almostindependent}
For any prime $p$ and collection of vectors $\vec{x}, \vec{v}_1,\ldots, \vec{v}_N \in \ensuremath{\mathbb{Z}}_p^n \setminus \{\vec0 \}$ such that $\vec{x}$ is not a scalar multiple of any of the $\vec{v}_i$, we have
\[
\frac{1}{p} - \frac{N}{p^2} \leq \Pr\big[\inner{\vec{z}, \vec{x}} \equiv 0 \bmod p \text{ and } \inner{\vec{z}, \vec{v}_i} \not\equiv 0 \bmod p \ \forall i \big] \leq \frac{1}{p}
\; ,
\]
where $\vec{z} $ is sampled uniformly at random from $\ensuremath{\mathbb{Z}}_p^n$.
\end{lemma}
\begin{proof}
For the upper bound, it suffices to note that, since $\vec{x}$ is non-zero, $\inner{\vec{z}, \vec{x}}$ is uniformly distributed over $\ensuremath{\mathbb{Z}}_p$. Therefore, $\Pr[\inner{\vec{z}, \vec{x}} \equiv 0 \bmod p] = 1/p$.
For the lower bound, note that $A := \{ \vec{y} \in \ensuremath{\mathbb{Z}}_p^n \ : \ \inner{\vec{y}, \vec{x}} \equiv 0 \bmod p \}$ and $B_i := \{ \vec{y} \in \ensuremath{\mathbb{Z}}_p^n \ : \ \inner{\vec{y}, \vec{v}_i} \equiv 0 \bmod p \}$ are distinct subspaces of dimension $n-1$. Therefore, $A \cap B_i$ is a subspace of dimension $n-2$ with $p^{n-2}$ elements. Let $B := \bigcup B_i$. It follows that
\begin{align*}
\Pr\big[\inner{\vec{z}, \vec{x}} \equiv 0 \bmod p \text{ and } \inner{\vec{z}, \vec{v}_i} \not\equiv 0 \bmod p \big] &= \frac{|A \setminus B|}{|\ensuremath{\mathbb{Z}}_p^n |}\\
&\geq \frac{|A| - \sum_i |A \cap B_i|}{|\ensuremath{\mathbb{Z}}_p^n|}\\
&= \frac{p^{n-1} - N p^{n-2}}{p^n}\\
&= \frac{1}{p} -\frac{N}{p^2}
\; .
\end{align*}
\end{proof}
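Since the sample space is small, the bounds of the lemma can be checked by exhaustive enumeration for small parameters (an illustration only; the vectors below are arbitrary choices, none of which is a scalar multiple of $\vec{x}$ mod $p$):

```python
from itertools import product

p = 7
x = (1, 0, 0)
vs = [(0, 1, 0), (1, 1, 0), (1, 2, 3)]  # none is a scalar multiple of x mod 7
N = len(vs)

good = 0
for z in product(range(p), repeat=3):
    if sum(a * b for a, b in zip(z, x)) % p != 0:
        continue
    if all(sum(a * b for a, b in zip(z, v)) % p != 0 for v in vs):
        good += 1
pr = good / p**3
# the lemma guarantees 1/p - N/p^2 <= pr <= 1/p
```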
\begin{corollary}
\label{cor:shiftedindependence}
For any prime $p$, collection of vectors $\vec{v}_1,\ldots, \vec{v}_N \in \ensuremath{\mathbb{Z}}_p^n$, and $\vec{x} \in \ensuremath{\mathbb{Z}}_p^n$ with $\vec{x} \neq \vec{v}_i$ for any $i$, we have
\[
\frac{1}{p} - \frac{N}{p^2} - \frac{N}{p^{n-1}} \leq \Pr\big[\inner{\vec{z}, \vec{x} + \vec{c}} \equiv 0 \bmod p \text{ and } \inner{\vec{z}, \vec{v}_i + \vec{c}} \not\equiv 0 \bmod p \ \forall i \big] \leq \frac{1}{p} + \frac{1}{p^{n}}
\; ,
\]
where $\vec{z}$ and $\vec{c} $ are sampled uniformly and independently at random from $\ensuremath{\mathbb{Z}}_p^n$.
\end{corollary}
\begin{proof}
For the upper bound, note that $\vec{x} + \vec{c} = \vec0$ with probability $1/p^n$, and that conditioned on $\vec{x} + \vec{c} \neq \vec0$, the inner product $\inner{\vec{z}, \vec{x} + \vec{c}}$ is uniformly distributed over $\ensuremath{\mathbb{Z}}_p$. Therefore, $\Pr[\inner{\vec{z}, \vec{x} + \vec{c}} \equiv 0 \bmod p] \leq \frac{1}{p} + \frac{1}{p^n}$.
Turning to the lower bound, note that for any $i$, we have $\Pr[\vec{v}_i + \vec{c} = \vec0] = 1/p^n$. By a union bound, the probability that $\vec{v}_i + \vec{c} = \vec0$ for some $i$ is at most $N/p^n$. Now, fix $i$, and note that if there exists some $\alpha \in \ensuremath{\mathbb{Z}}_p \setminus \{1 \}$ such that $\alpha(\vec{v}_i + \vec{c}) = \vec{x} + \vec{c}$, then we must have
\[
\vec{c} = \frac{\alpha \vec{v}_i - \vec{x}}{1-\alpha}
\; .
\]
There are therefore at most $p-1$ values for $\vec{c}$ that satisfy the above---one for each value of $\alpha$. So, the probability that $\vec{c}$ satisfies the above equation for some $\alpha$ is at most $(p-1)/p^n$. Taking a union bound over all $i$, we see that the probability that $\vec{x} + \vec{c}$ is a multiple of any of the $\vec{v}_i + \vec{c}$ is at most $N(p-1)/p^n$. The result then follows from Lemma~\ref{lem:almostindependent} and a union bound.
\end{proof}}
{}
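The shifted variant can be checked the same way, now enumerating both $\vec{z}$ and $\vec{c}$; the parameters below are illustrative, and the only hypothesis needed is $\vec{x} \neq \vec{v}_i$ for all $i$.

```python
import itertools
from fractions import Fraction

# Exact probability in the corollary for small parameters, enumerating
# both z and c.  The vectors x and v_1 are illustrative choices.
p, n = 3, 3
x = (1, 0, 0)
vs = [(0, 1, 0)]
N = len(vs)

hits = 0
for z in itertools.product(range(p), repeat=n):
    for c in itertools.product(range(p), repeat=n):
        if sum(zi * (xi + ci) for zi, xi, ci in zip(z, x, c)) % p != 0:
            continue
        if all(sum(zi * (vi + ci) for zi, vi, ci in zip(z, v, c)) % p != 0
               for v in vs):
            hits += 1
pr = Fraction(hits, p ** (2 * n))
assert Fraction(1, p) - Fraction(N, p ** 2) - Fraction(N, p ** (n - 1)) <= pr
assert pr <= Fraction(1, p) + Fraction(1, p ** n)
```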
\subsection{Primitive lattice vectors}
\full{For a lattice $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$, we say that $\vec{x} \in \mathcal{L}$ is non-primitive in $\mathcal{L}$ if $\vec{x} = k \vec{y}$ for some $\vec{y} \in \mathcal{L}$ and $k \geq 2$. Otherwise, $\vec{x}$ is primitive in $\mathcal{L}$. }{}Let $\mathcal{L}^{\mathrm{prim}}$ be the set of primitive vectors in $\mathcal{L}$. For a radius $r > 0$, let $\xi(\mathcal{L}, r) := |\mathcal{L}^{\mathrm{prim}} \cap r B_2^n|/2 $ be the number of primitive lattice vectors in a (closed) ball of radius $r$ around the origin (counting $\vec{x}$ and $-\vec{x}$ as a single vector).
\full{We will need the following technical lemma, which shows that relatively short primitive vectors cannot be scalar multiples of each other mod $p$.
\begin{lemma}
\label{lem:nogoodnameforthislemma}
For any lattice $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$ with basis $\ensuremath{\mathbf{B}}$, suppose $\vec{x}_1, \vec{x}_2 \in \mathcal{L}$ are primitive with $\vec{x}_1 \neq \pm \vec{x}_2$ and $\length{\vec{x}_1} \geq \length{\vec{x}_2}$ such that
\[
\ensuremath{\mathbf{B}}^{-1}\vec{x}_1 \equiv \alpha \ensuremath{\mathbf{B}}^{-1}\vec{x}_2 \bmod p
\;
\]
for some prime $p \geq 100$ and $\alpha \in \ensuremath{\mathbb{Z}}_p$. Then, $\xi(\mathcal{L}, \length{\vec{x}_1}) > p/(20\log p)$.
\end{lemma}
\begin{proof}
We assume $\alpha \neq 0$, since otherwise $\vec{x}_1$ is not even primitive.
So, we have that $\vec{x}_1 - q \vec{x}_2 \in p \mathcal{L} \setminus \{ \vec0\}$ for some integer $q \equiv \alpha \bmod p $ with $0 < |q| \leq p/2$. Let $\vec{y} := (\vec{x}_1 - q \vec{x}_2)/p \in \mathcal{L}$ and note that $\vec{y}$ is not a multiple of $\vec{x}_2$. It suffices to find at least $\ceil{p/(20 \log p)}$ primitive vectors in the lattice spanned by $\vec{y}$ and $\vec{x}_2$ that are at least as short as $\vec{x}_1$.
We consider two cases. If $q = \pm 1$, then for $i = 0, \ldots, p-1$, the vectors $i\vec{y} + q \vec{x}_2$ are clearly primitive in the lattice spanned by $\vec{y}$ and $\vec{x}_2$, and we have
\[
\length{i\vec{y} + q \vec{x}_2} = \length{i\vec{x}_1 + q(p-i)\vec{x}_2}/p \leq \length{\vec{x}_1}
\; ,
\]
as needed.
Now, suppose $|q| > 1$. Then, for $i = \ceil{p/4},\ldots, \floor{p/2}$, let $k_i$ be an integer such that $|k_i - iq/p| \leq 1/2$ and $0 < |k_i| < i$. (Note that such an integer exists, since $1/2 \leq |iq/p| \leq i/2$). Then,
\begin{align*}
\length{i\vec{y} + k_i\vec{x}_2} &= \length{i \vec{x}_1/p + (k_i -iq/p) \vec{x}_2} \leq \length{\vec{x}_1}
\;.
\end{align*}
When $i$ is prime, then since $0 < |k_i| < i$, we must have $\gcd(i, k_i) = 1$. Therefore, the vector $i\vec{y} + k_i\vec{x}_2$ must be primitive in the lattice spanned by $\vec{y}$ and $\vec{x}_2$ when $i$ is prime. It follows from a suitable effective version of the Prime Number Theorem that there are at least $\ceil{p/(20 \log p)}$ primes between $\ceil{p/4}$ and $\floor{p/2}$ (see, e.g., \cite{rosser41}), as needed.
\end{proof}
We next show that we can find many primitive lattice vectors in a suitably large ball around $\vec0$.}
{}
\begin{lemma}
\label{lem:notdegenerate}
For any lattice $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$ and radius $r \geq \lambda_2(\mathcal{L})$,
\[
\xi(\mathcal{L}, r) > \frac{\sqrt{r^2-\lambda_2(\mathcal{L})^2}}{\lambda_1(\mathcal{L})} + \Big\lfloor \frac{r-\lambda_2(\mathcal{L})}{\lambda_1(\mathcal{L})} \Big\rfloor
\; .
\]
\end{lemma}
\full{\begin{proof}
Let $\vec{v}_1 , \vec{v}_2 \in \mathcal{L}$ with $\length{\vec{v}_i} = \lambda_i(\mathcal{L})$ and $\inner{\vec{v}_1, \vec{v}_2} \geq 0$. Then, for $k = 0, \ldots, \floor{\sqrt{r^2 - \lambda_2(\mathcal{L})^2}/\lambda_1(\mathcal{L})}$,
\[
\length{\vec{v}_2 - k\vec{v}_1}^2 = \lambda_2(\mathcal{L})^2 + k^2 \lambda_1(\mathcal{L})^2 - 2k \inner{\vec{v}_1, \vec{v}_2} \leq r^2
\; .
\]
Similarly, for $k = 1, \ldots, \floor{(r-\lambda_2(\mathcal{L}))/\lambda_1(\mathcal{L})}$,
\[
\length{\vec{v}_2 + k \vec{v}_1} \leq \lambda_2(\mathcal{L}) + k \lambda_1(\mathcal{L}) \leq r
\; .
\]
The result follows by noting that all of these vectors are distinct and primitive in the lattice generated by $\vec{v}_1, \vec{v}_2$ (as is $\vec{v}_1$).
\end{proof}}{}
\full{\subsection{Probability}
We will also need the Chernoff-Hoeffding bound~\cite{hoeffding}.
\begin{lemma}[Chernoff-Hoeffding bound]
\label{lem:chernoff}
Let $X_1, \ldots, X_N $ be independent and identically distributed random variables with $0 \leq X_i \leq 1$ and $\overline{X} := \expect[X_i]$. Then, for $s > 0$
\[
\Pr\Big[N\overline{X} - \sum X_i \geq s \Big] \leq e^{-s^2/N}
\; ,
\]
and
\[
\Pr\Big[\sum X_i - N\overline{X} \geq s \Big] \leq e^{-s^2/N}
\; .
\]
\end{lemma}}
{}
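As a concrete illustration (not needed for the proofs), the bound can be verified exactly for i.i.d.\ fair coin flips, where the upper tail is an explicit binomial sum; the parameters below are arbitrary.

```python
from math import comb, exp

# Exact upper tail of Binomial(N, 1/2) versus the Hoeffding bound
# Pr[sum X_i - N * 1/2 >= s] <= exp(-s^2 / N).
N, s = 30, 6
tail = sum(comb(N, k) for k in range(N // 2 + s, N + 1)) / 2 ** N
assert tail <= exp(-s * s / N)
```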
\section{DGS to CVP reduction}
\label{sec:DGStoCVP}
\subsection{Sparsify and shift}
We now present the main sparsification result that we require. In particular, Theorem~\ref{thm:shiftedsparsification}\full{ (which is immediate from the work done in Section~\ref{sec:sparseprelims}, and is presented in this form here for the reader's convenience)}{} shows the generic behavior of the sparsification procedure. Proposition~\ref{prop:shiftedsparsifier} then applies the theorem to show how sparsification interacts with a CVP oracle.
\begin{theorem}
\label{thm:shiftedsparsification}
For any lattice $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$ with basis $\ensuremath{\mathbf{B}}$, prime $p$, and lattice vectors $\vec{x}, \vec{y}_1,\ldots, \vec{y}_N \in \mathcal{L}$ such that $\ensuremath{\mathbf{B}}^{-1}\vec{x} \not\equiv \ensuremath{\mathbf{B}}^{-1}\vec{y}_i \bmod p$ for all $i$, we have
\[
\frac{1}{p} - \frac{N}{p^2} - \frac{N}{p^{n-1}} \leq \Pr[\inner{\vec{z}, \ensuremath{\mathbf{B}}^{-1}\vec{x} + \vec{c}} \equiv 0 \text{ and } \inner{\vec{z}, \ensuremath{\mathbf{B}}^{-1}\vec{y}_i + \vec{c}} \not\equiv 0 \bmod p \ \forall i] \leq
\frac{1}{p} + \frac{1}{p^n} \; ,
\]
where $\vec{z}, \vec{c} \in \ensuremath{\mathbb{Z}}_p^n$ are chosen uniformly and independently at random.
\end{theorem}
\begin{proof}
Simply apply Corollary~\ref{cor:shiftedindependence} to $\ensuremath{\mathbf{B}}^{-1}\vec{x}$ and $\ensuremath{\mathbf{B}}^{-1}\vec{y}_i$.
\end{proof}
\begin{proposition}
\label{prop:shiftedsparsifier}
There is a polynomial-time algorithm that takes as input a basis $\ensuremath{\mathbf{B}}$ for a lattice $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$ and a prime $p$
and outputs a full-rank sublattice $\mathcal{L}' \subseteq \mathcal{L}$ and shift $\vec{w} \in \mathcal{L}$ such that, for any $\vec{t} \in \ensuremath{\mathbb{R}}^n$, $\vec{x} \in \mathcal{L}$ with $N:= |(\mathcal{L} - \vec{t}) \cap \length{\vec{x} - \vec{t}} \cdot B_2^n| -1 < p$, and any $\problem{CVP}$ oracle,
\[\frac{1}{p} - \frac{N}{p^2} - \frac{N}{p^{n-1}} \leq \Pr[\problem{CVP}(\vec{t} + \vec{w}, \mathcal{L}') = \vec{x} + \vec{w}] \leq
\frac{1}{p} + \frac{1}{p^n}
\; .
\]
\full{In particular,
\[
\frac{N}{p} - \frac{N^2}{p^2} - \frac{N^2}{p^{n-1}} \leq \Pr[\dist(\vec{t} + \vec{w}, \mathcal{L}') \leq \length{\vec{x} - \vec{t}}] \leq \frac{N}{p} + \frac{N}{p^n}
\; .
\]}{}
\end{proposition}
\full{\begin{proof}
On input $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$ with basis $\ensuremath{\mathbf{B}}$ and $p$, the algorithm samples $\vec{z}, \vec{c} \in \ensuremath{\mathbb{Z}}_p^n$ uniformly and independently at random. It then returns the sublattice
\[
\mathcal{L}' := \{ \vec{x} \in \mathcal{L}\ :\ \inner{\vec{z}, \ensuremath{\mathbf{B}}^{-1}\vec{x}} \equiv 0 \bmod p\}
\; ,
\]
and the shift $\vec{w} := \ensuremath{\mathbf{B}}\vec{c}$.
By Claim~\ref{clm:efficientsparse}, the algorithm can be run in polynomial time.
Let $\vec{y}_1,\ldots, \vec{y}_{N} \in \mathcal{L}$ be the distinct vectors satisfying $\length{\vec{y}_i - \vec{t}} \leq \length{\vec{x} - \vec{t}}$ and $\vec{y}_i \neq \vec{x}$.
Note that $\problem{CVP}(\mathcal{L}', \vec{t} + \vec{w})$ must be $\vec{x} + \vec{w}$ if $\inner{\vec{z}, \ensuremath{\mathbf{B}}^{-1} \vec{y}_i + \vec{c}} \not\equiv 0 \bmod p$ for all $i $ \emph{and} $\inner{\vec{z}, \ensuremath{\mathbf{B}}^{-1} \vec{x} + \vec{c}} \equiv 0 \bmod p$. We therefore wish to apply Theorem~\ref{thm:shiftedsparsification}, which requires showing that $\ensuremath{\mathbf{B}}^{-1}\vec{y}_i \not\equiv \ensuremath{\mathbf{B}}^{-1} \vec{x} \bmod p$ for all $i$.
Suppose on the contrary that $\ensuremath{\mathbf{B}}^{-1}\vec{y}_i \equiv \ensuremath{\mathbf{B}}^{-1} \vec{x} \bmod p$ for some $i$. Then, $\vec{y} := \vec{y}_i - \vec{x} \in p\mathcal{L} \setminus \{ \vec0\}$, and there are therefore $p+1$ lattice vectors on the line segment between $\vec{y}_i$ and $\vec{x}$ (including the two endpoints). Note that all of these vectors are at least as close to $\vec{t}$ as $\vec{x}$. But, there can be at most $N+1 < p+1$ such vectors, a contradiction. Therefore, we can apply Theorem~\ref{thm:shiftedsparsification}, yielding the result.
\end{proof}}
{}
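For $\mathcal{L} = \ensuremath{\mathbb{Z}}^2$ (so $\ensuremath{\mathbf{B}} = I$), the final display can be checked exhaustively: the event $\dist(\vec{t} + \vec{w}, \mathcal{L}') \leq r$ holds iff some lattice point $\vec{x}$ with $\length{\vec{x} - \vec{t}} \leq r$ satisfies $\inner{\vec{z}, \vec{x} + \vec{c}} \equiv 0 \bmod p$. The target and radius below are illustrative choices, and $N$ is taken to be the number of lattice points in the ball.

```python
import itertools

# Exhaustive check over all (z, c) in Z_p^2 x Z_p^2 for L = Z^2, B = I.
p, n = 5, 2
t, r = (0.4, 0.3), 0.6  # illustrative target and radius
ball = [x for x in itertools.product(range(-2, 3), repeat=n)
        if (x[0] - t[0]) ** 2 + (x[1] - t[1]) ** 2 <= r * r]
N = len(ball)

hits = 0
for z in itertools.product(range(p), repeat=n):
    for c in itertools.product(range(p), repeat=n):
        # dist(t + Bc, L') <= r iff some x in the ball has <z, x + c> = 0 mod p
        if any((z[0] * (x[0] + c[0]) + z[1] * (x[1] + c[1])) % p == 0
               for x in ball):
            hits += 1
pr = hits / p ** (2 * n)
assert N / p - N ** 2 / p ** 2 - N ** 2 / p ** (n - 1) <= pr <= N / p + N / p ** n
```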
As a consequence of Proposition~\ref{prop:shiftedsparsifier}, we show that we can use a CVP oracle to sample nearly uniformly from the lattice points in a ball around $\vec{t}$. This relatively straightforward algorithm is the core idea behind our reduction. For simplicity, we provide the algorithm with an estimate of the number of points inside the ball as input. (In the next section, we show how to obtain this estimate using roughly the same techniques.) \full{}{The proof is in the full version.}
\begin{lemma}
\label{lem:shifteduniformsampler}
For any efficiently computable $f(n)$ with $2\leq f(n) \leq \mathrm{poly}(n)$, there is an algorithm with access to a CVP oracle that takes as input a lattice $\mathcal{L} \subset \mathbb{Q}^n$, shift $\vec{t} \in \mathbb{Q}^n$, radius $r > 0$, and integer $N \geq 1$ and outputs a vector $\vec{y}$ such that, if
\[
N \leq |\mathcal{L} \cap (rB_2^n+ \vec{t})| \leq f(n) N
\; ,
\]
then the algorithm runs in expected polynomial time, and for any $\vec{x} \in \mathcal{L} \cap (rB_2^n+ \vec{t})$,
\[
\frac{\gamma^{-1}}{ |\mathcal{L} \cap (rB_2^n+ \vec{t})|} \leq \Pr[\vec{y} = \vec{x}] \leq \frac{\gamma}{|\mathcal{L} \cap (rB_2^n+ \vec{t})|}
\; ,
\]
where $\gamma := 1+1/f(n)$.
Furthermore, all of the algorithm's oracle calls are on full-rank sublattices of the input lattice.
\end{lemma}
\full{\begin{proof}
We assume without loss of generality that $n \geq 2$. On input $\mathcal{L} \subset \mathbb{Q}^n$, $\vec{t} \in \mathbb{Q}^n$, $r >0$, and $N \geq 1$, the algorithm chooses a prime $p$ with $10 f(n)N \leq p \leq 20 f(n) N$ and calls the procedure from Proposition~\ref{prop:shiftedsparsifier} on input $\mathcal{L}$ and $p$, receiving as output a sublattice $\mathcal{L}' \subseteq \mathcal{L}$ and a shift $\vec{w} \in \mathcal{L}$. It then calls its CVP oracle on input $\mathcal{L}'$ and $\vec{t} + \vec{w}$, receiving as output $\vec{y}'$. If $\length{\vec{y}' - \vec{w} - \vec{t}} \leq r$, it outputs $\vec{y} := \vec{y}' - \vec{w}$. Otherwise, it repeats.
From Proposition~\ref{prop:shiftedsparsifier}, we have that, after a single run of the algorithm,
\[
\frac{1}{\sqrt{\gamma} \cdot p} \leq \frac{1}{p} - \frac{N}{p^2} - \frac{N}{p^{n-1}} \leq \Pr[\vec{y}' = \vec{x} + \vec{w}] \leq \frac{1}{p} + \frac{1}{p^n} \leq \frac{\sqrt{\gamma}}{p}
\; .
\]
Correctness follows immediately. Furthermore, note that the reduction outputs something on each run with probability at least
$
\frac{N}{\sqrt{\gamma} f(n) p} \geq \frac{1}{100 f(n)^2}$. So, in particular, the expected number of runs is polynomial in $n$. It is clear that a single run takes polynomial time, and the result follows.
\end{proof}}{}
\subsection{Counting the lattice vectors in a ball}
We now show how to use the sparsification algorithm to approximate the number of lattice points in a ball, given access to a CVP oracle. \full{We will use this both to instantiate the procedure from Lemma~\ref{lem:shifteduniformsampler} and directly in our DGS sampling procedure.}{The proof is in the full version.}
\begin{definition}
For any parameter $\gamma \geq 1$, $\gamma$-GapVCP (the Vector Counting Problem) is the promise problem defined as follows: the input is a lattice $\mathcal{L} \subset \mathbb{Q}^n$ (represented by a basis), shift $\vec{t} \in \mathbb{Q}^n$, radius $r > 0$, and an integer $N \geq 1$. It is a NO instance if
$|(\mathcal{L} - \vec{t}) \cap r B_2^n| \leq N$ and a YES instance if $|(\mathcal{L} - \vec{t}) \cap r B_2^n| > \gamma N$.
\end{definition}
\begin{theorem}
\label{thm:counter}
For any efficiently computable function $f(n)$ with $1\leq f(n) \leq \mathrm{poly}(n)$, there is a polynomial-time reduction from $\gamma$-GapVCP to CVP where $\gamma := 1+1/f(n)$. The reduction \full{preserves dimension and }{}only calls the CVP oracle on sublattices of the input lattice.
\end{theorem}
\full{\begin{proof}
We assume without loss of generality that $n \geq 20$ and $f(n) \geq 20$. On input a lattice $\mathcal{L} \subset \mathbb{Q}^n$ with basis $\ensuremath{\mathbf{B}}$, target $\vec{t} \in \mathbb{Q}^n$, $r > 0$, and $N \geq 1$, the reduction behaves as follows. First, it finds a prime $p$ with $200f(n)N \leq p \leq 400f(n) N$. Then, for $i = 1, \ldots, \ell := \ceil{100f(n)^2p^2 /N^2}$, the reduction calls the procedure from Proposition~\ref{prop:shiftedsparsifier} on $\mathcal{L}$ and $p$. It receives as output $\mathcal{L}_i$ and $\vec{w}_i$. It then calls the CVP oracle on $\mathcal{L}_i$ and $\vec{t} + \vec{w}_i$, receiving as output a vector whose distance from $\vec{t} + \vec{w}_i$ is $r_i$. Finally, it returns yes if $r \leq r_i$ for all but at most $\ell N/p + 2\sqrt{\ell}$ values of $r_i$ and no otherwise.
It is clear that the reduction runs in polynomial time. Now, suppose $|\mathcal{L} \cap (rB_2^n+ \vec{t})| \leq N$. By Proposition~\ref{prop:shiftedsparsifier}, we have that for each $i$,
\[
\Pr[r_i \leq r] \leq \frac{N}{p} + \frac{N}{p^n} < \frac{N}{p} + \frac{1}{2\sqrt{\ell}}
\; .
\]
Then, applying the Chernoff-Hoeffding bound (Lemma~\ref{lem:chernoff}), we have
\[
\Pr[|\{ i\ :\ r_i \leq r\}| > \ell N/p + 2\sqrt{\ell}] < 1/e
\; .
\]
So, the reduction returns the correct answer in this case with probability at least $1-1/e$.
On the other hand, suppose that $|\mathcal{L} \cap (rB_2^n+ \vec{t})| > \gamma N$. Using the lower bound in Proposition~\ref{prop:shiftedsparsifier},
\[
\Pr[r_i \leq r] \geq \frac{\gamma N}{p} - \frac{\gamma^2 N^2}{p^2} - \frac{\gamma^2 N^2}{p^{n-1}} > \frac{N}{p} + \frac{5}{\sqrt{\ell}}
\; .
\]
Applying the Chernoff-Hoeffding bound again, we have
\[
\Pr[|\{ i\ :\ r_i \leq r\}| \leq \ell N/p + 2\sqrt{\ell}] < 1/e
\;,
\]
as needed.
\end{proof}}
{}
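The core of the reduction is a simple frequency test. The toy simulation below sketches only the decision rule, with stand-in Bernoulli trials in place of actual sparsification and CVP calls, and with exaggerated parameters chosen for a safe margin (the paper's choice of $\ell$ is much more delicate).

```python
import random
from math import sqrt

def gapvcp_decide(outcomes, N, p):
    """The reduction's decision rule: YES iff more than ell*N/p + 2*sqrt(ell)
    of the ell trials found a vector of length at most r."""
    ell = len(outcomes)
    return sum(outcomes) > ell * N / p + 2 * sqrt(ell)

# Model a NO instance (about N/p success rate per trial) and a clearly-YES
# instance (about 2N/p) by seeded Bernoulli trials.
rng = random.Random(0)
ell, N, p = 10000, 10, 100
no_trials = [rng.random() < N / p for _ in range(ell)]
yes_trials = [rng.random() < 2 * N / p for _ in range(ell)]
assert gapvcp_decide(no_trials, N, p) is False
assert gapvcp_decide(yes_trials, N, p) is True
```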
\subsection{The DGS algorithm}
\begin{theorem}
\label{thm:DGStoCVP}
For any efficiently computable function $f(n)$ with $1 \leq f(n) \leq \mathrm{poly}(n)$, there exists an (expected) polynomial-time reduction from $(\gamma, \eps)$-DGS to CVP, where $\eps := 2^{-f(n)}$ and $\gamma := 1+1/f(n)$. The reduction preserves dimension and only calls the CVP oracle on full-rank sublattices of the input lattice.
\end{theorem}
\begin{proof}
We assume without loss of generality that $n \geq 5$ and $s = 1$. (If $s \neq 1$, we can simply rescale the lattice.) On input $\mathcal{L} \subset \mathbb{Q}^n$ and $\vec{t} \in \mathbb{Q}^n$, the reduction behaves as follows. It first calls its CVP oracle to compute $d := \dist(\vec{t}, \mathcal{L})$. For $i = 0,\ldots, \ell := \ceil{100n^2 f(n) \log(10+ d)} $, let $r_i := \sqrt{d^2 +i/(10f(n))}$.
For each $i$, the reduction uses its CVP oracle together with the procedure given in Theorem~\ref{thm:counter} to compute $N_i$ such that $\gamma^{-1/10} \cdot |(\mathcal{L} - \vec{t}) \cap r_iB_2^n| \leq N_i \leq |(\mathcal{L} - \vec{t}) \cap r_iB_2^n|$.
Let $w_\ell := e^{-\pi r_\ell^2}$, and for $i = 0, \ldots, \ell - 1$, let $w_i := e^{-\pi r_i^2}-e^{-\pi r_{i+1}^2}$. Let $W := \sum_{i=0}^\ell N_i w_i$. The reduction then chooses an index $0 \leq k \leq \ell$ from the distribution that assigns to index $i$ probability $N_i w_i/W$.
It then runs the procedure from Lemma~\ref{lem:shifteduniformsampler} with input $\mathcal{L}$, $\vec{t}$, $r_k$, and $N_k$, receiving as output a vector $\vec{y}' \in \mathcal{L} \cap (r_k B_2^n + \vec{t})$ whose distribution is $(\gamma^{1/10}, 0)$-close to the uniform distribution over $\mathcal{L} \cap (r_k B_2^n + \vec{t})$. It then simply returns $\vec{y} := \vec{y}' - \vec{t} \in (\mathcal{L} - \vec{t}) \cap r_k B_2^n$.
\full{To see that the reduction runs in polynomial time, first note that Lemma~\ref{lem:lambda1bitlength} implies that $\ell$ is polynomial in the length of the input. Similarly, Corollary~\ref{cor:ballcountingbitlength} implies that the $N_i$ have bit lengths polynomial in the length of the input. It follows that the reduction runs in expected polynomial time.
We now prove correctness.}{It is clear that the reduction runs in polynomial time.}
Let $A := (\mathcal{L} - \vec{t}) \cap r_\ell B_2^n$ be the support of $\vec{y}$. By Corollary~\ref{cor:shiftedbanaszczyk}, $D_{A}$ is within statistical distance $\eps$ of $D_{\mathcal{L} - \vec{t}}$\full{, so it suffices to show that the output of the reduction is $(\gamma,0)$-close to $D_A$. In order to show this, it suffices to show that, for any $\vec{x} \in A$, $\Pr[\vec{y} = \vec{x}]$ is proportional to $\rho(\vec{x})$, up to a factor of $\gamma^{\pm 1/2}$.}{.}
Note that
\begin{equation}
\label{eq:probformula}
\Pr[\vec{y} = \vec{x}] = \frac{1}{W} \sum_{i\ :\ r_i \geq \length{\vec{x}}} w_iN_i \cdot \Pr[\vec{y} = \vec{x} \ |\ k = i]
\; .
\end{equation}
For any $i$ such that $\vec{x} \in (\mathcal{L} - \vec{t}) \cap r_i B_2^n$, by Lemma~\ref{lem:shifteduniformsampler} we have that
\[
\frac{\gamma^{-1/5}}{ N_i} \leq \frac{\gamma^{-1/10}}{|(\mathcal{L} - \vec{t}) \cap r_i B_2^n|} \leq \Pr[\vec{y} = \vec{x} \ | \ k = i] \leq \frac{\gamma^{1/10}}{|(\mathcal{L} - \vec{t}) \cap r_i B_2^n|} \leq \frac{\gamma^{1/10}}{N_i}
\; .
\]
\full{Let $j$ be minimal such that $\vec{x} \in (\mathcal{L} - \vec{t}) \cap r_j B_2^n$. Plugging in the upper bound to Eq.~\eqref{eq:probformula}, we have
\[
\Pr[\vec{y} = \vec{x}] \leq \frac{\gamma^{1/10}}{W}\cdot \sum_{i \geq j} w_i = \frac{\gamma^{1/10}}{W} \cdot e^{-\pi r_j^2} \leq \frac{\sqrt{\gamma}}{W} \cdot \rho(\vec{x})
\; .
\]
A nearly identical computation shows that $\Pr[\vec{y} = \vec{x}] \geq \rho(\vec{x})/(\sqrt{\gamma} W)$, as needed.}{Plugging these bounds into Eq.~\eqref{eq:probformula} gives the result.}
\end{proof}
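The mixture step of the reduction (the weights $w_i$ and the choice of the index $k$) can be sketched directly; the oracle-based counting and sampling steps are abstracted away here, so this is only an illustration.

```python
import math, random

def mixture_weights(counts, radii):
    """w_i as in the reduction: w_ell = exp(-pi r_ell^2) and, for i < ell,
    w_i = exp(-pi r_i^2) - exp(-pi r_{i+1}^2); index i gets mass N_i * w_i."""
    ell = len(radii) - 1
    w = [math.exp(-math.pi * radii[i] ** 2) - math.exp(-math.pi * radii[i + 1] ** 2)
         for i in range(ell)]
    w.append(math.exp(-math.pi * radii[ell] ** 2))
    return [n_i * w_i for n_i, w_i in zip(counts, w)]

def sample_radius_index(counts, radii, rng):
    """Pick k with probability N_k w_k / W (the reduction's choice of radius)."""
    masses = mixture_weights(counts, radii)
    return rng.choices(range(len(masses)), weights=masses, k=1)[0]
```

Note that the weights telescope: with all $N_i = 1$, the total mass is exactly $e^{-\pi r_0^2}$.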
\section{Centered DGS to SVP reduction}
\label{sec:DGStoSVP}
\subsection{Sparsification}
\full{Since we are now interested in the SVP case, we can no longer handle the shifts used in Theorem~\ref{thm:shiftedsparsification} and Proposition~\ref{prop:shiftedsparsifier} (neither the input shift $\vec{t}$ nor the output shifts $\vec{w}$ and $\vec{c}$). As a result, we are forced to consider the effect of sparsification on primitive vectors only, which requires new analysis. }{}Recall that $\xi(\mathcal{L}, r) := |\mathcal{L}^{\mathrm{prim}} \cap r B_2^n|/2$ is the number of primitive lattice vectors in a ball of radius $r$ (counting $\pm \vec{x}$ as a single vector).
\begin{theorem}
\label{thm:sparsification}
For any lattice $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$ with basis $\ensuremath{\mathbf{B}}$, primitive lattice vectors $\vec{y}_0, \vec{y}_1,\ldots, \vec{y}_N \in \mathcal{L}^{\mathrm{prim}}$ with $\vec{y}_i \neq \pm \vec{y}_0$ for all $i > 0$, and prime $p \geq 101$, if $\xi(\mathcal{L}, \length{\vec{y}_i}) \leq p/(20 \log p)$ for all $i$, then
\[
\frac{1}{p} - \frac{N}{p^2} \leq \Pr\big[\inner{\vec{z}, \ensuremath{\mathbf{B}}^{-1}\vec{y}_0} \equiv 0 \bmod p \text{ and } \inner{\vec{z}, \ensuremath{\mathbf{B}}^{-1}\vec{y}_i} \not\equiv 0 \bmod p \ \forall i > 0\big] \leq \frac{1}{p} \; ,
\]
where $\vec{z} \in \ensuremath{\mathbb{Z}}_p^n$ is chosen uniformly at random.
\end{theorem}
\begin{proof}
Let $\vec{v}_i := \ensuremath{\mathbf{B}}^{-1}\vec{y}_i$. By Lemma~\ref{lem:nogoodnameforthislemma}, we have that $\vec{v}_0 $ is not a scalar multiple of $\vec{v}_i$ mod $p$ for any $i > 0$. The result then follows from Lemma~\ref{lem:almostindependent}.
\end{proof}
\full{}{The proof of the next result is in the full version.}
\begin{proposition}
\label{prop:sparsifier}
There is a polynomial-time algorithm that takes as input a basis $\ensuremath{\mathbf{B}}$ for a lattice $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$ and a prime $p \geq 101$
and outputs a full-rank sublattice $\mathcal{L}' \subseteq \mathcal{L}$ such that for every $\vec{x} \in \mathcal{L}$ with
$N:= \xi(\mathcal{L}, \length{\vec{x}}) - 1 \leq p/(20 \log p)$ and $\lambda_1(\mathcal{L}) > \length{\vec{x}}/p$, we have that for any SVP oracle,
\[
\frac{1}{p} - \frac{N}{p^2} \leq \Pr[\problem{SVP}(\mathcal{L}') = \pm \vec{x}] \leq \frac{1}{p}
\; .
\]
\full{In particular,
\[
\frac{N}{p} - \frac{N^2}{p^2} \leq \Pr
\big[\lambda_1(\mathcal{L}') \leq \length{\vec{x}}\big] \leq \frac{N}{p}
\; .
\]}{}
\end{proposition}
\full{\begin{proof}
On input $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$ with basis $\ensuremath{\mathbf{B}}$ and $p$, the algorithm samples $\vec{z} \in \ensuremath{\mathbb{Z}}_p^n$ uniformly at random. It then returns the sublattice
\[
\mathcal{L}' := \{ \vec{x} \in \mathcal{L}\ :\ \inner{\vec{z}, \ensuremath{\mathbf{B}}^{-1}\vec{x}} \equiv 0 \bmod p\}
\; .
\]
It is clear that the algorithm runs in polynomial time.
Since $\Pr[\vec{x} \in \mathcal{L}'] = 1/p$, the upper bound on the probability is immediate as well.
For the lower bound, let $\vec{y}_0,\ldots, \vec{y}_{N} \in \mathcal{L}^{\mathrm{prim}}$ be such that $\length{\vec{y}_i} \leq \length{\vec{x}}$, $\vec{y}_i \neq \pm \vec{y}_j$ for $i \neq j$, and $\vec{y}_0 := \vec{x}$. Let $\vec{v}_i := \ensuremath{\mathbf{B}}^{-1}\vec{y}_i$. Note that, if $\vec{y}_0 \in \mathcal{L}'$ and $\vec{y}_i \notin \mathcal{L}'$ for $i > 0$, then $\problem{SVP}(\mathcal{L}') = \pm \vec{x}$. (Here, we have used the fact that $\lambda_1(\mathcal{L}) > \length{\vec{x}}/p$.)
The result then follows from Theorem~\ref{thm:sparsification}.
\end{proof}}
{}
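The sparsification procedure itself is tiny. The sketch below fixes $\mathcal{L} = \ensuremath{\mathbb{Z}}^n$ (so $\ensuremath{\mathbf{B}} = I$ and membership is easy to test) and checks empirically that a fixed vector that is non-zero mod $p$ survives with frequency close to $1/p$; the vector and trial count are illustrative.

```python
import random

def sparsify(n, p, rng):
    """Sample z uniformly from Z_p^n and return a membership test for the
    sublattice L' = {x in L : <z, B^{-1} x> = 0 mod p}.  Here L = Z^n, B = I."""
    z = [rng.randrange(p) for _ in range(n)]
    return lambda x: sum(zi * xi for zi, xi in zip(z, x)) % p == 0

# A fixed vector that is non-zero mod p survives with probability exactly 1/p.
rng = random.Random(0)
p, trials, x = 101, 20000, (1, 2, 3)
survived = sum(sparsify(3, p, rng)(x) for _ in range(trials))
assert 0.5 * trials / p < survived < 1.7 * trials / p
```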
\begin{lemma}
\label{lem:uniformsampler}
For any efficiently computable $f(n)$ with $2\leq f(n) \leq \mathrm{poly}(n)$, there is an (expected) polynomial-time algorithm with access to an SVP oracle that takes as input a lattice $\mathcal{L} \subset \mathbb{Q}^n$, radius $r > 0$, and integer $N \geq 1$ and outputs a vector $\vec{y} \in \mathcal{L}$ such that, if
$N \leq \xi(\mathcal{L}, r) \leq f(n) N$ and $\lambda_1(\mathcal{L}) > r/(f(n)\xi(\mathcal{L}, r))$,
then for any $\vec{x} \in \mathcal{L}^{\mathsf{prim}} \cap r B_2^n$,
\[
\frac{\gamma^{-1}}{\xi(\mathcal{L}, r)} \leq \Pr[\vec{y} = \pm \vec{x}] \leq \frac{\gamma}{\xi(\mathcal{L}, r)}
\; ,
\]
where $\gamma := 1+1/f(n)$.
Furthermore, the algorithm \full{preserves dimension and }{}only calls its oracle on full-rank sublattices of $\mathcal{L}$.
\end{lemma}
\full{\begin{proof}
We assume without loss of generality that $n \geq 10$. On input $\mathcal{L} \subset \mathbb{Q}^n$, $r >0$, and $N \geq 1$, the algorithm chooses a prime $p$ with $100 f(n)N \log(10f(n)N) \leq p \leq 200 f(n) N \log(10 f(n)N)$ and calls the algorithm from Proposition~\ref{prop:sparsifier} on input $\mathcal{L}$ and $p$, receiving as output a sublattice $\mathcal{L}' \subset \mathcal{L}$. It then calls its SVP oracle on input $\mathcal{L}'$, receiving as output $\vec{y}$. If $\length{\vec{y}} \leq r$, it outputs $\vec{y}$. Otherwise, it repeats.
From Proposition~\ref{prop:sparsifier}, we have that, after a single run of the algorithm
\[
\frac{\gamma^{-1/2}}{p} \leq \frac{1}{p} - \frac{N}{p^2} \leq \Pr[\vec{y} = \pm \vec{x}] \leq \frac{1}{p}
\; .
\]
Correctness follows immediately. Furthermore, note that the algorithm terminates after a given run with probability at least $\gamma^{-1/2} N/(f(n)p) \geq 1/(1000 f(n)^2 \log(Nf(n)))$. By Corollary~\ref{cor:ballcountingbitlength}, $\log(N)$ is polynomial in the length of the input. So, in particular, the expected number of runs is polynomial in the length of the input. It is clear that a single run takes polynomial time, and the result follows.
\end{proof}}{}
\subsection{Counting the primitive lattice vectors in a ball around the origin}
\begin{definition}
For any parameters $\beta \geq 0$, $\gamma \geq 1$, $(\beta, \gamma)$-GapPVCP (the Primitive Vector Counting Problem) is the promise problem defined as follows: the input is a lattice $\mathcal{L} \subset \mathbb{Q}^n$ (represented by a basis), radius $r > 0$, and an integer $N \geq 1$. It is a NO instance if
$\xi(\mathcal{L}, r) \leq N$ or if $\lambda_1(\mathcal{L}) < \beta r/ N$ and a YES instance if $\xi(\mathcal{L}, r) > \gamma N$.
\end{definition}
\full{Intuitively, the condition that $\lambda_1(\mathcal{L}) < \beta r/ N$ handles the degenerate case in which there are many non-primitive vectors that may \scarequotes{hide} the primitive vectors in the lattice. It is not clear that this should be treated as a degenerate case in general, but it is clear that our methods fail in this case.
}
{In the full version, we prove the following.}
\begin{theorem}
\label{thm:primcounter}
For any efficiently computable $f(n)$ with $1 \leq f(n) \leq \mathrm{poly}(n)$, there is a polynomial-time reduction from $(\beta, \gamma)$-GapPVCP to SVP where $\beta := 1/f(n)$ and $\gamma := 1+1/f(n)$. The reduction preserves dimension and only calls the SVP oracle on sublattices of the input lattice.
\end{theorem}
\full{\begin{proof}
On input $\mathcal{L} \subset \mathbb{Q}^n$ with basis $\ensuremath{\mathbf{B}}$, $r > 0$, and $N \geq 1$, the reduction behaves as follows. It first calls its SVP oracle on $\mathcal{L}$ to compute $\lambda_1(\mathcal{L})$. If $\lambda_1(\mathcal{L}) > r$ or $\lambda_1(\mathcal{L}) < \beta r/N$, it returns no. The reduction then finds a prime $p$ with $200f(n) N\log(10f(n)N) \leq p \leq 400 f(n)N \log(10f(n)N)$, and for $i = 1, \ldots, \ell := \ceil{100f(n)^2p^2/N^2}$, it calls the procedure from Proposition~\ref{prop:sparsifier} on $\mathcal{L}$ and $p$, receiving as output $\mathcal{L}_i$. It then calls the SVP oracle on each $\mathcal{L}_i$, receiving as output a vector of length $r_i$. Finally, it returns yes if $r \leq r_i$ for all but at most $\ell N/p + 2\sqrt{\ell}$ values of $r_i$ and no otherwise.
It is clear that the reduction runs in polynomial time. We assume $\lambda_1(\mathcal{L}) \geq \beta r/N > r/p$ (since otherwise the reduction clearly outputs the correct answer).
Suppose $m := \xi(\mathcal{L}, r) \leq N$. By Proposition~\ref{prop:sparsifier}, we have
$
\Pr[r_i \leq r] \leq \frac{m}{p} \leq \frac{N}{p}
$,
for each $i$.
Applying the Chernoff-Hoeffding bound (Lemma~\ref{lem:chernoff}), we have
\[
\Pr\Big[|\{i\ :\ r_i \leq r\}| > \frac{N\ell }{p} + 2\sqrt{\ell} \Big] < 1/e
\; .
\]
So, the reduction returns the correct answer in this case with probability at least $1-1/e$.
Now, suppose $\xi(\mathcal{L}, r) > \gamma N$. We again apply Proposition~\ref{prop:sparsifier} to obtain
\[
\Pr[r_i \leq r] \geq \frac{\gamma N}{p} - \frac{\gamma^2 N^2}{p^2} > \frac{N}{p} + \frac{5}{\sqrt{\ell}}
\; .
\]
Applying the Chernoff-Hoeffding bound again, we have
\[
\Pr\Big[|\{i\ :\ r_i \leq r\}| \leq \frac{N\ell}{p} + 2\sqrt{\ell} \Big] < 1/e
\; .
\]
The result follows.
\end{proof}}{}
\subsection{The centered DGS algorithm}
\begin{theorem}
\label{thm:DGStoSVP}
For any efficiently computable function $f(n)$ with $1 \leq f(n) \leq \mathrm{poly}(n)$, there is an (expected) polynomial-time reduction from $(\gamma, \eps)$-cDGS to SVP, where $\eps := 2^{-f(n)}$ and $\gamma := 1+1/f(n)$. The reduction preserves dimension and only calls the SVP oracle on sublattices of the input lattice.
\end{theorem}
\begin{proof}
We assume without loss of generality that $s = 1$. (If $s \neq 1$, we can simply scale the lattice.) On input $\mathcal{L} \subset \mathbb{Q}^n$, the reduction behaves as follows. First, it computes $\lambda_1(\mathcal{L})$ using its SVP oracle. For $i = 0,\ldots, \ell := \ceil{200n^2 f(n)^2} $, let $r_i := \sqrt{\lambda_1(\mathcal{L})^2 +i/(100n f(n))}$. For each $i$, the reduction uses its SVP oracle together with the procedure given in Theorem~\ref{thm:primcounter} to compute $N_i$ such that
\begin{equation}
\label{eq:NiapproxXi}
\gamma^{-1/10}\cdot \xi(\mathcal{L}, r_i) \leq N_i \leq \xi(\mathcal{L}, r_i)
\; ,
\end{equation}
or $N_i := 1$ if $\lambda_1(\mathcal{L}) < r_i/(100 n^2 f(n) \xi(\mathcal{L}, r_i))$. Let $w_{\ell} := \rho_{1/r_\ell}(\ensuremath{\mathbb{Z}} \setminus \{ 0 \})$, and for $i = 0, \ldots, \ell - 1$, let $w_i := \rho_{1/r_i}(\ensuremath{\mathbb{Z}}\setminus \{ 0 \}) - \rho_{1/r_{i+1}}(\ensuremath{\mathbb{Z}}\setminus \{ 0 \})$. (\full{Claim~\ref{clm:computerhoZ}}{\cite{BLPRS13}} shows one way to compute $w_i$ efficiently.)
Let $W := \sum_{i=0}^\ell N_i w_i$. Then, the reduction outputs $\vec0$ with probability $1/(1+W)$. Otherwise, it chooses an index $0 \leq k \leq \ell$, assigning to each index $i$ probability $N_i w_i/W$. If $N_k > 1$, the reduction then calls the procedure from Lemma~\ref{lem:uniformsampler} on input $\mathcal{L}$, $r_k$, and $N_k$, receiving as output a vector $\vec{x} \in \mathcal{L}^{\mathsf{prim}}$ that is distributed uniformly over $\mathcal{L}^{\mathsf{prim}} \cap r_k B_2^n$, up to a factor of $\gamma^{\pm 1/10}$. If $N_k = 1$, the reduction simply sets $\vec{x} = \problem{SVP}(\mathcal{L})$. Finally, it \full{uses the procedure from Lemma~\ref{lem:sampleZ} to sample}{samples} an integer $z$ from $D_{\ensuremath{\mathbb{Z}} \setminus \{ 0 \}, 1/\length{\vec{x}}}$ and returns $\bar{\vec{x}} := z \cdot \vec{x}$. \full{}{(\cite{BLPRS13} shows how to sample such an integer efficiently.)}
\full{First, we note that the reduction runs in expected polynomial time. In particular, the $N_i$ have polynomial bit length by Corollary~\ref{cor:ballcountingbitlength}, and the various subprocedures have expected running times that are polynomial in the length of their input.
We now prove correctness.}{It is clear that the reduction runs in polynomial time.} Let $\mathcal{L}^\dagger$ be the set of all points that are integer multiples of a lattice vector whose length is at most $r_\ell > \sqrt{n f(n)} $. By Lemma~\ref{lem:banaszczyk}, it suffices to consider the distribution $D_{\mathcal{L}^\dagger}$, as this is within statistical distance $\eps$ of $D_{\mathcal{L}}$. Then,
\begin{align*}
\rho(\mathcal{L}^\dagger \setminus \{\vec0 \}) = \sum_{\vec{y} \in \mathcal{L}^\dagger \setminus \{ \vec0 \}} \rho(\vec{y})
= \sum_{\vec{y} \in \mathcal{L}^{\mathsf{prim}} \cap r_\ell B_2^n} \rho_{1/\length{\vec{y}}}(\ensuremath{\mathbb{Z}} \setminus \{ 0\})
\; .
\end{align*}
A quick computation shows that for any $\vec{y}$ with $r_{i-1} \leq \length{\vec{y}} \leq r_i$, we have
\[
\rho_{1/r_i}(\ensuremath{\mathbb{Z}} \setminus \{ 0\}) \leq \rho_{1/\length{\vec{y}}}(\ensuremath{\mathbb{Z}} \setminus \{ 0\}) \leq \gamma^{1/10} \cdot \rho_{1/r_i}(\ensuremath{\mathbb{Z}} \setminus \{ 0\})
\; .
\]
Recalling the definition of the $w_i$, it follows that
\[
\sum_{i=0}^\ell \xi(\mathcal{L}, r_i) w_i \leq \rho(\mathcal{L}^\dagger \setminus \{\vec0 \}) \leq \gamma^{1/10} \cdot \sum_{i=0}^\ell \xi(\mathcal{L}, r_i) w_i
\; .
\]
Now, we would like to say that $N_i \approx \xi(\mathcal{L}, r_i)$, as in Eq.~\eqref{eq:NiapproxXi}. This is of course true by definition \emph{except} when $N_i = 1$ and $\xi(\mathcal{L}, r_i) > 1$, i.e., when $\lambda_1(\mathcal{L}) < r_i/(100 n^2 f(n) \xi(\mathcal{L}, r_i))$ and $\lambda_2(\mathcal{L}) \leq r_i$. But, in this case, a quick computation together with Lemma~\ref{lem:notdegenerate} shows that $\xi(\mathcal{L}, r_{i+1}) > 1/(100 n f(n)\lambda_1(\mathcal{L}))$, and therefore $N_{j}$ satisfies Eq.~\eqref{eq:NiapproxXi} for all $j > i$. (In other words, the $N_i$ can only be \scarequotes{wrong} for at most one value of $i$.) It follows that, for any $i < \ell$, we have
\[
\gamma^{-1/5}\cdot \sum_{j \geq i} \xi(\mathcal{L}, r_j)w_j \leq \sum_{j \geq i} N_j w_j \leq \sum_{j \geq i} \xi(\mathcal{L}, r_j)w_j
\;.
\]
(The case $N_\ell = 1$ can be handled separately. Correctness in this case follows essentially immediately from Lemma~\ref{lem:banaszczyk}.)
Putting everything together, we have that
\[
\gamma^{-1/5} \cdot \rho(\mathcal{L}^\dagger \setminus \{\vec0 \}) \leq W \leq \gamma^{1/5} \cdot \rho(\mathcal{L}^\dagger \setminus \{\vec0 \})
\; .
\]
So, in particular, the probability that the reduction outputs $\vec0$ is \full{$1/(1+W)$, which is a good approximation to the correct probability of $1/\rho(\mathcal{L}^\dagger)$}{$1/(1+W) \approx 1/\rho(\mathcal{L}^\dagger)$, as needed}.
Now, for any $\vec{y} \in \mathcal{L}^{\mathsf{prim}}$, it follows from Lemma~\ref{lem:uniformsampler} and the argument above that
\begin{equation}
\label{eq:prbound}
\gamma^{-1/2} \cdot \frac{\rho_{1/\length{\vec{y}}}(\ensuremath{\mathbb{Z}} \setminus \{0\})}{\rho(\mathcal{L}^\dagger)}\leq \Pr[\vec{x} = \pm \vec{y}] \leq \gamma^{1/2} \cdot \frac{\rho_{1/\length{\vec{y}}}(\ensuremath{\mathbb{Z}} \setminus \{0\})}{\rho(\mathcal{L}^\dagger)}
\; .
\end{equation}
Finally, for any $\vec{w} \in \mathcal{L}^\dagger \setminus \{\vec0 \}$, let $\vec{y}$ be one of the two primitive lattice vectors that are scalar multiples of $\vec{w}$, and let $\bar{z}$ be such that $\vec{w} = \bar{z} \vec{y}$. Then,
\full{\begin{align*}
\Pr[\bar{\vec{x}} = \vec{w}] &= \Pr[\vec{x} = \pm \vec{y}] \cdot \Pr[z = \bar{z}\ |\ \vec{x} = \pm \vec{y}]\\
&= \Pr[\vec{x} = \pm \vec{y}] \cdot \frac{\rho(\vec{w})}{\rho_{1/\length{\vec{y}}}(\ensuremath{\mathbb{Z}} \setminus \{0\})}
\end{align*}}
{\[
\Pr[\bar{\vec{x}} = \vec{w}] = \Pr[\vec{x} = \pm \vec{y}] \cdot \Pr[z = \bar{z}\ |\ \vec{x} = \pm \vec{y}]= \Pr[\vec{x} = \pm \vec{y}] \cdot \frac{\rho(\vec{w})}{\rho_{1/\length{\vec{y}}}(\ensuremath{\mathbb{Z}} \setminus \{0\})}
\]}
The result follows from plugging the above equation into Eq.~\eqref{eq:prbound}.
\end{proof}
\section{Sampling from other distributions}
\label{sec:other}
We note that our reductions from Sections~\ref{sec:DGStoCVP} and~\ref{sec:DGStoSVP} do not use any unique properties of the discrete Gaussian distribution or of the $\ell_2$ norm. Above, we focused on this particular case because it has so many applications, while other distributions on lattices seem to be of much less interest. In this section, we show that a much larger class of sampling problems can be reduced to CVP in various different norms.
First, we show that the sparsification result in Proposition~\ref{prop:shiftedsparsifier} naturally extends to arbitrary norms $K$. In particular, for any norm $K$, we can use a CVP oracle in norm $K$ to sample (nearly) uniformly from the lattice points in a $K$-ball. (See below for the definitions.) We can naturally extend this to any distribution that can be efficiently written as the weighted average of uniform distributions over the lattice points in $K$-balls. For example, this will be enough to show how to use a CVP oracle in the $\ell_q$ norm to sample from the natural $\ell_q$ generalization of the discrete Gaussian, which assigns to $\vec{x} \in \mathcal{L} - \vec{t}$ probability proportional to $e^{- \|\vec{x} \|_q^q}$, where $ \|\vec{x} \|_q := (\sum |x_i|^q)^{1/q}$ for $1 \leq q < \infty$ is the $\ell_q$ norm.
Below, we make this precise. For simplicity, we will not worry about the more difficult analogous problem of reducing sampling from centered distributions to SVP.
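As a quick illustration of the $\ell_q$ objects just introduced, here is a minimal Python sketch (ours, purely illustrative; the function names are not from any library) of the $\ell_q$ norm and the unnormalized weight $e^{-\|\vec{x}\|_q^q}$ assigned by the $\ell_q$ discrete Gaussian:

```python
import math

def lq_norm(x, q):
    """The l_q norm ||x||_q = (sum_i |x_i|^q)^(1/q), for 1 <= q < infinity."""
    return sum(abs(xi) ** q for xi in x) ** (1.0 / q)

def chi_q_weight(x, q):
    """Unnormalized probability assigned to x by the l_q discrete Gaussian:
    proportional to exp(-||x||_q^q), following the parameter-free convention
    of this section (no scale s or factor pi in the exponent)."""
    return math.exp(-lq_norm(x, q) ** q)

print(lq_norm([3, 4], 2))       # 5.0
print(chi_q_weight([1, 1], 1))  # exp(-2), since ||(1,1)||_1 = 2
```

For $q = 2$ this recovers a weight of the form $e^{-\|\vec{x}\|_2^2}$, matching the Euclidean case up to the normalization conventions used elsewhere in the paper.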
\subsection{Arbitrary distributions and norms}
Recall that any norm $\|\cdot \|_K$ over $\ensuremath{\mathbb{R}}^n$ is uniquely represented by a compact symmetric convex body with non-empty interior $K \subset \ensuremath{\mathbb{R}}^n$, its unit ball. The norm itself is then simply
\[
\| \vec{x} \|_{K} := \min \{ r \ : \ \vec{x} \in r K \}
\; .
\]
(Since we are interested in asymptotics, we formally identify $K := (K_1,K_2,\ldots )$ with a sequence of such bodies with $K_n \subset \ensuremath{\mathbb{R}}^n$, but we will ignore such details.) A $K$-ball with center $\vec{c}$ and radius $r$ is $rK + \vec{c}$, the set of all points within distance $r$ of $\vec{c}$ in the norm $\|\cdot \|_K$.
We define the general problem that interests us below, together with the natural generalization of CVP to arbitrary norms.
\begin{definition}
For any $\gamma \geq 1$, $\eps > 0$, and function $\chi$ mapping a shifted lattice $\mathcal{L} - \vec{t}$ to a distribution over $\mathcal{L} - \vec{t}$, the sampling problem $(\gamma, \eps)\text{-}\problem{LSP}_{\chi}$ (the Lattice Sampling Problem) is defined as follows: The input is (a basis of) a lattice $\mathcal{L} \subset \mathbb{Q}^n$ and a shift $\vec{t} \in \mathbb{Q}^n$. The goal is to output a vector whose distribution is $(\gamma, \eps)$-close to $\chi(\mathcal{L} - \vec{t})$.
\end{definition}
\begin{definition}
For any norm $\|\cdot \|_K$, the search problem $\problem{CVP}_K$ (the Closest Vector Problem in norm $K$) is defined as follows: The input is (a basis of) a lattice $\mathcal{L} \subset \mathbb{Q}^n$ and a target vector $\vec{t} \in \mathbb{Q}^n$. The goal is to output a lattice vector $\vec{x}$ such that $\length{\vec{x} - \vec{t}}_K$ is minimal.
\end{definition}
\subsection{Sparsify, shift, count, and sample}
We now observe that Proposition~\ref{prop:shiftedsparsifier} generalizes to arbitrary norms. (One can simply check that the proof of Proposition~\ref{prop:shiftedsparsifier} does not use any special properties of the $\ell_2$ norm.)
\begin{proposition}
\label{prop:shiftedsparsifier-ell_q}
There is a polynomial-time algorithm that takes as input a basis $\ensuremath{\mathbf{B}}$ for a lattice $\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$ and a prime $p$
and outputs a sublattice $\mathcal{L}' \subseteq \mathcal{L}$ and shift $\vec{w} \in \mathcal{L}$ such that, for any norm $\| \cdot \|_K$, $\vec{t} \in \ensuremath{\mathbb{R}}^n$, $\vec{x} \in \mathcal{L}$ with $N:= |(\mathcal{L} - \vec{t}) \cap \length{\vec{x} - \vec{t}}_K \cdot K| < p$, and any $\problem{CVP}_K$ oracle,
\[\frac{1}{p} - \frac{N}{p^2} - \frac{N}{p^{n-1}} \leq \Pr[\problem{CVP}_K(\vec{t} + \vec{w}, \mathcal{L}') = \vec{x} + \vec{w}] \leq
\frac{1}{p} + \frac{1}{p^n}
\; .
\]
\end{proposition}
And, from this, we obtain a generalization of Lemma~\ref{lem:shifteduniformsampler} and Theorem~\ref{thm:counter}.
\begin{definition}
For any parameter $\gamma \geq 1$ and norm $\|\cdot\|_K$, $\gamma\text{-GapVCP}_K$ (the Vector Counting Problem in norm $K$) is the promise problem defined as follows: the input is (a basis of) a lattice $\mathcal{L} \subset \mathbb{Q}^n$, shift $\vec{t} \in \mathbb{Q}^n$, radius $r > 0$, and an integer $N \geq 1$. It is a NO instance if
$|(\mathcal{L} - \vec{t}) \cap r K| \leq N$ and a YES instance if $|(\mathcal{L} - \vec{t}) \cap rK| > \gamma N$.
\end{definition}
\begin{theorem}
\label{thm:counter_q}
For any efficiently computable norm $\| \cdot \|_K$ and efficiently computable function $f(n)$ with $2\leq f(n) \leq \mathrm{poly}(n)$, there is a polynomial-time reduction from $\gamma\text{-GapVCP}_K$ to $\problem{CVP}_K$, where $\gamma := 1+1/f(n)$. Furthermore, there is an (expected) polynomial-time reduction from $(\gamma, 0)\text{-}\problem{LSP}_{\chi}$ to $\problem{CVP}_K$, where $\chi(\mathcal{L} - \vec{t})$ is the uniform distribution on $(\mathcal{L}- \vec{t}) \cap K$ (or $\chi$ is constant on $-\vec{t}$ if $(\mathcal{L} - \vec{t}) \cap K$ is empty). Both reductions preserve dimension and only make calls to the $\problem{CVP}_K$ oracle on sublattices of the input lattice.
\end{theorem}
\subsection{Sufficiently \scarequotes{nice} distributions and the sampler}
Recall that the sampling algorithm from Theorem~\ref{thm:DGStoCVP} works by computing a finite sequence of balls $B_0,\ldots, B_\ell$ such that the discrete Gaussian distribution is $(\gamma, \eps)$-close to a weighted average of the uniform distributions over these balls. This motivates the following definition and theorem.
\begin{definition}
\label{def:ball-decomposable}
For a norm $K$, $\gamma = \gamma(n) \geq 1$, and $\eps = \eps(n) > 0$, we say that a function $\chi$ that maps a shifted lattice $\mathcal{L} - \vec{t}$ to a distribution over $\mathcal{L} - \vec{t}$ is $(\gamma, \eps, K)$-ball decomposable if it is $(\gamma, \eps)$-close to a weighted average of uniform distributions over the lattice points inside $K$-balls, and these balls and weightings can be computed efficiently with access to a $\problem{CVP}_K$ oracle.
\end{definition}
\begin{theorem}
For any efficiently computable norm $K$, $\gamma = \gamma(n) \geq 1$, and $\eps = \eps(n) > 0$, if $\chi$ is $(\gamma, \eps, K)$-ball decomposable, then for any efficiently computable function $2 \leq f(n) \leq \mathrm{poly}(n)$, there is a polynomial-time reduction from $(\gamma', \eps)\text{-}\problem{LSP}_{\chi}$ to $\problem{CVP}_K$, where $\gamma' := (1+1/f(n)) \gamma $. The reduction preserves dimension and only calls its oracle on sublattices of the input lattice.
\end{theorem}
\begin{proof}
On input $\mathcal{L} \subset \mathbb{Q}^n$ and $\vec{t} \in \mathbb{Q}^n$, the reduction first calls the procedure guaranteed by Definition~\ref{def:ball-decomposable} to obtain a sequence of $K$-balls $B_0,\ldots, B_\ell$ and weights $w_0,\ldots, w_\ell$. It then selects an index $i$ with probability proportional to $w_i$. Finally, it uses the sampling procedure from Theorem~\ref{thm:counter_q} to sample a vector that is $(\gamma^{1/10}, 0)$-close to uniform over $(\mathcal{L} - \vec{t}) \cap B_i$ and outputs the result.
It is clear that the reduction runs in polynomial time. Correctness follows from the correctness of the various subprocedures and some simple calculations.
\end{proof}
\begin{corollary}
For any efficiently computable function $2 \leq f(n) \leq \mathrm{poly}(n)$ and constant $1 \leq q < \infty$, there is an efficient reduction from $(\gamma, \eps)\text{-}\problem{LSP}_{\chi_q}$ to $\problem{CVP}_{\ell_q}$, where $\gamma := 1+1/f(n)$, $\eps := e^{-f(n)}$, and $\chi_q(\mathcal{L} - \vec{t})$ is the distribution that assigns to each $\vec{x} \in \mathcal{L} - \vec{t}$ probability proportional to $e^{-\|\vec{x}\|_q^q}$.
\end{corollary}
\begin{proof}
It suffices to show that $\chi_q$ is $(\sqrt{\gamma}, \eps, \ell_q)$-ball decomposable, i.e., that there is an efficient algorithm with access to a $\problem{CVP}_{\ell_q}$ oracle that outputs balls and weights as in Definition~\ref{def:ball-decomposable}. The algorithm first computes $d := \min_{\vec{y} \in \mathcal{L}} \| \vec{y} - \vec{t} \|_q$ using its $\problem{CVP}_{\ell_q}$ oracle. For $i = 0, \ldots, \ell := 100 n^q f(n)^{q+1}$, let $r_i := (d^q + i/(10f(n)))^{1/q}$, $\vec{c}_i := \vec0$, and $B_i := r_i K + \vec{c}_i$. Let $\hat{w}_\ell := e^{-r_\ell^q}$, and for $0 \leq i < \ell$, let $\hat{w}_i := e^{-r_i^q} - e^{-r_{i+1}^q}$. The algorithm then uses the counting procedure from Theorem~\ref{thm:counter_q} to approximate $|(\mathcal{L} - \vec{t}) \cap B_i| = |(\mathcal{L} - \vec{t} - \vec{c}_i) \cap r_iK|$ up to a factor of $\gamma^{1/10}$, receiving as output $N_i$. Finally, let $w_i := N_i \hat{w}_i$. The algorithm then simply outputs the $B_i$ and $w_i$.
A simple calculation shows that this is a valid $(\sqrt{\gamma}, \eps, \ell_q)$-ball decomposition of $\chi_q$.
\end{proof}
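One ingredient of the \scarequotes{simple calculation} in the preceding proof is that the raw weights $\hat{w}_i$ telescope. The following Python sketch (ours; the parameters are kept far smaller than the $\ell = 100 n^q f(n)^{q+1}$ used in the proof, and the counts $N_i$ from the counting oracle are not modeled) checks this numerically:

```python
import math

def decomposition(d, q, f, ell):
    """Radii r_i = (d^q + i/(10 f))^(1/q) and raw weights
    w_hat_i = exp(-r_i^q) - exp(-r_{i+1}^q), with w_hat_ell = exp(-r_ell^q),
    as in the proof above."""
    r = [(d ** q + i / (10.0 * f)) ** (1.0 / q) for i in range(ell + 1)]
    w_hat = [math.exp(-r[i] ** q) - math.exp(-r[i + 1] ** q)
             for i in range(ell)]
    w_hat.append(math.exp(-r[ell] ** q))
    return r, w_hat

r, w_hat = decomposition(d=1.0, q=2, f=2, ell=50)
# The w_hat_i telescope: their sum is exp(-r_0^q) = exp(-d^q), the weight
# that chi_q would assign to a single point at distance d.
print(abs(sum(w_hat) - math.exp(-1.0)))  # ~0, up to floating-point error
```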
\full{\section{\texorpdfstring{$\sqrt{n/\log n}$}{sqrt(n/log n)}-SVP to centered DGS reduction and a lower bound}
\label{sec:SVPtoDGS}
It is an immediate consequence of Lemma~\ref{lem:banaszczyk} that $O(\sqrt{n})$-SVP reduces to DGS. In fact, we can do a bit better.\footnote{Interestingly, \cite{ADRS15} achieves nearly identical parameters in a different context with a very different algorithm. They work over the dual and only solve the decisional variant of $\gamma$-SVP. Though they are interested in exponential-time algorithms, it is easy to see that their approach yields a polynomial-time reduction from (the decisional variant of) $\gamma$-SVP to DGS for any $\gamma = \Omega(\sqrt{n/\log n})$. See~\cite[Theorem 6.5]{ADRS15}. Their reduction only requires samples above the smoothing parameter, which is in some sense the reason that they only solve the decisional variant of SVP.}
\begin{proposition}
For any efficiently computable function $10 \leq f(n) \leq \mathrm{poly}(n)$, there is a polynomial-time reduction from $\gamma$-SVP to $(f, \eps)$-DGS, where $\gamma := 10\sqrt{ \frac{ n}{\log f(n)}}$, and $\eps := 1/f(n)$. The reduction only calls the oracle on the input lattice.
\end{proposition}
\begin{proof}
We assume without loss of generality that $n$ is large enough so that $f(n) < 2^{n-1}$. On input $\mathcal{L} \subset \mathbb{Q}^n$, the reduction behaves as follows. Let $d_{\min}, d_{\max} > 0$ be such that $d_{\min} \leq \lambda_1(\mathcal{L}) \leq d_{\max}$ and such that the bit lengths of $d_{\min}$ and $d_{\max}$ are polynomially bounded. (E.g., we can take $d_{\min}$ and $d_{\max}$ to be the values guaranteed by Lemma~\ref{lem:lambda1bitlength}.) For $i = 0, \ldots, 100n^2 \ceil{\log (d_{\max}/d_{\min})}$, let
\[
s_i := (1+1/n^2)^i \cdot \frac{d_{\min}}{\sqrt{ \log f(n)}}
\; .
\]
For each $i$, the reduction calls the DGS oracle $\ceil{100nf(n)^{2}}$ times on input $\mathcal{L}$ and $s_i$. It then returns the shortest resulting non-zero vector.
It is clear that the reduction runs in polynomial time. Let $i$ be such that $ s_{i-1} \leq
10\sqrt{\frac{1}{\log f(n)}} \cdot
\lambda_1(\mathcal{L}) < s_i$. Note that
\[
\Pr_{\vec{X} \sim D_{\mathcal{L}, s_i}}[\vec{X} = \vec0] < \frac{1}{1+4/f(n)} < 1-2/f(n)
\; .
\]
By Lemma~\ref{lem:banaszczyk},
\[
\Pr_{\vec{X} \sim D_{\mathcal{L}, s_i}}\big[\length{\vec{X}} > \gamma \cdot \lambda_1(\mathcal{L})\big] \leq \Pr_{\vec{X} \sim D_{\mathcal{L}, s_i}}[\length{\vec{X}} > s_i\sqrt{n}] < 2^{-n}
\; .
\]
Therefore, if the samples were truly from $D_{\mathcal{L}, s_i}$, each would be a valid approximation with probability at least $2/f(n)-2^{-n}$. It follows that each sample from the DGS oracle is a valid approximation with probability at least $1/f(n)^2-2^{-n}/f(n) > 1/(2f(n)^2)$, and the result follows.
\end{proof}
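The structure of this reduction can be illustrated with a toy Python sketch (ours; the DGS oracle is mocked by exact sampling from a finite candidate set of $\ensuremath{\mathbb{Z}}^2$, which a real oracle would of course not need, and the parameter grid is simplified):

```python
import math, random

random.seed(0)

def mock_dgs_oracle(points, s):
    """Stand-in for the DGS oracle: samples proportionally to
    exp(-pi ||v||^2 / s^2) over a finite candidate set `points`."""
    weights = [math.exp(-math.pi * (x * x + y * y) / s ** 2) for x, y in points]
    return random.choices(points, weights=weights)[0]

def svp_from_dgs(points, s_values, calls_per_s):
    """The wrapper from the proof: query the oracle on a grid of parameters
    s_i, many times each, and return the shortest nonzero vector seen."""
    best = None
    for s in s_values:
        for _ in range(calls_per_s):
            v = mock_dgs_oracle(points, s)
            if v != (0, 0) and (best is None or
                    v[0] ** 2 + v[1] ** 2 < best[0] ** 2 + best[1] ** 2):
                best = v
    return best

# Candidate set: integer points in a small box (lattice Z^2, lambda_1 = 1).
pts = [(x, y) for x in range(-3, 4) for y in range(-3, 4)]
v = svp_from_dgs(pts, s_values=[0.5, 1.0, 2.0], calls_per_s=50)
print(v)  # a shortest nonzero vector of Z^2, with overwhelming probability
```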
We now show a lower bound on the length of non-zero discrete Gaussian vectors. In particular, for any approximation factor $\gamma = o(\sqrt{n/\log n})$, we show a lattice (technically, a family of lattices indexed by the dimension $n$) such that the probability that $D_{\mathcal{L}, s}$ yields a $\gamma$-approximate shortest vector is negligible for any $s$. This shows that any efficient reduction from $\gamma$-SVP to DGS with $\gamma = o(\sqrt{n/\log n})$ must output a vector not returned by the DGS oracle and/or make DGS calls on a lattice other than the input lattice.
\begin{theorem}
\label{thm:SVPtoDGS}
For any sufficiently large $n$ and $2 < t < \sqrt{n}/10$, there exists a lattice $\mathcal{L} \subset \mathbb{Q}^n$ with $\lambda_1(\mathcal{L}) = t$ such that for any $s > 0$,
\[
\Pr_{\vec{X} \sim D_{\mathcal{L}, s}}[0 < \length{\vec{X}} \leq \sqrt{n}/10] < e^{-t^2}
\; .
\]
In particular, for any $t = \omega(\sqrt{\log n})$, $D_{\mathcal{L}, s}$ will yield a $\sqrt{n}/(10t)$-approximate shortest vector with at most negligible probability.
\end{theorem}
\begin{proof}
Fix $n$.
Let $\mathcal{L}' \subset \mathbb{Q}^{n-1}$ be an $(n-1)$-dimensional lattice with $\rho_s(\mathcal{L}') \geq 1+s^{n-1}$ and $\lambda_1(\mathcal{L}') > \sqrt{n-1}/10$, as promised by Lemma~\ref{lem:randomlattice}. Then, let $\mathcal{L} := \mathcal{L}' \oplus t\ensuremath{\mathbb{Z}}$ be the lattice obtained by \scarequotes{appending} a vector of length $t$ to $\mathcal{L}'$. Note that the only vectors of length at most $\sqrt{n-1}/10$ in $\mathcal{L}$ are those that are multiples of the \scarequotes{appended} vector. So,
\[
\Pr_{\vec{X} \sim D_{\mathcal{L}, s}}[0 < \length{\vec{X}} \leq \sqrt{n-1}/10] \leq \frac{\rho_s(t\ensuremath{\mathbb{Z}} \setminus \{ \vec0\})}{\rho_s(\mathcal{L}')} \leq \frac{\rho_{s/t}(\ensuremath{\mathbb{Z}} \setminus \{ 0\})}{1+s^{n-1}}
\; .
\]
Now, if $s \leq t$, then the numerator is less than $e^{-t^2}$. If $s > t$, then we have
\[
\frac{\rho_{s/t}(\ensuremath{\mathbb{Z}} \setminus \{ \vec0\})}{1+s^{n-1}} < \frac{s}{1+s^{n-1}} < \frac{1}{s^{n/2}} < \frac{1}{t^{n/2}} < e^{-t^2}
\; ,
\]
where we have used the fact that $\rho_{s'}(\ensuremath{\mathbb{Z}} \setminus \{0\}) < s'$, and the fact that $2 < t < \sqrt{n}/10$.
\end{proof}
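The inequality $\rho_{s'}(\ensuremath{\mathbb{Z}} \setminus \{0\}) < s'$ used at the end of the proof follows by comparing the sum over $k \neq 0$ with the Gaussian integral $\int_{-\infty}^{\infty} e^{-\pi x^2/s'^2}\,\mathrm{d}x = s'$, of which it is a (strict) lower Riemann sum. A quick numerical check (ours, truncating the sum):

```python
import math

def rho_z_nonzero(s, kmax=2000):
    """Truncated Gaussian mass rho_s(Z \\ {0}) = 2 * sum_{k>=1} exp(-pi k^2 / s^2).
    The tail beyond kmax is negligible for the parameters used here."""
    return 2.0 * sum(math.exp(-math.pi * k * k / (s * s))
                     for k in range(1, kmax + 1))

for s in [0.5, 1.0, 2.0, 10.0, 100.0]:
    print(s, rho_z_nonzero(s))  # always strictly below s
```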
\section{CVP to DGS reduction}
\label{sec:CVPtoDGS}
For completeness, we give a simple reduction from CVP to DGS. It suffices to find a parameter $s$ that is small enough so that the weight of a closest vector to the target is much larger than the weight of all non-closest vectors. The only slightly non-trivial observation necessary is that we can take $s$ large enough that it still has polynomial bit length.
\begin{proposition}
\label{prop:CVPtoDGS}
For any efficiently computable function $2 \leq f(n) \leq \mathrm{poly}(n)$, there is a polynomial-time reduction from CVP to $(f, \eps)$-DGS where $\eps := 1-\frac{1}{f(n)}$. The reduction succeeds with probability at least $1/(2f(n)^2)$ and only makes one oracle call on $\mathcal{L} - \vec{t}$ where $\mathcal{L}$ is the input lattice and $\vec{t}$ is the input target.
\end{proposition}
\begin{proof}
On input $\mathcal{L} \subset \mathbb{Q}^n$ and $\vec{t} \in \mathbb{Q}^n$, the reduction behaves as follows. Let $q \geq 1$ with polynomial bit length such that $\mathcal{L} \subseteq \ensuremath{\mathbb{Z}}^n/q$ and $\vec{t} \in \ensuremath{\mathbb{Z}}^n/q$. Let $d$ be the upper bound on $\mu(\mathcal{L})$ guaranteed by Lemma~\ref{lem:lambda1bitlength} (which in particular has polynomial bit length), and let $s := (100f(n)n q \log (10+d))^{-1}$. The reduction simply samples $\vec{y}$ from $D_{\mathcal{L} - \vec{t}, s}$ and returns $\vec{y} + \vec{t} \in \mathcal{L}$.
It is clear that the reduction runs in polynomial time. Note that for any point $\vec{x} \in \mathcal{L}$ that is not a closest point to $\vec{t}$, we must have $\length{\vec{x} - \vec{t}}^2 \geq \dist(\vec{t}, \mathcal{L})^2 + 1/q^2$. By Corollary~\ref{cor:shiftedbanaszczyk}, we have
\[
\Pr_{\vec{X} \sim D_{\mathcal{L} - \vec{t}, s}}[\length{\vec{X}}^2 \geq \dist(\vec{t}, \mathcal{L})^2 + 1/q^2] < e^{-1/(ns^2q^2)} < e^{-2f(n)n}
\; .
\]
Therefore, any distribution within statistical distance $\eps$ of $D_{\mathcal{L} - \vec{t}, s} + \vec{t}$ must output a closest point with probability at least $1/f(n) - e^{-2f(n)n} > 1/(2f(n))$. It follows that the oracle outputs a closest point with probability at least $1/(2f(n)^2)$, as needed.
\end{proof}
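A one-dimensional toy version of this reduction (ours; the lattice is fixed to $\ensuremath{\mathbb{Z}}$, and the DGS oracle is mocked by exact sampling over a truncated window, which a real oracle would not need) illustrates why a small enough parameter $s$ makes the sample land on a closest point:

```python
import math, random

random.seed(1)

def dgs_shifted_z(t, s, kmax=20):
    """Stand-in for a sample from D_{Z - t, s}: exact sampling over the
    shifted points k - t, truncated to a window around t."""
    pts = [k - t for k in range(math.floor(t) - kmax, math.ceil(t) + kmax)]
    weights = [math.exp(-math.pi * y * y / s ** 2) for y in pts]
    return random.choices(pts, weights=weights)[0]

def cvp_from_dgs(t, s=0.05):
    """The reduction above, specialized to L = Z: draw y from D_{Z - t, s}
    and output y + t.  For small s, the closest integer dominates the mass."""
    y = dgs_shifted_z(t, s)
    return round(y + t)  # round() only removes floating-point noise

print(cvp_from_dgs(2.3))  # 2, with overwhelming probability at s = 0.05
```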
\begin{corollary}
CVP is equivalent to DGS under polynomial-time, dimension-preserving reductions.
\end{corollary}
\begin{proof}
Combine Theorem~\ref{thm:DGStoCVP} with Proposition~\ref{prop:CVPtoDGS}.
\end{proof}
}{}
\full{\section*{Acknowledgments}
I would like to thank Divesh Aggarwal, Daniel Dadush, and Oded Regev for many enlightening discussions and for their helpful comments on early drafts of this work; Daniele Micciancio for finding a bug in an earlier version of Proposition~\ref{prop:CVPtoDGS}; and the SODA reviewers for their very helpful and thorough reviews.}{}
\full{\bibliographystyle{alpha}}{\bibliographystyle{abbrv}}
\newcommand{\etalchar}[1]{$^{#1}$}
\section{INTRODUCTION}
\normalsize
Entanglement is a distinctively quantum phenomenon whereby a pure
state of a composite quantum system may no longer be determined by the
states of its constituent subsystems~\citep{Schroedinger35a}. Entangled
pure states are those that have {\it mixed} subsystem states. To
determine an entangled state requires knowledge of the correlations
between the subsystems. As no pure state of a classical system can be
correlated, such correlations are intrinsically non-classical, as
strikingly manifested by the possibility of violating local realism
and Bell's inequalities~\cite{Bell93a}. In the science of quantum
information processing (QIP), entanglement is regarded as the defining
resource for quantum communication, as well as an essential feature
needed for unlocking the power of quantum computation.
The standard definition of quantum entanglement requires a preferred
partition of the overall system into subsystems--- that is,
mathematically, a factorization of the Hilbert space as a tensor
product. Even within quantum mechanics, there are motivations for
going beyond such subsystem-based notions of entanglement. Whenever
indistinguishable particles are sufficiently close to each other,
quantum statistics forces the accessible state space to be a proper
subspace of the full tensor product space, and exchange correlations
arise that are not a usable resource in the conventional QIP
sense. Thus, the natural identification of particles with preferred
subsystems becomes problematic. Even if a distinguishable-subsystem
structure may be associated to degrees of freedom different from the
original particles (such as a set of position or momentum modes, as in
~\cite{Zanardi2001a}), inequivalent factorizations may occur on the
same footing. Entanglement-like notions not tied to modes have been
proposed for bosons and fermions~\citep{Eckert2002a}.
However, the introduction of quasiparticles, or the purposeful
transformation of the algebraic language used to analyze the system
~\citep{Batista2001a, Batista2002a}, may further complicate the
choice of preferred subsystems.
In this paper, we review and further develop \emph{generalized
entanglement} (GE) introduced in~\cite{BKOV2002a}, which incorporates
the entanglement settings introduced to date in a unifying framework.
In quantum-mechanical settings, the key idea behind GE is that
entanglement is an \emph{observer-dependent concept}, whose properties
are determined by the expectations of a \emph{distinguished subspace
of observables} of the system of interest, without reference to a
preferred subsystem decomposition. Distinguished observables may
represent, for instance, a limited means of manipulating and measuring
the system. Standard entanglement is recovered when these means are
restricted to arbitrary \emph{local} observables acting on subsystems.
The central idea is to generalize the observation that standard
entangled pure states are precisely those that look mixed to local
observers.
The most fundamental aspects of this notion of GE make use only of the
convex structures of the spaces of quantum states and observables.
Therefore, it is also applicable in contexts much broader than that of
quantum systems with distinguished subspaces of observables. It may
be formulated within general convex frameworks, based on ordered
linear spaces or the closely related notion of convex effect algebras,
suitable for investigating the foundations of quantum mechanics and
related physical theories (cf.~\cite{Beltrametti97a} and references
therein). While commenting on physically motivated special cases, we
will concentrate on this general setting in the present paper. Though
we make no major advances over \cite{BKOV2002a} and \cite{BKOSV2004a},
new material here includes
Theorem 3.4 which gives another characterization of the convex cones
framework, in terms of restriction to a subspace of observables, and
more detailed investigation of the distinguished quantum observables
subspace. This includes the introduction of the {\em unique preimage property}
(Def. \ref{def: unique preimage property}) and the relationship between
the quadratic purity measure, generalized entanglement, and the UPIP in
this context, notably Problems \hbox{\ref{prob: when UPIP},}
\ref{prob: when GE implies mrp}, and \ref{prob: irreducible implies UPIP},
and Propositions \ref{prop: when else mrp implies ge} and
\ref{prop: max purity implies GE}.
Two sets of articles contain related
ideas. The first originated in the context of $C^{*}$ and von Neumann
algebras, for example in~\cite{CNT87a}, where the dynamical entropy of
automorphisms of algebras, intended to generalize the Kolmogorov-Sinai
dynamical entropy, is defined --- using a notion of entropy of a
state's restriction to a subalgebra introduced in~\cite{NT85a}. These
ideas were further developed with special attention to finding optimal
decompositions for the convex roof construction of entropy relative to
a subalgebra, and applied to quantum information concepts such as
quantum parameter estimation and the entanglement of formation. See
e.g.~\cite{Benatti96a, Uhlmann98a,BNU96a,BN98a,BNU2003a}. The
association of subsystems, whether physical or
``virtual''/``encoded,'' of a quantum system with associative
subalgebras appeared in a second set of
articles~\cite{KLV2000a,DeFilippo2000a,VKL2001a,Zanardi2001b}; this
association was recently revisited, and examples collected,
in~\cite{ZLL2004a}. Note, however, that these latter articles were
not directly concerned with the {\em extremality properties of reduced
states} which form the basis of our GE notion. Also, in both sets of
articles, the context of subalgebras, whether $C^*$, von Neumann, or
associative, is considerably more restrictive than the general context
we work in here, except for the fact that Benatti, Connes, Narnhofer,
Thirring, and Uhlmann often include and are sometimes primarily
interested in infinite-dimensional algebras, whereas we focus here
exclusively on the finite-dimensional setting.
\iffalse
Here we highlight the significance of GE from a physics and
information-physics perspective. For this purpose, we focus on the
case where the observable subspace is a Lie algebra. A key result is
then the identification of pure generalized unentangled states with
the \emph{generalized coherent states} (GCSs, a connection
independently noted by Klyachko~\cite{klyachko2002a}), which are well
known for their applications in physics~\cite{Zhang90a}.
We demonstrate that many information-theoretic notions previously thought
specific to partitioning into subsystems extend to coherent state
theory and beyond, define new measures of entanglement based on the
general theory, and apply quantum information to condensed-matter problems.
In particular, we introduce notions of \emph{Generalized Local Operations
assisted by Classical Communication} (GLOCC) under which the ordinary
measures of standard entanglement do not increase, as well as measures
of GE with the desired behavior under classes of GLOCC maps. New measures
of standard entanglement can be constructed for the multipartite case.
In the Lie-algebraic setting, a simple GE measure obtained from the
\emph{purity relative to a Lie algebra} is a useful diagnostic tool
for quantum many-body systems, playing the role of a \emph{disorder
parameter} for broken-symmetry quantum phase transitions.
\fi
\large
\section{MATHEMATICAL BACKGROUND}
\normalsize
For background on cones and convexity, we highly recommend the
text by~\cite{Barvinok2002a}, or the short introductory portion
of~\cite{HHL89a}; however, the summary we give here should suffice for
what follows.
\begin{definition}
A {\em positive cone} is a proper subset $K$ of a real vector space
$V$ closed under multiplication by nonnegative scalars. It is called
{\em regular} if it is (a) convex (equivalently, closed under
addition: $K + K = K$), (b) generating ($K-K=V$, equivalently $K$
linearly generates $V$,) (c) pointed ($K \cap -K = \{0\}$, so that it
contains no non-null subspace of $V$), and (d) topologically closed
(in the Euclidean metric topology, for finite dimension).
\end{definition}
In the remainder of this paper, ``vector space'' and ``linear space''
will mean {\em finite-dimensional} vector space, ``cone'' will mean a
regular cone in a finite-dimensional vector space, unless otherwise
stated.
A cone $K$ induces a partial order $\ge_K$ on $V$, defined by
$x \ge_K y := x - y \in K$. \iffalse $(V,\ge_K)$, or sometimes
$(V,K)$, is called an {\em ordered linear space}. \fi \iffalse The
Hermitian operators on a finite-dimensional complex vector space, with
the ordering induced by the cone of positive semidefinite operators,
are an example. \fi \iffalse (A relation $R$ is defined to be
a partial order if it is reflexive ($x R x$), transitive ($x R y ~\&~
y R z \Rightarrow x R z$) and antisymmetric ($(x R y ~\&~ y R x)
\Rightarrow x=y$).) \fi It is ``linear-compatible'': inequalities can
be added, and multiplied by positive scalars. If one removes the
requirement that the cones be generating, cones are in one-to-one
correspondence with linear-compatible partial orderings. A pair
$\langle V, \succeq \rangle$ of a linear space and a distinguished
such ordering is called an {\em ordered linear space}. The categories
of real linear spaces with distinguished cones and partially ordered real
linear spaces are equivalent.
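A standard concrete instance of this correspondence (added here for illustration) is the Loewner order:

```latex
% Example: take V = Herm(n), the real linear space of n-by-n Hermitian
% matrices, and K the cone of positive-semidefinite matrices.  K is convex,
% generating (every A splits as A = A_+ - A_- via the spectral
% decomposition), pointed, and closed, hence regular, and the induced order
\[
  A \ge_K B \;\iff\; A - B \ \text{is positive semidefinite}
\]
% is the familiar Loewner order on Hermitian matrices.
```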
Note that the intersection of the interior of a generating cone with a
subspace is (if not equal to $\{0\}$) a ({\em non-closed} but otherwise
regular) cone that
generates the subspace. When a cone or other set is said to generate
a linear space, it does so via linear combination. When a set is said
to generate a cone, it does so via positive linear combination. We
will use the notation $\dot{C}$ for the set $C \setminus \{0\}$.
By an {\em extremal} state in a convex set of states, we mean the
usual convex-set notion that a point $x$ is extremal in a convex set
$S$ if (and only if) it cannot be written as a nontrivial convex
combination $x = \lambda_1 x_1 + \lambda_2 x_2$ of points $x_1, x_2$
in $S$. (Convex combination means $\lambda_i
\ge 0, \lambda_1 + \lambda_2 = 1$, and nontrivial means $\lambda_i \ne
0, x_1 \ne x_2$). We sometimes use the physics term {\em pure state}
for an extremal point in a convex set of states, but for clarification
we emphasize that when this convex set is the set of all quantum
states on some Hilbert space, the term ``pure state'' in the present
paper refers to a projector $\pi := |\psi\rangle \langle \psi|$, and not to a vector
$\ket{\psi}$ in the underlying Hilbert space. We write $\mathrm{Extr}(S)$ for
the set of extremal points of a convex set $S$.
A {\em ray} belonging to a cone $K$ is a set $R$ such that there
exists an $x \in K$ for which $R = \{\lambda x: \lambda \ge 0 \}$,
i.e. it is the set of all nonnegative scalar multiples of some element
of the cone. An {\em extreme ray} in $K$ is a ray $R$ such that no $y
\in R$ can be written as a convex (or equivalently, positive)
combination of elements of $K$ that are not in $R$. The topological
closure condition guarantees, through an easy but not trivial argument
using the Krein-Milman theorem, that a (regular) cone is convexly
(equivalently, positively)
generated by its extreme rays. We'll say a point is
extremal in a cone if it belongs to an extreme ray of the cone; note
that such points are not usually extremal in the convex set sense,
although the cone is a convex set; the only point in a cone extremal
in the convex set sense is zero.
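Two standard examples of extreme rays (added for illustration):

```latex
% (1) For the nonnegative quadrant K = R_{\ge 0}^2, the extreme rays are the
%     two coordinate axes; the interior point (1,1) lies on no extreme ray,
%     since (1,1) = (1,0) + (0,1).
% (2) For the cone of positive-semidefinite matrices, the extreme rays are
%     the sets of nonnegative multiples of rank-one projectors,
\[
  \{\, \lambda \, |\psi\rangle\langle\psi| \ : \ \lambda \ge 0 \,\} \; ,
\]
% and the spectral decomposition exhibits every positive-semidefinite matrix
% as a positive combination of such extremal elements.
```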
The dual vector space $V^*$ for real $V$ is the space of all linear
functionals from $V$ to ${\mathbb{R}}$; the dual cone $C^* \subset V^*$ of the
cone $C \subset V$ is the set of such linear functionals which are
nonnegative on $C$.
$\lambda \in V^*$ is said to
separate $C$ from $-C$ if $\lambda(x) > 0$ for all nonzero $x \in
C$. For $\alpha \in V^*$, $x
\in V$, we write the value of $\alpha$ on $x$ as $\alpha[x]$, rather
than $\alpha(x)$.
The adjoint $\phi^*: V_2^* \rightarrow V_1^*$ of a linear map $\phi:
V_1 \rightarrow V_2$ is defined by \begin{eqnarray} \phi^*(\alpha)[x] =
\alpha[\phi(x)]\;, \end{eqnarray} for all $\alpha \in V_2^*, x \in V_1$. The
following proposition is easily (but instructively) verified.
\begin{proposition} \label{dual maps}
Let $C_i$ be a cone in $V_i$ for $i = 1,2,$ and let $\phi(C_1)
\subseteq C_2$. Then $\phi^*(C_2^*) \subseteq C_1^*$.
\end{proposition}
We will also use the following:
\begin{proposition} \label{epimono}
Let $C_i$ be a cone in $V_i$ for $i = 1,2,$ and let $\phi(C_1) = C_2$.
Then $\phi^*(C_2^*) \subseteq C_1^*$ and $\phi^*$ is one-to-one.
\end{proposition}
{\bf Proof:}
Let $\eta_1, \eta_2 \in C_2^*$, and $\eta_1 \ne \eta_2$. $\eta_1 \ne \eta_2$ is
equivalent to the existence of $y$ in $C_2$ such that $\eta_1[y] \ne
\eta_2[y]$. By the assumption that $\phi$ maps $C_1$ onto $C_2$,
there is an $x \in C_1$ such that $\phi(x)=y$; thus $\eta_1[\phi(x)]
\ne \eta_2[\phi(x)]$. By the definition of $\phi^*$, this implies
that $\phi^*(\eta_1)[x] \ne \phi^*(\eta_2)[x]$, which implies that
$\phi^*(\eta_1) \ne \phi^*(\eta_2)$. $\Box$
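As an illustrative numerical sketch (ours, not part of the original development), both propositions can be checked for the simplest cones, the positive orthants. Identifying each dual space with the space itself via the dot product, the dual of a positive orthant is again the positive orthant, and the adjoint $\phi^*$ of a linear map $\phi$ is its matrix transpose:

```python
import numpy as np

# Toy check of Proposition "dual maps" for the positive-orthant cones
# C1 = {x in R^3 : x >= 0} and C2 = {y in R^2 : y >= 0}.  An entrywise
# nonnegative matrix phi maps C1 into C2, and its adjoint (the
# transpose) must then map the dual cone C2^* into C1^*; with the
# dot-product identification, both dual cones are again orthants.
phi = np.array([[1.0, 2.0, 0.0],
                [0.0, 1.0, 3.0]])  # entrywise nonnegative: phi(C1) inside C2

rng = np.random.default_rng(0)

# phi maps the cone C1 into C2 ...
for _ in range(100):
    x = rng.random(3)                # a random element of C1
    assert np.all(phi @ x >= 0)      # phi(x) lies in C2

# ... so phi^* = phi.T maps the dual cone C2^* into C1^*:
for _ in range(100):
    eta = rng.random(2)              # a random functional in C2^*
    assert np.all(phi.T @ eta >= 0)  # phi^*(eta) is nonnegative on C1
```

Here injectivity of $\phi^*$ (Proposition~\ref{epimono}) holds as well, since this $\phi$ maps $C_1$ onto $C_2$.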
\large
\section{GENERALIZED ENTANGLEMENT}
\normalsize
We now introduce GE of states in a convex set of states given by the
intersection $\hat{C}$ of an affine ``normalization'' plane $\{x :
\lambda(x) = \alpha \}$ (for a fixed $\alpha$, which we'll take to be
one) with a cone $C$ of ``unnormalized states.'' This GE is a
relative notion: states are entangled or unentangled relative to
another such state-set $\hat{D}$, and a choice of
normalization-preserving map of the first state-set onto the second,
which generalizes the notion of computing the reduced density matrices
of a bipartite system. To fix ideas, note that in the case where $C$
is supposed to represent states on a finite dimensional quantum system
whose Hilbert space has dimension $d$, $C$ is isomorphic to the set of
$d \times d$ positive semidefinite matrices, whose normalized (i.e.
unit-trace) members form the convex set of density matrices for the
system, while the ambient linear space $V$ is the space of $d \times
d$ Hermitian matrices. We shall often use the abbreviation ``PSD''
for ``positive semidefinite.''
\begin{definition} \label{def: normalized unentangled}
Let $V,W$ be finite-dimensional real linear spaces equipped with cones
$C \subset V$, $D \subset W$, and distinguished linear functionals
$\lambda \in C^*$, $\tilde{\lambda} \in D^*$ that separate $C,D$ from
$-C, -D$ respectively. Let $\pi: V \rightarrow W$ be a linear map that
takes $C$ onto $D$ (that is, $\pi(C) = D$), and maps the affine plane
$L_\lambda := \{x \in V : \lambda(x)=1\}$ onto the plane
$M_{\tilde{\lambda}} := \{y \in W : \tilde{\lambda}(y) = 1 \}$.
An element (``state'') in $\hat{C} := L_\lambda \cap C$ is called
{\em generalized unentangled} (GUE) relative to $D$ if it is in the
closure of the convex hull of the set of extreme points $x$ of $\hat{C}$
whose images $\pi(x)$ are extreme in
$\hat{D} := D \cap M_{\tilde{\lambda}}$.
\end{definition}
\begin{definition}\label{def: cone-pair}
We will call a pair of linear spaces $V,W$ equipped
with distinguished cones $C,D$, functionals $\lambda,
\tilde{\lambda}$, and a map $\pi$, satisfying the conditions in the
above definition, a {\em cone-pair}. As noted above, we write $\hat{C}, \hat{D}$ for
the normalized subsets of $C, D$, i.e. for $\{x \in C: \lambda(x) =
1\}$ and $\{x \in D: \tilde{\lambda}(x) = 1\}$. We will also
sometimes call $\lambda, \tilde{\lambda}$ the {\em traces} on their
respective cones, so that the condition on $\pi$ above may be called
{\em trace-preservation}.
\end{definition}
That is, with the usual physics terminology that extremal states are
``pure'' and nonextremal ones ``mixed,'' unentangled pure states of
$\hat{C}$ are those whose ``reduced'' states (images under $\pi$) are pure,
and the notion extends to mixed states as in standard entanglement
theory: unentangled mixed states in $\hat{C}$ are those expressible as
convex combinations of unentangled pure states (or limits of such
combinations, though the latter is unnecessary in finite dimension).
It is easy to see that the motivating example of ordinary bipartite
entanglement is a special case of this definition. Here, $C$ is the
cone of PSD operators on some tensor product $A \otimes B$ of
finite-dimensional Hilbert spaces, while $D$ is the direct product of
the cones of PSD operators on $A$ and on $B$ (intuitively, it is just
the cone of all ordered pairs whose first member is a positive
operator on $A$ and whose second is one on $B$). $\lambda$ is
the trace. $\pi$ is the map
that takes an operator on $A \otimes B$ to the ordered pair of its
``marginal'' or ``reduced'' operators (``partial traces'') on $A$ and
$B$. Similarly, standard multipartite entanglement is a special case of GE.
So we may view the GUE definition (in particular condition (a) of
Definition 3.3 below) as based on extending the long-standing
observation that for ordinary multipartite finite-dimensional quantum
systems, a pure state is entangled if and only if at least one of its
reduced density matrices is mixed.
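The bipartite special case can be made concrete with a short numerical sketch (ours, using standard numerics rather than anything from the text): the map $\pi$ is computed by the two partial traces, and the purity test ${\rm tr}\,(\rho^2)=1$ on the marginals detects entanglement of a pure state.

```python
import numpy as np

def partial_traces(rho, dA, dB):
    """Return the pair of 'marginal' operators, i.e. the map pi."""
    r = rho.reshape(dA, dB, dA, dB)
    rho_A = np.trace(r, axis1=1, axis2=3)   # trace out subsystem B
    rho_B = np.trace(r, axis1=0, axis2=2)   # trace out subsystem A
    return rho_A, rho_B

def is_pure(rho, tol=1e-10):
    """A normalized PSD operator is extremal (pure) iff tr(rho^2) = 1."""
    return abs(np.trace(rho @ rho) - 1.0) < tol

# Product state |0>|0>: both marginals are pure, so it is unentangled.
prod = np.zeros(4); prod[0] = 1.0
rho_prod = np.outer(prod, prod)
a, b = partial_traces(rho_prod, 2, 2)
assert is_pure(a) and is_pure(b)

# Bell state (|00> + |11>)/sqrt(2): both marginals are maximally mixed,
# so this pure state is entangled.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(bell, bell)
a, b = partial_traces(rho_bell, 2, 2)
assert not is_pure(a) and not is_pure(b)
```

The reshape-and-trace idiom implements the partial trace for any product dimension $d_A \times d_B$.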
It is perhaps mathematically more natural to define the {\em
unnormalized} unentangled states of $C$ relative to $D$, omitting all
mention of $\lambda, \tilde{\lambda}$, and the
normalization-preservation requirement on $\pi$. That is:
\begin{definition}
Let $C, D$ be cones in finite-dimensional real linear spaces
$V,W$ respectively, and let $\pi: V \rightarrow W$ map $C$
onto $D$. $x \in C$ is {\em generalized unentangled} (relative to
$D, \pi$) if either (a) $x$ belongs to an extreme ray of $C$, and
$\pi(x)$ belongs to an extreme ray of $D$, or (b) $x$ is a positive
linear combination of states satisfying (a), or a limit of such
combinations.
\end{definition}
It is easy to verify that the unnormalized GUE states are a (possibly
non-generating, but otherwise regular) cone in $V$. If one introduces
the notion of normalization in $C$ via a functional $\lambda$, it is
also easily verified that the normalized GUE states of Definition
\ref{def: normalized unentangled} are precisely the intersection of
this cone with the normalization plane. (It is straightforward
to introduce a normalization plane, and associated functional
$\tilde{\lambda}$ on $W$, if desired, as the image of $L_\lambda$
under $\pi$.)
\cite{BKOV2002a}, and especially~\cite{BKOSV2004a},
stressed applications in which the reduced state-set is obtained by
selecting a distinguished subspace of the observables (Hermitian
operators) on some quantum system. The reduced state-set is then the
set of linear functionals (equivalently, consistent lists of
expectation values for the distinguished observables) on this subspace
of the space of all observables, that are induced by normalized quantum
states\footnote{It is worth noting that beyond the setting of standard
quantum entanglement this is not in general a vacuous requirement:
there can be normalized linear functionals on the reduced state set
that are {\em not} obtainable by restriction from a quantum state on
the set of all observables. Although all normalized functionals on the
distinguished observables can be extended in many ways to normalized
functionals on the full set, in some cases not all can be extended to
{\em positive} functionals.}. We dub this class of cone-pairs the {\em
distinguished quantum observables} setting. Even in
the more general cones setting, there is a natural notion of
observables, and
Definition~\ref{def: normalized unentangled} can be interpreted
as restriction of the states to a subspace of the observables. To show
this we employ a formalism of states, measurements, and observables
that, in many variants, is frequently used as a touchstone of
``operational'' approaches to theories in the abstract~\footnote{By an
``operational theory,'' we mean one in which a theory describes
various measurements or operations one can perform on systems of the
type described by the theory, and specifies a set of possible
``states,'' each of which determines the probabilities for the
outcomes of all possible measurements, when the system is in that
state.}.
We view $V^*$ as a space of real-valued observables. For $x \in V^*$
and $\eta \in \hat{C}$, we interpret $x[\eta]$ as the expectation
value of observable $x$ in state $ \eta$. We view $V$ as the dual of
$V^*$ in such a way that $x[\eta] = \eta[x]$ for all $x \in V^*, \eta
\in V$. But what guarantee do we have that these expectation values
behave in a reasonable way, as observables in an operational theory
should? That is, can we view the expectation value $\eta(x)$ of an
observable $x$ in a state $\eta$ as the expected value of some
quantity being measured? By this we mean that $x$ is associated with
a quantity that takes different values depending on the outcome of the
measurement, and the state determines the expectation value by
determining probabilities for the different outcomes of the
measurement, such that the value $\eta(x)$ is indeed the expectation
value of the outcome-dependent quantity, calculated according to the
probabilities assigned to the outcomes by the state.
We will only sketch the answer to this question; more details may be
found in many places (though accompanied by additional concepts and
formalism), notably \cite{Beltrametti97a}. In the structure we have
described, of state-space and dual observable space, we are able to
find a special class of observables, the ``decision effects,'' whose
expectation value may be viewed as the probability of a measurement
outcome. These ``effects'' are the elements of the initial interval
${\cal E} := [0, \lambda] \subset C^*$, i.e. the set of $x \in C^*$
satisfying \mbox{$\lambda \ge_{C^*} x$}. A (finite) {\em resolution
of $\lambda$} is a set of effects $x_i \in {\cal E}$ such that $\sum_i x_i
= \lambda$. For normalized states $\omega$, it follows that
$\omega(x_i) \ge 0$ and $\sum_i \omega(x_i)=1$, so the values
$\omega(x_i)$ may be viewed as probabilities of measurement outcomes,
with a resolution of $\lambda$ representing the mutually exclusive and
exhaustive outcomes of some measurement. Then it can be shown that
for {\em any} observable $A \in V^*$, a resolution ${\cal R}$ of $\lambda$
and an assignment of real values $v(x_i)$ to the outcomes $x_i$ in ${\cal R}$ can
be found, such that for all normalized states $\omega$, $\omega(A) =
\sum_i \omega(x_i) v(x_i)$. For example, this is a consequence of (i)
of Theorem 1 in \cite{Beltrametti97a}. In general the converse does
not hold, giving rise to a generalization of observables sometimes
known as {\em stochastic observables} for which not only does the
analogous statement (which is (i) of Theorem 1 of
\cite{Beltrametti97a} where stochastic observables are just called observables) hold,
but so does the converse of this analogue. The relation between the
convex and the effect-algebras approach has been treated in various
places (and aspects of it appear in some contexts, e.g. \cite{Ludwig83a},
even earlier than the
formal notion of effect algebra). Some references are
\cite{Gudder98a}, \cite{Gudder99a}, \cite{Gudder99b}, and the book
\cite{DallaChiara2004a} (especially Ch. 6). \cite{Barnum2003a}
explores the relation between probabilistic operational theories and
``weak effect algebras,'' as well as related more dynamical objects termed {\em
operation algebras}, but without explicit consideration of observables.
\cite{Bennett97a}, \cite{Foulis98a}, and \cite{Foulis2000a} address
very closely related representational issues
but without the constraint of convexity. The relations
between convex and general effect-algebras and their representations
are discussed in \cite{Foulis2005a}.
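In the standard quantum case this representation of an observable by a resolution of $\lambda$ and outcome values is just the spectral theorem. A minimal numerical sketch (our illustration) takes the effects $x_i$ to be the eigenprojectors of a Hermitian observable $A$ and the values $v(x_i)$ its eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random Hermitian observable A on C^3 and a random density matrix omega.
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = (M + M.conj().T) / 2
G = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
omega = G @ G.conj().T
omega /= np.trace(omega).real

# Spectral decomposition: the eigenprojectors P_i are effects resolving
# the trace functional (they sum to the identity), and the eigenvalues
# a_i play the role of the outcome values v(x_i).
vals, vecs = np.linalg.eigh(A)
projs = [np.outer(vecs[:, i], vecs[:, i].conj()) for i in range(3)]
assert np.allclose(sum(projs), np.eye(3))

# The outcome probabilities omega(P_i) are nonnegative and sum to one ...
probs = np.array([np.trace(omega @ P).real for P in projs])
assert np.all(probs >= -1e-12) and abs(probs.sum() - 1.0) < 1e-9

# ... and reproduce the expectation value omega(A) as sum_i omega(P_i) a_i.
assert abs(np.trace(omega @ A).real - (probs * vals).sum()) < 1e-9
```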
We now show that our formalism of maps $\pi$ onto cones $D$ is
equivalent to restriction to a subspace of observables.
\begin{theorem}\label{observable restriction equivalent to conepair}
I) (``Observable restriction implies cone-pair'').
Let $C$ be a cone in $V$, and let $\lambda \in V^*$ separate $C$ from
$-C$ (as in Def. \ref{def: normalized unentangled}),
and let $W^*$ be a subspace of $V^*$, containing $\lambda$.
For $\eta \in V$, define $\eta\smash{\downharpoonright}: W^* \rightarrow
{\mathbb{R}}$ as the restriction of $\eta$ to the subspace $W^*$,
i.e. $\eta\smash{\downharpoonright}(x) = \eta(x)$ for $x \in W^*$ and otherwise $\eta\smash{\downharpoonright}(x)$
is undefined. Thus $\eta\smash{\downharpoonright} \in (W^*)^* =: W$.
Define $D = \{\eta\smash{\downharpoonright} : \eta \in C \}$, $M_\lambda = \{y
\in W : \lambda(y)=1\}$.
Define $\pi$ as the
restriction map $\pi := \smash{\downharpoonright}: V \rightarrow W, \eta \mapsto \eta\smash{\downharpoonright}$.
Then $V, W, C,D, \lambda, \tilde{\lambda} (:= \lambda), \pi$
form a cone-pair in the sense of Definition \ref{def: cone-pair}.
That is, $D$ is a cone in $W$, $\pi(C)=D$, and
the image under $\smash{\downharpoonright}$ of the plane $L_\lambda \equiv \{\eta \in V:
\lambda(\eta) = 1\}$ is a translation of a plane separating $D$ from
$-D$.
II) (``Cone-pair implies observable restriction'').
Let $V,W,C,D,\lambda, \tilde{\lambda},\pi$ be a cone-pair. Then there
exists an injection (one-to-one map) $\tau: W^* \rightarrow V^*$, taking
$\tilde{\lambda}$ to $\lambda$, such that $\pi$ is the map from $V$ to $W$
that takes $x$ to the function $x\smash{\downharpoonright}_{W^*}$. Here $x \smash{\downharpoonright}_{W^*}$ is defined
as the
linear functional on $W^*$ whose value on $a \in W^*$ is the value of
$x$'s restriction to $\tau(W^*)$ on $\tau(a)$.
\end{theorem}
{\em Remark concerning I:} The restriction that the subspace $W^*$ contain
$\lambda$ is hardly objectionable from an operational point of view.
$\lambda$'s expectation value is just the normalization constant, and
is independent of which normalized state has been prepared. Therefore
it can be measured without any resources, and there is no point in
claiming that omitting it could represent a physically significant
restriction on the means available to observe or manipulate a system.
{\em Remark concerning II:} The definition of $\smash{\downharpoonright}$ in part I of the theorem
involved a subspace
$W^*$ of $V^*$; in part II we have defined $W^*$
abstractly rather than as a subspace of $V^*$, so it is $\tau(W^*)$,
which is isomorphic to $W^*$ but {\em is} a subspace of $V^*$, to
which we restrict states in defining $\smash{\downharpoonright}$. (Of course, $W^*$ {\em
itself} is a subspace of $V^*$ according to the category-theoretic
definition of subspace.)
{\bf Proof:} To prove part I, we must show that
$D$ is a cone in $W$, and $\lambda$ separates it from $-D$.
It is easy to verify linearity of $\smash{\downharpoonright} \equiv \pi$ from the
definition, and in finite dimensions, it is also easy to verify that
linear maps from one vector space {\em onto} another (such as $\smash{\downharpoonright}$)
take cones to cones. For all $x \in \dot{C} = C - \{0\}$,
$\lambda[x] > 0$. But $\lambda[x] = x[\lambda]$ by duality, and by
the definition of $\smash{\downharpoonright}$ and the fact that $\lambda \in W^*$,
$x[\lambda] = x\smash{\downharpoonright} [\lambda] \equiv \lambda[ x \smash{\downharpoonright}]$, so $\lambda[x\smash{\downharpoonright}]
> 0$ for all $x \in \dot{C}$, i.e. (since $\smash{\downharpoonright}$ maps $\dot{C}$ onto
$\dot{D}$), $\lambda[y] > 0$ for all $y \in \dot{D}$. That is,
$\lambda$ separates $D$ from $-D$.
To prove part II, let $\tau$ be $\pi^*$. That is, for all $x \in W^*$,
$\eta \in V, \tau(x)[\eta] = x[\pi(\eta)]$. By duality,
this gives $\eta[\tau(x)] = \pi(\eta)[x]$. Since, by
Proposition~\ref{epimono}, $\tau$ is an injection, this last equation
determines $\pi(\eta)$ to be essentially $\eta\smash{\downharpoonright}_{\tau(W^*)}$, as
desired. The ``essentially'' refers to the fact that $\pi(\eta)$ is
actually the pullback along $\tau$ of this restriction; the two are
the same function only if one identifies $W^*$ with its image under
$\tau$. $\tau$, in other words, tells us how $W^*$ can be identified
with a subspace of the full space $V^*$ of observables, in such a way
that $\pi(\eta)$ becomes identified with the restriction of $\eta$ to
$W^*$. $\Box$
\begin{proposition}\label{prop: basic preimage facts}
In a cone-pair, $\pi$ has the property that
for $x \in \rm Extr~{\hat{D}}$, the set $\pi^{-1}(x)$ is convex, compact,
and closed, and its extremal elements are extremal in $\hat{C}$.
\end{proposition}
{\bf Proof:} Convexity is immediate: if $y_1, y_2 \in C$, $\pi(y_1)=
x$ and $\pi(y_2) = x$, then $\mu y_1 + (1 - \mu) y_2 \in C$ for $\mu
\in [0,1]$ by convexity of $C$, and by linearity of $\pi$, $\pi(\mu
y_1 + (1 - \mu) y_2) = \mu \pi(y_1) + (1 - \mu) \pi(y_2) = x$.
Closedness of $\pi^{-1}(x)$ in the Euclidean metric topology
follows from the fact that $\pi$, being a
function from a finite-dimensional inner product space to a
finite-dimensional normed space, is continuous (cf. e.g.
\cite{Young88a}, Exercise 7.3), and the preimage of a closed set under
a continuous function is closed (cf. e.g. \cite{Kripke68a}, Corollary
IV.C.4). Since finite intersections of closed sets are closed, $C
\cap \pi^{-1}(x)$ is closed as well. Compactness follows from the
fact that $\hat{C}$ is compact (cf. e.g. \cite{Barvinok2002a}) hence
a compact metric space, and a closed subset of a compact metric space
is compact (\cite{Kripke68a}, Corollary VII.A.11).
Now let $x \in \rm Extr~{\hat{D}}$, and let $y \in \pi^{-1}(x)
\cap C$ not be extremal in $\hat{C}$. We need to show that such a $y$ is
not extremal in $\pi^{-1}(x)$ either. $y \notin \rm Extr~{\hat{C}}$ means
there are $y_1, y_2 \in \hat{C}$ with $y_1 \ne y_2$ and $y = \mu y_1 + (1 -
\mu)y_2$ for some $\mu \in (0,1)$. By linearity of $\pi$, $x \equiv \pi(y) = \mu
\pi(y_1) + (1 - \mu) \pi(y_2)$; since $x \in \rm Extr~(\hat{D})$,
$\pi(y_1) = \pi(y_2) = x$. Hence $y_1, y_2 \in \pi^{-1}(x)$, so $y
\notin \rm Extr~(\pi^{-1}(x) \cap C)$. $\Box$
In important classes of examples, a stronger property holds:
\begin{definition} \label{def: unique preimage property}
A cone-pair including $C,D,\lambda,\pi$ is said to have the {\em
unique preimage property} (UPIP) if $x \in \rm Extr~{\hat{D}}$ implies
that $\pi^{-1}(x)$ consists of a single element (which must therefore
be extremal).
\end{definition}
Equivalently (because of Prop. \ref{prop: basic preimage facts}),
extremal reduced states have only extremal preimages.
\begin{problem} \label{prob: when UPIP}
Find nontrivial necessary and/or sufficient conditions (some are given
below, but others almost certainly exist) for cone-pairs $C, D, \pi$
to have the UPIP.
\end{problem}
Finally, note that the parenthetical claim in Definition \ref{def:
unique preimage property} follows from Proposition
\ref{prop: basic preimage facts}: if $x \in \rm Extr~{\hat{D}}$ and
$\pi^{-1}(x)$ consists of a single element, that element is trivially
extremal in $\pi^{-1}(x)$, and hence extremal in $\hat{C}$.
\large
\section{GENERALIZED ENTANGLEMENT IN SPECIAL CLASSES OF CONES}
\normalsize
We now formally define several ``settings'' in which to study GE;
these are special classes of cone-pairs, physically and/or
mathematically motivated.
\begin{definition}
\begin{mlist}
\item {\em Distinguished quantum observables setting}, defined above.
An equivalent formulation is the {\em Hermitian-closed (aka
$\dagger$-closed) operator subspace setting}, in which the
distinguished observable subspace is the Hermitian operators belonging
to a $\dagger$-closed subspace, containing the identity operator, of
the complex vector space of all linear operators on a quantum system.
\item {\em Lie-algebraic setting.}
Here, $C$ is the cone of positive Hermitian operators on a
(finite-dimensional) Hilbert space carrying a Hermitian-closed Lie
algebra $\mathfrak{g}$ (playing the role of $W^*$) of Hermitian operators (with
Lie bracket $[X,Y] := i(XY-YX)$, and containing the identity operator)
and $D$ the cone (in $(W^*)^* =: W$) of linear functionals on $\mathfrak{g}$
induced from positive Hermitian elements of $C$ by restriction to
$W^*$.
\item {\em Associative algebraic setting.} Here, the distinguished
observables are the Hermitian elements of some associative subalgebra
of the associative algebra of all operators on a quantum system.
\end{mlist}
\end{definition}
By construction, the Lie-algebraic and associative algebraic settings
are special cases of the distinguished quantum observables case. As
noted in ~\cite{BKOV2002a},
since the Lie-algebraic setting was defined to involve
finite-dimensional $\dagger$-closed matrix representations, the Lie
algebras involved are necessarily reductive, i.e., the
direct product\footnote{As Lie algebras; the induced direct product of
the algebras considered as vector spaces (i.e. without their Lie
bracket structure) is also a vector space direct sum.} of a semisimple
and an Abelian part.
A distinction that can be nontrivially made within all the settings in
the above list is between those in which the distinguished observables
act {\em irreducibly}, and those in which there is a nontrivial
subspace invariant under the action of all observables.
\begin{proposition}
In the $\dagger$-closed operator subspace setting, the distinguished
subspace has a basis of Hermitian operators that is orthonormal in the
trace inner product $\langle A, B \rangle = {\rm tr}\; AB$.
\end{proposition}
Because of this proposition, we may construct an orthogonal projection
operator (some would call it a superoperator) $\Pi_S$, acting on the
space of Hermitian operators by projecting into the subspace of
distinguished observables. We can also use such a basis to define a
measure of entanglement for pure states, the {\em relative purity}
(although the name may be slightly misleading, for reasons we will explain).
\begin{definition}
Let $\omega$ be a state on a $\dagger$-closed set $S$ of quantum
observables. The {\em purity} $P(\omega)$ of a state $\omega$ is
defined by letting $X_\alpha$ be an orthonormal (in trace inner
product) basis of $S$. Then \begin{eqnarray} P(\omega) := \sum_{\alpha}
(\omega(X_\alpha))^2\;. \end{eqnarray}
\end{definition}
Note that any state $\omega$ on the {\em full} operator space
corresponds uniquely to a density operator $\rho_{\omega}$, defined by the
condition ${\rm tr}\; (\rho_\omega X) = \omega(X)$ for all observables $X$.
Closely related to the above purity is the {\em relative purity} of a
pure state $\ket{\psi}$ of the overall quantum system; this is defined
equal to the purity of the state it induces on $S$, or equivalently,
with $X_\alpha$ as above, \begin{eqnarray} P_S(\ket{\psi}) := \sum_{\alpha \in S}
|\dmelement{\psi}{X_\alpha}|^2\;. \end{eqnarray} In fact, this definition
could be straightforwardly extended to mixed states $\omega$ on the
full Hilbert space, as \begin{equation} P_S(\omega):= \sum_{\alpha \in S} |{\rm tr}\;
(\rho_\omega X_\alpha)|^2\;. \end{equation}
However, a requirement for entanglement measures is
convexity~\citep{Vidal2000b}, and the above extension lacks this as
well as other desirable properties. We will generally extend
pure-state entanglement measures $\mu$ to mixed states via the {\em
convex hull (often called convex roof) construction}
(cf. e.g. \cite{Uhlmann98a, Bennett96a})
standard in ordinary entanglement theory: the value of the measure
$\mu$ on
a mixed state $\omega$ is the infimum, over convex decompositions
$\omega = \sum_i p_i \pi_i$ of $\omega$ into pure states $\pi_i$, of
the average value of the pure-state measure, that is, of $\sum_i p_i
\mu(\pi_i)$. This is convex by construction.
Defining $\Pi_S$ as the projection superoperator onto the operator
subspace $S$, it is easily verified that \begin{eqnarray} P_S(\omega) =
\sum_{\alpha} |{\rm tr}\; \Pi_S(\rho_{\omega}) X_\alpha|^2
\equiv {\rm tr}\;
[\Pi_S(\rho_\omega)^2]\;. \end{eqnarray}
For any density operator $\rho$, we call $\Pi_S(\rho)$ the associated
{\em reduced} density operator; note that it need {\em not} be a
positive operator on the full state space (although it is in the
standard multipartite case). This is not problematic because for any
PSD element $R$ of the {\em distinguished} observable space, ${\rm tr}\; \Pi_S(\rho)
R \ge 0$, of course.
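The following sketch (ours, with the distinguished subspace $S$ chosen as the local observables of two qubits, plus the identity) computes the relative purity both as a sum of squared expectation values and as ${\rm tr}\,[\Pi_S(\rho)^2]$, and confirms that the product state scores higher than the Bell state:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Orthonormal (trace inner product) basis X_alpha of the distinguished
# subspace S: local observables of a two-qubit system, plus the identity.
basis = ([np.kron(s, I2) / 2 for s in (sx, sy, sz)]
         + [np.kron(I2, s) / 2 for s in (sx, sy, sz)]
         + [np.kron(I2, I2) / 2])
for i, X in enumerate(basis):
    for j, Y in enumerate(basis):
        assert abs(np.trace(X @ Y) - (i == j)) < 1e-12   # orthonormality

def relative_purity(rho):
    # P_S(rho) = sum_alpha (tr rho X_alpha)^2 = tr[Pi_S(rho)^2]
    exps = [np.trace(rho @ X).real for X in basis]
    proj = sum(e * X for e, X in zip(exps, basis))        # Pi_S(rho)
    assert abs(sum(e**2 for e in exps)
               - np.trace(proj @ proj).real) < 1e-12
    return sum(e**2 for e in exps)

psi_prod = np.zeros(4); psi_prod[0] = 1.0                 # |00>
psi_bell = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)        # Bell state

P_prod = relative_purity(np.outer(psi_prod, psi_prod.conj()))
P_bell = relative_purity(np.outer(psi_bell, psi_bell.conj()))
assert P_prod > P_bell   # unentangled state has higher relative purity
```

For this choice of $S$ one finds $P_S = 3/4$ for the product state and $1/4$ for the Bell state, whose local expectation values all vanish.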
The following proposition is immediate from Theorem 14 of~\cite{BKOV2002a}.
\begin{proposition} \label{prop: mre implies ge in irreducible lie}
In the irreducible Lie-algebraic setting, pure states with maximal
relative purity are generalized unentangled.
\end{proposition}
The converse is {\em not} true in general. Also,
the analogue of Prop. \ref{prop: mre implies ge in irreducible lie}
for the general Lie-algebraic setting (allowing {\em reducible}
representations) can be shown by example to be false.
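A worked numerical instance of the irreducible Lie-algebraic setting (our own example, consistent with Proposition \ref{prop: mre implies ge in irreducible lie}): for the spin-1 irreducible representation of $\mathfrak{su}(2)$, extended by the identity, the highest-weight (coherent) state attains the maximal relative purity over all pure states.

```python
import numpy as np

rng = np.random.default_rng(2)

# Spin-1 irreducible representation of su(2).
s = 1 / np.sqrt(2)
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
I3 = np.eye(3)

# Orthonormal (trace inner product) basis of su(2) + identity:
# tr(J_i J_j) = 2 delta_ij here, so each J_i is normalized by sqrt(2).
basis = [Jx / np.sqrt(2), Jy / np.sqrt(2), Jz / np.sqrt(2), I3 / np.sqrt(3)]

def rel_purity(psi):
    return sum(np.vdot(psi, X @ psi).real ** 2 for X in basis)

highest_weight = np.array([1.0, 0.0, 0.0])   # |J=1, M=1>: coherent state
middle = np.array([0.0, 1.0, 0.0])           # |J=1, M=0>

assert abs(rel_purity(highest_weight) - 5 / 6) < 1e-12
assert rel_purity(middle) < rel_purity(highest_weight)

# The coherent state's relative purity is maximal over random pure states:
for _ in range(200):
    v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    v /= np.linalg.norm(v)
    assert rel_purity(v) <= rel_purity(highest_weight) + 1e-9
```

The bound follows from $|\langle \vec{J}\rangle| \le 1$ for spin 1, with equality exactly on the coherent-state orbit.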
Another situation in which maximal relative purity implies generalized
unentanglement is embodied in the following.
\begin{proposition} \label{prop: when else mrp implies ge}
In the $\dagger$-closed operator subspace setting states with unit
relative purity have unique preimages, and are
therefore generalized unentangled.
\end{proposition}
\noindent
{\bf Proof:} A necessary and sufficient condition for a normalized
state $\omega$ on the space of all observables to be pure is ${\rm tr}\;
(\rho_\omega^2) = 1$. (Henceforth we suppress the $\omega$-dependence
of $\rho$.) Letting $X_\alpha$ be an orthonormal basis for the space
of all observables such that a subset (denoted by the letter $\beta$
for the index) indexes the distinguished subspace $S$, with another
subset (indexed by $\gamma$) indexing $S^\perp$, and writing $\langle X_\alpha
\rangle$ for ${\rm tr}\; \rho X_\alpha$, we have $\rho = \sum_\alpha
\langle X_\alpha \rangle X_\alpha$.
From this and orthonormality of the $X_\alpha$ it is easy to see that
${\rm tr}\; (\rho^2) = \sum_\alpha \langle X_\alpha \rangle^2$.
$P_S(\rho) \equiv \sum_{\beta \in S} \langle X_\beta \rangle^2$; since
extremal overall states have ${\rm tr}\; (\rho^2) = 1$, $P_S(\rho)$ for
a pure state $\rho$ can never be greater than $1$, since it is a sum
of a subset of the nonnegative quantities $\langle X_\alpha \rangle^2$ which
sum to $1$.
Now let $\rho$ be a pure state with unit relative purity, i.e.
$\sum_{\beta} \langle X_\beta \rangle^2 = 1$.
This implies $\sum_{\gamma} \langle X_\gamma \rangle^2 =
0,$ which requires $\langle X_\gamma \rangle = 0$ for all $\gamma$
indexing $S^\perp$; hence $\rho = \Pi_S(\rho)$. Moreover the reduced
state has a unique preimage, namely $\rho$ itself: any preimage
$\sigma$ agrees with $\rho$ on $S$, so ${\rm tr}\; (\rho_\sigma^2) = 1
+ \sum_{\gamma} \langle X_\gamma \rangle_\sigma^2$, and ${\rm tr}\;
(\rho_\sigma^2) \le 1$ forces $\langle X_\gamma \rangle_\sigma = 0$
for all $\gamma$, i.e. $\rho_\sigma = \rho$. If $\Pi_S(\rho)$ did not
induce an extremal state in the convex set of reduced states, it would
be a nontrivial convex combination of operators inducing distinct
reduced states; these would have preimages whose corresponding convex
combination would be a preimage of $\Pi_S(\rho)$, hence equal to
$\rho$, violating the assumption that $\rho$ is pure. $\Box$
What about states whose relative
purity is maximal among all states, even when this maximal value is
not unity? When the maximum is not unity, no pure state has an
unchanged reduced density matrix: all pure state density
matrices project to reduced ``density matrices'' that are either
mixed, or not even PSD. Thus we cannot immediately
conclude that $\sum_\beta \langle X_\beta \rangle^2 =1$, so we do not
have $\langle X_\gamma \rangle = 0 $ for all $X_\gamma \in S^\perp$.
If there is nevertheless a unique preimage, i.e. the $\langle X_\gamma
\rangle$ are uniquely determined by the $\langle X_\beta \rangle$ (for
$\beta$ indexing $S$), it must be a consequence of positive
semidefiniteness of the initial state, since linear algebra alone
places no restrictions on the $\langle X_\gamma \rangle$.
However, because relative purity is a strictly convex function
of the reduced density matrix, a state's having
maximal (even if not unit) relative purity
implies generalized unentanglement in the $\dagger$-closed
operator subspace framework. It does not, however, imply the other
part of Proposition \ref{prop: when else mrp implies ge}, that the
reduced state has a unique preimage. Formally:
\begin{proposition} \label{prop: max purity implies GE}
Let $x \in \hat{C}$ be such that the relative purity of $x$ is no less
than that of every other element of $\hat{C}$. Then $x$ is generalized
unentangled.
\end{proposition}
\noindent
{\bf Proof:}
The relative purity of $\omega$ is the squared Euclidean norm
$||\Pi_S(\rho_\omega)||^2$ (with respect to the trace inner product),
so $\omega$ has maximal relative
purity iff $||\Pi_S(\rho_\sigma)|| \le ||\Pi_S(\rho_\omega)||$
for all $\sigma \in \hat{C}$. Suppose
there are $\alpha, \beta \in \hat{D}, \alpha \ne
\beta,$ with reduced density operators $\rho_\alpha, \rho_\beta$,
and $\mu \in (0,1)$,
such that $\Pi_S(\rho_\omega) = \mu \rho_\alpha + (1 - \mu) \rho_\beta$.
Then by the triangle inequality
$||\Pi_S(\rho_\omega)|| \le
||\mu \rho_\alpha|| + ||(1 - \mu) \rho_\beta|| = \mu ||\rho_\alpha || + (1 - \mu)
||\rho_\beta||$.
Since neither $||\rho_\alpha ||$ nor $||\rho_\beta||$ is greater than
$||\Pi_S(\rho_\omega)||$ (both being reduced density operators of
elements of $\hat{C}$, by the maximality of $\omega$), we must
have $||\rho_\alpha|| = ||\rho_\beta|| = ||\Pi_S(\rho_\omega)||$,
so there is equality in the triangle
inequality. That requires $\mu \rho_\alpha$ to be proportional to $(1 - \mu) \rho_\beta$;
together with the equality of norms this gives $\rho_\alpha =
\rho_\beta = \Pi_S(\rho_\omega)$, contradicting $\alpha \ne \beta$.
This shows that $\Pi_S(\rho_\omega)$ is extremal in the set of reduced density operators
corresponding to states in
$\hat{D}$. In other words, $\omega$ is generalized unentangled. $\Box$
It follows from the representation theory of associative
algebras that the UPIP holds for the irreducible associative
algebraic setting. The other case in which we know it holds is the
irreducible semisimple Lie algebraic setting. In this setting, the
observables consist of the Hermitian part (itself a real Lie algebra)
of a complex Lie algebra represented faithfully and irreducibly by
matrices acting on a finite-dimensional complex Hilbert space, and
including the identity matrix $I$. Such Hermitian parts of
irreducible matrix Lie algebras are precisely the real semisimple
algebras possibly extended by the identity. The identity is
relatively unimportant since all normalized states have the same value
on it: the normalization condition is the affine plane $\omega(I)=1$,
so the convex structure of the state space is entirely determined by
the expectation values of the traceless operators. We introduce a bit
more notation in order to state a result, proved in~\cite{BKOV2002a},
that includes this and other important facts about the irreducible
Lie-algebraic case.
\iffalse
We begin by reviewing the needed Lie representation
theory~\cite{Humphreys72a}. A \emph{Cartan subalgebra} (CSA)
$\lie{c}$ of a semisimple Lie algebra $\lie{h}$ is a maximal
commutative subalgebra. A vector space carrying a representation of
$\lie{h}$ decomposes into orthogonal joint eigenspaces $V_\lambda$ of
the operators in $\lie{c}$. That is, each $V_\lambda$ consists of the
set of states $\ket{\psi}$ such that for $x\in\lie{c}$,
$x\ket{\psi}=\lambda(x)\ket{\psi}$. The eigenvalue $\lambda$ is
therefore a linear functional on $\lie{c}$, called the \emph{weight}
of $V_\lambda$. As an example, consider a spin-$J$ irreducible
representation of $\mathfrak{su}(2)$. Any spin component $J_\alpha$, for any
direction $\alpha$ in ${\mathbb{R}}^3$, spans a (one-dimensional) CSA
$\lie{c}_\alpha$. There are $2J+1$ weight spaces,
each spanned by a state $|{M}\rangle$
(for $M \in \{J, J-1, ..., -(J-1), -J\}$) having spin component $M$ in
direction $\alpha$. Any two CSAs are conjugate under
elements of the Lie group, manifested in the spin example by the fact
that $J_\alpha$ transforms into any desired spin component via
conjugation by a rotation in SU(2).
The subspace of operators in $\lie{h}$ orthogonal in the trace inner
product to $\lie{c}$ can be organized into orthogonal ``raising and
lowering'' operators, which connect different weight spaces. In the
example, choosing $J_z$ as the basis of our CSA, these are $J_{\pm} :=
(J_x \pm iJ_y)/\sqrt{2}$. For a fixed CSA and irreducible
representation, the weights generate a convex polytope; a lowest (or
highest) weight is an extremal point of such a polytope, and the
one-dimensional weight-spaces having those weights are known as
\emph{lowest-weight states} (in the spin example, this polytope is the
interval $[J, -J]$). The set of lowest-weight states for all CSAs is
the orbit of any one such state under the Lie group generated by
$\lie{h}$. These are the group-theoretic {\em generalized coherent
states} (GCSs)~\cite{Zhang90a}. Notably, the GCSs attain \emph{minimum
invariant uncertainty}~\cite{Delbourgo77b}.
\fi
A \emph{real} Lie algebra of
Hermitian operators may be thought of as a distinguished
family of Hamiltonians, which generate (via $h \mapsto e^{ih}$) a Lie
group of unitary operators, describing a distinguished class of
reversible quantum dynamics.
More generally, we might want
Lie-algebraically distinguished completely positive (CP) maps, $\rho
\mapsto \sum_i A_i \rho A_i^\dagger$ describing open-system quantum dynamics. We call
the operators $A_i$ the ``Hellwig-Kraus'' or ``HK'' operators, since
they appear to have been introduced in \cite{Hellwig69a,Hellwig70a}
(see also \cite{Kraus83a, Choi75a}). The HK operators for a given CP
map are not unique, but this does not lead to nonuniqueness of any of
the objects we define in terms of them below. A natural Lie-algebraic
class of CP-maps has HK operators $A_i$ in the
topological closure $\overline{e^{\mathfrak{h}_c\oplus \one}}$ of the Lie group
generated by the \emph{complex} Lie algebra
$\mathfrak{h}_c\oplus\one$~\footnote{$\mathfrak{h}_c$ is constructed by taking the
complex linear span of a basis for $\mathfrak{h}$. $\mathfrak{h}_c \oplus \one$
guarantees inclusion of the identity operator $\one$.}. Having HK operators in a
group ensures closure under composition. Using $\mathfrak{h}_c \oplus \one$
allows non-unitary HK operators. Topological closure introduces
singular operators such as projectors.
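As a concrete illustration (a minimal numerical sketch of our own, not drawn from \cite{BKOV2002a}), the following applies a CP map in HK form, $\rho \mapsto \sum_i A_i \rho A_i^\dagger$, and checks that the HK operators of a composition of two such maps are the pairwise products of their HK operators. This is why requiring HK operators to lie in a (semi)group yields a class of CP maps closed under composition.

```python
import numpy as np

def cp_map(hk_ops, rho):
    """Apply the CP map rho -> sum_i A_i rho A_i^dagger."""
    return sum(A @ rho @ A.conj().T for A in hk_ops)

# Two CP maps on a qubit: basis dephasing and a unitary rotation.
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

dephase = [P0, P1]   # trace preserving: P0^dag P0 + P1^dag P1 = 1
rotate = [U]         # unitary, a single HK operator

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)

# Composing the two maps...
composed = cp_map(rotate, cp_map(dephase, rho))
# ...gives the CP map whose HK operators are the products U P_i:
product_ops = [U @ P0, U @ P1]
assert np.allclose(composed, cp_map(product_ops, rho))
assert np.isclose(np.trace(composed).real, 1.0)  # trace preserved
```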
\iffalse
The following theorem (Theorem
14 in \cite{BKOV2002a}) characterizing GUE states
shows the power of the Lie-algebraic setting.
\fi
Define an $\mathfrak{h}$-state to be a linear functional on a {\em complex}
matrix Lie algebra $\mathfrak{h}$ belonging to the convex set of such states induced
by normalized quantum states on the full representation space.
Complex-linearity ensures that the convex structure of such a state
space is the same as that of the states induced by taking as the
distinguished observables only the Hermitian elements (a real Lie
algebra we denote $\mathrm{Re}({\lie{h}})$), which is the definition we used
above for the Lie-algebraic setting.
\newcounter{stcharcnt}
\begin{theorem}
\label{thm:stchar}
Let $\mathfrak{h}$ be a complex irreducible matrix Lie algebra,
with $\trless{\lie{h}}$ its traceless (semisimple) part
and $\mathrm{Re}(\lie{h})$ its Hermitian part.
The following are equivalent for a density matrix $\rho$
inducing the $\lie{h}$-state $\lambda$:
\begin{mlist}
\refstepcounter{stcharcnt}\label{st1}
\item[\emph{(\textbf{\thestcharcnt})}] $\lambda$ is a pure
$\lie{h}$-state, that is, it is extremal in the convex set of
normalized linear functionals on $\lie{h}$.
\refstepcounter{stcharcnt}\label{st2}
\item[\emph{(\textbf{\thestcharcnt})}] $\rho=\ketbra{\psi}{\psi}$ with
$\ket{\psi}$ the unique ground state of
some $H$ in $\mathrm{Re}(\lie{h})$.
\refstepcounter{stcharcnt}\label{st3}
\item[\emph{(\textbf{\thestcharcnt})}] $\rho=\ketbra{\psi}{\psi}$ with
$\ket{\psi}$ a minimum-weight vector (for some simple root system
of some Cartan subalgebra) of $\trless{\lie{h}}$.
\refstepcounter{stcharcnt}\label{st5}
\item[\emph{(\textbf{\thestcharcnt})}] $\lambda$ has maximum purity
relative to the subspace $\mathrm{Re}(\lie{h})$ of observables.
\refstepcounter{stcharcnt}\label{st6}
\item[\emph{(\textbf{\thestcharcnt})}] $\rho$ is a one-dimensional
projector in the topological closure $\overline{e^{\lie{h}}}$.
\end{mlist}
\end{theorem}
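Conditions of the theorem can be illustrated numerically (a sketch of our own for $\lie{h}=\mathfrak{su}(2)$ in its spin-1 representation): the minimum-weight state $\ket{M=-1}$, the unique ground state of $J_z$, saturates $|\langle \vec{J}\rangle| = J$, while a pure superposition of extremal weight states induces the maximally mixed $\mathfrak{su}(2)$-state even though it is pure on the full Hilbert space.

```python
import numpy as np

s = 1 / np.sqrt(2)
# Spin-1 angular momentum matrices (hbar = 1).
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def j_vector(psi):
    """Expectations of (Jx, Jy, Jz): the su(2)-state induced by psi."""
    return np.array([(psi.conj() @ (J @ psi)).real for J in (Jx, Jy, Jz)])

# Minimum-weight state |M = -1>: unique ground state of H = Jz.
coherent = np.array([0, 0, 1], dtype=complex)
# A pure but non-coherent superposition (|+1> + |-1>)/sqrt(2).
cat = np.array([1, 0, 1], dtype=complex) / np.sqrt(2)

len_coherent = np.linalg.norm(j_vector(coherent))
len_cat = np.linalg.norm(j_vector(cat))
assert np.isclose(len_coherent, 1.0)  # saturates |<J>| = J = 1
assert np.isclose(len_cat, 0.0)       # looks maximally mixed to su(2)
```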
\begin{problem} \label{prob: when GE implies mrp}
Does the implication from GUE to maximal relative
purity hold in other natural situations?
\end{problem}
As already noted, it is fairly easy to show by example that, in the
Lie-algebraic setting but without the assumption of irreducibility,
the UPIP need {\em not} hold. A more general question
suggests itself:
\begin{problem} \label{prob: irreducible implies UPIP}
In the $\dagger$-closed operator subspace setting, does the UPIP
hold whenever the distinguished operators act irreducibly?
\end{problem}
\section{ANALOGUES OF LOCAL MAPS}
Our work on GE raises many questions arising from the closely
related problems of finding natural generalizations or
analogues of the notions of LOCC ({\em Local Operations and Classical
Communication}) and of monotone entanglement measures (or {\em
entanglement monotones} \citep{Vidal2000b}). The relation comes from
requiring that a reasonable entanglement measure be nonincreasing
under LOCC operations; if one found a natural generalization of this
notion of LOCC to our more general settings, it would also be natural
to look for measures of GE monotone under this generalization.
Here, we briefly present some ideas from \cite{BKOV2002a} (with
a few minor extensions) on how to generalize LOCC;
that paper contains more on this topic and on
GE measures. Some of the most fundamental questions remain open,
so we will concentrate on sketching the situation in hopes of
stimulating further work.
The semigroup of LOCC maps, introduced in~\cite{Bennett96c}, and the
preordering it induces on states according to whether or not a given
state can be transformed to another by an LOCC operation are at the
core of entanglement theory. LOCC maps are precisely those
implementable using CP quantum maps on the local subsystems
together with classical communication, e.g. of ``measurement
results,'' between subsystems.
We now formalize this notion, beginning with the notion
of {\em explicitly decomposed} map which, however, can apply to the general
case, not just the quantum one.
An {\em explicitly decomposed} trace-preserving map
$\{M_k\}_{k \in K}$ is a set of maps $M_k$ that sum to a
trace-preserving map $M$. The {\em conditional composition} of an
explicitly decomposed map $\{M_k\}_{k \in K}$ with a family of explicitly
decomposed maps $N_k := \{N_{nk}\}_{n \in I_k}$, one for each $k \in K$, is the
explicitly decomposed map $\{ N_{nk} \circ M_k \}_{k \in K,\, n \in I_k}$. We can
view each $M_k$ as being associated with measurement outcome $k$,
obtained (given a state $\omega$) with probability
${\rm tr}\; M_k (\omega)$, and
leading to the state $M_k (\omega)$ when outcome $k$ is obtained. The
conditional composition of $\{M_k\}_{k \in K}$ and $\{N_{nk}\}_{n \in
I_k}$ can be implemented by first applying $M$ and then, given
measurement outcome $k$, applying $N_k$. There are analogous definitions
of explicitly decomposed maps and conditional composition
without the trace-preservation
condition.
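The definitions above can be sketched numerically (a toy qubit example of our own construction): a computational-basis measurement as an explicitly decomposed trace-preserving map, conditionally composed with maps that act on the conditional post-measurement states.

```python
import numpy as np

def apply(ops, rho):
    """The (generally trace-decreasing) map rho -> sum_i A_i rho A_i^dag."""
    return sum(A @ rho @ A.conj().T for A in ops)

# Explicitly decomposed trace-preserving map {M_0, M_1}:
# a computational-basis measurement on a qubit.
P = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
M = {0: [P[0]], 1: [P[1]]}

# One explicitly decomposed map N_k per outcome k: do nothing on
# outcome 0, flip the qubit (X) on outcome 1.
X = np.array([[0, 1], [1, 0]], dtype=complex)
N = {0: {0: [np.eye(2, dtype=complex)]}, 1: {0: [X]}}

rho = np.array([[0.25, 0.1], [0.1, 0.75]], dtype=complex)

# Conditional composition {N_nk o M_k}: outcome k occurs with
# probability tr M_k(rho); then the conditional map for k is applied.
branches = {(k, n): apply(N[k][n], apply(M[k], rho))
            for k in M for n in N[k]}
total = sum(branches.values())

probs = {k: np.trace(apply(M[k], rho)).real for k in M}
assert np.isclose(sum(probs.values()), 1.0)   # M is trace preserving
assert np.isclose(np.trace(total).real, 1.0)  # so is the composition
# "Measure, then flip on outcome 1" sends every state to |0><0|:
assert np.isclose(total[0, 0].real, 1.0)
```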
In the usual quantum case, closing the set of one-party (aka {\em unilocal})
maps (for all parties) under conditional composition gives the LOCC
maps.
The semigroup generated by composition of unilocal
explicitly decomposed maps having a single HK operator in their
decomposition is often known as SLOCC (for {\em stochastic} LOCC).
SLOCC involves local quantum
measurements and classical communication conditional on a {\em single}
sequence of local measurement results, when each local measurement is
performed in a manner that preserves all pure states (i.e., with a
single HK operator for each outcome). Its mathematical structure is
relatively simple, as the part generated by
nonsingular HK operators is the trace-nonincreasing part of a
representation of a product of various GL$(d_i)$, with the factors
acting on local systems of dimension $d_i$~\footnote{We are not
certain if the full SLOCC semigroup is the trace-nonincreasing part of
the topological closure of this representation, but it seems a
reasonable possibility.}.
When the distinguished observables form a semisimple Lie algebra
$\mathfrak{h}$, a natural multipartite structure can be exploited to generalize
LOCC, as well as the larger, more tractable class of
{\em separable} maps; see \cite{BKOSV2004a,BKOV2002a}.
\iffalse
$\mathfrak{h}$ can be uniquely expressed as a direct sum of simple Lie
algebras, $\mathfrak{h} = \oplus_i \mathfrak{h}_i$. A Hilbert space irreducibly
representing $\mathfrak{h}$ factorizes as ${\cal H} = \otimes_i {\cal H}_i$, with $\mathfrak{h}_i$
acting non-trivially on ${\cal H}_i$ only. This resembles ordinary
entanglement, except that the ``local'' systems ${\cal H}_i$ may not be
\emph{physically} local, and actions on them are restricted to involve
operators in the topological closure of a ``local'' Lie group
representation which need not be GL(\mbox{dim}$({\cal H}_i))$ as in
standard entanglement. For each simple algebra $\mathfrak{h}_i$, a natural
restriction is to CP maps with HK operators in $\overline{e^{(\mathfrak{h}_i)_c
\oplus \one}}$. GLOCC, generalized LOCC, is the closure under
conditional composition of the set of operations each of which is
representable with HK operators in the topological closure of
$e^{(\mathfrak{h}_i)_c \oplus \one}$ for some $i$.
\fi
\iffalse
In conventional entanglement, there is also interest in the properly
larger \cite{} set of \emph{separable} maps
(~\cite{Vidal2000b,Bennett2001a,Duer2001a}), which are those
representable with HK operators that are tensor products. This is a
mathematically simpler set than the LOCC operations, since it is just
the trace-nonincreasing part of the cone generated by SLOCC maps. A
Lie-algebraic generalization of separable maps is obtained by
considering the semigroup of maps whose HK operators are in
$\overline{e^{\mathfrak{h}_c\oplus\one}}$.
\fi
\iffalse
A potential generalization of LOCC
involves using spectra of operators to classify them as analogues of
{\it single-party} operators. Yet another begins from maps that
induce well-defined maps on the set of reduced states, as single-party
maps do in the standard setting. These alternative proposals are
discussed further in~\cite{BKOV2002a}. Here we review, with slight
variations, another proposal made in that paper for how the notion of
LOCC might be extended to the general convex setting.
\fi
In generalizing
LOCC to the convex setting, two aspects of LOCC must be considered: first, that it
constrains maps to be {\em completely positive}; second,
that it also constrains them to have certain {\em locality} properties.
A \emph{positive} map of $D$ is a linear map ${A}:V\rightarrow V$ such
that ${A}(D)\subseteq D$. The map ${A}$ is \emph{trace preserving} if
$\tr(x)=\tr({A}(x))$ for all $x$. This definition corresponds to
positive, but not necessarily CP, maps in quantum
settings. Without additional algebraic structure,
it is not possible to define a unique ``tensor product'' of
cones, as would be required to distinguish between positive and
CP maps~\cite{Namioka69a,Wittstock74a} (cited
in~\cite{Wilce92b}).
In a continuum of possible products of cones, there are two natural
possibilities that are in a sense the two extremes. The first is the
convex closure of the set of tensor-products of the cones' vectors,
which for the case of the product of two quantum systems' unnormalized
state spaces gives the separable states of the bipartite system. The
second is to use the dual cone of the cone obtained by applying the
first construction to the duals of the cones; in the quantum case, it
gives the set of (unnormalized) states that are positive on product
effects (this is isomorphic to the cone of positive but not CP
operators between the state spaces, by the ``Choi-Jamiolkowski''
isomorphism between $V \otimes V$ and ${\cal L}(V)$). It is not clear how
to pick out a natural case between these extremes in general without
adding algebraic structure, except perhaps if the cones are self-dual
with respect to non-degenerate inner products on the real vector
spaces. In that case, one could pick a self-dual cone between the two
constructions (which would give the usual state space of a bipartite
system in the quantum case).
The family of positive maps of $C$ is closed under positive
combinations and hence forms a cone. In the Lie-algebraic, or even the
bipartite setting, the extreme points of this cone are not easy to
characterize (see, for example,~\cite{Wilce92b},
p. 1927,~\cite{Gurvits2002a}). We seek generalizations of
the notion of complete positivity to the cones setting.
We might explicitly introduce a cone representing the
``tensor product'' extension of $D$ and require extendibility or
``liftability'' of the map to $D$. Another, perhaps more uniquely
determined, approach might begin from the observation that the extreme
points of the cone of completely positive maps are extremality preserving:
for all extremal (belonging to an extreme ray) $x\in D$,
${A}(x)$ is extremal. However, there are extremality-preserving
positive maps that are not CP. An example is transposition for
density operators of qubits. In \cite{BKOV2002a} we explore how one
might rule these out. There is also the question of why extremality
preservation would be a natural physical or operational, as opposed
to mathematical, requirement.
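The qubit transposition example can be verified numerically (our own sketch): transposition preserves positivity and purity of single-qubit states, yet $\mathrm{id}\otimes T$ applied to a Bell state produces a negative eigenvalue, so $T$ is positive and extremality preserving but not CP.

```python
import numpy as np

def partial_transpose(rho4):
    """Transpose the second qubit of a two-qubit density matrix."""
    r = rho4.reshape(2, 2, 2, 2)   # indices i, j, k, l for |ij><kl|
    r = r.transpose(0, 3, 2, 1)    # swap j <-> l: transpose qubit 2
    return r.reshape(4, 4)

# On a single qubit, transposition preserves positivity and maps
# pure states to pure states (it is extremality preserving)...
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)
rho1 = np.outer(psi, psi.conj())
assert np.all(np.linalg.eigvalsh(rho1.T) >= -1e-12)

# ...but it is not completely positive: id (x) T applied to a Bell
# state has eigenvalue -1/2.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho2 = np.outer(bell, bell.conj())
eigs = np.linalg.eigvalsh(partial_transpose(rho2))
assert eigs.min() < -0.4
```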
\iffalse
one runs the risk of ending
up with, say, locally {\em positive} but not necessarily completely
positive maps, and classical communication, so one must hope that in
such a case one will see how to exclude non-CP maps by additional
natural requirements.
The next step is to define a family of maps that generalizes the
separable maps. Call a positive map ${A}$ of $D$ $C$-separable if it
is a mixture of extremality-preserving positive maps ${A}_k$ that are
also extremality-preserving and positive for $D_{\textrm{\tiny sep}}$.
In the bipartite setting, this definition includes maps such as the
swap, which exchanges the two subsystems and is not separable, in
addition to some non-completely positive operations. Note that if the
Lie-algebraic definition of separability is used, operations like the
swap are excluded because they are not in the Lie group generated by
$\lie{h}_l$: The swap induces an exterior automorphism of
$\lie{h}_l$. From the point of view of entanglement, including the
swap can make sense because it obviously does not increase
entanglement.
\fi
To try to generalize the notion of locality, we introduce the idea of
{\em liftability}. We say that a positive map ${A}$ on $D$ can be
lifted to $C$ if ${A}$ preserves the nullspace of $\pi$, or,
equivalently, if there exists a positive map ${A}'$ on $C$ such that
$\pi({A}(x))={A}'(\pi(x))$. In this case, we say that ${A}'$ is the
lifting of ${A}$ to $C$.
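In the standard bipartite quantum case, taking $\pi$ to be the partial trace over the second subsystem (a concrete choice made for this sketch of ours), liftability of a unilocal map is just the fact that the partial trace commutes with maps acting on the other factor:

```python
import numpy as np

def ptrace2(rho, d1=2, d2=2):
    """pi: partial trace over the second system."""
    return np.trace(rho.reshape(d1, d2, d1, d2), axis1=1, axis2=3)

def unilocal(A_ops, rho):
    """The map with HK operators A_i (x) 1, acting on system 1 only."""
    return sum(np.kron(A, np.eye(2)) @ rho @ np.kron(A, np.eye(2)).conj().T
               for A in A_ops)

def local(A_ops, rho1):
    return sum(A @ rho1 @ A.conj().T for A in A_ops)

# A CP map on system 1 (amplitude damping, gamma = 0.36 chosen arbitrarily).
g = 0.36
A_ops = [np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex),
         np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)]

# An entangled two-qubit state.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(bell, bell.conj())

# Liftability: pi(A(x)) = A'(pi(x)) -- the unilocal map has a
# well-defined action on the reduced states of system 1.
lhs = ptrace2(unilocal(A_ops, rho))
rhs = local(A_ops, ptrace2(rho))
assert np.allclose(lhs, rhs)
```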
In standard multipartite quantum entanglement, {\em
unilocal} maps (ones that act nontrivially only on one factor) are
liftable to the cone of local observables; they have a
well-defined action there. But so are tensor product maps ${\cal A}
\otimes {\cal B} \otimes \cdots \otimes {\cal Z}$, and in the case when some of the
subsystems are of the same dimension, so are maps performing
permutations among the isodimensional factors. To get LOCC we would
need to rule out the latter two cases, leaving the unilocal maps; then
one can generate a semigroup from the unilocal maps by conditional
composition of explicitly decomposed trace-preserving maps. On the
other hand, in the standard quantum case the semigroup of maps generated by
conditional composition of maps liftable to the distinguished subcone
might enjoy many of the same properties as the usual LOCC maps, so it may
be worth studying in the general setting.
\begin{problem} \label{abcd}
Is the semigroup generated by {\em completely positive} unilocal quantum
maps and pairwise exchanges of isodimensional systems the full
semigroup generated by conditional composition of
liftable-to-local-observables explicitly decomposed maps?
\end{problem}
Note that using liftability to define locality may be of some help in
ruling out local non-completely positive maps, since all maps must be
positive on the overall cone. It is especially helpful if the answer to
Problem \ref{abcd} is ``yes.''
When no subsystem has dimension greater than the square root of
the overall dimension, liftability is fully effective
in imposing complete positivity, because for any local map $M$,
complete positivity of $M$ is equivalent to positivity of the unilocal
map $\text{id} \otimes M$ where the identity map $\text{id}$ acts on a Hilbert space at least as
large as the one $M$ acts on.
In the standard multipartite quantum
case, the high degeneracy of unilocal operators can also be used to
help distinguish them in a way not so directly dependent on explicit
introduction of cones to represent individual systems---and similarly
one can use spectral information about HK operators to characterize
ones that act on the {\em same} single system, thereby characterizing
LOCC in terms of conditional composition of explicitly decomposed maps
whose HK operators together satisfy certain spectral conditions
\citep{BKOV2002a}. However, it is not
clear how to abstract this to general cones. Perhaps
something can be done with the facial structure of the
cone $D$, or of the cone of positive maps on $D$ (or of other
subcones of maps chosen as abstractions capturing aspects of complete
positivity). A more in-depth investigation of dynamics generalizing
LOCC thus remains as a challenging and many-faceted area for research,
as does the investigation of measures of GE nonincreasing under such
maps.
{\bf Acknowledgements} We thank Manny Knill for valuable discussions
and for collaboration on earlier work summarized and built upon in the
present paper. Work at Los Alamos was supported by the US DOE through
Los Alamos National Laboratory's Laboratory Directed Research and
Development (LDRD) program.
\section{Introduction}
\label{sec:intro}
Inflow of cold gas is needed in the Milky Way (MW) to explain the observed stellar chemical abundances \citep{vandenbergh62,schlesinger12}, the chemical evolution of the ISM \citep{larson72,edmunds90,schoenrich17}, and the current star formation rate of $\sim 1\,M_\odot$ yr$^{-1}$ \citep{robitaille10,licquia15} as well as the star formation history. In general, this gas is not observed directly, either in the MW or in other star-forming galaxies, presumably because it is largely diffuse \citep[but see][]{zheng17,koch18}. However, infalling High-Velocity Clouds (HVCs) in the MW do offer a clear indication of gas accretion. These are structures of cool atomic hydrogen with velocities that differ significantly from the local standard of rest (LSR, typically defined as $|v_{\mathrm{LSR}}| > 90~\mathrm{km\,s}^{-1}$), first observed by \cite{muller63}. They are classically associated with H\textsc{i}, i.e. neutral hydrogen, although partially and fully ionised HVCs are also observed \citep{sembach03,lehner12,richter15}. Although estimates of the total mass in infalling HVCs typically fall short of what would be needed to fully replenish the gas lost to star formation in the MW disc, they are still estimated to represent a significant fraction of gas accretion \citep{putman12,fox19}.
HVCs are generally found to be within the MW's hot diffuse gas corona based on distance constraints of typically $1 \lesssim d \lesssim 50$ kpc \citep[e.g.][]{wakker01,putman03b,thom08,lehner10,peek16}, less than the expected extent of the corona \citep{faerman17,bregman18}, and on signs of interaction with an external medium. These are seen in the head-tail morphology of many HVCs \citep{putman11}. Recently, indications of hydrodynamical instabilities associated with cloud-corona interaction have also been found in detailed morphological analysis of Complex A \citep{barger20}. The origins of HVCs are still not clear, but in any case they appear to have multiple formation mechanisms. The metallicities generally observed in HVCs are too low for them to originate in the disc. The exception to this is the Smith Cloud \citep{smith63,fox16}, which may represent either outflowing entrained gas \citep{marasco17}, a small dark matter (DM) halo that has collided with the disc and accreted gas from the ISM \citep{nichols09,nichols14}, or a `streamer' in the form of a gaseous structure dislodged from the Galactic gas disc by a transiting, gas-bearing DM mini-halo \citep{tepper-garcia18a,galyardt16}.
Some HVCs, such as the ones in the Magellanic Stream, appear to have been stripped from satellite galaxies. However, the infall of the Sagittarius Dwarf galaxy does not seem to have led to the formation of any currently observed HVCs \citep{tepper-garcia18b}. A plausible origin for other HVCs is that they form through condensation, i.e. cooling of coronal gas.
This can be triggered by outflows, as proposed by \cite{fraternali15} for the origin of Complex C at $z=8.4$ kpc above the disc, or by density perturbations in the corona through thermal instability, typically around $z\sim 10$ kpc \citep{maller04,joung12,sormani19}.
Buoyant oscillations can disrupt this process \citep{binney09}, however the Galactic halo magnetic field has been shown to stabilise perturbations against this \citep{ji18}. The halo magnetic field also affects the interaction between the corona and HVCs as they are traveling through it. This magnetic field is difficult to constrain but has a significant ordered component that appears to mainly curl around the $z$-axis (the axis pointing perpendicularly away from the disc) with strength decreasing with $z$ \citep{sunreich10,jansson12a,unger19}.
The general scenario of a cloud in relative motion with a surrounding hot magnetised medium has been widely studied in numerical simulations in the literature \cite[e.g.][]{jones96,gregori99,gregori00,santillan99,dursi08,kwak09,mccourt15,banda-barragan16,gronnow17,gronnow18,banda-barragan18,cottle20,sparre20}. In most of these simulations \cite[the exceptions being ][]{santillan99,kwak09} there is no external gravitational potential and the cloud is instead given an initial velocity, either directly or, more commonly, by injecting a constant velocity `wind' around it. We refer to those as `wind tunnel' simulations. They are usually analysed in the context of entrained gas in outflows, but can equivalently be interpreted as inflowing gas. However, while such simulations can provide useful insights for infalling gas, the velocity evolution is quite different from that of an HVC condensed out of the corona. With no gravitational potential present, the cloud decelerates due to drag from the outset, eventually coming to rest with respect to the surrounding gas (in the wind frame this corresponds to the cloud \emph{accelerating} until it becomes comoving with the wind).
The initial high relative velocity leads to a shock propagating through the cloud, which is appropriate for entrained gas but not generally for infalling clouds \citep{bland-hawthorn07,tepper-garcia15}. Typically this shock, in combination with Kelvin-Helmholtz (KH) and Rayleigh-Taylor (RT) instabilities, destroys the cloud before it becomes comoving with the surrounding medium. The KH and RT instabilities are caused by, respectively, the velocity shear between the cloud and the surrounding gas, and parcels of gas at different densities being accelerated into each other. Cloud destruction in outflows is a major topic in contemporary astrophysics because entrained clouds are generally shredded before they can accelerate and travel sufficiently to agree with observational constraints \citep{zhang17}.
Overall, the cloud's lifetime roughly scales with the density contrast between the cloud and the surrounding medium $\chi$, the relative velocity $v$, and the size of the cloud $L$ as $\sim \chi^{1/2} L v^{-1}$ \citep{jones96}, while the dependence on the magnetic field is more complicated. The first three-dimensional simulation of this kind, \cite{gregori99}, found that the magnetic field in fact \emph{hastened} the destruction by enhancing RT instability along the third dimension, i.e. the direction perpendicular to both the wind and the magnetic field. In contrast, later higher resolution simulations run with a variety of codes and numerical schemes
\citep[e.g.][]{mccourt15,banda-barragan16,gronnow17,banda-barragan18,gronnow18,cottle20,gronke20a,li20} have found that the magnetic field generally does not hasten the destruction. Instead, it tends to extend the overall cloud lifetime through its partial suppression of KH instability, although the effect is limited and often by itself insufficient to solve the problem of shorter-than-expected survival.
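For orientation, the cloud-crushing time $t_\mathrm{cc} \simeq \chi^{1/2} L / v$ \citep{jones96} can be evaluated for parameters typical of the setup in this paper (a sketch of ours; the relative velocity of $100~\mathrm{km\,s}^{-1}$ is an assumed representative HVC value):

```python
import math

# Illustrative numbers matching this paper's setup (velocity assumed):
chi = 412      # cloud/corona density contrast
L_kpc = 0.2    # cloud diameter, 2 * r_c with r_c = 0.1 kpc
v_kms = 100.0  # relative velocity once the cloud is an HVC

kpc_cm = 3.086e21
km_cm = 1.0e5
s_per_myr = 3.156e13

# Cloud-crushing time t_cc ~ chi^{1/2} L / v.
t_cc_s = math.sqrt(chi) * (L_kpc * kpc_cm) / (v_kms * km_cm)
t_cc_myr = t_cc_s / s_per_myr  # roughly 40 Myr for these parameters
```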
However, in the context of accretion of infalling gas, the survival of the remnant of the original cloud is not the most relevant diagnostic. Rather, the total mass of cold gas associated with the cloud, whether in the form of a single cloud or a complex containing many cloudlets, is the main quantity of interest. In this case, cloud stripping can even lead to an overall increase in cold gas mass through condensation as shown in simulations that include radiative cooling \citep[e.g.][]{marinacci10,armillotta16,gronke20a}. The mixing of stripped cold gas with the hot coronal gas can, in some realistic parts of the parameter space of cloud and corona density, velocity, temperature, and metallicity, lead to efficient condensation that increases the total cold gas mass, rather than evaporation of the stripped gas. This process is typically associated with intermediate-velocity,
relatively metal-enriched galactic fountain clouds travelling through the inner part of the corona. In this context, \cite{gronnow18} showed that the halo magnetic field strongly restricts the efficiency of condensation while still allowing growth in cold gas mass. However, \cite{gritton17} claimed that even essentially pristine HVCs can lead to efficient condensation based on wind tunnel simulations without magnetic fields. More generally, \cite{gronke20a} and \cite{li20} have recently investigated the parameter space that leads to growth and destruction of cold gas in magnetized wind tunnel simulations and find that growth occurs for a large part of the parameter space relevant to HVCs.
In this work, we investigate the evolution of cold gas in an HVC falling through the MW's magnetised hot corona using magnetohydrodynamic (MHD) simulations. We simulate an initially stationary cold cloud surrounded by a hydrostatic hot corona with density and magnetic field strength increasing towards the disc and a uniform gravitational acceleration. Due to the external potential the cloud accelerates and eventually reaches HVC velocities. This is representative of an HVC formed through thermal instability in the corona.
This setup is reminiscent of the one used by \cite{heitsch09} {\it but with the important addition of the halo magnetic field.} Clouds falling through a hydrostatic medium with a magnetic field have been simulated before by \cite{santillan99} and \cite{kwak09}. However, both of those studies focused on clouds closer to the disc and, crucially, neglected radiative cooling and thus were unable to follow the cold gas evolution.
This paper is organised as follows: In Section \ref{sect:ICs} we describe our numerical setup and methods. In Section \ref{sec:results} we show the results of these simulations which we discuss in the context of cold gas evolution in Section \ref{sec:discussion}. Finally, we conclude in Section \ref{sec:summary}.
\section{Numerical setup}
\label{sect:ICs}
\subsection{Initial conditions}
We simulate a cold ($T = 10^4$ K) cloud initially at rest with respect to the hot corona and follow its evolution as it accelerates and becomes an HVC\footnote{We assume that the velocity w.r.t. the corona is representative of the LSR velocity, such that the cloud can be referred to as an HVC in our simulations once it reaches $|v_z| > 90~\mathrm{km\,s}^{-1}$.} due to the gravitational potential of the MW disc. Our simulations are in a three-dimensional Cartesian coordinate system $(x,y,z)$ where the $z$ axis is perpendicular to the disc. All gas is assumed to be a monatomic ideal gas and so has an adiabatic index of $\gamma=5/3$. We model the hot coronal gas along the path of the cloud as being in (magneto)hydrostatic equilibrium. When there is no magnetic field we assume an isothermal confining medium with temperature $T_\mathrm{iso}=2 \times 10^6$ K typical of the Galactic corona \citep{henley15,miller15}. We assume a constant gravitational acceleration of the Milky Way disc of $g=-10^{-8}$ cm s$^{-2}$ in the $z$ (height above the disc) direction. This is appropriate for our range of 2 kpc $\leq z \leq$ 10 kpc in the solar neighbourhood according to the models of
\cite{kalberla08,joung12}. We have additionally confirmed that this value is consistent with the `MWPotential2014' model of the Galactic potential of \cite{bovy15} for virial masses in the range $1.0-1.4 \times 10^{12} M_\odot$ as indicated by observational constraints \citep[e.g.][]{posti19,cautun20}. For the solar cylindrical radius assumed in this model of $R=8$ kpc the gravitational acceleration is always within 20 per cent of our assumed value for $2 \leq z \leq 10$ kpc. Consequently, the density is only a function of $z$ and the resulting particle density profile becomes
\begin{equation}
\label{eqn:densprof}
n_h(z) = n_{h,0}\exp{\left(\frac{-\mu m_\mathrm{H} |g| z}{k_B T_\mathrm{iso}}\right)},
\end{equation}
where $k_B$ is the Boltzmann constant, $m_\mathrm{H}$ is the mass of the hydrogen atom, and $\mu$ is the mean molecular weight; we write $|g|$ because $g$ is defined to be negative. We use the subscript `h' for `halo' to refer to what we otherwise call the corona, to avoid confusion with `c', which we use to refer to the cloud.
where $k_B$ is the Boltzmann constant and $\mu$ is the mean molecular weight. We use the subscript `h' for `halo' to refer to what we otherwise call the corona to avoid confusion with `c' which we use to refer to the cloud. We choose $n_{h,0}=0.01 \ifmmode{\>{\rm cm}^{-3}}\else{cm$^{-3}$}\fi$ to obtain densities that agree with the loose constraints on the coronal density at 2 kpc $\leq z \leq 10$ kpc in the literature \citep{stanimirovic02,bregman07,grcevich09,miller15}.
The magnetic field of the halo is poorly known but appears to have a significant, mostly axisymmetric, ordered component that falls off with $z$ and cylindrical radius $R$ \citep{sunreich10,jansson12a}. We base our magnetic field on the model of \cite{sunreich10} in the limit of negligible differences in $R$. That is, we assume that the field does not change significantly in magnitude or direction along the sub-kpc extent along $R$ of the cloud and so is only a function of $z$. We align the field such that it points in the $+x$ direction. This choice is inconsequential due to the $xy$-symmetry in the initial conditions. This field is thus
\begin{align}
B(z) = & \frac{B_0}{1+\left[(\vert z \vert-z_a) / z_b \right]^2},
\end{align}
with $z_a=1.5$ kpc and $z_b=4$ kpc. We use a `weak' and a `strong' field setup with $B_{0, \mathrm{ weak}}= 0.3 \mu$G and $B_{0, \mathrm{ strong}}=5B_{0, \mathrm{ weak}} = 1.5 \mu$G, respectively. Because this field initially has no tension force due to all field lines being straight, it simply acts as another pressure term with magnetic pressure $P_\mathrm{mag}(z)=B(z)^2/8\pi$. It is, however, not a force-free field due to its gradient in $z$ and so our isothermal non-magnetic setup must be modified when the field is added. We choose to retain the density profile of Eq. \ref{eqn:densprof}, instead allowing the temperature to vary with $z$ to compensate for the added force in the $+z$ direction from the magnetic pressure gradient. Hence, the gas pressure profile required for equilibrium is
\begin{equation}
P(z) = n_h(z)k_B T_\mathrm{iso} - P_\mathrm{mag}(z)
\end{equation}
and the temperature becomes
\begin{equation}
T_h(z) = T_\mathrm{iso} - P_\mathrm{mag}(z)/(n_h(z)k_B).
\end{equation}
However, due to the gas pressure dominating, the magnetised corona is still nearly isothermal with $0.93 T_{\mathrm{iso}} < T_h(z) \leq T_{\mathrm{iso}}$ for all $z$.
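These profiles can be checked numerically (a sketch of ours, with the same assumed $\mu=0.6$ as above; the `strong' field normalisation is used since it gives the largest deviation from isothermality):

```python
import math

k_B = 1.380649e-16  # erg / K
m_H = 1.6726e-24    # g
kpc_cm = 3.086e21

# As in the paper; mu = 0.6 is our assumption for the hot gas.
n_h0, g_mag, T_iso, mu = 1.0e-2, 1.0e-8, 2.0e6, 0.6
B0 = 1.5e-6          # G, the 'strong' field normalisation
z_a, z_b = 1.5, 4.0  # kpc

def n_h(z_kpc):
    z = z_kpc * kpc_cm
    return n_h0 * math.exp(-mu * m_H * g_mag * z / (k_B * T_iso))

def B(z_kpc):
    """Halo field strength profile, Sun & Reich-style parametrisation."""
    return B0 / (1.0 + ((abs(z_kpc) - z_a) / z_b) ** 2)

def T_h(z_kpc):
    """Temperature profile compensating the magnetic pressure gradient."""
    P_mag = B(z_kpc) ** 2 / (8.0 * math.pi)
    return T_iso - P_mag / (n_h(z_kpc) * k_B)

# Even for the strong field the corona stays nearly isothermal.
ratios = [T_h(z) / T_iso for z in (2.0, 4.0, 6.0, 8.0, 10.0)]
assert all(0.93 < r <= 1.0 for r in ratios)
```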
We initialise the cloud as a spherical overdensity centred at a height of $z_0=10$ kpc above the disc in pressure equilibrium with the hot corona. The density profile as function of radius from the cloud centre $r$ is a smoothed top-hat
\begin{equation}
\label{eq:densprofile}
n(r,z)=n_h(z) + \frac{1}{2}(n_c - n_h(z))\left\{1 - \tanh{\left[ s \left(\frac{r}{r_c}-1\right)\right]}\right\} \, ,
\end{equation}
where $n$ is the total particle density, $n_c$ is the central cloud particle density, $r_c$ is the cloud radius, and $s=10$ sets the steepness of the profile. Hence, the cloud radius is defined as the radius where the density $n(r_c)$ is halfway between $n_c$ and $n_h$, which is $n(r_c) \approx n_c/2$ because $n_c \gg n_h$. The $z$ dependence is to a very good approximation only a dependence on the height of the cloud centre $z_0$ because the $\Delta z=2r_c$ range covered by the cloud is much smaller than the scale height of the corona. Hence, the density profile of the cloud is approximately spherically symmetric. The initial mass of the cloud is about $7.6\times 10^4$ $M_\odot$. We assume that the cloud and the corona both have a uniform metallicity of $0.2 Z_{\odot}$ in agreement with observations of HVCs and the MW corona \citep{shull09,miller15,hodges-kluck18}. We add a tracer quantity to the cloud that is passively advected with the flow. That is, the tracer is set to 1 within $r_c$ and 0 elsewhere. This allows us to separate cold gas that was originally cold cloud material from cold gas that has condensed from hot gas during the simulation. We always set the initial temperature of the cloud $T_c$ equal to the cooling floor at $10^4$ K to ensure its stability (see Section \ref{sec:nummethod}). Due to the pressure equilibrium, $n_c$ then follows from $n_h$ and $T_h$. We summarise the physical parameters that are the same in all simulations in Table \ref{tab:params}.
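The initialisation can be sketched as follows. The snippet implements Eq.~\ref{eq:densprofile} with the parameters of Table \ref{tab:params} and checks the half-density property at $r_c$; the cloud mean molecular weight $\mu_c \approx 1.27$ used for the mass estimate is our own assumption, inferred from the quoted contrasts ($\chi \approx 412$ versus $n_c/n_h \approx 200$), so the resulting mass is only indicative.

```python
import numpy as np

# Parameters from Table params; n_h is evaluated at the cloud height z0.
n_c, n_h, r_c, s = 0.682, 3.4e-3, 0.1, 10.0   # cm^-3, cm^-3, kpc, steepness
kpc, mp, Msun = 3.086e21, 1.6726e-24, 1.989e33
mu_c = 1.27   # assumed cloud mean molecular weight (mostly neutral gas)

def n(r):
    # Smoothed top-hat density profile, r in kpc (Eq. densprofile).
    return n_h + 0.5 * (n_c - n_h) * (1.0 - np.tanh(s * (r / r_c - 1.0)))

# Rough cloud mass from the excess density over the corona.
r = np.linspace(0.0, 5.0 * r_c, 4001)
M = 4.0 * np.pi * np.trapz((n(r) - n_h) * mu_c * mp * (r * kpc) ** 2, r * kpc)
M_sun = M / Msun   # of order 10^5, close to the quoted 7.6e4 Msun
```

The profile gives $n(0) \approx n_c$ and $n(r_c) = (n_c + n_h)/2$ exactly, and the integrated excess mass lands within a few tens of per cent of the quoted cloud mass given the assumed $\mu_c$.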
\begin{table}
\centering
\caption{Physical parameters that are the same in all of our simulations.}
\label{tab:params}
\begin{tabular}{ccccccc}
$n_{h,0}^a$ & $g$ & $T_{\mathrm{iso}}$ & $T_c$ & $Z/Z_{\odot}$ & $r_c$ & $\chi\equiv\rho_c/\rho_h{}^b$\\
(cm$^{-3}$) & (cm s$^{-2}$) & (K) & (K) & & (kpc) &\\
\hline
$10^{-2}$ & $10^{-8}$ & $2 \times 10^6$ & $10^4$ & 0.2 & 0.1 & 412\\
\hline
\multicolumn{7}{l}{\footnotesize$^a$ The halo particle density at the cloud's initial height is}\\
\multicolumn{7}{l}{\footnotesize $n_h(10$ kpc$) = 3.4 \times 10^{-3}$ cm$^{-3}$. For the given density contrast this}\\
\multicolumn{7}{l}{\footnotesize leads to a cloud particle density of $n_c = 0.682$ cm$^{-3}$.}\\
\multicolumn{7}{l}{\footnotesize This varies slightly across the $\Delta z=0.2$ kpc size of the cloud.}\\
\multicolumn{7}{l}{\footnotesize$^b$ The \emph{particle} density contrast $n_c/n_h\approx 200$ is given by $T_h/T_c$ due to}\\
\multicolumn{7}{l}{\footnotesize pressure equilibrium and is approximately half that of $\chi$ due to}\\
\multicolumn{7}{l}{\footnotesize differences in the mean molecular weight. This value is approximately}\\
\multicolumn{7}{l}{\footnotesize equal in the HD and MHD simulations since $T_h(10 \ifmmode{\>{\rm kpc}} \else{kpc}\fi)\approx T_{\mathrm{iso}}$.}\\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{The magnetic field strength normalisation, resolution, final time, and final $z$ coordinate of the leading edge of the main remnant in the simulation runs. Note that the final times and heights in the simulations are constrained by our corona model and numerical effects, as described in Section \ref{sec:nummethod}. They are hence not related to the state of the clouds, which in all cases still exist at the end of the simulations. Each simulation has a short acronym that is used when referring to it throughout the paper.}
\label{tab:sims}
\begin{tabular}{lcccc}
\hline
Name & $B_0$ & Resolution & $t_{\mathrm{end}}$ & $z_c(t_{\mathrm{end}})$\\
& ($\mu$G) & (cells/$r_c$) & (Myr) & (kpc)\\
\hline
HD & 0 & 50 & 75 & 2\\
MHD-W & 0.3$^a$ & 50 & 68 & 3.5\\
MHD-S & 1.5$^b$ & 50 & 64 & 4\\
MHD-Sl & 1.5 & 25 & 59 & 5\\
MHD-Sh & 1.5 & 100 & 43 & 7\\
\hline
\multicolumn{5}{l}{\footnotesize$^a$ In terms of $\beta\equiv P/P_{\mathrm{mag}}$ this corresponds to $\beta\approx 7700$ at $z=10$ kpc}\\
\multicolumn{5}{l}{\footnotesize and $\beta \approx 640$ at $z=2$ kpc.}\\
\multicolumn{5}{l}{\footnotesize$^b$ In terms of $\beta\equiv P/P_{\mathrm{mag}}$ this corresponds to $\beta\approx 310$ at $z=10$ kpc}\\
\multicolumn{5}{l}{\footnotesize and $\beta \approx 25$ at $z=2$ kpc.}\\
\hline
\end{tabular}
\end{table}
\subsection{Numerical methods}
\label{sec:nummethod}
We use the RAMSES Adaptive Mesh Refinement (AMR) code \citep{teyssier02,fromang06} to evolve our simulations in a Cartesian domain of $8\times 8\times 8$ kpc. We assume ideal MHD and evolve the magnetic field with a constrained transport scheme, which guarantees that the field is divergence-free to machine precision. The use of AMR allows us to use a domain sufficiently large to follow the entire tail of stripped material from the cloud throughout the simulations at high resolution. Successive AMR levels differ by a factor of 2 in the maximum number of cells along each dimension, with the coarsest level having $64^3$ cells and the finest level having up to $4096^3$ cells for our standard resolution runs. Thus the maximum resolution is $\Delta x\approx 2$ pc, or about one fiftieth of the initial radius of the cloud. We refine based on the density gradient to also capture the mixing of relatively low density gas in the wake. Our simulation domain is much wider than necessary along the transverse ($x$ and $y$) directions because RAMSES does not allow for simulation domains with unequal lengths. However, due to our use of AMR the corona far from the cloud is always on the coarsest level, and so the additional computational cost of evolving this large domain is negligible. We only include the relevant part of the volume in our slice and projection plots in Section \ref{sec:results}.
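The grid hierarchy translates into cell sizes as follows (a back-of-the-envelope sketch; the level bookkeeping is our own, not RAMSES syntax):

```python
import math

L_box_pc = 8.0e3                  # domain side length in pc
cells_coarse, cells_fine = 64, 4096

level_min = int(math.log2(cells_coarse))    # 6
level_max = int(math.log2(cells_fine))      # 12
# The cell size halves with each refinement level.
dx_pc = {lvl: L_box_pc / 2 ** lvl for lvl in range(level_min, level_max + 1)}
dx_min = dx_pc[level_max]                   # ~1.95 pc at the finest level
cells_per_rc = 100.0 / dx_min               # r_c = 100 pc -> ~51 cells/r_c
```

This confirms the quoted figures: seven levels of refinement, a finest cell of about 2 pc, and roughly fifty cells per initial cloud radius.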
Initially, the simulation box covers 8 kpc $< z < 16$ kpc. We keep the simulation approximately in the HVC's rest frame by regularly subtracting the centre of mass velocity of the cloud material. We keep track of the cloud material through the use of a passive scalar (i.e. a tracer fluid quantity), denoted $C$, initially set to 1 for $r<r_c$ and 0 elsewhere. This comoving cloud frame method has the advantage that the inner parts of the corona, where the density and magnetic field strengths are relatively high and the MHD equilibrium is easily disturbed by numerical effects, are only included near the end of the simulations. It also reduces truncation errors related to advection by minimising the overall velocity of the cloud material with respect to the computational mesh. We discuss this method and compare the evolution in this frame to that of the corona reference frame in Appendix \ref{sec:refframe}. We include optically thin radiative cooling assuming collisional ionisation equilibrium down to a temperature floor of $10^4$ K using the default RAMSES cooling tables calculated with CLOUDY \citep{ferland98}. This takes the temperature dependence of $\mu$ into account. In order to keep the corona stable we disable cooling in unmixed coronal gas. This is justified because the unmixed coronal gas, due to its low density and high temperature (which is also near the minimum in the cooling curve), has a much longer cooling time scale than that of the mixed gas, typically longer than the time span of the simulations. For this we define a passive scalar initially set to 1 where $\rho>2\rho_h$ and 0 elsewhere and only allow cooling in cells where this tracer is greater than zero. This does not affect coronal gas condensation because this process is driven by mixing with cloud material.
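A minimal sketch of the frame shift (the function name and the uniform-grid simplification are ours; on the actual AMR grid the sums run over cells of varying volume):

```python
import numpy as np

def shift_to_cloud_frame(rho, vz, C, dV):
    """Subtract the cloud material's centre-of-mass z-velocity from the box.

    rho, vz and C are arrays of density, z-velocity and cloud tracer; dV is
    the (here uniform) cell volume. Returns the shifted velocity field and
    the subtracted centre-of-mass velocity.
    """
    m_cloud = C * rho * dV                       # tracer-tagged mass per cell
    vz_com = np.sum(m_cloud * vz) / np.sum(m_cloud)
    return vz - vz_com, vz_com
```

Applying this shift at regular intervals keeps the tracer-weighted mean velocity near zero, so the cloud material stays near the centre of the (moving) box.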
In Table \ref{tab:sims} we show the different magnetic field strengths and resolutions for our five simulation runs. We also note the time and approximate $z$ coordinate of the front of the HVC at the end of each simulation. For the simulation without a magnetic field, HD, we stop the simulation once the main cloud reaches the disc-corona interface at roughly $z\approx 2$ kpc \citep{haffner03,gaensler08}. As can be seen, we are not able to follow the HVC all the way to this point in simulations that include the halo magnetic field. This is due to the strongly amplified draped field at the front of the main cloud remnant. The combination of a very strong magnetic field and a low density (as this region is in the corona just in front of the cloud) leads to a very high velocity of fast magnetosonic waves. This in turn leads to very short simulation time steps, which makes the simulations prohibitively expensive to run past a certain time. However, the two simulations with a magnetic field at our standard resolution, MHD-W and MHD-S, still ran for 90 and 85 per cent of the time and 81 and 63 per cent of the distance, respectively, of simulation HD. Hence, we expect the monotonic increases that we see in all cases for the cold and low velocity gas mass described in Section \ref{sec:results} to be qualitatively representative of the full evolution.
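To see why the draped layer throttles the time step, consider the CFL condition with the fast magnetosonic speed. The sketch below uses illustrative numbers: the draped field strength and density are assumptions of the right order of magnitude, not values measured from the simulations.

```python
import numpy as np

kB, mp = 1.3807e-16, 1.6726e-24
gamma, mu = 5.0 / 3.0, 0.62

def dt_cfl(dx, T, n, B, v=0.0, C_cfl=0.8):
    """CFL time step limited by the fast magnetosonic speed (cgs, ideal MHD).

    Perpendicular to B the fast speed is sqrt(c_s^2 + v_A^2), so a strong
    field in low-density gas forces a very small time step.
    """
    rho = mu * mp * n
    c_s = np.sqrt(gamma * kB * T / (mu * mp))
    v_A = B / np.sqrt(4.0 * np.pi * rho)
    return C_cfl * dx / (abs(v) + np.sqrt(c_s ** 2 + v_A ** 2))

dx = 2.0 * 3.086e18                                   # ~2 pc finest cell (cm)
dt_corona = dt_cfl(dx, T=2.0e6, n=5.0e-3, B=1.5e-6)   # undisturbed corona
dt_draped = dt_cfl(dx, T=2.0e6, n=1.0e-3, B=1.0e-5)   # assumed draped layer
```

Already for a field amplified to $\sim 10\ \mu$G in $\sim 10^{-3}$ cm$^{-3}$ gas the time step drops by a factor of several relative to the ambient corona, and the amplification only grows as the cloud accelerates.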
\section{Results}
\label{sec:results}
\begin{figure*}
\centering
\vspace{4cm}
\begin{picture}(75,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{hd_rhocold_projy_1.pdf}}
\put(24,190){HD}
\put(24,182){$t=0$ Myr}
\end{picture}
\begin{picture}(68,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{hd_rhocold_projy_44.pdf}}
\put(15,190){HD}
\put(15,182){$t=41$ Myr}
\end{picture}
\begin{picture}(68,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{hd_rhocold_projy_56.pdf}}
\put(15,190){HD}
\put(15,182){$t=52$ Myr}
\end{picture}
\begin{picture}(68,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{hd_rhocold_projy_68.pdf}}
\put(15,190){HD}
\put(15,182){$t=64$ Myr}
\end{picture}
\begin{picture}(75,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{hd_rhocold_projy_80.pdf}}
\put(15,190){HD}
\put(15,182){$t=75$ Myr}
\end{picture}\\
\vspace{4cm}
\begin{picture}(75,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{mhd_weakfield_rhocold_projx_44.pdf}}
\put(24,190){MHD-W}
\put(24,181){$x$ projection}
\put(24,172){$t=41$ Myr}
\end{picture}
\begin{picture}(65,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{mhd_weakfield_rhocold_projx_56.pdf}}
\put(15,190){MHD-W}
\put(15,181){$x$ projection}
\put(15,172){$t=52$ Myr}
\end{picture}
\begin{picture}(70,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{mhd_weakfield_rhocold_projx_68.pdf}}
\put(15,190){MHD-W}
\put(15,181){$x$ projection}
\put(15,172){$t=64$ Myr}
\end{picture}
\hspace{1cm}
\begin{picture}(75,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{mhd_weakfield_rhocold_projy_44.pdf}}
\put(24,190){MHD-W}
\put(24,181){$y$ projection}
\put(24,172){$t=41$ Myr}
\end{picture}
\begin{picture}(65,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{mhd_weakfield_rhocold_projy_56.pdf}}
\put(15,190){MHD-W}
\put(15,181){$y$ projection}
\put(15,172){$t=52$ Myr}
\end{picture}
\begin{picture}(70,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{mhd_weakfield_rhocold_projy_68.pdf}}
\put(15,190){MHD-W}
\put(15,181){$y$ projection}
\put(15,172){$t=64$ Myr}
\end{picture}\\
\vspace{4cm}
\begin{picture}(75,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{mhd_strongfield_rhocold_projx_44.pdf}}
\put(24,190){MHD-S}
\put(24,181){$x$ projection}
\put(24,172){$t=41$ Myr}
\end{picture}
\begin{picture}(65,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{mhd_strongfield_rhocold_projx_56.pdf}}
\put(15,190){MHD-S}
\put(15,181){$x$ projection}
\put(15,172){$t=52$ Myr}
\end{picture}
\begin{picture}(70,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{mhd_strongfield_rhocold_projx_68.pdf}}
\put(15,190){MHD-S}
\put(15,181){$x$ projection}
\put(15,172){$t=64$ Myr}
\end{picture}
\hspace{1cm}
\begin{picture}(75,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{mhd_strongfield_rhocold_projy_44.pdf}}
\put(24,190){MHD-S}
\put(24,181){$y$ projection}
\put(24,172){$t=41$ Myr}
\end{picture}
\begin{picture}(65,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{mhd_strongfield_rhocold_projy_56.pdf}}
\put(15,190){MHD-S}
\put(15,181){$y$ projection}
\put(15,172){$t=52$ Myr}
\end{picture}
\begin{picture}(70,100)
\put(0,0){\includegraphics[height=0.4\textwidth]{mhd_strongfield_rhocold_projy_68.pdf}}
\put(15,190){MHD-S}
\put(15,181){$y$ projection}
\put(15,172){$t=64$ Myr}
\end{picture}
\caption{Logarithm of the projected density of cold ($T<2 \times 10^4$ K) gas at different times for simulations HD (top, no magnetic field), MHD-W (weak magnetic field) projected along $x$ (middle left) and $y$ (middle right), and MHD-S (strong magnetic field) projected along $x$ (bottom left) and $y$ (bottom right).}
\label{fig:colddensproj}
\end{figure*}
\subsection{Cloud evolution}
We show the projected density of cold gas at different times\footnote{Animated versions are available at \url{https://www.astro.rug.nl/~gronnow/animations/magnetichvcs.html}}
of the non-magnetic simulation, HD, weak field simulation MHD-W, and strong field simulation MHD-S, in Figure \ref{fig:colddensproj}.
In the non-magnetic case, the two transverse dimensions, $x$ and $y$, are similar and so we only show one projection. As can be seen, the evolution of the cloud's morphology clearly differs between these simulations. The evolution in the magnetic simulations is highly asymmetrical along the transverse dimensions, especially for the strong field case. The magnetic field stretches the cloud along the transverse direction perpendicular to the field (i.e. along $y$) but, in the strong field case, keeps the cloud very compact along the field direction (i.e. along $x$). Both of the magnetic simulations show a large number of small, cold, and loosely connected cloudlets at late times, while in simulation HD the head and tail remain mostly connected.
The strong field in simulation MHD-S exacerbates the Rayleigh-Taylor (RT) instability at the cloud's leading edge along $y$. There, field lines become trapped in indentations and are strongly amplified. Eventually, around $t\approx 45$ Myr, this leads to the creation of finger-like structures which break apart and branch off, forming a tree-like structure.
Later, at $t\approx 55$ Myr, another set of fingers is created, but these end up detaching from the main cloud completely, leading to the main cloud remnant splitting into three clumps as seen in the $x$-projection at $t=64$ Myr.
Figure \ref{fig:bmag} shows a slice of the magnetic field strength at $y=0$ for the two MHD simulations at $t=64$ Myr. As can be seen, the cloud strongly affects the magnetic field. More specifically, the field becomes locally strongly amplified and aligned with the cloud and wake. This is due to the magnetic `draping' effect, whereby the cloud sweeps up the ambient magnetic field as it moves through the corona \citep[see][for a detailed description of this phenomenon]{dursi08,gronnow18}. In the wake of the cloud the draping brings oppositely directed magnetic field lines into close proximity. Hence, we expect that magnetic reconnection might occur in this region when resistivity is included.
\begin{figure}
\centering
\includegraphics[height=0.4\textheight]{mhd_weakfield_bmag_y_68.pdf}
\includegraphics[height=0.4\textheight]{mhd_strongfield_bmag_y_68.pdf}
\caption{Magnetic field strength in a slice at $y=0$ at $t \approx 64$ Myr for simulation MHD-W (left) and MHD-S (right). Normalised arrows show the direction of the field in the $xz$-plane.}
\label{fig:bmag}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{mcold_evo.pdf}
\caption{Cold gas mass evolution for the simulation without magnetic field (HD, dashed), with a weak field (MHD-W, solid grey), and a strong field (MHD-S, solid black). The vertical axis is the mass of gas at temperatures $T<2\times 10^4$ K divided by the mass of gas at these temperatures at $t=1$ Myr. This is to filter out the rapid early cooling which is not due to condensation as described in the text.}
\label{fig:coldgas}
\end{figure}
\subsection{Condensation of cold gas}
\label{sec:condensation}
As mentioned previously, numerical studies in the literature have found that the mixing between cold clouds and the hot corona can cause cold gas to condense out of the hot gas. In this way, substantial amounts of cold gas might be accreted even if the original cloud has largely dispersed by the time it reaches the disc. This process is usually associated with relatively metal-rich intermediate-velocity clouds ejected in the galactic fountain \citep[e.g.][]{marinacci10,armillotta16,gronnow18,kooij21} but it has also been claimed to be efficient in metal-poor HVCs \citep{gritton17}.
We show the evolution of the total mass of cold ($T < 2\times 10^4$ K) gas in Figure \ref{fig:coldgas}. The cold gas mass is normalised by its value at $t\approx 1$ Myr. This is to filter out the rapid early cooling that occurs in the smooth transition region between the cloud and corona, i.e. at $r \approx r_c$. This cooling is not caused by mixing but rather by this gas having initial temperatures in between that of the cold gas and the hot corona, near the peak of the cooling curve. As can be seen, in all cases the amount of cold gas is increasing with time, i.e. condensation is clearly occurring. The simulations that include the halo magnetic field both lead to less cold gas in agreement with the findings of \cite{gronnow18} for the case without any gravitational potential. However, the suppression in condensation is more effective for the \emph{weaker} magnetic field in simulation MHD-W, where the mass of cold gas only increases appreciably after $t\approx 60$ Myr. A similar effect was noted in \cite{gronnow18} for the condensation of clouds in the galactic fountain. However, in that case this was caused by the stronger magnetic field not being effectively draped and amplified around the cloud. That cannot be the case here because significant draping is observed in both of the magnetic simulations. This is also expected because the cloud is moving faster than the Alfv\'{e}n speed at $t \gtrsim 25$ Myr in both cases and so is firmly in the regime where draping is effective. Instead, the cause for the greater mass of condensed gas in simulation MHD-S is the `fingers' created by the RT instability that are mostly absent in simulation MHD-W (see Figure \ref{fig:colddensproj}). These fingers split into cloudlets where mixing with hot gas is efficient, leading to condensation. Of course, for magnetic fields progressively weaker than in MHD-W, the cold gas mass will eventually tend towards the HD result \citep[see][]{gronnow18}.
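The diagnostic itself is simple bookkeeping; a sketch (function name ours), with the tracer $C$ also splitting off the condensed component used later in the analysis:

```python
import numpy as np

T_COLD = 2.0e4   # cold-gas temperature threshold (K)

def cold_mass(rho, T, dV, C=None):
    """Mass of cold gas; optionally also the condensed (non-cloud) part.

    rho, T and C are per-cell arrays and dV the cell volume. With the
    tracer C given, the second return value is the cold mass that did not
    originate in the cloud, i.e. gas condensed out of the corona.
    """
    cold = T < T_COLD
    m = rho * dV
    if C is None:
        return np.sum(m[cold])
    return np.sum(m[cold]), np.sum((m * (1.0 - C))[cold])
```

The curves in Figure \ref{fig:coldgas} then correspond to this quantity at each snapshot, divided by its value at $t\approx 1$ Myr.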
\subsection{Cold gas velocity}
\label{sec:coldgasvel}
We show the velocity distribution of cold gas (calculated as the mass-weighted velocity histogram of cells containing cold gas) for our three standard simulations at $t=64$ Myr in Figure \ref{fig:coldgasvel}. As can be seen, in all cases most of the cold gas reaches sufficient infall velocity to be classified as high-velocity (assuming that the velocity with respect to the corona is representative of the LSR velocity). However, the distributions at lower velocities\footnote{We use `lower' and `higher' for velocities based on the absolute velocity, i.e. $-200 \mathrm{km s}^{-1}$ is a higher velocity than $-50 \mathrm{km s}^{-1}$ despite being more negative, for example.} are clearly different. Simulation HD has a tail towards lower velocities due to interaction with the corona, as expected. However, for simulation MHD-S this tail is much less prominent and instead the velocity distribution is bimodal with a low-velocity bump peaking around $v_z = -30 \mathrm{km s}^{-1}$. Overall, about 23 per cent of the total cold gas mass is in this low velocity bump at $|v_z| < 60 \mathrm{km s}^{-1}$ (and about half of this below the peak at $|v_z| < 30 \mathrm{km s}^{-1}$, as expected from its relative symmetry) in simulation MHD-S. We show the evolution of the fraction of cold gas that is at $|v_z| < 60 \mathrm{km s}^{-1}$ in simulation MHD-S in Figure \ref{fig:coldgasslowfrac}. This fraction is rising superlinearly, mirroring the evolution of the total cold gas mass in Figure \ref{fig:coldgas}. In contrast, without the magnetic field the velocity distribution essentially does not extend below $|v_z| \approx 60 \mathrm{km s}^{-1}$, with less than 1 per cent of the cold gas mass being at those velocities. Simulation MHD-W has a less prominent tail towards lower velocities than simulation HD, following the distribution of simulation MHD-S at velocities in between the two peaks.
Unlike simulation MHD-S, it has no clear second peak, but it does have more gas at velocities $|v_z| < 75 \mathrm{km s}^{-1}$ than simulation HD. At the end of simulation MHD-W at $t=68$ Myr the mass fraction of cold gas at $|v_z| < 60 \mathrm{km s}^{-1}$ is about 4 per cent. Simulation HD continues to contain virtually no cold gas at $|v_z| < 60 \mathrm{km s}^{-1}$ until it ends at $t=76$ Myr.
In Figure \ref{fig:coldgasfcond} we show the velocity distribution of the condensed mass fraction of cold gas in simulation MHD-S at $t=64$ Myr. This is the fraction of cold gas in each velocity bin that did not originally belong to the cold cloud, as tracked by the passive scalar $C$ (see Section \ref{sec:nummethod}). Hence, it represents gas that was initially in the hot corona and has cooled during the simulation due to mixing with cloud material (i.e. condensed). As expected, this reveals that the cold gas in the high-velocity peak largely represents the main remnant(s) of the original cloud, which have become HVCs. Conversely, the low velocity bump is largely composed of gas that has condensed during the simulation. This is consistent with the increase over time of slow gas seen in Figure \ref{fig:coldgasslowfrac}. The fact that the lowest velocity gas largely consists of the initially static coronal gas is of course expected in any case. However, when the magnetic field is absent this condensed gas is able to accelerate to, or at least maintain, intermediate and high velocities and form a tail rather than a second peak. We interpret this as being due to the effect of magnetic tension from strongly draped fields, as described in Section \ref{sec:slowclumps}. For the weak field, there is a low fraction of gas at velocities below the main peak due to the low amount of condensation.
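Both figures derive from the same per-cell data; a minimal sketch of such a mass-weighted histogram with a condensed fraction per bin (function name ours):

```python
import numpy as np

def velocity_distribution(vz, mass, C, bins):
    """Mass-weighted v_z histogram of cold gas plus condensed fraction.

    vz, mass and C are 1D arrays for the cold cells only. Returns the mass
    per bin, the fraction of that mass not tagged as original cloud
    material (i.e. condensed from the corona), and the bin edges.
    """
    m_tot, edges = np.histogram(vz, bins=bins, weights=mass)
    m_cond, _ = np.histogram(vz, bins=bins, weights=mass * (1.0 - C))
    frac = np.divide(m_cond, m_tot,
                     out=np.zeros_like(m_cond), where=m_tot > 0)
    return m_tot, frac, edges
```

A bin dominated by the original cloud then has a condensed fraction near zero, while a bin of gas cooled out of the corona has a fraction near one.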
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{veldist_all.pdf}
\caption{Velocity distribution (i.e. mass-weighted velocity histogram) of cold gas at $t=64$ Myr for the simulation without magnetic field (HD, dashed black), the weak magnetic field simulation (MHD-W, solid grey), and the strong magnetic field simulation (MHD-S, solid black).}
\label{fig:coldgasvel}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{mhd_strongfield_coldslowgas.pdf}
\caption{Fraction of cold gas mass at velocities $|v_z| < 60 \mathrm{km s}^{-1}$ in simulation MHD-S after the main cloud remnant has accelerated beyond this velocity.}
\label{fig:coldgasslowfrac}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{mhd_condensed_fraction.pdf}
\caption{Fraction of condensed gas, i.e. mass fraction of cold gas that was not originally in the cloud according to the tracer, as function of velocity for simulation MHD-S at $t=64$ Myr.}
\label{fig:coldgasfcond}
\end{figure}
\subsection{Clump finding analysis}
\label{sec:clumps}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{clumps_massvel_hiresclumps}
\caption{The mass and velocity of clumps as a function of time, for simulations HD (top), MHD-W (middle), and MHD-S (bottom). On the left, mass is on the $y$-axis while the velocity is indicated by the colour, and on the right this is swapped. On the left, the black curve indicates the total mass of clumps, the blue curve indicates the mass of low-intermediate velocity clumps ($|v_z| < 60 \mathrm{km s}^{-1}$), and the red curve indicates the mass of low velocity clumps ($|v_z| < 30 \mathrm{km s}^{-1}$). On the right, the black curve indicates the velocity of the most massive clump, i.e. the main remnant of the HVC, while the dashed blue and red lines mark the $60 \mathrm{km s}^{-1}$ and $30 \mathrm{km s}^{-1}$ thresholds, respectively.}
\label{fig:clumps}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.41\textwidth]{clumps_nclumps_hiresclumps}
\caption{The total number of detected clumps (black), low-intermediate velocity clumps ($|v_z| < 60 \mathrm{km s}^{-1}$, blue) and low velocity clumps ($|v_z| < 30 \mathrm{km s}^{-1}$, red) in simulations HD (top), MHD-W (middle), and MHD-S (bottom).}
\label{fig:nclumps}
\end{figure}
From Figure \ref{fig:colddensproj} it is clear that in all cases the cloud breaks into clumps and filaments. As discussed in Section \ref{sec:condensation}, the stripped gas is also not smoothly distributed because it triggers condensation of coronal gas, creating cloudlets that grow through cooling. Therefore, in addition to examining the integrated mass, we ran a clump finding algorithm to study the mass and velocity spectrum of these clumps. We used the FellWalker \citep{berry15} algorithm as implemented in CUPID \citep{berry07} on the density outputs from the simulations. To avoid spurious density peaks being interpreted as clumps we required a minimum of 48 cells as well as a minimum mass of 10 $M_\odot$ for each clump. We did not put any constraint on the temperature of the gas as any significant overdensities in our simulations are cold anyway. We have verified that we find essentially the same spectrum of clumps when running the clump finder on the density of only the cold ($T < 2 \times 10^4$ K) gas instead. Hence, these clumps are tracking the cold gas distribution.
We show the mass and centre-of-mass $z$-velocity distribution of these clumps for simulations HD, MHD-W, and MHD-S, as a function of time in Figure \ref{fig:clumps}. In all simulations the most massive clump (i.e. the main HVC remnant) moves with constant acceleration towards the disc until $t\approx 50$ Myr. After this time, drag becomes significant and in all simulations the velocity of the main remnant appears to have flattened out by the end of the simulation. The increase in the total mass in clumps is due to more clumps forming through condensation rather than individual clumps gaining mass. This is clear from Figures \ref{fig:clumps} and \ref{fig:nclumps}: Figure \ref{fig:clumps} shows that the clumps, including the main remnant, are generally losing mass, while Figure \ref{fig:nclumps} shows that the number of clumps increases substantially with time.
Examining the rest of the distribution, it is clear that a large population of clumps with low masses and low velocities exists for MHD-S that is absent in the HD case. These clumps make up the low velocity bump seen in the overall velocity distribution of the cold gas as previously described in Section \ref{sec:coldgasvel}. In simulation HD, all clumps at all times, including ones with low mass, have velocities of $|v_z| > 50 \mathrm{km s}^{-1}$ as expected from the cold gas velocity distribution. In contrast, most of the low mass ($M < 100 M_\odot$) clumps have velocities below this value throughout most of the strong field simulation. The weak field case is intermediate between the two, with some low velocity clumps. The velocity of the main cloud remnant, i.e. the most massive clump, as shown by the solid black line in the velocity plot in Figure \ref{fig:clumps} (right panel), is slightly lower at late times when the magnetic field is included. However, this difference is much smaller than the general difference in the velocities of the lower mass clumps.
We classify clumps with $|v_z| < 60 \mathrm{km s}^{-1}$ as `low-intermediate velocity' clumps (LIVC) and clumps with $|v_z| < 30 \mathrm{km s}^{-1}$ as `low velocity' clumps (LVC), indicated by the dashed blue and red lines in the velocity plot, respectively. These velocities correspond to the upper bound of the low velocity bump in simulation MHD-S at $t=64$ Myr (see Figure \ref{fig:coldgasvel}) and to approximately the regime of LSR velocity measurements usually considered to be low-velocity gas, respectively. We show the integrated mass of these clumps with the solid blue and red lines in the mass plot in Figure \ref{fig:clumps}. As can be seen, this mass is low compared to the total mass of all clumps, which is dominated by the main cloud remnant, but it is growing much faster than the overall mass in clumps. The fractions of mass in LIVCs and LVCs are about 20 and 7 per cent, respectively, at the end of simulation MHD-S at $t\approx 64$ Myr. This is somewhat less than the overall fractions of the total cold gas at those velocities (see Section \ref{sec:coldgasvel}), so some of this mass is presumably somewhat diffuse in the tails of clumps and in the filaments between them, at densities too low to be recognised as belonging to a clump by the clump finding algorithm. The mass fraction of LIVCs for simulation MHD-W, and also of LVCs for simulation MHD-S, is increasing rapidly by the end of the simulations, so we expect both to become a significant contribution by the time the main remnants would reach the disc-corona interface at $t\approx 75$ Myr. In any case, due to their growing importance with time as well as resolution (see Section \ref{sec:resolution}), the mass fractions in LIVCs and LVCs in our simulations are lower limits. As expected from our results for the overall cold gas in Section \ref{sec:coldgasvel}, LIVCs and LVCs are mainly composed of cold gas that has condensed out of the corona.
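The classification reduces to simple threshold cuts on the clump catalogue (a sketch with names of our own; the per-clump arrays stand in for the clump finder output):

```python
import numpy as np

V_LIV, V_LV = 60.0, 30.0    # km/s thresholds for LIVC and LVC

def clump_velocity_classes(masses, vz):
    """Mass fractions of low-intermediate (LIVC) and low velocity (LVC) clumps.

    masses and vz are per-clump arrays (clump mass and centre-of-mass
    z-velocity). Note that LVCs are a subset of LIVCs by construction.
    """
    m_tot = np.sum(masses)
    f_livc = np.sum(masses[np.abs(vz) < V_LIV]) / m_tot
    f_lvc = np.sum(masses[np.abs(vz) < V_LV]) / m_tot
    return f_livc, f_lvc
```

Because the catalogue is dominated by the massive main remnant at high velocity, both fractions stay well below unity even when many slow clumps are present.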
We show the position of the clumps distinguished by their velocities on top of the density at $x=0$ at the end of simulation MHD-S at $t=64$ Myr in Figure \ref{fig:clumplocs}. As can be seen, LVCs and LIVCs generally reside in the wake while the faster clumps are mostly near the front, close to the main cloud remnants.
\begin{figure}
\centering
\includegraphics[width=0.36\textwidth]{clumps_overplot.pdf}
\caption{Logarithm of the density in a slice at $x=0$ at $t = 64$ Myr for simulation MHD-S. The centre of mass positions of clumps in this slice are indicated with different symbols according to their velocities. These are red crosses for $|v_z| < 30 \mathrm{km s}^{-1}$, blue squares for $30 \mathrm{km s}^{-1} < |v_z| < 60 \mathrm{km s}^{-1}$, and green diamonds for $|v_z| > 60 \mathrm{km s}^{-1}$. The blue rectangle indicates the region plotted in Figure \ref{fig:clumpvel}. The slice in the $xz$ plane in Figure \ref{fig:clumpfield} is centered on the middle of this rectangle at $y=-0.3$.}
\label{fig:clumplocs}
\end{figure}
\section{Discussion}
\label{sec:discussion}
\subsection{Magnetic deceleration of condensed gas}
\label{sec:slowclumps}
As shown in Sections \ref{sec:condensation} and \ref{sec:clumps}, much of the condensed cold gas is severely decelerated in simulation MHD-S. We show the velocity evolution of some of these clumps in one of the finger-like substructures in Figure \ref{fig:clumpvel}. The region shown in this figure is marked by the blue rectangle in Figure \ref{fig:clumplocs}. Clearly, the cold gas in this structure is strongly decelerating. The cloudlets there might be more affected by hydrodynamic drag due to not being directly upwind of the main cloud remnants. More important, however, is the deceleration caused by the magnetic field. The Lorentz force per unit volume is given by
\begin{equation}
\mathbf{f}_L=\frac{1}{4\pi}(\nabla\times\mathbf{B})\times\mathbf{B}=-\frac{1}{8\pi}\nabla B^2+\frac{1}{4\pi}(\mathbf{B}\cdot\nabla)\mathbf{B},
\end{equation}
where the first term is the force due to magnetic pressure and the second term is the force due to magnetic tension. The magnetic tension force resists curving of field lines and can be written as
\begin{equation}
\mathbf{f}_t = \frac{1}{8\pi}\mathbf{\hat{b}}\mathbf{\hat{b}}\cdot\nabla B^2 + \frac{B^2}{4\pi R_c}\mathbf{\hat{n}},
\end{equation}
where $\mathbf{\hat{b}}$ is the unit vector in the direction of $\mathbf{B}$, $\mathbf{\hat{n}}$ is the unit vector perpendicular to the field lines, and $R_c$ is the radius of curvature of the field lines. Figure \ref{fig:clumpfield} shows that the field is strongly draped in the $xz$-plane around the structure shown in Figure \ref{fig:clumpvel}. The draping amplifies the field in a layer bending around the front of the clumps. This leads to a force from the magnetic pressure in the $-z$ direction at the front. Additionally, the strong bending of the field lines in this layer leads to strong magnetic tension, which at the front of draped clumps likewise points in the $-z$ direction. Hence, magnetic pressure and tension forces effectively act as additional drag. We find that in draped layers the magnetic tension force generally dominates over the magnetic pressure force, typically by roughly an order of magnitude. Hence, magnetic tension is the main cause of the deceleration.
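This decomposition can be evaluated on snapshots by finite differences. The sketch below does so for an analytic test case rather than simulation data: a purely azimuthal field of constant magnitude, for which the pressure term vanishes and the tension has the analytic magnitude $B^2/(4\pi R_c)$ with $R_c = r$, pointing towards the axis.

```python
import numpy as np

# 2D grid in the xy-plane; the axis r = 0 falls between grid points.
N, L = 256, 2.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing='ij')
r = np.sqrt(X ** 2 + Y ** 2)

B0 = 1.0
Bx, By = -B0 * Y / r, B0 * X / r          # azimuthal field with |B| = B0

# Tension term (B . grad)B / 4pi via central differences.
dBx_dx, dBx_dy = np.gradient(Bx, h, edge_order=2)
dBy_dx, dBy_dy = np.gradient(By, h, edge_order=2)
t_x = (Bx * dBx_dx + By * dBx_dy) / (4.0 * np.pi)
t_y = (Bx * dBy_dx + By * dBy_dy) / (4.0 * np.pi)
t_mag = np.sqrt(t_x ** 2 + t_y ** 2)

# Pressure term -grad(B^2)/8pi vanishes here since |B| is constant.
P_mag = (Bx ** 2 + By ** 2) / (8.0 * np.pi)
g_x, g_y = np.gradient(P_mag, h, edge_order=2)

# Sample point near (x, y) = (1, 0): tension should be ~ B0^2/(4 pi r),
# directed in -x, i.e. towards the axis.
i, j = np.argmin(np.abs(x - 1.0)), np.argmin(np.abs(x))
```

Applied to the simulation draped layers, the same two terms give the order-of-magnitude dominance of tension over pressure quoted above.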
The overall effect of magnetic fields decelerating gas clouds is known from previous wind tunnel simulations \citep[e.g.][]{gronnow18,gronke20a,sparre20}, as well as cosmological zoom-in simulations \citep{vandevoort21} and the largely similar MHD hydrostatic corona setup of \cite{kwak09}. However, {\em we uncover an important aspect of this effect in our falling cloud simulations, arising from the interplay of magnetic fields, the gravitational potential, and radiative cooling, that was not seen in previous studies}, which did not include all of these effects: the magnetic drag does not affect all the gas in our simulations equally. The clumps that are either a remnant of the original HVC or were stripped at early times are able to accelerate largely unimpeded by the magnetic field. Eventually they build up a considerably amplified draped field at their leading edge, but have sufficient momentum not to be significantly decelerated by this. In contrast, the stripped gas, where the mixing and condensation occur, is decelerated, in some cases to the point of being nearly suspended in the corona. These cloudlets have lower momentum and, additionally, due to their small sizes, the radius of curvature of the draped field around them is small compared to that around the main cloud remnants, leading to a much stronger tension force. This leads to the overall bimodal velocity distribution with a low-velocity bump, rather than a tail, as shown in Section \ref{sec:condensation}.
Recently, \cite{heitsch21} performed simulations with a setup reminiscent of ours but without a magnetic field, and found that condensed gas was able to catch up to, and in some cases even overtake, the original cloud material in a `peloton effect'. In Figure \ref{fig:clumps} it can be seen that there are some clumps at late times that are travelling slightly faster than the most massive cloud remnant in simulations HD and MHD-W. This indicates that the peloton effect might be occurring to some degree in these cases. However, the magnetic deceleration of condensed gas in simulation MHD-S appears to suppress this effect almost completely. Hence, the peloton effect might not be important in the inner part of the corona but could still be effective at larger distances where the halo magnetic field is weaker.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{clump_vel.pdf}
\caption{Velocity along $z$ in a slice at $x=0$ of the small region with condensed cloudlets marked in Figure \ref{fig:clumplocs} in simulation MHD-S at $t=52$ Myr, $t=60$ Myr, and $t=68$ Myr from left to right, respectively. Overlaid in red are density contours at $\rho=0.2 \ifmmode{\>{\rm cm}^{-3}}\else{cm$^{-3}$}\fi$. The filament breaks apart and decelerates.}
\label{fig:clumpvel}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{clump_field.pdf}
\caption{Magnetic field strength divided by the coronal field strength in a slice at $y=-0.3$ of the small region with condensed cloudlets marked in Figure \ref{fig:clumplocs} in simulation MHD-S at $t=52$ Myr. Overlaid in red are density contours at $\rho=0.2 \ifmmode{\>{\rm cm}^{-3}}\else{cm$^{-3}$}\fi$ and magnetic field lines in this plane.}
\label{fig:clumpfield}
\end{figure}
\subsection{Efficiency of accretion through condensation from HVCs}
We have shown that the halo magnetic field substantially decreases the efficiency of condensation of cold gas around an HVC. As discussed in \cite{gronnow18}, the magnetic field is amplified and bent around the clouds. If the cloud travels faster than the Alfv\'{e}n speed in the hot gas, the field becomes completely draped around the cloud and its wake. This is obviously not the case at early times, as the cloud is initially comoving with the corona, but it eventually happens in all cases as the cloud accelerates. The amplification and the alignment of the field with the interface caused by the draping suppress the Kelvin-Helmholtz instability. This inhibits the mixing of cold cloud and hot coronal gas which drives the condensation.
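The onset of draping can be estimated from the coronal Alfv\'{e}n speed. A short Python sketch (Gaussian cgs units; the field strength, density, and mean molecular weight are assumed illustrative coronal values, not the ones used in our setup):

```python
import math

# Rough check of when draping sets in: the cloud must move
# super-Alfvenically with respect to the corona. Gaussian cgs units;
# B, n, and mu are assumed illustrative values, not simulation inputs.
M_H = 1.67e-24   # hydrogen mass in g
MU = 0.6         # assumed mean molecular weight of the ionised corona

def alfven_speed(B, n):
    """v_A = B / sqrt(4 pi rho) in cm/s."""
    rho = MU * n * M_H
    return B / math.sqrt(4.0 * math.pi * rho)

v_A = alfven_speed(1e-6, 1e-3)   # B = 1 microgauss, n = 1e-3 cm^-3
print(v_A / 1e5)                 # ~90 km/s: draping becomes complete only
                                 # once the falling cloud exceeds this speed
```

For these assumed values the Alfv\'{e}n speed is of order $10^2$ km s$^{-1}$, consistent with the cloud only becoming fully draped after a period of acceleration.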
In addition, we expect the significant deceleration of some of the condensed cloudlets in a moderately strong field, described previously, to further restrict the accretion of condensed gas. Due to their slow velocities, the time it takes them to reach the disc becomes hundreds of Myr. Rather than merely being delayed, these cloudlets might not be accreted at all: because of their small sizes and long infall times they may be dispersed by thermal conduction before they can reach the disc. However, once the clumps have decelerated to very low velocities they no longer effectively sweep up surrounding field lines. The ambient magnetic field by itself might not provide sufficient tension, in which case the time needed for these clumps to reach the disc will depend on how long the magnetic field can remain locally draped and amplified before decaying through magnetic diffusion processes. The strong local magnetic field will also strongly suppress thermal conduction. However, the draped field will be locally ordered, even if the surrounding field has a significant random component, and thermal conduction can still be non-negligible in that case \citep{kooij21}. Future simulations that include anisotropic thermal conduction as well as non-ideal MHD effects are needed to examine this further.
In spite of these effects, the amount of gas that can be accreted through the condensation triggered by HVCs remains substantial. As can be seen from Figure \ref{fig:coldgas}, the condensation increases the mass of cold gas by almost 20 percent by 64 Myr for simulation MHD-S, and at least half of this is intermediate velocity gas. While the weaker magnetic field of simulation MHD-W is much more effective at suppressing condensation, by not creating the RT fingers seen in MHD-S, the cold gas mass is still increasing at late times, and the condensed cloudlets are decelerated much less and can hence more easily reach the disc. This agrees with the simpler HVC simulations without magnetic fields or a gravitational potential of \cite{gritton17}. However, we expect that we are overestimating the amount of condensed gas during the simulations because we do not include heating from ultraviolet (UV) radiation and cosmic rays and ignore thermal conduction, which will reduce the efficiency of condensation \citep[][see Section \ref{sec:limitations}]{kooij21}. Nonetheless, this suggests that the gas accretion rate from HVCs might be somewhat underestimated in the literature \citep[e.g.][]{putman12}, although by an amount still within the substantial uncertainties of such studies. A larger suite of simulations that explore the parameter space of initial conditions (e.g. clouds with different initial sizes and $z$ coordinates or different trajectories) would be needed to quantify the overall importance of additional accretion through condensation.
\subsection{Observational detection of magnetically decelerated gas}
Clearly, the low-velocity clumps are too small to be detected individually. In theory, the overall bimodal velocity distribution with the low-velocity bump of largely condensed gas seen in simulation MHD-S (see Figure \ref{fig:coldgasvel}) should be discernible for HVCs traveling through parts of the corona where the halo magnetic field is moderately strong. For this, the observed infalling clouds would have to be at high galactic latitude, where the line of sight is largely aligned with the cloud's velocity \citep[e.g.][]{bish19}. However, in practice gas at these low velocities would be largely indistinguishable from the foreground gas in the interstellar medium (ISM) in the disc. In some cases, the velocity distribution of the low-velocity bump might be partially separated from that of the ISM because it is centered at a slightly negative velocity rather than at zero. However, it would generally not be possible to establish whether such a peak represents a tail of trailing condensed material or an unrelated component somewhere along the line of sight. A precise measurement of the chemical composition could assist in this, although, being mostly condensed gas, we would expect the composition to be close to that of the corona.
\subsection{Resolution}
\label{sec:resolution}
From previous studies of both wind tunnel and hydrostatic simulations of clouds we expect our standard resolution of 50 cells per cloud radius (which we denote as $\mathcal{R}=50$) to be sufficient to capture the overall qualitative evolution. However, we also expect it to be insufficient for convergence of some quantities, especially those that depend on the cooling, such as the total cold gas mass. For adiabatic MHD simulations, resolutions in the range 32--64 cells per cloud radius have been found to be needed \citep{dursi08,banda-barragan16}. For radiative cooling the physical cell size matters. Without thermal conduction, the cooling will tend to break up cloudlets all the way down to the size where the sound crossing time becomes comparable to the cooling time scale \citep[referred to as `shattering', see e.g.][]{mccourt18,gronke20b,sparre20}. This scale is typically unfeasibly small to resolve for simulations of Galactic gas clouds and is far from resolved at the $\Delta x=2$ pc standard resolution (i.e. $\mathcal{R}=50$) in our simulations. Hence, while the main cloud remnant should be relatively well resolved, the broad spectrum of stripped and condensed clumps will not be.
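An order-of-magnitude estimate of the shattering scale $\ell \sim c_s t_{\rm cool}$ makes the resolution gap concrete. A Python sketch (cgs units; the density and cooling rate are rough assumed values for $\sim 10^4$ K stripped gas, not taken from the simulations):

```python
import math

# Order-of-magnitude comparison of the 'shattering' scale c_s * t_cool
# with the standard cell size. cgs units; n and Lambda below are rough
# assumed values for ~1e4 K stripped gas, not simulation values.
K_B = 1.38e-16   # Boltzmann constant [erg/K]
M_H = 1.67e-24   # hydrogen mass [g]
PC = 3.086e18    # parsec [cm]

def sound_speed(T, mu=0.6):
    """Adiabatic sound speed sqrt(gamma k T / mu m_H)."""
    return math.sqrt(5.0 / 3.0 * K_B * T / (mu * M_H))

def cooling_time(n, T, Lam):
    """t_cool = (3/2) n k T / (n^2 Lambda)."""
    return 1.5 * K_B * T / (n * Lam)

T, n, Lam = 1e4, 1.0, 1e-22   # assumed: K, cm^-3, erg cm^3 s^-1
ell = sound_speed(T) * cooling_time(n, T, Lam)
print(ell / PC)   # ~0.01 pc, far below the 2 pc standard cell size
```

With these assumed values the shattering scale comes out around $10^{-2}$ pc, two orders of magnitude below the $\Delta x = 2$ pc standard cell size, consistent with the stripped and condensed clumps being under-resolved.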
In Figure \ref{fig:convergence} we show the cold gas mass evolution for simulation MHD-S at three different resolutions, the standard resolution ($\mathcal{R}=50$) and half and twice the resolution ($\mathcal{R}=25$ (MHD-Sl) and $\mathcal{R}=100$ (MHD-Sh), respectively). The lowest resolution run clearly underestimates the mass of cold gas, except at early times where the higher numerical diffusion at the cloud-corona interface in this simulation leads to too much cooling. Due to its high computational cost, we could only run the highest resolution simulation to $t=42$ Myr. It closely follows the standard resolution case, deviating only slightly towards lower masses at $t\gtrsim 25$ Myr. While MHD-Sh ends too early for much of the `tree-like' structure to have formed, clumps have started condensing in the wake and MHD-Sl is already significantly diverging from MHD-S and MHD-Sh by this time. Running the clump finding algorithm on the outputs of simulation MHD-Sh at full resolution is not feasible. However, downsampling it by one AMR level, bringing it to the same resolution as MHD-S, we find that about a factor of two more clumps with generally lower masses are detected. If we lower the $10 M_\odot$ minimum mass to $1 M_\odot$ this difference disappears. Crucially, the mass fraction of cold gas that is in clumps with $|v_z| < 60\,\mathrm{km\,s^{-1}}$ (i.e. LVCs and LIVCs) is quite close at all times, being slightly higher for MHD-Sh. On the other hand, as expected, simulation MHD-Sl, which underestimates the overall mass of cold gas, correspondingly underestimates the number of clumps and greatly underestimates the mass fraction of LVCs and LIVCs.
In summary, the mass of cold gas as well as the mass fraction in LVCs and LIVCs appear to be quite well converged at our standard resolution of $\mathcal{R}=50$, although we cannot assess this at late times. In any case, a resolution not much lower than $\mathcal{R}=50$ is needed as our $\mathcal{R}=25$ simulation clearly disagrees with the results at higher resolutions. Based on this, we conclude that the condensation, and in particular the population of low-velocity condensed clumps, appears to be real rather than an artefact caused by insufficient resolution.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{mcold_res.pdf}
\caption{Cold gas mass evolution for the simulation with a strong magnetic field at low resolution (light grey), standard resolution (dark grey), and high resolution (black).}
\label{fig:convergence}
\end{figure}
\subsection{Limitations and missing physics}
\label{sec:limitations}
The properties of individual clumps and the total number of clumps are sensitive to the choice of the clump finding algorithm and the parameters passed to it. Based on Figure \ref{fig:clumplocs}, Fellwalker \citep{berry15} appears to have identified clumps to good accuracy without counting single structures as multiple clumps or missing significant structures. To assess the robustness of the results of our clump based analysis in more detail, we have also run the CLUMPFIND algorithm \citep{williams94} as implemented in CUPID. In this case we likewise find a comparable population of slow, low mass clumps in simulation MHD-S that is absent in simulation HD.
We neglect heating and ionisation from UV photons in our simulations. This effect is highly uncertain in the part of the corona that we simulate. Both the metagalactic UV background and stellar UV radiation are likely significant and their relative importance changes with the height above the disc. We are, however, effectively assuming that heating dominates at low temperatures due to our $10^4$ K cooling floor. In general, UV heating will reduce the peak of the cooling curve around $T\sim 10^5$ K and lead to less condensation. On the other hand, if heating is included instead of a cooling floor, dense clumps can cool to temperatures below $10^4$ K and be less easily dispersed.
In our simulations the HVC travels perpendicular to the magnetic field lines. In reality the HVC would also experience a gravitational pull along $R$, and the ordered component of the field would not be purely perpendicular to $z$ \citep[e.g. there are indications of an `X-shaped' component of the halo magnetic field,][]{jansson12a}. The interaction of clouds with uniform magnetic fields of different orientations has been studied in wind tunnel simulations. These find that, while the effect of a field parallel to the wind is qualitatively different from that of a transverse field, in the general case of a field at some intermediate angle the evolution qualitatively follows the transverse case, even for quite shallow angles \citep[see e.g.][]{banda-barragan16,gronnow18,kooij21}. An additional complication in the Galactic magnetic field structure that we ignore is its random components \citep{jansson12b,beck16}. Turbulence adds isotropic and anisotropic random components to the halo magnetic field, but this is very poorly constrained. In general, we expect any random field to become locally ordered around the cloud as the transverse components are draped, as seen in simulations that include such fields \citep{asai07,sparre20}. Hence, including the random halo field should not fundamentally change our results.
We have ignored thermal conduction in our simulations, which may be important for the evolution of the cold gas. Magnetic fields strongly suppress this effect perpendicular to the field lines; however, because the field becomes locally ordered, thermal conduction will generally not be completely suppressed along all directions. This anisotropic suppression has recently been found to be roughly equivalent to an overall isotropic decrease in thermal conduction efficiency by a factor of order 10 for the cold gas evolution in galactic fountain clouds \citep{kooij21}. However, this factor might be different for the environment of our simulations. Thermal conduction tends to lower stripping and disperse smaller cloudlets \citep{armillotta16,armillotta17}. Thus, as previously mentioned, the slow low-mass clumps in the magnetic simulations would probably not survive to be accreted.
Kinematic models constrained by O\textsc{vii} absorption in the Milky Way's corona suggest that it is rotating at a significant velocity of $v_{\phi}=183\pm 41\,\mathrm{km\,s^{-1}}$ \citep{hodges-kluck16}, similar to theoretical expectations \citep{pezzulli16,pezzulli17}. The inclusion of rotation would change the density of the hydrostatic corona in our simulations. For the non-magnetised case the hydrostatic equilibrium solutions for such rotating coronae can be described with a relatively simple model \citep{sormani18}. However, this is not possible in general when a non-uniform magnetic field is included. In any case, the density difference between models with and without rotation is generally within the significant uncertainties in the density normalisation $n_{h,0}$ of non-rotating coronae. Additionally, the rotation velocity is expected to change with $z$ \citep{sormani18} which would lead to additional shear between the corona and the cloud.
We neglect the self-gravity of the gas. \cite{li20} explored the effect of self-gravity in clouds with flat density profiles similar to ours and found that, for clouds initially below the Jeans mass, self-gravity continues to be unimportant throughout the cloud's entire evolution. The mass of our cloud is an order of magnitude below its Jeans mass. Recently, however, \cite{sander19,sander21} showed that self-gravity can have a significant effect on clouds below the Jeans mass if they have a cuspy density profile. In that case, the self-gravity inhibits stripping, such that the cloud loses gas more slowly, but also decreases the efficiency of condensation, lowering the overall amount of cold gas.
In reality observed HVCs are not described by a single flat or cuspy density profile but have structure on a wide array of scales. Cold gas with an initially fractal structure has been simulated in the context of the Magellanic Stream by \cite{bland-hawthorn07,tepper-garcia15}. In general, internal velocities and magnetic field configurations in the cloud will also be non-uniform. For such non-uniform clouds the magnetic field can be more effectively folded around the many substructures and have a stronger effect on the evolution \citep{banda-barragan18}. The evolution of initially non-uniform HVCs is outside the scope of this paper and we defer this work to a forthcoming paper (Jung et al., in prep).
\section{Summary}
\label{sec:summary}
We follow the evolution of a cloud formed out of the Galactic corona as it falls towards the disc in the Galactic gravitational potential, eventually becoming an HVC, with and without including the Galactic halo magnetic field. Summarising our main results, we find that:
\begin{enumerate}
\item Although we are unable to follow the cloud all the way to the disc-corona interface when we include the magnetic field, the original cloud appears to survive. However, when a strong magnetic field is included, the cloud breaks into multiple fragments due to the enhanced Rayleigh-Taylor instability in the direction perpendicular to the field and to gravity.
\item We find that the total mass of infalling cold gas associated with the original cloud increases with time due to condensation of mixed gas despite the low metallicity of the cloud. This is in agreement with \cite{gritton17} and analogous to the trend seen in simulations of higher metallicity galactic fountain clouds \citep{marinacci10,armillotta16,gronnow18}. As found for the clouds in \cite{gronnow18}, the role of the magnetic field is to partially suppress the efficiency of the condensation by limiting stripping and mixing.
\item For our strong magnetic field case, magnetic tension leads to severe deceleration of mostly condensed cloudlets, which leads to a bimodal velocity distribution of cold gas with a low-velocity bump at $|v_z|<60\,\mathrm{km\,s^{-1}}$. These cloudlets are generally small, so due to their slow infall velocities they might disperse before being accreted onto the disc. The remnants of the original cloud, which represent the majority of the cold gas, are, however, not significantly decelerated.
\end{enumerate}
Overall, HVCs formed through thermal instability in the corona appear to be able to reach the disc and feed star formation. Additionally, condensation along their trajectories may moderately add to this accretion. However, ignoring the magnetic field substantially overestimates the amount of gas that may be accreted in this way.
\section*{Acknowledgements}
We thank the referee for a constructive report that improved the quality of the paper.
AG and FF acknowledge support from the Netherlands Research School for Astronomy (Nederlandse Onderzoekschool voor Astronomie, NOVA), Network 1, Project 10.1.5.7.
AG and TTG acknowledge financial support from the Australian Research Council (ARC) through an Australian Laureate Fellowship awarded to JBH. We acknowledge the facilities, and the scientific and technical assistance of the Sydney Informatics Hub at the University of Sydney and, in particular, access to the high-performance computing facility Artemis and additional resources on the National Computational Infrastructure (NCI) through the University of Sydney's Grand Challenge Program `Astrophysics Grand Challenge: From Large to Small' (CIs G. F. Lewis and J. Bland-Hawthorn). We also acknowledge additional access to NCI facilities through the Astronomy Supercomputer Time Allocation Committee (ASTAC) scheme managed by Astronomy Australia Limited and supported by the Australian Government.
AG and FF would like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster.
\section*{Data availability}
The code that we used to generate our simulations is available at \url{https://github.com/agronnow/hvc_gravity}. Simulation output data will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section*{Introduction}
Humans have been bestowed with a gift from the heavens: the gift of starlight. Imagine that we were on a planet, like Venus, with an opaque atmosphere. Or like the people of Lagash in `Nightfall', a 1941 science-fiction novelette by Isaac Asimov: their planet is lit by six suns, and only once every 2,049 years does an eclipse of all the suns bring a night of darkness, revealing stars that madden the people...
We live in a unique world spaced by day and night, reminders of light and darkness, ...each with its own beauty.
\keywords{molecular clouds, Jeans criterion, protostars, protoplanetary discs, star clusters}
The most enchanting component of our skies are stars. To our ancestors, the most obvious explanation was that they were lanterns, or fireplaces in the skies. Then they noticed patterns, that they moved, in concentric circles, about a fixed point in approximately 24 hours. And hence followed the notion of a shell of stars rotating around the earth, with the fixed points as the north and south poles.
The stars of a galaxy distribute themselves into broadly three components, viz. the disc, halo and bulge (Fig.~\ref{gal}). The halo is made up of an older population of stars that constitute globular clusters. Globular clusters are dense, low-metallicity, gravitationally bound aggregates of 50,000 to 100,000 stars, with orbits that are randomly distributed, which leads to their spherical shape. Their stars are redder and older, $\approx$ 10 Gyr\footnote {Common Units: 1 Myr = $10^6$ yr, 1 Gyr = $10^9$ yr, 1 parsec (pc) = $3 \times 10^{13}$ km, 1 light year (ly) = $10^{13}$ km, 1 $M_{\odot}$ = Mass of the Sun = 2 $\times 10^{30}$ kg}, and we do not see any signs of star formation (SF) taking place there. The disc component is made up of spiral arms where young stars are forming, even now, as it is gas rich, and this is where we find open star clusters, which are looser aggregates of stars with typical lifetimes of a few 100 Myr$^1$. The nuclear bulge contains the highest density of stars in the galaxy.
\rightHighlight{The Milky Way is made up of three components: disc, halo, bulge. Active SF is taking place in the disc. Star clusters are found in a continuous range of densities of 0.01-100 stars pc$^{-3}$, with the older globular clusters in the halo and the younger associations and open star clusters in the gas rich discs$^1$.}
Although some hot young stars may be found in the nucleus, the primary population of stars there is similar to the old stars found in the halo. The core of the galaxy is highly obscured by dust and gas at visible wavelengths and can be observed at other wavelengths. Our galaxy contains a very massive black hole at its center with a mass of $\approx 4.5\times 10^6$ M$_\odot^1$, which drives many of these processes near the core.
\begin{figure}[!t]
\caption{Structure of the Milky Way. Image credit: http://www.profjohn.com/courses/ast1004/milkyway/milkyway1.htm}
\label{gal}
\vskip -12pt
\centering
\includegraphics[width=10cm, height=5.5cm]{gal_globs.jpg}
\end{figure}
Star clusters are nature's samples of stars formed from the same parent cloud and therefore of the same age, chemical composition and at the same distance, differing only in mass. The Russell-Vogt theorem states that mass and chemical composition are the two important parameters that decide a star's fate. Hence, star clusters are widely used as ideal samples to study stellar evolution, as all other parameters are fixed and a star's mass defines its evolution. In the present times, they are also very useful in understanding star and planet formation as these are very closely linked processes, planet formation being a byproduct of SF.
\rightHighlight{Open clusters, associations, and moving groups are probably just different realisations of the SF process, differentiated due to the way in which we observe them, their environments and other factors.}
There are also associations, which consist of recently formed stars, not bound gravitationally, at large separations of $\approx$ 100 pc, expanding away from some common center, which presumably marks their birthplace. The motion of these stars can be traced back in time to support this conjecture. They are categorized as OB, T and R associations, based on the properties of their stars (Fig. \ref{clus}). If the remnants of a stellar association drift through the Milky Way as a coherent set of stars, they are called a moving group or kinematic group.
\begin{figure}[!t]
\caption{Typical density of associations, open clusters and globular clusters.(Image :
Moraux, Estelle; Lebreton, Y; Charbonnel, C, EAS Publications Series, Volume 80-81, Stellar Clusters: Benchmarks of Stellar Physics and Galactic Evolution)}
\label{clus}
\vskip -12pt
\centering
\includegraphics[width=9.5cm, height=3cm]{clus.pdf}
\end{figure}
\section{Voids in the Milky Way: Molecular Clouds}
There are regions in the sky that appear as dark patches (Fig. \ref{b68}\footnote{B = 445 nm, V = 551 nm, I = 806 nm}) due to extinction, which is absorption and scattering caused by intervening matter. Extinction is inversely proportional to wavelength and hence observations at longer wavelengths are used.
The star count method was used to estimate the missing number of stars in a region by comparing it to the number of stars in a neighbouring region. These regions are identified as Giant Molecular Clouds (GMCs) and have sizes of 20--100 pc, masses ranging from $10^4-10^6$ M$_{\odot}$, $n \approx$ 50--100 cm$^{-3}$ and $T \approx 10$ K. They are confined to the Galactic plane, coinciding with the spiral arms, and concentrated in a ring of radius 3.5--7.5 kiloparsec (kpc).
\begin{figure}[!t]
\caption{BVI image of molecular cloud Barnard~68, at a distance of 500~ly$^1$ and 0.5~ly in diameter.(Image Credit: Wikipedia)}
\label{b68}
\vskip -12pt
\centering
\includegraphics[width=5.5cm, height=5.5cm]{b68.jpg}
\end{figure}
\rightHighlight{Molecular clouds are the cold, dense regions where SF takes place. They are embedded in gas and dust and hence have to be observed at longer wavelengths, from near infrared to radio. Young stars also emit very strongly in X-rays. Hence multiwavelength observations are required to get a complete census of young stars in GMCs.}
The mass of GMCs is mainly contained in molecular hydrogen and helium atoms. About one percent is made up of dust, typically silicates and/or graphite. Other molecules and their isotopes, like CO, NH$_3$, CN and H$_2$O, have been detected, with CO being the most abundant. Molecular hydrogen does not possess a permanent dipole moment. Hence, at the typical temperatures of molecular clouds it does not emit any radiation.
Observations use different tracers such as dust or other abundant molecules and indirectly estimate the amount of molecular hydrogen. They can be observed in the radio by observations of molecular lines, or using dust extinction.
Understanding the formation and evolution of GMCs is a daunting problem. One immediate issue is the vast range in scales between galaxies and protostellar discs, from $\sim$10 kpc to $\sim$10$^{-3}$ pc. Another difficulty in the study of molecular clouds is the complex physics involved: gravity, magnetic fields, thermodynamics, turbulence and stellar feedback. The interstellar medium itself is a multiphase medium of atomic, molecular and ionized hydrogen, with a range of temperatures from 10 K to 10$^8$ K and large differences in density.
\section{Jeans instability}
Sir James Jeans proposed a very simple theory for the formation of stars, based on the Kant-Laplace nebular hypothesis. An interstellar cloud is in hydrostatic equilibrium, i.e., there is a balance between the gravitational force and the gas pressure.
\leftHighlight{James Jeans proposed gravitational collapse as the mechanism for formation of stars. The conditions for this to take place was that the GMC is cold and dense. As the core gains mass, material accretes on to it, forming a disc. When the temperature of the core rises to 10$^6$ K, nuclear fusion begins and a star is born.}
He proposed that if a cloud was cold and dense enough, then the gravitational force would dominate, leading to the gravitational collapse of the cloud (Fig \ref{sf}). As the cloud collapses it breaks into fragments in a hierarchical fashion, until the fragments are close to stellar mass. Jeans Mass ($M_J$) is the characteristic mass of a cloud when this condition gets satisfied and is given by:
$M_J = 3\times 10^{4}\sqrt{T^{3}/n}\ M_{\odot}$, where $T$ is the temperature and $n$ is the density.
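For concreteness, the Jeans mass can also be evaluated from the standard textbook expression $M_J = (5kT/G\mu m_H)^{3/2}(3/4\pi\rho)^{1/2}$ for typical GMC conditions. A sketch in Python (SI units; the mean molecular weight $\mu = 2.33$ for molecular gas is an assumed standard value):

```python
import math

# Jeans mass from the standard expression
#   M_J = (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)
# evaluated for typical GMC conditions (T ~ 10 K, n ~ 100 cm^-3).
# SI units; mu = 2.33 for molecular gas is an assumed standard value.
K_B = 1.381e-23      # Boltzmann constant [J/K]
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_H = 1.673e-27      # hydrogen mass [kg]
M_SUN = 1.989e30     # solar mass [kg]
MU = 2.33            # assumed mean molecular weight of molecular gas

def jeans_mass(T, n_per_m3):
    rho = MU * n_per_m3 * M_H
    return (5 * K_B * T / (G * MU * M_H))**1.5 * math.sqrt(3 / (4 * math.pi * rho))

M_J = jeans_mass(10.0, 1e8) / M_SUN   # n = 100 cm^-3 = 1e8 m^-3
print(M_J)   # a few tens of solar masses
```

A cold, dense GMC clump thus has a Jeans mass of only a few tens of solar masses, which is why hierarchical fragmentation readily proceeds down towards stellar masses.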
The centre of the clump collapses to a dense, gravitationally stable core known as a protostar, which heats up as it continues to contract. The protostar grows by accreting more material from the surrounding molecular cloud, and thus its core gets denser and hotter.
\begin{figure}[!t]
\caption{Star Formation Image credit: http://science.howstuffworks.com/how-are-stars-formed.htm}
\label{sf}
\vskip -12pt
\centering
\includegraphics[width=7.5cm, height=7.5cm]{sf.jpg}
\end{figure}
The protostar attracts material from the surrounding molecular cloud, which accretes onto it. Due to the conservation of angular momentum, this material spirals in towards the star and forms a disc of material that orbits the star, slowly accreting onto it in bright bursts that illuminate the surrounding cloud. With each burst of accretion the star becomes hotter and more massive, till it is hot enough for nuclear fusion to take place. At first the star can only burn deuterium, but as it gets hotter it burns hydrogen just like our own Sun. The star now begins to shine quite brightly, and the radiation from the star prevents further material from accreting onto it and may even begin to disperse the remaining material in the disc that still surrounds the star.
\rightHighlight{YSOs are classified based on their spectral energy distribution. Class 0/I are the youngest, which evolve to Class II and then to the discless Class III sources. The spatial distribution of YSOs can be used to study the progress of SF and the influence of its environment. ALMA images of HL Tau show very clear agreement with this picture of SF.}
Once the star has started nuclear fusion of hydrogen into helium, we say that it is born (Shu et al. 1987). Hydrogen fusion is the natural source of energy for most stars, and the star continues fusion of lighter to heavier nuclei till it reaches the atomic number of iron. For nuclei heavier than iron, repulsive forces are stronger and fusion becomes an endothermic process, requiring energy. Elements heavier than iron are only produced in highly energetic events like supernova explosions.
Young Stellar Objects (YSOs) are usually classified using criteria based on the slope of their spectral energy distribution, introduced by Lada (1987). He proposed three classes (I, II and III), based on intervals of the spectral index $ \alpha$ given by $\alpha ={\frac {d\log(\lambda F_{\lambda })}{d\log(\lambda )}} $, where $\lambda$ is the wavelength and $F_{\lambda }$ is the flux density.
$\alpha$ is calculated in the wavelength interval of 2.2--20 $\mu$m. Later, Class 0 objects, which show strong submillimeter emission but are very faint at $\lambda <10\,\mu$m, were added, followed by the class of `flat spectrum' sources.
This classification schema follows an evolutionary sequence, where the most deeply embedded Class 0 sources evolve towards Class I stage, dissipating their circumstellar envelopes. Eventually they become optically visible on the stellar birthline as pre-main-sequence stars.
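The slope-based classification above can be sketched numerically. The class boundaries used below follow one commonly adopted convention (that of Greene et al. 1994) and are an assumption for illustration, not values taken from this article:

```python
import math

def spectral_index(lam1_um, f1, lam2_um, f2):
    """Slope of log(lambda * F_lambda) versus log(lambda) between two bands."""
    y1 = math.log10(lam1_um * f1)
    y2 = math.log10(lam2_um * f2)
    return (y2 - y1) / (math.log10(lam2_um) - math.log10(lam1_um))

def yso_class(alpha):
    """One common convention (assumed here) for 2.2-20 micron slopes."""
    if alpha >= 0.3:
        return "Class I"
    if alpha >= -0.3:
        return "Flat spectrum"
    if alpha >= -1.6:
        return "Class II"
    return "Class III"

# A source whose lambda*F_lambda rises steeply toward 20 micron (made-up fluxes):
alpha = spectral_index(2.2, 1.0, 20.0, 50.0)
print(round(alpha, 2), yso_class(alpha))  # 2.77 Class I
```

A rising infrared slope (large positive $\alpha$) thus signals a deeply embedded source, while a steeply falling slope indicates a discless Class III object.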
The most important result of this picture of SF is the co-formation of stars and planets from protoplanetary discs, and it has recently been validated by observations with the Atacama Large Millimeter/submillimeter Array (ALMA).
Figure \ref{hltau} reveals in astonishing detail the planet-forming disc surrounding HL Tau, a Sun-like star located approximately 450 ly from Earth in the constellation Taurus.
\begin{figure}[!t]
\caption{ALMA image of the young star HL Tau and its protoplanetary disc. Credit: ALMA (NRAO/ESO/NAOJ); C. Brogan, B. Saxton (NRAO/AUI/NSF)}
\label{hltau}
\vskip -12pt
\centering
\includegraphics[width=4.5cm, height=3.5cm]{hltau.jpg}
\end{figure}
\section{Filaments}
\rightHighlight{Star clusters are found where filaments overlap. The obvious question is whether the filament collision occurs before, after, or even during the SF process. Observational evidence supports all of the above scenarios, which could be possible because of the wide range of environments present, leading to realizations of all possible cases.}
Molecular clouds have considerable filamentary structure, formed by varying densities of gas and dust that cause variable extinction, seen very clearly in Herschel images (Fig. \ref{fila}).
\begin{figure}[!t]
\caption{A far-infrared image of the Taurus molecular cloud, showing the filamentary structure of the gas
(Credit: Herschel Space Observatory)}
\label{fila}
\vskip -12pt
\centering
\includegraphics[width=5.5cm, height=3.5cm]{fila.jpg}
\end{figure}
This structure is thought to arise from a combination of shock compression (due to collisions between flows of material) and self-gravity (filaments can form gravitationally-stable structures on their own). These filaments can be seen on all spatial scales, large and small (Andre et al. 2014).
Many of these filaments are dense, containing many times the mass of our Sun in molecular gas. This high density implies that they can be gravitationally unstable, which can lead them to collapse and potentially form stars. Simulations of filaments suggest that a single filament can actually lead to the formation of multiple stars.
\begin{figure}[!t]
\caption{Star clusters forming in the Rosette Molecular Cloud
(Credit: Schneider et al. 2012)}
\label{sch}
\vskip -12pt
\centering
\includegraphics[width=7.5cm, height=8.5cm]{sch.jpg}
\end{figure}
This picture, in which SF occurs in dense gas, is seen in all star-forming regions: the youngest stars appear in the densest parts of the filaments, both on small scales (as in NGC 1333) and on the much larger scales of giant molecular clouds such as the Orion A cloud.
Figure \ref{sch} shows an image of filaments and star clusters in a star forming region known as the Rosette Molecular Cloud. The background image shows the distribution of dense gas in the cloud, with the density of the gas ranging from low-density (black) to high-density (green and red). Also marked (in white) are the positions of the filaments that make up the molecular cloud, and on top of that (turquoise stars) the positions of known star clusters, which lie typically at points where filaments overlap. In somewhat older regions we begin to see clusters of stars where there is no gas and the filamentary structure has dissolved; the gas morphology suggests it has been blown away by the young stars.
\section{Embedded Clusters}
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=6cm]{ngc281.jpg}
\caption{This composite image of NGC 281 contains X-ray data from Chandra, in purple, with infrared observations from Spitzer, in red, green, and blue. (X-ray: NASA/CXC/CfA/S.Wolk; IR: NASA/JPL/CfA/S.Wolk)}
\label{ngc281}
\end{figure}
Once stars form, embedded clusters are the earliest units of SF: groups of young stars embedded in gas and dust, invisible in the optical. They have sizes of 0.3-1 pc, masses of 20-1000 M$_{\odot}$, and mean stellar densities of $1-10^3$ M$_{\odot}$ pc$^{-3}$. The SF Efficiency (SFE), the ratio of the mass of stars to the total mass of the original cloud, is 10-30\%. It is the unused gas that provides the gravitational glue that binds the cluster. Most clusters lose their gas within the first 5 Myr, which leads to the disruption of clusters, called `infant mortality'. Barely 4\% of clusters survive beyond 100 Myr.
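The SFE quoted above is a simple mass ratio; with illustrative, made-up numbers (not measurements from this article):

```python
def star_formation_efficiency(m_stars, m_cloud):
    """SFE = mass locked up in stars / total mass of the original cloud."""
    return m_stars / m_cloud

# Hypothetical embedded cluster: 100 solar masses of stars born
# from a 500-solar-mass clump.
sfe = star_formation_efficiency(100.0, 500.0)
print(f"SFE = {sfe:.0%}")  # SFE = 20%, within the 10-30% range quoted above
```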
It is very important to study these objects in the early phase of their formation to understand their spatial and temporal distributions, their stellar and planetary companions, their environments, etc.
Embedded clusters can be identified by making systematic searches of molecular clouds in infrared bands, such as the K band (2.2 $\mu$m). Often most members are obscured, and the stellar densities may not be significantly higher than the background. Other methods are surveys of signposts of SF such as outflows, luminous IRAS sources, Herbig AeBe stars, etc. All-sky surveys such as 2MASS (Skrutskie et al. 2006) and DENIS also allow statistical subtraction of stars in cluster and field areas to map overdensities.
\leftHighlight{Embedded clusters are the earliest signs of SF. The crucial step is the detection of the YSOs because of the high levels of extinction.}
For young embedded clusters with ages $<$ 3 Myr, at least half of the members will have circumstellar discs (Haisch et al. 2001). Observations with the Spitzer Space Telescope at wavelengths 3--70 $\mu$m have been very useful for studying discs around stars (Winston et al. 2007). For discless stars, a very effective method is observation in X-rays, as young stars emit X-rays at levels $10^2-10^4$ times that of normal stars, particularly during the first 10 Myr of their lives (Winston et al. 2007) (see Fig.~\ref{ngc281}).
\section{Some important aspects of Star Formation}
\subsection{Initial Mass Function (IMF)}
The distribution of mass amongst the stars born from the same parent cloud is called the IMF. The IMF is often stated in terms of a series of power laws, where $N(M)$, the number of stars of mass $M$ within a specified volume of space, is proportional to $M^{- \alpha}$, with $\alpha$ a dimensionless exponent. The form of the IMF for stars with $ M > 1 M_{\odot}$ was discovered by Salpeter (1955), who found $\alpha = 2.35$.
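As a numerical illustration of the Salpeter law, one can compare the relative numbers of stars in two mass intervals by integrating $N(M) \propto M^{-\alpha}$ (the normalization is arbitrary here, so only ratios are meaningful):

```python
def imf_count(m_lo, m_hi, alpha=2.35):
    """Relative number of stars with mass in [m_lo, m_hi] for N(M) ~ M^-alpha."""
    p = 1.0 - alpha  # exponent after integrating M^-alpha
    return (m_hi**p - m_lo**p) / p

# Stars of 1-2 solar masses are far more numerous than 10-20 solar-mass stars:
ratio = imf_count(1, 2) / imf_count(10, 20)
print(round(ratio, 1))  # 22.4
```

The steep negative slope is why low-mass stars dominate by number, even though individual massive stars dominate the luminosity of a young cluster.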
\subsection{Multiplicity}
It was earlier believed that most stars are born in multiples (Larson 1972). It has also been found that there is a linear relation between the mass of a star and its multiplicity. Figure~\ref{mul} shows the relation of the frequency of multiple systems and the companion frequency (MF and CF, respectively) to the mass of the star. For M-type stars, 66--67\% of stars are single. Recent studies of the IMF show that the IMF breaks from a single power law near 0.5 M$_{\odot}$ and has a broad peak between 0.1--0.5 M$_{\odot}$ (Muench et al. 2002). On either side of this peak the IMF falls rapidly. The peak of the IMF lies around M-type stars, and it has been estimated that 73--78\% of all stars are M type. Therefore, two-thirds of (main sequence) stars currently residing in the galactic disc are single stars (Lada 2006). However, it is found that there is a decline in multiplicity as we go from Class 0 to Class I, II and III stars, implying that not all single stars may have been born single.
The questions that follow are: Is this how stars are formed? Individually? Or do stars form in binaries and multiples which later disrupt? Then, how do we explain the dependence of stellar multiplicity with mass? Or does increased cloud turbulence in massive dense cores lead to efficient core fragmentation and higher incidence of binary stars?
\begin{figure}[h]
\centering
\includegraphics[width=8cm,height=6cm]{duchene.pdf}
\caption{Dependency of CF (red squares) and MF (blue triangles) on primary mass for MS stars (Duchene and Kraus 2013).}
\label{mul}
\end{figure}
\subsection{Mass Segregation}
Mass segregation refers to the spatial distribution of stars according to their masses: high-mass stars are observed to be concentrated near the centre, with low-mass ones farther out. This can arise as a result of dynamical interactions between stars in the clusters or could be primordial. The variation of the MF in different regions of clusters has been studied by Hasan and Hasan (2011, and references therein).
\section{Conclusion}
\rightHighlight{Many unsolved problems of this process continue to puzzle astronomers like: How and why are stars clustered when they form? What causes stars to form with different masses? Is there a different process of SF for low and high mass stars? What brings the SF process within a molecular cloud to a halt? How do stars form in diverse environments? How do massive stars influence the formation of low mass stars? How coeval is SF?}
Stars are forming in our galaxy at a rate of 1--4 $M_{\odot}$ yr$^{-1}$. In contrast to elliptical galaxies, star formation is still going on in spiral galaxies because of their reservoirs of molecular gas, the fuel for new stars.
The optically dark molecular clouds are nurseries of stars, where the enigmatic process of SF takes place under the grip of gravity. New observations and further research are required in these areas to answer some long-standing questions about the universe in which we live and to decipher the secrets in the starlight in darkness.
\section*{Acknowledgement}
The author would like to thank the referee for her/his valuable comments that helped improve the content of the article. The author also thanks Prajval Shastri, Rajaram Nityananda and colleagues who came up with the lovely idea of this issue of Resonance - indeed a great way to commemorate the contribution of Women in Science!
\section{Introduction}
An IDE equipped with a formal verification system at its back end can facilitate an alternative style of developing software that involves using feedback from the verifier to locate and correct errors statically, instead of a more classical testing and debugging approach. This paper illustrates how such an approach can work in practice based on our experience in employing it to teach a software engineering course, where students are asked to develop software components that are provably correct with respect to a set of given specifications.
The discussion in this paper is in the context of teaching analytical reasoning to undergraduate CS students. The overall details of our educational approach for teaching mathematical reasoning, including an evaluation of student learning over multiple years in two required courses at Clemson, may be found in \cite{drachova:2013,drachova:2015}. Details of the types of software component development and reasoning assignments given to students are the topic of \cite{cook:2013}. The purpose of the present paper is to explain the iterative approach that students and software engineers, in general, can employ for developing verifiably correct software using the feedback from the Web IDE and its integrated prover.
The IDE discussed in this paper is web-integrated, easy to use, and freely available online. It has been used at multiple institutions over the span of several years for teaching \cite{cook:2014} and research \cite{welch:2014} purposes, and is designed for RESOLVE, an integrated specification and programming language supported by a push-button verifying compiler \cite{sitaraman:2011}. The characteristics that distinguish the RESOLVE language and approach from most others include the following \cite{kulczycki:2004}:
\begin{itemize}
\item Mathematical theories used in specifying programming concepts are extensible and are described in reusable mathematical units. The theories are purely mathematical and do not involve any computational considerations. They are carefully engineered to ease automated proving. Number theory and a theory of strings over arbitrary types (used in this paper) are some examples.
\item Specifications of programming concepts that encapsulate abstract data types are kept strictly separate from implementations to facilitate design-by-contract \cite{meyer:1997} and to allow for multiple ways of realizing the same concept and permit efficiency trade-offs. Examples of such concepts include programming integers with computational bounds, arrays, stacks, queues, and lists.
\item The notion of clean semantics \cite{kulczycki:2004} makes it inessential to introduce and reason about object references explicitly in typical user code.
\end{itemize}
While the literature includes several integrated development environments based on formal techniques related to ours (see section~\ref{related} for a complete description), the one closest in spirit to the IDE discussed in this paper is the online Dafny IDE \cite{leino2014dafny}. For most of the exercises discussed in this paper, Dafny could be used as well. However, the key difference that manifests itself the most for the purposes of this paper is our system's usage of a VC generation system \cite{harton:2011} that underlies the integrated Web IDE. Using the VCs and a supporting prover capable of revealing which VCs fail to prove, it is possible to determine why a proof was unsuccessful from givens in the context. However, unlike the Dafny approach, which is backed by Z3 \cite{leino2010dafny}, the IDE presented here cannot be used to provide counter examples when verification fails. The integrated prover does not use the proof-by-refutation technique, thus requiring a different sort of debugging to take its place. For example, a user contrasts what goal needs to be proved from the givens, tries to understand which givens would be more useful in attempting to prove the goal, and then adjusts the code or assertions as needed.
The reasoning process using the RESOLVE Web IDE is quite similar to what might be employed by one using a Coq-style proof assistant, except that the proofs to be done are mostly `obvious' due to the simple nature of VCs arising from well-designed software \cite{kirschenbaum:2009}. This characteristic has allowed us thus far to forego the need for manual proof assistance for VCs.\footnote{A proof assistant such as Coq or Isabelle \cite{nipkow2002isabelle} is indeed necessary for proving the results in reusable mathematical units employed by the automated prover, but the focus of this paper is on code correctness and VCs, assuming that the necessary theorems have been established previously \cite{kulczycki:icsr2013}.}
To illustrate how the IDE helps produce correct code based on realtime feedback, we begin with a simple example that involves only the use of the \texttt{Integer} datatype. This is followed by two object-based erroneous code examples: one that is recursive, and another that is iterative. These are examples presented to students as part of a software engineering course at Clemson. In all cases, we follow an iterative approach that eventually leads to the correct code or adequate annotations. The discussion concludes with a non-trivial queue copy example code with invariants that students were asked to develop for an assignment using the iterative approach. We note that the examples discussed in this paper are meant to give an idea of the iterative process. Several more complex components are available at the Web IDE; even more can be created by logging in to the site. We conclude the paper with a discussion of the most related work and a summary.
\section{Understanding Design by Contract Using the IDE}
In this and following sections, we provide several illustrative examples, each building in complexity, that demonstrate the iterative process we envision when using the Web IDE to develop provably correct code. All examples discussed have a shared emphasis on the debugging aspect: that is, each requires sufficient knowledge of design by contract to correctly identify and amend a variety of errors in code or annotations based on interactive feedback from the prover in the form of VCs.
The first, relatively simple example presents an operation that arithmetically swaps the values of two \texttt{Integer} objects. Taking advantage of the conceptual simplicity of the code comprising this initial example, we also use this as an opportunity to familiarize readers with RESOLVE style specifications, syntax, and layout of the Web IDE. More details on the design of the RESOLVE language and its IDE may be found elsewhere \cite{cook:2014,sitaraman:2011,sitaraman:1994}.
Upon opening our first example, \texttt{Int\_Swap\_Example\_Fac} (Fig.~\ref{fig:intexchNoVC}), students are presented with code for a single operation, \texttt{Exchange}, that takes two \texttt{Integer} objects, denoted \texttt{I} and \texttt{J}. The essence of the specifications that formally communicate what exactly \texttt{Exchange} does can be found in the \texttt{ensures} clause (the postcondition), where we formally assert that $\texttt{I = \#J} \wedge \texttt{J = \#I}$. This assertion, when stated informally, can simply be read as ``the outgoing value of \texttt{I} is equal to the incoming value of \texttt{J} and the outgoing value of \texttt{J} is equal to the incoming value of \texttt{I}.''\footnote{In RESOLVE ensures clauses, \texttt{\#} denotes the \textit{incoming} value of a formal parameter.} Notice also that there is no return value for the \texttt{Exchange} operation. Instead, we prefix each parameter with mode \texttt{updates} to indicate to clients that each of the \texttt{Integer} values passed will contain a purposeful value as specified by the conclusion of the operation.
\begin{figure}[htb]
\centering
\frame{\includegraphics[width = \linewidth]{figures/IntExchNoVC.png}}
\caption[]{\texttt{Exchange} operation with missing requires clause.}
\label{fig:intexchNoVC}
\end{figure}
Software developers are free to edit both the specifications (formal contracts) of \texttt{Exchange}, as well as its executable body (sandwiched between the \texttt{Procedure} and \texttt{end} keywords). When ready to verify the operation, students may invoke an integrated prover. The exact prover used is of less importance to the discussion in this paper. It's worth noting here, however, that our system is supportive of three approaches: one based on term-rewriting (accessible via the \textit{RWVerify} button) \cite{smith:2013}, another that is currently under development and uses a congruence closure algorithm in conjunction with a matcher for quantified expressions (accessible via \textit{CCVerify} button), and (optionally) an external SMT solver.\footnote{Z3 \cite{de2008z3} is currently being incorporated as a proving option.} The second one that is designed to be just sufficient to prove VCs arising from program correctness (as opposed to arbitrary mathematical assertions) is summarized in section \ref{sec:prover}, and that is the one used for the examples in this paper.
Upon attempting to verify the \texttt{Exchange} operation, students are presented with a screen summarizing proof results, as shown in Fig.~\ref{fig:intexch0}. The system generates eight distinct VCs \cite{harton:2011}. VCs are mathematical assertions that are both necessary and sufficient for the code to be proven correct. To understand why there are eight VCs, we briefly describe the design-by-contract idea in this setting. Two VCs arise from the two conjuncts of the \texttt{ensures} clause of the \texttt{Exchange} operation, guarantees to be provided by the implementer of the code. Six more VCs arise from the \texttt{requires} clauses (preconditions) of the three statements in the code, two for each statement, namely that the sums or differences do not go outside computational integer bounds (i.e., \texttt{min\_int} and \texttt{max\_int}). This yields a total of eight VCs, because preconditions of called operations are the responsibility of the calling code in design-by-contract. Placing the cursor near the line number of a statement causes a box to appear referring to the names of any VCs generated by that statement.
\begin{figure}[htb]
\centering
\frame{\includegraphics[width = \linewidth]{figures/IntSwapVCList.png}}
\caption[]{Proof attempt of \texttt{Exchange} operation with missing requires clause.}
\label{fig:intexch0}
\end{figure}
Of the eight VCs, two are unable to be proven, as indicated by the yellow exclamation marks beside VC\_01 and VC\_02 (Fig.~\ref{fig:intexch1}). The line numbers in code corresponding to the VC are given in parentheses. While VCs in general might arise from any number of sources within a block of executable code, those unable to be established here arise from the requires clause of the sum operation that is implicitly invoked when \texttt{I} and \texttt{J} are added via the \texttt{+} operator. We leave it to the reader to convince themselves that an overflow or an underflow can occur in this code only for the first statement.
\begin{figure}[htb]
\centering
\frame{\includegraphics[width = \linewidth]{figures/IntExch1.png}}
\caption[]{Full display of a VC in the IDE.} \label{fig:intexch1}
\end{figure}
To aid students in arriving at the particular insight necessary to debug this code, we encourage them to interactively explore the unprovable VCs by mousing over context sensitive VC buttons appearing next to lines of code that generated VCs. Upon clicking any of these buttons, the panel on the right hand side of the Web IDE updates with relevant, finer grained information about the particular VC queried, including a succinct description of what must be established (the goal) and the various facts (givens) the system currently knows.\footnote{A parsimonious approach to the generation of givens is under research and several of the unrelated givens are expected to disappear in the next version of the IDE.}
In terms of the \texttt{Exchange} example, it is easy to observe that the system is unable to infer from the givens that \texttt{min\_int \textless= (I + J)} (VC 0\_1) and
\texttt{(I + J) \textless= max\_int} (VC 0\_2). It then becomes possible to infer that the system currently lacks the knowledge needed to conclude that \texttt{Integer} overflow (or underflow) will not occur when the \texttt{+} operation is carried out. To remedy this, and `provide' the system with the assurance that this will not happen, students must defer to their knowledge of design-by-contract, amending the specification of \texttt{Exchange} with a suitable requires clause as shown in Fig.~\ref{fig:intexch2}. Again, under design-by-contract, the requirements become givens to be used in proofs. The figure shows that the Web IDE successfully verifies the code using the improved operation specification.
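The repaired contract can be mimicked in ordinary code with a runtime assertion. The Python sketch below is an analogy, not RESOLVE: the 32-bit bounds are an assumption, and the assertion plays the role of the requires clause that makes all three statements safe (only the first sum can leave the range; the two subsequent differences recover the original values, which are in range by assumption):

```python
MIN_INT, MAX_INT = -2**31, 2**31 - 1  # illustrative computational bounds

def exchange(i, j):
    """Arithmetic swap; requires min_int <= i + j <= max_int."""
    assert MIN_INT <= i + j <= MAX_INT, "requires clause violated"
    i = i + j   # only this sum can overflow
    j = i - j   # yields the original i, within bounds
    i = i - j   # yields the original j, within bounds
    return i, j

print(exchange(3, 5))  # (5, 3)
```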
\begin{figure}[htb]
\centering
\frame{\includegraphics[width = \linewidth]{figures/IntExch2.png}}
\caption[]{\texttt{Int\_Swap\_Example\_Fac} verified.}\label{fig:intexch2}
\end{figure}
\section{Debugging Recursive Code}
For the second example, we consider an example operation which inverts the order of items in a queue (see Fig.~\ref{fig:enhancement}). The \texttt{Invert} operation is specified in an enhancement (an extension using specification inheritance) to the \texttt{Preemptable\_Queue} concept and implemented in an enhancement realization, using only operations provided in the \texttt{Preemptable\_Queue} concept. This separation of concerns makes it possible to verify the enhancement realization in a modular fashion without referring to or refining to any one implementation of \texttt{Preemptable\_Queue} concept. A preemptable queue differs from a regular queue in that it has operations to ``inject'' new items at the front of the queue (i.e., preempt the regular queue order), in addition to regular queue operations, such as \texttt{Enqueue} and \texttt{Dequeue}.
\begin{figure}[htb]
\centering
\frame{\includegraphics[width=\linewidth]{figures/ComponentFinder.png}}
\caption[]{Selection of an enhancement in the Web IDE.}
\label{fig:enhancement}
\end{figure}
A complete specification of the \texttt{Preemptable\_Queue} concept is shown in Appendix \ref{App:PreQ}. In the \texttt{Preemptable\_Queue} concept, the contents of the queue are conceptualized as a mathematical string (a structure similar to but simpler than a sequence in \texttt{Z}, because no positions are involved). So for this operation, the ensures clause (or post-condition) states that the outgoing value of the parameter \texttt{Q} should be the mathematical reverse of the input parameter (denoted by \texttt{\#Q}). Suppose that this operation is implemented using faulty code such as is shown in Fig.~\ref{fig:pq1}. Three of the VCs are verified, but VC 0\_3 is not. So as we did before, we encourage students to take a close look at that particular unprovable VC.
\begin{figure}[htb]
\centering
\frame{\includegraphics[width=\linewidth]{figures/PQ1.png}}
\caption[]{Inverting Code with Error.}
\label{fig:pq1}
\end{figure}
In the goal, \texttt{E'} is the dequeued entry, \texttt{Q'} is the conceptual string that stands for the value of the queue passed into the recursive call of \texttt{Invert}, and \texttt{Q} stands for the value of the queue at the beginning of the procedure. The goal is that \texttt{E'} concatenated with the reverse of \texttt{Q'} is equal to the reverse of \texttt{Q}. In order to debug this VC, a user may first write down the goal and then apply transformations until it becomes clear why the goal is unprovable. The purpose here is to show a general process for when the problem with an unprovable VC is less obvious.\footnote{While we show the details of these steps here, in actual debugging such a detailed analysis may not be necessary; understanding of such principles as the non-commutativity of string concatenation is straightforward, and the problem may be inferred more readily.}
\begin{verbatim}
Goal: Q' = Reverse(Q)
\end{verbatim}
Our first transformation will be to use given \#1 and apply it to the left-hand side of the goal. We will label this transformation Step 1.
\begin{verbatim}
Step 1: <E'> o Q'' = Reverse(Q)
\end{verbatim}
Next, we will apply given \#2 to transform the left-hand side once again:
\begin{verbatim}
Step 2: <E'> o Reverse(Q''') = Reverse(Q)
\end{verbatim}
And then we apply given \#3 to the right-hand side:
\begin{verbatim}
Step 3: <E'> o Reverse(Q''') = Reverse(<E'> o Q''')
\end{verbatim}
Next, one can attempt to use a theorem from \texttt{String\_Theory}, which defines string notations and results involving those notations for mathematical strings. The theorem we need here states the following:
\begin{verbatim}
For all u, v : String, Reverse (u o v) = Reverse(v) o Reverse(u)
\end{verbatim}
This transformation will produce the following result:
\begin{verbatim}
Step 4: <E'> o Reverse(Q''') = Reverse(Q''') o Reverse(<E'>)
\end{verbatim}
Finally, we apply a theorem that states that the reverse of a single-length string is itself, which gives us step 5:
\begin{verbatim}
Step 5: <E'> o Reverse(Q''') = Reverse(Q''') o <E'>
\end{verbatim}
At this point, it is easy to see that the goal is categorically false, as the concatenation operator is not commutative. Thus, the problem with the code is that the call to Inject is placing the dequeued entry on the wrong side of the recursively inverted queue. This example illustrates how a VC can serve as a guide to pinpoint the source of an error in formal reasoning.
In this case, the correction is to fix the code: specifically, the call to \texttt{Inject} needs to be replaced with a call to \texttt{Enqueue}.
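The faulty and corrected versions can be contrasted with a plain-list model of the queue. This Python sketch is an analogy for illustration only: the front of the list is the front of the queue, so Dequeue removes index 0, Enqueue appends at the back, and Inject prepends at the front:

```python
def invert_buggy(q):
    """Dequeues the front entry, inverts the rest, then Injects it at the front."""
    if not q:
        return []
    e, rest = q[0], q[1:]
    return [e] + invert_buggy(rest)   # Inject: wrong side, queue is unchanged

def invert_fixed(q):
    """Same recursion, but the dequeued entry is Enqueued at the back."""
    if not q:
        return []
    e, rest = q[0], q[1:]
    return invert_fixed(rest) + [e]   # Enqueue: order is reversed

q = [1, 2, 3]
print(invert_buggy(q))  # [1, 2, 3] -- not reversed
print(invert_fixed(q))  # [3, 2, 1]

# The String_Theory identity used in the debugging steps above:
u, v = [1, 2], [3]
assert list(reversed(u + v)) == list(reversed(v)) + list(reversed(u))
```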
\section{Loop Invariants}
This section outlines the creation and iterative development of loop invariants using object-based examples. Stacks and queues are abstract data types represented as objects in RESOLVE. Their behavior is specified in a \emph{Concept}, which is an abstract description of the methods all implementations must contain. The section concludes with a discussion of an assignment given to students in a junior-level software engineering course.
\subsection{Learning Iterative Invariant Development}
We begin with a simple example involving stacks to highlight the iterative steps we commonly see students working through with our Web IDE in reasoning about, and ultimately arriving at, appropriate assertions for loop invariants. Stacks, like queues, are modeled mathematically using strings; operations such as \texttt{Push} and \texttt{Pop} are specified using string notations.
Fig.~\ref{fig:stack1} shows an example operation presented in a classroom to teach the idea of invariants. \path{Flip_onto} iteratively transfers entries from a source stack, \texttt{S}, to a destination stack, \texttt{T}, resulting in a version of \texttt{T} that is prefixed by a `flipped' version of \texttt{S}. As expected, the intuition describing this outcome is formalized in the operation's \texttt{ensures} clause by the following succinct assertion: \texttt{T = Reverse(\#S) o \#T}. With the operation's input/output behavior formally expressed, students must turn to the task of deriving a suitable invariant for the while loop, expressed in RESOLVE using the \texttt{maintaining} clause. (The \texttt{decreasing} clause is used to document the progress metric necessary to prove termination.)
\begin{figure}[htb]
\centering
\frame{\includegraphics[width=\linewidth]{figures/Stack1.png}}
\caption[]{Loop with insufficient invariant.}\label{fig:stack1}
\end{figure}
Starting with a \texttt{maintaining} clause that simply reads ``\texttt{true}''---which is an appropriate starting point for beginners to understand the process of developing adequate invariants---the system (unsurprisingly) fails to establish correctness. Aside from the obvious inability to prove VCs corresponding to the operation's overall \texttt{ensures} clause, students using the Web IDE are able to see---with the help of the interactive VC buttons next to the line numbers---that VC 1\_1 and 1\_2 arising from calls to \texttt{Pop} and \texttt{Push} within the body of the loop are currently unprovable, as shown in Fig.~\ref{fig:stack1}.
Examining VC 1\_1, students are immediately informed that the \texttt{requires} clause of \texttt{Pop} (\texttt{|S''| /= 0}) cannot be established. Referring to the list of available givens, students can see that while the system is aware that \texttt{D' /= 0}, it still lacks any knowledge relating the length of \texttt{S} to the current depth, \texttt{D}. To address this roadblock and provide the prover with the information it needs to meet the precondition criteria of \texttt{Pop}, students might start by amending the maintaining clause with the assertion that \texttt{|S| = D}. Sure enough, upon re-running the prover, students are given validation in the form of a green checkmark, indicating that one roadblock to verification of the current operation has been successfully dealt with.
In motivating further construction of the \texttt{maintaining} clause, students once again look to unproven VCs as a guide to development. In this case, looking specifically to VC 2\_2, students can see that the \texttt{ensures} clause to the overall operation is still unable to be established. Using this insight, combined with the goal this VC is attempting to establish---specifically, that \texttt{T' = (Reverse(S) o T)}---one way for students to proceed is to simply append this assertion to the evolving \texttt{maintaining} clause, yielding \texttt{|S| = D and T = (Reverse(\#S) o \#T)}. Upon doing so, students can indeed see that the prover is now able to establish the ensures clause of the operation (indicated by VC 2\_3), but is unable to establish the VC corresponding to the invariant of the while statement---suggesting that something is still lacking from the assertion. However, in examining the (now provable) goal of the overall \texttt{ensures} clause addressed in the previous step, students can see that it reads as follows: \texttt{(Reverse(S) o T) = (Reverse(S) o T)}. Thus, mirroring the same technique and intuition employed to make the ensures clause provable earlier---that is, adding \texttt{T = (Reverse(\#S) o \#T)}---students can now make the necessary cognitive leap to realize the clause must be changed to read: \texttt{Reverse(S) o T = Reverse(\#S) o \#T}, resulting in a final, provable assertion that reads: \texttt{D = |S| and Reverse(S) o T = Reverse(\#S) o \#T}.
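The finished invariant can be checked dynamically with a small stack model. This Python sketch is an analogy, not the RESOLVE code: the top of each stack is index 0 of a list, mirroring the string abstraction, and the final assertions play the roles of the maintaining and ensures clauses:

```python
def flip_onto(s, t):
    """Transfers all entries of stack s onto stack t; top of stack is index 0."""
    s0, t0 = list(s), list(t)             # #S and #T, kept to check assertions
    d = len(s)                            # the Depth counter from the example

    def invariant():
        # D = |S| and Reverse(S) o T = Reverse(#S) o #T
        return (d == len(s)
                and list(reversed(s)) + t == list(reversed(s0)) + t0)

    assert invariant()                    # invariant holds on loop entry
    while d != 0:
        e = s.pop(0)                      # Pop
        t.insert(0, e)                    # Push
        d -= 1
        assert invariant()                # ...and after every iteration
    assert t == list(reversed(s0)) + t0   # the operation's ensures clause
    return t

print(flip_onto([1, 2, 3], [9]))  # [3, 2, 1, 9]
```

Running such a model does not prove the invariant, of course, but it cheaply falsifies insufficient candidates before they are submitted to the prover.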
\subsection{Applying Iterative Invariant Development}
Following an introduction to and discussion of the iterative development of loop invariants, students used the Web IDE to complete reasoning assignments. The assignments required students to write verified code for pre-specified concepts and enhancement operations. The specification of one such operation, which copies a generic \texttt{Preemptable\_Queue}, is given below.\\
\\
\noindent\texttt{\textbf{\color{blue}Operation} Copy\_Queue \\
\indent(\textbf{\color{blue}restores} Q: P\_Queue; \textbf{\color{blue}replaces} Q\_Copy: P\_Queue);\\
\indent\textbf{\color{blue}ensures} Q\_Copy = Q; }\\
Table~\ref{table:assignment} is a summary of student performance for each of the invariant writing assignments. In addition to copying a queue, students wrote code for outputting a queue, reversing a sequence, and an end user application assignment that involved use of custom-made mathematical definitions and operations involving non-trivial types. The definitions in the end user assignment were not complemented by necessary results and hence, the prover was not of use in establishing the invariants. The complexity of the assignment, the mathematics involved, and the absence of prover support are among possible reasons for the low success rate of students in developing appropriate invariants for those operations.
\begin{table}[h]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{@{}lllll@{}}
\toprule
& \textbf{Writing (Queue)} & \textbf{Copying (Queue)} & \textbf{Reversal (Sequence)} & \textbf{Facility Operations} \\ \midrule
\textbf{Correct} & 70\% & 90\% & 60\% & 30\% \\
\textbf{Insufficient Invariant} & 20\% & 10\% & 30\% & \\
\textbf{Other} & 10\% & & 10\% & 70\% \\ \bottomrule
\end{tabular}
}\caption{Evaluation of Invariant Assignments}\label{table:assignment}
\end{table}
Fig.~\ref{fig:studentcode} is an example of code developed by a student for the \texttt{Copy\_Queue} assignment.\footnote{In the figure, the \texttt{changing} clause is optional; it lists the variables potentially affected by the loop, and variables not mentioned are assumed to be unchanging. In the absence of this clause, all variables are assumed to be affected. This clause is useful for simplifying some routine invariants \cite{harton:2011}.} Neither the code nor the invariant is necessarily optimal. Proofs of all 18 VCs generated for this copy operation are completed in an average time of 3 seconds (total) on the server that hosts the Web IDE. As noted in the introduction, other provers, such as Z3, could be used to discharge the VCs.
\begin{figure}[htb]
\centering
\frame{\includegraphics[width=\linewidth]{figures/CopyOp.png}}
\caption[]{Student code for the Queue copy assignment.} \label{fig:studentcode}
\end{figure}
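As a rough analogue of such a solution, the following Python sketch (illustrative only; the invariant shown is one workable choice, not necessarily the student's) copies a queue by rotating each entry to the back while also enqueueing it onto the copy, so that \texttt{Q} is restored when the loop exits:

```python
from collections import deque

def copy_queue(q):
    """Copy queue q non-destructively: rotate each entry to the back of q
    while also enqueueing it onto the copy; q is restored at loop exit."""
    q0 = list(q)                        # #Q: abstract value at loop entry
    copy, n = deque(), len(q)
    for k in range(n):
        # invariant sketch: copy holds the first k entries of #Q, and
        # q is #Q rotated left by k positions
        assert list(copy) == q0[:k]
        assert list(q) == q0[k:] + q0[:k]
        x = q.popleft()                 # Dequeue from Q
        copy.append(x)                  # Enqueue onto Q_Copy
        q.append(x)                     # ... and back onto Q
    assert list(q) == q0 and list(copy) == q0  # restores Q; ensures Q_Copy = Q
    return copy
```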
\section{Summary of the Prover Underlying the Web IDE} \label{sec:prover}
The verifying compiler that serves as the back end of the web interface contains a modular VC generation subsystem \cite{cook2012specification,harton:2011} which provides input to an automated prover. The automated prover relies on previously proven results in a library of mathematical theories that are reused in the specification of programming concepts \cite{smith:2013}.
At the core of the \emph{CCVerify} automated VC prover is a congruence closure algorithm that incorporates the Theory of Equality, similar in spirit to that described in \cite{Fast}. An outer layer incorporates pattern-matching techniques for expressions containing universally quantified variables, similar to the matcher described in \cite{Simplify}. In this way, a single component can handle problems from multiple domains. \emph{CCVerify}, which contains fewer than 2000 lines of Java code, is designed to be fast, simple, and effective. As it is fully integrated into the compiler, there are no issues with portability, licensing, or translation of the assumptions to other formats as there might be if an external tool were used.
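As a hedged illustration of the core idea (a naive fixpoint formulation in Python, not \emph{CCVerify}'s actual implementation), ground congruence closure can be sketched as union-find over the asserted equalities plus a propagation pass that merges applications whose arguments have become equal:

```python
def congruence_closure(equalities):
    """Naive ground congruence closure. Constants are strings; function
    applications are tuples ('f', arg1, ...). Returns a decision procedure
    for equality modulo the asserted equalities plus congruence."""
    terms = set()

    def collect(t):                          # gather all subterms
        terms.add(t)
        if isinstance(t, tuple):
            for a in t[1:]:
                collect(a)

    for a, b in equalities:
        collect(a); collect(b)

    parent = {t: t for t in terms}           # union-find forest

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]    # path halving
            t = parent[t]
        return t

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
        return True

    for a, b in equalities:                  # merge the asserted equalities
        union(a, b)

    apps = [t for t in terms if isinstance(t, tuple)]
    changed = True
    while changed:                           # congruence: f(x..)=f(y..) if x_i=y_i
        changed = False
        for i, t1 in enumerate(apps):
            for t2 in apps[i + 1:]:
                if (t1[0] == t2[0] and len(t1) == len(t2)
                        and all(find(x) == find(y)
                                for x, y in zip(t1[1:], t2[1:]))
                        and union(t1, t2)):
                    changed = True
    return lambda a, b: find(a) == find(b)
```

The classic example $f(f(a))=a,\ f(f(f(a)))=a \vdash f(a)=a$ requires the congruence pass; union-find alone cannot derive it.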
There is an implied division of labor in the production of proofs. Specifications must be written so that the consequent of the VC eventually produced is a predicate with constants as arguments. It turns out that typically it is sufficient to use only instances of previously proven universally quantified statements (these are members of a reusable theorem library) to construct a proof, assuming that the specifications used to generate the VC make such a proof relatively obvious. Sophisticated techniques used in automated theorem provers may not be required to discharge the VCs. We are currently testing this hypothesis using a limited version of Z3 as well.
Our mathematical specification system is feature-rich. It allows for polymorphic types and first-class functions. These features make a \emph{direct} translation of \emph{some} of our mathematical theories to the standard many-sorted first-order logic language \cite{barrett2010smt} used in SMT proving impossible, though it is possible to support these features relatively simply (in a sound, but not complete, way) within the matching component of the integrated prover.
\section{Related Work}\label{related}
A summary of related specification/verification languages may be found in \cite{hatcliff2012behavioral}, and tools or IDEs to facilitate their usage are discussed in the first proceedings of this workshop \cite{DBLP:journals/corr/DuboisGM14}. We discuss only the most related work in this section.
Like RESOLVE that underlies the Web IDE discussed in this paper, Dafny is a programming and specification language intended for verification of functional correctness \cite{leino2014dafny}. The Eiffel integrated language effort \cite{EiffelRef}, though initially focused on runtime assertion checking, is now supported by Eve~\cite{furiaeptcs}, the Eiffel Verification Environment, which includes the AutoProof~\cite{DBLP:journals/corr/abs-1106-4700} tool for static verification. Both Dafny and AutoProof translate code and specifications into Boogie~\cite{DBLP:conf/fmco/BarnettCDJL05}, an intermediate verification language. The Boogie tool can create VCs suitable as input to an SMT solver. An important distinction is that the RESOLVE compiler handles VC generation internally, and displays them in a format that makes it easy for users to connect problematic VCs with the code and specifications that produced them.
Java Modeling Language (JML) is a specification/verification language for Java programs, and tools for the language include the ability to do runtime assertion checking \cite{leavens2006preliminary}. JML does not have an IDE, but there are efforts to integrate JML as a plugin to Eclipse \cite{chalin2008jml4,cok2014openjml}. Tools are available for ProB, an animator and model checker for the B-Method, a formal method based on abstract
machine notation \cite{bendisposto2014prob,witulski2014prob}. The VeriFast effort is aimed at verifying single/multi-threaded C and Java programs \cite{jacobs2011verifast,smans2013verifast}. VeriFast also includes a GUI that is packaged into their code distribution.
We are not alone in employing a formal methods IDE in education. Whereas our educational focus is mostly on software engineering aspects (though we have used the Web IDE to a limited extent in a discrete mathematics course), teaching discrete mathematics and specifications using an IDE is the focus of FoCaLiZe---an IDE
that takes source code, specification properties, and machine-checkable proofs to produce executable OCaml
code and checkable Coq input values \cite{jaume2014teaching,pessaux2014focalize}. Though our Web IDE does not support inferring loop invariants, invariant inference is a useful feature; an Eclipse plugin with a goal to infer object and loop invariants for C programs is discussed in \cite{cok2014speedy}.
\section{Conclusions}
This paper has detailed an iterative approach for creating, debugging, and developing components that are correct with respect to their specifications, using an IDE equipped with a verification system. Using several illustrative examples drawn from lectures and student assignments, we have explained how students and software engineers, in general, can develop provably correct software iteratively based on the VCs and feedback received from the RESOLVE Web IDE. Extensive experience with the IDE in the classroom indicates that students are indeed capable of producing correct software using the IDE as discussed in this paper. While the present paper has focused only on functional correctness of code, the IDE includes features to create and view mathematical units and data abstraction realizations with representation invariants and abstraction relations, as well as for generating executable Java code from RESOLVE code \cite{welch:2014}.
A variety of improvements to the IDE are in progress, ranging from minor visual improvements, such as highlighting VC buttons that correspond to unprovable VCs, to more significant ones, such as the creation and development of performance specifications and related correctness checks.
\section{Acknowledgements}
The RESOLVE verifying compiler is a multi-decade project involving researchers at several universities, including, but not limited to, Clemson University, Ohio State University, and Denison University. We acknowledge the ideas and support of members of the group in this endeavor. This research has been funded in part by the US NSF grants CCF-0811748, CCF-1161916, and DUE-1022941.
\section{\label{chap5_sec:level1}Introduction}
J.\;W.\;Gibbs envisioned uniform solutions decomposing (or phase separating) through two kinds of kinetic processes \cite{gibbs1906scientific,cahn1961spinodal}. In alloy systems, these processes are sometimes classified as continuous and discontinuous transformations. While continuous transformations begin with small fluctuations that extend over relatively large spatial regions and take place simultaneously throughout the volume of the system, discontinuous transformations initiate with localized concentration fluctuations that are comparatively large in amplitude but small in spatial extent \cite{balluffi2005kinetics}. From the perspective of the thermodynamic free-energy \cite{cahn1961spinodal,balluffi2005kinetics}, continuous transformations initiate spontaneously from an unstable solution when an infinitesimal variation decreases the free-energy. This behavior is associated with the spinodal decomposition mechanism. Discontinuous transformations develop in an initially metastable solution through a series of statistical fluctuations that eventually overcome a free-energy barrier. They are characteristic of nucleation and growth mechanisms. These thermodynamic concepts are useful for interpreting alloy decomposition even though functions like temperature and free-energy are strictly speaking defined only at equilibrium and must be extrapolated to non-equilibrium states to describe kinetic phenomena.
Although the unit process underlying the two mechanisms is the same (atomic migration by diffusion), the driving forces are quite different, and this leads to very different kinetic characteristics. Models for decomposition processes like these generally start by assuming that a particular step of the process is rate-limiting, and then build an appropriate mathematical description of that step. An inherent difficulty with this approach is the need to know the underlying reaction mechanism in order to build an accurate kinetic model. For example, if classical nucleation is the operative process responsible for phase decomposition, the kinetics are described in terms of the distribution of cluster sizes and their rates of growth and shrinkage \cite{schmelzer2004dynamics}. On the other hand, if spinodal decomposition is operative, the decomposition rate is better described by a generalized diffusion equation (e.g., reference \cite{cahn1961spinodal}). For this reason, microstructural modeling starts by assuming a decomposition mechanism rather than determining it from the physical conditions.
Following Gibbs \cite{gibbs1906scientific}, the decomposition mechanism should be selected at the very beginning of the decomposition process when changes take place through the collective behavior of a relatively small number of fluctuations. Not surprisingly, kinetic Monte Carlo methods, which are based on statistical fluctuations and do not assume a rate-limiting step, are successful at describing multiple processes \cite{soisson2006kinetic,gao2018theoretical}. Quantum mechanics is widely used to interpret discrete behavior in small systems, so it should be reasonable to apply the tools of quantum mechanics to the {\it selection} of transformation mechanisms in bulk systems.
In this regard, the steepest-entropy-ascent quantum thermodynamics (SEAQT) framework shows great promise for predicting both the operative decomposition mechanism and the reaction kinetics. SEAQT is a non-equilibrium thermodynamic-ensemble approach that was originally formulated to address a number of physical inconsistencies between quantum mechanics and thermodynamics \cite{hatsopoulos1976-I,hatsopoulos1976-IIa,hatsopoulos1976-IIb,hatsopoulos1976-III,beretta2005generalPhD}. It describes the relaxation process of a system from an initial non-equilibrium state to stable equilibrium following the direction of steepest entropy ascent, i.e., maximum entropy production. To apply the framework to the phase decomposition of alloys, the system is described differently from conventional microstructural models. Rather than describing the system in terms of position-dependent functions, like free-energy, that evolve with time, the SEAQT approach employs a thermodynamic-ensemble and a density operator formalism (analogous to a phase-space probability measure in statistical mechanics) that tracks the decomposition process in terms of a single time-dependent variable. While perhaps physically nonintuitive, reformulating the problem in this way has important computational advantages over approaches based on classical mechanics (e.g., molecular dynamics) and microstructural models (e.g., phase field models).
States in the SEAQT framework are described by occupation probabilities of a set of possible energy eigenlevels, also called the energy eigenstructure \cite{li2016steepest}, as depicted in Fig.\;\ref{fig5:SEAQT_flowchart}. For example, an energy eigenstructure for an A--B binary solid-solution of a specified size is constructed from the energies corresponding to all the possible arrangements of A-type and B-type atoms. The entropy of the system is given by a measure of the degree of energy load sharing among available energy eigenlevels, and the evolution of the system from an initial, non-equilibrium state at time $t=0$ to a final, stable equilibrium state at time $t=\infty$ is found by solving the SEAQT equation of motion (indicated by the large schematic arrow in Fig.\;\ref{fig5:SEAQT_flowchart}). By assuming the system's evolution of state follows the path of steepest entropy ascent (maximum rate of entropy production), the equation of motion yields a unique kinetic path through state space from the initial state to the final equilibrium state predicted by the second law of thermodynamics.
\begin{figure}
\begin{center}
\includegraphics[scale=0.34]{fig5_SEAQT_flowchart}
\caption{\label{fig5:SEAQT_flowchart} A schematic explanation of the SEAQT approach: (a) An energy landscape, or eigenstructure, of an alloy with variable composition and long-range order is constructed from an appropriate solution model. The energy of the system is displayed as a discrete function of alloy concentration and a long-range order parameter (LRO). (b) The initial state of the system ($t=0$) is expressed by occupation probabilities for each possible configuration, which is superimposed over the eigenstructure (shaded squares). The time-evolution of the system is determined by solving the SEAQT equation of motion (represented by the large arrow) to find the path from the initial state to that of stable equilibrium (c) at $t=\infty$.}
\end{center}
\end{figure}
To use the SEAQT framework, the energy eigenstructure must be determined for the system in question. Although the eigenstructure for a gas phase can be constructed relatively easily (e.g., by assuming ideal gas behavior), many-body interactions among particles make the eigenstructure highly complex for condensed phases. There are two aspects to this complexity. First, determining the available energy eigenlevels from appropriate quantum models may be computationally intractable, and second, the number of energy eigenlevels is effectively infinite. Both of these problems are addressed in recent work modeling the thermal expansion of silver \cite{yamada2018method}. A highly simplified eigenstructure is built from a reduced-order model (a solid-state instead of a quantum model), and an infinite energy-level eigenstructure is replaced with a discretized, finite-level ``pseudo-eigenstructure'' with the use of the density of states method developed in reference \cite{li2016steepest}.
In this contribution, the SEAQT theoretical framework with the pseudo-eigenstructure is applied to phase decompositions in binary solid-solutions to determine the kinetic pathways. The work consists of two parts; continuous and discontinuous transformations are investigated in Part\;I, and ordering and concurrent ordering with phase separation are explored in Part\;II. Part\;I is organized as follows. First, the SEAQT equation of motion is modified for kinetic calculations in binary alloy systems with fixed composition in Sec.\;\ref{chap5_sec:level2_1}, and a pseudo-eigenstructure for a solid-solution is constructed using a mean-field approximation (or a solution model) in Sec.\;\ref{chap5_sec:level2_2}. In Sec.\;\ref{chap5_sec:level2_3}, the calculation conditions and the preparation of initial states are described. In Sec.\;\ref{chap5_sec:level3}, the calculated time-evolution of the decomposition process from arbitrary initial states is shown and discussed with a focus on continuous and discontinuous transformation behaviors (Sec.\;\ref{chap5_sec:level3_1}); the spinodal limit and the real-time dependence of the decomposition process are also explored (Secs.\;\ref{chap5_sec:level3_2} and \ref{chap5_sec:level3_3}, respectively). Finally, the study of continuous and discontinuous phase decomposition behaviors in an alloy system using the SEAQT model is summarized in Sec.\;\ref{chap5_sec:level4}.
\section{\label{chap5_sec:level2}Theory}
\subsection{\label{chap5_sec:level2_1}SEAQT equation of motion}
The equation of motion in the SEAQT framework has been developed to account for dissipative processes in quantum systems. The dissipative contribution is incorporated into the Schr\"{o}dinger equation as an irreversible term, and the SEAQT equation of motion takes the form \cite{beretta1985quantum,beretta2006nonlinear,beretta2009nonlinear}:
\begin{equation}
\frac{d\hat{\rho}}{dt}=\frac{1}{i\hbar}[\hat{\rho},\hat{H}]+\frac{1}{\tau(\hat{\rho})}\hat{D}(\hat{\rho}) \; , \label{eq5:equation_of_motion}
\end{equation}
where $\hat{\rho}$ is the density operator, $t$ the time, $\hbar$ the reduced Planck constant, $\hat{H}$ the Hamiltonian operator, $\tau$ the relaxation time (which represents the rate at which the states of a system evolve in Hilbert space along the unique kinetic path determined by Eq.\;(\ref{eq5:equation_of_motion})), and $\hat{D}$ the dissipation operator. The left-hand side of the equation and the first term on the right correspond to the time-dependent von Neumann (or Schr\"{o}dinger) equation. The second term on the right is a dissipation term, the irreversible contribution that accounts for relaxation processes in the system. When $\hat{\rho}$ is diagonal in the Hamiltonian eigenvector basis, $\hat{\rho}$ and $\hat{H}$ commute and the von Neumann term in the equation of motion disappears so that Eq.\;(\ref{eq5:equation_of_motion}) simplifies (for the case of a system in which the identity and Hamiltonian operators are the only generators of the motion) to \cite{beretta2006nonlinear,beretta2009nonlinear,li2016steepest}
\begin{equation}
\frac{dp_j}{dt}=\frac{1}{\tau}\frac{\begin{vmatrix}
-p_j \mathrm{ln} \frac{p_j}{g_j} & p_j & \epsilon_jp_j \\
\langle s \rangle & 1 & \langle e \rangle \\
\langle es \rangle & \langle e \rangle & \langle e^2 \rangle
\end{vmatrix}}{\begin{vmatrix}
1 & \langle e \rangle \\
\langle e \rangle & \langle e^2 \rangle
\end{vmatrix}} \; , \label{eq5:equation_of_motion_simplified}
\end{equation}
where
\[
\begin{array}{c c}
\langle s \rangle = - \sum\limits_{i} p_i \mathrm{ln} \frac{p_i}{g_i} \; ,
&
\langle e \rangle = \sum\limits_{i} \epsilon_i p_i \; , \\ \\
\langle e^2 \rangle = \sum\limits_{i} \epsilon_i^2 p_i \; , \;\;\;
&
\langle es \rangle = - \sum\limits_{i} \epsilon_i p_i \mathrm{ln} \frac{p_i}{g_i} \; ,
\end{array}
\]
and the $p_j$ are the diagonal terms of $\hat{\rho}$, each of which represents the occupation probability in the $j^{th}$ energy eigenlevel ${\epsilon}_j$; the $g_j$ are the degeneracies of the energy eigenlevel; and $\langle \cdot \rangle$ is the expectation value of the property. Note that the von Neumann formula for entropy is used here. Provided the density operator is based on a homogeneous ensemble, this formula satisfies all the characteristics of entropy required by thermodynamics without making entropy a statistical property of the ensemble \cite{gyftopoulos1997entropy,cubukcu1993thermodynamics,yamada2018steepest}. It is assumed here that $\hat{\rho}$ is diagonal in the eigenvector basis, which is the case for many classical systems or when no quantum correlations between particles are present \cite{li2016generalized,li2016modeling,li2017study}.
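Eq.\;(\ref{eq5:equation_of_motion_simplified}) can be integrated numerically. The following sketch uses a toy three-level system with made-up energies and degeneracies and sets $\tau = 1$; the determinant structure guarantees that the total probability and the energy expectation are conserved while the entropy grows:

```python
import numpy as np

def seaqt_rhs(p, e, g, tau=1.0):
    """dp_j/dt from Eq. (2): a 3x3 determinant over a 2x2 Gram determinant."""
    s = -p * np.log(p / g)                       # -p_j ln(p_j/g_j)
    S, E = s.sum(), (e * p).sum()                # <s>, <e>
    E2, ES = (e**2 * p).sum(), (e * s).sum()     # <e^2>, <es>
    denom = E2 - E**2                            # | 1 <e>; <e> <e^2> |
    dp = np.empty_like(p)
    for j in range(p.size):
        dp[j] = np.linalg.det([[s[j], p[j], e[j] * p[j]],
                               [S,    1.0,  E],
                               [ES,   E,    E2]]) / (tau * denom)
    return dp

e = np.array([0.0, 1.0, 2.0])                    # toy energy eigenlevels
g = np.array([1.0, 2.0, 1.0])                    # toy degeneracies
p = np.array([0.7, 0.2, 0.1])                    # initial non-equilibrium state
E0, S0 = (e * p).sum(), (-p * np.log(p / g)).sum()
for _ in range(2000):                            # forward-Euler relaxation
    p += 1e-3 * seaqt_rhs(p, e, g)
```

Because the first row of the numerator determinant sums (over $j$) to a copy of the second row, $\sum_j dp_j = 0$; weighting by $\epsilon_j$ reproduces the third row, so $\sum_j \epsilon_j \, dp_j = 0$ as well.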
The SEAQT equation of motion, Eq.\;(\ref{eq5:equation_of_motion_simplified}), is derived via a constrained gradient in Hilbert space that causes the system to follow the direction of steepest entropy ascent when the energy and occupation probabilities are conserved. When the number of particles is conserved as an additional constraint, the identity, Hamiltonian, and particle number operators become the generators of the motion. The equation of motion, then, becomes \cite{li2016steepest2}
\begin{equation}
\frac{dp_j}{dt}=\frac{1}{\tau}\frac{\begin{vmatrix}
-p_j \mathrm{ln} \frac{p_j}{g_j} & p_j & N_j p_j & \epsilon_jp_j \\
\langle s \rangle & 1 & \langle N \rangle & \langle e \rangle \\
\langle Ns \rangle & \langle N \rangle & \langle N^2 \rangle & \langle eN \rangle \\
\langle es \rangle & \langle e \rangle & \langle eN \rangle & \langle e^2 \rangle
\end{vmatrix}}{\begin{vmatrix}
1 & \langle N \rangle & \langle e \rangle \\
\langle N \rangle & \langle N^2 \rangle & \langle eN \rangle \\
\langle e \rangle & \langle eN \rangle & \langle e^2 \rangle
\end{vmatrix}} \; , \label{eq5:equation_of_motion_grand_canonical}
\end{equation}
where
\[
\begin{array}{c c}
\langle N \rangle = \sum\limits_{i} N_i p_i \; ,
&
\langle N^2 \rangle = \sum\limits_{i} N_i^2 p_i \; , \\ \\
\langle eN \rangle = \sum\limits_{i} \epsilon_i N_i p_i \; , \;\;\;
&
\langle Ns \rangle = - \sum\limits_{i} N_i p_i \mathrm{ln} \frac{p_i}{g_i} \; .
\end{array}
\]
Here the $N_j$ are the number of particles in the $j^{th}$ energy eigenlevel. The equation of motion can be modified further by allowing an exchange of heat between the system and a heat reservoir. This can be done by viewing them as subsystems of an overall composite system (see references \cite{li2016steepest,li2016generalized,li2016steepest2,yamada2018steepest}) for which the generators of the motion are the identity and particle number operators for each subsystem and the Hamiltonian operator for the composite system. This combined with the concept of hypoequilibrium states \cite{li2016steepest,li2016generalized,li2016steepest2} transforms Eq.\;(\ref{eq5:equation_of_motion_grand_canonical}) for the original system into the following form:
\begin{equation}
\frac{dp_j}{dt}=\frac{1}{\tau} p_j \left[ \left( s_j - \langle s \rangle \right) + \left( N_j- \langle N \rangle \right) \gamma^R - \left( \epsilon_j - \left< e \right> \right) \beta^R \right] ,
\label{eq5:equation_motion_grand_canonical_heat}
\end{equation}
where
\[
\gamma^R \equiv -\frac{ (\langle Ns \rangle - \langle N \rangle \langle s \rangle) - ( \langle eN \rangle - \langle e \rangle \langle N \rangle) \beta^R}{ \langle N^2 \rangle - \langle N \rangle \langle N \rangle} \; ,
\]
and $\beta^R$ is the inverse of the product of Boltzmann's constant and the temperature of the reservoir $T_R$, i.e., $\beta^R=1/k_BT_R$.
For many physical processes occurring in an alloy, the concentrations of the components remain constant. This can be described for a binary A--B alloy by replacing $N_j$ with $N_{B,j}$ (or $N_{A,j}$) and fixing the total number of particles in each energy eigenlevel (i.e., $N_j=N_{A,j}+N_{B,j}=\mbox{constant}$ where $N_{A,j}$ and $N_{B,j}$ are, respectively, the number of A-type and B-type atoms in the $j^{th}$ energy eigenlevel). These notations together with Eqs.\;(\ref{eq5:equation_of_motion_grand_canonical}) and (\ref{eq5:equation_motion_grand_canonical_heat}) are applicable to a binary alloy of fixed composition.
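A minimal numerical sketch of Eq.\;(\ref{eq5:equation_motion_grand_canonical_heat}) (a toy four-level eigenstructure with made-up $\epsilon_j$ and $N_j$, and with $\tau$ and $k_B T_R$ set to unity) exhibits the stated conservation properties: the occupation probabilities and the expected particle number are conserved while energy can be exchanged with the reservoir:

```python
import numpy as np

def seaqt_reservoir_rhs(p, e, N, g, beta_R, tau=1.0):
    """dp_j/dt from Eq. (4): steepest-entropy-ascent relaxation toward
    mutual equilibrium with a reservoir at inverse temperature beta_R."""
    s = -np.log(p / g)                                  # s_j = -ln(p_j/g_j)
    S, E, Nbar = (p * s).sum(), (p * e).sum(), (p * N).sum()
    Ns, eN, N2 = (p * N * s).sum(), (p * e * N).sum(), (p * N**2).sum()
    gamma = -((Ns - Nbar * S) - (eN - E * Nbar) * beta_R) / (N2 - Nbar**2)
    return p * ((s - S) + (N - Nbar) * gamma - (e - E) * beta_R) / tau

e = np.array([0.0, 1.0, 1.5, 3.0])      # toy energy eigenlevels
N = np.array([0.0, 1.0, 1.0, 2.0])      # toy particle numbers N_j
g = np.ones(4)
p = np.array([0.4, 0.3, 0.2, 0.1])      # initial non-equilibrium state
N0 = (p * N).sum()
for _ in range(5000):                   # forward-Euler relaxation
    p += 1e-3 * seaqt_reservoir_rhs(p, e, N, g, beta_R=1.0)
```

The choice of $\gamma^R$ is exactly what makes $\sum_j N_j \, dp_j$ vanish, so $\langle N \rangle$ stays fixed even though $\langle e \rangle$ is free to relax toward the reservoir's temperature.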
\subsection{\label{chap5_sec:level2_2}Pseudo-eigenstructure}
Configurational energy in a binary alloy system is given by \cite{khachaturyan2013theory}
\begin{equation}
E=\frac{1}{2} \sum_{\bm{\mathrm{r}},\bm{\mathrm{r}}'} W(\bm{\mathrm{r}} - \bm{\mathrm{r}}') n(\bm{\mathrm{r}}) n(\bm{\mathrm{r}}') \; , \label{eq5:total_energy_MF_original}
\end{equation}
where $W(\bm{\mathrm{r}} - \bm{\mathrm{r}}')$ is a pairwise interatomic interaction energy between two atoms at lattice sites $\bm{\mathrm{r}}$ and $\bm{\mathrm{r}}'$. The factors $n(\bm{\mathrm{r}})$ and $n(\bm{\mathrm{r}}')$ represent the distribution functions at these lattice points. The pseudo-eigenstructure in an alloy system is constructed by employing a mean-field approximation that replaces many-body interactions among particles with an average internal field experienced by each atom \cite{girifalco2003statistical}. Using the simplest mean-field approximation, where short-range correlations between different atomic species are ignored, the $n(\bm{\mathrm{r}})$ and $n(\bm{\mathrm{r}}')$ can be expressed in terms of the concentration of B-type atoms, $c$. When the reference energy is set to the segregation limit (a line connecting the energies of two systems composed of pure A-type and pure B-type atoms), Eq.\;(\ref{eq5:total_energy_MF_original}) becomes
\begin{equation}
E(c)= \frac{1}{2} N c (1-c) V(\bm{0}) \; , \label{eq5:total_energy_phase_separation}
\end{equation}
where $N$ is the number of atoms in the system and $V(\bm{0})$ is a parameter incorporating all the interaction energies. For a face-centered cubic crystal, $V(\bm{0})$ is given by \cite{khachaturyan2013theory}
\begin{equation}
V(\bm{0})=12w_1+6w_2+24w_3+12w_4+\cdots \;\; ,
\label{eq5:interaction_energies_fcc}
\end{equation}
where $w_{n}$ is the $n^{th}$ nearest-neighbor {\it effective} pair interaction energy, which is related to the component-specific $n^{th}$-neighbor pair interaction energies, $V_{ij}^{(n)}$ ($i,j=$ A or B), by
\begin{equation}
w_{n}=V_{AA}^{(n)}+V_{BB}^{(n)}-2V_{AB}^{(n)} \;\; .
\label{eq5:effective_interaction_energies_fcc}
\end{equation}
The parameter $V(\bm{0})$ is positive when the interactions among A and B species are such that a solid-solution of A and B prefers to decompose into two different solid-solutions. The degeneracy of each energy in Eq.\;(\ref{eq5:total_energy_phase_separation}) is given by the binomial coefficient,
\begin{equation}
g(c)=\frac{N !}{N_{A} ! \cdot N_{B} !}=\frac{N !}{(N(1-c))! \cdot (Nc)!} \; , \label{eq5:degeneracy_mean_field_binary_alloy}
\end{equation}
where $N_{A}$ and $N_{B}$ are the number of A-type and B-type atoms, respectively. Here, using Gosper's approximation for the factorial \cite{weisstein2008stirling}, $x!\approx\sqrt{(2x+\frac{1}{3})\pi}\, x^x e^{-x}$, Eq.\;(\ref{eq5:degeneracy_mean_field_binary_alloy}) can be treated as a continuous function for large $N$. The energy eigenlevels, $E_j$, and the degeneracy, $g_j$, are determined from Eqs.\;(\ref{eq5:total_energy_phase_separation}) and (\ref{eq5:degeneracy_mean_field_binary_alloy}) by replacing $N$ and $c$ with $N_j$ and $c_j$ (here the energy eigenlevels are denoted by $E_j$ instead of $\epsilon_j$ because the $E_j$'s are extensive quantities). Since the $N_j$ are the same for all energy eigenlevels (because of the constraint mentioned at the end of Sec.\;\ref{chap5_sec:level2_1}), this common value is denoted $N$ hereafter. For a bulk sample composed of a vast number of particles, any value of $c_j$ between zero and unity is possible and the number of states is effectively infinite. To cope with this intractable number of accessible energy eigenlevels, the density of states method developed by Li and von Spakovsky within the SEAQT framework \cite{li2016steepest} is used, where similar energy eigenlevels are combined into discrete bins and the computational burden is reduced substantially without affecting the accuracy of the result. With this method, the energy eigenlevels, degeneracies, and concentration of B-type atoms become
\begin{equation}
E_j = \frac{1}{g_j} \int_{\bar{c}_j}^{\bar{c}_{j+1}}g(c) E(c) \; dc \;, \label{eq5:energy_eigenvalue_pseudo}
\end{equation}
\begin{equation}
g_j=\int_{\bar{c}_j}^{\bar{c}_{j+1}} g(c) \; dc \;, \label{eq5:degeneracy_pseudo}
\end{equation}
and
\begin{equation}
c_j = \frac{1}{g_j} \int_{\bar{c}_j}^{\bar{c}_{j+1}} g(c) c \; dc \;, \label{eq5:fraction_down_spin_pseudo}
\end{equation}
where $\bar{c}_j$ is specified by the number of intervals, $R$, as $\bar{c}_j= j/R$ with $j$ an integer ($j=0, 1, 2, ... \; R$). The number of intervals, $R$, is determined by ensuring the following condition is satisfied \cite{yamada2018steepest}:
\begin{equation}
\frac{1}{\beta} \gg \frac{| E_{j+1}-E_j | }{N} \; . \label{eq5:quasi_continuous_condition}
\end{equation}
The size of the system, specified via the number of atoms, $N$, establishes the energy and the degeneracy through Eqs.\;(\ref{eq5:total_energy_phase_separation}) and (\ref{eq5:degeneracy_mean_field_binary_alloy}), respectively. In order to capture quantum effects, the system size should not be so large that it behaves classically but large enough to include important interactions among the constituent atoms --- say 5 to 20 times the interatomic distance for a metallic solid-solution. For most of the subsequent calculations, $N=10^4$ was chosen for the system size; a more detailed analysis could determine the most appropriate system size, but that is beyond the present scope.
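The binning of Eqs.\;(\ref{eq5:energy_eigenvalue_pseudo})--(\ref{eq5:fraction_down_spin_pseudo}) can be sketched numerically as follows (the values of $N$, $V(\bm{0})$, and $R$ here are illustrative, not those used in the paper); the binomial degeneracy is evaluated through log-gamma functions, and the log-weights are shifted within each bin so that the weighted averages remain finite:

```python
import math
import numpy as np

N, V0, R = 200, 0.1, 20      # illustrative system size, interaction, bin count

def ln_g(c):
    """Log of the binomial degeneracy N! / ((N(1-c))! (Nc)!), Eq. (7)."""
    return (math.lgamma(N + 1) - math.lgamma(N * (1 - c) + 1)
            - math.lgamma(N * c + 1))

def E(c):
    """Mean-field configurational energy, Eq. (6): (1/2) N c (1-c) V(0)."""
    return 0.5 * N * c * (1 - c) * V0

levels = []
for j in range(R):           # midpoint-rule integrals over each bin [j/R,(j+1)/R]
    cs = np.linspace(j / R, (j + 1) / R, 101)[1:-1]
    lg = np.array([ln_g(c) for c in cs])
    w = np.exp(lg - lg.max())            # per-bin shift cancels in the averages
    Ej = (w * E(cs)).sum() / w.sum()     # degeneracy-weighted bin energy
    cj = (w * cs).sum() / w.sum()        # degeneracy-weighted bin composition
    levels.append((Ej, cj))
```

Each binned $E_j$ closely tracks the mean-field curve evaluated at $c_j$, and the maximal level sits near $c = 1/2$ where the segregation-limit reference makes the configurational energy largest.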
The system being considered here is analogous to what Gibbs called a ``homogeneous part of the given mass'' in his seminal paper on the equilibrium of heterogeneous substances \cite{gibbs1906scientific}. His homogeneous part is spatially uniform in chemical composition and physical state, and it is a subsystem of the larger isolated system he considers at equilibrium. While a uniform system may seem at odds with the concept of fluctuations, it is entirely consistent with the way a system is represented in the SEAQT framework. Fluctuations, or changes in composition or physical state, in the SEAQT system are reflected by multimode probability distributions among the energy eigenlevels, not by spatial variations in a property. Gibbs demonstrated that equilibrium is reached when the intensive property values (temperature, pressure, and chemical potential) of each homogeneous part are identical. The SEAQT framework is used here to identify the path by which a part reaches this equilibrium.
\subsection{\label{chap5_sec:level2_3}Specification of initial states}
The evolution of a binary solid-solution that is quenched and annealed within a miscibility gap is considered in this work. The phase diagram for a binary alloy with a high-temperature solid-solution and a miscibility gap at lower temperatures is shown in Fig.\;\ref{fig5:phase_diagram_phase_separation}. The pseudo-eigenstructure of such an alloy corresponds to a system with a positive $V(\bm{0})$ in Eq.\;(\ref{eq5:total_energy_phase_separation}).
The initial disordered solid-solution ({\it S.S.}) is annealed at a high temperature, $T^H$\;($=T_0$), and then quenched to a lower temperature, $T^L$\;($=T_R$), and annealed at that temperature. The initial state can be prepared using the (semi-) \cite{lesar2013introduction} grand canonical distribution:
\begin{equation}
p^{0}_j=\frac{g_j e^{-\beta^0 ( E_j +\mu_A N_{A,j} +\mu_B N_{B,j} ) }}{\Xi} \; , \label{eq5:grand_canonical_distribution}
\end{equation}
where $\beta^0=1/k_BT_0$, $\mu_A$ and $\mu_B$ are, respectively, the chemical potentials of A atoms and B atoms, and $\Xi$ is the grand partition function, which is given by
\begin{equation}
\Xi \equiv \sum\limits_i g_i e^{-\beta^0 ( E_i +\mu_A N_{A,i} +\mu_B N_{B,i} )} \; . \label{eq5:grand_partition_function}
\end{equation}
The target alloy composition is obtained by adjusting the chemical potentials. Note that the chemical potentials need to be adjusted only for the initial state: once the initial state is prepared using Eq.\;(\ref{eq5:grand_canonical_distribution}), the alloy composition is fixed and conserved in the kinetic calculations via Eq.\;(\ref{eq5:equation_motion_grand_canonical_heat}).
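As an illustrative sketch (not the implementation used in this work), the occupation probabilities of Eq.\;(\ref{eq5:grand_canonical_distribution}) can be evaluated in log space for numerical stability. A simple binomial degeneracy is assumed for $g_j$, and the energy model $E_j = V(\bm{0})N_0\,c(1-c)$, the value $\beta V(\bm{0})=1/0.6$, and zero chemical potentials are placeholder assumptions chosen only to yield a single-peaked distribution above the miscibility gap:

```python
import numpy as np
from scipy.special import gammaln

def initial_distribution(N0, R=501, beta=1.0 / 0.6, V0=1.0, mu_A=0.0, mu_B=0.0):
    """Sketch of Eq. (grand_canonical_distribution): occupation probabilities
    over R concentration levels for an A-B alloy of N0 atoms, evaluated in
    log space so large exponents do not overflow."""
    N_B = np.rint(np.linspace(0.0, 1.0, R) * N0)   # B atoms per level
    N_A = N0 - N_B
    c = N_B / N0                                   # concentration of B
    # ln g_j: binomial degeneracy of arranging N_B B-atoms on N0 sites
    ln_g = gammaln(N0 + 1) - gammaln(N_B + 1) - gammaln(N_A + 1)
    # Illustrative mean-field mixing energy; positive V0 favours unmixing
    E = V0 * N0 * c * (1.0 - c)
    # Exponent of Eq. (grand_canonical_distribution), sign convention as written
    ln_w = ln_g - beta * (E + mu_A * N_A + mu_B * N_B)
    ln_w -= ln_w.max()                             # log-sum-exp shift
    p = np.exp(ln_w)
    return c, p / p.sum()

c, p = initial_distribution(N0=1000)
```

Because the exponent is formed in log space and normalised with a log-sum-exp shift, the same routine remains stable for the large degeneracies that occur at $N_0 = 10^3$ and above.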
Although not a necessary assumption, preparing the initial state of the alloy system with Eq.\;(\ref{eq5:grand_canonical_distribution}) alone means that its initial state is in equilibrium at the initial, high temperature, $T_0$. The initial state of the composite system (i.e., alloy system plus reservoir), on the other hand, is a non-equilibrium state, since the alloy system and the reservoir are not initially in mutual equilibrium. This non-equilibrium state is in effect what Li and von Spakovsky \cite{li2016steepest,li2016steepest2} call a $2^{nd}$-order hypoequilibrium state. The concept of hypoequilibrium provides a simple relaxation pattern for a system by properly dividing the system into a number of subsystems (or subspaces). The steepest entropy ascent principle under hypoequilibrium ensures that each subsystem moves along its own manifold of different equilibrium states
until the states of both subsystems (alloy system plus reservoir) arrive at a final equilibrium state of the composite system in which the two subsystems are in mutual stable equilibrium with each other. In order to explore the effects on state evolution of not assuming that the alloy subsystem is initially in equilibrium, concentration fluctuations are introduced into the initial state to drive it away from equilibrium.
This is done by using an occupation probability distribution corresponding to a smaller number of particles than are actually present in the system, $N_0 < N$. A smaller number of particles reduces the degeneracies of some of the energy eigenlevels, $g_j$, and generates an initial occupation probability distribution calculated from Eq.\;(\ref{eq5:grand_canonical_distribution}) that is broader than the equilibrium distribution. The effects of the number of particles on initial states and kinetic paths are discussed in Sec.\;\ref{chap5_sec:level3_2}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.26]{fig5_phase_diagram_phase_separation}
\caption
{\label{fig5:phase_diagram_phase_separation} A phase diagram with a positive $V(\bm{0})$. The solid line is the solvus line, inside of which is a two-phase region of different solid-solutions. The spinodal curve determined from the free-energy \cite{khachaturyan2013theory} is shown as the broken line. The vertical axis is a normalized temperature, $T^*=\frac{k_BT}{V(\bm{0})}$. }
\end{center}
\end{figure}
\section{\label{chap5_sec:level3}Results and Discussion}
\subsection{\label{chap5_sec:level3_1}Continuous and discontinuous transformations}
The SEAQT equation of motion, Eq.\;(\ref{eq5:equation_motion_grand_canonical_heat}), is solved with Eqs.\;(\ref{eq5:energy_eigenvalue_pseudo})\;--\;(\ref{eq5:fraction_down_spin_pseudo}) to track the decomposition process in two alloys, A--40.0\;at.\%\;B and A--30.0\;at.\%\;B, quenched from $T^*_0=\frac{k_BT_0}{V(\bm{0})}=0.30$ to $T^*_R=0.20$. Solving the equation of motion gives the occupancy probabilities of the atomic configurations (distinguished by the concentration of B-type atoms, $c$) as a function of time from the initial state to the final stable equilibrium state.
Figure\;\ref{fig5:kinetic_separation_N0_1000}\;(a) shows the occupancy probabilities as a function of $c$ at five different times (expressed as a dimensionless ratio of time to the relaxation time, $t^*=t/\tau$) in an A--40.0\;at.\%\;B alloy. From the phase diagram in Fig.\;\ref{fig5:phase_diagram_phase_separation}, quenching this alloy from $T^*_0=0.30$ to $T^*_R=0.20$ falls within the spinodal limits and should thus lead to a continuous transformation. The dotted curve in Fig.\;\ref{fig5:kinetic_separation_N0_1000}\;(a) represents the initial occupancy probability distribution at the high temperature, $T^*_0=0.30$. As time increases, the occupancy probability evolves from this initial distribution into two peaks (one at a dilute concentration of B and the other at a rich concentration) that eventually at $t^*=3.0$ correspond to the compositions of the two equilibrium solid-solutions at the temperature of the reservoir, $T^*_R=0.20$. At early times, the probability distribution between the two peaks of the evolving phases is non-zero --- this is a signature of a continuous transformation: there is a finite probability of finding any concentration between those of the two developing phases.
A contrasting example is shown in Fig.\;\ref{fig5:kinetic_separation_N0_1000}\;(b), which presents the equivalent heat treatment for an A--30.0\;at.\%\;B alloy. In this comparatively dilute alloy, the same thermal cycle places the alloy very close to the spinodal limit at the annealing (or reservoir) temperature. In this case, the initial probability distribution in Fig.\;\ref{fig5:kinetic_separation_N0_1000}\;(b) shifts to more dilute concentrations with time, and a new phase suddenly appears at high concentrations. The occupation probabilities of atomic configurations between the dilute and high concentrations are zero --- this behavior is a signature of a discontinuous transformation (a nucleation and growth mechanism). The B-rich phase with concentrations in the range $0.65 < c < 0.8$ appears from the initial distribution, but the occupation probabilities between $c=0.4$ and $c=0.65$ are zero.
Considering the influence of alloy composition, as the B concentration in the alloy increases from $c=0.3$ (Fig.\;\ref{fig5:kinetic_separation_N0_1000}\;(b)) to $c=0.4$ (Fig.\;\ref{fig5:kinetic_separation_N0_1000}\;(a)), the transformation mechanism switches from discontinuous to continuous. This transition is consistent with conventional wisdom in that the driving force for transformation increases with $c$ at the annealing temperature and has the effect of lowering the barrier to nucleation. Although not shown, it was also confirmed that the kinetic path is sensitive in a similar fashion to the annealing temperature: lowering the annealing temperature increases the driving force for decomposition and, as a result, shifts the mechanism from a discontinuous transformation path at high annealing temperatures to a continuous transformation path at lower annealing temperatures.
It is worth noting that the equation of motion is a system of $R$ first-order, ordinary differential equations ($R$ is the number of energy eigenlevels). From a computational standpoint, these are relatively easy to solve. For the system considered here ($R=500$ and $N=10^4$), the kinetic path from the initial state to stable equilibrium can be calculated in a few minutes on a laptop computer with 8\;GB of memory. This is an added advantage of the SEAQT approach when compared to other methods (e.g., kinetic Monte Carlo), where extensive information on particles and possible paths is required at each time-step.
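To illustrate this modest computational cost, the sketch below integrates an SEAQT-style equation of motion as $R$ coupled first-order ODEs with an off-the-shelf solver. The right-hand side is the simple system--reservoir heat-interaction form $dp_j/dt^* = p_j\,[(s_j - \langle s\rangle) - \beta_R(E_j - \langle E\rangle)]$ with $s_j=-\ln(p_j/g_j)$, used here only as a stand-in for the full grand-canonical Eq.\;(\ref{eq5:equation_motion_grand_canonical_heat}); the energies, degeneracies, and reservoir temperature are illustrative placeholders:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative stand-in for an SEAQT equation of motion: R coupled ODEs whose
# fixed point is the canonical distribution g_j * exp(-beta_R * E_j).
R = 200
E = np.linspace(0.0, 1.0, R)      # illustrative eigenlevel energies
g = np.ones(R)                    # illustrative degeneracies
beta_R = 4.0                      # reservoir inverse temperature

def rhs(t, p):
    p = np.clip(p, 1e-300, None)  # guard the logarithm
    s = -np.log(p / g)            # level "entropies" s_j = -ln(p_j/g_j)
    # dp_j/dt* = p_j [ (s_j - <s>) - beta_R (E_j - <E>) ]
    return p * ((s - p @ s) - beta_R * (E - p @ E))

p0 = np.full(R, 1.0 / R)          # uniform initial state
sol = solve_ivp(rhs, (0.0, 20.0), p0, rtol=1e-8, atol=1e-12)
p_final = sol.y[:, -1]
```

The state vector relaxes to mutual equilibrium with the reservoir in a single call to a standard adaptive Runge--Kutta solver, which is the sense in which the kinetic path is cheap to compute.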
\begin{figure}
\includegraphics[scale=0.42]{fig5_kinetic_separation_N0_1000_C_40B}
\includegraphics[scale=0.42]{fig5_kinetic_separation_N0_1000_C_30B}
\caption{\label{fig5:kinetic_separation_N0_1000} The calculated phase separation processes in (a) A--40.0\;at.\%\;B and (b) A--30.0\;at.\%\;B alloy systems at $T^{*}_R=0.20$ using $T^*_0=0.30$, $N=10^4$, and $N_0=10^3$. The time, $t$, is normalized by the relaxation time, $\tau$, as $t^*=t/\tau$. }
\end{figure}
\subsection{\label{chap5_sec:level3_2}Estimated spinodal curves}
Of course, since this is an initial value problem, the kinetic path is sensitive to the initial probability distribution. When an initial probability distribution, $p_j^0$, is prepared using a smaller $N_0$ (which corresponds to an initial state further from stable equilibrium at the initial temperature, $T^*_0$), the transformation path changes. The effect of $N_0$ can be seen from Fig.\;\ref{fig5:effects_of_Number_of_particles}, where the initial probability distributions for $N_0=1000$, $500$, $200$, and $100$ are calculated with Eq.\;(\ref{eq5:grand_canonical_distribution}) for an A--50.0\;at.\%\;B alloy at $T^*_0=0.30$. The larger the $N_0$ used to prepare the initial state, the sharper the peak in the occupancy probability distribution. In the limit of large $N_0$, the distribution approaches a delta function (at the most probable state of statistical mechanics).
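This sharpening with $N_0$ can be checked with a toy calculation. Assuming, for illustration only, that the energy terms are neglected so the distribution is governed by the binomial degeneracies alone, the standard deviation of the concentration shrinks as roughly $1/\sqrt{N_0}$:

```python
import numpy as np
from scipy.special import gammaln

def conc_std(N0):
    """Standard deviation of the B concentration for a degeneracy-only
    (ideal-mixing) distribution over N0 atoms -- an illustrative proxy for
    the peak width in the initial occupancy distribution."""
    N_B = np.arange(N0 + 1)
    c = N_B / N0
    # ln of the binomial degeneracy C(N0, N_B)
    ln_g = gammaln(N0 + 1) - gammaln(N_B + 1) - gammaln(N0 - N_B + 1)
    p = np.exp(ln_g - ln_g.max())
    p /= p.sum()
    mean = p @ c
    return np.sqrt(p @ (c - mean) ** 2)

widths = {N0: conc_std(N0) for N0 in (100, 200, 500, 1000)}
```

For the symmetric binomial the width is exactly $0.5/\sqrt{N_0}$, so halving the width requires quadrupling $N_0$, consistent with the progressively sharper peaks in the figure.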
\begin{figure}
\begin{center}
\includegraphics[scale=0.42]{fig5_effects_of_Number_of_particles}
\caption{\label{fig5:effects_of_Number_of_particles} The calculated initial probability distributions in an A--50.0\;at.\%\;B alloy system at $T^*_0=0.30$ using Eq.\;(\ref{eq5:grand_canonical_distribution}) with $N_0=1000$, $500$, $200$, and $100$. The occupation probability calculated using $N_0=N=10^4$ is also shown as a dotted line. }
\end{center}
\end{figure}
The kinetic pathways the system follows from the initial probability distributions of Fig.\;\ref{fig5:effects_of_Number_of_particles} are shown in Fig.\;\ref{fig5:E_S_diagram_kinetic_pathway}, where the kinetic path calculated with $N_0=N=10^4$
is shown as a dotted line. As seen from the enlarged inset in the figure, the deviation from the curve for $N_0=N=10^4$ becomes more significant as the initial fluctuation becomes larger. Note that although the initial states of each kinetic path in the energy-entropy diagram (Fig.\;\ref{fig5:E_S_diagram_kinetic_pathway}) are different, the final states of the paths correspond to the same stable equilibrium state since in each case the final state is one in which the alloy system is in mutual stable equilibrium with the same reservoir.
\begin{figure}
\begin{center}
\includegraphics[scale=0.12]{fig5_E_S_diagram_kinetic_pathway}
\caption{\label{fig5:E_S_diagram_kinetic_pathway} The kinetic pathways of the phase separation process calculated with the SEAQT model using the initial probability distributions shown in Fig.\;\ref{fig5:effects_of_Number_of_particles} with $N=10^4$ (A--50.0\;at.\%\;B alloy with $T^{*}_0=0.30$ and $T^{*}_R=0.20$).
The initial states of each path are indicated by arrows and the final states are shown by an open circle. The specific energy and entropy are normalized and denoted as $e^*$ and $s^*$, respectively. }
\end{center}
\end{figure}
The fact that the initial state can affect the kinetic path has an interesting implication when it comes to representing the spinodal limit. When a phase decomposition process is continuous (spinodal in the present example), there is a non-zero occupation probability between the concentrations associated with the two stable concentration peaks during decomposition. On the other hand, when the transformation is discontinuous, there is a concentration range over which the occupation probabilities are zero when the second phase (precipitate) appears. Therefore, a spinodal curve can be determined by checking whether the occupation probabilities in the concentration range between the two peaks are zero during the decomposition process. In a numerical calculation, however, the probabilities have finite non-zero values even when those values are close to zero (e.g., $10^{-20}$). Practically speaking, we can select an arbitrary value, say, $10^{-5}$, as a cutoff below which the occupation probability is taken to be effectively zero to distinguish discontinuous occupation probabilities from continuous (non-zero) values. That is, when the second phase emerges and all probabilities between the two peaks in the occupation probabilities are below $10^{-5}$, the transformation is taken to be discontinuous. Spinodal curves calculated from this criterion are shown in Fig.\;\ref{fig5:estimated_spinodal_line}. These spinodal curves are clearly sensitive to the initial state of the alloy system and are quite different from those determined from a free-energy analysis (the second derivative of the free-energy versus $c$ curve). This indicates that the onset of a continuous transformation is not simply a matter of the thermodynamic driving force at the transformation temperature; it also depends upon the initial state.
Note that the criterion used to distinguish between continuous and discontinuous transformations, i.e., $10^{-5}$ here, should depend on the number of intervals in the concentration of B atoms, $R$ (see Sec.\;\ref{chap5_sec:level2_2}). When a larger value of $R$ is used, the criterion should be reduced accordingly (here $R=500$ is used for the calculations).
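A minimal sketch of this criterion follows. It assumes the peak positions are found as strict local maxima, and it tests (as a slight simplification of the statement above) whether the occupancy probability anywhere between the two peaks falls below the cutoff; the two trial distributions are synthetic Gaussians, not SEAQT output:

```python
import numpy as np

def is_discontinuous(p, cutoff=1e-5):
    """Classify a two-peaked occupancy distribution: discontinuous if the
    probability between the outermost peaks drops below the cutoff."""
    peaks = [j for j in range(1, len(p) - 1)
             if p[j] > p[j - 1] and p[j] > p[j + 1] and p[j] > cutoff]
    if len(peaks) < 2:
        return False                      # only one phase present
    lo, hi = peaks[0], peaks[-1]
    return bool(p[lo + 1:hi].min() < cutoff)

c = np.linspace(0.0, 1.0, 501)
# Two well-separated peaks with a numerically "zero" valley between them
p_disc = np.exp(-((c - 0.1) / 0.02) ** 2) + np.exp(-((c - 0.7) / 0.02) ** 2)
p_disc /= p_disc.sum()
# The same peaks bridged by a small but finite plateau
p_cont = p_disc + 1e-4
p_cont /= p_cont.sum()
```

With $R$ grid points the normalised probabilities scale like $1/R$, which is why the cutoff must shrink as $R$ grows.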
\begin{figure}
\begin{center}
\includegraphics[scale=0.24]{fig5_estimated_spinodal_line}
\caption{\label{fig5:estimated_spinodal_line} The estimated spinodal curves using $T^*_0=0.30$ with the different initial probability distributions, $N_0=1000$, $500$, $200$, and $100$. When $T^*_R$ is inside/outside the spinodal curve, the transformation shows a continuous/discontinuous behavior. The solvus line (solid black line) and the spinodal curve (broken black line), which is determined from the free-energy analysis, are also shown together (Fig.\;\ref{fig5:phase_diagram_phase_separation}). }
\end{center}
\end{figure}
\subsection{\label{chap5_sec:level3_3}Scaling to dimensional time}
In the results shown in Fig.\;\ref{fig5:kinetic_separation_N0_1000}, the times, $t^*$, represent a dimensionless ratio of the actual dimensional time, $t$, to the relaxation time, $\tau$, from the SEAQT equation of motion. The latter is a variable that tracks the dynamic progress from the initial state to the final equilibrium state. The dimensional time can be extracted from $t^*$ through a comparison with experimental data \cite{beretta2017steepest,li2018multiscale} or from \textit{ab initio} calculations \cite{beretta2014steepest,li2016generalized,li2018steepest,yamada2018method}.
While the SEAQT framework predicts the transformation mechanism (nucleation-growth or spinodal decomposition) for a given eigenstructure by selecting the path from the initial state with the steepest entropy ascent principle, the actual time required to traverse this path depends upon the rate of entropy production associated with the unit processes. For a nucleation process involving the assembly of subcritical embryos, entropy production is much slower than for diffusion in a spinodally decomposing material. Thus, the scaling that maps the relaxation time, $\tau$, to dimensional time should be different for the nucleation-growth and spinodal mechanisms.
Here, the dimensional time dependence is extracted via comparisons of the relaxation time to experimental transformation kinetics from the Cu--Co alloy system.
The Cu--Co system has a positive mixing enthalpy (positive $V(\bm{0})$ in Eq.\;(\ref{eq5:total_energy_phase_separation})) and a large miscibility gap extending over almost the whole concentration range (see the phase diagram in reference \cite{nishizawa1984co}). The discontinuous transformation mechanism (nucleation-growth) has been investigated extensively in the Cu-rich region (Cu--0.5$\sim$2.7\;at.\%\;Co alloys) \cite{legoues1984influence,wendt1985atom,hattenhauer1993decomposition}, and the continuous transformation mechanism (spinodal decomposition) has been observed in a Cu--10\;at.\%\;Co alloy at 713\;K \cite{busch1996high}.
The procedures for scaling the dimensional time to the relaxation time for each transformation mechanism (nucleation-growth and spinodal decomposition) are given in Appendices\;\ref{chap5_sec:level5_1} and \ref{chap5_sec:level5_2}, respectively. After scaling the relaxation time, $\tau$, to experimental data, the calculated kinetics from the SEAQT framework can be presented in terms of dimensional time. Figures\;\ref{fig5:kinetic_separation_realtime_C_1B} and \ref{fig5:kinetic_separation_realtime_C_50B} show the time-dependence of the nucleated precipitate volume fraction (the Co-rich phase) and the concentration of Co atoms in the spinodally decomposed phases, respectively. The predicted time-evolution processes show opposite tendencies: the speed of the transformation slows as nucleation and growth proceed, whereas spinodal decomposition is predicted to accelerate as the transformation proceeds. Thus, the different experimental scalings for $\tau$ make it possible to place nucleation-growth and spinodal decomposition on very different dimensional time scales: spinodal decomposition is scaled to times less than a second, whereas nucleation-growth extends over a period of 2 or 3\;hours.
\begin{figure}
\begin{center}
\includegraphics[scale=0.30]{fig5_kinetic_separation_realtime_C_1B}
\caption
{\label{fig5:kinetic_separation_realtime_C_1B} The dimensional time dependence of the precipitate volume fraction during nucleation and growth in Cu--1.0\;at.\%\;Co annealed at 823\;K calculated with SEAQT using $T^{*}_R=0.089$, $T^{*}_0=0.30$, $N=10^4$, and $N_0=10^2$. The relaxation time is correlated with the experimental kinetics of Cu--1\;at.\%\;Co alloy annealed at 823\;K \cite{hattenhauer1993decomposition}. The inset has a time range of 0-4\;min and the incubation period for the nucleation process obtained from the intercept with the abscissa is approximately 1.2\;min. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.54]{fig5_kinetic_separation_realtime_C_50B}
\caption
{\label{fig5:kinetic_separation_realtime_C_50B} The dimensional time dependence of the Co concentration in the Cu- and Co-rich phases during spinodal decomposition in Cu--50.0\;at.\%\;Co annealed at 823\;K calculated with SEAQT using $T^{*}_R=0.089$, $T^{*}_0=0.30$, $N=10^4$, and $N_0=10^2$. The relaxation time is correlated with the experimental diffusion coefficient \cite{dohl1984measurement} and the characteristic wave length of the spinodal microstructure \cite{liu1993spinodal}.}
\end{center}
\end{figure}
\section{\label{chap5_sec:level4}Conclusions}
The quantum mechanical-based SEAQT framework was applied to the decomposition of a binary solid-solution using a pseudo-eigenstructure based on the mean-field approximation. In this Part\;I, the different behaviors of continuous and discontinuous transformations are explored. It is confirmed that the SEAQT approach is able to predict the characteristics of both continuous and discontinuous transformation mechanisms. The kinetic path is sensitive to the initial state of the alloy and the annealing temperature, and the spinodal limits estimated from the SEAQT model show some quantitative differences from the conventional spinodal limit calculated from a free-energy analysis. Furthermore, very different dimensional time dependencies of the continuous and discontinuous transformation mechanisms are readily obtained by calibrating the SEAQT relaxation time to experimental spinodal data and nucleation-growth data.
It is noteworthy that the SEAQT model with a mean-field approximation is computationally efficient: kinetic paths from an initial state to stable equilibrium in the system considered here were obtained in minutes on a standard laptop computer.
\section*{ACKNOWLEDGEMENT}
We acknowledge the National Science Foundation (NSF) for support through Grant DMR-1506936. \\
\begin{appendices}
\section*{\label{chap5_sec:level5}Appendix}
\section{\label{chap5_sec:level5_1}Scaling to dimensional time for nucleation-growth}
The nucleation-growth mechanism has been investigated in the Cu--Co alloy system \cite{legoues1984influence,wendt1985atom,hattenhauer1993decomposition}. The relaxation time can be related to the dimensional time, $t$, in the calculated discontinuous phase transformation using experimental data for a Cu--1\;at.\%\;Co alloy isothermally aged at 823\;K \cite{hattenhauer1993decomposition}.
The measured precipitate volume fraction for a Cu--1\;at.\%\;Co alloy at 823\;K is shown in Fig.\;\ref{fig5:precipitated_volume_fraction_experiments}, together with the following fitting function:
\begin{equation}
f_p = f_p^{\mbox{\scriptsize max}} - e^{-K t^n} \;, \label{eq5:fitting_function_nucleation_growth}
\end{equation}
where $f_p$ is the volume fraction of the precipitate, $f_p^{\mbox{\scriptsize max}}$ is the maximum measured value of $f_p$, $t$ is the annealing time, and $K$ and $n$ are the fitting parameters. Equation\;(\ref{eq5:fitting_function_nucleation_growth}) is rewritten as
\begin{equation}
t = \left[ - \frac{1}{K} \mathrm{ln} (f_p^{\mbox{\scriptsize max}} - f_p) \right] ^{\frac{1}{n}} \; . \label{eq5:fitting_function_nucleation_growth_t}
\end{equation}
The annealing time, $t$, corresponding to any calculated volume fraction, $f_p$, can then be determined from Eq.\;(\ref{eq5:fitting_function_nucleation_growth_t}).
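The fit and its inversion can be sketched as follows. The ``data'' here are synthetic, generated from the fitted constants quoted in the caption of Fig.\;\ref{fig5:precipitated_volume_fraction_experiments} ($f_p^{\mbox{\scriptsize max}}=0.71$, $K=0.3217$, $n=0.5004$) rather than the measured points:

```python
import numpy as np
from scipy.optimize import curve_fit

# Eq. (fitting_function_nucleation_growth): f_p(t) = f_max - exp(-K t^n)
def f_model(t, K, n, f_max=0.71):
    return f_max - np.exp(-K * t ** n)

# Eq. (fitting_function_nucleation_growth_t): invert f_p back to time
def t_of_fp(f_p, K, n, f_max=0.71):
    return (-np.log(f_max - f_p) / K) ** (1.0 / n)

# Synthetic "measurements" from the quoted constants (minutes)
t_data = np.linspace(1.5, 180.0, 40)
f_data = f_model(t_data, 0.3217, 0.5004)

# Only K and n are fitted (p0 has two entries); f_max keeps its default
(K, n), _ = curve_fit(f_model, t_data, f_data, p0=(0.3, 0.5))
```

Mapping the SEAQT-computed $f_p(t^*)$ through `t_of_fp` is what assigns a dimensional annealing time to each value of $t^*$.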
\begin{figure}
\begin{center}
\includegraphics[scale=0.60]{fig5_precipitated_volume_fraction_experiments}
\caption
{\label{fig5:precipitated_volume_fraction_experiments} The experimentally measured volume fraction of precipitate (or Co-rich phase) in a Cu--1\;at.\%\;Co alloy isothermally aged at 823\;K. The black circles are the original data \cite{hattenhauer1993decomposition} and the dotted line is the fitting function, $f_p = f_p^{\mbox{\scriptsize max}} - e^{-K t^n}$, where $f_p^{\mbox{\scriptsize max}}=0.71$, $K=0.3217$, and $n=0.5004$.}
\end{center}
\end{figure}
Although the real temperatures of the calculated phase diagram (shown in Fig.\;\ref{fig5:phase_diagram_phase_separation}) were estimated using the reported regular solution parameter, $\Omega=V(\bm{0})/2=33,300$\;(J/mol) \cite{hattenhauer1993decomposition}, the calculated phase diagram differs somewhat from the experimentally determined one \cite{nishizawa1984co}. For this reason, the normalized temperature corresponding to 823\;K is found by searching for the condition at which the calculated $f_p^{\mbox{\scriptsize max}}$ becomes 0.71. Since $f_p^{\mbox{\scriptsize max}} \sim 0.71$ at $T^{*}_R=0.089$, this normalized annealing temperature is used here for the calculation. The calculated time dependence of the volume fraction of precipitate, $f_p$, predicted by SEAQT is shown in Fig.\;\ref{fig5:volume_fraction_precipitate_SEAQT}, and the resulting time dependence of the relaxation time, $\tau$, is shown in Fig.\;\ref{fig5:relaxation_time_nucleation_mechanism}. Note that Eq.\;(\ref{eq5:fitting_function_nucleation_growth}) takes negative values below $t \approx 1$ (see Fig.\;\ref{fig5:precipitated_volume_fraction_experiments}), but this does not cause difficulties when determining the relaxation time.
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{fig5_volume_fraction_precipitate_SEAQT}
\caption{\label{fig5:volume_fraction_precipitate_SEAQT} The time dependences of the volume fraction of precipitate (or B-rich phase) in a A--1.0\;at.\%\;B alloy system calculated with the SEAQT modeling using $T^{*}_R=0.089$, $T^{*}_0=0.30$, $N=10^4$, and $N_0=10^2$. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{fig5_relaxation_time_nucleation_mechanism}
\caption{\label{fig5:relaxation_time_nucleation_mechanism} The time dependence of the relaxation time, $\tau$, in a Cu--1.0\;at.\%\;Co alloy system when a sample with some initial concentration fluctuations is annealed at $823$\;K. It is estimated using Eq.\;(\ref{eq5:fitting_function_nucleation_growth_t}) with the result shown in Fig.\;\ref{fig5:volume_fraction_precipitate_SEAQT}. }
\end{center}
\end{figure}
\section{\label{chap5_sec:level5_2}Scaling to dimensional time for spinodal decomposition}
To scale the relaxation time, $\tau$, to dimensional time for a continuous transformation, the reported diffusion coefficient \cite{dohl1984measurement} and the characteristic wave length of the spinodal microstructure \cite{liu1993spinodal} in a Cu--Co alloy system are used. Atomic diffusion is assumed between the cube-shaped A-rich ($\alpha$) and B-rich ($\beta$) phases in a A--50.0\;at.\%\;B alloy system, where the edge length of the phases, $L$, corresponds to half the characteristic wave length of the spinodal microstructure, $\lambda_c$ (see Fig.\;\ref{fig5:atomic_diffusion_pic_cubic_spinodal}). The diffusion equation for a constant diffusivity is given by
\begin{equation}
\frac{\partial c^{\alpha / \beta}}{\partial t} = D \nabla^2 c^ {\alpha / \beta} \;, \label{eq5:diffusion_equation}
\end{equation}
where $D$ is the diffusion coefficient and $c^ {\alpha / \beta}$ is the concentration of B-type atoms in the $\alpha/\beta$-phase. The Laplacian can be replaced by expressing the concentration on each of the six surfaces of the cube as a Taylor series expanded about $c^ {\alpha / \beta}$ at the cube center, $c^ {\alpha / \beta}_0$, and summing the series (up to the quadratic terms). With this approximation, Eq.\;(\ref{eq5:diffusion_equation}) becomes
\begin{equation}
\frac{\partial c^{\alpha / \beta}}{\partial t} \approx D \frac{6}{(L/2)^2} (c^{\beta / \alpha} - c^{\alpha / \beta}_0) \;, \label{eq5:diffusion_equation_approximation}
\end{equation}
where $L$ is the edge length of the cube-shaped phases and is given as $L=\lambda_c/2$. Averaging the concentration of B-type atoms over each phase, $\langle c \rangle^{\alpha / \beta}$, Eq.\;(\ref{eq5:diffusion_equation_approximation}) becomes
\begin{equation}
\frac{\partial \langle c \rangle^{\alpha / \beta}}{\partial t} = D \frac{6}{(L/2)^2} (\langle c \rangle^{\beta / \alpha} - \langle c \rangle^{\alpha / \beta}) \;. \label{eq5:diffusion_equation_approximation_average}
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{fig5_atomic_diffusion_pic_cubic_spinodal}
\caption{\label{fig5:atomic_diffusion_pic_cubic_spinodal} (a) One dimensional atomic diffusion between assumed cube-shaped phases with side length, $L$ (each phase corresponds to either $\alpha$- or $\beta$-phase). (b) the schematic time-evolution process of spinodal decomposition; the broken lines are part way through the evolution process, and the solid lines are the final distribution. The side length of the cube-shaped regions shown in (a) would correspond to half of the characteristic wave length of the spinodal microstructure, $\lambda _c$; i.e., $L = \lambda _c /2$. }
\end{center}
\end{figure}
For the equivalent SEAQT system, the rate of change of the concentration is given as
\begin{equation}
\frac{\partial \langle c \rangle^{\alpha / \beta}}{\partial t} \Rightarrow \frac{d \langle c \rangle^{\alpha / \beta}}{d t^*} \;, \label{eq5:dc_dt_SEAQT}
\end{equation}
where $t^*$ is a normalized time ($t^*=t/\tau$). Thus, the relaxation time, $\tau$, is derived as
\begin{equation}
\tau=\frac{(\lambda_c/2)^2}{24 D (\langle c \rangle^{\beta / \alpha} - \langle c \rangle^{\alpha / \beta})} \frac{d \langle c \rangle^{\alpha / \beta}}{d t^*} \;. \label{eq5:relaxation_time_spinodal}
\end{equation}
Note that $\langle c \rangle^{\alpha / \beta}$ is a function of time, and $D$ is also time-dependent because the temperature of the alloy system changes with time. Here, however, $D$ is assumed to be time-independent and is evaluated at the annealing temperature.
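A sketch of evaluating Eq.\;(\ref{eq5:relaxation_time_spinodal}) from a computed trajectory follows. The concentration histories below are illustrative exponentials (not SEAQT output), the prefactor of $D$ is assumed to be in cm$^2$/s since the reported expression does not state its units, and magnitudes are used so that $\tau$ comes out positive:

```python
import numpy as np

kB_eV = 8.617e-5                              # Boltzmann constant, eV/K
T = 823.0                                     # annealing temperature, K
# D = 0.43 exp(-2.22 eV / kB T); prefactor assumed to be in cm^2/s
D = 0.43e-4 * np.exp(-2.22 / (kB_eV * T))     # m^2/s
lam_c = 3.5e-9                                # characteristic wavelength, m

# Illustrative phase-average concentration trajectories vs t* (t*=0 avoided
# because the two phases start at the same composition)
t_star = np.linspace(0.1, 3.0, 291)
c_alpha = 0.5 - 0.4 * (1.0 - np.exp(-t_star))
c_beta = 0.5 + 0.4 * (1.0 - np.exp(-t_star))

# Eq. (relaxation_time_spinodal):
# tau = (lam_c/2)^2 / (24 D (<c>^beta - <c>^alpha)) * |d<c>^alpha/dt*|
dc_dtstar = np.gradient(c_alpha, t_star)
tau = (lam_c / 2) ** 2 / (24.0 * D * (c_beta - c_alpha)) * np.abs(dc_dtstar)
```

With the quoted $D$ and $\lambda_c$ at 823\;K, the resulting $\tau$ values are fractions of a second and fall as decomposition proceeds, consistent with the sub-second spinodal time scale discussed in Sec.\;\ref{chap5_sec:level3_3}.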
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{fig5_average_concB_each_phase}
\caption{\label{fig5:average_concB_each_phase} The time dependence of average concentration of B-type atoms in A-rich ($\alpha$) and B-rich ($\beta$) phases calculated with the SEAQT model using $T^{*}_R=0.089$, $T^{*}_0=0.30$, $N=10^4$, and $N_0=10^2$. The averages are, respectively, taken from the calculated occupation probabilities in the concentration ranges 0$\sim$50\;at.\%\;B and 50$\sim$100\;at.\%\;B. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{fig5_relaxation_time_spinodal}
\caption
{\label{fig5:relaxation_time_spinodal} The time dependence of the relaxation time, $\tau$, in a Cu--50.0\;at.\%\;Co alloy system when a sample with some initial concentration fluctuations is annealed at $823$\;K. It is estimated using Eq.\;(\ref{eq5:relaxation_time_spinodal}) with the result shown in Fig.\;\ref{fig5:average_concB_each_phase} and the reported experimental data \cite{dohl1984measurement,liu1993spinodal}, $D=0.43\; \mbox{exp}(- 2.22 \; \mbox{eV} /k_BT)$ and $\lambda_c \approx 3.5$\;nm. }
\end{center}
\end{figure}
The experimental data for the diffusion coefficient and the characteristic spinodal wave length in the Cu--Co alloy system are, respectively, $D=0.43\;\mbox{exp}(- 2.22 \; \mbox{eV} /k_BT)$ (for Cu--0.1\;$\sim$\;0.15\;at.\%\;Co over 640\;$\sim$\;848\;K) \cite{dohl1984measurement} and $\lambda_c \approx 3.5$\;nm \cite{liu1993spinodal}. Since $T^{*}_R=0.089$ was estimated in Appendix\;\ref{chap5_sec:level5_1} to correspond to 823\;K, the spinodal decomposition behavior at 823\;K is investigated here for a Cu--50.0\;at.\%\;Co alloy, assuming that the diffusion coefficient is not sensitive to the composition. The calculated time dependence of the average concentration of B atoms in each phase using the SEAQT model is shown in Fig.\;\ref{fig5:average_concB_each_phase}, where the averages for each phase are, respectively, taken over the concentration ranges 0$\sim$50\;at.\%\;B and 50$\sim$100\;at.\%\;B. The resulting time dependence of the relaxation time, $\tau$, determined using Eq.\;(\ref{eq5:relaxation_time_spinodal}), is shown in Fig.\;\ref{fig5:relaxation_time_spinodal}.
\end{appendices}
\bibliographystyle{ieeetr}
\section{Introduction}
\label{sec:intro}
The effects of stochastic forcing on solutions of stochastic partial
differential equations ({SPDEs}\xspace) have recently received a great deal of
attention in applications ranging from materials science and atmosphere
modelling to neuroscience. Of particular interest are the effects of
noise on travelling waves and fronts, as these are often physically
important solutions. We use the term travelling wave to include both
fronts and waves, and in this paper we develop numerical methods for
computing stochastic versions of these objects.
We consider the stochastic PDE
\begin{equation}\label{eq:spde0}
du = \Big[u_{xx} + f(u)\Big] \,dt + g(u,t)\circ dW(t), \qquad
\mbox{given } \quad u(0)=u^0, \qquad x \in {\mathbb{R}}.
\end{equation}
Although the noise term is written here as a Stratonovich integral we
consider below the noise in both a Stratonovich and {It\^o}\xspace sense.
For ease of exposition we take $u:{\mathbb{R}} \times {\mathbb{R}}_+ \to {\mathbb{R}}$
although similar numerical procedures have been applied to
systems of PDEs and waves in ${\mathbb{R}}^2$, see for example \cite{bst07}.
For additive noise the function $g$ is taken as a constant whereas for
multiplicative noise $g$ depends on $u$. In the case of no noise
($g=0$) we recover the deterministic {PDE}\xspace
\begin{equation}\label{eq:pde}
u_t = u_{xx} + f(u), \qquad \mbox{given } \quad u(0)=u^0, \qquad x\in{\mathbb{R}}.
\end{equation}
In the deterministic case the analysis of travelling waves, both
analytical and numerical, is a mature field. This is not the case
for {SPDEs}\xspace, where much of the analysis is performed
for specific equations or for the case of small noise. Indeed, with
stochastic forcing, the existence of these waves for all time is not
guaranteed, and the definition of quantities such as wave speed varies
from system to system.
Typically the position of a stochastic travelling wave
is determined from the position of a level set. For small
noise the centres of these fronts can be shown to follow a
rescaled Brownian motion, see \cite{Shardlow:00,Brsssco.etal,Funaki95}.
In the case of multiplicative noise the front may exist
for all times and the wave front may have compact support
\cite{Shga94}; it is then
well defined over some time-varying interval in space
$[a(t),b(t)]$ and takes stationary values outside of this
interval.
There is a well developed literature for the stochastic
Fisher--Kolmogorov--Piscounov equation
\cite{Drng+etal:03,Drng.etal:05,Mllr_Swrs:95,Trbe96} with waves
defined in this form.
Multiplicative noise is seen to change the speed of the wave, and the
position is seen to diffuse about the mean (or Goldstone mode); for
reviews see \cite{GrciaOjlvo+Sncho,panja04}.
However, it is not our aim to replicate these results here.
We investigate different ways to measure
the wave speed, using the level set approach as well as a new approach
of minimizing the $L^2$ norm of the wave against a fixed profile.
Furthermore we apply different computational techniques to compute time
dependent waves; this includes freezing the wave to stop it from
travelling.
We extend a numerical method introduced in \cite{bt04} for
deterministic {PDEs}\xspace of the form \eqref{eq:pde}.
This method freezes the wave in the computational domain by adding
a convection term to the equation to compensate for the movement of
the wave.
The convection term that gives the speed of the wave is determined
from an extra algebraic condition from the $L^2$ minimization and the wave
speed is explicitly solved for as a time dependent quantity.
Convergence of this method for {PDEs}\xspace was considered in \cite{thu05} and
stability of the wave considered in \cite{thu06}.
We use these techniques to obtain new computational results on the
effect of multiplicative noise on the wave speed of the Nagumo
equation with Stratonovich and {It\^o}\xspace noise. In contrast to
\cite{GrciaOjlvo+Sncho,Armro_etal,PhysRevE.58.5494}, we present new
numerical results on the effect of spatial correlation length on the
width of the wave and compare different measures of the wave speed.
These results include our new idea of comparing the wave to a reference
function. Furthermore we present new results for {It\^o}\xspace and additive
noise.
Numerically we solve for the wave profile and a time-dependent wave
speed for \eqref{eq:spde0}, which, in the case of
stochastic forcing, is a random variable.
As a specific example to illustrate the computational
method and to compare against existing techniques we consider
the scalar Nagumo equation \cite{nay62}
\begin{equation} \label{eq:snagumo}
du = \left[u_{x x} + u(1-u)(u-\alpha)\right]dt + (\nu + \mu u(1-u))
\circ dW.
\end{equation}
With $\nu\neq0$ and $\mu=0$ the noise is additive, and with $\mu\neq0$ the
noise is multiplicative.
For multiplicative noise we have that $u=0$ and $u=1$
are stationary and numerical simulations suggest a wave exists between
them \cite{Armro_etal,PhysRevE.58.5494}. The deterministic equation
\begin{equation}\label{eq:nagumo}
u_t = u_{x x} + u(1-u)(u-\alpha),\quad u(x,t)\in{\mathbb{R}},\ x\in{\mathbb{R}},\ t > 0,
\end{equation}
is often used for testing algorithms since travelling wave
solutions $u(x,t)=u_{\mathrm{det}}(x-c t)$ connecting the stationary points
$u_-=0$, $u_+=1$ of this equation are known explicitly, as are other
solutions such as pulses, sources and sinks
\cite{air85,cg92}.
These travelling wave solutions depend on the nonlinearity and the
leading profile of initial data $u^0$. Define the function $u_k(x)$ by
\begin{equation}
u_{k}(x) =\left(1+ \e{-kx}\right)^{-1}.
\label{eq:uk}
\end{equation}
We use this function to specify both initial data $u^0$ and
reference functions that have different profiles (by varying $k$).
For $\alpha\in (0,1/2]$ there is a unique asymptotic travelling wave,
whereas for $\alpha\in (-1,0]$ the
asymptotic profile and the wave speed depend on the leading profile
$e^{k_0 x}$ of the initial data as $x \to \infty$.
We summarize below results for the deterministic Nagumo equation;
these can be found, for example, in \cite{GrciaOjlvo+Sncho}.
\begin{itemize}
\item For $\alpha \in (0,1/2]$ the solution $u=u_k$, with $k=1/\sqrt{2}$ is
asymptotically stable and all initial front data $u^0$ is attracted to
this wave. The asymptotic wave speed is given
by $c = -\sqrt 2\ (\tfrac 1 2 - \alpha)$.
\item For $\alpha \in (-1/2,0]$ if the initial data $u^0=u_{k_0}$ has
$k_0\geq k_*=-\alpha\sqrt{2}$ then the asymptotic
speed is given by $c = -\sqrt 2\ (\tfrac 1 2 - \alpha)$.
If $k_0 < k_*$ then the asymptotic wave speed is $\geq (k_0^2-\alpha)/k_0$.
\item For $\alpha \in (-1,-1/2]$ if the initial data $u^0=u_{k_0}$ is such
that $k_0\geq k_\dag=\sqrt{|\alpha|}$
then the asymptotic speed is given by $2k_{\dag}$.
If $k_0 < k_\dag$ then the asymptotic speed is $\geq2k_{\dag}$.
\end{itemize}
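As a quick numerical cross-check, the case distinctions above can be transcribed directly. The helper below is an illustrative sketch (the function name and structure are ours, not from the paper); it returns the asymptotic speed, or the stated lower bound in the regimes where the wave is not unique.

```python
import math

def nagumo_speed(alpha, k0):
    """Asymptotic wave speed of the deterministic Nagumo equation for
    initial data with leading profile parameter k0; a direct
    transcription of the case distinctions summarized above."""
    if 0 < alpha <= 0.5:
        # unique attracting wave, independent of k0
        return -math.sqrt(2) * (0.5 - alpha)
    if -0.5 < alpha <= 0:
        k_star = -alpha * math.sqrt(2)
        if k0 >= k_star:
            return -math.sqrt(2) * (0.5 - alpha)
        return (k0**2 - alpha) / k0      # lower bound on the speed
    if -1 < alpha <= -0.5:
        k_dag = math.sqrt(abs(alpha))
        return 2 * k_dag                 # lower bound when k0 < k_dag
    raise ValueError("alpha outside (-1, 1/2]")
```

For example, $\alpha=-0.25$ with $k_0=0.1<k_*$ gives $(0.1^2+0.25)/0.1=2.6$, matching the theoretical value quoted in the tables of \secref{sec:results} (up to the sign convention for the direction of travel).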
The outline of the rest of the paper is as follows.
We review in \secref{sec:2} the computation of travelling waves in the
deterministic case and extend to the case of stochastic forcing.
We discuss measures of wave
speeds of a stochastic travelling wave and discuss the numerical approximation.
In \secref{sec:results} we
illustrate the numerical method on the Nagumo equation with
multiplicative noise.
We compare solving the stochastic partial differential equation
(SPDE) and the stochastic partial differential
algebraic equation (SPDAE) where the wave is frozen by minimizing the
$L^2$ distance between a reference function and the travelling
wave. We compare the different measures of wave speeds. We illustrate
numerically that, although feasible, this method can lead to numerical
instability. We investigate the effect of the choice of reference
function $\hat u$ in \secref{sec:uref}.
In \secref{sec:ItoStrat} we present new numerical results for {It\^o}\xspace and
Stratonovich multiplicative noise on how the wave speed changes with
noise intensity. We also present new results on the effect of the
spatial correlation length. Additive noise is considered in
\secref{sec:add} where we again examine the wave speed with noise
intensity and illustrate the SPDAE approach when new travelling waves are
nucleated.
Finally we consider weaker versions of the stochastic travelling wave
fixed in the computational domain by mean wave speeds and then discuss
the results and computational method.
\section{Stochastic travelling waves}
\label{sec:2}
In this section we introduce the (stochastic) differential algebraic
equations that we use to define the travelling wave problem.
We start by reviewing the more familiar deterministic case before
considering the case with stochastic forcing. In both cases we
reduce the infinite problem to finite dimensions by truncating the
computational domain and discretizing in space.
\subsection{Deterministic {PDE}\xspace and discretization}
Let us assume that equation \eqref{eq:pde} has a travelling wave solution
$u$, so that $u$ can be written as
\begin{equation}\label{eq:trav.wav.1}
u(x,t) = u_{\mathrm{det}}(\xi),\quad \xi = x- \lambda_{\mathrm{det}} t,\
\end{equation}
where $u_{\mathrm{det}} \in \mathcal C^2_b({\mathbb{R}}, {\mathbb{R}}^m)$ denotes the waveform
and $\lambda_{\mathrm{det}}$ its wave speed.
In a comoving frame $v(\xi,t)=u(\xi - \lambda_{\mathrm{det}} t,t)$ equation
\eqref{eq:pde} reads
\begin{equation}\label{eq:moving.frame}
v_t = v_{\xi \xi} + \lambda_{\mathrm{det}} v_\xi + f(v), \quad \xi \in {\mathbb{R}},\quad t \geq 0
\end{equation}
of which the travelling wave $u_{\mathrm{det}}$ is a stationary solution.
Since the wave speed $\lambda_{\mathrm{det}}$ is generally unknown we transform equation
\eqref{eq:pde} into a co-moving frame with unknown position $\gamma(t)$, i.e. we
insert the ansatz
$v(x,t) = u(x - \gamma(t), t)$ into \eqref{eq:pde}. Then we obtain
\begin{equation}\label{eq:pde.trafo}
v_t= v_{xx}+ \lambda v_x +f(v),
\end{equation}
where $\lambda(t) = \gamma'(t)$.
In order to compensate for the additional variable $\lambda$ we add
a so called phase condition
\begin{equation}\label{eq:pc}
0=\psi(v, \lambda)
\end{equation}
which together with \eqref{eq:pde.trafo} forms a partial differential
algebraic equation ({PDAE}\xspace) \cite{bt04}.
The position $\gamma$ of the wave can then be calculated by integrating
$\gamma' = \lambda$, $\gamma(0) = 0$ to get
\begin{equation}\label{eq:gamma}
\gamma(t) = \int_0^t \lambda(s) ds.
\end{equation}
For the numerical implementation we need to truncate the spatial domain
from $x\in {\mathbb{R}}$ to $x\in[0,L]$ and impose appropriate boundary conditions
such as Neumann, Dirichlet or projection boundary conditions \cite{thu05}.
We then solve \eqref{eq:pde.trafo} and \eqref{eq:pc} for $x\in
[0,L]$.
In contrast to traditional deterministic travelling wave computations
where the steady states of \eqref{eq:moving.frame} are solved for
with appropriate boundary conditions
this method does not rely on $\lambda$ being a constant wave speed.
Thus far we have not discussed the choice of the phase fixing function
$\psi$ in \eqref{eq:pc}. Since the phase condition only selects one
representative out of the infinite family of solutions, there is some
freedom of choice here.
The simplest phase condition is to align the solution
with respect to a given reference function $\hat u$.
It is natural to use the $L^2$ norm, since the minimum can then be
found by differentiating and equating to zero (see e.g. \cite{bt07}).
For two functions $v$ and $w$ we minimize the $L^2$ norm over shifts
in space $y$:
$\min_{y} \|v(x,\cdot)-w(x-y,\cdot)\|_2$.
Differentiating and equating to zero we find that
$$\int (v(x,\cdot)-w(x-y,\cdot))w_x \, dx = 0.$$
So for the PDE we take the phase condition to be
$$
\psi_{\mathrm{fix}}(u) = \langle \hat u_x, u-\hat u \rangle.
$$
This choice was termed the template fitting method in \cite{rkml03}.
In our numerical simulations we will discretize in space
using standard uniformly spaced finite
differences, so we discretize on a finite grid $x_0,...,x_M$, $u(x_j) = u_j$.
For the second derivative with
$M$ points and spatial step $\Delta x$ we approximate the derivative
$\partial_{xx} \approx A$ where
$A=\frac 1 {\Delta x^2} B \in {\mathbb{R}}^{M-2, M-2}$
for Dirichlet boundary conditions and for Neumann boundary conditions,
$$A=\frac 1 {\Delta x^2} \left(
\begin{array}{rrrrr}
-2 & 2 \\
& B \\
& 2 & -2
\end{array}
\right)\in {\mathbb{R}}^{M, M}, \qquad \text{with} \qquad
B = \left( \begin{array}{ccccc}
-2 & 1 & \\
1 & \ddots & \ddots \\
& \ddots & \ddots & 1 \\
& & 1 & -2 \\
\end{array}\right).$$
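The matrices above can be assembled directly. The sketch below is our own helper (not the authors' code), using dense NumPy arrays for clarity; a sparse representation would be used in practice.

```python
import numpy as np

def laplacian_matrix(M, dx, bc="neumann"):
    """Second-difference approximation A of u_xx on a uniform grid of M
    points, matching the matrices displayed above: B/(dx^2) with the
    boundary rows eliminated for Dirichlet, and the modified first and
    last rows (-2, 2) and (2, -2) for Neumann."""
    if bc == "dirichlet":
        n = M - 2  # boundary values eliminated from the unknowns
        B = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
        return B / dx**2
    if bc == "neumann":
        A = -2.0 * np.eye(M) + np.eye(M, k=1) + np.eye(M, k=-1)
        A[0, 1] = 2.0       # reflecting first row: (-2, 2)
        A[-1, -2] = 2.0     # reflecting last row:  (2, -2)
        return A / dx**2
    raise ValueError(bc)
```

A useful sanity check is that constants lie in the kernel of the Neumann matrix, consistent with zero-flux boundary conditions.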
We choose not to use periodic boundary conditions since we compute
a travelling front rather than a pulse and this would require
a domain of twice the size and also introduces two travelling waves
that travel in opposite directions.
Neumann and Dirichlet boundary conditions were shown to work well in
the deterministic case in \cite{thu05,thu06}.
For the first spatial derivative
we introduce
$(D_R u)_j = (u_{j+1} - u_j)/ {\Delta x}$, $(D_L u)_j = (u_{j} - u_{j-1})/ {\Delta x}$,
$(D_C u)_j = (u_{j+1} - u_{j-1})/ (2 \Delta x)$ for $j=1,...,M-1$ using
Dirichlet boundary conditions
$u^0 = \gamma_L, u_M=\gamma_R$ or Neumann boundary conditions
$u^0=u_1, u_{M-1}=u_M$.
For convection terms we either use $D_C$ or, where
up-winding is an issue, we choose the appropriate $D_L$, $D_R$ or a
weighted combination \cite{bt04}
\begin{equation}
\partial_x \approx D_h := \e{-\beta h} D_{L} + (1-\e{-\beta
h}) D_{R},
\label{eq:Dbeta}
\end{equation}
where $\beta$ is a parameter ($\beta = 0$ or $\beta = \frac 1 2$ in our
computations) and in what follows $h$ will be some function of the wave speed.
Recent work by Hairer and Voss \cite{HairerVoss} examines the discretization of
the advection term $u u_x$ for the stochastic Burgers equation and
shows that the limit depends on the discretization. The form of the
advection term considered here differs from that of the Burgers
equation, and since we can compare to cases without the advection
term, we did not observe any discretization-dependent effects.
Discretizing in space with $N$ grid points and after eliminating the
boundary conditions we obtain the following DAE system for $\lambda
\in {\mathbb{R}}$ and $v\in{\mathbb{R}}^{N-2}$ for
Dirichlet or $v\in {\mathbb{R}}^N$ for Neumann boundary conditions
\begin{equation}\label{eq:dae}
\begin{aligned}
v' &= A v + \lambda (D_\lambda v + \eta) + f(v) + \varphi \\
0 & = \langle \hat u_x, v-\hat u \rangle,
\end{aligned}
\end{equation}
where the vectors $\varphi, \eta$ are used to deal with the boundary conditions.
This system can be solved by using appropriate DAE solvers
\cite{ap98} or we can use a linear implicit Euler method to obtain the
fully discrete system
\begin{equation}\label{eq:lin.euler}
\begin{aligned}
v^{n+1} &= v^n + \Delta t \left[ A v^{n+1} + \lambda^{n+1} (D_{\lambda^n} v^n + \eta) + f(v^n) + \varphi \right]\\
0 & = \langle D_C \hat u, v^{n+1} -\hat u \rangle
\end{aligned}
\end{equation}
which leads to
\begin{equation*}
\begin{pmatrix}
I- \Delta t A & - \Delta t (D_{\lambda^n} v^n + \eta) \\
\Delta x\, D_C \hat u^T & 0
\end{pmatrix}
\begin{pmatrix} v^{n+1} \\ \lambda^{n+1} \end{pmatrix}
= \begin{pmatrix} v^n + \Delta t(f(v^n) + \varphi) \\ \langle D_C \hat u,\hat u \rangle \end{pmatrix}.
\end{equation*}
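One step of this scheme amounts to solving the bordered linear system above for $(v^{n+1},\lambda^{n+1})$. The following is a minimal sketch (names are illustrative, not the authors' code); `np.gradient` stands in for the difference operators $D_C$ and $D_{\lambda^n}$, and the boundary vectors $\eta$, $\varphi$ default to zero.

```python
import numpy as np

def pdae_euler_step(v, dt, dx, A, f, uhat, eta=None, phi=None):
    """One linearly implicit Euler step for the frozen-wave PDAE,
    solving the bordered system [[I - dt A, -dt(D v + eta)],
    [dx D_C uhat^T, 0]] (v^{n+1}, lam^{n+1}) = (v + dt(f(v)+phi),
    <D_C uhat, uhat>), as displayed above."""
    M = v.size
    eta = np.zeros(M) if eta is None else eta
    phi = np.zeros(M) if phi is None else phi
    Dc_uhat = np.gradient(uhat, dx)          # template derivative D_C uhat
    conv = np.gradient(v, dx) + eta          # D_{lambda^n} v^n + eta
    K = np.zeros((M + 1, M + 1))
    K[:M, :M] = np.eye(M) - dt * A
    K[:M, M] = -dt * conv
    K[M, :M] = dx * Dc_uhat                  # phase condition row
    rhs = np.empty(M + 1)
    rhs[:M] = v + dt * (f(v) + phi)
    rhs[M] = dx * (Dc_uhat @ uhat)           # <D_C uhat, uhat>
    sol = np.linalg.solve(K, rhs)
    return sol[:M], sol[M]                   # v^{n+1}, lambda^{n+1}
```

By construction the updated profile satisfies the discrete phase condition $\langle D_C\hat u, v^{n+1}-\hat u\rangle=0$ up to solver precision.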
Note that for the reference or template $\hat u_x$ we use the central
difference approximation $D_C$, since this is the most accurate and
convection instabilities are not an issue for this term.
Under a uniqueness assumption of the travelling wave of the PDE
it was shown in \cite{thu05} that for $L\to \infty$ and
$\Delta x \to 0$ the stationary solution of \eqref{eq:dae} converges
to the exact travelling wave solution.
Moreover the solution of \eqref{eq:dae} inherits the
nonlinear stability properties of \eqref{eq:pde}.
Numerically we observe below that the DAE system correctly computes
the travelling wave even when the travelling wave is not unique,
see \secref{sec:results}.
\subsection{Stochastic {PDE}\xspace and stochastic travelling wave}
\label{sec:spdes}
We seek travelling wave solutions to
the Stratonovich {SPDE}\xspace
\begin{equation}\label{eq:spdeStrat}
d u = \left[u_{xx} + f(u) \right]dt + g(u) \circ d W,\quad u(0)=u^0
\end{equation}
or
the {It\^o}\xspace {SPDE}\xspace
\begin{equation}\label{eq:spdeIto}
d u = \left[u_{xx} + f(u) \right]dt + g(u) d W,\quad u(0)=u^0
\end{equation}
with $g(u) = \nu + \mu h(u)$, where $\nu$
and $\mu$ are parameters that allow us to consider additive and
multiplicative noise.
Results on the existence of a solution for
\eqref{eq:spdeStrat} and \eqref{eq:spdeIto} on the domain $x\in{\mathbb{R}}$
can be found in \cite{Walsh86}. For the stochastic Allen--Cahn
equation with spatially smooth, bounded additive noise existence is
shown in \cite{Rgmnt02}. A recent paper \cite{Xie} shows existence for
the non-Lipschitz cases for space time white noise.
We truncate the infinite domain and consider \eqref{eq:spdeStrat} (or \eqref{eq:spdeIto})
on a large finite domain so that $x\in[0,L]$ with either Neumann or
Dirichlet boundary conditions.
For the finite domain with $x\in [0,L]$ we refer to \cite[Theorem
7.4]{DaPrtoZbczyk} with $f$ and $g$ satisfying global Lipschitz
conditions, \cite{PrevotRoeckner} for some weaker conditions and
recent results in \cite{JentzenRoeckner} for non-Lipschitz $g$.
For the stochastic Nagumo equation we have $f(u)=u(1-u)(u-\alpha)$
and take $h(u)=u(1-u)$.
We consider the noise $W(t)$ to be a $Q$--Wiener
process~\cite{DaPrtoZbczyk}, and assume that the covariance operator
$Q$ and the linear operator $\partial_{xx}$
have the same eigenfunctions $\phi_j$. If the covariance operator has
eigenvalues $\zeta_j\geq 0$ then we can write
\begin{equation}
\label{eq:W}
W(x,t) = \sum_{j\in\integers} \zeta_j^{1 / 2} \phi_j(x) \beta_j(t),
\end{equation}
for independent Brownian motions $\beta_j$.
We take space-time noise that is white in time with exponential decay
in the spatial correlation length $\xi>0$ in which case
$$\expect{dW(x,t)dW(y,s)} = C(x-y)\delta(t-s),
\qquad C(x) = \frac 1 {2\xi} \exp\left(-\frac {\pi x^2}{4
\xi^2}\right).$$
We approximate using \eqref{eq:W} by taking $\zeta_j= \exp(- \frac
{\xi^2 \lambda_j} {L})$, where $\lambda_j=\frac {j^2 \pi^2}{L^2}$
and $L$ is the length of the interval \cite{Shardlow:05,Lrd+Rgmnt04}.
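An increment of this truncated $Q$--Wiener expansion can be sampled directly. The sketch below assumes Neumann cosine eigenfunctions $\phi_j(x)=\sqrt{2/L}\cos(j\pi x/L)$ of $\partial_{xx}$, one natural choice consistent with the boundary conditions used later; the helper and its signature are ours, not the paper's.

```python
import numpy as np

def qwiener_increment(x, L, xi, dt, J, rng):
    """Sample an increment Delta W_n of the Q-Wiener process on the
    grid x, truncated to J modes, with zeta_j = exp(-xi^2 lambda_j / L)
    and lambda_j = (j pi / L)^2 as in the text; each mode carries an
    independent N(0, dt) Gaussian increment."""
    dW = np.zeros_like(x)
    for j in range(1, J + 1):
        lam_j = (j * np.pi / L) ** 2
        zeta_j = np.exp(-xi**2 * lam_j / L)
        phi_j = np.sqrt(2.0 / L) * np.cos(j * np.pi * x / L)
        dW += np.sqrt(zeta_j) * phi_j * rng.normal(0.0, np.sqrt(dt))
    return dW
```

Larger correlation lengths $\xi$ damp the high modes more strongly, producing spatially smoother increments.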
In the Stratonovich case \eqref{eq:spdeStrat} we can eliminate
the systematic effects of the noise on the drift and convert to an
{It\^o}\xspace integral. This gives an
additional term to the nonlinearity so that the Stratonovich SPDE
\eqref{eq:spdeStrat} is equivalent to the {It\^o}\xspace SPDE
\begin{equation}\label{eq:spdeCorrect}
d u = \left[u_{xx} + \tilde{f}(u) \right]dt + g(u) d W
\end{equation}
where $\tilde{f}(u) = f(u) - C(0)g'(u)g(u)$, see for example
\cite{GrciaOjlvo+Sncho}.
We can also convert from the {It\^o}\xspace interpretation \eqref{eq:spdeIto} to
a Stratonovich by
\begin{equation}\label{eq:spdeCorrect2}
d u = \left[u_{xx} + \tilde{f}(u) \right]dt + g(u)\circ d W
\end{equation}
where now $\tilde{f}(u) = f(u) + C(0)g'(u)g(u)$.
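The drift correction is a one-line transcription. The helper below is an illustrative sketch returning the equivalent It\^o drift $\tilde f$ given $f$, $g$, $g'$ and $C(0)$; the Nagumo choices $g(u)=\nu+\mu u(1-u)$, $g'(u)=\mu(1-2u)$ are used in the usage check.

```python
def ito_drift_from_stratonovich(f, g, gprime, C0):
    """Drift of the equivalent Ito SPDE obtained from a Stratonovich
    one: f~(u) = f(u) - C(0) g'(u) g(u), as in the conversion above."""
    return lambda u: f(u) - C0 * gprime(u) * g(u)
```

Note that for the Nagumo multiplicative noise the correction vanishes at the stationary states $u=0$ and $u=1$, since $g$ vanishes there.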
We discretize the SPDE in space by finite differences and evaluate
the noise on the spatial grid.
In time we discretize with a constant time step $\Dt$.
For the noise term we have an increment
$$\Delta W_n= \sum_{j\in[-J,J]} \zeta_j^{1 / 2} \phi_j(x)
\xi_j, $$
where $\xi_j \sim N(0,\Dt)$.
To compute directly with the Stratonovich noise for
\eqref{eq:spdeStrat} we use the standard Heun method
\cite{Ojalvo,kloeden:1992} and also the semi-implicit Euler--Heun method
\begin{equation}\label{eq:SPDEHeun}
\begin{aligned}
z & = u^n + g(u^n) \Delta W_n \\
u^{n+1} &= u^n + \Delta t \left[ A u^{n+1}+ f(u^n)
+\varphi \right] + \frac 1 2 (g(z) + g(u^n)) \Delta W_n
\end{aligned}
\end{equation}
where $\Delta W_n$ is an increment of the noise and
$\varphi$ arises from the boundary conditions.
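One step of this semi-implicit Euler--Heun scheme can be sketched as follows, using dense linear algebra for clarity; the function and its arguments are our own illustrative names.

```python
import numpy as np

def euler_heun_step(u, dt, A, f, g, dW, phi=None):
    """One semi-implicit Euler--Heun step for the Stratonovich SPDE:
    explicit predictor z for the noise, Stratonovich average of g, and
    an implicit solve in the diffusion matrix A."""
    phi = np.zeros_like(u) if phi is None else phi
    z = u + g(u) * dW                       # predictor for the noise term
    noise = 0.5 * (g(z) + g(u)) * dW        # Heun (Stratonovich) average
    rhs = u + dt * (f(u) + phi) + noise
    return np.linalg.solve(np.eye(u.size) - dt * A, rhs)
```

With $g\equiv 0$ the step reduces to the linearly implicit Euler method, and spatially constant states are preserved under Neumann boundary conditions.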
Although intuitively it is understood what is meant by a stochastic travelling
wave it is not easy to find a definition in the literature, however
for a review see \cite{GrciaOjlvo+Sncho,panja04}. Typically
a stochastic travelling wave and speed is either defined by the
evolution of a level set such as in \cite{Mllr_Swrs:95,Trbe96} or
through the evolution relative to a deterministic wave, such as
through a small noise expansion such as in \cite{Mkhlv_etal83}.
We will apply our methods to the case where in the deterministic case
the travelling wave is known to be unique and also where it is not unique.
Consider the {SPDE}\xspace with a well defined wave with compact support as defined in
\cite{Shga94}, so that
$u(-\infty,\cdot\,)=u_-$, $u(\infty,\cdot\,)=u_+$. We can then define a
travelling wave and wave speed using the points
\begin{equation}
a(t) := \sup\{z: u(x,t)=u_-, x\leq z\}, \quad
b(t) := \inf\{z: u(x,t)=u_+, x\geq z\},
\label{eq:defab}
\end{equation}
and in addition we can take the `mid point' level set of a wave
\begin{equation}
c(t) := \sup\{z: u(x,t)=(u_-+u_+)/2, x\leq z\}.
\label{eq:defc}
\end{equation}
These level sets define the position of the travelling wave. Note that
we take the supremum as there may be multiple crossings through the level
set (see for example \figref{fig:uhfail}) for the front.
Given the positions, an `instantaneous' wave speed can be determined
from $a$, $b$ and $c$ by differentiation.
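Numerically, a level-set position can be located by linear interpolation over the grid, taking the last sign change as the supremum crossing. The helper below is an illustrative sketch, not the paper's code.

```python
import numpy as np

def level_position(x, u, level, which="last"):
    """Position of a level-set crossing of the front on the grid x,
    by linear interpolation between the grid points bracketing the
    crossing; 'last' takes the supremum crossing, as in the
    definitions above. Returns NaN if the level is never crossed."""
    s = u - level
    idx = np.nonzero(s[:-1] * s[1:] < 0)[0]   # indices of sign changes
    if idx.size == 0:
        return np.nan
    i = idx[-1] if which == "last" else idx[0]
    # root of the linear interpolant through (x[i], s[i]), (x[i+1], s[i+1])
    return x[i] + (x[i + 1] - x[i]) * s[i] / (s[i] - s[i + 1])
```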
We report below a wave speed $\Lambda_z(t)$, $z\in\{a,b,c\}$ defined by
\begin{equation}
\Lambda_z(t) = \mathbb{E}\left(\frac{z(t)-z(t_0)}{t-t_0}\right) \qquad z\in
\left\{a,b,c \right\}
\label{eq:levLAM}
\end{equation}
where the expectation is taken over the realizations. We may
choose the initial time $t_0$ to be either the start of the
computation, $t_0=0$, or some later time ($t_0>0$) to avoid transient
effects. We believe this differs from the definition of wave speed
used in computations by \cite{Armro_etal,PhysRevE.58.5494,Mro:04},
where $z(t)/t$ is reported for $z\in\left\{a,b,c\right\}$.
Numerically, these level set points $a,b,c$ are found by evolving
the {SPDE}\xspace \eqref{eq:spdeStrat} directly and interpolating over the grid.
If we assume the wave has some long time invariant speed an
alternative definition of the wave speed is to fit a linear polynomial
$P_{{\Lambda_{\text{fit}}}}$ to the data $(t,\mathbb{E} z(t))$, $z\in \{a,b,c\}$, $t\geq t_0$ where
\begin{equation}
P_{{\Lambda_{\text{fit}}}}(t):={\Lambda_{\text{fit}}} t+K
\label{eq:LF}
\end{equation}
Wave speeds may then be estimated by ${\Lambda_{\text{fit}}}_z$, $z\in \{a,b,c\}$, where we may take
$t_0>0$ to avoid transients. Although this is a simple extension of
the wave speed defined by \eqref{eq:levLAM}, we have not seen it
reported in the literature.
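Fitting the linear polynomial reduces to a standard least-squares fit. A minimal sketch (our own helper, using `np.polyfit`):

```python
import numpy as np

def fitted_wave_speed(t, z, t0=0.0):
    """Estimate the wave speed Lambda_fit by fitting the linear
    polynomial P(t) = Lambda*t + K to position data (t, z(t)) for
    t >= t0, as in the equation above; returns the slope Lambda."""
    mask = t >= t0
    slope, intercept = np.polyfit(t[mask], z[mask], 1)
    return slope
```

Choosing $t_0>0$ discards early transients, so a decaying initial adjustment of the front has little effect on the fitted slope.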
Finally we introduce a novel measure of the wave speed for SPDEs
through minimization of the $L^2$ norm $\|u(x,t)-\hat u(x-y,t)\|^2$
against a fixed profile $\hat u$. This is similar to freezing the
wave in the deterministic case. We solve the SPDE and compute the
position $\gamma(t)$ of the wave. We
then move the reference function relative to the travelling wave
solution $u$. That is, we solve the SPDE
$$ d u = \left[u_{xx} + f(u) \right]dt + g(u) \circ d W,\quad u(0)=u^0$$
and couple this to a reference function $\hat u(x,t)$ that moves so that
\begin{equation}
\label{eq:phase}
\langle \hat u_x(t),u(t)-\hat u(t) \rangle =0.
\end{equation}
Numerically this requires interpolation onto the spatial grid at each
time step. We compute an instantaneous wave speed $\lambda(t)$ and
this is related to the position of the wave through
$\gamma(t)=\int_0^t \lambda(s) ds$.
A wave speed $\Lambda_{\min}$ is then defined through the time average of
the instantaneous wave speed $\lambda$
\begin{equation}
\Lambda_{\min}(t) = \frac{1}{t-t_0}\int_{t_0}^{t} \mathbb{E}(\lambda(s)) ds .
\label{eq:LAM}
\end{equation}
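The minimizing shift can be approximated by scanning candidate shifts and interpolating the reference profile onto the grid. This brute-force sketch is our own illustration of the idea (an alternative to solving the phase condition \eqref{eq:phase} directly):

```python
import numpy as np

def best_shift(x, u, uhat, shifts):
    """Position of the wave as the shift y minimizing the discrete
    L2 norm ||u(.) - uhat(. - y)|| over a grid of candidate shifts,
    with uhat(. - y) evaluated by linear interpolation."""
    errs = [np.linalg.norm(u - np.interp(x - y, x, uhat)) for y in shifts]
    return shifts[int(np.argmin(errs))]
```

In practice the resolution of the shift grid limits the accuracy, which is why the frozen-wave formulation below, where $\lambda$ is solved for directly, avoids this interpolation.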
So far we have not commented on the choice of profile $\hat u$ to
minimize against.
If a deterministic travelling wave profile exists then this is a
natural choice for $\hat u$.
In examples where we do not have an analytic expression for the
deterministic travelling wave $\hat u$ then this can be solved for
simultaneously or a sample solution solved for and saved.
However, this is a matter of choice and we could minimize the $L^2$ norm
against any fixed profile.
One obvious choice of profile $\hat u$ is to take the initial data, so
that $\hat u=u(0)$.
Note that the choice of $\hat u$ is important. In particular we
illustrate in \secref{sec:uref} that for a given $\hat u$ the
minimization may not be unique and may fail numerically
if $\hat u$ has small support.
We again note that from the position data $(t,\mathbb{E} \gamma(t))$ we can
fit a linear polynomial
\begin{equation}
P_{\Lambda_\gamma}(t):=\Lambda_\gamma t +K
\label{eq:Lg}
\end{equation}
to obtain an alternative estimate of the average wave speed.
\subsubsection{Freezing the stochastic travelling wave}
Inspired by the deterministic fixing of a wave we freeze a stochastic
travelling wave relative to a reference function $\hat u$, this allows
direct computation of the wave speed from the $L^2$ minimization.
We take $\hat u$ to be a fixed continuous function and we evaluate it
numerically at the grid points.
We examine the Stratonovich SPDE in a co-moving frame as we did for the
deterministic case.
First let us examine the effects of a shift in space on the covariance of
the noise noting that
$$\mathbb{E} \left(dW(x+r(t),t)\,dW(y+r(s),s)\right) = C(x+r(t)-y-r(s))\delta(t-s).$$
We see that for noise that is white in time the covariance is the
same in the two frames.
Let us consider the Stratonovich SPDE: transforming to the moving
frame $u(x,t)=v(x+\gamma(t),t)$ we have
$$dv(x+\gamma(t),t) = v_x\circ d\gamma(t) + dv$$
and so formally we can write
$$
\begin{aligned}
d v &= \left[ v_{xx} + \frac{d\gamma(t)}{dt}v_x + f(v) \right] dt
+ g(v) \circ d W,\quad v(0)=u^0.
\end{aligned}
$$
We introduce the phase condition to determine $d\gamma$ and
minimize the $L^2$ norm, i.e. $\langle \hat u_x, v-\hat u \rangle=0$.
If we define the random variable $\lambda$ so that $\gamma(t) =
\int_0^t \lambda(s) ds$ then we seek to solve the SPDAE
\begin{equation}\label{eq:stratspdae}
\begin{aligned}
d v &= \left[ v_{xx} + \lambda v_x + f(v) \right] dt + g(v) \circ d W,\quad v(0)=u^0 \\
0 & = \langle \hat u_x, v-\hat u \rangle
\end{aligned}.
\end{equation}
In order to compute directly with the Stratonovich noise for
\eqref{eq:spdeStrat} we use either the standard Heun method
\cite{Ojalvo,kloeden:1992} or the semi-implicit Euler--Heun method
\begin{equation}\label{eq:Heun}
\begin{aligned}
z & = u^n + g(u^n) \Delta W_n \\
u^{n+1} &= u^n + \Delta t \left[ A u^{n+1} + \lambda^{n} \left( D_{\lambda^n} u^n +\eta \right) + f(u^n) +\varphi \right]
+ \frac 1 2 (g(z) + g(u^n)) \Delta W_n\\ 0 & = \langle \hat u_x, u^{n+1}-\hat u \rangle.
\end{aligned}
\end{equation}
The algorithm yields approximations $u^n$, $n=0,1,2,\ldots$, to
$u(n\Delta t)$ and $\lambda^n$ to $\lambda(n\Delta t)$.
This gives us a numerical scheme for the SPDAE in which the stochastic
travelling wave is frozen.
We have a time-dependent random variable $\lambda(t)$ that we call
the instantaneous wave speed. Of more physical interest is the time
average of this quantity, $\Lambda_{\min}^{fix}$, which we call the
wave speed and report
\begin{equation}\label{eq:deflambda}
\Lambda_{\min}^{fix} = \frac{1}{t-t_0}\int_{t_0}^{t} \mathbb{E} \lambda(s) ds.
\end{equation}
The instantaneous wave speed $\lambda$ gives the position $\gamma(t)$
of the wave.
If we assume the wave has some long time invariant
speed this can be estimated from $\gamma(t)$ by
fitting a linear polynomial
\begin{equation}
P_{\Lambda_\gamma^{fix}}(t):=\Lambda_\gamma^{fix} t +K
\end{equation}
to the data from freezing the wave $(t,\mathbb{E} \gamma(t))$, for $t>t_0\geq 0$.
The literature on solving stochastic DAEs is in its infancy; however,
there are some analytic and computational results, mainly arising from
the study of noise in circuit simulations, see for example
\cite{SchnDnk98,Wnklr01,Wnklr03,Wnklr04}. We are not aware of work on
existence directly for {SPDAE}\xspace.
To compute a travelling wave through \eqref{eq:stratspdae} we
introduce the random variable $\lambda$, which is used to freeze the
wave. We could, however, define weaker versions by taking statistics
of $\lambda$. For example, we can take the time-averaged wave speed
$\Lambda_{\min}(t)$ for each realization
\begin{equation}\label{eq:spdae.Lam}
\begin{aligned}
d v &= \left[ v_{xx} + \Lambda_{\min} v_x + f(v) \right] dt +
g(v) \circ d W,\quad v(0)=u^0 \\
0 & = \psi(v, \lambda).
\end{aligned}
\end{equation}
Other weaker forms of travelling wave solution are possible where
the instantaneous wave speed $\lambda$ or time average wave speed
$\Lambda$ of an individual realization is replaced by its expectation over
realizations, for example
\begin{equation}\label{eq:spdae.E}
\begin{aligned}
d v &= \left[ v_{xx} + \expect{\lambda} v_x + f(v) \right] dt +
g(v) \circ d W,\quad v(0)=u^0 \\
0 & = \psi(v, \lambda);
\end{aligned}
\end{equation}
and
\begin{equation}\label{eq:spdae.ELam}
\begin{aligned}
d v &= \left[ v_{xx} + \expect{\Lambda} v_x + f(v) \right] dt +
g(v) \circ d W,\quad v(0)=u^0 \\
0 & = \psi(v, \lambda).
\end{aligned}
\end{equation}
Using the sample mean of $\lambda$ or $\Lambda$ for fixing we are
essentially using a ``group velocity'' to fix the wave, and as a
result the mean profile will contain a spread, since each individual
realization is not fixed at the same point.
By taking these weaker notions of wave speed to freeze the wave
we observe spreading of the front profiles, as discussed in
\cite{GrciaOjlvo+Sncho}.
\section{Results for the Nagumo Equation}\label{sec:results}
We compare the different estimates of the wave speed and apply the
technique of freezing the wave to the Nagumo equation
\eqref{eq:snagumo} for both multiplicative and additive
space-time white noise.
For the majority of our simulations we take $\Dx=0.1$, $\Dt=0.05$ and a
spatial domain of $L=500$ or $L=800$ with Neumann boundary conditions
and integrate until $t=100$. We compute $100$ realizations simultaneously.
In our computations of wave speeds unless stated we take $t_0=t/2$ to
reduce transient effects and we drop the dependence of the computed
wave speeds on the time $t$ and so report
$\Lambda_{\min}$, $\Lambda_{\min}^{fix}$, $\Lambda_\gamma$, $\Lambda_z$ and ${\Lambda_{\text{fit}}}_z$,
$z\in\{a,b,c\}$.
\subsection{Deterministic PDE}
\label{sec:resdet}
Before we examine the stochastic PDE we briefly examine deterministic
computations. We point out some
features of computing the travelling wave and speed by direct
simulation of the PDE \eqref{eq:pde} versus freezing and solving
the PDAE \eqref{eq:dae}.
In particular we examine the regime where the travelling wave is not
unique and the theory of \cite{thu05} on freezing the deterministic case no
longer holds.
For $\alpha=-0.25$ the asymptotic travelling wave and wave speed
depend on the leading profile of the initial data $u^0$. We take
two initial profiles $u^0(x)=u_{k_0}(x)$ with $k_0=0.1 < k_*$ and
$k_0=1/\sqrt{2}>k_*$.
To compute the speed by minimization we present results with
reference functions $\hat u=u_{{\hat k}}$ with ${\hat k}=1/\sqrt{2}$.
Results with a reference function with ${\hat k}=0.1$ are identical
(see also \secref{sec:uref} for comments on the choice of reference function).
We show in \tabref{tab:detwspdsnone}
wave speeds computed from direct simulation of the PDE.
For the PDE, $\Lambda_{\min}$ is computed by moving the profile $\hat u$ at the
wave speed obtained from the minimization using the condition \eqref{eq:phase}.
There is good agreement between the computed wave speeds, although
$\Lambda_{\min}$ appears to have converged faster than the other measures to
the theoretical value.
When we freeze the wave in the computational domain and solve the PDAE
we see from \tabref{tab:detwspdsfix} that the wave is frozen
(to single precision) since the level set positions of $a(t)$,$b(t)$
and $c(t)$ do not change and hence wave speeds ${\Lambda_{\text{fit}}}_z$, $z\in {a,b,c}$
from fitting the linear polynomial are zero (to single precision).
The wave speed $\Lambda_{\min}$ estimated from freezing the wave and the
minimisation the $L^2$ norm agrees with the wave speeds
computed from the PDE (and is in fact a better approximation to the
theoretical values).
\begin{table}
\begin{center}
\begin{tabular}{l|c|c|ccc|ccc}
${\hat k}=1/\sqrt{2}$ & Theory & $\Lambda_{\min}$ &$\Lambda_a$ & $\Lambda_b$ & $\Lambda_c$ & ${\Lambda_{\text{fit}}}_a$
& ${\Lambda_{\text{fit}}}_b$ & ${\Lambda_{\text{fit}}}_c$ \\ \hline
$k_0=1/\sqrt{2}$ & 1.06066 &
1.06047 &
1.06025 & 1.06027 & 1.06026 &
1.06025 & 1.06026 & 1.06026 \\
$k_0=0.1$ & $\geq 2.6$ &
2.59741 &
2.59690 & 2.59689 & 2.59689 &
2.59689 & 2.59689 & 2.59689
\end{tabular}
\end{center}
\caption{Different measures of the wave speed computed from solving
the deterministic PDE. To compute $\Lambda_{\min}$ the profile $\hat u$ travels
to minimize the $L^2$ norm \eqref{eq:phase}. Estimates of wave speeds
$\Lambda_z$ are from the level sets \eqref{eq:levLAM} and ${\Lambda_{\text{fit}}}_z$
from the fitting \eqref{eq:LF}, $z\in\{a,b,c\}$. We see these
measures of the wave speed agree to $4$ decimal places.
}
\label{tab:detwspdsnone}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{l|c|c|ccc|ccc}
${\hat k}=1/\sqrt{2}$ & Theory & $\Lambda_{\min}^{fix}$ & $\Lambda_a$ & $\Lambda_b$ & $\Lambda_c$ & ${\Lambda_{\text{fit}}}_a$
& ${\Lambda_{\text{fit}}}_b$ & ${\Lambda_{\text{fit}}}_c$ \\ \hline
$k_0=1/\sqrt{2}$ & 1.06066 &
1.06052 &
2.0e-07 &-1.1e-06 &1.5e-08&
1.8e-07 &-9.5e-07 &1.4e-08\\
$k_0=0.1$ & $\geq 2.6$ &
2.60048 &
3.1e-06 &-3.6e-07 &-1.4e-07&
1.3e-06 &-8.4e-08 &-6.0e-08
\end{tabular}
\end{center}
\caption{Wave speeds computed from solving the PDAE and freezing the
travelling wave. The fact that the wave does not move in the domain
can be seen from the level set wave speeds $\Lambda_z$ and ${\Lambda_{\text{fit}}}_z$,
$z\in\{a,b,c\}$ which are close to zero. The wave speed $\Lambda_{\min}^{fix}$
agrees with that computed for the PDE given in \tabref{tab:detwspdsnone}.}
\label{tab:detwspdsfix}
\end{table}
\subsection{Stochastic travelling wave and frozen wave}
To illustrate computations for the stochastic PDE
we start by taking Stratonovich multiplicative noise with $\mu=0.1$ and a
correlation length of $\xi=0.1$. In \tabref{tab:wspdsnone1} we show
results from solving the SPDE with the same single realization of the
noise using the same two different sets of initial data and two
different reference functions as for the deterministic case.
Since we have taken the same noise realization, when the initial data is the
same our measures of the wave speed $\Lambda_z$ and ${\Lambda_{\text{fit}}}_z$, $z\in
\{a,b,c\}$, are identical and independent of the reference function $\hat u$.
The choice of reference function does change
the wave speed measured by the minimization in approximately the fourth
decimal place (compare $\Lambda_{\min}$ or $\Lambda_\gamma$ for the different ${\hat k}$
values in \tabref{tab:wspdsnone1}). This small difference is due to
interpolation errors and is not seen for the
SPDAE below, where we do not need this interpolation.
Comparing the values for the deterministic PDE (\tabref{tab:detwspdsnone})
and the SPDE (\tabref{tab:wspdsnone1}) for this single realization, we see
that the wave speeds with noise are slightly larger than in the
deterministic case.
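The minimization of the $L^2$ norm over translations of the reference profile, including the grid interpolation responsible for the small differences noted above, might be sketched as follows; the grid-search implementation and all names here are our own choices (a continuous optimizer could refine the result):

```python
import numpy as np

def best_shift(x, u, uhat, smax, ns=2001):
    """Shift s minimizing the discrete L^2 distance between u and the
    reference profile uhat translated by s; the translated profile is
    evaluated by linear interpolation on the grid, and the minimizer is
    located by a simple grid search over [0, smax]."""
    shifts = np.linspace(0.0, smax, ns)
    costs = [np.sum((u - np.interp(x - s, x, uhat))**2) for s in shifts]
    return shifts[int(np.argmin(costs))]

# the translation of a front relative to the reference is recovered
x = np.linspace(0.0, 100.0, 1001)
uhat = 0.5 * (1.0 - np.tanh((x - 40.0) / 2.0))
u = 0.5 * (1.0 - np.tanh((x - 43.0) / 2.0))   # uhat translated by 3
shift = best_shift(x, u, uhat, 10.0)
```

The instantaneous speed $\lambda$ then follows by differencing successive shifts in time.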
In \figref{fig:nag_none_a-025} we plot the result of a
single realization in (a) for the SPDE with initial data $u^0$ with
$k_0=1/\sqrt{2}$. The wave front is initially at $x\approx 200$ and
travels to $x\approx 500$. For the two different initial data we have
plotted in (b) the two distributions of the instantaneous wave speed
$\lambda$ used to compute the wave speed through the
minimization (with ${\hat k}=1/\sqrt{2}$). The mean of these distributions
gives the corresponding wave speeds, $1.07575$ for
$k_0=1/\sqrt{2}$ and $2.77146$ for $k_0=0.1$. For initial data
$k_0=1/\sqrt{2}$ with a wave speed of $1.07575$ the variance of
$\lambda$ is smaller.
In (c) we have plotted for the two different initial data sets
the instantaneous wave speed $\lambda(t)$ and the corresponding time
averaged wave speeds $\Lambda_{\min}(t)$ with $t_0=0$, $t_1=t$. We see faster
convergence of the wave speed for initial data $k_0=1/\sqrt{2}$ and
again the reduced variability in the instantaneous wave speed.
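The time averaged wave speed $\Lambda_{\min}(t)$ with $t_0=0$, $t_1=t$ is a cumulative average of the instantaneous speed; a minimal sketch (our own implementation, applied to a synthetic fluctuating $\lambda$):

```python
import numpy as np

def time_averaged_speed(t, lam):
    """Cumulative time average Lambda(t) = (1/(t - t0)) * integral of
    lambda from t0 to t, computed with the trapezoid rule; the value at
    t0 is set to lam[0]."""
    increments = 0.5 * (lam[1:] + lam[:-1]) * np.diff(t)   # trapezoid areas
    integral = np.concatenate(([0.0], np.cumsum(increments)))
    out = np.empty_like(lam)
    out[0] = lam[0]
    out[1:] = integral[1:] / (t[1:] - t[0])
    return out

# a synthetic fluctuating instantaneous speed averaging towards 1.06
t = np.linspace(0.0, 100.0, 1001)
lam = 1.06 + 0.01 * np.sin(t)
Lam = time_averaged_speed(t, lam)
```

The fluctuations in $\lambda$ are averaged out, so $\Lambda_{\min}(t)$ settles towards the underlying speed as $t$ grows.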
\begin{table}
\begin{center}
\begin{tabular}{l|cc|ccc|ccc}
$t_0=50, t_1=100$ & $\Lambda_{\min}$ & $\Lambda_\gamma$ & $\Lambda_a$ & $\Lambda_b$ &
$\Lambda_c$ & ${\Lambda_{\text{fit}}}_a$ & ${\Lambda_{\text{fit}}}_b$ & ${\Lambda_{\text{fit}}}_c$ \\
\hline
{\small $k_0=1/\sqrt{2}, {\hat k}=1/\sqrt{2}$}&
1.07575 & 1.06965 &
1.07484 & 1.06746 & 1.07536 &
1.07149 & 1.07147 & 1.06935 \\
{\small $k_0=1/\sqrt{2}, {\hat k}=0.1$} &
1.07538 & 1.06987 &
1.07484 & 1.06746 & 1.07536 &
1.07149 & 1.07147 & 1.06935 \\
{\small $k_0=0.1, {\hat k}=0.1$} &
2.77250 & 2.78015 &
2.80594 & 2.74516 & 2.76020 &
2.79657 & 2.72070 & 2.78112 \\
{\small $k_0=0.1, {\hat k}=1/\sqrt{2}$} &
2.77146 & 2.78178 &
2.80594 & 2.74516 & 2.76020 &
2.79657 & 2.72070 & 2.78112 \\
\end{tabular}
\end{center}
\caption{Wave speeds computed from solving a single realization of the
SPDE with noise intensity $\mu=0.1$ and correlation length $\xi=0.1$. To
compute $\Lambda_{\min}$ the profile $\hat u$ travels with the appropriate speed
found by minimization of the $L^2$ norm.}
\label{tab:wspdsnone1}
\end{table}
\begin{figure}[hth]
\begin{center}
(a) \hspace{0.32\textwidth} (b) \hspace{0.32\textwidth} (c) \\
\includegraphics*[width=0.32\textwidth]{mu01xi01_u_D}
\includegraphics*[width=0.32\textwidth]{mu01xi01_lm_none_AD}
\includegraphics*[width=0.32\textwidth]{mu01xi01_lmlmfix_none_AD}\\
\caption{(a) Space-time plot of a single realization of the SPDE
showing a travelling wave. In (b) distributions of the
instantaneous wave speeds $\lambda$ computed for the two different
initial data sets. In (c) we plot $\lambda(t)$ and the time averages
$\Lambda_{\min}(t)$ with $t_0=0$, $t_1=t$.}
\label{fig:nag_none_a-025}
\end{center}
\end{figure}
Let us now compare to a single realization where the stochastic
travelling wave is frozen and we solve the SPDAE \eqref{eq:stratspdae}
using the Heun method \eqref{eq:Heun}.
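For a scalar Stratonovich SDE the Heun predictor-corrector scheme takes the following form; this is the generic scheme of the family to which \eqref{eq:Heun} belongs, written here for a scalar equation rather than the SPDAE itself, and the test drift and diffusion are our own choices:

```python
import numpy as np

def heun_stratonovich(f, g, u0, T, N, rng):
    """Stochastic Heun (predictor-corrector) scheme for the scalar
    Stratonovich SDE  du = f(u) dt + g(u) o dW."""
    dt = T / N
    u = u0
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(dt))
        u_pred = u + f(u) * dt + g(u) * dW                   # Euler predictor
        u = u + 0.5 * (f(u) + f(u_pred)) * dt \
              + 0.5 * (g(u) + g(u_pred)) * dW                # trapezoidal corrector
    return u

# in the zero-noise limit the scheme reduces to the classical Heun method:
# for u' = -u, u(0) = 1 the result at T = 1 is close to exp(-1)
rng = np.random.default_rng(0)
u_end = heun_stratonovich(lambda v: -v, lambda v: 0.0 * v, 1.0, 1.0, 1000, rng)
```

Because the corrector averages the diffusion coefficient over the step, the scheme converges to the Stratonovich (rather than {It\^o}\xspace) solution.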
Results on the wave speeds for the two initial data sets and two
reference functions are reported in \tabref{tab:wspdsfix1}.
We chose $\Lambda_c$ and ${\Lambda_{\text{fit}}}_c$ to represent values computed from the
level set approaches. The level sets no longer travel (on average) and
hence have wave speeds with values close to zero.
Note that the noise path is not the same as that used when solving the SPDE for
\tabref{tab:wspdsnone1}, so we do not expect the values to be
exactly the same; they are, however, close.
The wave speed $\Lambda_{\min}^{fix}$ estimated by the minimization is identical for
the two different reference functions (solving the SPDAE we do not
have the same interpolation errors as when solving the SPDE).
However, the choice of reference function is an issue for the SPDE and
we consider this further in \secref{sec:uref}.
In \figref{fig:nag_fix_a-025} we have plotted in (a) the space-time
plot of the solution of the SPDAE. The front starts at $x\approx 200$ and
remains (on average) at that position throughout the computation
illustrating that the wave does not travel (compare to
\figref{fig:nag_none_a-025} (a)). In
(b) for the two different initial data $k_0=1/\sqrt{2}$ (with mean
$1.08522$) and $k_0=0.1$ (with mean $2.74311$) we
have plotted the two distributions of the instantaneous wave speed
$\lambda$ used to compute the wave speed through the minimization
(with ${\hat k}=1/\sqrt{2}$). Comparing with
\figref{fig:nag_none_a-025} (b) we see similar distributions and
greater variance with initial data with $k_0=0.1$ than
$k_0=1/\sqrt{2}$ (as in \figref{fig:nag_none_a-025}).
In (c) we have plotted for the two different initial data sets
the instantaneous wave speed $\lambda(t)$ and the time averaged wave
speed $\Lambda_{\min}(t)$ with $t_0=0$. We see faster convergence of the wave
speed $\Lambda_{\min}(t)$ for $k_0=1/\sqrt{2}$ than for $k_0=0.1$.
\begin{table}
\begin{center}
\begin{tabular}{l|cc|ccc|ccc|cc}
$t_0=50$, $t_1=100$ & $\Lambda_{\min}^{fix}$ & $\Lambda_\gamma^{fix}$ & $\Lambda_c$ & ${\Lambda_{\text{fit}}}_c$ \\ \hline
$k_0=1/\sqrt{2}, {\hat k}=1/\sqrt{2}$ &
1.08522 & 1.08828 &
2.421e-04& -2.131e-04\\
$k_0=1/\sqrt{2}, {\hat k}=0.1$ &
1.08522 & 1.08828 &
2.421e-04& -2.131e-04 \\
$k_0=0.1, {\hat k}=0.1$ &
2.74311 & 2.76005 &
-5.703e-02& -6.404e-03\\
$k_0=0.1, {\hat k}=1/\sqrt{2}$ &
2.74311 & 2.76005 &
-5.703e-02& -6.404e-03\\
\end{tabular}
\end{center}
\caption{Wave speeds computed from solving a single realization of the
SPDAE with noise intensity $\mu=0.1$ and correlation length $\xi=0.1$.}
\label{tab:wspdsfix1}
\end{table}
\begin{figure}[hth]
\begin{center}
(a) \hspace{0.32\textwidth} (b) \hspace{0.32\textwidth} (c) \\
\includegraphics*[width=0.32\textwidth]{mu01xi01_u_fix_D}
\includegraphics*[width=0.32\textwidth]{mu01xi01_lm_fix_AD}
\includegraphics*[width=0.32\textwidth]{mu01xi01_lmlmfix_fix_AD}
\caption{(a) Space-time plot of a single realization of the frozen SPDE
showing a travelling wave. In (b) distributions of the
instantaneous wave speeds $\lambda$ computed for the two different
initial data sets. In (c) we plot $\lambda(t)$ and the time averages
$\Lambda_{\min}(t)$ with $t_0=0$, $t_1=t$. }
\label{fig:nag_fix_a-025}
\end{center}
\end{figure}
Rather than looking at a single realization, more physically meaningful
results are found by taking the expectation over many realizations. In
\tabref{tab:wspds10100} we examine wave speeds based on $100$
realizations of both the SPDE and SPDAE. The different measures of the
wave speed are in broad agreement. The larger
uncertainty in $\Lambda_{\min}$ and $\Lambda_{\min}^{fix}$ originates in the large variance in the
instantaneous wave speeds $\lambda$ and is a drawback of the
minimization approach.
We also compare the profiles from the {SPDE}\xspace to profiles obtained
from the {SPDAE}\xspace. To avoid the spreading of the wave we need to align
individual realizations of the {SPDE}\xspace. We chose as a common reference
the level set $c(100)$. Examining the final time profiles for the
runs, we find that the weak error,
$\|\expect{u_{SPDAE}(100)} - \expect{u_{SPDE}(100)}\|_{L^2}^2$,
is $\approx 0.0150$ for $10$ realizations, $\approx 0.0144$ for $100$
realizations, and $\approx 0.0117$ for $1000$ realizations.
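Aligning realizations at a common level set before averaging, as done here with $c(100)$, can be sketched as follows; the interpolation scheme, the level, and the synthetic fronts are our own illustrative choices:

```python
import numpy as np

def align_profiles(x, profiles, level=0.5, x_ref=250.0):
    """Shift each realization so that its crossing of `level` sits at x_ref;
    averaging the aligned profiles avoids the smearing of the mean front
    seen when averaging unaligned realizations."""
    aligned = []
    for u in profiles:
        i = np.argmax(u < level)                # first index below the level
        pos = x[i-1] + (level - u[i-1]) * (x[i] - x[i-1]) / (u[i] - u[i-1])
        aligned.append(np.interp(x - (x_ref - pos), x, u))   # u shifted to x_ref
    return np.array(aligned)

# two synthetic fronts at different positions align to a single sharp mean front
x = np.linspace(0.0, 500.0, 5001)
fronts = [0.5 * (1.0 - np.tanh((x - c) / 2.0)) for c in (240.0, 260.0)]
mean_aligned = align_profiles(x, fronts).mean(axis=0)
```

The mean of the aligned profiles crosses the chosen level at $x_{\mathrm{ref}}$ and retains the front shape, whereas the unaligned mean would be smeared over the spread of front positions.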
\begin{table}
\begin{center}
\begin{tabular}{l|c|c|cc}
& $\Lambda_{\min}$ or $\Lambda_{\min}^{fix}$ & $\Lambda_\gamma$ or $\Lambda_\gamma^{fix}$ & $\Lambda_c$ & ${\Lambda_{\text{fit}}}_c$ \\ \hline
SPDE &
1.08588 $\pm$ 0.19680 & 1.08388 $\pm$ 2.73e-03 &
1.08381 $\pm$2.81e-03 & 1.08390 $\pm$2.68e-03 \\
SPDAE &
1.08951 $\pm$ 0.19512 & 1.08790 $\pm$ 2.39e-03 &
-4.0e-05 $\pm$ 2.0e-5 & 3.0e-05 $\pm$ 2.0e-5
\end{tabular}
\end{center}
\caption{Expected values of the wave speeds taken over 100
realizations solving the SPDE and the SPDAE. Initial data taken with
$k_0=1/\sqrt{2}$, and reference function with ${\hat k}=1/\sqrt{2}$.}
\label{tab:wspds10100}
\end{table}
In \figref{fig:SPDE_SPDAE_compare} we compare results for the SPDAE (a) and
SPDE (b) for a range of different nonlinearities $\alpha\in\{-1,-0.5,-0.3,0,0.3,0.45\}$ and noise intensities
measured by $\mu^2 \in [0,1]$. Initial data approximates a
step function and the spatial correlation length of the noise $\xi$ is that
of the computational grid $\Dx$. The results in (a), where
$\Dx=\xi=0.5$, agree with those in
\cite{Armro_etal,PhysRevE.58.5494}, reproduced in
\cite{GrciaOjlvo+Sncho}, where the authors obtain a
front velocity by averaging $\int_L u(x,t)\, dx$ over an ``appropriate time window''
and comparing to a small noise analysis. In (b) we
took a smaller spatial step $\Dx=0.1$ and have plotted the wave speed $\Lambda_{\min}$
computed from minimization together with the level set estimate
$\Lambda_c$, on which the error bars are based.
We observe that
the effect of the two different approximations to spatially white
noise is to increase the speed of the wave.
Note that some realizations where the wave is frozen in
\figref{fig:SPDE_SPDAE_compare} (a) fail to exist due to numerical
instability, see \secref{sec:spdae_unstable}.
\begin{figure}[hbt]
\begin{center}
\psfrag{LAM}{$\Lambda_{\min}$}
\psfrag{mu^2}{$\mu^2$}
(a) \hspace{0.45\textwidth} (b) \\
\includegraphics*[width=0.42\textwidth]{nag_LAM_vs_noise_heun_new}
\includegraphics*[width=0.45\textwidth]{wsmu2_xi01_stratA}\\
\caption{Wave speeds $\Lambda_{\min}$ with increasing noise intensity for
Stratonovich noise with correlation length equal to that of the
grid. In (a) solving the SPDAE where the wave is frozen with
$\xi=0.5=\Dx$ (b) the SPDE with $\xi=0.1=\Dx$. Each line
corresponds
to a different nonlinearity with
$\alpha\in\{-1,-0.5,-0.3,0,0.3,0.45\}$.}
\label{fig:SPDE_SPDAE_compare}
\end{center}
\end{figure}
\subsubsection{Numerical instability}
\label{sec:spdae_unstable}
The numerical approximation of SDEs and SPDEs where the solution is
constrained in phase space is an area under development.
For the Nagumo equations \eqref{eq:spdeStrat} (or \eqref{eq:spdeIto})
$u\in[0,1]$; another typical example is a positivity constraint where
$u>0$. Numerical instability can lead to non-physical solutions and
potentially to unphysical unbounded growth of the numerical solution.
A number of approaches have been proposed
to enforce such constraints in simulations, and a review of
these types of methods for SDEs is contained in \cite{LordKoekkoekvanDijk}.
One method to avoid unbounded growth arising from non-physical
solutions is to adapt the nonlinearity and noise as in
\cite{MoroSchurz07,Drng+etal:03,Drng.etal:05,Shardlow:05}.
We found that when solving the SPDEs \eqref{eq:spdeStrat} (or
\eqref{eq:spdeIto}) such instability was not an issue. However,
freezing the wave and solving the SPDAE \eqref{eq:stratspdae} did lead
to non-physical solutions. In \figref{fig:instab} we have
frozen the wave and show one realization at $t=34.7$ (a) with an
instantaneous wave speed $10.98$ and (b) $t=35.2$ with an
instantaneous speed $-10.60$. The non-physical regions where $u<0$ and
$u>1$ then grow in magnitude with further iterations.
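One crude remedy of the generic kind reviewed in the SDE literature cited above is to project the solution back onto $[0,1]$ after each step. We stress that this sketch is an illustration only, with a standard Nagumo-type nonlinearity $u(1-u)(u-\alpha)$, noise shape $u(1-u)$, and grid-scale noise as our own assumptions; it is not the modification used in our computations, and any such fix must be checked for bias against direct SPDE calculations:

```python
import numpy as np

def nagumo_step_clipped(u, dt, dx, alpha, mu, rng):
    """One explicit Euler--Maruyama step of a Nagumo-type SPDE with
    multiplicative noise shape u(1-u), followed by projection onto [0, 1].
    Homogeneous Neumann boundaries; i.i.d. noise increments per grid cell
    (noise correlated on the grid scale) -- a simplification."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2          # Neumann at the left end
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2       # Neumann at the right end
    dW = rng.normal(0.0, np.sqrt(dt), size=u.shape)
    drift = lap + u * (1.0 - u) * (u - alpha)     # assumed Nagumo nonlinearity
    u_new = u + dt * drift + mu * u * (1.0 - u) * dW
    return np.clip(u_new, 0.0, 1.0)               # projection onto [0, 1]

# run a few steps: the projected iterates remain in [0, 1] by construction
x = np.linspace(0.0, 100.0, 201)                  # dx = 0.5
u = 0.5 * (1.0 - np.tanh((x - 50.0) / np.sqrt(8.0)))
rng = np.random.default_rng(1)
for _ in range(200):
    u = nagumo_step_clipped(u, 1e-3, 0.5, 0.25, 1.0, rng)
```

The projection guarantees the bounds but changes the dynamics near $u=0$ and $u=1$, which is precisely the kind of modification that can bias wave speed estimates.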
\begin{figure}[hth]
\begin{center}
(a) \hspace{0.48\textwidth} (b) \\
\includegraphics*[width=0.48\textwidth]{advection_instabilityT347}
\includegraphics*[width=0.48\textwidth]{advection_instabilityT353}\\
\caption{Plot of a single realization
illustrating instability (a) at $t=34.7$ and (b) at $t=35.2$, with
corresponding instantaneous wave speeds of $10.98$ and $-10.60$.}
\label{fig:instab}
\end{center}
\end{figure}
\figref{fig:nag_none_a-025} and \figref{fig:nag_fix_a-025} show the
distribution of the instantaneous wave speed $\lambda$ as computed for
the SPDE and for the SPDAE where the wave is frozen, respectively.
The SPDAE system \eqref{eq:stratspdae} includes the advection term
$\lambda v_x$, where $\lambda$ is a random variable with a particular
distribution, which may take large positive and/or negative
values.
Numerically this is particularly true for large
noise intensities or small correlation lengths of the noise.
The result is a loss of numerical stability.
Although we were able to control unbounded growth by modifying the
equation solved close to $u=0$ and $u=1$, direct comparisons
of wave speeds to SPDE calculations showed this can lead to a bias
in the estimate, so we do not include such results here.
Hence, in \figref{fig:SPDE_SPDAE_compare} (a) results for the SPDAE
equation are reported with the expectation taken over solutions that
existed to the final time. A large number of initial realizations was
taken so that the final expectation is over at least $1000$
realizations. Although we observe the same results when calculating the
wave speed with the level set methods, in general discarding the realizations
where there is numerical blow-up may bias the statistics.
\subsubsection{Choice of reference function $\hat u$ for the minimization}
\label{sec:uref}
We commented earlier that a natural choice for the reference function
would be to take either a deterministic travelling wave or the initial
data. However, the width of the reference function $\hat u$ plays an
important role in the computed wave speed for both the SPDE and
SPDAE. If we take the reference function $\hat u$ to be the Heaviside
function then the minimization of the $L^2$ norm fails.
We observe numerically that narrow reference functions can also lead
to numerical failure of the minimization and indeed there may be more
than one minimal position.
To illustrate this we solve the SPDE with $\alpha=0.25$ and examine
large noise intensity $\mu=1$ combined with a small correlation length
of $\xi=0.5$.
In \figref{fig:uhfail} (a), with $\hat u=u_{\hat k}$, ${\hat k}=1/\sqrt{2}$,
we see that the width of the computed front is larger than the width
of the reference function $\hat u$, and in (b) is plotted the corresponding
instantaneous wave speed $\lambda(t)$, with time average $0.6483$
and variance $6.4568$.
In (c) the reference function $\hat u=u_{\hat k}$ has ${\hat k}=0.1$ and the width
of the reference is larger than that of the solution. In (d) is plotted the
corresponding
instantaneous wave speed $\lambda(t)$, with time average $0.6091$
and variance over time of $3.8006$.
In (a) the computation of the minimization fails at a later time
($T\approx 55$), whereas in (c) the computation was continued to $T>100$.
In (a) the minimization of the $L^2$ norm
is dominated by the random
fluctuations in the front, which is avoided with a reference function
with larger support.
Provided the width of the reference function is comparable to or
larger than the width of the front, the computations are robust,
although convergence rates of the wave speed can be much slower for poor
choices of the reference function.
\begin{figure}[hth]
\psfrag{lm}[][][0.6]{$\lambda$}
\begin{center}
(a) \hspace{0.48\textwidth} (b) \\
\includegraphics*[width=0.42\textwidth]{A025mu1kur1sqrt2_1real_none_u}
\includegraphics*[width=0.4\textwidth]{lmhistfail} \\
(c) \hspace{0.48\textwidth} (d) \\
\includegraphics*[width=0.42\textwidth]{A025mu1kur01_1real_none_u}
\includegraphics*[width=0.4\textwidth]{lmhistworks}
\end{center}
\caption{One realization of the solution with two different reference
functions $\hat u=u_k$ at time $t=50$ (a) and (c) and the
corresponding different distributions of the instantaneous wave
speeds $\lambda(t)$ (b) and (d). In
(a) and (b) ${\hat k}=1/\sqrt{2}$ and (c) and (d) ${\hat k}=0.1$. Note
the smaller variance in (d) with ${\hat k}=0.1$.}
\label{fig:uhfail}
\end{figure}
\subsection{Effects of Stratonovich and {It\^o}\xspace noise}
\label{sec:ItoStrat}
Accurate numerical calculations are notoriously difficult in the
deterministic case when the wave speed depends on the behaviour at the
leading edge of the wave, see for example \cite{ElmrVnVlck} for the Nagumo
equation or \cite{Qiu+Sln:98} for the Fisher equation.
We consider from now on initial data
that converges to the minimum speed wave in the deterministic case and
take initial data $u_0=u_k$ close to a step function with $k=50$.
We examine the effects on the wave speed and the support of the front of
changing the noise intensity and correlation length, for both
Stratonovich and {It\^o}\xspace noise.
First we examine the effects of Stratonovich noise on the travelling
wave in the Nagumo equation.
\figref{fig:stratlam} shows the wave
speed as the noise intensity $\mu$ increases for four different
correlation lengths $\xi=0.1,0.5,1$ and $10$. On each plot are shown
different nonlinearities $\alpha=0.3,0.25,0,-0.25,-0.3,-0.5,-1$. Each
point on the plot is an average over $100$ realizations, with wave
speeds measured both from the minimization, $\Lambda_{\min}$, and from the level set,
$\Lambda_c$.
We see that increasing the noise intensity increases the wave speed,
whereas increasing the correlation length of the noise decreases the
wave speed. The two effects essentially cancel each other in (d) and
we see no overall effect of the noise intensity on the wave speed.
We can also examine the form of the wave profile.
In \figref{fig:stratw} we have plotted the corresponding average widths of
the wave as the noise intensity $\mu$ and correlation length
$\xi$ are changed. For large noise the width of the wave increases and
this effect is again reduced as the correlation length is increased.
For a spatial correlation length $\xi=\Dx=0.1$
we have an approximation of white noise in space; in this case we see
that for $\alpha=0.45$ and $\alpha=0.25$ the width of the wave
increases and a larger computational domain is required.
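A width measure of the kind plotted can be computed from the distance between two level sets of the front; the particular levels $0.1$ and $0.9$ and the function names here are our own illustrative choices:

```python
import numpy as np

def level_cross(x, u, level):
    """Linearly interpolated crossing position of a decreasing front."""
    i = np.argmax(u < level)
    return x[i-1] + (level - u[i-1]) * (x[i] - x[i-1]) / (u[i] - u[i-1])

def front_width(x, u, lo=0.1, hi=0.9):
    """Front width measured between the `hi` and `lo` level sets."""
    return level_cross(x, u, lo) - level_cross(x, u, hi)

# for the profile 0.5*(1 - tanh((x - 200)/sqrt(8))) the width between the
# 0.9 and 0.1 level sets is analytically 2*arctanh(0.8)*sqrt(8)
x = np.linspace(0.0, 400.0, 4001)
u = 0.5 * (1.0 - np.tanh((x - 200.0) / np.sqrt(8.0)))
w = front_width(x, u)
```

A wider (flatter) front yields a larger value of $w$, so tracking this quantity over realizations gives the expected width curves shown in the figures.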
\begin{figure}[hbt]
\begin{center}
(a) \hspace{0.45\textwidth} (b) \\
\includegraphics*[width=0.45\textwidth]{wsmu2_xi01_stratD}
\includegraphics*[width=0.45\textwidth]{wsmu2_xi05_stratD}\\
(c) \hspace{0.45\textwidth} (d) \\
\includegraphics*[width=0.45\textwidth]{wsmu2_xi1_stratD}
\includegraphics*[width=0.45\textwidth]{wsmu2_xi10_stratD}
\caption{Wave speeds $\Lambda_{\min}$ and $\Lambda_c$ for increasing
Stratonovich noise intensity and different spatial correlation
lengths (a) $\xi=0.1$, (b) $\xi=0.5$, (c) $\xi=1$ and (d)
$\xi=10$. Increasing the noise intensity increases the expected
wave speed, whereas increasing the
correlation length decreases the expected wave speed.}
\label{fig:stratlam}
\end{center}
\end{figure}
\begin{figure}[hbt]
\begin{center}
(a) \hspace{0.48 \textwidth} (b) \\
\includegraphics*[width=0.45\textwidth]{wdthmu2_xi01_strat_leg}
\includegraphics*[width=0.45\textwidth]{wdthmu2_xi05_strat_leg}\\
(c) \hspace{0.48 \textwidth} (d) \\
\includegraphics*[width=0.45\textwidth]{wdthmu2_xi1_strat_leg}
\includegraphics*[width=0.45\textwidth]{wdthmu2_xi10_strat_leg}
\caption{Expected width of the wave for increasing Stratonovich
noise intensity and different spatial correlation lengths (a) $\xi=0.1$,
(b) $\xi=0.5$, (c) $\xi=1$, and (d) $\xi=10$. As the noise
intensity is increased the expected width of the wave front
increases, whereas for fixed intensity increasing the correlation
length reduces the expected width.}
\label{fig:stratw}
\end{center}
\end{figure}
For {It\^o}\xspace noise the effect of the noise on wave speed and width of the
waves is less pronounced, see \figref{fig:itolam} for the wave speed
and \figref{fig:itow} for the corresponding width of the waves.
We see for large noise, in contrast to the Stratonovich case, a
slight drop in the wave speed for
correlation lengths $\xi<10$. As we increase the noise intensity we see
(for most nonlinearities) a drop in the width of the wave, so that
the front is steeper on average; the effect is more pronounced for
shorter correlation lengths.
\begin{figure}[hbt]
\begin{center}
(a) \hspace{0.45\textwidth} (b) \\
\includegraphics*[width=0.45\textwidth]{wsmu2_xi01_itoD}
\includegraphics*[width=0.45\textwidth]{wsmu2_xi05_itoD}\\
(c) \hspace{0.45\textwidth} (d) \\
\includegraphics*[width=0.45\textwidth]{wsmu2_xi1_itoD}
\includegraphics*[width=0.45\textwidth]{wsmu2_xi10_itoD}\\
\caption{Expected wave speeds $\Lambda_{\min}$ and $\Lambda_c$ for increasing
{It\^o}\xspace noise intensity and
different spatial correlation lengths (a) $\xi=0.1$,
(b) $\xi=0.5$, (c) $\xi=1$ and (d) $\xi=10$. As noise intensity is
increased we see a slight drop in wave speed and little effect from
the changing correlation length.}
\label{fig:itolam}
\end{center}
\end{figure}
\begin{figure}[hbt]
\begin{center}
(a) \hspace{0.45\textwidth} (b) \\
\includegraphics*[width=0.45\textwidth]{wdthmu2_xi01_ito_leg}
\includegraphics*[width=0.45\textwidth]{wdthmu2_xi05_ito_leg}\\
(c) \hspace{0.45\textwidth} (d) \\
\includegraphics*[width=0.45\textwidth]{wdthmu2_xi1_ito_leg}
\includegraphics*[width=0.45\textwidth]{wdthmu2_xi10_ito_legA}
\caption{Expected width of waves for increasing {It\^o}\xspace noise intensity and
different spatial correlation lengths (a) $\xi=0.1$,
(b) $\xi=0.5$, (c) $\xi=1$ and (d) $\xi=10$. Increasing noise
intensity narrows the width of the wave and this effect is
mitigated by increasing the correlation length. Legend for all
four plots is given in (d).}
\label{fig:itow}
\end{center}
\end{figure}
\subsection{Computations using averaged quantities}
In general, computing wave profiles using averaged quantities leads to the wave
being ``polluted'' by the spread of the individual waves (see
\cite{GrciaOjlvo+Sncho}).
In \eqref{eq:spdae.E} we propose using an expected value of the
instantaneous wave speeds for the SPDAE. We fix a spatial correlation
length of $\xi=0.5$. For the SPDAE, if we solve with
$\mu=0.1$, $\hat u=u_k$ with $k=0.1$ and $u^0=u_{k_0}$, $k_0=1/\sqrt{2}$
and $100$ realizations then we obtain an estimate of a wave speed of
$1.086$. This compares with $\Lambda_{\min}=1.086$ and $\Lambda_c=1.084$ from
solving the SPDE \eqref{eq:spdeStrat}.
If we examine the computed mean solution front we do not observe
spreading of the wave front (see \figref{fig:Elm} (a)). We also note
from (b) that the distribution of $\lambda(t)$ has smaller variance
than that from solving \eqref{eq:stratspdae}.
For the SPDE we can implement a version of \eqref{eq:spdae.E} where we
move the reference function using the expected values of the
instantaneous wave speeds. In \figref{fig:Elm} (c) we plot the
distribution of $\lambda$ for the same parameters as in (b). The mean
values agree although the distributions are different.
In (d) we see that computed wave speeds using the average
instantaneous speed and wave speeds $\Lambda_c$ computed using the
level set approach are the same over a range of nonlinearities. These
are the same as those computed using the SPDE, compare to
\figref{fig:stratlam} (b).
\begin{figure}[hbt]
\begin{center}
(a) \hspace{0.45\textwidth} (b) \\
\includegraphics*[width=0.42\textwidth]{WeakMeanUfix}
\includegraphics*[width=0.42\textwidth]{lmhistEfix} \\
(c) \hspace{0.45\textwidth} (d) \\
\includegraphics*[width=0.42\textwidth]{lmhistEnone}
\includegraphics*[width=0.42\textwidth]{WEAKws0105SPDE}
\end{center}
\caption{(a) Mean solution of the SPDAE \eqref{eq:spdae.E}. Note that the
wave front is not spread by taking the mean
instantaneous speed. In (b) is plotted the distribution of $\lambda$
used to freeze the wave in (a). In (c) we plot the distribution from
solving the SPDE using the average wave speed in the minimization,
and in (d) we compare wave speeds over a range of
nonlinearities and noise intensities for $\xi=0.5$.
}
\label{fig:Elm}
\end{figure}
\subsection{Additive noise}
\label{sec:add}
We briefly consider the case of additive noise in
the {SPDE}\xspace for which, unless
the noise has some special properties, a solution will in general
cease to exist at some finite time.
\begin{figure}[hbt]
\begin{center}
(a) \hspace{0.32\textwidth} (b) \\
\includegraphics*[width=0.45\textwidth]{comp_LAM_nu0_01_v5B}
\includegraphics*[width=0.45\textwidth]{comp_LAM_nu0_01_a01_v5B}
\caption{Influence of additive white noise on the wave speed $\Lambda_{\min}$ for (a)
$\alpha=0.25$ and (b) $\alpha=0.1$. In both cases there is a clear
increase in the wave speed from the deterministic case ($\nu=0$) as
the noise intensity is increased, with the final wave speeds in
order of the indicated noise intensities.}
\end{center}
\end{figure}
We now change the parameter $\alpha$ in the nonlinearity
to $\alpha=0.1$ and illustrate how the {SPDAE}\xspace approach deals with
nucleation and extinction of waves. In \figref{fig:nuc} we have plotted
in (a) a single realization of the {SPDE}\xspace (so not frozen) showing
nucleation and subsequent extinction ($t\approx 98$) of a
travelling wave. In (b) is plotted a single realization from computing
using the {SPDAE}\xspace approach. We see the wave is fixed in the domain and
at $t\approx 50$ a wave is nucleated at $x\approx 100$ by the
additive noise. The computations are based on the original wave which
remains fixed until it interacts with the nucleated wave and is
annihilated at $t\approx 94$, at which point the computations stop as the
wave ceases to exist.
In (c) and in (d) we have plotted mean profiles for the {SPDE}\xspace and the
frozen {SPDAE}\xspace systems. In each case we see a well defined front from
the averaging and individual nucleations and annihilations are no longer
distinguishable (although in (d) a large solution pollutes the data at
$t\approx 130$).
\begin{figure}[hbt]
\begin{center}
(a) \hspace{0.45\textwidth} (b) \\
\includegraphics*[width=0.45\textwidth]{nag01_nu005_u_neu_trav_1}
\includegraphics*[width=0.45\textwidth]{nag01_nu005_u_neu_1}\\
(c) \hspace{0.45\textwidth} (d) \\
\includegraphics*[width=0.45\textwidth]{nag_utx_a01_nu005_neu_trav_1000}
\includegraphics*[width=0.45\textwidth]{nag_utx_a01_nu005_neu_1000}
\end{center}
\caption{Nucleation of travelling waves and annihilation for the
Nagumo equation with $\alpha=0.1$. In (a) the space-time plot shows
computations of the {SPDE}\xspace (no freezing) and in (b) the {SPDAE}\xspace where
the wave is frozen. In (c) and (d) are plotted means over
realizations for the travelling and frozen cases, respectively.}
\label{fig:nuc}
\end{figure}
\section{Discussion}
\label{sec:conc}
We have examined level set based methods and minimization against a
reference function to calculate the wave speed of a stochastic
travelling wave. Our numerical results illustrate that these give
comparable results. Numerically we saw that, for reference functions
with support much smaller than the support of the travelling wave,
the minimization may fail. Using the minimization technique for the
SPDE (when the wave is not frozen) is more computationally expensive than
the level set based methods as it requires interpolation at each time
step.
The algorithm described for freezing the wave and solving the SPDAE
has several numerical advantages over simply solving the {SPDE}\xspace, if the
numerical instability issues can be overcome. The frozen wave does
not require a large computational domain for long time simulations and
the generation of the noise path is not so computationally expensive.
The cost of the minimization when the wave is fixed is minimal as
we simply need to compute two inner products.
However, the advection term is nontrivial, and the loss
of numerical stability is a real issue: some realizations fail to
exist, and ignoring results where there is numerical blow-up may bias
the statistics.
Our investigation of the Nagumo equation has revealed interesting
and new computational observations that we have not seen reported in
the literature. Although it was known that for Stratonovich noise
increasing the noise intensity increases the wave speed, we have also seen that it
increases the support of the wave. In addition, increasing the spatial
correlation length decreases the wave speed and decreases the support of the wave.
The reverse is observed for {It\^o}\xspace noise: increasing the noise intensity seems to
decrease the wave speed slightly, the correlation length has little influence on
the speed, and the noise decreases the support of the wave.
For additive noise in the Nagumo equation we see that the wave speed is
increased with the noise intensity like in the multiplicative case --
this is probably because of the small perturbations ahead of the front
that make the wave faster.
\bibliographystyle{siam}
\section{Proof of equivalence to Schwinger boson representation of
spin-1 condensate}
In the paper we suggested that an arbitrary $F=1$ spinor BEC wave function may be cast in the form
\begin{eqnarray} \Psi_{F=1} =\sqrt{\rho}e^{i\theta} \bm \eta , \end{eqnarray}
where the spinor part $\bm \eta$ is a linear combination of the AFM $(\bm \eta_A)$ and the FM $(\bm
\eta_F)$ parts,
\begin{eqnarray} \bm \eta = z_A \bm \eta_A + z_F \bm \eta_F , ~ \begin{pmatrix} z_A \\
z_F \end{pmatrix} = \begin{pmatrix} \cos \delta/2 \\ e^{i\tau} \sin \delta/2 \end{pmatrix} ,
\end{eqnarray}
with a pair of coefficients $(z_A , z_F )$ obeying the constraint $|z_A |^2 + |z_F |^2 = 1$. Due to
the global phase $e^{i\theta}$ entering the wave function $\Psi_{F=1}$ one can freely choose
$z_A$ to be real without loss of generality. Furthermore, the phase factor $e^{i\tau}$ of $z_F$
always appears multiplied by $e^{-i\gamma}$ of $\bm \eta_F$ and can be absorbed by it:
$e^{i(\tau-\gamma)} \rightarrow e^{-i\gamma}$. This leads to the simplified expression
\begin{eqnarray} \Psi_{F=1}= \sqrt{\rho} e^{i\theta} \left[\bm \eta_A \cos {\delta \over 2} + \bm \eta_F \sin {\delta
\over 2} \right] \label{eq:our-decomposition} \end{eqnarray}
given in Eq. (2) of the paper with
\begin{eqnarray} \bm \eta_A\!=\! \begin{pmatrix} -{1\over\sqrt{2}} e^{-i\alpha} \sin \beta
\\ \cos\beta \\ {1\over\sqrt{2}} e^{i\alpha} \sin \beta \end{pmatrix}, ~
\bm \eta_F \!=\! e^{-i\gamma}
\begin{pmatrix} e^{-i\alpha} \cos^2 {\beta \over 2 } \\
{1\over\sqrt{2}} \sin \beta \\ e^{i\alpha} \sin^2 {\beta \over 2 }
\end{pmatrix} . \label{eq:eta1-eta2} \nonumber \\ \end{eqnarray}
One can furthermore prove that the new scheme is equivalent to the standard Schwinger boson (SB)
expression of the (unnormalized) spin-1 wave function
[\onlinecite{spin-half-hydro-refeal,ueda-schwinger}]
\begin{eqnarray}
\Psi_\mathrm{SB} \sim \sqrt{\rho} e^{i\theta_\mathrm{SB}} (u_1 a^\dag + v_1 b^\dag ) (u_2
a^\dag + v_2 b^\dag ) |0\rangle .\end{eqnarray}
Each $(u_i, v_i ) = (\cos \theta_i /2 , e^{i \phi_i } \sin \theta_i /2 )$ defines a point $\v n_i$
on the unit sphere S$^2$ through the CP$^1$ mapping. The FM state is obtained by identifying
$\theta_1 = \theta_2$, $\phi_1 = \phi_2$, or $\v n_1 = \v n_2$. The antiferromagnetic phase is
identified with $\v n_2 = -\v n_1$, by writing $(u_2, v_2 ) = (v^*_1, - u_1 )$. Keeping in mind
that the angular momentum state $|J, m\rangle$ in the SB representation is
\begin{eqnarray} |J, m\rangle = {1\over\sqrt{(J\!+\!m)!(J\!-\!m)!}} (a^\dag)^{J\!+\! m} (b^\dag)^{J\!-\!m}
|0\rangle , \end{eqnarray}
the SB wave function becomes, with the proper normalization factor,
\begin{eqnarray} \Psi_\mathrm{SB} = \sqrt{\rho} e^{i\theta_\mathrm{SB}} \sqrt{2 \over 3+ \v n_1 \cdot \v n_2}
\begin{pmatrix} \sqrt{2} u_1 u_2 \\ u_1 v_2 +u_2 v_1 \\ \sqrt{2} v_1 v_2 \end{pmatrix}. \label{eq:SB-form}\end{eqnarray}
Components of the original spinor wave function $\Psi_{F=1}$ are related to the SB decomposition
through the roots of the quadratic equation
\begin{eqnarray} \psi_+ z^2 + \sqrt{2} \psi_0 z + \psi_- =0 , \label{eq:for-z} \end{eqnarray}
with the coefficients derived from the wave function $\Psi_{F=1} = ( \psi_+ , \psi_0, \psi_- )^T$.
The two roots must correspond precisely to $z_1 = - v_1/ u_1 $ and $z_2=- v_2 / u_2 $ if Eq.
(\ref{eq:SB-form}) is to hold for an arbitrary $F=1$ wave function~\cite{ueda-schwinger}. Once the
roots of a particular $F=1$ wave function are found, they can be related to the SB parameters
through the formula
\begin{eqnarray} z_i = -{v_i \over u_i}= -e^{i \phi_i}\tan{\theta_i \over 2} .\label{z sol1} \end{eqnarray}
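As a purely numerical illustration of this root-based dictionary (our own sketch; the helper functions below are hypothetical and not from the paper), one can build $\Psi_\mathrm{SB}$ from two points on the sphere and recover $(\theta_i, \phi_i)$ from the roots of Eq. (\ref{eq:for-z}):

```python
import numpy as np

def sb_spinor(th1, ph1, th2, ph2):
    """Normalized Psi_SB built from two points (theta_i, phi_i) on S^2."""
    u1, v1 = np.cos(th1 / 2), np.exp(1j * ph1) * np.sin(th1 / 2)
    u2, v2 = np.cos(th2 / 2), np.exp(1j * ph2) * np.sin(th2 / 2)
    psi = np.array([np.sqrt(2) * u1 * u2, u1 * v2 + u2 * v1, np.sqrt(2) * v1 * v2])
    return psi / np.linalg.norm(psi)

def sb_parameters(psi):
    """Recover [(theta_i, phi_i)] from psi_+ z^2 + sqrt(2) psi_0 z + psi_- = 0."""
    roots = np.roots([psi[0], np.sqrt(2.0) * psi[1], psi[2]])
    return [(2.0 * np.arctan(np.abs(z)),   # |z| = tan(theta/2)
             np.angle(-z))                 # arg(-z) = phi
            for z in roots]
```

A round trip, building $\Psi_\mathrm{SB}$ from two SB points and recovering them from the quadratic roots, checks the mapping directly; the quadratic factorizes as $(u_1 z + v_1)(u_2 z + v_2)=0$, so the two roots are exactly $-v_i/u_i$.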
We may now apply this procedure to the new condensate wave function $\Psi_{F=1}$ shown in Eq.
(\ref{eq:our-decomposition}). The two roots are readily found to be
\begin{eqnarray} z_1 &=& - e^{i \alpha} \tan{\beta \over 2}, \nonumber \\
z_2 &=& e^{i
\alpha}\left(\cot{\beta \over 2}\!+\! {1 \over \sqrt{2} e^{i \gamma}
\cot {\delta \over 2} \sin^2 {\beta \over 2} \!-\! {\sin \beta \over 2} }
\right). \nonumber \\ \label{eq:z1-z2} \end{eqnarray}
From the first solution it follows that the first pair of SB parameters $(\phi_1, \theta_1)$ is
equal to $(\alpha, \beta)$ in the wave function $\Psi_{F=1}$. That is, once a SB construction is
given, the corresponding $\Psi_{F=1}$ can be found by identifying $(\alpha, \beta)$ with the first
pair of SB parameters $(\phi_1, \theta_1 )$. The second solution $z_2$ can then be used to relate
the remaining unknowns $(\gamma, \delta)$ to the SB parameters.
Rather than tackling the second of Eqs. (\ref{eq:z1-z2}) directly, we adopt a more pragmatic route and express $(\gamma, \delta)$ in terms of the SB parameters. It proves quite useful to compare the magnetization averages in the two representations. From $\Psi_\mathrm{SB}$ one finds
\begin{eqnarray} \v S_\mathrm{SB} = \Psi^\dag_\mathrm{SB} \v F \Psi_\mathrm{SB} = \rho {2(\v n_1 + \v n_2
)\over 3 + \v n_1 \cdot \v n_2}.\end{eqnarray}
The average from $\Psi_{F=1}$ is more complicated, but fortunately one can relate it to the Euler rotation of some basis spinor
\begin{eqnarray} \v S_{F=1} = \Psi^\dag_{F=1}\v F \Psi_{F=1} = \rho {\cal R}(\alpha, \beta, \gamma) \begin{pmatrix} {1 \over \sqrt 2}\sin \delta \\ 0 \\
\sin^2 {\delta \over 2} \end{pmatrix} ,\nonumber \\ \label{eq:R}\end{eqnarray}
with ${\cal R}(\alpha, \beta, \gamma) = e^{- i \alpha S_z} e^{- i \beta S_y} e^{- i \gamma S_z}$ and $[S_\alpha ]_{\beta\gamma}
= -i \varepsilon_{\alpha\beta\gamma}$.
Squaring each average and equating the two, $\v S_\mathrm{SB}^2 = \v S_{F=1}^2$, yields the relation
\begin{eqnarray} 1- \left( \cos {\delta \over 2} \right)^4 = 8{\bigl(1+\v n_1 \cdot \v n_2
\bigr) \over \bigl(3+ \v n_1 \cdot \v n_2 \bigr)^2 } .\end{eqnarray}
The mixing angle $\delta$ then follows as
\begin{eqnarray} \cos {\delta
\over 2}= \sqrt{ {1-\v n_1 \cdot \v n_2 \over 3+ \v n_1 \cdot \v
n_2}}. \label{eq:for-delta}\end{eqnarray}
For any SB wave function one can first work out $\v n_1$ and $\v n_2$, then use the above relation
to uniquely specify $\delta$ in Eq. (\ref{eq:our-decomposition}). Proper limits are recovered when
$\v n_1 = \v n_2$ (FM, $\delta = \pi$) and $\v n_1 = -\v n_2$ (AFM, $\delta = 0$).
To determine the remaining parameter $\gamma$, one notes that $\v S_{F=1}$ given in Eq.
(\ref{eq:R}) ought to be orthogonal to the following vector:
\begin{eqnarray}{\cal R}(\alpha,\beta,\gamma)\begin{pmatrix} 0\\1\\0 \end{pmatrix} =\begin{pmatrix}-\cos\alpha\cos\beta\sin\gamma -\sin\alpha \cos\gamma\\
-\sin\alpha\cos\beta\sin\gamma+ \cos\alpha\cos\gamma \\
\sin\beta\sin\gamma \end{pmatrix} . \nonumber \\ \label{eq:1.15}\end{eqnarray}
Taking the inner product of $\v S_{F=1}$, or equivalently of $\v S_\mathrm{SB}$, with Eq.
(\ref{eq:1.15}) should give zero:
\begin{eqnarray} \sin\gamma
\bigl(\sin\beta\cos\theta_2-\cos(\alpha-\phi_2)\cos\beta
\sin\theta_2 \bigr)\nonumber \\ - \cos\gamma\sin(\alpha-\phi_2)\sin\theta_2 =
0. \end{eqnarray}
Luckily this equation contains $\gamma$ only, and one finds
\begin{eqnarray} \tan \gamma =
{\sin\theta_2\sin(\alpha- \phi_2) \over \sin\beta\cos\theta_2
-\cos(\alpha - \phi_2)\cos\beta \sin\theta_2 }.\nonumber \\ \label{eq:for-gamma}\end{eqnarray}
All quantities on the right-hand side are SB parameters, assumed given in advance.
Once a particular SB parametrization of the $F=1$ wave function is given, one can proceed
systematically to find the corresponding $(\alpha, \beta, \gamma, \delta)$ of the new decomposition
scheme, Eq. (\ref{eq:our-decomposition}), by (i) identifying $(\alpha, \beta ) = (\phi_1 , \theta_1
)$, and (ii) solving for $(\gamma, \delta)$ using Eqs. (\ref{eq:for-delta}) and
(\ref{eq:for-gamma}). Finally, the global phase factor $\theta$ in $\Psi$ is found by taking the
projection
\begin{eqnarray} e^{i \theta } = \bm \eta^\dag \Psi_\mathrm{SB} \end{eqnarray}
now that $\bm \eta$ is completely fixed. With the one-to-one correspondence to the SB wave function
established, Eq. (\ref{eq:our-decomposition}) constitutes a new, alternative way to express the
most general spin-1 wave function.
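The full recipe can also be verified numerically. The sketch below (our own illustration; all function names are hypothetical) implements steps (i) and (ii) and checks that the resulting $\bm\eta$ reproduces $\Psi_\mathrm{SB}$ up to a global phase; since Eq. (\ref{eq:for-gamma}) fixes $\gamma$ only modulo $\pi$, both branches are tried:

```python
import numpy as np

def sb_spinor(th1, ph1, th2, ph2):
    """Normalized Psi_SB built from two points on the unit sphere."""
    u1, v1 = np.cos(th1 / 2), np.exp(1j * ph1) * np.sin(th1 / 2)
    u2, v2 = np.cos(th2 / 2), np.exp(1j * ph2) * np.sin(th2 / 2)
    psi = np.array([np.sqrt(2) * u1 * u2, u1 * v2 + u2 * v1, np.sqrt(2) * v1 * v2])
    return psi / np.linalg.norm(psi)

def eta(alpha, beta, gamma, delta):
    """eta_A cos(delta/2) + eta_F sin(delta/2) of the new decomposition."""
    eta_A = np.array([-np.exp(-1j * alpha) * np.sin(beta) / np.sqrt(2),
                      np.cos(beta) + 0j,
                      np.exp(1j * alpha) * np.sin(beta) / np.sqrt(2)])
    eta_F = np.exp(-1j * gamma) * np.array(
        [np.exp(-1j * alpha) * np.cos(beta / 2) ** 2,
         np.sin(beta) / np.sqrt(2) + 0j,
         np.exp(1j * alpha) * np.sin(beta / 2) ** 2])
    return eta_A * np.cos(delta / 2) + eta_F * np.sin(delta / 2)

def params_from_sb(th1, ph1, th2, ph2):
    """(alpha, beta, gamma, delta) from the SB parameters, steps (i)-(ii)."""
    alpha, beta = ph1, th1                                   # step (i)
    n1 = np.array([np.sin(th1) * np.cos(ph1), np.sin(th1) * np.sin(ph1), np.cos(th1)])
    n2 = np.array([np.sin(th2) * np.cos(ph2), np.sin(th2) * np.sin(ph2), np.cos(th2)])
    delta = 2 * np.arccos(np.sqrt((1 - n1 @ n2) / (3 + n1 @ n2)))
    num = np.sin(th2) * np.sin(alpha - ph2)
    den = np.sin(beta) * np.cos(th2) - np.cos(alpha - ph2) * np.cos(beta) * np.sin(th2)
    gamma = np.arctan2(num, den)                             # fixed modulo pi
    return alpha, beta, gamma, delta
```

For a generic choice of SB parameters, one of the two $\gamma$ branches gives $|\bm\eta^\dag\Psi_\mathrm{SB}|=1$, i.e. the two wave functions agree up to the global phase $e^{i\theta}$.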
\section{Decomposition of the Gross-Pitaevskii equation}
In this section the Gross-Pitaevskii (GP) equation for the spin-1 condensate, shown in Eq. (5) of
the paper, is derived. The standard form of the GP equation,
\begin{eqnarray} i\hbar {\partial \over \partial t}\Psi = -{\hbar^2 \over 2m} \bm
\nabla^2 \Psi+ {1 \over 2} m \omega^2 r^2 \Psi + g_0 \rho \Psi + g_2 \v S \cdot \v F \Psi , \label{GP eq}\nonumber \\\end{eqnarray}
will be re-analyzed based on the decomposition being proposed in Eq. (\ref{eq:our-decomposition}).
On direct insertion of $\Psi_{F=1}$ into the GP equation one finds
\begin{widetext}
\begin{eqnarray} && i\hbar \Bigl ([\partial_t \omega_A +i \omega_A \partial_t \theta ] \bm \eta_A
+ [\partial_t \omega_F + i \omega_F \partial_t \theta ]\bm\eta_F+ \omega_A[\partial_t
\bm\eta_A]+\omega_F[\partial_t \bm\eta_F]
\Bigr)\nonumber \\
&& = {-\hbar^2\over 2m} \Bigl(\bigl[\partial^2_\mu \omega_A + 2 i
\partial_\mu \omega_A \partial_\mu \theta + \omega_A \bigl(i\partial^2_\mu
\theta- (\partial_\mu \theta)^2\bigr)\bigr]\bm \eta_A+
2\bigl[\partial_\mu \omega_A + i \omega_A \partial_\mu \theta \bigr]
\bigl[\partial_\mu \bm \eta_A \bigr] +\omega_A\bigl[\partial^2_\mu
\bm\eta_A\bigr]\nonumber \\
& & ~~~~~~~~~~+\bigl[\partial^2_\mu \omega_F + 2 i\partial_\mu \omega_F \partial_\mu \theta
+ \omega_F \bigl(i\partial^2_\mu
\theta- (\partial_\mu \theta)^2\bigr)\bigr]\bm \eta_F+ 2\bigl[ \partial_\mu \omega_F + i\omega_F
\partial_\mu \theta \bigr] \bigl[\partial_\mu \bm \eta_F \bigr] +\omega_F \bigl[\partial^2_\mu
\bm\eta_F\bigr] \Bigr) \nonumber \\
& &+{1 \over 2} m \omega^2 r^2
\bigl[\omega_A \bm \eta_A + \omega_F \bm \eta_F \bigr] + g_0\rho \bigl[
\omega_A \bm \eta_A + \omega_F \bm \eta_F \bigr] + g_2 \v S \cdot
\v F \bigl[\omega_A \bm\eta_A + \omega_F \bm \eta_F \bigr] .
\label{eq:our-GP}\end{eqnarray}
\end{widetext}
Here $\omega_A$ and $\omega_F$ are temporary abbreviations for $\omega_A \equiv \sqrt{\rho}z_A$
and $\omega_F \equiv \sqrt{\rho}z_F$.
Note that a repeated index $\mu$ implies summation over $\mu = x,~y,~z$. As in Ref. \onlinecite{refael} we expect to obtain various
hydrodynamic relations by projecting the above equation with $\bm \eta_A$ and $\bm\eta_F$ already
introduced in Eq. (\ref{eq:eta1-eta2}), and with a third one, $\bm \eta_{\overline{F}}$, defined by
\begin{eqnarray} \bm \eta_{\overline{F}} =\begin{pmatrix} e^{- i \alpha} \sin^2 { \beta \over 2} \\
{ 1 \over \sqrt{2}} \sin \beta \\ e^{ i \alpha} \cos^2 { \beta \over 2} \end{pmatrix} .\end{eqnarray}
As discussed in the main text, $\bm \eta_F, \bm \eta_A, \bm \eta_{\overline{F}}$ are the Euler
rotations ${\cal U}(\alpha, \beta, \gamma)$ of the three basis spinors $(1~ 0 ~ 0)^T$, $(0~1~0)^T$
and $(0~0~1)^T$, respectively. Only this time the Euler rotation ${\cal U}(\alpha, \beta, \gamma) =
e^{-i\alpha F_z} e^{-i\beta F_y } e^{-i\gamma F_z }$ is generated with a different set of spin
matrices $\v F$.
It will be seen shortly that projections onto the first two spinors yield the standard hydrodynamic
relations such as mass continuity, the Euler equation, and the Landau-Lifshitz equation. The projection
onto the third spinor, however, has been neglected in the past literature\cite{lamacraft} and yields
some surprising consequences.
Before proceeding to the hydrodynamic decomposition, some mathematical preliminaries are in order.
It proves extremely convenient to define a triad of orthogonal basis vectors,
\begin{widetext}
\begin{eqnarray} \v e_x &\equiv& {\cal R} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = ( \cos \beta \cos \alpha \cos\gamma
-\sin\alpha \sin\gamma , \cos \beta \sin \alpha \cos\gamma +\cos\alpha \sin\gamma , -\sin
\beta\cos\gamma ) , \nonumber \\
\v e_y &\equiv& {\cal R} \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = ( -\sin \alpha\cos\gamma
-\cos\alpha\cos\beta \sin\gamma , \cos \alpha\cos\gamma -\cos\beta\sin\alpha \sin\gamma
, \sin\beta \sin \gamma ) , \nonumber \\
\v d & \equiv & {\cal R} \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = (\sin \beta \cos \alpha, \sin \beta \sin \alpha,
\cos \beta), \label{eq:ex-ey-d}\end{eqnarray}
\end{widetext}
using the rotation matrix ${\cal R}$ introduced earlier in Eq. (\ref{eq:R}). For later convenience
one also defines $\v e_\pm \equiv \v e_x \pm i \v e_y$. Note that our definitions of the triad $(\v
e_x , \v e_y , \v d)$ are more general than those of Ref. \onlinecite{refael}.
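As a quick check (our own sketch, not part of the derivation), the closed forms above can be compared against the product of elementary rotation matrices, since $e^{-i\alpha S_z}$ with $[S_a]_{bc} = -i\varepsilon_{abc}$ is simply the SO(3) rotation by $\alpha$ about the $z$ axis:

```python
import numpy as np

def Rz(a):
    """Rotation by angle a about z, the SO(3) form of exp(-i a S_z)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(b):
    """Rotation by angle b about y, the SO(3) form of exp(-i b S_y)."""
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def triad(alpha, beta, gamma):
    """Columns of R = Rz(alpha) Ry(beta) Rz(gamma): the vectors e_x, e_y, d."""
    R = Rz(alpha) @ Ry(beta) @ Rz(gamma)
    return R[:, 0], R[:, 1], R[:, 2]
```

For any angles the three columns reproduce the closed forms of Eq. (\ref{eq:ex-ey-d}) and make up an orthonormal, right-handed frame with $\v e_x \times \v e_y = \v d$.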
Various connections can be defined among the three spinors: $-i \bm \eta_\alpha^\dag [ \partial_\mu
\bm \eta_\beta ]$. Some are zero, while all the non-zero connections can be related to the
derivatives of the geometric quantities defined in Eq. (\ref{eq:ex-ey-d}):
\begin{widetext}
\begin{eqnarray} \v e_x \cdot \partial_\nu \v e_y &=& -\cos\beta \partial_\nu \alpha - \partial_\nu \gamma = -i
\bm\eta^\dag_F [ \partial_\nu \bm\eta_F ]= i \bm\eta^\dag_{\overline{F}} [ \partial_\nu
\bm\eta_{\overline{F}} ], \nonumber \\
\v e_{+} \cdot \partial_\nu \v d &=& e^{-i \gamma } \bigl(\partial_\nu\beta + i \sin\beta \partial_\nu \alpha\bigr) ={\sqrt 2} \bm\eta^\dag_A [ \partial_\nu \bm\eta_F ] = \sqrt{2} \bm\eta^\dag_{\overline{F}} [ \partial_\nu \bm\eta_A ], \nonumber \\
\v e_{-} \cdot \partial_\nu \v d &=&e^{i \gamma }\bigl( \partial_\nu \beta -i \sin\beta \partial_\nu \alpha\bigr)
= -{\sqrt 2}\bm\eta^\dag_F [ \partial_\nu \bm\eta_A ] , \label{geometric relation}
\end{eqnarray}
\end{widetext}
where $\nu$ runs over $(\v r, t)$.
The connection $-i\bm \eta^\dag_F \partial_\nu \bm \eta_F$ appears frequently in the hydrodynamic
equations and will be labeled $a_\nu$:
\begin{eqnarray} a_{\nu} = -\cos\beta \partial_\nu \alpha - \partial_\nu \gamma.
\end{eqnarray}
Crucial to the hydrodynamic formulation are the various projections of the second derivatives of
the spinor, $\bm \eta_\alpha^\dag \partial^2_\mu \bm \eta_\beta$, which can be re-written nicely in
terms of geometric quantities:
\begin{eqnarray} \bm\eta^\dag_A
\partial^2_\mu \bm\eta_A&=& -\bigl(\partial_\mu \v d\bigr)^2 , \nonumber \\
\bm\eta^\dag_A \partial^2_\mu \bm\eta_F &=& i\sqrt{2} a_{\mu} ( \v e_+ \cdot \partial_\mu \v d ) +
{1 \over \sqrt 2} \v e_+ \cdot \partial^2_\mu \v d , \nonumber \\
\bm\eta^\dag_F \partial^2_\mu \bm\eta_A&=& -{1 \over \sqrt 2} \v e_+ \cdot \partial^2_\mu \v d , \nonumber \\
\bm\eta^\dag_F \partial^2_\mu \bm\eta_F&=& i \partial_\mu a_{\mu} - \Bigl({1 \over 2}\bigl(\partial_\mu \v d\bigr)^2 +\bigl(a_{\mu}\bigr)^2 \Bigr) , \nonumber \\
\bm\eta^\dag_{\overline{F}}\partial^2_\mu \bm\eta_A&=& {1 \over \sqrt 2} \v e_+ \cdot \partial_\mu^2 \v d , \nonumber \\
\bm\eta^\dag_{\overline{F}} \partial^2_\mu \bm\eta_F&=& {1 \over 2} \Bigl(\v e_+ \cdot \partial_\mu \v d \Bigr)^2.
\end{eqnarray}
For convenience one can also define partial magnetization $ \bm \eta_\alpha^\dag \v F \bm
\eta_\beta = \v S_{\alpha\beta}$, which gives
\begin{eqnarray} \v S_{AF} = \v S_{\overline{F}A} &=& {1 \over \sqrt{2}} (\v e_x + i \v e_y ), \nonumber \\
\v S_{FA} = \v S_{A\overline{F}} &=& {1 \over \sqrt{2}} (\v e_x - i \v e_y ), \nonumber \\
\v S_{FF} &=& \v d .\end{eqnarray}
Magnetization for the general wave function $\Psi_{F=1}$ becomes $ \v S = \rho \bigl( \sqrt{2} z_A
z_F \v e_x + z_F^2 \v d\bigr)$. Other useful relations are $\v S_{AA}=\v S_{\overline{F}F}=\v
S_{F\overline{F}} = 0$, and $\v S_{\alpha\beta} \cdot \v d = \delta_{F\alpha}\delta_{F \beta}$.
After these technical preparations we can begin to decompose
Eq. (\ref{eq:our-GP}) by projecting first with $\bm\eta^\dag_A$:
\begin{widetext}
\begin{eqnarray} i \hbar \Bigl[ {\cal D}_{A,t} z_A + {1\over \sqrt 2}z_F \bigl( \v e_{+} \cdot \partial_t \v d
\bigr) \Bigr] &=& {\hbar^2 \over 2m} \Bigl[ -\bigl({\cal D}_{A,\mu}\bigr)^2 z_A +z_A (\partial_\mu
\v d)^2 -\sqrt{2} ( {\cal D}_{F,\mu} z_F )\bigl( \v e_{+} \cdot \partial_\mu \v d \bigr) -{1\over
\sqrt{2}}z_F \bigl( \v e_{+} \cdot
\partial^2_\mu \v d \bigr ) \Bigr] \nonumber \\
& & +{1\over 2} m \omega^2 r^2 z_A + g_0 \rho z_A + g_2 \rho z_A z_F^2 . \label{geometric eta1 products}
\end{eqnarray}
Inner product with $\eta^\dag_F$ gives
\begin{eqnarray} i \hbar \Bigl[{\cal D}_{F,t}z_F - {1 \over \sqrt{2}} z_A \bigl (\v e_{-} \cdot \partial_t \v
d \bigr) \Bigr]&=&
{\hbar^2 \over 2m} \Bigl\{-\bigl({\cal D}_{F,\mu}\bigr)^2 z_F +z_F {1\over 2} (\partial_\mu \v d)^2
+\sqrt{2}( {\cal D}_{A,\mu}z_A ) \bigl( \v e_{-} \cdot \partial_\mu \v d \bigr) + {1 \over \sqrt 2}
z_A \bigl ( \v e_{-} \cdot \partial^2_\mu \v d \bigr) \Bigr\}\nonumber \\ & & +{1\over 2} m \omega^2 r^2 z_F
+g_0 \rho z_F + g_2 \rho (z_A^2z_F+z_F^3). \label{geometric eta2 products}\end{eqnarray}
Finally, inner product with $\eta^\dag_{\overline{F}}$ yields
\begin{eqnarray} -{i \hbar \over \sqrt 2} z_A \bigl (\v e_{+} \cdot \partial_t \v d \bigr) &=& {\hbar^2 \over
2m }\Bigl[\sqrt{2} ( {\cal D}_{A,\mu}z_A ) \bigl ( \v e_{+} \cdot
\partial_\mu \v d \bigr) + {1 \over \sqrt 2}z_A \bigl( \v e_{+} \cdot
\partial^2_\mu \v d \bigr) + {1\over 2}z_F \bigl(\v e_{+} \cdot
\partial_\mu \v d \bigr)^2 \Bigr]- g_2 \rho z_A^2z_F.
\label{geometric eta3 products} \end{eqnarray}
\end{widetext}
Here,
\begin{eqnarray} {\cal D}_{A,\nu} & \equiv & \partial_\nu + i \partial_\nu \theta+ {1\over 2}\partial_\nu\ln\rho , \nonumber \\
{\cal D}_{F,\nu} & \equiv & \partial_\nu + i \partial_\nu \theta + i a_{\nu}+ {1\over
2}\partial_\nu\ln\rho , \label{eq:cov-der} \end{eqnarray}
are the covariant derivatives appropriate for the AFM and FM manifolds, respectively.
Various hydrodynamic relations existing in the current literature can be derived by going to the FM
limit, $z_A = 0$, $z_F =1$, and $\v S = \rho \v d$. In this case Eq. (\ref{geometric eta1 products})
reduces to
\begin{eqnarray} i \v e_{+} \cdot D_{F, t} \v d &=& -{\hbar \over 2 m \rho} \v e_+ \cdot \partial_\mu(\rho
\partial_\mu \v d). \label{FM eta1 products} \end{eqnarray}
The velocity vector in the FM manifold is given by $\v v_F = (\hbar/m) (\bm \nabla \theta + \v a)$.
The material derivative, which is different from the covariant derivative given earlier in Eq.
(\ref{eq:cov-der}), then becomes $D_{F, t} \equiv \partial_t + \v v_{F} \cdot \bm \nabla$. The
$g_2$-interaction term vanishes because $\v d \cdot \v S_{AF} = 0$. By matching the real and the
imaginary parts on both sides of Eq. (\ref{FM eta1 products}) one recovers the Landau-Lifshitz
equation
\begin{eqnarray}\rho D_{F,t} \v d &=& {\hbar \over 2 m} \v d \times \partial_\mu(\rho \partial_\mu \v
d) . \label{FM Landau} \end{eqnarray}
The imaginary part of Eq. (\ref{geometric eta2 products}) in the FM limit gives the mass
continuity equation, $ \partial_t \rho=-\bm \nabla \cdot (\rho \v v_F) $, while its real part gives the
Euler equation~\cite{refael}
\begin{widetext}
\begin{eqnarray}
D_{F,t} \v v_F &=& { \hbar \over m} \Bigl[\v v_F \times {\cal B} + {\cal E}
-\bm \nabla \Bigl({ \hbar \over 4 m } \bigl(\partial_\mu \v d \bigr)^2 - { \hbar \over 2m} {\bm \nabla ^2 \sqrt{\rho} \over \sqrt{\rho} } + {1 \over 4} m \omega^2 r^2 + {1 \over 2} g_0 \rho + {1 \over 2}g_2 \rho \Bigr) \Bigr].
\end{eqnarray}
\end{widetext}
Here, ${\cal B} = -\bm \nabla \times \bm a$ and ${\cal E} = \partial_t \bm a - \bm \nabla a_t$ are
effective magnetic and electric fields experienced by the condensate. All the familiar hydrodynamic
relations for the FM manifold are recovered from the first two of the projected equations.
A surprise occurs when we examine Eq. (\ref{geometric eta3
products}), which becomes in the FM limit,
\begin{eqnarray} \bigl(\v e_{+}\cdot
\partial_\mu \v d \bigr)^2 &=& 0 . \label{FM eta3 products}
\end{eqnarray}
This is the new and unexpected result found from the third projection. It implies $\partial_\mu \v
d = 0$, and when combined with Eq. (\ref{FM Landau}), also implies $\partial_t \v d = 0$. The
result holds in the presence of the confining trap, as well as the interactions. The only
sustainable dynamics of the $\v d$-vector within the FM manifold is a constant one, $\v d = \v
d_0$, implying that any inhomogeneity in the initial $\v d$-vector configuration would immediately
throw the condensate wave function $\Psi(\v r , t)$ out of the FM manifold at $t>0$.
The AFM limit also poses a problem, as one can see by examining the limit $z_A=1$, $z_F=0$, and
$\v S=0$. This time it is the first projection, Eq. (\ref{geometric eta1 products}), that yields
the mass continuity and the Euler equation:
\begin{widetext}
\begin{eqnarray} D_{A,t} \v v_A &=& -{\hbar \over m} \bm \nabla\Bigl[ {\hbar \over 2 m}(\partial_\mu \v
d)^2-{\hbar \over 2 m}{\bm \nabla^2 \sqrt{\rho}\over \sqrt{\rho}}+{m \over 2} \omega^2 r^2 + g_0
\rho \Bigr]. \end{eqnarray}
\end{widetext}
The velocity field for the AFM condensate is $\v v_A \equiv (\hbar / m) \bm \nabla \theta$, and the
material derivative is modified accordingly, $D_{A,t}=\partial_t + \v v_A \cdot \bm \nabla$. The
second projection, Eq. (\ref{geometric eta2 products}), in the AFM limit becomes the
Landau-Lifshitz-type equation
\begin{eqnarray} \rho D_{A,t} \v d &=& -{\hbar \over 2 m} \v d \times \partial_\mu(\rho \partial_\mu \v
d).
\label{L-L s eq AFM} \end{eqnarray}
Finally, the third projection Eq. (\ref{geometric eta3 products}) in the
AFM limit can be re-arranged as
\begin{eqnarray} i \rho D_{A,t} \v d
&=& {\hbar \over 2 m} \v d \times \partial_\mu(\rho \partial_\mu \v d) , \label{L-L eq AFM}
\end{eqnarray}
which looks similar to the LL equation but with an opposite sign on the right-hand side. Combining Eq.
(\ref{L-L s eq AFM}) and Eq. (\ref{L-L eq AFM}), we conclude that each term must vanish separately:
\begin{eqnarray} \v d \times
\partial_\mu(\rho\partial_\mu \v d) = 0 , ~ \rho D_{A,t} \v d = 0\label{AFM result}.
\label{AFM restriction} \end{eqnarray}
The first of these results implies
\begin{eqnarray} \v d \times \partial_\mu(\rho \partial_\mu \v d) = \partial_\mu
\bigl(\rho \v d \times \partial_\mu \v d \bigr)= 0, \label{eq:2.21}\end{eqnarray}
hence $\rho \v d \times \partial_\mu \v d$ must remain constant throughout the AFM condensate.
If the density $\rho$ were uniform, this would imply a uniform spiral (or cycloidal) structure for $\v d$; these are the only structures that can sustain dynamics within the AFM manifold. For any other initial configuration, the constraint imposed by Eq. (\ref{eq:2.21}) effectively throws the condensate out of the AFM manifold.
Hydrodynamic relations obtained in each specific manifold are arranged in Table \ref{table}.
\begin{table*}
\begin{tabular}{|c|c|c|}\hline
& AFM & FM \\
\hline
& $D_{A,t}=\partial_t + \v v_A \cdot \bm \nabla $ &
$D_{F,t}=\partial_t + \v v_F \cdot \bm \nabla $\\
&$\v v_A = (\hbar / m )\bm \nabla \theta $ & $\v v_F = (\hbar /m )(\bm \nabla \theta + \bm a) $
\\ \hline
$\bm \eta_A^\dag$ & $\partial_t \rho = -\bm\nabla \cdot (\rho \v v_A)$ &
$\rho D_{F,t} \v d = {\hbar \over 2 m} \v d \times \partial_\mu(\rho \partial_\mu \v d)$\\
&$D_{A,t} \v v_A = -{\hbar \over m} \bm \nabla\Bigl[ {\hbar \over 2 m}(\partial_\mu
\v d)^2-{\hbar \over 2 m}{\bm \nabla^2 \sqrt{\rho}\over \sqrt{\rho}}+{1 \over 2}m \omega^2 r^2 + g_0 \rho \Bigr]$ & \\\hline
$\bm \eta_F^\dag$ & $ $ & $\partial_t \rho=-\bm
\nabla \cdot (\rho \v v_F)$ \\
& $ \rho D_{A,t} \v d = - {\hbar \over 2m }\v d \times \partial_\mu(\rho \partial_\mu \v d)$& $D_{F,t}\v v_F = {\hbar \over m} \bigl[ \v v_F \times {\cal B } + {\cal E} $\\
& & $- \bm \nabla \Bigl( {\hbar \over 4 m } (\partial_\mu \v d)^2-{\hbar \over 2 m }{\bm \nabla^2 \sqrt{\rho}\over \sqrt{\rho}} + {1\over 4}m \omega^2 r^2+{1 \over 2}g_0\rho +{1 \over 2}g_2\rho \Bigr) \bigr]$\\ \hline
$\bm \eta_{\overline{F}}^\dag$ & $\v d \times \partial_\mu\bigr(\rho \partial_\mu \v d \bigr)=0 ,~\rho D_{A,t} \v d =0$ & $\Bigl(\v e_{+} \cdot \partial_\mu \v d \Bigr)^2 = 0$\\\hline
\end{tabular}
\caption{List of hydrodynamic equations obtained in the FM and AFM limits.} \label{table}
\end{table*}
\section{Small fluctuation analysis}
\subsection{Small fluctuation around FM ground state}
We re-examine the previous results~\cite{ho,machida} for the small
fluctuation around the FM ground state in view of the general spin-1
wave function $\Psi_{\mathrm{FM}+\mathrm{AFM}} = \sqrt{\rho} e^{i
\theta} (\bm \eta_F \sin {\delta \over 2} + \bm \eta_A \cos {\delta
\over 2})$. Since the FM ground state occurs at $\delta
= \pi$, expanding the wave function up to linear order in $\delta$
in the vicinity of $\delta = \pi$ gives
$\Psi_{\mathrm{FM}+\delta\cdot \mathrm{AFM}} = \sqrt{\rho} e^{i
\theta} (\bm \eta_F + \delta \cdot \bm \eta_A) $ where $\delta$
comes from re-defining the mixing angle ${- {\delta \over 2}
\rightarrow \delta}$. We take the fully polarized FM ground state
with the magnetization $\v d$ along the $(0, 0, 1)$ direction. By
noting that the small fluctuation of $\v d = (\cos \alpha \sin
\beta, \sin \alpha \sin \beta, \cos \beta)$ occurs around $\beta =
0$, we can expand the wave function also with respect to small
$\beta$ up to first order:
\begin{eqnarray} \Psi_{\mathrm{FM}+\delta\cdot \mathrm{AFM}} \simeq \sqrt{\rho}
e^{i \theta} \left[ \begin{pmatrix} e^{-i (\alpha + \gamma)} \\ {1 \over
\sqrt{2} } \beta e^{-i \gamma} \\ 0 \end{pmatrix} + \delta \begin{pmatrix} 0 \\ 1 \\
0\end{pmatrix} \right] . \end{eqnarray}
Higher-order terms such as $\delta \beta$ and $\beta^2$ are
neglected. The two unit vectors that form a triad together
with $\v d$ are $\v e_x \simeq (\cos (\alpha+ \gamma), \sin
(\alpha+\gamma), 0 )$, $\v e_y \simeq (- \sin (\alpha+ \gamma), \cos
(\alpha+\gamma), 0 )$ for small $\beta$. See Eq. (\ref{eq:ex-ey-d})
for definition of the triad. In the particular analysis at hand the
orientations of $\v e_x, \v e_y$ can be arbitrary without affecting
the physical outcome. In other words, the angle $\alpha + \gamma$
can be chosen arbitrarily. One particular gauge choice $\alpha
+\gamma = 0 $ resulting in $\v e_x \simeq (1, 0, 0)$, $\v e_y \simeq
(0, 1, 0)$ simplifies the above wave function to
\begin{eqnarray} \Psi_{\mathrm{FM}+\delta\cdot \mathrm{AFM}} \simeq \sqrt{\rho} e^{i \theta} \begin{pmatrix} 1 \\ {1 \over \sqrt{2} }d_+ + \delta \\ 0 \end{pmatrix}, \label{eq:FM-flc-wf}\end{eqnarray}
where $d_+ = d_x + id_y= \beta e^{i \alpha}$. The spin average for
the fluctuating wave function is given by $\v S = \Psi^\dag \v F
\Psi = \rho(d_x + \sqrt{2} \delta, d_y, 1)$. Inserting Eq.
(\ref{eq:FM-flc-wf}) into the GP equation gives
\begin{eqnarray} && i \hbar \partial_t \left({d_+ \over \sqrt{2}} \!+\! \delta \right) \nonumber \\
&& = - {\hbar^2 \over 2 m} \bm \nabla^2 \left({d_+ \over \sqrt{2}}
\!+\! \delta \right) + \rho (g_0 \!+\! g_2) \left({d_+ \over
\sqrt{2}}+ \delta \right). \nonumber \\ \end{eqnarray}
We neglect the density fluctuation stemming from the first component
of wave function Eq. (\ref{eq:FM-flc-wf}) since it is massive. We
search for a solution of the form
\begin{eqnarray} \left({d_+ \over \sqrt{2}} + \delta \right) = e^{-i \mu t} z_0 \,
e^{i \v k \cdot \v r - i \omega t} , \end{eqnarray}
with the overall phase factor $e^{-i \mu t}$ and amplitude $z_0$. Setting the
chemical potential to $\mu = \rho (g_0 + g_2)$ cancels the $(g_0
+ g_2)$ term in the dispersion relation and leads to the well-known
quadratic spin-wave dispersion $\hbar \omega_{\v k} = \hbar^2 \v k^2
/2m$~\cite{ho,machida}.
\\
\subsection{Small fluctuation around AFM ground state}
Since the AFM ground state occurs at $\delta =0$,
expanding the wave function up to linear order in $\delta$ gives
$\Psi_{\mathrm{AFM}+ \delta\cdot \mathrm{FM}} = \sqrt{\rho} e^{i
\theta} (\bm \eta_A + \delta \cdot \bm \eta_F )$ after the
replacement ${\delta \over 2}\rightarrow \delta$. To describe the
small fluctuation about the AFM ground state, we also expand each
wave function $\bm \eta_A$, $\bm \eta_F$ for small $\beta$ up to
linear order:
\begin{eqnarray} \Psi_{\mathrm{AFM}+\delta\cdot\mathrm{FM}} \simeq \sqrt{\rho}
e^{i \theta} \left[ \begin{pmatrix} -{1 \over \sqrt{2}}d_- \\ 1 \\ {1 \over
\sqrt{2}}d_+ \end{pmatrix} + \delta \begin{pmatrix} e^{-i (\alpha + \gamma)} \\ 0 \\
0 \end{pmatrix} \right]. \nonumber \\ \label{eq:AFM-fluctuated-wf}\end{eqnarray}
Suppose for a moment that one tried to capture the small fluctuation
effects without leaving the AFM manifold, say by turning off
$\delta$ from the above. Inserting such a wave function into the GP
equation leads to a pair of equations,
\begin{eqnarray} i \hbar \partial_t \left({d_- \over \sqrt{2}}\right) &=&
{\hbar^2 \over 2m} \bm \nabla^2 \left({d_- \over \sqrt{2}}\right) ,
\nonumber \\
i \hbar \partial_t \left( {d_- \over \sqrt{2}} \right) &=& -{\hbar^2
\over 2m} \bm \nabla^2 \left( {d_- \over \sqrt{2}} \right),
\label{eq:AFM-flc-without-delta}\end{eqnarray}
which contradict each other unless each term is set to zero,
freezing the dynamics
altogether. By allowing the FM component ($\delta \neq 0$), however, the
coupled equations are modified to
\begin{widetext}
\begin{eqnarray} i \hbar \partial_t \left({d_- \over \sqrt{2}}\right) &=&
{\hbar^2 \over 2m} \bm \nabla^2 \left({d_- \over \sqrt{2}}\right) -
g_2 \rho \left(\delta e^{-i (\alpha + \gamma)} \right), \nonumber \\
i \hbar
\partial_t \left( -{d_- \over \sqrt{2}} + \delta e^{-i (\alpha +
\gamma)} \right) &=& -{\hbar^2 \over 2m} \bm \nabla^2 \left(-{d_-
\over \sqrt{2}} + \delta e^{-i (\alpha + \gamma)} \right) + g_2 \rho
\left(\delta e^{-i (\alpha + \gamma)} \right) .
\label{eq:AFM-GP-eq}\end{eqnarray}
\end{widetext}
Solutions can be found in the form
\begin{eqnarray} - {d_- \over \sqrt{2}} + \delta e^{-i (\alpha + \gamma)} &=& u e^{i \v k \cdot \v r - i \omega t},\nonumber \\
{d_- \over \sqrt{2}} &=& v e^{i \v k \cdot \v r - i \omega t}, \end{eqnarray}
with coefficients $u$ and $v$. Solving the matrix problem
\begin{eqnarray} \begin{pmatrix} \hbar \omega - {\hbar^2 \v k^2 \over 2m} - g_2 \rho & -g_2
\rho \\ g_2 \rho & \hbar \omega + {\hbar^2 \v k^2 \over 2m} + g_2
\rho \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} = 0 \end{eqnarray}
successfully reproduces the low-energy dispersion $\hbar \omega_{\v
k} = \sqrt{{\hbar ^2 \v k^2\over 2 m} \left({\hbar^2 \v k^2\over 2m
} + 2 g_2 \rho \right)}$ first found in Refs. \onlinecite{ho,
machida}. For both FM and AFM ground states, fluctuation
into the ``other sector'' expressed by a non-zero mixing angle
$\delta$ is an inevitable ingredient of the proper dynamical
description.
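A two-line numerical check (our own sketch; $\hbar = m = 1$ is an assumption of the snippet) confirms that this dispersion renders the $2\times 2$ matrix above singular:

```python
import numpy as np

def afm_omega(k, g2rho):
    """AFM Bogoliubov dispersion omega = sqrt(eps*(eps + 2 g2 rho)), eps = k^2/2."""
    eps = 0.5 * k**2
    return np.sqrt(eps * (eps + 2.0 * g2rho))

def bogoliubov_det(omega, k, g2rho):
    """Determinant of the 2x2 matrix acting on (u, v)."""
    eps = 0.5 * k**2
    M = np.array([[omega - eps - g2rho, -g2rho],
                  [g2rho, omega + eps + g2rho]])
    return np.linalg.det(M)
```

Expanding the determinant shows why: $(\hbar\omega)^2 = (\epsilon + g_2\rho)^2 - (g_2\rho)^2 = \epsilon(\epsilon + 2g_2\rho)$, which is linear (phonon-like) at small $k$ and quadratic at large $k$.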
\section{Numerical Solution of Gross-Pitaevskii Equation}
To simulate the $F=1$ condensate dynamics in one- and two-dimensional systems we must solve the GP equation,
\begin{eqnarray} i \hbar \frac{\partial}{\partial t} \Psi &=& \Bigg( - {\hbar^2 \over 2m}\bm \nabla^2 + {1\over
2} m \omega^2 r^2 + g \mu_B B(\v r)F_z \nonumber \\
&& ~ + g_0 \Psi^\dag \Psi + g_2 (\Psi^\dag \v F \Psi) \cdot \v F \Bigg) \Psi , \end{eqnarray}
where we also included the linear Zeeman term involving the Land\'{e} hyperfine
$g$-factor, the Bohr magneton $\mu_B$, and an external magnetic field $B(\v r)$ applied in the
$z$-direction. We employ dimensionless units in which the energy, length, and time scales are
measured by $\hbar \omega$, $\sqrt{\hbar/ m \omega}$, and $1/\omega$, respectively. Here $\omega$
is the frequency of the trapping potential. The dimensionless linear Zeeman term reads $H_z (\v r)
= g \mu_B F_z B (\v r) / \hbar \omega$. The wave function is normalized as $\int \Psi^\dag \Psi \, d\v r = 1$,
while the total boson number $N$ multiplies the two interaction constants $g_0$ and $g_2$. The GP
equation in dimensionless form becomes
\begin{eqnarray} i \frac{\partial}{\partial t} \Psi &=& \Bigg( - {1 \over 2} \bm \nabla^2 + {1\over 2} r^2 +
H_z (\v r) \nonumber \\
&& + {g_0 N \over \hbar \omega} \Psi^\dag \Psi + {g_2 N \over \hbar \omega} (\Psi^\dag \v F \Psi)
\cdot \v F \Bigg) \Psi . \end{eqnarray}
Throughout the simulation we choose $g= -1/2$, and $\omega= 2\pi \times 100$Hz, which makes $H_z
(\v r) \simeq 6 B(\v r) F_z$/mG. Interaction parameters for the simulation were $g_0 N /\hbar
\omega = 100$ and $g_2 N /\hbar \omega = 10$, respectively.
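A minimal split-step Fourier sketch of the dimensionless equation in one dimension is given below. This is our own illustrative discretization (first-order Lie splitting; grid, time step, and the helper name `gp_step` are assumptions), not the production code behind the simulations:

```python
import numpy as np
from scipy.linalg import expm

# Spin-1 matrices
Fz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Fx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Fy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)

def gp_step(psi, x, dt, g0N=100.0, g2N=10.0, hz=None):
    """One first-order split step of the dimensionless spin-1 GP equation.

    psi has shape (3, N); hz is an optional dimensionless Zeeman profile H_z(x).
    """
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(x.size, dx)
    # kinetic part: exp(-i dt k^2 / 2) in Fourier space, per spin component
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi, axis=1), axis=1)
    # trap + interactions (+ Zeeman): pointwise 3x3 matrix exponential
    for j in range(x.size):
        rho = np.vdot(psi[:, j], psi[:, j]).real
        S = [np.vdot(psi[:, j], F @ psi[:, j]).real for F in (Fx, Fy, Fz)]
        H = (0.5 * x[j]**2 + g0N * rho) * np.eye(3) \
            + g2N * (S[0] * Fx + S[1] * Fy + S[2] * Fz)
        if hz is not None:
            H = H + hz[j] * Fz
        psi[:, j] = expm(-1j * dt * H) @ psi[:, j]
    return psi
```

Since every sub-step is unitary (a phase in Fourier space, and $e^{-i\,dt\,H}$ with Hermitian $H$ pointwise), the norm $\int \Psi^\dag \Psi \, dx$ is conserved to machine precision, which provides a convenient sanity check on any implementation.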
For the two-dimensional GP simulation, the initial Skyrmion configuration is taken from the $\v
d$-vector
\begin{eqnarray} \v d_S = {1\over r^2 \! + \! R^2 } \big(2R y, \, -2R x,
\, -r^2 \! + \! R^2 \big) , \end{eqnarray}
resulting in the initial-state wave function,
\begin{eqnarray} \Psi_A^{(S)} (\v r,t=0) = \frac{\sqrt{\rho(\v r)}}{r^2 \! + \! R^2} \begin{pmatrix} -\sqrt{2} R(ix + y) \\
-r^2 \!+\! R^2
\\ -\sqrt{2} R(ix - y) \end{pmatrix} . \label{eq:2d_AFM_skyrmion}
\end{eqnarray}
The density profile chosen is a Gaussian, $\rho(\v r) \sim \exp(-r^2 / 2 \sigma^2)$, of width
$\sigma$. The size of the Skyrmion is controlled by $R$. Real-time simulations were performed under
zero and non-uniform ($\v B = B(y) \hat{z}$, $dB(y)/dy = 1.6 \times 10^{-4}$ G/$\mu$m) magnetic fields with $\sigma = 2.2
\mu$m and $R = 3.6 \mu$m, and uniform ($\v B = B_0 \hat{z}$, $B_0 = 1.6$ mG) magnetic fields with
$\sigma = 4.9 \mu$m and $R = 3.6 \mu$m. In all cases, we observe
oscillations of the strong magnetization patterns. Maximum simulation time is set to allow the
observation of a sufficient number of oscillations in the magnetization pattern. The periods of the
observed oscillations under zero, uniform, and non-uniform magnetic fields were about $6.5 \,
\omega^{-1}$, $6.7 \, \omega^{-1}$, and $6.5 \, \omega^{-1}$, respectively.
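The Skyrmion texture and its spinor can be cross-checked numerically. The sketch below (our own illustration; function names are hypothetical) verifies that $\v d_S$ is a unit vector everywhere and that the spinor of Eq. (\ref{eq:2d_AFM_skyrmion}) is precisely the AFM spinor $\bm\eta_A$ built from $\v d_S$, i.e. $(-d_-/\sqrt{2},\ d_z,\ d_+/\sqrt{2})$ with $d_\pm = d_x \pm i d_y$:

```python
import numpy as np

def d_skyrmion(x, y, R):
    """The Skyrmion d-vector d_S at the point (x, y)."""
    r2 = x**2 + y**2
    return np.array([2 * R * y, -2 * R * x, R**2 - r2]) / (r2 + R**2)

def afm_spinor(d):
    """eta_A for a unit vector d: (-d_-/sqrt(2), d_z, d_+/sqrt(2))."""
    dplus = d[0] + 1j * d[1]      # sin(beta) e^{i alpha}
    return np.array([-np.conj(dplus) / np.sqrt(2), d[2] + 0j, dplus / np.sqrt(2)])
```

Both properties hold identically, including at the origin where $\v d_S = (0,0,1)$ and the spinor reduces to the polar state $(0,1,0)^T$.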
For the one-dimensional simulation we chose the initial $\v d$-vector configuration
\begin{eqnarray} \v d = {1\over x^2 \! + \! R^2 } \big( 2R x, 0, x^2 \! - \! R^2 \big) , \end{eqnarray}
which realizes a rapid rotation of the vector over the length $R$ from the origin. The corresponding
initial-state wave function reads
\begin{eqnarray} \Psi_{A}^{(1D)} (x, t=0) = {\sqrt{\rho(x)} \over x^2 + R^2} \begin{pmatrix} - \sqrt{2} R x \\ x^2 - R^2 \\
\sqrt{2} R x \end{pmatrix}, \label{eq:1d_twisted AFM} \end{eqnarray}
where the density profile is a gaussian $\rho(\v r) \sim \text{exp}(-r^2 / 2 \sigma^2)$ of width
$\sigma$. Zero and non-uniform ($\v B = B(x) \hat{z}, dB(x)/dx = 2.0 \times 10^{-4}$G/$\mu$m) magnetic fields were
applied to the initial wave function of $\sigma = 2.0 \mu$m and $R = 2.0 \mu$m. Interaction
parameters for the calculation are the same as in two-dimensional simulation. Again we observe
oscillations of the strong magnetization satellites around the center. Periods of strong
magnetization satellites under zero and non-uniform magnetic fields are about $6.5 \, \omega^{-1}$
in both cases.
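As a quick numerical consistency check, the one-dimensional initial state above can be constructed directly; the sketch below (Python, with the grid extent and resolution chosen arbitrarily by us) verifies that the underlying $\v d$-vector is a unit vector everywhere, as required for a purely antiferromagnetic texture.

```python
import numpy as np

R, sigma = 2.0, 2.0                      # parameters of the 1D simulation (um)
x = np.linspace(-10.0, 10.0, 501)        # spatial grid (arbitrary choice)

# d-vector of the twisted AFM texture
d = np.stack([2 * R * x, np.zeros_like(x), x**2 - R**2]) / (x**2 + R**2)

# Initial-state spinor wave function of the 1D twisted AFM configuration
rho = np.exp(-x**2 / (2 * sigma**2))     # Gaussian density profile (unnormalized)
psi = (np.sqrt(rho) / (x**2 + R**2)) * np.stack(
    [-np.sqrt(2) * R * x, x**2 - R**2, np.sqrt(2) * R * x])

# The texture is purely antiferromagnetic: d is a unit vector everywhere
print(bool(np.allclose((d**2).sum(axis=0), 1.0)))
```

The identity $|\v d|^2 = (4R^2x^2 + (x^2-R^2)^2)/(x^2+R^2)^2 = 1$ holds exactly, so the check prints `True`.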
\section{Introduction}
Over the past decade, increasingly accurate helioseismic observations from
ground-based and space-based instruments have given us a reasonably good
description of the dynamics of the solar interior \citep[e.g.][]{bi50,
thompson2003}. Helioseismic inferences have confirmed that the differential
rotation observed at the surface persists throughout the convection
zone. There appears to be very little, if any, variation of the rotation rate
with latitude in the outer radiative zone ($0.4 < r/\RSun < 0.7$). In that
region the rotation rate is almost constant ($\approx 430$ nHz), while at the
base of the convection zone, a shear layer ---known as the tachocline---
separates the region of differential rotation throughout the convection zone
from the one with rigid rotation in the radiative zone.
Despite the large scatter amongst the rotational splittings that are
sensitive to the solar core \citep[see discussion in][]{Eff-Darwich2002}, we
can rule out an inward increase or decrease of the solar internal rotation
rate down to $r/\RSun \approx 0.25$, by more than $20 \%$ of the surface rate at
mid-latitude \citep{bi3, Eff-Darwich2002, couvidat2003}. This is in clear
disagreement with the theoretical hydrodynamical models that predict a much
faster rotation in the solar core, namely 10 to 50 times faster than the
surface rate \citep[e.g.][]{thompson2003}.
More recently, \cite{garcia2004} and \cite{korzennik2005} have independently
developed new mode fitting procedures to improve the quality and precision of
the characterization of the modes that are sensitive to the rotation in the
solar core. By using very long time series ---spanning nearly six years of
observations--- collected with the MDI \citep{mdi}, GONG \citep{gong} and GOLF
\citep{golf} instruments they have measured rotational splittings for modes
with frequencies as low as $1.1$ mHz.
We present here an attempt to constrain the radial and latitudinal
distribution of the rotation rate in the radiative interior through the
inversion of a combined MDI, GONG \& GOLF data set. We also attempt to
establish the sensitivity of helioseismic data sets to the dynamics of the
inner solar radiative interior, as well as the level of accuracy that
helioseismic data should have to resolve the solar core.
\section{Theoretical background}
The starting point of all linear rotational helioseismic inversion
methodologies is the functional form of the perturbation in frequency, $\Delta
\nu _{n \ell m}$, induced by the rotation of the sun, $\Omega(r,\theta)$, and
given by \citep[see derivation in][]{bi46}:
\begin{equation}
\Delta \nu _{n \ell m} = \frac{1}{2\pi}\int_0^R \int_0^{\pi}
K_{n \ell m}(r,\theta)\Omega(r,\theta)drd\theta \pm \epsilon_{n \ell m}
\label{eq:equation4}
\end{equation}
The perturbation in frequency, $\Delta \nu _{n \ell m}$, with the
observational error, $\epsilon_{n \ell m}$, that corresponds to the rotational
component of the frequency splittings, is given by the integral of the product
of a sensitivity function, or kernel, $K_{n \ell m}(r,\theta)$, with the
rotation rate, $\Omega(r,\theta)$, over the radius, $r$, and the co-latitude,
$\theta$. The kernels, $K_{n \ell m}(r,\theta)$, are known functions of the
solar model.
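The structure of this integral is easy to evaluate numerically for a single mode. In the sketch below (Python), the kernel and the rotation profile are made-up stand-ins, not solar-model quantities; only the quadrature follows Eq.~\ref{eq:equation4}.

```python
import numpy as np

# Direct numerical evaluation of the splitting integral for one toy mode.
r = np.linspace(0.0, 1.0, 200)           # radius (fraction of R_sun)
th = np.linspace(0.0, np.pi, 100)        # co-latitude
Rg, Tg = np.meshgrid(r, th, indexing="ij")

# Toy rotation rate (nHz) and toy kernel (both invented for illustration)
Omega = 430.0 + 30.0 * np.sin(Tg) ** 2 * (Rg > 0.7)
K = Rg ** 2 * np.sin(Tg) ** 3

dr, dth = r[1] - r[0], th[1] - th[0]
K /= K.sum() * dr * dth                  # normalize the kernel to unit integral

# Delta nu = (1 / 2 pi) * integral of K * Omega over r and theta
dnu = (K * Omega).sum() * dr * dth / (2 * np.pi)
```

With a unimodular kernel, the splitting is simply a kernel-weighted average of $\Omega(r,\theta)$ divided by $2\pi$, which is why each measured splitting constrains a smeared average of the rotation rate rather than a point value.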
Equation \ref{eq:equation4} defines a classical inverse problem for the
sun's rotation. The inversion of this set of $M$ integral equations -- one for
each measured $\Delta \nu _{n \ell m}$ -- allows us to infer the rotation rate
profile as a function of radius and latitude from a set of observed rotational
frequency splittings (hereafter referred to as splittings).
The inversion method we use is based on the Regularized Least-Squares
methodology (RLS). The RLS method requires the discretization of the integral
relation to be inverted. In our case, Eq.~\ref{eq:equation4} is transformed
into a matrix relation
\begin{equation}
D = A x + \epsilon \label{eq:equation5}
\end{equation}
where $D$ is the data vector, with elements $\Delta \nu _{n \ell m}$ and
dimension $M$, $x$ is the solution vector to be determined at $N$ model grid
points, $A$ is the matrix with the kernels, of dimension $M \times N$ and
$\epsilon$ is the vector containing the corresponding observational
uncertainties.
The RLS solution is the one that minimizes the quadratic difference
$\chi^2=|Ax-D|^2$, with a constraint given by a smoothing matrix, $H=G^TG$,
introduced in order to lift the singular nature of the problem \citep[see, for
additional details,][]{Eff-Darwich1997}. The matrix $G$ represents the
first-order discrete differential operator, although it will be shown below
that the inversion technique we have developed is to a first order
approximation independent of the choice of $G$. The general relation to be
minimized is
\begin{equation}
S(x) = (Ax-D)^T(Ax-D) + \gamma x^T H x
\end{equation}
where $\gamma$ is a scalar introduced to give a suitable weight to the
constraint matrix $H$ on the solution. Hence, the function $x$ is approximated
by
\begin{equation}
x_{\rm est} = (A^TA + \gamma H)^{-1}A^TD \label{eq:equation6}
\end{equation}
Replacing $D$ from Eq.~\ref{eq:equation5} we obtain
\begin{equation}
x_{\rm est} = (A^TA + \gamma H)^{-1}A^TAx \stackrel{\mathrm{def}}{=} Rx
\label{eq:equation7}
\end{equation}
hence
\begin{equation}
R = (A^TA + \gamma H)^{-1}A^TA \label{eq:equation8}
\end{equation}
The matrix $R$, that combines forward and inverse mapping, is referred to as
the resolution or sensitivity matrix \citep{friedel2003}. Ideally, $R$ would be
the identity matrix, which corresponds to perfect resolution. However, if we
try to find an inverse with a resolution matrix $R$ close to the identity, the
solution is generally dominated by the noise magnification. The individual
columns of $R$ display how anomalies in the corresponding model are imaged by
the combined effect of measurement and inversion. In this sense, each element
$R_{ij}$ reveals how much of the anomaly in the $j^{th}$ inversion model grid
point is transferred into the $i^{th}$ grid point. Consequently, the diagonal
elements $R_{ii}$ state how much of the information is saved in the model
estimate and may be interpreted as the resolvability or sensitivity of
$x_{i}$. We defined the sensitivity $\lambda_{i}$ of the grid point $x_{i}$
to the inversion process as follows:
\begin{equation}
\lambda_i = \frac{R_{ii}}{\sum_{j=1}^{N}R_{ij}} \label{eq:equation10}
\end{equation}
With this definition, a lower value of $\lambda_i$ means a lower sensitivity
of $x_{i}$ to the inversion of the solar rotation. We define a smoothing
vector $W$ with elements $w_i=\lambda_i^{-1}$ that is introduced in
Eq.~\ref{eq:equation6} to complement the smoothing parameter $\gamma$, namely
\begin{equation}
x_{\rm est} = (A^TA + \gamma W H)^{-1}A^TD \label{eq:equation11}
\end{equation}
This substitution makes it possible to apply different regularizations to different model
grid points $x_{i}$, whose sensitivities depend on the data set that is used
in the inversions. In this sense, the inversion is a two-step process: first
$R$ is obtained from Eq.~\ref{eq:equation7} for a small value of the
regularization parameter $\gamma$. Then, the smoothing vector $W$ is
calculated through Eq.~\ref{eq:equation10} and the inversion estimates are
obtained through Eq.~\ref{eq:equation11}. A set of results can be calculated
for different values of $\gamma$, the optimal solution being the one with the
best trade-off between error propagation and the quadratic difference
$\chi^2=|Ax_{\rm est}-D|^2$ as introduced in \cite{Eff-Darwich1997}.
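The two-step procedure can be summarized in a short numerical sketch. In the snippet below (Python), the kernel matrix, the "true" rotation profile and all numerical values are invented stand-ins; only the algorithmic steps of Eqs.~\ref{eq:equation7}--\ref{eq:equation11} follow the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D stand-in for the inversion: M splittings, N radial grid points.
M, N = 400, 50
r = np.linspace(0.05, 1.0, N)                  # model grid (fraction of R_sun)
turning = rng.uniform(0.0, 0.9, M)             # fake inner turning points
A = (r[None, :] > turning[:, None]).astype(float)
A /= A.sum(axis=1, keepdims=True)              # mode i averages Omega outside r_t

x_true = 430.0 - 60.0 / (1.0 + np.exp((r - 0.2) / 0.03))  # slow-down below 0.2
D = A @ x_true + 5.0 * rng.normal(size=M)      # noisy synthetic splittings (nHz)

G = np.diff(np.eye(N), axis=0)                 # first-order difference operator
H = G.T @ G

# Step 1: resolution matrix R for a small gamma, then sensitivities lambda_i.
# Note that G annihilates constant vectors, so the rows of R sum to one.
gamma0 = 1e-4
R_mat = np.linalg.solve(A.T @ A + gamma0 * H, A.T @ A)
lam = np.diag(R_mat) / R_mat.sum(axis=1)

# Step 2: smoothing vector with w_i = 1 / lambda_i, then the final estimate
W = np.diag(1.0 / lam)
x_est = np.linalg.solve(A.T @ A + 1e-3 * W @ H, A.T @ D)
```

Grid points with small $\lambda_i$ (here, the innermost radii, sampled by few rows of $A$) receive a large $w_i$ and are therefore more heavily regularized, which is exactly the adaptive behavior described above.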
In this paper we show how to use the matrix $R$ to study the sensitivity of
helioseismic data sets to the rotation rate of the solar interior.
Consequently, we present a theoretical analysis of the effect of adding low
frequency and low degree $p$-modes, high frequency and low degree $p$-modes,
and $g$-modes on the rotation rate of the solar core derived through numerical
helioseismic inversion techniques.
\section{Observational mode parameters and inversion results from 2088-days
long MDI, GOLF and GONG time series}
The work presented here is based on rotational frequency splittings measured
from observations by the GONG ground-based network and the MDI and GOLF
experiments on board the SOHO spacecraft. All rotational splittings were
computed from 2088-days long time series, starting April 30th 1996 and ending
January 17th 2002, as summarized in Table~\ref{tab:table1}.
The three data sets {\sf KM, KG \& GG} (see Table~\ref{tab:table1} for
explanations) contain for the first time very low frequency rotational
splittings ($\nu < 1.7$ mHz). These low frequency modes provide data of
exceptional quality, since the width of the mode peaks is much smaller than
the rotational splitting. It is therefore much easier to separate the
rotational splittings from the effects caused by the finite lifetime and the
stochastic excitation of the modes. The data set {\sf SM} (see again
Table~\ref{tab:table1}) was obtained by averaging all the data sets resulting
from fitting the 72-days long MDI time series \citep{bi50} that overlap the
April 30th 1996 to January 17th 2002 period. The averaging process reduces
significantly the number of $\ell < 8$ modes in that data set.
Since these data sets have been calculated from different time-series and
peak-fitting techniques, one can expect some differences among the different
data sets. When using different time-series, the mode parameters can be
affected by the changing solar activity cycle. Moreover, fitting techniques
can give different results if they are applied to either individual peaks or
to ridges \citep{korzennik2005}. Differences between MDI, GOLF and GONG can
also arise from systematics introduced by the merging process used by GONG to
obtain single time-series from multiple stations located
worldwide. Differences may also come from the different spatial filters and
leakage matrices that are used to isolate the signal of an individual mode
\citep{korzennik2005, chaplin2006}. In any case, we combined the various data
sets in a single set -- following the prescription described in
Table~\ref{tab:table2}. Our newly developed inversion methodology was
applied to the combined set to infer the rotation rate in the solar interior.
Figure~1 shows the observational frequency splitting uncertainties of the
combined data set as a function of radial order and degree, whereas Figs.~2
and 3 show the splittings uncertainties for sectoral modes as a function of a
proxy of the inner turning point of the modes, $\ell/\nu$, and as a function
of frequency, $\nu$, respectively. These plots clearly illustrate the
well-known and challenging fact that only a small number of modes penetrate the
solar core and that the largest uncertainties are associated with these
modes. Indeed, the combined data set does not include low degree high
frequency modes ({\em i.e.}, $\ell < 4$ and $\nu > 2.2$ mHz), since at higher
frequencies unwanted bias appears in the estimated splittings due to the
difficulty in separating the effect of rotational splitting from the limited
lifetime of the modes \citep{appour2000,chaplin2006}.
The inversion of the combined MDI-GOLF-GONG data (see Fig.~4) confirms that
the well-known differential rotation observed at the surface persists
throughout the convection zone. Although only $\ell < 25$ modes were used in
the inversion, it was possible to infer the rotation rate in the convection
zone as the result of the exceptional quality of the low frequency splittings
($\nu < 1.7$ mHz) obtained by \citet{korzennik2005}. The differential rotation
changes abruptly at approximately $0.7 R_{\odot}$ to rigid rotation throughout
the radiative zone. The radial distribution of the rotation is approximately
flat, at a rate of $\approx 430$ nHz, decreasing below $0.2 R_{\odot}$. The
tendency below $0.15 R_{\odot}$ is not real and results from extrapolation of the
trend seen at larger radii, as explained in the following section.
\section{Sensitivity analysis for the inversion of the solar radiative
interior.}
A theoretical analysis was carried out in order to determine the effect of
different low degree mode sets on the derivations of the solar rotation rate
of the inner radiative interior. Four different artificial data sets,
hereafter referred to as {\Aone} to {\Afour}, were calculated using
Eq.~\ref{eq:equation4} and an artificial rotation rate $\Omega_{\rm
A}(r,\theta)$ that is shown in Fig.~5. The different artificial data sets
correspond to different mode sets and/or uncertainties, as explained in
Table~3. The observational uncertainties (standard errors) were taken from the
combined mode set used in the previous section, whereas the noise added to the
artificial data was drawn from a normal distribution with the observed
uncertainties. The {\Aone} data set contains the same mode set as the
combined MDI-GOLF-GONG data set. Errors for g modes were arbitrarily set to 6
nHz (the mean of the observational uncertainties for the acoustic mode
splittings), since at present there are no reliable estimates for the
uncertainties of $g$-mode frequency splittings. In any case, we are interested
in the behavior of the inversion methodology when g modes are added, rather
than in the results of the inversion for different values of the splittings
and the observational errors.
The sensitivity vector, $\Lambda$, was computed for the four artificial data
sets, as illustrated in Figs.~6 and 7, where the sensitivities,
${\lambda_i}$, for the rotation rate in the solar interior are
presented as a function of the radius, for each artificial data set at the
equator or for several latitudes for a given set. The data sets {\Aone} and
{\Atwo} are significantly less sensitive to the rotation of the solar core
than the other sets. Although the {\Atwo} set includes the same high frequency
modes as set {${\sf A}_3$}, the errors of the {\Atwo} set
are significantly larger and hence the sensitivities do not differ from those
obtained for the {\Aone} set. The addition of two $g$-modes (in set {\Afour})
increases significantly the sensitivity to the solar core. However, it is
important to notice that even with the addition of $g$-modes, the sensitivity
at the solar core varies with the latitude, as illustrated in Fig.~7. For all
sets, the sensitivities to the equatorial regions of the solar core are larger
than the sensitivities at other latitudes.
The effect on the sensitivity vector of the choice of the smoothing matrix $H$
is presented in Fig.~8, where the equatorial sensitivities ${\lambda_i}$ for
the first, second and third-order discrete differential operators $G$ are
shown. The larger the order of the operator $G$ the larger the sensitivity,
except near the edges, and in particular the core. The smoothing vector $W$ is
obtained from the sensitivity vector $\Lambda$ and hence, the regularization
constraint will not only depend on the mode set, but also on the shape of $G$.
The choice of the number and spatial distribution of the model grid points,
$N$, is an important aspect of the inversion process. In non-adaptive
regularization inversions, decreasing the number of grid points is in itself a
form of regularization. The inversion procedure described here will also
adjust to the distribution of grid points. This is illustrated in Fig.~9,
where we show that the variation of the inversion sensitivities (and hence the
regularization constraints) is not constant with radius and latitude when the
number of grid points are changed. In this sense, the {\it a priori} choice of
the distribution of model grid points will not constrain the inversion
results.
The inversion methodology developed for the work presented here differs from
standard RLS techniques by introducing a smoothing vector $W$. The purpose of
this vector is to avoid over-smoothing the inversion solution in certain
regions of the solar interior and hence lose valuable information. This is
illustrated in Fig.~10, where two different inversions of the same data set
are presented, namely a standard RLS inversion (no vector $W$ is added; dotted
line) and the newly developed RLS inversion (solid line). The standard RLS
inversion tends to over-smooth and hence to assign unrealistically low errors to
the estimated rotation in the convection zone to get a stable solution in the
radiative regions. However when $W$ is added to the inversion procedure, the
over-smoothing problem in the convection zone is mitigated, since the
sensitivities ${\lambda_i}$ in the convection zone are larger and hence, the
smoothing coefficients ${w_i}$ are lower. The standard RLS technique would
assign the same smoothing coefficients to all model grid points in the
inversion.
The correspondence between the sensitivity analysis shown in Figs.~6 and 7 and
the information contained in the resolution matrix $R$ is illustrated in
Figs.~11 to 13, where we show the resolution vector corresponding to the
estimate at the equator for $r_i=0.06 R_{\odot}$. This corresponds to the
averaging kernel defined by \cite{backus1970} for the Optimal Localized
Averages technique. In the ideal case, the resolution should be unity at the
location where the solution is estimated and zero elsewhere. Only in the
inversion of the {\Afour} set is the largest amplitude of the resolution
vector centered at $r_i=0.06 R_{\odot}$, and thus the result is reliable. The poor
localization of the resolution at $r_i=0.06 R_{\odot}$ for the inversions of the
{\Aone} to {${\sf A}_3$} sets could not be improved by any inversion
technique. However, this lack of resolution is taken into account by the
sensitivity analysis, since it evaluates low sensitivities and hence assigns
large regularizations to those grid points.
At larger radii both the location and the amplitude of the resolution are
significantly increased, as illustrated in Fig.~14, where we show the
resolution corresponding to the estimate at the equator for $r_i=0.2 R_{\odot}$
obtained with the data set {${\sf A}_3$}.
The conclusions derived from Figs.~6 and 7 can also be drawn from the
inversions of the data sets, as illustrated in Figs.~15 to 18. Figures~15 and
16 show the inverted profile and error distribution for the {\Aone} set at
several latitudes and demonstrate that there is good sensitivity to the
latitudinal variation in the radiative rotation rate above $r \approx 0.3
R_{\odot}$. Hence, the absence of differential rotation for the solar rotation
rate in the radiative interior (Fig.~4) is real, not an artifact of the
inversion procedure. The unrealistic flat rotation rate below $r \approx 0.15
R_{\odot}$ resulting from the inversions of the {\Aone} and {\Atwo} sets is due to
the lack of sensitivity of the mode set to that region. As a result the
inversion extrapolates the trend of the solution at larger radii. There are
no significant differences in the inversions of sets {\Aone} and {\Atwo},
although set {\Atwo} includes low degree and high frequency modes. However,
the new information contained in the low degree and high frequency modes of
set {\Atwo} is lost due to their large observational uncertainties. In all
four cases, larger differences between the artificial and the inverted
rotation rates are seen at higher latitudes, especially in the radiative
interior, as a result of the lack of sensitivity of the inversions to the
polar regions.
Only in the cases of sets {${\sf A}_3$} and {\Afour} (see Fig.~17), was it
possible to infer the main trends of the rotation rate below $r \approx 0.15
R_{\odot}$. However, it was necessary to include either data with unrealistically
small observational errors (set {${\sf A}_3$}) or a couple of $g$-modes (set
{\Afour}), modes that have yet to be unambiguously observed. The most likely
way to reduce the observational uncertainties consists of increasing the
length of the time series. Figure~19 compares the observational errors for the
$\ell=25$ sectoral modes for five 728-days long data sets to the 2088-days
long data set, all estimated by \cite{korzennik2005}. The formal observational
uncertainties are inversely proportional to the square root of the length of the
time series, hence it would be necessary to observe for decades to reduce the
observational uncertainties of the very low degree modes to the current levels
of the $\ell=25$ modes, all the while assuming that we can also reduce the
residual bias in our current estimates of the low degree and high frequency
splittings \citep[see discussions in][]{appour2000,chaplin2006}.
\section{Conclusions}
We have used for the first time a combined MDI-GOLF-GONG data set of
rotational frequency splittings that covers the largest possible frequency
range, spanning from 1.1 mHz to 3.9 mHz. This mode set was determined from
2088-days long time series acquired by the MDI, GOLF and GONG instruments and
analyzed independently by several authors, namely \citet{korzennik2005},
\citet{garcia2004}, \citet{gelly2002}, \citet{bi50} and \citet{jim}. Very low
frequency splittings ($\nu < 1.7$ mHz) were included to improve the precision
and the resolution of the inversion in the solar interior.
In order to optimally invert this unique data set, we implemented a new
inversion methodology that combines the regularized least-squares technique
with the analysis of the sensitivity of the solution at all model grid points
to the mode set being inverted. The inversion of the actual MDI-GOLF-GONG
data set reveals that the sun rotates as a rigid solid in most of the
radiative interior and slows down below $0.2 R_{\odot}$.
The calculation of the sensitivity vector $\Lambda$ offers a rapid and
intuitive way of evaluating the sensitivity of helioseismic data to the
dynamics of the solar interior, in particular in the core ($r < 0.25
R_{\odot}$). We conclude that with the present accuracy of the available
splittings, it is not possible to derive the dynamical conditions below $r
\approx 0.2 R_{\odot}$. This results from the relatively large observational
uncertainties of the modes sensitive to the solar core, in particular the low
degree and high frequency modes. The level of uncertainties that is needed to
infer the dynamical conditions in the core when only including $p$-modes is
unlikely to be reached in the near future, and hence sustained efforts are
needed towards the detection and characterization of $g$ modes.
\section{Acknowledgments}
The Solar Oscillations Investigation - Michelson Doppler Imager project on
SOHO is supported by NASA grant NAS5--3077 at Stanford University. SOHO is a
project of international cooperation between ESA and NASA.
The GONG project is funded by the National Science Foundation through the
National Solar Observatory, a division of the National Optical Astronomy
Observatories, which is operated under a cooperative agreement between the
Association of Universities for Research in Astronomy and the NSF.
This work was funded by the Spanish grant AYA2004-04462. SGK was supported by
NASA grants NAG5--13501 \& NNG05GD58G and by NSF grant ATM--0318390.
\section{Introduction}
\label{sec:introduction}
We address the problem of curve fitting on a Riemannian manifold $\M$.
From a set of data points $d_0,\dots,d_m \in \M$ associated with times
$t_0,\dots,t_m$ on a given time-interval $[0,n]$, we seek
a $\C^1$ curve $\bspline: [0,n] \to \M$ that is ``sufficiently straight'',
while approximating ``sufficiently well'' the data points at the given
times.
Curve fitting on manifolds appears in several applications where denoising
or resampling time-dependent data is required. For instance, in
Arnould {\emph{et al.}}~\cite{Arnould2015}, the evolution of an organ is
observed by interpolating several contours of a tumoral tissue on a
shape manifold. Regression is also of interest in problems where
3D rigid rotations of objects are involved, as in motion planning
of rigid bodies or in computer graphics~\cite{Park2010}. In that case,
the manifold would be the special orthogonal group $\mathrm{SO}(3)$.
A widely used strategy to address the fitting problem in general
is to encapsulate the fitting and straightness constraints in a
single optimization problem
\begin{equation}
\label{eq:E}
\min_{\gamma \in \Gamma} E_\lambda(\gamma)
\coloneqq \int_{t_0}^{t_m} \left\|\frac{\mathrm{D}^2 \gamma(t)}{\mathrm{d}t^2}
\right\|_{\gamma(t)}^2 \mathrm{d}t + \lambda \sum_{i=0}^m \d^2(\gamma(t_i),d_i),
\end{equation}
where $\Gamma$ is an admissible set of curves $\gamma$ on $\M$,
$\frac{\mathrm{D}^2}{\mathrm{d}t^2}$ is the (Levi-Civita) second covariant derivative,
$\|\cdot\|_{\gamma(t)}$ is the Riemannian metric at $\gamma(t)$, and
$\d(\cdot,\cdot)$ is the Riemannian distance.
The parameter $\lambda$ allows one to strike a balance between the regularizer
$\int_{t_0}^{t_m} \|\frac{\mathrm{D}^2 \gamma(t)}{\mathrm{d}t^2}\|_{\gamma(t)}^2 \mathrm{d}t$
and the fitting term $\sum_{i=0}^m \d^2(\gamma(t_i),d_i)$.
This problem has been tackled in different ways in the past few years.
We cite for instance Samir {\emph{et al.}}~\cite{Samir2012}, who approached
the solution of problem~\eqref{eq:E} with a manifold-valued
steepest-descent method on an infinite dimensional Sobolev
space equipped with the Palais-metric. In Boumal {\emph{et al.}}~\cite{Boumal2011},
the search space is reduced to the product manifold $\M^M$,
as the curve $\bspline$ is discretized in $M$ points,
and the covariant derivative from~\eqref{eq:E} is approached with
finite differences on manifolds. A technique for regression based on
unwrapping and unrolling has been recently proposed by Kim {\emph{et al.}}~\cite{Kim2018}.
Finally, we mention Lin {\emph{et al.}}~\cite{Lin2017}, who proposed a
polynomial regression technique based on projections on tangent spaces.
The limit case when $\lambda \to \infty$ concerns interpolation. We
cite here several works that solve this problem by means
of B\'ezier curves~\cite{Arnould2015,Absil2016}.
In those works, the search space $\Gamma$ is reduced to composite
cubic B\'ezier splines $\bspline$ and the optimality of~\eqref{eq:E} is guaranteed
only when $\M = \R^r$. However, the main advantages of these methods
are twofold: \emph{(i)} the search space is drastically reduced to the
so-called \emph{control points} of $\bspline$ (see, {e.g.},~\cite{Farin2002}
for an overview on B\'ezier curves); \emph{(ii)} they are
very simple to implement on any Riemannian manifold, as only two
objects are required: the Riemannian exponential and the Riemannian
logarithm, while most of the other techniques require a gradient or
heavy computations of parallel transportation.
Our method aims to extend these works to fitting, and is extensively
described in~\cite{Gousenbourger2018} for the case where $m=n$.
We build several polynomial pieces by solving the
problem~\eqref{eq:E} on carefully chosen tangent spaces, and then
blend together these curves in such a way that $\bspline$
\emph{(i)} is differentiable, \emph{(ii)} is the natural cubic smoothing
spline when $\M$ is a Euclidean space, \emph{(iii)} interpolates
the data points if $m = n$ when $\lambda \to \infty$.
Furthermore, we stress that
the method is easy to use, as \emph{(iv)} it only requires the knowledge
of the Riemannian exponential and the Riemannian logarithm on $\M$;
\emph{(v)} the curve can be stored with only $\mathcal O(n)$ tangent vectors;
and, finally, \emph{(vi)} given this representation, computing $\gamma(t)$
at $t \in [0,n]$ only requires $\mathcal O(1)$ exponential and logarithm evaluations.
We present here the above-mentioned method and give results for
fitting on the sphere $\mathrm{S}^2$. We refer to~\cite{Gousenbourger2018}
for more details and for the proof of the six properties.
\begin{figure}[t!]
\centering
\begin{tikzpicture}[scale=.85]
\input{pics/blended}
\end{tikzpicture}
\caption{The curve $\bspline(t)$ is made of natural cubic splines
computed on different tangent spaces. The cubic splines can
be obtained equivalently as B\'ezier curves, using a technique
close to~\cite{Arnould2015}. They are then
blended together with carefully chosen weights.
}
\label{fig:blended}
\end{figure}
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}[scale=.7]
\input{pics/sphere_regression}
\end{tikzpicture}\\
\begin{tikzpicture}
\input{pics/sphere_regression_speed}
\end{tikzpicture}
\caption{Smoothing curve $\bspline:[0,4]\to\M$ fitting $100$ data points.}
\label{fig:regression}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\begin{tikzpicture}[scale=.7]
\input{pics/sphere_fitting}
\end{tikzpicture}\\
\begin{tikzpicture}
\input{pics/sphere_fitting_speed}
\end{tikzpicture}
\caption{Fitting curve $\bspline:[0,9]\to \M$, with $\lambda = 10^8$.}
\label{fig:fitting}
\end{subfigure}
\caption{The data points (red dots) are fitted by a $\C^1$ composite
blended spline $\bspline(t)$ (blue). The blended spline is here
represented as a B\'ezier curve conducted by its control points
(green circles).
}
\label{fig:sphere}
\end{figure*}
\section{Method}
\label{sec:method}
\paragraph{Framework.} Consider a Riemannian manifold $\M$ and a set of $m+1$ data points
$d_0,\dots,d_m \in \M$ associated with parameters $t_0,\dots,t_m$ over
an interval $[0,n]$.
Our method relies on computations on tangent spaces. For this, we
define the points $d(i)$, $i=0,\dots,n$, where $d(i) = d_{k_i}$ is the data point
whose associated parameter $t_{k_i}$ is the closest to $t = i$.
We denote by $T_{d(i)}\M$ the associated tangent space.
Consider finally the search space $\Gamma$ from
\eqref{eq:E} reduced to the space of $\C^1$ composite curves
$$
\bspline: [0,n] \to \M : t \mapsto f_i(t-i), \quad i = \lfloor t \rfloor,
$$
where the functions $f_i: [i,i+1] \to \M$ are called
\emph{blended functions}. They are given by
$$
f_i(t-i) = \av[(L_i(t),R_i(t)),(1-w(t),w(t))],
$$
for $i = 0,\dots,n$ and where $\av[(x,y),(1-a,a)]$ is a Riemannian weighted mean.
The fitting technique we present here consists in computing the
functions $L_i(t)$, $R_i(t)$ and choosing the weight function $w(t)$
such that the six above-mentioned properties are met.
\paragraph{Optimal curves.} The functions $L_i(t)$ and $R_i(t)$ are
obtained as follows. We note $\tilde x = \Log{d(i)}{x}$ and
$\hat x = \Log{d(i+1)}{x}$, the representation of the point $x\in\M$
in the tangent spaces at $d(i)$ and $d(i+1)$ respectively. We define
$L_i(t) = \Exp{d(i)}{\tilde \bspline(t)}$ and $R_i(t) = \Exp{d(i+1)}{\hat \bspline(t)}$,
where $\tilde \bspline(t)$ is the natural cubic spline fitting the
data points $\tilde d_0, \dots, \tilde d_m$ on $T_{d(i)}\M$, and
accordingly for $\hat \bspline(t)$. Note that $\tilde \bspline(t)$
(resp. $\hat \bspline(t)$) are therefore solutions of~\eqref{eq:E}
on the corresponding tangent space.
\paragraph{Riemannian averaging.} Finally, the choice of the weight
function $w(t)$ is of high importance in order to meet the differentiability
property. The weight function must thus be chosen such that
$L_i(0) = f_i(0)$, $R_i(1) = f_i(1)$, $\dot L_i(0) = \dot f_i(0)$ and
$\dot R_i(1) = \dot f_i(1)$. This is obtained for $w(1) = 1$, and
$w(0) = w'(0) = w'(1) = 0$. Among all the possible weight functions,
we choose $w(t) = 3 t^2 - 2 t^3$.
The blending method is represented in Figure~\ref{fig:blended}.
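On the sphere $\mathrm S^2$ all the needed ingredients are explicit, so the blending step can be sketched in a few lines. In the snippet below (Python), the helper names and test points are our own, and the curves $L_i$, $R_i$ are replaced by simple geodesic stand-ins rather than the tangent-space splines described above; the weighted mean of two points reduces to a point along the geodesic joining them.

```python
import numpy as np

def sphere_exp(p, v):
    """Riemannian exponential on S^2 at p, for a tangent vector v."""
    n = np.linalg.norm(v)
    return p if n < 1e-12 else np.cos(n) * p + np.sin(n) * v / n

def sphere_log(p, q):
    """Riemannian logarithm on S^2: tangent vector at p pointing to q."""
    w = q - np.dot(p, q) * p
    n = np.linalg.norm(w)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros(3) if n < 1e-12 else theta * w / n

def w_blend(t):
    # Weight with w(0) = 0, w(1) = 1, w'(0) = w'(1) = 0
    return 3 * t**2 - 2 * t**3

def blend(L, Rp, t):
    # Weighted mean of two points on S^2: point at parameter w(t)
    # along the geodesic from L to Rp
    return sphere_exp(L, w_blend(t) * sphere_log(L, Rp))

# Example: blend two geodesic stand-ins for L_i(t) and R_i(t)
p, q = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
t = 0.5
L_t = sphere_exp(p, t * sphere_log(p, q))
R_t = sphere_exp(q, (1 - t) * sphere_log(q, p))
f_t = blend(L_t, R_t, t)
print(bool(np.isclose(np.linalg.norm(f_t), 1.0)))   # stays on the sphere
```

The choice of $w$ as the cubic Hermite weight is what transfers the endpoint values and endpoint derivatives of $L_i$ and $R_i$ to the blended function, yielding the $\C^1$ property.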
\section{Results}
\label{sec:results}
We show two examples on $\mathrm S^2$.
Figure~\ref{fig:regression} presents a smoothing curve fitting
$100$ noisy points at times $t_i \in [0,4]$ with $\lambda = 100$. Figure~\ref{fig:fitting}
shows the fitting curve obtained for $10$ data points at times
$t_i = i$, $i=0,\dots,9$, for $\lambda = 10^8$. We observe in both cases
that the curve is $\C^1$ (property \emph{(i)}) and that the data
points are interpolated (property \emph{(iii)}) when $\lambda \to \infty$.
Property \emph{(ii)} is obtained by construction. Properties \emph{(iv-vi)}
are shown and proved in~\cite{Gousenbourger2018}. Additional examples
on the special orthogonal group $\textrm{SO}(3)$ or on the manifold of
positive semidefinite matrices of size $p$ and rank $q$, $\mathcal S_+(p,q)$,
are also provided in~\cite{Gousenbourger2018}.
{\small
\section{Introduction}
{
Multi-scale problems can be found naturally in many fields of science, such as biology, chemistry, fluid dynamics and material science, where processes at different time and/or spatial scales may be described by diverse laws \cite{Weinan11,Weinan03,Pavliotis08}. Driven by the improvements in computational power and the need for ever more faithful models of real-world complex systems, the interest in multi-scale modelling techniques has increased in recent years.
}
{
Multi-scale models are aimed at providing an efficient and more accurate representation of a complex system by coupling (sub)models that address different features/attributes at different levels of detail. However, the modelling of such systems is far from straightforward, not only because they are made up of dynamical systems with different characteristics, but also because of the often intricate cross dependencies among the relevant physical processes at work \cite{Weinan11}. As a consequence, the problem of tracking the evolution of a multi-scale dynamical system involves the prediction and estimation of several sets of variables that live in different state spaces and evolve on different time scales. Moreover, the tracking of the variables of interest usually has to be performed from partial and noisy observations. Efficient recursive algorithms for this task are badly needed.
}
{
The simplest case of a multi-scale problem consists of a system with unknown static parameters and dynamic state variables, since the parameters may be considered as state variables that evolve at a greater time scale. Hence, it is a multi-scale problem with only two time scales.
In general, carrying out both parameter estimation and state tracking at once implies several practical and theoretical difficulties. Nevertheless, a few well-principled methods have been proposed within this framework in recent years. Two examples of schemes that yield theoretically-guaranteed solutions to this problem are \gls{smc2} \cite{Chopin12} and \gls{pmcmc} \cite{Andrieu10}. They aim at computing the joint posterior probability distribution of all the unknown variables and parameters of the system, which provides all the information needed to obtain both point estimates and quantifications of the estimation error. Unfortunately, both \gls{smc2} and \gls{pmcmc} are batch techniques. In other words, every time a new observation arrives, the whole sequence of observations may have to be re-processed from scratch in order to update the estimates, leading to a quadratic increase of the computational effort over time. As an alternative, \glspl{npf} \cite{Crisan18bernoulli} apply the same principles as \gls{smc2} in a recursive way. An NPF estimates both parameters and states using a scheme of two intertwined layers of Monte Carlo methods, one inside the other, where the first layer estimates the parameters while the second layer tracks the dynamic state variables. This methodology is better suited for long sequences of observations; however, the use of Monte Carlo in both layers still makes its computational cost prohibitive in high-dimensional problems. Nested hybrid filters (NHFs) \cite{Perez-Vieites18} are extensions of the NPF that introduce Gaussian filtering techniques in the second layer of the algorithm, reducing the computational cost considerably and making the methodology more appealing for online processing.
}
{
In the last few years, other algorithms with nested, or layered, structures (in the vein of SMC$^2$, NPF or NHF) have been proposed in order to address inference in high-dimensional, complex models. The most recent examples are the \gls{stpf} \cite{Beskos17} and the \gls{nsmc} \cite{Naesseth19}. Both methods are intended to outperform classical sequential Monte Carlo (SMC) in high-dimensional systems. They rely on spatial structures within the state space (a Markov random field in \cite{Naesseth19} and an auto-regressive structure in \cite{Beskos17}) and, therefore, they may be useful to tackle multiple spatial scales.
}
{
One of the most typical examples of multi-scale systems is the two-scale Lorenz 96 model \cite{Lorenz96,Carlu18,Arnold13}. This is a simplified weather model which includes different spatial and temporal scales. Specifically, it is a heterogeneous multi-scale model, i.e. a model where the macro-scale level description needs to be completed by the data extracted from a micro-scale level \cite{Vanden07}. The coupling of both scales can be handled in two different ways:
\begin{itemize}
\item applying parameterization \cite{Arnold13,Pavliotis08,Vissio18} or
\item using a macro-micro solver \cite{Weinan03,Weinan05}.
\end{itemize}
The former replaces the contribution of the micro-scale level of the model by a simplified process that depends only on the slow state variables of the macro scale. Once the model is reduced to a single scale, many other algorithms can be used to estimate the evolution of the macro-scale process, which is usually the scale of interest. The latter method, in contrast, aims at estimating both scales and avoids any simplification of the model. Inference in the two-scale Lorenz 96 model has been addressed using algorithms such as particle filters \cite{Yeong20,Miguez15eus,Lingala12}, Gaussian filters \cite{Shen18,Tsuyuki12,Pulido18,Grooms15} or a combination of different methods \cite{Grooms21,Santitissadeekorn15}.}
{
In this paper, we propose a generalization of the \gls{nhf} methodology aimed at performing recursive Bayesian inference for a class of heterogeneous multi-scale state-space models \cite{Abdulle2012} that can be numerically approximated with a micro-macro solver. We analyse the case of a Lorenz 96 system that displays three time scales (static parameters, slow dynamic state variables at the macro-scale and fast dynamic state variables at the micro-scale), but the methodology works in the same way for more general examples (namely, systems with $n$ scales either in time or space).
}
{
The new scheme can be described as a three-layer nested smoother that approximates, in a recursive manner, the posterior probability distributions of the parameters and the two sets of state variables given the sequence of available observations. Specifically, we approximate the posterior probability distribution of the parameters in a first layer of computation, the posterior probability distribution of the slow state variables in a second layer, and the posterior probability distribution of the fast state variables in a third layer. The computations on the second layer are conditional on the candidate parameter values generated on the first layer, while the calculations on the third layer are conditional on the candidates drawn at the first and second layers. The inference techniques used in each layer can vary, leading to different computational costs and degrees of accuracy. As examples that illustrate the methodology, we propose two methods. The first one uses \gls{smc} algorithms in the first and second layers, intertwined with an unscented Kalman filter (UKF) \cite{Julier00} in the third layer. Similarly, the second method uses a \gls{smc} algorithm in the first layer, but incorporates the use of \glspl{enkf} \cite{Evensen03} and \glspl{ekf} in the second and third layers of the scheme, respectively.
}
{
The rest of the paper is organized as follows. We state the problem to be addressed in Section \ref{sec:Multi ProblemStatement}. In Section \ref{sec:Multi Optimalfilter}, we describe the optimal smoother for multi-scale systems with static parameters and two sets of dynamic state variables. Two specific methods derived from the general methodology are shown in Section \ref{sec:Multi MultiscaleNestedFilter}. Numerical results for the stochastic two-scale Lorenz 96 model are shown in Section \ref{sec:Multi Example} and conclusions are drawn in Section \ref{sec:Multi Conclusions}.
}
\section{Problem Statement} \label{sec:Multi ProblemStatement}
\subsection{State space models} \label{subsec:Multi Statespacemodels}
In this paper we focus on state-space models that result from the analysis of physical systems that display (intertwined) dynamical features at different time scales. To be specific, let us consider the class of multidimensional \glspl{sde} that can be written as
\newpage
\begin{align}
d\boldsymbol{x} = f_{\boldsymbol{x}}(\boldsymbol{x},\boldsymbol{\theta})d\tau + g_{\boldsymbol{x}}(\boldsymbol{z},\boldsymbol{\theta})d\tau+ \boldsymbol{Q}_x d\boldsymbol{v}, \label{eqsdex}\\
d\boldsymbol{z} = f_{\boldsymbol{z}}(\boldsymbol{x},\boldsymbol{\theta})d\tau + g_{\boldsymbol{z}}(\boldsymbol{z},\boldsymbol{\theta})d\tau + \boldsymbol{Q}_z d\boldsymbol{w}, \label{eqsdez}
\end{align}
where
\begin{itemize}
\item $\tau$ denotes continuous time,
\item $\boldsymbol{x}(\tau) \in \Reals^{d_x}$ and $\boldsymbol{z}(\tau) \in \Reals^{d_z}$ are the slow and fast states of the system, respectively,
\item $f_{\boldsymbol{x}} \colon \Reals^{d_x} \times \Reals^{d_\theta} \to \Reals^{d_x}$, $g_{\boldsymbol{x}} \colon \Reals^{d_z} \times \Reals^{d_\theta} \to \Reals^{d_x}$, $f_{\boldsymbol{z}} \colon \Reals^{d_x} \times \Reals^{d_\theta} \to \Reals^{d_z}$ and $g_{\boldsymbol{z}} \colon \Reals^{d_z} \times \Reals^{d_\theta} \to \Reals^{d_z}$ are (possibly nonlinear) transition functions parameterized by a fixed vector of unknown parameters, $\boldsymbol{\theta} \in \Reals^{d_\theta}$,
\item $\boldsymbol{Q}_x$ and $\boldsymbol{Q}_z$ are known scaling matrices that control the intensity and covariance of the stochastic perturbations,
\item and $\boldsymbol{v}(\tau)$ and $\boldsymbol{w}(\tau)$ are vectors of independent standard Wiener processes with dimension $d_x$ and $d_z$, respectively.
\end{itemize}
Equations \eqref{eqsdex}--\eqref{eqsdez} do not have closed-form solutions for general nonlinear functions $f_{\boldsymbol{x}}$, $f_{\boldsymbol{z}}$, $g_{\boldsymbol{x}}$ and $g_{\boldsymbol{z}}$, and they have to be discretized for numerical integration. In order to handle the slow and fast time scales, we apply a macro-micro solver \cite{Vanden03,Weinan05} that runs an Euler-Maruyama scheme for each set of state variables, albeit with different integration steps. To be specific, we use $\Delta_z$ as the integration step of $\boldsymbol{z}$, while $\Delta_x \gg \Delta_z$ is the integration step of $\boldsymbol{x}$. Then, we can simulate $\boldsymbol{x}$ and $\boldsymbol{z}$ using the pair of difference equations
\begin{align}
\boldsymbol{x}_t &= \boldsymbol{x}_{t-1} + \Delta_x(f_{\boldsymbol{x}}(\boldsymbol{x}_{t-1}, \boldsymbol{\theta}) +g_{\boldsymbol{x}}(\bar{\boldsymbol{z}}_t,\boldsymbol{\theta})) + \sqrt{\Delta_x} \boldsymbol{Q}_x \boldsymbol{v}_t, \label{eqIntx}\\
\boldsymbol{z}_n &= \boldsymbol{z}_{n-1} + \Delta_z(f_{\boldsymbol{z}}(\boldsymbol{x}_{\lfloor\frac{n-1}{h}\rfloor}, \boldsymbol{\theta}) +g_{\boldsymbol{z}}({\boldsymbol{z}}_{n-1},\boldsymbol{\theta})) + \sqrt{\Delta_z} \boldsymbol{Q}_z \boldsymbol{w}_n, \label{eqIntz}
\end{align}
where $\boldsymbol{x}_t \approx \boldsymbol{x}(t\Delta_x)$ and $\boldsymbol{z}_n \approx \boldsymbol{z}(n\Delta_z)$ are the state signals, $t \in \Naturals$ denotes discrete time in the time scale of the slow variables, $n \in \Naturals$ denotes discrete time in the fast time scale, $h = \frac{\Delta_x}{\Delta_z} \in \Integers^+$ is the number of fast steps (in the scale of $\boldsymbol{z}$) per slow step (in the scale of $\boldsymbol{x}$), $\boldsymbol{v}_t$ and $\boldsymbol{w}_n$ are Gaussian \glspl{rv} of zero mean and covariance matrices $\boldsymbol{I}_{d_x}$ and $\boldsymbol{I}_{d_z}$ respectively, and $\bar{\boldsymbol{z}}_t$ is an average of the fast signal computed as
\begin{equation}
\bar{\boldsymbol{z}}_t = \frac{1}{h} \sum_{i = h(t-1)+1}^{ht} \boldsymbol{z}_i. \label{eqbarz}
\end{equation}
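The macro-micro Euler-Maruyama scheme of Eqs. \eqref{eqIntx}--\eqref{eqbarz} can be sketched in code as follows. This is an illustrative implementation only: the drift functions, dimensions and step sizes are placeholders to be supplied by the user.

```python
import numpy as np

def macro_micro_step(x, z, theta, f_x, g_x, f_z, g_z, Qx, Qz, Dx, h, rng):
    """One slow (macro) step of the Euler-Maruyama macro-micro solver.

    Runs h fast (micro) steps of the z-equation with step Dz = Dx / h,
    holding x fixed (x_{floor((n-1)/h)} in Eq. (eqIntz)), averages the
    fast trajectory as in Eq. (eqbarz), then advances x by one step of
    size Dx as in Eq. (eqIntx).
    """
    Dz = Dx / h
    z_traj = np.empty((h, z.size))
    for i in range(h):
        w = rng.standard_normal(z.size)
        z = z + Dz * (f_z(x, theta) + g_z(z, theta)) + np.sqrt(Dz) * Qz @ w
        z_traj[i] = z
    z_bar = z_traj.mean(axis=0)                     # Eq. (eqbarz)
    v = rng.standard_normal(x.size)
    x = x + Dx * (f_x(x, theta) + g_x(z_bar, theta)) + np.sqrt(Dx) * Qx @ v
    return x, z
```

Iterating this function produces the sequences $\boldsymbol{x}_t$ and $\boldsymbol{z}_{ht}$; storing the intermediate `z_traj` yields the full fast subsequence $\boldsymbol{z}_{h(t-1)+1:ht}$ when needed.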
We assume that the available observations may be directly related to both sets of state variables
$\boldsymbol{x}_t$ and $\boldsymbol{z}_n$, but only in the (slow) time scale of $\boldsymbol{x}$. To be specific, the $t$-th observation is a $d_y$-dimensional \gls{rv}, $\boldsymbol{y}_t \in \Reals^{d_y}$, which we model as
\begin{equation}
\boldsymbol{y}_t = l(\boldsymbol{z}_{ht}, \boldsymbol{x}_t, \boldsymbol{\theta}) + \boldsymbol{r}_t,
\label{eqObsY}
\end{equation}
where $l \colon \Reals^{d_z} \times \Reals^{d_x} \times \Reals^{d_\theta} \to \Reals^{d_y}$ is a transformation that maps the states into the observation space, and $\boldsymbol{r}_t$ is a zero-mean observational-noise vector with covariance matrix $\boldsymbol{R}$.
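For illustration, with a linear observation map $l$ (an assumption made here for concreteness; the model admits a general nonlinear $l$), an observation can be generated as:

```python
import numpy as np

def observe(z_ht, x_t, H_z, H_x, R, rng):
    """Generate y_t = l(z_ht, x_t) + r_t for a linear map l (an assumption).

    Here l(z, x) = H_z @ z + H_x @ x and r_t ~ N(0, R), matching Eq. (eqObsY)
    with theta absorbed into the (hypothetical) matrices H_z and H_x.
    """
    r = rng.multivariate_normal(np.zeros(R.shape[0]), R)
    return H_z @ z_ht + H_x @ x_t + r
```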
\subsection{Model inference} \label{subsec:Multi Modelinference}
We aim at performing \textit{joint} Bayesian estimation of the parameters, $\boldsymbol{\theta}$, and all states, $\boldsymbol{x}$ and $\boldsymbol{z}$, for the state-space model described by Eqs. \eqref{eqIntx}-\eqref{eqIntz} and \eqref{eqObsY}. Typically, the three vectors of unknowns are tightly coupled. The estimation of the fixed parameters is necessary to track both sets of state variables and, at the same time, tracking the slow state variables is needed for predicting the time evolution of the fast states and vice versa.
While in many practical applications one is typically interested in filtering, i.e., the computation of the posterior \gls{pdf} of $\boldsymbol{\theta}$, $\boldsymbol{x}_t$ and $\boldsymbol{z}_n$ (for $n=ht$) given the data sequence $\boldsymbol{y}_{1:t} = \{ \boldsymbol{y}_1, \boldsymbol{y}_2, \ldots, \boldsymbol{y}_t \}$, we find it more convenient to tackle the smoothing \gls{pdf} $p(\boldsymbol{z}_{h(t-1)+1:ht}, \boldsymbol{x}_{0:t}, \boldsymbol{\theta} | \boldsymbol{y}_{1:t})$. Using the chain rule, we can factorize the latter density as
\begin{equation}
p(\boldsymbol{z}_{h(t-1)+1:ht}, \boldsymbol{x}_{0:t}, \boldsymbol{\theta} | \boldsymbol{y}_{1:t}) =
p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t}, \boldsymbol{\theta}) p(\boldsymbol{x}_{0:t} | \boldsymbol{y}_{1:t}, \boldsymbol{\theta}) p(\boldsymbol{\theta} | \boldsymbol{y}_{1:t}),
\label{eqjpdf}
\end{equation}
where we identify the three key conditional distributions that we seek to compute (or approximate). Each one of these \glspl{pdf} can be handled in a different \textit{layer} of computation. Hence, we aim at designing a nested inference algorithm (in the vein of \cite{Perez-Vieites18}) with three layers. In the first layer we compute $p(\boldsymbol{\theta} | \boldsymbol{y}_{1:t})$, in the second one we obtain $p(\boldsymbol{x}_{0:t} | \boldsymbol{y}_{1:t}, \boldsymbol{\theta})$, and in the third layer we tackle $p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t}, \boldsymbol{\theta})$.
Hereafter we describe the methodology for the optimal (yet impractical) calculation of the posterior \gls{pdf} in Eq. \eqref{eqjpdf} as well as two approximate numerical solutions that admit feasible computational implementations.
\subsection{Notation}
{
We use lower-case, regular-face letters, e.g., $x$, to denote scalar quantities and bold-face letters, e.g., $\boldsymbol{x}$, to denote vectors. Matrices are represented by bold-face upper-case letters, e.g., $\boldsymbol{X}$. We make no notational difference between deterministic quantities and random variables (r.v.'s).
}
{
If $\boldsymbol{x}$ is a $d$-dimensional random vector on $\Reals^d$, we denote its pdf with respect to the Lebesgue measure as $p(\boldsymbol{x})$. This is an argument-wise notation, i.e., if $\boldsymbol{x}$ and $\boldsymbol{y}$ are random vectors, we denote their pdf's as $p(\boldsymbol{x})$ and $p(\boldsymbol{y})$ even if they are possibly different functions (when $\boldsymbol{x}$ and $\boldsymbol{y}$ obey different probability laws). Similarly, $p(\boldsymbol{x},\boldsymbol{y})$ denotes the joint pdf of $\boldsymbol{x}$ and $\boldsymbol{y}$, while $p(\boldsymbol{x}|\boldsymbol{y})$ is the conditional pdf of $\boldsymbol{x}$ given $\boldsymbol{y}$. This notation, which has been broadly used in the field of particle filtering \cite{Liu98,Doucet00,Djuric03,Doucet09}, is simple yet sufficient to describe the methodologies in this paper.
}
{
We also resort to a more specific notation for Gaussian pdf's. If $\boldsymbol{x}$ is a $d$-dimensional Gaussian random vector with mean $\bar{\boldsymbol{x}}$ and covariance matrix $\boldsymbol{C}>0$ then we can explicitly write the pdf $p(\boldsymbol{x})$ as
$$
{\mathcal N}(\boldsymbol{x} | \bar{\boldsymbol{x}}, \boldsymbol{C}) =
\frac{1}{\left( 2\pi \right)^{\frac{d}{2}} |\boldsymbol{C}|^{\frac{1}{2}} }
\exp\left\{
-\frac{1}{2}\left(
\boldsymbol{x} - \bar{\boldsymbol{x}}
\right)^\top \boldsymbol{C}^{-1} \left(
\boldsymbol{x} - \bar{\boldsymbol{x}}
\right)
\right\},
$$
where the superscript $^\top$ indicates transposition and $|\boldsymbol{C}|$ denotes the determinant of matrix $\boldsymbol{C}$.
}
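The Gaussian density above translates directly into code; the following sketch evaluates ${\mathcal N}(\boldsymbol{x} | \bar{\boldsymbol{x}}, \boldsymbol{C})$ term by term:

```python
import numpy as np

def gaussian_pdf(x, mean, C):
    """Evaluate the multivariate Gaussian density N(x | mean, C).

    Implements the explicit formula: normalization (2*pi)^(d/2) |C|^(1/2)
    and the quadratic form (x - mean)^T C^{-1} (x - mean).
    """
    d = x.size
    diff = x - mean
    norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(C))
    return np.exp(-0.5 * diff @ np.linalg.solve(C, diff)) / norm
```

For instance, at the mean with $\boldsymbol{C} = \boldsymbol{I}_2$, this returns $1/(2\pi)$, as expected from the formula.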
\section{Optimal nested smoother} \label{sec:Multi Optimalfilter}
We introduce the optimal nested smoothing algorithm, consisting of three layers, that computes each of the \glspl{pdf} in Eq. \eqref{eqjpdf}. The scheme is summarized in Fig. \ref{figscheme}. As a result, we obtain the posterior smoothing density $p(\boldsymbol{z}_{h(t-1)+1:ht}, \boldsymbol{x}_{0:t}, \boldsymbol{\theta} | \boldsymbol{y}_{1:t})$ which, in turn, can be used to compute the optimal filtering \gls{pdf}, $p(\boldsymbol{z}_{ht}, \boldsymbol{x}_t, \boldsymbol{\theta} | \boldsymbol{y}_{1:t})$, by marginalization if necessary. When the exact computations demanded by this algorithm are not feasible (for general nonlinear and/or non-Gaussian dynamical systems) it serves as a template for approximate numerical schemes, as shown in Section \ref{sec:Multi MultiscaleNestedFilter}.
\subsection{First layer: static parameters}
The aim of this layer is to compute the posterior \gls{pdf} of the parameters, $p(\boldsymbol{\theta} | \boldsymbol{y}_{1:t})$, recursively. We assume that the \textit{a priori} density $p(\boldsymbol{\theta})$ is known.
At time $t$, assume that $p(\boldsymbol{\theta} | \boldsymbol{y}_{1:t-1})$ has been calculated. When a new observation, $\boldsymbol{y}_t$, is obtained, we need to compute the likelihood $p(\boldsymbol{y}_t | \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ in order to obtain the posterior \gls{pdf} of $\boldsymbol{\theta}$ at time $t$ as
\begin{equation}
p(\boldsymbol{\theta} | \boldsymbol{y}_{1:t}) \propto p(\boldsymbol{y}_t | \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) p(\boldsymbol{\theta} | \boldsymbol{y}_{1:t-1}). \label{eq:Multi postparam 1st}
\end{equation}
However, the parameter likelihood $p(\boldsymbol{y}_t | \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ cannot be computed directly. Instead, we decompose it as
\begin{equation}
p(\boldsymbol{y}_t | \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) = \int p(\boldsymbol{y}_t | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) p(\boldsymbol{x}_{0:t} | \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) d\boldsymbol{x}_{0:t}, \label{eq:Multi likelihood 1st}
\end{equation}
where $p(\boldsymbol{y}_t | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ and $p(\boldsymbol{x}_{0:t} | \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ are the likelihood and the predictive \gls{pdf} of the state sequence $\boldsymbol{x}_{0:t}$, respectively, conditional on the previous observations and the parameters. These \glspl{pdf} are computed in the second layer of the algorithm.
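In an \gls{smc} implementation of this layer, the update of Eq. \eqref{eq:Multi postparam 1st} becomes a reweighting of a population of parameter particles. The sketch below is illustrative only: the log-likelihood estimator is a placeholder for the quantity produced by the second layer via Eq. \eqref{eq:Multi likelihood 1st}.

```python
import numpy as np

def update_parameter_particles(thetas, weights, log_lik):
    """One Bayesian update of Eq. (postparam 1st) over parameter particles.

    thetas  : (N, d_theta) array of samples representing p(theta | y_{1:t-1}).
    weights : (N,) normalized importance weights.
    log_lik : placeholder function theta -> estimate of
              log p(y_t | y_{1:t-1}, theta), supplied by the second layer.
    """
    logw = np.log(weights) + np.array([log_lik(th) for th in thetas])
    logw -= logw.max()                 # stabilize before exponentiation
    new_w = np.exp(logw)
    return new_w / new_w.sum()         # normalized posterior weights
```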
\begin{figure}[!ht]
\centering
{\hspace*{-1cm}
\input{Scheme}}
\caption{Schematic depiction of the optimal smoother. Each column represents a layer of computation and the dependencies among \glspl{pdf} are indicated by arrows. The dashed arrows are used to show relations among different layers while the solid arrows represent dependencies in the same layer. Arrows \textit{a} and \textit{b} indicate that some intermediate computations are needed to relate both \glspl{pdf}.}
\label{figscheme}
\end{figure}
\subsection{Second layer: slow states}
Computations in this layer are conditional on the parameter vector $\boldsymbol{\theta}$. We seek to compute the smoothing posterior $p(\boldsymbol{x}_{0:t} |\boldsymbol{y}_{1:t}, \boldsymbol{\theta})$ as well as the predictive density $p(\boldsymbol{x}_{0:t} |\boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ and the likelihood $p(\boldsymbol{y}_t | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$, which are needed in the first layer --see Eq. \eqref{eq:Multi likelihood 1st}. We assume that the prior density $p(\boldsymbol{x}_0)$ is known and the posterior \gls{pdf} of the slow states at time $t-1$ (conditional on $\boldsymbol{\theta}$), $p(\boldsymbol{x}_{0:t-1} | \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$, is available at time $t$.
We first seek the predictive density of $\boldsymbol{x}_{0:t}$, namely,
\begin{equation}
p(\boldsymbol{x}_{0:t}| \boldsymbol{y}_{1:t-1},\boldsymbol{\theta}) = p(\boldsymbol{x}_t | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1},\boldsymbol{\theta}) p(\boldsymbol{x}_{0:t-1}| \boldsymbol{y}_{1:t-1},\boldsymbol{\theta}), \label{eq:Multi predx 2nd}
\end{equation}
which is obtained recursively from the posterior at time $t-1$, $p( \boldsymbol{x}_{0:t-1}| \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$, but requires the evaluation of the marginal density $p(\boldsymbol{x}_t | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1},\boldsymbol{\theta})$. The latter is not directly available. It has to be computed as an integral \gls{wrt} the fast state variables, in particular
\begin{align}
p(\boldsymbol{x}_t | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) =& \int p(\boldsymbol{x}_t | \boldsymbol{z}_{h(t-1)+1:ht}, \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) \times \nonumber \\
&\times p(\boldsymbol{z}_{h(t-1)+1:ht} |\boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) d\boldsymbol{z}_{h(t-1)+1:ht}. \label{eq:Multi transitionx 2nd}
\end{align}
The two densities in the integrand of Eq. \eqref{eq:Multi transitionx 2nd}, which involve the fast state variables $\boldsymbol{z}_{h(t-1)+1}, \ldots, \boldsymbol{z}_{ht}$, are calculated in the third layer. Recall that $h$ is the number of discrete-time steps of the fast states per time step of the slow variables (i.e., the $\boldsymbol{z}_n$'s are $h$ times faster than the $\boldsymbol{x}_t$'s).
As for the likelihood, when $\boldsymbol{y}_t$ becomes available we update the posterior density of $\boldsymbol{x}_{0:t}$ (conditional on $\boldsymbol{\theta}$) as
\begin{equation}
p(\boldsymbol{x}_{0:t} |\boldsymbol{y}_{1:t}, \boldsymbol{\theta}) \propto p(\boldsymbol{y}_t | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) p(\boldsymbol{x}_{0:t} | \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}). \label{eq:Multi postx 2nd}
\end{equation}
In the equation above, the likelihood $p(\boldsymbol{y}_t | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ can be computed as an integral \gls{wrt} the fast state variables, specifically,
\begin{align}
p(\boldsymbol{y}_t | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) =& \int p(\boldsymbol{y}_t | \boldsymbol{z}_{h(t-1)+1:ht}, \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) \times \nonumber \\
& \times \quad p(\boldsymbol{z}_{h(t-1)+1:ht} |\boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) d\boldsymbol{z}_{h(t-1)+1:ht} \nonumber \\
=& \int p(\boldsymbol{y}_t | \boldsymbol{z}_{ht}, \boldsymbol{x}_t, \boldsymbol{\theta}) p(\boldsymbol{z}_{h(t-1)+1:ht} |\boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) d\boldsymbol{z}_{h(t-1)+1:ht}. \label{eq:Multi likelihood 2nd}
\end{align}
The likelihood function $p(\boldsymbol{y}_t | \boldsymbol{z}_{ht}, \boldsymbol{x}_t, \boldsymbol{\theta})$ can be obtained directly from the state-space model described by Eqs. \eqref{eqIntx}-\eqref{eqObsY}, while the conditional \gls{pdf} of the subsequence $\boldsymbol{z}_{h(t-1)+1:ht}$ can be further decomposed as
\begin{eqnarray}
p(\boldsymbol{z}_{h(t-1)+1:ht} |\boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) &=& \frac{ p(\boldsymbol{x}_t | \boldsymbol{z}_{h(t-1)+1:ht}, \boldsymbol{x}_{t-1}, \boldsymbol{\theta})}{p( \boldsymbol{x}_t| \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})} \nonumber \\
&& \times p( \boldsymbol{z}_{h(t-1)+1:ht}| \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}). \label{eq:Multi predz weird 2nd}
\end{eqnarray}
Both the likelihood $p(\boldsymbol{y}_t | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ of Eq. \eqref{eq:Multi likelihood 2nd} and the {predictive density $p( \boldsymbol{z}_{h(t-1)+1:ht}| \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ of Eq. \eqref{eq:Multi predz weird 2nd}} are explicitly computed in the third layer.
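In a Monte Carlo implementation, the integral of Eq. \eqref{eq:Multi likelihood 2nd} is naturally approximated by averaging the likelihood over draws of the fast subsequence from its conditional predictive \gls{pdf}. The sketch below is purely illustrative; both function arguments are placeholders.

```python
import numpy as np

def mc_likelihood(sample_fast, lik, M):
    """Monte Carlo estimate of p(y_t | x_{0:t}, y_{1:t-1}, theta).

    sample_fast : placeholder, m -> draw z^{(m)} of z_{h(t-1)+1:ht} from
                  p(z_{h(t-1)+1:ht} | x_{0:t}, y_{1:t-1}, theta).
    lik         : placeholder, z -> p(y_t | z_ht, x_t, theta).
    """
    return np.mean([lik(sample_fast(m)) for m in range(M)])
```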
\subsection{Third layer: fast states}
Computations on this layer are conditional on the parameter vector $\boldsymbol{\theta}$ and the sequence of slow states $\boldsymbol{x}_{0:t}$. In particular, we seek to compute the conditional posterior \glspl{pdf} of $\boldsymbol{z}_{h(t-1)+1:ht}$, including the predictive densities,
$$
p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) \quad
\text{and} \quad
p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}),
$$
as well as the updated density $p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t}, \boldsymbol{\theta})$. We also evaluate the plain likelihood function $p(\boldsymbol{y}_t | \boldsymbol{z}_{ht}, \boldsymbol{x}_t, \boldsymbol{\theta})$. We assume that the prior \gls{pdf} of the fast states, $p(\boldsymbol{z}_0)$, is known and the posterior up to time $t-1$, $p(\boldsymbol{z}_{h(t-2)+1:h(t-1)} | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$, is available to enable recursive computations.
The first predictive \gls{pdf} is computed recursively from the posterior up to time $t-1$ as the integral
\begin{eqnarray}
p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1},\boldsymbol{\theta}) &=& \nonumber\\
\int p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{z}_{h(t-2)+1:h(t-1)},\boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1},\boldsymbol{\theta}) \times &&\nonumber\\
\times p(\boldsymbol{z}_{h(t-2)+1:h(t-1)} | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) d\boldsymbol{z}_{h(t-2)+1:h(t-1)} &=&\nonumber \\
\int p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{z}_{h(t-1)},\boldsymbol{x}_{t-1}, \boldsymbol{\theta}) \times &&\nonumber\\
p(\boldsymbol{z}_{h(t-2)+1:h(t-1)} | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) d\boldsymbol{z}_{h(t-2)+1:h(t-1)}, &&\label{eq:Multi predz 3rd}
\end{eqnarray}
where the transition \gls{pdf} $p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{z}_{h(t-1)},\boldsymbol{x}_{t-1}, \boldsymbol{\theta})$ is obtained immediately by iterating Eq. \eqref{eqIntz} in the state-space model $h$ times {and $p(\boldsymbol{z}_{h(t-2)+1:h(t-1)} | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ is the posterior \gls{pdf} of the fast states at the previous time step. Besides,} the second predictive density, $p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$, is obtained by substituting the first predictive \gls{pdf} of Eq. \eqref{eq:Multi predz 3rd} into Eq. \eqref{eq:Multi predz weird 2nd}\footnote{Note that, in Eq. \eqref{eq:Multi predz weird 2nd}, the density $p( \boldsymbol{x}_t| \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ is the normalization constant for the conditional \gls{pdf} $p( \boldsymbol{z}_{h(t-1)+1:ht}| \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$, while $p(\boldsymbol{x}_t | \boldsymbol{z}_{h(t-1)+1:ht}, \boldsymbol{x}_{t-1}, \boldsymbol{\theta})$ results from Eq. \eqref{eqIntx}.}.
Finally, when the observation $\boldsymbol{y}_t$ becomes available, we compute the plain likelihood $p(\boldsymbol{y}_t | \boldsymbol{z}_{ht}, \boldsymbol{x}_t, \boldsymbol{\theta})$ (from Eq. \eqref{eqObsY} in the state-space model) and then update the conditional posterior \gls{pdf} of the fast state variables, namely,
\begin{align}
p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t}, \boldsymbol{\theta}) =& \frac{ p(\boldsymbol{y}_t | \boldsymbol{z}_{ht},\boldsymbol{x}_t, \boldsymbol{\theta}) p(\boldsymbol{x}_t |\boldsymbol{z}_{h(t-1)+1:ht}, \boldsymbol{x}_{t-1}, \boldsymbol{\theta})}{p(\boldsymbol{y}_t | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) p(\boldsymbol{x}_t |\boldsymbol{x}_{0:t-1},\boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})} \times \nonumber \\
&\times p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) \nonumber
\end{align}
or, simply,
\begin{align}
p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t}, \boldsymbol{\theta}) \propto& p(\boldsymbol{y}_t | \boldsymbol{z}_{ht},\boldsymbol{x}_t, \boldsymbol{\theta}) p(\boldsymbol{x}_t |\boldsymbol{z}_{h(t-1)+1:ht}, \boldsymbol{x}_{t-1}, \boldsymbol{\theta}) \times \nonumber \\
& \times p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta}) \label{eq:Multi postz 3rd}
\end{align}
if we skip the normalization constant, which is typically not needed explicitly in numerical implementations.
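In particle form, the unnormalized update of Eq. \eqref{eq:Multi postz 3rd} amounts to reweighting predictive samples of the fast subsequence. The following sketch assumes (as placeholders) that the two density evaluations are available as functions:

```python
import numpy as np

def reweight_fast_particles(z_particles, lik_y, trans_x):
    """Particle version of Eq. (postz 3rd): weight each predictive draw of
    z_{h(t-1)+1:ht} by p(y_t | z_ht, x_t, theta) * p(x_t | z, x_{t-1}, theta).

    lik_y, trans_x : placeholder functions evaluating the two densities
                     at a given draw z.
    """
    w = np.array([lik_y(z) * trans_x(z) for z in z_particles])
    return w / w.sum()   # normalization recovers the posterior weights
```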
\subsection{Outline of the optimal nested smoother}
The optimal nested smoother uses each layer of computation to track a subset of \glspl{rv} that evolve over their own time scale, by computing the corresponding predictive and updated \glspl{pdf} (when observations are collected), as well as the necessary likelihoods. To be specific:
\begin{itemize}
\item The third layer tracks the fast state variables, $\boldsymbol{z}_n$, and computes the predictive \gls{pdf} $p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ {of Eq. \eqref{eq:Multi predz 3rd}} and the likelihood $p(\boldsymbol{y}_t | \boldsymbol{z}_{ht}, \boldsymbol{x}_t, \boldsymbol{\theta})$. They are used to track the conditional posterior density $p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t}, \boldsymbol{\theta})$ {of Eq. \eqref{eq:Multi postz 3rd}. }
\item The second layer takes the \gls{pdf} $p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ and the likelihood $p(\boldsymbol{y}_t | \boldsymbol{z}_{ht}, \boldsymbol{x}_t, \boldsymbol{\theta})$ in order to compute the predictive \gls{pdf} $p(\boldsymbol{x}_{0:t}| \boldsymbol{y}_{1:t-1},\boldsymbol{\theta})$ in Eq. \eqref{eq:Multi predx 2nd} and the likelihood $p(\boldsymbol{y}_t | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ in Eq. \eqref{eq:Multi likelihood 2nd}. These are used to track the posterior \gls{pdf} of the slow state, $p(\boldsymbol{x}_{0:t} |\boldsymbol{y}_{1:t}, \boldsymbol{\theta})$, {of Eq. \eqref{eq:Multi postx 2nd}}.
\item The first layer takes the \glspl{pdf} $p(\boldsymbol{x}_{0:t}| \boldsymbol{y}_{1:t-1},\boldsymbol{\theta})$ and $p(\boldsymbol{y}_t | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ to track the posterior \gls{pdf} of the parameters, $p(\boldsymbol{\theta}|\boldsymbol{y}_{1:t})$, {of Eq. \eqref{eq:Multi postparam 1st}}.
\end{itemize}
Finally, the three conditional posterior \glspl{pdf} are needed to compute the joint smoothing density $p(\boldsymbol{z}_{h(t-1)+1:ht}, \boldsymbol{x}_{0:t}, \boldsymbol{\theta} | \boldsymbol{y}_{1:t})$ in Eq. \eqref{eqjpdf}.
Figure \ref{figscheme} is a schematic representation of the optimal smoother, which displays each layer in a different column. Most of the \glspl{pdf} that need to be computed are included in this scheme, showing the dependencies among them with arrows. These relations are direct except for the arrows labeled \textit{a} and \textit{b}, which require one or more intermediate computations. In the case of arrow \textit{a}, the predictive \gls{pdf} $p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ is used to compute $p(\boldsymbol{x}_t | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ in Eq. \eqref{eq:Multi transitionx 2nd}, which is necessary to calculate the predictive density $p(\boldsymbol{x}_{0:t} | \boldsymbol{y}_{1:t-1},\boldsymbol{\theta})$ of the second layer in Eq. \eqref{eq:Multi predx 2nd}. As for the arrow labeled \textit{b}, the predictive density $p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t-1}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ is used to compute the \gls{pdf} $p(\boldsymbol{z}_{h(t-1)+1:ht}| \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1}, \boldsymbol{\theta})$ in Eq. \eqref{eq:Multi predz weird 2nd}, which is used, in turn, to obtain the likelihood $p(\boldsymbol{y}_t | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t-1},\boldsymbol{\theta})$ in Eq. \eqref{eq:Multi likelihood 2nd}.
\section{Approximate smoothing algorithms} \label{sec:Multi MultiscaleNestedFilter}
The optimal algorithm described in Section \ref{sec:Multi Optimalfilter} cannot be implemented exactly for most practical models. Instead, one needs to devise suitable approximations that can be implemented numerically in an efficient way. One possible approach is a full-blown \gls{smc} implementation that extends the nested particle filter of \cite{Crisan18bernoulli}. However, such a scheme with three layers of computation results in a prohibitive computational cost. Instead, we introduce herein two different algorithms that combine \gls{smc} and Gaussian approximations at the different layers. The resulting algorithms can be implemented numerically in a more efficient manner and are suitable for parallelization, which leads to very fast runtimes.
The first method involves using \gls{smc} schemes both at the first and second layer, together with a bank of \glspl{ukf} \cite{Julier04,Menegaz15} to approximate (as Gaussians) the conditional densities to be computed at the third layer. This implementation has great potential for parallelization, but it is computationally costly nevertheless. Hence, we also introduce a second, less demanding scheme that utilizes the same \gls{smc} scheme at the first layer, but employs \glspl{enkf} \cite{Evensen03} at the second layer and simple \glspl{ekf} \cite{Anderson79} at the third layer to approximate the densities needed at each of them. A numerical study of performance is carried out in Section \ref{sec:Multi Example} for both implementations.
\subsection{First scheme} \label{subsec:Multi FirstSch}
We introduce a numerical approximate smoother where the probability measures $p(\boldsymbol{\theta} | \boldsymbol{y}_{1:t})d\boldsymbol{\theta}$ and $p(\boldsymbol{x}_{0:t} | \boldsymbol{y}_{1:t}, \boldsymbol{\theta})d\boldsymbol{x}_{0:t}$ are approximated using \gls{smc} while we replace the conditional smoothing \gls{pdf} $p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t}, \boldsymbol{\theta})$ by a sequence of Gaussian approximations of the densities $p(\boldsymbol{z}_n | \boldsymbol{x}_{0:t}, \boldsymbol{y}_{1:t}, \boldsymbol{\theta})$, for $n = h(t-1)+1, \ldots, ht$, computed using a bank of \glspl{ukf}.
\paragraph{First layer}
Algorithm \ref{alg:Multi FirstScheme SMC1} describes the first layer of the nested smoother, which aims at the approximation of the posterior distribution of the parameters. It receives as inputs the prior \glspl{pdf} of the parameters, $p(\boldsymbol{\theta})$, and the two subsets of state variables, $p(\boldsymbol{x}_0)$ and $p(\boldsymbol{z}_0)$. In the initialization step, they are used to generate starting Monte Carlo particles (for the \gls{smc} schemes) and sigma-points (for the \glspl{ukf}) needed at each layer. Specifically, we generate $N$ parameter samples $\{\boldsymbol{\theta}_0^i\}_{1 \le i \le N}$, $J$ slow state particles for each parameter sample, $\{\boldsymbol{x}_0^{i,j}\}_{1 \le j \le J}$, and $L$ sigma-points of the fast state for each slow state sample, $\{\boldsymbol{z}_0^{i,j,l}\}_{0 \le l \le L-1}$, to obtain a set of the form $\{\boldsymbol{\theta}_0^i, \{\boldsymbol{x}_0^{i,j}, \{\boldsymbol{z}_0^{i,j,l}\}_{0 \le l \le L-1} \}_{1 \le j \le J}\}_{1 \le i \le N}$. All particles are independent at time $t=n=0$, provided the priors are independent.
Additionally, a Markov kernel $\kappa_N^{\boldsymbol{\theta}'}(d\boldsymbol{\theta})$ is needed for the jittering of parameter samples \cite{Crisan18bernoulli}, i.e., to draw a new set of particles, $\{\bar{\boldsymbol{\theta}}_t^i\}_{1\le i \le N}$, at each discrete-time step.
This jittering is needed to preserve the diversity of the particles; otherwise, after a few resampling steps, the parameter particles would be reduced to just a few distinct values and the filter would collapse.
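The kernel $\kappa_N^{\boldsymbol{\theta}'}(d\boldsymbol{\theta})$ is left generic in the text; a common concrete choice (an illustrative assumption here, in the spirit of the jittering of \cite{Crisan18bernoulli}) is a Gaussian perturbation whose variance shrinks with $N$, as in this minimal NumPy sketch:

```python
import numpy as np

def jitter(thetas, sigma, N, rng):
    """Gaussian jittering kernel: perturb each parameter particle with a
    standard deviation that shrinks as the number of particles N grows."""
    return thetas + (sigma / np.sqrt(N)) * rng.standard_normal(thetas.shape)

# Jitter a set of N parameter particles of dimension d_theta (illustrative values)
N, d_theta = 100, 3
thetas = np.zeros((N, d_theta))
jittered = jitter(thetas, sigma=1.0, N=N, rng=np.random.default_rng(0))
```

Shrinking the perturbation with $N$ keeps the bias introduced by jittering under control as the particle set grows.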
At every time step $t$ (in the slow time scale), we compute the approximate likelihood for each particle $\bar{\boldsymbol{\theta}}_t^i$, namely
$$
\hat{p}^{J,L} (\boldsymbol{y}_t | \boldsymbol{y}_{1:t-1}, \bar{\boldsymbol{\theta}}_t^i) \approx p(\boldsymbol{y}_t | \boldsymbol{y}_{1:t-1}, \bar{\boldsymbol{\theta}}_t^i),
$$
in order to obtain the non-normalized weights $\{\tilde{v}_t^i\}_{1 \le i \le N}$. The superscripts $J$ and $L$ indicate the dependence of the approximation on the number of particles generated for the second layer ($J$) and the number of sigma-points employed by the \glspl{ukf} in the third layer ($L$). The states $\{\boldsymbol{x}_{t-1}^{i,j}, \{ \boldsymbol{z}_{h(t-1)}^{i,j,l}\}_{0 \le l \le L-1} \}_{1 \le j \le J}$ are propagated to time $t$ in the nested layers of filtering in step \ref{stepal1propagatestate}. Finally, we normalize the weights in order to resample not only the parameter particles $\bar{\boldsymbol{\theta}}_t^i$, but also their associated sets of state variables.
\par\noindent\rule{\textwidth}{0.4pt}
\begin{algoritmo} {\gls{smc} approximation of $p(\boldsymbol{\theta} | \boldsymbol{y}_{1:t})$ in the first method \label{alg:Multi FirstScheme SMC1}}
\textbf{Inputs}
\begin{itemize}
\item[-] Prior distributions $p(\boldsymbol{\theta})$, $p(\boldsymbol{x}_0)$ and $p(\boldsymbol{z}_0)$.
\item[-] A Markov kernel $\kappa_N^{\boldsymbol{\theta}'}(d\boldsymbol{\theta})$ which, given $\boldsymbol{\theta}'$, generates jittered parameters $\boldsymbol{\theta} \in \Reals^{d_\theta}$.
\end{itemize}
\textbf{Initialization:} this is a joint initialization for all three layers.
\begin{itemize}
\item[-] Draw $N$ i.i.d. samples $\boldsymbol{\theta}_0^i$, $i = 1,\ldots,N$, from the prior distribution $p(\boldsymbol{\theta})$.
\item[-] Draw $NJ$ i.i.d. samples $\boldsymbol{x}_0^{i,j}$, $i = 1,\ldots,N$, $j = 1,\ldots,J$, from the prior distribution $p(\boldsymbol{x}_0)$.
\item[-] Compute $L=2d_z+1$ sigma-points, $\boldsymbol{z}_0^{i,j,l}$, with their respective weights, $\lambda_0^{i,j,l}$, $i = 1,\ldots,N$, $j = 1,\ldots,J$, $l = 0,\ldots,L-1$, from the prior distribution $p(\boldsymbol{z}_0 | \hat{\boldsymbol{z}}_0, \boldsymbol{C}_0(\boldsymbol{z}))$ as
\begin{align*}
\boldsymbol{z}_{0}^{i,j,0} &= \hat{\boldsymbol{z}}_{0}, \quad &\lambda_0^{i,j,0} &= \frac{1}{1+d_z}, \\
\boldsymbol{z}_{0}^{i,j,l} &= \hat{\boldsymbol{z}}_{0} + \boldsymbol{S}_l, \quad &\lambda_0^{i,j,l} &= \frac{1-\lambda_0^{i,j,0}}{2 d_z},\\
\boldsymbol{z}_{0}^{i,j,l+d_z} &= \hat{\boldsymbol{z}}_{0} - \boldsymbol{S}_l, \quad &\lambda_0^{i,j,l+d_z} &= \frac{1-\lambda_0^{i,j,0}}{2 d_z},
\end{align*}
for $l = 1,\ldots,d_z$, where $\boldsymbol{S}_l$ is the $l$-th row or column of the matrix square root of $\frac{d_z}{1 - \lambda_0^{i,j,0} } \boldsymbol{C}_{0}(\boldsymbol{z})$.
\end{itemize}
\textbf{Procedure} For $t \ge 0$:
\begin{enumerate}
\item Draw $N$ i.i.d. samples $\bar{\boldsymbol{\theta}}_t^i$ from $\kappa_N^{\boldsymbol{\theta}_{t-1}^i}(d\boldsymbol{\theta})$.
\item For $i=1,\ldots,N$:
\begin{enumerate}
\item Compute
\begin{equation}
\tilde{v}_t^i = \hat{p}^{J,L} (\boldsymbol{y}_t | \boldsymbol{y}_{1:t-1}, \bar{\boldsymbol{\theta}}_t^i),
\end{equation}
where the approximate likelihood is evaluated at layer 2. \label{stepal1likelihhod}
\item Obtain new particles $\{\boldsymbol{x}_t^{i,j}, \{{\boldsymbol{z}}_{ht}^{i,j,l}\}_{0 \le l \le L-1} \}_{1 \le j \le J}$ at time $t$ (from layers 2 and 3). \label{stepal1propagatestate}
\item Normalize the weights
\begin{equation}
\quad v_t^i = \frac{\tilde{v}_t^i}{\sum_{i=1}^{N} \tilde{v}_t^i}. \label{eqal1normweights}
\end{equation} \label{stepal1normweights}
\end{enumerate}
\item Resample: for each $m=1,\ldots,N$ and with probability $v_t^i$, set \label{stepal1resampling}
\begin{equation}
\{\boldsymbol{\theta}_t^m, \{\boldsymbol{x}_t^{m,j}, \{{\boldsymbol{z}}_{ht}^{m,j,l}\}_{0 \le l \le L-1} \}_{1\leq j \leq J}\} = \{\bar{\boldsymbol{\theta}}_t^i, \{\boldsymbol{x}_t^{i,j}, \{{\boldsymbol{z}}_{ht}^{i,j,l}\}_{0 \le l \le L-1} \}_{1\leq j \leq J}\}.
\end{equation}
\end{enumerate}
\textbf{Outputs:} $\{\boldsymbol{\theta}_t^i, \{\boldsymbol{x}_t^{i,j}, \{{\boldsymbol{z}}_{ht}^{i,j,l} \} \}_{1\leq j \leq J}\}_{1\le i \le N}$.
\end{algoritmo}
\par\noindent\rule{\textwidth}{0.4pt}
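The sigma-point construction in the initialization step of Algorithm \ref{alg:Multi FirstScheme SMC1} can be sketched as follows (a minimal NumPy sketch; using the Cholesky factor as the matrix square root is one common choice):

```python
import numpy as np

def sigma_points(z_hat, C):
    """Generate L = 2*d_z + 1 sigma-points and weights for N(z_hat, C),
    mirroring the initialization step (central weight 1/(1+d_z))."""
    d_z = z_hat.shape[0]
    lam0 = 1.0 / (1.0 + d_z)                            # weight of the central point
    S = np.linalg.cholesky((d_z / (1.0 - lam0)) * C)    # one choice of matrix square root
    pts = [z_hat] \
        + [z_hat + S[:, l] for l in range(d_z)] \
        + [z_hat - S[:, l] for l in range(d_z)]
    wts = [lam0] + [(1.0 - lam0) / (2 * d_z)] * (2 * d_z)
    return np.array(pts), np.array(wts)

# Illustrative prior mean and covariance for d_z = 2
z0, C0 = np.zeros(2), np.eye(2)
pts, wts = sigma_points(z0, C0)
```

By construction, the weighted empirical mean and covariance of the sigma-points recover $\hat{\boldsymbol{z}}_0$ and $\boldsymbol{C}_0(\boldsymbol{z})$ exactly.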
\paragraph{Second layer}
Algorithm \ref{alg:Multi FirstScheme SMC2} describes the implementation of a bank of conditional \gls{smc} schemes in the second layer of the multi-scale nested smoother, one for each particle $\bar{\boldsymbol{\theta}}_t^i$, $i=1, \ldots, N$. In this second layer we approximate the posterior distribution with density $p(\boldsymbol{x}_{0:t} | \boldsymbol{y}_{1:t}, \bar{\boldsymbol{\theta}}_t^i)$. The procedure is similar to the one in Algorithm \ref{alg:Multi FirstScheme SMC1}, starting with the generation of the particles $\bar{\boldsymbol{x}}_t^{i,j}$, $j = 1, \ldots, J$, the computation of the approximate likelihood
$$
\hat{p}^L(\boldsymbol{y}_t | \bar{\boldsymbol{x}}_t^{i,j}, \boldsymbol{x}_{0:t-1}^{i,j}, \boldsymbol{y}_{1:t-1}, \bar{\boldsymbol{\theta}}_t^i) \approx
p(\boldsymbol{y}_t | \bar{\boldsymbol{x}}_t^{i,j}, \boldsymbol{x}_{0:t-1}^{i,j}, \boldsymbol{y}_{1:t-1}, \bar{\boldsymbol{\theta}}_t^i)
$$
and the non-normalized weights $\{\tilde{u}_t^{i,j}\}_{1 \le j \le J}$ in step \ref{stepal2likelihood}. By averaging the latter weights we can obtain $\tilde{v}_t^{i}$ for its use in the first layer\footnote{The average $\tilde{v}_t^{i} = \frac{1}{J}\sum_{j=1}^J\tilde{u}_t^{i,j}$ is an approximation of the integral of Eq. \eqref{eq:Multi likelihood 1st}.}. After propagating the fast state variables in the third layer (as described below), one can resample the set $\{ \boldsymbol{x}_{0:t}^{i,j}, \{ \boldsymbol{z}_{ht}^{i,j,l} \}_{0 \le l \le L-1} \}_{1 \le j \le J}$ using the normalized weights $\{ u_t^{i,j} \}_{j=1}^J$ obtained in step \ref{stepal2normweights}.
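The weight bookkeeping between layers 1 and 2 (averaging the $\tilde{u}_t^{i,j}$'s into $\tilde{v}_t^i$, normalizing, and resampling) can be sketched as follows; the array names and values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, J = 4, 5
u_tilde = rng.uniform(0.1, 1.0, size=(N, J))   # non-normalized layer-2 weights
v_tilde = u_tilde.mean(axis=1)                 # layer-1 weight: average over j
v = v_tilde / v_tilde.sum()                    # normalized weights v_t^i
# Multinomial resampling of the parameter particles (indices into the particle set)
idx = rng.choice(N, size=N, p=v)
```

The resampled indices `idx` select not only the parameter particles but also their associated sets of state variables, as in step \ref{stepal1resampling} of Algorithm \ref{alg:Multi FirstScheme SMC1}.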
\paragraph{Third layer}
Algorithm \ref{alg:Multi FirstScheme UKF} outlines the implementation of a bank of \glspl{ukf} \cite{Julier04} conditional on each parameter sample $\bar{\boldsymbol{\theta}}_t^i$ and the set of slow states $\{ \bar{\boldsymbol{x}}_t^{i,j} \} \cup \boldsymbol{x}_{0:t-1}^{i,j}$. If we follow the template of the optimal smoother, then we should seek an approximation of the density $p(\boldsymbol{z}_{{h(t-1)+1:ht}} | \boldsymbol{x}_{0:t-1}^{i,j},\boldsymbol{y}_{1:t-1}, \bar{\boldsymbol{\theta}}_t^i)$. However, performing this calculation with a \gls{ukf}-like scheme implies that the dimension of the filter should be $d_z \times h$, in order to include the whole subsequence of states $\boldsymbol{z}_{{h(t-1)+1:ht}}$. Such an approach would demand $2 d_z h + 1$ sigma-points for each conditional \gls{ukf} algorithm, and the computation of $NJ$ covariance matrices of dimension $d_zh \times d_zh$ each, which is impractical even for moderate $d_z$ and $h$. To avoid operations with such large matrices, we choose to compute Gaussian approximations of the marginal predictive densities
$
p(\boldsymbol{z}_q|\boldsymbol{x}_{0:t-1}^{i,j},\boldsymbol{y}_{1:t-1},\bar{\boldsymbol{\theta}}_t^i),
$
for $q = h(t-1)+1, \ldots, ht$, and then use these marginals to estimate the average of the fast states $\bar{\boldsymbol{z}}_t$ which is necessary in the micro-macro solver of Eq. \eqref{eqIntx}. The complete procedure is outlined in Algorithm \ref{alg:Multi FirstScheme UKF}, with further details below.
Step \ref{stepal3predictivez} of Algorithm \ref{alg:Multi FirstScheme UKF} generates new sigma-points in the space of fast states, $\tilde{\boldsymbol{z}}_q^{i,j,l}$, for $q = h(t-1) +1, \ldots, ht$ and $l = 0, \ldots, L-1$, conditional on the parameters $\bar{\boldsymbol{\theta}}_t^i$ and slow variables $\boldsymbol{x}_{t-1}^{i,j}$. At each time $q$, we compute a predictive mean and a covariance matrix as
\begin{align}
\check{\boldsymbol{z}}_q^{i,j} &= \sum_{l=0}^{L-1} \lambda_{q-1}^{i,j,l} \tilde{\boldsymbol{z}}_q^{i,j,l} \quad \text{and} \label{eqpredmeanz}\\
\check{\boldsymbol{C}}_q^{i,j}(\boldsymbol{z}) & =\sum_{l=0}^{L-1} \lambda_{q-1}^{i,j,l} (\tilde{\boldsymbol{z}}_q^{i,j,l} - \check{\boldsymbol{z}}_q^{i,j}) (\tilde{\boldsymbol{z}}_q^{i,j,l} - \check{\boldsymbol{z}}_q^{i,j} )^\top + \Delta_z \boldsymbol{Q}_z, \label{eqpredcovz}
\end{align}
where the $\lambda_{q-1}^{i,j,l}$'s are the weights\footnote{These weights are deterministic and can be computed a priori in different ways. See \cite{Menegaz15} for a survey of methods.} of the sigma-points $\breve{\boldsymbol{z}}_{q-1}^{i,j,l}$ and $\Delta_z \boldsymbol{Q}_z$ is the covariance matrix of the noise in Eq. \eqref{eqIntz}. The mean in Eq. \eqref{eqpredmeanz} and the covariance in Eq. \eqref{eqpredcovz} yield the approximation
\begin{equation}
\mathcal{N}(\boldsymbol{z}_q | \check{\boldsymbol{z}}_q^{i,j}, \check{\boldsymbol{C}}_q^{i,j}(\boldsymbol{z})) \approx p(\boldsymbol{z}_q|\boldsymbol{x}_{0:t-1}^{i,j},\boldsymbol{y}_{1:t-1},\bar{\boldsymbol{\theta}}_t^i)
\label{eqX01}
\end{equation}
and we compute a new weighted set of sigma-points $\{ \breve{\boldsymbol{z}}_q^{i,j,l}, \lambda_q^{i,j,l} \}$ to represent the Gaussian density in Eq. \eqref{eqX01}.
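A minimal sketch of one predictive step of the third layer (Euler propagation of the sigma-points followed by Eqs. \eqref{eqpredmeanz} and \eqref{eqpredcovz}), with an illustrative linear drift $f_{\boldsymbol{z}}(\boldsymbol{z}) = -\boldsymbol{z}$ and no coupling term:

```python
import numpy as np

def ukf_predict(pts_prev, wts, f_z, g_term, dt, Q):
    """Propagate sigma-points through the Euler step and compute the
    predictive mean and covariance (weighted moments plus dt*Q noise term)."""
    pts_new = np.array([p + dt * (f_z(p) + g_term) for p in pts_prev])
    mean = wts @ pts_new                            # predictive mean (z_check)
    diff = pts_new - mean
    cov = (wts[:, None] * diff).T @ diff + dt * Q   # predictive covariance (C_check)
    return pts_new, mean, cov

# Symmetric sigma-points for N(0, I) in dimension 2 (illustrative)
wts = np.array([1/3, 1/6, 1/6, 1/6, 1/6])
pts = np.vstack([np.zeros((1, 2)), np.sqrt(3) * np.eye(2), -np.sqrt(3) * np.eye(2)])
_, mean, cov = ukf_predict(pts, wts, lambda z: -z, np.zeros(2), 0.01, np.eye(2))
```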
\par\noindent\rule{\textwidth}{0.4pt}
\begin{algoritmo} {\gls{smc} approximation of $p(\boldsymbol{x}_{0:t} | \boldsymbol{y}_{1:t}, \boldsymbol{\theta})$ \label{alg:Multi FirstScheme SMC2}}
\textbf{Inputs}
\begin{itemize}
\item[-] Known parameter $\bar{\boldsymbol{\theta}}_t^i$ and known initial states, $\boldsymbol{x}_{t-1}^{i,j}$ and $\boldsymbol{z}_{h(t-1)}^{i,j,l}$, for $j= 1,\ldots,J$ and $l=0,\ldots,L-1$.
\end{itemize}
\textbf{Procedure} For $t \ge 0$:
\begin{enumerate}
\item For $j=1,\ldots,J$:
\begin{enumerate}
\item Compute
\begin{eqnarray}
\tilde{u}_t^{i,j} &=& \hat{p}^{L} (\boldsymbol{y}_t | \bar{\boldsymbol{x}}_t^{i,j}, \boldsymbol{x}_{0:t-1}^{i,j}, \boldsymbol{y}_{1:t-1}, \bar{\boldsymbol{\theta}}_t^i), \\ \tilde{v}_t^i &=& \frac{1}{J} \sum_{j=1}^{J} \tilde{u}_t^{i,j},
\end{eqnarray}
where the new particle $\bar{\boldsymbol{x}}_t^{i,j}$ is generated, and the approximate likelihood is evaluated, at layer 3. \label{stepal2likelihood}
\item Obtain new particles $\{{\boldsymbol{z}}_{ht}^{i,j,l}\}_{0 \le l \le L-1}$ at time $t$, from layer 3. \label{stepal2propagatestate}
\item Normalize the weights
\begin{equation}
u_t^{i,j} = \frac{\tilde{u}_t^{i,j}}{\sum_{j=1}^{J} \tilde{u}_t^{i,j}}. \label{eqal2normweights}
\end{equation} \label{stepal2normweights}
\end{enumerate}
\item Resample: set
\begin{equation}
\{\boldsymbol{x}_t^{i,m}, \{{\boldsymbol{z}}_{ht}^{i,m,l}\}_{0\le l \le L-1} \} = \{\bar{\boldsymbol{x}}_t^{i,j}, \{{\boldsymbol{z}}_{ht}^{i,j,l}\}_{0\le l \le L-1} \}
\end{equation}
with probability $u_t^{i,j}$ for each $m=1,\ldots,J$. \label{stepal2resampling}
\end{enumerate}
\textbf{Outputs:} $\{\boldsymbol{x}_t^{i,j}, \{{\boldsymbol{z}}_{ht}^{i,j,l}\}_{0\le l \le L-1 } \}_{1 \le j \le J} $ and $\tilde{v}_t^{i}$.
\end{algoritmo}
\par\noindent\rule{\textwidth}{0.4pt}
\par\noindent\rule{\textwidth}{0.4pt}
\begin{algoritmo} {\gls{ukf} approximation of $p(\boldsymbol{z}_{h(t-1)+1:ht} | \boldsymbol{x}_{0:t},\boldsymbol{y}_{1:t}, \boldsymbol{\theta})$ \label{alg:Multi FirstScheme UKF}}
\textbf{Inputs}
\begin{itemize}
\item[-] Integration steps $\Delta_x$, $\Delta_z$ and time scale ratio $h = \frac{\Delta_x}{\Delta_z} \in \Integers^+$.
\item[-] Known parameter vector $\bar{\boldsymbol{\theta}}_t^i$ and initial slow state $\boldsymbol{x}_{t-1}^{i,j}$. Weighted sigma-points for the fast state at time $h(t-1)$, denoted $\{ \boldsymbol{z}_{h(t-1)}^{i,j,l}, \lambda_{h(t-1)}^{i,j,l} \}_{l=0}^{L-1}$.
\end{itemize}
\textbf{Procedure} For $t > 0$:
\begin{enumerate}
\item Set $\breve{\boldsymbol{z}}_{h(t-1)}^{i,j,l}={\boldsymbol{z}}_{h(t-1)}^{i,j,l}$. For $l=0,\ldots,L-1$ and for $q=h(t-1)+1,\ldots,ht$: \label{stepal3predictivez}
\begin{enumerate}
\item Integrate with step $\Delta_z$
\begin{equation}
\tilde{\boldsymbol{z}}_{q}^{i,j,l} = \breve{\boldsymbol{z}}_{q-1}^{i,j,l} + \Delta_z ( f_{\boldsymbol{z}}(\breve{\boldsymbol{z}}_{q-1}^{i,j,l},\bar{\boldsymbol{\theta}}_t^i) + g_{\boldsymbol{z}}(\boldsymbol{x}_{t-1}^{i,j}, \bar{\boldsymbol{\theta}}_t^i) ),
\end{equation}
and compute the predictive mean, $\check{\boldsymbol{z}}_{q}^{i,j}$, and the predictive covariance matrix, $\check{\boldsymbol{C}}_q^{i,j}(\boldsymbol{z})$, using Eqs. \eqref{eqpredmeanz} and \eqref{eqpredcovz}. \label{stepal3propagatez}
\item Approximate the predictive \gls{pdf} of the fast states as a Gaussian density,
$$
p(\boldsymbol{z}_{q} | \boldsymbol{x}_{0:t-1}^{i,j}, \boldsymbol{y}_{1:t-1}, \bar{\boldsymbol{\theta}}_t^i) \approx \mathcal{N}(\boldsymbol{z}_q | \check{\boldsymbol{z}}_{q}^{i,j},\check{\boldsymbol{C}}_q^{i,j}(\boldsymbol{z}) ).
$$
Represent this Gaussian distribution by a set of weighted sigma-points denoted $\{ \breve{\boldsymbol{z}}_q^{i,j,l}, \lambda_q^{i,j,l} \}_{l=0}^{L-1}$. \label{stepal3sigmapointspred}
\end{enumerate}
\item In the space of the slow state variables: \label{stepal3statespacex}
\begin{enumerate}
\item For $l=0,\ldots,L-1$, project the sigma-points $\breve{\boldsymbol{z}}_{h(t-1)+1:ht}^{i,j,l}$ to obtain sigma-points in the space of the slow states,
\begin{equation}
\tilde{\boldsymbol{x}}_t^{i,j,l} = \boldsymbol{x}_{t-1}^{i,j} + \Delta_x(f_{\boldsymbol{x}}(\boldsymbol{x}_{t-1}^{i,j}, \bar{\boldsymbol{\theta}}_t^i) + g_{\boldsymbol{x}}(\bar{\boldsymbol{z}}_t^{i,j,l}, \bar{\boldsymbol{\theta}}_t^i) ),
\end{equation}
where $\bar{\boldsymbol{z}}_t^{i,j,l} = \frac{1}{h} \sum_{q = h(t-1)+1}^{ht} {\breve{\boldsymbol{z}}_{q}^{i,j,l}}$. Then, compute a mean vector $\check{\boldsymbol{x}}_t^{i,j}$ and a covariance matrix $\check{\boldsymbol{C}}_t^{i,j}(\boldsymbol{x})$ using Eqs. \eqref{eqpredmeanx} and \eqref{eqpredcovx}.
\label{stepal3propagatex}
\item Sample $\bar{\boldsymbol{x}}^{i,j}_t \sim \mathcal{N}(\boldsymbol{x}_t | \check{\boldsymbol{x}}_t^{i,j},\check{\boldsymbol{C}}_t^{i,j}(\boldsymbol{x}))$. \label{stepal3samplex}
\end{enumerate}
\item Once we collect a new observation $\boldsymbol{y}_t$, \label{stepal3observation}
\begin{enumerate}
\item For $l = 0,\ldots,L-1$, project the sigma-points $\breve{\boldsymbol{z}}_{ht}^{i,j,l}$ and the new sample $\bar{\boldsymbol{x}}^{i,j}_t$ into the observation space,
\begin{equation}
\tilde{\boldsymbol{y}}_t^{i,j,l} = l(\breve{\boldsymbol{z}}_{ht}^{i,j,l}, \bar{\boldsymbol{x}}^{i,j}_t, \bar{\boldsymbol{\theta}}_t^i),
\end{equation}
then compute the mean vector $\hat{\boldsymbol{y}}_t^{i,j}$ and the covariance matrix ${\boldsymbol{C}}_t^{i,j}(\boldsymbol{y})$ using Eqs. \eqref{eqmeany} and \eqref{eqcovy}.
\item Compute $\tilde{w}_t^{i,j,l} = p(\boldsymbol{y}_t | \breve{\boldsymbol{z}}_{ht}^{i,j,l}, \bar{\boldsymbol{x}}_t^{i,j}, \bar{\boldsymbol{\theta}}_t^i) p(\bar{\boldsymbol{x}}^{i,j}_t|\breve{\boldsymbol{z}}_{h(t-1)+1:ht}^{i,j,l}, {\boldsymbol{x}}_{t-1}^{i,j}, \bar{\boldsymbol{\theta}}_t^i)$ and the weights for the second layer
\begin{equation}
\tilde{u}_t^{i,j} = \sum_{l=0}^{L-1} \lambda_{ht}^{i,j,l} \tilde{w}_t^{i,j,l}.
\end{equation}
\end{enumerate}
\item Update the mean and the covariance matrix of the fast variables
\begin{align}
\bK_t (\boldsymbol{z}) &= {\boldsymbol{C}}_t^{i,j}(\boldsymbol{z},\boldsymbol{y}) \big({\boldsymbol{C}}_t^{i,j}(\boldsymbol{y})\big)^{-1}, \label{eqKalmangain}\\
\hat{\boldsymbol{z}}_{ht}^{i,j} &= \check{\boldsymbol{z}}_{ht}^{i,j} + \bK_t(\boldsymbol{z}) \big(\boldsymbol{y}_t - \hat{\boldsymbol{y}}_t^{i,j} \big) \quad \text{and} \\
\hat{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z}) &= \check{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z}) - \bK_t(\boldsymbol{z}) \boldsymbol{C}_t^{i,j}(\boldsymbol{y}) \big(\bK_t(\boldsymbol{z})\big)^\top,
\end{align}
where ${\boldsymbol{C}}_t^{i,j}(\boldsymbol{z},\boldsymbol{y})$ is the cross-covariance matrix computed in Eq. \eqref{eqcrosscov}. \label{stepal3update}
\item From the new Gaussian \gls{pdf} $\mathcal{N}(\boldsymbol{z}_{ht} | \hat{\boldsymbol{z}}_{ht}^{i,j},\hat{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z}))$, generate $L=2d_z+1$ sigma-points and weights $\{ \boldsymbol{z}_{ht}^{i,j,l}, \lambda_{ht}^{i,j,l}\}_{0 \le l \le L-1}$. \label{stepal3sigmapointsupd}
\end{enumerate}
\textbf{Outputs:} $\{{\boldsymbol{z}}_{ht}^{i,j,l}\}_{0\le l \le L-1}$, $\bar{\boldsymbol{x}}_t^{i,j}$ and $\tilde{u}_t^{i,j}$.
\end{algoritmo}
\par\noindent\rule{\textwidth}{0.4pt}
In step \ref{stepal3statespacex} of Algorithm \ref{alg:Multi FirstScheme UKF} we use the sigma-points $\breve{\boldsymbol{z}}_{h(t-1)+1:ht}^{i,j,l}$ to generate new particles for the slow states at time $t$. Specifically, we project their time averages, $\bar{\boldsymbol{z}}_t^{i,j,l}$, through the state equation of the slow state variables to obtain sigma-points in the space of the slow variables, denoted $\tilde{\boldsymbol{x}}_t^{i,j,l}$. From these sigma-points, we obtain a mean vector and a covariance matrix, respectively,
\begin{align}
\check{\boldsymbol{x}}_t^{i,j} &= \sum_{l=0}^{L-1} \lambda_{ht}^{i,j,l} \tilde{\boldsymbol{x}}_t^{i,j,l} \quad \text{and} \label{eqpredmeanx}\\
\check{\boldsymbol{C}}_t^{i,j}(\boldsymbol{x}) & =\sum_{l=0}^{L-1} \lambda_{ht}^{i,j,l} (\tilde{\boldsymbol{x}}_t^{i,j,l} - \check{\boldsymbol{x}}_t^{i,j}) (\tilde{\boldsymbol{x}}_t^{i,j,l} - \check{\boldsymbol{x}}_t^{i,j} )^\top + \Delta_x \boldsymbol{Q}_x, \label{eqpredcovx}
\end{align}
where $\Delta_x \boldsymbol{Q}_x$ is the covariance matrix of the noise in Eq. \eqref{eqIntx}. Eqs. \eqref{eqpredmeanx} and \eqref{eqpredcovx} yield a Gaussian approximation of the predictive \gls{pdf} of the slow states, namely,
$$
p(\boldsymbol{x}_t|\boldsymbol{x}_{0:t-1}^{i,j},\boldsymbol{y}_{1:t-1},\bar{\boldsymbol{\theta}}_t^i) \approx {\mathcal N}(\boldsymbol{x}_t | \check{\boldsymbol{x}}_t^{i,j},\check{\boldsymbol{C}}_t^{i,j}(\boldsymbol{x})).
$$
We generate the new particle $\bar{\boldsymbol{x}}_t^{i,j}$ from this Gaussian density.
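This projection-and-sampling step (Eqs. \eqref{eqpredmeanx} and \eqref{eqpredcovx} followed by a draw of $\bar{\boldsymbol{x}}_t^{i,j}$) can be sketched as follows, with purely illustrative sigma-points and noise covariance:

```python
import numpy as np

def sample_slow_state(x_pts, wts, Q_dt, rng):
    """Form the Gaussian approximation N(x_check, C_check) from the projected
    sigma-points and draw the new slow-state particle from it."""
    mean = wts @ x_pts                              # x_check: weighted mean
    diff = x_pts - mean
    cov = (wts[:, None] * diff).T @ diff + Q_dt     # C_check: weighted covariance + noise
    return rng.multivariate_normal(mean, cov)

# Symmetric projected sigma-points around (1, 1), illustrative noise covariance
x_pts = np.array([[1., 1.], [2., 1.], [0., 1.], [1., 2.], [1., 0.]])
wts = np.array([1/3, 1/6, 1/6, 1/6, 1/6])
x_bar = sample_slow_state(x_pts, wts, 0.01 * np.eye(2), np.random.default_rng(0))
```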
In step \ref{stepal3observation} of the algorithm we propagate the sigma-points $\breve{\boldsymbol{z}}_{ht}^{i,j,l}$ and the particle $\bar{\boldsymbol{x}}_t^{i,j}$ through the observation function $l(\cdot)$ to obtain projected sigma-points (on the observation space) $\{ \tilde{\boldsymbol{y}}_t^{i,j,l} \}_{0\le l \le L-1}$. We use these projected sigma-points to obtain a predictive mean and covariance matrix for the observation $\boldsymbol{y}_t$, namely,
\begin{align}
\hat{\boldsymbol{y}}_t^{i,j} &= \sum_{l=0}^{L-1} \lambda_{ht}^{i,j,l} \tilde{\boldsymbol{y}}_t^{i,j,l} \quad \text{and} \label{eqmeany}\\
{\boldsymbol{C}}_t^{i,j}(\boldsymbol{y}) & =\sum_{l=0}^{L-1} \lambda_{ht}^{i,j,l} (\tilde{\boldsymbol{y}}_t^{i,j,l} - \hat{\boldsymbol{y}}_t^{i,j}) (\tilde{\boldsymbol{y}}_t^{i,j,l} - \hat{\boldsymbol{y}}_t^{i,j} )^\top + \boldsymbol{R}, \label{eqcovy}
\end{align}
where $\boldsymbol{R}$ is the covariance matrix of the noise in the observation equation. At this step we also compute the non-normalized importance weight $\tilde{u}_t^{i,j}$ which is output to layer 2.
In step \ref{stepal3update}, we compute the Kalman gain using the observation covariance matrix of Eq. \eqref{eqcovy} and the cross-covariance matrix
\begin{equation}
{\boldsymbol{C}}_t^{i,j}(\boldsymbol{z},\boldsymbol{y}) =\sum_{l=0}^{L-1} \lambda_{ht}^{i,j,l} (\breve{\boldsymbol{z}}_{ht}^{i,j,l} - \check{\boldsymbol{z}}_{ht}^{i,j} ) (\tilde{\boldsymbol{y}}_t^{i,j,l} - \hat{\boldsymbol{y}}_t^{i,j} )^\top. \label{eqcrosscov}
\end{equation}
Then we update the mean, $\hat{\boldsymbol{z}}_{ht}^{i,j}$, and covariance matrix, $\hat{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z})$, of the fast state variables to obtain the approximation
\begin{equation}
p(\boldsymbol{z}_{ht} | \bar{\boldsymbol{x}}_t^{i,j}, \boldsymbol{x}_{0:t-1}^{i,j}, \boldsymbol{y}_{1:t}, \bar{\boldsymbol{\theta}}_t^i) \approx
{\mathcal N}(\boldsymbol{z}_{ht} | \hat{\boldsymbol{z}}_{ht}^{i,j}, \hat{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z}) ).
\label{eqX02}
\end{equation}
Finally, in step \ref{stepal3sigmapointsupd} we generate new weighted sigma-points to characterize the Gaussian \gls{pdf} in Eq. \eqref{eqX02}.
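The update of the fast-state moments (the gain of Eq. \eqref{eqKalmangain} with the cross-covariance of Eq. \eqref{eqcrosscov}, followed by the standard Kalman corrections with a minus sign in the covariance update) can be sketched with illustrative numbers as:

```python
import numpy as np

def ukf_update(z_check, C_z, C_zy, C_y, y, y_hat):
    """UKF update step for the fast state: gain K = C_zy C_y^{-1},
    then the standard mean and covariance corrections."""
    K = C_zy @ np.linalg.inv(C_y)               # Kalman gain
    z_hat = z_check + K @ (y - y_hat)           # updated mean
    C_hat = C_z - K @ C_y @ K.T                 # updated covariance
    return z_hat, C_hat

# Scalar observation of the first fast-state component (illustrative numbers)
z_hat, C_hat = ukf_update(np.zeros(2), np.eye(2),
                          np.array([[1.], [0.]]), np.array([[2.]]),
                          np.array([1.]), np.array([0.]))
```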
\subsection{Second scheme} \label{subsec:Multi SecondSch}
The method in Section \ref{subsec:Multi FirstSch} may still have a prohibitive computational cost, as it generates a total of $N \times J \times L$ particles in the joint space of the parameters, the slow states and the fast states (if we count sigma-points as deterministic particles). In this section we describe a computationally lighter procedure that replaces the \gls{smc} procedure in layer 2 by an \gls{enkf} \cite{Evensen03}, and the \gls{ukf} in layer 3 by a simpler \gls{ekf} \cite{Anderson79}. The complete procedure is described below.
\paragraph{First layer}
We describe the use of a \gls{smc} algorithm for the first layer of the nested algorithm in order to approximate $p(\boldsymbol{\theta} | \boldsymbol{y}_{1:t})$. This is the same as in Algorithm \ref{alg:Multi FirstScheme SMC1}, except that we need to take into account the initializations needed for layers 2 and 3. Algorithm \ref{alg:Multi SecondScheme SMC} receives as inputs the prior distributions of the parameters, $p(\boldsymbol{\theta})$, and both subsets of state variables, $p(\boldsymbol{x}_0)$ and $p(\boldsymbol{z}_0)$. They are used to generate
\begin{itemize}
\item the initial particles from $p(\boldsymbol{\theta})$ for the \gls{smc} scheme, denoted $\{\boldsymbol{\theta}_0^i\}_{i=1}^N$,
\item the samples from $p(\boldsymbol{x}_0)$ utilized to build the ensembles $\boldsymbol{X}_0^i$, $i=1, \ldots, N$, for the \glspl{enkf} in the second layer,
\item and the mean and covariance matrix of $\boldsymbol{z}$ for the \glspl{ekf} in the third layer, denoted $\hat{\boldsymbol{z}}_0^{i,j} = \mathbb{E}[\boldsymbol{z}_0]$ and $\hat{\boldsymbol{C}}_0^{i,j}(\boldsymbol{z})=\text{Cov}(\boldsymbol{z}_0)$, respectively (note that they are the same for all $i$ and $j$).
\end{itemize}
The rest of the procedure is the same as in Algorithm \ref{alg:Multi FirstScheme SMC1}.
\paragraph{Second layer}
In Algorithm \ref{alg:Multi SecondScheme EnKF} we employ an \gls{enkf} to obtain ensemble approximations of $p(\boldsymbol{x}_t|\boldsymbol{y}_{1:t-1},\bar{\boldsymbol{\theta}}_t^i)$ and $p(\boldsymbol{x}_t|\boldsymbol{y}_{1:t},\bar{\boldsymbol{\theta}}_t^i)$. The ensembles are denoted $\bar{\boldsymbol{X}}_t^i$ and $\boldsymbol{X}_t^i$, respectively, and they are used to approximate the computations that involved the joint \glspl{pdf} $p(\boldsymbol{x}_{0:t}|\boldsymbol{y}_{1:t-1},\bar{\boldsymbol{\theta}}_t^i)$ and $p(\boldsymbol{x}_{0:t}|\boldsymbol{y}_{1:t},\bar{\boldsymbol{\theta}}_t^i)$ in the optimal smoother. Note that all calculations are conditional on the $i$-th parameter particle, $\bar{\boldsymbol{\theta}}_t^i$.
The scheme is similar to Algorithm \ref{alg:Multi FirstScheme SMC2}. At step \ref{stepal5likelihood}, we retrieve the new samples $\bar{\boldsymbol{x}}_t^{i,j}$ and the approximate likelihood $\tilde{u}_t^{i,j}$ from layer 3, and compute the non-normalized importance weight $\tilde{v}_t^i$ which is output to layer 1.
At steps \ref{stepal5predx} and \ref{stepal5observation} we generate the predictive ensemble for the slow states, $\bar{\boldsymbol{X}}^i_t$, and the observations, $\boldsymbol{Y}_t^i$, respectively. These ensembles are then used, when the new observation $\boldsymbol{y}_t$ is collected, to compute an updated ensemble $\boldsymbol{X}_t^i$, which yields a non-weighted particle approximation of the distribution with \gls{pdf} $p(\boldsymbol{x}_t|\boldsymbol{y}_{1:t},\bar{\boldsymbol{\theta}}_t^i)$. The update step of the \gls{enkf} can be implemented in different ways. Here we follow the scheme in \cite{Mandel06}, which avoids the direct computation of the inverse of the $d_y \times d_y$ observation covariance matrix and is therefore better suited to high-dimensional systems.
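For illustration, a textbook stochastic (perturbed-observation) \gls{enkf} update can be sketched as below; note that this is a generic sketch, not the inversion-free scheme of \cite{Mandel06} adopted above:

```python
import numpy as np

def enkf_update(X, Y, y, R, rng):
    """Stochastic (perturbed-observation) EnKF update. X is the d_x x J
    predictive ensemble and Y the corresponding d_y x J ensemble of
    projected observations."""
    J = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)       # state anomalies
    B = Y - Y.mean(axis=1, keepdims=True)       # observation anomalies
    C_xy = A @ B.T / (J - 1)                    # sample cross-covariance
    C_yy = B @ B.T / (J - 1) + R                # sample innovation covariance
    K = C_xy @ np.linalg.inv(C_yy)              # Kalman gain
    Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=J).T
    return X + K @ (Y_pert - Y)                 # updated ensemble

# Illustrative example: observe the first of two slow-state components
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 50))                # predictive ensemble
Y = X[:1]                                       # projected observations
X_upd = enkf_update(X, Y, np.array([0.0]), 0.01 * np.eye(1), rng)
```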
\paragraph{Third layer}
In Algorithm \ref{alg:Multi SecondScheme EKF} we describe how to use an \gls{ekf} to obtain Gaussian approximations $\mathcal{N}(\boldsymbol{z}_q | \check{\boldsymbol{z}}_q, \check{\boldsymbol{C}}_q(\boldsymbol{z})) \approx p(\boldsymbol{z}_q | \boldsymbol{x}_{t-1}^{i,j}, \boldsymbol{y}_{1:t-1}, \bar{\boldsymbol{\theta}}_t^i)$, $q=h(t-1)+1, \ldots, ht$, and the updated \gls{pdf} $\mathcal{N}(\boldsymbol{z}_{ht} | \hat{\boldsymbol{z}}_{ht}^{i,j}, \hat{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z})) \approx p(\boldsymbol{z}_{ht} | \bar{\boldsymbol{x}}_t^{i,j},\boldsymbol{x}_{t-1}^{i,j}, \boldsymbol{y}_{1:t}, \bar{\boldsymbol{\theta}}_t^i)$. We also generate the new slow states at time $t$, denoted $\bar{\boldsymbol{x}}_t^{i,j}$, and the likelihood estimates $\tilde{u}_t^{i,j} \approx p(\boldsymbol{y}_t|\bar{\boldsymbol{x}}_t^{i,j},\boldsymbol{x}_{0:t-1}^{i,j},\boldsymbol{y}_{1:t-1},\bar{\boldsymbol{\theta}}_t^i)$. Note that all computations in this layer are conditional on $\boldsymbol{x}_{t-1}^{i,j}$ and $\bar{\boldsymbol{\theta}}_t^i$.
In step \ref{stepal6predictivez}, the algorithm propagates the mean, $\hat{\boldsymbol{z}}_{h(t-1)}^{i,j}$, and the covariance matrix, $\hat{\boldsymbol{C}}_{h(t-1)}^{i,j}(\boldsymbol{z})$, for $q = h(t-1)+1, \ldots, ht$, conditional on the parameters $\bar{\boldsymbol{\theta}}_t^i$ and slow variables $\boldsymbol{x}_{t-1}^{i,j}$. At each time step $q$, we obtain a predictive mean, $\check{\boldsymbol{z}}_{q}^{i,j}$, and the predictive covariance matrix, $\check{\boldsymbol{C}}_q^{i,j}(\boldsymbol{z})$. The average of the predictive means, $\bar{\boldsymbol{z}}_t^{i,j}=\frac{1}{h}\sum_{q=h(t-1)+1}^{ht} \check{\boldsymbol{z}}_q^{i,j}$, is then used to propagate the slow state $\boldsymbol{x}_{t-1}^{i,j}$ and generate the new sample $\bar{\boldsymbol{x}}_t^{i,j}$ from the Gaussian approximation
$$
{\mathcal N}(\boldsymbol{x}_t | \check{\boldsymbol{x}}_t^{i,j},\Delta_x\boldsymbol{Q}_x) \approx p(\boldsymbol{x}_t|\boldsymbol{x}_{t-1}^{i,j},\boldsymbol{y}_{1:t-1},\bar{\boldsymbol{\theta}}_t^i)
$$
at step \ref{stepal6samplex}. Note that the covariance of the $\boldsymbol{z}_q^{i,j}$'s is neglected for simplicity in this computation and we use just the covariance matrix $\Delta_x\boldsymbol{Q}_x$ of the slow state in Eq. \eqref{eqIntx}.
In step \ref{stepal6observation} we project the predictive mean $\check{\boldsymbol{z}}_{ht}^{i,j}$ and the sample $\bar{\boldsymbol{x}}_t^{i,j}$ into the observation space to obtain the predictive observation $\hat{\boldsymbol{y}}_t^{i,j}$. When the \textit{actual} observation $\boldsymbol{y}_t$ is available we also estimate the likelihood $p(\boldsymbol{y}_t|\bar{\boldsymbol{x}}_t^{i,j},\boldsymbol{x}_{0:t-1}^{i,j},\boldsymbol{y}_{1:t-1},\bar{\boldsymbol{\theta}}_t^i)$ as
$$
\tilde{u}_t^{i,j} = p(\boldsymbol{y}_t | \check{\boldsymbol{z}}_{ht}^{i,j}, \bar{\boldsymbol{x}}_t^{i,j}, \bar{\boldsymbol{\theta}}_t^i) p(\boldsymbol{x}_t | \check{\boldsymbol{z}}_{h(t-1)+1:ht}^{i,j}, {\boldsymbol{x}}_{t-1}^{i,j}, \bar{\boldsymbol{\theta}}_t^i).
$$
Note that we use the predictive means $\check{\boldsymbol{z}}_{h(t-1)+1:ht}^{i,j}$ for simplicity, instead of actually integrating \gls{wrt} the random states $\boldsymbol{z}_{h(t-1)+1:ht}$.
Finally, in step \ref{stepal6update}, we compute the Kalman gain using the predictive covariance matrix of Eq. \eqref{eqekfpredcovz} and the Jacobian matrix $\bH_t^{i,j}(\boldsymbol{z})$. Then, we update the mean, $\hat{\boldsymbol{z}}_{ht}^{i,j}$, and covariance matrix, $\hat{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z})$, of the fast variables for the next time step.
\par\noindent\rule{\textwidth}{0.4pt}
\begin{algoritmo} {\gls{smc} approximation of $p(\boldsymbol{\theta} | \boldsymbol{y}_{1:t})$ in the second method \label{alg:Multi SecondScheme SMC}}
\textbf{Inputs}
\begin{itemize}
\item[-] Prior distributions $p(\boldsymbol{\theta})$, $p(\boldsymbol{x}_0)$ and $p(\boldsymbol{z}_0)$.
\item[-] A Markov kernel $\kappa_N^{\boldsymbol{\theta}'}(d\boldsymbol{\theta})$ which, given $\boldsymbol{\theta}'$, generates jittered parameters $\boldsymbol{\theta} \in \Reals^{d_\theta}$.
\end{itemize}
\textbf{Initialization:} this is a joint initialization for all three layers.
\begin{itemize}
\item[-] Draw $N$ i.i.d. samples $\boldsymbol{\theta}_0^i$, $i = 1,\ldots,N$, from the prior distribution $p(\boldsymbol{\theta})$.
\item[-] Draw $NJ$ i.i.d. samples $\boldsymbol{x}_0^{i,j}$, $i = 1,\ldots,N$, $j = 1,\ldots,J$, from the prior distribution $p(\boldsymbol{x}_0)$, and build the ensembles $\boldsymbol{X}_{0}^i$, $i=1, \ldots, N$, as
\begin{equation}
\boldsymbol{X}_{0}^i = [ \boldsymbol{x}_0^{i,1}, \ldots, \boldsymbol{x}_{0}^{i,J}].
\end{equation}
\item[-] Set $\hat{\boldsymbol{z}}_0^{i,j} = \bar{\boldsymbol{z}}_0$ and $\hat{\boldsymbol{C}}_{0}^{i,j}(\boldsymbol{z})=\boldsymbol{C}_0$, for $i = 1,\ldots,N$, $j = 1,\ldots,J$, where $\bar{\boldsymbol{z}}_0$ and $\boldsymbol{C}_0$ are the prior mean and prior covariance of $\boldsymbol{z}_0$, respectively, obtained from the prior density $p(\boldsymbol{z}_0 )$.
\end{itemize}
\textbf{Procedure} For $t \ge 1$:
\begin{enumerate}
\item Draw $N$ i.i.d. samples $\bar{\boldsymbol{\theta}}_t^i$ from $\kappa_N^{\boldsymbol{\theta}_{t-1}^i}(d\boldsymbol{\theta})$.
\item For $i=1,\ldots,N$:
\begin{enumerate}
\item Retrieve
\begin{equation}
\tilde{v}_t^i = \hat{p}^{J} (\boldsymbol{y}_t | \boldsymbol{y}_{1:t-1}, \bar{\boldsymbol{\theta}}_t^i),
\end{equation}
where the approximate likelihood is evaluated at layer 2. \label{stepal4likelihhod}
\item Obtain new particles $\{\hat{\boldsymbol{X}}_t^{i}, \{\hat{\boldsymbol{z}}_{ht}^{i,j}, \hat{\boldsymbol{C}}_t^{i,j}(\boldsymbol{z})\}_{1 \le j \le J} \}$ at time $t$ (from layers 2 and 3). \label{stepal4propagatestate}
\item Normalize the weights
\begin{equation}
\quad v_t^i = \frac{\tilde{v}_t^i}{\sum_{i=1}^{N} \tilde{v}_t^i}. \label{eqal6normweights}
\end{equation} \label{stepal4normweights}
\end{enumerate}
\item Resample: set for each $m=1,\ldots,N$
\begin{align}
\{\boldsymbol{\theta}_t^m, &\boldsymbol{X}_t^{m}, \{\hat{\boldsymbol{z}}_{ht}^{m,j}, \hat{\boldsymbol{C}}_t^{m,j}(\boldsymbol{z})\}_{j=1}^J \}
= \{\bar{\boldsymbol{\theta}}_t^i, \hat{\boldsymbol{X}}_t^{i}, \{\hat{\boldsymbol{z}}_{ht}^{i,j}, \hat{\boldsymbol{C}}_t^{i,j}(\boldsymbol{z})\}_{j=1}^J \}
\end{align}
with probability $v_t^i$. \label{stepal4resampling}
\end{enumerate}
\textbf{Outputs:} $\{\boldsymbol{\theta}_t^i, \boldsymbol{X}_t^{i}, \{\hat{\boldsymbol{z}}_{ht}^{i,j}, \hat{\boldsymbol{C}}_t^{i,j}(\boldsymbol{z})\}_{j=1}^J \}_{i=1}^N$.
\end{algoritmo}
\par\noindent\rule{\textwidth}{0.4pt}
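The parameter-layer recursion of Algorithm \ref{alg:Multi SecondScheme SMC} amounts to jittering the particles and resampling them according to the normalized weights. The following fragment is an illustrative sketch only: the function name, the array layout and the use of multinomial resampling are our own choices, not part of the algorithm specification.

```python
import numpy as np

def jitter_and_resample(theta, weights, sigma2, rng):
    """One parameter-layer step: Gaussian jittering with kernel
    N(theta | theta', sigma2 * I), followed by multinomial
    resampling according to the normalized weights."""
    N, d_theta = theta.shape
    # Jittering: draw from the Gaussian kernel centered at each particle
    jittered = theta + np.sqrt(sigma2) * rng.standard_normal((N, d_theta))
    # Normalize the weights and resample with replacement
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    idx = rng.choice(N, size=N, p=w)
    return jittered[idx]
```

With $\tilde{\sigma}^2 = 0$ the kernel degenerates and the step reduces to plain resampling of the particle set.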
\par\noindent\rule{\textwidth}{0.4pt}
\begin{algoritmo} {\gls{enkf} approximation of {$p(\boldsymbol{x}_t | \boldsymbol{y}_{1:t}, \boldsymbol{\theta})$} \label{alg:Multi SecondScheme EnKF}}
\textbf{Inputs}
\begin{itemize}
\item[-] Parameter vector $\bar{\boldsymbol{\theta}}_t^i$; ensemble of slow states, $\boldsymbol{X}_{t-1}^i = [\boldsymbol{x}_{t-1}^{i,1},\ldots,\boldsymbol{x}_{t-1}^{i,J}]$; mean vector $\hat{\boldsymbol{z}}_{h(t-1)}^{i,j}$ and the covariance matrix $\hat{\boldsymbol{C}}^{i,j}_{t-1}(\boldsymbol{z})$, for $j= 1,\ldots,J$.
\end{itemize}
\textbf{Procedure} For $t \ge 1$:
\begin{enumerate}
\item For $j=1,\ldots,J$:
\begin{enumerate}
\item \label{stepal5likelihood} Retrieve the new sample $\bar{\boldsymbol{x}}_t^{i,j}$ and the likelihood estimate $\tilde{u}_t^{i,j}$ from layer 3 and compute the non-normalized importance weight
\begin{eqnarray}
\tilde{v}_t^i &=& \frac{1}{J} \sum_{j=1}^{J} \tilde{u}_t^{i,j},
\end{eqnarray}
\item Obtain the new mean $\hat{\boldsymbol{z}}_{ht}^{i,j}$ and covariance matrix $\hat{\boldsymbol{C}}^{i,j}_t(\boldsymbol{z})$ at time $t$, from layer 3. \label{stepal5propagatestate}
\end{enumerate}
\item \label{stepal5predx} Compute the predictive mean $\check{\boldsymbol{x}}_t^i$ and construct the predictive ensemble $\bar{\boldsymbol{X}}_t^{i}$ as
\begin{equation}
\check{\boldsymbol{x}}_t^i = \frac{1}{J} \sum_{j=1}^{J}\bar{\boldsymbol{x}}_t^{i,j} \quad \text{and} \quad \bar{\boldsymbol{X}}_t^{i} = [ \bar{\boldsymbol{x}}_t^{i,1}, \ldots, \bar{\boldsymbol{x}}_t^{i,J} ].
\end{equation}
\item \label{stepal5observation} Obtain predictive observations $\tilde{\boldsymbol{y}}_t^{i,j}$ from layer 3, then compute the mean $\hat{\boldsymbol{y}}_t^i$ and the ensemble $\boldsymbol{Y}_t^i$ as
\begin{eqnarray}
\hat{\boldsymbol{y}}_t^i &=& \frac{1}{J} \sum_{j=1}^{J} \tilde{\boldsymbol{y}}_t^{i,j} \quad \text{and} \quad {\boldsymbol{Y}}_t^{i} = [ \tilde{\boldsymbol{y}}_t^{i,1}, \ldots, \tilde{\boldsymbol{y}}_t^{i,J} ].
\end{eqnarray}
\item Update the ensemble of slow variables \label{stepal5update}
\begin{eqnarray}
{\boldsymbol{C}}_t^{i}(\boldsymbol{x},\boldsymbol{y}) &=& \frac{1}{J} \tilde{\boldsymbol{X}}_t^{i} (\tilde{\boldsymbol{Y}}_t^{i})^{\top}, \nonumber\\
\big({\boldsymbol{C}}_t^{i}(\boldsymbol{y})\big)^{-1}&=& \boldsymbol{R}^{-1} - \boldsymbol{R}^{-1} \frac{1}{J} \tilde{\boldsymbol{Y}}_t^i \bigg(\boldsymbol{I}_J + (\tilde{\boldsymbol{Y}}_t^i)^{\top} \boldsymbol{R}^{-1} \frac{1}{J} \tilde{\boldsymbol{Y}}_t^i \bigg)^{-1} (\tilde{\boldsymbol{Y}}_t^i)^{\top} \boldsymbol{R}^{-1}, \nonumber \\
\bK_t(\boldsymbol{x}) &=& {\boldsymbol{C}}_t^{i}(\boldsymbol{x},\boldsymbol{y}) \big({\boldsymbol{C}}_t^{i}(\boldsymbol{y}) \big)^{-1},\label{eqenkfKalmangain}\\
\hat{\boldsymbol{X}}_t^{i} &=& \bar{\boldsymbol{X}}_t^{i} + \bK_t(\boldsymbol{x}) {\big(\boldsymbol{y}_t \mathds{1}_{d_y \times J} + \boldsymbol{T}_t^i - {\boldsymbol{Y}}_t^{i}\big)},
\end{eqnarray}
where $\tilde{\boldsymbol{X}}_t^{i}=\bar{\boldsymbol{X}}_t^{i} - \check{\boldsymbol{x}}_t^i \mathds{1}_{d_x \times J}$ and $\tilde{\boldsymbol{Y}}_t^{i} = {\boldsymbol{Y}}_t^{i} - \hat{\boldsymbol{y}}_t^{i} \mathds{1}_{d_y \times J}$, ${\boldsymbol{C}}_t^{i}(\boldsymbol{x},\boldsymbol{y})$ is the cross covariance matrix, ${\boldsymbol{C}}_t^{i}(\boldsymbol{y})$ is the covariance matrix of the observation, $\boldsymbol{R}$ is the covariance matrix of the noise in the observation equation, $\mathds{1}_{a \times b}$ is a $a \times b$ matrix of ones and {$\boldsymbol{T}_t^i = \left[ \boldsymbol{r}_t^1, \ldots, \boldsymbol{r}_t^J \right]$ with $\boldsymbol{r}_t^j \sim \mathcal{N}(\boldsymbol{r}_t | \boldsymbol{0}, \boldsymbol{R})$ is a matrix of Gaussian perturbations.}
\end{enumerate}
\textbf{Outputs:} $\hat{\boldsymbol{X}}_t^{i}$, $\{ \hat{\boldsymbol{z}}_{ht}^{i,j}, \hat{\boldsymbol{C}}_t^{i,j}(\boldsymbol{z}) \}_{j=1}^J $ and $\tilde{v}_t^{i}$.
\end{algoritmo}
\par\noindent\rule{\textwidth}{0.4pt}
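The analysis step \ref{stepal5update} of the \gls{enkf} can be prototyped as below. This is a sketch under our own naming conventions (it is not the authors' code) and it assumes $\boldsymbol{R}$ is invertible; note how the matrix inversion lemma replaces the inversion of the $d_y \times d_y$ matrix $\boldsymbol{C}_t^i(\boldsymbol{y})$ with that of a $J \times J$ matrix.

```python
import numpy as np

def enkf_update(X_pred, Y_pred, y_obs, R, rng):
    """EnKF analysis step: cross-covariance, observation covariance
    inverse via the matrix inversion lemma, Kalman gain, and
    perturbed-observation update of the ensemble."""
    d_x, J = X_pred.shape
    d_y = Y_pred.shape[0]
    Xc = X_pred - X_pred.mean(axis=1, keepdims=True)   # centered states
    Yc = Y_pred - Y_pred.mean(axis=1, keepdims=True)   # centered predictions
    Cxy = Xc @ Yc.T / J                                # cross-covariance
    Rinv = np.linalg.inv(R)
    # Matrix inversion lemma: invert a J x J matrix instead of d_y x d_y
    M = np.eye(J) + Yc.T @ Rinv @ Yc / J
    Cy_inv = Rinv - Rinv @ (Yc / J) @ np.linalg.solve(M, Yc.T @ Rinv)
    K = Cxy @ Cy_inv                                   # Kalman gain
    # Perturbed observations: one noise realization per ensemble member
    T = rng.multivariate_normal(np.zeros(d_y), R, size=J).T
    return X_pred + K @ (y_obs[:, None] + T - Y_pred)
```

The lemma-based inverse agrees with the direct inverse of $\frac{1}{J}\tilde{\boldsymbol{Y}}\tilde{\boldsymbol{Y}}^\top + \boldsymbol{R}$, which is why it is the cheaper route when $J \ll d_y$.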
\par\noindent\rule{\textwidth}{0.4pt}
\begin{algoritmo} {\gls{ekf} approximation of $p(\boldsymbol{z}_{ht} | \boldsymbol{x}_t,\boldsymbol{y}_{1:t}, \boldsymbol{\theta})$ \label{alg:Multi SecondScheme EKF}}
\textbf{Inputs}
\begin{itemize}
\item[-] Integration steps $\Delta_x$, $\Delta_z$ and time scale ratio $h = \frac{\Delta_x}{\Delta_z} \in \Integers^+$.
\item[-] Parameter vector $\bar{\boldsymbol{\theta}}_t^i$; slow states ${\boldsymbol{x}}_{t-1}^{i,j}$ (i.e. the $j$-th column of $\boldsymbol{X}_{t-1}^i$) and mean fast state $\hat{\boldsymbol{z}}_{h(t-1)}^{i,j}$ with covariance matrix $\hat{\boldsymbol{C}}_{t-1}^{i,j}(\boldsymbol{z})$.
\end{itemize}
\textbf{Procedure} For $t \ge 1$:
\begin{enumerate}
\item \label{stepal6predictivez} Set $\check{\boldsymbol{z}}_{h(t-1)}^{i,j}=\hat{\boldsymbol{z}}_{h(t-1)}^{i,j}$ and $\check{\boldsymbol{C}}_{h(t-1)}^{i,j}(\boldsymbol{z}) = \hat{\boldsymbol{C}}_{t-1}^{i,j}(\boldsymbol{z})$. \\For $q=h(t-1)+1,\ldots,ht$, compute
\begin{eqnarray}
\check{\boldsymbol{z}}_{q}^{i,j} &=& \check{\boldsymbol{z}}_{q-1}^{i,j} + \Delta_z ( f_{\boldsymbol{z}}(\check{\boldsymbol{z}}_{q-1}^{i,j},\bar{\boldsymbol{\theta}}_t^i) + g_{\boldsymbol{z}}({\boldsymbol{x}_{t-1}^{i,j}}, \bar{\boldsymbol{\theta}}_t^i) ), \label{eqekfpredmeanz}\\
\check{\boldsymbol{C}}_q^{i,j}(\boldsymbol{z}) &=& \boldsymbol{J}_q^{i,j}(\boldsymbol{z}) \check{\boldsymbol{C}}_{q-1}^{i,j}(\boldsymbol{z}) \big(\boldsymbol{J}_q^{i,j}(\boldsymbol{z})\big)^\top + \Delta_z \boldsymbol{Q}_z, \label{eqekfpredcovz}
\end{eqnarray}
where $\boldsymbol{J}_q^{i,j}(\boldsymbol{z})$ is the Jacobian matrix of the transition function of $\boldsymbol{z}$ and $\Delta_z \boldsymbol{Q}_z$ is the covariance matrix of the noise in Eq. \eqref{eqIntz}.
\item In the space of the slow state variables: \label{stepal6samplex}
\begin{enumerate}
\item Project the predictive means $\check{\boldsymbol{z}}_{h(t-1)+1:ht}^{i,j}$ into the space of slow variables,
\begin{equation}
\check{\boldsymbol{x}}_t^{i,j} = {\boldsymbol{x}_{t-1}^{i,j}} + \Delta_x(f_{\boldsymbol{x}}({\boldsymbol{x}_{t-1}^{i,j}}, \bar{\boldsymbol{\theta}}_t^i) + g_{\boldsymbol{x}}(\bar{\boldsymbol{z}}_t^{i,j}, \bar{\boldsymbol{\theta}}_t^i) ),
\end{equation}
where $\bar{\boldsymbol{z}}_t^{i,j} = \frac{1}{h} \sum_{q = h(t-1)+1}^{ht} {\check{\boldsymbol{z}}_{q}^{i,j}}$. \label{stepal6statespacex}
\item Sample $\bar{\boldsymbol{x}}^{i,j}_t \sim \mathcal{N}(\boldsymbol{x}_t | \check{\boldsymbol{x}}_t^{i,j},\Delta_x \boldsymbol{Q}_x)$, where $\Delta_x \boldsymbol{Q}_x$ is the covariance matrix of the noise in Eq. \eqref{eqIntx}.
\end{enumerate}
\item \label{stepal6observation} When a new observation $\boldsymbol{y}_t$ is collected:
\begin{enumerate}
\item Project the predictive mean $\check{\boldsymbol{z}}_{ht}^{i,j}$ and the slow state $\bar{\boldsymbol{x}}_t^{i,j}$ into the observation space
\begin{equation}
\tilde{\boldsymbol{y}}_t^{i,j} = l(\check{\boldsymbol{z}}_{ht}^{i,j}, \bar{\boldsymbol{x}}^{i,j}_t , \bar{\boldsymbol{\theta}}_t^i). \label{eqobsekf}
\end{equation}
\item Compute $\tilde{u}_t^{i,j} = p(\boldsymbol{y}_t | \check{\boldsymbol{z}}_{ht}^{i,j}, \bar{\boldsymbol{x}}_t^{i,j}, \bar{\boldsymbol{\theta}}_t^i) p(\boldsymbol{x}_t | \check{\boldsymbol{z}}_{h(t-1)+1:ht}^{i,j}, {\boldsymbol{x}}_{t-1}^{i,j}, \bar{\boldsymbol{\theta}}_t^i)$. {This is an estimate of the likelihood $p(\boldsymbol{y}_t|\bar{\boldsymbol{x}}_t^{i,j},\boldsymbol{x}_{0:t-1}^{i,j},\boldsymbol{y}_{1:t-1},\bar{\boldsymbol{\theta}}_t^i)$.}
\end{enumerate}
\item \label{stepal6update} Update the mean and the covariance matrix of the fast variables
\begin{eqnarray}
\bK_t(\boldsymbol{z}) &=& \check{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z}) \boldsymbol{H}_t^{i,j}(\boldsymbol{z})^{\top} \bigg(\boldsymbol{H}_t^{i,j}(\boldsymbol{z}) \check{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z}) \boldsymbol{H}_t^{i,j}(\boldsymbol{z})^{\top} + \boldsymbol{R} \bigg)^{-1},\label{eqekfKalmangain}\\
\hat{\boldsymbol{z}}_{ht}^{i,j} &=& \check{\boldsymbol{z}}_{ht}^{i,j} + {\bK_t(\boldsymbol{z}) \big( \boldsymbol{y}_t - \tilde{\boldsymbol{y}}_t^{i,j}\big)} \quad \text{and} \\
\hat{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z}) &=& \bigg(\boldsymbol{I}_{d_z} - \bK_t(\boldsymbol{z}) \boldsymbol{H}_t^{i,j}(\boldsymbol{z})\bigg)\check{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z}),
\end{eqnarray}
where $\boldsymbol{H}_t^{i,j}(\boldsymbol{z})$ is the Jacobian matrix of function $l(\cdot,\bar{\boldsymbol{x}}_t^{i,j},\bar{\boldsymbol{\theta}}_t^i)$ \gls{wrt} $\check{\boldsymbol{z}}_{ht}^{i,j}$ and $\boldsymbol{R}$ is the covariance matrix of the noise in the observation equation. {We obtain the approximation $\mathcal{N}(\boldsymbol{z}_{ht} | \hat{\boldsymbol{z}}_{ht}^{i,j},\hat{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z})) \approx p(\boldsymbol{z}_{ht} | \bar{\boldsymbol{x}}_t^{i,j}, \boldsymbol{x}_{0:t-1}^{i,j},\boldsymbol{y}_{0:t},\bar{\boldsymbol{\theta}}_t^i)$.}
\end{enumerate}
\textbf{Outputs:} $\hat{\boldsymbol{z}}_{ht}^{i,j}$,$\hat{\boldsymbol{C}}_{ht}^{i,j}(\boldsymbol{z})$, $\bar{\boldsymbol{x}}_t^{i,j}$ and $\tilde{u}_t^{i,j}$.
\end{algoritmo}
\par\noindent\rule{\textwidth}{0.4pt}
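The correction in step \ref{stepal6update} is the standard EKF update given the linearization $\boldsymbol{H}_t^{i,j}(\boldsymbol{z})$. A generic sketch (with our own function signature) is:

```python
import numpy as np

def ekf_update(z_pred, C_pred, y_obs, y_pred, H, R):
    """Standard EKF correction: innovation covariance, Kalman gain,
    then mean and covariance update."""
    S = H @ C_pred @ H.T + R                        # innovation covariance
    K = C_pred @ H.T @ np.linalg.inv(S)             # Kalman gain
    z_upd = z_pred + K @ (y_obs - y_pred)           # updated mean
    C_upd = (np.eye(len(z_pred)) - K @ H) @ C_pred  # updated covariance
    return z_upd, C_upd
```

Here `H` plays the role of the Jacobian of $l(\cdot,\bar{\boldsymbol{x}}_t^{i,j},\bar{\boldsymbol{\theta}}_t^i)$ evaluated at the predictive mean, and `y_pred` is the projected observation $\tilde{\boldsymbol{y}}_t^{i,j}$ of Eq. \eqref{eqobsekf}.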
\section{Example} \label{sec:Multi Example}
\subsection{Stochastic two-scale Lorenz 96 model} \label{subsec:Multi StochasticL96model}
In order to illustrate the application of the methods described in Section \ref{sec:Multi MultiscaleNestedFilter}, we consider a stochastic version of the two-scale Lorenz 96 model \cite{Arnold13}, which depends on a set of fixed parameters, a set of fast variables and a set of slow variables. The slow variables are represented by a $d_x$-dimensional vector, $\boldsymbol{x}$, while the fast variables, $\boldsymbol{z}$, are $d_z$-dimensional. {Let us assume there are $R$ fast variables per slow variable, therefore $d_z = R d_x$.} The system is described, in continuous-time $\tau$, by the \glspl{sde}
\begin{eqnarray}
{d x_{j}} &=& \bigg[ -x_{j-1}(x_{j-2}-x_{j+1}) - x_j+F - \frac{H C}{B} \sum_{l=jR}^{(j+1)R-1} z_l \bigg] d \tau+ \sigma_x d v_j , \label{eqlorenz96sdex}\\
{d z_{l}} &=& \bigg[ -CBz_{l+1}(z_{l+2}-z_{l-1}) - Cz_l + \frac{CF}{B} +\frac{HC}{B}x_{{\lfloor l/R \rfloor}} \bigg] d \tau + \sigma_z d w_l, \label{eqlorenz96sdez}
\end{eqnarray}
where $j = 0,\ldots,d_x-1$, $l = 0,\ldots, d_z-1$; $\boldsymbol{v}=(v_0,\ldots,v_{d_x-1})^\top$ and $\boldsymbol{w}=(w_0,\ldots,w_{d_z-1})^\top$ are, respectively, $d_x$- and $d_z$-dimensional vectors of independent standard Wiener processes; $\sigma_x > 0$ and $\sigma_z > 0$ are known scale parameters and $\boldsymbol{\theta}=(F,H,C,B)^\top \in \Reals^4$ are static model parameters. Using a micro-macro solver \cite{Vanden03,Weinan05} that runs an Euler-Maruyama scheme at each time-scale to integrate Eqs. \eqref{eqlorenz96sdex}--\eqref{eqlorenz96sdez}, the discrete-time state equation can be written as
\begin{eqnarray}
x_{t+1,j} &=& x_{t,j} + \Delta_x(f_{\boldsymbol{x},j}(\boldsymbol{x}_t, \boldsymbol{\theta}) + g_{\boldsymbol{x},j}(\bar{\boldsymbol{z}}_{t+1},\boldsymbol{\theta}) ) + \sqrt{\Delta_x} \sigma_x v_{t+1,j}, \label{eqLorenz96X}\\
z_{n+1,l} &=& z_{n,l} + \Delta_z( f_{\boldsymbol{z},l}(\boldsymbol{x}_{\lfloor\frac{n}{h}\rfloor}, \boldsymbol{\theta}) + g_{\boldsymbol{z},l}(\boldsymbol{z}_n, \boldsymbol{\theta})) + \sqrt{\Delta_z} \sigma_z w_{n+1,l}, \label{eqLorenz96Z}
\end{eqnarray}
where
$$
\boldsymbol{x}_t=(x_{t,0}, \ldots, x_{t,d_x-1})^\top \quad \text{and} \quad
{\boldsymbol{z}_n=(z_{n,0}, \ldots, z_{n,d_z-1})^\top}
$$
are the discrete-time slow and fast variables, respectively; $\bar{\boldsymbol{z}}_t$ is the time-average
$$
\bar{\boldsymbol{z}}_t = {\frac{1}{h} \sum_{n=h(t-1)+1}^{ht} \boldsymbol{z}_n}
$$
and we denote {$\bar{\boldsymbol{z}}_t=(\bar{z}_{t,0}, \ldots, \bar{z}_{t,d_z-1})^\top$; }the terms $v_{t,j}$ and $w_{n,l}$ are independent Gaussian variables with identical ${\mathcal N}(\cdot|0,1)$ \gls{pdf} for all $t$, $j$, $n$ and $l$, and the functions
\begin{eqnarray}
f_{\boldsymbol{x},j} &:& \Reals^{d_x} \times \Reals^{d_{\theta}} \to \Reals^{d_x}, \nonumber\\
g_{\boldsymbol{x},j} &:& \Reals^{d_z} \times \Reals^{d_{\theta}} \to \Reals^{d_x}, \nonumber\\
f_{\boldsymbol{z},l} &:& \Reals^{d_x} \times \Reals^{d_{\theta}} \to \Reals^{d_z} \quad \text{and}\nonumber\\
g_{\boldsymbol{z},l} &:& \Reals^{d_z} \times \Reals^{d_{\theta}} \to \Reals^{d_z} \nonumber
\end{eqnarray}
can be expressed as
\begin{align}
f_{\boldsymbol{x},j}(\boldsymbol{x}_t,\boldsymbol{\theta}) &= -x_{t,j-1}(x_{t,j-2}-x_{t,j+1}) - x_{t,j} + F, \nonumber \\
g_{\boldsymbol{x},j}(\bar{\boldsymbol{z}}_t, \boldsymbol{\theta}) &= - \frac{H C}{B} \sum_{l=jR}^{(j+1)R-1} \bar{z}_{t,l}, \nonumber \\
f_{\boldsymbol{z},l}(\boldsymbol{x}_t, \boldsymbol{\theta}) &= \frac{HC}{B}x_{t,{\lfloor l/R \rfloor}} \quad \text{and} \nonumber \\
g_{\boldsymbol{z},l}(\boldsymbol{z}_n,\boldsymbol{\theta}) &= -CBz_{n,l+1}(z_{n,l+2}-z_{n,l-1}) - Cz_{n,l} + \frac{CF}{B}. \nonumber
\end{align}
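As an illustration, the four drift terms above can be computed with periodic (circular) indexing. The sketch below uses our zero-based reading of the coupling indices, in which each slow variable $x_j$ is coupled to the fast block $z_{jR},\ldots,z_{(j+1)R-1}$; the function name and interface are ours.

```python
import numpy as np

def lorenz96_drifts(x, z, theta, R):
    """Two-scale Lorenz 96 drift terms f_x, g_x, f_z, g_z with
    periodic boundary conditions implemented via np.roll."""
    F, H, C, B = theta
    # Slow drift: -x_{j-1}(x_{j-2} - x_{j+1}) - x_j + F
    f_x = -np.roll(x, 1) * (np.roll(x, 2) - np.roll(x, -1)) - x + F
    # Fast-to-slow coupling: -(HC/B) * sum over the R fast variables per slow one
    g_x = -(H * C / B) * z.reshape(len(x), R).sum(axis=1)
    # Slow-to-fast coupling: each slow variable drives its R fast variables
    f_z = (H * C / B) * np.repeat(x, R)
    # Fast drift: -CB z_{l+1}(z_{l+2} - z_{l-1}) - C z_l + CF/B
    g_z = -C * B * np.roll(z, -1) * (np.roll(z, -2) - np.roll(z, 1)) - C * z + C * F / B
    return f_x, g_x, f_z, g_z
```

For constant states the nonlinear advection terms vanish, which gives a quick sanity check of the indexing.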
We assume that the observations are linear and Gaussian, namely,
\begin{equation}
\boldsymbol{y}_t = \boldsymbol{A}_t
\begin{bmatrix}
\boldsymbol{x}_t \\
\boldsymbol{z}_{ht}
\end{bmatrix}
+ \boldsymbol{r}_t, \label{eqObs96}
\end{equation}
where $\boldsymbol{A}_t$ is a known $d_y \times (d_x+d_z)$ matrix and $\boldsymbol{r}_t$ is a $d_y$-dimensional Gaussian random vector with known covariance matrix
\begin{equation}
\boldsymbol{R} =
\begin{bmatrix}
\sigma_{y,x}^2 \boldsymbol{I}_{d_{x}} & \boldsymbol{0} \\
\boldsymbol{0} & \sigma_{y,z}^2 \boldsymbol{I}_{d_{z}}
\end{bmatrix}, \label{eqR}
\end{equation}
and $\sigma_{y,x}^2, \sigma_{y,z}^2 >0$ are known variances.
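One macro step of the micro-macro Euler-Maruyama scheme of Eqs. \eqref{eqLorenz96X}--\eqref{eqLorenz96Z} can be sketched as follows. The interface is our own choice (the drift terms are passed as callables `fx`, `gx`, `fz`, `gz`), intended only to make the time-scale structure explicit: $h$ micro steps of the fast equation with the slow state frozen, then one macro step of the slow equation driven by the time-averaged fast state.

```python
import numpy as np

def micro_macro_step(x, z, theta, h, Dx, Dz, sx, sz, fx, gx, fz, gz, rng):
    """One macro step: h Euler-Maruyama micro steps for z (x frozen),
    then one macro step for x using the time-average of the fast path."""
    z_path = np.empty((h, len(z)))
    for n in range(h):
        z = z + Dz * (fz(x, theta) + gz(z, theta)) \
              + np.sqrt(Dz) * sz * rng.standard_normal(len(z))
        z_path[n] = z
    z_bar = z_path.mean(axis=0)   # time-average over the macro interval
    x = x + Dx * (fx(x, theta) + gx(z_bar, theta)) \
          + np.sqrt(Dx) * sx * rng.standard_normal(len(x))
    return x, z
```

Setting $\sigma_x = \sigma_z = 0$ recovers the deterministic integrator used above to generate the initial states.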
\subsection{Numerical results} \label{subsec:Multi Numericalresults}
We have run simulations for the two-scale Lorenz 96 model of Section \ref{subsec:Multi StochasticL96model}, with dimensions $d_x = 10$ and $d_z=50$. The time steps for the Euler-Maruyama integrators are $\Delta_x = 10^{-3}$ and $\Delta_z = 10^{-4}$ continuous-time units. We set the fixed parameters as $F=8$, $H=0.75$, $C=10$ and $B=15$. In order to obtain the initial states $\boldsymbol{x}_0$ and $\boldsymbol{z}_0$, we simulate a deterministic version of Eqs. \eqref{eqLorenz96X}--\eqref{eqLorenz96Z} ($\sigma_x = \sigma_z = 0$) for $20$ continuous-time units. We set the initial states as the values of variables $\boldsymbol{x}$ and $\boldsymbol{z}$ at the last time step of this simulation. This initialization is used in all simulations of this computer experiment in order to generate both ``ground truth" sequences of $\boldsymbol{x}_t$ and $\boldsymbol{z}_n$ and the associated sequences of observations $\boldsymbol{y}_t$. {We set the matrix $\boldsymbol{A}_t=\bI_{d_y}$, for $d_y = d_x + d_z$. }
In the experiments, we compare the performance of the two proposed methods (the first one of Section \ref{subsec:Multi FirstSch} and the second one of Section \ref{subsec:Multi SecondSch}).
We experiment with different numbers of samples, $N$ and $J$, in the first and second layers of these methods.
Additionally, for the first method (\gls{smc}-\gls{smc}-\gls{ukf}) we run the multi-scale hybrid filter with $L = 2 d_z + 1 = 101$ sigma-points for the \gls{ukf} in the third layer. We need to estimate $\boldsymbol{\theta} = [F, H, C, B]^\top$ (hence, $d_\theta = 4$). The prior for the unknown parameters is uniform, namely $p(\boldsymbol{\theta}) = \mathcal{U}([2,20]^4)$, while the priors used in the filtering algorithm for both unknown state variables are Gaussian, namely $p(\boldsymbol{x}_0) = \mathcal{N}(\boldsymbol{x}_0, 0.1 \boldsymbol{I}_{d_x})$ and $p(\boldsymbol{z}_0) = \mathcal{N}(\boldsymbol{z}_0, 10 \boldsymbol{I}_{d_z})$. The noise scaling factors, $\sigma_x = \frac{1}{2}$, $\sigma_z = \frac{1}{16}$, $\sigma_{y,x} = 10^{-1}$ and $\sigma_{y,z} = 10^{-3}$, are known. The jittering kernel is $\kappa_N^{\boldsymbol{\theta}'}(d\boldsymbol{\theta}) = \mathcal{N}(\boldsymbol{\theta} | \boldsymbol{\theta}', \tilde{\sigma}^2 \boldsymbol{I}_{d_{\theta}})$, where $\tilde{\sigma}^2 = \frac{0.05}{\sqrt{N^3}}$ {is selected following \cite{Crisan18bernoulli}}.
We assess the accuracy of the algorithms in terms of the \gls{nmse} of the estimators of the parameters, the slow state variables and the fast state variables. In the plots, we show the \glspl{nmse} computed at time $t$,
\begin{align}
\text{NMSE}_{\boldsymbol{\theta},t} &= \frac{\| \boldsymbol{\theta}_t - \hat{\boldsymbol{\theta}}_t \|^2}{\| \boldsymbol{\theta}_t \|^2}, \\
\text{NMSE}_{\boldsymbol{x},t} &= \frac{\| \boldsymbol{x}_t - \hat{\boldsymbol{x}}_t \|^2}{\| \boldsymbol{x}_t \|^2}, \\
\text{NMSE}_{\boldsymbol{z},t} &= \frac{\| \boldsymbol{z}_{ht} - \hat{\boldsymbol{z}}_{ht} \|^2}{ \| \boldsymbol{z}_{ht} \|^2},
\end{align}
averaged over 50 independent simulation runs of 20 continuous-time units each, where the estimators take the form
\begin{eqnarray}
\hat{\boldsymbol{\theta}}_t &=& \sum_{i=1}^{N} v_t^i {\boldsymbol{\theta}}^i, \\
\hat{\boldsymbol{x}}_t &=& \sum_{i=1}^{N} \sum_{j=1}^{J} v_t^i u_t^{i,j} {\boldsymbol{x}}^{i,j}_t \quad \text{and} \\
\hat{\boldsymbol{z}}_{ht} &=& {\sum_{i=1}^{N} \sum_{j=1}^{J} \sum_{l=0}^{L} v_t^i u_t^{i,j} \lambda_{ht}^{i,j,l} {\boldsymbol{z}}^{i,j,l}_{ht}, }
\end{eqnarray}
for the first method. For the second method the estimators of the state variables are
\begin{eqnarray}
\hat{\boldsymbol{x}}_t &=& \frac{1}{J} \sum_{i=1}^{N} \sum_{j=1}^{J} v_t^i {\boldsymbol{x}}^{i,j}_t \quad \text{and} \\
\hat{\boldsymbol{z}}_{ht} &=& \frac{1}{J} \sum_{i=1}^{N} \sum_{j=1}^{J} v_t^i {\boldsymbol{z}}^{i,j}_{ht},
\end{eqnarray}
where $\boldsymbol{x}_t^{i,j}$ is the $j$-th member of the ensemble $\boldsymbol{X}_t^i$ in Algorithm \ref{alg:Multi SecondScheme EnKF}.
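The error metric and, for example, the second-method slow-state estimator can be computed in vectorized form as follows. The array layout (parameters along the first axis, ensemble members along the last) is our own convention for this sketch.

```python
import numpy as np

def nmse(truth, estimate):
    """Per-time-step normalized MSE, as defined above."""
    truth = np.asarray(truth, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return np.sum((truth - estimate) ** 2) / np.sum(truth ** 2)

def weighted_state_estimate(v, X):
    """Second-method slow-state estimator:
    (1/J) sum_i sum_j v_t^i x_t^{i,j}.
    v has shape (N,) and sums to 1; X has shape (N, d_x, J)."""
    return np.einsum('i,idj->d', v, X) / X.shape[2]
```

Since the weights sum to one, the estimator returns the common value whenever all ensemble members coincide, which is a convenient unit test.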
Figure \ref{fig:Multi changeJ} shows the performance of the proposed methods for different values of $J$ (number of samples in the second layer) and $N=20$. This is evaluated in terms of averaged \gls{nmse}$_{\boldsymbol{\theta}}$, \gls{nmse}$_{\boldsymbol{x}}$ and \gls{nmse}$_{\boldsymbol{z}}$ together with the running time in hours. The first method (\gls{smc}-\gls{smc}-\gls{ukf}) shows an improvement in the accuracy as the number of samples $J$ increases, although this improvement is only significant for the slow state (Fig. \ref{figNMSExvsJ}). The second method (\gls{smc}-\gls{enkf}-\gls{ekf}) remains stable with $J$. The second method outperforms the first one in accuracy of the parameter estimation (Fig. \ref{figNMSEthetavsJ}) as well as the slow state estimation (Fig. \ref{figNMSExvsJ}). However, the first method obtains a better \gls{nmse}$_{\boldsymbol{z}}$. Additionally, the second method runs faster since the computational cost is considerably lower.
\begin{figure}[htb]
\centering
\begin{subfigure}{0.49\linewidth}
\centering
\input{NMSEthetavsJ.tex}
\caption{\gls{nmse}$_{\boldsymbol{\theta}}$ for different number of particles/ensembles ($J$) in the second layer of the filter.}
\label{figNMSEthetavsJ}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\centering
\input{NMSExvsJ.tex}
\caption{\gls{nmse}$_{\boldsymbol{x}}$ for different number of particles/ensembles ($J$) in the second layer of the filter.}
\label{figNMSExvsJ}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\centering
\input{NMSEzvsJ.tex}
\caption{\gls{nmse}$_{\boldsymbol{z}}$ for different number of particles/ensembles ($J$) in the second layer of the filter.}
\label{figNMSEzvsJ}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\centering
\input{timingvsJ.tex}
\caption{Running time for different number of particles/ensembles ($J$) in the second layer of the filter.}
\label{figtimingvsJ}
\end{subfigure}
\caption{Averaged \gls{nmse}$_{\boldsymbol{\theta}}$ (\ref{figNMSEthetavsJ}), \gls{nmse}$_{\boldsymbol{x}}$ (\ref{figNMSExvsJ}), \gls{nmse}$_{\boldsymbol{z}}$ (\ref{figNMSEzvsJ}) and average running time (\ref{figtimingvsJ}) of \gls{smc}-\gls{smc}-\gls{ukf} (in blue) and \gls{smc}-\gls{enkf}-\gls{ekf} (in red), averaged over 50 simulation runs. The number of particles of the first layer (\gls{smc}) is set to $N=20$.}
\label{fig:Multi changeJ}
\end{figure}
Figure \ref{fig:Multi changeN} compares the performance of the proposed methods and the \gls{enkf} for different values of $N$ (number of samples in the first layer) and $J=50$. This is shown with the averaged \gls{nmse}$_{\boldsymbol{\theta}}$, \gls{nmse}$_{\boldsymbol{x}}$ and \gls{nmse}$_{\boldsymbol{z}}$ together with the running time in hours.
Similarly to the previous figure, the first method (\gls{smc}-\gls{smc}-\gls{ukf}) shows a slight improvement in the accuracy of the slow state estimation as the number of samples $N$ increases (Fig. \ref{figNMSExvsN}). The second method (\gls{smc}-\gls{enkf}-\gls{ekf}) remains stable with $N$. The second method outperforms the first one in accuracy of the parameter estimation (Fig. \ref{figNMSEthetavsN}) and the slow state estimation (Fig. \ref{figNMSExvsN}), but not for the fast state estimation (Fig. \ref{figNMSEzvsN}). Again, the second method runs faster since its computational cost is considerably lower.
\begin{figure}[ht!]
\centering
\begin{subfigure}{0.49\linewidth}
\centering
\input{NMSEthetavsN.tex}
\caption{\gls{nmse}$_{\boldsymbol{\theta}}$ for different number of particles/ensembles ($N$) in the first layer of the filter.}
\label{figNMSEthetavsN}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\centering
\input{NMSExvsN.tex}
\caption{\gls{nmse}$_{\boldsymbol{x}}$ for different number of particles/ensembles ($N$) in the first layer of the filter.}
\label{figNMSExvsN}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\centering
\input{NMSEzvsN.tex}
\caption{\gls{nmse}$_{\boldsymbol{z}}$ for different number of particles/ensembles ($N$) in the first layer of the filter.}
\label{figNMSEzvsN}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\centering
\input{timingvsN.tex}
\caption{Running time for different number of particles/ensembles ($N$) in the first layer of the filter.}
\label{figtimingvsN}
\end{subfigure}
\caption{Averaged \gls{nmse}$_{\boldsymbol{\theta}}$ (\ref{figNMSEthetavsN}), \gls{nmse}$_{\boldsymbol{x}}$ (\ref{figNMSExvsN}), \gls{nmse}$_{\boldsymbol{z}}$ (\ref{figNMSEzvsN}) and average running time (\ref{figtimingvsN}) of \gls{smc}-\gls{smc}-\gls{ukf} (in blue) and \gls{smc}-\gls{enkf}-\gls{ekf} (in red), averaged over 50 simulation runs. The number of particles/ensembles of the second layer (\gls{smc} for the first method and \gls{enkf} for the second method) is set to $J=50$.}
\label{fig:Multi changeN}
\end{figure}
Finally, we show results for a computer experiment in which we have used the \gls{smc}-\gls{enkf}-\gls{ekf} method to estimate the parameters $F$, $C$, $B$ and $H$ and track the state variables of the two-scale Lorenz 96 system with dimension $d_x = 10$ and $d_z = 50$. The number of particles used to approximate the sequence of parameter posterior distributions is $N=50$ and the number of samples in the ensembles of the second layer is $J=50$.
Figure \ref{fig:Multi x1z1} shows the true state trajectories, together with their estimates, for the first slow state variable ($x_1$) and the first fast state variable ($z_1$) of the two-scale Lorenz 96 model. The accuracy of the estimates is similar throughout the whole simulation run (20 continuous-time units); for clarity, we only display the last $2$ continuous-time units.
\begin{figure}[ht!]
\centering
\begin{subfigure}{0.49\linewidth}
\centering
\input{x1.tex}
\caption{Time sequence of $x_1$}
\label{subfig:Multi x1}
\end{subfigure}
\begin{subfigure}[htb]{0.49\linewidth}
\centering
\input{z1.tex}
\caption{Time sequence of $z_1$}
\label{subfig:Multi z1}
\end{subfigure}
\caption{Sequences of state values (black line) and estimates (dashed red line) in $x_1$ (plot \ref{subfig:Multi x1}) and $z_1$ (plot \ref{subfig:Multi z1}) over time.}
\label{fig:Multi x1z1}
\end{figure}
In Fig. \ref{fig:Multi pdfs} we show the estimated posterior \glspl{pdf} of the fixed parameters, together with their ground truth values: Fig. \ref{subfig:Multi pdfF} displays the approximate posterior \gls{pdf} of $F$ (red dashed line) with the true value $F=8$, Fig. \ref{subfig:Multi pdfC} that of $C$ (blue dashed line) with $C=10$, Fig. \ref{subfig:Multi pdfB} that of $B$ (green dashed line) with $B=15$, and Fig. \ref{subfig:Multi pdfH} that of $H$ (magenta dashed line) with $H=0.75$; the true values are indicated by vertical black lines.
For all the \glspl{pdf}, nearly all the probability mass is allocated close to the true values, except for the parameter $B$ (Fig. \ref{subfig:Multi pdfB}), whose \gls{pdf} is slightly shifted \gls{wrt} the true value.
\begin{figure}[htb!]
\vspace{1cm}
\centering
\begin{subfigure}{0.49\linewidth}
\centering
\input{pdfF.tex}
\caption{Posterior \gls{pdf} of $F$}
\label{subfig:Multi pdfF}
\end{subfigure}
\hfill
\begin{subfigure}[htb]{0.49\linewidth}
\centering
\input{pdfC.tex}
\caption{Posterior \gls{pdf} of $C$}
\label{subfig:Multi pdfC}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\centering
\input{pdfB.tex}
\caption{Posterior \gls{pdf} of $B$}
\label{subfig:Multi pdfB}
\end{subfigure}
\hfill
\begin{subfigure}[htb]{0.49\linewidth}
\centering
\input{pdfH.tex}
\caption{Posterior \gls{pdf} of $H$}
\label{subfig:Multi pdfH}
\end{subfigure}
\caption{Posterior density of the parameters (dashed lines) at time $\tau = 20$. The true values are indicated by a black vertical line.}
\label{fig:Multi pdfs}
\end{figure}
\section{Conclusions} \label{sec:Multi Conclusions}
We have introduced a further generalization of the \gls{nhf} methodology of \cite{Perez-Vieites18} that, using long sequences of observations collected over time, estimates the static parameters and the stochastic dynamical variables of a class of heterogeneous multi-scale state-space models \cite{Abdulle2012}.
This scheme combines three layers of filters, one inside the other. It approximates recursively the posterior probability distributions of the parameters and the two sets of state variables given the sequence of available observations.
In the first layer of computation we approximate the posterior probability distribution of the parameters, in the second layer that of the slow state variables, and in the third layer that of the fast state variables. The inference techniques used in each layer can vary, leading to different computational costs and degrees of accuracy.
To be specific, we have described two possible algorithms derived from this scheme, combining Monte Carlo methods and Gaussian filters at different layers. The first method uses \gls{smc} methods in both the first and second layers, together with a bank of unscented Kalman filters (UKFs) in the third layer (i.e., the \gls{smc}-\gls{smc}-\gls{ukf} scheme). The second method employs \gls{smc} in the first layer, ensemble Kalman filters (EnKFs) in the second layer and a bank of extended Kalman filters (EKFs) in the third layer (i.e., the \gls{smc}-\gls{enkf}-\gls{ekf} scheme).
We have presented numerical results for a two-scale stochastic Lorenz 96 model with synthetic data and we have evaluated the performance of the algorithm in terms of the normalized mean square errors (NMSEs) for the parameters and the dynamic (slow and fast) state variables.
Both proposed implementations obtain good results in terms of accuracy, with a considerable reduction in running time (i.e., computational cost) for the second method. Further research is still needed to study the stability of the multi-layer structure when observations are scarce and/or infrequent.
Moreover, the proposed algorithms can be compared with other methods, both on the Lorenz 96 system and on other models.
\section*{Acknowledgements}
This research was partially supported by the Office of Naval Research (award no. N00014-19-1-2226), \textit{Agencia Estatal de Investigaci\'on} of Spain (ref. RTI2018-099655-B-I00 CLARA) and \textit{Comunidad Aut\'onoma de Madrid} (ref. Y2018/TCS-4705 PRACTICO).
\bibliographystyle{plain}
\section{Introduction}
The Holographic Principle \cite{tHooft:1993dmi,Susskind:1994vu} is, at present, our most promising tool for understanding quantum gravity. Its best-studied realization is the Anti-de~Sitter/Conformal Field Theory (AdS/CFT) correspondence \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj,Aharony:1999ti}.
The idea of holography, {\it viz.} understanding a higher-dimensional gravitational theory in terms of a theory without gravity that lives on its boundary, was based on the fact that the entropy of a black hole is proportional to its area and not its volume. This area law is not tied to AdS spacetimes. Therefore, the overwhelming expectation is that holography is true in general and applies also to spacetimes that are not asymptotically AdS. In particular, an outstanding question is how to formulate a holographic duality for asymptotically flat spacetimes, see \cite{Polchinski:1999ry,Susskind:1998vk,Giddings:1999jq} for some early attempts.
\medskip
\noindent Given the spectacular success of AdS/CFT, a particularly compelling way to attempt building holography in flat space is to take the AdS radius to infinity and the corresponding limit on the CFT side. It has been shown that the infinite radius limit in the bulk manifests itself as a limit where the speed of light in the dual field theory goes to zero \cite{Bagchi:2012cy}. This ultra-relativistic (UR) limit leads to a class of field theories called Carrollian field theories, where the Carroll group replaces the relativistic Poincar{\'e} group. The conformal versions of these are Carrollian CFTs. The putative dual theories of flat space are therefore Carrollian CFTs in one lower dimension \cite{Bagchi:2010eg,Bagchi:2012cy, Bagchi:2016bcd,Bagchi:2019xfx,Bagchi:2019clu,Banerjee:2020qjj}. These field theories live naturally on the null boundary of asymptotically flat spacetime as well as on black hole horizons, see e.g.~\cite{Donnay:2019jiz}.
\medskip
\noindent
Asymptotic symmetries are generated by boundary condition preserving transformations that do not fall off sufficiently fast near the boundary, see e.g.~\cite{Compere:2018aar,Harlow:2019yfa} and refs.~therein. In many applications, the associated asymptotic symmetry group (ASG) provides an infinite enhancement of the bulk isometries. The most famous examples are asymptotically flat spacetimes in 4d, studied originally by Bondi, van~der~Burg, Metzner and Sachs (BMS) \cite{Bondi:1962px,Sachs:1962zza}, who found an enhancement of Poincar\'e symmetries by an infinite set of angle-dependent translations, and asymptotically AdS spacetimes in 3d, where the analysis of Brown and Henneaux \cite{Brown:1986nw} uncovered two copies of the Virasoro algebra generating the asymptotic symmetries. The latter example is seen by many as a precursor to the AdS/CFT correspondence. Therefore, the BMS analysis could be a precursor to a flat space holographic correspondence. This vision is behind many current approaches to flat space holography, including ours.
\medskip
\noindent
One version of the BMS group is the semi-direct product of the group of all (local) conformal transformations of the sphere at infinity (also known as super-rotations) with super-translations, the angle-dependent translations at null infinity.\footnote{
The original BMS analysis did not allow for super-rotations, which were introduced in \cite{Barnich:2009se}. Super-rotations can be further generalized to arbitrary diffeomorphisms of the sphere, see \cite{Campiglia:2014yka}. The physical relevance of superboosts was uncovered in \cite{Compere:2018ylh}. Moreover, super-translations can be generalized to have arbitrary spin, see the discussion in \cite{Grumiller:2019fmp,Campoleoni:2020ejn}.
} For bulk dimension $D>4$, there are choices of boundary conditions for which these BMS groups are again infinite-dimensional \cite{Kapec:2015vwa,Fuentealba:2021yvo} (as opposed to those which keep just the Poincar{\'e} group \cite{Hollands:2003ie,Tanabe:2009va,Tanabe:2011es}). These choices seem to be the ones that are most interesting for physical purposes, keeping in mind results like Weinberg's soft graviton theorem \cite{Weinberg:1965nx} that holds in all dimensions and recent findings that link these theorems to asymptotic symmetries \cite{He:2014laa}.
\medskip
\noindent
A recipe for
holography in arbitrary spacetimes is to compute the ASG for the bulk gravitational theory and posit that the ASG generates the global symmetries of the dual boundary field theory. Following this line of argument, the BMS group should govern the boundary dynamics of field theories putatively dual to asymptotically flat spacetimes \cite{Bagchi:2010eg, Bagchi:2012cy}. This program was implemented successfully in 3d, including a concrete proposal for a field theory dual \cite{Bagchi:2012yk}, Cardy-type state counting from BMS-symmetries \cite{Bagchi:2012xr,Barnich:2012xq}, the holographic calculation of correlation functions \cite{Bagchi:2015wna}, entanglement entropy \cite{Bagchi:2014iea,Jiang:2017ecm,Hijano:2017eii}, and more; a selected list of papers is \cite{Barnich:2012rz,Bagchi:2013qva,Bagchi:2013lma,Barnich:2014kra,Krishnan:2013wta,Hartong:2015usd,Campoleoni:2016vsh,Bagchi:2016geg,Hijano:2017eii,Hijano:2018nhq}. In this paper, we focus on higher dimensions. Some relevant literature in this context is \cite{Bagchi:2016bcd,Bagchi:2019xfx,Bagchi:2019clu,Banerjee:2020qjj, Campoleoni:2021blr, Chen:2021xkw}. Duals of 4d flat space in terms of a 2d CFT living on the celestial sphere have attracted a lot of recent interest. We will not be exploring this rather intriguing direction in our work here. We point the interested reader to the excellent reviews \cite{Strominger:2017zoo,Pasterski:2021rjz}.
\medskip
\noindent The limiting construction alluded to above is consistent between gravity and field theory sides. Namely, the Carrollian conformal algebra (CCA) is isomorphic to the BMS algebra in one higher dimension \cite{Duval:2014uva}.
\be{ccabms}
\mathfrak{Cconf}_d = \mathfrak{bms}_{d+1}
\end{equation}
Thus, the limiting construction and the intrinsic construction are consistent with each other, at least at the level of the symmetries.
\medskip
\noindent
A final point of clarification in this limiting vs.~intrinsic analysis is the infinite enhancement of symmetries. As we just mentioned, the BMS algebra is infinite-dimensional for bulk dimensions $D=3,4$ and for certain boundary conditions in $D>4$. In AdS spacetimes, on the other hand, until recently there were infinite enhancements only when $D=3$.\footnote{
In \cite{Compere:2019bua} new AdS$_4$ boundary conditions were introduced that lead to infinite-dimensional asymptotic symmetries and allow one to take a flat space limit to BMS. The implications of this approach for AdS/CFT are not completely clear yet. Thus, we follow a different route.} The natural question then is how one can see these infinite enhancements for boundary dimensions $d>3$ from the point of view of the limit. It turns out that the finite algebra ($iso(d,1)$) that one gets by performing the {\.I}n\"on\"u--Wigner contraction of the $so(d,2)$ algebra of the isometries of AdS$_{d+1}$ is the Poincar{\'e} algebra in $(d+1)$ dimensions, which can be lifted to an infinite-dimensional algebra. This infinite lift matches with the infinite enhancement of the BMS group.
\medskip
\noindent
Our principal long-term goal is to establish flat space holography, specifically to construct quantum field theories that are dual to (quantum) gravity in asymptotically flat spacetimes. More modestly, it would already be progress to understand one example in some detail. The best-known realization of the Holographic Principle is Maldacena's correspondence between Type IIB superstring theory on AdS$_5 \times$ S$^5$ and $\mathcal{N}=4$ super Yang--Mills theory in $d=4$ \cite{Maldacena:1997re}. It would thus be useful to understand if a version of this correspondence exists for asymptotically flat spacetimes. The bulk side is well-understood: neglecting the S$^5$ for now, in the weak coupling regime, the gravitational theory is Type IIB supergravity in $D=5$ flat spacetimes. We would like to understand the analogue of this flat space limit on the dual field theory side. In view of what we have introduced above, the putative dual would be a Carrollian superconformal field theory.\footnote{While our focus is on null infinity, it is also possible to consider
spatial infinity, where non-linear asymptotic symmetries emerge in higher dimensions \cite{Fuentealba:2021yvo}. Currently, there is no proposal for a dual field theory at spatial infinity, nor is there a physical interpretation of what such a field theory would represent.} Our objective in this paper is to initiate a study of these field theories by formulating the algebraic structure behind them. In short, we construct Carrollian superconformal symmetry and concentrate on boundary dimensions greater than three.\footnote{There are known issues regarding the definition of the null structure and radiation in asymptotically flat spacetimes in odd dimensions higher than four \cite{Valiente-Kroon:2002xys,Hollands:2004ac}. Our algebraic constructions in this paper are not sensitive to these issues.}
\medskip
\noindent
Our paper is organized as follows. In Sec.~\ref{Scaling for the Bosonic generators of Finite CCA}, we revisit earlier work on the CCA and its representation theory. In Sec.~\ref{sec3}, we focus on the $\mathcal{N}=1$ Carrollian Superconformal Algebra (CSA). We discuss the possible types of scaling that generate the algebra and then construct a superspace formulation for it. Later in the section, we extend the finite $\mathcal{N}=1$ CSA to admit the infinite `fermionic super-translations' in $d=4$ and $R$-symmetry. We generalise the infinite CSA to $\mathcal{N}=4$ in Sec.~\ref{n=4}. The proposed infinite-dimensional lift is consistent with results in the existing literature for $d=2, 3$. In Sec.~\ref{sec8}, we work out aspects of the representation theory of the infinite CSA from the intrinsic point of view, by considering the actions of the generators on Carrollian fields. The obtained intrinsic multiplet structure matches that from the limiting picture. We finish in Sec.~\ref{Conclusions} with a summary and discussion of other current and future directions of research.
\medskip
\noindent
There are four appendices consisting of details omitted in the main text.
Appendix \ref{relal} displays the most useful form of the relativistic $SU(2,2|N)$ algebra for our purposes. Appendix \ref{appenb} provides a consistency check for our proposed infinite-dimensional extension of the Carrollian superconformal symmetries by reproducing various results on super BMS algebras that have recently appeared in the literature. Appendix \ref{identities section} lists identities that have been used throughout the paper. Appendix \ref{appen3} shows all the details of the representation theory of infinite CSA.
\section{Bosonic story --- Carrollian CFTs}\label{Scaling for the Bosonic generators of Finite CCA}
We begin our discussions by revisiting the bosonic construction of Carrollian CFTs. We shall quickly remind the reader of the UR limit of a CFT in generic dimensions and the corresponding finite-dimensional CCA emerging from it. We then give this an infinite lift. Finally, we mention some rudimentary facts about Carrollian CFT representation theory.
\subsection{Algebra}
The UR or Carrollian limit of a relativistic CFT is reached by performing an In\"{o}n\"{u}--Wigner
contraction on the relativistic conformal generators. The corresponding contraction of the spacetime coordinates for a $d$-dimensional CFT is described as
\be{stscale}
x_i \to x_i\qquad\qquad t \to \epsilon\, t\qquad\qquad \epsilon \to 0\,.
\end{equation}
Here, $i$ runs over the spatial coordinates $i=1,\hdots,d-1$. The above contraction can also be interpreted as taking the limit of vanishing speed of light, $c\to 0$. The UR generators are obtained by performing the space-time contraction on the parent relativistic generators. For example, we obtain the UR boost generator by scaling the relativistic boost and regularising it as
\be{convensca}
B_i^{\textrm{\tiny rel}}= -x_i \partial_t-t\partial_i \xrightarrow[]{\text{rescale}\, t} -\frac{1}{\epsilon}x_i \partial_t-t\partial_i
\xrightarrow[]{\text{rescale}\, B_i} B_i=
\displaystyle \lim_{\epsilon \rightarrow 0}\epsilon B_i^{\textrm{\tiny rel}}
\xrightarrow[\text{limit}]{\text{Carroll}} B_i=- x_i \partial_t\,.
\end{equation}
The other UR generators are obtained similarly from their parent relativistic conformal generators. They are given by
\begin{subequations}
\label{genearl}
\begin{align}
H &= \partial_t & B_i&=-x_i \partial_t & K_i &= -2 x_i (t\partial_t+x_j\partial_j)+x_j x_j \partial_i & K &=x_i x_i \partial_t \\
D&=-(t\partial_t+x_i \partial_i) & P_i&=\partial_i & J_{ij}&=-(x_i\partial_j-x_j\partial_i)\,. &&
\end{align}
\end{subequations}
These generate the finite Conformal Carrollian Algebra (f-CCA),\footnote{We are particularly interested in level-$N$ CCA (CCA$_N$) when $N=2$. The UR limit of the conformal generators gives rise to generators of CCA$_N$ for $N=2$. For details see \cite{Duval:2014uva,Duval:2014lpa,Bagchi:2019clu}.} which is $iso(d,1)$ for a $d$-dimensional field theory \cite{Bagchi:2016bcd,Bagchi:2019xfx}.
\begin{subequations}
\label{algebra}
\begin{align}
[J_{ij}, B_k ]&=\delta_{k[i}B_{j]} & [J_{ij}, P_k ]&=\delta_{k[i}P_{j]} & [J_{ij}, K_k ]&=\delta_{k[i}K_{j]} & [B_i,P_j]&=\delta_{ij}H\\
[B_i,K_j]&=\delta_{ij}K & [D,K]&=-K & [K,P_i]&=2B_i & [K_i,P_j]&=-2D\delta_{ij}-2J_{ij} \nonumber \\ [H,K_i]&=2B_i & [D,H]&=H & [D,P_i]&=P_i & [D,K_i]&=-K_i\,.
\end{align}
\end{subequations}
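As a quick consistency check, representative brackets of \eqref{algebra} follow directly from the differential operators in \eqref{genearl}; for instance, acting with $B_i=-x_i\partial_t$ and $P_j=\partial_j$ on a test field $\Phi$,
\be{bpcheck}
[B_i,P_j]\,\Phi = -x_i\partial_t\partial_j\Phi + \partial_j\!\left(x_i\partial_t\Phi\right)
= \delta_{ij}\,\partial_t\Phi = \delta_{ij}\,H\,\Phi\,.
\end{equation}
The remaining brackets can be verified in the same manner.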
We focus on $d=4$. The sub-algebra consisting of the generators $\{J_{ij}, B_i, P_i, H\}$ forms the $c\to0$ limit of the Poincar{\'e} algebra, {\it viz.} the Carrollian algebra \cite{Leblond65}. The generators $\{J_{ij},P_i,D,K_i\}$ form $so(4,1)$, the conformal algebra of $3$-dimensional Euclidean space.
\medskip
\noindent
We use an equivalent way of writing the generators of f-CCA in terms of their parent relativistic generators:
\be{bosgen}
\{H^{\text{rel}}, K^{\text{rel}}, B^{\text{rel}}_i\}= \frac{1}{\epsilon}\{H, K, B_i\} \quad K^{\text{rel}}_i = K_i\quad P^{\text{rel}}_i= P_i \quad D^{\text{rel}}= D \quad J_{ij}^{\text{rel}} = J_{ij}
\end{equation}
We rephrase this by saying
\be{1}
G_A \to \frac{1}{\epsilon} \,G_A\qquad\qquad G_a \to G_a
\end{equation}
where $G_A$ is the set of generators that scale in the limit $c\to0$, while $G_a$ is the set that remains unchanged.
It is possible to give the finite algebra in \eqref{algebra} an infinite-dimensional lift by introducing a time-translation generator with arbitrary spatial dependence
\be{2}
M_f=f(x_i)\partial_t\,.
\end{equation}
Here, $M_f$ generates the infinite set of super-translations and $f(x_i)$ is an arbitrary function of the spatial coordinates $x_i$, which we restrict to polynomials. We obtain the finite generators of f-CCA, i.e., $M_f = H,B_i,K$ when $f(x_i)=1,-x_i,x_k x_k$ respectively. The super-translation generators $M_f$ along with the finite set of generators $\{B_i,J_{ij},H,P_i,D,K,K_i\}$ describe the infinite-dimensional CCA. For $d\geq 4$ it can be written as \cite{Basu:2018dub,Bagchi:2016bcd}:
\begin{subequations}
\label{infinitealgebra1}
\begin{align}
[P_i, M_f] &=M_{\partial_i f} & \quad [D,M_f] &=M_{(-x_i \partial_i f+f)}\\
[K_i,M_f]&= M_{2x_i f+x_k x_k\partial_i f-2x_ix_k\partial_k f} &\quad [J_{ij},M_f]&= M_{-x_{[i}\partial_{j]}f}\,.
\end{align}
\end{subequations}
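The first bracket in \eqref{infinitealgebra1} follows immediately from the definition \eqref{2}: with $P_i=\partial_i$ and $M_f=f(x_j)\partial_t$,
\be{pmcheck}
[P_i,M_f] = \partial_i\left(f\,\partial_t\right) - f\,\partial_t\partial_i = (\partial_i f)\,\partial_t = M_{\partial_i f}\,.
\end{equation}
The other brackets are obtained analogously.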
For a more explicit version of the algebra (in terms of modes), the reader is pointed to \cite{Bagchi:2016bcd}.
\subsection{Representation theory}
In \cite{Bagchi:2016bcd}, the representation theory of the CCA based on highest weights was described. The analysis was further extended to fields of different integer and half-integer spins in \cite{Bagchi:2019xfx}. Even though the construction was primarily intended for $d=4$, it was expected to work for higher dimensions as well. For the CCA, the states were labeled with the eigenvalues of dilatation and rotation generators. We briefly describe this construction below.
\medskip
\noindent
We label the Carrollian CFT fields with scaling dimension $\Delta$ and spin $j$ as
\be{4}
[D,\Phi(0,0)]=\Delta \Phi(0,0)\qquad\qquad [J^2, \Phi(0,0)]=j(j+1)\Phi(0,0)\,.
\end{equation}
The action of Carrollian rotation, space- and time-translation on a generic field is given by
\be{5}
[J_{ij},\Phi(0,0)]=\Sigma_{ij}\Phi(0,0),\quad[H,\Phi(t,x_i)]=\partial_t \Phi(t,x_i),\quad[P_i,\Phi(t,x_i)]=\partial_i \Phi(t,x_i)\,.
\end{equation}
The Carrollian conformal primaries are defined as
\be{6}
[K_i,\Phi(0,0)]=0,\;\; [K,\Phi(0,0)]=0,~~[M_{f},\Phi(0,0)]=0~~\text{for polynomial degree} > 1\,.
\end{equation}
The fields are not eigenstates of Carrollian boosts. Hence, using the Jacobi identity, the transformation of a generic field under Carrollian boosts can be written as
\be{7}
[B_k,\Phi(0,0)]=r\varphi_k+\, f \sigma_k\phi + f^{\prime} \sigma_k\chi\, + a A_t \delta_{ik}+b A_k+ \hdots
\end{equation}
Here, $\varphi, \,\{\phi,\chi \},\, \{A_t , A_k\}$ denote the primaries involving different spins $(0,\frac{1}{2},1)$. The constants $r,\{f,f^\prime\},\{a,b\}$ are determined from the dynamics of the corresponding theory. The above expression can be generalised for higher spins as well.
\medskip
\noindent Using the fact that the Carrollian primary $\Phi(t,x_i)$ evolves to a generic spacetime point from the origin as
\be{8}
\Phi(t,x)=U \Phi(0,0) U^{-1}\quad \text{where} ~ U=e^{-tH-x_i P_i}
\end{equation}
the action of the finite and infinite sets of generators of CCA on this generic Carrollian primary $\Phi(t,x_i)$ can be written as
\begin{subequations}
\label{repgen}
\begin{align}
[J_{ij}, \Phi(t,x)]
&=- (x_i \partial_j-x_j \partial_i ) \Phi(t,x)+\Sigma_{ij}\Phi(t,x)\\
[B_j, \Phi(t,x)]
&=-x_j\partial_t \Phi(t,x)+U_j\qquad\qquad\qquad\qquad U_j:=U [B_j, \Phi(0,0)]U^{-1}\\
[D, \Phi(t,x) ]
&= (-t\partial_t-x_i \partial_i+\Delta) \Phi(t,x)\\
[K_j, \Phi(t,x) ]
&= - (2\Delta x_j+2x_jt\partial_t+2x_i x_j \partial_i-2x_i \Sigma_{ij}- x_i x_i \partial_j )\,\Phi(t,x)+2t \, U_j\\
[M_f, \Phi(t,x) ] &= f(x_i)\partial_t \Phi(t,x)+\partial_j f\:U_j\,.
\end{align}
\end{subequations}
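As an illustration of how \eqref{repgen} follows from \eqref{8}, consider the dilatation. Writing $U=e^{-X}$ with $X=tH+x_iP_i$, the brackets $[D,H]=H$ and $[D,P_i]=P_i$ give $[D,X]=X$, so the Baker--Campbell--Hausdorff series truncates and $U^{-1}DU=D-X$. Hence
\be{dilcheck}
[D,\Phi(t,x)] = U\,[\,U^{-1}DU,\Phi(0,0)]\,U^{-1}
= U\,[D-tH-x_iP_i,\Phi(0,0)]\,U^{-1}
= \left(\Delta-t\partial_t-x_i\partial_i\right)\Phi(t,x)\,.
\end{equation}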
In later sections, we will use these results from the bosonic representation theory as hints to build the representations for the supersymmetric version.
\section{\texorpdfstring{$\boldsymbol{\mathcal{N}=1}$}{N=1} Carrollian superconformal symmetry}\label{sec3}
Having acquainted ourselves with the basics of the bosonic construction, we now venture into the world of Carrollian superconformal symmetries. We begin with the assertion
\be{CB}
\mathfrak{Csconf}_d = \mathfrak{sbms}_{d+1}.
\end{equation}
This is a natural supersymmetric generalisation of the earlier isomorphism \refb{ccabms}. In this section, we construct CSAs by taking UR contractions of the parent relativistic superconformal algebra and then, from the suggestive form of the contracted algebras, give these infinite-dimensional lifts. Supersymmetric BMS algebras have been studied in the literature for lower dimensions. For the known cases of $d=2$ \cite{Barnich:2014cwa,Banerjee:2016nio,Lodato:2016alv,Bagchi:2016yyf,Caroca:2018obf,Fuentealba:2017fck} (and $d=3$ \cite{Fotopoulos:2020bqj}), we shall check our proposal \refb{CB}.
There is an important point we clarify first. The $d=2$ algebra ($\mathfrak{sbms}_3$) has two distinct sub-categories, called the homogeneous and inhomogeneous \cite{Bagchi:2015nca,Bagchi:2017cte} (or democratic/despotic \cite{Lodato:2016alv}). These correspond to asymptotic symmetries of normal supergravity and twisted supergravity respectively. For our higher-dimensional explorations in this paper, from the context of holography of asymptotically flat spacetimes, at present, we are interested in the dual to normal supergravity and hence focus on the homogeneous algebras. The contractions we perform are chosen to achieve this. We leave the inhomogeneous algebras for future work. Below we first give a brief account of these two different types of super-BMS algebras.
\subsection{Two different SUSY extensions}\label{sec:3.1}
The BMS algebras arise as the asymptotic symmetries of gravitational theories at the null boundary of asymptotically flat spacetimes. The super BMS algebras, in the same way, are expected to arise as the asymptotic symmetries of flat space supergravity. The asymptotic symmetries of the minimal supersymmetric extension of 3D Einstein gravity, viz. $\mathcal{N}=1$ supergravity, were constructed in \cite{Barnich:2014cwa}. The resulting asymptotic symmetry algebra is
\begin{subequations}
\label{n1}
\begin{align}
[L_n,L_m]&=(n-m)\,L_{n+m}+\frac{c_L}{12}\,n(n^2-1)\delta_{n+m,0}\\
[L_n,M_m]&=(n-m)\,M_{n+m}+\frac{c_M}{12}\,n(n^2-1)\delta_{n+m,0}\\
[L_n,Q_r]&=(\frac{n}{2}-r)\,Q_{n+r}\\
\{Q_r,Q_s\}&=M_{r+s}+\frac{c_M}{6}\,\big(r^2-\tfrac{1}{4}\big)\,\delta_{r+s,0}\,.
\end{align}
\end{subequations}
Here, $L_n,M_n$'s generate the super-rotations and super-translations respectively, and the fermionic generators are given by $Q_r$, with $n,m\in\mathbb{Z}$ and either $r,s\in\mathbb{Z}$ (Ramond) or $r,s\in\mathbb{Z}+\frac12$ (Neveu--Schwarz). For Poincar{\'e} supergravity with Newton's constant $G$, we get $c_L=0, c_M=\frac{3}{G}$. The above algebra can be obtained by starting with a super-Virasoro algebra $(\l^+_n, {\mathcal{Q}}^+_r)$ and a Virasoro algebra $(\l^-_n)$
\begin{subequations}
\label{11a}
\begin{align}
[\l^\pm_n, \l^\pm_m] &= (n-m)\, \l^\pm_{n+m} +\frac{c^\pm}{12}\,n(n^2-1)\delta_{n+m,0}\\
[\l^+_n, {\mathcal{Q}}^+_r] &= \big(\frac{n}{2}-r\big)\, {\mathcal{Q}}^+_{n+r}\\
\{{\mathcal{Q}}^+_r, {\mathcal{Q}}^+_s\} &= 2 \l^+_{r+s} +\frac{c^+}{3} \,\big(r^2-\tfrac{1}{4}\big)\,\delta_{r+s,0}
\end{align}
\end{subequations}
and contracting it as follows
\be{9}
L_n = \l^+_n - \l^-_{-n}\qquad\qquad M_n = \epsilon\, (\l^+_n + \l^-_{-n})\qquad \qquad Q_r = \sqrt{\epsilon} \,{\mathcal{Q}}^+_r\,.
\end{equation}
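One can verify that the contraction \eqref{9} reproduces the fermionic bracket of \eqref{n1}. Inverting \eqref{9} gives $2\epsilon\,\l^+_{n}=M_n+\epsilon L_n$, and with the standard identifications of the central charges, $c_L=c^+-c^-$ and $c_M=\epsilon\,(c^++c^-)$, we find
\be{qqcheck}
\{Q_r,Q_s\} = \epsilon\,\{{\mathcal{Q}}^+_r,{\mathcal{Q}}^+_s\}
= 2\epsilon\,\l^+_{r+s} + \frac{\epsilon\,c^+}{3}\,\big(r^2-\tfrac{1}{4}\big)\,\delta_{r+s,0}
\;\xrightarrow{\;\epsilon\to0\;}\; M_{r+s} + \frac{c_M}{6}\,\big(r^2-\tfrac{1}{4}\big)\,\delta_{r+s,0}\,,
\end{equation}
since $2\epsilon\,\l^+_{r+s}=M_{r+s}+\epsilon L_{r+s}$ and $\epsilon\, c^+ = \tfrac{1}{2}(c_M+\epsilon\, c_L) \to \tfrac{1}{2}c_M$.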
When there are more supersymmetry generators, there is more room to man{\oe}uvre and one can scale the relativistic superalgebra in different ways, keeping the bosonic part of the algebra the same and thus obtaining different supersymmetric extensions of the contracted algebra \cite{Bagchi:2016yyf}.
We now focus on the relativistic $\mathcal{N}=(1,1)$ case. In \cite{Lodato:2016alv}, it was shown that there are two distinct theories of supergravity in asymptotically flat spacetimes, with two different asymptotic symmetry algebras, that one can obtain starting out with $\mathcal{N}=(1,1)$ supergravity in AdS$_3$. These two different symmetry algebras also arise on the worldsheet of the tensionless superstring \cite{Bagchi:2016yyf,Bagchi:2017cte, Bagchi:2018wsn}. The homogeneous or democratic limit leads to an analogue of the above algebra \refb{n1}:
\begin{subequations}
\label{noR}
\begin{align}
[L_n,L_m]&=(n-m)\,L_{n+m}+\frac{c_L}{12}\,n(n^2-1)\delta_{n+m,0}\\
[L_n,M_m]&=(n-m)\,M_{n+m}+\frac{c_M}{12}\,n(n^2-1)\delta_{n+m,0}\\
[L_n,Q^{\a}_r]&=\Big(\frac{n}{2}-r\Big)\,Q^{\a}_{n+r}\\
\{Q^{\a}_r,Q^{\b}_s\}&=\delta^{\a \b}\,\Big(M_{r+s}+\frac{c_M}{6}\,\big(r^2-\tfrac{1}{4}\big)\,\delta_{r+s,0}\Big)\,.
\end{align}
\end{subequations}
Here, we have $\a,\b=+,-$. So this differs from the previous algebra only in terms of the number of fermionic generators. The contraction, from two copies of the super-Virasoro algebra, now becomes
\be{10}
L_n = \l^+_n - \l^-_{-n}\qquad \qquad M_n = \epsilon \,(\l^+_n + \l^-_{-n})\qquad \qquad Q^\pm_r = \sqrt{\epsilon}\, {\mathcal{Q}}^\pm_r\,.
\end{equation}
The supercharges scale in the same way and hence the name homogeneous. One can also start with a super-Virasoro algebra with additional $R$-symmetry and a Virasoro algebra and contract to get $R$-symmetry in the super BMS algebra. Some details of such algebras and ones with higher $\mathcal{N}$ are presented in the appendix. The principal point to note in the above algebras \refb{n1} and \refb{noR} is that the anticommutators between the fermionic generators always produce super-translations $M_n$. This is what distinguishes this sector from its inhomogeneous counterpart.
We now elaborate on the inhomogeneous version of super BMS. Here, unlike the homogeneous sector, we take different combinations of the fermionic supercharges and scale them asymmetrically. The contraction that gets one from two copies of super-Virasoro to the inhomogeneous BMS algebra is
\be{11}
L_n = \l^+_n - \l^-_{-n}\,, \quad M_n = \epsilon (\l^+_n + \l^-_{-n})\,, \quad G_r = {\mathcal{Q}}^+_r - i {\mathcal{Q}}^-_{-r}\,, \quad
\bar{G}_r = \epsilon({\mathcal{Q}}^+_r + i {\mathcal{Q}}^-_{-r})\,.
\end{equation}
The anti-commutators of the fermionic generators $(G_r,\bar{G}_s)$ now produce the super-rotations $L_n$ as well as the super-translations $M_n$. The inhomogeneous or despotic super BMS$_3$ algebra is obtained as a flat space limit of $\mathcal{N}=(1,1)$ AdS$_3$ supergravity \cite{Lodato:2016alv}. The resulting supergravity theory is a twisted version of flat space supergravity, the asymptotic symmetries of which are given by
\begin{subequations}
\label{1a}
\begin{align}
[L_m,L_n]&=(m-n)\,L_{m+n} & [L_m,M_n]&=(m-n)\,M_{m+n}\\
[L_m,G_r]&=\Big(\frac{m}{2}-r\Big)\,G_{m+r} &[L_m,\bar{G}_r]&=[M_m,G_r]=\Big(\frac{m}{2}-r\Big)\,\bar{G}_{m+r} \\
\{G_r,G_s\}&=2L_{r+s}+\frac{c_L}{3}\,\big(r^2-\tfrac{1}{4}\big)\,\delta_{r+s,0} & \{G_r,\bar{G}_s\}&=2M_{r+s}+\frac{c_M}{3}\,\big(r^2-\tfrac{1}{4}\big)\,\delta_{r+s,0}\,.
\end{align}
\end{subequations}
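The mixed bracket in \eqref{1a} follows directly from the contraction \eqref{11}: the two copies ${\mathcal{Q}}^\pm$ anticommute with each other, so, using the analogous super-Virasoro brackets for the second copy and $c_M=\epsilon\,(c^++c^-)$,
\be{ggcheck}
\{G_r,\bar{G}_s\} = \epsilon\left(\{{\mathcal{Q}}^+_r,{\mathcal{Q}}^+_s\}+\{{\mathcal{Q}}^-_{-r},{\mathcal{Q}}^-_{-s}\}\right)
= 2\epsilon\left(\l^+_{r+s}+\l^-_{-(r+s)}\right) + \frac{\epsilon\,(c^++c^-)}{3}\,\big(r^2-\tfrac{1}{4}\big)\,\delta_{r+s,0}
= 2M_{r+s} + \frac{c_M}{3}\,\big(r^2-\tfrac{1}{4}\big)\,\delta_{r+s,0}\,.
\end{equation}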
For the purposes of this paper, we are interested in constructing higher-dimensional versions of the super BMS algebras or, equivalently, CSAs. We focus on the symmetries of the field theory dual of Poincar{\'e} supergravity and hence concentrate entirely on the homogeneous versions of the above-mentioned algebras.
\subsection[Constructing the \texorpdfstring{$\mathcal{N}=1$}{N=1} algebra]{Constructing the \texorpdfstring{$\boldsymbol{\mathcal{N}=1}$}{N=1} algebra}
We aim to construct CSAs for arbitrary dimensions. For the moment, we focus on $d=4$. Our strategy is the following:
\begin{itemize}
\item Take an appropriate UR contraction of the relativistic superalgebra $SU(2,2|\mathcal{N})$, so that we reach the homogeneous version of the contracted algebra. This generates a finite algebra with the same number of generators as $SU(2,2|\mathcal{N})$.
\item Write the super-algebra in a suggestive form and give it an infinite-dimensional lift. For this, we will construct a Carrollian superspace and will write the generators in superspace coordinates.
\item Check the consistency with similar constructions for the lower-dimensional cases to verify the isomorphism with the super BMS algebras.
\end{itemize}
We begin our analysis with the simplest case of $\mathcal{N}=1$ supersymmetry. For $\mathcal{N}=1$, the number of supercharges in the parent relativistic theory is 8 (4 supercharges $\{Q,\bar{Q}\}$, and 4 superconformal charges $\{S,\bar{S}\}$):
$
(Q_\a, \bar Q_{\dot \a},S_\a, \bar S_{\dot \a}),~\text{where} \:\a,\dot{\a}=1,2.
$
We also have the $R$-symmetry group, which is $U(1)$ for $\mathcal{N}=1$. There can be numerous ways to perform the UR contraction on the relativistic superconformal algebra, resulting in different possible algebras. As explained above, taking a cue from the two-dimensional algebras, we focus on the homogeneous algebras, where the fermionic generators close to form super-translations.
\subsubsection*{Possible scalings}
We start by considering two ways to contract the fermionic generators of $\mathcal{N}=1$ relativistic superconformal algebra in $d=4$, namely the symmetric and asymmetric scaling. Both will lead to the same algebra.
\subsubsection*{Symmetric scaling}\label{ss}
First, we describe the symmetric scaling of the 8 supercharges. We assume
\be{assum}
\bar{Q}_{\dot \alpha}= (Q_\a)^\dagger\qquad\qquad \bar{S}_{\dot \alpha}= (S_\a)^\dagger\,.
\end{equation}
The symmetric scaling is given by
\be{symscale}
Q_\a \to \frac{1}{ \sqrt \epsilon} Q_\a\qquad \bar Q_{\dot \a} \to \frac{1}{ \sqrt\epsilon} \bar Q_{\dot\a}\qquad S_\a \to \frac{1}{ \sqrt\epsilon} S_\a\qquad \bar S_{\dot \a} \to \frac{1}{ \sqrt\epsilon} \bar S_{\dot\a}\qquad R \to R\,.
\end{equation}
The above scaling for the fermionic generators along with the bosonic generators in \eqref{bosgen} results in the following algebra:
\smallskip
\noindent
\textit{Fermionic sector:}
\begin{subequations}
\label{fer1}
\begin{align}
\{Q_\a, \bar Q_{\dot \a} \}&=2 \sigma^0_{\a \dot \a}H & \{Q_\a, Q_{ \b} \}&=\{\bar{Q}_{\dot \a}, \bar Q_{\dot \b} \}=0\\
\{S_\a, \bar S_{\dot \a} \}&=2 \sigma^0_{\a \dot \a}K & \{S_\a, S_{ \b} \}&=\{\bar{S}_{\dot \a}, \bar S_{\dot \b} \}=0\\
\{Q_\a, S_{\b} \}&= 2 (\sigma^{0i})_\a^{~ \gamma} \epsilon_{\gamma \b}B_i & \{Q_\a, \bar S_{\dot \b} \}&=0 \\
\{\bar Q_{\dot\a}, \bar{S}_{\dot\b} \}&= -2\epsilon_{\dot \a \dot \gamma} (\bar \sigma^{0i})^{\dot \gamma}_ {~\dot \b} B_i & \{\bar Q_{\dot\a}, S_{ \b} \}&=0
\end{align}
\end{subequations}
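To see how the scaling \eqref{symscale} together with \eqref{bosgen} produces the first bracket of \eqref{fer1}, start from the relativistic bracket $\{Q^{\textrm{rel}}_\a,\bar Q^{\textrm{rel}}_{\dot\a}\}=2\sigma^\mu_{\a\dot\a}P^{\textrm{rel}}_\mu=2\sigma^0_{\a\dot\a}H^{\textrm{rel}}+2\sigma^i_{\a\dot\a}P^{\textrm{rel}}_i$ and substitute the scaled generators:
\be{qqbarcheck}
\frac{1}{\epsilon}\,\{Q_\a,\bar Q_{\dot\a}\}
= \frac{2}{\epsilon}\,\sigma^0_{\a\dot\a}\,H + 2\,\sigma^i_{\a\dot\a}\,P_i
\quad\Longrightarrow\quad
\{Q_\a,\bar Q_{\dot\a}\} = 2\,\sigma^0_{\a\dot\a}\,H + 2\epsilon\,\sigma^i_{\a\dot\a}\,P_i
\;\xrightarrow{\;\epsilon\to0\;}\; 2\,\sigma^0_{\a\dot\a}\,H\,.
\end{equation}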
\smallskip
\noindent
\textit{Mixed sector:}
\begin{subequations}
\label{mix11}
\begin{align}
[ Q_\a,P_i]&=[Q_\a, H]=0 & [ S_\a,P_i]&=i (\sigma^i)_{\a \dot \a}\bar Q^{\dot \a}\\
[ \bar Q_{\dot\a},P_i]&=[\bar Q_\a, H]=0 & [ \bar S_{\dot\a},P_i]&=i \epsilon_{\dot \a \dot \b}(\bar\sigma^i)^{\dot \b \a} Q_{ \a}\\
[Q_\a, B_i]&=[\bar Q_{\dot\a}, B_i]=0 & [S_\a, H]&=[\bar S_{\dot \a}, H]=0\\
[ Q_\a,K]&=[\bar Q_{\dot\a}, K]=0 & [S_\a, B_i]&=[\bar S_{\dot\a}, B_i]=0\\
[ Q_\a,J_{ij}]&=-\frac{i}{2}(\sigma_{ij})^{\:\;\:\b}_\a Q_{\b} & [ S_\a,K]&=[\bar S_{\dot\a}, K]=0\\
[ \bar Q_{\dot\a},J_{ij}]&=-\frac{i}{2}\epsilon_{\dot \a \dot \b}(\bar \sigma_{ij})^{\dot \b}_{\:\;\:\dot \gamma} \bar Q^{ \dot \gamma} & [ S_\a,K_i]&=[\bar S_{\dot\a}, K_i]=0\\
[ Q_\a,D]&=-\frac{1}{2}Q_\a & [ S_\a,J_{ij}]&=-\frac{i}{2}(\sigma_{ij})^{\:\;\:\b}_\a S_{\b}\\
[ \bar Q_{\dot\a},D]&=-\frac{1}{2}\bar Q_{\dot\a} & [ \bar S_{\dot\a},J_{ij}]&=-\frac{i}{2}\epsilon_{\dot \a \dot \b}(\bar \sigma_{ij})^{\dot \b}_{\:\;\:\dot \gamma} \bar S^{ \dot \gamma}\\
[ Q_\a,K_i]&=i (\sigma^i)_{\a \dot \a}\bar S^{\dot \a} & [ S_\a,D]&=\frac{1}{2}S_\a\\
[ \bar Q_{\dot\a},K_i]&=i \epsilon_{\dot \a \dot \b}(\bar\sigma^i)^{\dot \b \a} S_{ \a} & [ \bar S_{\dot\a},D]&=\frac{1}{2}\bar S_{\dot\a}
\end{align}
\end{subequations}
\smallskip
\noindent
\textit{$R$-symmetry sector:}
\be{ral1}
[R,Q_\a]=-i\frac{3}{4}Q_\a\qquad [R,\bar Q_{\dot \a}]=i\frac{3}{4}\bar Q_{\dot \a}\qquad [R,S_\a]=i\frac{3}{4}S_\a\qquad [R,\bar S_\a]=-i\frac{3}{4}\bar S_{\dot\a}\,.
\end{equation}
\noindent The above algebra \eqref{fer1}--\eqref{ral1} is the finite part of the $\mathcal{N}=1$ CSA in $d=4$. The generators $\{B_i, K, H, Q_\a, \bar Q_{\dot \a}, S_\a,\bar S_{\dot \a}\}$ are analogous to the translational part of the finite CSA. In other words, the anticommutators \eqref{fer1} between the fermionic generators give rise to the bosonic generators $(H,B_i,K)$, which constitute the finite super-translations of the bosonic CCA.
\smallskip
\noindent
With the $R$-symmetry scaling as in \refb{symscale}, $R$ does not appear on the RHS of the fermionic sector of the algebra and also is not a central term. However, if we choose the scaling such that $R \to \frac{1}{\epsilon}R$, then $R$ becomes a central term: it appears on the RHS of the fermionic sector of the algebra. The corresponding brackets are
\begin{subequations}
\label{rdiffscal}
\begin{align}
\{Q_\a, \bar Q_{\dot \a} \}&=2 \sigma^0_{\a \dot \a}H & \{Q_\a, Q_{ \b} \}&=\{\bar{Q}_{\dot \a}, \bar Q_{\dot \b} \}=0\\
\{S_\a, \bar S_{\dot \a} \}&=2 \sigma^0_{\a \dot \a}K & \{S_\a, S_{ \b} \}&=\{\bar{S}_{\dot \a}, \bar S_{\dot \b} \}=0\\
\{Q_\a, S_{\b} \}&= 2 (\sigma^{0i})_\a^{~ \gamma} \epsilon_{\gamma \b}B_i -4i \epsilon_{\a \b}R & \{Q_\a, \bar S_{\dot \b} \}&=0\\
\{\bar Q_{\dot\a}, \bar{S}_{\dot\b} \}&= -2\epsilon_{\dot \a \dot \gamma} (\bar \sigma^{0i})^{\dot \gamma}_ {~\dot \b} B_i +4i \epsilon_{\dot\a \dot\b}R & \{\bar Q_{\dot\a}, S_{ \b} \}&=0\\
[Q_\a,R]&=[\bar Q_{\dot\a},R]=0 & [S_\a,R]&=[\bar S_{\dot\a},R]=0\,.
\end{align}
\end{subequations}
There is no change in the mixed sector.
A scaling $R \to \epsilon^n R$ with $n>0$ instead results in the absence of the $R$ term on the RHS of the algebra. In this scaling, the LHS of $[Q_\a,R]$ scales as $\epsilon^{n-1}$ and thus vanishes in the limit $\epsilon \to 0$.
\subsubsection*{Asymmetric scaling} \label{asymmetricscaling}
Now, we explore another type of scaling, where the fermionic generators scale asymmetrically. Contrary to expectations, we end up with the same algebra as before. The important point here is that the hermiticity conditions change compared to \eqref{assum}. The asymmetric scaling is given by either of the following two possibilities.
\begin{subequations}
\label{assym3}
\begin{align}
&\textrm{either:}& Q_\a &\to Q_\a & \bar Q_{\dot \alpha}&\to \frac{1}{\epsilon}\bar Q_{\dot \alpha} & S_\a &\to \frac{1}{\epsilon} S_\a & \bar S_{\dot \alpha}&\to \bar S_{\dot \alpha} & R&\to R\\
&\textrm{or:}&Q_\a &\to \frac{1}{\epsilon}Q_\a & \bar Q_{\dot \alpha}&\to\bar Q_{\dot \alpha} & S_\a &\to S_\a & \bar S_{\dot \alpha}&\to\frac{1}{\epsilon}\bar S_{\dot \alpha} & R&\to R\,.
\end{align}
\end{subequations}
The above scaling leaves the algebra the same as \eqref{fer1} -- \eqref{ral1}.
However, for this case, the hermiticity conditions
\be{12}
\bar{Q}_{\dot \alpha}= (S_\a)^\dagger\qquad\qquad \bar{S}_{\dot \alpha}= (Q_\a)^\dagger
\end{equation}
exchange positive and negative scaling weights, analogous to the usual Virasoro generators $L_n^\dagger = L_{-n}$. We elucidate the connection between the symmetric and asymmetric scalings more prominently when addressing the superspace formalism below.
\subsection{Carrollian superspace}\label{sec4}
We have explored the various possible scalings of the fermionic generators of the $\mathcal{N}=1$ finite CSA by contracting their parent relativistic generators. We found that the algebra remains the same for both symmetric and asymmetric scalings. We now build the Carroll equivalent of the superspace formalism and connect the different scalings. In the superspace coordinates $\Theta,\bar{\Theta}$ the relativistic supercharges are given as
\begin{subequations}
\begin{align}
\label{q1}
Q_\a&=\partial_\a-\sigma^\mu_{\a\dot \beta}\bar\Theta^{\dot \beta}\partial_\mu\\
\label{q2}\bar Q_{\dot \alpha} &= -\bar{\partial}_{\dot \alpha}+ \Theta^\b\sigma^\mu_{\b\dot \alpha}\partial_\mu
\end{align}
\end{subequations}
where $\partial_\a=\frac{\partial}{\partial \Theta^\a}$ and $\bar{\partial}_{\dot \alpha}=\frac{\partial}{\partial \bar\Theta^{\dot \alpha}}$.
Let us now take the UR scaling on the superspace coordinates,
\be{thetaa}
\Theta^\a \to \epsilon^a \Theta^\a\qquad\qquad \bar\Theta^{\dot \alpha} \to \epsilon^b \bar\Theta^{\dot \alpha} \,.
\end{equation}
Here, $a,b$ are arbitrary parameters to be determined. Under this scaling, the RHS of \eqref{q1} becomes \be{13}\epsilon^{-a} \partial_\a-\epsilon^{b-1}\sigma^0_{\a \dot \beta}\bar\Theta^{\dot \beta}\partial_t-\epsilon^b \sigma^i_{\a \dot \beta}\bar\Theta^{\dot \beta} \partial_i\,.\end{equation}
Requiring $\{Q,\bar{Q}\}\sim H$ yields
\be{avalue}
b=1-a\,.
\end{equation}
From the above relation, out of the infinitely many possible values of $a$, three simple choices stand out: a symmetric one, $a=b=\frac{1}{2}$, for which both sectors scale in the same way; an asymmetric one, $a=0$, for which only the barred sector scales; and a third, $a=1$, obtained from the second by exchanging the unbarred and barred sectors, as is evident from \eqref{thetaa}.
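The constraint \eqref{avalue} and the induced scaling exponents of the supercharges can be cross-checked mechanically. The sketch below (a side computation, not part of the derivation) solves the homogeneity condition $-a=b-1$ and tabulates the exponents $Q\to\epsilon^{-a}Q$ and $\bar Q\to\epsilon^{a-1}\bar Q$ for the three choices of $a$:

```python
import sympy as sp

a, b = sp.symbols('a b', real=True)

# The two surviving terms of Q in \eqref{13} scale as eps^(-a) and
# eps^(b-1); homogeneity of Q fixes b in terms of a:
sol = sp.solve(sp.Eq(-a, b - 1), b)[0]
print(sol)  # 1 - a

# With b = 1 - a, Q scales as eps^(-a) and Qbar as eps^(-b) = eps^(a-1).
for choice in (sp.Rational(1, 2), 0, 1):
    print(choice, -choice, choice - 1)
# a = 1/2 : Q, Qbar -> eps^(-1/2)          (symmetric scaling)
# a = 0   : Q -> Q, Qbar -> eps^(-1) Qbar  (first asymmetric option)
# a = 1   : Q -> eps^(-1) Q, Qbar -> Qbar  (second asymmetric option)
```

The three exponent patterns match the symmetric scaling and the two asymmetric options in \eqref{assym3}.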
\subsection*{Case 1: Symmetric scaling}
Here we take $a=b=\frac{1}{2}$, i.e.\ $\Theta^\a \to \sqrt\epsilon\, \Theta^\a$ and $\bar\Theta^{\dot \alpha} \to \sqrt\epsilon\, \bar\Theta^{\dot \alpha}$, yielding
\be{14}
Q_\a \to \frac{1}{\sqrt \epsilon} Q_\a\qquad\qquad\bar Q_{\dot \alpha} \to \frac{1}{\sqrt \epsilon}\bar Q_{\dot \alpha}\,. \end{equation}
This reproduces the symmetric scaling (see section \ref{ss}). Next we write the bosonic and other fermionic generators in terms of the superspace coordinates.
\paragraph{Fermionic generators:} The $Q$ supercharges in the contracted superspace are given by
\be{15}
Q_\a=\partial_\a-\sigma^0_{\a\dot \beta}\bar\Theta^{\dot \beta}\partial_t\qquad\qquad \bar Q_{\dot \alpha} = - \bar{\partial}_{\dot \alpha}+ \Theta^\b \sigma^0_{\b\dot \alpha}\partial_t\,.
\end{equation}
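As an independent cross-check (a side computation, not part of the paper's formalism), the contracted supercharges \eqref{15} can be realised as finite matrices: a Jordan--Wigner representation of the Grassmann algebra of $(\Theta^1,\Theta^2,\bar\Theta^{\dot 1},\bar\Theta^{\dot 2})$, with $\partial_t$ replaced by its eigenvalue $\tau$ (it commutes with everything here), reproduces the fermionic brackets of the contracted algebra:

```python
import numpy as np

# Jordan-Wigner: mode k -> Z x ... x Z x a x I x ... x I; multiplication by a
# Grassmann coordinate <-> creation, the Grassmann derivative <-> annihilation.
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a_low = np.array([[0.0, 1.0], [0.0, 0.0]])   # one-mode annihilation operator

def mode(k, n=4):
    ops = [Z] * k + [a_low] + [I2] * (n - k - 1)
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

d = [mode(k) for k in range(4)]   # d[0], d[1] = del_alpha; d[2], d[3] = delbar
c = [m.conj().T for m in d]       # c[k] = multiplication by the k-th coordinate

tau = 0.7                          # eigenvalue of d_t on a plane wave
s0 = np.eye(2)                     # sigma^0
Q  = [d[al] - sum(s0[al, bd] * c[2 + bd] for bd in range(2)) * tau
      for al in range(2)]
Qb = [-d[2 + ad] + sum(c[be] * s0[be, ad] for be in range(2)) * tau
      for ad in range(2)]

anti = lambda A, B: A @ B + B @ A
H = tau * np.eye(16)
for al in range(2):
    for ad in range(2):
        assert np.allclose(anti(Q[al], Qb[ad]), 2 * s0[al, ad] * H)
    for be in range(2):
        assert np.allclose(anti(Q[al], Q[be]), 0)
        assert np.allclose(anti(Qb[al], Qb[be]), 0)
print("{Q, Qbar} = 2 sigma^0 H verified")
```

The loop also confirms $\{Q_\a,Q_\b\}=\{\bar Q_{\dot\a},\bar Q_{\dot\b}\}=0$, i.e.\ the contracted version of \eqref{fer1}.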
The relativistic $S$ supercharges are given by
\begin{subequations}
\label{16}
\begin{align}
S_\a&=-i \epsilon^{\dot\b \dot\gamma} (\sigma_\mu)_{\a\dot\gamma }(x^\mu_{-})\bar{\partial}_{\dot \beta}+i \epsilon^{\dot \beta \dot\gamma} (\sigma_\mu)_{\a \dot \gamma}(x^\mu_{+})\Theta^\b \sigma^\nu_{\b \dot \b}\partial_\nu-2i (\Theta \Theta)\partial_\a\\
\bar S_{\dot \alpha}&=-i \epsilon^{\b \gamma} (\sigma_\mu)_{ \gamma\dot\a}(x^\mu_{+})\partial_{\b}+i \epsilon^{\b \gamma} (\sigma_\mu)_{\gamma \dot \a}(x^\mu_{-})\bar\Theta^{\dot \beta} \sigma^\nu_{\b \dot \b}\partial_{\nu} +2i (\bar\Theta \bar\Theta)\bar{\partial}_{\dot \alpha}\,.
\end{align}
\end{subequations}
Taking the UR limit on the first equation,
\be{17}
S_\a=\frac{1}{\sqrt \epsilon}[-i \epsilon^{\dot \beta \dot\gamma} (\sigma_i)_{\a \dot \gamma}(x_i)\bar{\partial}_{\dot \beta}+i \epsilon^{\dot \beta \dot\gamma} (\sigma_i)_{\a \dot \gamma}(x_i)\Theta^\b \sigma^0_{\b \dot \b}\partial_t]+\mathcal O(\sqrt \epsilon)
\end{equation}
which is consistent with our scaling in section \refb{ss}, namely
\be{18}S_\a \to \frac{1}{\sqrt \epsilon}S_\a\qquad\qquad\bar S_{\dot \a} \to \frac{1}{ \sqrt\epsilon} \bar S_{\dot\a}\,. \end{equation}
The same argument holds for the second equation as well. Hence, we can write the contracted $S$ supercharges as
\begin{subequations}
\label{19}
\begin{align}
S_\a&=-i \epsilon^{\dot \beta \dot\gamma} (\sigma_i)_{\a \dot \gamma}(x_i)\bar{\partial}_{\dot \beta}+i \epsilon^{\dot \beta \dot\gamma} (\sigma_i)_{\a \dot \gamma}(x_i)\Theta^\b \sigma^0_{\b \dot \b}\partial_t\\
\bar S_{\dot \alpha}&=-i \epsilon^{\b \gamma} (\sigma_i)_{ \gamma\dot \a}(x_i)\partial_{\b}+i \epsilon^{\b \gamma} (\sigma_i)_{ \gamma\dot \a}(x_i)\bar\Theta^{\dot\b} \sigma^0_{\b \dot \b}\partial_t\,.
\end{align}
\end{subequations}
\paragraph{Bosonic generators:} The space and time translation generators do not have any fermionic pieces.
Hence, the contracted generators in the superspace are
\be{20}
P_i=\partial_i\qquad\qquad H=\partial_t\,.
\end{equation}
Taking the UR limit for the relativistic dilatation generator,
\be{21}
D=-(x_k \partial_k+t\partial_t+\frac{1}{2}(\Theta^\a \partial_\a+\bar\Theta^{\dot \alpha}\bar{\partial}_{\dot \a}))\xrightarrow{\text{UR limit}}-(x_k \partial_k+t\partial_t+\frac{1}{2}(\Theta^\a \partial_\a+\bar\Theta^{\dot \alpha}\bar{\partial}_{\dot \a}))
\end{equation}
yields the superspace Carrollian dilatation generator,
\be{22}
D=-(x_k \partial_k+t\partial_t+\frac{1}{2}(\Theta^\a \partial_\a+\bar\Theta^{\dot \alpha}\bar{\partial}_{\dot \a}))\,.
\end{equation}
The relativistic SCT generators in superspace are
\be{23}
K_\mu=\left( x^\nu x_\nu+2 (\Theta \Theta)(\bar\Theta \bar\Theta)\right)\partial_\mu-2\left(x_\mu x^\nu + (\Theta \sigma_\mu \bar\Theta)(\Theta \sigma^\nu\bar\Theta)\right)\partial_\nu-\epsilon^{\b \gamma}(\sigma^\nu \bar \sigma_\mu)_\gamma^{\;\alpha} \Theta_\a x_\nu \partial_\b\,.
\end{equation}
First, consider the temporal part of the SCTs, which we call $K$. Taking the UR limit, the contracted $K$ in superspace is given by
\be{24}
K= x_i x_i \partial_t\,.
\end{equation}
The spatial part of SCT $K_i$ contracts to
\be{25}
K_i=-2x_i (t\partial_t+x_k \partial_k) +x_k x_k \partial_i-\epsilon^{\b \gamma}(\sigma^j \bar \sigma_i)_\gamma^{\;\alpha} \Theta_\a x_j \partial_\b\,.
\end{equation}
The relativistic Lorentz generators are
\be{boostrot}
J_{\mu\nu}=-x_\mu \partial_\nu+x_\nu \partial_\mu-\frac{i}{2}\(\sigma^{\a \b}_{\mu \nu}\Theta_\a \partial_\b-\bar\sigma^{\dot \alpha\dot \beta}_{\mu \nu}\bar\Theta_{\dot \alpha} \bar{\partial}_{\dot \beta}\)\,.
\end{equation}
In the UR limit, these give rise to Carrollian boosts and rotations in superspace
\be{26}
B_i=-x_i\partial_t\qquad\qquad J_{ij}=-x_i \partial_j+x_j \partial_i-\frac{i}{2}\(\sigma^{\a \b}_{ij}\Theta_\a \partial_\b-\bar\sigma^{\dot \alpha\dot \beta}_{ij}\bar\Theta_{\dot \alpha} \bar{\partial}_{\dot \beta}\)\,.
\end{equation}
It is straightforward to check that the above fermionic and bosonic generators in the superspace give back the finite $\mathcal{N}=1$ CSA.
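For the bosonic generators this closure is easy to spot-check with a computer algebra system. The sketch below (a side verification; the fermionic pieces of $D$ are dropped since they annihilate an ordinary function of $(t,x_i)$) confirms a few representative Carrollian brackets:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
phi = sp.Function('phi')(t, x, y, z)
X = (x, y, z)

# Bosonic superspace generators in the Carroll limit
H = lambda e: sp.diff(e, t)
P = [lambda e, xi=xi: sp.diff(e, xi) for xi in X]
B = [lambda e, xi=xi: -xi * sp.diff(e, t) for xi in X]
K = lambda e: (x**2 + y**2 + z**2) * sp.diff(e, t)
D = lambda e: -(sum(xi * sp.diff(e, xi) for xi in X) + t * sp.diff(e, t))

comm = lambda A, Bop, e: sp.expand(A(Bop(e)) - Bop(A(e)))

# Carrollian boosts commute with time translations: [H, B_i] = 0
assert all(comm(H, Bi, phi) == 0 for Bi in B)
# [B_i, P_j] = delta_ij H
for i, Bi in enumerate(B):
    for j, Pj in enumerate(P):
        expected = H(phi) if i == j else 0
        assert sp.simplify(comm(Bi, Pj, phi) - expected) == 0
# Weights under dilatation: [D, H] = H and [D, K] = -K
assert sp.simplify(comm(D, H, phi) - H(phi)) == 0
assert sp.simplify(comm(D, K, phi) + K(phi)) == 0
print("bosonic Carrollian conformal brackets verified")
```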
\paragraph{$\boldsymbol{R}$ symmetry generators:} Finally, we consider the superspace formulation of the $R$-symmetry generator. For $\mathcal{N}=1$, the symmetry group is $U(1)$. The relativistic $R$-symmetry generator is given by
\be{27}
R=-\frac{1}{2}(\Theta^\a \partial_\a-\bar\Theta^{\dot \a}\bar{\partial}_{\dot \a})\,.
\end{equation}
Irrespective of the scaling of $\Theta$, the contracted generator remains the same, suggesting
$R\to R$. With this scaling, the $R$ terms drop from the RHS of the $\{Q,S\}$ and $\{\bar Q,\bar S\}$ brackets in the algebra of section \ref{ss}, while the brackets of $R$ with the supercharges become non-vanishing:
\be{newR}
[R,Q_\a]=-\frac{3i}{4}Q_\a\qquad\; [R,\bar Q_{\dot \alpha}]=\frac{3i}{4}\bar Q_{\dot \alpha}\qquad\; [R,S_\a]=\frac{3i}{4}S_\a\qquad\; [R,\bar S_{\dot \alpha}]=-\frac{3i}{4}\bar S_{\dot \alpha}\,.
\end{equation}
\subsection*{Case 2: Asymmetric scaling}
Here we take $a=0$ in \eqref{avalue}, i.e.\ $\Theta^\a \to \Theta^\a$ and $\bar\Theta^{\dot\a} \to \epsilon\,\bar\Theta^{\dot\a}$, and repeat the analysis of Case 1 (symmetric scaling). This value of $a$ implies the following scalings for the superconformal generators:
\begin{subequations}
\label{asymm1}
\begin{align}
Q_\a &\to Q_\a & \bar Q_{\dot \alpha}&\to \frac{1}{\epsilon}\bar Q_{\dot \alpha} & S_\a &\to \frac{1}{\epsilon} S_\a & \bar S_{\dot \alpha}&\to \bar S_{\dot \alpha} & R&\to R\\
H&\to \frac{1}{\epsilon}H & B_i &\to \frac{1}{\epsilon}B_i & K&\to \frac{1}{\epsilon}K & P_i&\to P_i & D&\to D & K_i&\to K_i & J_{ij}&\to J_{ij}\,.
\end{align}
\end{subequations}
This is the asymmetric scaling discussed in Sec.~\ref{asymmetricscaling}. The superspace formulation of the finite-CSA generators also justifies the scaling of the bosonic part in \eqref{bosgen}. The algebra remains the same as \eqref{fer1}--\eqref{ral1}. Also, for this case,
\be{29}
\bar{Q}_{\dot \alpha}= (S_\a)^\dagger\qquad\qquad \bar{S}_{\dot \alpha}= (Q_\a)^\dagger\,.
\end{equation}
There is again an identical asymmetric sector with the barred and unbarred generators interchanged.
\subsection{Infinite extension of algebra} \label{infext}
In this section, we construct an infinite extension of the finite CSA \eqref{fer1}--\eqref{ral1}. We focus on $d=4$, but similar constructions work for $d>4$. We begin with the fermionic generators in the superspace coordinates.
\begin{subequations}
\begin{align}
Q_\a&=\partial_\a-\sigma^0_{\a\dot\b}\bar\Theta^{\dot \beta}\partial_t & \bar S_{\dot \alpha}&=-i \epsilon^{\b\gamma}(\sigma_i)_{\gamma\dot \alpha}x_i Q_\b \\
\bar Q_{\dot \alpha}&=-{\bar{\partial}}_{\dot \alpha}+\sigma^0_{\b\dot\a}\Theta^{\b}\partial_t & S_{\a}&=i \epsilon^{{\dot \beta}\dot\gamma}(\sigma_i)_{\a\dot\gamma}x_i \bar Q_{\dot \beta}
\end{align}
\end{subequations}
Let us now define the matrix $\Lambda$, which can be used to flip from $Q$ to $\bar{S}$.
\be{lambda}
\Lambda^\b_{\;\;\;\dot \alpha}=-i \epsilon^{\b\gamma}(\sigma_i)_{\gamma\dot \alpha}x_i\qquad\Rightarrow \qquad \bar S_{\dot \alpha}=\Lambda^\b_{\;\: \dot \alpha} Q_\b
\end{equation}
Similarly, we define $\bar{\L}$ which can be used to go from $\bar Q$ to $S$.
\be{barlambda}
\bar\Lambda^{\;\;\;\dot\b}_{\a}=i \epsilon^{\dot \beta\dot\gamma}(\sigma_i)_{\a\dot\gamma}x_i\qquad\Rightarrow\qquad S_{\a}=\bar\Lambda^{\;\;\;\dot \beta}_{\a} \bar Q_{\dot \beta}
\end{equation}
We deduce the identities
\be{31}
\Lambda^\a_{\;\: \dot \beta}\bar\Lambda^{\;\;\dot\gamma}_{ \a}=(x_i x_i)\,\delta^{\dot\gamma}_{\dot\b}\qquad\quad\Lambda^\a_{\;\: \dot \beta}\bar\Lambda^{\;\:\dot\b}_{ \gamma}=(x_i x_i)\,\delta^{\a}_{\gamma}\qquad\quad\Lambda^\b_{\;\;\;\dot \alpha}=[\bar\Lambda^{\;\;\;\dot\b}_{\a}]^\dagger\,.
\end{equation}
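The identities \eqref{31} are straightforward to confirm numerically. The sketch below (with the convention choices $\epsilon^{12}=+1$ and the first/second spinor index read as row/column) checks all three for a random point $x_i$:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])      # epsilon^{12} = +1

rng = np.random.default_rng(0)
xvec = rng.normal(size=3)
Sig = sum(xi * si for xi, si in zip(xvec, s))   # (sigma_i x_i)_{alpha alphadot}

Lam  = -1j * eps @ Sig        # Lambda^beta_{alphadot}
LamB =  1j * Sig @ eps.T      # Lambdabar_alpha^{betadot}

x2 = np.dot(xvec, xvec)
# both contractions give (x_i x_i) times the identity
assert np.allclose(Lam @ LamB.T, x2 * np.eye(2))
assert np.allclose(Lam.T @ LamB, x2 * np.eye(2))
# Lambda = (Lambdabar)^dagger for real x_i
assert np.allclose(Lam, LamB.conj().T)
print("Lambda identities verified")
```

The check rests on the Pauli identity $(\sigma_i x_i)^2 = (x_i x_i)\mathbb{1}$ and on $\epsilon\,\sigma_i\,\epsilon = \sigma_i^{T}$ for this choice of $\epsilon$.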
Conventions and further identities related to $\L$ and $\bar\Lambda$ are collected in Appendix \ref{identities section}. Let us now define the following fermionic generators:
\begin{subequations}
\label{fourierfermionic}
\begin{align}
G^{+}_{r_1,r_2,r_3}&=x^{r_1}y^{r_2}z^{r_3}Q_\a & G^{-}_{r_1,r_2,r_3}&=x^{r_1}y^{r_2}z^{r_3}\L_{\;\;\dot \alpha}^{\a} Q_\a\\
\tilde G^{+}_{r_1,r_2,r_3}&=x^{r_1}y^{r_2}z^{r_3}\bar Q_{\dot \alpha} & \tilde G^{-}_{r_1,r_2,r_3}&=x^{r_1}y^{r_2}z^{r_3}{\bar \L}_{\a}^{\;\;\dot \alpha}\bar Q_{\dot \alpha}\,.
\end{align}
\end{subequations}
Here, $r_i$'s can take any integer value.
Then,
\be{tal}
[G^{+}_{r_1,r_2,r_3},P_x]=-r_1 G^{+}_{r_1-1,r_2,r_3} \qquad\quad [G^{-}_{r_1,r_2,r_3},P_x]=-r_1 G^{-}_{r_1-1,r_2,r_3}-\partial_x \L G^{+}_{r_1,r_2,r_3}
\end{equation}
and similar expressions for the tilde sector with $\L$ replaced by $\bar\Lambda$.
Similarly, the fermionic generators give the following brackets with SCT generator $K_i$:
\begin{subequations}
\label{sctal}
\begin{align}
[G^{+}_{r_1,r_2,r_3},K_x]&=(r_1+2r_2+2r_3) G^{+}_{r_1+1,r_2,r_3}-r_1\big(G^{+}_{r_1-1,r_2+2,r_3}+G^{+}_{r_1-1,r_2,r_3+2}\big)\nonumber\\
&\quad -(\L\cdot \partial_x \bar\Lambda )G^{+}_{r_1,r_2,r_3}\\
[G^{-}_{r_1,r_2,r_3},K_x]&=(r_1+2r_2+2r_3) G^{-}_{r_1+1,r_2,r_3}-r_1\big(G^{-}_{r_1-1,r_2+2,r_3}+G^{-}_{r_1-1,r_2,r_3+2}\big)
\end{align}
\end{subequations}
and similar expressions for the tilde sector, now with $\L\cdot \partial_x \bar\Lambda$ replaced by $\partial_x \L \cdot \bar\Lambda$.
The brackets for the other components of $P_i$ and $K_i$ follow similarly. The brackets with the dilatation generator $D$ are given by
\be{dal}
[G^{\pm}_{r_1,r_2,r_3},D]=\Big(r_1+r_2+r_3 \mp \frac{1}{2}\Big) G^{\pm}_{r_1,r_2,r_3}\,.
\end{equation}
The same equations hold for the ${\tilde{G}}^{\pm}$.
Let us now combine both the finite and infinite fermionic generators in \eqref{fourierfermionic} to define
\be{32}
G_f=f(x_i,\L)Q_\a\,,
\end{equation}
where $f$ can be a function of the coordinates $x_i$ and $\L$. Thus, we have the following cases, depending on the choice of $f$:
\begin{subequations}
\label{33}
\begin{align}
f(x_i,\L)&=1 && \Rightarrow & G_f&=Q_{\a}\\
f(x_i,\L)&=x^{r_1}y^{r_2}z^{r_3},~\text{or } g(x_i) && \Rightarrow & G_f&=G^{+}_{r_1,r_2,r_3}\\
f(x_i,\L)&=\L^{\a}_{\;\;\dot \beta} && \Rightarrow & G_f&=\bar S_{\dot \beta}\\
f(x_i,\L)&=g(x_i)\L^{\a}_{\;\;\dot \beta} && \Rightarrow & G_f&=G^{-}_{r_1,r_2,r_3}\,.
\end{align}
\end{subequations}
Similarly, we define
\be{34}
\bar{G}_f=f(x_i,\bar\Lambda)\bar Q_{\dot \alpha}
\end{equation}
so as to combine the rest of the fermionic generators in \eqref{fourierfermionic} as follows:
\begin{subequations}
\label{35}
\begin{align}
f(x_i,\bar\Lambda)&=1 && \Rightarrow & \bar{G}_f&=\bar Q_{\dot \alpha}\\
f(x_i,\bar\Lambda)&=x^{r_1}y^{r_2}z^{r_3},~\text{or } g(x_i)&& \Rightarrow & \bar{G}_f&=\tilde{G}^{+}_{r_1,r_2,r_3}\\
f(x_i,\bar\Lambda)&=\bar\Lambda_{\b}^{\;\;\dot \alpha} && \Rightarrow & \bar{G}_f&=S_{\b}\\
f(x_i,\bar\Lambda)&=g(x_i)\bar\Lambda_{\a}^{\;\;\dot \beta} && \Rightarrow & \bar{G}_f&=\tilde{G}^{-}_{r_1,r_2,r_3}.
\end{align}
\end{subequations}
Then, we can finally write the infinite $\mathcal{N}=1$ CSA in $d=4$ succinctly as
\be{infal1}\boxed{
\begin{aligned}
&[G_f,P_i]=G_{-\partial_i f} &[\bar{G}_f,P_i] &=\bar{G}_{-\partial_i f}\\
&[G_f,D]=G_{-\frac{1}{2}f+x_i\partial_i f} &[\bar{G}_f,D]&=\bar{G}_{-\frac{1}{2}f+x_i\partial_i f}\\
&[G_f,K_i]=G_{-x_k x_k\partial_i f+2x_i x_k\partial_k f-( \L \cdot \partial_i \bar\Lambda)f} & [\bar{G}_f,K_i]&=\bar{G}_{-x_k x_k\partial_i f+2x_i x_k\partial_k f-f(\partial_i \L\cdot\bar\Lambda)}
\end{aligned}}
\end{equation}
Substituting $f=x^{r_1}y^{r_2}z^{r_3}$, $\L$, or $\bar\Lambda$ reproduces \eqref{tal}--\eqref{dal}. Finally,
\be{infal2}\boxed{
\{G_f,\bar{G}_g\}=2 \sigma^0 M_{f(x_i,\L)\cdot g(x_i,\bar\Lambda)}
}\end{equation}
For completeness, we also present the infinite extension of the bosonic part:
\be{infalgebra}
\boxed{
\begin{aligned}
&[M_f,P_i] =M_{-\partial_i f}, \quad [M_f,D] =M_{-f+x_i \partial_i f}\\
& [M_f,K_i]= M_{2x_i x_k\partial_k f-x_k x_k\partial_i f-2x_i f}\,.
\end{aligned} }
\end{equation}
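The brackets of the supertranslation tower in \eqref{infalgebra} can be verified for an arbitrary function $f(x_i)$ by a direct operator computation; a short cross-check (a side computation, bosonic parts of $D$ and $K_i$ only):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (x, y, z)
f   = sp.Function('f')(*X)
phi = sp.Function('phi')(t, *X)

M = lambda g, e: g * sp.diff(e, t)                # M_f = f(x) d_t
P = [lambda e, xi=xi: sp.diff(e, xi) for xi in X]
D = lambda e: -(sum(xi * sp.diff(e, xi) for xi in X) + t * sp.diff(e, t))
K = [lambda e, xi=xi: -2 * xi * (t * sp.diff(e, t)
        + sum(xk * sp.diff(e, xk) for xk in X))
        + (x**2 + y**2 + z**2) * sp.diff(e, xi) for xi in X]

r2 = x**2 + y**2 + z**2
xdel = lambda g: sum(xi * sp.diff(g, xi) for xi in X)

# [M_f, P_i] = M_{-d_i f}
for xi, Pi in zip(X, P):
    lhs = sp.expand(M(f, Pi(phi)) - Pi(M(f, phi)))
    assert sp.simplify(lhs - M(-sp.diff(f, xi), phi)) == 0
# [M_f, D] = M_{-f + x_k d_k f}
lhs = sp.expand(M(f, D(phi)) - D(M(f, phi)))
assert sp.simplify(lhs - M(-f + xdel(f), phi)) == 0
# [M_f, K_i] = M_{2 x_i x_k d_k f - x_k x_k d_i f - 2 x_i f}
for xi, Ki in zip(X, K):
    lhs = sp.expand(M(f, Ki(phi)) - Ki(M(f, phi)))
    target = 2 * xi * xdel(f) - r2 * sp.diff(f, xi) - 2 * xi * f
    assert sp.simplify(lhs - M(target, phi)) == 0
print("supertranslation tower brackets verified")
```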
\subsubsection*{Inclusion of $\boldsymbol{R}$-symmetry:} It is possible to give an infinite extension to the $R$-symmetry as well. Defining
\be{rsym1}
\mathfrak{R}_{r_1,r_2,r_3}=x^{r_1}y^{r_2}z^{r_3}R
\end{equation}
yields
\begin{subequations}
\label{rsym2}
\begin{align}
&[\mathfrak{R}_{r_1,r_2,r_3},P_x]=-r_1 \mathfrak{R}_{r_1-1,r_2,r_3} \\ &[\mathfrak{R}_{r_1,r_2,r_3},D]=(r_1+r_2+r_3) \mathfrak{R}_{r_1,r_2,r_3} \\
&[\mathfrak{R}_{r_1,r_2,r_3},K_x]=(r_1+2r_2+2r_3) \mathfrak{R}_{r_1+1,r_2,r_3}-r_1\Big(\mathfrak{R}_{r_1-1,r_2+2,r_3}+\mathfrak{R}_{r_1-1,r_2,r_3+2}\Big)\\
&[\mathfrak{R}_{s_1,s_2,s_3},G^{\pm}_{r_1,r_2,r_3}]=-i\frac{3}{4} G^{\pm}_{r_1+s_1,r_2+s_2,r_3+s_3} \\ &[\mathfrak{R}_{s_1,s_2,s_3},\tilde G^{\pm}_{r_1,r_2,r_3}]=i\frac{3}{4} \tilde G^{\pm}_{r_1+s_1,r_2+s_2,r_3+s_3}
\end{align}
\end{subequations}
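The mode-form coefficients in \eqref{rsym2} follow from applying the function-space rules to monomials; a quick polynomial cross-check for sample mode numbers (chosen arbitrarily here):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
r1, r2, r3 = 3, 1, 2                 # sample integer mode numbers
f = x**r1 * y**r2 * z**r3
r2sq = x**2 + y**2 + z**2

# [R_f, K_x] carries f -> -x_k x_k d_x f + 2 x x_k d_k f  (bosonic K_x)
g = -r2sq * sp.diff(f, x) + 2 * x * (x * sp.diff(f, x)
        + y * sp.diff(f, y) + z * sp.diff(f, z))
target = ((r1 + 2*r2 + 2*r3) * x**(r1+1) * y**r2 * z**r3
          - r1 * (x**(r1-1) * y**(r2+2) * z**r3
                  + x**(r1-1) * y**r2 * z**(r3+2)))
assert sp.expand(g - target) == 0

# [R_f, D] carries f -> x_k d_k f = (r1 + r2 + r3) f  on monomials
assert sp.expand(x*sp.diff(f, x) + y*sp.diff(f, y) + z*sp.diff(f, z)
                 - (r1 + r2 + r3) * f) == 0
print("mode-form brackets of the R tower verified")
```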
Alternatively, \eqref{rsym1} can be written as
\be{36}
\mathfrak{R}_{f}=f(x_i)R
\end{equation}
Thus, we can rewrite \eqref{rsym2} as,
\be{rsymminf}
\boxed{
\begin{aligned}
&[\mathfrak{R}_f,P_i]=\mathfrak{R}_{-\partial_i f} &[\mathfrak{R}_f,D]&=\mathfrak{R}_{x_i\partial_i f}\\
&[\mathfrak{R}_f,K_i]=\mathfrak{R}_{-x_k x_k\partial_i f+2x_i x_k\partial_k f} &&\\
&[\mathfrak{R}_g,G_f]=-i\frac{3}{4}G_{f\cdot g} &[\mathfrak{R}_g,\bar{G}_f]&=i\frac{3}{4}\bar{G}_{f\cdot g}\\
&[\mathfrak{R}_f,M_g]=0& [\mathfrak{R}_f,\mathfrak{R}_g]&=0.
\end{aligned}}
\end{equation}
The complete infinite $\mathcal{N}=1$ CSA in $d=4$ is given by the boxed equations \eqref{infal1}--\eqref{infalgebra} and \eqref{rsymminf}.
\section{Generalisations to higher \texorpdfstring{$\boldsymbol{\mathcal{N}}$}{N}}\label{n=4}
In the previous sections, we discussed the finite and infinite $\mathcal{N}=1$ CSA. Here, we extend our formulation to $\mathcal{N}=4$. First, let us summarise the finite superconformal generators of the $\mathcal{N}=4$ CSA. The supercharges and superconformal charges are $\{Q^A_\a, \bar Q^A_{\dot \a}, S^A_\a, \bar S^A_{\dot \a}\}$, where $A,B=1,\dots,4$. The $R$-symmetry generators $R_I$ generate an $SU(4)$ algebra.
\medskip
\noindent
As for $\mathcal{N}=1$, there are two types of scaling, symmetric and asymmetric. In either case $R_I \to R_I$, and additionally
\begin{subequations}
\label{37}
\begin{align}
&\text{Symmetric:} & Q^A_\a &\to \frac{1}{ \sqrt \epsilon} Q^A_\a & \bar Q^A_{\dot \a} &\to \frac{1}{ \sqrt\epsilon} \bar Q^A_{\dot\a} & S^A_\a &\to \frac{1}{ \sqrt\epsilon} S^A_\a & \bar S^A_{\dot \a} &\to \frac{1}{ \sqrt\epsilon} \bar S^A_{\dot\a} \\
&\text{Asymmetric:} & Q^A_\a &\to \frac{1}{ \epsilon} Q^A_\a & \bar Q^A_{\dot \a} &\to \bar Q^A_{\dot\a} & S^A_\a &\to S^A_\a & \bar S^A_{\dot \a} &\to \frac{1}{ \epsilon} \bar S^A_{\dot\a} \,.
\end{align}
\end{subequations}
The above scalings give the following finite CSA for $\mathcal{N}=4$.
\medskip
\noindent \textit{Fermionic sector:}
\begin{subequations}
\label{n41}
\begin{align}
\{Q^A_\a, \bar Q^B_{\dot \a} \}&=2 \sigma^0_{\a \dot \a}\delta^{AB} H & \{Q^A_\a, Q^B_{ \b} \}&=\{\bar{Q}^A_{\dot \a}, \bar Q^B_{\dot \b} \}=0\\
\{S^A_\a, \bar S^B_{\dot \a} \}&=2 \sigma^0_{\a \dot \a} \delta^{AB} K & \{S^A_\a, S^B_{ \b} \}&=\{\bar{S}^A_{\dot \a}, \bar S^B_{\dot \b} \}=0\\
\{Q^A_\a, S^B_{\b} \}&= 2 (\sigma^{0i})_\a^{~ \gamma} \epsilon_{\gamma \b} \delta^{AB} B_i & \{Q^A_\a, \bar S^B_{\dot \b} \}&=0 \\
\{\bar Q^A_{\dot\a}, \bar{S}^B_{\dot\b} \}&= -2\epsilon_{\dot \a \dot \gamma} (\bar \sigma^{0i})^{\dot \gamma}_ {~\dot \b} \delta^{AB} B_i & \{\bar Q^A_{\dot\a}, S^B_{ \b} \}&=0
\end{align}
\end{subequations}
\noindent \textit{Mixed sector:}
\begin{subequations}
\label{n42}
\begin{align}
[ Q^A_\a,P_i]&=[Q^A_\a, H]=0 & [ S^A_\a,P_i]&=i (\sigma^i)_{\a \dot \a}\bar Q^{A\dot \a}\\
[ \bar Q^A_{\dot\a},P_i]&=[\bar Q^A_{\dot\a}, H]=0 & [ \bar S^A_{\dot\a},P_i]&=i \epsilon_{\dot \a \dot \b}(\bar\sigma^i)^{\dot \b \a} Q^A_{ \a}\\
[Q^A_\a, B_i]&=[\bar Q^A_{\dot\a}, B_i]=0 &
[S^A_\a, H]&=[\bar S^A_{\dot \a}, H]=0\\
[ Q^A_\a,K]&=[\bar Q^A_{\dot\a}, K]=0 &
[S^A_\a, B_i]&=[\bar S^A_{\dot\a}, B_i]=0\\
[ Q^A_\a,J_{ij}]&=-\frac{i}{2}(\sigma_{ij})^\b_\a Q^A_{\b} & [ S^A_\a,K]&=[\bar S^A_{\dot\a}, K]=0 \\
[ \bar Q^A_{\dot\a},J_{ij}]&=-\frac{i}{2}\epsilon_{\dot \a \dot \b}(\bar \sigma_{ij})^{\dot \b}_{\dot \gamma} \bar Q^{A \dot \gamma} &
[ S^A_\a,K_i]&=[\bar S^A_{\dot\a}, K_i]=0\\
[ Q^A_\a,D]&=-\frac{1}{2}Q^A_\a &
[ S^A_\a,J_{ij}]&=-\frac{i}{2}(\sigma_{ij})^\b_\a S^A_{\b} \\
[ \bar Q^A_{\dot\a},D]&=-\frac{1}{2}\bar Q^A_{\dot\a} &
[ \bar S^A_{\dot\a},J_{ij}]&=-\frac{i}{2}\epsilon_{\dot \a \dot \b}(\bar \sigma_{ij})^{\dot \b}_{\dot \gamma} \bar S^{A \dot \gamma}\\
[ Q^A_\a,K_i]&=i (\sigma^i)_{\a \dot \a}\bar S^{A\dot \a} &
[ S^A_\a,D]&=\frac{1}{2}S^A_\a\\
[ \bar Q^A_{\dot\a},K_i]&=i \epsilon_{\dot \a \dot \b}(\bar\sigma^i)^{\dot \b \a} S^A_{ \a} &
[ \bar S^A_{\dot\a},D]&=\frac{1}{2}\bar S^A_{\dot\a}
\end{align}
\end{subequations}
\medskip
\noindent \textit{$R$-symmetry sector:}
\begin{subequations}
\label{n45}
\begin{align}
[Q^A_\a,R_I]&= \mathcal{\hat{B}}_I ^{AB}Q^B_{\a} & [\bar Q^A_{\dot\a},R_I]&=-( \mathcal{\hat{B}}_I ^{AB})^\star\bar Q^B_{\dot\a}\\
[S^A_\a,R_I]&=- \mathcal{\hat{B}}_I ^{AB}S^B_{\a} & [\bar S^A_{\dot\a},R_I]&=( \mathcal{\hat{B}}_I ^{AB})^\star\bar S^B_{\dot\a}\\
[R_I,R_J]&=i \mathfrak{t}_{IJ}^K R_K & \mathcal{\hat{B}}_I &=( \mathcal{\hat{B}}_I )^\dagger\,.
\end{align}
\end{subequations}
Using the hermiticity property of $\hat{\mathcal{B}}$ we can write $[(\hat{\mathcal{B}}_I)^{AB}]^\star=(\hat{\mathcal{B}}_I)^{BA}$. The structure constants of the $R$-symmetry are denoted by $\mathfrak{t}$. Furthermore, we can show that $\mathcal{\hat{B}}_I$ is real if we use the Jacobi identity between $\{Q,S,R_I\}$ or $\{\bar{Q},\bar{S},R_I\}$. Hence, we can write \eqref{n45} as
\begin{subequations}{}\label{n44}
\begin{align}
&[Q^A_\a,R_I]= \mathcal{\hat{B}}_I ^{AB}Q^B_{\a},~& [\bar Q^A_{\dot\a},R_I]&=-( \mathcal{\hat{B}}_I ^{AB})\bar Q^B_{\dot\a},\\
&[S^A_\a,R_I]=- \mathcal{\hat{B}}_I ^{AB}S^B_{\a},~& [\bar S^A_{\dot\a},R_I]&=( \mathcal{\hat{B}}_I ^{AB})\bar S^B_{\dot\a},\\
&[R_I,R_J]=i \mathfrak{t}_{IJ}^K R_K.
\end{align}
\end{subequations}
The generators $\{H,B_i,K, Q^A_\a,S^A_\a, \bar Q^A_{\dot \a},\bar S^A_{\dot \a}\}$ are analogous to the translational part of the finite CSA. The $R$-symmetry group is left unscaled here so that the $SU(4)$ algebra $[R_I, R_J]=i\mathfrak{t}_{IJ}^K R_K$ remains uncontracted.
\medskip
\noindent
We can make another choice of contraction for the $R$-symmetry, $SU(4)\sim SO(6)\to ISO(5)$, such that
\be{40}
\mathcal{R}_P \to \frac{1}{\epsilon}\,\mathcal{R}_P\qquad\qquad R_I \to R_I\,,
\end{equation}
where the $R_I$ are the 10 generators of the $SO(5)$ subalgebra and the $\mathcal{R}_P$ are the remaining 5 generators, which become the translations of $ISO(5)$.
The resultant algebra differs from \eqref{n41},\eqref{n44} in the following brackets:
\begin{subequations}\label{41}
\begin{align}
\{Q^A_\a, S^B_{\b} \}&= 2 (\sigma^{0i})_\a^{~ \gamma} \epsilon_{\gamma \b} \delta^{AB} B_i +\mathcal{\hat{B}}_P^{AB}\mathcal{R}_P, \\
\{\bar Q^A_{\dot\a}, \bar{S}^B_{\dot\b} \}&= -2\epsilon_{\dot \a \dot \gamma} (\bar \sigma^{0i})^{\dot \gamma}_ {~\dot \b} \delta^{AB} B_i+(\mathcal{\hat{B}}_P^{AB})^\star \mathcal{R}_P. \\
[R_I,R_J]&=i \mathfrak{t}_{IJ}^K R_K, ~~ [\mathcal{R}_P, \mathcal{R}_S]=0, ~~[R_I,\mathcal{R}_P]=i \mathfrak{t}_{IP}^S \mathcal{R}_S\\
[R_I, *]&=\text{ same as before},~~ [\mathcal{R}_P, *]=0.
\end{align}
\end{subequations}
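The contraction pattern behind this split can be illustrated in the defining representation of $\mathfrak{so}(6)$: the brackets of the generators to be scaled close on the $\mathfrak{so}(5)$ block, so after the rescaling they become the abelian translations of $\mathfrak{iso}(5)$. A minimal numerical sketch (the index labels $0,\dots,5$ are our choice here):

```python
import numpy as np

def M(A, B, n=6):
    # so(n) generator M_{AB} = E_{AB} - E_{BA} in the defining representation
    out = np.zeros((n, n))
    out[A, B], out[B, A] = 1.0, -1.0
    return out

comm = lambda P, Q: P @ Q - Q @ P

# Split so(6): R-sector = M_{ab} with a,b = 0..4 (the so(5) block),
# translation-like sector = M_{a5} (to be scaled by 1/epsilon).
# [M_{a5}, M_{b5}] closes on the so(5) block, so after the rescaling
# [RP_a, RP_b] = eps^2 [M_{a5}, M_{b5}] -> 0: the 5 scaled generators
# become the abelian translations of iso(5).
assert np.allclose(comm(M(0, 5), M(1, 5)), -M(0, 1))
# [M_{ab}, M_{c5}] stays in the scaled sector: so(5) rotates the translations.
assert np.allclose(comm(M(0, 1), M(0, 5)), -M(1, 5))
print("so(6) -> iso(5) contraction pattern verified")
```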
\subsection*{Infinite Extension of $\boldsymbol{\mathcal{N}=4}$ CSA in $\boldsymbol{d=4}$:}
We now write down the infinite extension of the $\mathcal{N}=4$ CSA in $d=4$, introducing the generalised fermionic generators $G_f^A, \bar{G}_f^A$, where $A=1,\hdots,4$.
\medskip
\noindent
\begin{subequations}
\label{42}
\begin{align}
f(x_i,\L)&=1 && \Rightarrow & G_f^A &=Q_\a^A,\\
f(x_i,\L)&=g(x_i)& &\Rightarrow & G_f^A&=g(x_i)Q_\a^A\\
f(x_i,\L)&=\L^{\a}_{\;\;\dot \beta} && \Rightarrow & G_f^A&=\bar S_{\dot \alpha}^A\\
f(x_i,\L)&=g(x_i)\L^{\a}_{\;\;\dot \beta} && \Rightarrow & G_f^A&=g(x_i)\bar S_{\dot \alpha}^A.
\end{align}
\end{subequations}
Here, $f$ can be a function of the coordinates $x_i$ and $\L$.
Also
\begin{subequations}
\begin{align}
f(x_i,\bar\Lambda)&=1 && \Rightarrow & \bar{G}_f^A &=\bar Q_{\dot \alpha}^A,\\
f(x_i,\bar\Lambda)&=g(x_i)& &\Rightarrow & \bar{G}_f^A&=g(x_i)\bar Q_{\dot \alpha}^A\\
f(x_i,\bar\Lambda)&=\bar\Lambda_{\a}^{\;\;\dot \beta} && \Rightarrow & \bar{G}_f^A&=S_{\a}^A\\
f(x_i,\bar\Lambda)&=g(x_i)\bar\Lambda_{\a}^{\;\;\dot \beta} && \Rightarrow & \bar{G}_f^A&=g(x_i) S_{\a}^A.
\end{align}
\end{subequations}
The resultant $\mathcal{N}=4$ infinite super CCA is
\be{infal}\boxed{
\begin{aligned}
&[G_f^A,P_i]=G_{-\partial_i f}^A & [\bar{G}_f^A,P_i]&=\bar{G}_{-\partial_i f}^A\\
&[G_f^A,D]=G_{-\frac{1}{2}f+x_i\partial_i f}^A & [\bar{G}_f^A,D]&=\bar{G}_{-\frac{1}{2}f+x_i\partial_i f}^A\\
&[G_f^A,K_i]=G_{-x_k x_k\partial_i f+2x_i x_k\partial_k f-(\L\cdot \partial_i \bar\Lambda)f}^A & [\bar{G}_f^A,K_i]&=\bar{G}_{-x_k x_k\partial_i f+2x_i x_k\partial_k f-f(\partial_i \L\cdot\bar\Lambda)}^A \\
&\{G_f^A,\bar{G}_g^B\}=2\delta^{AB} \sigma^0 M_{f(x_i,\L)\cdot g(x_i,\bar\Lambda)}&&
\end{aligned}}
\end{equation}
\subsubsection*{Inclusion of $\boldsymbol{R}$-symmetry:} We propose an infinite extension for the $R$-symmetry generators $R_I$.
\be{rsym3}
\mathfrak{R}^{(I)}_f=f(x_i)R_I
\end{equation}
Then
\be{44}
\boxed{
\begin{aligned}
&[\mathfrak{R}^{(I)}_f,P_i]=\mathfrak{R}^{(I)}_{-\partial_i f} & [\mathfrak{R}^{(I)}_f,D]&=\mathfrak{R}^{(I)}_{x_i\partial_i f} \\
&[\mathfrak{R}^{(I)}_f,K_i]=\mathfrak{R}^{(I)}_{-x_k x_k\partial_i f+2x_i x_k\partial_k f} &&\\
&[G_f^A,\mathfrak{R}^{(I)}_g]=(\mathcal{\hat{B}}_I)^{AB}G^B_{f\cdot g} & [\bar{G}_f^B,\mathfrak{R}^{(I)}_g]&=-(\mathcal{\hat{B}}_I)^{AB}\bar{G}^B_{f\cdot g}~ \\
&[\mathfrak{R}^{(I)}_f,M_g]=0 &[\mathfrak{R}^{(I)}_f,\mathfrak{R}^{(J)}_g]&=i\mathfrak{t}^{K}_{IJ}\mathfrak{R}^{(K)}_{f\cdot g}
\end{aligned}}
\end{equation}
Conventions and useful identities can be found in Appendix \ref{identities section}.
\medskip
\noindent Our proposed infinite extension of the Carrollian superconformal symmetry is compatible with the known results in $d=2$ and $d=3$ boundary dimensions. We provide detailed cross-checks in Appendix \ref{sec7} for $d=2$ and Appendix \ref{superbms4} for $d=3$.
\section{Representation theory of \texorpdfstring{$\boldsymbol{\mathcal{N}=1}$}{N=1} CSA}\label{sec8}
In this section, we take the first steps towards building a representation theory for the infinite-dimensional algebras we have constructed in our previous sections. We first review relevant aspects of relativistic superconformal algebras and then focus on analogous constructions for the $\mathcal{N}=1$ CSA. Our analysis is by no means exhaustive and we plan to return to aspects of representation theory in the future.
\subsection[Relativistic \texorpdfstring{$\mathcal{N}=1$}{N=1} chiral super multiplet]{Relativistic \texorpdfstring{$\boldsymbol{\mathcal{N}=1}$}{N=1} chiral super multiplet}\label{wz}
We begin by revisiting some relevant aspects of the representation theory of the relativistic supersymmetric algebra. We first introduce conventions and summarise important characteristics of the Wess--Zumino chiral super multiplet structure. These discussions will help prepare the ground for the construction of the representation theory of the CSA later in the section.
\medskip
\noindent
For a generic relativistic field $\Phi(x_i,t)$ at an arbitrary spacetime point the infinitesimal supersymmetric transformation is given by
\be{45}
\delta_{\a}\Phi(x_i,t)=-i[(\a Q+\bar Q \bar{\a}),\Phi(x_i,t)]\,.
\end{equation}
Here, $\a_\b,\bar{\a}_{\dot \beta}$ are relativistic anti-commuting parameters. The finite supersymmetric transformation on the generic field is written as
\be{46}
\Phi(x_i,t)\to \Phi^\prime(x_i,t)=U^\dagger \Phi(x_i,t) U\qquad\qquad U=\exp\{-i(\a Q+\bar Q \bar{\a})\}\,.
\end{equation}
We quote here the $\mathcal{N}=1$ relativistic algebra between the fermionic generators $Q_\a,\bar Q_{\dot \alpha}$.
\be{rwz}
{[\a^\b_1 Q_\b,\bar Q_{\dot \beta}\bar{\a}^{\dot \beta}_2]=2\a^\b_1(\sigma^\mu)_{\b\dot \beta}\bar{\a}^{\dot \beta}_2P_\mu}\qquad\qquad [P_\mu,\Phi(x_i,t)]=-i\partial_\mu\Phi(x_i,t)
\end{equation}
Thus, two successive infinitesimal SUSY transformations act on a generic field as
\be{47}
[\delta_{\a_1},\delta_{\a_2}]\Phi(x_i,t)=-2i(\a_1\sigma^\mu\bar{\a}_2-\a_2\sigma^\mu\bar{\a}_1)\partial_\mu \Phi(x_i,t)\,.
\end{equation}
\paragraph{Wess--Zumino multiplet:} We now take a complex relativistic scalar field $\phi(x_i,t)$, along with its complex conjugate $\phi^\star(x_i,t)$, to review the representation of the relativistic $\mathcal{N}=1$ supersymmetric algebra \eqref{rwz} on a complex scalar field. Acting on the complex scalar field, the infinitesimal supersymmetric transformations generated by $Q_\a,\bar Q_{\dot \alpha}$ produce the chiral fermions $\psi_{\a}$ (left chiral) and $\chi_{\dot \alpha}$ (right chiral). In this process, the mass dimension changes by $\pm \frac{1}{2}$, resulting in a chiral fermion or a derivative of the scalar field. Next, the chiral fermions of mass dimension $\frac{3}{2}$ transform into the complex auxiliary scalar fields $F,F^\star$ under the action of $Q_\a,\bar Q_{\dot \alpha}$. The mass dimension of the auxiliary field is $2$. Thus, we get two sectors in the multiplet structure. The left chiral sector is generated by the following transformations:
\begin{subequations}{}\label{48}
\begin{align}
&\delta_{\a}\phi(x)=\sqrt{2}\a\psi(x),\\
&\delta_{\a}\psi(x)=\sqrt{2}i (\sigma^\mu\bar{\a}\partial_\mu \phi(x))+\sqrt{2}\a F(x),\\
&\delta_{\a}F(x)=-\sqrt{2}i(\partial_\mu\psi(x)\sigma^\mu\bar{\a}).
\end{align}
\end{subequations}
The right chiral sector is generated as follows:
\begin{subequations}{}\label{49}
\begin{align}
\delta_{\a}\phi^\star(x)&=\sqrt{2}\bar{\a}\chi(x),\\
\delta_{\a}\chi(x)&= -\sqrt{2}i({\a}\sigma^\mu\partial_\mu \phi^\star(x))+\sqrt{2}\bar{\a}F^\star(x),\\
\delta_{\a}F^\star(x)&=\sqrt{2}i(\a\sigma^\mu \partial_\mu\chi(x)).
\end{align}
\end{subequations}
This multiplet is also known as the Wess--Zumino multiplet.
\subsection{Carrollian fermions in Weyl representation}\label{carrollian_weyl}
In this section, we discuss Carrollian fermions in the Weyl representation. In the Carroll limit the two Weyl spinors decouple: the equations of motion contain no term mixing the chiralities. To see this, we start with the relativistic Weyl fermion
\be{50}
\Psi_{W}=\begin{bmatrix}
\psi_L\\
\chi_R
\end{bmatrix}\,.
\end{equation}
Here, $\psi_L$ and $\chi_R$ are the left and right chiral spinors. The equations of motion are
\be{51}
i\gamma^\mu \partial_\mu \Psi_{W}=0\qquad\implies \qquad
i(\sigma^0\partial_t\psi_L-\sigma^i\partial_i\psi_L)=0 \qquad\qquad
i(\sigma^0\partial_t\chi_R+\sigma^i\partial_i\chi_R)=0\,.
\end{equation}
The gamma matrices in the Weyl representation are
\be{52}
\gamma^0=\begin{bmatrix}
0&\sigma^0\\
\bar{\sigma}^0&0
\end{bmatrix}\qquad\qquad
\gamma^i=\begin{bmatrix}
0&\sigma^i\\
\bar{\sigma}^i&0
\end{bmatrix}\,.
\end{equation}
The action of boosts on the chiral fermions at the origin is given by
\be{53}
[B_i,\Psi_W(0,0)]=\Sigma_{i0}\Psi_{W}(0,0)\qquad\qquad\Sigma_{i0}=-\frac{1}{4}[\gamma^i,\gamma^0]=\begin{bmatrix}
-\frac{\sigma^i}{2}&0\\
0&\frac{\sigma^i}{2}
\end{bmatrix}\,.
\end{equation}
Similarly, under the rotation generators the chiral spinors transform as
\be{54}
[J_{ij},\Psi_W(0,0)]=\Sigma_{ij}\Psi_{W}(0,0)\qquad\qquad\Sigma_{ij}=-\frac{1}{4}[\gamma^i,\gamma^j]=\begin{bmatrix}
\frac{1}{4}[\sigma^i,\sigma^j]&0\\
0&\frac{1}{4}[\sigma^i,\sigma^j]
\end{bmatrix}\,.
\end{equation}
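Both the Clifford algebra and the block-diagonal form of the boost generators, which is what keeps the two Weyl components from mixing, can be checked directly; a short numerical sketch with the Weyl-representation matrices above:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
sg = [s0] + s                      # sigma^mu    = (sigma^0,  sigma^i)
sb = [s0] + [-si for si in s]      # sigmabar^mu = (sigma^0, -sigma^i)

def gamma(mu):
    out = np.zeros((4, 4), dtype=complex)
    out[:2, 2:], out[2:, :2] = sg[mu], sb[mu]   # off-diagonal Weyl blocks
    return out

g = [gamma(mu) for mu in range(4)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu}
for mu in range(4):
    for nu in range(4):
        assert np.allclose(g[mu] @ g[nu] + g[nu] @ g[mu],
                           2 * eta[mu, nu] * np.eye(4))

# Boost generators are block-diagonal: Sigma_{i0} = diag(-sigma^i/2, sigma^i/2)
for i in range(3):
    Sigma = -0.25 * (g[i + 1] @ g[0] - g[0] @ g[i + 1])
    block = np.zeros((4, 4), dtype=complex)
    block[:2, :2], block[2:, 2:] = -s[i] / 2, s[i] / 2
    assert np.allclose(Sigma, block)
print("Clifford algebra and chiral block structure verified")
```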
We now take the UR limit of the relativistic equations of motion,
\be{55}
t\to\epsilon t\qquad\qquad x_i\to x_i\qquad\qquad\psi_L\to \epsilon^{r}\psi_L\qquad\qquad\chi_R\to\epsilon^s\chi_R\qquad\qquad \epsilon\to 0
\end{equation}
so that the equations of motion become
\be{urref1}
\epsilon^{r-1}[\sigma^0\partial_t\psi_L-\epsilon \sigma^i \partial_i \psi_L]=0\qquad\qquad\epsilon^{s-1}[\sigma^0\partial_t\chi_R+\epsilon \sigma^i \partial_i \chi_R]=0\,.
\end{equation}
Here, $r$ and $s$ are arbitrary parameters. We see from equation \eqref{urref1} that the chiral spinors decouple irrespective of $r,s$, and we do not obtain any relation between $r$ and $s$. For convenience, we set $r=s=1$ and obtain the Carrollian Weyl equations of motion from the leading-order term in the $\epsilon \to 0$ limit:
\be{56}
i\sigma^0\:\partial_t \psi_L=0\qquad\qquad i\sigma^0\:\partial_t \chi_R=0\,.
\end{equation}
Thus, under rotations and boosts the spinors transform as
\begin{subequations}{}\label{urref3}
\begin{align}
&[B_i,\psi_L(0,0)]=0&[B_i,\chi_R(0,0)]&=0\\
&[J_{ij},\psi_L(0,0)]=\frac{1}{4}[\sigma^i,\sigma^j]\psi_L(0,0)&[J_{ij},\chi_R(0,0)]&=\frac{1}{4}[\sigma^i,\sigma^j]\chi_R(0,0)\,.
\end{align}
\end{subequations}
Using the identity $\frac{1}{4}[\sigma^i,\sigma^j]=\frac{i}{2}(\sigma^{ij})=\frac{i}{2}(\bar{\sigma}^{ij})$, we can restore the spinor indices and write
\begin{subequations}{}\label{urref2}
\begin{align}
&[J_{ij},\psi_{\a}]=\frac{i}{2}(\sigma^{ij})_{\a}^{\;\;\b}\psi_{\b},\\
&[J_{ij},\chi^{\dot \alpha}]=\frac{i}{2}(\bar{\sigma}^{ij})^{\dot \alpha}_{\;\;\dot \beta}\chi^{\dot \beta}\qquad\text{and}\qquad[J_{ij},\chi_{\dot \alpha}]=\frac{i}{2}\epsilon_{\dot \alpha\dot \beta}(\bar{\sigma}^{ij})^{\dot \beta}_{\;\;\dot{\rho}}\epsilon^{\dot{\rho}\dot \gamma}\chi_{\dot{\gamma}} \:.
\end{align}\end{subequations}
We will use \eqref{urref3}, \eqref{urref2} when discussing representation theory of CSA.
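As a quick sanity check of the identity quoted above, take $i=1$, $j=2$ with the standard Pauli matrices:

```latex
\sigma^1\sigma^2=\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}0&-i\\i&0\end{pmatrix}
=\begin{pmatrix}i&0\\0&-i\end{pmatrix}=i\sigma^3\,,
\qquad
\frac{1}{4}[\sigma^1,\sigma^2]=\frac{1}{4}\left(i\sigma^3-(-i\sigma^3)\right)=\frac{i}{2}\,\sigma^3\,.
```

So the rotation generators act on both chiralities through $\frac{i}{2}$ times a Pauli matrix, consistent with \eqref{urref3} and \eqref{urref2}.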
\subsection[Representation theory of \texorpdfstring{$\mathcal{N}=1$}{N=1} CSA in \texorpdfstring{$d=4$}{D=4}]{Representation theory of \texorpdfstring{$\boldsymbol{\mathcal{N}=1}$}{N=1} CSA in \texorpdfstring{$\boldsymbol{d=4}$}{D=4}}
To discuss the representation theory of $\mathcal{N}=1$ CSA in $d=4$, first, we consider the action of the finite set of generators on a complex scalar field and then extend our formalism to the infinite set of CSA generators.
\medskip
\noindent We construct the highest-weight representation for $\mathcal{N}=1$ CSA on fields with different spins. Similar to its non-supersymmetric cousin (reviewed in Sec.~\ref{Scaling for the Bosonic generators of Finite CCA}), we label the states with their scaling dimensions $\Delta$ and spin $j$. Due to $R$-symmetry transformations we have an additional label $r$ on the states. Using the state-operator correspondence, let us write
\be{59}
[D,\Phi(0,0)]=\Delta \Phi(0,0)\qquad\; [J^2, \Phi(0,0)]=j(j+1)\Phi(0,0)\qquad\; [R,\Phi(0,0)]=r\Phi(0,0)\,.
\end{equation}
Here, $\Delta$ is the eigenvalue of the dilatation operator $D$, $J^2$ is the quadratic Casimir associated with $SO(3)$ rotations, and $j$ is the corresponding spin. The $R$-symmetry generator commutes with all other bosonic generators (the $R$-symmetry group for $\mathcal{N}=1$ CSA is $U(1)$); hence, the states also carry a label $r$.
\medskip
\noindent We expect the spectrum, in particular the scaling dimension $\Delta$, to be bounded from below. This allows us to define CSA primaries in the CFT sense. Primaries are annihilated by the finite CSA generators
\be{60}
[K_i,\Phi(0,0)]=0\qquad\; [K,\Phi(0,0)]=0\qquad\; [S_\alpha,\Phi(0,0)]=0\qquad\; [\bar{S}_{\dot\alpha},\Phi(0,0)]=0\,.
\end{equation}
\medskip
\noindent
In the finite CSA we have $\{S,\bar S\} \sim K$. Hence, the constraint $[K,\Phi(0,0)]=0$ involving the temporal part of the special conformal transformation $K$ is automatically implied. Incorporating the infinite generators, the CSA primaries are defined as
\begin{subequations}{}\label{61}
\begin{align}
[K_i,\Phi(0,0)]&=0, &[S_\alpha,\Phi(0,0)]&=0,\; [\bar{S}_{\dot\alpha},\Phi(0,0)]=0\\
[G^{-}_{r_1,r_2,r_3},\Phi(0,0)]&=0,&[\tilde G^{-}_{r_1,r_2,r_3},\Phi(0,0)]&=0~~\text{for any $r_1,r_2,r_3\geq 0$}\\
[M_f,\Phi(0,0)]&=0, &&\text{for polynomials of degree $>0$}.
\end{align}
\end{subequations}
\medskip
\noindent
The scaling dimension of the fermionic operators $Q_\a,\bar Q_{\dot \alpha}$ is $+\frac{1}{2}$, whereas that of the fermionic superconformal operators $S_\a,\bar S_{\dot \alpha}$ is $-\frac{1}{2}$. The states are created by acting with $Q,\bar Q$ and $P_i,H$ on the CSA primaries. Also, $Q,\bar S$ carry the label $r=-\frac{3i}{4}$ and $S,\bar Q$ the label $r=\frac{3i}{4}$ under $R$ transformations.
\medskip
\noindent
We introduce a set of mutually anti-commuting parameters $\varrho_\b,{\bar{\varrho}}_{\dot \beta}$. They anticommute with every fermionic object and commute with every bosonic object. For a generic field $\Phi(x_i,t)$, we define the infinitesimal transformations
\be{62}
\delta_{\varrho}\Phi(x_i,t)=[\varrho^\b Q_\b,\Phi(x_i,t)]\qquad\qquad\bar{\delta}_{\bar{\varrho}}\Phi(x_i,t)=[{\bar Q_{\dot \beta}} \bar{\varrho}^{\dot \beta},\Phi(x_i,t)]
\end{equation}
with the convention
\be{63}
\varrho Q=\varrho^{\b}Q_{\b}=-\varrho_{\b}Q^{\b}=Q^{\b}\varrho_{\b}=Q\varrho\qquad\qquad\bar{Q}\bar{\varrho}=\bar{\varrho}_{\dot \beta}\bar{Q}^{\dot \beta}=-\bar{Q}^{\dot \beta}\bar{\varrho}_{\dot \beta}=\bar{\varrho}\bar{Q}\,.
\end{equation}
Also, for two fermionic objects (with $A=0,i$)
\be{64}
\psi \sigma^A \chi=\psi^\a (\sigma^A)_{\a\dot \beta}\chi^{\dot \beta}\qquad\qquad \chi \bar{\sigma}^A \psi=\chi_{\dot \alpha} (\bar\sigma^A)^{\dot \alpha\b}\psi_\b\,.
\end{equation}
\smallskip
\noindent
\subsection{Carrollian chiral multiplet}\label{Carrollian chiral multiplet}
In this subsection, we discuss the representation of $\mathcal{N}=1$ CSA on the fields. We start with a complex scalar field $\phi$ and its complex conjugate $\phi^\star$. We consider the Carrollian fermions $\Psi$ in the Weyl representation, together with a complex auxiliary scalar field $F$ and its complex conjugate $F^\star$. Let us first summarise the main results. The multiplet structure is the following:
\be{65}
\begin{split}
& \text{Complex scalars:~} ( \phi, \phi^\star)\\
& \text{Weyl Fermions:~} \Psi_{\text{Weyl}}=\begin{bmatrix}
\psi_{\a}\\
\chi_{\dot \alpha}
\end{bmatrix}
\\
& \text{Complex auxiliary scalars:~} (F, F^{\star})\,.
\end{split}
\end{equation}
There are two distinct sectors in the multiplet structure, analogous to its relativistic Wess--Zumino counterpart. We call these the left and right chiral sectors. The multiplet structure is displayed in Figure \ref{multiplet}.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{diagram.png}
\caption{Carrollian chiral multiplet structure}
\label{multiplet}
\end{center}
\end{figure}
In the left chiral sector, we have the field content $(\phi,\psi,F)$. The fields transform in the following way under the infinite $\mathcal{N}=1$ CSA generators:
\begin{subequations}{}\label{66}
\begin{align}
[G_f,\phi(x_i,t)]&=if\psi(x_i,t) &[\tilde{G}_h,\phi(x_i,t)]&=0\\ \label{eq:angelinajolie}
\{G_f,\psi(x_i,t)\}&=if\;F(x_i,t)&\{\tilde{G}_h,\psi(x_i,t)\}&=-2i\:\tilde{h}\:{\bar{\sigma}}^0\partial_t\phi(x_i,t)\\
[G_f,F(x_i,t)]&=0&[\tilde{G}_h,F(x_i,t)]&=-2i\;\tilde{h}\;(\partial_t\psi(x_i,t)\sigma^0)\\
[\mathfrak{R}_{k},\phi(x_i,t)]&=r\:\hat{k}\:\phi(x_i,t)&
[\mathfrak{R}_{k},\psi(x_i,t)]&=(r-\frac{3i}{4})\:\hat{k}\:\psi(x_i,t)\\
[\mathfrak{R}_{k},F(x_i,t)]&=(r-\frac{3i}{2})\:\hat{k}\:F(x_i,t)
\end{align}
\end{subequations}
where
\be{67}
G_f=f(x_i,\L)Q\qquad\qquad \tilde{G}_h=\tilde{h}(x_i,\bar\Lambda)\bar Q\qquad\qquad\mathfrak{R}_{k}=\hat{k}(x_i)R\,.
\end{equation}
Just to explain the notation, we remark that the left anti-commutator \eqref{eq:angelinajolie} for $f=1$ expands as $\{Q_\alpha,\psi_\beta(x_i,t)\}=i\;\epsilon_{\b\a}\;F(x_i,t)$ in component notation.
\medskip
\noindent
Similarly, for the right chiral sector, we have $(\phi^\star,\chi,F^\star)$. Under the infinite $\mathcal{N}=1$ super Carrollian conformal generators, the right chiral sector transforms as
\begin{subequations}{}\label{68}
\begin{align}
[G_f,\phi^\star(x_i,t)]&=0&[\tilde{G}_h,\phi^\star(x_i,t)]&=i\;\tilde{h}\;\chi(x_i,t)\\
\{G_f,\chi(x_i,t)\}&=-2i\:f\:\sigma^0\partial_t\phi^\star(x_i,t)&\{\tilde{G}_h,\chi(x_i,t)\}&=-i\;\tilde{h}\;F^\star(x_i,t)\\
[G_f,F^\star(x_i,t)]&=2i\;f\;(\sigma^0\partial_t\chi(x_i,t))&[\tilde{G}_h,F^\star(x_i,t)]&=0\\
[\mathfrak{R}_{k},\phi^\star(x_i,t)]&=r^\star\:\hat{k}\:\phi^\star(x_i,t)&\\
[\mathfrak{R}_{k},\chi(x_i,t)]&=(r^\star+\frac{3i}{4})\:\hat{k}\:\chi(x_i,t)& [\mathfrak{R}_{k},F^\star(x_i,t)]&=(r^\star+\frac{3i}{2})\:\hat{k} F^\star(x_i,t)\,.
\end{align}
\end{subequations}
For further details of the representation theory, we recall the action of the Carrollian boost and rotation generators on fields of different spins. We start with a generic complex scalar primary $\phi$ at the origin. The rotation and boost generators act on a scalar as
\be{69}
[J_{ij},\phi(0,0)]=0\qquad\qquad[B_i,\phi(0,0)]=0\,.
\end{equation}
For Carrollian Weyl spinors, we can write
\begin{subequations}{}\label{70}
\begin{align}
[J_{ij},\psi_{\a}(0,0)]&=\frac{1}{4}[\sigma_i,\sigma_j]_{\a}^{\:\:\b}\psi_{\b}(0,0)=\frac{i}{2}(\sigma_{ij})_{\a}^{\:\:\b}\psi_{\b}(0,0)\\
[J_{ij},\chi_{\dot \alpha}(0,0)]&=\frac{1}{4}\epsilon_{\dot \alpha\dot \beta}[\sigma_i,\sigma_j]^{\dot \beta}_{\:\:\dot{\rho}}\epsilon^{\dot{\rho}{\dot \gamma}}\chi_{\dot \gamma}(0,0)=\frac{i}{2}\epsilon_{{\dot \alpha}\dot \beta}(\bar{\sigma}_{ij})^{\dot \beta}_{\:\:\dot{\rho}}\epsilon^{{\dot{\rho}}\dot \gamma}\chi_{\dot \gamma}(0,0)\\
[B_i,\psi_\a(0,0)]&=[B_i,\chi_{\dot \alpha}(0,0)]=0\,.
\end{align}
\end{subequations}
\subsubsection*{Left chiral sector} We are now equipped to work out the transformations of the fields under the different fermionic generators. We use the Jacobi identities to find the infinitesimal transformations. The details of the calculations are presented in Appendix \ref{representation1}.
\begin{subequations}{}\label{529}
\begin{align}
\{B_i,\phi(0,0),Q_{\a}\}:&\implies [Q_\a,\phi(0,0)]=i\psi_\a(0,0)\qquad\delta_{\varrho}\phi(0,0)=i\varrho\psi(0,0)\\\label{boost1}
\{B_i,\phi(0,0),\bar Q_{\dot \alpha}\}:&\implies [[\bar Q_{\dot \alpha},\phi],B_i]=0\\
&\implies [\bar Q_{\dot \alpha},\phi(0,0)]=0\qquad \bar{\delta}_{\bar{\varrho}} \phi(0,0)=0\\
\{J_{ij},\phi(0,0),Q_{\a}\}: &\implies [J_{ij},\psi_{\a}(0,0)]=\frac{i}{2}(\sigma_{ij})_{\a}^{\:\:\b}\psi_{\b}(0,0)\\
\{Q_{\a},\bar Q_{\dot \alpha},\phi(0,0)\}:& \implies \{\bar Q_{\dot \alpha},\psi_{\a}\}=-2i(\sigma^0)_{\a\dot \alpha}\partial_t\phi(0,0)\\
&\implies \bar{\delta}_{\bar{\varrho}}\psi_{\b}(0,0)=2i(\sigma^0 \bar{\varrho})_{\b}\partial_t \phi(0,0)\\
\{J_{ij},Q_\a,\psi_\b(0,0)\}: &\implies \{Q_\a,\psi_\b(0,0)\}=i\epsilon_{\b\a}F(0,0)\\
&\implies\delta_{\varrho}\psi_\b (0,0)=i\varrho_\b F(0,0)\\
\{Q_{\a},\bar Q_{\dot \alpha},\psi_\b(0,0)\}:&\implies [\bar Q_{\dot \alpha},F(0,0)]=-2i(\partial_t\psi\;\sigma^0)_{\dot \alpha}\\
&\implies\bar{\delta}_{\bar{\varrho}} F(0,0)=-2i(\partial_t\psi\;\sigma^0\;\bar{\varrho})\\
\{Q_{\a},\bar Q_{\dot \alpha},F(0,0)\}:&\implies [Q_\a,F(0,0)]=0\qquad\delta_{\varrho}F(0,0)=0
\end{align}
\end{subequations}
The transformations above are consistent with the Jacobi identities involving all other finite CSA generators. We have also checked that the above infinitesimal transformations are consistent with our finite CSA; the details of the calculations are provided in Appendix \ref{representation1}.
\noindent The super Carrollian conformal primaries also carry $R$-charges. Let
\be{73}
[R,\phi]=r\phi\,.
\end{equation}
Then, the Jacobi identities further give
\begin{subequations}{}\label{73.5}
\begin{align}
& \{Q_\a,R,\phi\}: &\implies&& [R,\psi_\a]&=(r-\frac{3i}{4})\psi_\a\\
& \{Q_\a,\psi_\b,R\}: &\implies&& [R,F]&=(r-\frac{3i}{2})F\,.
\end{align}
\end{subequations}
Next, we consider the action of the other fermionic generators $S_\a,\bar S_{\dot \alpha}$ on a CSA primary at a generic spacetime point. For a CSA primary $\Phi$,
\be{74}
[S_\a,\Phi(0,0)]=[{\bar S}_{\dot \alpha},\Phi(0,0)]=0\,.
\end{equation}
Also, the transformation of primaries at a generic spacetime point can be obtained as
\be{75}
[\mathcal{O},\Phi(x_i,t)]=U[U^{-1}\mathcal{O}U,\Phi(0,0)]U^{-1}\qquad\qquad U=e^{-tH-x_iP_i}
\end{equation}
where $\mathcal{O}$ denotes a generic CSA generator. Using the Baker--Campbell--Hausdorff formula, we can evaluate the transformation rules of the primaries at a generic spacetime point under the fermionic generators
\begin{subequations}{}\label{leftS1}
\begin{align}
[S_\a,\phi(t,x_i)]&=0\\
[\bar S_{\dot \alpha},\phi(t,x_i)]&=\epsilon_{\dot \alpha\dot \beta}(x_i \bar{\sigma}_i)^{\dot \beta\a}\:\psi_{\a}(t,x_i)\\
\{S_\a,\psi_{\b}(t,x_i)\}&=-2(x_i \sigma_i)_{\a\dot \alpha}\:\epsilon^{\dot \alpha\dot \beta}(\bar{\sigma}^0)_{\dot \beta\b}\partial_t\phi(t,x_i)\\
\{{\bar S}_{\dot \alpha},\psi_{\b}(t,x_i)\}&=-\epsilon_{\dot \alpha\dot \beta}(x_i \bar{\sigma}_i)^{\dot \beta\a}\epsilon_{\a\b}F(t,x_i),\\
[{\bar S}_{\dot \beta},F(t,x_i)]&=0\\
[S_\b,F(t,x_i)]&=-2(x_i \sigma_i)_{\b\dot \beta}\:\epsilon^{\dot \beta\dot \gamma}(\bar{\sigma}^0)_{\dot \gamma\rho}\partial_t\psi^{\rho}(t,x_i)\,.
\end{align}
\end{subequations}
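The Baker--Campbell--Hausdorff step above can be made explicit. Writing $A=tH+x_iP_i$, so that $U=e^{-A}$, the similarity transformation in \eqref{75} is the nested-commutator series

```latex
U^{-1}\,\mathcal{O}\,U=e^{A}\,\mathcal{O}\,e^{-A}
=\mathcal{O}+[A,\mathcal{O}]+\frac{1}{2!}\,[A,[A,\mathcal{O}]]+\ldots\,,
\qquad A=tH+x_iP_i\,.
```

For the generators considered here the series truncates after finitely many terms, since repeated brackets with $H$ and $P_i$ eventually vanish. For instance, assuming (as in the relativistic parent algebra) that the bracket of $P_i$ with $\bar S_{\dot\alpha}$ closes on $Q$ and that $Q$ commutes with translations, only the single bracket contributes for $\mathcal{O}=\bar S_{\dot\alpha}$, producing the terms linear in $x_i\bar{\sigma}_i$ appearing in \eqref{leftS1}.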
The details of these derivations are provided in Appendix \ref{representation1}.
\medskip
Finally, we consider the action of the infinite-dimensional fermionic generators on the fields. These generators can be written as
\be{76}
G_f=f(x_i,\L)Q\qquad\qquad \tilde{G}_h=\tilde{h}(x_i,\bar\Lambda)\bar Q\qquad\qquad\mathfrak{R}_{k}=\hat{k}(x_i)R\,.
\end{equation}
The primaries of different spins transform as
\begin{subequations}{}\label{77}
\begin{align}
[G_f,\phi(x_i,t)]&=if\psi(x_i,t),~~&[\tilde{G}_h,\phi(x_i,t)]&=0\\
\{G_f,\psi(x_i,t)\}&=if\;\epsilon_{L}\;F(x_i,t),~~&\{\tilde{G}_h,\psi(x_i,t)\}&=-2i\:\tilde{h}\:{\bar{\sigma}}^0\partial_t\phi(x_i,t),~~\epsilon_L=\epsilon_{\b\a}\\
[G_f,F(x_i,t)]&=0,~~&[\tilde{G}_h,F(x_i,t)]&=-2i\;\tilde{h}\;(\partial_t\psi(x_i,t)\sigma^0)\\
[\mathfrak{R}_{k},\phi(x_i,t)]&=r\:\hat{k}\:\phi(x_i,t)\\
[\mathfrak{R}_{k},\psi(x_i,t)]&=(r-\frac{3i}{4})\:\hat{k}\:\psi(x_i,t)\\
[\mathfrak{R}_{k},F(x_i,t)]&=(r-\frac{3i}{2})\:\hat{k}\:F(x_i,t)\,.
\end{align}
\end{subequations}
As a consistency check, notice that putting $f=\L,\tilde{h}=\bar\Lambda$ in the above equations reproduces the commutators \eqref{leftS1}.
\paragraph{Left chiral sector from the limit:} So far, we have studied the representation theory of $\mathcal{N}=1$ CSA in $d=4$ purely from the intrinsic approach, which involves studying the action of the various generators on the CSA primaries. Here, we examine the consistency of our construction with the limiting approach, i.e.\ taking the UR limit of the parent relativistic representation theory of the $\mathcal{N}=1$ supersymmetric theory. We restrict ourselves to the Wess--Zumino multiplet reviewed in section \ref{wz}.
The relativistic infinitesimal transformations of the Wess--Zumino multiplets are given by
\begin{subequations}{}\label{relleft1}
\begin{align}
&\delta_{\a}\phi(x)=\sqrt{2}\a\psi(x)\\
&\delta_{\a}\psi(x)=\sqrt{2}i (\sigma^\mu\bar{\a}\partial_\mu \phi(x))+\sqrt{2}\a\tilde{F}(x)\\
&\delta_{\a}F(x)=-\sqrt{2}i(\partial_\mu\psi(x)\sigma^\mu\bar{\a})\\
\nonumber &\text{where, }\delta_{\a}=-i(\a Q+\bar Q \bar{\a}),~\a_\b,\bar{\a}_{\dot \beta}: ~~\text{relativistic anti-commuting parameters.}
\end{align}
\end{subequations}
Take the asymmetric scaling \eqref{asymm1},
\be{78}
Q\to Q\qquad\qquad\bar Q\to \frac{1}{\epsilon}\bar Q\qquad\implies\qquad \delta_{\a}\to -i(\delta_\varrho+\frac{1}{\epsilon}\bar{\delta}_{\bar{\varrho}})
\end{equation}
together with the scaling of the complex scalar field, the Weyl chiral fermions and the usual spacetime derivatives:
\be{79}
\phi \to \frac{1}{\epsilon}\phi\qquad\qquad\psi\to \frac{1}{\epsilon}\psi\qquad\qquad F\to \frac{1}{\epsilon}F\qquad\qquad \partial_t\to\frac{1}{\epsilon}\partial_t\qquad\qquad \partial_i \to \partial_i\,.
\end{equation}
Inserting these scalings into \eqref{relleft1} and equating orders of $\frac{1}{\epsilon}$ on both sides of the infinitesimal transformations yields
\begin{subequations}{}\label{80}
\begin{align}
-i\delta_{\varrho}\phi&=\sqrt{2}\varrho \psi,&\bar{\delta}_{\bar{\varrho}}\phi&=0,\\
-i\bar{\delta}_{\bar{\varrho}}\psi&=\sqrt{2}i((\sigma^0\bar{\varrho})\partial_t\phi+\epsilon(\sigma^i\bar{\varrho}) \partial_i\phi), &-i\delta_{\varrho}\psi&=\sqrt{2}\varrho F,\\
\delta_\varrho F&=0,&-i\bar{\delta}_{\bar{\varrho}}F&=-\sqrt{2}i((\partial_t\psi \:\sigma^0\:\bar{\varrho})+\epsilon(\partial_i\psi \:\sigma^i\:\bar{\varrho})).
\end{align}
\end{subequations}
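To make the order matching explicit for the first relation, insert the scalings \eqref{78} and \eqref{79} into the relativistic variation $\delta_{\a}\phi=\sqrt{2}\,\a\psi$ (with the identification $\a\to\varrho$, $\bar{\a}\to\bar{\varrho}$):

```latex
-i\Big(\delta_{\varrho}+\frac{1}{\epsilon}\,\bar{\delta}_{\bar{\varrho}}\Big)\frac{1}{\epsilon}\,\phi
=\sqrt{2}\,\varrho\,\frac{1}{\epsilon}\,\psi
\qquad\Longrightarrow\qquad
\mathcal{O}(\epsilon^{-2}):~\bar{\delta}_{\bar{\varrho}}\phi=0\,,
\qquad
\mathcal{O}(\epsilon^{-1}):~-i\,\delta_{\varrho}\phi=\sqrt{2}\,\varrho\,\psi\,.
```

The remaining relations in \eqref{80} follow in the same way.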
Finally, in the limit $\epsilon\to0$,
\be{81}\boxed{
\phantom{\Bigg(}
\begin{aligned}
\delta_{\varrho}\phi&=i\sqrt{2}\varrho \psi&\bar{\delta}_{\bar{\varrho}}\phi&=0\\
\delta_{\varrho}\psi&=\sqrt{2}i\varrho F &\bar{\delta}_{\bar{\varrho}}\psi&=-\sqrt{2}(\sigma^0\bar{\varrho})\partial_t\phi\\
\delta_\varrho F&=0&\bar{\delta}_{\bar{\varrho}}F&=\sqrt{2}(\partial_t\psi \:\sigma^0\:\bar{\varrho})\,.
\end{aligned}
\phantom{\Bigg)}}
\end{equation}
Up to normalisation constants, these infinitesimal transformations are consistent with the transformation rules obtained in the intrinsic analysis.
\medskip
\noindent
\subsubsection*{Right chiral sector} Finally, we focus on the right chiral sector of the $\mathcal{N}=1$ CSA representation theory. We repeat the analysis of the left chiral sector without presenting all intermediate steps, since the calculations are analogous. The details of the Jacobi identities are described in Appendix \ref{representation2}. Here, we only mention the most important results, especially where they differ from the left chiral sector. At $t=x_i=0$ we have the (graded) commutation relations
\begin{subequations}{}
\label{540}
\begin{align}
\label{540a} [\bar Q_{\dot \alpha},\phi^\star]&=i\chi_{\dot \alpha}& [Q_{\a},\phi^\star]&=[\bar Q_{\dot \alpha},F^{\star}]=0 \\ \label{540b}
\{Q_{\a},\chi_{\dot \alpha}\}&=-2i(\sigma^0)_{\a\dot \alpha}\partial_t\phi^\star& \{\bar Q_{\dot \alpha},\chi_{\dot \beta}\}&=-i\epsilon_{\dot \beta{\dot \alpha}}F^\star\\ \label{540c}
[Q_{\a},F^\star]&=2i(\sigma^0)_{\a\dot \alpha}\epsilon^{\dot \alpha\dot{\rho}}\partial_t\chi_{\dot{\rho}} &[J_{ij},\chi_{\dot \alpha}]&=\frac{i}{2}\epsilon_{\dot \alpha\dot \beta}(\bar{\sigma}^{ij})^{\dot \beta}_{\:\:\dot{\rho}}\epsilon^{\dot{\rho}\dot \gamma}\chi_{\dot \gamma}\\ \label{544}
[R,\phi^{\star}]=r^\star\phi^{\star}\quad [R,\chi_{\dot \alpha}]&=(r^\star+\frac{3i}{4})\chi_{\dot \alpha} & [R,F^{\star}]&=(r^\star+\frac{3i}{2})F^{\star}\,.
\end{align}
\end{subequations}
\noindent The $S$-transformations give
\begin{subequations}{}\label{rightS}
\begin{align}
[S_\a,\phi^\star(t,x_i)]&=(x_i {\sigma}_i)_{\a\dot \alpha}\epsilon^{\dot \alpha\dot \beta}\:\chi_{\dot \beta}(t,x_i)&
[\bar S_{\dot \alpha},\phi^\star(t,x_i)]&=0\\
\nonumber \{{\bar S}_{\dot \alpha},\chi_{\dot \beta}(t,x_i)\}&=-2\epsilon_{\dot \alpha\dot \beta}(x_i \bar{\sigma}_i)^{\dot \beta\a}\:({\sigma}^0)_{\a\dot{\rho}}\partial_t\phi^\star(t,x_i)&
\{S_\a,\chi_{\dot \beta}(t,x_i)\}&=-(x_i {\sigma}_i)_{\a\dot \beta}F^\star(t,x_i)\\
[{\bar S}_{\dot \alpha},F^\star(t,x_i)]&=2\epsilon_{\dot \alpha\dot \beta}(x_i \bar{\sigma}_i)^{\dot \beta\a}\:({\sigma}^0)_{\a\dot{\rho}}\partial_t\chi^{\dot{\rho}}(t,x_i)&
[S_\a,F^\star(t,x_i)]&=0\,.
\end{align}
\end{subequations}
Thus, we can find the transformation rules of the right chiral sector under the $\mathcal{N}=1$ finite CSA generators, consistent with the $\mathcal{N}=1$ CSA algebra. The transformation rules under the infinite (fermionic) symmetry generators were already given in \eqref{68}.
\noindent
As a check, putting $f=\L,\tilde{h}=\bar\Lambda$ in equations \eqref{68} reproduces the $S$-transformation relations \eqref{rightS}.
\medskip
\noindent
\paragraph{Right chiral sector from the limit:} After the intrinsic analysis, we revisit the right chiral sector from the limiting point of view. The relativistic infinitesimal transformations of the right chiral multiplet are given as
\begin{subequations}{}\label{relright}
\begin{align}
\delta_{\a}\phi^\star(x)&=\sqrt{2}\bar{\a}\chi(x),&
\delta_{\a}\chi(x)&= -\sqrt{2}i({\a}\sigma^\mu\partial_\mu \phi^\star(x))+\sqrt{2}\bar{\a}\tilde{F}^\star(x),\\
\delta_{\a}F^\star(x)&=\sqrt{2}i(\a\sigma^\mu \partial_\mu\chi(x)),\\
\nonumber\text{where }\delta_{\a}&=-i(\a Q+\bar Q \bar{\a}),&\a_\b,\bar{\a}_{\dot \beta} &\text{: relativistic anti-commuting parameters.}
\end{align}
\end{subequations}
\medskip
\noindent
We take the asymmetric scaling \eqref{assym3} ($Q\to \frac{1}{\epsilon}Q, \: \bar Q\to\bar Q$) along with the scaling of the complex scalar field and Weyl chiral fermions, and the usual spacetime scaling, implying
\be{84}
\delta_{\a}\to-i(\frac{1}{\epsilon}\delta_\varrho+\bar{\delta}_{\bar{\varrho}}),~~ \phi^\star \to \frac{1}{\epsilon}\phi^\star,~~ \chi\to \frac{1}{\epsilon}\chi,~~ F^\star\to \frac{1}{\epsilon}F^\star,~~ \partial_t\to\frac{1}{\epsilon}\partial_t,~~ \partial_i \to \partial_i\,.
\end{equation}
Plugging these scalings into \eqref{relright} and equating orders of $\frac{1}{\epsilon}$ on both sides of the infinitesimal transformations yields the following transformation relations in the limit $\epsilon\to0$:
\be{87}
\boxed{
\phantom{\Bigg(}
\begin{aligned}
\delta_{\varrho}\phi^\star&=0&\bar{\delta}_{\bar{\varrho}}\phi^\star&=i\sqrt{2}\bar{\varrho} \chi\\
\delta_{\varrho}\chi&=\sqrt{2}(\varrho\sigma^0)\partial_t\phi^\star&\bar{\delta}_{\bar{\varrho}}\chi&=i\sqrt{2}\bar{\varrho} F^\star\\
\delta_{\varrho}F^\star&=-\sqrt{2}(\varrho\sigma^0\partial_t\chi)&\bar{\delta}_{\bar{\varrho}} F^\star&=0\,.
\end{aligned}
\phantom{\Bigg)}
}
\end{equation}
These infinitesimal transformations are analogous to their left chiral counterparts \eqref{81} and, up to normalisation constants, are consistent with the transformation rules obtained in the intrinsic analysis.
\section{Conclusions}
\label{Conclusions}
We have explored the algebraic structures associated with Carrollian superconformal symmetry --- or equivalently super BMS symmetry --- in the context of field theories in dimensions greater than three. We specifically concentrated on boundary dimension $d=4$, but the method outlined is readily generalisable to all higher dimensions. We identified the contraction of the corresponding relativistic algebra that is useful for holography of asymptotically flat spacetimes and then lifted the finite contracted algebra to an infinite-dimensional one. Our construction generalises to higher $\mathcal{N}$. We checked our proposed infinite lift by reproducing results in the literature for $d=2, 3$. Finally, we returned to $\mathcal{N}=1$ Carrollian superconformal field theories and worked out details of their representation theory. In particular, we discussed the Carrollian version of the Wess--Zumino multiplet, laying the groundwork for similar constructions for other multiplets to be addressed in the future.
\subsection{Discussion and future directions}
The algebraic construction in this paper is relevant for understanding Carrollian superconformal field theories. Our final aim is to build a Carrollian version of $\mathcal{N}=4$ $SU(N)$ super Yang--Mills theory so that we have a concrete proposal for the boundary theory in the flat limit of Maldacena's original AdS/CFT correspondence. Building on the representation theory of $\mathcal{N}=1$ Carrollian superconformal field theories in the present paper, we aim to construct explicit examples of Carrollian superconformal field theories, starting with $\mathcal{N}=1$ Carrollian SUSY electrodynamics, before moving to Yang--Mills and higher $\mathcal{N}$. It could be rewarding to find all possible central extensions of our algebras \eqref{infal1}-\eqref{infalgebra} and \eqref{infal}. Our preliminary search for central terms by educated guessing was unsuccessful, but we did not attempt a complete analysis of non-trivial co-cycles.
\medskip
\noindent The fact that the underlying symmetry structures are infinite-dimensional may indicate some hidden integrability in the Carrollian sector of these SUSY gauge theories. $\mathcal{N}=4$ SYM provides a natural playground to investigate integrability \cite{Beisert:2010jr}. A first step would be to understand what survives in the planar limit of Carrollian $\mathcal{N}=4$ SYM, which can possibly be obtained directly from the corresponding UR limit. But there could be more structure in the Carrollian sector, and perhaps integrability beyond the planar limit. If this were true, we would have stumbled on a new integrable sector of relativistic $\mathcal{N}=4$ SYM, viz., the Carrollian sector.
\medskip
\noindent When considering the flat version of the AdS$_5$/CFT$_4$ correspondence, it is necessary to understand the 't~Hooft large-$N$ limit in the Carrollian context, where the speed of light goes to zero along with the usual 't~Hooft scaling $N\to \infty$ at fixed $\lambda\equiv g_{\text{YM}}^2 N$.
There is a myriad of possibilities for how to take this modified large-$N$ limit, generalising the original analysis in relativistic gauge theories. Now that we understand the algebraic structures that govern the symmetries of the Carrollian superconformal theory we are after, the possible large-$N$ limits have to be consistent with the particular scalings discussed in our paper. It could be useful to first understand this in the $\mathcal{N}=1$ context before generalising to the case of interest, viz., $\mathcal{N}=4$.
\medskip
\noindent After the large $N$ limit is established, the next step would be to construct the bulk-boundary dictionary between flat space supergravity (and, ultimately, superstrings) and these Carrollian superconformal theories. For the AdS$_5$/CFT$_4$ correspondence, the curvature radii of the AdS$_5$ and the accompanying S$^5$ are equal and it would seem that a flat limit on the AdS$_5$ needs to be accompanied by a similar flat limit on the S$^5$, thus making the resulting theory a 10-dimensional flat space theory. This should be reflected in terms of a contracted $R$-symmetry of the form we discussed in our generalisations to $\mathcal{N}=4$ in Sec.~\ref{n=4}. Eventually, we intend to understand the bulk-boundary dictionary both as a limit of the original correspondence and intrinsically. It is likely that the limit only yields a part of the answer, as the contraction of symmetries generates only the Poincar{\'e} subalgebra of the BMS algebra and not the entire symmetry structure. Our expectation is that the infinite-dimensional symmetries constructed in our work will make it easier to understand holography in flat space in general dimensions.
\bigskip
\paragraph{Note added:} While this work was being completed, Ref.~\cite{Banerjee:2022abf} was posted on the {\tt arXiv}; it has overlap with ours. While their focus is on boundary dimensions $d=2$ and $d=3$, ours is on boundary dimension $d>3$, specifically $d=4$.
\bigskip
\section*{Acknowledgements}
It is a pleasure to thank Stephane Detournay, Hong Liu, and Joan Sim\'on for discussions at early stages, Matthias Heumesser, and Wout Merbis at intermediate stages, and Rudranil Basu at all stages of this project.
\smallskip
\noindent AB is partially supported by a Swarnajayanti fellowship of the Department of Science and Technology, India and by the following grants from the Science and Engineering Research Board: SB/SJF/2019-20/08, MTR/2017/000740, CGR/2020/002035.
\noindent DG was supported by the Austrian Science Fund (FWF), projects P~30822, P~32581 and P~33789.
\noindent PN was supported by the Science and Engineering Research Board (SERB-OVDF) fellowship ODF/2018/000759 during the early stages of this work.
\section{Introduction}
\label{intro}
With the startup of the CERN LHC, heavy-ion collisions will enter a new era. With a center-of-mass
energy of 5.5 TeV, about a factor 30 larger than the energy available at RHIC, matter will be produced
in a new domain of high energy density (15 to 60 GeV/fm$^3$) and high initial
temperature (a factor 3 to 5 higher than the critical temperature, T$_c$). At high temperature
and high energy density, QCD calculations \cite{karsch1,lat2}
predict a phase transition from a hadron gas to a quark gluon plasma (QGP) and chiral symmetry restoration.
The time the system is expected to spend in the QGP phase is of the order of 10~$fm/c$, and the QGP is expected to extend
over a large volume. The cross sections of hard probes increase by large factors at LHC energies compared
to RHIC or SPS \cite{fabjan1}.
Among the different observables, photons and dileptons are particularly interesting because they directly probe
the high-temperature and high-density phase of these collisions.
The pp runs will provide a test of pQCD as well as important reference data for the heavy-ion runs. With the first
heavy-ion run expected in 2009, the global event characteristics and bulk properties will be established,
and hard-probe measurements can begin.
\section{Direct photon phenomenology at the LHC}
Photons are produced through several mechanisms in heavy-ion collisions.
Direct photons are produced at leading order (LO) in quark-gluon Compton scattering (qg$\rightarrow$ q$\gamma$)
and quark anti-quark annihilation (q$\bar q$ $\rightarrow$g$\gamma$). The next-to-leading order (NLO) contribution
is dominated by bremsstrahlung and fragmentation photons (qg$\rightarrow$qg$\gamma$). In addition, there are
photons from jet re-interactions in the medium and thermal photons from the QGP and from the hot hadronic
stage following the QGP.
Direct photons have to be isolated from the main source of background photons, i.e.\ the decay of hadrons
(mainly $\pi^0$'s and $\eta$'s) after the QGP phase.
\begin{figure}
\includegraphics[width=0.99\linewidth]{photons-at-lhc.eps}
\caption{Contributing sources of high-p$_T$ photons at midrapidity in central Pb-Pb collisions at LHC from \cite{phot1}.}
\label{figgamlhc}
\end{figure}
The different contributions to the direct photon spectrum at LHC energies, as calculated in \cite{phot1},
are presented in Fig.~\ref{figgamlhc}. Other predictions are also available \cite{phot2,phot3,phot4,armesto}.
At RHIC, $\gamma$'s from LO parton scattering were the dominant contribution
for p$_T> 5$ GeV/c, while at the LHC jet-photon conversion in the plasma dominates between 8 and 14 GeV/c;
only for p$_T>$20 GeV/c does the hard NN scattering dominate.
Although photons are more abundantly produced at the LHC than at RHIC (by about a factor 10),
the ratio of direct photons to $\pi^0$'s is smaller at the LHC. The ratio is $\sim$10\% at a p$_T$ of 20 GeV/c \cite{photyellow};
thus very good PID to distinguish between direct and decay photons will be essential.
Recently it has been proposed \cite{turbide1} that, in order to distinguish between the different
photon sources, one can study their azimuthal anisotropy as a function of p$_T$ in conjunction with R$_{AA}$.
A calculation for semicentral Pb-Pb collisions at LHC energies \cite{liu1} predicts
v$_2$ = 0 for initial production, positive v$_2$ for jet fragmentation, and negative v$_2$ for jet-photon conversion processes.
Therefore, a negative v$_2$ together with R$_{AA}>$1 will be an unambiguous signature of medium-produced photons.
The high p$_T$ photon measurement opens the possibility of studying the jet fragmentation function (FF) and its
modification (redistribution of the parton energy inside the hadron jet) due to the medium effects in heavy-ion collisions
by using the $\gamma$-jet channel. As the photon is not affected by final state interactions, its transverse energy (E$_T$)
gives the energy of the jet before modification in the medium. This measurement would be complementary to jet
reconstruction in the energy range from 20 GeV to 80 GeV \cite{klein1}, or above 75 GeV \cite{loizides1,grau1}.
\section{ALICE, ATLAS and CMS as photon and dilepton detectors at the LHC}
\begin{figure}
\includegraphics[width=0.95\linewidth]{alice_pseudorapidity.eps}
\includegraphics[width=0.95\linewidth]{atlas_pseudorapidity.eps}
\includegraphics[width=0.95\linewidth]{denterria_cms_eta_phi_acceptance.eps}
\caption{ALICE (top), ATLAS (middle), and CMS (bottom) pseudorapidity and azimuthal coverage.}
\label{figpseudet}
\end{figure}
Three experiments will participate in the heavy-ion program at the CERN LHC.
ALICE is the experiment devoted to the study of heavy-ion collisions \cite{ali1,ali2},
and it will also participate in the pp running with an extensive physics program.
ATLAS \cite{atlas1} and CMS \cite{cms1} are experiments dedicated to the study of $pp$ collisions, although by now
they also have a rich heavy-ion program \cite{atlas2,cms2}.
\begin{figure*}
\includegraphics[width=0.3\linewidth]{alice-fig2a.eps}
\includegraphics[width=0.6\linewidth]{cms_atlas2.epsi}
\caption{Schematic layout of the ALICE (left), ATLAS (middle) and CMS (right) detectors.}
\label{figdet}
\end{figure*}
\begin{table*}
\begin{tabular}{|c||c||c||c||} \hline
~~~~~~Exp~~~~~~~&~~~~~~~~~~~~~~~~~ATLAS~~~~~~~~~~~~~~~~~~&~~~~~~~~~~~~~~~~~~~~~CMS~~~~~~~~~~~~~~~~~~~&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ALICE~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \hline
\end{tabular}
\begin{tabular}{|c||c|c||c|c||c|c|c||} \hline
Name & LAr Barrel & LArEndCap & ECAL(EB) & ECAL(EE) & PHOS & EMCAL & Barrel \\ \hline
Coverage & 0$<|\eta|<$1.4& 1.4$<|\eta|<$3.2& 0$<|\eta|<$1.5& 1.5$<|\eta|<$3.&0$<|\eta|<$0.12& 0$<|\eta|<$0.7 & (0$<|\eta|<$0.9 \\
& 2$\pi$ & 2$\pi$ & 2$\pi$ & 2$\pi$ & 0.6$\pi$ & 0.6$\pi$ & 2.$\pi$) $\cdot$ 7X/X$_0$ \\ \hline
Granularity&0.003$\times$0.100&0.025$\times$0.100&0.017$\times$0.017&0.017$\times$0.017&0.004$\times$0.004&0.014$\times$0.014&3$\cdot$10$^{-4} \times$ 2$\cdot$10$^{-4}$\\
$\Delta\eta\times\Delta\phi$& 0.025$\times$0.025& 0.025$\times$0.025& & to & & & resolution \\
& 0.025$\times$0.025&0.025$\times$0.025& & 0.05 $\times$ 0.05 & & & \\ \hline
Resolution &10\%/$\sqrt{E}$$\bigoplus$&10\%/$\sqrt{E}\bigoplus$&2.7\%/$\sqrt{E}\bigoplus$&5.7\%/$\sqrt{E}\bigoplus$&1.3\%/$\sqrt{E}\bigoplus$&7\%/$\sqrt{E}\bigoplus$&2\% low pt \\
& 0.5\% & 0.5\% & 0.55\% &0.55\% & 1.1\% & 1.5\% & 5\% high pt \\ \hline
\end{tabular}
\caption{Compilation of the photon detectors in the three LHC experiments with a brief description of their characteristics.}
\label{tablephotons}
\end{table*}
The ALICE detector (Fig.~\ref{figpseudet} top, Fig.~\ref{figdet} left) \cite{ali1,ali2} consists
of central barrel detectors, a forward muon spectrometer, and forward multiplicity and centrality detectors.
The central barrel, inside a large solenoid magnet with a field of up to 0.5 T, covers almost 2 units
of pseudo-rapidity ($|\eta|<$0.9). The charged-particle tracking detectors are the ITS (Inner
Tracking System), with three different silicon technologies (pixels, drift, and strips), the
TPC (Time Projection Chamber), which is the main tracking system for charged particles, and the TRD (Transition
Radiation Detector). All three tracking detectors are also used for particle identification (PID) by
measuring ionization losses or transition radiation. The TOF (Time Of Flight) detector is used for
PID in the intermediate momentum range (0.2 to 2.5 GeV/c).
Three smaller single-arm detectors complete the central barrel: the HMPID (High Momentum Particle
Identification) detector, which will extend PID up to momenta of 4-5 GeV/c; the high-resolution,
high-granularity electromagnetic calorimeter PHOS (PHOton Spectrometer), to measure photons and
neutral mesons; and the EMCAL (ElectroMagnetic CALorimeter), to improve the ALICE capabilities in
the measurement of high-energy jets and direct photons.
At forward rapidities, ALICE triggers and detects muons using the muon spectrometer with tracking and trigger
chambers in a 3~Tm magnetic field.
ALICE has excellent PID capabilities: $\pi$/K/p identification up to 50 GeV/c and $\gamma$, $e$, and $\mu$ up to 100 GeV/c.
The ALICE specific photon detectors are PHOS and the EMCAL. In addition, photons that
convert in the detector material ($\gamma$Z$\rightarrow$e$^-$e$^+$Z ) can be detected by measuring
the e$^+$ and e$^-$ in the central barrel. This is a clean photon identification method, providing
directional information used to reject non-vertex background. The momentum resolution at low p$_T$
achieved with this method is better than that of the electromagnetic calorimeters.
The possibility of using a L1 TRD trigger \cite{trd} to make use of this method at high p$_T$ is under investigation.
The ATLAS detector (Fig.~\ref{figpseudet} middle, Fig.~\ref{figdet} middle) \cite{atlas1,atlas2} has
hermetic azimuthal coverage over a wide range in pseudo-rapidity. The unique feature of the ATLAS detector
is its ca\-lo\-ri\-me\-try, composed of several independent longitudinal sampling layers of electromagnetic and
hadronic calorimetry with full azimuthal coverage over $|\eta|<$5. In particular, the Liquid Argon (LAr)
electromagnetic calorimeter provides
excellent energy and position information on electrons and photons. The inner tracking system,
within a 2~T solenoidal field, is equipped with silicon pixels, silicon strips, and a straw-tube tran\-si\-tion-ra\-dia\-tion
tracker for reconstructing charged tracks within $|\eta|<$2.5. A muon spectrometer ($|\eta|<$3) is
located within a toroidal field outside the hadronic calorimeters.
The CMS detector (Fig.~\ref{figpseudet} bottom, Fig.~\ref{figdet} right) \cite{cms1,cms2} is also a hermetic detector with a
large acceptance both for tracker and calorimetry.
The CMS detector is a 22 m (length) $\times$ 15 m (diameter) detector featuring a 4 T solenoid surrounding central
silicon pixel and micro-strip tracking detectors ($|\eta|<$2.4), and electromagnetic ($|\eta|<$3) and hadronic
($|\eta|<$5) calorimeters. Muon detectors ($|\eta|<$2.4) are embedded in the flux-return iron yoke of the magnet.
CMS detects leptons and both charged and neutral hadrons.
A compilation of the different photon detectors with their main characteristics in the three LHC experiments
is presented in Table~\ref{tablephotons}. PHOS and LAr (Barrel) have the best granularity.
\section{Prompt photons and photon-tagged jets}
To demonstrate the direct photon reconstruction capabilities ALICE has generated events using PYTHIA \cite{pythia}
triggered by prompt photons ($\gamma$-jet events) or $\pi^0$ (jet-jet events). To simulate Pb-Pb events, the pp events
were merged with heavy-ion events generated by HIJING \cite{hijing}.
Prompt photons are selected in PHOS by applying cuts on the shape of the shower developed in the detector and
isolation cuts \cite{nimali1,nimali2}.
The spectrum of direct photons detected in PHOS for Pb-Pb collisions \cite{nimali1,nimali2} is shown
in Fig.~\ref{figaligamPbPb}. An isolation cut of radius R=$\sqrt{\Delta \eta^2 + \Delta\phi^2}$=0.2 and
a p$_T$ threshold of 2 GeV/c are used. The background rejection with the applied cuts is 1/14 for an efficiency
of 50\%. A sample of 2000 $\gamma$ with p$_T>$20 GeV/c is expected during one LHC running year.
The spectrum of prompt photons detected in 2 PHOS modules for pp collisions is presented in Fig.~\ref{figaligampp}.
An isolation cut with R=0.3 and $\sum p_T <$~2~GeV/c has been used. A background rejection of 1/170 is obtained
for an efficiency of 69\%. A sample of 3000 $\gamma$ with p$_T>$ 20 GeV/c is expected during one LHC
running year (10~pb$^{-1}$) \cite{yaxian1}.
\begin{figure}[hbt]
\includegraphics[width=0.95\linewidth]{alicegamPbPb.eps}
\caption{Simulated prompt photon spectrum expected to be measured in ALICE during an LHC running year for
Pb-Pb collisions with statistical (bars) and systematic (shaded band) errors.}
\label{figaligamPbPb}
\end{figure}
\begin{figure}[hbt]
\includegraphics[width=0.5\textwidth]{alice-pp-gammapt.eps}
\caption{Isolated photon spectrum that can be measured in ALICE with 2 PHOS modules in pp collisions at 14~TeV.
The error bars are statistical errors. Systematic errors are given by the area around the data points.}
\label{figaligampp}
\end{figure}
As direct photons are not perturbed by the medium, the fragmentation function of the recoiling jet can be measured
using E$_\gamma$ as the jet energy. The ratio of the measured fragmentation functions for the two collision systems
shows that, for E~$>$~20~GeV, variations of R$_{FF}$ larger than 5\% in the range 0.1$<z=p_T/E_{jet}<0.5$ can be
measured, see Fig.~\ref{figalicerffPbPb}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\textwidth]{RatFinalFF_TPCnl_quench.eps}
\end{center}
\caption{Ratio of the fragmentation functions of $\gamma$-tagged jets with energy larger than 20 GeV for
Pb-Pb collisions scaled to pp collisions detected in the central tracking system and EMCal. The shaded
region represents the systematic error due to the contamination from jet-jet events.}
\label{figalicerffPbPb}
\end{figure}
The expected p$_T$ reach of the prompt photon spectrum will be larger in ATLAS and
CMS than in ALICE due to their larger acceptance.
\begin{figure}[hbt]
\includegraphics[width=0.95\linewidth]{atlas_etgamma.eps}
\caption{Isolated photon spectrum as expected to be measured by the ATLAS detector in one LHC running year \cite{cole1}.}
\label{figatlasgamet}
\end{figure}
Because the ATLAS detector has very good granularity in the first layer, isolated photons give
a very clear signal even in a heavy-ion background simulated using HIJING. The combination of shower-shape
and isolation cuts used for this study gives an efficiency of about 60\% and a rejection factor
of $\sim$10, assuming R$_{AA}^h$/R$_{AA}^\gamma$ =1.
The direct photon spectrum resulting from combining shape and
isolation cuts is presented in Fig.~\ref{figatlasgamet}~\cite{grau1,cole1}.
Samples of about 100K photons with E$>$30~GeV and about 10K with E$>$70~GeV are expected
for an integrated luminosity of 0.5 nb$^{-1}$.
\begin{figure}[htb]
\includegraphics[width=0.95\linewidth]{cms-photon-et.eps}
\caption{Isolated photon spectrum as expected to be measured by CMS in one year of Pb-Pb collisions \cite{loizides1}.}
\label{figcmsgam}
\end{figure}
CMS simulation studies are performed using PYQUEN~\cite{pyquen} and PYTHIA to generate the prompt photons with
and without jet quenching, respectively, and HYDJET~\cite{hydjet} to model the underlying heavy-ion event.
The working point of the photon detection is set to 60\% signal efficiency,
leading to a background rejection of about 96.5\%.
The photon isolation and shape cuts improve S/B by a factor of 15.
A detailed description of the photon detection method can be found in~\cite{loizides1}.
The photon spectrum for an integrated luminosity of 0.5 nb$^{-1}$ is shown in Fig.~\ref{figcmsgam}.
\begin{figure}[htb]
\includegraphics[width=0.95\linewidth]{cms-QuenUnquenRatio70_080110.eps}
\caption{Ratio of reconstructed (symbols) and MC truth (line) quenched fragmentation function over
unquenched MC truth in CMS. The estimated systematic error is given by the shaded band \cite{loizides1}.}
\label{figcmsrff}
\end{figure}
Isolated photons are associated with the back-to-back jet in order to reconstruct the fragmentation function.
The ratio of the reconstructed fragmentation functions in quenched Pb-Pb and in pp is shown in Fig.~\ref{figcmsrff}.
CMS obtains high significance for $\xi$ between 0.2 and 5 for E$_\gamma$ larger than 70 GeV (Fig.~\ref{figcmsrff}).
In addition to the $\gamma$-tagged jet measurement, jets can also be tagged with a virtual
photon or a Z$^0$. This method is free from the bias of the isolation cut that is applied in
$\gamma$-tagged jets. A detailed study for the CMS detector is presented in \cite{cms2,kunde}.
\section{Low mass dileptons}
The measurement of direct photons at low p$_T$ is usually performed on a statistical basis, namely by
subtracting the decay photons from all detected photons \cite{pere1}. The difficulty of this me\-thod
arises from the large background of decay photons.
To overcome the background problem it has been proposed to consider instead the emission of virtual
photons (lepton pairs). This method was used at the CERN ISR to set a limit on direct photon
production \cite{isr}. The invariant mass distribution of Dalitz decay pairs, as well as that of virtual photons,
is given by the Kroll-Wada formula \cite{knoll}.
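Schematically, and omitting the form factor and the phase-space factor relevant for Dalitz decays,
this distribution relates the e$^+$e$^-$ pair yield to the corresponding photon yield through the
standard QED internal-conversion expression
\[
\frac{dn_{ee}}{dm_{ee}} \simeq \frac{2\alpha}{3\pi}\,\frac{1}{m_{ee}}
\sqrt{1-\frac{4m_e^2}{m_{ee}^2}}\left(1+\frac{2m_e^2}{m_{ee}^2}\right) n_\gamma ,
\]
so that for $m_{ee} \gg 2m_e$ the pair yield falls essentially as $1/m_{ee}$.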
Beyond the mass of the $\pi^0$, only the $\eta$ meson contributes. As most of the $\gamma$'s come from $\pi^0$ decays,
the signal-to-background ratio for the direct photon signal improves considerably for m$_{ee}>m_{\pi^0}$.
This method also benefits from the excellent mass resolution at low
invariant mass and from the low conversion probability in the case of ALICE ($\sim$7\%).
For ATLAS and for CMS, where the conversion probability is higher ($\sim$20\%, $\sim$30\%),
it would be more difficult to apply this method.
This method has also been used recently by PHENIX to measure direct photons in pp and Au-Au collisions \cite{phenix1}.
While the yield is consistent with NLO pQCD calculations in pp collisions,
in Au-Au collisions the data lie above the calculations for p$_T<$3.5 GeV/c.
\begin{figure*}[hbt]
\begin{center}
\resizebox{0.85\textwidth}{!}{
\includegraphics{alice-dileptons-pp.eps}
\includegraphics{alice-dileptons-pbpb.eps}
}
\end{center}
\caption{Virtual photon spectrum reconstructed from the measurement of electron pairs with mass in the
range 0.2 to 0.6 GeV/c$^2$ in pp collisions at $\sqrt{s}$ = 5.5 TeV (left) and in central Pb-Pb collisions (right)
at $\sqrt{s_{NN}}$ = 5.5 TeV.}
\label{figdilep}
\end{figure*}
Theoretically, the rate of production of a dilepton pair in a finite mass range is given by \cite{aurenche}
\begin{equation}
{{d\sigma^{e^+e^-}} \over {dp_Tdy}} = C_{e^+e^-} \alpha {{d\sigma^{\gamma}} \over {dp_Tdy}}
\label{eqee}
\end{equation}
where C$_{e^+e^-}\sim 0.3$ for 0.2~GeV/c$^2<$M$_{e^+e^-}<$0.6~GeV/c$^2$, valid in the range 2 GeV/c$<$p$_T<$100 GeV/c.
The expected yield of virtual photons, calculated from Eq.~\ref{eqee} and the pQCD rate for one LHC
running year, is shown in Fig.~\ref{figdilep}. Thermal photons are expected to be the dominant contribution below
p$_T\sim$5~GeV/c \cite{phot2,phot3,phot4}; therefore the dilepton yield is probably underestimated.
Points at p$_T$ below 2 GeV/c should be taken with some caution, as this method assumes that the mass
of the dilepton pair is negligible compared to its p$_T$, and relies on the validity of Eq.~\ref{eqee}.
The electron pairs are detected and identified by the ALICE central tracking system (TPC and TRD).
\section{Conclusions}
The LHC machine and the experiments are finally a reality.
We have shown that the three experiments are equipped with very good photon and dilepton detectors
that will allow the measurement of electromagnetic probes at LHC energies from low to very high p$_T$.
The low-mass dilepton measurement will extend the direct photon spectrum to lower p$_T$.
The jet fragmentation function and its modification due to medium effects in heavy-ion collisions
will be studied using $\gamma$-tagged jets.
The ALICE, ATLAS, and CMS experiments will provide independent and complementary results
that will be essential to understand this new system.
\begin{acknowledgement}
{\it Acknowledgements}. I would like to express my gratitude to the organizers of the 3rd
International Conference on Hard and
Electromagnetic Probes in High Energy Nuclear Collisions and the ALICE management for the invitation to
speak in A Toxa. I would like also to thank the ALICE, the ATLAS and the CMS
Collaborations for providing the material for this talk and for useful discussions.
The author wishes to acknowledge the financial support received from GSI and from the German BMBF.
\end{acknowledgement}
\section{Introduction}
Bound states in quantum field theory are mysterious objects from a mathematical point of view. The calculation of the simplest 1-loop correction to the simplest bound state - the hydrogen atom - requires the use of hypergeometric functions, and not all steps in the calculation have been done analytically ~\cite{Pachucki}. Calculations at the 2-loop level are substantially more complicated ~\cite{Eides_book}, are scattered across many publications (see ~\cite{Eides1, Pachucki2, Yerokhin1}, to start with), and are not easy to follow for an average worker in the area of quantum field theory.
Calculations in the theory of helium excited states are correspondingly more complicated. One source of complexity stems from the fact that the mere formulation of the quantization condition for unperturbed wave functions involves the Stokes phenomenon ~\cite{Lazutkin,Faddeev_Merkuriev}, whose complete mathematical treatment is still lacking (see ~\cite{Eremenko} for the much easier anharmonic oscillator case, and ~\cite{Mochizuki,Kedlaya} for mathematical developments regarding the higher-dimensional Stokes phenomenon). It is known that classical 3-body systems are chaotic ~\cite{Lazutkin}. The quantum counterpart of this fact has not been clarified (see however ~\cite{Turbiner}). Radiative corrections to helium bound states were considered in a number of works, see e.g. ~\cite{Pachucki_helium}.
The bound state problem in QCD is a classic unsolved problem of our time. There exist various approximate approaches (see e.g. ~\cite{review}). Lattice approaches are very popular ~\cite{lattice}.
The remarkable feature of these developments is the absence of clear mathematical foundations of the theory. In the classic textbooks of the field ~\cite{Itzykson,Landau,Kinoshita} it is claimed that the so-called Bethe-Salpeter equations ~\cite{Bethe_Salpeter} are adequate to solve the problem completely. However, after examination of the literature cited above it becomes clear that these calculations are in fact not based on the Bethe-Salpeter equations, but are rather approximate perturbative schemes built around the notion of an unperturbed wave function. One promising way to organize the perturbative series is provided by NRQED ~\cite{Caswell}.
On the mathematical level, it is not clear what the meaning of the Bethe-Salpeter equations is in terms of the structure of the underlying spacetime. The author is not aware of attempts to formulate such equations on Riemannian manifolds. One would expect that there is a hierarchy of wave functions of an increasing number of variables that denote points in the underlying manifold, and of increasing spin. This of course makes sense only if one is willing to assume that there is a certain manifold that underlies physical processes, and that it is possible to interpret particle processes in terms of probability distributions on tuples of manifolds. This is not at all obvious, as it is imaginable (and probably true) that the true evolution of particle systems happens in a certain infinite-dimensional functional space, and is only projected to finite dimensions in the observed states.
In this paper, we prefer to be modest and focus on the study of analogues of the Bethe-Salpeter hierarchy in Riemannian geometry. We make the observation that it is possible to define a hierarchy of wave functions on an arbitrary Riemannian manifold. These multicomponent equations can be coupled in an intricate way. There is considerable freedom in coupling these equations. This freedom can be encoded in a universal algebra quite similar to the homotopy algebras extensively studied in mathematics ~\cite{homotopy_alg}, but still quite different. This algebra stems from a structure similar to Feynman diagrams.
The resulting equations form an intricate system of integro-differential equations, which suffers from the usual problem of ultraviolet divergences. The integration kernels have singularities on various multidiagonals and need to be regularized. We note that the problem of renormalization in Riemannian geometry is not completely solved in the literature (except for 3d manifolds ~\cite{Axelrod_Singer}).
\section{Classical Bethe-Salpeter equations on $M^4$}
In this section we explain the most classic version of the Bethe-Salpeter equations, as formulated in ~\cite{Bethe_Salpeter}. It is assumed that the spacetime is the Minkowski space $M^4$. This linear structure is very essential for the whole formulation. Translation invariance dictates that the Green's functions of fields are functions of the difference between the two arguments. This is a very strong assumption. The equation involves two fields - a spinor field $\psi_\alpha(x)$ and a gauge field $A_\mu(x)$. The spinor index denotes a spinor in the representation of the double cover $SU(2)\times SU(2)$ of the 4-dimensional Lorentz group, so the spinor is a section of a rank 4 bundle. We only consider $U(1)$ gauge fields (the formulation for other gauge groups is a non-trivial problem; see however ~\cite{Roberts} for an approach based on the Schwinger-Dyson hierarchy).
The basic objects in which the theory is formulated are causal Green's functions. The definition of these functions is somewhat complicated (see ~\cite{Bogoliubov}) and relies essentially on the Lorentzian geometry. For our purposes we only note that these functions satisfy the following equations
\begin{gather*}
(i\hat{\partial}_x+m)_{\alpha,\gamma}S_{\gamma\beta}(x-y)=\delta_{\alpha\beta}\delta(x-y) \\
\Box G_{\mu\nu}(x-y)=\delta(x-y)
\end{gather*}
These equations are ambiguous on the non-compact Minkowski space, and it is necessary to fix boundary conditions at temporal infinity. Two choices are possible - the one that requires the function to vanish in the remote past, which corresponds to retarded boundary conditions, and the one that requires vanishing in the forward light cone, which corresponds to advanced boundary conditions. We will denote these functions by the superscripts $ret$ and $adv$ respectively. The causal Green's functions are then defined as
\begin{gather*}
G(x)=G^{ret}(x)\theta(t)-G^{adv}(x)\theta(-t)
\end{gather*}
Starting from the functions $S(x), G(x)$ it is possible to define correlation functions using Feynman diagrams ~\cite{Landau}. We will be especially interested in the 2-electron $\rightarrow$ 2-electron kernel, as it plays a prominent role in the formulation of the Bethe-Salpeter equation for positronium - the basic system we are interested in. This kernel has the following signature: $K_{\alpha,\beta,\gamma,\zeta}(x,y,u,v)$. Due to translation invariance it depends only on 3 spacetime variables. It has 4 spinor indices. In the original approach, all perturbative contributions to this kernel were considered. These contributions are enumerated by Feynman diagrams with 4 external electron lines. A few contributions to $K$ are listed below
\begin{equation}
\int d^4z\, d^4w\, S(x,z)\gamma_{\mu} S(z,u)\,G_{\mu\nu}(z,w)\,S(y,w)\gamma_{\nu} S(w,v)
\end{equation}
This expression corresponds to the diagram with single photon line.
\begin{gather*}
\int d^4ad^4bd^4cd^4d S(x,a)\gamma_\mu S(a,c) \gamma_\nu S(c,u) G_{\mu\mu'}(a,b) \times\\
\times G_{\nu\nu'}(c,d) S(y,b)\gamma_{\mu'}S(b,d)\gamma_{\nu'}S(d,v)
\end{gather*}
which corresponds to the box diagram.
The Bethe-Salpeter equation is formulated in terms of the 2-particle wave function $\psi_{\alpha,\beta}(x,y)$ of the electron-positron pair. This function has two spinor indices, and each of them corresponds to either the electron or the positron space. The equation is
\begin{equation}
(i\hat{\partial_x}+m)(i\hat{\partial_y}+m)\psi(x,y)=\int d^4ud^4v K(x,y,u,v)\psi(u,v)
\end{equation}
This is an integro-differential equation.
Methods of solution invariably rely on the choice of a factorized zero-order approximation for the function $\psi(x,y)$.
This function ostensibly depends on two time variables. In the known approaches to solution, these two times are chosen to be equal. It is not known to the author how to avoid explicitly making this choice and still obtain the solution to the spectral problem (see however the promising approaches of ~\cite{multitime}).
It is possible to imagine multiple bound states in QED which contain electrons, positrons, and photons. There are good reasons to consider such mixed bound states. It is known that radiative corrections involve infrared divergences, and these divergences do indeed plague higher order computations. It is necessary to introduce infrared photons in the final state to cancel the loop contributions. This precisely corresponds to the consideration of a factorizable mixed electron-photon wave function.
Here we provide an example of a Bethe-Salpeter equation for the mixed wave function $\psi_{\alpha,\beta\mu}(x,y,z)$
\begin{equation}
(i\hat{\partial_x}+m)(i\hat{\partial_y}+m)\Box_z\psi_{\alpha,\beta\mu}(x,y,z)=\int K(x,y,z,u,v,w)\psi(u,v,w)d^4u d^4v d^4w
\end{equation}
This equation involves a kernel that has 6 external lines - 4 electron lines and 2 photon lines.
\section{Bethe-Salpeter hierarchy in Riemannian geometry}
In this section, we demonstrate that to any Riemannian manifold there corresponds a hierarchy of functions, and of equations for them, that generalizes the Bethe-Salpeter equations discussed above and the classical equations for Green's functions and harmonic forms. This hierarchy and its solutions are therefore functionals of the Riemannian metric and must have geometric meaning.
Instead of working with spinors and vectors as we did before, we choose to formulate our equations for forms, in order to simplify the discussion. The basic functions for which the equations are defined are forms on tuples of our original manifold $M^d$:
\begin{equation}
\omega_{I_1,...,I_n}(x_1,...,x_n),\ x_i \in M,\ I_i=\{\mu^{(i)}_1,...,\mu^{(i)}_{s_i}\}
\end{equation}
These forms are analogues of multiparticle wave functions;
they are elements of the space $\Omega^{I_1,...,I_n}(M\times...\times M)$.
For these functions we can formulate eigenfunction equations of the form
\begin{equation}
(\Delta_1+...+\Delta_n)\omega^{(k)}(x_1,...,x_n)=\lambda_k \omega^{(k)}(x_1,...,x_n)
\end{equation}
The index $k$ denotes the $k$-th eigenfunction (we assume the spectrum is discrete; a similar development can be carried out in the case of a mixed spectrum).
For the field $\omega^{(k)}(x_1,...,x_n)$ we can obtain the corresponding Green's function
\begin{gather*}
G_{I_1,...,I_n;J_1,...,J_n}(x_1,...,x_n;x'_1,...,x'_n) =\\
\sum_k \frac{\omega_{I_1,...,I_n}^{(k)}(x_1,...,x_n) \omega_{J_1,...,J_n}^{(k)}(x'_1,...,x'_n)}{\lambda_k}
\end{gather*}
This picture corresponds to the non-interacting theory. Now we turn to the introduction of interaction into this theory. We take our motivation from the Bethe-Salpeter theory and use integral kernels to add a nonlinear interaction to the Laplace equations. Our equations will be of the form
\begin{equation}
(\Delta_1+...+\Delta_n )\omega_n(x_1,...,x_n)+K(x_1,...,x_n)[\omega_1,\omega_2,...]=\lambda \omega_n(x_1,...,x_n)
\end{equation}
Here $K(x_1,...,x_n)[\omega_1,\omega_2,...]$ is a certain functional - the interaction functional - of the multiparticle wave functions. Our goal is to define its structure.
To simplify the notation, we have suppressed the tensor indices (which can be restored from the invariance condition). Using the set of solutions to this equation we define the Green's functions as follows
\begin{gather*}
G_{I_1,...,I_n;J_1,...,J_n}(x_1,...,x_n;y_1,...,y_n)=\\
\sum_k \frac{\omega^{(k)}_{I_1,...,I_n}(x_1,...,x_n)\omega^{(k)}_{J_1,...,J_n}(y_1,...,y_n)}{\lambda_k}
\end{gather*}
We now turn to the definition of the functional $K$. It is defined as the following formal expansion
\begin{gather*}
K(x_1,...,x_n)[\omega_1,\omega_2,....]=\\
=\sum_{k}\sum_{S_1,...,S_k} \int dy^{(1)}...dy^{(k)}K_{S_1,...,S_k}(x_1,...,x_n;y_{1,1},...,y_{1,m_1};....;y_{k,1},...,y_{k,m_k})\times\\
\times\omega_{S_1}(y_{1,1},...,y_{1,m_1})...\omega_{S_k}(y_{k,1},...,y_{k,m_k})
\end{gather*}
where the sum is performed over the number of variables and the tensorial structures that we wish to include in our theory. It is clear from this expression that there is considerable freedom in the formulation, as we are free to choose the signatures $S_i$.
The kernels $K_{S,S_1,...,S_k}(x_1,...,x_n;y_{1,1},...,y_{1,m_1};....;y_{k,1},...,y_{k,m_k})$ will be defined based on a generalization of Feynman diagrams. Conventional Feynman diagrams contain vertices of only one type, determined by the theory. Propagators are also fixed by the field content. We propose to generalize this construction and include effective propagators $G_{S,T}(x_1,...,x_n;y_1,...,y_n)$. These propagators carry the number of variables and the signature information. For each $n$ and each $S,T$ we have a propagator, and we define generalized Feynman lines by this data. The kernels are then sums over all modified Feynman diagrams, in which all possible dimensionalities of propagators, signatures of forms, and vertices are allowed.
Now we define the vertices of the theory. For each tuple $(\xi_1,S_1),...,(\xi_a,S_a)$, where $\xi_i=(x_{i,1},...,x_{i,q_i})$, we have a separate vertex. We choose arbitrarily a splitting of the set $(x_{i,a})$ into $w$ groups $Y_1,...,Y_w$ of variables $y_b$, such that $x_{i,a}=y_b$ within each group, and such that the corresponding parts of the tensor structures $S_a$ contract to form a volume form on the space $\{y_b\}=M$. As we see, there is considerable freedom in the definition of vertices. We will call such vertices vertices of $\{((x_{1,1},S_{1,1}),...,(x_{1,q_1},S_{1,q_1}));...;((x_{n,1},S_{n,1}),...,(x_{n,q_n},S_{n,q_n}))\} \rightarrow (y_1,...,y_w)$ type. Such a vertex corresponds to the interaction of the multiparticle effective fields $\omega_{R_1}(x_{1,1},...,x_{1,q_1}),...,\omega_{R_n}(x_{n,1},...,x_{n,q_n})$. Each such vertex corresponds to an integration over the variables $y_1,...,y_w$. One of the simplest types of vertices is given by the splitting into triples of variables, which corresponds to the elementary QED vertex.
\section{The case of non-matching tensor structure at the vertex}
Our definition of interaction vertices can be extended to the case when the tensor structures on each copy of $M$ are not equal to the volume form, but equal to a tensor of some lower rank $r_s$. We still wish to keep an integral representation for the vertices. We are therefore led to consider integrations over submanifolds $V_{r_1,...,r_a}$ of appropriate dimensionality in the space $M\times...\times M$ of the vertex. The dimensionality of this manifold is determined by the ranks $r_i$ of the forms. We consider this manifold to be an immersion of diffeomorphism type in our ambient space.
We can formulate our modified Feynman rules as follows. The data for the theory is defined by the basic field content $\omega_{\mu_1,...,\mu_s}(x)$ defined on $M$. From this data we construct the multi-field content, essentially as described in the previous section, obtaining forms on tuples of the manifold $M$, $\omega^{r_1,...,r_s}(x_1,...,x_s)$, where each $r_i$ denotes the tensor data $\mu^{(i)}_1,...,\mu^{(i)}_{r_i}$ that corresponds to the spatial variable $x_i$. The vertex is defined by the following data:
1) A set of multifields that can interact in this vertex. We denote them as $\omega^{R_1}(x^1_1,...,x^1_{d_1}),...,\omega^{R_n}(x^n_1,...,x^n_{d_n})$. Here $R_i$ denotes the tensor structure $R_i=\{r^{(i)}_1,...,r^{(i)}_{d_i}\}$, where each $r^{(i)}_s$ is as specified above.
2) A locality object. In the vertex we choose a set of variables $y_1,...,y_p$ and an assignment $x^m_k \rightarrow y_l$ for each of the arguments $x^m_k$. This is our version of locality.
3) Using the assignment $x^m_k \rightarrow y_l,\ l=1...p$, we consider the corresponding tensorial structures $r^{(m)}_k$. As part of the vertex data, we specify a rule to contract the indices in such a way as to obtain a form of a certain rank $w$ on the manifold $M^k$.
4) Choose an immersed manifold $N \subset M^k$ of dimension $w$ and pull back the form from 3) to this manifold. Integrate this form over the fundamental class of $N$.
This construction depends on the choice of immersions $N \subset M^k$, and we obtain a much richer theory.
\section{Conclusion}
We investigated analogues of the Bethe-Salpeter equations on arbitrary Riemannian manifolds. We constructed a set of models which are very natural from the geometric point of view and can be defined on an arbitrary Riemannian manifold. There is a natural hierarchy of multiparticle wave functions that can be defined on the manifold. The hierarchy depends on the choice of certain discrete data that encodes the combinatorics of particle interactions in our model. We also defined a more general class of models in which forms of intermediate rank on the interaction manifolds of the vertices are allowed. This class of models depends on the choice of an immersed manifold for each interaction manifold.
\bibliographystyle{amsplain}
\section{Introduction}
The main aim of this paper is to demonstrate an application of the
optimization paradigm to the derivation of
coherent filters for quantum systems, i.e., filters which themselves can be
realized as a quantum system. Such filters are highly
desirable in quantum engineering since they do not require conventional
non-quantum measurement devices and hence are able to deliver
technological advantages of quantum information
processing. To be concrete, we
focus on one type of optimal coherent filtering
problem concerned with equalization of distortions of quantum signals
transmitted via a quantum communication channel to help mitigate degrading
effects of the channel. Owing to the analogy with
classical channel equalization, we call this problem the \emph{quantum
equalization problem}.
Optimization has proven to be an essential tool in the design of classical
communication and signal processing systems. The Wiener filtering
theory~\cite{Wiener-1949} is the best known demonstration of how degrading
effects of noise and the channel can be mitigated using optimization
techniques.
While Wiener's solution is elegant and tractable in the case of
stationary signals and perfectly known channels, additional properties and
requirements on the signal, the channel or the filter impose further
optimization constraints~\cite{DLS-2002,WBV-1999}. This is precisely the
situation encountered in the derivation of
a coherent quantum filter, as such a filter must
satisfy the fundamental constraints of \emph{physical realizability}, in
order to represent a valid quantum physical system;
see~\cite{JNP-2008,SP-2012,NY-2017,MP-2011} and references therein.
\begin{figure}[t]
\begin{center}
\psfragfig[width=0.7\columnwidth]{bsplusbs}{
\psfrag{Quantum Channel}{Quantum channel}
\psfrag{Quantum Equalizer}{Coherent filter}
\psfrag{u}{$\breve u$}
\psfrag{w}{$\breve w$}
\psfrag{hatw}{$\breve d$}
\psfrag{z}{$\breve z$}
\psfrag{hatu}{$\breve {\hat u}$}
\psfrag{hatz}{$\breve {\hat z}$}
\psfrag{y}{$\breve y$}}
\caption{A quantum optical communication system consisting of two beam
splitters acting as a channel and a filter, respectively.}
\label{fig:bsplusbs}
\end{center}
\end{figure}
Conditions for physical realizability of quantum control systems have
received considerable attention in the control literature, in the
context of coherent quantum control and estimation problems including
problems of coherent $H_\infty$ control~\cite{JNP-2008,MP-2011a},
coherent quantum LQG control~\cite{NJP-2009} and coherent
filtering~\cite{VP-2013}. These conditions ensure that the
controller/filter can be implemented as a quantum physical
device. Fig.~\ref{fig:bsplusbs} illustrates
this situation. The quantum optical beam splitter on the left represents a
network of static quantum optical devices acting as a quantum communication
channel. The channel transmits a message $\breve{u}$ which suffers
from degradation due to the channel's physical environment $\breve{w}$.
The device on the right is another quantum physical device acting as a
filter aiming to reduce this degradation while being subjected to its own degrading environment $\breve z$. The general
aim of the coherent equalization problem is to design quantum systems
able to recover the transmitted message with high fidelity and
which are realizable as physical quantum devices. In this paper we
develop a procedure for the synthesis of transfer functions for such equalizing
filters.
Our focus is on the question whether
distortions introduced by a passive quantum communication channel can be
efficiently mitigated using another \emph{passive} quantum system acting as a
filter. Even restricted to passive filters, this question is
meaningful and sufficiently rich. Indeed, transfer functions
corresponding to passive coherent filters are easily implementable by
cascading quantum optical components such as beam splitters, optical
cavities and phase shift devices~\cite{Nurdin-2010,NY-2017}. Therefore,
answering the
question as to whether a (sub)optimal coherent equalizer can be obtained
within the class of passive systems enables synthesis
of physical devices which solve coherent equalization problems. The paper
gives examples of such synthesis.
The requirement for physical realizability
makes the task of finding an optimal coherent filter quite
nontrivial~\cite{VP-2013}. In~\cite{VP-2013}, this requirement led to nonconvex
constraints on the
state-space matrices of the filter which prohibited obtaining a closed form
solution. In this paper, following~\cite{UJ2a,UJ2b}, we cast
the coherent
equalization problem in the frequency domain. It turns out that in the frequency domain the physical
realizability constraints have a convenient structure.
They can be partitioned so that the constraints on the
`key' variables which determine the filter performance can be separated
from the constraints on the `slack' variables responsible for the physical
realizability of the filter. This leads us to adopt a two-step
procedure for the design of coherent suboptimal filters which
was originally proposed in~\cite{UJ2a}. In the first
step of this procedure, only some of the physical
realizability constraints are retained, and the filter performance is
optimized over the `key' variables subject to these constraints. We
call this problem the auxiliary optimization problem. In the second step,
the remaining variables of the filter are computed to fulfill the
requirement of physical realizability. The rigorous justification of this
procedure is one of the original contributions of the paper.
In contrast with the previous work~\cite{VP-2013,UJ2a,UJ2b}
concerned with developing coherent Wiener and Kalman filters,
we consider the problem in the vein of classical $H_\infty$
filtering~\cite{HSK-1999,Shaked-1990}. Also unlike the coherent $H_\infty$
control problem~\cite{JNP-2008,MP-2011a}, our
approach is concerned with
minimization of the largest eigenvalue of the power spectrum density (PSD)
matrix of the equalization error. This allowed us to
formulate the aforementioned auxiliary optimization problem as
a convex optimization problem whose constraints are frequency
dependent.
This approach led to two contributions. Firstly, we
characterize the class of causal suboptimal coherent filters
in a manner similar to the Youla parameterization of $H_\infty$ suboptimal
controllers~\cite{ZDG-1996}, via the technique of
$J$-spectral factorization~\cite{GGLD-1990,IO-1996}.
Secondly, we propose a Semidefinite Program (SDP) relaxation which
reduces the infinite family of frequency-dependent optimization constraints
to a finite number of constraints. Combined with the
method of Nevanlinna-Pick
interpolation~\cite{BGR-2013,DGK-1979,Kovalishina-1984} this gives
a tractable algorithm to obtain a physically realizable suboptimal filter.
The optimization approach allows us to reveal some peculiar
features of coherent equalizers which set them apart from
measurement-based filters.
It turns out that,
unlike the classical equalization problem, the mean-square error between
the input and output fields of a linear quantum system may not always be
improved using a coherent linear equalizer. This is consistent
with the earlier finding~\cite{UJ2a} that in the
simplest case when both the input field and the thermal noise field have one
degree of freedom and the channel is static, coherent equalization is truly beneficial when
the signal-to-noise ratio is below a certain threshold. The
paper relates the existence of such a threshold
to the question whether
a certain
frequency dependent Linear Matrix Inequality (LMI) is feasible, as a sufficient
condition in the general case.
The paper is organized as follows. In the next section we present the
background on physically realizable open linear quantum systems. The
coherent equalization problem is also introduced in
Section~\ref{sec:equal-probl} where it is posed as an $H_\infty$-like
filtering problem subject to
the physical realizability constraints. The
justification of the two-step procedure for the design of coherent
suboptimal filters
and the auxiliary optimization problem are presented in
Section~\ref{framework}. The relation between the
feasibility sets of this auxiliary problem and the corresponding classical
problem is also discussed in Section~\ref{framework}. A complete
characterization of
all suboptimal solutions for the auxiliary optimization problem is derived in
Section~\ref{feasible}.
An alternative suboptimal solution to
this auxiliary problem via semidefinite programming and Nevanlinna-Pick
interpolation is presented in Section~\ref{semidef}. Section~\ref{examples}
presents two examples which illustrate these results. In the first example, the
results are applied to a single mode system
consisting of static components. The second example is an optical cavity
system. For both examples, we
show how a suboptimal equalizer can be constructed via $J$-spectral
factorization, and also illustrate the semidefinite programming approach
undertaken in Section~\ref{semidef}. In the first example, we also
show that the bound on the performance delivered
via the $J$-spectral factorization approach is in fact tight, and that an
optimal equalizer can be obtained as a limit point of the set
of suboptimal equalizers derived using the $J$-spectral factorization. This
example also
illustrates the threshold on the signal-to-noise ratio of the input fields
that arises due to the requirement of physical realizability.
Conclusions and suggestions for future work are
given in Section~\ref{Conclusions}.
\paragraph*{Notation}
For a collection of operators $\mathbf{a}_1$, \ldots, $\mathbf{a}_n$ in a
Hilbert space $\mathfrak{H}$, the notation $\mathrm{col}(\mathbf{a}_1,
\ldots, \mathbf{a}_n)$ denotes the column vector of operators obtained by
concatenating operators $\mathbf{a}_j$, i.e., the operator mapping
$\mathfrak{H}$ into the Cartesian product of $n$ copies of the space
$\mathfrak{H}$, $\mathfrak{H}^n$. For an operator
$\mathbf{a}:\mathfrak{H}\to \mathfrak{H}$, $\mathbf{a}^*$ denotes the
adjoint operator, and when
$\mathbf{a}=\mathrm{col}(\mathbf{a}_1, \ldots, \mathbf{a}_n)$,
$\mathbf{a}^\#$ denotes the column vector of adjoint operators,
$\mathbf{a}^\#=\mathrm{col}(\mathbf{a}_1^*, \ldots, \mathbf{a}_n^*)$,
$\mathbf{a}^T=(\mathbf{a}_1~\ldots~ \mathbf{a}_n)$ (i.e., the row of
operators), and $\mathbf{a}^\dagger = (\mathbf{a}^\#)^T$. Also, we
will use the notation
$\breve{\mathbf{a}}=\mathrm{col}(\mathbf{a},\mathbf{a}^\#)=
\mathrm{col}(\mathbf{a}_1, \ldots, \mathbf{a}_n,\mathbf{a}_1^*, \ldots,
\mathbf{a}_n^*)$. $[\mathbf{a},\mathbf{b}]$
denotes the commutator of the operators $\mathbf{a},\mathbf{b}$ in
$\mathfrak{H}$,
$[\mathbf{a},\mathbf{b}]=\mathbf{a}\mathbf{b}-\mathbf{b}\mathbf{a}$.
The quantum expectation of an operator $\mathbf{v}$ of a quantum system in
a state $\rho$ is denoted $\langle \mathbf{v}\rangle=\tr[\rho
\mathbf{v}]$~\cite{Parthasarathy-2012}.
For a complex number $a$, $a^*$ is its complex conjugate and
for a matrix $A=(A_{ij})$, $A^\#$, $A^T$, $A^\dagger$ denote, respectively,
the matrix of complex conjugates $(A_{ij}^*)$, the transpose matrix and the
Hermitian adjoint matrix.
$I$ is the identity matrix, and
$
J=
\left[
\begin{array}{rr}
I & 0 \\ 0 & -I
\end{array}
\right].
$
We will also write $I_n$ when we need to specify that this
is the $n\times n$ identity matrix.
For two complex matrices $X_-$, $X_+$, we write
$
\Delta(X_-,X_+)=
\left[
\begin{array}{cc} X_- & X_+ \\ X_+^\# & X_-^\#
\end{array}
\right].
$
When $X_-=X_-(s)$, $X_+=X_+(s)$ are complex transfer function matrices, the
stacking operation defines the transfer function matrix
$
\Delta(X_-(s),X_+(s))=
\left[
\begin{array}{cc} X_-(s) & X_+(s) \\ (X_+(s^*))^\# & (X_-(s^*))^\#
\end{array}
\right].
$
For a transfer
function matrix $X(s)$, $X(s)^H$ denotes its Hermitian para-conjugate,
$X(s)^H=X(-s^*)^\dagger$. Clearly, for a complex matrix $X$, $X^H=X^\dagger$.
When the matrix $X$ is Hermitian, $\boldsymbol{\sigma}(X)$ is the largest eigenvalue
of $X$. For a transfer function $X(s)$ in the Hardy space $H_\infty$ of
matrix-valued functions which are analytic in the open right
half-plane $\mathrm{Re}s>0$ and are bounded on the imaginary axis,
$\|X\|_\infty$ denotes its $H_\infty$ norm, $\|X\|_\infty=\sup_{\mathrm{Re}s>0}\|X(s)\|=\mathrm{ess}\sup_{\omega\in\mathbf{R}}\|X(i\omega)\|$,
where $\|\cdot\|$ is the induced 2-norm of a matrix~\cite{ZDG-1996}.
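The doubled-up operation $\Delta(\cdot,\cdot)$ is closed under matrix multiplication, which is what makes it a convenient bookkeeping device for the annihilation/creation structure. The following is a quick numerical sanity check of this closure property, written as an illustrative NumPy sketch (the helper name \texttt{delta} and the random test matrices are ours, not part of the paper):

```python
import numpy as np

def delta(Xm, Xp):
    """Doubled-up matrix Delta(X_-, X_+) = [[X_-, X_+], [X_+^#, X_-^#]]."""
    return np.block([[Xm, Xp], [Xp.conj(), Xm.conj()]])

rng = np.random.default_rng(0)
n = 3
Am = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Ap = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Bm = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Bp = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Closure under multiplication:
# Delta(A-, A+) Delta(B-, B+) = Delta(A- B- + A+ B+^#, A- B+ + A+ B-^#)
lhs = delta(Am, Ap) @ delta(Bm, Bp)
rhs = delta(Am @ Bm + Ap @ Bp.conj(), Am @ Bp + Ap @ Bm.conj())
assert np.allclose(lhs, rhs)
```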
\section{Coherent equalization problem for linear quantum communication
systems}\label{sec:equal-probl}
\subsection{Quantum noise processes}\label{sec:qn}
In the Heisenberg picture of quantum mechanics, an open quantum system
can be modeled as a linear system
describing evolution of $m$ harmonic modes
driven by $n$ quantum input noise
processes~\cite{HP-1984,GJN-2010,ZJ-2013}. These
quantum input processes are represented as annihilation and creation operators
$\mathbf{b}(t)=\mathrm{col}(\mathbf{b}_1(t), \ldots, \mathbf{b}_n(t))$,
$\mathbf{b}^\#(t)=\mathrm{col}(\mathbf{b}^\#_1(t), \ldots,
\mathbf{b}^\#_n(t))$, acting in an appropriate Fock
space~\cite{HP-1984} and satisfying canonical commutation relations
$ [\breve{\mathbf{b}}(t),\breve{\mathbf{b}}^\dagger(t')]=J\delta(t-t')$;
here $\delta(t)$ is the delta function. However, from the systems-theory
viewpoint they can be treated as quantum random processes. When the input
fields are in a Gaussian state with zero mean, which is the situation
considered in this paper, these random processes can be regarded as
stationary quantum Gaussian white noise processes with zero mean (i.e., $\langle
\breve{\mathbf{b}}(t)\rangle =0$) and the correlation function
\begin{eqnarray}
\label{eq:8}
\langle\breve{\mathbf{b}}(t)\breve{\mathbf{b}}^\dagger(t')\rangle=F_{\mathbf{b}}
\delta(t-t'),
\quad F_{\mathbf{b}}\triangleq
\left[
\begin{array}{cc}
I+\Sigma_{\mathbf{b}}^T & \Pi_{\mathbf{b}} \\
\Pi_{\mathbf{b}}^\dagger & \Sigma_{\mathbf{b}}
\end{array}\right].
\end{eqnarray}
The matrix $F_{\mathbf{b}}$ represents the intensity of
the process $\breve{\mathbf{b}}$; $\Sigma_{\mathbf{b}}$ is a nonnegative definite
complex Hermitian matrix,
$\Sigma_{\mathbf{b}}^\dagger=\Sigma_{\mathbf{b}}$, and $\Pi_{\mathbf{b}}$
is a complex symmetric matrix, $\Pi_{\mathbf{b}}^T=\Pi_{\mathbf{b}}$. In
this paper, we will consider linear quantum systems in a thermal state; therefore
it will always be assumed that $\Pi_{\mathbf{b}}=0$.
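For a thermal field ($\Pi_{\mathbf{b}}=0$) the intensity matrix in~(\ref{eq:8}) is block diagonal, Hermitian and positive semidefinite. A minimal sketch constructing $F_{\mathbf{b}}$ for two uncorrelated thermal modes (the helper \texttt{thermal\_intensity} and the mean photon numbers are illustrative assumptions, not values from the paper):

```python
import numpy as np

def thermal_intensity(Sigma):
    """Intensity matrix F_b of eq. (8) for a thermal field (Pi_b = 0)."""
    n = Sigma.shape[0]
    return np.block([[np.eye(n) + Sigma.T, np.zeros((n, n))],
                     [np.zeros((n, n)), Sigma]])

# Mean photon numbers of two uncorrelated thermal modes (illustrative values)
Sigma_b = np.diag([0.5, 2.0])
F_b = thermal_intensity(Sigma_b)

assert np.allclose(F_b, F_b.conj().T)          # Hermitian
assert np.min(np.linalg.eigvalsh(F_b)) >= 0.0  # positive semidefinite
```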
\subsection{A quantum communication system}
We consider a general setup consisting of a linear
quantum system representing a communication channel and a
second linear quantum system acting as an equalizer, as shown in
Fig.~\ref{fig:general}.
\begin{figure}[t]
\begin{center}
\psfragfig[width=0.7\columnwidth]{general-e}{
\psfrag{Quantum}{}
\psfrag{channel}{$\Gamma (s)$}
\psfrag{equalizer}{$\Xi(s)$}
\psfrag{+}{$+$}
\psfrag{-}{$-$}
\psfrag{e}{$\breve e$}
\psfrag{b}{$\breve u$}
\psfrag{w}{$\breve w$}
\psfrag{what}{$\breve d$}
\psfrag{z}{$\breve z$}
\psfrag{bhat}{$\breve {\hat u}$}
\psfrag{zhat}{$\breve {\hat z}$}
\psfrag{y}{$\breve y$}}
\caption{A general quantum communication system. The transfer function
$\Gamma(s)$ represents the channel, and $\Xi(s)$ represents an
equalizing filter.}
\label{fig:general}
\end{center}
\end{figure}
The $n$-dimensional input vector field $\breve u$ plays the role of a
signal carrying a message
transmitted through the channel, and the $n_w$-dimensional vector of
operators $\breve w$ is comprised of
operators describing the environment as well as noises introduced by the
routing hardware such as beam splitters. In what follows it is assumed
that these operators commute,
$ [\breve u,\breve w]=0,
$
that the system is in a Gaussian thermal state, and that $\langle \breve u(t)\rangle=0$, $\langle\breve w(t)\rangle=0$. Furthermore, it is assumed
that the input fields $\breve u$ and $\breve w$ are
not correlated, $\langle \breve u (t) \breve w^\dagger (t')\rangle=0$.
To represent the communication channel
as a linear quantum system, the annihilation and
creation parts of $\breve u$, $\breve w$ are stacked together to form
the vectors of operators
$\mathbf{b}=\mathrm{col}(u,w)$ and $\mathbf{b}^\#=\mathrm{col}(u^\#,w^\#)$,
which are then combined into the vector $\breve{\mathbf{b}}$.
This combined
vector of input operators $\breve{\mathbf{b}}$ is applied to a linear
quantum system with the transfer function $\Gamma(s)$ which
represents the communication channel. The annihilation and
creation operators of the output field of this system form the vector
$\breve{\mathbf{y}}=\mathrm{col}(y,d,y^\#,d^\#)$. The dimensions of the
annihilation operators $y$ and
$d$ (respectively, creation operators $y^\#$ and $d^\#$) of the output
field are $n$ and $n_w$, respectively.
A coherent equalizer is another linear quantum system $\Xi(s)$
which takes the components $\breve y=\mathrm{col}(y,y^\#)$ of the output
$\breve{\mathbf{y}}$ as one of its inputs. Its second
input $\breve{z}(t)=\mathrm{col}(z(t),z^\#(t))$ in Fig.~\ref{fig:general}
is comprised of $n_z$ annihilation and creation operators of the auxiliary noise
input field introduced into the filter model to make it physically
realizable~\cite{JNP-2008,VP-2011}.
For simplicity, we assume that the filter environment is in a Gaussian vacuum
state; that is, the filter's quantum noise process $\breve{z}(t)$ has zero
mean, $\langle \breve z(t)\rangle =0$, and the correlation function
$ \langle
\breve z(t)
\breve{z}^\dagger(t')\rangle=
\left[
\begin{array}{cc}
I_{n_z} & 0 \\
0 & 0
\end{array}\right]\delta(t-t'),
$
i.e., $\Sigma_z=0$, $\Pi_z=0$. The operator $\breve{z}$ commutes with
$\breve{u}$ and $\breve{w}$.
The input into the equalizer,
$\breve{\mathbf{b}}_{\textrm{eq}}=\mathrm{col}(\mathbf{b}_{\textrm{eq}},\mathbf{b}_{\textrm{eq}}^\#)$,
combines the output $\breve y$ of the channel and the filter environment
noise $\breve z$, so that $\mathbf{b}_{\textrm{eq}}=\mathrm{col}(y,z)$. Its
output is $\breve {\mathbf{y}}_{\textrm{eq}}
=\mathrm{col}(\mathbf{y}_{\textrm{eq}},\mathbf{y}_{\textrm{eq}}^\#)$, and
each of the vectors of operators
$\mathbf{y}_{\textrm{eq}},\mathbf{y}_{\textrm{eq}}^\#$ can be partitioned
into operator vectors whose dimensions match the dimensions of
$y$ and $z$, respectively:
$\mathbf{y}_{\textrm{eq}}=\mathrm{col}(\hat u,\hat z)$,
$\mathbf{y}_{\textrm{eq}}^\#=\mathrm{col}(\hat u^\#,\hat z^\#)$. We
designate the
first component of these partitions, namely $\hat u$ (respectively $\hat
u^\#$), as the output field of the equalizing filter.
\subsection{The coherent equalization problem}\label{sec:eqprob-loose}
Let $e(t)$ be the difference between the channel
input and the filter output fields, $e(t)=\hat u(t)-u(t)$. We refer to
$e(t)$ as the equalization error of the filter $\Xi(s)$. Let $P_e(i\omega)$
denote the Fourier transform of the autocorrelation
matrix $R_e(t)= \langle e(t)e(0)^\dagger \rangle$. $P_e(i\omega)$
represents the power spectrum density of the difference between the channel
input and the filter output fields. \emph{The coherent equalization problem} in
this paper is to obtain a physically realizable passive filter transfer function
$\Xi$ which minimizes (exactly or approximately) the largest eigenvalue
of $P_e(i\omega)$:
\begin{eqnarray}
\label{eq:6}
\Xi=\mathrm{arg} \inf_{\Xi}\sup_\omega
\boldsymbol{\sigma}(P_e(i\omega)).
\end{eqnarray}
A formal definition of the problem will be given in
Section~\ref{statement}, after we
present some necessary background on linear quantum systems and the notion
of physical realizability.
\subsection{Open quantum systems}
This paper adopts linear quantum system models for quantum
channels and equalizing filters~\cite{GJN-2010,NY-2017}. In the Heisenberg
picture, dynamics of an
open quantum linear system consisting of $m$ quantum
oscillators interacting with the environment described by Gaussian
input fields are represented using quantum stochastic differential
equations~\cite{HP-1984} which
describe evolution of annihilation and creation operators of the quantum
system in the underlying Hilbert space $\mathfrak{H}$. The
Heisenberg-Langevin form
of these equations is as follows,
\begin{eqnarray}
\label{dyn}
\dot{\breve{\mathbf{a}}}(t)&=&\breve A \breve{\mathbf{a}}(t)+ \breve B
\breve{\mathbf{b}}(t), \nonumber \\
\breve{\mathbf{y}}(t)&=&\breve C \breve{\mathbf{a}}(t)+ \breve D
\breve{\mathbf{b}}(t).
\end{eqnarray}
Here, $\breve{\mathbf{b}}(t)$ denotes a $2n_b$-dimensional vector of
quantum random processes defined in Section~\ref{sec:qn}.
The vector $\breve{\mathbf{a}}$ consists of the mode operators
$\mathbf{a}=\mathrm{col}(\mathbf{a}_1, \ldots, \mathbf{a}_m)$
and their adjoint operators $\mathbf{a}^\#=\mathrm{col}(\mathbf{a}_1^*,
\ldots, \mathbf{a}_m^*)$; at the initial time $t=t_0$ these operators represent
initial annihilation and creation operators of the system
$\mathbf{a}(t_0)=\mathrm{col}(\mathbf{a}_1(t_0),\ldots, \mathbf{a}_m(t_0))$,
$\mathbf{a}^\#(t_0)=\mathrm{col}(\mathbf{a}_1^*(t_0),\ldots,
\mathbf{a}_m^*(t_0))$, respectively. Without loss of generality, we
assume throughout the paper that $[\mathbf{a}(t_0),\mathbf{a}^\dagger(t_0)]=I$. Also,
$\breve{\mathbf{y}}=\mathrm{col}(\mathbf{y},\mathbf{y}^\#)$ denotes the
output field of the system that carries away information about the system
interacting with the input field $\breve{\mathbf{b}}$; the vectors of operators
$\mathbf{y}=\mathrm{col}(\mathbf{y}_1, \ldots, \mathbf{y}_{n_b})$,
$\mathbf{y}^\#=\mathrm{col}(\mathbf{y}_1^*, \ldots, \mathbf{y}_{n_b}^*)$,
have the same dimension $n_b$ as the dimension of the vectors
$\mathbf{b}$, $\mathbf{b}^\#$ of the input field. The matrices
$\breve{A}$, $\breve{B}$, $\breve{C}$, $\breve{D}$ are complex matrices
partitioned in accordance with the structure of the vectors
of operators $\breve{\mathbf{a}}$, $\breve{\mathbf{b}}$, as
\begin{eqnarray*}
\label{eq:4}
\breve{A}&=&\Delta(A_-,A_+), \quad
\breve{B}=\Delta(B_-,B_+), \\
\breve{C}&=&\Delta(C_-,C_+), \quad
\breve{D}=\Delta(D_-,D_+).
\end{eqnarray*}
A detailed discussion of open linear quantum systems can be found
in~\cite{JG-2010,GJN-2010,JNP-2008,ZJ-2013}.
In the subsequent
sections we will consider passive linear quantum systems. For such systems,
$A_+=0$, $B_+=0$, $C_+=0$, $D_+=0$, i.e., the evolution of the `annihilation
part' of the system variable $\breve{\mathbf{a}}$ is governed
only by the annihilation operators $\mathbf{b}(t)$ of the input field,
and the `creation' part of $\breve{\mathbf{a}}$
is driven by the creation operators $\mathbf{b}^\#(t)$~\cite{JG-2010}. In
this case, the matrices $\breve A$, $\breve B$, $\breve C$, $\breve D$
are block diagonal.
For a quantum stochastic differential equation of the form
(\ref{dyn}) to describe evolution of quantum physical system in the
Heisenberg picture, its coefficients must satisfy certain
additional conditions~\cite{SP-2012,JNP-2008,MP-2011}. These conditions,
known as
the \emph{physical realizability conditions}, ensure that the oscillator
variables $\mathbf{a}_j(t)$ and the output field operators
$\mathbf{y}_j(t)$ defined by equation (\ref{dyn}) evolve unitarily,
\begin{eqnarray*}
&&\mathbf{a}_j(t)=\boldsymbol{\mathsf{U}}^*(t-t_0)\mathbf{a}_j(t_0)\boldsymbol{\mathsf{U}}(t-t_0), \quad j=1, \ldots, m,\\
&&\mathbf{y}_k(t)=\boldsymbol{\mathsf{U}}^*(t-t_0)\mathbf{y}_k(t_0)\boldsymbol{\mathsf{U}}(t-t_0),
\quad k=1, \ldots, n.
\end{eqnarray*}
Here $\boldsymbol{\mathsf{U}}(t)$ is an adapted process of unitary operators of the system. The
physical realizability of
the system amounts to the existence of such operators; more precisely the
operator process $\boldsymbol{\mathsf{U}}(t)$ arises as a solution to a
certain Ito quantum stochastic differential equation~\cite{HP-1984}.
In the frequency domain, the input-output map defined by system~(\ref{dyn})
is expressed in terms of the $n_b\times
n_b$ transfer function
\[
\Gamma(s)= \breve{C}(sI_{2m}-\breve{A})^{-1}\breve{B}+\breve{D},
\]
relating the bilateral Laplace transforms of
$\breve{\mathbf{y}}(t)$ and $\breve{\mathbf{b}}(t)$
~\cite{GJN-2010,ZJ-2013}.
According to the next lemma, physical realizability of
the linear quantum system (\ref{dyn}) dictates that the transfer function
$\Gamma(s)$ must be $J$-symplectic\footnote{
A
transfer function matrix $\Gamma(s)$ is
$J$-symplectic if $\Gamma J\Gamma^H=J$~\cite{GJN-2010}. Such transfer
functions are also known as
$J$-unitary~\cite{BGR-2013} and $(J,J)$-unitary~\cite{SP-2012}.};
see~\cite{SP-2012,GJN-2010,BGR-2013}. The lemma is a straightforward
combination of Theorem~4 in~\cite{SP-2012} and Theorem~6.1.1
in~\cite{BGR-2013}. It requires the following assumption about the linear
quantum system~(\ref{dyn}).
\begin{assumption}
\label{A1}
The
pair $(\breve A,\breve B)$ is controllable, and the
pair $(\breve A,\breve C)$ is observable.
\end{assumption}
\begin{lemma}\label{L.pr=unitary}
Suppose Assumption~\ref{A1} is satisfied. Then the
following conditions are equivalent:
\begin{enumerate}[(a)]
\item
The linear quantum system~(\ref{dyn}) is physically realizable;
\item
$\breve D=\Delta(S,0)$ where $S$ is a unitary matrix, and
\begin{equation}
\label{eq:1}
\Gamma(s)J\Gamma(s)^H=\Gamma(s)^H J \Gamma(s)=J;
\end{equation}
\item
$\breve D=\Delta(S,0)$ where $S$ is a unitary matrix, and
\begin{equation}
\label{eq:1w}
\Gamma(i\omega)J\Gamma(i\omega)^\dagger=\Gamma(i\omega)^\dagger J
\Gamma(i\omega)=J.
\end{equation}
\end{enumerate}
\end{lemma}
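For a static passive device, $\Gamma=\Delta(S,0)$ with $S$ unitary, and condition~(\ref{eq:1w}) can be checked directly. A minimal numerical sketch (the unitary $S$ is drawn at random and is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
# Random unitary S via QR of a complex Gaussian matrix
S, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

# Static passive system: Gamma = Delta(S, 0), with J = diag(I, -I)
Gamma = np.block([[S, np.zeros((n, n))], [np.zeros((n, n)), S.conj()]])
J = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), -np.eye(n)]])

# J-symplectic condition: Gamma J Gamma^dagger = Gamma^dagger J Gamma = J
assert np.allclose(Gamma @ J @ Gamma.conj().T, J)
assert np.allclose(Gamma.conj().T @ J @ Gamma, J)
```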
When the system (\ref{dyn}) is a passive (annihilation only) system, its
transfer function $\Gamma(s)$ is block-diagonal~\cite{GJN-2010}
\begin{eqnarray}
\label{eq:31}
&& \Gamma(s)= \left[
\begin{array}{cc}
G(s)& 0 \\
0 & G(s^*)^\#
\end{array}
\right],
\end{eqnarray}
where $G(s)$ is the transfer function of the annihilation part of the
system, $G(s)=C_-(sI-A_-)^{-1}B_-+S$. Assumption~\ref{A1} reduces to the
requirement
that $(A_-,B_-)$ and $(A_-,C_-)$ are controllable and observable,
respectively. It then follows from this assumption that the matrices $A_-$
and $\breve A=\Delta(A_-,0)$ are Hurwitz and that $G(s)$ and $\Gamma(s)$
in~(\ref{eq:31}) are stable rational proper transfer functions.
Furthermore, the frequency domain physical realizability
relations~(\ref{eq:1}),~(\ref{eq:1w}) reduce to the condition that $G(s)$
is paraunitary and $G(i\omega)$ is unitary
(also see \cite{MP-2011}),
\begin{eqnarray}
\label{eq:32}
G(s)^H G(s)=G(s)G(s)^H=I, \\
G(i\omega)^\dagger G(i\omega)=G(i\omega)G(i\omega)^\dagger=I.
\label{eq:32.w}
\end{eqnarray}
Then $G(i\omega)$ is bounded at infinity and
analytic on the entire closed imaginary axis~\cite[Lemma~2]{Youla-1961}.
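As a concrete scalar illustration, a single-mode optical cavity with coupling rate $\gamma$ has the annihilation-part transfer function $G(s)=(s-\gamma/2)/(s+\gamma/2)$, which satisfies both~(\ref{eq:32}) and~(\ref{eq:32.w}). The following sketch verifies this numerically on a frequency grid; the value of $\gamma$ is an arbitrary illustrative choice, and for this real-coefficient scalar $G$ the para-conjugate is simply $G(s)^H=G(-s)$:

```python
import numpy as np

gamma = 2.0  # cavity coupling rate (illustrative)
G = lambda s: (s - gamma / 2) / (s + gamma / 2)

for w in np.linspace(-10.0, 10.0, 201):
    s = 1j * w
    # paraunitarity G(s) G(s)^H = 1, with G(s)^H = G(-s) for this scalar G
    assert np.isclose(G(s) * G(-s), 1.0)
    # unitarity on the imaginary axis, |G(i w)| = 1
    assert np.isclose(abs(G(s)), 1.0)
```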
Introduce the partition of the
transfer function $G(s)$ compatible with the partition
$\mathbf{b}=\mathrm{col}(u,w)$, $\mathbf{y}=\mathrm{col}(y,d)$,
\begin{eqnarray}
\label{eq:98}
G(s)&=&
\left[
\begin{array}{cc}
G_{11}(s) & G_{12}(s)\\
G_{21}(s) & G_{22}(s)\\
\end{array}
\right].
\end{eqnarray}
Using this partition, the physical realizability condition~(\ref{eq:32})
reduces to the identities
\begin{subequations}
\label{eq:35}
\begin{align}
\label{eq:9G}
&
G_{11}(s)G_{11}(s)^H+G_{12}(s)G_{12}(s)^H =I, \\
&
G_{11}(s)G_{21}(s)^H+G_{12}(s)G_{22}(s)^H =0, \label{eq:10G}
\\
&
G_{21}(s)G_{21}(s)^H+G_{22}(s)G_{22}(s)^H =I. \label{eq:11G}
\end{align}
\end{subequations}
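The simplest channel satisfying~(\ref{eq:35}) is a static beam splitter, whose transfer matrix is a frequency-independent unitary. A minimal numerical check of the three identities in the scalar-block case (the transmissivity angle $\theta$ is an arbitrary illustrative value):

```python
import numpy as np

# A static beam splitter with transmissivity cos(theta)^2 is the simplest
# passive channel; its (frequency-independent) transfer matrix is unitary.
theta = 0.3
G11, G12 = np.cos(theta), np.sin(theta)
G21, G22 = -np.sin(theta), np.cos(theta)

# The three physical-realizability identities, here scalar and static:
assert np.isclose(G11 * G11 + G12 * G12, 1.0)
assert np.isclose(G11 * G21 + G12 * G22, 0.0)
assert np.isclose(G21 * G21 + G22 * G22, 1.0)
```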
We conclude this section by presenting a frequency domain relationship
between power spectrum densities of the input and the stationary output
fields of the linear quantum system~(\ref{dyn}). Since we focus on passive
systems, we restrict attention to the autocorrelation matrix of
$\mathbf{y}$,
$R_{\mathbf{y}}(t)=\langle
\mathbf{y}(t)\mathbf{y}^\dagger(0)\rangle$.
The corresponding power spectrum density (PSD) matrix $P_{\mathbf{y}}(s)$
is the bilateral Laplace transform of $R_{\mathbf{y}}(t)$~\cite{ZJ-2013}.
It was shown in~\cite{ZJ-2013} that since the matrix $\breve A$ is
Hurwitz,
it holds that
\begin{equation}
P_{\mathbf{y}}(i\omega)=
G(i\omega) (I+\Sigma_{\mathbf{b}}^T)G(i\omega)^\dagger.
\label{PSD}
\end{equation}
\subsection{Physically realizable passive filters}
Since the focus of this paper is on passive coherent
filters\footnote{Some results on active equalization
can be found
in~\cite{UJ2b}.}, from now on only
the filters of the form $\Xi(s)=\Delta(H(s),0)$ will be considered, where
$H(s)$ is an $(n+n_z)\times (n+n_z)$ transfer function.
According to Lemma~\ref{L.pr=unitary}, physical
realizability of the filter
requires $H(s)$ to be a paraunitary transfer function matrix and the
matrix $H(i\omega)$ to be unitary; cf.~(\ref{eq:32}),~(\ref{eq:32.w}):
\begin{equation}
\label{eq:60}
H(s)H(s)^H=I, \quad H(i\omega)H(i\omega)^\dagger=I.
\end{equation}
The set of equalizers $\Xi(s)=\Delta(H(s),0)$, where
$H(s)$ satisfies~(\ref{eq:60}),
will be denoted $\mathcal{H}_p$.
The transfer function matrix $H(s)$ can be further partitioned into
the blocks compatible with dimensions of the filter inputs
$\mathrm{col}(y,z)$ and outputs $\mathrm{col}(\hat u, \hat z)$:
\begin{eqnarray}
\label{eq:98a}
H(s)=
\left[
\begin{array}{cc}
H_{11}(s) & H_{12}(s)\\H_{21}(s) & H_{22}(s)
\end{array}
\right]. \qquad
\end{eqnarray}
Using this partition, the condition (\ref{eq:60}) can be expanded into
conditions of the form~(\ref{eq:35}) which provide an explicit set of
constraints imposed on the transfer functions of each of the filter
channels by the requirement of physical realizability:
\begin{subequations}
\label{eq:37}
\begin{align}
\label{eq:9p}
&
H_{11}(s)H_{11}(s)^H+H_{12}(s)H_{12}(s)^H =I, \\
&
H_{11}(s)H_{21}(s)^H+H_{12}(s)H_{22}(s)^H =0, \label{eq:10p}
\\
&
H_{21}(s)H_{21}(s)^H+H_{22}(s)H_{22}(s)^H =I. \label{eq:11p}
\end{align}
\end{subequations}
\subsection{The formal problem statement}\label{statement}
Let $e(t)=\hat u(t)-u(t)$ be the equalization error introduced in
Section~\ref{sec:eqprob-loose}, and write $\breve
e(t)=\mathrm{col}(e(t),e^\#(t))$. The transfer
function from the combined `input plus channel and filter environment'
field $\breve{\mathbf{v}}=\mathrm{col}(u,w,z,u^\#,w^\#,z^\#)$ to
$\breve e$ is obtained by interconnecting the passive channel and passive
filter systems as shown in Fig.~\ref{fig:general},
\begin{eqnarray}
E(s)&=&\Delta(E_-(s), 0), \nonumber \\
E_-(s)&\triangleq &\left[ \begin{array}{c|c|c}
H_{11}(s) G_{11}(s) -I ~&~ H_{11}(s) G_{12}(s)~ &~ H_{12}(s)
\end{array}\right].\qquad
\label{eq:34}
\end{eqnarray}
Using this transfer function and~(\ref{PSD}), the Fourier transform of the
autocorrelation
matrix of the equalization error
$ R_e(t)= \langle e(t)e(0)^\dagger \rangle$
can be expressed as
\begin{eqnarray}
P_{e}(i\omega)&=&
\left[
\begin{array}{cc}
E_-(i\omega)& 0
\end{array}
\right] F_{\mathbf{v}} \left[
\begin{array}{c}
E_-(i\omega)^\dagger \\ 0
\end{array}
\right].
\label{eq:39}
\end{eqnarray}
Here $F_{\mathbf{v}}$ is the intensity matrix of the noise process
$\breve{\mathbf{v}}$ when the system is in a thermal quantum state
(cf.~(\ref{eq:8})),
\begin{eqnarray}
F_{\mathbf{v}}&=&\left[ \begin{array}{ccc|ccc}
I+\Sigma_u^T & 0 & 0 & 0 & 0 & 0 \\
0 & I+\Sigma_w^T & 0 & 0 & 0 & 0 \\
0 & 0 & ~I~ & 0 & 0 & 0 \\ \hline
0 & 0 & 0 & \Sigma_u & 0 & 0 \\
0 & 0 & 0 & 0 & ~\Sigma_w~ & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\end{array}
\right].
\label{eq:38.F}
\end{eqnarray}
Since $F_{\mathbf{v}}^\dagger=F_{\mathbf{v}}$, $P_{e}(i\omega)$ is an $n\times n$ Hermitian matrix
where $n$ is the dimension of the `channel input' $u$. Hence the eigenvalues
of $P_{e}(i\omega)$ are real.
Using (\ref{eq:34}) and~(\ref{eq:38.F}) an
explicit expression for $P_e(i\omega)$ can be obtained~\cite{UJ2a},
\begin{eqnarray}
\label{eq:121}
\lefteqn{P_e(i\omega)} && \nonumber \\
&=&
(H_{11}(i\omega)G_{11}(i\omega)-I)(I+\Sigma_u^T)(G_{11}(i\omega)^\dagger
H_{11}(i\omega)^\dagger -I)
\nonumber \\
&+&
H_{11}(i\omega)G_{12}(i\omega)(I+\Sigma_w^T)G_{12}(i\omega)^\dagger H_{11}(i\omega)^\dagger \nonumber \\
&+&
H_{12}(i\omega)H_{12}(i\omega)^\dagger.
\end{eqnarray}
Taking advantage of the properties~(\ref{eq:9G}) and~(\ref{eq:9p}) due to
passivity of the channel and filter transfer functions, this expression
can be simplified:
\begin{eqnarray}
\label{eq:121p}
\lefteqn{P_{e}(i\omega)=H_{11}(i\omega) \Psi(i\omega) H_{11}(i\omega)^\dagger} && \nonumber \\
&-&
H_{11}(i\omega)G_{11}(i\omega)(I+\Sigma_u^T)-(I+\Sigma_u^T)G_{11}(i\omega)^\dagger H_{11}(i\omega)^\dagger
\nonumber \\
&+& \Sigma_u^T+2I
\end{eqnarray}
where we let
\begin{equation}
\label{eq:47}
\Psi(s)\triangleq G_{11}(s)\Sigma_u^TG_{11}(s)^H
+G_{12}(s)\Sigma_w^TG_{12}(s)^H.
\end{equation}
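The passage from~(\ref{eq:121}) to~(\ref{eq:121p}) uses only the pointwise passivity identities $G_{11}G_{11}^\dagger+G_{12}G_{12}^\dagger=I$ and $H_{11}H_{11}^\dagger+H_{12}H_{12}^\dagger=I$. A minimal numerical sketch of this equivalence at a single frequency, with randomly generated unitary channel and filter blocks and hypothetical diagonal $\Sigma_u^T$, $\Sigma_w^T$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
dag = lambda X: X.conj().T

def unitary_blocks(gen, n):
    """First block row of a random (2n)x(2n) unitary matrix."""
    Q, _ = np.linalg.qr(gen.normal(size=(2*n, 2*n))
                        + 1j*gen.normal(size=(2*n, 2*n)))
    return Q[:n, :n], Q[:n, n:]

G11, G12 = unitary_blocks(rng, n)   # passive channel blocks at one frequency
H11, H12 = unitary_blocks(rng, n)   # passive filter blocks at one frequency
Su = np.diag(rng.uniform(0, 1, n))  # hypothetical Sigma_u^T
Sw = np.diag(rng.uniform(0, 1, n))  # hypothetical Sigma_w^T
I = np.eye(n)

# Explicit form (121)
Pe_121 = ((H11 @ G11 - I) @ (I + Su) @ dag(H11 @ G11 - I)
          + H11 @ G12 @ (I + Sw) @ dag(H11 @ G12)
          + H12 @ dag(H12))

# Simplified form (121') with Psi from (47)
Psi = G11 @ Su @ dag(G11) + G12 @ Sw @ dag(G12)
Pe_121p = (H11 @ Psi @ dag(H11)
           - H11 @ G11 @ (I + Su) - (I + Su) @ dag(G11) @ dag(H11)
           + Su + 2*I)

same = np.allclose(Pe_121, Pe_121p)
print(same)
```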
In the sequel, we will also make use of the $n\times n$ matrix
\begin{eqnarray}
\label{eq:59}
\lefteqn{ P_e(s)=
\left[
\begin{array}{cc}
E_-(s)& 0
\end{array}
\right] F_{\mathbf{v}} \left[
\begin{array}{c}
E_-(s)^H \\ 0
\end{array}
\right]} && \nonumber \\
&=&H_{11}(s) \Psi(s) H_{11}(s)^H -H_{11}(s)G_{11}(s)(I+\Sigma_u^T)
\nonumber \\
&-&
(I+\Sigma_u^T)G_{11}(s)^H H_{11}(s)^H
+ \Sigma_u^T+2I.
\end{eqnarray}
Again, this expression is obtained using the identities~(\ref{eq:9G})
and~(\ref{eq:9p}). We will also write
$P_e(s,H)$ when we need to stress that the
expression for $P_e(s)$ corresponds to a specific filter
$\Xi(s)=\Delta(H(s),0)$.
We now present a formal statement of the problem of
coherent passive equalization posed in
Section~\ref{sec:eqprob-loose}.
\begin{problem}\label{P1}
The guaranteed cost passive equalization problem is to obtain a transfer
function matrix $\Xi(s)=\Delta(H(s),0)\in
\mathcal{H}_p$ which ensures a desired
bound on the power spectrum density of the equalization error. That is, given
$\gamma>0$, obtain $\Xi(s)=\Delta(H(s),0)\in \mathcal{H}_p$ such that
\begin{equation}
P_e(i\omega)<\gamma^2 I_n \quad \forall\omega. \label{eq:6'.sub}
\end{equation}
The optimal passive equalization problem is to minimize the
bound~(\ref{eq:6'.sub}) in the
class of filters $\mathcal{H}_p$:
\begin{eqnarray}
\label{eq:6'}
&&\gamma_\circ\triangleq \inf \gamma \mbox{ subject to
$\gamma>0$ and (\ref{eq:6'.sub})}.
\end{eqnarray}
\end{problem}
In~Problem~\ref{P1} we tacitly replaced optimization of
$\sup_\omega\boldsymbol{\sigma}(P_e(i\omega))$ with~(\ref{eq:6'}). The two
problems are equivalent. Indeed, given $\gamma>0$, define the set
\[
\mathcal{H}_\gamma=\{H(s)\colon \sup_\omega\boldsymbol{\sigma}(P_e(i\omega,H))<\gamma^2, H(s)H(s)^H=I\}.
\]
\begin{lemma}\label{L.eq=Hinf}
\begin{equation}
\label{eq:41}
\gamma_\circ=\bar\gamma\triangleq \inf\{\gamma>0\colon
\mathcal{H}_\gamma\neq \emptyset\}.
\end{equation}
\end{lemma}
\emph{Proof: }
From the definition of $\bar\gamma$, there exists a sequence
$\{\gamma_k\}\subset
\{\gamma>0\colon \mathcal{H}_\gamma\neq
\emptyset\}$ such that $\gamma_k\ge \bar\gamma$ and
$\lim_{k\to\infty}\gamma_k= \bar\gamma$. That is, for
any $\epsilon>0$, one can choose a sufficiently large $k$ so that
$\gamma_k<\bar\gamma+\epsilon$. Also, since $\mathcal{H}_{\gamma_k}\neq
\emptyset$, there exists a passive $H_k(s)$ such that
$\sup_\omega\boldsymbol{\sigma}(P_e(i\omega,H_k))<\gamma_k^2$. Consequently,
$P_e(i\omega,H_k)<\gamma_k^2I_n$ for any $\omega$, therefore
$ \gamma_\circ \le \gamma_k<\bar\gamma+\epsilon$.
Letting $\epsilon\to 0$ implies that
$\gamma_\circ\le\bar\gamma$.
Conversely, according to the definition of $\gamma_\circ$, there exists a sequence of constants $\{\gamma_l'\}$,
$\gamma_l'\ge \gamma_\circ $, which
converges to $\gamma_\circ$ and such that for each $\gamma_l'$ there exists
a physically realizable $H_l(s)$ such that
$P_e(i\omega,H_l)<(\gamma_l')^2I_n$ for any $\omega$. Then
$\sup_\omega\boldsymbol{\sigma}(P_e(i\omega,H_l))\le
(\gamma_l')^2<(\gamma_l'+\epsilon)^2$, where $\epsilon>0$ is an arbitrarily
small constant. Thus, $\mathcal{H}_{\gamma_l'+\epsilon}\neq \emptyset$,
which means that $\bar\gamma\le \gamma_l'+\epsilon$.
Letting $l\to\infty$ and $\epsilon\to 0$ leads to the conclusion that
$\bar\gamma\le \gamma_\circ$. Thus, $\bar\gamma= \gamma_\circ$.
\hfill$\Box$
Lemma~\ref{L.eq=Hinf} indicates that Problem~\ref{P1} is analogous to the
classical $H_\infty$ filtering problem~\cite{HSK-1999}.
However, instead of the largest singular value of the disturbance-to-error transfer
function, we seek to optimize the largest eigenvalue of the PSD function
$P_e$. Importantly, Problem~\ref{P1}
belongs to the class of \emph{constrained} optimization problems since the
class of
admissible filters is restricted to physically realizable passive
filters.
\section{The framework for solving Problem~\ref{P1}}\label{framework}
\subsection{The procedure for the synthesis of coherent
equalizers}
\label{sec:two-step}
The expression for $P_{e}$ obtained in~(\ref{eq:121p})
depends only on
$H_{11}$ and does not depend explicitly on other blocks
of the matrix $H$.
Therefore we adopt a two-step procedure
to solve Problem~\ref{P1} which was originally proposed in~\cite{UJ2a}. In the first step of this
procedure, the power spectrum density of the equalization error will be
optimized with respect to $H_{11}(s)$ subject to
some of the constraints implied by the paraunitarity of $H$.
Next, the blocks $H_{12}(s)$, $H_{21}(s)$, $H_{22}(s)$ of the equalizer
transfer function will be computed to fulfill the
constraint~(\ref{eq:37}). However, \cite{UJ2a} did not explain how causal
$H_{12}(s)$,
$H_{21}(s)$, $H_{22}(s)$ can be computed. This problem is solved in this
section. For this, we
recall the notion of spectral
factors of a rational para-Hermitian\footnote{A rational transfer
function matrix $X(s)$ is para-Hermitian if $X(s)^H=X(s)$.}
transfer function matrix~\cite{Youla-1961}.
\begin{lemma}[Youla, Theorem~2 of~\cite{Youla-1961}]\label{Youla.T2}
Suppose a rational
para-Hermitian $n\times n$ transfer function matrix $X(s)$ is positive
semidefinite on the imaginary axis, $X(i\omega)\ge 0$, and has normal rank\footnote{A non-negative
integer $r$ is the normal rank of a rational function $X(s)$ if (a)
$X$ has at least one subminor of order $r$ which does not vanish
identically, and (b) all minors of order greater than $r$ vanish
identically~\cite{Youla-1961}.} $r$, $r\le n$. Then the following statements hold.
\begin{enumerate}[(a)]
\item There exists an $r\times n$ rational matrix $N(s)$
such that $X(s)=N(s)^HN(s)$. $N(s)$ is a spectral factor of $X(s)$.
\item $N(s)$ and its right inverse $N^{-1}(s)$ are both analytic in the
open right half-plane $\mathrm{Re}s>0$.
\item
$N(s)$ is unique up to a constant unitary $r\times r$ matrix multiplier
on the left; i.e., if $N_1(s)$ also satisfies (a) and (b), then
$N_1(s)=TN(s)$ where $T$ is an $r\times r$ constant unitary matrix.
\item
If $X(s)$ is analytic on the finite
$i\omega$ axis, then $N(s)$ is analytic in a right half-plane
$\mathrm{Re}s>-\tau$, $\exists\tau>0$. If in addition, the normal rank of
$X(s)$ is invariant on the finite $i\omega$ axis, then $N^{-1}(s)$ is also
analytic in a right half plane $\mathrm{Re}s>-\tau_1$, $\exists\tau_1>0$.
\item
By applying claims (a)-(d) to $X(s)^T$, one can obtain the
factorization $ X(s)=M(s)M(s)^H$, where the spectral factor $M(s)$ has the
dimension $n\times r$ and has the same analyticity properties as $N(s)$.
\end{enumerate}
\end{lemma}
Consider a proper rational transfer function $H_{11}(s)$
with the properties
\begin{enumerate}[(H1):]
\item
$H_{11}(s)$ has poles in the open left half-plane of the complex plane, and
is analytic in a right half-plane $\mathrm{Re}s>-\tau$ ($\exists \tau>0$);
\item
$H_{11}(i\omega)H_{11}(i\omega)^\dagger \le I_n$; and
\item
The normal rank of the following matrices does not change on the finite
imaginary axis $i\omega$:
\begin{eqnarray}
\label{eq:26}
X_1(s)&=&I_n-H_{11}(s)H_{11}(s)^H, \nonumber \\
X_2(s)&=&I_n-H_{11}(s)^HH_{11}(s).
\end{eqnarray}
\end{enumerate}
The transfer functions $X_1(s)$ and $X_2(s)$ defined in~(\ref{eq:26}) are
para-Hermitian, and according to (H1), $X_1(i\omega)$ and
$X_2(i\omega)$ are positive semidefinite. Therefore, according to
Lemma~\ref{Youla.T2} these matrices admit
spectral factorizations
\begin{eqnarray}
X_1(s)=H_{12}(s)H_{12}(s)^H, \quad
X_2(s)=\tilde H_{21}(s)^H \tilde H_{21}(s). \quad \label{eq:85}
\end{eqnarray}
Also, let $\tilde H_{21}^{-1}(s)$ denote the right inverse of $\tilde H_{21}(s)$,
$
\tilde H_{21}(s) \tilde H_{21}^{-1}(s)=I_r,
$
where $r$ is the normal rank of $X_2(s)$.
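For a concrete scalar illustration (our own example, not from the text): if $H_{11}(s)=1/(s+1)$, then $X_1(s)=1-H_{11}(s)H_{11}(s)^H=s^2/(s^2-1)$, and its stable spectral factor is $H_{12}(s)=s/(s+1)$ (pole at $s=-1$). The factorization in~(\ref{eq:85}) then reduces on the imaginary axis to $|H_{11}(i\omega)|^2+|H_{12}(i\omega)|^2=1$, which can be checked numerically:

```python
import numpy as np

# Hypothetical scalar example: H11(s) = 1/(s+1). Its spectral complement
# H12(s) = s/(s+1) is stable, and together they satisfy the pointwise
# identity |H11(iw)|^2 + |H12(iw)|^2 = 1 implied by (85).
w = np.linspace(-10, 10, 201)
s = 1j * w
H11 = 1.0 / (s + 1.0)
H12 = s / (s + 1.0)
ok = np.allclose(np.abs(H11)**2 + np.abs(H12)**2, 1.0)
print(ok)
```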
\begin{theorem}
\label{two-step}
Given a proper rational transfer function $H_{11}(s)$ which satisfies conditions (H1)--(H3),
let $H_{12}(s)$ and $\tilde H_{21}(s)$ be the spectral
factors from~(\ref{eq:85}). Define
\begin{eqnarray}
\label{eq:81}
&&H_{21}(s)=U(s)\tilde H_{21}(s), \nonumber \\
&&H_{22}(s)=-U(s)(\tilde H_{21}^{-1}(s))^H H_{11}(s)^H
H_{12}(s), \quad
\end{eqnarray}
where $U(s)$ is a stable causal paraunitary $r\times r$ transfer function
matrix, chosen to cancel unstable poles of
$(\tilde H_{21}^{-1}(s))^H H_{11}(s)^H H_{12}(s)$; cf.~\cite{Shaked-1990}.
The corresponding $(n+r)\times (n+r)$ transfer function
$H(s)$ in~(\ref{eq:98a})
is stable, causal and
satisfies~(\ref{eq:60}).
\end{theorem}
\emph{Proof: }
For simplicity of notation, we drop the argument $s$ of the transfer functions.
Since $H_{11}$ is a proper rational stable transfer function, it is causal
according to the Paley-Wiener Theorem~\cite{Yosida}.
Furthermore since $H_{11}$ is analytic in a
right half-plane $\mathrm{Re}s>-\tau$ ($\exists \tau>0$),
it is
analytic on the imaginary axis. Together with condition (H3), that the normal
ranks of $X_1$ and $X_2$ do not change along the imaginary axis, this guarantees
that the spectral factors $H_{12}$, $\tilde H_{21}$ and the right inverse
$\tilde H_{21}^{-1}$ are analytic in a right half-plane
$\mathrm{Re}s>-\tau$, $\exists \tau>0$; see claim (d) of
Lemma~\ref{Youla.T2}. These
properties ensure that the rational transfer functions $H_{12}$, $\tilde H_{21}$ and
$\tilde H_{21}^{-1}$ are stable and causal; the latter conclusion follows
from the Paley-Wiener Theorem.
Stability and causality of $H_{21}$, $H_{22}$ now follow from their
definitions expressed in terms of stable causal $H_{11}$, $H_{12}$, $\tilde
H_{21}$ and $\tilde H_{21}^{-1}$.
We now show that $H$ is paraunitary. The identity~(\ref{eq:9p})
follows directly from the first identity in~(\ref{eq:85}). Also,
using~(\ref{eq:85}), the identity~(\ref{eq:10p}) can be
verified:
\begin{eqnarray}
\lefteqn{H_{11}H_{21}^H+H_{12}H_{22}^H} &&
\nonumber \\
&=& (H_{11}\tilde H_{21}^H - H_{12} H_{12}^H H_{11}
\tilde H_{21}^{-1})U^H \nonumber \\
&=& (H_{11}\tilde H_{21}^H - H_{11}(I-
H_{11}^H H_{11}) \tilde H_{21}^{-1})U^H \nonumber \\
&=& (H_{11}\tilde H_{21}^H - H_{11}
\tilde H_{21}^H \tilde H_{21} \tilde H_{21}^{-1})U^H =0.
\label{eq:70}
\end{eqnarray}
Furthermore,~(\ref{eq:11p}) also holds:
\begin{eqnarray}
\lefteqn{H_{21}H_{21}^H+H_{22}H_{22}^H} &&
\nonumber \\
&=& U (\tilde H_{21}\tilde H_{21}^H + (\tilde
H_{21}^{-1})^HH_{11}^H H_{12}
H_{12}^H H_{11}\tilde H_{21}^{-1})U^H \nonumber \\
&=& U (\tilde H_{21}\tilde H_{21}^H \nonumber \\
&& + (\tilde
H_{21}^{-1})^H(H_{11}^H H_{11}
-H_{11}^H H_{11}H_{11}^H H_{11})\tilde
H_{21}^{-1})U^H \nonumber \\
&=& U (\tilde H_{21}\tilde H_{21}^H + (\tilde
H_{21}^{-1})^H(I-\tilde H_{21}^H \tilde H_{21} \nonumber \\
&& -(I-\tilde H_{21}^H \tilde H_{21})(I-\tilde H_{21}^H \tilde
H_{21}))\tilde H_{21}^{-1})U^H \nonumber \\
&=& UU^H =I.
\label{eq:82}
\end{eqnarray}
\hfill$\Box$
\begin{remark}\label{r=0}
Theorem~\ref{two-step} shows that the number of noise channels $z$, $z^\#$
necessary to ensure that the equalizing filter is physically
realizable is determined by the normal rank of
$I_n-H_{11}(s)^HH_{11}(s)$. In particular, when $H_{11}(s)^HH_{11}(s)=I$,
the transfer function $H_{11}(s)$ is physically realizable, and
additional noise channels are not required.
\hfill$\Box$
\end{remark}
\subsection{The auxiliary optimization problem}
In the remainder of the paper, the two-step procedure described in the
previous section will pave the way to developing optimization approaches to
solving the quantum equalizer design problem.
For a constant $\gamma>0$, define the feasible set $\mathcal{H}_{11,\gamma}$
consisting of proper rational $n\times n$ transfer function
matrices $H_{11}(s)$, which satisfy conditions~(H1),~(H2)
and~(\ref{eq:6'.sub}). For convenience, we summarize the two latter
conditions as
\begin{eqnarray}
\label{eq:17}
\label{eq:76}
&& P_e(i\omega,H_{11})< \gamma^2I_n, \\
&& H_{11}(i\omega)H_{11}(i\omega)^\dagger \le I_n \quad
\forall\omega\in\mathbf{R}.
\label{eq:19}
\end{eqnarray}
In~(\ref{eq:17}), we slightly abuse the notation and write
$P_e(i\omega,H_{11})$ for the expression on the
right hand side of~(\ref{eq:121p}), to emphasize that the independent
variable of this function is $H_{11}(s)$.
Note that feasible $H_{11}$ are elements of the
Hardy space $H_\infty$, and~(\ref{eq:19}) implies $\|H_{11}\|_\infty\le 1$.
Theorem~\ref{two-step} leads us to replace
Problem~\ref{P1} with the following auxiliary optimization problem.
\begin{problemprime}\label{P1a}
The auxiliary guaranteed cost problem is to obtain, for a given $\gamma>0$,
a feasible $H_{11}(s)\in \mathcal{H}_{11,\gamma}$. The corresponding
auxiliary optimal filtering problem is to determine an optimal level of
guaranteed performance
\begin{eqnarray}
\label{eq:6''}
\gamma_\circ'=\inf\{\gamma>0\colon \mathcal{H}_{11,\gamma}\neq\emptyset\}.
\end{eqnarray}
\end{problemprime}
Formally, one must distinguish between $\gamma_\circ$ defined
in~(\ref{eq:6'}) and $\gamma_\circ'$ defined in
Problem~\ref{P1a}. Solutions of the latter problem are not guaranteed to
satisfy condition (H3) of Theorem~\ref{two-step}, therefore
$\gamma_\circ'\le \gamma_\circ$. Nevertheless, the
remainder of the paper focuses on the auxiliary
Problem~\ref{P1a}. We will observe later in
Section~\ref{examples} that the (sub)optimal transfer functions $H_{11}(s)\in
\mathcal{H}_{11,\gamma}$ obtained in the examples considered in that section
also satisfy (H3). Therefore, the gap between $\gamma_\circ'$ and
$\gamma_\circ$ vanishes in those examples.
Note that when $\gamma^2\ge \boldsymbol{\sigma}(\Sigma_u^T+2I)$
(equivalently, $\gamma^2 I\ge \Sigma_u^T+2I$), the auxiliary guaranteed
cost problem has a trivial solution since for such $\gamma$, the set
$\mathcal{H}_{11,\gamma}$ contains $H_{11}(s)=0$. In this case,
a trivial suboptimal filter in $\mathcal{H}_\gamma$ can be
readily constructed using Theorem~\ref{two-step}, e.g.,
$H(s)=
\left[
\begin{array}{cc}
0 & I \\ I & 0
\end{array}
\right]$.
Therefore, the standing assumption in the remainder of the paper is
that
\begin{equation}
\label{eq:53}
\gamma^2< \boldsymbol{\sigma}(\Sigma_u^T+2I).
\end{equation}
\subsection{Relation to the classical $H_\infty$-like
equalization}\label{S-proc}
The constraint~(\ref{eq:19}) reflects the distinction between the
coherent equalization problem and its classical $H_\infty$-like
counterpart. The latter involves optimization of the bound on the PSD
matrix $P_e$ but does not include condition~(\ref{eq:19}):
\begin{eqnarray}
\label{eq:71.nc}
&& \gamma_*=\inf \gamma, \\
&& \mbox{subject to } \gamma>0,\quad P_{e}(i\omega,H_{11})< \gamma^2 I_n \quad \forall \omega
\in\mathbf{R}. \nonumber
\end{eqnarray}
Clearly, for every $\gamma>0$, the set $\mathcal{H}_{11,\gamma}$ of feasible
optimizers $H_{11}$ of Problem~\ref{P1a} is a subset of the feasible set of
the problem~(\ref{eq:71.nc}). On the other hand, later in
the paper we will encounter a situation in which a suboptimal filter
of problem~(\ref{eq:71.nc}) also satisfies~(\ref{eq:19}).
Via the S-procedure, such a situation can be
related to the feasibility of a semidefinite program.
\begin{theorem}\label{SDP.primal.LMI}
Suppose $\gamma\ge \gamma_*$. If there exists $\theta >0$ such that
$\forall\omega\in\mathbf{R}$
\begin{equation}
\label{eq:21}
\theta \left[
\begin{array}{cc}
\Psi(i\omega) & -G_{11}(i\omega)(I_n+\Sigma_u^T)\\
-(I_n+\Sigma_u^T)G_{11}(i\omega)^\dagger &~~ \Sigma_u^T+(2-\gamma^2)I_n
\end{array} \right]-J\ge 0,
\end{equation}
then the feasible set of problem~(\ref{eq:71.nc}) is equal to
the feasible set of Problem~\ref{P1a} $\mathcal{H}_{11,\gamma}$.
\end{theorem}
\emph{Proof: }
After pre- and
post-multiplying~(\ref{eq:21}) by $[H_{11}(i\omega)~I_n]$ and
$\left[
\begin{array}{c}
H_{11}(i\omega)^\dagger \\ I_n
\end{array}
\right]$, respectively, (\ref{eq:21}) becomes
\[
\theta
(P_e(i\omega,H_{11})-\gamma^2I_n)-(H_{11}(i\omega)H_{11}(i\omega)^\dagger-I_n)\ge 0.
\]
Now let $H_{11}(s)$ be a feasible transfer function of problem~(\ref{eq:71.nc}),
such that $P_e(i\omega,H_{11})<\gamma^2I$ $\forall\omega$. Then
\[
H_{11}(i\omega)H_{11}(i\omega)^\dagger\le I_n+\theta(P_e(i\omega,H_{11})-\gamma^2I_n)\le I_n.
\]
That is, $H_{11}(s)\in \mathcal{H}_{11,\gamma}$. This shows
that under the conditions of the theorem, the feasible set of
problem~(\ref{eq:71.nc}) is a subset of
$\mathcal{H}_{11,\gamma}$.
Thus, the claim of the
theorem follows, due to the previous observation that the
converse inclusion also holds.
\hfill$\Box$
From Theorem~\ref{SDP.primal.LMI}, it follows that under
condition~(\ref{eq:21}), the constraint~(\ref{eq:19}) of
problem~(\ref{eq:6''}) is inactive and any suboptimal filter of
problem~(\ref{eq:71.nc}) is also a guaranteed cost filter for
Problem~\ref{P1a}.
\section{Parameterization of suboptimal causal physically realizable
filters}\label{feasible}
In this section, we will derive a parametric representation of the set
$\mathcal{H}_{11,\gamma}$ of feasible
optimizers of Problem~\ref{P1a},
given a $\gamma>\gamma_\circ'$.
The problem of characterizing all
causal rational proper transfer functions which
satisfy~(\ref{eq:76}),~(\ref{eq:19}) is similar to the problem of
describing suboptimal $H_\infty$ filters for a linear uncertain system,
with the additional constraint that $\|H_{11}\|_\infty\le 1$.
We apply the technique of $J$-spectral
factorization~\cite{GGLD-1990,IO-1996} to solve this problem under the following technical assumption.
\begin{assumption}\label{A2}
The matrix $\Psi(s)$ in~(\ref{eq:47}) has full normal rank.
\end{assumption}
Next, we note that $\Psi(s)$ and its transpose $\Psi(s)^T$ are proper rational
para-Hermitian matrices,
$\Psi(s)^H=\Psi(s)$, $\Psi(-s^*)^\#=\Psi(s)^T$.
Therefore, according to Lemma~\ref{Youla.T2}, applied to $\Psi(s)^T$, there
exists a rational matrix $M(s)$ such that
\begin{equation}
\label{eq:61}
\Psi(s)=M(s)M(s)^H.
\end{equation}
Under Assumption~\ref{A2}, the matrix $M$ is a square $n\times n$
matrix, and its left inverse $M^{-1}(s)$ is the same as its right inverse.
Since the matrix $\breve{A}$ of the channel system is assumed to be stable
(Assumption~\ref{A1}),
$G_{11}(s)$, $G_{12}(s)$ are analytic on the imaginary axis. Therefore
$\Psi(s)$ is also analytic on the imaginary axis and $\Psi(i\omega)\ge 0$ for
all $\omega$. Then according to Lemma~\ref{Youla.T2},
$M(s)$ and $M^{-1}(s)$ are analytic in a right half-plane $\mathrm{Re}s>-\tau$,
$\exists\tau>0$. These observations allow us to express
$P_e(s,H_{11})-\gamma^2I_n$ as
\begin{eqnarray}
\label{eq:57}
&& P_e(s,H_{11})-\gamma^2I_{n}=
\left[
\begin{array}{cc}
Y(s) & I_n
\end{array}
\right]\Phi(s) \left[
\begin{array}{c}
Y(s)^H
\\ I_n
\end{array}
\right], \quad
\end{eqnarray}
where $Y(s) = H_{11}(s)M(s)$, and
\begin{eqnarray}
&& \Phi(s) =
\left[
\begin{array}{cc}
I_n & Q(s) \\
Q(s)^H
&~~ \Sigma_u^T+(2-\gamma^2)I_n
\end{array} \right], \nonumber \\
&& Q(s) \triangleq -M^{-1}(s)G_{11}(s)(I_n+\Sigma_u^T), \label{eq:75}
\end{eqnarray}
The transfer function $Q(s)$ is analytic in a right half-plane $\mathrm{Re}s>-\tau$,
$\exists\tau>0$.
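Identity~(\ref{eq:57}) can be checked pointwise: with $\Psi=MM^\dagger$ and $Q=-M^{-1}G_{11}(I+\Sigma_u^T)$, the quadratic form $[Y\ I]\,\Phi\,[Y^\dagger;\,I]$ reproduces $P_e-\gamma^2 I$ from~(\ref{eq:59}). A numerical sketch with randomly generated, entirely hypothetical matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2
dag = lambda X: X.conj().T
I = np.eye(n)

# Hypothetical pointwise data: invertible M with Psi = M M^dagger,
# a channel block G11, thermal matrix Su and a filter block H11.
M = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n)) + 3*I
Psi = M @ dag(M)
G11 = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
H11 = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
Su = np.diag(rng.uniform(0, 1, n))
gamma2 = 1.7

Q = -np.linalg.inv(M) @ G11 @ (I + Su)                       # (75)
Phi = np.block([[I, Q], [dag(Q), Su + (2 - gamma2)*I]])      # (75)
Y = H11 @ M

lhs = np.hstack([Y, I]) @ Phi @ np.vstack([dag(Y), I])       # (57)
rhs = (H11 @ Psi @ dag(H11)                                  # (59) - gamma^2 I
       - H11 @ G11 @ (I + Su) - (I + Su) @ dag(G11) @ dag(H11)
       + Su + 2*I - gamma2*I)
same = np.allclose(lhs, rhs)
print(same)
```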
Recall that a $(2n)\times (2n)$ rational transfer function matrix $\Phi(s)$
is said to admit a (left-standard) $J$-spectral factorization
if it can be represented as
\begin{equation}
\label{eq:68}
\Phi(s)=\Upsilon(s)J \Upsilon(s)^H,
\end{equation}
where a $(2n)\times (2n)$ rational transfer matrix $\Upsilon(s)$ has all
its poles in the left half-plane $\mathrm{Re}s<-\tau_2$ ($\exists
\tau_2>0$)\footnote{Normally, $J$-spectral factors $\Upsilon(s)$ are required
to be analytic and bounded in the right half-plane
$\mathrm{Re}s>0$~\cite{FD-1987} or have poles in the left half-plane
$\mathrm{Re}s<0$~\cite{IO-1996,GGLD-1990}. Our somewhat
stronger requirements are dictated by the requirement that
the spectral factor $M(s)$ of $\Psi(s)$ must be invertible on the
imaginary axis and that $M(s)^{-1}$ must also be analytic in the right half
plane and $M(i\omega)^{-1}$ must be well defined on the imaginary
axis. To meet these requirements, we employ
Lemma~\ref{Youla.T2} which
requires that $\Psi(s)$ must be analytic on the imaginary axis.}
~\cite{IO-1996,GGLD-1990}.
The following theorem adapts Theorem~1 of~\cite{IO-1996} to
the left-standard factorization setting of this paper.
\begin{theorem}\label{T2}
Suppose there exists a spectral factor $M(s)$ defined in~(\ref{eq:61})
such that the $(2n)\times (2n)$ transfer matrix $\Phi(s)$
in~(\ref{eq:75}) has a $J$-spectral factorization~(\ref{eq:68}), where
\begin{equation}
\label{eq:38}
\Upsilon(s)=
\left[
\begin{array}{cc}
\Upsilon_1(s) & \Upsilon_2(s) \\
\Upsilon_3(s) & \Upsilon_4(s)
\end{array}
\right],
\end{equation}
$\Upsilon_{j}(s)$, $j=1,2,3,4$, and also the inverses $\Upsilon(s)^{-1}$,
$\Upsilon_1(s)^{-1}$ are analytic in a right half-plane
$\mathrm{Re}s>-\tau_2$ and have their poles in the half-plane
$\mathrm{Re}s< -\tau_2$ ($\exists \tau_2>0$). Then
$H_{11}(s)\in \mathcal{H}_{11,\gamma}$ if and only if
\begin{equation}
\label{eq:67}
H_{11}(s)=S_2^{-1}(s)S_1(s)M^{-1}(s),
\end{equation}
where
\begin{equation}
\label{eq:77}
\left[
\begin{array}{cc}
S_1(s) & S_2(s)
\end{array}
\right] = \left[
\begin{array}{cc}
\Theta(s) & I_n
\end{array}
\right] \Upsilon(s)^{-1}
\end{equation}
for a rational stable $n\times n$ transfer function matrix $\Theta(s)\in
H_\infty$
analytic in a right half-plane $\mathrm{Re}s>-\tau$ ($\exists \tau>0$),
such that $\|\Theta\|_\infty<1$, and also
\begin{eqnarray}
\label{eq:79}
S_1(i\omega)M(i\omega)^{-1}(M(i\omega)^{-1})^\dagger S_1(i\omega)^\dagger
&& \nonumber \\
\le
S_2(i\omega)S_2(i\omega)^\dagger && \quad \forall \omega\in
\mathbf{R}.\quad
\end{eqnarray}
\end{theorem}
\emph{Proof: }
\emph{The `only if' claim}: The statement $H_{11}\in\mathcal{H}_{11,\gamma}$
reads that the transfer function matrix $H_{11}(s)$ is a
rational transfer function matrix which is stable, analytic in a right
half-plane $\mathrm{Re}s>-\tau$ ($\exists \tau>0$) and satisfies
conditions~(\ref{eq:76})
and~(\ref{eq:19}). Then the matrix $Y(s)=H_{11}(s)M(s)$ is also stable and
analytic in a right
half-plane $\mathrm{Re}s>-\tau_1$, $\exists\tau_1>0$, and
from~(\ref{eq:76}) we have
\begin{equation}
\label{eq:80}
\left[
\begin{array}{cc}
Y(i\omega) & I_n
\end{array}
\right]\Phi(i\omega) \left[
\begin{array}{c}
Y(i\omega)^\dagger \\ I_n
\end{array}
\right]<0 \quad \forall \omega\in \mathbf{R}.
\end{equation}
Following the same lines that were used to prove Theorem~1
in~\cite{IO-1996}, one can show that the matrices $\Theta_1(s)$,
$\Theta_2(s)$ defined by the equation
\[
\left[
\begin{array}{cc}
\Theta_1(s) & \Theta_2(s)
\end{array}
\right] = \left[
\begin{array}{cc}
Y(s) & I_n
\end{array}
\right] \Upsilon(s)
\]
are analytic in a right half-plane $\mathrm{Re}s>-\tau$ ($\exists\tau>0$),
stable and that
\begin{equation}
\label{eq:84}
\Theta_1(i\omega)\Theta_1(i\omega)^\dagger<\Theta_2(i\omega)\Theta_2(i\omega)^\dagger
\quad \forall \omega\in \mathbf{R}.
\end{equation}
In particular, it follows from~(\ref{eq:84}) that $\Theta_2(s)$ is
invertible on the imaginary axis, and that
$\|\Theta\|_\infty<1$ where
$\Theta(s)=\Theta_2(s)^{-1}\Theta_1(s)$.
Furthermore, $\Theta_2^{-1}(s)$ has all its poles in the left
half-plane~\cite{IO-1996}, thus $\Theta(s)$ is analytic in a right half-plane $\mathrm{Re}s>-\tau$ ($\exists\tau>0$). Also
\[
\left[
\begin{array}{cc}
\Theta_2(s)^{-1} Y(s)~ & \Theta_2(s)^{-1}
\end{array}
\right] = \left[
\begin{array}{cc}
\Theta(s) & I
\end{array}
\right] \Upsilon^{-1}(s).
\]
Letting $S_1(s)=\Theta_2(s)^{-1} Y(s)=\Theta_2(s)^{-1} H_{11}(s)M(s)$,
$S_2(s)=\Theta_2(s)^{-1}$ yields~(\ref{eq:77}). We also can express
$H_{11}(s)$ as $H_{11}(s)=S_2(s)^{-1}S_1(s)M(s)^{-1}$. This
gives~(\ref{eq:67}). Substituting this expression into~(\ref{eq:19})
results in~(\ref{eq:79}).
\emph{The `if' claim}:
This part of the proof also replicates the proof of the
corresponding statement in Theorem~1 of~\cite{IO-1996}. Using the same
reasoning as in that theorem, one can show that $S_2(s)$ in
equation~(\ref{eq:77}) is invertible and that its inverse is stable and
analytic in a right half-plane $\mathrm{Re}s>-\tau$, $\exists\tau>0$. Also,
since
$\|\Theta\|_\infty<1$,
then using~(\ref{eq:68}) we obtain
\begin{eqnarray*}
\lefteqn{ \left[
\begin{array}{cc}
S_1(i\omega) & S_2(i\omega)
\end{array}
\right]\Phi(i\omega)\left[
\begin{array}{c}
S_1(i\omega)^\dagger \\ S_2(i\omega)^\dagger
\end{array}\right]} && \\
&& = \left[
\begin{array}{cc}
S_1(i\omega) & S_2(i\omega)
\end{array}
\right]\Upsilon(i\omega)J \Upsilon(i\omega)^\dagger\left[
\begin{array}{c}
S_1(i\omega)^\dagger \\ S_2(i\omega)^\dagger
\end{array}\right] \\
&& = \left[
\begin{array}{cc}
\Theta(i\omega) & I
\end{array}
\right]J \left[
\begin{array}{c}
\Theta(i\omega)^\dagger \\ I
\end{array}\right] <0.
\end{eqnarray*}
Therefore, $Y(s)=S_2(s)^{-1}S_1(s)$
satisfies~(\ref{eq:80}), and hence $H_{11}(s)=Y(s)M^{-1}(s)$
satisfies~(\ref{eq:17}). Also, (\ref{eq:79}) implies that this $H_{11}$
satisfies~(\ref{eq:19}). Furthermore, this transfer
function matrix $H_{11}(s)$ has the required stability and analyticity
properties to be an element of $\mathcal{H}_{11,\gamma}$.
\hfill$\Box$
Adding the inequality (\ref{eq:21}) from Theorem~\ref{SDP.primal.LMI} to
the condition of Theorem~\ref{T2} will render the inequality~(\ref{eq:79})
redundant. As a result, we have the following corollary.
\begin{corollary}\label{cor}
Suppose that the conditions of Theorem~\ref{T2} hold and, in addition, the
inequality (\ref{eq:21}) from Theorem~\ref{SDP.primal.LMI} also holds for
some $\theta> 0$. Then $H_{11}\in \mathcal{H}_{11,\gamma}$ if
and only if it can be represented in
the form~(\ref{eq:67}), where $S_1(s)$, $S_2(s)$ are determined as
in~(\ref{eq:77}) using a stable rational transfer function matrix
$\Theta(s)$ analytic in a right half-plane $\mathrm{Re} s>-\tau$
($\exists\tau>0$) and such that $\|\Theta\|_\infty<1$.
\end{corollary}
The following corollary is concerned with a special case of Theorem~\ref{T2}
where $\Upsilon_4(s)=0$ in $\Upsilon(s)$. The corollary shows
that in this special case, the transfer function $H_{11}(s)$ can be
expressed in the form resembling the celebrated Youla parameterization of
all stabilizing controllers in the classical $H_\infty$ control
problem~\cite{ZDG-1996}. This special case will prove useful in
the examples considered in Section~\ref{examples}.
\begin{corollary}\label{cor.LFT}
Suppose that the conditions of Theorem~\ref{T2} hold and, in addition, the spectral factor $\Upsilon(s)$ has $\Upsilon_4(s)=0$, and
$\Upsilon_2(s)$, $\Upsilon_3(s)$ are invertible in the right half-plane
$\mathrm{Re}s>-\tau$ ($\exists \tau>0$). Then every feasible $H_{11}(s)$
has the form
\begin{eqnarray}
\label{eq:67.LFT}
H_{11}(s)&=&-\Upsilon_3(s)
(I-\Upsilon_1^{-1}(s)\Upsilon_2(s)\Theta(s))^{-1} \nonumber \\
&&\times \Upsilon_1^{-1}(s)M^{-1}(s),
\end{eqnarray}
where $\Theta(s)$ is a stable rational $n\times n$ transfer function matrix
analytic in a right half-plane $\mathrm{Re}s>-\tau$ ($\exists \tau>0$)
such that $\|\Theta\|_\infty<1$ and
\begin{eqnarray}
\label{eq:36}
\lefteqn{M(i\omega)^{-1}(M(i\omega)^{-1})^\dagger\le
(\Upsilon_2(i\omega)\Theta(i\omega)-\Upsilon_1(i\omega))
} && \nonumber \\
&& \times \Upsilon_3(i\omega)^{-1}
(\Upsilon_3(i\omega)^{-1})^\dagger
(\Upsilon_2(i\omega)\Theta(i\omega)-\Upsilon_1(i\omega))^\dagger \nonumber\\
&& \qquad\qquad \forall \omega\in\mathbf{R} .
\end{eqnarray}
\end{corollary}
\emph{Proof: }
The statement of the corollary
follows directly from~(\ref{eq:67}) and~(\ref{eq:77}) using
the fact that
\begin{equation*}
\Upsilon(s)^{-1}=
\left[
\begin{array}{cc}
0 & \Upsilon_3^{-1}(s) \\
\Upsilon_2^{-1}(s) &~~ -\Upsilon_2^{-1}(s) \Upsilon_1(s)\Upsilon_3^{-1}(s)
\end{array}
\right].
\end{equation*}
With this expression for $\Upsilon(s)^{-1}$, (\ref{eq:77}) reduces to
\begin{eqnarray}
\label{eq:45.LFT}
S_1(s)&=&\Upsilon_2^{-1}(s), \nonumber \\
S_2(s)&=&-\Upsilon_2^{-1}(s)\Upsilon_1(s)(I-\Upsilon_1^{-1}(s)\Upsilon_2(s)\Theta(s))\Upsilon_3^{-1}(s). \quad\quad
\end{eqnarray}
Substituting these expressions
in~(\ref{eq:67}),~(\ref{eq:79}) leads
to~(\ref{eq:67.LFT}),~(\ref{eq:36}), respectively.
\hfill$\Box$
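The block inverse formula used in the proof can be verified numerically for hypothetical invertible blocks $\Upsilon_1$, $\Upsilon_2$, $\Upsilon_3$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2

def rand_inv(gen, n):
    """Random well-conditioned complex matrix (diagonal shift for invertibility)."""
    return gen.normal(size=(n, n)) + 1j*gen.normal(size=(n, n)) + 3*np.eye(n)

U1, U2, U3 = (rand_inv(rng, n) for _ in range(3))
Ups = np.block([[U1, U2], [U3, np.zeros((n, n))]])   # Upsilon with Upsilon_4 = 0

inv = np.linalg.inv
Ups_inv_formula = np.block([[np.zeros((n, n)), inv(U3)],
                            [inv(U2), -inv(U2) @ U1 @ inv(U3)]])
same = np.allclose(Ups_inv_formula, np.linalg.inv(Ups))
print(same)
```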
We conclude this section by stressing that combining Theorem~\ref{T2} with
Theorem~\ref{two-step} allows
one to obtain a suboptimal coherent equalizing filter $H(s)$ for which
the power spectral density of the equalization error is guaranteed to
be bounded from above as in~(\ref{eq:76}). To obtain such a filter,
$\Theta(s)$ must be chosen to ensure that $H_{11}(s)$ defined
in~(\ref{eq:67}) also satisfies condition (H3) of
Theorem~\ref{two-step}. Furthermore, it is also possible to obtain the
smallest $\gamma^2$ for which conditions of Theorem~\ref{T2} hold. The
examples in Section~\ref{examples} illustrate these points. The following
expansion of the $J$-spectral factorization formula~(\ref{eq:68}) will be
used in these examples
\begin{eqnarray}
\label{eq:90}
&& \Upsilon_1(s)\Upsilon_1(s)^H-\Upsilon_2(s)\Upsilon_2(s)^H=I, \nonumber
\\
&&
\Upsilon_3(s)\Upsilon_3(s)^H-\Upsilon_4(s)\Upsilon_4(s)^H=\Sigma_u^T+(2-\gamma^2)I, \nonumber \\
&& \Upsilon_1(s)\Upsilon_3(s)^H-\Upsilon_2(s)\Upsilon_4(s)^H=Q(s).
\end{eqnarray}
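With $J=\mathrm{diag}(I,-I)$, the identities~(\ref{eq:90}) are just the blockwise expansion of $\Upsilon(s) J \Upsilon(s)^H$ matched against the blocks of $\Phi(s)$ in~(\ref{eq:75}). A pointwise numerical sketch with hypothetical random blocks:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2
dag = lambda X: X.conj().T

U1, U2, U3, U4 = (rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
                  for _ in range(4))
Ups = np.block([[U1, U2], [U3, U4]])
J = np.block([[np.eye(n), np.zeros((n, n))],
              [np.zeros((n, n)), -np.eye(n)]])
Phi = Ups @ J @ dag(Ups)

# Matching Phi block by block reproduces the three identities in (90).
b11 = np.allclose(Phi[:n, :n], U1 @ dag(U1) - U2 @ dag(U2))
b22 = np.allclose(Phi[n:, n:], U3 @ dag(U3) - U4 @ dag(U4))
b12 = np.allclose(Phi[:n, n:], U1 @ dag(U3) - U2 @ dag(U4))
print(b11, b22, b12)
```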
\section{Suboptimal solution via Semidefinite Programming and
Nevanlinna-Pick interpolation}\label{semidef}
Theorem~\ref{T2} reduces the coherent passive equalization problem
to finding the smallest constant $\gamma^2$
for which the matrix $\Phi(s)$ admits a $J$-spectral decomposition and for
which a matrix $\Theta(s)$ can be found which satisfies~(\ref{eq:79}).
In general, finding $J$-spectral factors is known to be a difficult
problem. Therefore in this section we consider an alternative approach
in which we seek to construct a physically realizable
equalizer which is suboptimal in the sense that it minimizes the
power spectrum density $P_e(i\omega,H_{11})$ at selected frequency
points.
The proposed approach is based on the observation that
Problem~\ref{P1a} is equivalent to the semidefinite program (SDP)
\begin{eqnarray}
\label{eq:71.LMI}
&& \inf \gamma^2 \\
&& \mbox{s.t. } \left[
\begin{array}{cc}
Z_{11}(i\omega) & H_{11}(i\omega)M(i\omega) \\
M(i\omega)^\dagger H_{11}(i\omega)^\dagger & -I_q
\end{array}
\right]< 0, \label{eq:13} \quad \\
&& \phantom{\mbox{s.t. }}
\left[
\begin{array}{cc}
I_n & H_{11}(i\omega) \\ H_{11}(i\omega)^\dagger & I_n
\end{array}
\right]\ge 0 \quad \forall\omega\in\mathbf{R}, \label{eq:14}
\end{eqnarray}
where
\begin{eqnarray*}
Z_{11}(s)&\triangleq&
(2-\gamma^2)I+\Sigma_u^T-H_{11}(s)G_{11}(s)(I+\Sigma_u^T) \\
&& -(I+\Sigma_u^T)G_{11}(s)^H H_{11}(s)^H.
\end{eqnarray*}
Indeed, (\ref{eq:13}),~(\ref{eq:14}) follow
from~(\ref{eq:17}),~(\ref{eq:19}) using the Schur complement~\cite{HZ-2005}.
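The Schur-complement step can be checked pointwise: the block LMI~(\ref{eq:13}) is negative definite exactly when $Z_{11}+H_{11}MM^\dagger H_{11}^\dagger<0$, i.e., when $P_e-\gamma^2 I<0$. A hypothetical numerical sketch at one frequency point (parameters chosen so the strict inequality holds):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2
dag = lambda X: X.conj().T
I = np.eye(n)

# Hypothetical pointwise data at one frequency
M = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
H11 = 0.1 * (rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n)))
G11 = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
Su = np.diag(rng.uniform(0, 1, n))
gamma2 = 10.0   # large enough that the strict inequality holds

Z11 = ((2 - gamma2)*I + Su
       - H11 @ G11 @ (I + Su) - (I + Su) @ dag(G11) @ dag(H11))
lmi = np.block([[Z11, H11 @ M], [dag(M) @ dag(H11), -I]])   # (13)

lmi_neg = np.linalg.eigvalsh(lmi).max() < 0
schur_neg = np.linalg.eigvalsh(Z11 + H11 @ M @ dag(M) @ dag(H11)).max() < 0
print(lmi_neg, schur_neg)
```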
The LMI constraints of the problem~(\ref{eq:71.LMI})--(\ref{eq:14}) are
parameterized by the frequency parameter $\omega$. Unless a closed form
solution to this problem can be found, to obtain a numerical solution one
has to resort to a relaxation of the constraints. One such relaxation
involves a grid of frequency points $\omega_l$, $l=1,\ldots,L$:
\begin{eqnarray}
\label{eq:71.LMI.l}
&& \inf \gamma^2 \\
&& \mbox{s.t. }
\left[
\begin{array}{cc}
Z_{11}(i\omega_l) & H_{11,l}M(i\omega_l) \\
M(i\omega_l)^\dagger H_{11,l}^\dagger & -I_q
\end{array}
\right]< 0, \label{eq:13.l} \\
&& \phantom{\mbox{s.t. }}
\left[
\begin{array}{cc}
I_n & H_{11,l} \\ H_{11,l}^\dagger & I_n
\end{array}
\right]\ge 0, \quad l=1, \ldots,L. \label{eq:14.l}
\end{eqnarray}
While this relaxation of the constraints makes the SDP problem more
tractable, it must be complemented with
interpolation to obtain a transfer function $H_{11}(s)$ from
which a physically realizable equalizer $\Delta(H(s),0)$ can be obtained. This
requires that the resulting transfer function
matrix $H_{11}(s)$ satisfy~(\ref{eq:14}) at every $\omega\in\mathbf{R}$.
To accomplish this, we use the Nevanlinna-Pick interpolation of the solution of
the relaxed problem~(\ref{eq:71.LMI.l})--(\ref{eq:14.l}).
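Before interpolating, a candidate solution can be screened against the relaxed constraints by eigenvalue tests at each grid frequency. The sketch below specializes~(\ref{eq:13.l})--(\ref{eq:14.l}) to scalar data ($n=q=1$); the channel $G_{11}$, the spectral factor $M$ and $\sigma_u^2$ are stand-in values, not those of any particular model:

```python
import numpy as np

def lmi_feasible(gamma2, H11, omega, G11, M, sigma_u2):
    """Check the scalar versions of (13.l) and (14.l) at one grid frequency."""
    # Scalar specialization of Z11 for real-valued data.
    Z11 = ((2.0 - gamma2) + sigma_u2
           - 2.0 * np.real(H11 * G11(omega) * (1.0 + sigma_u2)))
    lmi1 = np.array([[Z11, H11 * M(omega)],
                     [np.conj(H11 * M(omega)), -1.0]])
    lmi2 = np.array([[1.0, H11], [np.conj(H11), 1.0]])
    return (np.linalg.eigvalsh(lmi1).max() < 0 and
            np.linalg.eigvalsh(lmi2).min() >= 0)

# Stand-in static data: G11 = 0.7, M = 1.0, sigma_u^2 = 1.0.
G11 = lambda w: 0.7
M = lambda w: 1.0
ok = all(lmi_feasible(2.5, 0.6, w, G11, M, 1.0) for w in [0.0, 1.0, 10.0])
print(ok)   # True: this (gamma^2, H11) pair is feasible on the grid
```

A candidate with $|H_{11}|>1$ fails the second test, mirroring the passivity constraint~(\ref{eq:14.l}).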
Recall the formulation of the matrix Nevanlinna-Pick interpolation
problem~\cite{BGR-2013,DGK-1979,Kovalishina-1984}. Our
formulation follows~\cite{Kovalishina-1984}, which gives a solution
of the version of the Nevanlinna-Pick problem that is most convenient
for application to the problem considered in this section. Given a set of
distinct points $\{s_l, l=1, \ldots, L\}$ located in the open
right half-plane $\mathrm{Re}s>0$, and a collection of $n\times n$
matrices $\{\mathbf{X}_l,
l=1,\ldots,L\}$~\footnote{In the most general setting, the index set on
which $l$ varies is
not necessarily finite, nor even countable~\cite{DGK-1979}.}, the
matrix Nevanlinna-Pick interpolation consists in finding a rational
$n\times n$ matrix-valued function
$\mathbf{X}(s)$ which is analytic in the open
right half-plane $\mathrm{Re}s>0$, satisfies
$\mathbf{X}(s_l)=\mathbf{X}_l$ and such that
$\|\mathbf{X}(s)\|\le 1$ for $\mathrm{Re}s>0$. Here
$\|\mathbf{X}(s)\|$ is the
spectral norm of the matrix $\mathbf{X}(s)$, i.e., the largest singular value of
$\mathbf{X}(s)$\footnote{In~\cite{Kovalishina-1984},
this requirement is expressed as $I-\mathbf{X}(s)^\dagger \mathbf{X}(s)\ge 0$.}.
We now summarize the algorithm for finding a suboptimal solution to
Problem~\ref{P1a} which is based on the results
of~\cite{Kovalishina-1984}. Given a collection of frequencies
$\omega_l$, $l=1, \ldots,L$, let $\tilde \gamma>0$ and $H_{11,l}$, $l=1,
\ldots,L$, be a (suboptimal) solution of the LMI optimization
problem~(\ref{eq:71.LMI.l})--(\ref{eq:14.l}).
Let $\tau$ be a sufficiently small positive constant, and define
$s_l=i\omega_l+\tau$. Furthermore, suppose that
the $nL\times nL$ block-Pick matrix $\mathbf{P}$ consisting of the blocks
\begin{equation}
\label{eq:48}
\mathbf{P}_{l,k}=
\frac{I-H_{11,l}H_{11,k}^\dagger}{s_l+s_k^*}, \quad l,k=1,\ldots,L,
\end{equation}
is positive definite. The matrix version of the
Nevanlinna~criterion~\cite{Kovalishina-1984} states that this
is necessary and sufficient
for the existence of a rational matrix $\hat H_{11}(s)$
which is analytic in $\{s: \mathrm{Re}s>0\}$, satisfies
$\|\hat H_{11}(s)\|\le 1$ in that domain and such that
$ \hat H_{11}(s_l)=H_{11,l}$.
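The Nevanlinna criterion can be tested directly by assembling the Pick matrix~(\ref{eq:48}) and inspecting its smallest eigenvalue. A scalar sketch with hypothetical data, including a data set for which no contractive interpolant exists:

```python
import numpy as np

def pick_matrix(s, h):
    """Scalar Pick matrix (48): P_{lk} = (1 - h_l conj(h_k)) / (s_l + conj(s_k))."""
    s = np.asarray(s, dtype=complex)
    h = np.asarray(h, dtype=complex)
    return (1.0 - np.outer(h, np.conj(h))) / (s[:, None] + np.conj(s)[None, :])

# Hypothetical data in the open right half-plane with |h_l| < 1.
P_ok = pick_matrix([1.0, 2.0], [0.3, 0.5])
P_bad = pick_matrix([1.0, 1.1], [0.9, -0.9])   # nearby points, distant values

print(np.linalg.eigvalsh(P_ok).min() > 0)    # True: a contractive interpolant exists
print(np.linalg.eigvalsh(P_bad).min() > 0)   # False: no such interpolant
```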
A rational interpolant $\hat H_{11}(s)$ can be obtained using the
matrix extension of the Nevanlinna
algorithm~\cite{Kovalishina-1984}; also see
\cite{DGK-1979,GK-1989}. Namely, $\hat H_{11}(s)$ is
representable as a linear fractional transformation of an arbitrary
rational stable transfer function
$\Theta(s)$ which is analytic in $\mathrm{Re}s>0$ and satisfies
$\|\Theta\|_\infty<1$:
\begin{eqnarray}
\label{eq:58}
\hat H_{11}(s)&=&(W_{11}(s)\Theta(s)+W_{12}(s)) \nonumber \\
&& \times (W_{21}(s)\Theta(s)+W_{22}(s))^{-1}.
\end{eqnarray}
The coefficient matrix of this transformation
\begin{equation}
\label{eq:83}
W(s)=
\left[\begin{array}{cc}
W_{11}(s) & W_{12}(s) \\
W_{21}(s) & W_{22}(s)
\end{array}
\right]
\end{equation}
is constructed from the matrix $\mathbf{P}>0$:
\begin{eqnarray}
\label{eq:101}
W(s)&=&I-
\left[
\begin{array}{ccc}
\frac{I}{s+s_1^*} & \ldots & \frac{I}{s+s_L^*}\\
\frac{H_{11,1}^*}{s+s_1^*} & \ldots &\frac{H_{11,L}^*}{s+s_L^*}
\end{array}
\right]
\mathbf{P}^{-1} \left[\begin{array}{cc}
I~& -H_{11,1} \\
\vdots & \vdots \\
I~ & -H_{11,L}
\end{array}
\right].\quad
\end{eqnarray}
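For scalar data ($n=1$) the construction~(\ref{eq:58}),~(\ref{eq:101}) is easy to verify numerically: with $\Theta=0$ the transformation reduces to $W_{12}(s)/W_{22}(s)$, and its value at each node $s_l$ reproduces the prescribed $H_{11,l}$. A sketch with illustrative interpolation data:

```python
import numpy as np

def pick_matrix(s, h):
    return (1.0 - np.outer(h, np.conj(h))) / (s[:, None] + np.conj(s)[None, :])

def W(sv, s, h):
    """Coefficient matrix (101) for scalar data (n = 1)."""
    C = np.vstack([1.0 / (sv + np.conj(s)),            # row of I/(s + s_l^*)
                   np.conj(h) / (sv + np.conj(s))])    # row of H_l^*/(s + s_l^*)
    B = np.column_stack([np.ones(len(s)), -h])         # block rows [I, -H_l]
    return np.eye(2) - C @ np.linalg.inv(pick_matrix(s, h)) @ B

def H_hat(sv, s, h, theta=0.0):
    """Linear fractional transformation (58)."""
    w = W(sv, s, h)
    return (w[0, 0] * theta + w[0, 1]) / (w[1, 0] * theta + w[1, 1])

# Illustrative interpolation data in the open right half-plane.
s = np.array([0.1, 0.2])
h = np.array([0.3, 0.5])

print(abs(H_hat(s[0], s, h) - h[0]))   # ~0: interpolation at s_1
print(abs(H_hat(s[1], s, h) - h[1]))   # ~0: interpolation at s_2
```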
It remains to obtain $H_{11}(s)$. Let
\begin{equation}
\label{eq:52}
H_{11}(s)=\hat H_{11}(s+\tau).
\end{equation}
From the properties of $\hat H_{11}(s)$,
it follows that $H_{11}(s)$ is analytic in the half-plane
$\mathrm{Re}s>-\tau$ and $\|H_{11}(s)\|\le 1$ for all $s$ such that
$\mathrm{Re}s> -\tau$. Consequently, $\|H_{11}(i\omega)\|\le 1$ for all
$\omega\in\mathbf{R}$ which
is equivalent to~(\ref{eq:19}). Finally, it follows from the definition of
$H_{11}(s)$ and~(\ref{eq:13.l}) that
\begin{equation}
\label{eq:71}
P_e(i\omega_l, H_{11})< \tilde\gamma^2, \quad l=1, \ldots, L.
\end{equation}
That is, the constructed transfer function $H_{11}(s)$ is
suboptimal in the sense that it minimizes the power spectrum density
$P_e(i\omega,H_{11})$ at the selected frequency grid points.
\begin{remark}\label{rem-posdef}
The requirement of the algorithm that the
block-Pick matrix $\mathbf{P}$ must be positive definite is not
restrictive. It can be satisfied by further restricting~(\ref{eq:14.l}) to
be a strict inequality, and then choosing a sufficiently small $\tau>0$.
Indeed, when $H_{11,l}^\dagger
H_{11,l}< I$, it follows from~(\ref{eq:14.l}) that the
$(l,l)$-block~(\ref{eq:48}) is positive definite. Its eigenvalues
can be made arbitrarily large by selecting $\tau>0$ to be sufficiently
close to 0, since the denominator is equal to $2\tau$ and
vanishes as $\tau\to 0$. The off-diagonal blocks
remain bounded as $\tau\to 0$, making the block-Pick matrix $\mathbf{P}$
block-diagonally dominant~\cite{FV-1962}, with positive definite blocks on the
diagonal. Then if $\tau>0$ is sufficiently small, $\mathbf{P}$ is positive
definite~\cite{ZLHX-2010}.
\end{remark}
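The mechanism behind the remark can be observed numerically: the diagonal entries of $\mathbf{P}$ equal $(1-|H_{11,l}|^2)/(2\tau)$ and blow up as $\tau\to 0$, while the off-diagonal entries stay bounded. A scalar sketch with hypothetical grid data:

```python
import numpy as np

def pick_matrix(s, h):
    return (1.0 - np.outer(h, np.conj(h))) / (s[:, None] + np.conj(s)[None, :])

omega = np.array([1.0, 2.0, 3.0])
h = np.array([0.3, 0.5 + 0.2j, -0.4])   # strictly contractive grid data, |h_l| < 1

# As tau shrinks, the diagonal of P dominates and the minimum eigenvalue grows.
for tau in [1.0, 1e-1, 1e-3]:
    P = pick_matrix(1j * omega + tau, h)
    print(tau, np.linalg.eigvalsh(P).min())
```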
\section{Examples}\label{examples}
\subsection{Coherent equalization of a static two-input two-output system}
\label{ex1.revisited}
Consider a two-input two-output system which mixes a single mode input
field $u$ with a single mode environment field $w$; its outputs and inputs are
related via a static unitary transformation:
\begin{eqnarray}
\label{eq:1.bs}
\left[
\begin{array}{c}
y \\ d
\end{array}
\right]= G \left[
\begin{array}{c}
u \\ w
\end{array}
\right], \quad G=\left[
\begin{array}{cc}k & m \\
-e^{i\phi}m^* & e^{i\phi}k^*
\end{array}
\right]; \quad
\end{eqnarray}
$k,m$ are complex numbers, $|k|^2+|m|^2=1$, and $\phi$ is a real
number. One example of such a system is an optical beam splitter.
Beam splitters play an important role in many
quantum optics applications such as interferometry, holography, laser
systems, etc. The device has two inputs. The input $u$ represents the
signal we would like to split, and the second input $w$ represents the
thermal noise input from the environment. These input fields are related to the
output via a unitary transformation~(\ref{eq:1.bs}).
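Unitarity of the static channel matrix in~(\ref{eq:1.bs}) holds for any admissible $k,m,\phi$; a quick numpy spot check with illustrative values:

```python
import numpy as np

# Illustrative parameters with |k|^2 + |m|^2 = 1.
k = 0.8 * np.exp(0.3j)
m = np.sqrt(1.0 - abs(k) ** 2)
phi = 0.7

# Static channel matrix G from (1.bs).
G = np.array([[k, m],
              [-np.exp(1j * phi) * np.conj(m), np.exp(1j * phi) * np.conj(k)]])

print(np.allclose(G.conj().T @ G, np.eye(2)))   # True: G is unitary
```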
Since both input
fields are scalar, $u$ and $w$ are scalar operators, and
$\Sigma_u$ and $\Sigma_w$ are
real constants. To emphasize this, we write $\Sigma_u=\sigma_u^2$,
$\Sigma_w=\sigma_w^2$. We now illustrate application of Theorem~\ref{T2}
in this example.
Using the above notation, $\Psi(s)$ defined in (\ref{eq:47}) is a constant
expressed as $\Psi(s)=\psi=|k|^2\sigma_u^2+|m|^2\sigma_w^2$. The
expression for the power spectrum density matrix~(\ref{eq:121p}) becomes
\begin{eqnarray}
\label{eq:121ps}
P_{e}(i\omega,H_{11})&=& \psi |H_{11}(i\omega)|^2 \nonumber \\
&-&2(1+\sigma_u^2)\mathrm{Re}[kH_{11}(i\omega)]+ \sigma_u^2+2.
\end{eqnarray}
Assumption~\ref{A2} is satisfied when at least one of the addends in the
expression for $\psi$ is
positive. We suppose in this example that this requirement is
satisfied. Then one can select $M(s)=\psi^{1/2}e^{i\varphi}$, where
$\psi^{1/2}$ is the real positive root, and $\varphi$ is an arbitrary
real constant. Also, $Q(s)$ defined in~(\ref{eq:75}) is constant,
$Q(s)=-\frac{k(1+\sigma_u^2)e^{-i\varphi}}{\psi^{1/2}}$.
Note that condition~(\ref{eq:53}) reduces to
$\gamma^2<\sigma_u^2+2$. Therefore, we assume that this condition holds.
Also, suppose that
\begin{equation}
\label{eq:105}
\gamma^2>
\begin{cases}
\sigma_u^2+2-2|k|(1+\sigma_u^2)+\psi, & \text{if $\psi\le
|k|(1+\sigma_u^2)$}, \\
\sigma_u^2+2-\frac{|k|^2(1+\sigma_u^2)^2}{\psi}, & \text{if $\psi>
|k|(1+\sigma_u^2)$}.
\end{cases}
\end{equation}
Under this condition,
$\frac{|k|^2(1+\sigma_u^2)^2}{\psi(\sigma_u^2+2-\gamma^2)}-1\ge
0$. Therefore, we let
\begin{eqnarray}
\label{eq:106}
\Upsilon_1&=&
-\frac{k(1+\sigma_u^2)e^{-i\varphi}}{\sqrt{\psi(\sigma_u^2+2-\gamma^2)}}, \quad
\Upsilon_3=\sqrt{\sigma_u^2+2-\gamma^2},
\nonumber \\
\Upsilon_2&=&e^{-i\varphi}\sqrt{\frac{|k|^2(1+\sigma_u^2)^2}{\psi(\sigma_u^2+2-\gamma^2)}-1},
\end{eqnarray}
where the square roots are chosen to be real and positive.
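With $\Upsilon_4=0$, the entries~(\ref{eq:106}) satisfy the factorization identities $\Upsilon_1\Upsilon_1^H-\Upsilon_2\Upsilon_2^H=I$, $\Upsilon_3\Upsilon_3^H=\sigma_u^2+2-\gamma^2$ and $\Upsilon_1\Upsilon_3^H=Q$ used in the sequel. A numerical spot check; the parameter values are illustrative and chosen so that~(\ref{eq:105}) holds:

```python
import numpy as np

# Illustrative parameters: sigma_u^2 = 1, sigma_w^2 = 4, k = 0.8, so that
# psi > |k|(1 + sigma_u^2) and gamma^2 = 2.5 satisfies (105) with gamma^2 < sigma_u^2 + 2.
sigma_u2, k, phi, gamma2 = 1.0, 0.8, 0.2, 2.5
psi = abs(k) ** 2 * sigma_u2 + (1 - abs(k) ** 2) * 4.0
Q = -k * (1 + sigma_u2) * np.exp(-1j * phi) / np.sqrt(psi)

U3 = np.sqrt(sigma_u2 + 2 - gamma2)
U1 = -k * (1 + sigma_u2) * np.exp(-1j * phi) / np.sqrt(psi * (sigma_u2 + 2 - gamma2))
U2 = np.exp(-1j * phi) * np.sqrt(abs(k) ** 2 * (1 + sigma_u2) ** 2
                                 / (psi * (sigma_u2 + 2 - gamma2)) - 1)

print(abs(U1) ** 2 - abs(U2) ** 2)        # 1: Upsilon_1 Upsilon_1^H - Upsilon_2 Upsilon_2^H = I
print(U3 ** 2 - (sigma_u2 + 2 - gamma2))  # 0: Upsilon_3 Upsilon_3^H = sigma_u^2 + 2 - gamma^2
print(abs(U1 * U3 - Q))                   # 0: Upsilon_1 Upsilon_3^H = Q
```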
\begin{proposition}\label{Prop1.J}
Suppose $\gamma^2<\sigma_u^2+2$ satisfies condition~(\ref{eq:105}).
Then $H_{11}(s)\in\mathcal{H}_{11,\gamma}$ if and only if it
can be represented as
\begin{eqnarray}
\label{eq:46}
\lefteqn{H_{11}(s)} && \nonumber \\
&& =\frac{\sigma_u^2+2-\gamma^2}{k(1+\sigma_u^2)+\Theta(s)
\sqrt{|k|^2(1+\sigma_u^2)^2-\psi(\sigma_u^2+2-\gamma^2)}}, \nonumber
\\
\end{eqnarray}
where $\Theta(s)$ is a stable rational transfer function analytic in the
closed right half-plane, which satisfies $\|\Theta\|_\infty<1$ and the
frequency domain condition
\begin{eqnarray}
\label{eq:9}
\lefteqn{\sigma_u^2+2-\gamma^2} && \nonumber \\
&&\le
|k(1+\sigma_u^2)+\Theta(i\omega)
\sqrt{|k|^2(1+\sigma_u^2)^2-\psi(\sigma_u^2+2-\gamma^2)}|.\nonumber \\
\end{eqnarray}
One choice of $\Theta$ which satisfies these requirements is
\begin{eqnarray}
\label{eq:108}
\Theta&=&
\begin{cases}
\epsilon\frac{k}{|k|}& \text{if $\psi\le |k|(1+\sigma_u^2)$}, \\
0, & \text{if $\psi> |k|(1+\sigma_u^2)$},
\end{cases}
\end{eqnarray}
where $0<\epsilon<1$ must be chosen to be sufficiently close to 1.
\end{proposition}
\emph{Proof: }
The direct calculation shows that the matrix $\Upsilon$ defined
in~(\ref{eq:38}) with $\Upsilon_1$,
$\Upsilon_2$, $\Upsilon_3$ defined in~(\ref{eq:106}) and $\Upsilon_4=0$
is the $J$-spectral factor of
\[
\Phi=
\left[\begin{array}{cc}
1 & -\frac{k(1+\sigma_u^2)e^{-i\varphi}}{\psi^{1/2}} \\
-\frac{k^*(1+\sigma_u^2)e^{i\varphi}}{\psi^{1/2}} &~~ \sigma_u^2+2-\gamma^2
\end{array}\right].
\]
Thus, the conditions of Theorem~\ref{T2} are
satisfied. Therefore,
$H_{11}(s)\in \mathcal{H}_{11,\gamma}$ if and only if it can be expressed
by equation~(\ref{eq:67.LFT}) in which $\Theta(s)$ must satisfy
$\|\Theta\|_\infty<1$ and~(\ref{eq:36}); see
Corollary~\ref{cor.LFT}. Substituting the values $\Upsilon_1$,
$\Upsilon_2$, $\Upsilon_3$ defined in~(\ref{eq:106}) into~(\ref{eq:67.LFT}),~(\ref{eq:36})
yields~(\ref{eq:46}),~(\ref{eq:9}).
This proves the first part of the proposition.
Next, consider $\Theta$ suggested in~(\ref{eq:108}). It is obvious that
$\|\Theta\|_\infty<1$. Let us show that this $\Theta$
satisfies~(\ref{eq:9}) as well.
First, consider the case where $\psi\le |k|(1+\sigma_u^2)$.
When $\gamma^2\ge \sigma_u^2+2-|k|(1+\sigma_u^2)$ it holds that
\begin{eqnarray*}
\lefteqn{\sigma_u^2+2-\gamma^2 \le |k|(1+\sigma_u^2)} && \\
&&\le
|k|(1+\sigma_u^2)+\epsilon
\sqrt{|k|^2(1+\sigma_u^2)^2-\psi(\sigma_u^2+2-\gamma^2)} \\
&&=
\left|k(1+\sigma_u^2)+\epsilon\frac{k}{|k|}
\sqrt{|k|^2(1+\sigma_u^2)^2-\psi(\sigma_u^2+2-\gamma^2)}\right|
\end{eqnarray*}
for any $\epsilon\in(0,1)$. Therefore~(\ref{eq:9}) holds in this case.
When $\sigma_u^2+2-2|k|(1+\sigma_u^2)+\psi < \gamma^2\le
\sigma_u^2+2-|k|(1+\sigma_u^2)$, the left hand-side of this inequality
implies that
\begin{eqnarray*}
\lefteqn{ 0\le \sigma_u^2+2-\gamma^2-|k|(1+\sigma_u^2)} && \\
&&< \sqrt{|k|^2(1+\sigma_u^2)^2-\psi(\sigma_u^2+2-\gamma^2)}.
\end{eqnarray*}
Since the rightmost inequality is strict, one can choose $\epsilon\in(0,1)$
which is sufficiently close to $1$ and still ensures that
\begin{eqnarray*}
\lefteqn{ 0\le \sigma_u^2+2-\gamma^2-|k|(1+\sigma_u^2)} && \\
&&\le \epsilon \sqrt{|k|^2(1+\sigma_u^2)^2-\psi(\sigma_u^2+2-\gamma^2)}.
\end{eqnarray*}
Thus, we again obtain that~(\ref{eq:9}) holds.
Now consider the case where $\psi>|k|(1+\sigma_u^2)$. Since in this case we
assume that
\begin{eqnarray*}
\gamma^2 &>& \sigma_u^2+2-\frac{|k|^2(1+\sigma_u^2)^2}{\psi} \ge \sigma_u^2+2-|k|(1+\sigma_u^2)
\end{eqnarray*}
then $\sigma_u^2+2-\gamma^2\le |k|(1+\sigma_u^2)$. Thus, (\ref{eq:9})
holds with $\Theta=0$.
\hfill$\Box$
Proposition~\ref{Prop1.J} shows that for every $\gamma$ such that
$\gamma^2<\sigma_u^2+2$ and which satisfies the condition~(\ref{eq:105}),
there exists a transfer function $H_{11}(s)$ with the desired
properties~(\ref{eq:17}),~(\ref{eq:19}). When $\Theta$ is chosen according
to~(\ref{eq:108}), the corresponding $H_{11}$ also satisfies the conditions
of Theorem~\ref{two-step} including condition (H3). Hence,
for each such $\gamma$, an equalizer
$\Xi=\Delta(H,0)\in\mathcal{H}_p$ can be
constructed which guarantees that the corresponding error power
spectrum density $P_e$ does not exceed $\gamma^2$. This leads
to the upper bound on the optimal equalization
performance achievable by coherent passive filters for this system
\begin{equation}
\label{eq:23}
(\gamma_\circ')^2\le \gamma_\circ^2=\inf_{H\in\mathcal{H}_p}P_e\le \begin{cases}
\psi-2(1+\sigma_u^2)|k|+(2+\sigma_u^2), \\
& \hspace{-3cm} \text{if $\psi\le (1+\sigma_u^2)|k|$
}; \\
(2+\sigma_u^2)-\frac{(1+\sigma_u^2)^2|k|^2}{\psi}, \\
& \hspace{-3cm} \text{if $\psi> (1+\sigma_u^2)|k|$
}.
\end{cases}
\end{equation}
Indeed, from Proposition~\ref{Prop1.J} and the remark
following the statement of Problem~\ref{P1a} we have
\begin{eqnarray*}
(\gamma_\circ')^2\le \gamma_\circ^2=\inf_{H\in\mathcal{H}_p}P_e\le P_e(H_{11})<\gamma^2.
\end{eqnarray*}
Taking infimum over $\gamma$ subject to~(\ref{eq:105}) yields~(\ref{eq:23}).
It turns out that this upper bound is in fact
tight; i.e., the inequalities in~(\ref{eq:23}) hold with equality. The
matrix $H_{11}$ that gives rise to the filter which attains the infimum
in~(\ref{eq:23}) can be obtained as a limit of the
matrix~(\ref{eq:46}),~(\ref{eq:108}) as $\gamma\to \gamma_\circ$:
\begin{equation}
\label{eq:5}
H_{11}=\begin{cases}
\frac{k^*}{|k|}, & \text{if $\psi\le (1+\sigma_u^2)|k|$}; \\
\frac{(1+\sigma_u^2)k^*}{\psi} & \text{if $\psi> (1+\sigma_u^2)|k|$}.
\end{cases}
\end{equation}
Indeed, when $\psi\le |k|(1+\sigma_u^2)$, letting $\gamma^2\to
\sigma_u^2+2-2|k|(1+\sigma_u^2)+\psi$ in~(\ref{eq:9}) produces
$\Theta=k/|k|$ as the unique admissible value of $\Theta$. As a result,
(\ref{eq:46}) reduces to $H_{11}=k^*/|k|$.
Likewise, when $\psi>|k|(1+\sigma_u^2)$ and $\gamma^2\to
\sigma_u^2+2-\frac{|k|^2(1+\sigma_u^2)^2}{\psi}$,
(\ref{eq:9}) holds for any $\Theta(s)$. Also, $\Upsilon_2\to
0$, and~(\ref{eq:67.LFT}) has the limit
$-\frac{e^{-i\varphi}\Upsilon_3}{\psi^{1/2}\Upsilon_1}$, which yields
$H_{11}=\frac{(1+\sigma_u^2)k^*}{\psi}$. Remarkably, these are
exactly the values of $H_{11}$ which we obtain by minimizing
the expression for $P_e(i\omega,H_{11})$ in~(\ref{eq:121ps})
directly, proving that~(\ref{eq:23}) in fact holds with
equality. This conclusion is formalized in the following proposition. A
special case of this proposition appeared in~\cite{UJ2a}.
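The direct minimization referred to above can be reproduced by a brute-force search over the unit disk; the parameters below are illustrative and chosen so that the alternative $\psi>(1+\sigma_u^2)|k|$ holds:

```python
import numpy as np

# Illustrative static-channel parameters (so that psi > (1 + sigma_u^2)|k|).
sigma_u2, sigma_w2 = 1.0, 4.0
k = 0.8 * np.exp(0.3j)
psi = abs(k) ** 2 * sigma_u2 + (1 - abs(k) ** 2) * sigma_w2

def P_e(H11):
    """Error power spectrum density (121ps) for the static channel."""
    return (psi * np.abs(H11) ** 2
            - 2 * (1 + sigma_u2) * np.real(k * H11) + sigma_u2 + 2)

# Brute-force search over the closed unit disk |H11| <= 1.
x = np.linspace(-1.0, 1.0, 401)
disk = (x[:, None] + 1j * x[None, :]).ravel()
disk = disk[np.abs(disk) <= 1.0]
best = P_e(disk).min()

H_opt = (1 + sigma_u2) * np.conj(k) / psi   # second line of (5), psi > (1+sigma_u2)|k|
print(best - P_e(H_opt))                    # ~0: the search reproduces the formula
```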
\begin{proposition}\label{Prop1}
Consider the auxiliary optimal filtering problem~(\ref{eq:6''}) for the static
channel~(\ref{eq:1.bs}):
\begin{eqnarray}
(\gamma_\circ')^2=&&\inf P_e(i\omega,H_{11}) \nonumber \\
&& \mbox{subject to }
\label{eq:15}
|H_{11}(i\omega)|^2\le 1.
\end{eqnarray}
The following alternatives hold:
\begin{enumerate}[(a)]
\item
If $\psi\le (1+\sigma_u^2)|k|$, then the infimum in (\ref{eq:15}) is
achieved at $H_{11}$ given in the first line on the right hand side
of~(\ref{eq:5}). In this case, any passive equalizer
$\Xi(s)=\Delta(H(s),0)$, with $H(s)$ of the form
\begin{equation}
\label{eq:50}
H(s)=
\left[
\begin{array}{cc}
k^*/|k| & 0 \\ 0 & U_{22}(s)
\end{array}
\right],
\end{equation}
where $U_{22}(s)$ is an arbitrary paraunitary transfer function, is an
optimal equalizer for the optimal passive equalization
problem~(\ref{eq:6'}).
\item
On the other hand, when $\psi> (1+\sigma_u^2)|k|$,
the infimum in (\ref{eq:15}) is achieved at $H_{11}$ given in the second
line on the right hand side of~(\ref{eq:5}).
In this case, any passive equalizer
$\Xi(s)=\Delta(H(s),0)$, with $H(s)$ of the form
\begin{equation}
\label{eq:50.2}
\hspace{-.5cm} H(s)=
\left[
\begin{array}{cc}
\frac{(1+\sigma_u^2) k^*}{\psi} & \frac{\sqrt{\psi^2-(1+\sigma_u^2)^2 |k|^2}}{\psi}U_{12}(s) \\ \frac{\sqrt{\psi^2-(1+\sigma_u^2)^2 |k|^2}}{\psi}U_{21}(s)~ & -\frac{(1+\sigma_u^2) k}{\psi}U_{12}(s)U_{21}(s)
\end{array}
\right],
\end{equation}
where $U_{12}(s)$, $U_{21}(s)$ are arbitrary paraunitary transfer
functions, is a solution of the optimal passive equalization
problem~(\ref{eq:6'}).
\end{enumerate}
The expression on the right-hand side of
equation~(\ref{eq:23}) gives the corresponding expressions for the optimal
error power spectrum density:
\begin{equation}
\label{eq:7}
(\gamma_\circ')^2= \gamma_\circ^2=\begin{cases}
\psi-2(1+\sigma_u^2)|k|+(2+\sigma_u^2), \\
& \hspace{-3cm} \text{if $\psi\le (1+\sigma_u^2)|k|$}; \\
(2+\sigma_u^2)-\frac{(1+\sigma_u^2)^2|k|^2}{\psi}, \\
& \hspace{-3cm} \text{if $\psi> (1+\sigma_u^2)|k|$
}.
\end{cases}
\end{equation}
\end{proposition}
\emph{Proof: }
Since the coefficients of the objective function~(\ref{eq:121ps})
are constant, it will suffice to carry out
optimization in~(\ref{eq:15}) over the closed unit disk
$\{H_{11},|H_{11}|\le 1\}$. The corresponding cost is independent of
$\omega$, and we will write it as $P_e(H_{11})$ in lieu of
$P_e(i\omega,H_{11})$.
To prove claim (a), suppose first that $\psi<(1+\sigma_u^2)|k|$.
Consider the Lagrangian function with the multiplier $\lambda\ge 0$
\begin{eqnarray*}
\mathcal{L}&=& P_e(H_{11})-\lambda(1-|H_{11}|^2).
\end{eqnarray*}
The KKT optimality conditions are
\begin{eqnarray}
\label{eq:49}
&& (\psi+\lambda)H_{11}=(1+\sigma_u^2) k^*, \\
\label{opt.con.complementarity}
&&\lambda(|H_{11}|^2-1)=0.
\end{eqnarray}
Based on the complementarity condition (\ref{opt.con.complementarity}), the
following two cases must be considered.
Case 1. $\lambda> 0$. In this case, the complementarity condition
(\ref{opt.con.complementarity}) yields $|H_{11}|=1$. Therefore,
(\ref{eq:49}) yields the critical value $\lambda=(1+\sigma_u^2)|k|-\psi$
which is positive under the assumption $\psi<(1+\sigma_u^2) |k|$. Then, the
corresponding minimizer is $H_{11}=k^*/|k|$.
Case 2. $\lambda=0$. In this case, the optimality condition~(\ref{eq:49})
implies $H_{11}=\frac{(1+\sigma_u^2) k^*}{\psi}$.
However, this value of $H_{11}$ cannot be an optimal point
because $|H_{11}|>1$ when $\psi<(1+\sigma_u^2) |k|$. Thus, the
solution to the problem~(\ref{eq:15}) is $H_{11}$ obtained in Case~1.
When $\psi= (1+\sigma_u^2)|k|$, the
function $P_e$ reduces to
\begin{eqnarray*}
P_e(H_{11})&=&(1+\sigma_u^2)\left|\sqrt{|k|}H_{11}-\frac{k^*}{\sqrt{|k|}}\right|^2
\nonumber \\
&& +(2+\sigma_u^2)-(1+\sigma_u^2) |k|;
\end{eqnarray*}
it achieves minimum at $H_{11}=k^*/|k|$.
Thus we observe that in both cases, the minimum in~(\ref{eq:15}) is achieved at
$H_{11}=k^*/|k|$. To obtain the corresponding coherent equalizer, we refer
directly to equations (\ref{eq:37}), since $H_{11}$ is physically
realizable on its own, and the transfer function $X_2$ in
condition (H3) of Theorem~\ref{two-step} is 0. As explained in
Remark~\ref{r=0}, in this situation additional noise channels are not
required to ensure the physical realizability of the filter. For
mathematical consistency, we can let
$H_{12}=H_{21}=0$, and select $H_{22}(s)$ to be an arbitrary paraunitary
transfer function $U_{22}(s)$.
This completes the proof of claim~(a).
The proof of claim (b) proceeds in a similar manner. We again
analyze the optimality
conditions~(\ref{eq:49}),~(\ref{opt.con.complementarity}). This time
however, $\lambda=(1+\sigma_u^2)|k|-\psi$
fails to be nonnegative since $\psi> (1+\sigma_u^2) |k|$. On the other hand, in
the case $\lambda=0$, we obtain that the minimum is achieved at
$H_{11}=\frac{(1+\sigma_u^2) k^*}{\psi}$ since
this value of $H_{11}$ satisfies the condition $|H_{11}|\le 1$. The
remaining entries of the optimal filter matrix $H(s)$ are obtained using
Theorem~\ref{two-step}. \hfill$\Box$
\begin{remark}\label{rem.bs.opt}
In order to obtain an optimal equalizer in this
example, it suffices to select a constant $\Theta(s)$. As we have shown,
when $\psi\le |k|(1+\sigma_u^2)$, the optimal equalizer is obtained using
$\Theta=k/|k|$, and when $\psi> |k|(1+\sigma_u^2)$, a transfer
function $\Theta(s)$ can be selected arbitrarily. In the latter case,
choosing a dynamic parameter $\Theta(s)$ in~(\ref{eq:46}) delivers no
benefit, compared with choosing a constant parameter.
\end{remark}
It is interesting to compare the optimal points of the constrained
optimization problem~(\ref{eq:15}) with optimal points of the
corresponding unconstrained
optimization problem~(\ref{eq:71.nc}). When $\psi\le (1+\sigma_u^2) |k|$,
claim (a) of
Proposition~\ref{Prop1} shows that the solutions to these two problems are
different. In the constrained problem~(\ref{eq:15}) the minimum is achieved
on the boundary of the unit disk at $H_{11}=k^*/|k|$, whereas
the minimum of the unconstrained problem~(\ref{eq:71.nc}) is achieved
outside the unit disk, at
$H_{11,*}=\frac{(1+\sigma_u^2) k^*}{\psi}$. The minimum value of the
problem~(\ref{eq:71.nc}) is $\gamma_*^2=2+\sigma_u^2-\frac{(1+\sigma_u^2)^2
|k|^2}{\psi}<\gamma_\circ^2$.
On the other hand, when $\psi> (1+\sigma_u^2)|k|$, claim (b) of
Proposition~\ref{Prop1} states that the two
solutions are identical, and $\gamma_\circ^2=\gamma_*^2$. This situation
was envisaged in Section~\ref{S-proc}, and we now show that the threshold
condition $\psi> (1+\sigma_u^2)|k|$ which characterizes alternative (b)
can be obtained directly from Theorem~\ref{SDP.primal.LMI}.
\begin{proposition}\label{SDP.primal.LMI.BS}
$\psi> (1+\sigma_u^2) |k|$ if and only if there exists $\theta>0$ such that
\begin{equation}
\label{eq:16}
\theta
\left[
\begin{array}{cc}
\psi &-(1+\sigma_u^2) k \\
-(1+\sigma_u^2) k^* & (2+\sigma_u^2)-\gamma_*^2
\end{array}
\right]>
\left[
\begin{array}{rr}
1 & 0 \\
0 & ~-1
\end{array}
\right].
\end{equation}
\end{proposition}
\emph{Proof: }
Since $\gamma_*^2=2+\sigma_u^2-\frac{(1+\sigma_u^2)^2|k|^2}{\psi}$ and
$\theta>0$, (\ref{eq:16}) is equivalent to the inequality
\begin{equation}
\label{eq:89}
\left[
\begin{array}{cc}
\psi-\frac{1}{\theta} &-(1+\sigma_u^2) k \\
-(1+\sigma_u^2) k^* & \frac{1}{\theta}+\frac{(1+\sigma_u^2)^2|k|^2}{\psi}
\end{array}
\right]> 0.
\end{equation}
It is easily checked that a $\theta>0$ for which~(\ref{eq:89}) holds exists
if and only if $\psi-\frac{(1+\sigma_u^2)^2|k|^2}{\psi}>0$. The latter condition
is equivalent to the inequality $\psi> (1+\sigma_u^2) |k|$.
\hfill$\Box$
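For illustrative numbers with $\psi>(1+\sigma_u^2)|k|$, a feasible multiplier $\theta$ in~(\ref{eq:16}) is easy to exhibit numerically:

```python
import numpy as np

# Illustrative data: sigma_u^2 = 1, sigma_w^2 = 4, k = 0.8, so psi > (1+sigma_u2)|k| = 1.6.
sigma_u2, k = 1.0, 0.8
psi = abs(k) ** 2 * sigma_u2 + (1 - abs(k) ** 2) * 4.0
gamma_star2 = 2 + sigma_u2 - (1 + sigma_u2) ** 2 * abs(k) ** 2 / psi

theta = 2.0   # trial multiplier
A = theta * np.array([[psi, -(1 + sigma_u2) * k],
                      [-(1 + sigma_u2) * np.conj(k), (2 + sigma_u2) - gamma_star2]])
J = np.diag([1.0, -1.0])

print(np.linalg.eigvalsh(A - J).min() > 0)   # True: (16) holds for this theta
```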
Proposition~\ref{SDP.primal.LMI.BS} and
Theorem~\ref{SDP.primal.LMI} show that when $\psi> (1+\sigma_u^2) |k|$
the constraint~(\ref{eq:15}) is inactive. This observation has
an interesting interpretation, since the inequality
$\psi> (1+\sigma_u^2) |k|$ is equivalent to $\sigma_w^2> \bar
\sigma_w^2=\frac{(1+\sigma_u^2)|k|-\sigma_u^2|k|^2}{|m|^2}$. The latter
inequality sets a threshold on the intensity of the field $w$.
When the
input field $w$ exceeds this threshold, the optimal filter is able to mix the
fields $y$ and $z$ in such a way that the intensity of the equalization error
$e=\hat u-u$ is reduced, compared with the intensity of the error
$y-u$. The latter would be incurred if the equalizer were not used. Indeed,
the difference
between the power spectrum densities of these two errors is
\begin{eqnarray*}
P_{y-u}-\gamma_\circ^2
&=&\frac{|\psi-(1+\sigma_u^2) k|^2}{\psi} >0.
\end{eqnarray*}
This shows that the equalizer is able to offset the high intensity field
$w$ by redirecting a fraction of this field to the output $\hat z$, and
`trade' it for the low intensity noise $z$. Note that the gap between
$P_{y-u}$ and the optimal power spectrum density $\gamma_\circ^2$ of the
equalization error increases as $\sigma_w^2$ increases.
On the other hand, when $\sigma_w^2\le \bar \sigma_w^2$, the improvement is
marginal. It does not depend on $\sigma_w^2$:
\begin{eqnarray*}
P_{y-u}-\gamma_\circ^2
&=&2(1+\sigma_u^2)(|k|-\mathrm{Re}k).
\end{eqnarray*}
According to~(\ref{eq:5}), the action of the optimal equalizer in this case
is limited to phase correction, $\hat u=\frac{k^*}{|k|}y$.
In the worst case scenario, when $k$ is real, the filter simply passes the
input $y$ through unaltered.
In this worst case, $e=y-u$ and $\gamma_\circ^2=P_{y-u}$; i.e., the
optimal equalizer is unable to improve the mean-square error.
This analysis shows that the
capacity of an optimal coherent
equalizer to respond to noise in the transmission channel is restricted
when the signal to thermal noise ratio $\sigma_u^2/\sigma_w^2$ in the
quantum transmission channel exceeds
\[
\frac{\sigma_u^2}{\bar\sigma_w^2}=\frac{|m|^2\sigma_u^2}{(1+\sigma_u^2)|k|-\sigma_u^2|k|^2}=\frac{|m|^2}{|k|\sigma_u^{-2}+|k|-|k|^2}.
\]
The benefits of equalization become tangible only
when the ratio $\sigma_u^2/\sigma_w^2$ is sufficiently small. This differs strikingly from the
situation encountered in classical mean-square equalization theory. We
conjecture that this
phenomenon holds in general when the channel environment noise is thermal
and the equalizer is passive, and that condition~(\ref{eq:21}) sets a
corresponding threshold on the signal to thermal noise ratio.
We conclude the example by comparing numerical results obtained
from the SDP problem~(\ref{eq:71.LMI.l})--(\ref{eq:14.l}) with the results
obtained directly using Proposition~\ref{Prop1}.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{error-variance-and-gamma2-vs-sig2w.eps}
\caption{Power spectrum densities $P_{y-u}$ and $P_{e}$ (given
by~(\ref{eq:7})) and the optimal
value of the SDP problem
(\ref{eq:71.LMI})--(\ref{eq:14}) for a range of $\sigma_w^2$, for the
beam splitter transmittance of $\eta=0.7$.}
\label{fig.compare}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{absH11-vs-sig2w.eps}
\caption{The theoretical (equation~(\ref{eq:5})) and numerically obtained
(via the SDP problem (\ref{eq:71.LMI})--(\ref{eq:14})) optimal gains
$|H_{11}|$ for a range of $\sigma_w^2$, for the beam splitter transmittance of $\eta=0.7$.}
\label{fig.H11}
\end{figure}
For this comparison, consider a
quantum-mechanical beam splitter as a special case of a static two-input
two-output quantum channel. In this case, $k=\sqrt{\eta}$,
$m=\sqrt{1-\eta}$, where $\eta\in(0,1)$ is the transmittance of the
device. That is, $y=\sqrt{\eta}u+\sqrt{1-\eta}w$.
Figure~\ref{fig.compare} shows the plot of the optimal value of
the LMI problem~(\ref{eq:71.LMI.l})--(\ref{eq:14.l})
obtained for this example numerically. Since the parameters of the
system are constant with respect to $\omega$, interpolation is not required in this
example, and one can use the obtained numerical value of $H_{11}$ directly to
obtain an optimal equalizer, as was done in Proposition~\ref{Prop1}.
The optimal $H_{11}$ obtained numerically is
real; the graph of the optimal $|H_{11}|$ for this example is shown in
Figure~\ref{fig.H11}. For comparison, Figures~\ref{fig.compare}
and~\ref{fig.H11} also show the graphs of the optimal $P_e$ in
equation~(\ref{eq:7}) and
the optimal gain $H_{11}$ obtained in
Proposition~\ref{Prop1}. Remarkably, both the graphs of the optimal value of
the optimization problem and the graphs of the optimal gain are essentially
identical.
The threshold on the intensity of the noise $w$ separating the two
alternative equalization
strategies can also be seen vividly in the graphs.
With the chosen parameters, the threshold is
$\bar\sigma_w^2=\frac{(1+\sigma_u^2)\sqrt{\eta}-\sigma_u^2\eta}{1-\eta}$. When
$\sigma_w^2$ is below this threshold, the equalizing filter is given
by~(\ref{eq:50}), and we let $H_{22}(s)=1$:
\[
\hat u=y=\sqrt{\eta}u+\sqrt{1-\eta}w.
\]
I.e., the optimal equalization policy is to pass the channel output $y$
unaltered. On the other hand, when $\sigma_w^2>\bar\sigma_w^2$, the optimal
equalizing filter is given by~(\ref{eq:50.2}). Letting $U_{12}=U_{21}=1$
yields the following expression for the mean-square optimal estimate of
$u$,
\begin{eqnarray*}
\hat
u&=&\frac{(1+\sigma_u^2)\eta}{\psi}u+\frac{(1+\sigma_u^2)\sqrt{\eta}\sqrt{1-\eta}}{\psi}w
\\
&&+\frac{1}{\psi}\sqrt{\psi^2-(1+\sigma_u^2)^2\eta}z.
\end{eqnarray*}
When $\sigma_w^2>\bar\sigma_w^2$,
$\frac{(1+\sigma_u^2)\sqrt{\eta}\sqrt{1-\eta}}{\psi}< \sqrt{1-\eta}$, i.e.,
the optimal filter applies a reduced gain to the input $w$, compared
with the gain $\sqrt{1-\eta}$ of the
corresponding term in the expression for $y$. This results in the lower
intensity of the filtering error; see Figure~\ref{fig.compare}.
The figure confirms that when the intensity of the thermal noise $w$ is
sufficiently large, the optimal equalizer is
able to reduce the degrading effect of the auxiliary noise $w$ by trading
it for a smaller intensity noise injected through the channel $z$. On
the contrary, when the noise $w$ has low intensity, such trade-off
is not possible, and the filter resorts to passing the channel output $y$
through without any modification.
Finally, we note that for a beam splitter of
transmittance $\eta$, the optimal
equalizer~(\ref{eq:50.2}) can be implemented using a single beam splitter with
the transmittance
$\frac{(1+\sigma_u^2)^2\eta}{(\sigma_u^2\eta+\sigma_w^2(1-\eta))^2}$.
I.e.,
the optimal channel-equalizer system has the configuration shown in
Fig.~\ref{fig:bsplusbs}.
\subsection{Equalization of an optical cavity system}\label{ex2.revisited}
\subsubsection*{Guaranteed cost equalization}
\begin{figure}[t]
\begin{center}
\psfragfig[width=0.9\columnwidth]{cavity-simplified-marked}{
\psfrag{Optical cavity}{Optical cavity}
\psfrag{b}{\hspace{-1ex}$\breve w$}
\psfrag{v}{$\breve u$}
\psfrag{bp}{$\breve u_1$}
\psfrag{vp}{$\breve w_1$}
\psfrag{bp2}{$\breve u_2$}
\psfrag{vp2}{$\breve w_2$}
\psfrag{alpha}{$k_1$}
\psfrag{beta}{$k_2$}
\psfrag{w}{\hspace{-3ex}$w$}
\psfrag{z}{$\breve z$}
\psfrag{bhat}{$\breve{\hat u}$}
\psfrag{wout}{$y_w$}
\psfrag{vout}{$\breve{d}$}
\psfrag{bout}{}
\psfrag{zhat}{$\breve{\hat z}$}
\psfrag{y}{$\breve{y}$}
\psfrag{H}{\hspace{-2ex}$\Xi(s)$}}
\caption{A cavity, beam splitters and an equalizer system.}
\label{cavity}
\end{center}
\end{figure}
Consider the equalization system shown in Fig.~\ref{cavity}. The channel
consists of an optical cavity and two optical beam splitters. As in
the previous example, the input fields $u$ and $w$ are
scalar. For simplicity, suppose that the beam splitters have equal
transmittance parameters $k_1^2=k_2^2$ and that
$k_1,k_2$ are real and positive, so that
$k_1=k_2=k$ is a real constant. Thus, the relations between the input and
output fields of the beam splitters are
\begin{eqnarray*}
&& \left[
\begin{array}{c}
u_1 \\ w_1
\end{array}
\right]=
\left[
\begin{array}{cc}
k& m \\ -m & k
\end{array}
\right]
\left[
\begin{array}{c}
u \\ w
\end{array}
\right], \quad
\left[
\begin{array}{c}
y \\ d
\end{array}
\right]=
\left[
\begin{array}{cc}
k& m \\ -m & k
\end{array}
\right]
\left[
\begin{array}{c}
u_2 \\ w_2
\end{array}
\right],
\end{eqnarray*}
$m=\sqrt{1-k^2}$.
The transfer function of the optical cavity is
$G_c(s)=\frac{s-\kappa+i\Omega}{s+\kappa+i\Omega}$, where $\kappa>0$ and $\Omega$
are real; i.e., $u_2=G_c(s)u_1$.
Then the elements
of the transfer function $G(s)$ of the channel are
\begin{eqnarray}
\label{eq:29}
G_{11}(s)&=&k^2G_c(s)-(1-k^2), \nonumber \\
G_{12}(s)&=&k\sqrt{1-k^2}(G_c(s)+1), \nonumber \\
G_{21}(s)&=&-k\sqrt{1-k^2}(G_c(s)+1), \nonumber \\
G_{22}(s)&=&k^2-(1-k^2)G_c(s).
\end{eqnarray}
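Since the channel is a cascade of two unitary beam splitters and an all-pass cavity, $G(i\omega)$ should be unitary at every frequency; a numpy spot check of~(\ref{eq:29}) with illustrative parameters:

```python
import numpy as np

# Illustrative parameters: k^2 < 1/2; cavity detuning and linewidth.
kpar, kappa, Omega = 0.6, 1.0, 0.5
m = np.sqrt(1 - kpar ** 2)

def G(s):
    """Channel transfer matrix (29): beam splitter, cavity, beam splitter."""
    Gc = (s - kappa + 1j * Omega) / (s + kappa + 1j * Omega)
    return np.array([[kpar ** 2 * Gc - m ** 2, kpar * m * (Gc + 1)],
                     [-kpar * m * (Gc + 1), kpar ** 2 - m ** 2 * Gc]])

# On the imaginary axis the cavity is all-pass, so G(i w) is unitary.
for w in [0.0, 1.0, -3.0]:
    Gw = G(1j * w)
    print(np.allclose(Gw.conj().T @ Gw, np.eye(2)))   # True at each frequency
```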
Our standing assumptions in this section are that $\sigma_w^2>\sigma_u^2>0$
and $k^2<\frac{1}{2}$.
Under these assumptions,
\begin{eqnarray}
&& \rho\triangleq
1+\frac{\sigma_u^2}{2(\sigma_w^2-\sigma_u^2)k^2(1-k^2)}>1, \quad
\hat\rho\triangleq \frac{\rho-1}{\rho+1}\in(0,1), \nonumber \\
&& \delta\triangleq \frac{\sqrt{1-k^2}}{k}>1, \quad \hat\delta\triangleq
\frac{\delta^2+1}{\delta^2-1}=\frac{1}{1-2k^2}>1.
\label{eq:10}
\end{eqnarray}
From~(\ref{eq:90}), we have
\begin{equation}
\label{eq:96}
\Upsilon_3=\sqrt{\sigma_u^2+2-\gamma^2}>0.
\end{equation}
In the next proposition, the following notation will be used:
\begin{eqnarray}
\beta&\triangleq&\frac{1+\sigma_u^2}{\Upsilon_3\sqrt{2(\sigma_w^2-\sigma_u^2)(1+\rho)}}(\delta-\frac{1}{\delta}), \nonumber \\
\alpha&\triangleq&\sqrt{\beta^2-1},
\quad
\nu\triangleq\sqrt{\beta^2\hat\delta^2-\hat\rho}, \label{eq:11} \\
\mu&\triangleq& \sqrt{2(\sigma_w^2-\sigma_u^2)k^2(1-k^2)(1+\rho)},
\nonumber \\
N_1(s)&=&\beta(s+i\Omega)+\beta\hat\delta\kappa,
\nonumber \\
N_2(s)&=&\alpha(s+i\Omega)+\nu\kappa. \nonumber
\end{eqnarray}
\begin{proposition}\label{Prop.cav}
Suppose $\gamma$ is chosen so that $\beta>1$ and $\gamma^2<\sigma_u^2+2$. Then
$H_{11}(s)\in\mathcal{H}_{11,\gamma}$ if and only if
\begin{eqnarray}
\label{eq:46.1}
H_{11}(s)&=&-\frac{\Upsilon_3}{\mu}
\frac{s+\kappa+i\Omega}{N_1(s)-\Theta(s)N_2(s)},
\end{eqnarray}
where $\Theta(s)$ is a stable rational transfer function analytic in the
closed right half-plane, which satisfies $\|\Theta\|_\infty<1$ and the
frequency domain condition
\begin{eqnarray}
\label{eq:9.1}
\left|\frac{N_1(i\omega)-\Theta(i\omega)N_2(i\omega)}{i(\omega+\Omega)+\kappa}
\right|\ge \frac{\Upsilon_3}{\mu} \quad \forall \omega\in\mathbf{R}.
\end{eqnarray}
\end{proposition}
\emph{Proof: }
Using the notation in~(\ref{eq:10}), the function $\Psi(s)$ given in
equation~(\ref{eq:47}) is expressed as
\begin{eqnarray*}
\Psi(s)&=&\sigma^2_u+2(\sigma_w^2-\sigma_u^2)k^2(1-k^2)
\left(1+\frac{(s+i\Omega)^2+\kappa^2}{(s+i\Omega)^2-\kappa^2}\right) \\
\\
&=&2(\sigma_w^2-\sigma_u^2)k^2(1-k^2)(1+\rho)\frac{(s+i\Omega)^2-\hat\rho\kappa^2}{(s+i\Omega)^2-\kappa^2}.
\end{eqnarray*}
Clearly, $\Psi(s)$ has full normal rank. It admits the spectral
decomposition~(\ref{eq:61}) with the spectral factor\footnote{As in
Proposition~\ref{Prop1.J}, one can
use a spectral factor $M_1(s)=e^{i\varphi}M(s)$ in lieu of $M(s)$ in~(\ref{eq:54}). The
definitions of $\Upsilon_1(s)$, $\Upsilon_2(s)$
will then need to be updated accordingly, as was done in
Proposition~\ref{Prop1.J}.}
\begin{equation}
\label{eq:54}
M(s)=\mu
\frac{s+\kappa\sqrt{\hat\rho}+i\Omega}{s+\kappa+i\Omega}.
\end{equation}
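The spectral factorization step can also be checked numerically: on the imaginary axis, the factor $M(s)$ in~(\ref{eq:54}) must satisfy $|M(i\omega)|^2=\Psi(i\omega)$, and the two displayed forms of $\Psi(s)$ must agree. A minimal sketch, using the parameter values that appear later in the example:

```python
import numpy as np

# Check that M(s) is a spectral factor of Psi(s): on the imaginary axis
# |M(i w)|^2 = Psi(i w), and the two displayed forms of Psi agree.
# Parameter values are those used later in the example.
su2, sw2, k, kappa, Omega = 0.1, 0.2, 0.4, 5.0, 10.0
c0 = 2 * (sw2 - su2) * k**2 * (1 - k**2)
rho = 1 + su2 / c0
rho_hat = (rho - 1) / (rho + 1)
mu = np.sqrt(c0 * (1 + rho))
assert rho > 1 and 0 < rho_hat < 1        # the standing assumptions hold

def Psi(s):
    """First displayed form of Psi(s) in the proof."""
    sp2 = (s + 1j * Omega)**2
    return su2 + c0 * (1 + (sp2 + kappa**2) / (sp2 - kappa**2))

def Psi_factored(s):
    """Second displayed form of Psi(s)."""
    sp2 = (s + 1j * Omega)**2
    return c0 * (1 + rho) * (sp2 - rho_hat * kappa**2) / (sp2 - kappa**2)

def M(s):
    """Spectral factor of Psi(s)."""
    return mu * (s + kappa * np.sqrt(rho_hat) + 1j * Omega) / (s + kappa + 1j * Omega)

for w in [-3.0, 0.0, 1.7, 12.4]:
    s = 1j * w
    assert np.isclose(Psi(s), Psi_factored(s))
    assert np.isclose(abs(M(s))**2, Psi(s).real)
```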
Both $M(s)$ and $M^{-1}(s)$ are stable and
analytic in the half-plane
$\mathrm{Re}s>-\kappa\sqrt{\hat\rho}$. Using~(\ref{eq:54}), the transfer
function $Q(s)$ in~(\ref{eq:75}) is expressed as
\begin{eqnarray*}
Q(s)=\frac{1+\sigma_u^2}{\sqrt{2(\sigma_w^2-\sigma_u^2)(1+\rho)}}(\delta-\frac{1}{\delta})\frac{s+\hat\delta\kappa+i\Omega}{s+\kappa\sqrt{\hat\rho}+i\Omega}.
\end{eqnarray*}
Using this information, one can readily check that the matrix $\Upsilon(s)$
in which
\begin{eqnarray}
\Upsilon_1(s)&=&\frac{N_1(s)}{s+\sqrt{\hat\rho}\kappa+i\Omega}
=\beta\frac{s+\hat\delta\kappa+i\Omega}{s+\sqrt{\hat\rho}\kappa+i\Omega},
\nonumber \\
\Upsilon_2(s)&=&\frac{N_2(s)}{s+\sqrt{\hat\rho}\kappa+i\Omega}
=\alpha\frac{s+\frac{\nu}{\alpha}\kappa+i\Omega}{s+\sqrt{\hat\rho}\kappa+i\Omega},
\label{eq:95}
\end{eqnarray}
$\Upsilon_3$ is defined in~(\ref{eq:96}) and $\Upsilon_4=0$, is a
$J$-spectral factor of the corresponding matrix $\Phi(s)$. Indeed, when
$\beta>1$, the constants $\alpha$ and $\nu$ in~(\ref{eq:11}) are well
defined, and the identity~(\ref{eq:68}) can be verified directly.
Also, since $\sqrt{\hat\rho}\kappa>0$,
$\Upsilon_1(s)$ and $\Upsilon_2(s)$ are stable and are analytic in the
half-plane $\mathrm{Re}(s)>-\sqrt{\hat\rho}\kappa$. Therefore,
$\Upsilon(s)$ also has these properties. Similarly, since
$\hat\delta$ is positive, then $\Upsilon_1(s)^{-1}$ is also stable and is
analytic in the half-plane $\mathrm{Re}(s)>-\hat\delta\kappa$.
Finally, we note that
\begin{equation}
\label{eq:97}
\Upsilon(s)^{-1}= \frac{1}{K(s)}
\left[
\begin{array}{cc}
0 & -\Upsilon_2(s) \\
-\Upsilon_3 & \Upsilon_1(s)
\end{array}
\right],
\end{equation}
where
\begin{equation}
\label{eq:99}
K(s)=\det\Upsilon(s)
=-\Upsilon_3\Upsilon_2(s)=-\Upsilon_3 \alpha
\frac{s+\frac{\nu}{\alpha}\kappa+i\Omega}{s+\sqrt{\hat\rho}\kappa+i\Omega}.
\quad
\end{equation}
Therefore $\frac{1}{K(s)}$ is stable and is analytic in the half-plane
$\mathrm{Re}s> -\frac{\nu}{\alpha}\kappa$. Thus we conclude that
$\Upsilon(s)^{-1}$ is stable and is analytic in a half-plane
$\mathrm{Re}s>-\tau$ for some $\tau>0$.
These properties verify the conditions of Theorem~\ref{T2}. Therefore
$H_{11}(s)\in\mathcal{H}_{11,\gamma}$ if and only if there
exists a transfer function $\Theta(s)$ with properties described in that
theorem for which $H_{11}(s)$ can be expressed by
equation~(\ref{eq:67}).
Since we chose $\Upsilon_4=0$, we can use
Corollary~\ref{cor.LFT} to obtain the general
form of a feasible $H_{11}(s)$. Substituting~(\ref{eq:54}),~(\ref{eq:95})
in~(\ref{eq:67.LFT}) yields~(\ref{eq:46.1}). The
frequency domain condition~(\ref{eq:9.1}) follows from~(\ref{eq:79})
and~(\ref{eq:45.LFT}) in the same manner.
\hfill$\Box$
As in the previous section, it is useful to derive sufficient
conditions that allow us to obtain a $\Theta(s)$ which
solves~(\ref{eq:9.1}). For this, we
restrict attention to constant $\Theta$'s.
\begin{corollary}\label{cav.suffcond}
Under the conditions of Proposition~\ref{Prop.cav}, if
\begin{equation}
\label{eq:9.4}
\beta+\alpha>\mu^{-1}\sqrt{\sigma_u^2+2-\gamma^2},
\end{equation}
then for any constant $\Theta\in
(-1,\min\{\alpha^{-1}(\beta-\mu^{-1}\Upsilon_3),0\})$ the corresponding
$H_{11}(s)$ given by equation~(\ref{eq:46.1}) belongs to
$\mathcal{H}_{11,\gamma}$.
\end{corollary}
\emph{Proof: }
For a constant $\Theta$, (\ref{eq:9.1}) is equivalent to
\begin{equation*}
\label{eq:9.2}
\frac{\Upsilon_3}{\mu}
\sqrt{\frac{(\omega+\Omega)^2+\kappa^2}{(\beta-\Theta\alpha)^2(\omega+\Omega)^2
+(\beta\hat\delta-\Theta\nu)^2\kappa^2}}
\le 1 \quad \forall \omega\in\mathbf{R}.
\end{equation*}
The maximum of the expression on the left-hand side is equal to
$\frac{\Upsilon_3}{\mu}/\min\{|\beta-\Theta\alpha|,|\beta\hat\delta-\Theta\nu|\}$.
Therefore, (\ref{eq:9.1}) is equivalent to the condition
\begin{equation}
\label{eq:9.3}
\min\{|\beta-\Theta\alpha|,|\beta\hat\delta-\Theta\nu|\}\ge \mu^{-1} \sqrt{\sigma_u^2+2-\gamma^2}.
\end{equation}
Next, we show that~(\ref{eq:9.3}) holds for any $\Theta\in
(-1,\min\{\alpha^{-1}(\beta-\mu^{-1}\Upsilon_3),0\})$. Indeed, condition
(\ref{eq:9.4}) guarantees that this interval is not an empty set. Then
for any $\Theta$ in that interval,
\[
\min\{|\beta-\Theta\alpha|,|\beta\hat\delta-\Theta\nu|\}=\beta-\Theta\alpha.
\]
This identity holds because $\Theta<0$, $\beta\hat\delta>\beta$ and
$\nu>\alpha$ due to $\hat\delta>1$, $\hat\rho<1$. Furthermore,
$\Theta<\alpha^{-1}(\beta-\mu^{-1}\Upsilon_3)$ implies
$
\beta-\Theta\alpha>\mu^{-1}\Upsilon_3.
$
This validates~(\ref{eq:9.3}) and~(\ref{eq:9.1}). The claim
then follows from Proposition~\ref{Prop.cav}.
\hfill$\Box$
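The conditions of Corollary~\ref{cav.suffcond} are straightforward to check numerically. The sketch below does so for the parameter values used later in this example ($\sigma_u^2=0.1$, $\sigma_w^2=0.2$, $k=0.4$) and an illustrative choice $\gamma^2=0.9$; it also confirms the bounds $|a|<1<c$ claimed below.

```python
import numpy as np

# Numerical illustration of the corollary: check beta > 1, the sufficient
# condition beta + alpha > Upsilon_3 / mu, non-emptiness of the interval of
# admissible constant Theta's, and the bounds |a| < 1 < c. The value
# gamma^2 = 0.9 is an illustrative choice.
su2, sw2, k = 0.1, 0.2, 0.4
c0 = 2 * (sw2 - su2) * k**2 * (1 - k**2)
rho = 1 + su2 / c0
rho_hat = (rho - 1) / (rho + 1)
delta = np.sqrt(1 - k**2) / k
delta_hat = 1 / (1 - 2 * k**2)
mu = np.sqrt(c0 * (1 + rho))

gamma2 = 0.9                              # illustrative; gamma^2 < sigma_u^2 + 2
Ups3 = np.sqrt(su2 + 2 - gamma2)
beta = (1 + su2) / (Ups3 * np.sqrt(2 * (sw2 - su2) * (1 + rho))) * (delta - 1 / delta)
assert beta > 1
alpha = np.sqrt(beta**2 - 1)
nu = np.sqrt(beta**2 * delta_hat**2 - rho_hat)

assert beta + alpha > Ups3 / mu           # the sufficient condition holds
theta_max = min((beta - Ups3 / mu) / alpha, 0.0)
assert -1 < theta_max                     # the interval (-1, theta_max) is non-empty

Theta = 0.5 * (-1 + theta_max)            # pick a point inside the interval
a = -Ups3 / (mu * (beta - Theta * alpha))
c = (beta * delta_hat - Theta * nu) / (beta - Theta * alpha)
assert abs(a) < 1 < c
```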
Finally, we apply Corollary~\ref{cav.suffcond} and Theorem~\ref{two-step}
to obtain a complete physically
realizable suboptimal equalizer for the cavity system in this example. For
convenience, we introduce the additional notation
\begin{eqnarray}
\label{eq:22}
a=-\frac{\Upsilon_3}{\mu(\beta-\Theta\alpha)}, \quad
c=\frac{\beta\hat\delta-\Theta\nu}{\beta-\Theta\alpha}.
\end{eqnarray}
Conditions of
Corollary~\ref{cav.suffcond} allow us to select
$\Theta\in (-1,\min\{\alpha^{-1}(\beta-\mu^{-1}\Upsilon_3),0\})$, i.e.,
$\Theta<0$ and
\begin{eqnarray*}
\label{eq:24}
|a|&=&\frac{\Upsilon_3}{\mu|\beta-\Theta\alpha|}=\frac{\Upsilon_3}{\mu(\beta-\Theta\alpha)}<1,
\\
c&=&\frac{\beta\hat\delta-\Theta\nu}{\beta-\Theta\alpha}>1>|a|.
\end{eqnarray*}
Hence $c^2-a^2>0$, $1-a^2>0$.
Using this notation, the transfer function $H_{11}(s)$ in~(\ref{eq:46.1})
can be written in a compact form
\begin{eqnarray}
\label{eq:18}
H_{11}(s)&=&a\frac{s+\kappa+i\Omega}{s+c\kappa+i\Omega}.
\end{eqnarray}
Clearly, it satisfies condition (H1) of
Theorem~\ref{two-step}. The frequency domain
condition~(\ref{eq:9.1}) ensures that condition (H2) is also satisfied.
Then we compute
\begin{eqnarray}
\label{eq:28}
X_1(s)&=&X_2(s)=1-H_{11}(s)H_{11}(s)^H
\nonumber \\
&=&(1-a^2)\frac{(s+i\Omega)^2-\frac{c^2-a^2}{1-a^2}\kappa^2}{(s+i\Omega)^2-c^2\kappa^2}.
\end{eqnarray}
It is easy to check that $X_1(s)$ and $X_2(s)$ are para-Hermitian and satisfy
condition (H3) of Theorem~\ref{two-step}. Let us define the spectral
factors of $X_1(s)$, $X_2(s)$,
\begin{eqnarray}
\label{eq:30}
H_{12}(s)&=&-\sqrt{1-a^2}\frac{s+\sqrt{\frac{c^2-a^2}{1-a^2}}\kappa +i\Omega}{s+c\kappa+i\Omega},
\\
\tilde H_{21}(s)&=&-H_{12}(s)=\sqrt{1-a^2}\frac{s+\sqrt{\frac{c^2-a^2}{1-a^2}}\kappa +i\Omega}{s+c\kappa+i\Omega},
\nonumber \\
\tilde
H_{21}^{-1}(s)&=&\frac{1}{\sqrt{1-a^2}}\frac{s+c\kappa+i\Omega}{s+\sqrt{\frac{c^2-a^2}{1-a^2}}\kappa +i\Omega},
\nonumber
\end{eqnarray}
and also select
\begin{equation*}
\label{eq:63}
U(s)=\frac{s-\sqrt{\frac{c^2-a^2}{1-a^2}}\kappa +i\Omega}{s+\sqrt{\frac{c^2-a^2}{1-a^2}}\kappa +i\Omega}.
\end{equation*}
This transfer function is paraunitary, stable and analytic in the
half-plane $\mathrm{Re}s>-\sqrt{\frac{c^2-a^2}{1-a^2}}\kappa$, as required by
Theorem~\ref{two-step}.
Using these definitions the remaining blocks of $H(s)$ are obtained
according to~(\ref{eq:81}):
\begin{eqnarray}
\label{eq:72}
H_{21}(s)&=&U(s)\tilde H_{21}(s) \nonumber \\
&=&\sqrt{1-a^2}\frac{s-\sqrt{\frac{c^2-a^2}{1-a^2}}\kappa+i\Omega}{s+c\kappa+i\Omega},
\nonumber \\
H_{22}(s)&=&-U(s)(\tilde H_{21}^{-1}(s))^HH_{11}(s)^HH_{12}(s) \nonumber \\
&=&a\frac{s-\kappa+i\Omega}{s+c\kappa+i\Omega}.
\end{eqnarray}
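Physical realizability of the resulting passive equalizer requires the completed transfer matrix $H(s)$ to be unitary on the imaginary axis. This can be checked numerically for the blocks~(\ref{eq:18}),~(\ref{eq:30}) and~(\ref{eq:72}); the values of $a$, $c$, $\kappa$, $\Omega$ below are illustrative, and the identity holds for any $|a|<1<c$.

```python
import numpy as np

# Check that the equalizer transfer matrix H(s) assembled from the blocks
# H11, H12, H21, H22 derived in the text is unitary on the imaginary axis,
# as required for a physically realizable passive system. Parameter values
# are illustrative; any |a| < 1 < c works.
a, c, kappa, Omega = -0.6, 1.8, 5.0, 10.0
r = np.sqrt((c**2 - a**2) / (1 - a**2))

def H(s):
    sp = s + 1j * Omega
    den = sp + c * kappa
    return np.array([
        [a * (sp + kappa) / den, -np.sqrt(1 - a**2) * (sp + r * kappa) / den],
        [np.sqrt(1 - a**2) * (sp - r * kappa) / den, a * (sp - kappa) / den],
    ])

for w in [-7.0, 0.0, 2.3, 15.0]:
    Hw = H(1j * w)
    assert np.allclose(Hw @ Hw.conj().T, np.eye(2))
```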
The following proposition, which follows from
Theorem~\ref{two-step}, summarizes our analysis.
\begin{proposition}\label{Prop.cav.complete}
Given a constant $\gamma$ which satisfies the conditions of
Proposition~\ref{Prop.cav} and Corollary~\ref{cav.suffcond},
consider the transfer function $\Xi(s)=\Delta(H(s),0)$
where $H(s)$ is composed of the blocks defined in
equations~(\ref{eq:18}),~(\ref{eq:30}) and~(\ref{eq:72}); see~(\ref{eq:98a}).
Then $\Xi(s)=\Delta(H(s),0)$ is a passive,
physically realizable, stable and causal guaranteed cost equalizer for the
cavity system under consideration, and
\begin{equation}
\label{eq:78}
\sup_\omega P_e(i\omega,\Xi)<\gamma^2.
\end{equation}
\end{proposition}
It is worth pointing out that the guaranteed cost equalizer in this example
can be realized using an interconnection of an optical cavity and two
beam splitters shown in Figure~\ref{fig:cavity.filter}.
The transfer function of the optical cavity in the figure is
\begin{equation*}
y_2=H_c(s)y_1, \quad H_c(s)=\frac{s-c\kappa+i\Omega}{s+c\kappa+i\Omega},
\end{equation*}
and the beam splitters' operators are
\begin{eqnarray*}
&& \left[
\begin{array}{c}
y_1 \\ z_1
\end{array}
\right]=
\left[
\begin{array}{cc}
\xi_1 & \eta_1 \\ \eta_1 & -\xi_1
\end{array}
\right]
\left[
\begin{array}{c}
y \\ z
\end{array}
\right], \quad
\left[
\begin{array}{c}
\hat u \\ \hat z
\end{array}
\right]=
\left[
\begin{array}{cc}
\eta_2& \xi_2 \\ \xi_2 & -\eta_2
\end{array}
\right]
\left[
\begin{array}{c}
y_2 \\ z_2
\end{array}
\right],
\end{eqnarray*}
where
\begin{eqnarray*}
&& \eta_1=-\sqrt{\frac{c+a^2-\sqrt{(c^2-a^2)(1-a^2)}}{2c}}, \\
&& \xi_1=\sqrt{1-\eta_1^2}=\sqrt{\frac{c-a^2+\sqrt{(c^2-a^2)(1-a^2)}}{2c}}, \\
&& \eta_2=-\sqrt{\frac{c-a^2-\sqrt{(c^2-a^2)(1-a^2)}}{2c}}, \\
&& \xi_2=\sqrt{1-\eta_2^2}=\sqrt{\frac{c+a^2+\sqrt{(c^2-a^2)(1-a^2)}}{2c}}.
\end{eqnarray*}
\begin{figure}[t]
\begin{center}
\psfragfig[width=0.7\columnwidth]{cavity-filter}{
\psfrag{Optical cavity}{Optical cavity}
\psfrag{b}{\hspace{-1ex}$\breve z$}
\psfrag{v}{$\breve y$}
\psfrag{bp}{$\breve y_1$}
\psfrag{vp}{$\breve z_1$}
\psfrag{bp2}{$\breve y_2$}
\psfrag{vp2}{$\breve z_2$}
\psfrag{alpha}{$\eta_1$}
\psfrag{beta}{$\eta_2$}
\psfrag{w}{\hspace{-3ex}$w$}
\psfrag{bhat}{$\breve{\hat u}$}
\psfrag{wout}{$y_w$}
\psfrag{vout}{$\breve{\hat z}$}
\psfrag{y}{$\breve{\hat u}$}
\psfrag{H}{\hspace{-2ex}$\Xi(s)$}}
\caption{A cavity and beam splitters realization of the equalizer.}
\label{fig:cavity.filter}
\end{center}
\end{figure}
Proposition~\ref{Prop.cav.complete} reduces the question of finding a
suboptimal physically realizable equalizer to checking
whether~(\ref{eq:9.4}) is satisfied for a given
$\gamma$ such that $\gamma^2<\sigma_u^2+2$ and $\beta>1$. It is also possible
to minimize the upper bound $\gamma^2$ on $\sup_\omega
P_e(i\omega,\Xi)$ over the set $\{\gamma\colon
\gamma^2\in(0,\sigma_u^2+2),
\beta>1,\beta+\alpha>\mu^{-1}\Upsilon_3\}$. This will lead to a
suboptimal solution to the problem. Figure~\ref{fig:cav.subopt}
illustrates this. The solid line in
Figure~\ref{fig:cav.subopt} shows such a suboptimal $\gamma^2$ obtained for a range of values of
$\sigma_w^2>\sigma_u^2$, where $\sigma_u^2=0.1$, $k=0.4$, $\kappa=5$,
$\Omega=10$.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{cavity-errors-vs-sig2w.eps}
\caption{$\sup_\omega P_{y-u}(i\omega)$ (the dashed line) and the optimized bound on
$\sup_\omega P_{e}(i\omega)$ (the solid line)
for a range of $\sigma_w^2$, for an optical cavity.}
\label{fig:cav.subopt}
\end{figure}
For comparison, the figure shows the graph of the error power spectrum
density $\sup_\omega P_{y-u}$ of the system without an equalizer (the
dashed line). The
advantage of equalization is quite clear from this
figure. Fig.~\ref{fig:cav.Bode.subopt} shows the Bode plot of one of the
suboptimal transfer functions $H_{11}(s)=-\frac{s+5+10i}{s+7.961+10i}$
obtained for the cavity system with $\sigma_w^2=0.2$. It confirms that
$|H_{11}(i\omega)|<1$.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{cavity-Bode.eps}
\caption{The Bode plot of one of the
suboptimal transfer functions $H_{11}(s)$
obtained using equation~(\ref{eq:46.1}) with $\Theta=-0.9998$. The
parameters of the system are $\sigma_w^2=0.2$, $\sigma_u^2=0.1$,
$k=0.4$, $\kappa=5$, $\Omega=10$.}
\label{fig:cav.Bode.subopt}
\end{figure}
\subsubsection*{Equalization via semidefinite programming}
We now illustrate the application of the approximation
technique presented in
Section~\ref{semidef}. For this, we consider the same cavity
system with parameters
$k=0.4$,
$\kappa=5$, $\Omega=10$, $\sigma_w^2=0.2$, $\sigma_u^2=0.1$. Recall that
the transfer function matrix $G(s)$ of that system is a $2\times 2$ matrix,
its elements are given in~(\ref{eq:29}).
To apply the algorithm described in Section~\ref{semidef} to this system,
first a set of $L=21$ points $\omega_l$ was selected which included $0$,
ten logarithmically spaced frequency points in the interval
$[10^{-3},10]$ and the corresponding negative frequencies. With these
data, the LMI problem~(\ref{eq:71.LMI.l})--(\ref{eq:14.l}) was
solved numerically and the array of values $H_{11,l}$, $l=1,
\ldots, L$, was obtained, along with the optimal value of the optimization
problem, which in this example was
$\tilde\gamma^2=0.7049$. It is worth noting
that $\tilde\gamma^2<\sigma_u^2+2$; this validates~(\ref{eq:53}).
This set of data was then used to solve the Nevanlinna-Pick interpolation
problem.
The procedure
outlined in Section~\ref{semidef}
involves mapping the half-plane $\mathrm{Re}s>-\tau$ conformally onto the
half-plane $\mathrm{Re}s>0$, performing interpolation over this half-plane
to obtain $\hat H_{11}(s)$, then obtaining $H_{11}(s)$ via~(\ref{eq:52}). To
implement this procedure, we used the conformal mapping
$s'=s+\tau$, where we let
$\tau=10^{-3}$. This ensured that the grid points $i\omega_l$ on the
imaginary axis were mapped conformally into the interior of the right
half-plane $\mathrm{Re}s>0$, as required by the algorithm in
Section~\ref{semidef}.
Theorem~NP in~\cite{Kovalishina-1984} allows us to obtain a
solution using~(\ref{eq:58}), provided the Pick matrix $\mathbf{P}$ is
positive definite. This assumption of the Nevanlinna-Pick interpolation
theory was satisfied in this example.
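The positivity test underlying this step can be sketched as follows. For interpolation data $(s_l, f_l)$ with $\mathrm{Re}\,s_l>0$, the standard right-half-plane Pick matrix has entries $(1-f_j\bar{f}_l)/(s_j+\bar{s}_l)$, and it is positive semidefinite exactly when a contractive interpolant exists. The sketch below is a hypothetical illustration, not the data of this example: it samples the data from a known strictly contractive function, so the Pick matrix is positive definite.

```python
import numpy as np

# Pick matrix test for Nevanlinna-Pick interpolation on Re(s) > 0.
# The interpolation data are sampled from the strictly contractive function
# f(s) = 1 / (2 (s + 1)), so the Pick matrix should be positive definite.
tau = 1e-3
omega = np.array([0.0, 0.5, 2.0, 7.0])
s = tau + 1j * omega              # grid points shifted into Re(s) > 0, as in the text
f = 1.0 / (2.0 * (s + 1.0))

# P[j, l] = (1 - f_j * conj(f_l)) / (s_j + conj(s_l)); Hermitian by construction.
P = (1 - np.outer(f, f.conj())) / (s[:, None] + s[None, :].conj())
eigs = np.linalg.eigvalsh(P)
assert np.all(eigs > 0)
```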
The method gives the analytical expression~(\ref{eq:58})
for $\hat H_{11}(s)$. Since it involves inverting the Pick matrix $\mathbf{P}$, a
closed form expression for~(\ref{eq:58}) is quite cumbersome, even when a
modest number of grid points is selected. Therefore we validated our
approach numerically. For this, we selected additional
grid points on the imaginary axis, while keeping the original frequencies as a
control set. Then we computed interpolated values of $H_{11}(i\omega)$ at
those grid points, using~(\ref{eq:58}),~(\ref{eq:52}) with
$\Theta(s)=-0.95,0$ and $0.95$. Also, the
corresponding normalized values of the error power spectrum density,
$P_e(i\omega,H_{11})/{\tilde\gamma^2}$, were computed. The graphs of these
quantities are shown in Fig.~\ref{fig:psds} using solid lines.
\begin{figure}[t]
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[width=0.9\columnwidth]{cavity-errors-suboptimal.eps}
\caption{}\label{fig:psds}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\includegraphics[width=0.9\columnwidth]{cavity-errors-suboptimal-sig2w=4.eps}
\caption{}\label{fig:psds.sig2w4}
\end{subfigure}
\caption{The normalized computed error power spectrum densities
$P_e/{\tilde\gamma^2}$ (the solid lines) and the
normalized error $P_{y-u}/{\tilde\gamma^2}$ (the dashed line): (a)
$\sigma_w^2=0.2$; (b) $\sigma_w^2=4$.}
\label{fig:psds.all}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[width=0.9\columnwidth]{cavity-H11-suboptimal.eps}
\includegraphics[width=0.9\columnwidth]{cavity-H11-phase-suboptimal.eps}
\caption{} \label{fig:ratio}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\includegraphics[width=0.9\columnwidth]{cavity-H11-suboptimal-sig2w=4.eps}
\includegraphics[width=0.9\columnwidth]{cavity-H11-phase-suboptimal-sig2w=4.eps}
\caption{} \label{fig:ratio.4}
\end{subfigure}
\caption{The Bode plots of $H_{11}(s)$ for $\Theta=-0.95,0$ and $0.95$. The
circles indicate the magnitude and phase of $H_{11,l}$ obtained from
the optimization problem~(\ref{eq:71.LMI.l})--(\ref{eq:14.l}): (a)
$\sigma_w^2=0.2$; (b) $\sigma_w^2=4$.}\label{fig:ratio.all}
\end{figure}
Fig.~\ref{fig:ratio} confirms that all three computed $H_{11}(i\omega)$
agree at the grid frequencies.
The sharp peaks in the graphs occur at the grid point frequencies
$\omega_l$ which were used in the
LMIs~(\ref{eq:13.l}),~(\ref{eq:14.l}). These peaks
occur because the transfer functions $W_{ij}(s)$ which parameterize the
solution have poles at $-\tau+i\omega_l$; see~(\ref{eq:101}). When
$\tau=0.001$, these poles are quite close to the imaginary axis.
Other than at the control frequencies $\omega_l$, all three graphs of
$P_e(i\omega,
H_{11})$ deviate from the optimal value $\tilde\gamma^2$ of
the problem (\ref{eq:71.LMI.l})--(\ref{eq:14.l}). This is expected
since the algorithm optimizes the PSD of the error at selected frequency points
only. It is worth noting that away from
the grid frequencies, $P_e(i\omega, H_{11})$ varies considerably, depending
on $\Theta$. When we let $\Theta=-0.95$
in~(\ref{eq:58}), interpolation led to a substantially improved error power
spectrum density, in comparison with the power spectrum
density $P_{y-u}(i\omega)$ of the difference $y-u$. However, when
$\Theta=0.95$, the error power spectrum density deviated considerably from
the value $\tilde\gamma^2$, and was relatively close to
$P_{y-u}(i\omega)$. Nevertheless, the observed reduction of the
error PSD using $\Theta=-0.95$ and $\Theta=0$ indicates that there is
room for further optimization of
$P_e(i\omega,H_{11})$ over the parameter $\Theta(s)$. Simulations performed
with other values of $\sigma_w^2$ (e.g., see Fig.~\ref{fig:psds.sig2w4}
and~\ref{fig:ratio.4}) confirmed this finding. This interesting problem will be
addressed in future research.
Another interesting observation is that with the selected
parameters, the optimization problem~(\ref{eq:71.LMI.l})-(\ref{eq:14.l})
produced a set of points $H_{11,l}$ that were quite close
to the boundary of the set $|H_{11}(i\omega)|\le 1$. As a result, while
all three interpolants satisfied the constraint~(\ref{eq:19})
of Problem~\ref{P1a} required for physical realizability of the filter,
they did so with a rather small margin; see Fig.~\ref{fig:ratio}.
The intuition gained in the previous section suggests that when the noise
intensity is sufficiently large, the parameter $H_{11}$ of the filter
should move away from the boundary of the constraint set, in an attempt
to reduce the contribution of the noise field to the output of the
equalizer. This is confirmed in Fig.~\ref{fig:ratio.4}, which illustrates
the results of interpolation when $\sigma_w^2=4$. This time all obtained
$H_{11,l}$ have magnitude of the order of 0.4. The corresponding value is
$\tilde\gamma^2=1.8117$.
It is worth noting that for both values of $\sigma_w^2$, letting
$\Theta=0$ led to $H_{11}(i\omega)$ vanishing at large
$\omega$. When
$\sigma_w^2=0.2$, the filter with $H_{11}(s)$ that vanishes as
$\omega\to\infty$ is not mean-square optimal. However, when
$\sigma_w^2=4$, the equalizer with $\Theta=0$ is the best
out of the three in terms of performance. In this equalizer, the gain
$H_{12}(i\omega)$ dominates $H_{11}(i\omega)$ as $\omega\to\infty$. An
explanation for this is
that when the intensity of the channel noise field becomes very large,
from the viewpoint of optimizing the mean-square error it becomes
advantageous to block high frequency components of the channel output
altogether and to pass the filter environment field through as the filter
output. Such a filtering
strategy may not be beneficial when the information accuracy of the
system is important --- although the signal-to-noise ratio is low, the
noisy channel output still carries some information about its input,
while the filter environment does not carry such information. Therefore,
an interesting problem for future research is to find a trade-off between
mean-square accuracy and information accuracy of coherent equalizers,
similar to the problem considered recently for classical Kalman-Bucy
filters~\cite{TZUS1}.
\section{Conclusions}\label{Conclusions}
The paper has introduced a quantum counterpart of the classical equalization
problem. The discussion is focused on passive quantum
channels and passive quantum filters, and is motivated by the
utility and the ease of implementation of passive quantum
systems~\cite{Nurdin-2010,NY-2017}.
In contrast with previous work on developing coherent Wiener
and Kalman filters, we posed the
quantum equalization problem in the same vein as the classical
$H_\infty$ filtering problem. However, instead of the
disturbance-to-error transfer function, we considered the PSD of the
difference between the input field of the quantum communication channel
and the output field of the equalizer as the measure of the equalizer
performance. Accordingly, the filter was
sought to guarantee that the maximum eigenvalue of the error PSD was
below a prescribed threshold. The requirement that such a
filter must be physically realizable adds a constraint on the filter.
We have shown that this problem reduces to a constrained
optimization with respect to one of the blocks of the equalizer's transfer function
matrix. Using the $J$-spectral factorization technique, we have developed a
convenient parameterization of the class of suboptimal filters similar to
the Youla parameterization of the class of stabilizing controllers.
Also, this auxiliary problem was cast as a semidefinite program subject to
frequency-dependent
linear matrix inequality constraints, and a
tractable constraint relaxation was proposed involving constraints
over a discrete set of frequencies. In addition, the Nevanlinna-Pick
interpolation technique was employed to ensure that the
solution to the relaxed problem yields a physically realizable filter.
A set of all interpolating filters was also obtained. In principle,
coherent filters obtained this way are not guaranteed to yield an improved
mean-square performance over the entire interval of frequencies. Therefore it is
interesting to attempt to minimize the error power spectrum density over
the set of interpolating filters.
Another possible direction for future research is to find a
trade-off between mean-square accuracy and information accuracy of
coherent equalizers.
The paper gives two examples of equalization of single mode channels.
One of them comprises a static quantum system as a channel, and another one
includes a quantum optical cavity. These examples demonstrate that coherent
equalizers can be effective in improving the mean-square accuracy of the
channel. We also showed that in the static case, passive
equalizers are especially beneficial when the intensity of the
thermal noise from the channel environment exceeds a certain
threshold. A linear matrix inequality condition has been
introduced to predict such a threshold in a general case.
\newcommand{\noopsort}[1]{} \newcommand{\printfirst}[2]{#1}
\newcommand{\singleletter}[1]{#1} \newcommand{\switchargs}[2]{#2#1}
\section{Bayesian analysis and computation}\label{sec:bac}
\subsection{Notations and Conventions}
\input{notation.tex}
\subsection{Imaging inverse problems}
We consider imaging inverse problems where we seek to estimate an unknown image $x \in \mathbb{R}^d$ from an observation $y$, related to $x$ by a forward statistical model with likelihood function $p(y|x)$. Following a Bayesian approach, we use prior knowledge about $x$ to reduce the uncertainty and deliver accurate estimation results \cite{somersalo:2005}. Precisely, we specify a prior distribution $p(x)$ promoting expected properties (e.g., sparsity, piecewise regularity, or smoothness), and combine observed and prior information by using Bayes' theorem, leading to the posterior distribution \cite{cprbayes}
$$
\pi(x) \triangleq p(x|y) = \frac{p(y|x)p(x)}{\int_{\mathbb{R}^d} p(y|x)p(x)\textrm{d}x}\,,
$$
that we henceforth denote as $\pi$, and which models our knowledge about $x$ after observing $y$. In this paper we focus on inverse problems that are convex. We assume that $\pi$ is log-concave, i.e.
\begin{eqnarray}\label{posterior}
\pi(x) = \frac{\mathrm{e}^{-U(x)}}{\int_{\mathbb{R}^d} \mathrm{e}^{-U(s)} \mathrm{d} s \;},
\end{eqnarray}
for some measurable function $U : \mathbb{R}^{d} \to \ocint{-\infty,+\infty}$ satisfying the following condition.
\begin{assumption}
\label{assum:form-potential}
$U = f+ g$, where $f : \mathbb{R}^d \to \mathbb{R}$ and $g : \mathbb{R}^d \to \ocint{-\infty,+\infty}$ are two lower bounded functions
satisfying:
\begin{enumerate}[label=(\roman*)]
\item $f$ is convex, continuously differentiable, and gradient Lipschitz with Lipschitz constant $L_f$, i.e.~for all $x,y \in \mathbb{R}^d$
\begin{equation}
\label{eq:gradient-Lip}
\norm{\nabla f(x) - \nabla f(y)} \leq L_f \norm{x-y} \;.
\end{equation}
\item \label{item:assum-prior} $g$ is proper, convex and lower semi continuous (l.s.c).
\end{enumerate}
\end{assumption}
Notice that the class \eqref{posterior} comprises many important models that are used extensively in modern imaging sciences; in particular, models of the form $U(x) = \|y-Ax\|^2 /2\sigma^2 + \phi(B x)$ for some linear operators $A$, $B$, and some convex regulariser $\phi$ that is typically non-smooth, and which may also encode convex constraints on the parameter space. In such cases, one takes $f(x) = \|y-Ax\|^2 /2\sigma^2$ and $g(x) = \phi(B x)$, for instance.
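These two terms provide exactly the building blocks used by proximal algorithms: the gradient of $f$ and the proximal operator of $g$. A minimal sketch, for the illustrative case $g(x)=\lambda\|x\|_1$ (i.e., $B$ the identity), whose proximal operator is soft-thresholding; the matrix $A$, the observation $y$ and all parameter values below are synthetic assumptions made for illustration:

```python
import numpy as np

# Building blocks for the posterior potential U = f + g considered in the
# text: gradient of f(x) = ||y - A x||^2 / (2 sigma^2) and the proximal
# operator of the illustrative non-smooth prior g(x) = lam * ||x||_1,
# which is soft-thresholding. All data below are synthetic.
rng = np.random.default_rng(0)
d, n, sigma, lam = 8, 5, 1.0, 0.5
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def grad_f(x):
    return A.T @ (A @ x - y) / sigma**2

def prox_g(x, gamma):
    """prox_{gamma * g}(x) for g = lam * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)

def U(x):
    """Posterior potential U = f + g."""
    return 0.5 * np.sum((y - A @ x)**2) / sigma**2 + lam * np.sum(np.abs(x))

# One forward-backward (proximal gradient) step towards the MAP estimate;
# the objective cannot increase when the step size gamma <= 1 / L_f.
gamma = sigma**2 / np.linalg.norm(A, 2)**2
x = np.zeros(d)
x_next = prox_g(x - gamma * grad_f(x), gamma)
assert U(x_next) <= U(x)
```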
When $x$ is high-dimensional, drawing inferences from $\pi$ directly is generally not possible. Instead we use summaries, particularly point estimators, that capture some of the information about $\pi$ that is relevant for the application considered \citep{cprbayes}. In particular, modern statistical imaging methodology relies strongly on the maximum-a-posteriori (MAP) estimator defined by:
\begin{eqnarray}\label{map}
\begin{split}
\hat{x}_{\operatorname{MAP}} = \operatorname*{arg\,max}_{x \in \mathbb{R}^d} \pi(x) = \operatorname*{arg\,min}_{x \in \mathbb{R}^d} U(x) \;,
\end{split}
\end{eqnarray}
which can often be computed efficiently, even in very large problems, by using proximal convex optimisation algorithms \citep{pesquet:2011, Parikh2013}. From the practitioner's viewpoint, this is a main advantage w.r.t.~most other summaries that require high-dimensional integration w.r.t.~$\pi$, which is generally significantly more computationally expensive \citep{pereyra:2016}.
However, in its raw form, mathematical imaging based on optimisation struggles to support the complex statistical analyses that are inherent to modern scientific reasoning. For example, such methods are typically unable to assess the uncertainty in the solutions delivered, or to support uncertainty quantification and decision-making procedures (e.g., hypothesis tests). Similarly, they have difficulty checking and comparing alternative mathematical models intrinsically (i.e., without ground truth available). To perform such advanced (often Bayesian) analyses and deliver the full richness of the statistical paradigm, it is necessary to use Monte Carlo stochastic simulation algorithms \citep{Green2015}.
As mentioned previously, the high-dimensionality and the lack of smoothness of $\pi$ pose important challenges from a Bayesian computation viewpoint. This paper presents a new MCMC methodology to tackle this problem. The proposed methodology is general, robust, theoretically sound, and computationally efficient, and can be applied straightforwardly to any model satisfying \eqref{posterior} that can be addressed by using proximal convex optimisation (particularly by using the gradient of $f$ and the proximal operator of $g$, similarly to forward-backward splitting algorithms).
\subsection{Bayesian computation: unadjusted and Metropolis-adjusted Langevin algorithms}
The MCMC method proposed in this paper is derived from the
discretization of overdamped Langevin diffusions.
Let $\bar{U} : \mathbb{R}^d \to \mathbb{R}$ be a continuously differentiable function and consider the
Langevin stochastic differential equations (SDE) given by
\begin{equation}
\label{eq:langevin-1}
\mathrm{d} \mathbf{X}_t = -\nabla \bar{U}(\mathbf{X}_t) \mathrm{d} t + \sqrt{2} \mathrm{d} B^d_t \;,
\end{equation}
where $(B_t^d)_{t \geq 0}$ is a $d$-dimensional Brownian
motion. Under additional mild assumptions, this equation has a unique
strong solution. In addition, if $\int_{\mathbb{R}^d} \mathrm{e}^{-\bar{U}(x)} \mathrm{d} x < \infty$, then $\bar{\pi}(x) \propto \mathrm{e}^{-\bar{U}(x)}$ is the unique invariant
distribution of the semi-group associated with the
Langevin SDE, see \cite{khasminskii:1960}. Consequently, if we could solve \eqref{eq:langevin-1} and let $t \rightarrow \infty$, this would provide samples from $\bar{\pi}$ useful for Bayesian computation. Since it is
possible to analytically solve \eqref{eq:langevin-1} only in very
specific cases, we consider a discrete-time Euler-Maruyama
approximation and obtain the following Markov chain $(X_k)_{k\geq
0}$: for all $k \geq 0$
\begin{equation}
\label{eq:definition-Euler}
\textrm{ULA}:\, X_{k+1} = X_k - \gamma \nabla \bar{U}(X_k) + \sqrt{2 \gamma} Z_{k+1} \;,
\end{equation}
where $\gamma >0$ is a given
step size and $(Z_{k})_{k\geq 1}$ is a sequence of
i.i.d.~$d$-dimensional standard Gaussian random variables. This scheme
has been first introduced in molecular dynamics by \cite{ermak:1975}
and \cite{parisi:1981}, and then popularized in the machine learning community by
\cite{grenander:1983}, \cite{grenander:miller:1994} and in computational statistics by
\cite{neal:1992} and \cite{roberts:tweedie-Langevin:1996}. Following \cite{roberts:tweedie-Langevin:1996}, this algorithm is referred to as the Unadjusted Langevin Algorithm (ULA).
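To illustrate the recursion, the following minimal sketch (with illustrative parameters, not taken from the text) runs ULA on a standard Gaussian target, $\bar U(x)=x^2/2$. For this target the update is an autoregression whose stationary variance is $1/(1-\gamma/2)$ rather than $1$, which makes the step-size-dependent bias of ULA directly visible.

```python
import numpy as np

# Minimal ULA sketch targeting the standard Gaussian, U_bar(x) = x^2 / 2,
# for which the update reads X_{k+1} = (1 - gamma) X_k + sqrt(2 gamma) Z_{k+1}.
# The chain converges to N(0, 1/(1 - gamma/2)), not N(0, 1): the Euler
# discretization introduces a bias that vanishes as gamma -> 0.
rng = np.random.default_rng(1)
gamma, n_iter = 0.1, 200_000

x = 0.0
samples = np.empty(n_iter)
for k in range(n_iter):
    x = x - gamma * x + np.sqrt(2 * gamma) * rng.standard_normal()
    samples[k] = x

print(samples[1000:].var())   # close to 1/(1 - gamma/2), here about 1.05
```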
In Bayesian computation, the samples $(X_k)_{k\geq 0}$ generated by
ULA \eqref{eq:definition-Euler} are used to estimate probabilities and expectations
w.r.t.\ $\bar{\pi}$. This scheme has attracted significant attention lately, particularly for high-dimensional problems where most other Monte Carlo methods struggle.
The theory of ULA has advanced significantly in recent years with the development of non-asymptotic bounds in total variation
distance between $\bar{\pi}$ and the marginal laws of the Markov chain
$(X_k)_{k \geq 0}$ defined by ULA \cite{dalalyan:2014,durmus:moulines:2015}, with explicit dependence on the stepsize
$\gamma$ and the dimension $d$ (see
\Cref{ssec:convergence_analysis}). These new theoretical results are
important because they provide estimation accuracy guarantees for ULA, as well as
valuable new insights into the convergence properties of the
algorithm. In particular, they establish that if $\bar{U}$ is convex and
gradient Lipschitz, then ULA's convergence properties deteriorate at
most polynomially as $d$ increases.
Remarkably, if in addition $\bar{U}$ is strongly
convex, then they deteriorate at most linearly with $d$, confirming
the empirical evidence that ULA is a highly computationally efficient method to sample in
high-dimensional settings.
It is worth emphasising at this point that this deep understanding of ULA is very recent. Indeed, without a proper theoretical underpinning, ULA has traditionally been regarded as unreliable and rarely applied directly in statistics or statistical image processing. Instead, most applications reported in the literature adopt a safe approach and complement ULA with a Metropolis-Hastings correction step targeting $\bar{\pi}$, as recommended by \cite{rossky:doll:friedman:1978} and \cite{roberts:tweedie-Langevin:1996}. This correction guarantees that the resulting Metropolis Adjusted Langevin Algorithm (MALA) generates a Markov chain that is reversible with respect to $\bar{\pi}$, and therefore eliminates the asymptotic bias. Perhaps more importantly, it places ULA within the sound theoretical framework of Metropolis-Hastings algorithms. For sufficiently smooth densities, MALA inherits the good convergence properties of ULA and scales efficiently to high-dimensional settings \citep{roberts:tweedie-Langevin:1996}.
Unfortunately, neither ULA nor MALA are well defined for non-smooth target
densities, which strongly limits their
application to modern mathematical imaging problems. In fact, both theory and experimental
evidence show that ULA and MALA often run into difficulties if $\pi$ is not sufficiently regular. For example, when $\nabla\log\pi$ is not Lipschitz continuous, ULA is generally explosive and MALA is not geometrically ergodic (see \cite[Figure
2]{roberts:tweedie-Langevin:1996} and \cite{pereyra:2015}). Similarly, when $\log\pi$ is subdifferentiable but not differentiable, so that, at least from a purely algorithmic viewpoint, the algorithms could still be applied, the theory underpinning ULA and MALA collapses, and even the convergence of the time-continuous Langevin diffusion driving the algorithms becomes unclear.
Moreover, many applications involve constraints on the parameter space, in which case $\pi$ is supported only on a bounded convex set $\mathcal{K}$. In
such cases, $\nabla\log\pi$ is bounded on $\mathcal{K}$ and infinite or undefined
outside $\mathcal{K}$. It is then not possible to use ULA, and MALA
typically behaves very poorly (the algorithm gets ``stuck'' whenever
the proposal drives the Markov chain outside $\mathcal{K}$). Following a proximal MCMC approach \cite{pereyra:2015}, in the following section we present a new ULA that exploits tools from convex calculus and proximal optimisation to address these issues, and to sample efficiently from high-dimensional log-concave densities of the form H\ref{assum:form-potential} that are beyond the scope of conventional ULAs and MALAs.
\section{Proximal MCMC: Moreau-Yosida regularised Unadjusted Langevin Algorithm}
\label{sec:more-yosida-regul}
\subsection{Proposed method}
\label{ssec:MYULAl}
A central idea in this work is to replace the non-smooth potential $U$ with a carefully designed smooth approximation $U^{\lambda}$ which, by construction, has the following two key properties: 1) its Euler-Maruyama discrete-time approximations are always stable and have favourable convergence properties, and 2) we can make $\pi^\lambda\propto\mathrm{e}^{-U^{\lambda}}$ arbitrarily close to $\pi$ by adjusting an approximation parameter $\lambda >0$.
In a manner akin to \cite{pereyra:2015}, we define such approximations by using Moreau-Yosida envelopes
\cite{Combettes2011} which we recall below. Let $\mathrm{g} :
$\mathbb{R}^d \to \ocint{-\infty,+\infty}$ be a l.s.c.~convex function and
$\lambda >0$. The $\lambda$-Moreau-Yosida envelope of $\mathrm{g}$ is a carefully regularised approximation of $g$ given by
\begin{equation}
\label{eq:id-MY-env}
\mathrm{g}^{\lambda}(x) = \min_{y \in \mathbb{R}^d} \defEns{\mathrm{g}(y) + (2 \lambda)^{-1}\norm[2]{x-y}} \;,
\end{equation}
where $\lambda$ is a regularisation parameter that controls a trade-off between the regularity properties of $\mathrm{g}^{\lambda}$ and the approximation error involved. Remarkably, by \cite[Example 10.32, Theorem 9.18]{rockafellar:wets:1998}, the approximation $\gconv^{\lambda}$ inherits the convexity of $\mathrm{g}$ and is always continuously differentiable, even if $\mathrm{g}$ is not. In fact, $\gconv^{\lambda}$ is gradient Lipschitz \cite[Proposition 12.19]{rockafellar:wets:1998}: for all $x,y \in \mathbb{R}^d$,
\begin{equation}
\label{eq:lip_moreau_yosida}
\norm{\nabla \gconv^{\lambda}(x) - \nabla \gconv^{\lambda}(y)} \leq \lambda^{-1} \norm{x-y} \;.
\end{equation}
The gradient is given, for all $x \in \mathbb{R}^d$, by
\begin{equation}
\label{eq:definition-grad-prox}
\nabla \gconv^{\lambda}(x) = \lambda^{-1}\parenthese{x-\prox_{\gconv}^{\lambdaMY}(x)} \;,
\end{equation}
where
\begin{equation}
\label{eq:definition-prox}
\prox_{\gconv}^{\lambdaMY}(x) = \operatorname*{arg\,min}_{y \in \mathbb{R}^d} \defEns{\mathrm{g}(y) + (2 \lambda)^{-1}\norm[2]{x-y}} \;,
\end{equation}
is the proximal operator of $\mathrm{g}$ \cite{Combettes2011}. This operator is used extensively in imaging methods based on convex optimisation, where it is generally computed efficiently by using a specialised algorithm \cite{pesquet:2011,Parikh2013}. Indeed, similarly to gradient mappings, $\prox_{\gconv}^{\lambdaMY}$ also moves points in the direction of the minimum of $g$ (by an amount related to the value of $\lambda$), and has many properties that are useful for devising fixed-point methods \cite{Combettes2011}.
In addition, $\gconv^{\lambda}$ envelopes $\mathrm{g}$ from below: for all $x \in \mathbb{R}^d$, $\mathrm{g}^{\lambda}(x) \leq \mathrm{g}(x)$, and since for $0 < \lambda < \lambda'$ and $x,y \in \mathbb{R}^d$,
$\mathrm{g}(y) + (2 \lambda')^{-1}\norm[2]{x-y} \leq \mathrm{g}(y) + (2 \lambda)^{-1}\norm[2]{x-y}$, we get that for all $x \in \mathbb{R}^d$ $\mathrm{g}^{\lambda'}(x) \leq \mathrm{g}^{\lambda}(x)$. By \cite[Theorem 1.25]{rockafellar:wets:1998}, $\gconv^{\lambda}$ converges pointwise to $\mathrm{g}$ as $\lambda$ goes to $0$, i.e.\ for all $x \in \mathbb{R}^d$,
\begin{equation}
\label{eq:limit-d-lambda}
\lim_{\lambda \to 0}\gconv^{\lambda}(x) = \mathrm{g}(x) \;.
\end{equation}
Hence, $\gconv^{\lambda}$ provides a convex and smooth approximation to $g$ that we can make arbitrarily close to $\mathrm{g}$ by adjusting the value of $\lambda$.
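As a concrete instance (an illustration we add here, with function names of our own), take $g(x)=|x|$: its proximal operator is the well-known soft-thresholding map, and the resulting envelope $g^{\lambda}$ is the Huber function.

```python
import numpy as np

def prox_abs(x, lam):
    # Proximal operator of g(x) = |x|: soft thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def my_envelope_abs(x, lam):
    # Moreau-Yosida envelope g^lambda, evaluated at its minimiser prox_abs(x, lam);
    # for g = |.| this is the Huber function.
    p = prox_abs(x, lam)
    return np.abs(p) + (x - p) ** 2 / (2.0 * lam)

def grad_my_envelope_abs(x, lam):
    # Gradient formula (x - prox(x)) / lam; here it equals clip(x / lam, -1, 1).
    return (x - prox_abs(x, lam)) / lam
```

One can check numerically that the envelope lies below $|x|$, that it converges pointwise to $|x|$ as $\lambda \to 0$, and that the gradient formula above is $1/\lambda$-Lipschitz, as stated in the text.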
Thus, under \Cref{assum:form-potential}, if $g$ is
not continuously differentiable but the proximity operator associated with $g$ is
available, we can consider sampling algorithms that use the $\lambda$-Moreau-Yosida envelope
$\gU^{\lambda}$ instead of $g$. Here we propose to replace the potential $U$ with the approximation $U^{\lambdaMY}:\mathbb{R}^d \to \mathbb{R}$ defined for all $x \in \mathbb{R}^d$ by
\begin{equation*}
U^{\lambdaMY}(x) = \gU^{\lambda}(x) + f(x) \;,
\end{equation*}
which we will use to define a surrogate target density $\pi^\lambda \propto \mathrm{e}^{-U^\lambda}$. We will see that such approximation is endowed with very useful regularity and approximation accuracy properties.
\Cref{propo:finite-measure-MY} below implies that the probability measure $\pi^{\lambdaMY}$ on $\mathbb{R}^d$, with density with respect to the Lebesgue measure, also denoted by $\pi^{\lambdaMY}$ and given for all $x \in \mathbb{R}^d$ by
$$
\pi^\lambda (x)= \frac{\mathrm{e}^{-U^\lambda(x)}}{\int_{\mathbb{R}^d} \mathrm{e}^{-U^\lambda (s)} \mathrm{d} s} \;,
$$
is well defined, log-concave, Lipschitz continuously differentiable, and as close to $\pi$ as required.
\begin{assumption}
\label{assum:integrabilite}
Assume that one of these two conditions holds:
\indent\begin{enumerate}[label=(\roman*)]
\item\label{assum:integrable_g} $\mathrm{e}^{-g}$ is integrable with respect to the Lebesgue measure.
\item\label{assum:lipschitz_g} $g$ is Lipschitz.
\end{enumerate}
\end{assumption}
\begin{proposition}
\label{propo:finite-measure-MY}
Assume \Cref{assum:form-potential} and \Cref{assum:integrabilite}.
\begin{enumerate}[label=\alph*)]
\item For all $\lambda >0$, $\pi^\lambda$ defines a proper density of a probability measure on $\mathbb{R}^d$, i.e.
\begin{equation*}
0 < \int_{\mathbb{R}^d} \mathrm{e}^{-U^{\lambdaMY}(y)}\mathrm{d} y < +\infty \;.
\end{equation*}
\item For all $\lambda >0$, $\pi^\lambda$ is log-concave and continuously differentiable with
\begin{equation}
\label{eq:definition-grad-prox_U}
\nabla U^\lambda (x)=-\nabla \log \pi^\lambda(x) = \nabla f(x) +\lambda^{-1}(x-\operatorname{prox}^\lambda_{ g}(x)) \;.
\end{equation}
In addition, $\nabla U^{\lambdaMY}$ is Lipschitz with constant $L \leq L_f + \lambda^{-1}$.
\item
\label{item:propo:dist_TV_MY_1}
The approximation $\pi^\lambda$ converges to $\pi$ as $\lambda \downarrow 0$ in total variation norm, i.e.
\begin{equation*}
\lim_{\lambda \to 0} \tvnorm{\pi^{\lambdaMY}-\pi} = 0 \;.
\end{equation*}
\item
\label{item:propo:dist_TV_MY_2}
If \Cref{assum:integrabilite}-\ref{assum:lipschitz_g} then for all $\lambda >0$,
\begin{equation*}
\tvnorm{\pi^{\lambdaMY}-\pi} \leq \lambda \norm[2][\operatorname{Lip}]{g} \;.
\end{equation*}
\end{enumerate}
\end{proposition}
\begin{proof}
The proof is postponed to \Cref{sec:proof-crefpr-meas}.
\end{proof}
Figure \ref{FigMoreauApprox} shows the approximations of two non-smooth densities that satisfy \Cref{assum:form-potential}:
\begin{enumerate}
\item\label{item:caseLaplace} the Laplace density $\pi(x) = (1/2)\exp\parenthese{-\abs{x}}$, for which
\[
\pi^{\lambdaMY}(x) = \frac{\exp\defEns{(\lambda/2-\abs{x})\indiD{\abs{x}
\geq \lambda} - (x^2/(2\lambda))\indiD{\abs{x} < \lambda} }}{2 \defEns{\mathrm{e}^{-\lambda/2}+(2 \uppi
\lambda)^{1/2}(\mathbf{\Phi}(\lambda^{1/2})-1/2)}}
\;,
\]
where $\mathbf{\Phi}$ is the cumulative distribution function of the standard normal distribution.
\item\label{item:caseUnif} the uniform density $\pi(x) = (1/2)\exp\parentheseLigne{-\iota_{[-1,1]}(x)}$, for which
\begin{equation*}
\pi^{\lambdaMY}(x) = \defEns{2+\sqrt{2\uppi\lambda}}^{-1}\exp\parentheseDeux{-\defEns{\max(\abs{x}-1,0)}^2/(2 \lambda)} \;.
\end{equation*}
\end{enumerate}
We observe that the approximations are smooth and converge to $\pi$ as
$\lambda$ decreases, as described by \Cref{propo:finite-measure-MY}.
Also for these two examples, analytic expressions for
$\tvnorm{\pi-\pi^{\lambdaMY}}$ can be found, and \Cref{FigMoreauApprox-TV}
shows $\tvnorm{\pi-\pi^{\lambdaMY}}$ as a function of $\lambda >0$. Notice that in the case of the Laplace density $\tvnorm{\pi-\pi^{\lambdaMY}}$ goes to $0$ quadratically in $\lambda$ as
$\lambda$ goes to $0$, which is faster than the linear bound given in \Cref{propo:finite-measure-MY}-\ref{item:propo:dist_TV_MY_2}. Also note that this bound does not apply to the uniform density, and in this case $\tvnorm{\pi-\pi^{\lambdaMY}}$ vanishes at rate $\sqrt{\lambda}$.
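These total-variation curves are easy to reproduce numerically. The sketch below (an illustration we add; the truncation width and grid size are arbitrary choices) normalises $\pi^{\lambdaMY} \propto \mathrm{e}^{-g^{\lambda}}$ by quadrature for the Laplace case and evaluates $\tvnorm{\pi - \pi^{\lambdaMY}} = \tfrac{1}{2}\int \abs{\pi - \pi^{\lambdaMY}}$:

```python
import numpy as np

def huber(x, lam):
    # Moreau-Yosida envelope of g(x) = |x|.
    return np.where(np.abs(x) >= lam, np.abs(x) - lam / 2.0, x ** 2 / (2.0 * lam))

def tv_laplace(lam, half_width=30.0, n=200_001):
    # Total-variation distance between the Laplace density and pi^lambda,
    # computed by simple quadrature on a uniform grid.
    grid = np.linspace(-half_width, half_width, n)
    dx = grid[1] - grid[0]
    pi = 0.5 * np.exp(-np.abs(grid))       # Laplace density
    un = np.exp(-huber(grid, lam))         # unnormalised pi^lambda
    pil = un / (un.sum() * dx)             # normalise numerically
    return 0.5 * np.sum(np.abs(pi - pil)) * dx
```

As expected, the computed distance decreases with $\lambda$ and stays below the bound $\lambda \norm[2][\operatorname{Lip}]{g}$ of \Cref{propo:finite-measure-MY} (here with Lipschitz constant $1$).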
\begin{figure}[htbp!]
\begin{minipage}[t]{0.5\linewidth}
\centering
\centerline{\includegraphics[width=7cm]{FiguresSIAM/LaplaceExample2.png}}
\small{(a) $\pi(x) = \tfrac{1}{2}\mathrm{e}^{-|x|}$}
\end{minipage}
\begin{minipage}[t]{0.5\linewidth}
\centering
\centerline{\includegraphics[width=7cm]{FiguresSIAM/UniformExample2.png}}
\small{(b) $\pi(x) = \tfrac{1}{2}\mathrm{e}^{-\iota_{[-1,1]}(x)}$}
\end{minipage}
\caption{\small{Density plots for the Laplace (a) and uniform (b) distributions (solid
black), and their smooth approximations $\pi^\lambda$ for $\lambda = 1, 0.1, 0.01$ (dashed blue and green, and solid red).}} \label{FigMoreauApprox}
\end{figure}
\begin{figure}[htbp!]
\begin{minipage}[t]{0.5\linewidth}
\centering
\centerline{\includegraphics[width=7cm]{Figures/bound_tv_l1.pdf}}
\small{$\pi(x) = \tfrac{1}{2}\mathrm{e}^{-|x|}$}
\end{minipage}
\begin{minipage}[t]{0.5\linewidth}
\centering
\centerline{\includegraphics[width=7cm]{Figures/bound_tv_unif.pdf}}
\small{$\pi(x) = \tfrac{1}{2}\mathrm{e}^{-\iota_{[-1,1]}(x)}$}
\end{minipage}
\caption{\small{Total variation norm between $\pi$ and its smooth approximation $\pi^{\lambda}$ as function of $\lambda$.}} \label{FigMoreauApprox-TV}
\end{figure}
We now make two key observations. First, \Cref{propo:finite-measure-MY} shows that $U^{\lambda}$ is
gradient Lipschitz and therefore guarantees that the Langevin SDE
constructed with $U^{\lambda}$ converges to $\pi^{\lambda}$ as $t \rightarrow \infty$ (formally, it guarantees that the Langevin SDE
associated with $\pi^{\lambda}$ admits a unique strong solution
$(\mathbf{X}^\lambda_t)_{t \geq 0}$ and $\pi^{\lambda}$ is the unique
stationary distribution of the semigroup). More importantly, as will be seen below, it
implies that the ULA chain derived from a Euler-Maruyama discretisation of
this Langevin diffusion will be, by construction, well
behaved and useful for Monte Carlo integration with respect to
$\pi^\lambda$.
Second, \Cref{propo:finite-measure-MY} also establishes that $\lambda$ controls
the estimation bias involved in performing estimations with
$\pi^\lambda$ as a substitute for $\pi$. This approximation error can
be made arbitrarily small, and is bounded explicitly by $\lambda
\norm[2][\operatorname{Lip}]{g}$ when $g$ is Lipschitz.
We are now in a position to present the new MCMC methodology proposed in this work, which is essentially an application of ULA to $\pi^{\lambda}$. Precisely, given $\lambda>0$ and a stepsize $\gamma >0$, we use an Euler-Maruyama approximation of $(\mathbf{X}^\lambda_t)_{t \geq 0}$, and obtain the following Markov chain $(\XE^{\rmM}_k)_{k\geq 0}$: for all $k \geq 0$
\begin{equation}
\label{eq:def-MYRULA}
\textrm{MYULA}: \, \XE^{\rmM}_{k+1} = (1- \tfrac{\gamma}{\lambda})\XE^{\rmM}_{k} - \gamma \nabla f(\XE^{\rmM}_k) + \tfrac{\gamma}{\lambda}\prox_{\gU}^{\lambdaMY}(\XE^{\rmM}_k) +\sqrt{2 \gamma} Z_{k+1} \;,
\end{equation}
where $\sequence{Z}[k][\mathbb{N}^*]$ is a sequence of i.i.d.\ $d$-dimensional standard Gaussian random variables. This algorithm will be referred to as the \emph{Moreau-Yosida Unadjusted Langevin Algorithm} (MYULA), and is summarised in \Cref{Algo:MYULA} below (see \Cref{guidelines} for guidelines on setting the values of $\gamma$ and $\lambda$). Note that the stationary distribution of the MYULA sequence $\sequence{\XE^{\rmM}}[k][\mathbb{N}]$ is different from the target distribution $\pi^{\lambdaMY}$ and depends on the stepsize $\gamma >0$. Nevertheless, we show in \Cref{ssec:convergence_analysis} that, by choosing $\lambda$ and $\gamma$ appropriately, the marginal laws of the samples can be made arbitrarily close to $\pi$.
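To make the recursion \eqref{eq:def-MYRULA} concrete, the sketch below (our illustrative code; the target and the values of $\lambda$ and $\gamma$ are arbitrary choices consistent with \Cref{guidelines}) runs MYULA on the one-dimensional Laplace target of \Cref{FigMoreauApprox}, i.e.\ $f = 0$ and $g(x)=|x|$, whose proximal operator is soft thresholding:

```python
import numpy as np

def prox_abs(x, lam):
    # prox of g(x) = |x|: soft thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def myula(prox_g, grad_f, x0, lam, gamma, n_iter, rng):
    """MYULA recursion:
    X_{k+1} = (1 - gamma/lam) X_k - gamma grad_f(X_k)
              + (gamma/lam) prox_g(X_k) + sqrt(2 gamma) Z_{k+1}."""
    x = float(x0)
    out = np.empty(n_iter)
    sd = np.sqrt(2.0 * gamma)
    for k in range(n_iter):
        x = (1.0 - gamma / lam) * x - gamma * grad_f(x) \
            + (gamma / lam) * prox_g(x, lam) + sd * rng.standard_normal()
        out[k] = x
    return out

# Laplace target: f = 0 (so L_f = 0) and g(x) = |x|; gamma <= lam keeps the scheme stable.
rng = np.random.default_rng(1)
chain = myula(prox_abs, lambda x: 0.0, 0.0, lam=0.1, gamma=0.05, n_iter=200_000, rng=rng)
```

After a burn-in period, the empirical mean and variance of the chain are close to the Laplace values $0$ and $2$, up to the smoothing and discretisation biases quantified in \Cref{ssec:convergence_analysis}.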
In addition, to compute the expectation of a function $h : \mathbb{R}^d \to
\mathbb{R}$ under $\pi$ from $\{\XE^{\rmM}_k\ ; \ 0 \leq k \leq n\}$, an
optional importance sampling step may be used to correct for the regularisation.
This step amounts to approximating $\int_{\mathbb{R}^d} h(x) \pi(x) \mathrm{d} x$ by the weighted sum
\begin{equation}
\label{eq:importance_sampling}
\mathrm{S}_n(h) = \sum_{k=0}^n \weight{k}h(\XE^{\rmM}_k) \;, \text{ with } \weight{k} = \defEns{\sum_{\ell=0}^n \mathrm{e}^{\bar{g}^\lambda(\XE^{\rmM}_\ell)}}^{-1} \mathrm{e}^{\bar{g}^\lambda(\XE^{\rmM}_k)} \;,
\end{equation}
where for all $x \in \mathbb{R}^d$
\begin{equation*}
\bar{g}^\lambda(x)=\gU^{\lambda}(x)-g(x)=g(\prox_{\gU}^{\lambdaMY}(x)) - g(x) +(2\lambda)^{-1}\norm[2]{x- \prox_{\gU}^{\lambdaMY}(x)} \;.
\end{equation*}
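For the Laplace example, $\bar{g}^\lambda$ has a simple closed form: $\bar{g}^\lambda(x) = -\lambda/2$ for $\abs{x}\geq\lambda$ and $x^2/(2\lambda)-\abs{x}$ otherwise, so $\bar{g}^\lambda \in [-\lambda/2, 0]$ and the weights deviate from uniform by at most a factor $\mathrm{e}^{\lambda/2}$. A minimal sketch (our illustration, reusing the soft-thresholding prox):

```python
import numpy as np

def gbar_abs(x, lam):
    # gbar^lambda(x) = g^lambda(x) - g(x) for g(x) = |x|; takes values in [-lam/2, 0].
    p = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # prox of |.|
    return np.abs(p) + (x - p) ** 2 / (2.0 * lam) - np.abs(x)

def is_weights(samples, lam):
    # Self-normalised importance weights w_k proportional to exp(gbar^lambda(X_k)).
    logw = gbar_abs(samples, lam)
    w = np.exp(logw - logw.max())    # subtract the max for numerical stability
    return w / w.sum()

xs = np.array([-2.0, -0.5, -0.05, 0.0, 0.05, 0.5, 2.0])
w = is_weights(xs, lam=0.1)
```

The boundedness of $\bar{g}^\lambda$ implies that, for small $\lambda$, this correction is numerically benign: the weights remain nearly uniform.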
To remove this asymptotic bias, we can add a Metropolis-Hastings step, which
produces a Markov chain $\sequence{\tilde{\XE}^{\lambdaMY}}[k][\mathbb{N}]$ that is reversible
with respect to $\pi^{\lambdaMY}$, and similarly use an importance sampling step
to correct for the bias introduced by smoothing. This algorithm will be called the \emph{Moreau-Yosida Regularised Metropolis-Adjusted Langevin Algorithm} (MYMALA).
The focus of this work is on MYULA without importance sampling or Metropolis-Hastings correction. A study of MYMALA is currently in progress and will be reported separately.
\begin{algorithm}
\caption{Moreau-Yosida unadjusted Langevin algorithm (MYULA)}
\label{Algo:MYULA}
\begin{algorithmic}
\STATE \textbf{set} $\XE^{\rmM}_{0} \in \mathbb{R}^d$, $\lambda > 0$, $\gamma \in (0, \lambda/(\lambda L_f + 1)]$, $n \in \mathbb{N}$
\FOR {$k = 0:n$}
\STATE $Z_{k+1} \sim \mathcal{N}(0,\mathbb{I}_d)$
\STATE $\XE^{\rmM}_{k+1} = (1- \tfrac{\gamma}{\lambda})\XE^{\rmM}_{k} - \gamma \nabla f(\XE^{\rmM}_k) + \tfrac{\gamma}{\lambda}\prox_{\gU}^{\lambdaMY}(\XE^{\rmM}_k) +\sqrt{2 \gamma} Z_{k+1}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Theoretical convergence analysis of MYULA}\label{ssec:convergence_analysis}
In this section we present a detailed theoretical analysis of MYULA implemented with fixed regularization parameter $\lambda>0$ and step-size $\gamma >0$. We first establish that the chains generated by MYULA converge geometrically fast to an approximation of $\pi$ that is controlled by $\lambda$ and $\gamma$, and which can be made arbitrarily close to $\pi$. More importantly, we also establish non-asymptotic bounds for the estimation error of MYULA with a finite number of iterations. This enables an analysis of the behaviour of MYULA as the dimensionality of the model increases, as well as deriving practical guidelines for setting $\lambda$ and $\gamma$ for specific models.
First, under \Cref{assum:form-potential}, it has been observed that
$\gU^{\lambda}$ is $\lambda^{-1}$-gradient Lipschitz, which implies that
$U^{\lambda}$ is gradient Lipschitz as well: there exists $L \geq 0$ such that for
all $x,y \in \mathbb{R}^d$, $\norm{\nabla U^{\lambda} (x) - \nabla U^{\lambda}(y)} \leq
L\norm{x-y}$ and
\begin{equation}
\label{eq:definition_const_lip_u_lambda}
L \leq L_f + \lambda^{-1} \;.
\end{equation}
Of course, this bound strongly depends on the decomposition of $U$ into a smooth and a non-smooth part, which is arbitrary and therefore
can be pessimistic (for instance, if $U$ is continuously differentiable, $g$ can be chosen to be $0$ which implies $U^{\lambda} = U$
and $L = L_f$).
We first make the following assumption on the potential $U^{\lambda}$.
\begin{assumption}
\label{assum:potentialUl}
There exist a minimizer $x^{\star}$ of $U^{\lambda}$, $\eta_{\convSym} >0$ and $\mathrm{R}_{\convSym} \geq 0$ such that for all $x \in \mathbb{R}^d$ with $\norm{x-x^{\star}} \geq \mathrm{R}_{\convSym}$,
\begin{equation}
\label{eq:superexpo_potential}
U^{\lambda}(x) - U^{\lambda}(x^{\star}) \geq \eta_{\convSym} \norm{x-x^{\star}} \;.
\end{equation}
\end{assumption}
Note that in fact
\Cref{assum:potentialUl} always holds under \Cref{assum:form-potential} and \Cref{assum:integrabilite},
since by \Cref{lem:control-fun-convex-gene} and
\Cref{propo:finite-measure-MY} there exist $C_1,C_2 >0$ such that
$U^{\lambda}(x) \geq C_1\norm{x} -C_2$. Therefore, since $U^{\lambda}$ is continuous on $\mathbb{R}^d$, there exists a minimizer $x^{\star}$ of $U^{\lambda}$ and
\eqref{eq:superexpo_potential} holds with $\eta_{\convSym} \leftarrow C_1/2$ and
$\mathrm{R}_{\convSym} \leftarrow 2(C_2 + \norm{x^{\star}}+ U^{\lambda}(x^{\star}))/C_1$. However, these
constants are not quantitative, which is why we introduce \Cref{assum:potentialUl} to derive quantitative bounds.
Consider the Markov kernel $R_{\gamma}$ associated to the Euler-Maruyama discretization \eqref{eq:def-MYRULA} given, for all $\mathsf{A} \in \mathcal{B}(\mathbb{R}^d)$ and $x \in \mathbb{R}^d$ by
\begin{equation}
\label{eq:definition_R_kernel}
R_{\gamma}(x ,\mathsf{A}) = (4 \uppi \gamma)^{-d/2}\int_{\mathsf{A}}\exp \parenthese{- (4 \gamma)^{-1}\norm[2]{y-x+\gamma \nabla U^{\lambda}(x)}} \mathrm{d} y \;.
\end{equation}
The sequence $(\XE^{\rmM}_n)_{n \geq 0}$ defined by \eqref{eq:def-MYRULA} is a homogeneous Markov chain
associated with the Markov kernel $R_{\gamma}$.
It is easily seen that, under \Cref{assum:form-potential}, since $U^{\lambda}$ is continuously differentiable, $R_{\gamma}$
is irreducible with
respect to the Lebesgue measure, all compact sets are $1$-small and the kernel is strongly aperiodic. In addition, under \Cref{assum:potentialUl}, since $U^{\lambda}$ is also convex,
\cite[Proposition 13]{durmus:moulines:2015} shows that $R_{\gamma}$ satisfies a Foster-Lyapunov drift condition, i.e.~for all $\bar{\gamma} \in \ocint{0,L^{-1}}$, $\gamma \in \ocint{0,\bar{\gamma}}$ and for all $x \in \mathbb{R}^d$,
\begin{equation*}
R_{\gamma} V_{\convSym}(x) \leq \varrho_{\convSym}^{\gamma} V_{\convSym}(x) + b_{\convSym} \gamma \;,
\end{equation*}
where
\begin{subequations}
\begin{align}
V_{\convSym}(x)& = \exp\defEns{(\eta_{\convSym}/4)\parenthese{\norm[2]{x-x^{\star}}+1}^{1/2}} \\
\varrho_{\convSym} &= \mathrm{e}^{-2^{-4} \eta_{\convSym}^2(2^{1/2}-1)} \;, \ \mathrm{a}_{\convSym} = \max(1,2 d / \eta_{\convSym},\mathrm{R}_{\convSym}) \\
b_{\convSym} &= \defEns{ (\eta_{\convSym}/4)(d+(\eta_{\convSym} \bar{\gaStep}/4)) - \log( \varrho_{\convSym} )} \mathrm{e}^{\eta_{\convSym} (\mathrm{a}_{\convSym}^2+1)^{1/2}/4 + (\eta_{\convSym} \bar{\gaStep}/4)(d+(\eta_{\convSym} \bar{\gaStep}/4))}
\;.
\end{align}
\label{eq:convex_drift}
\end{subequations}
By \cite[Theorem 16.0.1]{meyn:tweedie:2009}, $R_{\gamma}$ has a
unique invariant distribution $\pil_{\gaStep}$ and is $V_{\convSym}$-uniformly
geometrically ergodic: there exist $\kappa_{\mathrm{c}} \in \ooint{0,1}$ and $C_{\mathrm{c}}
\geq 0$ such that for all $n \geq 0$ and $x \in \mathbb{R}^d$,
\begin{equation*}
\tvnorm{\delta_x R_{\gamma}^n - \pil_{\gaStep}} \leq C_{\mathrm{c}} V_{\convSym}(x) \kappa_{\mathrm{c}}^n \;.
\end{equation*}
Note that $\pil_{\gaStep}$ is different from $\pi^{\lambda}$; nevertheless, the following
result shows that, for $\gamma$ chosen small enough, MYULA generates
samples very close to the distribution $\pi^{\lambda}$.
We are now ready to present our main theoretical result: a non-asymptotic bound on the total-variation distance between $\pi$ and the marginal laws of the samples generated by MYULA. In the following, denote by $\omega : \mathbb{R}_+ \to \mathbb{R}_+$ the function given for all $r \geq 0$ by
\begin{equation}
\label{eq:Fsmall}
\omega(r) = r^{2}/ \defEns{2\mathbf{\Phi}^{-1}(3/4)}^{2} \;.
\end{equation}
\begin{theorem}[\protect{\cite[Corollary 19]{durmus:moulines:2015}}]
\label{theo:convergence_TV_dec-stepsize-convV}
Assume \Cref{assum:form-potential} and \Cref{assum:potentialUl}. Let $\bar{\gaStep} \in \ocint{0,L^{-1}}$. For all $\varepsilon >0$ and $x \in \mathbb{R}^d$, we have
$$
\tvnorm{\delta_x R_{\gamma}^n-\pi} \leq \varepsilon \;,
$$
provided that $n > T \gamma^{-1}$ with
\begin{align*}
T &= \max\defEns{32\, \eta_{\convSym}^{-2}\log\parenthese{8 \varepsilon^{-1}A_1(x)}, \log(16 \varepsilon^{-1}) \Big/(- \log(\kappa))} \\
\gamma &\leq \frac{-d+\sqrt{d^2 +(2/3) A_2(x) \varepsilon^2 (L^2T)^{-1} }}{2 A_2(x)/3} \wedge \bar{\gaStep} \;,
\end{align*}
where $\alpha_{\convSym} = \max(1,4 d / \eta_{\convSym},\mathrm{R}_{\convSym})$,
\begin{align*}
\bdrift_{\convSym} &=(\eta_{\convSym}/4)\parentheseDeux{\eta_{\convSym} \alpha_{\convSym} /4 +d}
\max \defEns{1,(\alpha_{\convSym}^2 +1)^{-1/2} \exp(\eta_{\convSym}( \alpha_{\convSym}^2+1)^{1/2}/4)} \\
A_1(x) &= (1/2)( V_{\convSym}(x) + b_{\convSym}(-\varrho_{\convSym}^{\gamma} \log(\varrho_{\convSym}))^{-1} + 8 \eta_{\convSym}^{-2} \bdrift_{\convSym})+ 16 \eta_{\convSym}^{-2} \bdrift_{\convSym}\mathrm{e}^{32^{-1} \eta_{\convSym}^{2} \omega\defEns{(8/\eta_{\convSym}) \log( 32 \eta_{\convSym}^{-2} \bdrift_{\convSym})}} \\
A_2(x) & = L^2\parenthese{4 \eta_{\convSym}^{-1} \parentheseDeux{ 1+\log \defEns{V_{\convSym}(x) + b_{\convSym}(-\varrho_{\convSym}^{\gamma} \log(\varrho_{\convSym}))^{-1}}}}^2\\
\log(\kappa)
&= - \log(2) (\eta_{\convSym}^{2}/32) \parentheseDeux{\log \defEns{8 \eta_{\convSym}^{-2} \bdrift_{\convSym} \parenthese{3+ 4 \eta_{\convSym}^{-2}\mathrm{e}^{32^{-1} \eta_{\convSym}^{2} \omega\defEns{(8/\eta_{\convSym}) \log( 32 \eta_{\convSym}^{-2} \bdrift_{\convSym})}}}} +\log(2) }^{-1} \;, \\
\end{align*}
$ \mathrm{a}_{\convSym},\varrho_{\convSym},b_{\convSym},V_{\convSym}$ are defined in \eqref{eq:convex_drift} and $\omega$ in \eqref{eq:Fsmall}.
\end{theorem}
\begin{proof}
The proof follows from combining \cite[Lemma 4, Theorem 14, Theorem 16]{durmus:moulines:2015}.
\end{proof}
This result implies that the number of iterations needed to reach a precision target $\varepsilon$ is, at worst, of order $d^5\log^2(\varepsilon^{-1})\varepsilon^{-2}$ for this class of models. Significantly more precise bounds can be obtained under more stringent assumptions on $U^{\lambda}$. In particular, we consider the case where $U^{\lambda}$ is strongly convex outside some ball; see \cite{eberle:2015}.
\begin{assumption}
\label{assum:strongConvexityOutsideBallDriftV}
There exist $\mathrm{R}_{\StSym} \geq 1$ and $\constSt >0$, such that for all
$x,y \in \mathbb{R}^d$, $\norm{x-y} \geq \mathrm{R}_{\StSym}$,
\[
\ps{\nabla U^{\lambda}(x) -\nabla U^{\lambda}(y)} {x-y} \geq \constSt
\norm[2]{x-y} \;.
\]
\end{assumption}
Of course, this assumption holds when $f$ is strongly convex.
\begin{theorem}[\protect{\cite[Lemma 4, Theorem 21]{durmus:moulines:2015}}]
\label{theo:convergence_TV_dec-stepsize-StV}
Assume \Cref{assum:form-potential} and \Cref{assum:strongConvexityOutsideBallDriftV}.
Let $\bar{\gaStep} \in \ocint{0,L^{-1}}$. Then for all $\varepsilon >0$, we get $\tvnorm{\delta_x R_{\gamma}^n-\pi} \leq \varepsilon$ provided that $n > T \gamma^{-1}$ with
\begin{align*}
T &= \parenthese{ \log\{ A_1(x) \}-\log(\varepsilon/2)} \Big/(- \log(\kappa))\\
\gamma &\leq \frac{-d+\sqrt{d^2 +(2/3) A_2(x) \varepsilon^2 (L^2T)^{-1} }}{2 A_2(x)/3} \wedge \bar{\gaStep} \;,
\end{align*}
where
\begin{align*}
A_1(x) &= 5+ \parenthese{d/\constSt + \mathrm{R}_{\StSym}^2}^{1/2} +(A_2(x)/L^2)^{1/2} \\
A_2(x) & = L^2 \parenthese{\norm[2]{x-x^{\star}} + 2(d + \constSt \mathrm{R}_{\StSym}^2)(\mathrm{e}^{-\gamma(2m+\bar{\gaStep} L^2)}/(2m+\bar{\gaStep} L^2) )^{-1}}\\
\log(\kappa)
&= - (\log(2) m/2) \parentheseDeux{\log \defEns{ \parenthese{1+ \mathrm{e}^{m \omega\defEns{\max(1,\mathrm{R}_{\StSym})}/4}}\parenthese{1+\max(1,\mathrm{R}_{\StSym})}} +\log(2) }^{-1}
\;,
\end{align*}
and $\omega$ is given in \eqref{eq:Fsmall}.
\end{theorem}
This result implies that the minimal number of iterations needed to achieve a
precision level $\varepsilon >0$ is in this case at worst of order $d \log(d) \log^{2}(\varepsilon^{-1})\varepsilon^{-2}$.
\subsection{Selection of $\lambda$ and $\gamma$}\label{guidelines}
We now discuss practical guidelines for setting the values of $\lambda$ and $\gamma$. As mentioned previously, our aim is to provide an efficient computational methodology that can be applied straightforwardly to any model satisfying \Cref{assum:form-potential}. Hence, rather than seeking optimal values for specific models, we focus on general rules that are simple, robust, and which only involve tractable quantities such as Lipschitz constants.
First, by Theorem \ref{theo:convergence_TV_dec-stepsize-convV}, $\gamma$ should take its value in the range $\gamma \in (0, \lambda/(L_f\lambda + 1)]$ to guarantee the stability of the Euler-Maruyama discretisation, where we recall that $L_f$ is the Lipschitz constant of $\nabla f$. The values of $\gamma$ within this range are subject to a bias-variance trade-off. Precisely, large values of $\gamma$ produce a fast-moving chain that converges quickly and has low estimation variance, but potentially a relatively high asymptotic bias. Conversely, small values of $\gamma$ lead to a low asymptotic bias, but produce a Markov chain that moves slowly and requires a large number of iterations to produce a stable estimate (such chains often also suffer from some additional bias from the transient or burn-in period). Because applications in imaging sciences involve high dimensionality and require moderately low computing times, as a general rule we recommend setting $\gamma$ to a relatively large value. For example, in our experiments we use
$$
\gamma \in \left[\lambda/\defEns{5(L_f\lambda + 1)}, \lambda/\defEns{2(L_f\lambda + 1)}\right]\,.
$$
Observe that this range depends on the value of $\lambda$, which is also subject to a bias-variance tradeoff. Letting $\lambda \rightarrow 0$ to bring $\pi^\lambda$ close to $\pi$ reduces asymptotic bias, but forces $\gamma \rightarrow 0$ and consequently reduces significantly the efficiency of the chain. Conversely, increasing the value of $\lambda$ accelerates the chain at the expense of some asymptotic bias. Based on our experience, and again with an emphasis on efficiency in high dimensional settings, we recommend using values of $\lambda$ in the order of $L_f^{-1}$ (there is no benefit in using larger values of $\lambda$ because $\gamma$ saturates at $L_f^{-1}$). In all our experiments we use $\lambda = 1/L_f$ and $\gamma \in [L_f^{-1}/10, L_f^{-1}/4]$ and obtain estimation errors of the order of $1\%$.
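These rules are simple enough to encode directly. The helper below is an illustrative snippet (the function name is ours) returning the recommended $\lambda$ and the $\gamma$ range above for a given $L_f$:

```python
def myula_guidelines(L_f):
    """Recommended regularisation lambda = 1/L_f and step-size range
    [lambda / {5 (L_f lambda + 1)}, lambda / {2 (L_f lambda + 1)}]."""
    lam = 1.0 / L_f
    denom = L_f * lam + 1.0          # equals 2 when lam = 1/L_f
    return lam, (lam / (5.0 * denom), lam / (2.0 * denom))

lam, (g_lo, g_hi) = myula_guidelines(L_f=2.0)
# lam = 0.5; gamma range (0.05, 0.125), i.e. [1/(10 L_f), 1/(4 L_f)]
```

With $\lambda = 1/L_f$ the returned range reduces to $[L_f^{-1}/10,\, L_f^{-1}/4]$, matching the values used in our experiments.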
\subsection{Connections to the proximal Metropolis-adjusted Langevin algorithm}
We conclude this section with a discussion of the connections between the proposed MYULA method and the original proximal Metropolis-adjusted Langevin algorithm (Px-MALA) \cite{Pereyra2015}. That algorithm is also based on a Euler-Maruyama approximation of a Langevin SDE targeting a Moreau-Yosida-type regularised approximation of $\pi$. However, unlike MYULA, that algorithm uses this approximation as a proposal mechanism to drive a Metropolis-Hastings (MH) algorithm targeting $\pi$ (not the regularised approximation). The role of the MH step is two-fold: it removes the asymptotic bias related to the approximations involved, and it provides a theoretical framework for Px-MALA by placing the scheme within the framework of MH algorithms (recall that many theoretical results regarding ULAs are very recent). However, as mentioned previously, the introduction of the MH step often slows down the algorithm, thus leading to higher estimation variance and longer chains (and potentially some bias from the chain's initial transient regime). It also introduces a significant computational overhead related to the computation of the MH acceptance ratio \cite{Pereyra2015}.

Another important difference between MYULA and Px-MALA is that the latter uses the proximal operator of $U$, which is often unavailable and has to be approximated by using a forward-backward scheme based on the decomposition $U = f + g$ that we also use in this paper. This approximation error is corrected in practice by the MH step, but it is not considered in the theoretical analysis of the algorithm. Conversely, in MYULA this decomposition is explicit, both in the computational aspects of the method and in its theoretical analysis. Furthermore, the theory for MYULA presented in this paper is significantly more complete than that currently available for Px-MALA and other MALAs.

Finally, MYULA is also more robust and simpler to implement than Px-MALA. For example, identifying suitable values of $\gamma$ for MYULA is straightforward by using the guidelines described above, whereas setting $\gamma$ for Px-MALA can be challenging and often requires an adaptive MCMC approach based on a stochastic approximation scheme \cite{Pereyra2015,Green2015}.
\section{Model selection using improper priors}
\label{sec:selection_model_case-proper-imp_prior}
Model selection using improper priors can raise delicate issues
\cite{cprbayes}: in that case, the joint density of each model is not
defined. However, this difficulty can be avoided when the considered
models share the same improper prior distribution; see
\cite{marin:robert:2007:bayesian}. Let $\mathcal{M}_1, \ldots,
\mathcal{M}_K$ be $K$ alternative Bayesian models having the same
improper distribution with density $\tilde{p}(x)$ on $\mathbb{R}^d$ and
associated with the family of likelihood functions $p_i(y | x)$ such
that for all $i \in \{1,\ldots,K\}$, $\int_{\mathbb{R}^d} p_i(y|x)
\tilde{p}(x) \mathrm{d} x < +\infty$. The marginal posterior
probabilities of $\mathcal{M}_1,\ldots, \mathcal{M}_K$ are then
defined by
\begin{equation}\label{margPostImproper}
\tilde{p}(\mathcal{M}_j | y) = \frac{ \tilde{p}(y|\mathcal{M}_j) K^{-1}}{\sum_{k = 1}^K \tilde{p}(y | \mathcal{M}_k) K^{-1}},\quad j \in \{1, \ldots, K\}\, ,
\end{equation}
where for all $j \in \{1,\ldots,K\}$,
\begin{equation*}
\tilde{p} (y|\mathcal{M}_j) = \int_{\mathbb{R}^d} p_j(y|x)
\tilde{p}(x) \mathrm{d} x \;.
\end{equation*}
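For illustration, once the marginal likelihoods $\tilde{p}(y|\mathcal{M}_j)$ (or their logarithms) have been estimated, the normalisation above reduces to a log-sum-exp computation; under the uniform model prior the $K^{-1}$ terms cancel. The following minimal Python sketch is our illustration (the function name and input values are not part of the text):

```python
import numpy as np

# Posterior model probabilities from (estimated) log marginal likelihoods.
# Under a uniform model prior the K^{-1} terms cancel, and subtracting the
# maximum before exponentiating keeps the normalisation numerically stable.
def model_posteriors(log_marglik):
    log_marglik = np.asarray(log_marglik, dtype=float)
    w = np.exp(log_marglik - log_marglik.max())   # stable exponentiation
    return w / w.sum()

# illustrative values of log p(y | M_j) for K = 3 models
probs = model_posteriors([-10.0, -12.0, -20.0])
```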
\section{Truncated harmonic mean estimator}\label{HME}
\subsection{Case of proper prior distributions}
\label{sec:case-proper-prior}
Consider a positive probability density $p$ on $\mathbb{R}^d \times
\mathbb{R}^m$ for $d,m \in \mathbb{N}^*$ of the form: $p(x,y) = f(x,y) /
\int_{\mathbb{R}^d \times \mathbb{R}^m} f(z,w) \mathrm{d} z \mathrm{d} w$. Assume that $f$
is known but not the normalization constant of $p$. Here $p$ plays
the role of the joint distribution of the data and the parameters; it
is well defined when the prior distribution of the parameters is
proper.
Define for any bounded Borel set $\mathrm{A} \in \mathcal{B}(\mathbb{R}^d)$
\begin{align}
\label{harmonicmean}
I_{\mathrm{A}}(f,y) &= \int_{\mathbb{R}^d} \mathbbm{1}_{\mathrm{A}}(x) \frac{p(x|y)}{f(x,y)} \mathrm{d} x \\
\nonumber
& = \left. \int_{\mathbb{R}^d} \mathbbm{1}_{\mathrm{A}}(x) \frac{p(x|y)}{ p(x,y)} \mathrm{d} x \middle/ \int_{\mathbb{R}^d \times \mathbb{R}^{m}} f(z,w) \mathrm{d} z \mathrm{d} w \right.\;.
\end{align}
Since $p(x|y) = p(x,y)/p(y)$, the following identity holds
\begin{equation}
\label{eq:relation_harmonic_mean}
p(y) = \left. \operatorname{Vol}(\mathrm{A}) \defEns{I_{\mathrm{A}}(f,y) \int_{\mathbb{R}^d \times \mathbb{R}^{m}} f(z,w) \mathrm{d} z \mathrm{d} w }^{-1} \right. \;.
\end{equation}
For all $y \in \mathbb{R}^m$ and $\mathrm{A}\in \mathcal{B}(\mathbb{R}^d)$, we define the truncated harmonic mean estimator of $I_{\mathrm{A}}(f,y)$ by
\begin{equation}\label{harmonicmean_est_1}
\hat{I}_{\mathrm{A}}(f,y) = n^{-1} \sum_{k = 1}^n \frac{\indi{\mathrm{A}}(X_k)}{f(X_k,y )} \;,
\end{equation}
where $(X_k)_{k \geq 1}$ is an ergodic Markov chain targeting
$p(x|y)$, which ensures that the estimator converges almost surely to
$I_{\mathrm{A}}(f,y)$ given by \eqref{harmonicmean}.
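As a concrete illustration (our addition), the sketch below applies this estimator in a toy one-dimensional setting where the normalising constant of $f$ is known in closed form; an i.i.d. Gaussian sample stands in for the ergodic Markov chain:

```python
import numpy as np

# Toy check of the truncated harmonic mean estimator: f(x) = exp(-x^2/2)
# has normalising constant sqrt(2*pi), p(x|y) = f(x)/Z is standard normal,
# and A = [-1, 1] with Vol(A) = 2.  An i.i.d. sample stands in for the chain.
rng = np.random.default_rng(0)
n = 200_000
X = rng.standard_normal(n)              # X_k ~ p(x|y)

f = lambda x: np.exp(-x**2 / 2)
in_A = np.abs(X) <= 1.0
I_A = np.mean(in_A / f(X))              # n^{-1} sum of 1_A(X_k) / f(X_k)
Z_hat = 2.0 / I_A                       # Vol(A) / I_A, estimates int f(x) dx
```

Truncating to the bounded set $\mathrm{A}$ is what keeps the terms $1/f(X_k)$ bounded, avoiding the infinite-variance pathology of the plain harmonic mean estimator.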
Let $p_1, p_2$ be two positive probability densities on $\mathbb{R}^d \times
\mathbb{R}^m$, associated with their two unnormalized versions $f_1,f_2:
\mathbb{R}^d \times \mathbb{R}^m \to \mathbb{R}_+$. We aim to estimate
$p_1(y)/p_2(y)$. By \eqref{eq:relation_harmonic_mean}, we have
\begin{equation*}
\label{eq:ratio_marginal_likelihood}
\frac{p_1(y)}{p_2(y)} =
\frac{\int_{\mathbb{R}^d \times \mathbb{R}^{m}} f_2(z,w) \mathrm{d} z \mathrm{d} w}{\int_{\mathbb{R}^d \times \mathbb{R}^{m}} f_1(z,w) \mathrm{d} z \mathrm{d} w} \frac{I_{\mathrm{A}}(f_2,y) } {I_{\mathrm{A}}(f_1,y) } \;.
\end{equation*}
Using \eqref{harmonicmean_est_1}, we estimate this ratio by
\begin{equation*}
\frac{p_1(y)}{p_2(y)} \approx
\hat{B}_{1,2}(y) = \frac{\int_{\mathbb{R}^d \times \mathbb{R}^{m}} f_2(z,w) \mathrm{d} z \mathrm{d} w}{\int_{\mathbb{R}^d \times \mathbb{R}^{m}} f_1(z,w) \mathrm{d} z \mathrm{d} w} \frac{\hat{I}_{\mathrm{A}}(f_2,y) } {\hat{I}_{\mathrm{A}}(f_1,y) }\;.
\end{equation*}
However, if the ratio $\int_{\mathbb{R}^d \times \mathbb{R}^{m}} f_2(z,w) \mathrm{d} z \mathrm{d} w /\int_{\mathbb{R}^d \times \mathbb{R}^{m}} f_1(z,w) \mathrm{d} z \mathrm{d} w$ is not equal to $1$, it still needs to be computed.
Assume that for $i=1,2$, $f_i(x,y) = h_i(x,y)g_i(x)$, for some measurable functions $h_i: \mathbb{R}^d \times \mathbb{R}^m \to \mathbb{R}^*_+$, $g_i : \mathbb{R}^d \to \mathbb{R}^*_+$ such that $\int_{\mathbb{R}^m} h_i(x,y) \mathrm{d} y $ does not depend on $x$. Note that this assumption holds in \Cref{sec:experiment-2:-image}.
We distinguish two cases:
\begin{enumerate}
\item
If for $i=1,2$, $g_i$ is integrable, we get
\begin{equation*}
\hat{B}_{1,2}(y) = \frac{\int_{\mathbb{R}^d} g_2(z) \mathrm{d} z }{\int_{\mathbb{R}^d} g_1(z) \mathrm{d} z} \frac{\hat{I}_{\mathrm{A}}(f_2,y) } {\hat{I}_{\mathrm{A}}(f_1,y) } \;.
\end{equation*}
In the case where the ratio $\int_{\mathbb{R}^d} g_2(z) \mathrm{d} z /
\int_{\mathbb{R}^d} g_1(z) \mathrm{d} z$ is unknown, such as with the priors considered in the experiment reported in Section \ref{sec:experiment-2:-image}, we use a Monte
Carlo algorithm such as MYULA or Px-MALA to compute it. Observe that this computation can be performed offline when the ratio does not depend on the value of $y$.
\item If there exists a function $g: \mathbb{R}^d \to \mathbb{R}_+^*$ and two real
numbers $\lambda_1,\lambda_2 >0$ such that
for $i=1,2$, $g_i( x) = g(\lambda_i x)$ for all $x \in
\mathbb{R}^d$, we get for all $R > 0$
\begin{multline*}
\int_{\mathbb{R}^d \times \mathbb{R}^{m}} \mathbbm{1}_{\ball{0}{R}} f_2(z,w) \mathrm{d} z \mathrm{d} w /\int_{\mathbb{R}^d \times \mathbb{R}^{m}} \mathbbm{1}_{\ball{0}{\lambda_2 \lambda_1^{-1}R}} f_1(z,w) \mathrm{d} z \mathrm{d} w \\
=
\int_{\mathbb{R}^d } \mathbbm{1}_{\ball{0}{R}} g_2(z) \mathrm{d} z /\int_{\mathbb{R}^d} \mathbbm{1}_{\ball{0}{\lambda_2 \lambda_1^{-1}R}} g_1(z) \mathrm{d} z
= (\lambda_1/ \lambda_2)^d
\;.
\end{multline*}
Since for all $a >0$ and $i=1,2$,
\begin{equation*}
\int_{\mathbb{R}^d \times \mathbb{R}^{m}} f_i(z,w) \mathrm{d} z \mathrm{d} w = \lim_{R \to +\infty} \int_{\mathbb{R}^d \times \mathbb{R}^{m}} \mathbbm{1}_{\ball{0}{aR}} f_i(z,w) \mathrm{d} z \mathrm{d} w
\;,
\end{equation*}
we get
\begin{equation*}
\left. \int_{\mathbb{R}^d \times \mathbb{R}^{m}} f_2(z,w) \mathrm{d} z \mathrm{d} w \middle/\int_{\mathbb{R}^d \times \mathbb{R}^{m}} f_1(z,w) \mathrm{d} z \mathrm{d} w \right. = (\lambda_1/ \lambda_2)^d \;.
\end{equation*}
\end{enumerate}
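The scaling identity above can be checked numerically; the following Python sketch (our illustration) takes $d = 1$ and an arbitrary Gaussian choice $g(x) = \mathrm{e}^{-x^2}$, for which the ratio of integrals equals $(\lambda_1/\lambda_2)^d$:

```python
import numpy as np

# Numerical check (in dimension d = 1) of the identity
#   int g_2(z) dz / int g_1(z) dz = (lambda_1/lambda_2)^d,  g_i(x) = g(lambda_i x),
# using g(x) = exp(-x^2) and a Riemann sum on a wide grid (dx cancels in the ratio).
g = lambda x: np.exp(-x**2)
lam1, lam2 = 2.0, 0.5

x = np.linspace(-50.0, 50.0, 2_000_001)
ratio = g(lam2 * x).sum() / g(lam1 * x).sum()   # expect (lam1/lam2)**1 = 4
```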
\subsection{Case of improper prior distributions}
\label{sec:case-proper-imp_prior}
Let $f : \mathbb{R}^d \times \mathbb{R}^m \to \mathbb{R}_+$ such that for all $y \in
\mathbb{R}^m$,
\begin{equation}
\label{eq:def_marginal_improper}
\tilde{p}(y) = \int_{\mathbb{R}^d} f(x,y) \mathrm{d} x < +\infty \;.
\end{equation}
Here, $f$ plays the role of an improper joint density of the data and
the parameters as the prior distribution is improper. This setting corresponds to \Cref{ssec:exp1}. Define for
all $y \in \mathbb{R}^m$ the conditional distribution on $\mathbb{R}^d \times
\mathbb{R}^m$ by $p(x|y) = f(x,y)/\tilde{p}(y)$, where $\tilde{p}$ is
defined by \eqref{eq:def_marginal_improper}. Then, define
for any bounded Borel set $\mathrm{A} \in \mathcal{B}(\mathbb{R}^d)$
\begin{equation}
\label{harmonicmean_improper}
I_{\mathrm{A}}(f,y) = \int_{\mathbb{R}^d} \mathbbm{1}_{\mathrm{A}}(x) \frac{p(x|y)}{f(x,y)} \mathrm{d} x \;.
\end{equation}
Then by \eqref{eq:def_marginal_improper}, we get
\begin{equation}
\label{eq:relation_harmonic_mean_improper}
\tilde{p}(y) = \left. \operatorname{Vol}(\mathrm{A}) /I_{\mathrm{A}}(f,y) \right. \;.
\end{equation}
For all $y \in \mathbb{R}^m$ and $\mathrm{A}\in \mathcal{B}(\mathbb{R}^d)$, we define the truncated harmonic mean estimator of $I_{\mathrm{A}}(f,y)$ as in \Cref{sec:case-proper-prior} by
\eqref{harmonicmean_est_1}.
Let now $f_1,f_2: \mathbb{R}^d \times \mathbb{R}^m \to \mathbb{R}_+$, satisfying for
all $i=1,2$ and $y \in \mathbb{R}^m$, $\tilde{p}_i(y) = \int_{\mathbb{R}^d}f_i(x,y) \mathrm{d}
x < +\infty$. We aim to estimate $\tilde{p}_1(y)/\tilde{p}_2(y)$. By \eqref{eq:relation_harmonic_mean_improper}, we have
\begin{equation*}
\label{eq:ratio_marginal_likelihood_improper}
\frac{\tilde{p}_1(y)}{\tilde{p}_2(y)} =
\frac{I_{\mathrm{A}}(f_2,y) } {I_{\mathrm{A}}(f_1,y) } \;.
\end{equation*}
Using \eqref{harmonicmean_improper} and \eqref{harmonicmean_est_1}, we estimate this ratio by
\begin{equation*}
\frac{\tilde{p}_1(y)}{\tilde{p}_2(y)} \approx
\hat{B}_{1,2}(y) = \frac{\hat{I}_{\mathrm{A}}(f_2,y) } {\hat{I}_{\mathrm{A}}(f_1,y) }\;.
\end{equation*}
\section{Conclusion}\label{sec:conclusion}
This paper presented a new and general proximal MCMC methodology to perform Bayesian computation in log-concave models, with a focus on enabling advanced Bayesian analyses for imaging inverse problems that are convex but not smooth, and currently solved mainly by convex optimisation. The methodology is based on a Moreau-Yosida-type regularised approximation of the target density that is by construction log-concave and Lipschitz continuously differentiable, and which can be addressed efficiently by using an unadjusted Langevin MCMC algorithm. We provided a detailed theoretical analysis of this scheme, including asymptotic as well as non-asymptotic convergence results, and bounds on the convergence rate of the chains with explicit dependence on model dimension. In addition to being highly computationally efficient and having a strong theoretical underpinning, this new methodology is general and can be applied straightforwardly to most problems solved by proximal optimisation, particularly all problems solved by using forward-backward splitting techniques. The proposed methodology was finally demonstrated with four experiments related to image deconvolution and tomographic reconstruction with total-variation and $\ell_1$ sparse priors, where we conducted a range of challenging Bayesian analyses related to model comparison and uncertainty quantification, and where we reported estimation accuracy and computational efficiency comparisons with the proximal Metropolis-adjusted Langevin algorithm (Px-MALA).
\section{Experimental results}
\label{sec:experiments}
In this section we illustrate the proposed methodology with four canonical imaging inverse problems related to image deconvolution and tomographic reconstruction with total-variation and $\ell_1$ sparse priors. In the Bayesian setting these problems are typically solved by MAP estimation, which delivers accurate solutions and can be computed very efficiently by using proximal convex optimisation algorithms. Here we demonstrate MYULA by performing some advanced and challenging Bayesian analyses that are beyond the scope of optimisation-based mathematical imaging methodologies. For example, in Section \ref{exp:BMS} we report two experiments where we use MYULA to perform Bayesian model choice for image deconvolution models, and where a novelty is that comparisons are performed intrinsically (i.e., without ground truth available) by computing the posterior probability of each model given the observed data. Following on from this, in Section \ref{exp:BUQ} we report two additional experiments where we use MYULA to explore the posterior uncertainty about $x$ and analyse specific aspects of the solutions delivered, particularly by computing simultaneous credible sets (joint Bayesian confidence sets).
Moreover, to assess the computational efficiency and the accuracy of MYULA we benchmark our estimations against the results of Px-MALA \cite{Pereyra2015} targeting the exact posterior $\pi(x) = p(x|y)$ (recall that this algorithm has no asymptotic estimation bias). We emphasise at this point that we do not seek to compare the methods explicitly and quantitatively because: 1) MYULA and Px-MALA do not target the exact same stationary distribution; 2) high-dimensional quantitative efficiency comparisons may depend strongly on the summary statistics used to define the efficiency metrics; and 3) results can often be marginally improved by fine tuning the algorithm parameters (e.g., step sizes, burn-in periods, etc.). What our comparisons seek to demonstrate is that MYULA can deliver reliable approximate inferences with a computational cost that is often significantly lower than Px-MALA, and more importantly, that it provides a general, robust, and theoretically sound computational framework for performing advanced Bayesian analyses for imaging problems. Experiments were conducted on an Apple MacBook Pro computer running MATLAB 2015.
\subsection{Bayesian model selection}\label{exp:BMS}
\subsubsection{Bayesian analysis and computation}
Most mathematical imaging problems can be solved with a range of alternative models. Currently, the predominant approach to select the best model for a specific problem is to compare their estimations against ground truth. For example, given $K$ alternative Bayesian models $\mathcal{M}_1, \ldots, \mathcal{M}_K$, practitioners often benchmark models by artificially degrading a set of test images, computing the MAP estimator for each model and image, and then measuring estimation error with respect to the truth. The model with the best overall performance is then used in applications to analyse real data. Of course this approach to model selection has some limitations: 1) it relies strongly on test data that may not be representative of the unknown, and 2) conclusions can depend on the estimation error metrics used.
An advantage of formulating inverse problems within the Bayesian framework is that, in addition to strategies to perform point estimation, this formalism also provides theory to compare models objectively and intrinsically, and hence perform model selection in the absence of ground truth. Precisely, $K$ alternative Bayesian models are compared through their marginal posterior probabilities
\begin{equation}\label{margPost}
p(\mathcal{M}_j | y) = \frac{ p(y|\mathcal{M}_j) K^{-1}}{\sum_{k = 1}^K p(y | \mathcal{M}_k) K^{-1}},\quad j \in \{1, \ldots, K\}\, ,
\end{equation}
where for objectivity here we use a uniform prior on the auxiliary variable $j$ indexing the models, $p(y | \mathcal{M}_j)$ is the marginal likelihood
\begin{equation}\label{margLike}
p(y | \mathcal{M}_j) = \int p(x,y | \mathcal{M}_j) \textrm{d}x,\quad j \in \{1, \ldots, K\}\, ,
\end{equation}
measuring model-fit-to-data and $p(y,x | \mathcal{M}_j)$ is the joint
probability density associated with $\mathcal{M}_j$ (see
\Cref{sec:selection_model_case-proper-imp_prior} for details regarding the case of
improper priors). Following Bayesian decision theory, to perform model
selection we simply choose the model with the highest posterior
probability (this is equivalent to performing MAP estimation on the
model index $j$):
\begin{equation*}
\mathcal{M}^* = \operatorname*{arg\,max}_{j \in \{1,\ldots,K\}} p(\mathcal{M}_j | y).
\end{equation*}
From a computational viewpoint, performing Bayesian model selection for imaging problems is challenging because it requires evaluating the marginal likelihoods $p(y | \mathcal{M}_j)$ up to a proportionality constant, or equivalently the Bayes factors $p(y | \mathcal{M}_j)/p(y | \mathcal{M}_i)$ for $i,j \in \{1,\cdots,K\}$ (see \Cref{sec:case-proper-imp_prior} for details regarding the case of improper priors). Here we perform this computation by Monte Carlo integration. Precisely, given $n$ samples $X^M_1,\ldots, X^M_n$ from $p(x |y, \mathcal{M}_j)$, we approximate the marginal likelihood of model $\mathcal{M}_j$ by using the truncated harmonic mean estimator \citep{Robert2009AIP}
\begin{equation}\label{harmonicEstimator}
p(y|\mathcal{M}_j) \approx \left(n^{-1}\sum_{k = 1}^n \frac{\indi{\mathsf{A}^{\star}}{(X^M_k)}}{p(X^M_k,y | \mathcal{M}_j)}\right)^{-1} \operatorname{Vol}(\mathsf{A}^{\star}) \;, \quad j \in \{1,2,3\}
\end{equation}
where for all $x,y$, $p(x,y| \mathcal{M}_j)$ is the joint density of $\mathcal{M}_j$ and $\mathsf{A}^{\star} = \cup_{j = 1}^3 \mathcal{C}^{\star}_{j,\alpha}$ is the union of the highest posterior density regions \eqref{HPD} of each model at level $(1-\alpha)$ (see Section \ref{exp:BUQ} for details about HPD regions). In our experiments we use the samples to calibrate each $\mathcal{C}^{\star}_{j,\alpha}$ for $\alpha = 0.8$. Notice that it is not necessary to compute $\operatorname{Vol}(\mathsf{A}^{\star})$ to calculate \eqref{margPost} because the normalisation is retrieved via $ \sum_{j=1}^3 p(\mathcal{M}_j|y) =1$. See \Cref{HME} for more details about this estimator and its use to compute the Bayes factors.
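To illustrate the full procedure (HPD calibration followed by the truncated harmonic mean estimate of each marginal likelihood), the following Python sketch reproduces it in a toy one-dimensional conjugate Gaussian setting where the exact marginal likelihoods are available in closed form. Exact posterior sampling stands in for MYULA, the centred-interval HPD calibration uses the symmetry of the Gaussian posterior, and all numerical values are illustrative:

```python
import numpy as np

# Two toy models M_j: prior x ~ N(0,1), likelihood y | x ~ N(x, s2_j).
rng = np.random.default_rng(1)
y, n, alpha = 1.0, 100_000, 0.8

def joint(x, s2):
    # joint density p(x, y | M_j)
    return (np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
            * np.exp(-(y - x)**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2))

s2s = [1.0, 4.0]
samples, intervals = [], []
for s2 in s2s:
    m, v = y / (1 + s2), s2 / (1 + s2)        # exact posterior x|y ~ N(m, v)
    X = m + np.sqrt(v) * rng.standard_normal(n)
    # HPD region of mass (1 - alpha): for a Gaussian posterior it is a
    # centred interval, calibrated from the samples
    r = np.quantile(np.abs(X - m), 1 - alpha)
    samples.append(X)
    intervals.append((m - r, m + r))

# A* = union of the two HPD intervals, and its volume
(a1, b1), (a2, b2) = intervals
if max(a1, a2) <= min(b1, b2):                 # overlapping intervals
    vol = max(b1, b2) - min(a1, a2)
else:                                          # disjoint intervals
    vol = (b1 - a1) + (b2 - a2)
in_A = lambda x: ((x >= a1) & (x <= b1)) | ((x >= a2) & (x <= b2))

# truncated harmonic mean estimates vs exact marginals p(y|M_j) = N(y; 0, 1+s2)
est = [vol / np.mean(in_A(X) / joint(X, s2)) for X, s2 in zip(samples, s2s)]
exact = [np.exp(-y**2 / (2 * (1 + s2))) / np.sqrt(2 * np.pi * (1 + s2))
         for s2 in s2s]
```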
\subsubsection{Experiment 1: Image deconvolution with total-variation prior}\label{ssec:exp1}
\paragraph{Experiment setup}
To illustrate the Bayesian model selection approach we consider an image deconvolution problem with three alternative models related to three different blur operators. The goal of image deconvolution is to recover a high-resolution image $x \in \mathbb{R}^n$ from a blurred and noisy observation $y = H x + w$, where $H$ is a circulant blurring matrix and $w \sim \mathcal{N}(0,\sigma^2\boldsymbol{I}_n)$. This inverse problem is ill-conditioned, a difficulty that Bayesian image deconvolution methods address by exploiting the prior knowledge available. For this first experiment we consider three alternative models involving three different blur operators $H_1$, $H_2$, and $H_3$. With regards to the prior, we use the popular total-variation prior that promotes regularity by using the pseudo-norm $TV(x) = \|\nabla_d x\|_{1-2}$, where $\|\cdot\|_{1-2}$ is the composite $\ell_1 -\ell_2$ norm and $\nabla_d$ is the two-dimensional discrete gradient operator. The posterior distribution $p(x|y)$ for the models is given by
\begin{eqnarray}\label{deconvolutionTV}
\mathcal{M}_j : \quad \pi(x) \propto \exp{\left[-(\|y-H_j x\|^2/2\sigma^2) - \beta TV(x) \right]}
\end{eqnarray}
with fixed hyper-parameters $\sigma > 0$ and $\beta > 0$ set manually by an expert. This density is log-concave and MAP estimation can be performed efficiently by proximal convex optimisation (here we use the ADMM algorithm SALSA \citep{Figueiredo2011}).
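For reference, the isotropic TV pseudo-norm $TV(x) = \|\nabla_d x\|_{1-2}$ used in \eqref{deconvolutionTV} can be computed as in the Python sketch below. This is our illustration; we use a replicate ("Neumann") boundary so the last row and column differences are zero, and boundary conventions vary across implementations:

```python
import numpy as np

# Isotropic TV pseudo-norm: the l2 norm of the discrete gradient at each
# pixel, summed (l1) over the pixels.
def tv(img):
    dh = np.diff(img, axis=1, append=img[:, -1:])  # horizontal differences
    dv = np.diff(img, axis=0, append=img[-1:, :])  # vertical differences
    return np.sqrt(dh**2 + dv**2).sum()

# sanity checks: a constant image has TV = 0; a unit step edge spanning
# m = 8 rows contributes m = 8 to the TV
flat = np.ones((8, 8))
step = np.hstack([np.zeros((8, 4)), np.ones((8, 4))])
```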
Figure \ref{FibBoat1} presents an experiment with the \texttt{Boat} test image of size $d = 256 \times 256$ pixels. Figure \ref{FibBoat1}(a) shows a blurred and noisy observation $y$, generated by using a $5 \times 5$ uniform blur and Gaussian noise with $\sigma = 0.47$, related to a blurred signal-to-noise ratio of $40$dB. Moreover, Figures \ref{FibBoat1}(b)-(d) show the MAP estimates associated with three alternative instances of model \eqref{deconvolutionTV} involving the following blur operators:
\begin{itemize}
\item $\mathcal{M}_1$: $H_1$ is the correct $5 \times 5$ uniform blur operator.
\item $\mathcal{M}_2$: $H_2$ is a mildly misspecified $6 \times 6$ uniform blur operator.
\item $\mathcal{M}_3$: $H_3$ is a strongly misspecified $7 \times 7$ uniform blur operator.
\end{itemize}
(All models share the same hyper-parameter values $\sigma = 0.47$ and $\beta = 0.03$ selected manually to produce good image deconvolution results.) We observe in Figure \ref{FibBoat1} that models $\mathcal{M}_1$ and $\mathcal{M}_2$ have produced sharp images with fine detail, whereas $\mathcal{M}_3$ is clearly misspecified. In terms of estimation performance with respect to the truth, as expected the estimate of Figure \ref{FibBoat1}(c) corresponding to model $\mathcal{M}_1$ achieves the highest peak signal-to-noise-ratio (PSNR) of $33.8$dB, $\mathcal{M}_2$ scores $33.4$dB, and $\mathcal{M}_3$ scores $13.4$dB. Finally, computing the MAP estimates displayed in Figure \ref{FibBoat1} with SALSA \cite{Figueiredo2011} required $2$ seconds per model.
\begin{figure}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/Boat/BoatBlurred.png}}
\small{(a)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/Boat/BoatXmap.png}}
\small{(b)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/Boat/BoatXmap2.png}}
\small{(c)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/Boat/BoatXmap3.png}}
\small{(d)}
\end{minipage}
\caption{\small{Deconvolution experiment - \texttt{Boat} test image ($256\times 256$ pixels): (a) Blurred and noisy image $y$, (b)-(d) MAP estimators corresponding to models $\mathcal{M}_1$, $\mathcal{M}_2$, and $\mathcal{M}_3$.}} \label{FibBoat1}
\end{figure}
\paragraph{Model selection in the absence of ground truth}
We now demonstrate the Bayesian approach to perform model selection intrinsically. Precisely, we ran $10^5$ iterations of MYULA with the specific blur operators corresponding to $\mathcal{M}_1$, $\mathcal{M}_2$, and $\mathcal{M}_3$. For this experiment we implemented MYULA with $f(x) = \|y-H_j x\|^2/2\sigma^2$ and $g(x) = \beta TV(x)$, with fixed algorithm parameters $\lambda = L_f^{-1} = 0.45$ and $\gamma = L_f^{-1}/5 = 0.1$, and by using Chambolle's algorithm \citep{Chambolle} to evaluate the proximal operator of the TV-norm. Computing these samples required approximately $30$ minutes per model. Following on from this, we used the samples to calibrate the high-posterior-density regions $\mathcal{C}^{\star}_j$ of each model at level $20\%$, and then computed the Bayes factors between the models by using \eqref{harmonicEstimator} (see \ref{sec:case-proper-prior} for details).
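For intuition about the MYULA update itself, the following Python sketch (our illustration, not the MATLAB code used in the experiments) runs the algorithm on a scalar analogue of these models, with $f(x) = (y-x)^2/(2\sigma^2)$ and $g(x) = \beta|x|$, so Chambolle's TV prox is replaced by the $\ell_1$ soft-thresholding prox; all parameter values are illustrative:

```python
import numpy as np

# Scalar MYULA for pi(x) prop. to exp(-f(x) - g(x)),
# f(x) = (y - x)^2 / (2 sigma^2) smooth, g(x) = beta*|x| non-smooth.
rng = np.random.default_rng(2)
y, sigma, beta = 2.0, 1.0, 1.0
lam, gamma = 0.5, 0.1                    # Moreau-Yosida smoothing, step size
n_iter, burnin = 200_000, 1_000

grad_f = lambda x: (x - y) / sigma**2
prox_g = lambda x, t: np.sign(x) * max(abs(x) - t * beta, 0.0)

X = np.empty(n_iter)
x = 0.0
for k in range(n_iter):
    # drift = gradient of f plus gradient of the Moreau envelope of g
    drift = grad_f(x) + (x - prox_g(x, lam)) / lam
    x = x - gamma * drift + np.sqrt(2 * gamma) * rng.standard_normal()
    X[k] = x
chain_mean = X[burnin:].mean()

# reference: mean of the regularised target on a fine grid, where the
# Moreau envelope of g is the Huber function
t = np.linspace(-10.0, 10.0, 200_001)
env = np.where(np.abs(t) <= lam * beta,
               t**2 / (2 * lam),
               beta * np.abs(t) - lam * beta**2 / 2)
w = np.exp(-(y - t)**2 / (2 * sigma**2) - env)
ref_mean = (t * w).sum() / w.sum()
```

The chain mean should agree with the grid-based mean of the regularised target up to Monte Carlo and discretisation error, illustrating that MYULA targets the smoothed density $\pi^\lambda$ rather than $\pi$ itself.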
By applying this procedure we obtained that $\mathcal{M}_1$ has the highest posterior probability $p(\mathcal{M}_1|y) = 0.964$, followed by $p(\mathcal{M}_2|y) = 0.036$ and $p(\mathcal{M}_3|y) < 0.001$ (the values of the Bayes factors for this experiment are $\hat{B}_{1,2}(y) = 26.8$ and $\hat{B}_{1,3}(y) > 10^{3}$). These results, which have been computed without using any form of ground truth, are in agreement with the PSNR values calculated by using the true image and provide strong evidence in favour of model $\mathcal{M}_1$. They also confirm the good performance of the Bayesian model selection technique.
\paragraph{Comparison with proximal MALA}
We conclude this first experiment by benchmarking our estimations against Px-MALA, which targets \eqref{deconvolutionTV} exactly. Precisely, we recalculated the models' posterior probabilities \eqref{margPost} with Px-MALA and obtained that $p( \mathcal{M}_1 | y) = 0.962$, $p( \mathcal{M}_2 | y) = 0.038$, and $p( \mathcal{M}_3 | y) < 0.001$, indicating that the MYULA estimate has an approximation error of the order of $0.5\%$ (to obtain accurate estimates for Px-MALA we used $n = 10^7$ iterations with an adaptive time-step targeting an average acceptance rate of order $45\%$). Moreover, comparing the chains generated with MYULA and Px-MALA revealed that MYULA is significantly more computationally efficient than Px-MALA. For illustration, Fig. \ref{FibBoat2}(a) shows the transient regimes of the MYULA and Px-MALA chains related to $\mathcal{M}_1$, where starting from a common initial condition the chains converge to the posterior typical set\footnote{In stationarity, $x|y$ is with very high probability in the neighbourhood of the $(d-1)$-dimensional shell $\{x : U(x) = \mathbb{E}[U(x)|y]\}$, see \cite{Pereyra:2016b}} of $p(x|y)$ (to improve visibility this is displayed in logarithmic scale). Observe that MYULA requires around $10^2$ iterations to navigate the parameter space and reach the typical set, whereas Px-MALA requires $10^4$ iterations. Furthermore, to compare the efficiency of the chains in stationarity, Fig. \ref{FibBoat2}(b) shows the autocorrelation function of the chains generated by MYULA and Px-MALA. To highlight the efficiency of MYULA we have used the chains' slowest component (i.e., that with largest variance) as the summary statistic. Again, observe that MYULA is significantly more efficient than Px-MALA. From a practitioner's viewpoint, this efficiency advantage is further accentuated by the fact that MYULA iterations are roughly half as computationally expensive as Px-MALA iterations, which include the MH step.
\begin{figure}
\begin{minipage}[l2]{0.5\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/Flinstones/logPiTraceLog.png}}
\small{(a)}
\end{minipage}
\begin{minipage}[l2]{0.5\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/Boat/ACF.png}}
\small{(b)}
\end{minipage}
\caption{\small{MYULA and Px-MALA comparison: (a) Convergence of the chains to the typical set of \eqref{deconvolutionTV} under model $\mathcal{M}_1$ (logarithmic scale), (b) chain autocorrelation function (ACF).}} \label{FibBoat2}
\end{figure}
\subsubsection{Experiment 2: Image deconvolution with wavelet frame}
\label{sec:experiment-2:-image}
\paragraph{Experiment setup}
The second model selection experiment we consider involves three alternative image deconvolution models with different priors. This experiment is more challenging than the previous one because priors operate indirectly on $y$ through $x$. We consider three models of the form
\begin{eqnarray}\label{deconvolutionL1wave}
\mathcal{M}_j: \quad p(x|y) \propto \exp{\left[-(\|y-H x\|^2/2\sigma^2) - \beta_j \|\Psi_j x\|_1 \right]}
\end{eqnarray}
where $\Psi_j$ is a model dependent frame:
\begin{itemize}
\item $\mathcal{M}_1$: $\Psi_1$ is a redundant Haar frame with 6-level, and $\beta_1 = 0.02$ is selected automatically by using a hierarchical Bayesian method \citep{Pereyra_EUSIPCO_2015},
\item $\mathcal{M}_2$: $\Psi_2$ is a redundant Haar frame with 3-level, and $\beta_2 = 0.02$ is selected automatically by using a hierarchical Bayesian method \citep{Pereyra_EUSIPCO_2015},
\item $\mathcal{M}_3$: $\Psi_3$ is a redundant Haar frame with 3-level, and $\beta_3 = 0.003$ is selected automatically by using the L-curve method \citep{Hanke1993}.
\end{itemize}
To make the selection problem even more challenging, in this experiment we use a higher noise level $\sigma = 1.76$, related to a blurred signal-to-noise ratio of $30$dB. We note that \eqref{deconvolutionL1wave} is log-concave and MAP estimation can be performed efficiently by proximal convex optimisation (here we use the ADMM algorithm SALSA \cite{Figueiredo2011}).
Fig. \ref{FibFlin} presents an experiment with the \texttt{Flinstones} test image of size $d = 256 \times 256$ pixels. Fig. \ref{FibFlin}(a) shows the blurred and noisy observation $y$ used in this experiment, which we generated by using a $5 \times 5$ uniform blur and $\sigma = 1.76$, and Fig. \ref{FibFlin}(b)-(d) show the MAP estimates obtained with $\mathcal{M}_1$, $\mathcal{M}_2$, and $\mathcal{M}_3$ by using SALSA \citep{Figueiredo2011} (these computations required $4$ seconds per model). We observe in Figure \ref{FibFlin} that models $\mathcal{M}_1$ and $\mathcal{M}_2$ have produced sharp images with fine detail, whereas $\mathcal{M}_3$ is misspecified. In terms of estimation performance with respect to the truth, the estimate of Figure \ref{FibFlin}(c) corresponding to model $\mathcal{M}_2$ achieves the highest peak signal-to-noise-ratio (PSNR) of $20.8$dB, $\mathcal{M}_1$ scores $20.6$dB, and $\mathcal{M}_3$ scores $11.6$dB.
\begin{figure}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/Flinstones/obs.png}}
\small{(a)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/Flinstones/xmapM3.png}}
\small{(b)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/Flinstones/xmapM1.png}}
\small{(c)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/Flinstones/xmapM2.png}}
\small{(d)}
\end{minipage}
\caption{\small{Deconvolution experiment - \texttt{Flinstones} test image ($256\times 256$ pixels): (a) Blurred and noisy image $y$, (b)-(d) MAP estimators corresponding to models $\mathcal{M}_1$, $\mathcal{M}_2$, and $\mathcal{M}_3$.}} \label{FibFlin}
\end{figure}
\paragraph{Model selection in the absence of ground truth}
Similarly to the previous experiment, we used MYULA to perform Bayesian model selection intrinsically. Precisely, we used MYULA to generate three sets of $n = 10^5$ samples $X^M_1,\ldots, X^M_n$ approximately distributed according to \eqref{deconvolutionL1wave} with the parameters corresponding to $\mathcal{M}_1$, $\mathcal{M}_2$, and $\mathcal{M}_3$. For this experiment we implemented MYULA with $f(x) = \|y-H x\|^2/2\sigma^2$ and $g(x) = \beta_j\|\Psi_j x\|_1$, with fixed algorithm parameters $\lambda = L_f^{-1} = 4.5$ and $\gamma = L_f^{-1}/5 = 0.9$. Computing these samples required $50$ minutes per model. Following on from this, we used the samples to calibrate the high-posterior-density regions $\mathcal{C}^{\star}_j$ of each model at level $20\%$, and then computed the Bayes factors between the models by using \eqref{harmonicEstimator} (see \ref{sec:case-proper-prior} for details).
By applying this procedure we obtained that $\mathcal{M}_2$ has the highest posterior probability $p(\mathcal{M}_2|y) = 0.42$, followed by $p(\mathcal{M}_1|y) = 0.32$ and $p(\mathcal{M}_3|y) = 0.26$ (the values of the Bayes factors for this experiment are $\hat{B}_{2,1}(y) = 1.31$ and $\hat{B}_{2,3}(y) = 1.62$). Note that these results, which have been computed without using any form of ground truth, are in agreement with the PSNR values calculated by using the true image and indicate that $\mathcal{M}_2$ is the most appropriate model for data $y$.
\paragraph{Comparison with proximal MALA}
Again, we conclude our second experiment by benchmarking our estimations against Px-MALA, which targets \eqref{deconvolutionL1wave} exactly. Precisely, we recalculated the models' posterior probabilities \eqref{margPost} with Px-MALA and obtained that $p(\mathcal{M}_2 | y) = 0.41$, $p(\mathcal{M}_1 | y) = 0.33$, and $p(\mathcal{M}_3 | y) = 0.26$, indicating that the MYULA estimate has an approximation error of the order of $0.5\%$ (to obtain accurate estimates for Px-MALA we used $n = 10^7$ iterations with an adaptive time-step targeting an average acceptance rate of order $45\%$). Moreover, efficiency analyses indicate that in this case MYULA is approximately an order of magnitude more efficient per iteration than Px-MALA, with an additional advantage in terms of time-normalised computational efficiency because of a lower computational cost per iteration.
\subsection{Bayesian uncertainty quantification via posterior credible sets}\label{exp:BUQ}
\subsubsection{Bayesian analysis and computation}
As mentioned earlier, point estimators such as $\hat{x}_{MAP}$ deliver accurate results but do not provide information about the posterior uncertainty of $x$. Given the uncertainty that is inherent to ill-posed and ill-conditioned inverse problems, it would be highly desirable to complement point estimators with posterior credible sets that indicate the region of the parameter space where most of the posterior probability mass of $x$ lies. This is formalised in the Bayesian decision theory framework by computing \emph{credible regions} \cite{cprbayes}. A set $\mathcal{C}_\alpha$ is a posterior credible region with confidence level $(1-\alpha)$ if
\begin{equation*}
\mathbb{P} \left[x \in \mathcal{C}_{\alpha} | y \right] = 1-\alpha.
\end{equation*}
It is easy to check that for any $\alpha \in (0,1)$ there are infinitely many regions of the parameter space that verify this property. Among all possible regions, the so-called \emph{highest posterior density} (HPD) region has minimum volume \cite{cprbayes}, and is given by
\begin{eqnarray}\label{HPD}
\mathcal{C}^{\star}_{\alpha} = \{ x : U(x) \leq \eta_\alpha \}
\end{eqnarray}
with $\eta_\alpha \in \mathbb{R}$ chosen such that $\int_{\mathcal{C}^{\star}_{\alpha}} p(x|y) \textrm{d}x = 1-\alpha$ holds. This joint credible set has the important advantage that it can be enumerated by simply specifying the scalar value $\eta_\alpha$.
From a computational viewpoint, calculating credible sets for images is very challenging because it requires solving very high-dimensional integrals of the form $\int_{\mathcal{C}^{\star}_{\alpha}} p(x|y)\textrm{d}x$. In this work, we use MYULA to approximate these integrals.
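In practice, the threshold $\eta_\alpha$ in \eqref{HPD} is calibrated directly from the MCMC samples: since $\mathcal{C}^{\star}_{\alpha}$ must contain $(1-\alpha)$ of the posterior mass, $\eta_\alpha$ can be approximated by the empirical $(1-\alpha)$-quantile of the values $U(X_k)$. The following Python sketch (our illustration) demonstrates this on a toy Gaussian posterior where the exact threshold is known:

```python
import numpy as np

# Calibrating the HPD threshold eta_alpha from posterior samples.
rng = np.random.default_rng(3)
alpha = 0.8                              # HPD region of mass 1 - alpha = 20%

# toy posterior: x | y ~ N(0, I_d), so U(x) = ||x||^2 / 2 up to a constant;
# for d = 2, U ~ Exp(1) and the exact threshold is -log(alpha)
d, n = 2, 500_000
X = rng.standard_normal((n, d))
U = 0.5 * (X**2).sum(axis=1)

eta = np.quantile(U, 1 - alpha)          # empirical (1-alpha)-quantile of U(X_k)
coverage = np.mean(U <= eta)             # approx. 1 - alpha by construction
```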
\subsubsection{Experiment 3: Tomographic image reconstruction}\label{tomographic_imaging}
\paragraph{Experiment setup}
The third experiment we consider is a tomographic image reconstruction problem with a total-variation prior. The goal is to recover the image $x \in \mathbb{R}^n$ from an incomplete and noisy set of Fourier measurements $y = A F x + w$, where $F$ is the discrete Fourier transform operator, $A$ is a tomographic sampling mask, and $w \sim \mathcal{N}(0,\sigma^2\boldsymbol{I}_n)$. This inverse problem is ill-posed, resulting in significant uncertainty about the true value of $x$. Similarly to Experiment 1, in this experiment we regularise the problem and reduce the uncertainty about $x$ by using a total-variation prior promoting piecewise regular images. The resulting posterior $p(x|y)$ is
\begin{eqnarray}\label{tomographic}
\pi(x) \propto \exp{\left[-\|y-A F x\|^2/2\sigma^2 -\beta TV(x)\right]},
\end{eqnarray}
with fixed hyper-parameters $\sigma > 0$ and $\beta > 0$ set manually by an expert. We note that this density is log-concave and MAP estimation can be performed efficiently by proximal convex optimisation (here we use the ADMM algorithm SALSA \cite{Figueiredo2011}).
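For concreteness, the negative log-posterior $U(x) = \|y - AFx\|^2/2\sigma^2 + \beta\, TV(x)$ associated with \eqref{tomographic} can be sketched as follows; the random sampling mask, the periodic finite-difference TV, and all sizes and parameter values below are illustrative stand-ins rather than the exact operators used in the experiment:

```python
import numpy as np

def tv(x):
    """Isotropic discrete total variation of a 2-D image (periodic boundaries)."""
    dx = np.roll(x, -1, axis=1) - x
    dy = np.roll(x, -1, axis=0) - x
    return np.sum(np.sqrt(dx**2 + dy**2))

def neg_log_posterior(x, y, mask, sigma, beta):
    """U(x) = ||y - A F x||^2 / (2 sigma^2) + beta * TV(x)."""
    Fx = np.fft.fft2(x, norm="ortho")   # F: 2-D discrete Fourier transform
    r = y - mask * Fx                   # A: binary sampling mask on Fourier coefficients
    return np.sum(np.abs(r)**2) / (2 * sigma**2) + beta * tv(x)

# Toy illustration (all sizes and parameter values are hypothetical):
rng = np.random.default_rng(1)
x_true = np.zeros((32, 32)); x_true[8:24, 8:24] = 1.0
mask = rng.random((32, 32)) < 0.15      # observe ~15% of the coefficients
sigma = 7e-2
y = mask * np.fft.fft2(x_true, norm="ortho") + sigma * rng.standard_normal((32, 32))
u_val = neg_log_posterior(x_true, y, mask, sigma=sigma, beta=5.0)
```

The same $U$ is what MYULA evaluates implicitly through $\nabla f$ and the prox of $\beta\,TV$.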
Figure \ref{FigMRI1} presents an experiment with the \texttt{Shepp-Logan phantom} magnetic resonance image (MRI) of size $n = 128 \times 128$ pixels presented in Figure \ref{FigMRI1}(a). Figure \ref{FigMRI1}(b) shows a noisy tomographic measurement $y$ of this image, contaminated with Gaussian noise with $\sigma = 7 \times 10^{-2}$ (to improve visibility Figure \ref{FigMRI1}(b) shows the amplitude of the Fourier coefficients in logarithmic scale, with black regions representing unobserved coefficients). Notice from Figure \ref{FigMRI1}(b) that only $15\%$ of the original Fourier coefficients are observed. Moreover, Figure \ref{FigMRI1}(c) shows the Bayesian estimate $\hat{x}_{MAP}$ associated with \eqref{tomographic} with hyper-parameter value $\beta = 5$.
\begin{figure}[htbp!]
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{Figures/MRI_true.png}}
\small{(a)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{Figures/MRI_obs.png}}
\small{(b)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{Figures/MRI_xmap.png}}
\small{(c)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{Figures/MRI_xTest_highNoise.png}}
\small{(d)}
\end{minipage}
\caption{\small{Tomography experiment: (a) \texttt{Shepp-Logan phantom} image ($128\times128$ pixels), (b) tomographic observation $y$ (amplitude of Fourier coefficients in logarithmic scale), (c) MAP estimator $\hat{x}_{MAP}$, (d) test image $x_\dagger$ in which the structure of interest has been removed.}} \label{FigMRI1}
\end{figure}
\paragraph{Bayesian uncertainty analysis}
We now conduct a simple Bayesian uncertainty analysis to illustrate how posterior credible sets can inform decision-making. For illustration, suppose that the structure highlighted in red in Figure \ref{FigMRI1}(c) is relevant from a clinical viewpoint because it provides important information for diagnosis or treatment related decision-making. Also, suppose that we first observe this structure in the Bayesian estimate $\hat{x}_{MAP}$ and that, following on from this, we wish to explore the posterior uncertainty about $x$ to learn more about the structure. In particular, here we conduct a simple analysis to show that there is lack of confidence regarding the presence of this structure in the true image (i.e., the structure could be an artefact). Precisely, this is achieved by computing the HPD credible region $\mathcal{C}^{\star}_{\alpha}$ and showing that it includes solutions that are essentially equivalent to $\hat{x}_{MAP}$ except for the fact that they do not have the structure of interest.
As an alternative solution, or ``counter-example'' to $\hat{x}_{MAP}$, consider the image $x_\dagger$ displayed in Figure \ref{FigMRI1}(d). This image is equivalent to $\hat{x}_{MAP}$ except for the fact that the structure of interest has been removed (we generated this image by applying a segmentation-inpainting process to $\hat{x}_{MAP}$ to replace the structure with the surrounding intensity level). Of course, clinicians observing $x_\dagger$ would potentially arrive at significantly different conclusions about the diagnosis or the treatment required. This test image scores $U(x_\dagger) = 1.27 \times 10^4$.
To determine if $x_\dagger$ belongs to $\mathcal{C}^{\star}_{\alpha}$ we used MYULA to generate $n = 10^5$ samples from \eqref{tomographic}, and calculated the HPD threshold $\eta_\alpha$ by estimating the $(1-\alpha)$-quantile of $U(x)$ (we implemented the algorithm with $f(x) = \|y-A F x\|^2/2\sigma^2$ and $g(x) = \beta TV(x)$, with fixed parameters $\lambda = L_f^{-1} = 1 \times 10^{-4}$ and $\gamma_k = L_f^{-1}/10 = 10^{-5}$, and by using Chambolle's algorithm \cite{Chambolle} to evaluate the proximal operator of the TV norm). Fig. \ref{FigMRI3}(a) shows the threshold values $\eta_\alpha$ for a range of values of $\alpha \in [0.01,0.99]$. Observe that $U(x_\dagger) = 1.27 \times 10^4$ is significantly lower than the values displayed in Fig. \ref{FigMRI3}(a), indicating that the counter-example image $x_\dagger$ belongs to the set of likely solutions to the inverse problem (e.g., at level $90\%$, $\eta_{0.10} = 2.34 \times 10^4$ and hence $x_\dagger \in \mathcal{C}^{\star}_{0.10}$). Based on this we conclude that, with the current number of observations and noise level, it is not possible to assert confidently that the structure considered is present in the true image. Consequently, we would recommend that these data are not used as primary evidence to support decision-making about this structure. Generating the Monte Carlo samples and computing the HPD threshold values required $15$ minutes.
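For reference, the generic MYULA recursion used to generate these samples can be sketched as below; here \texttt{gradf} and \texttt{proxg} stand for $\nabla f$ and the proximal mapping of $g$ (in this experiment, the tomographic data-fidelity gradient and Chambolle's TV prox), and the 1-D toy target used for illustration is hypothetical:

```python
import numpy as np

def myula(x0, gradf, proxg, gamma, lam, n_iter, rng):
    """Generic MYULA chain targeting (a smoothed version of) exp(-f(x) - g(x)).

    One step: X_{k+1} = X_k - gamma * grad f(X_k)
                        - (gamma / lam) * (X_k - prox_g^lam(X_k))
                        + sqrt(2 * gamma) * Z_{k+1},   Z_{k+1} ~ N(0, I).
    """
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_iter):
        noise = np.sqrt(2.0 * gamma) * rng.standard_normal(x.shape)
        x = x - gamma * gradf(x) - (gamma / lam) * (x - proxg(x, lam)) + noise
        samples.append(x.copy())
    return np.array(samples)

# Toy 1-D check: f(x) = x^2/2, g(x) = |x| (a hypothetical smooth+nonsmooth target).
gradf = lambda x: x
proxg = lambda x, lam: np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # prox of |.|
rng = np.random.default_rng(2)
chain = myula(np.zeros(1), gradf, proxg, gamma=0.05, lam=0.5, n_iter=20_000, rng=rng)
```

The chain mean of this symmetric toy target concentrates near zero, which gives a quick sanity check of the implementation.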
\paragraph{Comparison with proximal MALA}
We conclude this experiment by benchmarking our estimations against Px-MALA, which targets \eqref{tomographic} exactly (to obtain accurate estimates for Px-MALA we use $n = 10^7$ iterations with an adaptive time-step targeting an average acceptance rate of order $45\%$). The HPD threshold values $\eta_\alpha$ obtained with Px-MALA are reported in Fig. \ref{FigMRI3}(a); notice the approximation error of the order of $3\%$ due to MYULA's estimation bias (this does not affect the conclusions of the experiment). With regards to computational performance, an efficiency analysis of the two algorithms indicates that for this model MYULA is approximately two orders of magnitude more efficient than Px-MALA in terms of integrated autocorrelation time (for illustration, Fig. \ref{FigMRI3}(b) compares the autocorrelation functions for the slowest component of the MYULA and Px-MALA chains).
\begin{figure}[htbp!]
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/MRI/HDPthresh_PULA_PMALA.png}}
\small{(a)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/MRI/ACF_PULA_PMALA.png}}
\small{(b)}
\end{minipage}
\caption{\small{Tomography experiment: (a) HPD region thresholds $\eta_\alpha$ for MYULA and Px-MALA, (b) chain autocorrelation functions for MYULA and Px-MALA.}} \label{FigMRI3}
\end{figure}
\subsubsection{Experiment 4: Sparse image deconvolution with an $\ell_1$ prior}\label{ssec:exp4}
\paragraph{Experiment setup}
The fourth experiment we consider is a sparse image deconvolution problem with a Laplace or $\ell_1$ prior. Again, we aim to recover $x \in \mathbb{R}^n$ from $y = Hx + w$, where $H$ is a circulant blurring matrix and $w \sim \mathcal{N}(0,\sigma^2\boldsymbol{I}_n)$. We expect sparse solutions and use a Laplace prior related to the $\ell_1$ norm of $x$. The resulting posterior $p(x|y)$ is
\begin{eqnarray}\label{deconvolution}
\pi(x) \propto \exp{\left[-\|y-Hx\|^2/2\sigma^2 - \beta \|x\|_{1}\right]},
\end{eqnarray}
with fixed hyper-parameters $\sigma >0$ and $\beta > 0$ set manually by an expert. Similarly to the previous experiments, we notice that this density is log-concave and MAP estimation can be performed efficiently by proximal convex optimisation.
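One reason this model is convenient for both convex optimisation and proximal MCMC is that the prox of the $\ell_1$ term has the closed-form soft-thresholding expression; a short sketch:

```python
import numpy as np

def prox_l1(x, tau):
    """Proximal operator of tau * ||.||_1: componentwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# The prox shrinks every component towards zero by tau and zeroes small entries,
# which is the mechanism by which this prior promotes sparse solutions.
v = np.array([-3.0, -0.5, 0.0, 0.2, 2.0])
out = prox_l1(v, tau=1.0)   # -> [-2., 0., 0., 0., 1.]
```

Within MYULA, this map plays the role of $\operatorname{prox}_g^{\lambda}$ for $g(x)=\beta\|x\|_1$ (with $\tau = \lambda\beta$), evaluated once per iteration.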
Figure \ref{FigMicro} presents an experiment with a microscopy dataset of \cite{Zhu2012} related to high-resolution live cell imaging. Figure \ref{FigMicro}(a) shows an observation $y$ of a field of size $4 \mu m \times 4 \mu m$ containing $100$ molecules. This low-resolution observation has been acquired with an instrument-specific point-spread function of size $16 \times 16$ pixels and a blurred signal-to-noise ratio of $20$dB (see \cite{Zhu2012} for more details). Figure \ref{FigMicro}(b) shows the Bayesian estimate $\hat{x}_{MAP}$ associated with \eqref{deconvolution} with hyper-parameter value $\beta = 0.01$ (notice that $\hat{x}_{MAP}$ is displayed in logarithmic scale to improve visibility). Computing this estimate with SALSA \cite{Figueiredo2011} required $2.3$ seconds.
\paragraph{Bayesian uncertainty analysis}
As a second example of Bayesian uncertainty quantification, we use $\mathcal{C}^{\star}_{\alpha}$ to examine the uncertainty about the position of the group of molecules highlighted in red in Fig. \ref{FigMicro}, which we assume to be relevant for the application considered. Precisely, we used $n = 10^5$ samples generated with MYULA to compute $\mathcal{C}^{\star}_{\alpha}$ with $\alpha = 0.01$, related to the $99\%$ confidence level, and obtained the threshold value $\eta_{0.01} = 9.69 \times 10^4$. Following on from this, to explore $\mathcal{C}^{\star}_{0.01}$ and quantify the uncertainty about the exact position of the molecules, we generated several surrogate test images by modifying $\hat{x}_{MAP}$, displacing the molecules in different directions until these surrogates exit $\mathcal{C}^{\star}_{0.01}$. Figure \ref{FigMicro}(c) shows the posterior uncertainty of the molecule positions (note that for visibility the figure focuses on the region of interest). This analysis reveals that the uncertainty at level $99\%$ is of the order of $\pm 5$ pixels vertically and $\pm 8$ pixels horizontally, corresponding to $\pm 78nm$ and $\pm 125nm$. It is worth mentioning that these results are in close agreement with the experimental precision results reported in \cite{Zhu2012}, which identified an average precision of the order of $80nm$ for the one hundred molecules.
\begin{figure}[htbp!]
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{Figures/Microscopy_obs.png}}
\small{(a)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{Figures/Microscopy_xmapLog.png}}
\small{(b)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/Microscopy/Microscopy_uncertaintyLog.png}}
\small{(c)}
\end{minipage}
\begin{minipage}[l2]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=7.5cm]{FiguresSIAM/Microscopy/HDPthresh_PULA_PMALA.png}}
\small{(d)}
\end{minipage}
\caption{\small{Microscopy experiment: (a) Blurred image $y$ ($256\times256$ pixels, $4\mu m \times 4\mu m$),\newline (b) MAP estimate $\hat{x}_{MAP}$ (logarithmic scale), (c) molecule position uncertainty quantification (vertical: $\pm 78nm$, horizontal: $\pm 125nm$), (d) HPD region thresholds $\eta_\alpha$ for MYULA and Px-MALA.}} \label{FigMicro}
\end{figure}
\paragraph{Comparison with proximal MALA}
Again, we conclude the experiment by benchmarking our estimations against Px-MALA, which targets \eqref{deconvolution} exactly (to obtain accurate estimates for Px-MALA we use $n = 2 \times 10^{7}$ iterations with an adaptive time-step targeting an average acceptance rate of the order of $45\%$). Figure \ref{FigMicro}(d) compares the estimations of the threshold values $\eta_\alpha$ obtained with MYULA and Px-MALA for different values of $\alpha$, indicating that the approximation errors of MYULA are of the order of $0.1\%$. Moreover, performance analyses based on the chains generated with each algorithm indicate that in this case MYULA is approximately one order of magnitude more computationally efficient than Px-MALA.
\section{Introduction}
Image estimation problems are ubiquitous in science and engineering. Examples include image denoising \cite{Lebrun:2013}, deconvolution \cite{Bonettini2013}, compressive sensing reconstruction \cite{Donoho2006}, super-resolution \cite{Veniamin2016}, tomographic reconstruction \cite{Lustig:2007}, inpainting \cite{Chan2011}, source separation \cite{Zhengming:2012}, fusion \cite{Haro:2015}, and phase retrieval \cite{Candes:2013}. The development of new theory, methodology, and algorithms for imaging problems is the focus of significant research efforts. In particular, convex imaging problems have received a lot of attention lately, leading to major developments in this area.
Most recent works in the imaging literature adopt formal mathematical approaches to analyse problems, derive solutions, and study the underpinning algorithms. There are several mathematical frameworks available to solve imaging problems \cite{somersalo:2005}. In particular, many modern methods are formulated in the Bayesian statistical framework, which relies on statistical models to represent the data observation process and the prior knowledge available, and then derives solutions by using inference techniques rooted in Bayesian decision theory \cite{somersalo:2005}.
There are currently two main approaches in Bayesian imaging methodology. The predominant approach is to use a convex formulation of the estimation problem and postulate a prior distribution that is log-concave. This leads to a posterior distribution that is also log-concave, for which maximum-a-posteriori (MAP) estimation can be performed efficiently by using high-dimensional convex optimisation algorithms \cite{Green2015}. In addition to scaling well to large settings, convex optimisation algorithms have two further advantages that are important for practical Bayesian computation: they are well understood theoretically, with convergence conditions that are clear and simple to check; and the main algorithms are general and can be applied similarly to a wide range of problems. However, as we will discuss later, convex optimisation on its own cannot deliver basic aspects of the Bayesian paradigm, and struggles to support the complex statistical analyses that are inherent to modern scientific reasoning and decision-making, such as uncertainty quantification and model comparison analyses.
The second main approach in Bayesian imaging methodology is based on stochastic simulation algorithms, namely Markov chain Monte Carlo (MCMC) algorithms. Such methods, which were already actively studied over two decades ago, have regained significant attention lately because of their capacity to address very challenging imaging problems that are beyond the scope of optimisation-based techniques \cite{Pereyra2016}. In addition to supporting complex models, such as hierarchical or empirical Bayesian models, MCMC methods also enable advanced analyses such as hypothesis testing and model selection. Unfortunately, despite great progress in high-dimensional MCMC methodology, solving imaging problems by stochastic simulation remains too expensive for applications involving moderate or large datasets. Another drawback of existing MCMC methods is that the conditions for their convergence are often significantly more difficult to check than those of optimisation schemes. As a result, most practitioners only assess convergence empirically. It is worth mentioning that some of these limitations can be partially mitigated by resorting to variational Bayes or message-passing approximations, which are generally significantly more computationally efficient than stochastic simulation. Unfortunately, such approximations are available only for specific models, and we currently have little theory to analyse the approximation error involved. Similarly, it is generally difficult to provide convergence guarantees for the related algorithms, which often suffer from local convergence issues. Observe that this is in sharp contrast with the convex optimisation approach, which despite its clear limitations, is general and well understood theoretically.
In summary, convex optimisation and MCMC methods have complementary strengths and weaknesses related to their computational efficiency, theoretical underpinning, and the inferences they can support. As a result, it is increasingly acknowledged that the two methodologies should be used together. In this view, the future imaging methodological toolbox should provide a flexible framework where it is possible to perform very efficiently a first analysis of a full dataset by using convex optimisation algorithms, followed by in-depth analyses by MCMC simulation for specific data (e.g., particular data that will be used as evidence to support a hypothesis or a decision). Also, in this framework practitioners should be able to use MCMC algorithms to perform preliminary analyses, which then set the basis for a full scale analysis with convex optimisation techniques. These could be, for example, exploratory analyses with selected data aimed at calibrating the model or performing Bayesian model selection, and benchmarking analyses to assess efficient approximations (e.g., optimisation-based approximate confidence intervals \cite{Pereyra2016b}). Unfortunately, it is currently difficult to use optimisation and MCMC methodologies in this complementary manner because optimisation methods use predominantly non-conjugate priors that are not smooth, such as priors involving the $\ell_1$ or the total-variation norms, whereas MCMC methods are mainly restricted to models with priors that are either conjugate to the likelihood function, or that are smooth with Lipschitz gradients (the latter enables efficient high dimensional MCMC algorithms such as the Metropolis-adjusted Langevin algorithm or Hamiltonian Monte Carlo \cite{Pereyra2016}).
Proximal MCMC algorithms, proposed recently in \cite{Pereyra2015}, are an important first step towards bridging this methodological gap between convex optimisation and stochastic simulation. Unlike conventional high dimensional MCMC algorithms that use gradient mappings and require Lipschitz differentiability, proximal MCMC algorithms draw their efficiency from convex analysis, namely proximal mappings and Moreau-Yosida envelopes. This allows MCMC-based Bayesian computation for precisely the type of models that are solved by convex optimisation (i.e., high dimensional models that are log-concave but not smooth), which in turn enables advanced Bayesian analyses for these models (e.g., see \cite{Pereyra2016b,Atchade2016} for applications of proximal MCMC to Bayesian uncertainty quantification and sparse regression). However, the proximal MCMC algorithms presented in \cite{Pereyra2015} have three shortcomings that limit their impact in imaging sciences, and which this paper seeks to address. First, the conditions that guarantee the convergence of the algorithms are difficult to check in practice. Second, the algorithms assume that it is possible to compute the proximal mapping of the log-posterior distribution; in practice however this mapping is often approximated by using a forward-backward splitting scheme. Third, the algorithms rely on a Metropolis-Hastings correction step to remove the asymptotic bias introduced by the approximations and to guarantee that the Markov chains target the desired posterior distribution. Unfortunately, this correction step can degrade significantly the efficiency of the algorithms (i.e., the asymptotic bias is removed at the expense of a potentially significant increase in estimation variance and some additional bias from the Markov chain's transient or burn-in regime).
This paper presents a new and significantly better proximal MCMC methodology that addresses all the issues of the original proximal algorithms discussed above. The new methodology is highly computationally efficient and general, in that it can be applied straightforwardly to most models currently addressed by convex optimisation (in particular, to any model that can be solved by forward-backward splitting). Moreover, we provide simple theoretical conditions that guarantee the convergence of the Markov chains, as well as bounds on their convergence rates.
The remainder of the paper is organised as follows: Section \ref{sec:bac} defines notation, introduces the class of models considered, and recalls the Langevin MCMC approach that is the basis of our method. In Section \ref{sec:more-yosida-regul} we present the proposed MCMC method, analyse its theoretical properties in detail, provide practical implementation guidelines, and discuss connections with the original proximal MCMC algorithms described in \cite{Pereyra2015}. Section \ref{sec:experiments} illustrates the
methodology on four experiments related to image deconvolution and tomographic reconstruction with total-variation and $\ell_1$ sparse priors, where we conduct a range of challenging Bayesian analyses related to model comparison and uncertainty quantification. Conclusions and perspectives for future work are reported in Section \ref{sec:conclusion}. Proofs are finally reported in Appendices \ref{sec:proof-crefpr-meas} and \ref{HME}.
\section{Acknowledgements}
Marcelo Pereyra holds a Marie Curie Intra-European Research Fellowship for Career Development at the School of Mathematics of the University of Bristol,
and is a Visiting Scholar at the School of Mathematical and Computer Sciences of Heriot-Watt University.
\bibliographystyle{plain}
\section{Proofs of \Cref{sec:more-yosida-regul}}
\section{Proof of \Cref{propo:finite-measure-MY}}
\label{sec:proof-crefpr-meas}
We preface the proof with a preliminary lemma.
\begin{lemma}
\label{lem:control-fun-convex-gene}
Let $\mathrm{g}: \mathbb{R}^d \to \ocint{-\infty,+\infty}$ be a lower bounded, l.s.c~convex function satisfying $ 0 < \int_{\mathbb{R}^d} \mathrm{e}^{-\mathrm{g}(y)}\mathrm{d} y < +\infty $.
Then there exists $x_{\gconv} \in \mathbb{R}^d$, $R_{\gconv}, \rho_{\gconv} >0$ such that for all $x \in \mathbb{R}^d$, $x \not \in \ball{x_{\gconv}}{R_{\gconv}}$, $\mathrm{g}(x)-\mathrm{g}(x_{\gconv}) \geq \rho_{\gconv}\norm{x-x_{\gconv}}$.
\end{lemma}
\begin{proof}
The proof is a simple extension of that of \cite[Theorem 2.2.2]{bakry:barthe:cattiaux:guillin:2008}, where $\mathrm{g}$ is assumed to be continuously differentiable.
We first show that $\mathrm{g}$ is finite on a non-empty open set of
$\mathbb{R}^d$. Note that since $\int_{\mathbb{R}^d}
\mathrm{e}^{-\mathrm{g}(y)}\mathrm{d} y> 0$, the set $\{ \mathrm{g} < \infty \}$ cannot be contained in a
$k$-dimensional hyperplane for any $k \in \{0,\cdots,d-1 \}$. Then,
there exist $d+1$ points $\{\mathrm{v}_i\}_{0 \leq i \leq d} \subset \{ \mathrm{g} < \infty
\}$ such that the vectors $\{\mathrm{v}_i-\mathrm{v}_0\}_{1 \leq i \leq d}$ are linearly
independent. Denote by $\operatorname{co}(\mathrm{v}_0,\cdots,\mathrm{v}_d)$ the convex hull
of $\{\mathrm{v}_i\}_{0\leq i \leq d} $ defined by
$$
\operatorname{co}(\mathrm{v}_0,\cdots,\mathrm{v}_d)= \left\{ \sum_{i=0}^d \alpha_i \mathrm{v}_i \ | \
\sum_{i=0}^d \alpha_i =1 \ , \forall i \in \{0,\cdots,d \} \ , \
\alpha_i \geq 0 \right\} \;.
$$
Since $\mathrm{g}$ is convex and $\operatorname{co}(\mathrm{v}_0,\cdots,\mathrm{v}_d) \subset \{ \mathrm{g} < \infty \}$, we have
\begin{equation}
\label{eq:max_conv_hull}
\sup_{y \in \operatorname{co}(\mathrm{v}_0,\cdots,\mathrm{v}_d)} \abs{\mathrm{g}(y)} \leq M_{\operatorname{co}} = \max_{i \in \{0,\cdots, d\}} \{ \abs{\mathrm{g}(\mathrm{v}_i)} \} \;.
\end{equation}
It follows from $\{\mathrm{v}_i\}_{0 \leq i \leq d} \subset \{ \mathrm{g} < \infty \}$ and the fact that $\mathrm{g}$ is lower bounded that $M_{\operatorname{co}}$ is
finite. Finally, by \cite[Lemma 1.2.1]{florenzano:levan:2001}, $\operatorname{co}(\mathrm{v}_0,\cdots,\mathrm{v}_d)$ has non-empty interior.
Consider now the set $\{ \mathrm{g} \leq M_{\operatorname{co}} +1 \}$. We prove by contradiction that it is a bounded subset of $\mathbb{R}^d$. Assume that for all $R \geq 0$, there exists $x_R \in \{ \mathrm{g} \leq M_{\operatorname{co}} +1 \}$ with $x_R \not \in \ball{\mathrm{v}_0}{R}$. Then, since $\{ \mathrm{g} \leq M_{\operatorname{co}} +1 \}$ is convex, it contains the convex hull of $\{\mathrm{v}_0,\cdots,\mathrm{v}_d,x_{R}\}$. Since $\operatorname{co}(\mathrm{v}_0,\cdots,\mathrm{v}_d)$ has non-empty interior, the volume of $\operatorname{co}(\mathrm{v}_0,\cdots,\mathrm{v}_d,x_{R})$ grows at least linearly in $R$, and therefore the volume of $\{ \mathrm{g} \leq M_{\operatorname{co}} +1 \}$ is infinite in the limit as $R \to \infty$. On the other hand, by assumption and since $\{\mathrm{v}_0,\cdots,\mathrm{v}_d,x_R\} \subset \{ \mathrm{g} \leq M_{\operatorname{co}} +1 \}$, we have, using the Markov inequality,
\begin{equation*}
\operatorname{Vol}\parenthese{ \{ \mathrm{g} \leq M_{\operatorname{co}} +1 \} } \leq \mathrm{e}^{M_{\operatorname{co}}+1} \int_{ \{ \mathrm{g} \leq M_{\operatorname{co}} +1 \}} \mathrm{e}^{-\mathrm{g}(y)} \mathrm{d} y < +\infty \;,
\end{equation*}
which leads to a contradiction. Then there exists $R_{\gconv} \geq 0$, such that $\{\mathrm{g} \leq M_{\operatorname{co}}+1\} \subset \ball{\mathrm{v}_0}{R_{\gconv}}$. \\
For all $x \not \in \ball{\mathrm{v}_0}{R_{\gconv}}$, consider $y =R_{\gconv}
(x-\mathrm{v}_0)\norm[-1]{x-\mathrm{v}_0} + \mathrm{v}_0$. Note that $y \not \in \{ \mathrm{g}
\leq M_{\operatorname{co}} + 1 \}$, so $\mathrm{g}(y) \geq M_{\operatorname{co}}+1$. Now using
the convexity of $\mathrm{g}$, we have for all $x \not \in
\ball{\mathrm{v}_0}{R_{\gconv}}$,
\begin{equation*}
M_{\operatorname{co}}+1 \leq \mathrm{g}(y) \leq R_{\gconv}\norm[-1]{x-\mathrm{v}_0} (\mathrm{g}(x)-\mathrm{g}(\mathrm{v}_0)) +\mathrm{g}(\mathrm{v}_0) \;.
\end{equation*}
Since $\mathrm{g}(\mathrm{v}_0) \leq M_{\operatorname{co}}$, we get
\begin{equation*}
(\mathrm{g}(x)-\mathrm{g}(\mathrm{v}_0)) \geq R_{\gconv}^{-1}\norm{x-\mathrm{v}_0}
\end{equation*}
and the proof is concluded by setting $x_{\gconv} = \mathrm{v}_0$ and $\rho_{\gconv} = R_{\gconv}^{-1}$.
\end{proof}
\begin{proof}[Proof of \Cref{propo:finite-measure-MY}]
\begin{enumerate}[label=\alph*), wide=0pt, labelindent=\parindent]
\item
We first assume that \Cref{assum:integrabilite}-\ref{assum:integrable_g} holds.
By \eqref{eq:id-MY-env}, $U \geq U^{\lambdaMY}$ and therefore $0 <
\int_{\mathbb{R}^d} \mathrm{e}^{-U(y)} \mathrm{d} y \leq \int_{\mathbb{R}^d} \mathrm{e}^{-U^{\lambdaMY}(y)} \mathrm{d} y$. We now prove that $\mathrm{e}^{-\gU^{\lambda}}$
is integrable with respect to the Lebesgue measure, which implies
$y \mapsto \mathrm{e}^{-U^{\lambdaMY}(y)}$ is integrable as well since $f$ is assumed to be lower bounded. By \Cref{assum:form-potential} and
\Cref{lem:control-fun-convex-gene}, there exist $ \rho_{\gU} >0$, $x_{\gU} \in \mathbb{R}^d$ and $M_1 \in \mathbb{R}$ such that for all $x \in \mathbb{R}^d$, $g(x)-g(x_{\gU}) \geq M_1 +
\rho_{\gU}\norm{x-x_{\gU}} $. Thus, for all $x \in \mathbb{R}^d$, we have by \eqref{eq:id-MY-env}
\begin{align}
\nonumber
\gU^{\lambda}(x) -g(x_{\gU})& \geq M_1 + \rho_{\gU}\norm{\prox_{\gU}^{\lambdaMY}(x)-x_{\gU}} +(2 \lambda)^{-1}\norm[2]{x-\prox_{\gU}^{\lambdaMY}(x)} \\
\label{eq:second bound-finite-measure-MY}
&\geq M_1 + \inf_{y \in \mathbb{R}^d} \{ \rho_{\gU}\norm{y-x_{\gU}} +(2 \lambda)^{-1}\norm[2]{x-y} \}
\geq M_1 +\mathrm{h}^{\lambda}(x) \;,
\end{align}
where $\mathrm{h}^{\lambda}(x)$ is the $\lambda$-Moreau-Yosida
envelope of $\mathrm{h}(x) = \rho_{\gU} \norm{x-x_{\gU}}$. By
\cite[Section 6.5.1]{Parikh2013}, the proximal operator associated
with the norm is the block soft thresholding given for all $\lambda
>0$ and $x \in \mathbb{R}^d \setminus \{ 0 \}$ by $\prox_{\gconvD}^{\lambdaMY}(x) =
\max(0,1-\lambda/\norm{x}) x $ and $\prox_{\gconvD}^{\lambdaMY}(0) =0$. Therefore using
again \eqref{eq:id-MY-env}, it follows that there exists $M_2 \in \mathbb{R}$ such
that for all $x \in \mathbb{R}^d$,
\begin{equation*}
\mathrm{h}^{\lambda}(x) \geq \rho_{\gU} \norm{x-x_{\gU}} + M_2 \;.
\end{equation*}
Combining this inequality with \eqref{eq:second bound-finite-measure-MY} concludes the proof.
We now assume that \Cref{assum:integrabilite}-\ref{assum:lipschitz_g}
holds. First, we show that for all $\lambda >0$
\begin{equation}
\label{eq:unif_prox}
\sup_{x \in \mathbb{R}^d} \{g(x) - \gU^{\lambda}(x) \}
\leq \lambda \norm[2][\operatorname{Lip}]{g}/2 \;,
\end{equation}
which will conclude the
proof since $\int_{\mathbb{R}^d} \mathrm{e}^{-U(x)} \mathrm{d} x < +\infty$. Using that $g$ is Lipschitz, we have by \eqref{eq:id-MY-env},
for all $x \in \mathbb{R}^d$
\begin{align*}
g(x) - \gU^{\lambda}(x)&= g(x) - \inf_{y \in \mathbb{R}^d} \defEns{g(y) + (2 \lambda)^{-1}\norm[2]{x-y}} = \sup_{y \in \mathbb{R}^d} \defEns{ g(x) - g(y) - (2 \lambda)^{-1}\norm[2]{x-y}} \\
&\leq \sup_{y \in \mathbb{R}^d} \defEns{ \norm[][\operatorname{Lip}]{g} \norm{x-y} - (2 \lambda)^{-1}\norm[2]{x-y}} \leq \lambda \norm[2][\operatorname{Lip}]{g}/2 \;,
\end{align*}
where we have used that the maximum of $u \mapsto au - b u^2$, for $a,b \geq 0$, is given by $a^2/(4b)$.
\item This point is a straightforward consequence of \eqref{eq:definition-grad-prox} and \eqref{eq:lip_moreau_yosida}.
\item
Since $\pi$ also has a density with
respect to the Lebesgue measure and $ U^{\lambdaMY}(x) \leq U(x)$ for all $x \in \mathbb{R}^d$, we have for all $\lambda >0$
\begin{equation}
\label{eq:bound_2_TV_MY}
\tvnorm{\pi^{\lambdaMY}-\pi} =
\int_{\mathbb{R}^d} \abs{\pi^{\lambdaMY}(x)-\pi(x)} \mathrm{d} x
\leq 2 A_{\lambda} \;,
\end{equation}
where $A_{\lambda} = \int_{\mathbb{R}^d} \{1-\mathrm{e}^{\gU^{\lambda}(x)-g(x)} \} \pi^{\lambdaMY}(x)\mathrm{d} x = 1- \defEns{\int_{\mathbb{R}^d}\mathrm{e}^{-U^{\lambdaMY}(x)} \mathrm{d} x}^{-1}\int_{\mathbb{R}^d} \mathrm{e}^{-U(x)} \mathrm{d} x$.
By \eqref{eq:limit-d-lambda}, for all $x \in \mathbb{R}^d$, we get $\lim_{\lambda \downarrow 0} \uparrow U^{\lambdaMY}(x)
= U(x)$. We conclude by applying the monotone convergence theorem.
\item
Using that for all $x \in \mathbb{R}^d$, $\gU^{\lambda}(x) \leq g(x)$ and $1-\mathrm{e}^{-u} \leq u$ for all $u \geq 0$, \eqref{eq:bound_2_TV_MY} shows that
\begin{equation*}
\tvnorm{\pi^{\lambdaMY}-\pi} \leq 2\int_{\mathbb{R}^d}\{ g(x)-\gU^{\lambda}(x) \} \pi^{\lambdaMY}(x) \mathrm{d} x \;.
\end{equation*}
Then the proof follows from \eqref{eq:unif_prox}.
\end{enumerate}
\end{proof}
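As a numerical sanity check of the uniform bound \eqref{eq:unif_prox}: in one dimension, for the $1$-Lipschitz function $\mathrm{g}(u) = |u|$, the Moreau-Yosida envelope is the Huber function, and the gap $\mathrm{g} - \gU^{\lambda}$ attains exactly $\lambda/2 = \lambda \cdot \operatorname{Lip}(\mathrm{g})^2/2$. A sketch (the grid and the value of $\lambda$ are arbitrary):

```python
import numpy as np

def moreau_env_abs(x, lam):
    """Moreau-Yosida envelope of g(u) = |u|: the Huber function."""
    return np.where(np.abs(x) <= lam, x**2 / (2 * lam), np.abs(x) - lam / 2)

lam = 0.3
x = np.linspace(-5, 5, 100_001)
gap = np.abs(x) - moreau_env_abs(x, lam)  # g - g^lambda, nonnegative everywhere
sup_gap = gap.max()                        # never exceeds lam * Lip(g)^2 / 2 = lam/2
```

The gap equals $\lambda/2$ on $\{|x| \geq \lambda\}$ and decreases to $0$ at the origin, consistent with the bound being tight.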
\section{\textbf{Introduction}}
\label{sec:intro}
As a representative technique for immersive multimedia data, Light Field (LF) images have attracted widespread attention~\cite{wu2017light}. Unlike traditional 2D images, LF images can record radiance information in both spatial and angular dimensions~\cite{levoy2009recording}, leading to a better immersive experience. In order to provide a satisfactory viewing quality of experience (QoE), light field image quality assessment (LF-IQA) plays a crucial role in the acquisition, processing and application of LF contents.
The LF image is a 4-D signal containing spatial and angular information. In Fig.~\ref{fig:intro}, we show the LF image in different formats. Fig.~\ref{fig:intro}(a) shows an LF image captured by a lenslet LF camera called Lytro Illum~\cite{ng2005light}.
The parameters $u$ and $v$ refer to angular dimensions, while $s$ and $t$ represent spatial dimensions. We can obtain a Sub-Aperture Image (SAI) by fixing $u$ and $v$~\cite{van2018light}, while a Micro-Lens Image (MLI) is obtained by fixing $s$ and $t$~\cite{cho2013modeling}, as shown in Fig.~\ref{fig:intro}(b-c). The bottom and right of Fig.~\ref{fig:intro}(b) are Epipolar-Plane Images (EPIs)~\cite{wu2017light}, which are produced by fixing ($u$,$s$) and ($v$,$t$), respectively. The SAI only contains spatial information of the LF image, while the EPI includes both spatial and angular dimensions. However, each EPI only covers one angular direction, horizontal or vertical. Unlike the SAI and EPI, the MLI includes 2-D angular information. On account of this 4-D characteristic, the perceptual quality of an LF image mainly depends on spatio-angular resolution, angular consistency and spatial quality~\cite{wu2017light}. Concretely, spatio-angular resolution refers to the LF image resolution (i.e., the values of $u$, $v$, $s$ and $t$). Angular consistency measures the visual coherence of LF images, while spatial quality indicates the SAI quality. Since spatio-angular resolution is an inherent factor of the LF image, we consider the effects of angular consistency and spatial quality in this paper.
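The three 2-D views described above are plain slices of the 4-D array $L(u,v,s,t)$, which can be sketched as follows (the array sizes and fixed indices below are illustrative):

```python
import numpy as np

# Hypothetical 4-D light field L[u, v, s, t]: 9x9 angular views of 64x48 pixels.
U, V, S, T = 9, 9, 64, 48
L = np.random.default_rng(3).random((U, V, S, T))

sai = L[4, 4, :, :]      # Sub-Aperture Image: fix (u, v) -> spatial slice (S x T)
mli = L[:, :, 20, 10]    # Micro-Lens Image:   fix (s, t) -> angular slice (U x V)
epi_h = L[4, :, 20, :]   # horizontal EPI:     fix (u, s) -> slice over (v, t)
epi_v = L[:, 4, :, 10]   # vertical EPI:       fix (v, t) -> slice over (u, s)
```

Each slice therefore exposes a different mixture of the spatial and angular dimensions, which is exactly why SAI-, EPI- and MLI-based quality measures capture complementary aspects of degradation.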
\begin{figure}
\centering
\includegraphics[width=\linewidth]{intro.png}
\caption{LF image with different formats. (a) Lenset image captured by Lytro Illum; (b) SAI array with EPIs in the bottom and right; (c) 9 MLIs in the red bounding box of (a) at high magnification.}
\label{fig:intro}
\end{figure}
Although the subjective evaluation of LF-IQA~\cite{viola2016objective, kiran2017towards, shi2018perceptual} is precise and reliable, it is resource- and time-consuming. Therefore, an effective objective LF-IQA model is urgently required. In general, image quality assessment (IQA) methods can be classified into three categories: full-reference (FR), reduced-reference (RR) and no-reference (NR)~\cite{zhou2016binocular}. FR methods require complete information of the original images. The structural similarity index (SSIM)~\cite{SSIM} measures structure similarity between original and distorted images and has several variants, e.g. MS-SSIM~\cite{MSSIM} and FSIM~\cite{FSIM}. MP-PSNR Full~\cite{MP_F} and MP-PSNR Reduc~\cite{MP_R}, based on morphological pyramid decomposition, were proposed to evaluate multi-view image quality.
RR methods only require part of the information from original images. NR methods only utilize distorted images, and thus can be applied in most applications, where reference images are hardly available; e.g. Mittal \emph{et al.}~\cite{BRI} use scene statistics in the spatial domain, and binocular fusion and rivalry are considered in BSVQE~\cite{BSVQE}.
There exist only a few objective LF-IQA models. Fang \emph{et al.}~\cite{fang2018light} propose a FR LF-IQA method that computes the gradient magnitude similarity between original and distorted EPIs. Paudyal \emph{et al.}~\cite{LFIQM} predict LF image quality from the structural similarity between the original and distorted depth maps. However, neither method considers the spatial quality degradation on the SAI. In addition, the EPIs only contain the horizontal or vertical angular dimension, leading to insufficient measurement of angular consistency for LF image applications. Therefore, a LF-IQA method that considers spatial quality and 2-D angular consistency is necessary for practical application.
In this paper, we propose a novel NR Light Field image Quality assessment model based on the Micro-Lens Image (LF-QMLI) to evaluate both angular consistency and spatial quality of LF images. As shown in Fig.~\ref{fig:intro}(c), each pixel in the MLI comes from the same point in the spatial domain, but is captured from various directions. Hence, there exists a strong dependence between MLI pixels that reflects 2-D angular consistency. To the best of our knowledge, we are the first to utilize the MLI to evaluate the angular consistency of LF images.
In this work, we first obtain the MLI by fixing $s$ and $t$, while the SAI is generated by fixing $u$ and $v$. Second, the Global Entropy Distribution (GED) and Uniform Local Binary Pattern descriptor (ULBP) are proposed to measure the angular consistency on each MLI, followed by content pooling. Third, the information entropy of the SAI is utilized to evaluate spatial quality. Finally, we train a regression model that predicts the perceptual quality of distorted LF images. Our experimental results demonstrate that the proposed LF-QMLI model achieves state-of-the-art performance.
The rest of the paper is organized as follows: Section~\ref{method} introduces our LF-QMLI method. Experimental results are shown in Section~\ref{experiment} and we conclude the paper in Section~\ref{conclusion}.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{flowdiagram.png}
\caption{Flow diagram of proposed LF-QMLI model. (a) LF image (lenslet format); (b) MLI Array; (c) SAI Array.}
\label{fig:flowdiagram}
\end{figure}
\section{\textbf{Proposed Method}}
\label{method}
The flow diagram of our proposed LF-QMLI model is shown in Fig.~\ref{fig:flowdiagram}. We first convert LF images from the lenslet format into MLI and SAI arrays. Since entropy can measure the angular dependence between adjacent pixels~\cite{liu2014no}, the GED is adopted on the MLI to measure angular consistency. Given the textural variation shown in Fig.~\ref{fig:MLIS}, the ULBP is selected to measure the textural features of original and distorted MLIs. In addition, the information entropy of the SAI is utilized to measure spatial quality~\cite{hu2008constructing}.
After content pooling, a regression model is used to predict the perceptual quality of LF images.
\subsection{\textbf{Angular Consistency Based on MLI}}
The distortion caused by angular inconsistency affects LF image quality. An LF camera captures the same object in the spatial domain from various angles of view, and the MLI is composed of light rays from both horizontal and vertical directions, leading to a 2-D angular domain.
\subsubsection{\textbf{Global Entropy Distribution (GED)}}
\label{sec:GED}
In previous works, information entropy has proved to be an efficient measure of spatial image quality~\cite{sponring1996entropy}. However, without considering the angular dimension, it cannot work well on LF images.
The entropy of undistorted images possesses certain statistical properties, owing to the dependence between adjacent pixels~\cite{liu2014no}. As shown in Fig.~\ref{subfig:Ori}, since the variation between pixels is piecewise linear~\cite{heber2013variational}, the MLI without angular distortion is regular and gradually varied. With increased distortion, the dependence between adjacent pixels is destroyed, leading to changes in the global entropy. In Fig.~\ref{fig:MLIS}(b)-(e), the distortion caused by angular inconsistency primarily affects the MLI entropy.
\begin{figure}
\centering
\subfigure[]{
\begin{minipage}[t]{0.17\linewidth}
\centering
\includegraphics[width=\linewidth]{Ori.png}
\label{subfig:Ori}
\end{minipage}}
\subfigure[]{
\begin{minipage}[t]{0.17\linewidth}
\centering
\includegraphics[width=\linewidth]{LN20.png}
\label{subfig:LN20}
\end{minipage}}
\subfigure[]{
\begin{minipage}[t]{0.17\linewidth}
\centering
\includegraphics[width=\linewidth]{LN50.png}
\label{subfig:LN50}
\end{minipage}}
\subfigure[]{
\begin{minipage}[t]{0.17\linewidth}
\centering
\includegraphics[width=\linewidth]{NN20.png}
\label{subfig:NN20}
\end{minipage}}
\subfigure[]{
\begin{minipage}[t]{0.17\linewidth}
\centering
\includegraphics[width=\linewidth]{NN50.png}
\label{subfig:NN50}
\end{minipage}}
\caption{MLIs with various types of distortion in different quality levels. Note that distortions in higher level represent worse visual quality. (a) Original Image; (b) LN lv.2 Distorted Image; (c) LN lv.5 Distorted Image; (d) NN lv.2 Distorted Image; (e) NN lv.5 Distorted Image.}
\label{fig:MLIS}
\end{figure}
Our proposed GED includes global image entropy distribution and global frequency entropy distribution of MLI.
The image entropy is
\begin{equation}
E_{I}=-\sum_{x}P_{x}\log_{2}P_{x},
\end{equation}
where $x$ is a pixel value within an MLI, ranging from 0 to 255, with empirical probability $P_{x}$.
The frequency entropy is
\begin{equation}
E_{F}=-\sum_{i}\sum_{j}P_{i,j}\log_{2}P_{i,j},
\end{equation}
where $P_{i,j}$ is the value of the probability map at location $(i,j)$ in the DCT domain of the MLI.
Finally, the MLI global entropy $E_{MLI}$ comprises the image entropy and the frequency entropy:
\begin{equation}
E_{MLI}=\{E_{I},E_{F}\}.
\end{equation}
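A minimal numpy/scipy sketch of the two entropies follows; since the paper does not spell out how the DCT probability map is normalized, normalizing the absolute DCT coefficients to sum to one is an assumption of this sketch.

```python
import numpy as np
from scipy.fft import dctn

def image_entropy(mli):
    """E_I: Shannon entropy of the 8-bit gray-level histogram."""
    hist = np.bincount(mli.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def frequency_entropy(mli):
    """E_F: entropy of a probability map over the 2-D DCT of the MLI.
    Normalizing |DCT| coefficients to sum to 1 is an assumed choice."""
    c = np.abs(dctn(mli.astype(float), norm='ortho'))
    p = c / c.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

mli = np.tile(np.arange(8, dtype=np.uint8) * 30, (8, 1))  # toy 8x8 MLI
e_mli = (image_entropy(mli), frequency_entropy(mli))      # E_MLI pair
```

For the toy MLI with eight equally frequent gray levels, the image entropy is exactly $\log_2 8 = 3$ bits.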
\begin{table}[htp]
\centering
\caption{E$_{MLI}$ on various angular distortion}
\label{tab:EMLI}
\renewcommand\arraystretch{1.35}
\begin{tabular}{c|c|c|c|c|c}
\hline
\bf E$_{MLI}$ & \bf Ori. Img & \bf LN lv.2 & \bf LN lv.5 & \bf NN lv.2 & \bf NN lv.5 \\
\hline
\bf IE & 5.5772 & 5.4971 & 5.3812 & 5.1807 & 4.3343 \\
\hline
\bf FE & 2.0344 & 2.0543 & 3.0031 & 2.2356 & 3.3460 \\
\hline
\end{tabular}
\end{table}
We conducted validation experiments on various levels of distortion caused by angular inconsistency. We considered two kinds of angular distortion, linear interpolation (LN) and nearest neighbor interpolation (NN), and two levels of distortion, Level 2 and Level 5~\cite{shi2018perceptual}; the higher level represents stronger distortion. The image entropy (IE) and frequency entropy (FE) of the five MLIs in Fig.~\ref{fig:MLIS} were computed to demonstrate the validity of $E_{MLI}$. Table~\ref{tab:EMLI} shows that angular distortion affects $E_{MLI}$ in a conspicuous and predictable way. The distortion causes a loss of image details, leading to a reduction of the image entropy. Generally, angular distortion of a higher level yields smaller IE and greater FE. At the same distortion level, NN destroys angular consistency more severely than LN, which agrees with the subjective experiments in~\cite{shi2018perceptual}.
Since an LF image contains a large number of MLIs, the GED takes all MLIs of the LF image into account. The content pooling method is described in Section~\ref{sec:pooling}.
\subsubsection{\textbf{Uniform Local Binary Pattern (ULBP)}}
Although image entropy and frequency entropy can measure the distortion caused by angular inconsistency, $E_{MLI}$ is a global characteristic of the whole MLI. As shown in Fig.~\ref{fig:MLIS}, increased angular distortion changes the local texture of the MLI. Thus, we utilize the ULBP to measure local textural features in the MLI.
The local binary pattern (LBP) has been proven an efficient operator to extract local distribution information~\cite{ojala1994performance, ojala1996comparative}. LBP is a simple yet efficient texture descriptor with rotation invariance, position invariance and robustness under various illuminations. Since LBP can efficiently represent local distributions of adjacent pixels, we adopt a modified ULBP descriptor to describe the inconsistency of local adjacent pixels.
The LBP operator can be given as~\cite{ojala2002multiresolution}
\begin{equation}
LBP_{P,R}=\sum_{p=0}^{P-1}s(g_{p}-g_{c})2^{p}.
\end{equation}
Here we use a neighborhood of $P=4$ members on a circle of radius $R=1$. $g_{c}$ is the gray value of the central pixel, while $g_{p}$ is the gray value of a neighbor pixel. $s$ is the thresholding function, with $s(x)=1$ for $x\geq 0$ and $s(x)=0$ otherwise.
To reduce the number of pattern types, we apply the modified uniform LBP operator on each MLI$_{i}$. There exist $P+2$ uniform pattern classes when rotated versions of a pattern are identified. Then we collect the Probability of every Type of binary pattern ($PT$) into a vector $\bm{PT}_{i}$.
\begin{equation}
\bm{PT}_{i}=\{\underbrace{PT_{1},PT_{2},\cdots,PT_{n}}_{P+2}\}.
\end{equation}
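A sketch of the $LBP_{4,1}$ operator and the $P+2=6$ pattern histogram; mapping all non-uniform codes to a single extra bin follows the standard rotation-invariant uniform (riu2) convention, which we assume is what the modified ULBP uses.

```python
import numpy as np

P, R = 4, 1  # 4 neighbors on a circle of radius 1

def lbp_code(patch):
    """LBP_{4,1} code of the center pixel of a 3x3 patch."""
    gc = patch[1, 1]
    # 4-neighborhood at radius 1: right, top, left, bottom.
    neighbors = [patch[1, 2], patch[0, 1], patch[1, 0], patch[2, 1]]
    return sum(int(gp >= gc) << p for p, gp in enumerate(neighbors))

def uniform_bin(code):
    """Map a code to one of P+2 = 6 bins: uniform codes (at most two
    circular 0/1 transitions) are binned by their number of ones
    (0..P); all other codes share the extra bin P+1."""
    bits = [(code >> p) & 1 for p in range(P)]
    transitions = sum(bits[p] != bits[(p + 1) % P] for p in range(P))
    return sum(bits) if transitions <= 2 else P + 1

def pt_vector(mli):
    """PT_i: empirical probability of each of the 6 pattern classes."""
    hist = np.zeros(P + 2)
    for r in range(1, mli.shape[0] - 1):
        for c in range(1, mli.shape[1] - 1):
            hist[uniform_bin(lbp_code(mli[r-1:r+2, c-1:c+2]))] += 1
    return hist / hist.sum()

pt = pt_vector(np.full((6, 6), 7.0))  # flat MLI: every pattern is all-ones
```

On a perfectly flat MLI every code is all-ones, so the whole mass falls into the four-ones bin.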
\subsubsection{\textbf{Content Pooling}}
\label{sec:pooling}
In order to evaluate LF image quality, we pool the characteristics $E_{MLI}$ and $\bm{PT}_{i}$ of each MLI into LF image angular features.
Percentile pooling~\cite{moorthy2009visual} is used on $E_{MLI}$: the central elements of $E_{MLI}$ are retained, while extremely large or small elements are discarded. Our experimental results verify that percentile pooling improves the proposed model.
We retain 60\% of the central elements of $E_{MLI}$ and show the global distribution histograms of IE and FE in Fig.~\ref{fig:GEDhistogram}. In accordance with our analysis of Table~\ref{tab:EMLI}, as angular distortion increases, the IE histogram shifts left while the FE histogram shifts right. The higher the distortion level, the larger the shift and the steeper the curve.
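Percentile pooling and the GED features can be sketched as follows; trimming 20\% from each tail to keep the central 60\% is our reading of "reserve 60\% of central elements", and the helper names are illustrative.

```python
import numpy as np

def percentile_pool(values, keep=0.6):
    """Keep only the central `keep` fraction of the sorted values."""
    v = np.sort(np.asarray(values, dtype=float))
    cut = int(len(v) * (1 - keep) / 2)
    return v[cut:len(v) - cut] if cut > 0 else v

def skew(v):
    """Sample skewness (third standardized moment)."""
    v = np.asarray(v, dtype=float)
    m, s = v.mean(), v.std()
    return 0.0 if s == 0 else np.mean(((v - m) / s) ** 3)

def ged_features(ie_values, fe_values):
    """f_GED = {mean(IE), skew(IE), mean(FE), skew(FE)} after pooling."""
    ie = percentile_pool(ie_values)
    fe = percentile_pool(fe_values)
    return [ie.mean(), skew(ie), fe.mean(), skew(fe)]
```

For ten sorted values, the pool keeps the central six, and a symmetric sample has zero skewness.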
\begin{figure}
\centering
\includegraphics[width=\linewidth]{GEDhistogram.png}
\caption{Histograms of GED for different types and levels of angular distortion. (a) IE histogram; (b) FE histogram.}
\label{fig:GEDhistogram}
\end{figure}
The mean and skewness values of the IE and FE distribution histograms are selected as the GED features:
\begin{equation}
f_{GED}=\{ mean(IE), skew(IE), mean(FE), skew(FE) \}.
\end{equation}
The angular inconsistency of distorted LF images appears clearly at edges between foreground and background, while it is inconspicuous in background or mild areas. Such mild MLIs contain little information, and their ULBP features might be misleading. Therefore, in order to exclude MLIs in mild areas, we introduce a selector on $\bm{PT}_{i}$. The ULBP features of a whole LF image are extracted as
\begin{equation}
f_{ULBP}=\mathrm{avg}\{\bm{PT}_{i} \mid R(i)>T\},
\end{equation}
where $R(i)=\max(MLI_{i})-\min(MLI_{i})$ is the range of a single MLI$_{i}$ and the threshold $T$ is set to a gray value of 20.
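The selector can be sketched as a range filter over the MLI stack; here the per-MLI feature vectors are passed in as precomputed inputs, and the toy data is purely illustrative.

```python
import numpy as np

THRESHOLD = 20  # gray-value range below which an MLI counts as "mild"

def ulbp_features(mlis, pt_vectors):
    """Average the PT vectors of MLIs whose intensity range exceeds
    the threshold; mild (low-range) MLIs are excluded."""
    keep = [pt for mli, pt in zip(mlis, pt_vectors)
            if mli.max() - mli.min() > THRESHOLD]
    return np.mean(keep, axis=0)

# Toy stack: one textured MLI (range 90) and one mild MLI (range 5).
mlis = [np.array([[0, 90], [45, 30]]), np.array([[100, 105], [102, 101]])]
pts = [np.array([0.2, 0.8]), np.array([0.9, 0.1])]
f_ulbp = ulbp_features(mlis, pts)  # only the textured MLI survives
```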
\subsection{\textbf{Spatial Quality}}
Spatial quality plays an important part in LF image perceptual quality. Specifically, we utilize the information entropy distribution of the SAI to measure changes in spatial quality. In an undistorted SAI, there exists spatial dependence between adjacent pixels~\cite{liu2014no}. With increased spatial distortion such as compression, this dependence is destroyed, resulting in changes of the information entropy.
We divide the SAI into $8 \times 8$ blocks, then compute the SAI Image Entropy (SIE) and SAI Frequency Entropy (SFE) of each block. Finally, we pool all blocks together as in Section~\ref{sec:pooling} and obtain the spatial quality features:
\begin{equation}
\begin{aligned}
f_{SQ}=\{ \ mean(SIE),skew(SIE), \\
mean(SFE),skew(SFE) \ \}.
\end{aligned}
\end{equation}
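The block partitioning for the SAI entropies can be sketched as below; whether "$8\times 8$ blocks" means blocks of $8\times 8$ pixels or an $8\times 8$ grid is not stated, and this sketch assumes the former. The entropy helper stands in for the SIE definition.

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy of an 8-bit block's gray-level histogram."""
    hist = np.bincount(block.ravel().astype(np.uint8), minlength=256)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def sai_block_entropies(sai, bs=8):
    """SIE values of all bs x bs blocks tiling the SAI."""
    h, w = sai.shape
    return np.array([block_entropy(sai[r:r+bs, c:c+bs])
                     for r in range(0, h - bs + 1, bs)
                     for c in range(0, w - bs + 1, bs)])

sie = sai_block_entropies(np.zeros((16, 16), dtype=np.uint8))  # 4 blocks
```

A constant SAI yields zero entropy in every block; real SAIs feed these per-block values into the same mean/skew pooling as the GED.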
\section{\textbf{Experimental Results}}
\label{experiment}
\subsection{\textbf{Light Field Image Databases}}
To test the performance of our proposed LF-QMLI model, comparison experiments were conducted on the Win5-LID~\cite{shi2018perceptual} and VALID~\cite{viola2018valid} databases. The Win5-LID database contains 220 distorted LF images with various distortion types and levels. The distortion types consist of JPEG2000, HEVC, linear interpolation (LN), nearest neighbor interpolation (NN) and two CNN models. An overall Mean Opinion Score (MOS) value is provided for each LF image.
The VALID database includes 5 reference LF images captured by a Lytro Illum, together with a number of distorted LF images produced by several compression methods. In this experiment, we utilize the distorted LF images obtained through the interactive methodology, including 40 LF images with two types of distortion: HEVC and VP9.
\begin{table}[htbp]
\centering
\caption{Performance Comparison}
\label{tab:performance}
\renewcommand\arraystretch{1.3}
\setlength{\tabcolsep}{0.5mm}{
\begin{tabular}{c|ccc|ccc}
\hline
& \multicolumn{3}{|c|}{\bf Win5-LID} & \multicolumn{3}{|c}{\bf VALID} \\
\hline
\bf Metrics & \bf SROCC & \bf LCC & \bf RMSE & \bf SROCC & \bf LCC & \bf RMSE \\
\hline
\bf PSNR & 0.6026 & 0.6189 & 0.8031 & 0.9620 & 0.9681 & 0.3352 \\
\bf SSIM~\cite{SSIM} & 0.7346 & 0.7596 & 0.6650 & 0.9576 & 0.9573 & 0.3868 \\
\bf MS-SSIM~\cite{MSSIM} & 0.8266 & 0.8388 & 0.5566 & 0.9593 & 0.9658 & 0.3473 \\
\bf FSIM~\cite{FSIM} & 0.8233 & 0.8318 & 0.5675 & 0.9695 & 0.9798 & 0.2678 \\
\bf IWSSIM~\cite{IW} & 0.8352 & 0.8485 & 0.5492 & 0.9674 & 0.9764 & 0.2892 \\
\bf IFC~\cite{IFC} & 0.5028 & 0.5393 & 0.8611 & 0.9693 & \bf 0.9909 & \bf 0.1800 \\
\bf VIF~\cite{VIF} & 0.6665 & 0.7032 & 0.7270 & \bf 0.9749 & 0.9870 & 0.2150 \\
\bf NQM~\cite{NQM} & 0.6508 & 0.6940 & 0.7362 & 0.9055 & 0.9194 & 0.5266 \\
\bf VSNR~\cite{VSNR} & 0.3961 & 0.5050 & 0.8826 & 0.9359 & 0.9324 & 0.4838 \\
\hline
\bf BRISQUE~\cite{BRI} & 0.6687 & 0.7510 & 0.5619 & 0.9222 & 0.9849 & 0.2017 \\
\bf NIQE~\cite{NIQE} & 0.2086 & 0.2645 & 0.9861 & 0.8636 & 0.9524 & 0.4080 \\
\bf FRIQUEE~\cite{FRI} & 0.6328 & 0.7213 & 0.5767 & 0.9157 & 0.9836 & 0.2160 \\
\hline
\bf Chen~\cite{Chen} & 0.5269 & 0.6070 & 0.8126 & 0.9642 & 0.9738 & 0.3046 \\
\hline
\bf SINQ~\cite{SINQ} & 0.8029 & 0.8362 & 0.5124 & 0.9222 & 0.9849 & 0.2070 \\
\bf BSVQE~\cite{BSVQE} & 0.8179 & 0.8425 & 0.4801 & 0.9222 & 0.9814 & 0.2180 \\
\hline
\bf MP-PSNR Full~\cite{MP_F} & 0.5335 & 0.4766 & 0.8989 & 0.9730 & 0.9852 & 0.2291 \\
\bf MP-PSNR Reduc~\cite{MP_R} & 0.5374 & 0.4765 & 0.8989 & 0.9744 & 0.9859 & 0.2237 \\
\bf MW-PSNR Full~\cite{MW} & 0.5147 & 0.4758 & 0.8993 & 0.9597 & 0.9677 & 0.3376 \\
\bf MW-PSNR Reduc~\cite{MW} & 0.5326 & 0.4766 & 0.8989 & 0.9648 & 0.9751 & 0.2970 \\
\bf 3DSwIM~\cite{DSwIM} & 0.4320 & 0.5262 & 0.8695 & 0.7950 & 0.7876 & 0.8248 \\
\hline
\bf APT~\cite{APT} & 0.3058 & 0.4087 & 0.9332 & 0.4699 & 0.6452 & 1.0228 \\
\hline
\bf LF-IQM~\cite{LFIQM}& 0.4503 & 0.4763 & 0.8991 & 0.3934 & 0.5001 & 1.1593 \\
\hline
\bf LF-QMLI & \bf 0.8802 & \bf 0.9038 & \bf 0.4147 & 0.9286 & 0.9683 & 0.2791\\
\hline
\end{tabular}
}
\end{table}
\subsection{\textbf{Comparison with Previous Objective Metrics}}
We conducted comparison experiments between our proposed model and several FR, RR and NR metrics, including nine 2D-FR metrics~\cite{SSIM,MSSIM,FSIM,IW,IFC,VIF,NQM,VSNR}, three 2D-NR metrics~\cite{BRI,NIQE,FRI}, one 3D-FR metric~\cite{Chen}, two 3D-NR metrics~\cite{SINQ,BSVQE}, five multi-view FR metrics~\cite{MP_F,MP_R,MW,DSwIM}, one multi-view NR metric~\cite{APT} and one LF-RR metric~\cite{LFIQM}. Three evaluation criteria are selected to measure the correlation between the MOS and the predicted results: the Spearman Rank Order Correlation Coefficient (SROCC), the Linear Correlation Coefficient (LCC) and the Root Mean Squared Error (RMSE). The SROCC measures monotonicity, while the LCC evaluates the linear relationship between predicted score and MOS. The RMSE computes the deviation of the prediction. Better consistency with human perception is reflected by SROCC and LCC values close to 1 and RMSE values close to 0.
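The three criteria can be computed directly; a numpy-only sketch (Spearman via a rank transform, which assumes no tied scores):

```python
import numpy as np

def lcc(x, y):
    """Pearson linear correlation coefficient."""
    return np.corrcoef(x, y)[0, 1]

def srocc(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    This simple double-argsort rank transform assumes no ties."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return lcc(rank(x), rank(y))

def rmse(x, y):
    """Root mean squared prediction error."""
    return np.sqrt(np.mean((np.asarray(x) - np.asarray(y)) ** 2))

mos = np.array([1.0, 2.0, 3.0, 4.0])     # toy subjective scores
pred = np.array([1.1, 1.9, 3.2, 3.8])    # toy predicted scores
```

The toy prediction is monotone in the MOS, so its SROCC is exactly 1 even though its LCC and RMSE are not perfect.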
Then, we use SVR for regression~\cite{smola2004tutorial}. The LIBSVM package~\cite{chang2011libsvm} is utilized to implement the SVR with a radial basis function (RBF) kernel. We randomly select 80\% of the database as the training set, while the remaining 20\% constitutes the test set. The medians of the correlation coefficients across 1000 random trials are reported as the final results.
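The split-and-repeat protocol can be sketched as below; a linear least-squares regressor stands in for the paper's RBF-kernel SVR (LIBSVM) so the sketch stays dependency-free.

```python
import numpy as np

def median_srocc(features, mos, trials=1000, train_frac=0.8, seed=0):
    """Median SROCC over random 80/20 train/test splits; a linear
    least-squares fit replaces the paper's RBF-kernel SVR."""
    rng = np.random.default_rng(seed)
    n = len(mos)
    X = np.column_stack([features, np.ones(n)])  # add a bias column
    scores = []
    for _ in range(trials):
        idx = rng.permutation(n)
        cut = int(n * train_frac)
        tr, te = idx[:cut], idx[cut:]
        w, *_ = np.linalg.lstsq(X[tr], mos[tr], rcond=None)
        pred = X[te] @ w
        r = np.argsort(np.argsort(pred)).astype(float)   # rank transform
        m = np.argsort(np.argsort(mos[te])).astype(float)
        scores.append(np.corrcoef(r, m)[0, 1])
    return np.median(scores)

feat = np.arange(20.0)        # one illustrative scalar feature
mos = 2 * feat + 1            # perfectly monotone synthetic MOS
score = median_srocc(feat, mos, trials=25)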
The results of all metrics are shown in Table~\ref{tab:performance}. Almost all 2D-FR metrics perform well on the VALID database, and LF-QMLI is competitive among NR metrics. This may be explained by the limited distortion types of VALID: it only introduces two compression distortions, which degrade the spatial quality of LF images without involving angular consistency, so previous 2D-FR metrics can measure the distortion well. The results indicate that VALID is not challenging for quality assessment of LF images. Therefore, we mainly analyze how our proposed LF-QMLI model performs on the Win5-LID database.
On the Win5-LID database, LF-QMLI outperforms all previous metrics. In general, existing 2D and 3D metrics only consider the degradation of spatial quality, ignoring the degradation of angular consistency. Although multi-view metrics can measure angular distortion, they do not take compression distortion and similar spatial distortions into account. We can therefore conclude that the proposed LF-QMLI model evaluates both angular consistency and spatial quality.
\subsection{\textbf{Ablation Study}}
In order to verify the validity of our proposed MLI-based model, we conducted an ablation study on the Win5-LID database; the results are shown in Table~\ref{tab:ablation}. The features $f_{MLI}$ extracted from the MLI clearly improve the model performance. One possible reason is that $f_{MLI}$ provides a measurement of the 2-D angular consistency.
\begin{table}[htbp]
\centering
\caption{Ablation Study}
\label{tab:ablation}
\renewcommand\arraystretch{1.2}
\begin{tabular}{c|ccc}
\hline
& SROCC & LCC & RMSE \\
\hline
\bf Model-f$ \bf _{MLI}$ & 0.6927 & 0.7890 & 0.5504 \\
\bf Model & 0.8802 & 0.9038 & 0.4147 \\
\hline
\end{tabular}
\end{table}
\section{\textbf{Conclusion}}
\label{conclusion}
In this paper, we proposed a No-Reference Light Field image Quality assessment model based on the Micro-Lens Image (LF-QMLI). We analyzed the significance of the MLI in LF-IQA and extracted features for our evaluator. The model can effectively measure angular consistency and spatial quality. The results show that LF-QMLI achieves state-of-the-art performance. In the future, we will consider more advanced features on the MLI to improve our model.
\section*{Acknowledgment}
This work was supported in part by NSFC under Grants 61571413 and 61632001.
\footnotesize
\bibliographystyle{IEEEtran}
\section{Signed Hall trees}
Let $X$ be a non-empty set. A signed tree is defined inductively as follows:\\
1- Every element of $X$ is a signed tree.
2- If $t_1$ and $t_2$ are signed trees, then $(t_1, t_2)_-$ and $(t_1, t_2)_+$ are also signed trees.\\
So, every signed tree is in fact an element of $X$ or a triple consisting of two smaller signed trees and a $\pm$ sign. Elements of $X$ have length one, and if $t=(t_1, t_2)_{\pm}$, then $|t|=|t_1|+|t_2|$. We denote $t_1$, the immediate left part of $t$, by $t^{\pr}$, and $t_2$, the immediate right part of $t$, by $t^{\prr}$. Let $\M$ be the set of all signed trees. A Hall order is a linear ordering $\leq$ on $\M$ such that $t\leq t^{\prr}$ for all $t$. It is easy to see that there are many Hall orders on $\M$. For example, suppose $X=\{ x, y\}$, and consider the following ordering
\begin{align*}
&x > y > (x,x)_+ > (x,x)_- > (x,y)_+ > (x,y)_- > (y,x)_+ > (y,x)_- > (y,y)_+ > \\
&(y,y)_- > (x,(x,x)_+)_+ > (x,(x,x)_+)_- > (x,(x,x)_-)_+ > (x,(x,x)_-)_- >\\
&(x, (x,y)_+)_+ > (x,(x,y)_+)_- > (x,(x,y)_-)_+ > (x,(x,y)_-)_- >\cdots .
\end{align*}
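The inductive definition above can be modeled with nested tuples; the encoding $(t_1, t_2, \mathrm{sign})$ and the helper names below are our own illustrative choices.

```python
def is_letter(t):
    """Letters of X are modeled as strings; (t1, t2, sign) is a pair."""
    return isinstance(t, str)

def length(t):
    """|t| = 1 for a letter, |t1| + |t2| for (t1, t2)_+ or (t1, t2)_-."""
    return 1 if is_letter(t) else length(t[0]) + length(t[1])

def left(t):
    return t[0]   # t', the immediate left part

def right(t):
    return t[1]   # t'', the immediate right part

t = (('x', 'x', '+'), 'y', '-')   # the signed tree ((x, x)_+, y)_-
```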
From now on, we assume that $\leq$ is a fixed Hall order given over $\M$. A subset $H\subseteq \M$ is called a Hall set, if it satisfies the following requirements\\
1- Every element of $X$ belongs to $H$.
2- If $t\in \M\setminus X$, then $t\in H$ if and only if $t^{\pr}, t^{\prr}\in H$, $t^{\pr}\leq t^{\prr}$, and either $t^{\pr}\in X$ or $t^{\prr}\leq (t^{\pr})^{\prr}$.\\
It is easy to see that for any fixed Hall order, there is a unique Hall set in $\M$. Again, for example, if we consider the above ordering, then the following set is a Hall set
\begin{eqnarray*}
H&=&\{ x, y, (x,x)_+, (x,x)_-, (y,x)_+, (y,x)_-, (y,y)_+, (y,y)_-, ((x,x)_+,x)_+,\\
&\ &((x,x)_+,x)_-, ((x,x)_-,x)_+, ((x,x)_-,x)_-, ((x,x)_+,y)_+, \ldots\}.
\end{eqnarray*}
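The membership conditions can be made executable; the order `leq` below is one concrete Hall order of our own choosing (longer trees are smaller, ties broken by a serialized string), not the paper's example order, but it satisfies $t\leq t^{\prr}$ because $|t|>|t^{\prr}|$.

```python
def is_letter(t):
    return isinstance(t, str)

def length(t):
    return 1 if is_letter(t) else length(t[0]) + length(t[1])

def serial(t):
    """Canonical string form of a signed tree, used as a tiebreak."""
    if is_letter(t):
        return t
    return '(' + serial(t[0]) + ',' + serial(t[1]) + ')' + t[2]

def leq(a, b):
    """One concrete Hall order: longer trees are smaller."""
    la, lb = length(a), length(b)
    return la > lb or (la == lb and serial(a) <= serial(b))

def is_hall(t):
    """Membership in the unique Hall set determined by `leq`."""
    if is_letter(t):
        return True
    t1, t2 = t[0], t[1]
    if not (is_hall(t1) and is_hall(t2) and leq(t1, t2)):
        return False
    return is_letter(t1) or leq(t2, t1[1])
```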
Every element of $H$ will be called a signed Hall tree. A standard sequence is a tuple $s=(t_1, t_2, \ldots, t_n; j)$, where every $t_i$ is a signed Hall tree, $1\leq j\leq n$ is an integer (which is called the middle of the sequence), and for any $i$, the signed Hall tree $t_i$ belongs to $X$ or otherwise
$$
t_n, \ldots, t_{i+1}\leq t_i^{\prr}.
$$
There are many examples of standard sequences: for example, if every $t_i$ is an element of $X$, or if $t_1\geq t_2\geq \cdots\geq t_n$, then $s$ is obviously a standard sequence. A rise in $s$ is an index $i$ such that $t_i\leq t_{i+1}$. If moreover $t_{i+1}\geq t_{i+2}, \ldots, t_n$, then we say that $i$ is a legal rise.
\begin{definition}
Let $s=(t_1, t_2, \ldots, t_n; j)$ be a standard sequence with a legal rise $i$. The rewriting of $s$ in the place $i$ is defined as follows.\\
1- If $i<j$, then it is the sequence
$$
s^{\pr}=(t_1, \ldots, t_{i-1}, (t_i, t_{i+1})_+, \ldots, t_n; j-1).
$$
2- If $j\leq i$, then it is the sequence
$$
s^{\pr}=(t_1, \ldots, t_{i-1}, (t_i, t_{i+1})_-, \ldots, t_n; j).
$$
\end{definition}
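The rewriting operation can be sketched directly; reading the two cases as $i<j$ and $j\leq i$ keeps them disjoint, and the 1-based indexing follows the text.

```python
def rewrite(trees, j, i):
    """Rewrite the standard sequence (t_1, ..., t_n; j) at a legal
    rise i (1-based), merging t_i and t_{i+1} into one signed tree."""
    t = list(trees)
    if i < j:
        t[i-1:i+1] = [(t[i-1], t[i], '+')]   # middle shifts left by one
        return t, j - 1
    t[i-1:i+1] = [(t[i-1], t[i], '-')]       # middle index is unchanged
    return t, j
```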
It is easy to check that $s^{\pr}$ is again a standard sequence. Let $s_1$ and $s_2$ be two standard sequences. The notation $s_1\to s_2$ indicates that $s_2$ can be obtained from $s_1$ by a finite number of rewriting operations. The proof of the following proposition is completely similar to the case of classical standard sequences of Hall trees, so we omit it. The reader can consult \cite{Reu}, page 86.
\begin{proposition}
Suppose $s$, $s_1$, and $s_2$ are standard sequences. \\
1- If $s\to s_1$ and $s\to s_2$, then there exists a standard sequence $r$ such that $s_1\to r$ and $s_2\to r$.
2- There is a standard sequence $r$ consisting of elements of $X$, such that $r\to s$.
3- There exists a standard decreasing sequence $r$, such that $s\to r$.
\end{proposition}
\section{Free Leibniz algebra and Leibniz polynomials}
A di-semigroup is a non-empty set $M$ with two associative binary operations $\lp$ and $\rp$ satisfying the identities
\begin{eqnarray*}
(x\lp y)\lp z&=& x\lp(y\rp z)\\
(x\rp y)\lp z&=& x\rp(y\lp z)\\
(x\lp y)\rp z&=& x\rp(y\rp z).
\end{eqnarray*}
If $x_1, \ldots, x_n\in M$ are arbitrary elements, then applying any sequence of the operations $\lp$ and $\rp$, we obtain a di-semigroup word on these elements, for example
$$
y=((x_1\lp x_2)\rp (x_3\lp x_4))\lp (x_5\rp x_6)
$$
is a di-semigroup word on elements $x_1, \ldots, x_6$. Any such word can be represented by a rooted planar tree whose nodes are indexed by one of the symbols $\lp$ or $\rp$. In the case of the above word the corresponding tree is the following:\\
\begin{center}
\begin{tikzpicture}
[level distance=10mm,
every node/.style={fill=red!20,circle,inner sep=.5pt},
level 1/.style={sibling distance=20mm},
level 2/.style={sibling distance=10mm},
level 3/.style={sibling distance=5mm}]
\node {$\dashv$}[grow'=up]
child[solid,level distance=10mm] {node {$\vdash$}
child[solid] {node {$\dashv$}
child {node {$x_1$}}
child {node {$x_2$}}
}
child {node {$\dashv$}
child[solid] {node {$x_3$}}
child {node {$x_4$}}
}
}
child {node {$\vdash$}
child {node {$ x_5 $}
}
child {node {$x_6$}}
};
\end{tikzpicture}
\end{center}
If we move from the root toward the leaves, and at every node follow the direction indicated by the symbol $\lp$ or $\rp$, then we arrive at a leaf $x_j$, which is called the middle of $y$. In our example the middle is $x_3$:\\
\begin{center}
\begin{tikzpicture}
[level distance=10mm,
every node/.style={fill=red!20,circle,inner sep=.5pt},
level 1/.style={sibling distance=20mm},
level 2/.style={sibling distance=10mm},
level 3/.style={sibling distance=5mm}]
\node {$\dashv$}[grow'=up]
child[dashed,level distance=10mm] {node {$\vdash$}
child[solid] {node {$\dashv$}
child {node {$x_1$}}
child {node {$x_2$}}
}
child {node {$\dashv$}
child {node {$x_3$}}
child [solid]{node {$x_4$}}
}
}
child {node {$\vdash$}
child {node {$ x_5 $}
}
child {node {$x_6$}}
};
\end{tikzpicture}
\end{center}
It is not hard to see that the laws of di-semigroups imply
$$
y=x_1\rp x_2\rp x_3\lp x_4\lp x_5\lp x_6,
$$
and this expression does not depend on the parenthesization. So, any di-semigroup word can be represented in a normal form
$$
x_1\rp \cdots\rp x_j\lp \cdots\lp x_n.
$$
In the case of free di-semigroups, this normal form is also unique.
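The middle-finding rule and the normal form can be made executable; words are encoded as nested tuples $(w_1, w_2, \mathrm{op})$ with `'-'` for $\lp$ and `'+'` for $\rp$, an encoding of our own choosing.

```python
def leaves(w):
    """Leaf letters of a di-semigroup word, left to right."""
    return [w] if isinstance(w, str) else leaves(w[0]) + leaves(w[1])

def middle(w):
    """1-based index of the middle: from the root, follow the left
    branch at each dashv (-) node and the right branch at each
    vdash (+) node until a leaf is reached."""
    if isinstance(w, str):
        return 1
    if w[2] == '-':                              # middle lies in the left part
        return middle(w[0])
    return len(leaves(w[0])) + middle(w[1])      # middle lies in the right part

# y = ((x1 -| x2) |- (x3 -| x4)) -| (x5 |- x6), the running example
y = ((('x1', 'x2', '-'), ('x3', 'x4', '-'), '+'), ('x5', 'x6', '+'), '-')
```

As in the text, the middle of the example word is $x_3$, so its normal form is $x_1\rp x_2\rp x_3\lp x_4\lp x_5\lp x_6$.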
The free di-semigroup over a set $X$ can be constructed as follows: On the set $\M$ define two operations
$$
t_1\lp t_2=(t_1, t_2)_-, \ \ t_1\rp t_2=(t_1, t_2)_+.
$$
Let $R$ be the ideal generated by all laws defining a di-semigroup. Then
$$
\D=\frac{\M}{R}
$$
is the free di-semigroup on $X$. Every element of this free di-semigroup has a unique representation of the normal form
$$
x_1\rp \cdots\rp x_j\lp \cdots\lp x_n.
$$
We also call the elements of $\D$ monomials. Let $\mathbb{K}$ be a field and let $\A$ be the vector space with basis $\D$ over $\mathbb{K}$. If we extend the operations $\lp$ and $\rp$ bilinearly, $\A$ becomes the free di-algebra over $X$. This free di-algebra also carries the structure of a Leibniz algebra, since we can define the Leibniz bracket
$$
[P,Q]=P\lp Q-Q\rp P,
$$
for all $P, Q\in \A$. The free Leibniz algebra over $X$ is the smallest Leibniz subalgebra of $\A$ which includes $X$. We denote it by $\Leib$. The elements of $\Leib$ will be called Leibniz polynomials. For any signed tree $t$, we define a monomial $(t)\in \D$ by induction:\\
1- For any $x\in X$ we have $(x)=x$.
2- If $t=(t_1,t_2)_+$, then $(t)=(t_1)\rp (t_2)$.
3- If $t=(t_1,t_2)_-$, then $(t)=(t_1)\lp (t_2)$.
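A concrete, executable model of $\A$ in normal-form coordinates can sketch the bracket. The dict encoding and helper names are ours; the shift rule for $\rp$ follows from the normal form $x_1\rp\cdots\rp x_j\lp\cdots\lp x_n$, where $w\lp v$ keeps the middle of $w$ and $w\rp v$ keeps the middle of $v$ shifted past the letters of $w$.

```python
# Elements of A are dicts mapping normal-form monomials (letters, j)
# -- `letters` a tuple of symbols, `j` the 1-based middle index --
# to scalar coefficients.  This concrete model is our own illustration.

def diop(w, v, right):
    """w -| v (right=False) keeps w's middle; w |- v (right=True)
    keeps v's middle, shifted past the letters of w."""
    out = {}
    for (a, j), cw in w.items():
        for (b, k), cv in v.items():
            key = (a + b, len(a) + k) if right else (a + b, j)
            out[key] = out.get(key, 0) + cw * cv
    return {m: c for m, c in out.items() if c != 0}

def add(p, q):
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) + c
    return {m: c for m, c in out.items() if c != 0}

def bracket(p, q):
    """Leibniz bracket [P, Q] = P -| Q - Q |- P."""
    return add(diop(p, q, right=False),
               {m: -c for m, c in diop(q, p, right=True).items()})

x = {(('x',), 1): 1}
y = {(('y',), 1): 1}
b = bracket(x, y)       # x -| y  minus  y |- x
```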
\begin{proposition}
Every element of $\D$ can be uniquely represented as
$$
(t_1)\rp\cdots\rp (t_j)\lp\cdots \lp (t_n),
$$
for some decreasing sequence of signed Hall trees $t_1\geq t_2\geq \cdots\geq t_n$ and some integer $1\leq j\leq n$.
\end{proposition}
\begin{proof}
The general idea of this proof is the same as in \cite{Reu}, but it differs in some details. For any standard sequence $s=(t_1, \ldots, t_n;j)$, define
$$
(s)=(t_1)\rp\cdots\rp(t_j)\lp\cdots\lp (t_n).
$$
We show that if $s\to s^{\pr}$, then $(s)=(s^{\pr})$. There are two cases: if
$$
s^{\pr}=(t_1, \ldots, (t_i, t_{i+1})_+, \ldots, t_n;j-1),
$$
then we have
\begin{eqnarray*}
(s^{\pr})&=&(t_1)\rp\cdots\rp((t_i)\rp(t_{i+1}))\rp\cdots\rp (t_j)\lp\cdots\lp (t_n)\\
&=&(s),
\end{eqnarray*}
because of the associativity of $\rp$. If we have
$$
s^{\pr}=(t_1, \ldots, (t_i, t_{i+1})_-, \ldots, t_n;j),
$$
then
\begin{eqnarray*}
(s^{\pr})&=&(t_1)\rp\cdots\rp (t_j)\lp\cdots\lp((t_i)\lp(t_{i+1}))\lp\cdots\lp (t_n)\\
&=&(s),
\end{eqnarray*}
because of the associativity of $\lp$. Now, suppose $w\in \D$ and $s$ is the standard sequence of letters of $w$ in its normal form. Then $s\to r$, where $r$ is a decreasing sequence of signed Hall trees. So, we have
$$
w=(s)=(r)=(t_1)\rp\cdots\rp (t_j)\lp\cdots \lp (t_n),
$$
for signed Hall trees $t_1\geq t_2\geq \cdots\geq t_n$ and some integer $1\leq j\leq n$. To prove uniqueness, suppose that at the same time we have
$$
w=(u_1)\rp\cdots\rp (u_k)\lp\cdots \lp (u_m),
$$ for a sequence of signed Hall trees $u_1\geq u_2\geq\cdots\geq u_m$. Let $s=(t_1, \ldots, t_n;j)$ and $r=(u_1, \ldots, u_m;k)$. Then clearly $(s)=w=(r)$. By Proposition 1, there are two standard sequences of letters $s^{\pr}$ and $r^{\pr}$ such that $s^{\pr}\to s$ and $r^{\pr}\to r$. So we have $(s^{\pr})=(r^{\pr})$. Since $\D$ is free, we have $s^{\pr}=r^{\pr}$. Again by Proposition 1, there is a standard sequence $p$ such that $s\to p$ and $r\to p$. But $s$ and $r$ are decreasing, so they admit no further rewritings. This shows that $s=p=r$.
\end{proof}
Every element of $\D$ of the form $(t)$, where $t$ is a signed Hall tree, will be called a signed Hall word. Hence for every signed Hall word $w$ there is exactly one signed Hall tree $t$ such that $w=(t)$. It is now also clear that every monomial is a normal product of a decreasing sequence of signed Hall words, and this representation is unique.\\
Recall that the monomial $(t)$ was defined for any signed Hall tree. Similarly we can define a Leibniz polynomial $[t]$ by induction:\\
1- For any $x\in X$, we define $[x]=x$.
2- If $t=(t_1,t_2)_+$, then $[t]=[[t_1],[t_2]]$.
3- If $t=(t_1,t_2)_-$, then $[t]=[[t_2],[t_1]]$.\\
\begin{theorem}
The set of all expressions of the form
$$
[t_1]\rp\cdots\rp [t_j]\lp\cdots \lp [t_n],
$$
with $n\geq 1$, $t_1\geq t_2\geq \cdots \geq t_n$, and $1\leq j\leq n$, is a basis of $\A$.
\end{theorem}
\begin{proof}
Throughout the proof, we denote the set of all such polynomials by $B$. Let $s=(t_1, \ldots, t_n;j)$ be a standard sequence. Define
$$
[s]=[t_1]\rp\cdots\rp [t_j]\lp\cdots \lp [t_n].
$$
For any legal rise $i$ in $s$, we define new sequences $\lambda_i(s)$ and $\rho_i(s)$ as follows:\\
1- If $i+1\neq j$, then $\lambda_i(s)=(t_1, \ldots, (t_i, t_{i+1})_+, \ldots, t_n;j-1)$.
2- If $i+1= j$, then $\lambda_i(s)=(t_1, \ldots, (t_i, t_{i+1})_-, \ldots, t_n;j-1)$.
3- $\rho_i(s)=(t_1, \ldots, t_{i+1}, t_i, \ldots, t_n;j)$.\\
It is not hard to check that both $\lambda_i(s)$ and $\rho_i(s)$ are standard sequences. A calculation shows that if $i+1\neq j$, then $[s]=[\lambda_i(s)]+[\rho_i(s)]$, and if $i+1=j$, then $[s]=[\rho_i(s)]-[\lambda_i(s)]$. So, we argue by induction on the number
$$
k=n+(\text{the number of inversions of } s).
$$
Recall that an inversion in a sequence $s$ is a pair of indices $p<q$ such that $t_p<t_q$. Now, the length of $\lambda_i(s)$ is $n-1$, and the number of inversions of $\rho_i(s)$ is smaller than that of $s$; therefore $[s]\in \langle B\rangle_{\mathbb{K}}$. Now, consider a monomial
$$
w=x_1\rp\cdots\rp x_j\lp\cdots\lp x_n.
$$
We have
$$
w=[(x_1, \ldots, x_n;j)]\in \langle B\rangle_{\mathbb{K}},
$$
and this shows that $\langle B\rangle_{\mathbb{K}}=\A$. We now show that the set $B$ is linearly independent. Without loss of generality, we can assume that $X$ is finite. Suppose $\Ad$ is the homogeneous part of $\A$ consisting of the polynomials of degree $d$. It is clear that $\dim \Ad=|X|^{d+1}$. Since every monomial of degree $d$ can be represented as
$$
(t_1)\rp\cdots\rp (t_j)\lp\cdots \lp (t_n),
$$
for some decreasing sequence of signed Hall trees $t_1\geq t_2\geq \cdots\geq t_n$ and some integer $1\leq j\leq n$, with $\sum |t_i|=d$, so the set
$$
B_0=\{ (t_1)\rp\cdots\rp (t_j)\lp\cdots \lp (t_n): n\geq 1, t_1\geq t_2\geq \cdots\geq t_n, t_i\in H, \sum |t_i|=d\}
$$
is a basis of $\Ad$. Now, we have a correspondence
$$
(t_1)\rp\cdots\rp (t_j)\lp\cdots \lp (t_n)\to [t_1]\rp\cdots\rp [t_j]\lp\cdots \lp [t_n],
$$
between $B_0$ and the set
$$
B_1=\{ [t_1]\rp\cdots\rp [t_j]\lp\cdots \lp [t_n]: n\geq 1, t_1\geq t_2\geq \cdots\geq t_n, t_i\in H, \sum |t_i|=d\}.
$$
As we saw, the latter generates $\Ad$, so it is also a basis of $\Ad$. This shows that $B$ is linearly independent.
\end{proof}
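The induction in the proof can be viewed as a terminating rewriting procedure. The sketch below is ours, not the paper's: the total order on trees is a stand-in for the Hall order (defined earlier in the article), and it only illustrates that a $\rho_i$-step lowers the inversion count while a $\lambda_i$-step lowers the length, so the measure $k$ strictly decreases.

```python
# Sketch of the induction measure k = n + (number of inversions) from the proof.
# An inversion, as in the proof, is a pair p < q with t_p < t_q; 'less' is a
# stand-in total order on trees (the paper's Hall order is defined elsewhere).
def inversions(seq, less):
    return sum(1 for p in range(len(seq))
                 for q in range(p + 1, len(seq))
                 if less(seq[p], seq[q]))

def measure(seq, less):
    return len(seq) + inversions(seq, less)

less = lambda a, b: a < b          # stand-in order on string-labelled trees
s = ['t1', 't3', 't2']             # a rise at the first position: t1 < t3
# rho-step: swap the pair at a rise; the inversion count drops.
rho = ['t3', 't1', 't2']
assert measure(rho, less) < measure(s, less)
# lambda-step: replace the pair by a single (joined) tree; the length drops.
lam = ['(t1,t3)', 't2']
assert len(lam) == len(s) - 1
```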
\section{Hall basis}
Now set $[H]=\{ [t]: t\in H\}$. We prove the main theorem of this article:
\begin{theorem}
The set $[H]$ is a linear basis of $\Leib$.
\end{theorem}
\begin{proof}
By Theorem 1, this set is linearly independent. We know that $\Leib$ is the smallest Leibniz subalgebra of $\A$ which contains $X$. We also have
$$
X\subseteq \langle [H]\rangle_{\mathbb{K}}\subseteq \Leib.
$$
So, we prove that $\langle [H]\rangle_{\mathbb{K}}$ is a Leibniz subalgebra. Equivalently, we show that
$$
t_1, t_2\in H\Rightarrow [[t_1],[t_2]]\in \langle [H]\rangle_{\mathbb{K}}.
$$
Again, without loss of generality, we assume that $X$ is finite. Let $\alpha=(|t_1|+|t_2|, \max(t_1, t_2))$. We prove by induction on $\alpha$ that
$$
[[t_1],[t_2]]=\sum \lambda_i [u_i],
$$
for some scalars $\lambda_i$, and signed Hall trees $u_i$, with $u_i^{\prr}\leq \max(t_1, t_2)$. Note that the set of all such $\alpha$'s is ordered lexicographically. If $\alpha=(2, x)$, then $t_1=x\in X$, $t_2=y\in X$, and $y<x$. Hence $(y,x)_-\in H$ and so
$$
[[t_1],[t_2]]=[x,y]=[(y,x)_-]\in \langle [H]\rangle_{\mathbb{K}}.
$$
Now suppose that $\alpha> (2,x)$ for all $x\in X$. There are two cases:\\
Case 1- Assume that $t_1\leq t_2$. We have three subcases:\\
1-1. Let $t_1\in X$. Then $(t_1, t_2)_+\in H$ and hence
$$
[[t_1],[t_2]]=[(t_1,t_2)_+]\in \langle [H]\rangle_{\mathbb{K}}.
$$
1-2. Let $t_1=(t_1^{\pr}, t_1^{\prr})$, with $t_1^{\prr}\geq t_2$. Then $(t_1, t_2)_+\in H$ and hence
$$
[[t_1],[t_2]]=[(t_1, t_2)_+]\in \langle [H]\rangle_{\mathbb{K}}.
$$
1-3. Let $t_1=(t_1^{\pr},t_1^{\prr})$, with $t_1^{\prr}\leq t_2$. Since $t_1\leq t_1^{\prr}$ and $t_1^{\pr}\leq t_1^{\prr}$, we have
$$
t_1\leq t_1^{\prr}\leq t_2, \ \ t_1^{\pr}\leq t_1^{\prr}\leq t_2.
$$
By the Leibniz identity, we have
\begin{eqnarray*}
[(t_1,t_2)_+]&=& [[t_1],[t_2]]\\
&=& [[[t_1^{\pr}], [t_1^{\prr}]], [t_2]]\\
&=& [[[t_1^{\pr}],[t_2]],[t_1^{\prr}]]+[[t_1^{\pr}],[[t_1^{\prr}],[t_2]]].
\end{eqnarray*}
We also have
\begin{eqnarray*}
(|t_1^{\pr}|+|t_2|, \max(t_1^{\pr}, t_2))&=&(|t_1^{\pr}|+|t_2|, t_2)\\
&<&(|t_1|+|t_2|, \max(t_1, t_2)),
\end{eqnarray*}
as well as
\begin{eqnarray*}
(|t_1^{\prr}|+|t_2|, \max(t_1^{\prr}, t_2))&=&(|t_1^{\prr}|+|t_2|, t_2)\\
&<&(|t_1|+|t_2|, \max(t_1, t_2)).
\end{eqnarray*}
Hence, by the induction hypothesis, we have
$$
[[t_1^{\pr}],[t_2]]=\sum \lambda_i [u_i],
$$
for some scalars $\lambda_i$, and signed Hall trees $u_i$, with $u_i^{\prr}\leq \max(t_1^{\pr}, t_2)=t_2$. Similarly, we have
$$
[[t_1^{\prr}],[t_2]]=\sum \mu_j [v_j],
$$
for some scalars $\mu_j$, and signed Hall trees $v_j$, with $v_j^{\prr}\leq \max(t_1^{\prr}, t_2)=t_2$. Note that, we also have $|u_i|=|t_1^{\pr}|+|t_2|$, and $|v_j|=|t_1^{\prr}|+|t_2|$. Therefore
$$
[[t_1],[t_2]]=\sum \lambda_i[[u_i],[t_1^{\prr}]]+\sum\mu_j[[t_1^{\pr}], [v_j]],
$$
and the assertion can be now obtained from the induction hypothesis. \\
Case 2- Now assume that $t_1>t_2$. We have
$$
[[t_1],[t_2]]=[(t_2,t_1)_-],
$$
and the assertion follows from the previous case.
\end{proof}
\section{Introduction}
\subsection{Acoustic field in the Sun and its measurement}
Turbulent convection in the upper layers of the solar convection zone can reach almost sonic speeds and serves as an efficient driving mechanism for acoustic oscillations \cite{christensen2014lecture}. We consider the three-dimensional equation describing these oscillations at fixed frequency $\omega>0$ proposed in \cite{gizon2017computational}:
\begin{equation}\label{eq:acoustic}
\begin{gathered}
-\nabla\left(\frac{1}{\rho}\nabla\bigl(\sqrt \rho \psi_\omega \bigr) \right) - \frac{\sigma^2}{c^2 \sqrt \rho} \psi_\omega = \frac{f_\omega}{\sqrt \rho},\\
\psi_\omega = \sqrt{\rho}c^2\nabla \cdot \xi_\omega, \quad \sigma^2 = \omega^2 + 2i\omega \gamma,
\end{gathered}
\end{equation}
where $\xi_\omega$ is the spatial matter displacement vector, $c$ is the sound speed, $\rho$ is the density, $\gamma$ is the attenuation, $f_\omega$ is the random source field due to turbulent convection and $x \in \mathbb R^3$. In this work we consider this model under an additional spherical symmetry assumption: $c = c(|x|)$, $\rho = \rho(|x|)$, $\gamma = \gamma(|x|)$. We also assume that
\begin{equation}\label{eq:spaces}
\text{$c \in L^\infty(\mathbb R_+)$, $c\geq c_\text{min}>0$ a.e.}, \quad \text{$\rho \in W^{2,\infty}(\mathbb R_+)$, $\rho>0$}, \quad \gamma \in L^\infty(\mathbb R_+),
\end{equation}
where $\mathbb R_+ = (0,\infty)$ and $W^{2,\infty}(\mathbb R_+)$ denotes the Sobolev space of functions defined on $\mathbb R_+$ which belong to $L^\infty(\mathbb R_+)$ together with their first two derivatives. We suppose that in the upper atmosphere $|x| \geq R_a$ the sound speed is constant, the density is exponentially decreasing (which corresponds to the adiabatic approximation, see \cite[section 5.4]{christensen2014lecture}), and there is no attenuation:
\begin{equation}\label{eq:atmospheric_values}
c(r) = c_0, \quad \rho(r) = \rho_0\exp(-(r-R_a)/H), \quad \gamma(r) = 0, \quad r\geq R_a,
\end{equation}
where $R_a = R_\odot + h_a$, $R_\odot = 6.957 \times 10^5$ km is the solar radius, $h_a$ is the altitude at which the (conventional) interface between the lower and upper parts of the atmosphere is located, and $H$ is the density scale height. The first two assumptions of formula \eqref{eq:atmospheric_values} follow the model of \cite{fournier2017atmospheric}, which extends a standard solar model of \cite{christensen1996current} to the upper atmosphere. In this article we do not fix exact values of the above parameters, but recall that in \cite{fournier2017atmospheric} they are given by the following table:
\begin{center}
\begin{tabularx}{.95\linewidth}{r|l|l}
& value & meaning \\ \hline
$h_a$ & 500 km & altitude at which the interface is located \\
$c_0$ & $6855 \; \si{m.s^{-1}}$ & sound speed in the upper solar atmosphere \\
$\rho_0$ & $2.886 \times 10^{-6} \; \si{kg.m^{-3}}$ & density at the interface \\
$H$ & $ 125 \; \si{km} $ & density scale height in the upper atmosphere
\end{tabularx}
\end{center}
In reality, the Sun is surrounded by a corona whose base is located at about $h_c = 2000 \, \si{km}$ above the surface and which is highly inhomogeneous. However, it is common to neglect this complication when studying acoustic waves inside of the Sun and in the lower atmosphere, see, e.g., \cite{christensen2014lecture,fournier2017atmospheric}. The adequacy of this simplification has been theoretically justified and numerically confirmed.
The exponential decay of the density in the upper atmosphere results in the trapping of acoustic waves with frequencies less than the cutoff frequency $\omega_\text{ctf} = c_0/(2H)$, which is about 5.2 mHz for the Sun, and in the quantisation of their admissible frequencies. Observations of these frequencies and of the corresponding modes at the solar surface provide a common basis for helioseismological studies, see, e.g., \cite{christensen2014lecture}. In turn, acoustic waves with frequencies above the cutoff, that is, such that
\begin{equation}\label{eq:cutoff}
\omega > \frac{c_0}{2H},
\end{equation}
propagate into the upper atmosphere. One can expect that the simulated data (oscillation power spectrum) computed from equation \eqref{eq:acoustic} is closer to observations for these higher frequency waves. The reason is that convection, granulation and supergranulation significantly contribute to the power of oscillations at lower frequencies (see \cite{gizon2010local}), but are not captured by the model. The possibility of helioseismic inversions from observations above the cutoff has not been well investigated to date. In this article we show that observations of acoustic waves, when performed at two frequencies above the cutoff and at two different heights above the solar surface, uniquely determine the sound speed, density and attenuation in the Sun in the spherically symmetric case.
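As a rough numerical illustration of condition \eqref{eq:cutoff}, one can evaluate the cutoff with the tabulated values of $c_0$ and $H$ (a sketch of ours; note that these rounded parameters give an estimate slightly below the commonly quoted solar value of about 5.2 mHz):

```python
# Rough numerical check of the cutoff condition with the tabulated c0 and H.
import math

c0 = 6855.0        # m/s, sound speed in the upper solar atmosphere
H = 125e3          # m, density scale height in the upper atmosphere

omega_ctf = c0 / (2 * H)              # cutoff angular frequency, rad/s
nu_ctf = omega_ctf / (2 * math.pi)    # cutoff frequency, Hz
# With these rounded table values nu_ctf comes out near 4.4 mHz; the commonly
# quoted solar value (~5.2 mHz) corresponds to slightly different parameters.
print(f"cutoff: {1e3 * nu_ctf:.2f} mHz")

def k(nu):
    """Radial wavenumber sqrt(omega^2/c0^2 - 1/(4H^2)); real above the cutoff."""
    omega = 2 * math.pi * nu
    return math.sqrt((omega / c0) ** 2 - 1 / (2 * H) ** 2)

print(f"k at 5.2 mHz: {k(5.2e-3):.3e} 1/m")   # propagating (real k)
# k(3.0e-3) would raise ValueError: 3 mHz is below the cutoff (evanescent wave).
```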
Experimental measurement of solar acoustic waves can be performed through the Doppler shifts in the absorption lines of the solar light, as is done in the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) satellite, see, e.g., the official SDO website (\url{sdo.gsfc.nasa.gov}) for more details and references. HMI has observed the full solar disk in the Fe I absorption line at 6173 \r{A} continuously since April 30, 2010. It combines six photographs localised in wavelength (filtergrams), taken in a neighborhood of this spectral line with a cadence of 45 sec, to compute the map of Doppler velocities (Dopplergram). Several models show that this Dopplergram can serve as a rough estimate for the line-of-sight matter displacement velocity at about 100 km above the solar surface, which is the formation height for the HMI Dopplergram, see \cite{fleck2011formation}.
HMI is the successor of the Michelson Doppler Imager (MDI), an instrument of similar design and purpose onboard the Solar and Heliospheric Observatory (SOHO) satellite. In contrast to HMI, MDI observed the full solar disk in the Ni I absorption line at 6768 \r{A} with a cadence of 60 sec and only for several months each year, during the so-called Dynamics Runs. The formation height for MDI is about 125 km above the solar surface, see \cite{fleck2011formation}. Note that observations from HMI and MDI can be found in the Joint Science Operations Center database at Stanford University (\url{jsoc.stanford.edu}).
In the present work we assume that the measurements of the solar acoustic field can be performed at two different heights above the surface. In a rough approximation, HMI and MDI Dopplergrams taken during the Dynamics Run 2010, when both instruments continuously observed the full solar disk, can be used to extract this data. As recently shown in \cite{nagashima2014interpreting}, it is also possible to perform multi-height measurements by combining six raw HMI filtergrams in different ways.
The main theoretical results of this work are presented in \cref{sec:main_results}. Related proofs are given in \cref{sec:demonstration_of_uniqueness}. Numerical reconstructions confirming our theoretical conclusions are given in \cref{sec:reconstruction}.
\section{Main results}\label{sec:main_results}
\subsection{Extracting the imaginary part of the Green's function}\label{sec:extracting_green_function}
Under the assumptions \eqref{eq:spaces}, \eqref{eq:atmospheric_values}, \eqref{eq:cutoff} equation \eqref{eq:acoustic} at fixed $\omega$ can be rewritten as the Schr\"odinger equation
\begin{equation}\label{eq:schroedinger}
(L_v - k^2)\psi = f, \quad L_v = -\Delta + v, \quad k > 0,
\end{equation}
where the indices indicating dependence on $\omega$ are suppressed,
\begin{equation}\label{eq:acoustic_kv}
k = \sqrt{\frac{\omega^2}{c_0^2} - \frac{1}{4H^2}}, \quad v(x) = k^2 - \frac{\sigma(|x|)^2}{c(|x|)^2} + \rho(|x|)^{\frac 1 2} \Delta(\rho(|x|)^{-\frac 1 2}),
\end{equation}
$v \in L^\infty(\mathbb R^3)$, and $v(x)=1/(H|x|)$ for $|x|\geq R_a$.
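The identity $v(x)=1/(H|x|)$ for $|x|\geq R_a$ follows from \eqref{eq:atmospheric_values} and \eqref{eq:acoustic_kv}: for $\rho\propto e^{-r/H}$ one has $\rho^{1/2}\Delta(\rho^{-1/2})=1/(4H^2)+1/(Hr)$, and the constant $1/(4H^2)$ cancels against $k^2-\omega^2/c_0^2$. This can be checked symbolically, e.g.:

```python
# Symbolic check that rho^(1/2) * Laplacian(rho^(-1/2)) = 1/(4H^2) + 1/(H r)
# for rho(r) proportional to exp(-r/H), which yields v(x) = 1/(H|x|), r >= R_a.
import sympy as sp

r, H = sp.symbols('r H', positive=True)
rho = sp.exp(-r / H)                 # the constant prefactor rho_0 cancels out
f = 1 / sp.sqrt(rho)
# radial part of the 3D Laplacian acting on f(|x|): f'' + (2/r) f'
lap = sp.diff(f, r, 2) + 2 / r * sp.diff(f, r)
expr = sp.simplify(sp.sqrt(rho) * lap)
assert sp.simplify(expr - (1 / (4 * H**2) + 1 / (H * r))) == 0
print(expr)
```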
In this article we consider equation \eqref{eq:schroedinger} with a general complex-valued potential $v$ such that
\begin{equation}\label{eq:potential}
\begin{gathered}
v(x) = \widetilde v(|x|), \; x \in \mathbb R^3, \\
\widetilde v \in L^\infty(\mathbb R_+), \quad \widetilde v(r) = \frac{\alpha}{r}, \; r \geq R_a, \\
\text{for some constants $\alpha \in \mathbb R$, $R_a > 0$}.
\end{gathered}
\end{equation}
If the potential $v$ satisfies \eqref{eq:potential}, then the resolvent $(L_v-k^2)^{-1}$ is a meromorphic operator-valued function of $k\in\mathbb C_+ = \bigl\{ z \in \mathbb C \colon \Im z > 0\bigr\}$ with the distributional kernel $G_v(x,x') = G_v(x,x';k)$ admitting a unique meromorphic continuation across the positive real axis. The restriction to $k \in \mathbb R_+$ of the distributional kernel $G_v(x,x')$ is called the radiation Green's function for equation \eqref{eq:schroedinger}. In addition, $G_v(x,x')$ is a distributional solution to equation $(L_v - k^2)G_v(\cdot,x') = \delta_{x'}$, where $\delta_{x'}$ denotes the Dirac delta function centered at $x'$. We also suppose that $k \in \mathbb R_+ \setminus\Sigma^P_v$, where
\begin{equation}\label{eq:Sigma_definition}
\begin{gathered}
\text{$\Sigma^P_v \subset \mathbb R_+$ is the union of the sets of positive poles $k$}\\
\text{of functions $G_v(x,x';k)$ and $G_{\overline v}(x,x';k)$}.
\end{gathered}
\end{equation}
Basic properties of $G_v$ can be found in \cite{saito1974principle,agmon1992analyticity}. In particular, at fixed $k$ the function $G_v$ is jointly continuous outside the diagonal $\Delta = \{ (x,x')\in \mathbb R^3 \times \mathbb R^3 \colon x = x'\}$. Besides, $(L_v-(k+i0)^2)^{-1} \in \mathcal L(L^2_{1+\varepsilon}(\mathbb R^3),L^2_{-1-\varepsilon}(\mathbb R^3))$, $\varepsilon \in (0,\tfrac 1 2]$, where $L^2_\delta(\mathbb R^3)$ denotes the Hilbert space of measurable functions $u$ in $\mathbb R^3$ with the finite norm
\begin{equation*}
\|u\|_{L^2_\delta} = \left[\int_{\mathbb R^3} (1+|x|)^{\delta}|u(x)|^2 \, dx\right]^{1/2}, \quad \delta \in \mathbb R.
\end{equation*}
Accordingly, for any $k \in \mathbb R_+ \setminus \Sigma^P_v$ equation \eqref{eq:schroedinger} with $f \in L^2_{1+\varepsilon}(\mathbb R^3)$, $\varepsilon \in (0,\tfrac 1 2)$, admits a unique radiation (limiting absorption) solution, given by
\begin{equation}\label{eq:radiation_solution}
\psi_v(x) = \int_{\mathbb R^3} G_v(x,x')f(x') \, dx'.
\end{equation}
The spherical symmetry of the potential $v$ allows us to separate variables in equation \eqref{eq:schroedinger}, reducing it to an equivalent multi-channel Schr\"odinger equation on the half-line $\mathbb R_+$ with non-coupled channels. More precisely, consider the orthogonal expansions in normalized spherical harmonics $Y^m_\ell$:
\begin{equation}\label{eq:harmonics_expansions}
\psi_v(r\vartheta) = \frac 1 r \sum_{\ell \geq 0}\sum_{|m|\leq \ell} \varphi^m_{v,\ell}(r) Y^m_\ell(\vartheta), \quad f(r\vartheta) = \frac 1 r \sum_{\ell \geq 0}\sum_{|m|\leq \ell} f^m_\ell(r) Y^m_\ell(\vartheta),
\end{equation}
where $r>0$, $\vartheta \in S^2_1$ and $S^2_R = \bigl\{ x \in \mathbb R^3 \colon |x| = R\}$. Plugging these expansions into formula \eqref{eq:schroedinger}, we get the radial equations
\begin{equation}\label{eq:radial_schroedinger_0}
(L_{v,\ell} - k^2)\varphi^m_{v,\ell} = f^m_\ell, \quad L_{v,\ell} = -\tfrac{d^2}{dr^2} + \tfrac{\ell(\ell+1)}{r^2} + \widetilde v,
\end{equation}
where $\ell \geq 0$, $|m| \leq \ell$. Besides, it follows from formulas \eqref{eq:radiation_solution}, \eqref{eq:harmonics_expansions} that if $\psi_v$ is the unique radiation solution of equation \eqref{eq:schroedinger}, then $\varphi^m_{v,\ell}$ can be expressed as
\begin{equation}\label{eq:radial_radiation_solution}
\varphi^m_{v,\ell}(r) = \int_0^{R_a} G_{v,\ell}(r,r')f^m_\ell(r') \, dr',
\end{equation}
where $G_{v,\ell}(r,r')$ is the coefficient in the spherical harmonics expansion of $G_v$:
\begin{equation}\label{eq:green_harmonics_expansion}
G_v(r\vartheta,r'\vartheta') = \frac{1}{rr'}\sum_{\ell\geq 0} \sum_{|m|\leq \ell} G_{v,\ell}(r,r') Y^m_\ell(\vartheta)\overline{Y^m_\ell}(\vartheta'), \quad r,r'>0, \; \vartheta,\vartheta'\in S^2_1.
\end{equation}
One can show that $G_{v,\ell}$ is indeed a Green's function for equation \eqref{eq:radial_schroedinger_0}, that is, $(L_{v,\ell}-k^2)G_{v,\ell}(\cdot,r')=\delta_{r'}$, see \cite{agmon1992analyticity} and \cref{sec:green_functions}.
In this article we consider equation \eqref{eq:schroedinger} with a random source function $f$. In this case the radiation solution $\psi_v$ is also a random function, as well as functions $\varphi^m_{v,\ell}$ and $f^m_\ell$ in the spherical harmonics expansions of formula \eqref{eq:harmonics_expansions}. Following \cite{gizon2017computational}, we assume that the power spectrum (power spectral density) of $\psi_v$, defined as $\mathcal P^m_{v,\ell}(r) = \mathbb E|\varphi^m_{v,\ell}(r)|^2$, can be measured experimentally. However, in contrast to \cite{gizon2017computational}, where the power spectral density is assumed to be known at the solar surface $r = R_\odot$, we assume that it can be measured at two different observation radii $R_o^\dagger > R_o \geq R_\odot$. These measurements can be roughly achieved by using concurrent MDI and HMI Dopplergrams \cite{fleck2011formation}, or multi-height measurements from raw HMI filtergrams \cite{nagashima2014interpreting}.
Our first result relates the cross correlations $\mathbb E\bigl( \overline{\varphi^m_{v,\ell}(r_1)}\varphi^m_{v,\ell}(r_2) \bigr)$ to the Green's function $G_{v,\ell}(r_1,r_2)$. We prove the following proposition:
\begin{proposition}\label{thm:green_extraction} Let $v$ be a complex-valued potential satisfying \eqref{eq:potential} and let $k \in \mathbb R_+ \setminus \Sigma^P_v$ be fixed. Assume that the random functions $f^m_\ell$ satisfy the condition
\begin{equation}\label{eq:sources_equipartitioning}
\mathbb E\bigl( \overline{f^m_\ell(r)}f^m_\ell(r+\xi)\bigr) = \Pi \delta_0(\xi) \left( -\Im \widetilde v(r) + k \delta_R(r) \right),
\end{equation}
for some $\Pi > 0$, $R \geq R_a$. Then the following formula is valid at fixed $r_1$, $r_2>0$:
\begin{equation}\label{eq:green_extraction}
\Pi \Im G_{v,\ell}(r_1,r_2) = \mathbb E\bigl( \overline{\varphi^m_{v,\ell}(r_1)}\varphi^m_{v,\ell}(r_2) \bigr) + O(\tfrac 1 R), \quad R \to + \infty.
\end{equation}
\end{proposition}
\Cref{thm:green_extraction} is proved in \cref{sec:proof_green_extraction}. This proposition is a variation of a well-known result, see, e.g., \cite{snieder2007extracting,gizon2017computational} and references therein. The main difference is that we consider long-range potentials and the radiation Green's function, whereas in the literature one uses the Green's function with an artificial boundary condition imposed at $r = R$ which approximates the Sommerfeld radiation condition. The approximate radiation boundary condition makes it possible to remove the error term $O(\tfrac 1 R)$ in formula \eqref{eq:green_extraction}, but it complicates the further analysis. In addition, note that in general the Sommerfeld radiation condition does not apply to long-range potentials.
Assumption \eqref{eq:sources_equipartitioning} requires that the random sources be uncorrelated in space, excited throughout the volume with a power proportional to $-\Im \widetilde v$, and excited at the surface $r = R$ with a power proportional to $k$.
\begin{remark} Recall that equation \eqref{eq:schroedinger} arises, in particular, by rewriting equation \eqref{eq:acoustic} under the assumptions \eqref{eq:spaces}, \eqref{eq:atmospheric_values}, \eqref{eq:cutoff}. In this case $k$ and $v$ are given by formulas \eqref{eq:acoustic_kv}, and \Cref{thm:green_extraction} has a physical interpretation. Taking into account that $-\Im \widetilde v = 2\omega \gamma$, condition \eqref{eq:sources_equipartitioning} implies proportionality of the power spectral density of random excitations to the local attenuation (energy dissipation) rate. It has long been known in physics that this condition is related to the possibility of extracting the imaginary part of the point-source response function (Green's function) from the power spectral density of the randomly excited field, which is expressed by relation \eqref{eq:green_extraction} in our setting. In the physics literature similar relations are established as fluctuation-dissipation theorems, see, e.g., \cite{landau1996statistical}.
\end{remark}
\subsection{Uniqueness results}\label{sec:uniqueness}
\Cref{thm:green_extraction} allows us to retrieve $\Im G_{v,\ell}(r,r)$ approximately from the power spectral density of the noise $\mathcal P^m_{v,\ell}(r) = \mathbb E |\varphi^m_{v,\ell}(r)|^2$ at fixed $r$. Next, we prove that $\Im G_{v,\ell}(r,r)$ known exactly for all $\ell \geq 0$ and at two different $r$ uniquely determines $v$. Equivalently, taking into account the orthogonal expansion \eqref{eq:green_harmonics_expansion}, $v$ is uniquely determined by $\Im G_v$ known on $S^2_r \times S^2_r$ at two different $r$.
\begin{theorem}\label{thm:uniqueness} Let $v_1$, $v_2$ be two complex-valued potentials satisfying \eqref{eq:potential} and let $k \in \mathbb R_+ \setminus (\Sigma^P_{v_1}\cup\Sigma^P_{v_2})$ be fixed. Assume that one of the following conditions holds true:
\begin{enumerate}
\item[(A)] $G_{v_1} = G_{v_2}$ on $M^4_{R_o} = (S^2_{R_o} \times S^2_{R_o}) \setminus \Delta$ for some $R_o > R_a$;
\item[(B)] $\Im G_{v_1} = \Im G_{v_2}$ on $M^4_{R_o} \cup M^4_{R_o^\dagger}$ for some $R_o^\dagger > R_o \geq R_a$ such that $R_o^\dagger \not\in \Sigma^S_{\alpha,k,R_o}$, where $\Sigma^S_{\alpha,k,R_o} \subset [R_o,\infty)$ is a discrete set without finite accumulation points defined by \eqref{eq:coulomb_modulus_phase}, \eqref{eq:sigma_s}.
\end{enumerate}
Then $v_1 = v_2$ a.e.
\end{theorem}
\begin{remark} If $v$ is some potential satisfying \eqref{eq:potential} then the restriction of $G_v(x,x')$ to $M^4_R$ depends on $|x-x'|$ only because $v$ is spherically symmetric. In particular, \Cref{thm:uniqueness} remains valid if the four-dimensional manifolds $M^4_R$ are replaced by the one-dimensional manifolds
\begin{equation*}
M^1_R(x_1,x_2) = \bigl\{ (x_1 \sin \theta + x_2 \cos \theta,x_2 ) \colon \theta \in (0,\pi] \bigr\},
\end{equation*}
for some fixed $x_1$, $x_2 \in S^2_R$ with $x_1 \cdot x_2 = 0$.
\end{remark}
\Cref{thm:uniqueness} is proved in \cref{sec:demonstration_of_uniqueness} and the proof consists of the following steps presented in \cref{sec:regular_solutions,sec:green_functions,sec:recovering_scattering_matrix,sec:recovering_dtn}. In \cref{sec:regular_solutions} we separate variables in the equation $(L_v-k^2)\psi = 0$ and establish auxiliary results for the regular solutions of the arising radial Schr\"odinger equations. In \cref{sec:green_functions} we derive an appropriate relation between the Green's function $G_v$ and the Green's functions $G_{v,\ell}$ of the radial equations. This relation will allow us to extract the diagonal values $G_{v,\ell}(R,R)$ from $G_v$ on $M^4_R$. Using this relation, in \cref{sec:recovering_scattering_matrix} we show that the scattering matrix elements $s_{v,\ell}$ can be extracted from $G_v$ on $M^4_{R_o}$, or from the imaginary part $\Im G_v$ alone on $M^4_{R_o}\cup M^4_{R_o^\dagger}$, under the assumption that $R_o^\dagger \not\in \Sigma^S_{\alpha,k,R_o}$. In \cref{sec:recovering_dtn} we prove that the scattering matrix elements $s_{v,\ell}$ determine the Dirichlet-to-Neumann map $\Lambda_{v,R}$ for the potential $v$ in some ball $B^3_R$, $R \geq R_a$, where
\begin{equation*}
B^3_R = \bigl\{ x \in \mathbb R^3 \colon |x| < R \bigr\}.
\end{equation*}
In \cref{sec:combining_results_uniqueness} we combine these results together with the uniqueness theorem for the Dirichlet-to-Neumann map from \cite{novikov1988multidimensional} to uniquely determine $v$. This will prove \Cref{thm:uniqueness}.
\begin{corollary}\label{thm:uniqueness_acoustic} Let $c$, $\rho$, $\gamma$, and $c'$, $\rho'$, $\gamma'$ be two sets of parameters satisfying \eqref{eq:spaces}, \eqref{eq:atmospheric_values} and define the corresponding potentials $v=v_\omega$, $v' = v_\omega'$ and the wavenumber $k = k_\omega$ according to formula \eqref{eq:acoustic_kv} at fixed $\omega$. Let $\omega_1 \neq \omega_2$ be two positive frequencies satisfying \eqref{eq:cutoff} and such that $k_\omega \in \mathbb R_+ \setminus (\Sigma^P_{v_\omega} \cup \Sigma^P_{v_\omega'})$ for $\omega = \omega_1$, $\omega_2$. Let $G_{v_\omega}$, $G_{v_\omega'}$ be the radiation Green's functions at fixed $\omega$ for the potentials $v$, $v'$ respectively.
Suppose that $\Im G_{v_\omega} = \Im G_{v_\omega'}$ on $M^4_{R_o} \cup M^4_{R_o^\dag}$ for some $R_o^\dag > R_o \geq R_a$ such that $R_o^\dag \not\in\Sigma^S_{1/H,k_\omega,R_o}$, where $\omega = \omega_1$, $\omega_2$. Then $c=c'$, $\rho = \rho'$, $\gamma = \gamma'$ a.e.
\end{corollary}
\begin{proof}
Under the assumptions of \Cref{thm:uniqueness_acoustic} it follows from \Cref{thm:uniqueness} that $v_\omega = v_\omega'$ for $\omega = \omega_1$, $\omega_2$. Using that $\Re v_\omega = \Re v_\omega'$ for $\omega = \omega_1$, $\omega_2$ one can show that $c = c'$, $\rho = \rho'$, see, e.g., the proof of \cite[Theorem 2.9]{agaltsov2018monochromatic}. Then, recalling that $\Im v_\omega = -2\omega\gamma/c^2$, $\Im v_\omega' = -2\omega\gamma'/(c')^2$, it follows from the equality $\Im v_\omega = \Im v_\omega'$ for $\omega = \omega_1$, $\omega_2$ together with the equality $c = c'$ that $\gamma = \gamma'$, concluding the proof of \Cref{thm:uniqueness_acoustic}.
\end{proof}
In \cref{sec:reconstruction} we shall present numerical simulations which confirm the uniqueness results of \Cref{thm:uniqueness} and \Cref{thm:uniqueness_acoustic}.
\section{Proof of the main results}\label{sec:demonstration_of_uniqueness}
\subsection{Properties of regular radial solutions}\label{sec:regular_solutions}
In this subsection we shall establish some auxiliary results regarding regular solutions of the radial Schr\"odinger equation which arises by separation of variables in the homogeneous equation $(L_v - k^2)\psi = 0$.
As the potential $v$ is spherically symmetric, this equation separates in spherical coordinates. We seek a solution $\psi^m_{v,\ell}\in H^2_\text{loc}(\mathbb R^3)$ of the form
\begin{equation}\label{eq:angular_solution}
\psi^m_{v,\ell}(x) = \tfrac{1}{|x|} \varphi_{v,\ell}(|x|) Y^m_\ell(\tfrac{x}{|x|}),
\end{equation}
which leads to the equation
\begin{equation}\label{eq:radial_schroedinger}
\bigl(L_{v,\ell} - k^2\bigr) \varphi_{v,\ell} = 0,
\end{equation}
together with the condition that $\varphi_{v,\ell}$ vanishes at the origin. One can show that this determines $\varphi_{v,\ell} \in H^2_\text{loc}(\mathbb R_+)$ uniquely up to a multiplicative factor, see, e.g., \cite{agmon1992analyticity}. We impose the boundary condition
\begin{equation}\label{eq:regular_asymptotics}
\lim_{r\to+0} r^{-\ell-1}\varphi_{v,\ell}(r) = 1,
\end{equation}
which fixes $\varphi_{v,\ell}$ uniquely.
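For the free case $\widetilde v\equiv 0$ the regular solution is explicit, $\varphi_{0,\ell}(r)=(2\ell+1)!!\,k^{-\ell}\,r\,j_\ell(kr)$ with $j_\ell$ the spherical Bessel function, and the normalization \eqref{eq:regular_asymptotics} can be verified numerically (a side illustration of ours, not used in the proofs):

```python
# Numerical check of the normalization r^(-l-1) * phi(r) -> 1 as r -> 0
# for the free (v = 0) regular solution phi(r) = (2l+1)!! k^(-l) r j_l(kr).
from scipy.special import spherical_jn

k, ell = 1.3, 2
double_fact = 1 * 3 * 5             # (2*ell + 1)!! = 5!! = 15
phi = lambda r: double_fact * k**(-ell) * r * spherical_jn(ell, k * r)

for r in (1e-2, 1e-3, 1e-4):
    print(r, phi(r) / r**(ell + 1))   # the ratio tends to 1 as r -> 0
assert abs(phi(1e-4) / (1e-4)**(ell + 1) - 1) < 1e-6
```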
Note that $\varphi_{v,\ell}(r)$ does not depend on the values of $\widetilde v$ in the region $[r,+\infty)$, see \cite[formula (12.4)]{newton1982scattering}. This analysis implies the following lemma.
\begin{lemma}\label{thm:regular_solution} Let $k > 0$ and let $v$ be a complex-valued potential satisfying \eqref{eq:potential}. Then the Dirichlet problem
\begin{equation*}
(L_v - k^2)\psi = 0 \; \text{in $B^3_R$}, \quad \psi|_{S^2_R} = Y^m_\ell,
\end{equation*}
where $Y^m_\ell=Y^m_\ell(x/|x|)$, $x \in S^2_R$, has a unique solution $\psi \in H^2(B^3_R)$ if and only if $\varphi_{v,\lambda}(R) \neq 0$ for all integer $\lambda \geq 0$. In addition, this solution is given by the formula
\begin{equation}\label{eq:regular_solution}
\psi(x) = \frac{R}{|x|}\frac{\varphi_{v,\ell}(|x|)}{\varphi_{v,\ell}(R)} Y^m_\ell(\tfrac{x}{|x|}).
\end{equation}
\end{lemma}
Next we shall derive an expression for the regular solution $\varphi_{v,\ell}$ in the domain $r\geq R$ in terms of the Coulomb wave functions and of the so-called scattering matrix element $s_{v,\ell}$. First we recall the definition and some basic properties of the Coulomb wave functions from \cite{erdelyi1957asymptotic,olver2010nist}.
The Coulomb wave functions $H^\pm_\ell(\eta,kr)$, $\eta = \alpha/(2k)$, are the unique solutions of equation \eqref{eq:radial_schroedinger} with $\widetilde v(r)=\alpha/r$ specified by the following asymptotics as $r \to + \infty$:
\begin{subequations}
\begin{gather}\label{eq:coulomb_asymptotics}
H^\pm_\ell(\eta,kr) = \exp(\pm i \theta_\ell(\eta,kr)) + O(\tfrac{1}{r}),\\
\theta_\ell(\eta,kr) = kr - \eta\ln(2kr) - \tfrac 1 2 \ell \pi + \sigma_\ell(\eta),\label{eq:coulomb_theta}
\end{gather}
\end{subequations}
where $\sigma_\ell(\eta) = \arg \Gamma(\ell+1+i\eta)$ is the Coulomb phase shift and $\Gamma$ denotes the usual gamma function. Functions $H^+_\ell(\eta,kr)$ and $H^-_\ell(\eta,kr)$ are complex conjugates of each other and are linearly independent. Using \eqref{eq:coulomb_asymptotics} together with the relation
\begin{equation*}
\frac{\partial}{\partial \rho} H^\pm_\ell(\eta,\rho) = \left(\frac{\ell+1}{\rho}+\frac{\eta}{\ell+1}\right) H^\pm_\ell(\eta,\rho) - \sqrt{1+\frac{\eta^2}{(\ell+1)^2}} H^\pm_{\ell+1}(\eta,\rho), \quad \ell \geq 0
\end{equation*}
given in \cite{powell1947recurrence}, one can show that
\begin{equation}\label{eq:coulomb_radiation}
\frac{\partial}{\partial r} H^\pm_\ell(\eta,kr) = \pm ik H^\pm_\ell(\eta,kr) + O(\tfrac{1}{r}), \quad r \to+\infty.
\end{equation}
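The asymptotics \eqref{eq:coulomb_asymptotics}, \eqref{eq:coulomb_theta} can be probed numerically via $H^\pm_\ell=G_\ell\pm iF_\ell$, where $F_\ell$, $G_\ell$ are the regular and irregular Coulomb wave functions available in mpmath (a sketch of ours; the sample parameters and tolerance are ad hoc):

```python
# Numerical check of H^+_l(eta, kr) ~ exp(i * theta_l(eta, kr)) for large kr,
# using H^+ = G + iF with mpmath's regular (F) and irregular (G) Coulomb waves.
import mpmath as mp

ell, eta, z = 1, 0.5, 500.0
Hplus = mp.coulombg(ell, eta, z) + 1j * mp.coulombf(ell, eta, z)

sigma = mp.arg(mp.gamma(ell + 1 + 1j * eta))        # Coulomb phase shift
theta = z - eta * mp.log(2 * z) - ell * mp.pi / 2 + sigma
print(abs(Hplus - mp.exp(1j * theta)))              # O(1/z), small for z = 500
assert abs(Hplus - mp.exp(1j * theta)) < 0.01
```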
Using these properties, we shall prove the following result.
\begin{lemma}\label{thm:regular_solution_expansion} Let $v$ be a complex-valued potential satisfying \eqref{eq:potential}, let $k \in \mathbb R_+\setminus \Sigma^P_v$ be fixed and let $\eta = \alpha/(2k)$. Then the function $\varphi_{v,\ell}$ defined by \eqref{eq:radial_schroedinger}, \eqref{eq:regular_asymptotics} admits in the region $r \geq R_a$ the representation
\begin{equation}\label{eq:regular_solution_expansion}
\varphi_{v,\ell}(r) = b_\ell\left( H^-_\ell(\eta,kr) - s_{v,\ell} H^+_\ell(\eta,kr) \right), \quad r \geq R_a,
\end{equation}
with unique $b_\ell = b_\ell(k) \in \mathbb C \setminus \{0\}$ and $s_{v,\ell}=s_{v,\ell}(k) \in \mathbb C \setminus \{0\}$.
\end{lemma}
\begin{proof} As the Coulomb wave functions $H^\pm_\ell(\eta,kr)$ are linearly independent, any solution to equation \eqref{eq:radial_schroedinger} in the region $r \geq R_a$ is given by their linear combination with unique coefficients. In particular, $\varphi_{v,\ell}$ can be expressed in the region $r \geq R_a$ in the form
\begin{equation*}
\varphi_{v,\ell}(r) = a_{v,\ell} H^+_\ell(\eta,kr) + b_{v,\ell} H^-_\ell(\eta,kr), \quad r \geq R_a,
\end{equation*}
for some $a_{v,\ell}$, $b_{v,\ell} \in \mathbb C$. We shall show that $a_{v,\ell} \neq 0$, $b_{v,\ell} \neq 0$.
Recall from \cite{saito1974principle} that for any $k \in \mathbb R_+$ outside the singular set $\Sigma^P_v$ and for any $f \in L^2_{1+\varepsilon}(\mathbb R^3)$ the Schr\"odinger equation \eqref{eq:schroedinger} admits the unique solution $\psi \in H^2_\text{loc}(\mathbb R^3) \cap L^2_{-1-\varepsilon}(\mathbb R^3)$, $\varepsilon \in (0,\tfrac 1 2]$, satisfying the radiation condition
\begin{subequations}
\begin{gather}\label{eq:radiation}
\int_{|x|\geq 1}(1+|x|)^{-1+\varepsilon}|\mathcal D\psi(x)|^2 \, dx < \infty, \quad \varepsilon \in (0,\tfrac 1 2], \\
\mathcal D \psi(x) = \nabla \psi(x) + \tfrac{x}{|x|^2} \psi(x) - i k \tfrac{x}{|x|}\psi(x).
\end{gather}
\end{subequations}
Now assume that $b_{v,\ell} = 0$. Then it follows from \eqref{eq:coulomb_asymptotics}, \eqref{eq:coulomb_radiation} that the function defined by \eqref{eq:angular_solution} is a non-zero solution of class $H^2_\text{loc}(\mathbb R^3) \cap L^2_{-1-\varepsilon}(\mathbb R^3)$, $\varepsilon \in (0,\tfrac 1 2]$, to $(L_v-k^2)\psi = 0$ in $\mathbb R^3$ satisfying the radiation condition \eqref{eq:radiation}. This contradicts the assumption $k \not\in \Sigma^P_v$.
Now assume that $a_{v,\ell} = 0$ and let $\psi$ be defined by \eqref{eq:angular_solution}. Then it follows from \eqref{eq:coulomb_asymptotics}, \eqref{eq:coulomb_radiation} that $\overline \psi$ is of class $H^2_\text{loc}(\mathbb R^3) \cap L^2_{-1-\varepsilon}(\mathbb R^3)$, $\varepsilon \in (0,\tfrac 1 2]$, and satisfies $(L_{\overline v}-k^2)\overline \psi = 0$ in $\mathbb R^3$ together with the radiation condition \eqref{eq:radiation}. Taking into account definition \eqref{eq:Sigma_definition}, this also contradicts the assumption $k \not\in \Sigma^P_v$ and concludes the proof of \Cref{thm:regular_solution_expansion}.
\end{proof}
\begin{remark}\label{rmk:scattering_matrix}
The coefficient $s_{v,\ell} = s_{v,\ell}(k)$ in \Cref{thm:regular_solution_expansion} is called the $\ell$-th scattering matrix element of the potential $v$.
\end{remark}
\subsection{Green's functions}\label{sec:green_functions}
In this subsection we shall express the radiation Green's function $G_{v,\ell}$ for equation \eqref{eq:radial_schroedinger} in the region $r \geq R_a$ in terms of the Coulomb wave functions. Then we shall give a formula for extracting the diagonal values $G_{v,\ell}(R,R)$ from the radiation Green's function $G_v$ for equation \eqref{eq:schroedinger} known at $M^4_R$.
In addition to the regular solution $\varphi_{v,\ell}$ of equation \eqref{eq:schroedinger} specified by the boundary condition \eqref{eq:regular_asymptotics}, we consider the outgoing solution $\varphi^+_{v,\ell}$ which is specified by the asymptotics
\begin{equation}\label{eq:outgoing_asymptotics}
\varphi^+_{v,\ell}(r)=H^+_\ell(\eta,kr), \quad r \geq R_a.
\end{equation}
The outgoing Green's function $G_{v,\ell}(r,r')$ for equation \eqref{eq:radial_schroedinger} is defined as a distributional solution to the equation $(L_{v,\ell}-k^2)G_{v,\ell}(\cdot,r')=\delta_{r'}$ specified by the following boundary conditions at fixed $r' > 0$:
\begin{equation*}
G_{v,\ell}(r,r') = O(r^{\ell+1}), \; r \to+0, \quad G_{v,\ell}(r,r') = c H^+_\ell(\eta,kr), \; r \to + \infty,
\end{equation*}
for some non-zero constant $c = c(r',\eta,k)$. If the regular solution $\varphi_{v,\ell}(r)$ and the outgoing solution $\varphi^+_{v,\ell}(r)$ are linearly independent, this Green's function exists, is unique and is given by the explicit formula
\begin{equation}\label{eq:radial_green_definition}
G_{v,\ell}(r,r') = -\frac{\varphi_{v,\ell}(r_<) \varphi^+_{v,\ell}(r_>)}{[\varphi_{v,\ell},\varphi^+_{v,\ell}]}, \quad r, r' > 0,
\end{equation}
where $r_< = \min(r,r')$, $r_> = \max(r,r')$ and the Wronskian
\begin{equation}\label{eq:wronskian}
[\varphi_{v,\ell},\varphi^+_{v,\ell}] = \varphi_{v,\ell}(r)\frac{\partial \varphi^+_{v,\ell}(r)}{\partial r} - \frac{\partial \varphi_{v,\ell}(r)}{\partial r}\varphi^+_{v,\ell}(r)
\end{equation}
is independent of $r$. Note that formula \eqref{eq:radial_green_definition} is a standard result from the theory of Sturm-Liouville problems, see, e.g., \cite[p. 158, formula (5.65)]{teschl2012ordinary}.
\begin{lemma}\label{thm:radial_green} If $k \in \mathbb R_+\setminus\Sigma^P_v$, then $G_{v,\ell}$ is well-defined and is given in the region $r \geq R_a$, $r' \geq R_a$ by the formula
\begin{equation}\label{eq:radial_green}
G_{v,\ell}(r,r') = \frac{i}{2k}\left( H^-_\ell(\eta,kr_<) - s_{v,\ell} H^+_\ell(\eta,kr_<) \right) H^+_\ell(\eta,kr_>), \quad r, r' \geq R_a.
\end{equation}
\end{lemma}
\begin{proof} It follows from \Cref{thm:regular_solution_expansion} that under the assumption $k \in \mathbb R_+\setminus \Sigma^P_v$ the functions $\varphi_{v,\ell}(r)$ and $H^+_\ell(\eta,kr)$ are linearly independent in the region $r \geq R_a$. Using the relation $[H^-_\ell(\eta,kr),H^+_\ell(\eta,kr)]=2ik$, given in \cite{olver2010nist}, and formula \eqref{eq:regular_solution_expansion} we get $[\varphi_{v,\ell},\varphi^+_{v,\ell}] = 2ikb_{v,\ell}$. Together with \eqref{eq:radial_green_definition}, this implies \eqref{eq:radial_green}.
\end{proof}
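The Wronskian identity $[H^-_\ell(\eta,kr),H^+_\ell(\eta,kr)]=2ik$ used in the proof is easy to probe numerically. The following sketch assumes the mpmath library, which implements the regular and irregular Coulomb wave functions $F_\ell$, $G_\ell$ with $H^\pm_\ell = G_\ell \pm iF_\ell$; all numerical values are arbitrary test values.

```python
import mpmath as mp

# Numerical check of the Wronskian identity [H^-, H^+] = 2ik used in the
# proof above; H^{+/-}_l = G_l +/- i F_l in terms of the regular and
# irregular Coulomb wave functions (l, eta, k, r are arbitrary test values).
ell, eta, k = 1, 0.5, 2.0
Hp = lambda r: mp.coulombg(ell, eta, k*r) + 1j*mp.coulombf(ell, eta, k*r)
Hm = lambda r: mp.coulombg(ell, eta, k*r) - 1j*mp.coulombf(ell, eta, k*r)

r = 3.0
wronskian = Hm(r)*mp.diff(Hp, r) - mp.diff(Hm, r)*Hp(r)   # expect 2ik = 4i
```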
Next we shall show how the diagonal values $G_{v,\ell}(R,R)$ can be extracted from the Green's function $G_v$ restricted to $M^4_R$.
First recall that the Legendre polynomials $P_\ell$ can be defined using the formal generating identity \cite{szego1939orthogonal}
\begin{equation}\label{eq:legendre}
\frac{1}{\sqrt{1- 2st + t^2}} = \sum_{\ell=0}^\infty P_\ell(s) t^\ell.
\end{equation}
Using the Laplace formula \cite[Theorem 8.21.2]{szego1939orthogonal} and the Dirichlet convergence test one can show that at $t=1$ this series converges pointwise for all $s \in (-1,1)$. Besides, the Legendre polynomials form a complete orthogonal system in $L^2(-1,1)$ such that
\begin{equation}\label{eq:legendre_normalization}
\int_{-1}^1 P_\ell(s)P_m(s) \, ds = \frac{2}{2\ell+1} \delta_{\ell m},
\end{equation}
where $\delta_{\ell m}$ is the Kronecker delta.
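Both facts can be probed numerically; the sketch below (assuming scipy, with arbitrary test values) sums the generating series at $t=1$, which converges slowly since the convergence is only conditional, and evaluates two orthogonality integrals.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

# Numerical probe of the generating identity at t = 1 and of the Legendre
# normalization; s = 0.3 and the degrees below are arbitrary test values.
s = 0.3
partial_sum = sum(eval_legendre(l, s) for l in range(5000))
limit = 1.0/np.sqrt(2.0 - 2.0*s)       # conditionally convergent, slow

overlap, _ = quad(lambda x: eval_legendre(4, x)*eval_legendre(3, x), -1, 1)
norm, _ = quad(lambda x: eval_legendre(4, x)**2, -1, 1)   # expect 2/(2*4+1)
```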
Now we recall \cite{agmon1992analyticity} that at fixed $k \in \mathbb R_+\setminus\Sigma^P_v$ the Green's function $G_v(x,x')$ is continuous outside of the diagonal $x=x'$ and $G_v(x,x')=O(|x-x'|^{-1})$ as $x\to x'$. Besides, the series expansion
\begin{equation}\label{eq:multipole_expansion}
G_v(x, x') = \frac{1}{4\pi R^2} \sum_{\ell=0}^\infty (2\ell+1) G_{v,\ell}(R,R)P_\ell(x \cdot x' / R^2), \quad (x,x')\in M^4_R,
\end{equation}
converges for $x \neq \pm x'$ and $G_{v,\ell}(R,R)$ has the asymptotics
\begin{equation}\label{eq:partial_green_asymptotics}
G_{v,\ell}(R,R) = \frac{R}{2\ell+1}\bigl(1 + O(\tfrac 1 \ell ) \bigr), \quad \ell \to + \infty.
\end{equation}
Note that series expansion \eqref{eq:multipole_expansion} follows from the spherical harmonics expansion \eqref{eq:green_harmonics_expansion}, taking into account the well known addition theorem (see, e.g., \cite{agmon1992analyticity}):
\begin{equation*}
P_\ell(\vartheta \cdot \vartheta') = \frac{4\pi}{2\ell+1} \sum_{m=-\ell}^\ell Y_{\ell}^m(\vartheta) \overline{Y^m_\ell(\vartheta')}, \quad \vartheta, \vartheta' \in S^2_1.
\end{equation*}
\begin{lemma}\label{thm:green_separation} Let $k \in \mathbb R_+ \setminus \Sigma^P_v$ and fix $x'$, $x'' \in S^2_R$ such that $x' \cdot x'' = 0$. Then for each $\ell \geq 0$
\begin{gather*}
G_{v,\ell}(R,R) = \frac{R}{2\ell+1} + 2\pi R^2 \int_0^\pi g_{v,R}(\cos\theta)P_\ell(\cos\theta) \sin\theta \, d\theta, \\
g_{v,R}(\cos \theta)=G_v(x'\cos \theta + x''\sin \theta,x') - \frac{1}{4\pi R}\frac{1}{\sqrt{2-2\cos\theta}}, \; \theta\in(0,\pi).
\end{gather*}
\end{lemma}
\begin{proof} Using \eqref{eq:legendre} and \eqref{eq:multipole_expansion} we get a pointwise convergent series expansion
\begin{equation*}
g_{v,R}(s) = \frac{1}{4\pi R^2}\sum_{\ell=0}^\infty (2\ell+1)\bigl(G_{v,\ell}(R,R)-\tfrac{R}{2\ell+1} \bigr) P_\ell(s), \quad s \in (-1,1),
\end{equation*}
which, in view of \eqref{eq:legendre_normalization} and \eqref{eq:partial_green_asymptotics}, also converges in $L^2(-1,1)$. Recalling that Legendre polynomials form a complete orthogonal system in $L^2(-1,1)$, we get \Cref{thm:green_separation}.
\end{proof}
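In the Coulomb-free case $\alpha = 0$, $v = 0$ everything is explicit: $G_0(x,x') = e^{ik|x-x'|}/(4\pi|x-x'|)$ and $G_{0,\ell}(R,R) = ikR^2 j_\ell(kR)h^{(1)}_\ell(kR)$, so the Legendre projection of the regularized restriction $g_{v,R}$ can be checked directly. The sketch below assumes scipy; $k$, $R$, $\ell$ are arbitrary test values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre, spherical_jn, spherical_yn

# Free-space check (v = 0, alpha = 0): G_0(x,x') = exp(ik|x-x'|)/(4 pi |x-x'|)
# and G_{0,l}(R,R) = i k R^2 j_l(kR) h_l^(1)(kR).  Projecting the regularized
# restriction g onto P_l must return G_{0,l}(R,R); k, R, l are test values.
k, R, ell = 2.0, 1.5, 3

def g(s):
    # g(cos theta) with chord length |x - x'| = R*sqrt(2 - 2*cos theta)
    d = R*np.sqrt(2.0 - 2.0*s)
    return (np.exp(1j*k*d) - 1.0)/(4.0*np.pi*d)

re, _ = quad(lambda s: (g(s)*eval_legendre(ell, s)).real, -1, 1)
im, _ = quad(lambda s: (g(s)*eval_legendre(ell, s)).imag, -1, 1)
G_proj = R/(2*ell + 1) + 2.0*np.pi*R**2*(re + 1j*im)

h1 = spherical_jn(ell, k*R) + 1j*spherical_yn(ell, k*R)   # h_l^(1)(kR)
G_exact = 1j*k*R**2*spherical_jn(ell, k*R)*h1
```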
\subsection{Extracting the Green's function from cross correlations}\label{sec:proof_green_extraction}
In this subsection we prove \Cref{thm:green_extraction}. First, recall that the Green's function $G_{v,\ell}$ satisfies the reciprocity relation
\begin{equation}\label{eq:green_reciprocity}
G_{v,\ell}(r_1,r_2) = G_{v,\ell}(r_2,r_1), \quad r_1, r_2 > 0.
\end{equation}
To prove it, consider the equations
\begin{equation*}
\bigl(L_{v,\ell} - k^2 \bigr) G_{v,\ell}(\cdot,r_1) = \delta_{r_1}, \quad \bigl(L_{v,\ell} - k^2 \bigr) G_{v,\ell}(\cdot,r_2) = \delta_{r_2}.
\end{equation*}
Multiplying the first equation by $G_{v,\ell}(\cdot,r_2)$, subtracting the second equation multiplied by $G_{v,\ell}(\cdot,r_1)$, and integrating over $(0,R)$, $R > r_1$, $r_2$, we obtain
\begin{equation*}
\lbrack G_{v,\ell}(\cdot,r_1),G_{v,\ell}(\cdot,r_2)\rbrack \bigr|^R_{+0} = G_{v,\ell}(r_1,r_2) - G_{v,\ell}(r_2,r_1),
\end{equation*}
where $[-,-]$ denotes the Wronskian defined according to \eqref{eq:wronskian} and the notation $x|^b_a = x(b)-x(a)$ is used. The next step is to show that the term on the left hand side vanishes. Using formulas \eqref{eq:regular_asymptotics}, \eqref{eq:radial_green_definition} and the estimate $\tfrac{\partial}{\partial r}\varphi_{v,\ell}(r) = O(r^\ell)$, $r\to+0$, given in \cite[Theorem 3.3]{agmon1992analyticity}, we get $\lbrack G_{v,\ell}(\cdot,r_1),G_{v,\ell}(\cdot,r_2)\rbrack(+0)=0$. Using formulas \eqref{eq:coulomb_asymptotics}, \eqref{eq:coulomb_theta}, \eqref{eq:coulomb_radiation}, \eqref{eq:radial_green_definition}, we also get $\lbrack G_{v,\ell}(\cdot,r_1),G_{v,\ell}(\cdot,r_2)\rbrack(R)=O(\tfrac 1 R)$, $R\to+\infty$. As $R$ tends to $+\infty$, we get formula \eqref{eq:green_reciprocity}.
To prove \eqref{eq:green_extraction}, we follow a similar scheme. We start from the equations
\begin{equation*}
\bigl(\overline{L_{v,\ell}} - k^2 \bigr) \overline{G_{v,\ell}(\cdot,r_1)} = \delta_{r_1}, \quad \bigl(L_{v,\ell} - k^2 \bigr) G_{v,\ell}(\cdot,r_2) = \delta_{r_2}.
\end{equation*}
Multiplying the first equation by $G_{v,\ell}(\cdot,r_2)$, subtracting the second equation multiplied by $\overline{G_{v,\ell}(\cdot,r_1)}$, integrating over $(0,R)$, $R > r_1$, $r_2$, we get
\begin{equation}\label{eq:green_extraction_1}
\begin{gathered}
\lbrack\overline{G_{v,\ell}(\cdot,r_1)},G_{v,\ell}(\cdot,r_2)\rbrack\bigr|^R_{+0} - 2i \int_0^R \Im \widetilde v(r)\overline{G_{v,\ell}(r,r_1)}G_{v,\ell}(r,r_2) dr \\ = G_{v,\ell}(r_1,r_2) - \overline{G_{v,\ell}(r_2,r_1)}.
\end{gathered}
\end{equation}
Similarly to the proof of formula \eqref{eq:green_reciprocity}, one can show that the Wronskian vanishes at zero and that
\begin{equation*}
[\overline{G_{v,\ell}(\cdot,r_1)},G_{v,\ell}(\cdot,r_2)](R) = 2ik \overline{G_{v,\ell}(R,r_1)}G_{v,\ell}(R,r_2) + O(\tfrac 1 R), \quad R \to + \infty.
\end{equation*}
Combining this with formulas \eqref{eq:radial_radiation_solution}, \eqref{eq:sources_equipartitioning}, \eqref{eq:green_extraction_1}, \eqref{eq:green_reciprocity} to compute $\mathbb E\bigl( \overline{\psi^m_\ell(r_1)}\psi^m_\ell(r_2) \bigr)$, we get formula \eqref{eq:green_extraction}, which concludes the proof of \Cref{thm:green_extraction}.
\subsection{Recovering the scattering matrix elements}\label{sec:recovering_scattering_matrix}
In this subsection we shall show that the scattering matrix elements $s_{v,\ell}$ for the potential $v$ can be extracted from the Green's function $G_v$ on $M^4_{R_o}$ or from its imaginary part $\Im G_v$ only on $M^4_{R_o} \cup M^4_{R_o^\dagger}$, where $R_a \leq R_o < R_o^\dagger$.
Recall that the Coulomb function $H^+_\ell(\eta,kr)$ does not vanish for $r > 0$, since $H^+_\ell(\eta,kr)$ and its complex conjugate $H^-_\ell(\eta,kr)$ form a basis of solutions of equation \eqref{eq:radial_schroedinger} with $\widetilde v(r) = \alpha/r$. Together with Lemmas \ref{thm:radial_green} and \ref{thm:green_separation}, this leads to the following result.
\begin{lemma}\label{thm:scattering_matrix_extraction_a} Let $v_1$, $v_2$ be two complex-valued potentials satisfying \eqref{eq:potential}, let $k \in \mathbb R_+\setminus(\Sigma^P_{v_1} \cup \Sigma^P_{v_2})$ be fixed and let $R_o \geq R_a$. Suppose that $G_{v_1} = G_{v_2}$ on $M^4_{R_o}$. Then $G_{v_1,\ell}(R_o,R_o)=G_{v_2,\ell}(R_o,R_o)$ and $s_{v_1,\ell}=s_{v_2,\ell}$ for all $\ell \geq 0$.
\end{lemma}
Next we shall show that the scattering matrix elements $s_{v,\ell}$ can be extracted from $\Im G_v$ only, i.e., without knowing $\Re G_v$. However, the values of $\Im G_v$ must be given not only on $M^4_{R_o}$ but also on $M^4_{R_o^\dagger}$ for some $R_o^\dagger > R_o$.
Note that formula \eqref{eq:radial_green} implies that
\begin{subequations}
\begin{gather}\label{eq:scattering_matrix_equation}
\cos\vartheta_\ell(\eta,kr) \Re s_{v,\ell} - \sin \vartheta_\ell(\eta,kr) \Im s_{v,\ell}
= \frac{|H^+_\ell(\eta,kr)|^2-2k\Im G_{v,\ell}(r,r)}{|H^+_\ell(\eta,kr)|^2}, \\
H^+_\ell(\eta,kr) = |H^+_\ell(\eta,kr)|\exp\bigl(i\vartheta_\ell(\eta,kr)/2\bigr),\label{eq:coulomb_modulus_phase}
\end{gather}
\end{subequations}
where $r \geq R_a$, $\eta = \alpha/(2k)$. Using \eqref{eq:scattering_matrix_equation} with $r = R_o$, $R_o^\dagger$ one can see that $s_{v,\ell}$ is uniquely determined from $\Im G_{v,\ell}(R_o,R_o)$ and $\Im G_{v,\ell}(R_o^\dagger,R_o^\dagger)$ if and only if $\sin(\vartheta_\ell(\eta,kR_o^\dagger)-\vartheta_\ell(\eta,kR_o)) \neq 0$. This justifies the definition of the singular set
\begin{equation}\label{eq:sigma_s}
\Sigma^S_{\alpha,k,R_o} = \bigl\{ r \in [R_o,+\infty) \colon \sin(\vartheta_\ell(\eta,kr)-\vartheta_\ell(\eta,kR_o))=0 \; \text{for some $\ell \geq 0$} \bigr\}.
\end{equation}
\begin{lemma}\label{thm:scattering_matrix_extraction_b} Let $v_1$, $v_2$ be two complex-valued potentials satisfying \eqref{eq:potential}, let $k \in \mathbb R_+ \setminus (\Sigma^P_{v_1}\cup\Sigma^P_{v_2})$, and let $R_o^\dagger > R_o \geq R_a$ be such that $R_o^\dagger \not\in \Sigma^S_{\alpha,k,R_o}$. Suppose that $\Im G_{v_1}=\Im G_{v_2}$ on $M^4_{R_o} \cup M^4_{R_o^\dagger}$. Then $s_{v_1,\ell}=s_{v_2,\ell}$ for all $\ell \geq 0$. Besides, the set $\Sigma^S_{\alpha,k,R_o}$ is discrete and does not have finite accumulation points.
\end{lemma}
\begin{proof} Under the assumptions of \Cref{thm:scattering_matrix_extraction_b} it follows from \Cref{thm:green_separation} that for all $\ell \geq 0$ the equality $\Im G_{v_1,\ell}(r,r) = \Im G_{v_2,\ell}(r,r)$ holds true for $r = R_o$, $R_o^\dagger$. Together with the discussion before \Cref{thm:scattering_matrix_extraction_b} it implies that $s_{v_1,\ell}=s_{v_2,\ell}$ for all $\ell \geq 0$. This concludes the proof of the first assertion of \Cref{thm:scattering_matrix_extraction_b}.
Next we shall prove the second assertion. Using formulas (33.2.11), (33.5.8), (33.5.9) of \cite{olver2010nist} one can see that
\begin{equation}\label{eq:determinant_asymptotics}
\sin\bigl(\vartheta_\ell(\eta,kr)-\vartheta_\ell(\eta,kR_o)\bigr) \sim e^{-1-\pi\eta} \left(\frac{ekr}{2\ell}\right)^{2\ell+1}, \quad \ell \to + \infty,
\end{equation}
locally uniformly in $r\in\mathbb R_+$. Besides, it follows from formula (33.2.11) of \cite{olver2010nist} and from the discussion following it that at fixed $\ell \geq 0$ the set of solutions $r \in \mathbb R_+$ of the equation $\sin(\vartheta_\ell(\eta,kr)-\vartheta_\ell(\eta,kR_o))=0$ is the zero set of a non-zero analytic function and is therefore discrete without finite accumulation points. Together with \eqref{eq:determinant_asymptotics}, this concludes the proof of \Cref{thm:scattering_matrix_extraction_b}.
\end{proof}
\subsection{Recovering the Dirichlet-to-Neumann map}\label{sec:recovering_dtn}
In this subsection we shall show that the scattering matrix elements $s_{v,\ell}$ uniquely determine the Dirichlet-to-Neumann map for $v$ in some ball $B^3_R$, $R \geq R_a$.
Note that \Cref{thm:regular_solution} justifies the definition of the singular set
\begin{equation*}
\Sigma^D_{v,k} = \bigl\{ R > 0 \colon \varphi_{v,\ell}(R) = 0 \; \text{for some $\ell \geq 0$} \bigr\}.
\end{equation*}
\begin{lemma}\label{thm:dirichlet_uniqueness} Let $v$ be a complex-valued potential satisfying \eqref{eq:potential} and let $k \in \mathbb R_+$ be fixed. Then $R \in \mathbb R_+\setminus \Sigma^D_{v,k}$ if and only if for any $g \in H^{3/2}(S^2_R)$ the Dirichlet problem
\begin{equation}\label{eq:dirichlet_problem}
(L_v - k^2) \psi = 0 \; \text{in $B^3_R$}, \quad \psi|_{S^2_R} = g,
\end{equation}
is uniquely solvable for $\psi \in H^2(B^3_R)$. Besides, the set $\Sigma^D_{v,k}$ is discrete and does not have finite accumulation points.
\end{lemma}
\begin{proof} It follows from \Cref{thm:regular_solution} that if the Dirichlet problem \eqref{eq:dirichlet_problem} is uniquely solvable for any $g \in H^{3/2}(S^2_R)$ then $R \not\in \Sigma^D_{v,k}$.
Now suppose that $R \not\in \Sigma^D_{v,k}$. First we shall show that \eqref{eq:dirichlet_problem} does not admit a non-zero solution for $g = 0$. Assuming that $\psi \in H^2_0(B^3_R)$ is a solution to \eqref{eq:dirichlet_problem} with $g=0$ one can see (taking into account the spherical symmetry of $v$) that its partial wave components
\begin{equation*}
\psi^m_\ell(x) = Y^m_\ell(x/|x|) \int_{S^2_1} \psi(|x|\omega) \overline Y{}^m_\ell(\omega) \, dS_\omega
\end{equation*}
belong to $H^1_0(B^3_R)$ and satisfy \eqref{eq:dirichlet_problem} weakly. By elliptic regularity up to the boundary \cite{evans2010partial}, they also belong to $H^2(B^3_R)$. Then it follows from \Cref{thm:regular_solution} that all $\psi^m_\ell$ vanish and $\psi = 0$.
Next we recall that the operator
\begin{equation}\label{eq:pde_operator}
\psi \mapsto \bigl( (L_v-k^2) \psi, \psi|_{S^2_R} \bigr)
\end{equation}
is Fredholm of index zero from $H^2(B^3_R)$ to $L^2(B^3_R) \times H^{3/2}(S^2_R)$, see \cite{grisvard1985elliptic}. Together with already established uniqueness for the Dirichlet problem \eqref{eq:dirichlet_problem} for $R \not\in \Sigma^D_{v,k}$, this proves the first assertion of \Cref{thm:dirichlet_uniqueness}.
Next we shall show that $\Sigma^D_{v,k}$ is a discrete set without finite accumulation points. It can be shown \cite{agmon1992analyticity} that the regular solution $\varphi_{v,\ell}(r)$ defined by \eqref{eq:radial_schroedinger}, \eqref{eq:regular_asymptotics} satisfies the estimate
\begin{equation*}
\varphi_{v,\ell}(r) = r^{\ell+1} ( 1 + O(\tfrac 1 \ell)), \quad \ell \to + \infty,
\end{equation*}
uniformly in $r \in (0,R]$ at fixed $R>0$. Besides, zeros of each $\varphi_{v,\ell}$ are discrete and do not have finite accumulation points. This concludes the proof of the second assertion of \Cref{thm:dirichlet_uniqueness}.
\end{proof}
\begin{remark}\label{rmk:solution_estimate} Recalling from \cite{grisvard1985elliptic} that the operator \eqref{eq:pde_operator} is Fredholm of index zero from $H^2(B^3_R)$ to $L^2(B^3_R) \times H^{3/2}(S^2_R)$ one can see that if the potential $v \in L^\infty(B^3_R)$ is such that the Dirichlet problem \eqref{eq:dirichlet_problem} with $g \in H^{3/2}(S^2_R)$ is uniquely solvable for $\psi \in H^2(B^3_R)$, then
\begin{equation*}
\|\psi\|_{H^2(B^3_R)} \leq C_{v,k,R} \|g\|_{H^{3/2}(S^2_R)}
\end{equation*}
for some constant $C_{v,k,R}>0$. In addition, the trace theorem \cite{grisvard1985elliptic} leads to the estimate
\begin{equation*}
\|\tfrac{\partial \psi}{\partial r}\|_{H^{1/2}(S^2_R)} \leq C'_{v,k,R}\|g\|_{H^{3/2}(S^2_R)}
\end{equation*}
for some constant $C'_{v,k,R}>0$, where $\tfrac{\partial\psi}{\partial r} = \tfrac{x}{|x|}\nabla \psi$ is the derivative of $\psi$ in the radial direction.
\end{remark}
Under the assumption that the Dirichlet problem \eqref{eq:dirichlet_problem} is uniquely solvable for $\psi \in H^2(B^3_R)$ for all $g \in H^{3/2}(S^2_R)$, we define the Dirichlet-to-Neumann map $\Lambda_{v,R} \in \mathcal L\bigl(H^{3/2}(S^2_R),H^{1/2}(S^2_R)\bigr)$ by $\Lambda_{v,R} g = \tfrac{\partial \psi}{\partial r}|_{S^2_R}$, where $\psi$ is the solution of \eqref{eq:dirichlet_problem}. Next we shall show that the partial scattering matrix elements $s_{v,\ell}$ known for all $\ell \geq 0$ uniquely determine the Dirichlet-to-Neumann map $\Lambda_{v,R}$.
\begin{lemma}\label{thm:dtn_extraction} Let $v_1$, $v_2$ be two complex-valued potentials satisfying \eqref{eq:potential} and let $k \in \mathbb R_+ \setminus (\Sigma^P_{v_1} \cup \Sigma^P_{v_2})$ be fixed. Besides, let $R \in [R_a,+\infty) \setminus (\Sigma^D_{v_1,k}\cup\Sigma^D_{v_2,k})$. Suppose that $s_{v_1,\ell}=s_{v_2,\ell}$ for all $\ell\geq 0$. Then $\Lambda_{v_1,R}=\Lambda_{v_2,R}$.
\end{lemma}
\begin{proof} It follows from \Cref{thm:regular_solution,thm:regular_solution_expansion} and from continuity of $\varphi_{v,\ell}(r)$ at $r=R$ that $\Lambda_{v_1,R}|_{\mathcal H_{\ell,R}} = \Lambda_{v_2,R}|_{\mathcal H_{\ell,R}}$, where $\mathcal H_{\ell,R}$ denotes the space of restrictions to $S^2_R$ of harmonic polynomials of degree $\ell$, spanned by the spherical harmonics $Y^m_\ell = Y^m_\ell(\tfrac{x}{|x|})$, $|m|\leq \ell$. More precisely, the following explicit formula for $\Lambda_{v_j,R}|_{\mathcal H_{\ell,R}}$ is valid:
\begin{equation}\label{eq:dtn_on_harmonics}
\Lambda_{v_j,R} Y^m_\ell = R\frac{\bigl[\tfrac{\partial}{\partial r}(\tfrac 1 r H^-_\ell(\eta,kr)) - s_{v_j,\ell} \tfrac{\partial}{\partial r}(\tfrac 1 r H^+_\ell(\eta,kr)) \bigr]_{r=R}}{H^-_\ell(\eta,kR) - s_{v_j,\ell} H^+_\ell(\eta,kR)} Y^m_\ell,
\end{equation}
where $\eta = \alpha/(2k)$ and the denominator is non-zero as $R \not\in \Sigma^D_{v_j,k}$.
Now let $g \in H^{3/2}(S^2_R)$ and denote by $\psi_{v_j} \in H^2(B^3_R)$ the unique solution of the Dirichlet problem \eqref{eq:dirichlet_problem} with $v = v_j$. Besides, define $g_N$ by
\begin{equation*}
g_N = \sum_{\ell=0}^N \sum_{|m|\leq\ell}g^m_\ell Y^m_\ell, \quad g^m_\ell = \int_{S^2_1}g(R\omega)\overline Y{}^m_\ell(\omega)dS_\omega,
\end{equation*}
so that $g_N \to g$ in $H^{3/2}(S^2_R)$ and, according to \Cref{rmk:solution_estimate}, $\Lambda_{v_j,R}g_N \to \Lambda_{v_j,R}g$ in $H^{1/2}(S^2_R)$ as $N\to\infty$. Together with \eqref{eq:dtn_on_harmonics}, which shows that $\Lambda_{v_1,R}g_N = \Lambda_{v_2,R}g_N$, this implies that $\Lambda_{v_1,R}g = \Lambda_{v_2,R}g $ and concludes the proof of \Cref{thm:dtn_extraction}.
\end{proof}
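In the Coulomb-free case ($\alpha = 0$, hence $\eta = 0$, and $s_{v,\ell} = 1$ for the zero potential) formula \eqref{eq:dtn_on_harmonics} must reduce to the classical Helmholtz Dirichlet-to-Neumann eigenvalue $k j'_\ell(kR)/j_\ell(kR)$ on $Y^m_\ell$. The sketch below checks this with scipy, using $H^\pm_\ell(0,kr) = -kr\,y_\ell(kr) \pm ikr\,j_\ell(kr)$; $k$, $R$, $\ell$ are arbitrary test values.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

# Coulomb-free check of the DtN formula: for alpha = 0 one has
# H^{+/-}_l(0, kr) = -kr*y_l(kr) +/- i*kr*j_l(kr), and for the zero potential
# s = 1; the formula should give the Helmholtz eigenvalue k*j_l'(kR)/j_l(kR).
k, R, ell, s = 2.0, 1.1, 3, 1.0
eps = 1e-6

def H(r, sign):
    kr = k*r
    return -kr*spherical_yn(ell, kr) + sign*1j*kr*spherical_jn(ell, kr)

def d_H_over_r(r, sign):
    # central-difference derivative of (1/r)*H_l(0, kr)
    return (H(r+eps, sign)/(r+eps) - H(r-eps, sign)/(r-eps))/(2.0*eps)

dtn = R*(d_H_over_r(R, -1) - s*d_H_over_r(R, +1))/(H(R, -1) - s*H(R, +1))
helmholtz = k*spherical_jn(ell, k*R, derivative=True)/spherical_jn(ell, k*R)
```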
\subsection{Demonstration of the uniqueness theorem}\label{sec:combining_results_uniqueness}
Now we combine the preliminary results established in \cref{sec:regular_solutions,sec:green_functions,sec:recovering_scattering_matrix,sec:recovering_dtn} to prove \Cref{thm:uniqueness}.
Under the assumptions of \Cref{thm:uniqueness} it follows from \Cref{thm:scattering_matrix_extraction_a} in case (A) and from \Cref{thm:scattering_matrix_extraction_b} in case (B) that $s_{v_1,\ell} = s_{v_2,\ell}$ for all $\ell \geq 0$. Using \Cref{thm:dtn_extraction} we conclude that $\Lambda_{v_1,R} = \Lambda_{v_2,R}$ for any $R$ in the non-empty set $[R_a,\infty) \setminus (\Sigma^D_{v_1,k} \cup \Sigma^D_{v_2,k})$. It follows from the uniqueness theorem of \cite{novikov1988multidimensional}, whose proof does not use the assumption that the potential is real-valued, that $v_1 = v_2$ a.e. This proves \Cref{thm:uniqueness}.
\section{Reconstruction}\label{sec:reconstruction}
\subsection{Reconstruction scheme for exact simulated data}\label{sec:algorithm}
The possibility of using measurements of the solar acoustic field at two heights above the surface to recover the sound speed, density and attenuation inside of the Sun is confirmed by our numerical simulations. In this subsection we briefly describe the reconstruction algorithm that we use.
We assume that the unknown solar parameters $q = (c,\rho,\gamma)$ are perturbations of some known background quantities $q^0 =(c^0,\rho^0,\gamma^0)$ such that
\begin{equation}\label{eq:I_definition}
\supp(q-q^0)\subseteq I, \quad I = [A_1,A_2] \subseteq (0,R_\odot],
\end{equation}
where $R_\odot = 6.957 \times 10^5$ km is the solar radius, and we assume that both parameter sets $q$ and $q^0$ satisfy \eqref{eq:spaces}, \eqref{eq:atmospheric_values}. Let $\Omega \subset \mathbb R_+$ be a finite set of admissible frequencies such that $\Omega \cap (\Sigma^P_q \cup \Sigma^P_{q^0}) = \varnothing$, where
\begin{equation*}
\Sigma^P_q = \bigl(0, \tfrac{c_0}{2H}\bigr] \cup \bigl\{ \omega > \tfrac{c_0}{2H} \colon k_\omega \in \Sigma^P_{v_\omega} \bigr\},
\end{equation*}
$\Sigma^P_{v_\omega}$ is defined according to \eqref{eq:Sigma_definition}, and the potentials $v_\omega$ and $v^0_{\omega}$ are defined using formula \eqref{eq:acoustic_kv} with parameters $q$ and $q^0$, respectively. We recall that $c_0/(2H)$ is the acoustic cutoff frequency, which separates the regime of oscillations at eigenfrequencies from the scattering regime.
Put $G_q(h,\ell,\omega) = G_{v_\omega,\ell}(R_\odot+h,R_\odot+h)$. As initial data for inversions from exact data we use the imaginary part of the Green's function $\Im G_q(h,\ell,\omega)$ measured at two different non-negative altitudes $h \in \{h_1,h_2\}$, at angular degrees $\ell \in \{0,\dots,\ell_\text{max}\}$, and at all admissible frequencies $\omega \in \Omega$. From this data we recover the solar parameters $q$ as follows.
The first step of the algorithm consists in recovering the scattering matrix elements $s_q(\ell,\omega) = s_{v_\omega,\ell}$, $\ell \in \{0,\dots,\ell_\text{max}\}$, $\omega \in \Omega$, which are defined according to \Cref{thm:regular_solution_expansion}. This reconstruction is done by considering equation \eqref{eq:scattering_matrix_equation} with $r = R_\odot + h$, $h \in \{h_1,h_2\}$ as a linear system for finding $\Re s_q(\ell,\omega)$, $\Im s_q(\ell,\omega)$ at each fixed $\ell$, $\omega$.
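This first step can be illustrated end-to-end for the zero potential ($\alpha = 0$), where $H^+_\ell(0,kr) = ikr\,h^{(1)}_\ell(kr)$, $\Im G_{0,\ell}(r,r) = kr^2 j_\ell(kr)^2$, and the true scattering matrix element is $s = 1$. The sketch below assumes scipy; the wavenumber and the two radii, which play the role of the observation heights, are arbitrary test values.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

# Illustration of the first inversion step for the zero potential (alpha = 0):
# H^+_l(0, kr) = i*kr*h_l^(1)(kr), Im G_{0,l}(r,r) = k*r^2*j_l(kr)^2, and the
# recovered scattering matrix element must be s = 1.  k, l, r1, r2 are
# arbitrary test values.
k, ell = 3.0, 2
radii = (1.0, 1.3)

def Hp(r):
    kr = k*r
    return 1j*kr*(spherical_jn(ell, kr) + 1j*spherical_yn(ell, kr))

def ImG(r):
    return k*r**2*spherical_jn(ell, k*r)**2

rows, rhs = [], []
for r in radii:
    theta = 2.0*np.angle(Hp(r))                         # phase of (H^+)^2
    rows.append([np.cos(theta), -np.sin(theta)])
    rhs.append((abs(Hp(r))**2 - 2.0*k*ImG(r))/abs(Hp(r))**2)
re_s, im_s = np.linalg.solve(np.array(rows), np.array(rhs))
s = complex(re_s, im_s)                                 # expect s = 1
```

The $2\times 2$ system is solvable here because the two phases differ by a quantity whose sine is non-zero, in line with the definition of $\Sigma^S_{\alpha,k,R_o}$.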
At the next step the scattering matrix elements $s_q(\ell,\omega)$ are used to recover the map $u_q \colon I \to \mathbb R^3$ (where $I$ is the interval defined in \eqref{eq:I_definition}) which is defined as follows:
\begin{equation}\label{eq:u_definition}
\begin{gathered}
u_q = (u_{q,1},u_{q,2},u_{q,3}), \\
u_{q,1} = \frac{1}{c_0^2}-\frac{1}{c^2}, \quad u_{q,2} = \rho^{\frac 1 2} \left( \frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr}\right)\rho^{-\frac 1 2} - \frac{1}{4H^2}, \quad u_{q,3} = \frac{\gamma}{c^2},
\end{gathered}
\end{equation}
and such that
\begin{equation*}
\omega^2 u_{q,1} + u_{q,2} - 2i\omega u_{q,3} = v_\omega.
\end{equation*}
This reconstruction is done by applying the iteratively regularized Gauss-Newton method, going back to \cite{bakushinskii1992problem}, to the forward map
\begin{equation}\label{eq:forward_map}
F \colon L^2(I,\mathbb R^3) \to \mathbb C^{(\ell_\text{max}+1) \times |\Omega|}, \quad F( u_q ) = s_q.
\end{equation}
For more details on the iteratively regularized Gauss-Newton method and for sufficient conditions of its convergence see, e.g., \cite{kaltenbacher2008iterative}.
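A minimal sketch of the IRGNM iteration is given below on a made-up two-parameter forward map (not the solar forward map \eqref{eq:forward_map}, whose evaluation is far more costly): each step solves a Tikhonov-regularized linearization around the current iterate, with the regularization parameter decreased geometrically.

```python
import numpy as np

# Toy IRGNM: u_{n+1} = u_n + (J^T J + a_n I)^{-1} (J^T (y - F(u_n)) + a_n (u_0 - u_n)),
# applied to a made-up, well-posed forward map F (NOT the solar forward map).
F = lambda u: np.array([u[0] + 0.1*u[1]**2, u[1] + 0.1*u[0]**2])
jac = lambda u: np.array([[1.0, 0.2*u[1]], [0.2*u[0], 1.0]])

u_true = np.array([0.8, -0.3])
y = F(u_true)                         # exact data
u0 = np.zeros(2)                      # a-priori guess (plays the role of q^0)
u, alpha = u0.copy(), 1.0
for _ in range(30):
    J = jac(u)
    step = np.linalg.solve(J.T @ J + alpha*np.eye(2),
                           J.T @ (y - F(u)) + alpha*(u0 - u))
    u = u + step
    alpha *= 0.5                      # geometric decay of the regularization
```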
The last step is to determine $q = (c,\rho,\gamma)$ from $u_q$. Note that definitions \eqref{eq:u_definition} lead to the following explicit formulas for determining $c$ and $\gamma$:
\begin{equation*}
c = \bigl( c_0^{-2} - u_{q,1} \bigr)^{-\frac 1 2}, \quad \gamma = c^2 u_{q,3}.
\end{equation*}
Also note that definitions \eqref{eq:u_definition} lead to the following problem for determining $\rho$:
\begin{equation*}
-\left( \frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr}\right)\rho^{-\frac 1 2} + \left(u_{q,2}+\frac{1}{4H^2}\right) \rho^{-\frac 1 2} = 0, \quad \rho(A_{1,2}) = \rho^0(A_{1,2}),
\end{equation*}
which is solved for the unknown function $\rho^{-\frac 1 2}$. This step concludes the reconstruction algorithm from exact data.
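The density step can be sketched with scipy's \texttt{solve\_bvp}. In the sketch below the target density $\rho^\dag(r) = e^{-r}$, the interval $[1,2]$ and the value of $H$ are made-up test data chosen so that $u_{q,2}$ is consistent with $\rho^\dag$; solving the two-point problem then returns $\rho^\dag$ up to discretization error.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Sketch of the density step: solve the two-point problem for w = rho^{-1/2},
#   w'' + (2/r) w' = (u_{q,2}(r) + 1/(4 H^2)) w,   w(A_i) given,
# for made-up test data: rho(r) = exp(-r) on [1, 2], H = 0.5, so that
# u_{q,2}(r) = 1/4 + 1/r - 1/(4 H^2) is consistent with this rho.
A1, A2, H = 1.0, 2.0, 0.5
rho_true = lambda r: np.exp(-r)
u2 = lambda r: 0.25 + 1.0/r - 1.0/(4.0*H**2)

def rhs(r, y):
    w, dw = y
    return np.vstack([dw, (u2(r) + 1.0/(4.0*H**2))*w - (2.0/r)*dw])

def bc(ya, yb):
    return np.array([ya[0] - rho_true(A1)**-0.5, yb[0] - rho_true(A2)**-0.5])

r = np.linspace(A1, A2, 50)
sol = solve_bvp(rhs, bc, r, np.ones((2, r.size)))
rho_rec = sol.sol(r)[0]**-2.0        # recovered density on the grid
```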
\subsection{Numerical example with exact data}\label{sec:example_exact}
We consider the background sound speed and density from the model of \cite{fournier2017atmospheric}, which extends the standard solar model of \cite{christensen1996current} to the upper atmosphere. We suppose that the background attenuation is equal to $\gamma_0 = 102.5$ $\mu$Hz inside of the Sun and decays to zero smoothly in the region $[R_\odot,R_\odot+h_a]$, where $h_a = 500$ km is the height above the photosphere at which the (conventional) interface between the lower and upper atmosphere is located. Note that this approximate value for the background attenuation can be obtained by analysing the observed full width at half maximum (FWHM) of acoustic modes, see \cite[section 7.3]{gizon2017computational} for more details\footnote{Our simulations show that another reasonable choice for the attenuation constant in the Sun is $\gamma_0 = 39.8$ $\mu$Hz. For this attenuation coefficient the difference between the power spectra observed by HMI and the modeled power spectra (after a linear transformation) is minimal.}. We also assume that the unknown perturbations to the background values of solar parameters are supported in the interval $[0.9 R_\odot,0.95 R_\odot]$.
\begin{figure}[h]
\insertimg{.31}{c_exact.pdf}
\insertimg{.31}{rho_exact.pdf}
\insertimg{.31}{gma_exact.pdf}
\caption{Perturbations $\delta c^\dag$, $\delta \rho^\dag$, $\delta \gamma^\dag$ of solar parameters, reconstructed approximations $\delta c$, $\delta \rho$, $\delta \gamma$ and relative $L^2$ reconstruction errors $e(c,c^\dag)$, $e(\rho,\rho^\dag)$, $e(\gamma,\gamma^\dag)$.}\label{fig.all_i}
\end{figure}
The initial data for reconstructions is the imaginary part of the radiation Green's function $\Im G_q(h,\ell,\omega)$ at heights $h \in \{105, 144\}$ (km), angular degrees $\ell \in \{0, \dots, 250\}$, and frequencies $\omega \in \{5.3, 5.4\}$ (mHz). Note that observations of the acoustic field at these heights approximately correspond to measuring Doppler velocities of the line center (three-point approximation) and center of gravity (six-point approximation) of the Fe I absorption line at 6173 \r{A} using the data from the Helioseismic and Magnetic Imager (HMI) as described in \cite{nagashima2014interpreting}.
\Cref{fig.all_i} shows exact profiles of perturbations $\delta c^\dag$, $\delta \rho^\dag$, $\delta \gamma^\dag$, reconstructed profiles of perturbations $\delta c$, $\delta \rho$, $\delta \gamma$, and relative $L^2$ reconstruction errors $e(\delta c,\delta c^\dag)$, $e(\delta \rho,\delta \rho^\dag)$, $e(\delta \gamma,\delta \gamma^\dag)$. Here,
\begin{equation*}
e(f,f^\dag) = \frac{\|f-f^\dag\|_{L^2(I)}}{\|f^\dag\|_{L^2(I)}}.
\end{equation*}
This reconstruction example confirms the uniqueness results of \cref{sec:uniqueness}.
\subsection{Reconstruction scheme for noisy data}\label{sec:algorithm_noisy}
The power spectrum $P^m_{v_\omega,\ell}(r) = \mathbb E|\varphi^m_{v_\omega,\ell}(r)|^2$ introduced in \cref{sec:extracting_green_function} cannot be measured precisely in a real experiment. A standard approach to computing an approximation to the power spectrum is to parse the time series of acoustic oscillations (after an appropriate preprocessing) into $N$ segments of equal duration $T$, compute for each segment the sample power spectrum (periodogram), and then take the arithmetic average of the periodograms $\widehat P^m_{v_\omega,\ell}(r)$. For the definition and properties of the sample power spectrum see, e.g., \cite[section 6.3.1]{jenkins1969spectral}, and for the details on computation of the sample power spectrum from the observed time series of solar oscillations see \cite{larson2015improved}.
The parameter $T$ is chosen to achieve a desired frequency resolution. We recall that for a time series segment of duration $T$ the frequency resolution of the sample power spectrum is equal to $\Delta \omega = 1/T$. Besides, if the time resolution (cadence) is equal to $\Delta t$, then the sample power spectrum can be computed for frequencies up to $1 / (2\Delta t)$.
It is a standard assumption going back to \cite{woodard1984} that
\begin{equation}\label{eq:chi_square}
\frac{2N \widehat P^m_{v_\omega,\ell}(r)}{P^m_{v_\omega,\ell}(r)} \; \text{is $\chi^2(2N)$ distributed},
\end{equation}
where $\chi^2(2N)$ denotes the chi-squared distribution with $2N$ degrees of freedom and $r$, $\omega$, $\ell$, $m$ are fixed. We use relation \eqref{eq:chi_square} to simulate data for inversions with noisy data. This simulated data is then used to extract the diagonal values of the imaginary part of the radial Green's function $\Im G_q(h,\ell,\omega) = \Im G_{v_\omega,\ell}(R_\odot+h,R_\odot+h)$ using formula \eqref{eq:green_extraction} with the $O(\tfrac 1 R)$ term dropped and assuming that $\Pi = 1$. Note that in reality $\Pi$ is a function of frequency which is not directly accessible to measurements; for a possible model of this function see, e.g., \cite[section 7.5]{gizon2017computational}.
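The noise model \eqref{eq:chi_square} is straightforward to simulate; the sketch below uses numpy with a made-up true power value and the paper's $N = 974$, and exhibits the resulting $1/\sqrt N$ relative scatter of the averaged periodograms.

```python
import numpy as np

# Simulating the chi-square noise model: P_hat = P * X / (2N), X ~ chi^2(2N),
# so E[P_hat] = P and std(P_hat) = P/sqrt(N).  P_true is a made-up value;
# N = 974 is the number of three-day segments in eight years of observations.
rng = np.random.default_rng(0)
N, P_true = 974, 2.5
P_hat = P_true*rng.chisquare(2*N, size=200_000)/(2*N)
mean, std = P_hat.mean(), P_hat.std()   # approx P_true and P_true/sqrt(N)
```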
Then the reconstruction proceeds as described in \cref{sec:algorithm}.
\subsection{Numerical examples with noisy data}
We consider a model situation where the solar oscillations are observed for a total period of eight years with the time resolution of 45 s. In this connection, we recall that HMI has been observing the solar oscillations continuously with this cadence since April 30, 2010. We also assume that the sample power spectra are computed from three-day intervals, so that the frequency resolution is equal to 3.86 $\mu$Hz and the maximal frequency is approximately 11.1 mHz.
We simulate $\widehat P_{v_\omega,\ell}^m(R_\odot+h)$ according to \eqref{eq:chi_square} with $N = 974$ (which is the number of three-day intervals constituting eight years of observations) for angular degrees $\ell \in \{0,\dots,250\}$, azimuthal degree $m=0$, observation heights $h \in \{105, 144\}$ (km) and frequencies $\omega \in \{ 5.27,5.29,5.31,5.34,5.36,5.38 \}$ (mHz). We use the same background model and the same assumptions on the unknown parameters as in \cref{sec:example_exact}.
\begin{figure}[h]
\begin{center}
\insertimg{.31}{c.pdf}
\insertimg{.31}{rho.pdf}
\insertimg{.31}{gma.pdf}
\end{center}
\caption{Perturbations $\delta c^\dag$, $\delta \rho^\dag$, $\delta \gamma^\dag$ of solar parameters, reconstructed approximations $\delta c$, $\delta \rho$, $\delta \gamma$ and relative $L^2$ reconstruction errors for noisy data $e(\delta c,\delta c^\dag)$, $e(\delta \rho,\delta \rho^\dag)$, $e(\delta \gamma,\delta \gamma^\dag)$.}\label{fig.all_8}
\end{figure}
In contrast to reconstructions with exact data, simultaneous recovery of all the parameters from noisy data fails when these realistic settings are used. The reason is that the (numerically computed) singular values of the forward map of formula \eqref{eq:forward_map} decrease exponentially fast, which makes the inverse problem severely ill-posed.
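The flavor of this exponential singular-value decay can be reproduced with a toy smoothing operator; the discretized Gaussian convolution below is a stand-in illustration only, not the actual forward map of \eqref{eq:forward_map}.

```python
import numpy as np

# Toy stand-in for a severely smoothing forward operator: row-normalized
# discrete convolution with a Gaussian kernel on [0, 1].
n, sigma = 100, 0.05
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * sigma**2))
A /= A.sum(axis=1, keepdims=True)

s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
# Most singular values fall below machine precision: the effective
# numerical rank is far smaller than n, so naive inversion is ill-posed.
numerical_rank = int(np.sum(s > 1e-12 * s[0]))
```

Noise components along the discarded singular directions are amplified without bound, which is why a regularized scheme such as the IRGNM is needed.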
However, if two out of three parameters are known a priori, the third parameter is recovered with reasonable precision. \Cref{fig.all_8} shows parameters $\delta c^\dag$, $\delta \rho^\dag$, $\delta \gamma^\dag$, their reconstructions $\delta c$, $\delta \rho$, $\delta \gamma$ from noisy data and relative $L^2$ reconstruction errors for one realization of data. The mean relative $L^2$ reconstruction errors for parameters $\delta c^\dag$, $\delta \rho^\dag$, $\delta \gamma^\dag$ are equal to
\begin{equation*}
e(\delta c, \delta c^\dag) = 11.7\%, \quad e(\delta \rho,\delta \rho^\dag) = 16.8\%, \quad e(\delta \gamma,\delta \gamma^\dag)=11.36\%.
\end{equation*}
In addition, the standard deviations of the relative $L^2$ reconstruction errors are 3\%, 12.6\%, and 3.1\%, respectively. We emphasize that in each of these examples two out of three parameters are known a priori and fixed, and we reconstruct only the remaining parameter.
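For reference, the error measure quoted above can be computed as follows; the definition $e(f,f^\dag)=\|f-f^\dag\|_{L^2}/\|f^\dag\|_{L^2}$ (standard relative $L^2$ error) is assumed here.

```python
import numpy as np

def relative_l2_error(rec, truth):
    """Relative L2 error e(rec, truth) = ||rec - truth||_2 / ||truth||_2
    (standard definition, assumed to match e(.,.) in the text)."""
    return float(np.linalg.norm(rec - truth) / np.linalg.norm(truth))

# A reconstruction off by a uniform 10% in amplitude has exactly 10% error.
truth = np.sin(np.linspace(0.0, np.pi, 200))
err = relative_l2_error(1.1 * truth, truth)
```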
These simulations show that reconstructions from noisy simulated data and, as a corollary, from experimental data, require a separate and detailed treatment, which is beyond the scope of the present article.
\section{Conclusion}
We considered the inverse problem of recovering the radially symmetric sound speed, density and attenuation in the Sun from measurements of the solar acoustic field at two heights above the photosphere and for a finite number of frequencies above the acoustic cutoff frequency. We showed that this problem reduces to recovering a long range potential (with a Coulomb-type decay at infinity) in a Schr\"odinger equation from measurements of the imaginary part of the radiation Green's function at two distances from zero. We demonstrated that generically this inverse problem for the Schr\"odinger equation admits a unique solution, and that the original inverse problem for the Sun admits a unique solution when measurements are performed for at least two different frequencies above the cutoff frequency. These uniqueness results are confirmed by numerical experiments with simulated data without noise. However, the simulations also show that the inverse problem is severely ill-posed, and only the separate recovery of one of the solar parameters (i.e., when the two other parameters are fixed) using a standard iterative reconstruction method (IRGNM) is reasonably precise for realistic noise levels.
\bibliographystyle{plain}
\section{introduction}
The transport properties of mesoscopic conductors have attracted wide research interest, and are of great importance for future nanoscale electronic applications. One usually probes the electron dynamics in these conductors by measuring the dc and ac conductance in the linear response regime or by driving the systems into the nonlinear, out-of-equilibrium regime. For instance, Gabelli et al.\cite{Gabelli} measured the ac conductance of a mesoscopic RC circuit at GHz frequencies, and found a violation of Kirchhoff's law of impedance addition. Recent experimental advances will make it possible to probe the electronic processes in these systems at high frequencies approaching the intrinsic time scales of the electron dynamics; therefore, new and interesting physical properties are expected to be observed.\cite{Chevallier}
Important progress in this field is based on the insight that more information about electron dynamics can be obtained by measuring the current fluctuations or the higher current moments in these systems. \cite{Landauer,Blanter} The theory of full counting statistics (FCS) of the electron current, which describes the probability distribution of the charge transmitted through a mesoscopic conductor during a fixed time interval, was developed within the scattering formulation,\cite{Levitov1993,Levitov1996,Nazarov} and a Hamiltonian formulation of FCS was constructed based on the Anderson impurity model.\cite{Gogolin,Schmidt,Levitov2004} It was also shown that the real part of the ac conductance is related to the asymmetric part of the frequency dependent current noise by a non-equilibrium fluctuation-dissipation theorem. \cite{Safi,Billangeon} Therefore, at present there is intensive research activity on FCS problems both in quantum dot systems \cite{Bagrets,Utsumi2007,Kambly} and in diffusive conductors. \cite{Pilgram,Lee}
For the study of the time-dependent and finite frequency transport properties of mesoscopic systems, Blanter and B\"uttiker \cite{Blanter} emphasized the importance of considering electron interaction effects, and pointed out that the current measured in frequency-dependent transport experiments is a sum of the particle current and the displacement current; the displacement current vanishes in the static case but is essential for finite frequency transport. A theoretical approach that can address finite frequency electron transport and take into account the various electron correlation effects as well as the current conservation condition explicitly is therefore highly desirable.\cite{Pedersen,Wei}
A prototype mesoscopic system for nonequilibrium electron transport with strong Coulomb interaction is a single quantum dot coupled to left and right leads, which can be described by the Anderson impurity model. The current through the quantum dot\cite{H1992a} was calculated by second-order perturbation theory in the Coulomb interaction strength $U$. However, the authors\cite{H1992a} observed that the current is not conserved in this approximation when the dot level is tuned away from the particle-hole symmetry point, and pointed out the necessity of treating the interaction terms within a current-conserving approximation. Hershfield \cite{H1992b} subsequently studied the current fluctuations in the Anderson impurity model, and obtained a formula for the interaction effect on the zero frequency shot noise in the Hartree approximation. During the last decade, the shot noise and the FCS of the current in the Anderson impurity model in the low-temperature Kondo regime have attracted a great deal of research effort and interest, but we will restrict our consideration to the resonant tunnelling regime in this paper. For the noninteracting resonant tunneling model, an exact analytical formula for the finite frequency noise spectrum was obtained.\cite{Chen} The nonsymmetrized noise spectrum of the resonant tunneling model was studied recently within the scattering formulation \cite{Entin-Wohlman,Rothstein}, where some step and dip structures at finite frequency were shown, and the Hartree-Fock theory was applied to multi-level quantum dot systems.\cite{Gabdank}
In the present work, we have formulated the theory of electron transport through a quantum dot based on the nonequilibrium self-consistent perturbation method,\cite{Baym} which guarantees the gauge invariance and current conservation conditions for arbitrary time-dependent external potentials. We have studied the effect of the potential fluctuations in the quantum dot on the finite frequency noise within the Hartree approximation, and obtained an analytical equation for the interaction effect on the current noise, which is an extension of Hershfield's result \cite{H1992b} to the finite frequency case. We will show that various correlation functions of the nonequilibrium system, including the vertex correction terms, can be calculated in a systematic way by using functional derivatives defined on the closed time contour and the external counting field method. Indeed, the functional approach has been applied to a wide range of problems in mesoscopic systems in the literature, e.g., current and noise characteristics of single-electron transistors in the Coulomb blockade regime \cite{Utsumi2003,Oh} or the Kondo regime,\cite{Moca} photoassisted shot noise in mesoscopic conductors, \cite{Chevallier} charge transport through a chaotic cavity,\cite{Macedo} and the time evolution of non-equilibrium quantum dot systems,\cite{Sexty} etc.
This paper is organized as follows: In section II, we discuss the current conservation condition and the generalized Ward identity for quantum dot based on the Anderson impurity model. In section III, the formula for finite frequency noise spectrum including the interaction correction term in the Hartree approximation is obtained. Section IV is devoted to the numerical calculations. In section V we summarize the results of this work.
\section {Self-consistent perturbation theory and the generalized Ward identity}
We consider electron transport through a single-level quantum dot in the presence of external ac fields; the system is described by the following Anderson impurity model\cite{Ng,Ding}
\bn
H&=&\sum_{k\eta\sigma}\epsilon_{k\eta}(t)c^\dagger_{k\eta\sigma}c_{k\eta\sigma}+
\sum_\sigma \epsilon_d(t) d^\dagger_\sigma d_\sigma + Un_{d\uparrow}n_{d\downarrow} \nonumber\\
&& +\sum_{k\eta\sigma}\left [ t_\eta e^{i \lambda_\eta
(t)}c^\dagger_{k\eta\sigma}d_\sigma+H.c.\right ]\;,
\en
where $\eta=L,R$ denotes the left and right leads, $\epsilon_{k\eta}(t)=\epsilon_{k\eta}+v_\eta(t)$ and $\epsilon_d(t)=\epsilon_d+v_0(t)$, with $v_\eta(t)$ and $v_0(t)$ being the ac potentials in the leads and in the dot, respectively. $\lambda_\eta(t)$ is the gauge potential coupled to the tunneling current from the lead $\eta$ to the dot. The strong Coulomb interaction term in the Hamiltonian prevents an exact solution of this model, but we can approach the problem with perturbation theory on the Schwinger-Keldysh closed time path contour. Using the nonequilibrium Dyson equation, we write the equation of motion for the nonequilibrium Green's function of the quantum dot as follows
\bq
[ i{\frac {\partial}{\partial
t}}-\epsilon_d(t) ]G_{d\sigma}(t,t')=\delta(t,t')+\int dt_1
\Sigma(t,t_1) G_{d\sigma}(t_1, t')\;.
\eq
It should be emphasized that the time variables $t$ and $t'$ can be either on the forward or the backward branch of the time contour, and the integration over $t_1$ is also defined on the closed time path contour. The self-energy $\Sigma(t,t')$ is obtained in the framework of self-consistent perturbation theory, and can be divided into two terms
\bq
\Sigma(t,t')=\Sigma^{(0)}(t,t')+\Sigma_{U}(t,t') \;,
\eq
where $\Sigma^{(0)}(t,t')$ is the dot level self-energy contributed from the tunneling between the leads and the quantum dot
\bq
\Sigma^{(0)}(t,t')=\sum_{k\eta}|t_\eta|^2
\bar{g}_{k\eta}(t,t')e^{-i[\lambda_\eta(t)-\lambda_\eta(t')+\int^t_{t'}dt_1
v_\eta(t_1)]}\;,
\eq
with $\bar {g}_{k\eta}(t,t')$ being the bare Green's function of the lead without external ac potential field. $\Sigma_U(t,t')$ is the self-energy due to Coulomb interaction. In the self-consistent perturbation theory, it is a functional of the full Green's functions of the quantum dot. In order to illustrate the method of calculation, we consider only the first order approximation, and the interaction self-energy is given by the Hartree term
\bq
\Sigma_U(t,t')=U\langle n_{d\bar\sigma}(t)\rangle\delta(t,t')\;.
\eq
We first study the gauge transformation properties and current conservation condition of the Green's functions of quantum dot. By making a transformation\cite{Ng} $G_{d\sigma}(t,t')=\bar G_{d\sigma}(t,t')e^{-i\int_{t}^{t'} dt_1 v_0(t_1)} $, the equation of motion for $\bar G_{d\sigma}(t,t')$ will be given by
\begin{equation}
[i{\frac {\partial} {\partial t}}-\epsilon_d]\bar
G_{d\sigma}(t,t')=\delta(t,t')+\int dt_1 \bar\Sigma(t,t_1) \bar
G_{d\sigma}(t_1, t')\;,
\end{equation}
where the self-energy $\bar\Sigma(t,t_1)=\bar\Sigma^{(0)}(t,t')+\bar\Sigma_U(t,t')$ with
\bq
\bar\Sigma^{(0)}(t,t')=\sum_{k\eta}|t_\eta|^2
\bar{g}_{k\eta}(t,t')e^{-i\phi_\eta(t,t')}\;,
\eq
and in the Hartree approximation
\bq
\bar\Sigma_U(t,t')=U \langle\bar n_{d\bar\sigma}(t)\rangle\delta(t,t')\;,
\eq
where the phase factor
$\phi_\eta(t,t')=\lambda_\eta(t)-\lambda_\eta(t')+\int^t_{t'}dt_1
[v_\eta(t_1)-v_0(t_1)]$, and $\bar G_{d\sigma}(t,t')$ is a gauge-invariant quantity. Indeed, if one considers a gauge transformation $v_0(t)\rightarrow
v_0(t)+\partial_t \tilde\Lambda(t)$, $\lambda_\eta(t)\rightarrow \lambda_\eta(t) +\tilde\Lambda(t)$,
then it is easy to see that the phase factor $\phi_\eta(t,t')$, the self-energy $\bar\Sigma(t,t')$ and
$\bar G_{d\sigma}(t,t')$ are all invariant under the transformation.
Therefore, the Green's function $G_{d\sigma}(t,t')$ transforms as
\bq
G_{d\sigma}(t,t';\tilde\Lambda)=e^{-i\tilde\Lambda(t)}G_{d\sigma}(t,t'
)e^{i\tilde\Lambda(t')}\;.
\eq
The above gauge transformation is directly related to the current conservation condition in the quantum dot. Under this gauge transformation, the change of the Hamiltonian to first order in $\tilde\Lambda$ is given by
\bq
\delta H(t)= n_d(t)\partial_t\tilde\Lambda(t)+\sum_\eta
j_\eta(t)\tilde\Lambda(t)\;,
\eq
where $n_d(t)$ and $j_\eta(t)$ are the operators of the charge number in the dot and the tunneling current from the lead $\eta$ to the dot, respectively. The gauge transformation invariance of the action leads to the continuity equation
\bq
\partial_t \langle n_d(t)\rangle-\sum_\eta \langle j_\eta(t)\rangle=0\;.
\eq
In the out-of equilibrium steady state, the occupation number $\langle n_d(t)\rangle$ is time independent, and the current conservation condition $\sum_\eta \langle j_\eta\rangle=0$ is satisfied.
Next we follow the procedure given in Ref.\cite{Leeuwen} to derive the generalized Ward identity\cite{Ward} for this quantum dot system, which is closely related to the current conservation condition. In the spin-degenerate case, the spin index $\sigma$ will be omitted. We consider the changes in the Green's function induced by the gauge transformation. From Eq.(9) the first-order change in $G$ is
\bq
\delta G(t,t')=-i\left [\tilde\Lambda(t)-\tilde\Lambda(t')\right ]G(t,t')\;,
\eq
and it leads to the equation
\bn
\int dt_1 \left [{\frac {\delta G(t,t')} {\delta
v_0(t_1)}}\partial_{t_1}\tilde\Lambda(t_1)+\sum_\eta {\frac {\delta
G(t,t')}{\delta \lambda_\eta(t_1)}}\tilde\Lambda(t_1)\right ]
\nonumber\\
=-i\left [\tilde\Lambda(t)-\tilde\Lambda(t')\right ]G(t,t')\;,
\en
where the functional derivatives of $G$ are denoted by
$\Lambda_0$ and $\Lambda_\eta$; they correspond to the following time-ordered operator products
\bq
\Lambda_0(t,t';t_1)={\frac {\delta G(t,t')} {\delta
{v_0(t_1)}}}=-\langle T_C[d_\sigma(t)d^\dagger_\sigma(t')n_d(t_1)]\rangle\;,
\eq
\bq
\Lambda_\eta(t,t';t_1)={\frac {\delta G(t,t')} {\delta
{\lambda_\eta(t_1)}}}=-\langle T_C[d_\sigma(t)d^\dagger_\sigma(t')j_\eta(t_1)]\rangle\;.
\eq
Integrating by parts in Eq. (13) and demanding that the equation be satisfied for arbitrary $\tilde\Lambda$ straightforwardly leads to the well-known generalized Ward identity
\bn
\partial_{t_1}\Lambda_0(t,t';t_1)&-&\sum_\eta
\Lambda_\eta(t,t';t_1)\nonumber\\
&=&i[\delta(t,t_1)-\delta(t',t_1)]G(t,t')\;.
\en
This identity leads to a relation between the vertex functions and the self-energy, which can be demonstrated explicitly by introducing the following vertex functions
\bq
\Gamma_0(t,t';t_1)=-{\frac {\delta G^{-1}(t,t')}{\delta v_0(t_1)}}\;,
\eq
\bq
\Gamma_\eta(t,t';t_1)=-{\frac {\delta G^{-1}(t,t')} {\delta
\lambda_\eta(t_1)}}\;.
\eq
One can see that these vertex functions are related to the time-ordered correlation functions $\Lambda_0$ and $\Lambda_\eta$ through
\bq
\Lambda_0(t,t';t_1)=\int dt_2 dt_3
G(t,t_2)\Gamma_0(t_2,t_3;t_1)G(t_3,t')\;,
\eq
\bq
\Lambda_\eta(t,t';t_1)=\int dt_2 dt_3
G(t,t_2)\Gamma_\eta(t_2,t_3;t_1)G(t_3,t')\;.
\eq
Thereby, the generalized Ward identity Eq. (16) can be rewritten in terms of the vertex functions
\bn
\partial_{t_1}\Gamma_0(t,t';t_1)&-&\sum_\eta
\Gamma_\eta(t,t';t_1)\nonumber\\
&=&i[\delta(t',t_1)-\delta(t,t_1)]G^{-1}(t,t')\;.
\en
This equation relates the vertex functions to the self-energy, since the inverse of Green's function $G^{-1}=G_0^{-1}-\Sigma$ is given explicitly as
\bq
G^{-1}(t,t')=[i\partial_t-\epsilon_d-v_0(t)]\delta(t,t')-\Sigma^{(0)}(t,t')-\Sigma_U(t,t')\;.
\eq
The generalized Ward identity implies the gauge invariance and current conservation conditions in this problem. Therefore it should be satisfied when we investigate the current fluctuations or time-dependent electron transport properties of this system by making approximate calculations of the self-energy or the vertex functions.
\section{ Current fluctuations and the Hartree approximation}
In this section, we study the current fluctuation and statistical problems in this quantum dot system. It is well known that the central quantity in FCS calculations \cite{Levitov1993,Levitov1996} is the cumulant generating function $\chi ({\bf \lambda})=\sum_{\bf Q}e^{i{\bf Q \bf \lambda}}P({\bf
Q})$, where ${\bf\lambda}=(\lambda_1,
\ldots,\lambda_N)$ are the counting fields and $P({\bf Q})$ is the probability for the charge ${\bf Q}=(Q_1,\ldots, Q_N)$ to be transferred through the respective channel\cite{Schmidt} during the measuring time $\it T $. For a noninteracting electron system, the generating function is given by the Levitov-Lesovik formula\cite{Levitov1993} within the scattering matrix approach. It is observed that the cumulant generating function can be generalized to the system with time-dependent counting fields, and it can be written in terms of the nonequilibrium Green's function defined on the closed time path contour as follows
\begin{equation}
\ln \chi({\bf \lambda})=\mathrm{Tr}\ln G^{-1} -\mathrm{Tr} \Sigma_U G +\Phi(G)\;,
\end{equation}
where $G^{-1}$ is the inverse of the full Green's function of the quantum dot, given explicitly by Eq.(22). $\Phi$ is a functional potential constructed by summing over irreducible self-energy diagrams closed with an additional Green's function line.\cite{Baym} The interaction self-energy $\Sigma_U$ can be obtained from the functional $\Phi$ by
\begin{equation}
\Sigma_U(t,t')={\frac {\delta\Phi} {\delta G(t',t)}}\;.
\end{equation}
One can verify that when the counting fields $\lambda$ are time-independent and the system is noninteracting, the above generating functional reduces to the Levitov-Lesovik formula.
The electron current tunneling from the lead $\eta$ to the quantum dot can be obtained by a functional derivative of $\chi (\lambda)$ with respect to $\lambda_\eta (t)$
\begin{eqnarray}
&&\langle I_\eta(t)\rangle=i{\frac e \hbar}{\frac {\delta \ln \chi(\lambda)} {
{\delta\lambda_\eta(t)}} }\nonumber\\
&&=-i{\frac e \hbar}\int dt_1 dt_2
G(t_1,t_2)\Gamma^{(0)}_\eta(t_2,t_1;t)\;.
\end{eqnarray}
Here the bare current vertex function $\Gamma^{(0)}_\eta(t_2,t_1;t)$ is given by
\begin{eqnarray}
\Gamma^{(0)}_\eta(t_2,t_1;t)&=&{\frac {\delta\Sigma^{(0)}(t_2,t_1)} {\delta\lambda_\eta(t)}}
\nonumber\\
&=&i\left [\delta(t_1,t)-\delta(t_2,t)\right ]\Sigma^{(0)}_\eta(t_2,t_1)\;.
\end{eqnarray}
Therefore the current through the quantum dot is given as\cite{Oh,Macedo}
\begin{equation}
\langle I_\eta(t)\rangle={\frac e \hbar}\int dt_1 [G(t,t_1)\Sigma^{(0)}_\eta(t_1,t)
-\Sigma^{(0)}_\eta(t,t_1)G(t_1,t) ]\;.
\end{equation}
By using the operational rules given by Langreth for contour integration, \cite{Langreth} it is not difficult to prove that this formula is exactly equivalent to the current formula obtained by Jauho et al.\cite{Jauho} for the time-dependent electron transport through an interacting quantum dot.
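For orientation, in the noninteracting limit and in a steady state the current formula above reduces to the familiar Landauer-type expression $I = (e/h)\int d\omega\, \Gamma_L\Gamma_R |G^r(\omega)|^2 [f_L(\omega)-f_R(\omega)]$ for the resonant-level model. The sketch below evaluates this standard special case at zero temperature in the wide-band limit (units $\hbar=e=1$ assumed); it is not the interacting formula of the text.

```python
import numpy as np

def resonant_level_current(eps_d, gamma_l, gamma_r, mu_l, mu_r, n_grid=20001):
    """Zero-temperature current through a noninteracting resonant level in
    the wide-band limit (hbar = e = 1):
        I = (1/2pi) * integral over the bias window of
            gamma_l * gamma_r * |G^r(w)|^2,
        G^r(w) = 1 / (w - eps_d + i*(gamma_l + gamma_r)/2).
    Conventions assumed; serves only as a noninteracting check."""
    gamma = gamma_l + gamma_r
    w = np.linspace(min(mu_l, mu_r), max(mu_l, mu_r), n_grid)
    t_w = gamma_l * gamma_r / ((w - eps_d) ** 2 + (gamma / 2.0) ** 2)
    dw = w[1] - w[0]
    integral = float(np.sum((t_w[:-1] + t_w[1:]) / 2.0) * dw)  # trapezoid rule
    return np.sign(mu_l - mu_r) * integral / (2.0 * np.pi)

# Parameters of Sec. IV (without the Hartree shift): eps_d = -1, symmetric
# couplings Gamma_L = Gamma_R = 1, bias window mu_L = 1.5, mu_R = -1.5.
I = resonant_level_current(-1.0, 1.0, 1.0, 1.5, -1.5)
```

Because the same $G^r$ enters both lead currents here, $I_L=-I_R$ holds identically, in line with the current conservation condition discussed above.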
We can introduce the interaction-induced current vertex function, which is related to the Coulomb interaction self-energy,
\begin{equation}
\Gamma^U_\eta(t_2,t_1;t)={\frac {\delta\Sigma_{U}(t_2,t_1)} {\delta\lambda_\eta(t)}}\;.
\end{equation}
Then the vertex function defined in Eq.(18) is given by
\begin{equation}
\Gamma_\eta(t_2,t_1;t)=\Gamma^{(0)}_\eta(t_2,t_1;t)+\Gamma^U_\eta(t_2,t_1;t)\;.
\end{equation}
The current formula Eq. (25) indicates that the current is contributed solely from the bare current vertex. There is no contribution of interaction vertex correction to the current in this self-consistent perturbation approach. However, we will show in the following that the interaction vertex indeed influences the current fluctuations.
Next, we calculate the current-current correlation functions on the time contour and find that they can be represented as the sum of two terms
\bn
D_{\eta\eta'}(t,t')&\equiv& \langle T_C\delta I_\eta(t)\delta I_{\eta'}(t')\rangle
=-{\frac {e^2} \hbar}{\frac {\delta^2
\ln \chi(\lambda)}
{\delta\lambda_\eta(t)\delta\lambda_\eta(t')}}
\nonumber\\
&=&D^{(0)}_{\eta\eta'}(t,t')+ D^{(c)}_{\eta\eta'}(t,t')\;,
\en
where the bare term is
\begin{widetext}
\bn
D^{(0)}_{\eta\eta'}(t,t')&=&{\frac {e^2} \hbar}\delta_{\eta\eta'}\left [G(t,t')\Sigma^{(0)}_\eta(t',t)
+\Sigma^{(0)}_\eta(t,t')G(t',t)\right ]
\nonumber\\
&&+{\frac {e^2} \hbar}\int dt_1 dt_2 dt_3 dt_4
\left [G(t_1,t_2)\Gamma^{(0)}_{\eta'}(t_2,t_3;t')G(t_3,t_4)\Gamma_{\eta}^{(0)}(t_4,t_1;t)\right ]\;,
\en
and the interaction induced vertex correction term to the current correlation is given by
\bq
D^{(c)}_{\eta\eta'}(t,t')={\frac {e^2} \hbar}\int dt_1 dt_2 dt_3 dt_4
\left [G(t_1,t_2)\Gamma_{\eta'}^U(t_2,t_3;t')G(t_3,t_4)\Gamma_{\eta}^{(0)}(t_4,t_1;t)\right ]\;.
\eq
Among the various current correlation functions, the correlation function for the current noise is of particular interest, since the frequency dependent noise spectrum contains intrinsic dynamical information about this quantum dot system. In a steady state without external time-dependent potential, the symmetrized noise spectrum $S_{\eta\eta'}(\omega)$ is given by the Fourier transform of the correlation function of the current operators $S_{\eta\eta'}(t,t')=\langle \delta I_\eta(t)\delta
I_{\eta'}(t')\rangle+\langle \delta I_{\eta'}(t')\delta I_{\eta}(t)\rangle$. It is noted that the correlation function for the current noise can be written as
\bq
S_{\eta\eta'}(t,t') = D^{>}_{\eta\eta'}(t,t')+D^{<}_{\eta\eta'}(t,t')
= S^{(0)}_{\eta\eta'}(t,t')+ S^{(c)}_{\eta\eta'}(t,t')\;,
\eq
where $S^{(0)}_{\eta\eta'}(t,t')$ and $S^{(c)}_{\eta\eta'}(t,t')$ are contributed from the bare term and the interaction induced vertex correction term, respectively.
The bare term $S^{(0)}_{\eta\eta'}(t,t')$ is obtained straightforwardly by using Langreth's analytical continuation rules.\cite{Langreth} In the absence of an external ac potential, we can transform it to frequency space and express it explicitly in terms of the Green's functions of the quantum dot as \cite{Dong,Lopez}
\bn
S^{(0)}_{\eta\eta'}(\omega)&=&{\frac {e^2} \hbar}\int {\frac {d\omega_1} {2\pi}}\bigg \{ \delta_{\eta\eta'}
i\Gamma_\eta \left [ n_\eta(\omega_1)G^>(\omega_1+\omega)-(1-n_\eta(\omega_1+\omega))G^<(\omega_1)\right ]
\nonumber\\
&&-\Gamma_\eta\Gamma_{\eta'} \Big \{ n_\eta(\omega_1)(1-n_{\eta'}(\omega_1+\omega) )G^r(\omega_1)G^r(\omega_1+\omega)+
n_{\eta'}(\omega_1) (1-n_{\eta}(\omega_1+\omega))G^a(\omega_1)G^a(\omega_1+\omega)
\nonumber\\
&&+ [ n_{\eta'}(\omega_1)G^a(\omega_1)-n_\eta(\omega_1)G^r(\omega_1) ]G^>(\omega_1+\omega)
+G^<(\omega_1) [(1-n_{\eta'}(\omega_1+\omega))G^r(\omega_1+\omega)\nonumber\\
&&-(1-n_{\eta}(\omega_1+\omega))G^a(\omega_1+\omega)]
-G^<(\omega_1)G^>(\omega_1+\omega)\Big \} \bigg \} +\bigg \{\omega\rightarrow -\omega \bigg \}\;.
\en
In order to obtain the interaction effect on the noise spectra, we have to calculate the vertex function by a functional derivative of the interaction self-energy with respect to the counting field: $ \Gamma^U_\eta(t_1,t_2;t)= {\frac {\delta\Sigma_{U}(t_1,t_2)} {\delta\lambda_\eta(t)}} $, where the interaction self-energy $\Sigma_{U}(t_1,t_2)$ is given by Eq.(5) in the Hartree approximation. The technical details of our calculation are presented in Appendix B. After calculating the vertex function and transforming it to frequency space, we obtain the interaction correction to the finite frequency current correlation function $S^{(c)}_{\eta\eta'}(\omega)$ as follows
\bn
S^{(c)}_{\eta\eta'}(\omega)&=&{\frac {e^2} \hbar}\bigg [\chi^{r,(0)}_{j_{\eta}n}(\omega){\frac U {1-U\chi^{r,(0)}_{nn}(\omega)}}S^{(0)}_{nj_{\eta'}}(\omega)+S^{(0)}_{j_\eta n}(\omega) {\frac U {1-U\chi^{a,(0)}_{nn}(\omega)}}\chi^{a,(0)}_{n j_{\eta'}}(\omega)
\nonumber\\
&&+\chi^{r,(0)}_{j_\eta n}(\omega) {\frac U {1-U\chi^{r,(0)}_{nn}(\omega)}}S^{(0)}_{nn}(\omega)
{\frac U {1-U\chi^{a,(0)}_{nn}(\omega)}}\chi^{a,(0)}_{n j_{\eta'}}(\omega)\bigg ]\;,
\en
\end{widetext}
where the various correlation and response functions are given in Appendix A. This equation is the central result of our paper. It is a generalization of the zero frequency noise result obtained by Hershfield\cite{H1992b} to the finite frequency case, and can be interpreted as the current noise arising from the coupling of the density fluctuations in the quantum dot to the current fluctuations. The first term in Eq.(35) indicates that the correlation between the density and the current, $S^{(0)}_{nj_{\eta'}}(\omega)$, can propagate forward in time via $\chi^{r,(0)}_{nn}(\omega) $ and $\chi^{r,(0)}_{j_{\eta}n}(\omega)$ to produce current fluctuations. The second term represents the correlation $S^{(0)}_{j_{\eta}n}(\omega)$ propagating backward in time on the backward branch to produce current fluctuations. In the last term, the correlation between the densities on the quantum dot, $S^{(0)}_{nn}(\omega)$, propagates forward and backward on the closed time contour simultaneously and gives rise to current fluctuations.
\begin{figure}[hbtp]
\includegraphics[width=\columnwidth, height=7cm, angle=0]{figure1.eps}
\caption{ (Color online) The noise spectra for quantum dot system in the symmetric coupling case. (a) for left lead, we plot the bare noise spectrum $S^{(0)}_{LL}(\omega)$ (dashed line), the interaction correction term $S^{(c)}_{LL}(\omega)$ (dash-dotted line), and the noise spectrum after correction $S_{LL}(\omega)$ (solid line); (b) the same for the right lead. Here, we take the parameters in the Anderson impurity model as $\epsilon_d=-1.0$, $U=4.0$ in units of the coupling strength $\Gamma$, and we assume $\Gamma_L=\Gamma_R=1.0$ for the symmetric case.}
\end{figure}
\section{numerical results}
To get a better understanding of the interaction effect on the current noise spectrum of this quantum dot system, we present some numerical calculations of the current noise at zero temperature. In our calculation, we take the coupling strength between the leads and the quantum dot, $\Gamma$, as the unit of energy, and take the parameters $\epsilon_d=-1.0$, $U=4.0$ and the bandwidth $D=100$. The applied bias voltage is $\Delta\mu=3.0$, with $\mu_L=-\mu_R=\Delta\mu/2$. For the system with symmetric coupling strengths $\Gamma_L=\Gamma_R=\Gamma$, we plot the current noise spectra for the left and right leads in Fig.1(a) and (b), respectively. Fig.1(a) shows that the bare noise spectrum $S^{(0)}_{LL}(\omega)$ is positive and is an increasing function of the frequency. The interaction correction term $S^{(c)}_{LL}(\omega)$ can take negative values in an intermediate frequency region. It is observed that the interaction correction to the shot noise at zero frequency has a rather small positive value, which agrees with the previous result,\cite{H1992b} but the interaction correction becomes negative and more significant as the frequency increases, and it turns positive again in the high frequency region. The maximum influence of the interaction correction occurs at a frequency that is largely determined by the energy difference between the renormalized dot level $\tilde\epsilon_d=\epsilon_d+U \langle n_{d\bar\sigma}\rangle $ and the Fermi levels of the leads. The sum of $S^{(0)}_{LL}(\omega)$ and $S^{(c)}_{LL}(\omega)$ gives the noise spectrum after the interaction correction, which is a monotonically increasing function of the frequency. Fig.1(b) shows the noise spectra for the right lead (the drain side of this system). These noise spectra have more prominent features than those of the left lead. One can find a significant dip in the total noise spectrum $S_{RR}(\omega)$ at the frequency equal to the applied bias voltage ($\omega=\Delta \mu=3.0$).
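The renormalized level $\tilde\epsilon_d=\epsilon_d+U\langle n_{d\bar\sigma}\rangle$ quoted above is fixed by a self-consistency loop. A minimal sketch at zero temperature in the wide-band limit follows; the per-spin occupation formula $\langle n\rangle=\sum_\eta (\Gamma_\eta/\Gamma)\,[1/2+\arctan((\mu_\eta-\tilde\epsilon_d)/(\Gamma/2))/\pi]$ is the standard Keldysh result assumed here, with $\hbar=1$.

```python
import numpy as np

def hartree_level(eps_d, u, gamma_l, gamma_r, mu_l, mu_r,
                  n_iter=200, mix=0.5):
    """Self-consistent Hartree renormalization eps_tilde = eps_d + U*<n> for
    a spin-degenerate dot at zero temperature in the wide-band limit
    (hbar = 1; occupation formula assumed, see lead-in). Linear mixing is
    used to stabilize the fixed-point iteration."""
    gamma = gamma_l + gamma_r
    n = 0.5                                 # initial guess for <n_{d,-sigma}>
    for _ in range(n_iter):
        eps_t = eps_d + u * n
        n_new = sum(g / gamma * (0.5 + np.arctan((mu - eps_t) / (gamma / 2.0)) / np.pi)
                    for g, mu in ((gamma_l, mu_l), (gamma_r, mu_r)))
        n = (1.0 - mix) * n + mix * n_new   # damped update
    return eps_d + u * n, n

# Parameters of Fig. 1: eps_d = -1, U = 4, Gamma_L = Gamma_R = 1, bias 3.
eps_tilde, occ = hartree_level(-1.0, 4.0, 1.0, 1.0, 1.5, -1.5)
```

With the Fig. 1 parameters this pushes the level from $-1$ up into the bias window (to $\tilde\epsilon_d\approx 0.7$ in this sketch), consistent with the statement that the frequency of maximal interaction correction tracks the distance between $\tilde\epsilon_d$ and the Fermi levels of the leads.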
The various bare correlation and response functions used in the calculation of the interaction effect on the noise spectra are plotted in Fig.2. As shown in Fig.2(a), the density fluctuation spectrum $S^{(0)}_{nn}(\omega)$ of the quantum dot is real and positive at finite frequencies. The real part of the density response function $\chi^{(0)}_{nn}(\omega)$ is negative at low frequency, which indicates that electron screening reduces the effective Coulomb interaction strength $U$ on the quantum dot. The imaginary part of $\chi^{(0)}_{nn}(\omega)$ remains negative for all frequencies due to the analytical properties of the density-density response function. The correlation and response functions between the density operator on the dot and the current operators of the left and right leads are plotted in Fig.2(b) and (c), respectively.
\begin{figure}[hbtp]
\includegraphics[width=\columnwidth, height=7cm, angle=0]{figure2.eps}
\caption{(Color online) The spectra of various correlation and response functions involved in the calculation of the interaction effect on noise spectra in the symmetric coupling case (in units of $e^2/\Gamma$). (a) The density correlation function $S^{(0)}_{nn}(\omega)$, the real and imaginary parts of the density response function $\chi^{(0)}_{nn}(\omega)$ in the quantum dot. (b) The real and imaginary parts of the correlation function $S^{(0)}_{j_L n}(\omega)$ and response function $\chi^{r,(0)}_{j_L n}(\omega)$ for the left lead. (c) The correlation function and response function of the right lead.
The parameters used in the calculation are the same as in Fig.1. }
\end{figure}
Fig.3 shows the current noise spectra for the asymmetrically coupled quantum dot system. Since the coupling strengths satisfy $\Gamma_L\gg\Gamma_R $, the magnitude of the current fluctuations in the right lead, plotted in Fig.3(b), is much smaller than that in the left lead in Fig.3(a), because the tunneling rate between the right lead and the quantum dot is much smaller than that of the left lead. The interaction correction terms also have negative-value regions at finite frequencies for both the left and right leads. The noise spectrum in the drain lead (right lead) shows an evident dip structure at the frequency equal to the bias voltage. One can expect that such prominent features of the noise spectrum can be detected in experiments.
\begin{figure}[hbtp]
\includegraphics[width=\columnwidth,height=7cm,angle=0]{figure3.eps}
\caption{ (Color online) The noise spectra for quantum dot system in the asymmetric coupling case . (a) For the left lead, we plot the bare noise spectrum $S^{(0)}_{LL}(\omega)$ (dashed line), the interaction correction term $S^{(c)}_{LL}(\omega)$ (dash-dotted line), and the noise spectrum after correction $S_{LL}(\omega)$ (solid line); (b) The same for the right lead. We take the parameters in the Anderson model as $\epsilon_d=-1.0$, $U=4.0$ in units of the coupling strength $\Gamma$, and assume $\Gamma_L=1.0, \Gamma_R=0.2$ for the asymmetric case. }
\end{figure}
\section{conclusions}
In this work, we have investigated the problem of electron transport through a quantum dot in the framework of nonequilibrium self-consistent perturbation theory and examined the current conservation condition. Based on the Anderson impurity model, we give formulae for the current and the current fluctuations, valid in the presence of arbitrary external time-dependent potentials, by using the nonequilibrium generating functional and the functional derivative method. We have calculated the interaction effect on the finite frequency noise spectrum of the Anderson impurity model by taking into account the interaction vertex correction term within the Hartree approximation, and obtained an analytical equation for the noise correction term at finite frequency, which generalizes the previous result on zero frequency shot noise. We have focused our attention on the symmetrized noise spectrum; one can expect that the nonsymmetrized noise spectrum and the ac conductance can also be studied within the formulation presented in this paper. We believe that self-consistent perturbation theory on the Schwinger-Keldysh contour can provide a unified approach to many interesting problems in nonequilibrium electron transport through mesoscopic systems, and will give us a deeper understanding of current fluctuation and energy dissipation phenomena.\cite{Das} The functional method provides a convenient way to study the statistics of current fluctuations. We note that the Hartree approximation can only account for some of the interaction effects on the electron current noise in the resonant tunnelling regime. It is expected that future work can treat interaction effects beyond the Hartree approximation, and give us more information about the interaction effects on the out-of-equilibrium dynamics of electrons in the Coulomb blockade regime as well as the low-temperature Kondo regime.
\begin{acknowledgments}
This work was supported by Projects of the National Basic Research Program of China (973 Program) under Grant No. 2011CB925603, and the National Science Foundation of China, Specialized Research Fund for the Doctoral Program of Higher Education (SRFDP) of China.
\end{acknowledgments}
\section{Introduction}
\IEEEPARstart{A}{dvances} in small, low-power CMOS image sensors and related optics have revolutionized consumer photography\cite{Gamal02, Beatriz16, Fossum14, Chang12, Reshidko16}. These technologies have dramatically improved the spatial resolution, dynamic range, and low-light sensitivity of digital photography.
In addition to improving conventional photography, these technologies open up many possibilities for novel image systems architectures. The new optics and CMOS sensor capabilities have already motivated novel camera architectures that extend the original Bayer RGB design. For example, in recent years a new generation of architectures has been produced to increase spatial resolution\cite{Longoni08}, control depth of field through light field camera designs\cite{Georgiev12, Bishop12, Marwah13}, and extend dynamic range and sensitivity through novel arrangements of color filters\cite{Baranov15} and mixed pixel architectures\cite{Nayar15, Yasuma10}.
To develop these opportunities requires that we innovate on the third fundamental component of image systems, the image processing pipeline. The pipeline is the set of algorithms, including demosaicking, noise reduction, color management, and display nonlinear transforms (gamma curves), that convert the sensor data into a rendered image. Even modest changes to the camera architecture, such as adding more color types to the mosaic\cite{Monno12} or including near-infrared detectors\cite{Tang15}, can require substantial rethinking of the image processing pipeline. New image processing pipelines, specialized for the new types of cameras, are slow to develop. Consequently, the design of new imaging sensors is far outpacing the development of algorithms that can take advantage of these new designs\cite{Nayar06}, and the vast majority of image processing algorithms are still designed for sensors that use the classic single-plane Bayer RGB spatial sampling mosaic.
In this paper, we describe a new framework that enables image systems engineers to rapidly design image processing pipelines that are optimized for novel camera architectures. The general idea of the framework was first proposed by Lansel et al. in 2011\cite{Lansel11}. Here, we introduce the framework in the form of a set of software tools that use simulation and learning methods to design and optimize image processing pipelines for these new camera systems.
This paper is organized into three sections that define our contributions. First, we explain the image processing architecture: the input data are grouped by their local features into one of a set of local classes, where locality refers to position on the sensor array (space), pixel type (color), and response level. The optimal affine transform in each class is learned using camera simulation technology. We refer to this framework as $L^3$ to emphasize its key principles: Local, Linear and Learned. Second, we assess the performance of the $L^3$ method by comparing its rendered image quality with that of high-end modern digital cameras. We specifically show that such a collection of affine transforms accurately approximates the complex, nonlinear pipelines implemented in modern consumer photography systems. Third, we illustrate how the $L^3$ method can learn image processing pipelines for new camera architectures.
There are a number of related efforts that incorporate joint system optimization and data-driven learning methods in designing camera image processing pipelines. We discuss the relationship between $L^3$ and these contributions more fully in the Discussion section, after introducing the $L^3$ framework.
\section{The \texorpdfstring{$L^3$}{L3} Method}
The $L^3$ method comprises two main steps: rendering and learning. The rendering step adaptively selects from a stored table of affine transformations to convert the raw camera sensor data into an image in the target color space (e.g. sRGB). The learning step computes and stores the transformations used in rendering.
Conventional image processing pipelines often include nonlinear elements, such as thresholding operations and 'gamma' transforms\cite{Poynton93, Guo04}. The $L^3$ rendering algorithm instead uses a collection of affine transforms that are applied to the data in the different classes. The accuracy with which a collection of affine transforms approximates the image processing pipeline, including nonlinearities such as the 'gamma', can be controlled by the number of classes; this is the conventional local linear approximation to continuous functions. The challenge of managing discrete transitions, such as the transition from the linear response region to saturation, can be handled by selecting appropriate class boundaries. As a practical matter, there is rarely a strong need to specify a precise and sharp class boundary in natural image processing.
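As a toy illustration of this local linear principle (not part of the $L^3$ implementation; the gamma exponent and class counts are arbitrary choices), the sketch below approximates a display 'gamma' nonlinearity with one least-squares affine fit per input-level class and shows that the worst-case approximation error shrinks as the number of classes grows:

```python
# Toy illustration (hypothetical, not the L^3 code): approximate the
# nonlinear gamma curve y = x^(1/2.2) with piecewise affine transforms,
# one per "class" (interval of input levels).
import numpy as np

def gamma(x):
    return x ** (1.0 / 2.2)

def piecewise_affine_error(n_classes, n_samples=1000):
    """Worst-case error of a per-class affine fit to the gamma curve."""
    x = np.linspace(0.0, 1.0, n_samples)
    y = gamma(x)
    edges = np.linspace(0.0, 1.0, n_classes + 1)
    yhat = np.empty_like(y)
    for i in range(n_classes):
        mask = (x >= edges[i]) & (x <= edges[i + 1])
        # least-squares affine fit (slope + intercept) within this class
        A = np.stack([x[mask], np.ones(mask.sum())], axis=1)
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        yhat[mask] = A @ coef
    return np.max(np.abs(yhat - y))

# More classes -> smaller worst-case approximation error.
err_few = piecewise_affine_error(4)
err_many = piecewise_affine_error(32)
```

The residual error is dominated by the steep region near zero, which is one reason finer class spacing at low response levels pays off (see the response-level spacing results below).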
In the following, we explain these two steps of the $L^3$ method in detail, using the example of a camera sensor with an RGBW (red, green, blue and white) color filter array. The method can be applied to other designs, which we describe in the Results section.
\subsection{Rendering}
The rendering pipeline begins with the sensor data in the spatial neighborhood of a pixel, $n(x, y, p)$. We illustrate the method for a $5\times5$ neighborhood, so that the neighborhood comprises 25 pixel responses (Figure \ref{Fig:L3Render}). Each pixel is classified into one of many classes, $c$, based on its local features: the type of the pixel, the mean response level of the local patch, the spatial variance of the neighborhood, the pixel saturation status, and so on. The total number of classes can be estimated as the product of the number of categories for each feature. For example, suppose that there are 4 types of pixels, and we categorize the mean neighborhood intensity into 10 response levels. This produces 40 different classes. If we further classify each neighborhood as textured (high variance) or uniform, then there will be 80 classes ($1\leq c\leq 80$). Pixel saturation is an additional condition that requires careful management. In some designs, one type of pixel (e.g., the W pixel in an RGBW design) can saturate while the RGB pixels remain well within their operating range. It is therefore important to create separate classes that account for pixel saturation. Allowing for this increases the number of effective classes, typically by a factor of three in the RGBW case. The number of classes is a hyperparameter of the $L^3$ framework and can vary across applications.
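A minimal sketch of how such a class index might be packed from the local features (the feature set, thresholds, and category counts here are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

# Hypothetical class-index computation for a 5x5 RGBW patch. The category
# counts and thresholds below are illustrative design choices.
N_PIXEL_TYPES = 4      # R, G, B, W
N_LEVELS = 10          # mean-response-level categories
N_CONTRAST = 2         # uniform vs. textured
SAT_LEVEL = 0.95       # fraction of full well treated as saturated

def classify_patch(patch, center_pixel_type, texture_threshold=0.05):
    """Map a patch (2-D array of normalized responses in [0, 1]) to a class id."""
    mean_level = patch.mean()
    level = min(int(mean_level * N_LEVELS), N_LEVELS - 1)
    textured = int(patch.std() > texture_threshold)
    saturated = int((patch >= SAT_LEVEL).any())
    # pack the features into a single integer class index
    cls = center_pixel_type
    cls = cls * N_LEVELS + level
    cls = cls * N_CONTRAST + textured
    cls = cls * 2 + saturated
    return cls

# product of category counts, as in the text's class-count estimate
n_classes = N_PIXEL_TYPES * N_LEVELS * N_CONTRAST * 2
```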
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figures/L3Render.png}
\end{center}
\caption{\textbf{The $L^3$ rendering method.} In $L^3$ rendering, each pixel is classified by its local features into one of a few hundred classes (e.g. a red pixel, with high intensity, surrounded by a uniform field). Each class has a learned affine transform (stored in a table) that weights the sensor pixel values in order to calculate the rendered outputs. Hence the rendered output of each pixel is calculated as an affine transform of the pixel and its neighbors.}
\label{Fig:L3Render}
\end{figure}
For each class $c$ and output channel $r$, we retrieve an affine transform to map the 25 values into one rendered value. The affine transforms for each class and rendered output channel, $T(c, r)$, are pre-computed and stored. The rendered output, $o(x, y, r)$, is the inner product of the stored affine transform and the neighborhood data (augmented with a 1 to account for the affine term):
$$o(x,y,r) = \langle T(c,r),\, n(x,y,p)\rangle$$
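In code, the rendering step is a table lookup followed by an inner product. The sketch below assumes the transforms are stored in a dictionary keyed by (class, output channel); all names are illustrative:

```python
import numpy as np

def render_pixel(neighborhood, cls, transforms, n_channels=3):
    """Render one output pixel from a 5x5 neighborhood.

    transforms[(cls, r)] is a learned vector of 26 weights: 25 for the
    neighborhood values plus one affine (bias) term.
    """
    x = np.append(neighborhood.ravel(), 1.0)  # augment with 1 for the affine term
    return np.array([transforms[(cls, r)] @ x for r in range(n_channels)])

# Toy example: an identity-like transform that copies the center pixel.
T = {}
for r in range(3):
    w = np.zeros(26)
    w[12] = 1.0          # weight on the center of the 5x5 patch
    T[(0, r)] = w
patch = np.arange(25, dtype=float).reshape(5, 5)
out = render_pixel(patch, 0, T)  # copies the center value (12.0) to each channel
```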
There are several practical considerations in $L^3$ implementation: the content and application dependent class definitions, the target color representations (monochrome, highly saturated) and even the fitting model in each class (affine or polynomial). These choices impact the algorithm efficiency and precision, and we describe experiments evaluating different choices in Results. These choices are part of the design process when implementing the $L^3$ method.
Finally, the computations for each pixel are independent, so the architecture can be highly parallelized on a graphics processing unit (GPU) or other hardware to shorten rendering time\cite{Tian15}. The $L^3$ method is thus designed to support real-time rendering of high-quality images on mobile devices.
\subsection{Learning}
It is challenging to create algorithms for cameras that are being designed, rather than cameras that already exist, because of limitations in obtaining sensor data\cite{Khashabi14}. We solve this problem by using image systems simulation tools to model the proposed camera architecture and to create the training data\cite{Farrell04, Farrell12}. The Image Systems Engineering Toolbox (ISET) begins with a spectral representation of the scene and includes simulations of the optics and sensor. The simulator has been validated against data from several real devices \cite{Farrell08,Chen09}.
Once we have simulations of the critical data, we have a variety of options for how we select the transforms that map the sensor data into the rendered images. We describe our approach to simulation and transformation derivation in the next two sections.
\subsubsection{Training data}
Realistic camera training requires scene spectral radiance data sets that are representative of the likely application. We have obtained scene spectral radiance measurements and accumulated examples from a number of public scene radiance data sets\footnote{http://www.imageval.com/scene-database/}. In addition, we have used spectral computer graphics methods to simulate a variety of scene spectral radiance images. These simulations can produce training examples that stress particular spatial resolution and color requirements, extending what we would be likely to find by merely sampling a range of natural images.
A further advantage of the simulation method is that the desired output image and sensor data are precisely aligned, at the pixel level. For each input scene spectral radiance we calculate the calibrated color representation (e.g. CIE XYZ) and the sensor response at each pixel (Figure \ref{Fig:L3Train}). Such correspondence is very difficult or even impossible to obtain from empirical measurements.
Finally, because the training pairs are produced by simulation, we can produce many examples of scenes and measurement conditions. Through simulation we can control the properties of the ambient lighting, including its level and spectral power distribution. We can perform simulations with a wide range of optical parameters. Training can be performed for special content (e.g., faces, text, or outdoors scenes).
The pairs of images produced by the simulation methods can provide a virtually limitless collection of input data to the training system.
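Given aligned sensor and target images from the simulator, assembling the per-class training data amounts to grouping augmented patch vectors by class index. A schematic version (the function names and the trivial single-class classifier are illustrative, not the ISET implementation):

```python
import numpy as np

def gather_training_data(sensor, target, classify, patch=5):
    """Group (patch, target) pairs by class from one aligned image pair.

    sensor: H x W raw mosaic; target: H x W x 3 rendered values.
    classify: function mapping a patch to a class index.
    Returns {class: (X, Y)} with X of shape (N, patch*patch + 1).
    """
    h = patch // 2
    data = {}
    H, W = sensor.shape
    for y in range(h, H - h):
        for x in range(h, W - h):
            nb = sensor[y - h:y + h + 1, x - h:x + h + 1]
            c = classify(nb)
            row = np.append(nb.ravel(), 1.0)  # affine augmentation
            data.setdefault(c, ([], []))
            data[c][0].append(row)
            data[c][1].append(target[y, x])
    return {c: (np.array(X), np.array(Y)) for c, (X, Y) in data.items()}

# Toy usage with a single "class" on an 8x8 simulated pair
rng = np.random.default_rng(0)
sensor = rng.random((8, 8))
target = rng.random((8, 8, 3))
pairs = gather_training_data(sensor, target, classify=lambda nb: 0)
X, Y = pairs[0]
```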
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figures/L3Train.png}
\end{center}
\caption{\textbf{Preparing training data with image systems simulation.} Starting from scene spectral radiance data (left), we compute two aligned representations. Top: the sensor responses in a model camera system. Bottom: the target rendered image computed from the scene radiance. The simulated sensor data at each pixel and its neighborhood (patch) are placed into one of several hundred pre-defined classes, based on properties such as the pixel type, response level, and local image contrast. For each class, the pairs of patch data and target output values are used to learn an affine transformation from the patch data to the target output values.
}
\label{Fig:L3Train}
\end{figure}
\subsubsection{Transform optimization}
The purpose of the training is to calculate a set of transforms that optimally map the sensor responses to the target output image and minimize the empirical risk. Stated formally, our task is to find for each class, $C_i$, a transformation, $T_i$ such that
$$\underset{T_i}{\text{minimize}} \sum_{j\in C_i} \mathcal{L}(y_j, X_jT_i) $$
Here, $X_j$ is a row vector containing the $j$-th patch data from the sensor; $y_j$ are the corresponding target rendered image values. $\mathcal{L}$ is the loss function (error) between the target image and the transformed sensor data. In consumer imaging applications, the visual difference measure CIE $\Delta E_{ab}$\cite{Luo01} can be a good choice for the loss function. For other applications, regularized RMSE is widely used. In nearly all photography applications, the transformation from sensor to rendered data is globally nonlinear. The $L^3$ method approximates this global nonlinear transformation with a collection of affine transforms over appropriately defined classes $C_i$.
At first, we solve the transforms for each class independently. This problem can be expressed in the form of ordinary least-squares. To avoid noise magnification, we use ridge regression and regularize the kernel coefficients. That is
$$ T_i= \underset{T_i}{\text{argmin}} ||\tilde{y}-XT_i||_2^2+ \lambda ||T_i||_2^2$$
The data from each patch are placed in the rows of $X$; the regularization parameter is $\lambda$, and $\tilde{y}$ is the output in the target color space as an $N\times 3$ matrix. We have experimented using several target color spaces, including the XYZ, CIELAB and sRGB representations, and we can find satisfactory solutions in all cases. The closed-form solution for this problem is given as
$$ T_i=(X^TX + \lambda I)^{-1}X^T\tilde{y}$$
The computation of $T_i$ can be further optimized by using singular vector decomposition (SVD) of $X$. That is, if we decompose $X = UDV^T$, we have
$$T_i = V\text{diag}(\frac{D_j}{D_j^2+\lambda})U^T\tilde{y}$$
The regularization parameter ($\lambda$) is chosen to minimize the generalized cross-validation (GCV) error \cite{Golub79}.
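The closed-form and SVD expressions above are algebraically equivalent and easy to check numerically; in this sketch the per-class data $X$ and $\tilde{y}$ are random stand-ins:

```python
import numpy as np

def ridge_closed_form(X, Y, lam):
    """T = (X^T X + lam I)^{-1} X^T Y"""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ Y)

def ridge_svd(X, Y, lam):
    """X = U D V^T  =>  T = V diag(D_j / (D_j^2 + lam)) U^T Y"""
    U, D, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T @ np.diag(D / (D ** 2 + lam)) @ U.T @ Y

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 26))   # 200 patches, 26 coefficients each
Y = rng.standard_normal((200, 3))    # stand-in target color values
lam = 0.1
T1 = ridge_closed_form(X, Y, lam)
T2 = ridge_svd(X, Y, lam)
```

The SVD route is convenient in practice because, once $X$ is decomposed, re-solving for many candidate values of $\lambda$ (e.g. during the GCV search) costs only a diagonal rescaling.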
Once the transforms for each class are defined, it is possible to review them for properties that reflect our prior knowledge, such as continuity over the sensor input space, symmetry and uniformity (see Discussion). The software implementation includes methods to check for these conditions and to bring the transforms into alignment with this knowledge.
\section{Results}
In this section, we characterize the performance of the $L^3$ method and illustrate how it can be used to generate image processing pipelines for novel camera architectures. First, we analyze whether the $L^3$ rendering method based on many affine transforms is sufficient to approximate the performance of commercial image processing pipelines (Nikon and DxO). Second, we use $L^3$ to learn a collection of transforms for non-standard color sensor designs (RGBW and RGB-NIR).
\subsection{The \texorpdfstring{$L^3$}{L3} pipeline closely approximates high quality Bayer CFA algorithms}
The $L^3$ pipeline is designed to be computationally efficient and to learn algorithms for novel arrays. Before applying the method to new designs, it is important to analyze whether the simple pipeline is capable of supporting high quality rendering expected from camera systems that have been optimized. To evaluate any performance limits, we compared how well the $L^3$ rendering algorithm can approximate the image processing pipeline embedded in a high quality commercial camera.
\subsubsection{Accuracy}
In one experiment, we used an image dataset of 22 well-aligned raw and JPEG natural images from a Nikon D200 camera. We used 11 randomly selected images for training the local linear transforms and the other half for testing (cross-validation). The $L^3$ pipeline parameters were set to use 50 luminance levels for the four pixel types (red, blue and two types of green), for a total of 200 classes. We analyzed the data with $5\times5$ local patches (affine transforms with 26 parameters). The effect of patch size is discussed later.
Figure \ref{Fig:L3Nikon} (upper left) shows a typical example of an image produced by the Nikon processing pipeline and the corresponding image produced by the $L^3$ method (lower left). By visual inspection the images are very similar; the largest visual differences are in the blue sky and the bush in the lower left. We use the spatial-color visual difference metric S-CIELAB \cite{Zhang97} to quantify the visual difference. Perceptual metrics require specifying the display and viewing distance; for the S-CIELAB calculation we assumed the images are rendered and viewed on a calibrated LCD monitor with $96$ dpi at a viewing distance of one meter. For the $2592\times 3872$ Nikon images, the horizontal field of view is $58.7$ deg. The $\Delta E_{ab}$ error image (lower right) and histogram (upper right) are typical of the test data: the mean S-CIELAB $\Delta E_{ab}$ is 1.74 for the 11 test images (PSNR 40.36), which is a small visual difference.
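For reference, the per-pixel core of this metric is the Euclidean distance in CIELAB; the sketch below computes plain $\Delta E_{ab}$ and omits the spatial pre-filtering that S-CIELAB applies before taking the difference:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """Per-pixel CIE Delta E_ab between two Lab images (H x W x 3).

    S-CIELAB additionally spatially filters the images before this
    difference; that pre-filtering step is omitted in this sketch.
    """
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))

# Toy 2x2 example: a lightness shift of 3 everywhere, plus an a* shift
# of 4 at one pixel.
lab_ref = np.zeros((2, 2, 3))
lab_test = np.zeros((2, 2, 3))
lab_test[..., 0] = 3.0
lab_test[0, 0, 1] = 4.0
err = delta_e_ab(lab_ref, lab_test)
mean_de = err.mean()
```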
These experiments show that the collection of $L^3$ transforms approximates the full commercial rendering produced by this Nikon D200 camera for this collection of outdoor images.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figures/L3Nikon.png}
\end{center}
\caption{\textbf{Approximating the Nikon pipeline with $L^3$ local linear transforms.} At the left, we compare camera RGB (upper left) with $L^3$ rendered image (lower left). The image at the lower right measures the perceptual difference between the two images using S-CIELAB $\Delta E_{ab}$ values for each pixel. The metric is calculated assuming that the images are displayed on a known, calibrated monitor (see inset text, upper right). The histogram of errors is shown on the upper right. The mean error is 1.84, the peak error is near 8, and the standard deviation of the $\Delta E_{ab}$ values is 0.9. The $L^3$ transforms were learned from one set of 11 images. This image is from an independent data set of 11 images. The errors reported for this image are typical for all the images in the independent test set. }
\label{Fig:L3Nikon}
\end{figure}
We also applied the $L^3$ method to learn local linear transforms that approximate a commercial image processing pipeline (DxO). In this experiment, 26 raw and RGB image pairs were analyzed. The RGB images were generated with DxO Optics Pro using parameters tuned by a professional photographer, Dave Cardinal. This dataset includes multiple types of cameras, and the content spans natural scenes, human portraits, and scenic vistas.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figures/L3DxO.png}
\end{center}
\caption{\textbf{Approximating DxO pipeline with $L^3$ local linear transforms.} We compare the image rendered by a DxO pipeline with parameters tuned by a professional photographer (Left) with one rendered with the $L^3$ method (Middle). The perceptual error is measured in S-CIELAB $\Delta E_{ab}$ image (Right). The error calculations are based on the same monitor as in Figure \ref{Fig:L3Nikon}.}
\label{Fig:L3DxO}
\end{figure}
Each of the individual images can be well-approximated by the $L^3$ method. The images at the left and middle of Figure \ref{Fig:L3DxO} show a typical example of the DxO image and the $L^3$ image. The image at the right is the S-CIELAB visual difference error for each pixel. The mean S-CIELAB $\Delta E_{ab}$ value for this image is 1.458 and the accuracy of the approximation is similar to what we achieved for the Nikon processing pipeline.
The expert's settings vary significantly as the scene and camera types change; for example, in some scenes the expert chooses more sharpening, while for others a softer focus is preferred. Hence, no single set of $L^3$ transforms applies to all of the images. The broad issue of selecting $L^3$ transforms for specific acquisition conditions or rendering aesthetics is further analyzed in the Discussion.
The DxO and Nikon D200 experiments show that the $L^3$ kernel regression approach is sufficient to approximate the transforms embedded in commercial rendering products.
\subsubsection{The transforms}
Next, we examine the properties of the learned transforms that approximate the Nikon D200 processing pipeline. When learning the $L^3$ transforms, a few parameters must be selected: the number and distribution of response levels, and the size of the local patch.
The transforms change substantially with the response level (Figure \ref{Fig:L3TransformBayer}). At low levels, the weights are relatively equal across the entire patch, and there is less selectivity for color channels. At high response levels the weights are concentrated on the appropriate pixel type. For example, when the center pixel is green and the output channel is also green, at high response levels the transform weights are concentrated on the central green pixel. At low light levels the transform weights at non-central green pixels are relatively high.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figures/L3TransformBayer.png}
\end{center}
\caption{\textbf{$L^3$ transforms depend on response level.} The three monochrome images show the relative weights of the transforms that convert data at a G pixel centered patch into the green output channel. The patch size is $5\times5$ and the three images show the weights that are learned for a class defined by Low (left), Mid (second left) and High (third left) response levels. The spatial distribution of the weights becomes more concentrated at the central pixel as the mean response level increases. The CFA pattern of the full $5\times5$ patch is shown at the right.}
\label{Fig:L3TransformBayer}
\end{figure}
\subsubsection{Response level spacing}
The learned transform weights change more rapidly at the lower response levels compared to the higher levels. For this reason, it is efficient to use a logarithmic spacing of the mean response levels that define the classes; that is, we use more finely spaced classes at the low response levels than the high response levels.
Through simulation, we can evaluate the difference between linear and logarithmic spacing of the mean response levels. We analyzed the Nikon D200 data using different numbers of mean response levels (Figure \ref{Fig:L3ClassesSpacing}). The levels were spaced either linearly or logarithmically. To achieve the same image quality (e.g. 2 $\Delta E$), linear spacing requires roughly 50\% more classes than logarithmic spacing. As the total number of classes becomes large, say 50 for this example, the performance of the two spacing methods is very similar. We expect that the specific parameter values, such as the number of response levels, will differ slightly for different optics and sensor combinations, but the principle of using logarithmic spacing is likely to hold across conditions.
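A sketch of the two spacing schemes (the number of levels and the 1\% lower bound for the logarithmic edges are illustrative assumptions):

```python
import numpy as np

# Illustrative class boundaries for a mean response level in [0, 1]:
# logarithmic spacing concentrates classes at low response levels,
# where the learned transforms change most rapidly.
n_levels = 10
lin_edges = np.linspace(0.0, 1.0, n_levels + 1)
log_edges = np.concatenate(([0.0], np.logspace(-2, 0, n_levels)))  # 1% floor assumed

def level_class(mean_response, edges):
    """Index of the class whose interval contains mean_response."""
    idx = int(np.searchsorted(edges, mean_response, side="right")) - 1
    return min(idx, len(edges) - 2)
```

With these edges, a dim patch at 5\% of full scale lands in the first of ten linear classes but in the fifth logarithmic class, i.e. the logarithmic scheme resolves the low-response regime much more finely.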
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figures/L3ClassesSpacing.png}
\end{center}
\caption{\textbf{The effect of class selection on rendered image quality.} The graph shows how the color error (mean S-CIELAB $\Delta E$) of the rendered image declines as the number of response level classes increases. The two curves show the effect for linear (blue dashed) and logarithmic (red solid) spacing of the mean response levels in the patch. This inset images are the renderings for 4, 7 and 38 mean response level classes (logarithmic spacing). The rendering of the flower (zoomed view) changes substantially as the number of levels increases, and the mean color reproduction error declines significantly as the number of mean response levels increases to about 15. There is only a very small advantage for the logarithmic spacing when using a small number of classes, and no advantage beyond about 12 levels.}
\label{Fig:L3ClassesSpacing}
\end{figure}
\subsubsection{Patch size selection}
There is a significant computational cost to increasing patch size. Changing the patch size from a $5\times5$ to $7\times7$ ($9\times9$) approximately doubles (triples) the number of coefficients and computational cost. Moreover, if the training dataset is limited, the risk of overfitting can increase with the patch size.
For the Nikon and DxO approximations, we found little improvement in the approximation as we changed the patch size beyond $5\times5$. Figure \ref{Fig:L3PatchSize} shows the mean $\Delta E$ values of S-CIELAB on the 11 test images. The plotted data points show the mean error for individual test images, and the solid red line shows the average of all 11 images. The mean $\Delta E$ values decrease very slightly as the patch size increases from $5\times5$ to $7\times7$ and there is no further decrease as the patch size increases to $9\times9$. It might be more effective to create classes based on other features (e.g. nonlinear or global features) rather than increasing the patch size.
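The cost claim is easy to make concrete. Under the illustrative assumption of 200 classes and three output channels (as in the Nikon experiment), the transform-table sizes scale as follows:

```python
# Coefficient counts for the affine transform table. The class and
# channel counts are illustrative (200 classes, 3 output channels).
def table_coefficients(patch_size, n_classes=200, n_channels=3):
    per_transform = patch_size ** 2 + 1   # pixels in the patch + affine term
    return n_classes * n_channels * per_transform

c5, c7, c9 = (table_coefficients(p) for p in (5, 7, 9))
# 5x5 -> 26 coefficients per transform; 7x7 -> 50; 9x9 -> 82,
# i.e. roughly doubling and tripling the table size and per-pixel cost.
```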
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figures/L3PatchSize.png}
\end{center}
\caption{\textbf{The effect of patch size on rendered image quality.} The boxplot shows the distribution of mean S-CIELAB $\Delta E$ errors for the 11 test images. The inset monochrome images are examples of the learned affine transforms. The red rectangle in each image denotes the center $5\times5$ region. The weights learned by the transform outside of the $5\times5$ patch are small, and increasing the patch size has little effect on the color accuracy.}
\label{Fig:L3PatchSize}
\end{figure}
\subsection{\texorpdfstring{$L^3$}{L3} pipelines for novel color filter arrays}
The ability to automate the process of learning the rendering pipeline is an important objective of the $L^3$ method. In this section, we use $L^3$ to generate the image processing pipelines for two challenging CFA designs.
We first apply the $L^3$ method to generate an image processing pipeline for a camera with an RGBW sensor, whose CFA repeating pattern contains RGB and clear (white) pixels. Adding a white pixel extends the operating range of the camera and makes it usable in low-light imaging. The key challenge in designing a pipeline for this sensor is the large mismatch in sensitivity between the W and RGB pixels\cite{Parmar09}.
We then consider a CFA that combines RGB channels with a near infrared (NIR) channel. There is a great deal of interest in adding an NIR channel to support depth-sensing applications: the NIR channel, which is invisible to the human eye, can be used to measure a projected NIR pattern for depth estimation. The NIR pixels do not contain significant information for image reproduction, so this design significantly reduces the pixel count available for rendering. We analyze how best to render an image for the RGB-NIR design, and how the rendering depends on factors such as pixel size and optics.
\subsubsection{RGBW}
In this experiment, we simulated an RGBW camera with exactly one R, G, B and W pixel in each CFA repeating pattern. The spectral transmittance of the color filters and other key sensor and lens parameters of the camera simulation are shown in Figure \ref{Fig:L3RGBW}. The relative sensitivity of the W to the RGB pixels and the spatial arrangement of the four pixel types differ among vendors \cite{Kumar09,Wang11}; this simulation represents one of a range of possible choices.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figures/L3RGBW.png}
\end{center}
\caption{\textbf{The simulated RGBW camera system.} The table lists key properties of the optics, pixel and sensor array used in the simulation. The curves show the spectral quantum efficiency of the four different pixel types. The inset shows the CFA pattern for a $5\times5$ patch centered on a green pixel. }
\label{Fig:L3RGBW}
\end{figure}
We apply simulation methods (Figure \ref{Fig:L3Train}) to prepare the training data to learn the local linear transforms. Specifically, we first calculate the sensor response of multispectral scenes using the ISET camera simulator. Then, we compute the ideal (CIE XYZ) value at each pixel location. The local patches are classified by mean response levels and the center pixel type. Affine transformations are learned for each class, using ridge regression with the regularization parameters set using a cross-validation error minimization.
Figure \ref{Fig:L3TransformRGBW} shows examples of the learned transforms for this RGBW camera in four different classes, from low response levels to near saturation. The low mean response class transforms (Low) heavily weight data from the W pixel, presumably because the signal-to-noise ratio (SNR) of the W pixel is substantially higher at low response levels. As the response level increases (Mid), the W pixel SNR advantage becomes less important than the color information provided by the RGB pixels, and the weights redistribute toward the RGB pixels. As the W pixel saturates (High), the $L^3$ transforms further discount the W responses. As the G pixel begins to saturate as well (Saturate), the weights on both the W and G pixels decline, with most of the weight being assigned to the R and B pixels. By designing tables that include many response levels, we assure a smooth transition from the W-dominated to the RGB-dominated regime.
Notice that for the Mid and High response levels, the red output depends significantly on the G center pixel response. The algorithm learns that there is a strong spatial and color correlation between the G center pixel value and the red output channel\cite{Hel-Or04}. This confirms the previous observation that the value at the G center pixel is useful in predicting the red output value, and the linear transform quantitatively estimates the proper amount that the G pixel should contribute to the red channel output. However, as the center G pixel starts to saturate (Saturate), the transforms assign it a lower weight.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figures/L3TransformRGBW.png}
\end{center}
\caption{\textbf{The transforms learned for the RGBW sensor.} The transforms learned for Low, Mid, High and Saturate response levels are shown in the rows. The columns show the red, green and blue output channels. The color filter array is shown at the right. At Low response levels, the highest weights are on the W pixels. As the response level increases, the weights of the RGB pixels become larger. When most W and some of the G pixels saturate, the R and B weights are the largest.}
\label{Fig:L3TransformRGBW}
\end{figure}
We analyzed two important aspects of the $L^3$ performance: color accuracy and image resolution. We performed this analysis by training on a standard data set and then testing performance on targets designed to analyze color and resolution.
To assess color accuracy, we computed the CIELAB $\Delta E$ values between the $L^3$ rendered image and the ideal values of a standard Macbeth color checker. The $L^3$ pipeline for the RGBW sensor achieves a mean $\Delta E$ of 1.7, which is very accurate. We then replaced the W pixel with a green pixel to form a traditional Bayer CFA pattern; the mean $\Delta E$ for this design is similar ($\Delta E = 1.65$). Hence, the $L^3$ method learns how to incorporate the W pixels while maintaining accurate color reproduction.
To assess resolution, we calculated the spatial frequency response (SFR) using the ISO 12233 slanted bar method\cite{ISO12233}. This method measures the image intensity in the region near the edge of a slanted bar. The intensity measurements are converted from a spatial representation into a modulation transfer function (MTF). The metric is defined by the spatial frequency at which the MTF drops to half of its peak value (MTF50); higher MTF50 values imply higher spatial resolution. The value of the MTF50 depends on a number of system features, including the optics and the pixel size. For the system described in Figure \ref{Fig:L3RGBW} (f/\#=4, pixel size 1.4 $\mu$m), the MTF50 is 154.80 cycles/mm, which is close to the upper bound of 186.70 cycles/mm imposed by the optics.
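The quoted optics bound can be cross-checked against the diffraction-limited MTF of an ideal circular aperture. The sketch below assumes f/4 optics and 550 nm light (the wavelength is our assumption, not stated in the text) and extracts the MTF50 by interpolation:

```python
import numpy as np

def diffraction_mtf(freq, f_number, wavelength_mm=550e-6):
    """Incoherent diffraction-limited MTF of an ideal circular aperture."""
    cutoff = 1.0 / (wavelength_mm * f_number)      # cycles/mm
    x = np.clip(freq / cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x ** 2))

def mtf50(freqs, mtf):
    """Frequency at which the MTF first falls to half its peak value."""
    half = 0.5 * mtf.max()
    idx = np.argmax(mtf < half)
    # linear interpolation between the bracketing samples
    f0, f1 = freqs[idx - 1], freqs[idx]
    m0, m1 = mtf[idx - 1], mtf[idx]
    return f0 + (half - m0) * (f1 - f0) / (m1 - m0)

freqs = np.linspace(0.0, 450.0, 2000)
value = mtf50(freqs, diffraction_mtf(freqs, f_number=4.0))
# roughly 184 cycles/mm at these settings, near the quoted optics bound
```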
We assessed the spatial resolution for different lens (diffraction-limited) and sensor combinations (Figure \ref{Fig:L3MTF50}a). The radius of the blur circle grows proportionally with the f/\#, blurring the optical irradiance at the sensor. When the pixel size is small and the f/\# is large, the spatial resolution, assessed by MTF50, is limited by the optics. When the pixel size is large and the f/\# is small, the spatial resolution is limited by the pixel size.
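The two limiting regimes can be estimated in closed form. For a diffraction-limited lens the incoherent cutoff frequency is $1/(\lambda \cdot f\#)$, and the diffraction MTF of a circular aperture falls to one half at roughly $0.404$ of that cutoff; the pixel pitch imposes a Nyquist sampling limit of $1/(2 \cdot \text{pitch})$. The wavelength below is an assumed mid-visible value.

```python
import math

def diffraction_mtf50(f_number, wavelength_mm=550e-6):
    """Approximate MTF50 (cycles/mm) of a diffraction-limited lens.

    The circular-aperture diffraction MTF reaches 0.5 at about
    0.404 of the cutoff frequency 1/(lambda * f#).
    """
    cutoff = 1.0 / (wavelength_mm * f_number)
    return 0.404 * cutoff

def pixel_nyquist(pixel_pitch_mm):
    """Sampling limit (cycles/mm) set by the pixel pitch."""
    return 1.0 / (2.0 * pixel_pitch_mm)

# System of Figure 8: f/4 optics, 1.4 um pixels.
optics_limit = diffraction_mtf50(4.0)    # roughly 184 cycles/mm
sensor_limit = pixel_nyquist(1.4e-3)     # roughly 357 cycles/mm
```

With these assumptions the optics limit comes out near the 186.70 cycles/mm bound quoted above, and the pixel Nyquist limit lies well above it, consistent with the f/4, 1.4 um system being optics-limited.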
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figures/L3MTF50.png}
\end{center}
\caption{\textbf{Dependence of spatial resolution (MTF50) on system and acquisition conditions.} (a) The MTF50 depends on both the f/\# of the diffraction-limited optics and the pixel size. The mesh shows the MTF50 for a range of these parameters. The transforms are learned from images of human faces. (b) MTF50 of the transform as a function of f/\# and pixel size when trained with a spatial resolution test chart. (c) MTF50 as a function of f/\# plotted separately for classes at different percentages of the maximum response level.}
\label{Fig:L3MTF50}
\end{figure}
The exact value of the MTF50 also depends on the training data. In Figure \ref{Fig:L3MTF50}a, the $L^3$ transforms are trained with a collection of human faces, which do not have sharp edges. To evaluate the upper limit of the spatial resolution, we trained with a scene containing only a spatial resolution chart and again used the MTF50 metric to evaluate spatial resolution (Figure \ref{Fig:L3MTF50}b). In this case, the $L^3$ transforms achieve almost double the spatial resolution in the optimal region (small pixel size, small f/\#). In other regions, the benefit of using a spatial resolution target is much smaller.
In addition to the optics and pixel size, scene illumination level and exposure conditions also matter (Figure \ref{Fig:L3MTF50}c). We evaluated the MTF50 for a fixed pixel size over a range of response levels and f/\#s. For small f/\#, when the resolution can be very high, the $L^3$ transforms differ between the response levels. At low response levels, the transforms reduce noise by placing significant weights on most of the pixels in the patch, so the MTF50 is relatively low. When the response level is high, the MTF50 is much higher. For large f/\#, the loss is dominated by the lens and thus the difference in the MTF50 between low and high response levels is minimal.
\subsubsection{RGB-NIR}
Next, we use the $L^3$ method to design an image processing pipeline for an RGB near infrared (NIR) sensor. The main application for including an NIR channel is to acquire extra information that is used in combination with an IR projector to estimate depth \cite{Jeong13, Smisek13}. There are several ways to implement NIR sensors. One approach removes one of the two green filters and the IR cutoff filter that is normally placed on the sensor surface. Modern color filters pass significant amounts of IR, so this approach allows NIR photons to enter the same pixels that are used by the visible light\cite{Fredembach13}. The image processing pipeline must estimate and remove these correlated IR signals, which introduces noise and reduces sensor dynamic range.
An alternative approach, recently implemented by Panasonic, selectively blocks IR photons in the RGB channels by placing metal within the pixel\cite{Watanabe15}. In this approach, each CFA block contains a pixel of each type, and the RGB channels are protected from absorbing NIR photons by an infrared cut filter layer which includes a stack of silicon oxide and titanium oxide films.
The Panasonic RGB-NIR differs significantly from RGBW because the NIR channel captures very little information in the visible range. However, there may be useful image reproduction information in the NIR channel, and in any case the pipeline must run effectively even if the imaging component only uses the RGB channels.
In this example, we simulate the Panasonic design and the key parameters are shown in Figure \ref{Fig:L3RGBNIR}a. Learning the $L^3$ pipeline for this sensor requires a hyperspectral scene radiance dataset that extends into the NIR. We used a dataset of 12 scenes that were measured from 415 to 950 nm. The data set includes calibration targets, fruits, buildings and natural scenes.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figures/L3RGBNIR.png}
\end{center}
\caption{\textbf{Key parameters of RGB-NIR camera systems (left) and the normalized weights for NIR pixels as a function of response level (right).} The three curves show the normalized weights of the NIR pixels toward the three output channels as a function of response level. The two inset gray images are the learned transforms for a G-centered patch toward the blue output channel in the low and high response ranges. In low light, the NIR pixels are slightly used, while at high response levels there is almost no contribution from NIR pixels in the rendered RGB image. The CFA pattern and the effective quantum efficiency of the pixels (CFA transmittance included) are shown as the inset image on the upper right.}
\label{Fig:L3RGBNIR}
\end{figure}
We used the $L^3$ method to learn a set of local linear transforms for the RGB-NIR sensor. Figure \ref{Fig:L3RGBNIR} shows the normalized weights of the NIR pixels for G-centered patches at different response levels. At low response levels, the $L^3$ transforms assign significant weight to the NIR pixels; the weights on these pixels become very small at high response levels.
We evaluated the color accuracy for the RGB-NIR sensor as we did for the RGBW case. The cross-validated median CIELAB $\Delta E$ value is 2.87. If we black out the IR pixel and solve for the transforms again, the cross-validated median $\Delta E$ increases slightly to 3.08. This shows that the NIR data can slightly help estimate color. If we replace the IR pixel with a green pixel, forming a conventional Bayer pattern, the median $\Delta E$ value is 2.69. Hence, the RGB-NIR sensor acquires some information in the invisible range at the cost of a slightly worse RGB image. The simulations and the $L^3$ algorithm quantify how to take advantage of this information in natural indoor images.
We also evaluated the spatial resolution of the RGB-NIR design using the MTF50 measure. For the Panasonic camera, the MTF50 value is 137 cycles/mm (optics f/\#=4, 2.75 um pixel size with proper exposure). Replacing the NIR pixel with a G pixel increases the MTF50 to 151 cycles/mm. This shows that the RGB-NIR design has slightly lower spatial resolution than the matched Bayer design.
\section{Discussion}
We quantified how accurately the $L^3$ architecture renders images. We used a perceptual error metric to compare the $L^3$ rendering with two different commercial rendering methods (cf. Nikon and DxO). The $L^3$ architecture, based on kernel regression approximation, produces images that are about 2 $\Delta E$ (spatial CIELAB) from the original.
We also showed that the $L^3$ architecture can automatically generate rendering pipelines that are optimized for novel sensor designs. We implemented and evaluated image rendering pipelines for a sensor designed to extend the dynamic range by including a clear pixel (RGB-W), and also for a sensor that includes a set of pixels capable of measuring near infrared patterns projected into the scene to estimate depth (RGB-NIR).
In this section, we review the relationship between $L^3$ and other new ideas in image processing. Then, we discuss some design choices of the $L^3$ method that arise in practice. First, we discuss the need to create multiple tables of transforms that should be applied in different acquisition conditions (color balancing; exposure duration; imaging conditions). Second, we describe how we account for knowledge about the transformations that should be incorporated to improve the learned transforms. Third, we consider the choice of a target space for the rendering. Fourth, we discuss how $L^3$ might be extended to address additional modules in the rendering pipeline.
\subsection{Related work}
There are several themes in the literature that share common elements with the $L^3$ method. At the most general level, Milanfar et al. proposed using multidimensional kernel regression and non-parametric learning methods in image processing and reconstruction \cite{Takeda07, Milanfar11}. That work generalizes several image processing methods, including bilateral filtering and denoising algorithms, under a kernel regression schema. The general principles of kernel regression - classifying local data and interpolating measurements - can be applied to a range of imaging problems, such as learning super-resolution kernels \cite{Zhang15}.
The proposal that is closest to our work comes from Khashabi et al. \cite{Khashabi14}. Similar to our work, they describe a nonparametric regression tree model, together with Gaussian conditional random fields, to demosaic raw sensor responses. They classify the data near each pixel into one of a large number of classes; the class is based on the color filter type of the pixel and a measurement of the local edge direction in the $5\times 5$ neighborhood of the pixel. For each class they use example camera data to find a quadratic transform that maps pixel data to the rendered value.
In addition to these similarities in the approach, there are several significant differences. First, Khashabi et al. perform their training by processing mosaicked sensor data from existing cameras to estimate the ground truth, full resolution. In contrast, $L^3$ makes extensive use of image systems simulation technology to create sensor data for training. The use of simulations enables $L^3$ to support analyses for cameras that do not yet exist. Second, Khashabi et al. classify the sensor data based on pixel type and spatial orientation of the data in the $5\times5$ patches within the sensor mosaic. $L^3$ classifies the sensor data based on response level and local contrast. Relying on response level is important because the different levels have very different noise characteristics, and the optimal transform differs significantly between low and high response levels (Figure \ref{Fig:L3TransformRGBW}). Finally, Khashabi et al. focus on the demosaicking stage of the process, while $L^3$ training replaces additional pipeline components, including denoising and color transforms that map the sensor data directly into the target color space.
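The classification-plus-regression core of $L^3$ can be sketched as follows. This is a simplified stand-in, not the released implementation: patches are keyed by center pixel type, quantized response level, and a flat/textured contrast flag, and each class receives its own least-squares linear transform.

```python
import numpy as np

def classify(patch, center_type, sat_level, n_levels=4, contrast_thresh=0.1):
    """Class key: (center pixel type, quantized response level, flat?)."""
    mean = patch.mean()
    level = min(int(mean / sat_level * n_levels), n_levels - 1)
    flat = patch.std() / (mean + 1e-9) < contrast_thresh
    return (center_type, level, flat)

def learn_transforms(patches, targets, classes):
    """Per class, a least-squares linear map from patch vector to target."""
    tables = {}
    for c in set(classes):
        idx = [i for i, k in enumerate(classes) if k == c]
        X = np.stack([patches[i].ravel() for i in idx])
        Y = np.stack([targets[i] for i in idx])
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        tables[c] = W
    return tables

def render(patch, center_type, sat_level, tables):
    """Apply the transform of the patch's class."""
    return patch.ravel() @ tables[classify(patch, center_type, sat_level)]
```

At rendering time only the table lookup and one matrix product per patch remain, which is what makes the pipeline cheap to execute.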
Another related set of ideas concerns the development of image processing pipelines that are based on joint optimization across optics, sensor, and display. An example is from Stork and Robinson \cite{Stork08} who offered a theoretical foundation for jointly optimizing the design and analysis of the optics, detector, and digital image processing for imaging systems. They optimized the image processing pipeline for different lenses, assuming a monochrome sensor. The $L^3$ method incorporates lens properties into the simulation, so that the table of transforms accounts for the specific lens properties. Different tables are generated as the lens properties (e.g., aperture, f/\#) are varied. Hence, the $L^3$ method is also a co-design approach in the sense that the learned rendering parameters depend on the whole system, including the optics and sensor.
Heide et al. also conceive of the image processing pipeline as a single, integrated computation\cite{Heide14}. They suggest a framework (FlexISP) in which they model the relationship between the sensor data and a latent image that represents the fully sampled sensor data prior to optical defocus. They propose to estimate the latent image from the sensor data by solving an optimization problem; the optimization accounts for both the data and a set of natural image priors. Hence, a key difference is that Heide et al. calculate a solution separately for each sensor acquisition, while $L^3$ pre-computes a fixed table of transforms and applies this table to all images. Another difference is that Heide et al., like Khashabi et al., begin their calculations with the sensor data. In contrast, $L^3$ simulates a camera system beginning with scene radiance, accounting for properties of the optics, pixel, and sensor. By working from scene spectral radiance data, $L^3$ can be used to create pipelines at the earliest stages of the design process, when no hardware implementation yet exists. The simulations also make it possible to optimize $L^3$ parameters for different types of scenes, some of which may be difficult to create in a laboratory environment.
Convolutional sparse coding (CSC) methods share some features of the $L^3$ method. CSC representations begin with a full image representation and decompose the image into a linear sum of component images \cite{Heide15}. Each component is the convolution of a single, usually small, kernel with a sparse feature map (most entries are zero). The CSC learns local features from the input training images, and the core calculations are linear. However, the CSC learning methods and target applications differ significantly from those of $L^3$. First, CSC learns kernels and feature maps that decompose an image into separate components. $L^3$ performs the reverse computation; it starts with partial sensor data and creates a complete image. Second, the learning methods are different. The CSC kernels are learned through advanced bi-convex optimization methods that require substantial computational power. The affine (or simple polynomial) transforms learned by $L^3$ rely on prior knowledge of the camera design and simulated training data, and require only very simple optimization methods. In summary, $L^3$ is an architecture for designing new image processing pipelines and efficient rendering; CSC is a technique for learning image features for machine vision and computational photography applications, such as inpainting.
\subsection{Multiple transform tables}
Training $L^3$ for the range of settings (optics, sensor properties) of a single camera leads to different transform tables (Figure \ref{Fig:L3MTF50}). For mobile devices, the camera settings do not vary extensively. When only a few settings are possible, the best solution may be to pre-compute and store a table of transforms for each setting.
In addition to hardware settings, there is the question of whether the $L^3$ training would produce different tables as we change the scene characteristics. We have explored an important case: how the tables depend on the spectral power distribution of the illumination \cite{Germain}. In this case we found that the tables learned for different illuminants are similar enough that we can render the image with a single table and then apply a $3\times3$ color transform to produce a color-balanced image.
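The single-table-plus-matrix approach can be illustrated with a least-squares fit. Here the "rendered" and "balanced" colors are synthetic placeholders; in practice they would be corresponding patch colors rendered under two illuminants.

```python
import numpy as np

def fit_balance_matrix(rendered, balanced):
    """Least-squares 3x3 matrix M such that balanced ~= rendered @ M.T."""
    M_T, *_ = np.linalg.lstsq(rendered, balanced, rcond=None)
    return M_T.T

def apply_balance(image_rgb, M):
    """Apply a 3x3 color transform to an (..., 3) RGB array."""
    return np.asarray(image_rgb) @ M.T

# Synthetic check: recover a known 3x3 transform from sample colors.
rng = np.random.default_rng(1)
samples = rng.random((100, 3))
true_M = np.array([[1.2, -0.1, 0.0], [0.0, 1.0, 0.05], [-0.05, 0.0, 0.9]])
M_hat = fit_balance_matrix(samples, samples @ true_M.T)
```

Because the correction is a single $3\times3$ matrix, it can be applied after rendering at negligible cost compared to re-learning a full transform table per illuminant.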
There are many other scene characteristics that remain to be explored. The optimal table of transforms may depend on factors such as image motion, image content, and optics parameters such as depth of field and focus. It is possible that multiple tables will be required or that a single set of tables followed by simple transforms will suffice for most conditions.
\subsection{Applying prior knowledge to the transform table}
$L^3$ training depends on the specific training data. This can be used to our advantage, say if we know we want to optimize the rendering for a particular condition and target (e.g., human faces, outdoors). The reliance on specific samples produces transforms that may differ in some small way from our expectations. For example, in many cases we expect the transforms to be left-right symmetric. Further, we expect that transforms at nearby response levels will be similar to one another. We developed functions that can be applied to the learned table of transforms to enforce these expectations (prior knowledge).
\begin{itemize}
\item \textbf{Symmetry}
When the underlying color filter array of each patch is symmetric in some way (up-down, left-right, transpose, circular), we also expect the learned transform to be symmetric. Imposing symmetry helps avoid over-fitting to the training data. We transform general transforms into symmetric transforms by creating symmetric versions of the learned transform and then using the average.
\item \textbf{Smoothing and interpolation}
The $L^3$ coefficients change relatively smoothly as the response level increases. We smooth the transforms by fitting a spline to each of the coefficients and then replacing the coefficients with the values of the smooth spline. We use the same method to interpolate transforms for classes that have insufficient training data.
\item \textbf{Uniformity}
A uniform scene should be rendered as a uniform image. This requirement, unlike the previous two, requires operating on the transforms from different pixel types. Specifically, the sum of the transform weights of each pixel type must be equated between the classes of different center pixels.
\end{itemize}
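The first two regularizers can be sketched minimally, under the assumption that kernels are stored as square arrays indexed by response level (the text above specifies splines; a low-order polynomial stands in here for simplicity):

```python
import numpy as np

def symmetrize(kernel):
    """Average a square kernel with its flips and transposes."""
    variants = [kernel, np.fliplr(kernel), np.flipud(kernel),
                np.flipud(np.fliplr(kernel))]
    variants += [v.T for v in variants]
    return np.mean(variants, axis=0)

def smooth_across_levels(kernels, deg=3):
    """Fit each coefficient as a smooth function of response level.

    kernels: array of shape (n_levels, h, w).
    """
    n = kernels.shape[0]
    levels = np.arange(n)
    flat = kernels.reshape(n, -1)
    out = np.empty_like(flat)
    for j in range(flat.shape[1]):
        coeffs = np.polyfit(levels, flat[:, j], min(deg, n - 1))
        out[:, j] = np.polyval(coeffs, levels)
    return out.reshape(kernels.shape)
```

Averaging over the full set of flips enforces the symmetry group exactly, and smoothing across levels can double as interpolation for classes with sparse training data.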
\subsection{Choosing the target space for rendering}
We emphasized $L^3$ consumer photography applications: the sensor data are transformed to a rendered image. Even for consumer photography applications, there are multiple choices for the target rendering space. We have trained $L^3$ instances to transform into various colorimetric spaces (e.g., CIE-XYZ), and we have also trained to transform into nonlinear representations (e.g., sRGB, CIELAB). Because the global $L^3$ transformation is nonlinear (though locally linear), the input data can be effectively transformed to most representations that are smoothly related to colorimetry representations. Choosing the target space is equivalent to choosing the error function. For example, rendering to CIELAB space minimizes the point-by-point color error.
\subsection{Space-varying}
The current $L^3$ formulation does not account for the position of the center pixel within the sensor. Thus, the algorithm is effectively shift-invariant. There are two important aspects of the rendering pipeline that are space-varying. First, lens shading produces an uneven illumination level from the center to the periphery of the sensor. Second, geometric distortion of the image (e.g., barrel distortion) varies the relationship between the position of the pixel within the sensor and its appropriate position in the output image.
The pixel type and response level are used to identify a class, and we do not include the position of the pixel within the sensor. Hence, the $L^3$ method is fundamentally space-invariant, whereas the corrections for lens shading and geometric distortion are inherently shift-varying. The parameters of these effects are determined by the main taking lens and are independent of the image processing pipeline. Hence, like illuminant correction, for the moment we think it is best to perform these steps separately rather than extending the number of classes and assigning $L^3$ the task of accounting for the position-dependent factors.
\subsection{Reproducibility}
The data and methods necessary to reproduce the figures are available from the Stanford Digital Repository\footnote{http://purl.stanford.edu/bk962py0458}.
\section{Conclusion}
We introduce a methodology to automate the design of image processing pipelines. The image-processing pipeline is approximated as a locally linear operation in which sensor data are grouped into various classes, and the data from a class are rendered by a linear transform into the rendered image. We illustrate that the local transforms can produce high quality rendered images. Then, we use image systems simulation to create the table of affine transforms for novel camera designs, including sensors with clear or near infrared pixels. We evaluate the performance of these tables using color metrics (S-CIELAB), spatial resolution metrics (MTF50), and simulations of captured images. Hence, this paper combines image systems simulation technology and modern computational methods into a methodology that creates image processing pipelines.
\section{Introduction}
The search for the electric dipole moments (EDMs) of particles (electrons, muons, neutrons,
protons) as well as the EDMs of closed many-particle systems (nuclei, atoms, molecules) presents
one of the most important fundamental physics problems starting from the works of Purcell and Ramsey
\cite{Purc50} on the neutron EDM and of Salpeter \cite{Salp58} on the electron EDM in atoms. The prediction
for the electron EDM following from the Standard Model (SM) is $d_e<10^{-38}$ e cm (e is the electron
charge) \cite{Posp91}. However, within various extensions of the SM \cite{Eng13} the electron EDM can be enhanced to the value close to the bounds obtained in the recent accurate experiments with heavy diatomic molecules YbF \cite{Huds11}, ThO \cite{ACME13}. A possibility for the observation of the charged particles EDMs in heavy diatomic molecules was first discussed in \cite{Sand67}, where the observation of the proton EDM in diatomic molecules with closed electron shells was considered. For the observation of the electron EDM the diatomic molecules with non-closed electron shells should be used. Due to the closely lying opposite parity $\Omega$-doublets in the heteronuclear diatomic molecules with one heavy nucleus and open electron shells the space parity violating (P-odd) \cite{Labz78} and space parity and time invariance violating (P, T-odd) effects, including the EDM effect \cite{Sush78}-\cite{Gorsh79} are strongly enhanced. For the extraction of a bound for the electron EDM from the molecular measurement complicated theoretical calculations are necessary. Such calculations for YbF, PbF, ThO and other molecules were performed in \cite{Dmit87}-\cite{Skripn13}.
Another idea to observe the muon EDM in magnetic storage rings was suggested in \cite{Bail78}. This idea consists of observing the charged-particle spin precession in an electric field due to the existence of the EDM. It was realized for muons within the (g - 2) experiment with polarized muons in a magnetic storage ring \cite{Bail78} and led to a new constraint on the muon EDM. Several years ago it was suggested that this constraint can be essentially improved by the full compensation of the magnetic field in the rest frame of the particle by an external radial electric field \cite{Semer98} - \cite{Farley}. The same idea was discussed for bare nuclei and for highly charged
ions (HCI) with closed electron shells in \cite{Khripl98}, \cite{Khripl00}. Possible proton EDM experiments in magnetic storage rings were described in \cite{Semer08}. A proposal for the
observation of the electron EDM in H-like HCI in magnetic storage rings was made in \cite{BondPR11}.
Very recently it was suggested to use electrostatic storage rings for the observation of the charged particles
(muon, proton, deuteron) EDMs \cite{Semer09}. In this paper we extend this idea to the observation of the
electron EDM in the H-like ions.
The operating electrostatic storage rings, which however were never used for the search for EDMs, exist
in Aarhus (Denmark) \cite{Moll97}, Stockholm (Sweden) \cite{Ren04} and Heidelberg (Germany) \cite{Fad06}.
Very recently it was also suggested to employ an electrostatic storage ring to observe the electron EDM in molecular ions, and the possibility to reach the bound $10^{-30}$ e cm was anticipated \cite{Kaw11}. In \cite{Kaw11} the $WN^+$ ion was chosen for the theoretical studies. The experiment proposed in \cite{Kaw11} is very similar to the experimental techniques employed in \cite{Huds11},\cite{ACME13} with molecular beams but could provide better statistics. These experimental techniques differ essentially from the simple magnetic resonance \cite{Purc50} and from the muon spin precession in the external electric field as in \cite{Bail78}. It is assumed that the electron spins are polarized by laser pumping to some excited state; the electron spin then rotates due to the EDM in the external electric field, and this rotation is registered in the decay of the excited state. This scheme is quite similar to the one suggested for the electron EDM observation in H-like ions in magnetic storage rings in \cite{BondPR11}.
The electron EDM search with the H-like ions has important advantages compared to the electron EDM experiments with heavy molecules. First, the experiments with H-like ions require much simpler theoretical support. This will become quite important for the extraction of accurate values for the EDM from the experimental results after the experimental discovery of the electron EDM. Moreover, we will demonstrate that the experiments with H-like ions may provide a possibility to distinguish the electron EDM effect and the effect of P,T-odd interaction between the atomic electron and the nucleus. These two effects cannot be distinguished in experiments with heavy neutral atoms and molecules.
An electrostatic storage ring as described in \cite{Moll97} - \cite{Fad06} consists of a ring with two deflection areas formed by pairs of plates (electrodes), as shown in Fig.1. The electric field sustained between these plates points in the radial direction. This field compensates the centrifugal force, so the injected charged particles move along a closed trajectory in the ring. The radius of the ring $R$ grows with the mass and velocity of an ion but decreases with larger ion charge. Assuming
the applied electric field of the order $\mathcal{E} \approx 10^5$ V/cm and the velocity of the particles of about
$0.1 c$ ($c$ is the speed of light) we obtain for the different ions the radii given in Table I. This Table demonstrates that, compared to the existing magnetic storage rings, the electrostatic rings are smaller by an order of magnitude.
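The scaling of the ring radius can be checked with the force balance $q\mathcal{E} = \gamma m v^2 / R$, i.e. $R = \gamma A m_N v^2 / ((Z-1) e \mathcal{E})$ for an H-like ion. The numbers below use the stated conditions ($\mathcal{E} \approx 10^5$ V/cm, $v \approx 0.1c$); the specific ion is an illustrative choice, not a row of Table I.

```python
import math

def ring_radius(A, ion_charge, beta, E_field_V_per_m):
    """Radius (m) from gamma*m*v^2/R = q*E for an ion of mass A*m_N."""
    m_N = 1.67262e-27          # nucleon mass, kg
    c = 2.99792458e8           # speed of light, m/s
    e = 1.602176634e-19        # elementary charge, C
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    v = beta * c
    return gamma * A * m_N * v**2 / (ion_charge * e * E_field_V_per_m)

# H-like Eu (A=151, ion charge q=62e) at beta=0.1 in a 10^5 V/cm field:
R_eu = ring_radius(A=151, ion_charge=62, beta=0.1, E_field_V_per_m=1e7)
```

The result, a few meters, is consistent with the statement that electrostatic rings are an order of magnitude smaller than their magnetic counterparts.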
\section{Polarization methods}
For the observation of the EDM effect in storage rings polarized particles are
necessary \cite{Bail78} - \cite{BondPR11}. Production, preservation and monitoring of polarized
H-like HCI beams in magnetic storage rings are discussed theoretically in \cite{BondPR11}. As a
production method a selective laser pumping of the excited hyperfine sublevel of a ground electronic
level of an ion was suggested in \cite{Proz03} (see also \cite{BondPR11}), where the $^{151}_{63}Eu^{62+}$
ion was considered as an example. The selective laser pumping method of polarization consists of the excitation of a hyperfine sublevel of
H-like HCI with a circularly polarized optical laser. This excitation leads to an inhomogeneous occupation
of the Zeeman substates of the excited hyperfine state, i.e. the excited hyperfine level becomes polarized. This happens during one laser pulse ($\approx 5 \cdot 10^{-8}$ s), i.e. after one revolution of the ion around the ring ($\approx 10^{-6}$ s). The laser beam is assumed to travel parallel to the ion beam in a certain ring area. Then the quantization axis is called longitudinal and the laser produces a longitudinal polarization.
In \cite{Klaft94}, \cite{Seel98} resonant laser excitation measurements of the hyperfine structure of H-like $^{207}_{82}$Pb and $^{209}_{83}$Bi ions were performed. It follows from the results of these measurements that during one pulse an equilibrium between the excited and the ground hyperfine levels is established. After the ions leave the polarization area 50$\%$ of them will be in an excited (polarized) hyperfine state and 50$\%$ will be in the ground (unpolarized) hyperfine state of the ground electronic state. After the excited hyperfine state decays (see Table II for the decay time) the laser should be switched on again. A question arises: will the ions lose their polarization in the process of decay? In \cite{BondPR11} it was proved that the ions will not lose the polarization in the particular case of transitions between the hyperfine levels $F \rightarrow F'$ when $F = F' + 1$. This case corresponds to all the transitions in Table II. In a more general way this problem is discussed in Appendix A. Calculations of the polarization dynamics show that initially unpolarized ions will acquire 100$\%$ polarization after some time period $t_{pol}$ (see Table II). We define the degree of polarization $\lambda_F$ as \cite{BondPR11}
\begin{equation}
\label{1}
\lambda_F = \frac{1}{F} \sum_{M_F} n_{FM_F} M_F,
\end{equation}
where $F$, $M_F$ are the quantum numbers for the total angular momentum and its projection for an ion and $n_{FM_F}$ are the occupation numbers for the Zeeman substates. It is assumed that
\begin{equation}
\label{2}
\sum_{M_F} n_{FM_F} = 1.
\end{equation}
%
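Eqs. (1)-(2) are straightforward to evaluate; the sketch below assumes integer $F$ so that $M_F$ runs over the integers $-F,\dots,F$:

```python
def polarization_degree(F, occupations):
    """lambda_F = (1/F) * sum over M_F of n_{F,M_F} * M_F.

    occupations lists n_{F,M_F} for M_F = -F, ..., +F (integer F assumed)
    and must sum to one, as required by Eq. (2).
    """
    Ms = range(-F, F + 1)
    assert abs(sum(occupations) - 1.0) < 1e-9
    return sum(n * M for n, M in zip(occupations, Ms)) / F

# A fully stretched state (all ions in M_F = +F) gives lambda_F = 1,
# while a uniform occupation gives lambda_F = 0.
lam_stretched = polarization_degree(1, [0.0, 0.0, 1.0])
lam_uniform = polarization_degree(1, [1/3, 1/3, 1/3])
```

The normalization by $F$ makes $\lambda_F = 1$ the maximum attainable degree of polarization for any $F$.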
From Table II one can see that the selective laser pumping method should work only for relatively heavy H-like ions with
$Z \geq 30$, otherwise the time to reach 100$\%$ polarization becomes too long and the necessary laser frequency lies outside the optical region. The preservation of the ion polarization during the many revolutions of the ion around the ring is a hard problem due to the existence of depolarization effects. Methods for the ion polarization preservation in the magnetic storage rings were discussed in \cite{BondPR11}. The corresponding methods for the electrostatic ring are essentially the same as for the preservation of the EDM effect and will be discussed below in section VI.
\section{Electron spin precession in an electric field}
Creation of a longitudinally polarized H-like HCI beam in an electrostatic storage ring presents the first step of the proposed experiment (see Fig.1). The main aim of the EDM experiment is the generation of a
vertical component of the polarization due to the existence of the electron EDM. This is the second step of the experiment. For simplicity we start with the consideration of HCI with spinless nuclei. Then the H-like HCI will be treated as a particle with the mass $m = m_N A$, charge $q = (Z - 1)e$ ($A$ is the mass number, $m_N$ is the nucleon mass, $Z$ is the nuclear charge number) and the magnetic moment equal to the magnetic moment of an electron $\vec{\mu} = \frac{2 \mu_B}{\hbar} \vec{s}$. Here $(-e)$ is the electron charge, $\vec{s}$ is the electron spin, $\hbar$ is the Planck constant, $\mu_B = \frac{e \hbar}{2m_ec}$ is the Bohr magneton and $m_e$ is the electron mass. If the particle possesses an EDM $\vec{d}$ directed along the particle spin, it undergoes a precession in an external electric field. This is the basic idea of all the EDM experiments in storage rings, magnetic or electrostatic. The EDM $\vec{d}$ and the spin of the particle $\vec{s}$ are connected via
\begin{equation}
\label{3}
\vec{d} = \eta \frac{q}{2mc} \vec{s},
\end{equation}
where $\eta$ is a dimensionless constant which has to be determined in the experiment. In the rest frame of the ion
\begin{equation}
\label{4}
\biggl( \frac{d\vec{s}}{dt} \biggr)_{\rm rest} = - \frac{\eta e}{2m_e c} \vec{s} \times [\vec{\mathcal{E}} + (\vec{\beta} \times \vec{\mathcal{H}}) -
\frac{\gamma}{\gamma + 1} \vec{\beta} (\vec{\beta} \cdot \vec{\mathcal{E}})] \equiv \vec{s} \times \vec{\Omega}_d ,
\end{equation}
where $\vec{\mathcal{E}}$, $\vec{\mathcal{H}}$ are the external electric and magnetic fields in the laboratory frame,
$\vec{\beta} = \vec{v}/c$, $\vec{v}$ is the particle velocity, $\gamma = 1/ \sqrt{1 - \beta^2}$. Spin precession occurs around the direction of the vector $\vec{\Omega}_d$, defined in Eq.(4).
In the nonrelativistic case $\vec{\Omega}_d$ coincides with $\vec{\mathcal{E}}$. Electron spin precession due to the electron magnetic moment is described by an equation (in the rest frame of an ion)
\begin{equation}
\label{5}
\biggl( \frac{d\vec{s}}{dt} \biggr)_{\rm rest} = - \frac{e}{ m_e c} \vec{s} \times (\vec{\mathcal{H}} + \vec{\mathcal{H}}_m),
\end{equation}
where $\vec{\mathcal{H}}_m$ is the motional magnetic field
\begin{equation}
\label{6}
\vec{\mathcal{H}}_m = \vec{\mathcal{H}}^{(1)}_m + \vec{\mathcal{H}}^{(2)}_m = - (\vec{\beta} \times \vec{\mathcal{E}}) - \frac{\gamma}{\gamma + 1} \vec{\beta} (\vec{\beta} \cdot \vec{\mathcal{H}}).
\end{equation}
This is actually the Bargmann-Michel-Telegdi (BMT) equation \cite{Barg59}, with the terms responsible for the Thomas precession neglected. These terms depend on the particle acceleration and in the case of H-like HCI are inversely proportional to the ion mass, i.e. they are negligible \cite{BondPR11}.
Equations \Br{4} and \Br{5} can be generalized to any H-like HCI with a nucleus possessing the nuclear spin $\vec{I}$ \cite{BondPR11}. Then in Eq.\Br{3} the vector $\vec{s}$ has to be replaced by the vector $\vec{F} = \vec{I} + \vec{J}$, where $\vec{J}$ is the total angular momentum of the electron. Precession of the longitudinally polarized electron spin $\vec{s}$ around the direction of the radial electric field due to the electron EDM leads to the generation of a vertical spin component (assuming that the ring is horizontal, with $x$ the radial, $y$ the longitudinal and $z$ the vertical direction). Due to the EDM the ion polarization vector will rotate by a small angle $\varphi$ around the $x$ axis. This rotation is repeated on every revolution of the ions around the ring, so the rotation angle $\varphi$ grows linearly with time (see Fig.2).
\section{The effect of electron EDM and the effect of P,T-violating interaction between atomic electron and nucleus }
The relation of the EDM of a neutral system (atom, molecule) to the EDMs of the charged particles (nuclei, electrons) incorporated in this system is governed by the Schiff theorem \cite{Schiff63}. According to this theorem, the total electric field at a charged particle inside a neutral system is zero due to the electrostatic equilibrium, so that the EDM effect is absent. However, the Schiff theorem is violated either by the strong interactions which determine the finite size of the nucleus, or by the magnetic interactions, i.e. relativistic corrections. As a result the nuclear EDM in atoms and molecules is suppressed compared to the bare nuclei, whereas the electron EDM in heavy atoms and molecules is, on the contrary, enhanced due to the strong relativistic effects \cite{Sand65},\cite{Flam76}. Following \cite{Flam76}, we perform similar estimates for the HCI.
The "primary" EDM of an atom or ion due to the electron EDM is
\begin{equation}
\label{7}
\vec{d_e}^{pr}=\langle 0 |\widehat{\vec{d_e}}^{pr}| 0 \rangle=d_e\langle 0 |\gamma_0 \vec{\Sigma}| 0 \rangle ,
\end{equation}
where $\vec{\Sigma}$ are the Dirac matrices and the average $\langle 0| \ldots| 0 \rangle$ corresponds to the electronic state under consideration. The relativistic correction to the Stark matrix element
\begin{equation}
\label{8}
S_{\rm EDM}=- \vec{d_e}^{pr} \vec{\mathcal{E}} ,
\end{equation}
where $\vec{\mathcal{E}}$ is an external electric field,
can be presented as \cite{Khrip97}, \cite{Khrip91}
\begin{equation}
\label{9}
\delta S_{\rm EDM}=-d_e\langle 0 |\left( \gamma_0-1\right)\vec{\Sigma}\vec{\mathcal{E}}| 0 \rangle
\end{equation}
and estimated as
\begin{equation}
\label{10}
|\delta S_{\rm EDM}|\approx d_e |\vec{\mathcal E} |\left(\alpha Z \right)^2 ,
\end{equation}
where $\gamma_0$ is the Dirac matrix and $\alpha$ is the fine structure constant.
Due to the factor $(\gamma_0-1)$ only the lower component of the Dirac wave function contributes, i.e. the result is fully determined by relativistic effects.
The electric dipole moment of an atom and accordingly the linear Stark matrix element
\begin{equation}
\label{11}
S_{at}=e\langle 0 |\vec{r}\vec{\mathcal E} | 0 \rangle
\end{equation}
are zero in the absence of the electron EDM. However, if the electron EDM is present, $S_{at}$ becomes nonzero due to the mixing of states with opposite parity by the interaction (in r.u.)
\begin{equation}
\label{12}
\widehat{\vec{d_e}}^{pr}\vec{\mathcal E}_c=\widehat{\vec{d_e}}^{pr}\vec{r} \frac{Ze}{r^3} ,
\end{equation}
where $\vec{\mathcal E}_c=\frac{Ze}{r^3}\vec{r}$ is the Coulomb field of the nucleus, $\vec{r}$ is the radius vector, $ r=|\vec{r}|$ and $\widehat{\vec{d_e}}^{pr}$ is defined in Eq.\Br{7}. Due to the interaction Eq.\Br{12} the atomic linear Stark effect becomes (we retain only the relativistic corrections which give a nonzero contribution to the EDM effect):
\begin{eqnarray}
\label{13}
\delta S_{at}&=&d_e e\vec{\mathcal E}\bigg[ \sum_n \frac{\langle 0 |\vec{r}|n \rangle\langle n |\left( \gamma_0-1\right)\vec{\Sigma}\vec{\mathcal E}_c |0\rangle }{E_n-E_0}+
\nonumber
\\
&&
+\sum_n \frac{\langle 0 |\left( \gamma_0-1\right)\vec{\Sigma}\vec{\mathcal E}_c |n\rangle\langle n |\vec{r}|0 \rangle }{E_n-E_0} \bigg] ,
\end{eqnarray}
where $|n \rangle$, $E_n$ are the wave functions and energies of the atomic Dirac states with the parity opposite to the state $|0 \rangle$. In what follows we choose the state $|0 \rangle$ as the ground state $1s$ of the H-like ion. The sum over $n$ (only $p_{1/2}$ states are contributing) was evaluated exactly including the discrete and continuous Dirac spectra within the B-spline approach \cite{John88},\cite{Shab04}. For practical evaluations we consider the nucleus as a homogeneously charged sphere. The nuclear radii are taken from \cite{Ang04}. Using the order-of-magnitude estimates (in r.u.): $r\approx\frac{1}{m_e\alpha Z}$, $e|\vec{\mathcal{E}}_c|\approx m_e^2\left(Z\alpha\right)^3$, $(E_n-E_0)\approx m_e\left(\alpha Z\right)^2$, we obtain from Eq.\Br{13}
\begin{equation}
\label{15}
|\delta S_{at}|\approx d_e |\vec{\mathcal{E}}|\left(\alpha Z\right)^2 ,
\end{equation}
i.e. a correction of the same order as in Eq.\Br{10}. Due to the Schiff theorem the nonrelativistic contributions to the total Stark shift, Eq.\Br{8} and Eq.\Br{11}, cancel out:
\begin{equation}
\label{16}
S_{EDM}+S_{at}=0 .
\end{equation}
Still the relativistic corrections Eq.\Br{9} and Eq.\Br{13} remain. The coefficient $\mathcal{K}_d$ for the effective electron EDM in an atom or ion can be defined as
\begin{equation}
\label{17}
\mathcal{K}_d=\Bigg|\frac{\delta S_{EDM}+\delta S_{at}}{d_e |\vec{\mathcal{E}}|}\Bigg|.
\end{equation}
It follows that for the case $|0\rangle=|1s\rangle$, $|n\rangle=|np\rangle$ the coefficient is $\mathcal{K}_d\approx\left(\alpha Z\right)^2$, i.e. there is no enhancement. For the alternative choice $|0\rangle=|2s\rangle$, $|n\rangle=|np\rangle$, using the estimate for the Lamb shift $(E_{np}-E_{2s})\approx m_e\alpha\left(\alpha Z\right)^4$, we would obtain a strong enhancement, $\mathcal{K}_d\approx \frac{1}{\alpha\left(\alpha Z\right)^2}$. However, an EDM experiment based on the excited $|2s\rangle$ state of the HCI appears unrealistic, since the lifetime of the $|2s\rangle$ level in ions with high $Z$ is too short to perform the EDM experiment (see Section V). Repeated excitation of the ions to the $|2s\rangle$ state would lead to the loss of polarization, which also makes such an EDM experiment impossible. The coefficients $\mathcal{K}_d$ for different ions are given in Table III.
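The scaling $\mathcal{K}_d\approx(\alpha Z)^2$ is easy to check numerically. The snippet below is only a rough sketch of this scaling; the values in Table III require the full Dirac B-spline calculation and differ from it.

```python
# Crude estimate of the effective-EDM coefficient K_d ~ (alpha*Z)^2 for the
# 1s state of an H-like ion. The exact Table III values come from a full
# Dirac B-spline evaluation and differ from this scaling.
ALPHA = 1.0 / 137.036  # fine structure constant

def k_d_estimate(Z):
    return (ALPHA * Z) ** 2

for Z in (30, 63, 90):
    print(Z, k_d_estimate(Z))
```

In particular, the scaling alone would predict a ratio of exactly $(90/30)^2 = 9$ between the Th and Zn coefficients.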
There is an important advantage of the EDM experiment with different H-like HCI in storage rings compared to the linear Stark EDM measurements performed with certain neutral atoms and molecules. Collecting and comparing the results of the EDM experiments for H-like ions with different $Z$ values, one can distinguish the electron EDM effect from the effect caused by the P- and T-violating interaction $V_{P,T}$ of the electron with the nucleus. This is not possible in experiments with neutral atoms or molecules. The identity of the consequences of both effects for any particular atom or ion was demonstrated in \cite{Gorsh79}. However, the different dependence of these effects on $Z$ gives an opportunity to distinguish them, provided that there is sufficient experimental data for ions with different $Z$ values. The scalar P,T-violating interaction $V_{P,T}$ has the form \cite{Gorsh79},\cite{Khrip97},\cite{Khrip91}
\begin{equation}
\label{18}
V_{ P,T}=Q_{P,T}g_{P,T}i\gamma_0 \gamma_5 \delta(\vec{r}),
\end{equation}
where $\gamma_0, \gamma_5$ are Dirac matrices, $g_{P,T}$ is the interaction constant and $Q_{P,T}$ is the "P,T-odd charge" of the nucleus. In the nonrelativistic limit the operator $V_{P,T}$ takes the form (in r.u.)
\begin{equation}
\label{18a}
V_{ P,T}=Q_{P,T}g_{P,T}i \frac{1}{2m_e}[\vec{\sigma} \hat{\vec{p}},\delta(\vec{r})],
\end{equation}
where $\vec{\sigma}$ are the Pauli matrices, $\hat{\vec{p}}$ is the momentum operator and
$[\,\, , \,\,]$ denotes the commutator.
The Stark shift caused by the interaction Eq.\Br{18} reads
\begin{eqnarray}
\label{19}
S_{P,T}&=&e\vec{\mathcal E}\bigg[ \sum_n \frac{\langle 0 |\vec{r}|n \rangle\langle n | V_{P,T}|0\rangle }{E_n-E_0}+
\nonumber
\\
&& +\sum_n \frac{\langle 0 |V_{P,T} |n\rangle\langle n |\vec{r}|0 \rangle }{E_n-E_0} \bigg].
\end{eqnarray}
This expression is derived in the same way as Eq.\Br{13}.
The estimate for the electron momentum in an ion is $p\approx m_e\alpha Z$. For practical purposes we replace $\delta(\vec{r})$ in Eqs.\Br{18},\Br{19} by the nuclear density distribution $\rho_N(r)$. The expectation value $\langle\rho_N\rangle$ can be estimated as \cite{Khrip91} $\langle\rho_N\rangle\approx |\psi_{1s}(0)|^2R\approx\left(m_e \alpha Z\right)^3R$, where $\psi_{1s}(0)$ is the nonrelativistic Schr\"{o}dinger wave function at the origin and $R$ is the relativistic enhancement factor. Then we obtain
\begin{equation}
\label{21}
S_{P,T}\approx Q_{P,T} m_e g_{P,T}e |\vec{\mathcal{E}}| \left(\alpha Z\right)R.
\end{equation}
Here $R\approx \left(\frac{2 Z R_N}{a_0}\right)^{-\left(\alpha Z\right)^2}$ \cite{Khrip91}, $R_N$ is the nuclear radius and $a_0$ is the Bohr radius.
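As a numeric illustration of the enhancement factor $R$, a minimal sketch is given below. The nuclear radii used here are representative values of a few fm that we assume for illustration, not the tabulated radii of Ref.~\cite{Ang04} used in the actual evaluation.

```python
import math

ALPHA = 1.0 / 137.036   # fine structure constant
A0 = 0.529177e-10       # Bohr radius in meters

def rel_enhancement(Z, R_N):
    """R ~ (2*Z*R_N/a0)^(-(alpha*Z)^2); R_N is the nuclear radius in meters."""
    return (2.0 * Z * R_N / A0) ** (-(ALPHA * Z) ** 2)

# Illustrative (assumed) nuclear radii of a few fm:
print(rel_enhancement(30, 4.0e-15))   # ~1.3 for Z = 30
print(rel_enhancement(90, 5.8e-15))   # ~5.4 for Z = 90
```

The factor grows quickly with $Z$, from an $\mathcal{O}(1)$ correction for medium-$Z$ ions to a factor of several for the heaviest ones.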
Similar to Eq.\Br{17} we can define the coefficient $\mathcal{K}_{P,T}$ as
\begin{equation}
\label{21a}
\mathcal{K}_{P,T}=\Bigg|\frac{S_{P,T}}{m_e g_{P,T}e |\vec{\mathcal{E}}|}\Bigg|.
\end{equation}
Coefficients $\mathcal{K}_{P,T}(Z)$ for different H-like ions are also given in Table III.
To compare the $Z$-dependence of the EDM effect and of the P,T-odd interaction effect, we present the EDM effect in the form
\begin{equation}
\label{23}
\xi_d=a_d \mathcal{K}_d\left(Z\right) |\vec{\mathcal{E}}|,
\end{equation}
where $\xi_d$ can be understood as the linear Stark shift or rotation angle in the external electric field $\vec{\mathcal{E}}$, $a_d$ is a numerical factor and $\mathcal{K}_d$ is a $Z$-dependent expression that can be exactly evaluated with Dirac wave functions. A similar expression for the P,T-odd interaction reads
\begin{equation}
\label{24}
\xi_{P,T}=a_{P,T} \mathcal{K}_{P,T}\left(Z\right) |\vec{\mathcal{E}}|.
\end{equation}
Then we can consider the ratios
\begin{equation}
\label{25}
R_d=\frac{\xi_d(Z=Z_1)}{\xi_d(Z=Z_2)}=\frac{\mathcal{K}_d(Z_1)}{\mathcal{K}_d(Z_2)},
\end{equation}
\begin{equation}
\label{26}
R_{P,T}=\frac{\xi_{P,T}(Z=Z_1)}{\xi_{P,T}(Z=Z_2)}=\frac{\mathcal{K}_{P,T}(Z_1)}{\mathcal{K}_{P,T}(Z_2)}.
\end{equation}
For example, making the simplest assumption $Q_{P,T}=A$ (where $A=Z+N$ is the mass number and $N$ is the number of neutrons) and comparing the $R$ values for $^{67}$Zn$^{29+}$ and $^{229}$Th$^{89+}$, we have
\begin{equation}
\label{27}
R_d=\frac{\xi_d(Z=90)}{\xi_d(Z=30)}=13.3,
\end{equation}
\begin{equation}
\label{28}
R_{P,T}=\frac{\xi_{P,T}(Z=90)}{\xi_{P,T}(Z=30)}=63.7.
\end{equation}
So the difference can be rather substantial. The influence of the radiative corrections on the ratios \Br{27},\Br{28} should be negligible. Comparing the experimental $R$ value with the theoretical $R_d$ and $R_{P,T}$ values, it should be possible, in principle, to distinguish between the electron EDM effect and the P,T-odd interaction effect.
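A rough numerical sketch shows how these ratios arise from the scaling laws behind Eqs.~\Br{17} and \Br{21a}. The quoted values 13.3 and 63.7 come from the exact Dirac evaluation; the crude scaling below (with assumed, illustrative nuclear radii) reproduces only the order of magnitude and, most importantly, the strong difference between the two ratios.

```python
import math

ALPHA = 1.0 / 137.036  # fine structure constant
A0 = 0.529177e-10      # Bohr radius, m

def k_d(Z):
    # scaling K_d ~ (alpha*Z)^2
    return (ALPHA * Z) ** 2

def k_pt(Z, A, R_N):
    # scaling K_{P,T} ~ Q_{P,T} * (alpha*Z) * R with Q_{P,T} = A;
    # overall constants drop out of the ratios
    return A * (ALPHA * Z) * (2.0 * Z * R_N / A0) ** (-(ALPHA * Z) ** 2)

R_d = k_d(90) / k_d(30)
R_pt = k_pt(90, 229, 5.8e-15) / k_pt(30, 67, 4.0e-15)
print(R_d, R_pt)   # scaling gives ~9 and ~43; exact Dirac values: 13.3 and 63.7
```

The key qualitative point survives the crudeness of the estimate: $R_{P,T}$ is several times larger than $R_d$, which is what makes the two effects distinguishable.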
\section{Observation of the EDM effect}
In this paper we suggest the following scenario for the observation of the electron EDM. We suppose that the ions move in the ring for a time interval sufficient for the EDM rotation angle to grow substantially.
The angle $\varphi_{\rm EDM}$ after the observation time $t_{obs}$ becomes
\begin{equation}
\label{nonumber}
\varphi_{\rm EDM} = q |\vec{\omega}_d|\mathcal{K}_d t_{obs},
\end{equation}
where $|\vec{\omega}_d|$ is the frequency of the precession of the ion polarization around the vector $\vec{\Omega}_d$ (see Eq.\Br{4}). This frequency can be estimated as \cite{Farley}
\begin{equation}
\label{29}
|\vec{\omega}_d| \approx e \eta |\vec{\mathcal{E}}|/2m_ec.
\end{equation}
The coefficient $q$ denotes the fraction of the ring where the ions move in the electric field. It is reasonable to assume $q\approx0.5$.
For an electric field of about $|\vec{\mathcal{E}}| \approx 10^5$ V/cm, Eq.\Br{29} gives $|\vec{\omega}_d| = \eta\cdot 3\cdot10^{9} \, {\rm s}^{-1}$. For the
observation of the electron EDM at the level $10^{-29}\,e\,$cm we should use
the value $\eta \approx 10^{-18}$ in our estimates. Then we can estimate the $t_{obs}$ necessary to make
the rotation angle of the order of $10^{-4}\pi$, which seems to be a measurable value:
\begin{equation}
\label{30}
t_{obs} \approx 10^{-4} \pi(3\cdot 10^{9}q \mathcal{K}_d \eta )^{-1} \, s.
\end{equation}
The results obtained with formula (\ref{30}) are listed in Table III. After the EDM rotation angle has grown sufficiently, the third step of the experiment, the measurement of the effect, should start. The polarization laser should be switched on again. For the ions with $Z\geq30$ the time $t_{obs}$ necessary for the observation of the EDM is larger than the 100\% polarization time $t_{pol}$ and consequently much larger than the decay time $t_{dec}$. Then, at the start of the third step of the experiment all ions will be in their ground state. For the observation of the EDM effect it will be necessary to excite the ions back to the excited hyperfine sublevel. This can be done again with the same circularly polarized optical laser, and this excitation will not destroy the existing polarization.
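For orientation, formula (\ref{30}) can be evaluated with the crude estimate $\mathcal{K}_d\approx(\alpha Z)^2$. This is only a sketch: the exact Table III coefficients are larger and yield the shorter observation times quoted in the text.

```python
import math

ALPHA = 1.0 / 137.036  # fine structure constant

def t_obs(K_d, eta=1e-18, q=0.5):
    """Observation time (s) for the EDM angle to reach 1e-4*pi, Eq. (30)."""
    return 1e-4 * math.pi / (3e9 * q * K_d * eta)

t = t_obs((ALPHA * 63) ** 2)       # crude K_d ~ (alpha*Z)^2 for Z = 63
print(t, "s ~", t / 3600.0, "h")   # a few hundred hours with this crude K_d
```

The observation time shortens in inverse proportion to $\mathcal{K}_d$, which is why the exact (larger) Dirac coefficients bring $t_{obs}$ down to the $\sim 100$ h scale.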
As we have already mentioned, the decay of the excited hyperfine level with the total angular momentum $F$ to the ground hyperfine level $F'$ will not destroy the ion polarization if $F>F'$ (see Appendix A). It remains to prove that the excitation process $F'\rightarrow F$ also does not destroy the ion polarization. Consider for example the case $F'=2$, $F=3$. Then for 100\% ion polarization only the Zeeman substate with $M_{F'}=2$ will be occupied. A circularly polarized laser will be able to populate only the $M_F=3$ Zeeman substate of the excited hyperfine level, i.e. 100\% polarization will be preserved.
We suggest to apply the same idea as for the HCI in a magnetic storage ring in \cite{BondPR11}: it will be necessary to measure the asymmetry in the number of decay photons with fixed circular polarization with respect to the vertical polarization component of the ions. The expression for the transition probability of the decay process when the circular polarization of the emitted photons is fixed reads \cite{BondPR11}
\begin{equation}
\label{31}
dW = \frac{W_0}{4 \pi} [1 \pm \xi_{\rm EDM} Q (\vec{\zeta} \vec{\nu})],
\end{equation}
where $W_0$ is the total transition rate, $\vec{\zeta}$ is the unit vector of the ion beam polarization, $\vec{\nu}$ is the unit vector of the direction of the photon emission, $\pm$ correspond to the right (left) circular polarization of the emitted photons, $\xi_{\rm EDM}$ defines the magnitude of the EDM effect and $Q$ is a factor of order $1$, specific for the particular transition in an ion. The value of $\xi_{\rm EDM}$ can be presented as
\begin{equation}
\label{32}
\xi_{\rm EDM} = \lambda_F F \sin \varphi_{\rm EDM},
\end{equation}
where $F$ is the total angular momentum of the excited hyperfine sublevel of the ground electronic level of the ion, $\lambda_F$ is the degree of the ion polarization and $\varphi_{\rm EDM}$ is the EDM rotation angle in the $yz$ plane.
It should be convenient to locate the photon detectors above and below the ring and to measure the asymmetry in the number of photons detected in the upper and lower hemispheres. The asymmetry $R$ equals
\begin{equation}
\label{33}
R = 2 \xi_{\rm EDM} Q = 2 Q \lambda_F F \sin \varphi_{\rm EDM}.
\end{equation}
Knowing $R$ from the experimental data and using Eqs.\Br{32}, \Br{33} and \Br{29}, one can determine the value of $\eta$ (i.e. the electron EDM). In particular, with $Q\left(1s_{1/2}\, F=3 \rightarrow 1s_{1/2}\, F=2\right)=\frac{1}{2}$ \cite{BondPR11}, $\varphi_{\mathrm{EDM}} \approx 10^{-4}\pi$ (see above), $\lambda_F = 1$ (since the observation time is much greater than the polarization time, the ions will move in the ring fully polarized for most of the time) and the $F$ values from Table II, the $R$ values should be of the order $R \approx 10^{-3}$. For the observation of an effect of the order $10^{-3}$ above the level of fluctuations it is necessary to register at least $10^6$ "events", i.e. HCI decays from the excited hyperfine $1s_{1/2}F$ level. For example, for $Z=63$ the decay time is $t_{dec}=1.1\cdot 10^{-2}$ s. Since this time is much smaller than the observation time of $\approx 100$ hours for $Z=63$ ions, the necessary statistics for the detection of the EDM effect in the electrostatic ring with $10^7$ ions in the beam can be collected within 1 second, even with a detector efficiency of $10^{-3}$. For the registration of a smaller asymmetry $R\approx 10^{-5}$, the observation time $t_{obs}$ for $Z=63$ ions goes down to $1$ hour. The registration time $t_{reg}$ then grows to $\approx 10^4$ s, i.e. becomes approximately the same as $t_{obs}$.
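The quoted orders of magnitude follow directly from Eq.~\Br{33}. A minimal sketch using the values stated in the text:

```python
import math

def asymmetry(Q, lam, F, phi):
    """Up-down photon asymmetry R = 2*Q*lam*F*sin(phi), Eq. (33)."""
    return 2.0 * Q * lam * F * math.sin(phi)

R = asymmetry(Q=0.5, lam=1.0, F=3, phi=1e-4 * math.pi)
n_events = 1.0 / R ** 2   # counts needed for R to exceed the shot-noise level
print(R, n_events)        # ~1e-3 and ~1e6
```

This reproduces both estimates of the text: an asymmetry of the order $10^{-3}$ and the $\sim 10^6$ registered decays needed to see it above statistical fluctuations.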
\section{Removal of the background effects}
The main advantage of the electrostatic ring compared to the magnetic one in the context of the electron EDM
experiments with H-like HCI is the reduction of the background effects. First, in the magnetic storage
ring the strong vertical bending magnets produce a fast electron spin rotation in the horizontal plane.
This leads to an averaging of the EDM effect to zero. Second, the focusing magnets, which create a
magnetic field with a radial component, produce a false EDM effect. To overcome both difficulties a complicated
experimental scheme containing "Siberian snakes" was proposed in \cite{BondPR11}.
However, in the rest frame of the ions a motional magnetic field arises in the electric field regions of the electrostatic ring. For the radial electric field the motional field has a vertical component (see Eq.\Br{6}), i.e. it causes the same problems as the bending magnets in the magnetic
rings. In an electric field of $10^5$ V/cm and for an ion velocity of 0.1 c the motional field will be about $0.3\cdot 10^{-2}$ T (30 gs), i.e. essentially weaker than the magnetic fields of the bending magnets in the magnetic storage rings (1 T). While passing the deflection area of a few meters length with velocity 0.1 c, the electron spin in the H-like ion will rotate about $10$ times in the horizontal ($xy$) plane, i.e. the rotation angle will be about $20 \, \pi$. For the purpose of the EDM experiment we have to keep the ion polarization longitudinal, i.e. the deviation from the direction along the $y$ axis should not exceed $\pi/2$. We assume that the vertical component of the motional magnetic field $\vec{\mathcal{H}}_m^{(1)}$ can be compensated by a real vertical magnetic field, i.e. to each pair of electrodes a vertical magnet can be attached that compensates the influence of the motional field on the ion polarization. The insertion of such magnets in the electrostatic ring will partly diminish the strength of the electric
field acting on the moving ions, but this effect can be taken into account in the construction of the ring.
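The quoted magnitude of the motional field follows from the first term of Eq.~\Br{6}; in SI units it is $B \approx \beta\mathcal{E}/c$:

```python
C = 2.998e8           # speed of light, m/s
E = 1e5 * 1e2         # 1e5 V/cm converted to V/m
beta = 0.1            # ion velocity in units of c
B = beta * E / C      # motional field in tesla, first term of Eq. (6)
print(B, "T =", B * 1e4, "G")   # ~3e-3 T, i.e. ~30 gauss
```

This confirms the $\sim 30$ gs scale and its smallness compared to the $\sim 1$ T bending fields of magnetic rings.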
As follows from Eq.\Br{6}, the motional field $\vec{\mathcal{H}}_m$ has two components: the vertical one $\vec{\mathcal{H}}^{(1)}_m$, given by the first term on the right-hand side of Eq.\Br{6}, and the longitudinal one $\vec{\mathcal{H}}^{(2)}_m$, given by the second term. Since the compensating magnetic field is orthogonal to the ion velocity, the second term on the right-hand side of Eq.\Br{6} will be effectively suppressed. This term also contains the small parameter $\beta^2$.
To keep the ion polarization longitudinal after one revolution of the ions around the ring, the compensating magnetic field $\mathcal{H}_{c}$ should be fixed with an accuracy defined by the condition
\begin{equation}
\label{34}
\frac{\delta \mathcal{H}}{\mathcal{H}} = \frac{\mathcal{H}_c - \mathcal{H}}{\mathcal{H}} < 10^{-4}.
\end{equation}
For $\mathcal{H}_m \approx 30$ gs this gives $\delta \mathcal{H} \approx 3\cdot 10^{-3}$ gs, which should be realistic. Possible systematic effects which grow linearly with the number of revolutions could be suppressed by reversing the direction of the ion velocity and simultaneously changing the ion polarization from longitudinal to anti-longitudinal, as shown in Fig.3. The direction of the compensating magnetic field should also be reversed. If such a change is performed after every revolution, the background effect from the vertical component of $\vec{\mathcal{H}}_m$ will be fully canceled but the EDM effect will survive: the vector $\vec{v}$ changes sign, but $\vec{s}$ remains the same in Eqs.\Br{4}-\Br{6}. This would require a more complicated construction of the ring, with switches that periodically change the route of the bunched ion beam around the ring. The switches can be arranged by turning off a few electrodes or turning on a few additional ones. This will change the curvature of the trajectory of the ions and hence change the route. The direction of polarization can be changed from longitudinal to anti-longitudinal and back with a spin rotator, a vertical magnet which rotates the electron spin in the horizontal plane by an angle $\pi$. If the spin rotators are located outside the electrodes, this operation will not do any harm to the EDM rotation angle.
The use of the cross-routes should strongly suppress the systematic background effects. The cancellation of the residual magnetic field (which remains after the compensation of the motional field by a real one) after two subsequent revolutions can be incomplete only due to the change of the ion velocity from one revolution to another. But this change $\frac{\delta v}{v}$ is due to fluctuations, i.e. it grows not linearly with the number of revolutions $N_{rev}$, but proportionally to $\sqrt{N_{rev}}$. Assuming that the field control in the ring can be as accurate as $10^{-4}$, we can expect that after the compensation of the motional magnetic field by the real one the residual magnetic field will be of the order $3\cdot 10^{-3}$ gs. Then, after the cancellation of this residual field due to the cross-route scheme of the experiment, the systematic background rotation angle will be of the order $3\cdot 10^{-3}\frac{\delta v}{v}\pi$ per revolution (a magnetic field of 1 gs rotates the electron spin by an angle $\pi$ over a length interval of a few meters). Assuming that the velocity control is also of the order $\frac{\delta v}{v}\approx10^{-4}$, we obtain a systematic "parasite" magnetic rotation $\delta \varphi_{syst}^{1rev}\approx 3\cdot 10^{-7}\pi$ per revolution. Then, with $t_{obs}\approx 10^5$ s ($Z=63$), i.e. for $10^{11}$ revolutions of the ions around the ring, the total systematic background rotation angle will rise to $\delta \varphi_{syst}^{tot}\approx 3\cdot 10^{-7}\cdot \sqrt{10^{11}}\pi\approx 0.1\pi$. This is acceptable, since the vertical background magnetic field will not average out the EDM effect.
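The random-walk accumulation of the residual rotation is a one-line check:

```python
import math

dphi_per_rev = 3e-7 * math.pi   # residual "parasite" rotation per revolution
N_rev = 1e11                    # revolutions during t_obs ~ 1e5 s (Z = 63)
# The residual is fluctuation-driven, so it grows as sqrt(N_rev), not linearly:
dphi_total = dphi_per_rev * math.sqrt(N_rev)
print(dphi_total / math.pi, "pi")   # ~0.1 pi
```

Had the residual grown linearly with $N_{rev}$, the accumulated angle would be $\sim 3\cdot 10^4\,\pi$ and would completely wash out the EDM signal; the $\sqrt{N_{rev}}$ behavior is what makes the scheme viable.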
\section{Conclusions}
The feasibility of the proposed experiment depends mainly on the suppression of the systematic errors, which might be a difficult task. Still, due to the relatively small size of the electrostatic ring, it is possible to place it inside the cooler and the vacuum chamber and to achieve full magnetic shielding. This could help to reach the required overall accuracy of about $10^{-4}$ for the field and ion velocity control.
A possible disadvantage of the electrostatic storage rings compared to the magnetic ones is the smaller ion current: maximum $10^7$ stored ions in the ring ELISA \cite{Moll97} compared to $10^{10}$ ions in the ESR (GSI, Darmstadt). However, the number of ions (which determines the statistics of the experiment) is not so important for the proposed EDM experiment, since once the EDM rotation angle reaches its final value $\varphi_{\rm EDM} \approx 10^{-4}\pi$, the measurement of the photon emission asymmetry of the order $10^{-3}$ should not represent a serious difficulty. The smallness of the EDM effect in this experiment is reflected by the relatively large observation time $t_{obs}$ necessary to reach the rotation angle $\varphi_{\rm EDM} \approx 10^{-4}\pi$.
Another disadvantage of the electrostatic rings compared to the magnetic storage rings is the problem of focusing. Due to gravity, the ions in the electrostatic ring will drop down substantially within a few seconds. In $10^{-2}$ s an ion beam with $Z\approx 60$ will be shifted down by about $1$ mm. To keep the beam at a certain height it would be necessary to switch on a "supporting" vertical electric field outside the deflection areas after some time period, for example after every $10^{-2}$ s, i.e. after $10^4$ revolutions of the ions around the ring. This supporting field can be rather weak: for a non-deflecting area length of about 1 m this field should be about 0.1 V/cm.
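The quoted vertical drift is simply free fall between the "supporting field" corrections; a quick check gives the millimeter scale:

```python
g = 9.81                  # gravitational acceleration, m/s^2
t = 1e-2                  # time between "supporting field" corrections, s
drop = 0.5 * g * t ** 2   # free-fall displacement of the beam
print(drop * 1e3, "mm")   # a fraction of a millimeter, i.e. the quoted mm scale
```

Since the drop grows quadratically with the interval between corrections, applying the supporting field every $10^{-2}$ s keeps the displacement at the sub-millimeter level.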
One more difficulty with the electrostatic rings is the relatively short lifetime of the beam in these rings. While in the magnetic ring ESR the beam can be stored for at least 10 hours, the lifetime of the beam in an electrostatic ring is much shorter: 30 minutes in the ring DESIREE in Stockholm \cite{Thom11}. One of the reasons for this short lifetime is the energy loss due to the radiation of a charged particle moving along a bent trajectory, i.e. electric bremsstrahlung in our case. This problem can be solved by segments with a longitudinal (accelerating) electric field in the ring. The longitudinal electric field should not produce any dangerous motional magnetic field, provided that the field and velocity control conditions at the level $10^{-4}$ are satisfied.
\textbf{Acknowledgements}
A.B., O. A., E.M., L.L., G.P. and D.L. gratefully acknowledge the support by the DFG (grant SU 658/2-1).
A.B., O.A., E.M. and L.L. acknowledge the partial support by the RFBR grants 11-02-00168-a and 14-02-00188-a, RFBR-DFG grant (RFBR 12-02-91340) and by the Ministry of Education and Science of Russian Federation, project 8420. The
work of A.B. and E.M. was supported by the FAIR-Russia Research Center Fellowship and by the nonprofit
foundation "Dynasty" (Moscow).
G.P. acknowledges the support from GSI Helmholtzzentrum f\"{u}r
Schwerionenforschung GmbH.
\setcounter{equation}{0}
\renewcommand{\theequation}%
{A.\arabic{equation}}
\section*{Appendix A: Conservation and loss of polarization in atomic transitions}
Here we investigate the polarization behavior in atomic decay transitions $F \rightarrow F'$, where $F$ and $F'$ are the total angular momenta of the initial and final atomic states. For the final state $F'$ the degree of polarization defined by Eq.\Br{1} in the text reads
\begin{equation}
\label{b1}
\lambda_{F}' = \frac{1}{F'} \sum_{M_F'} M_F' n_{F'M_F'},
\end{equation}
where the occupation numbers can be presented via the transition probabilities $W_{FM_F \rightarrow F'M_F'}$ in the following way \cite{BondPR11}
\begin{equation}
\label{b2}
n_{F'M_F'} = \sum_{M_F} n_{FM_F} \frac{W_{FM_F \rightarrow F'M_F'}}{\Gamma_{FM_F}},
\end{equation}
where $n_{FM_F}$ are the occupation numbers of the initial level and $\Gamma_{FM_F} = \sum_{M_F'} W_{FM_F \rightarrow F'M_F'}$ is the total width of the initial level $FM_F$.
Employing the Wigner-Eckart theorem for the matrix elements of transition probabilities
$$W_{FM_F \rightarrow F'M_F'} =$$
\begin{equation}
\label{b3}
= \sum_{L M_L} C^{FM_F}_{F'M_F' \, LM_L} C^{FM_F}_{F'M_F' \, LM_L} |\langle F \| \textbf{V}_{L} \| F' \rangle|^2,
\end{equation}
where $\langle F \| \textbf{V}_{L} \| F' \rangle$ denotes the reduced matrix element of the photon-electron interaction operator $\textbf{V}_{L,M_{L}}$, with $L, M_L$ being the emitted photon angular momentum and its projection, we obtain the degree of polarization of the final state as
$$\lambda_{F}' = \frac{1}{F'} \sum_{M_F} n_{FM_F} \times $$
\begin{equation}
\label{b4}
\times \sum_{M_F'LM_L} M_F' \frac{C^{FM_F}_{F'M_F' \, LM_L} C^{FM_F}_{F'M_F' \, LM_L} |\langle F \| \textbf{V}_{L} \| F' \rangle|^2}{\Gamma_{FM_F}}.
\end{equation}
Replacing the factor $M_F'$ in Eq.(A.4) by $M_F' = M_F - M_L$, we can perform the summation over the projection $M_F'$. For this purpose we rewrite the Clebsch-Gordan coefficients in terms of 3j-symbols and use the formula for the summation of two 3j-symbols over one angular momentum projection \cite{Varsh88}
$$
\sum_{M_F'} (-1)^{F' - M_F'} \left (
\begin{array}{ccc}
F & L & F' \\
M_F & \overline{M}_L & \overline{M}_F'
\end{array}
\right ) \left (
\begin{array}{ccc}
F' & L & F \\
M_F' & M_L & \overline{M}_F
\end{array}
\right ) =
$$
\begin{equation}
\label{b4a}
= (-1)^{2F} \cdot \sum_{x} (-1)^{x} \Pi^2_x \left (
\begin{array}{ccc}
F & F & x \\
M_F & \overline{M}_F & 0
\end{array}
\right ) \left (
\begin{array}{ccc}
x & L & L \\
0 & M_L & \overline{M}_L
\end{array}
\right ) \times
\end{equation}
$$\times \left \{
\begin{array}{ccc}
L & L & x \\
F & F & F'
\end{array}
\right \} ,$$
where $\overline{m}_j=-m_j$, $\Pi_{a,b, \ldots, c} = \sqrt{(2a + 1)(2b + 1)\ldots(2c + 1)}$ and the standard notation for the 6j-symbol is employed.
If only the dipole transition is considered ($L = 1$), the 6j-symbol properties restrict $x$ to the values $x = 0, 1, 2$.
It remains to perform the summation over $M_L$ and $x$ in the two terms with the factors $M_F$ and $M_L$.
In the first term only one 3j-symbol depends on $M_L$ and we can perform the summation of one 3j-symbol over one momentum projection \cite{Varsh88}:
\begin{equation}
\label{b4b}
\sum_{M_L} (-1)^{L - M_L} \left (
\begin{array}{ccc}
L & L & x \\
M_L & \overline{M}_L & 0
\end{array}
\right ) = \Pi_L \cdot \delta_{x 0}.
\end{equation}
Here $\delta_{xy}$ is the Kronecker delta.
Concerning the second term with the factor $M_L$, let us consider the 3j-symbol for different values of the parameter $x$.
Here it is convenient to return to the Clebsch-Gordan coefficients. For $x = 0$ the Clebsch-Gordan coefficient does not depend on the projection $M_L$:
\begin{equation}
\label{b4c}
C^{L \, M_L}_{L\, M_L \, \, 00} = 1.
\end{equation}
For $x = 2$ the dependence is quadratic in $M_L$:
\begin{equation}
\label{b4d}
C^{L \, M_L}_{L\, M_L \, \, 20} = \frac{3M_L^2 - L(L + 1)}{[(2L - 1)L(L + 1)(2L + 3)]^{1/2}}
\end{equation}
and only $x = 1$ gives the linear dependence on $M_L$:
\begin{equation}
\label{b4e}
C^{L \, M_L}_{L\, M_L \, \, 10} = \frac{M_L}{\sqrt{L(L + 1)}}.
\end{equation}
Due to the subsequent multiplication by $M_L$ and summation over $M_L$, only the linear term survives.
This term ($x = 1$) gives a linear dependence on $M_F$, so after collecting both terms (with the factors $M_F$ and $M_L$) the degree of polarization of the final state can be presented in the following form:
\begin{equation}
\label{b5}
\lambda_{F}' = \lambda_{F} \cdot N(F, F'),
\end{equation}
where
\begin{equation}
\label{b6}
N(F, F') = \frac{F(F +1) + F'(F' + 1) - 2}{2F'(F + 1)},
\end{equation}
and $\lambda_{F}$ is the degree of polarization of the initial state.
In particular, choosing the momentum of the final state as $F' = F - 1$ we have $N(F, F') = 1$, which means conservation of the degree of polarization. For $F' = F$ and $F' = F + 1$ one obtains $N(F, F') < 1$, which corresponds to a loss of polarization.
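The limiting cases of the polarization transfer factor $N(F, F')$ are easy to verify numerically:

```python
from fractions import Fraction

def N(F, Fp):
    """Polarization transfer factor N(F, F') from the appendix."""
    F, Fp = Fraction(F), Fraction(Fp)
    return (F * (F + 1) + Fp * (Fp + 1) - 2) / (2 * Fp * (F + 1))

print(N(3, 2))   # F' = F - 1: equals 1, polarization conserved
print(N(3, 3))   # F' = F:     less than 1, partial loss
print(N(3, 4))   # F' = F + 1: less than 1, partial loss
```

The case $F=3 \rightarrow F'=2$ is exactly the transition used in the proposed measurement scheme, which is why the polarization built up during the second step survives the decay.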
\section*{References}
|
train/arxiv
|
BkiUfsnxK6mkyCKC3VvS
| 5
| 1
|
\section{Introduction}\label{sec:Intro}
Let $\Omega$ be an open set in $\mathbb{R}^3$, bounded or unbounded, and set
\[
\begin{split}
H_{loc}(\mathrm{curl}, \Omega)=& \big\{U|_{B}\in H(\mathrm{curl}, B); B\ \mbox{is any bounded subdomain of $\Omega$}\big\}, \\
H(\mathrm{curl}, B)=& \big\{U\in L^2(B)^3; \ \nabla\wedge U\in L^2(B)^3\big\}.
\end{split}
\]
Consider the time-harmonic Maxwell equations for $(\mathbf{E}, \mathbf{H})\in H_{loc}(\mathrm{curl}, \Omega)\times H_{loc}(\mathrm{curl}, \Omega)$:
\begin{equation}\label{eq:eig}
\nabla\wedge\mathbf{E}-\mathbf{i} k\mathbf{H}={\mathbf 0},\quad \nabla\wedge\mathbf{H}+\mathbf{i} k\mathbf{E}={\mathbf 0},
\end{equation}
where $\mathbf{i}:=\sqrt{-1}$ and $k\in\mathbb{R}_+$. In this paper, we are concerned with the unique continuation property (UCP) of the Maxwell system \eqref{eq:eig} in a particular scenario, which is strongly motivated by our study of a longstanding problem in the inverse electromagnetic scattering theory. In what follows, we first present the mathematical setup for our UCP study.
Let $B_\rho(\mathbf{x})$ denote a ball of radius $\rho\in\mathbb{R}_+$ and centered at $\mathbf{x}\in\mathbb{R}^3$. In the sequel, for a set $K\subset\mathbb{R}^3$, $B_\rho(K):=\{\mathbf{x}; \mathbf{x}\in B_\rho(\mathbf{y})\ \mbox{for some}\ \mathbf{y}\in K\}$.
Let $\Pi_1$ and $\Pi_2$ be two planes in $\mathbb{R}^3$ such that $\Pi_1\cap\Pi_2= \boldsymbol {L} $, where $ \boldsymbol {L} $ is a straight line. We suppose that there exists an open line segment $ \boldsymbol {l} \Subset \boldsymbol {L} $ and $\rho\in\mathbb{R}_+$ such that $B_\rho( \boldsymbol {l} )\Subset\Omega$. Let $\mathcal{W}(\Pi_1, \Pi_2)$ denote one of the wedge domains formed by $\Pi_1$ and $\Pi_2$, then
$\partial \mathcal{W}(\Pi_1,\Pi_2)\cap B_\rho( \boldsymbol {l} )$ is called an edge-corner associated with $\Pi_1$ and $\Pi_2$;
see Fig.~\ref{fig:coordinate1} for a schematic illustration. In the sequel, we let $\widetilde{\Pi}_j$, $j=1,2$, denote the two flat faces of the edge-corner lying on $\Pi_j$, respectively, and denote it by ${\mathcal E}(\widetilde\Pi_1, \widetilde\Pi_2, \boldsymbol {l} )$. Any $\mathbf{x}\in \boldsymbol {l} $ is said to be an edge-corner point of ${\mathcal E}(\widetilde\Pi_1, \widetilde\Pi_2, \boldsymbol {l} )$.
\begin{figure}[htbp]
\centering
\vspace*{-1cm} \includegraphics[width=0.25\linewidth]{edgecorner2d}\\[-25pt]
\caption{Schematic illustration of two intersecting planes with an edge-corner ${\mathcal E}(\widetilde\Pi_1, \widetilde\Pi_2, \boldsymbol {l} )$ and the dihedral angle $\phi_0$.}
\label{fig:coordinate1}
\end{figure}
Let $\boldsymbol{ \eta}_j$ denote a generalized impedance parameter on $\widetilde{\Pi}_j$, whose value must fulfil one of the following three possibilities
\begin{equation}\label{eq:imp1}
({\rm i})~\boldsymbol{ \eta}_j\equiv 0;\quad ({\rm ii})~\boldsymbol{ \eta}_j\equiv \infty;\quad ({\rm iii})~\boldsymbol{ \eta}_j\in L^\infty(\widetilde{\Pi}_j).
\end{equation}
Let $\nu_j\in\mathbb{S}^2$ be the unit normal vector to $\Pi_j$, pointing to the exterior of $\mathcal{W}(\Pi_1,\Pi_2)$. We introduce the following {\it generalized impedance condition} on $\widetilde{\Pi}_j$ associated with $(\mathbf{E}, \mathbf{H})$ to the Maxwell system \eqref{eq:eig}:
\begin{equation}\label{eq:imp2}
\nu_j \wedge (\nabla\wedge \mathbf{E})+\boldsymbol{ \eta}_j (\nu_j\wedge\mathbf{E})\wedge\nu_j\big|_{\widetilde{\Pi}_j}=0.
\end{equation}
In the case $ \boldsymbol{ \eta}_j\equiv \infty$, \eqref{eq:imp2} is understood as
\begin{equation}\label{eq:imp3}
(\nu_j\wedge\mathbf{E})\wedge\nu_j\big|_{\widetilde{\Pi}_j}=0.
\end{equation}
An edge-corner ${\mathcal E}(\widetilde\Pi_1, \widetilde\Pi_2, \boldsymbol {l} )$ with the generalized impedance condition \eqref{eq:imp2} imposed on $\widetilde{\Pi}_j$, $j=1,2$, is called a generalized impedance edge-corner associated with the Maxwell system \eqref{eq:eig}. In this paper, we shall consider the unique continuation property of the solution $(\mathbf{E}, \mathbf{H})$ to \eqref{eq:eig} with the presence of a generalized impedance edge-corner.
The UCP for differential equations from a crack in the domain has been the subject of many existing studies in the literature; see e.g. \cite{AF,CDD,Dal} and the references cited therein. However, the corresponding study for the Maxwell system is rather rare. Moreover, several other features make our current study new and distinct from many existing UCP studies from cracks. First, the Maxwell system \eqref{eq:eig} is defined in the whole domain $\Omega$, instead of the exterior of the crack, namely $\Omega\backslash {\mathcal E}(\widetilde\Pi_1, \widetilde\Pi_2, \boldsymbol {l} )$. Usually, for a typical UCP problem from a crack, the differential equation is given over the exterior of the crack, and hence the solution inherits a certain singularity from the pathological geometry of the crack. In our case, however, by standard PDE theory we know that $(\mathbf{E}, \mathbf{H})$ are real analytic in the interior of $\Omega$, and in particular in $B_\rho( \boldsymbol {l} )$, which is a neighbourhood of the edge-corner. This makes our UCP study seemingly rather ``artificial''. However, on the one hand, the UCP problem in this work is strongly motivated by our study of inverse electromagnetic scattering problems. This shall become more evident in Section~\ref{sec4}, and the UCP results shall generate some significant applications that are of both theoretical and practical importance. On the other hand, it turns out that the analyticity of the solutions around the edge-corner is a key factor that enables us to develop an algebraic argument, though a highly intricate and subtle one, in achieving the desired UCP. Second, the edge-corner geometry enables us to establish an accurate relationship between the vanishing order of the solutions to the Maxwell system and the angle of the edge-corner. In particular, if the angle is irrational, then the vanishing order is infinite, i.e. strong unique continuation holds from the edge-corner. 
We would like to point out that the extension to other, more general geometries seems rather impractical, though certain quantitative estimates are more plausible. Third, it is remarked that in our UCP study, the Robin-type generalized impedance condition \eqref{eq:imp2} is considered on the crack, namely the edge-corner, whereas most of the existing studies of UCP from cracks are concerned with homogeneous Dirichlet-type or Neumann-type conditions, which correspond to $\boldsymbol{ \eta} \equiv \infty$ or $\boldsymbol{ \eta} \equiv 0$, respectively.
As mentioned earlier, we shall consider two interesting and significant applications of the new UCP results to the study of inverse electromagnetic scattering problems. We postpone the mathematical formulation of the inverse problem to Section~\ref{sec5}; we are mainly concerned with the determination of an impenetrable obstacle as well as its boundary impedance by a single electromagnetic far-field measurement. This constitutes a longstanding problem in the inverse scattering theory (cf. \cite{CK18}). In \cite{LiuA,Liu3,Liu09}, the case $\boldsymbol \eta\equiv 0$ or $\boldsymbol \eta\equiv\infty$ was considered, and it was shown that a single far-field measurement can uniquely determine an obstacle of general polyhedral shape; the corresponding stability estimate was established in \cite{LRX}. The proofs are mainly based on the path argument originated in \cite{Liu-Zou} for the acoustic problem as well as a certain reflection principle for the Maxwell system established in \cite{Liu3,Liu09}. However, the arguments developed therein cannot be extended to tackle the case where the impedance parameter $\boldsymbol \eta$ is finite and non-identically zero, even in the simplest case where it is a finite and nonzero constant, and a fortiori a variable function as in our study. Using the UCP results derived in this paper, we are able to establish several novel unique identifiability results for this challenging problem in the polyhedral case, especially in the case where $\boldsymbol \eta$ is a finite and non-identically zero variable function. Nevertheless, we point out that we shall require certain mild but unobjectionable a-priori knowledge of the underlying polyhedral obstacle as well as its surface impedance. The other interesting application of our UCP results concerns ``information encoding'' for the inverse electromagnetic scattering problems. 
Indeed, we shall regard our UCP results as generalizing the classical Holmgren's principle \cite{CK,TF} for the Maxwell equations. With this view, we can provide an alternative means of electromagnetic scattering measurements for inverse problems that might have some practical implications.
The rest of the paper is organized as follows. Section \ref{sec:2} is devoted to some preliminary knowledge and auxiliary results. In Sections \ref{sec:5} and \ref{sec:6}, we establish the UCP results from a generalized impedance edge-corner for the Maxwell equations \eqref{eq:eig} in two different scenarios. In Section \ref{sec5}, we consider the inverse electromagnetic scattering problems and present two applications of the newly established UCP results.
\section{Preliminaries and auxiliary lemmas}\label{sec:2}
In this section, we collect some preliminary knowledge for the Maxwell system \eqref{eq:eig} as well as derive several auxiliary lemmas for our subsequent use.
First, we note that the Maxwell system \eqref{eq:eig} is invariant under rigid motions (cf. \cite{BLZ,LZh}). Hence, throughout the rest of this paper and without loss of generality, we can assume that the edge-corner ${\mathcal E}(\widetilde\Pi_1, \widetilde\Pi_2, \boldsymbol {l} ) \Subset \Omega$ satisfies
\begin{equation*}\label{notation}
\boldsymbol {l} =\big\{~ \mathbf {x} =( \mathbf {x} ',x_3)\in \mathbb{R}^3; \mathbf {x} ':=(x_1,x_2)=\mathbf{0},\ x_3\in (-h,h )\big\}\Subset\Omega,
\end{equation*}
where $2h\in\mathbb{R}_+$ is the length of $ \boldsymbol {l} $, and furthermore $\Pi_1$ coincides with the $(x_1,x_3)$-plane while $\Pi_2$ possesses a dihedral angle $\phi_0= \alpha\pi$ away from $\Pi_1$ in the anti-clockwise direction; see Fig.~\ref{fig:coordinate1} for a schematic illustration. Throughout, it is assumed that
\begin{equation}\label{eq:angle2a}
\alpha\in(0, 2)\quad\mbox{and}\quad \alpha\neq 1.
\end{equation}
It can be directly verified that the exterior unit normal vectors $\nu_j$ to $\Pi_j$, $j=1,2$, are given by
\begin{equation}\label{l1}
\begin{split}
&\nu_1=(0,-1,0)^\top , \quad \nu_2=(-\sin\phi_0,\cos\phi_0,0)^\top.
\end {split}
\end{equation}
As specified earlier, we have the generalized impedance condition \eqref{eq:imp2} imposed on $\widetilde\Pi_j$, where the boundary impedance parameter $\boldsymbol \eta_j$ fulfils \eqref{eq:imp1}.
In order to consider the unique continuation from the edge-corner as described above, we introduce the following definition.
\begin{definition}\label{def:3}
Let $\mathbf{E}\in H_{loc}(\mathrm{curl},\Omega)$ be a solution to \eqref{eq:eig} and suppose there exists an edge-corner ${\mathcal E}(\widetilde\Pi_1, \widetilde\Pi_2, \boldsymbol {l} ) \Subset \Omega$ as described above. For a given point $\mathbf{x}_0 \in \boldsymbol {l} $, if there exists a number $N \in {\mathbb N}\cup\{0\}$ such that
\begin{equation}\label{eq:normal3}
\lim_{\rho\rightarrow +0} \frac{1}{\rho^m} \int_{B_\rho(\mathbf{x}_0)}\, |\mathbf{E}(\mathbf{x})|\, {\rm d} \mathbf{x}=0\ \ \mbox{for}\ \ m=0,1,\ldots, {{N+2}},
\end{equation}
we say that $\mathbf{E}$ vanishes at $\mathbf{x}_0$ up to the order $N$. The largest possible $N$ such that \eqref{eq:normal3} is fulfilled is called the vanishing order of $\mathbf{E}$ at $\mathbf{x}_0$, and we write
\begin{equation*}\label{eq:normal4}
\mathrm{Vani}(\mathbf{E}; \mathbf{x}_0)=N.
\end{equation*}
If \eqref{eq:normal3} holds for any $N\in\mathbb{N}$, then we say that the vanishing order is infinity.
\end{definition}
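To illustrate the definition, suppose, purely as an assumption for this sketch, that $|\mathbf{E}(\mathbf{x})|$ behaves like $|\mathbf{x}-\mathbf{x}_0|^N$ near $\mathbf{x}_0$; then the moments vanish precisely for $m\le N+2$, which explains the exponent $N+2$ in \eqref{eq:normal3}. A quick SymPy check:

```python
# If |E| ~ r^N near x0 (an assumption of this illustration), the moments of
# the definition above vanish exactly for m = 0, ..., N+2 and not for m = N+3.
from sympy import symbols, integrate, pi, limit

rho, r = symbols('rho r', positive=True)
N = 3
# Integral of |x|^N over the ball B_rho(0), in spherical coordinates:
ball_integral = integrate(r**N * 4 * pi * r**2, (r, 0, rho))
assert ball_integral == 4 * pi * rho**(N + 3) / (N + 3)

for m in range(N + 3):                                   # m = 0, ..., N+2
    assert limit(ball_integral / rho**m, rho, 0) == 0
assert limit(ball_integral / rho**(N + 3), rho, 0) != 0  # fails at m = N+3
```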
Since $\mathbf{E}$ is (real) analytic in $\Omega$, we immediately see that if the vanishing order of $\mathbf{E}$ at any point $\mathbf{x}_0\in \boldsymbol {l} $ is infinity, then $\mathbf{E}\equiv 0$ in $\Omega$, namely the strong unique continuation property holds. In what follows, it is sufficient to consider the UCP at the origin $\mathbf{0}\in \boldsymbol {l} $. Moreover, due to the symmetric roles of $(\mathbf{E}, \mathbf{H})$ and $(-\mathbf{H}, \mathbf{E})$, namely both of them satisfy the same Maxwell system \eqref{eq:eig}, we only consider the vanishing order of $\mathbf{E}$, and the same result equally holds for $\mathbf{H}$. It turns out that the vanishing order of $\mathbf{E}$ is related to the {\it rationality} of the edge-corner angle, i.e. $\alpha\pi$, and we shall make this rigorous in what follows.
In the subsequent analysis, we shall make frequent use of the spherical coordinate of a point $ \mathbf {x} $ in $\mathbb{R}^3$:
\begin{equation}\label{eq:x sph}
\mathbf{ x}=(r\sin\theta\cos\phi, r\sin\theta\sin\phi, r\cos\theta):=(r,\theta,\phi),\ r\geq 0, \, {{\theta\in [0,\pi],\, \phi \in [0, 2\pi)}}\,.
\end{equation}
It is noted that
\begin{equation}\label{w1}
\begin{split}
&\boldsymbol{\hat{r}}=\sin\theta\cos\phi\cdot\hat{\mathbf{x}} +\sin\theta\sin\phi\cdot\hat{\mathbf{y}}+\cos\theta\cdot\hat{\mathbf{z}}\\
&\boldsymbol{\hat{\theta}}=\cos\theta\cos\phi\cdot\hat{\mathbf{x}}+\cos\theta\sin\phi\cdot\hat{\mathbf{y}}-\sin\theta\cdot\hat{\mathbf{z}}\\
&\boldsymbol{\hat{\phi}}=-\sin\phi\cdot\hat{\mathbf{x}}+\cos\phi\cdot\hat{\mathbf{y}}
\end{split}
\end{equation}
constitutes an orthonormal basis in the spherical coordinate system, where $\hat{\mathbf{x}}=(1,0,0)^{\top},\hat{\mathbf{y}}=(0,1,0)^{\top},\hat{\mathbf{z}}=(0,0,1)^{\top}$.
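As a quick consistency check, one can verify symbolically that the vectors in \eqref{w1} are orthonormal and right-handed for every $(\theta,\phi)$; a minimal SymPy sketch:

```python
# Symbolic check (illustrative) that r_hat, theta_hat, phi_hat of (w1)
# form a right-handed orthonormal basis for all theta, phi.
from sympy import symbols, sin, cos, Matrix, simplify, eye, zeros

theta, phi = symbols('theta phi', real=True)
r_hat = Matrix([sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta)])
theta_hat = Matrix([cos(theta) * cos(phi), cos(theta) * sin(phi), -sin(theta)])
phi_hat = Matrix([-sin(phi), cos(phi), 0])

# Gram matrix equals the identity, i.e. the basis is orthonormal:
B = Matrix.hstack(r_hat, theta_hat, phi_hat)
assert (B.T * B - eye(3)).applyfunc(simplify) == zeros(3, 3)
# Right-handedness: r_hat x theta_hat = phi_hat
assert (r_hat.cross(theta_hat) - phi_hat).applyfunc(simplify) == zeros(3, 1)
```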
\begin{definition}\label{def:class1}
Suppose that $\psi(r,\theta)$ is a complex-valued function for $(r, \theta)\in \Sigma:=[0, r_0]\times [-\theta_0, \theta_0]$, where $r_0, \theta_0\in\mathbb{R}_+$. $\psi$ is said to belong to class $\mathcal{A}$ in $\Sigma$ if it allows an absolutely convergent series representation as follows
\begin{equation}\label{eq:series1}
\psi(r,\theta)=a_0+\sum_{j=1}^\infty a_j(\theta) r^j,
\end{equation}
where $a_0\in\mathbb{C}\backslash\{0\}$ and $a_j(\theta)\in C[-\theta_0, \theta_0]$.
\end{definition}
There are two simple scenarios for $\psi(r,\theta)$ to belong to the class $\mathcal{A}$: first, $\psi$ is a non-zero constant; second, $\psi(r, \theta)$ is real-analytic in $\Sigma$ with $r_0, \theta_0$ sufficiently small and $\psi(0,\theta)$ independent of $\theta$. For an impedance parameter $\boldsymbol{ \eta}_j$ in \eqref{eq:imp2} in the third case, namely $\boldsymbol{ \eta}_j\in L^\infty(\widetilde\Pi_j)$, we readily see that in the $(r,\theta,\phi)$-coordinate, $\phi|_{\widetilde\Pi_1}=0$ and $\phi|_{\widetilde\Pi_2}=\phi_0$. In what follows, if for any $ \mathbf {x} _0\in \boldsymbol {l} $ there exists a neighbourhood $\Sigma_{ \mathbf {x} _0}$ of $ \mathbf {x} _0$ which is of the form in Definition~\ref{def:class1} and is contained in $\overline{\widetilde{\Pi}_j}$ such that $\psi_{ \mathbf {x} _0}(r,\theta):=\boldsymbol{ \eta}_j( \mathbf {x} - \mathbf {x} _0)$ belongs to the class $\mathcal{A}$ in $\Sigma_{ \mathbf {x} _0}$, then we say that $\boldsymbol{ \eta}_j$ belongs to the class $\mathcal{A}( \boldsymbol {l} )$. It is emphasized that $\boldsymbol{ \eta}_j$ belonging to the class $\mathcal{A}( \boldsymbol {l} )$ is a local property, localized around a neighbourhood of $ \boldsymbol {l} $ on $\widetilde{\Pi}_j$. In fact, our subsequent analysis of the UCP from the edge-corner ${\mathcal E}(\widetilde\Pi_1, \widetilde\Pi_2, \boldsymbol {l} )$ is confined locally to a neighbourhood of $ \boldsymbol {l} $, and indeed, to a neighbourhood of the origin $\mathbf{0}$, according to our earlier discussion.
Next, we consider the Fourier representations of the solutions to \eqref{eq:eig} in terms of the spherical waves. Throughout the rest of the paper, for a fixed $l \in \mathbb N$ we adopt the notation
\begin{equation}\label{eq:lindex}
[ l ]_0:=\{0,\pm 1, \ldots, \pm l \}, \quad [ l ]_1:=\{\pm 1, \ldots, \pm l \}.
\end{equation}
Recall that the spherical harmonic $Y_l^m(\theta,\phi)$ is given by
\begin{equation}\label{sphe harmonic}
\begin{split}
Y_l^m(\theta, \phi)=c_l^mP_l^{|m|}(\cos\theta)e^{{\mathbf{i}} m\phi},\quad c_l^m=\sqrt{\frac{2l+1}{4\pi}\frac{(l-|m|)!}{(l+|m|)!}},
\end{split}
\end{equation}
where $P_l^m(t)$ is the associated Legendre function. For simplicity, we write $Y_l^m$ for $Y_l^m(\theta, \phi)$ when the context is clear. For our subsequent use, the following lemma presents some important properties of the associated Legendre functions, which can be conveniently found in \cite{Abr}.
\begin{lemma}\label{base21}
In the spherical coordinate system, the associated Legendre functions fulfil the following orthogonality condition
for any fixed $n \in \mathbb N$ and any two integers $m, l$ with $0\le m, l\leq n$:
\begin{equation}\label{ortho3}
\int_{0}^{\pi}\frac{P_n^m(\cos\theta)P_n^l(\cos\theta)}{\sin\theta}\,d\theta=
\begin{cases}
0 &\mbox{ if }\quad l\neq m,\medskip\\
\frac{(n+m)!}{m(n-m)!} &\mbox{ if }\quad l=m \neq 0.
\end{cases}
\end{equation}
Furthermore, the following recursive relationships hold
\begin{equation}\label{uu}
\begin{split}
\frac{{\rm d} P_l^{|m|}(\cos\theta)}{{\rm d} \theta}&=\frac{1}{2}\big[P_l^{|m|+1}(\cos\theta)-(l+|m|)(l-|m|+1)P_l^{|m|-1}(\cos\theta)\big], \\
\frac{|m|}{\sin\theta}P_l^{|m|}(\cos\theta)&=-\frac{1}{2}\big[P_{l-1}^{|m|+1}(\cos\theta)+(l+|m|-1)(l+|m|)P_{l-1}^{|m|-1}(\cos\theta)\big],
\end{split}
\end{equation}
where $l\in \mathbb N$ and $m\in [ l]_0.$ If $P_l^m(\cos \theta)$ is evaluated at $\theta=0$, for $l\in \mathbb N \cup \{0\}$ we have
\begin{equation}\label{eq:plm0}
P_l^m(1)=0, \quad m\in [l]_1 ; \quad P_l^0(1)=1.
\end{equation}
For a fixed $n\in \mathbb N \cup \{0\}$ and $m\in \mathbb N$ with $m\leq n$, it holds that
\begin{equation}\label{eq:pnm neg}
P_{n}^{-m} (\cos \theta ) =(-1)^m \frac{ (n-m)!}{(n+m)!}P_n^m (\cos \theta ).
\end{equation}
\end{lemma}
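The identities in the lemma can be spot-checked numerically with SciPy's associated Legendre routine (which uses the Condon-Shortley phase); the degrees, orders and sample angle below are choices of this illustration:

```python
# Numerical spot-checks of the Legendre-function lemma; sample degree n = 3
# and order m = 2 are illustrative choices.
import math
from scipy.integrate import quad
from scipy.special import lpmv

n, m = 3, 2
# Orthogonality over the polar-angle range [0, pi]:
diag, _ = quad(lambda t: lpmv(m, n, math.cos(t)) ** 2 / math.sin(t), 0, math.pi)
assert abs(diag - math.factorial(n + m) / (m * math.factorial(n - m))) < 1e-6  # = 60
off, _ = quad(lambda t: lpmv(m, n, math.cos(t)) * lpmv(1, n, math.cos(t)) / math.sin(t),
              0, math.pi)
assert abs(off) < 1e-6

# Second recursion in (uu):
t = 0.7
lhs = m / math.sin(t) * lpmv(m, n, math.cos(t))
rhs = -0.5 * (lpmv(m + 1, n - 1, math.cos(t))
              + (n + m - 1) * (n + m) * lpmv(m - 1, n - 1, math.cos(t)))
assert abs(lhs - rhs) < 1e-10

# Values at theta = 0, i.e. at cos(theta) = 1:
assert abs(lpmv(0, n, 1.0) - 1.0) < 1e-12 and abs(lpmv(m, n, 1.0)) < 1e-12
```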
Recall that the spherical Bessel function $j_\ell (t)$ of order $\ell $ is defined by
\begin{equation}\label{eq:bess sph}
j_\ell (t)=\frac{t^\ell }{ (2\ell+1)!!}\left (1+\sum_{l=1}^\infty \frac{(-1)^l t^{2l }}{ 2^l l! (2\ell+3)\cdots (2\ell+2l+1) }\right )=\frac{t^\ell }{ (2\ell+1)!!}+{\mathcal O} (t^{\ell+2}) .
\end{equation}
The following recursive relationships hold \cite{Abr}:
\begin{equation}\label{eq:bessel}
\begin{split}
\frac{j_\ell(t)}{t}=\frac{ j_{\ell-1}(t)+j_{\ell+1}(t)} {2\ell+1}, \quad j_{\ell }'(t) = \frac{\ell j_{\ell-1}(t)-(\ell+1) j_{\ell+1}(t)}{2\ell+1} , \quad \ell \in \mathbb N.
\end{split}
\end{equation}
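Both the small-argument behaviour \eqref{eq:bess sph} and the recursions \eqref{eq:bessel} can be spot-checked numerically; the order and argument values below are arbitrary choices of this sketch:

```python
# Numerical sanity check (illustrative) of the spherical Bessel identities;
# ell and t are arbitrary sample values.
import math
from scipy.special import spherical_jn

ell, t = 4, 0.9
# j_l(t)/t = (j_{l-1}(t) + j_{l+1}(t)) / (2l+1)
assert abs(spherical_jn(ell, t) / t
           - (spherical_jn(ell - 1, t) + spherical_jn(ell + 1, t))
           / (2 * ell + 1)) < 1e-12
# j_l'(t) = (l j_{l-1}(t) - (l+1) j_{l+1}(t)) / (2l+1)
assert abs(spherical_jn(ell, t, derivative=True)
           - (ell * spherical_jn(ell - 1, t)
              - (ell + 1) * spherical_jn(ell + 1, t)) / (2 * ell + 1)) < 1e-12
# Leading behaviour j_l(t) ~ t^l / (2l+1)!! for small t:
t_small = 1e-3
dfact = math.prod(range(2 * ell + 1, 0, -2))          # (2l+1)!! = 945 for l = 4
assert abs(spherical_jn(ell, t_small) / (t_small ** ell / dfact) - 1) < 1e-6
```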
\begin{lemma}\cite[Lemma 2.5]{CDL2}\label{lem:coeff0}
Suppose that for $t\in(0, h)$, $h\in\mathbb{R}_+$,
\begin{equation}\label{coef1}
\sum_{n=0}^{\infty}\alpha_nj_n(t)=0,
\end{equation}
where $j_n(t)$ is the $n$-th spherical Bessel function. Then
$\alpha_n=0,\quad n=0,1,2,\ldots.$
\end{lemma}
\begin{lemma}\cite{CK}\label{k2}
Recall that $\hat{\boldsymbol{r}},\hat{\boldsymbol\theta}$ and $\hat{\boldsymbol\phi}$ are defined in \eqref{w1}. Denote
\begin{equation}\label{ww}
\begin{split}
&\mathbf{M}_l^m( \mathbf {x} )=j_l{(kr)}\cdot\mathbf{X}_l^m,\quad \mathbf{N}_l^m( \mathbf {x} )=\boldsymbol{\mathrm{i}}\bigg(\frac{ j_l(kr)}{kr}+j'_l\big(kr\big)\bigg)\mathbf{Z}_l^m-\frac{\sqrt{l(l+1)}}{kr}\cdot j_l(kr)Y_l^m\cdot\boldsymbol{\hat{r}},
\end{split}
\end{equation}
where $k\in \mathbb R_+$, $j'_l\big(kr\big)$ is the derivative of $j_l(kr)$ with respect to $kr$, and
$$\mathbf{X}_l^m=\frac{\boldsymbol{\mathrm{i}}}{\sqrt{l(l+1)}}\bigg(\frac{\boldsymbol{\mathrm{i}}\cdot m}{\sin\theta}Y_l^m\hat{\boldsymbol{\theta}}
-\frac{\partial{Y_l^m}}{\partial\theta}\cdot \hat{\boldsymbol\phi}\bigg),\quad \mathbf{Z}_l^m=\frac{\boldsymbol{\mathrm{i}}}{\sqrt{l(l+1)}}
\bigg(\frac{\partial{Y_l^m}}{\partial\theta}\hat{\boldsymbol\theta}+\frac{\boldsymbol{\mathrm{i}}\cdot m}{\sin\theta}Y_l^m\hat{\mathbf{\boldsymbol\phi}}\bigg) .
$$
The solution $ \mathbf{E}( \mathbf {x} )$ to \eqref{eq:eig} has the following Fourier expansion around $\mathbf{0}$,
\begin{equation}\notag
\begin{split}
\mathbf{E}( \mathbf {x} )=\sum_{l=1}^{\infty}\sum_{m=-l}^{l}\bigg(a_l^m\cdot\mathbf{M}_l^m( \mathbf {x} )+b_l^m\cdot\mathbf{N}_l^m( \mathbf {x} )\bigg),\quad a_l^m, b_l^m \in \mathbb{C},
\end{split}
\end{equation}
which (together with its derivatives) converges uniformly in $B_{\rho_0}({\mathbf 0})$ for a sufficiently small $\rho_0\in\mathbb{R}_+$.
\end{lemma}
Using \eqref{eq:bessel}, from Lemma \ref{k2}, we can derive that
\begin{equation}
\begin{split}\label{mix pi21}
\mathbf{E}( \mathbf {x} )=&-\sum_{l=1}^{\infty}\sum_{m=-l}^{l}\frac{1}{\sqrt{l(l+1)}}\Bigg\{ b_l^m\cdot{l(l+1)}p_l(kr)\cdot Y_l^m \cdot\hat{\boldsymbol{r}}\\
&+ \bigg[a_l^m\cdot j_l\big(kr\big)\frac{m}{\sin\theta}Y_l^m+b_l^m\cdot
q_l(kr)\cdot\frac{\partial{Y_l^m}}{\partial\theta}\bigg]\cdot \hat{\boldsymbol\theta}\\
&+\boldsymbol{\mathrm{i}} \bigg[a_l^m\cdot j_l(kr) \frac{\partial{Y_l^m}}{\partial\theta}+b_l^m\cdot
q_l(kr)\frac{ m}{\sin\theta}Y_l^m
\bigg]\cdot\hat{\boldsymbol\phi}\Bigg\},
\end{split}
\end{equation}
where
\begin{equation}\label{eq:plql}
\begin{split}
p_l(kr)=\frac{j_{l-1}\big(kr\big)+j_{l+1}\big(kr\big)}{2l+1},\quad q_l(kr)=\frac{(l+1)j_{l-1}\big(kr\big)-lj_{l+1}\big(kr\big)}{2l+1}.
\end{split}
\end{equation}
\begin{remark}\label{i2}
In view of \eqref{eq:bess sph}, the lowest order terms of $ p_l(kr)$ and $ q_l(kr)$ with respect to the power of $r$ are
$$
\frac{k^{l-1} }{(2l+1) (2l-1)!!} r^{l-1} \mbox{ and } \frac{(l+1)k^{l-1} }{(2l+1) (2l-1)!!} r^{l-1}
$$
respectively.
\end{remark}
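A numerical check of the leading-order claim in the remark above; the values of $k$, $l$ and the small radius are arbitrary choices of this sketch:

```python
# Check (illustrative) that p_l(kr) and q_l(kr) of (eq:plql) behave to
# leading order like the r^(l-1) terms stated in the remark above.
from scipy.special import spherical_jn

def dfact(n):                         # double factorial n!!
    out = 1
    while n > 1:
        out, n = out * n, n - 2
    return out

k, l, r = 2.0, 3, 1e-4
p = (spherical_jn(l - 1, k * r) + spherical_jn(l + 1, k * r)) / (2 * l + 1)
q = ((l + 1) * spherical_jn(l - 1, k * r)
     - l * spherical_jn(l + 1, k * r)) / (2 * l + 1)

p_lead = k ** (l - 1) / ((2 * l + 1) * dfact(2 * l - 1)) * r ** (l - 1)
q_lead = (l + 1) * k ** (l - 1) / ((2 * l + 1) * dfact(2 * l - 1)) * r ** (l - 1)
assert abs(p / p_lead - 1) < 1e-6 and abs(q / q_lead - 1) < 1e-6
```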
\begin{lemma}\cite[Proposition 2.1.7]{krantz}\label{lem:kra}
If the power series $\sum_{\mu } a_{\mu } {\mathbf x}^\mu $ converges at a point ${\mathbf x}_0$, then it converges uniformly and absolutely on compact subsets of $U({\mathbf x}_0)$, where
$$
U({\mathbf x}_0)=\{(r_1 x_{0,1},\ldots, r_n x_{0,n}):-1<r_j<1,j=1,\ldots,n\}, \, {\mathbf x}_0=(x_{0,1},\ldots, x_{0,n}) \in {\mathbb R}^n.
$$
\end{lemma}
Using Definition \ref{def:3}, in view of \eqref{mix pi21}, we can obtain the following lemma.
\begin{lemma}\label{lem:vani}
Let $\mathbf E$ be a solution to \eqref{eq:eig}. Recall that $\mathbf E$ has the radial wave expansion \eqref{mix pi21} in $B_{\rho_0}(\mathbf{0})$. For a fixed $N \in \mathbb N$, if
\begin{equation}\label{eq:216 cond1}
a_l^m=b_l^m=0,\quad m\in [l]_0, \quad l=1,2,\ldots,N,
\end{equation}
where $[l]_0$ is defined in \eqref{eq:lindex}, then
\begin{equation}\label{eq:217 cond}
\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq N.
\end{equation}
Conversely, if there exists $N\in \mathbb N$ such that \eqref{eq:217 cond} holds, then we have \eqref{eq:216 cond1}.
\end{lemma}
\begin{proof} From Lemma \ref{lem:kra}, we know that \eqref{mix pi21} converges uniformly and absolutely in $B_{\rho_1}({\mathbf 0})$, where $0<\rho_1<\rho_0$.
Substituting \eqref{eq:216 cond1} into \eqref{mix pi21}, we have
\begin{equation}
\begin{split}\label{eq:E N+1}
\mathbf{E}( \mathbf {x} )=&-\sum_{l=N+1}^{\infty}\sum_{m=-l}^{l}\frac{1}{\sqrt{l(l+1)}}\Bigg\{ b_l^m\cdot{l(l+1)}p_l(kr)\cdot Y_l^m\cdot\hat{\boldsymbol{r}}\\
&+ \bigg[a_l^m\cdot j_l\big(kr\big)\frac{m}{\sin\theta}Y_l^m+b_l^m\cdot
q_l(kr)\cdot\frac{\partial{Y_l^m}}{\partial\theta}\bigg]\cdot \hat{\boldsymbol\theta}\\
&+\boldsymbol{\mathrm{i}} \bigg[a_l^m\cdot j_l(kr) \frac{\partial{Y_l^m}}{\partial\theta}+b_l^m\cdot
q_l(kr)\frac{ m}{\sin\theta}Y_l^m
\bigg]\cdot\hat{\boldsymbol\phi}\Bigg\}.
\end{split}
\end{equation}
From Remark \ref{i2}, the lowest power of $r$ appearing in \eqref{eq:E N+1} is $N$. Therefore,
\begin{equation}
\begin{split}\label{eq:E N+1 rN}
\frac{\mathbf{E}( \mathbf {x} )}{r^N}=&-\sum_{l=N+1}^{\infty}\sum_{m=-l}^{l}\frac{1}{\sqrt{l(l+1)}}\Bigg\{ b_l^m\cdot{l(l+1)}\frac{ p_l(kr)}{r^N }\cdot Y_l^m\cdot\hat{\boldsymbol{r}}\\
&+ \bigg[a_l^m\cdot \frac{j_l\big(kr\big)}{r^N}\frac{m}{\sin\theta}Y_l^m+b_l^m\cdot
\frac{q_l(kr)}{r^N}\cdot\frac{\partial{Y_l^m}}{\partial\theta}\bigg]\cdot \hat{\boldsymbol\theta}\\
&+\boldsymbol{\mathrm{i}} \bigg[a_l^m\cdot \frac{j_l(kr)}{r^N} \frac{\partial{Y_l^m}}{\partial\theta}+b_l^m\cdot
\frac{q_l(kr)}{r^N}\frac{ m}{\sin\theta}Y_l^m
\bigg]\cdot\hat{\boldsymbol\phi}\Bigg\}
\end{split}
\end{equation}
converges uniformly and absolutely in $B_{\rho_1}({\mathbf 0})$, which implies
\begin{equation}\label{eq:221}
\left| \frac{\mathbf{E}( \mathbf {x} )}{r^N} \right| = {\mathcal O}(1), \quad \mbox{ as } r \rightarrow +0 .
\end{equation}
In view of Definition \ref{def:3}, by virtue of \eqref{eq:221}, we have
\begin{equation}\notag
\lim_{\rho\rightarrow +0} \frac{1}{\rho^m} \int_{B_\rho(\mathbf{0} )}\, |\mathbf{E}(\mathbf{x})|\, {\rm d} \mathbf{x}\leq \lim_{\rho\rightarrow +0} \frac{\rho^{N+2}}{\rho^m} \int_{0}^\rho\int_{0}^\pi \int_{0}^{2\pi} \left| \frac{\mathbf{E}( \mathbf {x} )}{r^N} \right| {\mathrm{d}}r {\mathrm{d}}\theta {\mathrm{d}}\phi=0,
\end{equation}
which holds for $m=0,1,\ldots, {{N+2}},$ and this proves \eqref{eq:217 cond}. The other direction of the conclusion can be proved by using similar arguments.
\end{proof}
\begin{lemma}\label{eq:e1e21}
Let $\mathbf E$ be a solution to \eqref{eq:eig}. Recall that $\mathbf{E}$ has the radial wave expansion \eqref{mix pi21} in $B_{\rho_0}(\mathbf{0})$. Consider an edge-corner ${\mathcal E}(\widetilde\Pi_1, \widetilde\Pi_2, \boldsymbol {l} )\Subset \Omega$ associated with $\mathbf{E}$. Recall that $\nu_i$ defined in \eqref{l1} are the outward unit normal vectors to $\Pi_i$, $i=1,2$. Then
\begin{equation}\label{mix pi2}
\begin{split}
& \nu_1 \wedge \mathbf{E}|_{\widetilde\Pi_1}= \sum_{l=1}^{\infty}\sum_{m=-l}^{l}-\frac{1}{\sqrt{l(l+1)}}\Bigg\{b_l^ml(l+1)p_l(kr)Y_l^m\Big|_{\phi=0}
\boldsymbol{e_1}(\theta,0)\\
& \hspace{2cm} +\bigg(
a_l^mj_l\big(kr\big)\frac{m}{\sin\theta}Y_l^m\Big|_{\phi=0}+b_l^m\cdot q_l(kr)
\frac{\partial{Y_l^m}}{\partial\theta}\Big|_{\phi=0}
\bigg)\boldsymbol{e_2}(\theta,0)\Bigg\},\\
&\nu_2 \wedge \mathbf{E}|_{\widetilde\Pi_2}= \sum_{l=1}^{\infty}\sum_{m=-l}^{l}-\frac{1}{\sqrt{l(l+1)}}\Bigg\{b_l^ml(l+1)p_l(kr)Y_l^m\Big|_{\phi=\phi_0}
\boldsymbol{e_1}(\theta,\phi_0) \\
&\hspace{2cm} +\bigg(
a_l^mj_l\big(kr\big)\frac{m}{\sin\theta}Y_l^m \Big|_{\phi=\phi_0}
+b_l^m\cdot q_l(kr)\frac{\partial{Y_l^m}}{\partial\theta}\Big|_{\phi=\phi_0}\bigg)
\boldsymbol{e_2}(\theta,\phi_0)\Bigg\},
\end{split}
\end{equation}
where
\begin{equation}\label{eq:e1e2}
\boldsymbol{e}_{1}\left(\theta, \phi\right)=\left[\begin{array}{c}\cos \phi \cos \theta \\ \sin \phi \cos \theta \\ -\sin \theta\end{array}\right] \mbox{ and } \boldsymbol{e}_{2}\left(\theta, \phi\right)=-\left[\begin{array}{c} \cos \phi \sin \theta \\ \sin \phi \sin \theta \\ \cos \theta\end{array}\right],
\end{equation}
are linearly independent for any $\theta$ and $\phi$.
Furthermore, we have
\begin{equation}\label{gg}
\begin{split}
&\nu_1 \wedge(\nabla\wedge \mathbf{E}|_{\widetilde\Pi_1})={\mathbf{i}k}\sum_{l=1}^{\infty}\sum_{m=-l}^{l}\frac{1}{\sqrt{l(l+1)}}\Bigg\{a_l^ml(l+1)p_l(kr)Y_l^m \Big|_{\phi=0}
\cdot\boldsymbol{e_1}(\theta,0)\\
&\hspace{3cm} +\bigg(- b_l^mj_l(kr)\cdot\frac{m}{\sin\theta}Y_l^m+a_l^m\cdot
q_l(kr)\cdot\frac{\partial{Y_l^m}}{\partial\theta}\Big|_{\phi=0}\bigg)
\cdot\boldsymbol{e_2}(\theta,0)\Bigg\},\\
&\nu_2 \wedge(\nabla\wedge \mathbf{E}|_{\widetilde\Pi_2})={\mathbf{i}k}\sum_{l=1}^{\infty}\sum_{m=-l}^{l}\frac{1}{\sqrt{l(l+1)}}\Bigg\{a_l^ml(l+1)p_l(kr)Y_l^m \Big|_{\phi=\phi_0}
\cdot\boldsymbol{e_1}(\theta,\phi_0)\\
&\hspace{3cm}+\bigg( -b_l^mj_l(kr)\cdot\frac{m}{\sin\theta}Y_l^m \Big|_{\phi=\phi_0}+a_l^m
q_l(kr)\cdot\frac{\partial{Y_l^m}}{\partial\theta}\Big|_{\phi=\phi_0}\bigg)
\cdot\boldsymbol{e_2}(\theta,\phi_0)\Bigg\}.
\end{split}
\end{equation}
\end{lemma}
\begin{proof} Using the fact that $\phi=\phi_0$ for $ \mathbf {x} =(r,\theta, \phi) \in \Pi_2$, it is easy to see that
\begin{equation}
\label{eq:nu2 r}
\begin{split}
& \nu_2 \wedge (\hat{\boldsymbol{r}}|_{\phi=
\phi_0})=\begin{bmatrix}
\cos\phi_0\cos\theta\\ \sin\phi_0\cos\theta \\-\sin\theta\end{bmatrix} ,
\quad\ \nu_2 \wedge (\hat{\boldsymbol\theta}|_{\phi=\phi_0})=\begin{bmatrix}
-\cos\phi_0\sin\theta\\ -\sin\phi_0\sin\theta \\-\cos\theta
\end{bmatrix},\quad \nu_2 \wedge (\hat{\boldsymbol\phi}|_{\phi=\phi_0})={\mathbf 0},
\end{split}
\end{equation}
from which we can derive the second equation of \eqref{mix pi2}. The first equation of \eqref{mix pi2} can be obtained in a similar way.
Recall that $\mathbf{M}_l^m( \mathbf {x} )$ and $\mathbf{N}_l^m( \mathbf {x} )$ are defined in \eqref{ww}. Using the identity
$
\nabla\wedge \mathbf{M}_l^m( \mathbf {x} )=-\mathbf{i}k \mathbf{N}_l^m( \mathbf {x} )$ and $ \nabla\wedge \mathbf{N}_l^m( \mathbf {x} )=\mathbf{i}k\mathbf{M}_l^m( \mathbf {x} )
$ (cf. \cite{CK})
we can obtain that
\begin{align}
\nabla\wedge \mathbf{E}|_{\widetilde\Pi_1}=\mathbf{i}k\sum_{l=1}^{\infty}\sum_{m=-l}^{l}&\frac{1}{\sqrt{l(l+1)}}\Bigg\{a_l^m\cdot{l(l+1)}p_l(kr)Y_l^m\cdot\nu_1\wedge\hat{\boldsymbol{r}}|_{\phi=0}\notag\\
&+\bigg(-b_l^mj_l(kr)\frac{m}{\sin\theta}Y_l^m+a_l^mq_l(kr)\frac{\partial{Y_l^m}}{\partial\theta}\bigg)\cdot\nu_1\wedge\hat{\boldsymbol\theta}|_{\phi=0}\notag\\
&+\bigg(-b_l^mj_l(kr)\boldsymbol{\mathrm{i}}\frac{\partial{Y_l^m}}{\partial\theta}+a_l^m\cdot
q_l(kr)\frac{\boldsymbol{\mathrm{i}}m}{\sin\theta}Y_l^m\bigg)\cdot\nu_1\wedge\hat{\boldsymbol\phi}|_{\phi=0}\Bigg\},\notag\\
\nabla\wedge \mathbf{E}|_{\widetilde\Pi_2}=\mathbf{i}k\sum_{l=1}^{\infty}\sum_{m=-l}^{l}&\frac{1}{\sqrt{l(l+1)}}\Bigg\{a_l^m\cdot{l(l+1)}p_l(kr)Y_l^m\cdot\nu_2\wedge\hat{\boldsymbol{r}}|_{\phi=\phi_0}\notag\\
&+\bigg(-b_l^mj_l(kr)\frac{m}{\sin\theta}Y_l^m+a_l^mq_l(kr)\frac{\partial{Y_l^m}}{\partial\theta}\bigg)\nu_2\wedge\hat{\boldsymbol\theta}|_{\phi=\phi_0}\notag\\
&+\bigg(-b_l^mj_l(kr)\boldsymbol{\mathrm{i}}\frac{\partial{Y_l^m}}{\partial\theta}+a_l^m\cdot
q_l(kr)\frac{\boldsymbol{\mathrm{i}}m}{\sin\theta}Y_l^m\bigg)\nu_2\wedge\hat{\boldsymbol\phi}|_{\phi=\phi_0}\Bigg\}.\label{eq:curl E}
\end{align}
Combining \eqref{eq:curl E} with \eqref{eq:nu2 r}, together with straightforward though somewhat tedious calculations, one can deduce the second equation of \eqref{gg}. The first equation of \eqref{gg} can be shown in a similar manner.
The proof is complete.
\end{proof}
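The cross-product identities \eqref{eq:nu2 r} and the vectors $\boldsymbol{e}_1$, $\boldsymbol{e}_2$ in \eqref{eq:e1e2} can also be verified symbolically; a minimal SymPy sketch with our own variable names:

```python
# Symbolic verification (illustrative) of the identities used in the proof:
# nu_2 ^ r_hat = e_1(theta, phi_0), nu_2 ^ theta_hat = e_2(theta, phi_0),
# and nu_2 ^ phi_hat = 0 on Pi_2.
from sympy import symbols, sin, cos, Matrix, simplify, zeros

theta, phi0 = symbols('theta phi0', real=True)
nu2 = Matrix([-sin(phi0), cos(phi0), 0])                 # exterior normal to Pi_2
r_hat = Matrix([sin(theta) * cos(phi0), sin(theta) * sin(phi0), cos(theta)])
theta_hat = Matrix([cos(theta) * cos(phi0), cos(theta) * sin(phi0), -sin(theta)])
phi_hat = Matrix([-sin(phi0), cos(phi0), 0])

e1 = Matrix([cos(phi0) * cos(theta), sin(phi0) * cos(theta), -sin(theta)])
e2 = -Matrix([cos(phi0) * sin(theta), sin(phi0) * sin(theta), cos(theta)])

assert (nu2.cross(r_hat) - e1).applyfunc(simplify) == zeros(3, 1)
assert (nu2.cross(theta_hat) - e2).applyfunc(simplify) == zeros(3, 1)
assert (nu2.cross(phi_hat)).applyfunc(simplify) == zeros(3, 1)
```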
\begin{lemma}
Let $\mathbf E$ be a solution to \eqref{eq:eig}. Recall that $\mathbf{E}$ has the radial wave expansion \eqref{mix pi21} in $B_{\rho_0}(\mathbf{0})$. Consider an edge-corner ${\mathcal E}(\widetilde\Pi_1, \widetilde\Pi_2, \boldsymbol {l} )\Subset \Omega$ associated with $\mathbf{E}$. Recall that $\nu_i$ defined in \eqref{l1} are the outward unit normal vectors to $\Pi_i$, $i=1,2$. Assume that $ \boldsymbol{ \eta}_1,\boldsymbol{ \eta}_2 $ belong to the class $\mathcal{A}( \boldsymbol {l} )$. Then we have
\begin{align}
&\nu_1 \wedge (\nabla\wedge\mathbf{E}|_{\widetilde{ \Pi}_1})+\boldsymbol{ \eta}_1(\nu_1 \wedge\mathbf{E}|_{\widetilde{ \Pi}_1})\wedge\nu_1\notag\\
=&\sum_{l=1}^{\infty}\sum_{m=-l}^{l} \frac{1}{\sqrt{l(l+1)}}\Bigg\{\bigg(
\mathbf{i}ka_l^ml(l+1)p_l(kr)Y_l^m -\boldsymbol{ \eta}_1 a_l^mj_l(kr)\frac{m}{\sin\theta}Y_l^m
\notag \\
&-\boldsymbol{ \eta}_1 b_l^mq_l(kr)\frac{\partial{Y_l^m}}{\partial\theta}\bigg) \boldsymbol{e_1}(\theta,0)+\bigg(-\mathbf{i}kb_l^mj_l(kr)\frac{m}{\sin\theta}Y_l^m+\boldsymbol{\mathrm{i}}ka_l^m
q_l(kr)\frac{\partial{Y_l^m}}{\partial\theta} \notag\\
&
+\boldsymbol{ \eta}_1 b_l^ml(l+1)p_l(kr)Y_l^m \bigg)\cdot\boldsymbol{e_2}(\theta,0)\Bigg\},\label{ss1}
\end{align}
and
\begin{equation}\label{ss2}
\begin{split}
&\nu_2 \wedge (\nabla\wedge\mathbf{E}|_{\widetilde{ \Pi}_2})+\boldsymbol{ \eta}_2(\nu_2 \wedge\mathbf{E}|_{\widetilde{ \Pi}_2})\wedge\nu_2\\
=&\sum_{l=1}^{\infty}\sum_{m=-l}^{l} \frac{1}{\sqrt{l(l+1)}}\Bigg\{\bigg(
\mathbf{i}ka_l^ml(l+1)p_l(kr)Y_l^m -\boldsymbol{ \eta}_2 a_l^mj_l(kr)\frac{m}{\sin\theta}Y_l^m\\
& -\boldsymbol{ \eta}_2 b_l^mq_l(kr)\frac{\partial{Y_l^m}}{\partial\theta}\bigg)
\boldsymbol{e_1}(\theta,\phi_0) +\bigg(-\mathbf{i}kb_l^mj_l(kr)\frac{m}{\sin\theta}Y_l^m+\mathbf{i}ka_l^m
q_l(kr)\frac{\partial{Y_l^m}}{\partial\theta}\\
&+\boldsymbol{ \eta}_2 b_l^ml(l+1)p_l(kr)Y_l^m\bigg)
\cdot\boldsymbol{e_2}(\theta,\phi_0)\Bigg\},
\end{split}
\end{equation}
where $\boldsymbol{e_1}(\theta, 0)$, $\boldsymbol{e_2}(\theta,0)$, $\boldsymbol{e_1}(\theta,\phi_0)$ and $\boldsymbol{e_2}(\theta,\phi_0)$ are defined in \eqref{eq:e1e2}.
\end{lemma}
\begin{proof} Recall that $\nu_2$ is defined in \eqref{l1}, and that $\hat{\boldsymbol{r}}$, $\hat{\boldsymbol{\theta}}$ and $\hat{\boldsymbol{\phi}}$ are given by \eqref{w1}. Then it is easy to see that
\begin{equation}
\label{eq:22 cross}
\begin{split}
(\nu_2\wedge\hat{\boldsymbol{r}})\wedge\nu_2&=(\cos\phi_0\sin\theta,\sin\phi_0\sin\theta,\cos\theta)^\top ,\\
(\nu_2\wedge \hat{\boldsymbol{\theta}})\wedge\nu_2&=(\cos\phi_0\cos\theta,\sin\phi_0\cos\theta,-\sin\theta)^\top,\quad (\nu_2\wedge\hat{\boldsymbol{\phi}})\wedge\nu_2 ={\mathbf 0}.
\end{split}
\end{equation}
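The triple products in \eqref{eq:22 cross} can be confirmed with a computer algebra system. The following is a minimal sanity check (not part of the proof); the concrete form $\nu_2=(-\sin\phi_0,\cos\phi_0,0)^\top$ of the normal from \eqref{l1} is an assumption made for this check, consistent with the evaluations used later in \eqref{eq:48}.

```python
# Symbolic check of (eq:22 cross): (nu_2 ^ v) ^ nu_2 for v = r_hat, theta_hat,
# phi_hat evaluated on Pi_2 (phi = phi_0); nu_2 = (-sin(phi_0), cos(phi_0), 0)^T
# is an assumed reconstruction of the normal defined in (l1).
import sympy as sp

theta, phi0 = sp.symbols('theta phi_0', real=True)

nu2 = sp.Matrix([-sp.sin(phi0), sp.cos(phi0), 0])
# spherical unit vectors of (w1), restricted to the half-plane phi = phi_0
r_hat = sp.Matrix([sp.cos(phi0)*sp.sin(theta), sp.sin(phi0)*sp.sin(theta), sp.cos(theta)])
th_hat = sp.Matrix([sp.cos(phi0)*sp.cos(theta), sp.sin(phi0)*sp.cos(theta), -sp.sin(theta)])
ph_hat = sp.Matrix([-sp.sin(phi0), sp.cos(phi0), 0])

def dtri(v):
    # the double cross product (nu_2 ^ v) ^ nu_2
    return nu2.cross(v).cross(nu2)

# matches the right-hand sides displayed in (eq:22 cross)
assert sp.simplify(dtri(r_hat) - r_hat) == sp.zeros(3, 1)
assert sp.simplify(dtri(th_hat) - th_hat) == sp.zeros(3, 1)
assert sp.simplify(dtri(ph_hat)) == sp.zeros(3, 1)
```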
Using \eqref{mix pi21} and \eqref{gg}, we can derive that
\begin{align}
&\nu_2 \wedge (\nabla\wedge\mathbf{E}|_{\widetilde{ \Pi}_2})+\boldsymbol{ \eta} _2(\nu_2\wedge\mathbf{E}|_{\widetilde{ \Pi}_2})\wedge\nu_2\notag\\
=& \sum_{l=1}^{\infty}\sum_{m=-l}^{l}\frac{1}{\sqrt{l(l+1)}}\Bigg\{\mathbf{i}k\Bigg(a_l^ml(l+1)p_l(kr)Y_l^m
\cdot\nu_2 \wedge \hat{\boldsymbol{r}}\notag \\
&+\bigg( -b_l^mj_l(kr)\cdot\frac{m}{\sin\theta}Y_l^m+a_l^m
q_l(kr)\frac{\partial{Y_l^m}}{\partial\theta}\bigg)
\cdot\nu_2 \wedge \hat{\boldsymbol\theta}\notag \\
&+\bigg(-b_l^mj_l(kr)\mathbf{i}\frac{\partial{Y_l^m}}{\partial\theta}+a_l^m
q_l(kr)\frac{\mathbf{i}m}{\sin\theta}Y_l^m\bigg)\nu_2 \wedge \hat{\boldsymbol\phi}\Bigg)\notag \\
&-\boldsymbol{ \eta} _2\Bigg(\bigg(b_l^m\cdot{l(l+1)}p_l(kr)\cdot Y_l^m\bigg)\cdot(\nu_2\wedge\hat{\mathbf{r}})\wedge\nu_2\notag\\
&+ \bigg(a_l^m j_l\big(kr\big)\frac{m}{\sin\theta}Y_l^m+b_l^m
q_l(kr) \frac{\partial{Y_l^m}}{\partial\theta}\bigg) (\nu_2\wedge \hat{\boldsymbol{\theta}})\wedge\nu_2 \notag \\
&+\mathbf{i} \bigg(a_l^m j_l(kr)\frac{\partial{Y_l^m}}{\partial\theta}+\frac{ m b_l^m
q_l(kr)}{\sin\theta}Y_l^m
\bigg) (\nu_2\wedge\hat{\boldsymbol{\phi}})\wedge\nu_2\Bigg)\Bigg\}. \label{eq:23}
\end{align}
Substituting \eqref{eq:nu2 r} and \eqref{eq:22 cross} into \eqref{eq:23}, together with straightforward calculations, we can obtain \eqref{ss2}. \eqref{ss1} can be derived in a similar manner.
\end{proof}
\section{Vanishing orders for an edge-corner ${\mathcal E} ( \widetilde{ \Pi}_1, \widetilde{ \Pi}_2, \boldsymbol {l} )$ with $\boldsymbol{\eta}_j\in \mathcal{A}( \boldsymbol {l} )$}\label{sec:5}
In this section, we consider the case that ${\mathcal E} ( \widetilde{ \Pi}_1, \widetilde{ \Pi}_2, \boldsymbol {l} )$ is an edge-corner with both $\boldsymbol{\eta}_1$ and $\boldsymbol{\eta}_2$ belonging to the class $\mathcal{A}( \boldsymbol {l} )$. We shall derive the vanishing order of the solution $\mathbf{E}$ to \eqref{eq:eig} at the origin $\mathbf{0}\in \boldsymbol {l} $. The major idea is to make use of the radial wave expansion \eqref{mix pi21} of $\mathbf{E}$ in $B_{\rho_0 }( {\mathbf 0}) $, and to investigate the relationships between $a_n^{\pm 1}$, $a_n^0$ and $b_n^{\pm 1}$, $b_n^0$. Henceforth, according to Definition~\ref{def:class1}, we assume that $\boldsymbol{ \eta}_j$, $j=1,2,$ are given by the following absolutely convergent series at $\mathbf 0\in \boldsymbol {l} $:
\begin{subequations}
\begin{align}
\boldsymbol{\eta}_1&=\eta_{1}+\sum_{j=1}^\infty \eta_{1,j}(\theta) r^j, \label{eq:eta1 ex} \\
\boldsymbol{\eta}_2&=\eta_{2}+\sum_{j=1}^\infty \eta_{2,j}(\theta) r^j, \label{eq:eta2 ex}
\end{align}
\end{subequations}
where $\eta_{\ell}\in\mathbb{C}\backslash\{0\}$, $\eta_{\ell,j}(\theta)\in C[-\pi, \pi]$ and $r\in [-h,h]$, $\ell=1,2$. Next, based on the above setting, we derive several critical lemmas.
\begin{lemma}\label{lem:imp pi12}
Let $\mathbf{E}$ be a solution to \eqref{eq:eig}, whose radial wave expansion in $B_{\rho_0}(\mathbf{0}) $ is given by \eqref{mix pi21}. Consider a generalized impedance edge-corner ${\mathcal E}(\widetilde{ \Pi}_1, \widetilde{ \Pi}_2, \boldsymbol {l} ) \Subset \Omega$ with $\angle(\Pi_{1},\Pi_2)=\phi_0=\alpha \pi$, where $\alpha\in(0,2)$ and $\alpha \neq 1$. Suppose that the generalized impedance parameters $ \boldsymbol{ \eta}_j$ on $\widetilde{ \Pi}_j $, $j=1, 2$, are given by \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} respectively. It holds that
\begin{subequations}
\begin{align}
0&=\frac{4\mathbf{i}kc_1^1\sin^2\phi_0}{6\sqrt{2}}(a_1^1+a_1^{-1})-\frac{4k c_1^1\sin\phi_0\cos\phi_0}{6\sqrt{2}}(a_1^1-a_1^{-1})-\frac{(\eta_{2}\cos\phi_0+\eta_1)\sqrt{2}c_1^0}{3} b_1^0 ,\label{eq:lem51 a1} \\
0&=-\frac{4\mathbf{i}k c_1^1\sin\phi_0\cos\phi_0}{6\sqrt{2}}
(a_1^1+a_1^{-1})-\frac{4k c_1^1\sin^2\phi_0}{6\sqrt{2}}(a_1^1-a_1^{-1})-\frac{\eta_{2}\sqrt{2} c_1^0\sin\phi_0}{3}b_1^0,\label{eq:lem51 a2}\\
0&=-\frac{4c_1^1(-\eta_{1}+\eta_{2}\cos\phi_0)}{6\sqrt{2}}(b_1^1+b_1^{-1}) +\frac{4\eta_{2} c_1^1\sin\phi_0\mathbf{i}}{6\sqrt{2}}(b_1^1-b_1^{-1}). \label{eq:beta 3rd}
\end{align}
\end{subequations}
Assume that there exists $n\in \mathbb N \backslash\{1\}$ such that
\begin{equation}\label{eq:lem41 cond}
a_l^0=b_l^0=a_l^{\pm 1}=b_l^{\pm 1}=0, \quad l =1,\ldots, n-1.
\end{equation}
Then we have
\begin{subequations}
\begin{align}
&\frac{\eta_{1}\sqrt{n(n+1)}c_n^0}{2n+1}b_n^0=\frac{\mathbf{i}kn(n+1)^2c_n^1\sin^2\phi_0}{2(2n+1)\sqrt{n(n+1)}}(a_n^1+a_n^{-1}) \label{eq:52a}
\\
&\quad -\frac{kn(n+1)^2c_n^1\sin\phi_0\cos\phi_0}{2(2n+1)\sqrt{n(n+1)}}(a_n^1-a_n^{-1})-\frac{\eta_{2}\sqrt{n(n+1)}c_n^0\cos\phi_0}{2n+1} b_n^0 , \notag \\
&\frac{kn(n+1)^2c_n^1}{2(2n+1)\sqrt{n(n+1)}}(a_n^1-a_n^{-1})=-\frac{\mathbf{i}kn(n+1)^2c_n^1\sin\phi_0\cos\phi_0}{2(2n+1)\sqrt{n(n+1)}}
(a_n^1+a_n^{-1}) \label{eq:52b} \\
&\quad +\frac{kn(n+1)^2c_n^1\cos^2\phi_0}{2(2n+1)\sqrt{n(n+1)}}(a_n^1-a_n^{-1})-\frac{\eta_2\sqrt{n(n+1)} c_n^0\sin\phi_0}{2n+1}b_n^0, \notag \\
&-\frac{\eta_{1} n(n+1)^2c_n^1}{2(2n+1)\sqrt{n(n+1)}}(b_n^1+b_n^{-1})=\frac{\eta_2n(n+1)^2c_n^1\cos\phi_0}{2(2n+1)\sqrt{n(n+1)}}(b_n^1+b_n^{-1})\label{eq:52c} \\
&\quad -\frac{n(n+1)^2\eta_2\sin\phi_0\mathbf{i}}{2(2n+1)\sqrt{n(n+1)}}(b_n^1-b_n^{-1}). \notag
\end{align}
\end{subequations}
\end{lemma}
\begin{proof} We shall first derive \eqref{eq:52a}, \eqref{eq:52b} and \eqref{eq:52c}. \eqref{eq:lem51 a1}, \eqref{eq:lem51 a2} and \eqref{eq:beta 3rd} can be obtained in a similar way and we shall sketch the corresponding derivations at the end of the proof.
We first note that
\begin{equation}\label{eq:41}
(\nu_2\wedge\mathbf{E})\wedge\nu_2=-\nu_2\wedge(\nu_2\wedge\mathbf{E})=-\big((\nu_2\cdot\mathbf{E})\,\nu_2-(\nu_2\cdot\nu_2)\,\mathbf{E}\big)=\mathbf{E}-(\nu_2\cdot\mathbf{E})\,\nu_2.
\end{equation}
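The vector identity \eqref{eq:41} is elementary, but it can be confirmed symbolically for a generic unit normal; the following is a minimal check (not part of the proof), with the unit vector parametrized by spherical angles:

```python
# Symbolic confirmation of (eq:41): (nu ^ E) ^ nu = E - (nu . E) nu
# for a generic unit vector nu and a generic vector E.
import sympy as sp

a, b = sp.symbols('a b', real=True)
E = sp.Matrix(sp.symbols('E1 E2 E3', real=True))
nu = sp.Matrix([sp.sin(a)*sp.cos(b), sp.sin(a)*sp.sin(b), sp.cos(a)])  # |nu| = 1

lhs = nu.cross(E).cross(nu)
rhs = E - nu.dot(E)*nu
assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)
```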
Hence, we have from
\begin{equation}
\nu_2 \wedge (\nabla\wedge\mathbf{E}|_{\widetilde \Pi_2})+\boldsymbol{ \eta}_2(\nu_2 \wedge\mathbf{E}|_{\widetilde \Pi_2})\wedge\nu_2=\mathbf 0,
\end{equation}
that
\begin{equation}\label{eq:43}
\nu_2\wedge(\nabla\wedge \mathbf{E})|_{\widetilde \Pi_2} +\boldsymbol \eta_2\big(\mathbf{E}|_{\widetilde \Pi_2}-(\nu_2\cdot\mathbf{E}|_{\widetilde \Pi_2})\cdot\nu_2\big)=\mathbf 0.
\end{equation}
Taking the cross product with $\nu_2$ from the left on both sides of \eqref{eq:43}, by using the fact that
$$\nu_2\wedge\big(\nu_2\wedge(\nabla\wedge \mathbf{E})|_{\widetilde \Pi_2}\big)=\big(\nu_2\cdot(\nabla\wedge \mathbf{E})|_{\widetilde \Pi_2}\big)\nu_2-(\nu_2\cdot\nu_2)(\nabla\wedge \mathbf{E}) |_{\widetilde \Pi_2},
$$
we can obtain that
\begin{equation}\label{eq:44}
\big(\nu_2\cdot(\nabla\wedge \mathbf{E})|_{ \widetilde \Pi_2}\big)\nu_2+\boldsymbol \eta_2(\nu_2\wedge\mathbf{E}|_{\widetilde \Pi_2})=\nabla\wedge \mathbf{E} |_{\widetilde \Pi_2}.
\end{equation}
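The rearrangement leading from \eqref{eq:43} to \eqref{eq:44} can likewise be verified symbolically; a minimal check (not part of the proof), with $\mathbf{C}$ standing in for $\nabla\wedge\mathbf{E}$ and a generic unit vector $\nu$:

```python
# Crossing (eq:43) with nu from the left: check that
# nu ^ ( nu ^ C + eta*(E - (nu.E) nu) ) = (nu.C) nu - C + eta * (nu ^ E),
# so that (eq:43) = 0 is equivalent to (eq:44).
import sympy as sp

a, b, eta = sp.symbols('a b eta')
nu = sp.Matrix([sp.sin(a)*sp.cos(b), sp.sin(a)*sp.sin(b), sp.cos(a)])  # unit vector
C = sp.Matrix(sp.symbols('C1 C2 C3'))  # plays the role of curl E
E = sp.Matrix(sp.symbols('E1 E2 E3'))

lhs = nu.cross(nu.cross(C) + eta*(E - nu.dot(E)*nu))
rhs = nu.dot(C)*nu - C + eta*nu.cross(E)
assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)
```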
Similarly, since the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_1$ is imposed on $\widetilde {\Pi}_1$, using the above argument, we can deduce that
\begin{equation}\label{eq:45}
\big(\nu_1\cdot(\nabla\wedge \mathbf{E})|_{\widetilde \Pi_1}\big)\nu_1+\boldsymbol \eta_1(\nu_1\wedge\mathbf{E}|_{\widetilde \Pi_1})=\nabla\wedge \mathbf{E} |_{\widetilde \Pi_1}.
\end{equation}
Since $ \boldsymbol {l} \subset \widetilde \Pi_1 \cap \widetilde \Pi_2$, combining \eqref{eq:44} with \eqref{eq:45} yields
\begin{equation}\label{y1}
\big(\nu_1\cdot(\nabla\wedge \mathbf{E}|_{ \boldsymbol {l} })\big)\nu_1+\boldsymbol \eta_1(\nu_1\wedge\mathbf{E}|_{ \boldsymbol {l} })=\big(\nu_2\cdot(\nabla\wedge \mathbf{E}|_{ \boldsymbol {l} })\big)\nu_2+\boldsymbol \eta_2(\nu_2 \wedge\mathbf{E}|_{ \boldsymbol {l} }).
\end{equation}
Due to \eqref{eq:lem41 cond}, using \eqref{mix pi2} and \eqref{eq:curl E}, by virtue of \eqref{uu}, we deduce that
\begin{align}
\nabla\wedge\mathbf{E}|_{\widetilde \Pi_2}=&\sum_{l=n}^{\infty}\sum_{m=-l }^{l}\frac{\mathbf{i}k}{\sqrt{l(l+1)}}\Bigg\{a_l^m\cdot{l(l+1)}p_l(kr)c_l^mP_l^{|m|}\cdot\hat{\boldsymbol{r}}|_{\widetilde \Pi_2 }\notag \\
&+\bigg(b_l^mj_l(kr)c_l^m\frac{\sgn(m)}{2}\big[P_{l-1}^{|m|+1}(\cos\theta)+(l+|m|-1)(l+|m|)P_{l-1}^{|m|-1}(\cos\theta)\big]\notag \\
&+a_l^mq_l(kr)c_l^m\frac{1}{2}\big[(l+|m|)(l-|m|+1)P_l^{|m|-1}(\cos\theta)-P_l^{|m|+1}(\cos\theta)\big]\bigg)\hat{\boldsymbol\theta}|_{\widetilde \Pi_2 } \notag \\
&+\bigg(-b_l^mj_l(kr)\mathbf{i}c_l^m\frac{1}{2}\big[(l+|m|)(l-|m|+1)P_l^{|m|-1}(\cos\theta)-P_l^{|m|+1}(\cos\theta)\big] \notag \\
&-a_l^m\cdot
q_l(kr)\mathbf{i} c_l^m\frac{\sgn(m)}{2}\big[P_{l-1}^{|m|+1}(\cos\theta)\notag \\
&+(l+|m|-1)(l+|m|)P_{l-1}^{|m|-1}(\cos\theta)\big]\bigg)\hat{\boldsymbol\phi}|_{\widetilde \Pi_2 }\Bigg\}, \label{eq:nu2 curl E}
\end{align}
and
\begin{align}\label{eq:312 nu2}
\nu_2\wedge\mathbf{E}|_{\widetilde \Pi_2}&=\sum_{l=n}^{\infty}\sum_{m=-l }^{l}\Bigg\{\bigg\{-b_l^m\cdot{\sqrt{l(l+1)}}p_l(kr)\cdot c_l^mP_l^{|m|}\bigg\}\cdot\nu_2 \wedge\hat{\boldsymbol{r}}|_{\widetilde \Pi_2 }\notag \\
&+ \bigg\{a_l^m\cdot j_l\big(kr\big)\frac{1}{\sqrt{l(l+1)}} c_l^m\frac{\sgn(m)}{2} \big[P_{l-1}^{|m|+1}(\cos\theta)+(l+|m|-1)(l+|m|)\notag \\
&\times P_{l-1}^{|m|-1}(\cos\theta)\big]-b_l^m\cdot
q_l(kr)\cdot\frac{1}{\sqrt{l(l+1)}}c_l^m\frac{1}{2}\big[(l+|m|)(l-|m|+1)\notag \\
&\times P_l^{|m|-1}(\cos\theta)-P_l^{|m|+1}(\cos\theta)\big]\bigg\}\cdot\nu_2 \wedge \hat{\boldsymbol\theta}|_{\widetilde \Pi_2 } +\bigg\{-a_l^m\cdot j_l(kr)\frac{\mathbf{i}}{\sqrt{l(l+1)}}\frac{c_l^m }{2}\notag \\
&\times \big[(l+|m|)(l-|m|+1) P_l^{|m|-1}(\cos\theta)-P_l^{|m|+1}(\cos\theta)\big]+b_l^m
q_l(kr)\frac{\mathbf{i}}{\sqrt{l(l+1)}}\notag \\
&\times c_l^m\frac{\sgn(m)}{2}\big[P_{l-1}^{|m|+1}(\cos\theta)+(l+|m|-1)(l+|m|)P_{l-1}^{|m|-1}(\cos\theta)\big]
\bigg\}\cdot \nu_2 \wedge\hat{\boldsymbol\phi}|_{\widetilde \Pi_2 } \Bigg\},
\end{align}
where
\begin{equation}\notag
\sgn(m)=\begin{cases} 1, & m>0,\\ 0, & m=0,\\ -1, & m<0.\end{cases}
\end{equation}
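The rewriting of $\partial_\theta Y_l^m$ and $\frac{m}{\sin\theta}Y_l^m$ in the expansions above rests on two classical associated-Legendre identities (those invoked via \eqref{uu}). As a sanity check (not part of the proof), they can be verified symbolically; the convention $P_l^m(x)=(1-x^2)^{m/2}\frac{d^m}{dx^m}P_l(x)$ without the Condon--Shortley phase is an assumption made here:

```python
# Check of the associated-Legendre identities (with x = cos(theta)):
#   d/d(theta) P_l^m(cos th) = ((l+m)(l-m+1) P_l^{m-1} - P_l^{m+1}) / 2,
#   (m/sin th) P_l^m(cos th) = (P_{l-1}^{m+1} + (l+m-1)(l+m) P_{l-1}^{m-1}) / 2.
import sympy as sp

x = sp.symbols('x', real=True)

def P(l, m):
    # associated Legendre function, no Condon-Shortley phase (assumed convention)
    return (1 - x**2)**sp.Rational(m, 2) * sp.diff(sp.legendre(l, x), x, m)

for l in range(2, 5):
    for m in range(1, l):
        # d/d(theta) corresponds to -sqrt(1-x^2) d/dx at x = cos(theta)
        lhs1 = -sp.sqrt(1 - x**2)*sp.diff(P(l, m), x)
        rhs1 = sp.Rational(1, 2)*((l + m)*(l - m + 1)*P(l, m - 1) - P(l, m + 1))
        # sin(theta) corresponds to sqrt(1-x^2)
        lhs2 = m*P(l, m)/sp.sqrt(1 - x**2)
        rhs2 = sp.Rational(1, 2)*(P(l - 1, m + 1) + (l + m - 1)*(l + m)*P(l - 1, m - 1))
        assert sp.simplify(lhs1 - rhs1) == 0
        assert sp.simplify(lhs2 - rhs2) == 0
```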
Recall that if $ \mathbf {x} \in \boldsymbol {l} $, one has
\begin{equation}\label{eq:l}
\theta=\phi=0, \quad 0\leq r\leq h,
\end{equation}
where $r$, $\theta$ and $\phi$ are the spherical coordinates of $ \mathbf {x} \in \boldsymbol {l} $ defined in \eqref{eq:x sph}. It is straightforward to calculate that
\begin{equation}\label{eq:48}
\begin{split}
\nu_2\wedge\boldsymbol{\hat{r}}|_{\theta=\phi=0}&=\begin{bmatrix} \cos\phi_0 \\ \sin\phi_0 \\ 0\end{bmatrix},\, \nu_2\wedge\boldsymbol{\hat{\theta}}|_{\theta=\phi=0}=-\begin{bmatrix} 0 \\ 0 \\ \cos\phi_0\end{bmatrix},\, \nu_2\wedge\boldsymbol{\hat{\phi}}|_{\theta=\phi=0}=-\begin{bmatrix} 0 \\0 \\\sin\phi_0 \end{bmatrix}, \\
\nu_2\cdot\boldsymbol{\hat{r}}|_{\theta=\phi=0}&=0,\quad \nu_2\cdot\boldsymbol{\hat{\theta}}|_{\theta=\phi=0}=-\sin\phi_0,\quad \nu_2\cdot\boldsymbol{\hat{\phi}}|_{\theta=\phi=0}=\cos\phi_0,
\end{split}
\end{equation}
where $\boldsymbol{\hat{r}}$, $\boldsymbol{\hat{\theta }}$ and $\boldsymbol{\hat{\phi }}$ are defined in \eqref{eq:x sph}.
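As with \eqref{eq:22 cross}, the evaluations in \eqref{eq:48} can be confirmed directly; a minimal check (not part of the proof), again assuming the concrete form $\nu_2=(-\sin\phi_0,\cos\phi_0,0)^\top$ of the normal from \eqref{l1}:

```python
# Direct verification of the cross and dot products in (eq:48) at
# theta = phi = 0, where r_hat = e_3, theta_hat = e_1, phi_hat = e_2.
import sympy as sp

phi0 = sp.symbols('phi_0', real=True)
s, c = sp.sin(phi0), sp.cos(phi0)

nu2 = sp.Matrix([-s, c, 0])     # assumed form of nu_2 from (l1)
r_hat = sp.Matrix([0, 0, 1])    # \hat r at theta = phi = 0
th_hat = sp.Matrix([1, 0, 0])   # \hat theta at theta = phi = 0
ph_hat = sp.Matrix([0, 1, 0])   # \hat phi at theta = phi = 0

assert nu2.cross(r_hat) == sp.Matrix([c, s, 0])
assert nu2.cross(th_hat) == -sp.Matrix([0, 0, c])
assert nu2.cross(ph_hat) == -sp.Matrix([0, 0, s])
assert (nu2.dot(r_hat), nu2.dot(th_hat), nu2.dot(ph_hat)) == (0, -s, c)
```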
Evaluating \eqref{eq:nu2 curl E} and \eqref{eq:312 nu2} at $ \boldsymbol {l} $, by virtue of \eqref{eq:plm0} and \eqref{eq:48}, we can derive that
\begin{equation}
\label{eq:nu2 curl E416}
\begin{split}
\nabla\wedge\mathbf{E}|_{ \boldsymbol {l} }&=\sum_{l=n}^{+\infty}\frac{\mathbf{i}k}{\sqrt{l(l+1)}}\Bigg\{ a_l^0l(l+1)p_l(kr)c_l^0\cdot\hat{\boldsymbol{r}}|_{\theta=\phi=0} \\
&+\frac{(l+1)l}{2}\bigg((b_l^1-b_l^{-1})\cdot j_l(kr)\cdot c_l^1+(a_l^1+a_l^{-1})q_l(kr)c_l^1
\bigg)\cdot\boldsymbol{\hat{\theta}}|_{\theta=\phi=0} \\
&+\mathbf{i}\frac{(l+1)l}{2}\bigg(-(b_l^1+b_l^{-1})\cdot j_l(kr)c_l^1-(a_l^1-a_l^{-1})q_l(kr)c_l^1\bigg)\cdot\boldsymbol{\hat{\phi}} |_{\theta=\phi=0} \Bigg\},\\
\nu_2\wedge\mathbf{E}|_{ \boldsymbol {l} }&=\sum_{l=n}^{+\infty}\frac{1}{\sqrt{l(l+1)}}\Bigg\{\Bigg(-b_l^0l(l+1)p_l(kr)c_l^0\cdot\nu_2\wedge\boldsymbol{\hat{r}}|_{\theta=\phi=0}\\
&-\frac{(l+1)l}{2}\bigg(-(a_l^1-a_l^{-1}) j_l(kr)c_l^1+(b_l^1+b_l^{-1})q_l(kr)c_l^1\bigg)\cdot\nu_2\wedge\boldsymbol{\hat{\theta}}|_{\theta=\phi=0}\\
&-\mathbf{i}\frac{(l+1)l}{2}\bigg((a_l^1+a_l^{-1})j_l(kr)c_l^1-(b_l^1-b_l^{-1})q_l(kr)c_l^1\bigg)
\cdot\nu_2\wedge\boldsymbol{\hat{ \phi }}|_{\theta=\phi=0}\Bigg)\Bigg\}.
\end{split}
\end{equation}
Therefore, from \eqref{eq:nu2 curl E416} we obtain that
\begin{align}
&\nu_2^\top (\nabla\wedge\mathbf{E}|_{ \boldsymbol {l} })
\nu_2+\boldsymbol{ \eta}_2(\nu_2\wedge\mathbf{E}|_{ \boldsymbol {l} })\notag\\
=&\sum_{l=n}^{+\infty}\frac{1}{\sqrt{l(l+1)}}\Bigg\{\mathbf{i}k\Bigg[ -\sin\phi_0\frac{ c_l^1(l+1)l}{2}\bigg((b_l^1-b_l^{-1}) j_l(kr) +(a_l^1+a_l^{-1})q_l(kr)
\bigg)\notag\\
&-\cos\phi_0\frac{\mathbf{i}(l+1)l}{2} \bigg((b_l^1+b_l^{-1}) j_l(kr)c_l^1 +(a_l^1-a_l^{-1})q_l(kr)c_l^1\bigg)\Bigg] \notag \\
&\quad\times \begin{bmatrix} -\sin\phi_0\\ \cos\phi_0 \\ 0\end{bmatrix} +\boldsymbol{ \eta}_2\bigg[ -b_l^0l(l+1)p_l(kr)c_l^0 \begin{bmatrix}\cos\phi_0\\ \sin\phi_0\\0\end{bmatrix}\notag\\
&-\frac{(l+1)l}{2} \bigg(-(a_l^1-a_l^{-1}) j_l(kr)c_l^1 +(b_l^1+b_l^{-1}) q_l(kr)c_l^1\bigg) \begin{bmatrix} 0\\0\\-\cos\phi_0\end{bmatrix} \notag\\
&-\frac{\mathbf{i}(l+1)l}{2}\bigg((a_l^1+a_l^{-1}) j_l(kr)c_l^1 -(b_l^1-b_l^{-1})q_l(kr)c_l^1\bigg)
\begin{bmatrix}0\\0\\-\sin\phi_0\end{bmatrix}\bigg] \Bigg\}. \label{eq:411 x31}
\end{align}
Using a similar argument for deriving \eqref{eq:411 x31}, we have
\begin{equation}\label{eq:412 x3}
\begin{split}
& \nu_1^\top (\nabla\wedge \mathbf{E}|_{ \boldsymbol {l} }) \nu_1+\boldsymbol{\eta}_1(\nu_1\wedge\mathbf{E}|_{ \boldsymbol {l} })\\
=&-\sum_{l=n}^{+\infty}\Bigg\{ \frac{kl(l+1)}{2\sqrt{l(l+1)}}\bigg[(b_l^1+b_l^{-1})\cdot j_l(kr)\cdot c_l^1 +(a_l^1-a_l^{-1}) q_l(kr) c_l^1\bigg]\begin{bmatrix}0\\-1\\0\end{bmatrix}\\
&+\boldsymbol{ \eta}_1\Bigg(-b_l^0\sqrt{l(l+1)}p_l(kr) c_l^0 \begin{bmatrix}-1\\0\\0\end{bmatrix} +\frac{l(l+1)}{2\sqrt{l(l+1)}}\bigg((a_l^1-a_l^{-1}) j_l(kr)c_l^1 \\
&-(b_l^1+b_l^{-1}) q_l(kr) c_l^1 \bigg) \begin{bmatrix}0\\0\\1\end{bmatrix}\Bigg)\Bigg\}.
\end{split}
\end{equation}
Note that $\boldsymbol{\eta}_\ell$, $\ell=1,2$, have the expansions \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} respectively, where the coefficients of $r^0$ in \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} are the non-zero numbers $\eta_1$ and $\eta_2$. From Remark \ref{i2}, it is easy to see that the lowest order of \eqref{eq:411 x31} and \eqref{eq:412 x3} with respect to the power of $r$ is $n-1$, which is contributed by $p_{n}\big(kr\big)$ and $q_{n}\big(kr\big)$ in \eqref{eq:411 x31} and \eqref{eq:412 x3}. Substituting \eqref{eq:411 x31} and \eqref{eq:412 x3} into \eqref{y1}, and comparing the coefficients of $r^{n-1}$ on both sides of the first, second and third component of \eqref{y1} respectively, we can derive \eqref{eq:52a}, \eqref{eq:52b} and \eqref{eq:52c}.
We can derive \eqref{eq:lem51 a1}, \eqref{eq:lem51 a2} and \eqref{eq:beta 3rd} by arguments similar to those for \eqref{eq:52a}, \eqref{eq:52b} and \eqref{eq:52c}. Indeed, the Fourier expansions \eqref{eq:411 x31} and \eqref{eq:412 x3} can be rewritten with the starting summation index $n=1$. Hence we can obtain \eqref{eq:lem51 a1}, \eqref{eq:lem51 a2} and \eqref{eq:beta 3rd} by comparing the coefficients of $r^0$ on both sides of \eqref{y1} by virtue of \eqref{eq:411 x31} and \eqref{eq:412 x3}.
The proof is complete.
\end{proof}
\begin{lemma}\label{lem:imp pi2}
Under the same setup as in Lemma~\ref{lem:imp pi12}, it holds that
\begin{align}
0&=-\frac{4 \eta_2 c_1^1\cos^2\phi_0}{6\sqrt{2}}(b_1^1+b_1^{-1})+\frac{4\mathbf{i}\eta_2 c_1^1\sin\phi_0\cos\phi_0}{6\sqrt{2}}(b_1^1-b_1^{-1})+\frac{\mathbf{i}k\sqrt{2}c_1^0\cos\phi_0}{3}a_1^0, \label{eq:517a} \\
0&=\frac{4\eta_2 c_1^1\sin\phi_0\cos\phi_0}{6\sqrt{2}}(b_1^1+b_1^{-1})+ \frac{4\mathbf{i}\eta_2 c_1^1\sin^2\phi_0}{6\sqrt{2}}(b_1^1-b_1^{-1})+\frac{\mathbf{i}k\sqrt{2}c_1^0\sin\phi_0}{3}a_1^0,\label{eq:517b} \\
0&=\frac{4\mathbf{i}k c_1^1\cos\phi_0}{6\sqrt{2}}(a_1^1+a_1^{-1})+\frac{4k c_1^1\sin\phi_0}{6\sqrt{2}}(a_1^1-a_1^{-1})+ \frac{\eta_2 \sqrt{2}c_1^0}{3}b_1^0\label{eq:517c}.
\end{align}
Furthermore, if we assume that there exists $n\in \mathbb N \backslash\{1\}$ such that \eqref{eq:lem41 cond} is fulfilled, then it holds that
\begin{subequations}
\begin{align}
0=&\frac{\mathbf{i}k\sqrt{n(n+1)}c_n^0\cos\phi_0}{2n+1}a_n^0-\frac{\eta_2n(n+1)^2c_n^1\cos^2\phi_0}{2(2n+1)\sqrt{n(n+1)}}(b_n^1+b_n^{-1})\label{eq:z14}
\\
&+\frac{\mathbf{i} \eta_2n(n+1)^2c_n^1\sin\phi_0\cos\phi_0}{2(2n+1)\sqrt{n(n+1)}}(b_n^1-b_n^{-1}), \notag\\
0=&\frac{\mathbf{i}kc_n^0\sqrt{n(n+1)}\sin\phi_0}{2n+1}a_n^0+\frac{\eta_2n(n+1)^2c_n^1\sin\phi_0\cos\phi_0}{2(2n+1)\sqrt{n(n+1)}}(b_n^1+b_n^{-1}) \label{eq:z15}
\\
&+\frac{\mathbf{i}\eta_2n(n+1)^2c_n^1\sin^2\phi_0}{2(2n+1)\sqrt{n(n+1)}}(b_n^1-b_n^{-1}),
\notag \\
0=&\frac{\mathbf{i}kn(n+1)^2c_n^1\cos\phi_0}{2(2n+1)\sqrt{n(n+1)}}(a_n^1+a_n^{-1})+\frac{kn(n+1)^2c_n^1\sin\phi_0}{2(2n+1)\sqrt{n(n+1)}}(a_n^1-a_n^{-1}) \label{eq:z16}
\\
&+ \frac{\eta_2 \sqrt{n(n+1)}c_n^0}{2n+1}b_n^0. \notag
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
We first prove \eqref{eq:z14}, \eqref{eq:z15} and \eqref{eq:z16}. Since the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_2$ is imposed on $\widetilde {\Pi}_2$, we have
\begin{equation}\label{z1}
\nu_2\wedge(\nabla\wedge \mathbf{E}|_{ \boldsymbol {l} })+ \boldsymbol{ \eta}_2(\nu_2\wedge\mathbf{E}|_{ \boldsymbol {l} })\wedge\nu_2=\mathbf{0}.
\end{equation}
Here, we recall \eqref{eq:l}. Under the assumption \eqref{eq:lem41 cond}, using \eqref{uu}, \eqref{eq:plm0} and \eqref{ss2},
we can obtain that
\begin{align}
&\nu_2\wedge(\nabla\wedge\mathbf{E}|_{ \boldsymbol {l} })+\boldsymbol{ \eta}_2(\nu_2\wedge\mathbf{E}|_{ \boldsymbol {l} })\wedge\nu_2=\sum_{l=n}^{+\infty}\Bigg\{\mathbf{i}k\bigg\{a_l^0\sqrt{l(l+1)}p_l(kr)c_l^0\cdot(\cos\phi_0,\sin\phi_0,0)^{\top} \notag \\
&+\frac{l(l+1)}{2\sqrt{l(l+1)}}\bigg((b_l^1-b_l^{-1})j_l(kr)c_l^1+(a_l^1+a_l^{-1}) q_l(kr)c_l^1\bigg) \begin{bmatrix}0\\0\\-\cos\phi_0\end{bmatrix}-\frac{l(l+1)\mathbf{i}}{2\sqrt{l(l+1)}}\bigg((b_l^1+b_l^{-1})\notag \\
&\times j_l(kr)c_l^1+(a_l^1-a_l^{-1}) q_l(kr)c_l^1\bigg)\begin{bmatrix}0\\0\\-\sin\phi_0\end{bmatrix}\bigg\}+\boldsymbol{ \eta}_2\bigg\{-b_l^0\sqrt{l(l+1)}p_l(kr)c_l^0 \begin{bmatrix}0\\0\\1\end{bmatrix}+\frac{l(l+1)}{2\sqrt{l(l+1)}}\notag \\
&\times \bigg((a_l^1-a_l^{-1})j_l(kr)c_l^1+(b_l^1+b_l^{-1})q_l(kr)c_l^1\bigg) \times \begin{bmatrix}\cos^2\phi_0\\-\sin\phi_0\cos\phi_0\\0\end{bmatrix}+\frac{l(l+1)\mathbf{i}}{2\sqrt{l(l+1)}}\bigg((a_l^1+a_l^{-1})\notag \\
&\times j_l(kr)c_l^1-(b_l^1-b_l^{-1})q_l(kr)c_l^1\bigg)\times \begin{bmatrix}\sin\phi_0\cos\phi_0\\ \sin^2\phi_0\\0\end{bmatrix}\bigg\}\Bigg\}. \label{j2}
\end{align}
Note that $\boldsymbol{\eta}_\ell$, $\ell=1,2$, have the expansions \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} respectively, where the coefficients of $r^0$ in \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} are the nonzero numbers $\eta_1$ and $\eta_2$. In view of Remark \ref{i2}, we know that the lowest order of \eqref{j2} with respect to the power of $r$ is $n-1$, which is contributed by $p_{n}\big(kr\big)$ and $q_{n}\big(kr\big)$ in \eqref{j2}. Substituting \eqref{eq:411 x31} and \eqref{j2} into \eqref{z1}, and comparing the coefficients of $r^{n-1}$ on both sides of the first, second and third components of \eqref{z1} respectively, we derive \eqref{eq:z14}, \eqref{eq:z15} and \eqref{eq:z16}.
We can derive \eqref{eq:517a}, \eqref{eq:517b} and \eqref{eq:517c} by following arguments similar to those used in deriving \eqref{eq:z14}, \eqref{eq:z15} and \eqref{eq:z16}. Indeed, the Fourier expansion \eqref{j2} can be rewritten with the starting summation index $n=1$. Hence we can obtain \eqref{eq:517a}, \eqref{eq:517b} and \eqref{eq:517c} by comparing the coefficients of $r^0$ on both sides of \eqref{z1} by virtue of \eqref{j2}.
\end{proof}
\begin{lemma}\label{base2}
Under the same setup as in Lemma~\ref{lem:imp pi12}, one has the following linear relations:
\begin{equation}
\left\{\begin{split}
&\beta^1_{11}(b_1^1+b_1^{-1})+\beta^1_{12}(b_1^1-b_1^{-1})+\beta^1_{13}a_1^0=0,
\\
&\beta^1_{21}(b_1^1+b_1^{-1})+\beta^1_{22}(b_1^1-b_1^{-1})+\beta^1_{23}a_1^0=0,
\\
&\beta^1_{31}(b_1^1+b_1^{-1})+\beta^1_{32}(b_1^1-b_1^{-1})+\beta^1_{33}a_1^0=0,\label{eq:lem51 1}
\end{split}\right.
\end{equation}
where
\begin{equation}\label{eq:matrix entry}
\begin{split}
\beta_{11}^1&=-\frac{4 \eta_2 c_1^1\cos^2\phi_0}{6\sqrt{2}},\quad
\beta_{12}^1=\frac{4\mathbf{i}\eta_2 c_1^1\sin\phi_0\cos\phi_0}{6\sqrt{2}},\quad\beta_{13}^1=\frac{\mathbf{i}k\sqrt{2}c_1^0\cos\phi_0}{3}, \\
\beta_{21}^1&=\frac{4\eta_2 c_1^1\sin\phi_0\cos\phi_0}{6\sqrt{2}},\quad \beta_{22}^1=\frac{4\mathbf{i}\eta_2 c_1^1\sin^2\phi_0}{6\sqrt{2}}, \quad \beta_{23}^1=\frac{\mathbf{i}k\sqrt{2}c_1^0\sin\phi_0}{3},\\
\beta_{31}^1&=-\frac{4c_1^1(-\eta_1+\eta_2\cos\phi_0)}{6\sqrt{2}} , \quad \beta_{32}^1=\frac{4\eta_2 c_1^1\sin\phi_0\mathbf{i}}{6\sqrt{2}},\quad\beta_{33}^1=0.
\end{split}
\end{equation}
If we assume that there exists $n\in \mathbb N \backslash\{1\}$ such that \eqref{eq:lem41 cond} is fulfilled, then one has that
\begin{equation}\label{eq:lem51 2}
\left\{\begin{split}
&\beta^n_{11}(b_n^1+b_n^{-1})+\beta^n_{12}(b_n^1-b_n^{-1})+\beta^n_{13}a_n^0=0,
\\
&\beta^n_{21}(b_n^1+b_n^{-1})+\beta^n_{22}(b_n^1-b_n^{-1})+\beta^n_{23}a_n^0=0,
\\
&\beta^n_{31}(b_n^1+b_n^{-1})+\beta^n_{32}(b_n^1-b_n^{-1})+\beta^n_{33}a_n^0=0,
\end{split}\right.
\end{equation}
where
\begin{align*}
\beta_{11}^n&=-\frac{\eta_2n(n+1)^2c_n^1\cos^2\phi_0}{2(2n+1)\sqrt{n(n+1)}},\\
\beta_{12}^n&=\frac{\mathbf{i}\eta_2n(n+1)^2c_n^1\sin\phi_0\cos\phi_0}{2(2n+1)\sqrt{n(n+1)}},\quad\beta_{13}^n=\frac{\mathbf{i}k\sqrt{n(n+1)}c_n^0\cos\phi_0}{2n+1}, \\
\beta_{21}^n&=\frac{\eta_2n(n+1)^2c_n^1\sin\phi_0\cos\phi_0}{2(2n+1)\sqrt{n(n+1)}},\quad \beta_{22}^n=\frac{\mathbf{i}\eta_2n(n+1)^2c_n^1\sin^2\phi_0}{2(2n+1)\sqrt{n(n+1)}}, \\
\beta_{23}^n&=\frac{\mathbf{i}k\sqrt{n(n+1)}c_n^0\sin\phi_0}{2n+1},\quad {\beta_{31}^n=-\frac{n(n+1)^2c_n^1(-\eta_1+\eta_2\cos\phi_0)}{2(2n+1)\sqrt{n(n+1)}}}, \\
\beta_{32}^n&=\frac{\mathbf{i}\eta_2n(n+1)^2c_n^1\sin\phi_0}{2(2n+1)\sqrt{n(n+1)}},\quad\beta_{33}^n=0.
\end{align*}
Furthermore, if
$ \alpha \neq \frac{1}{2}$ and $ \alpha \neq \frac{3}{2},$
then it holds that
\begin{equation}\label{eq:lem41 zero}
a_n^0=b_n^{\pm1}=0.
\end{equation}
\end{lemma}
\begin{proof}
Combining \eqref{eq:517a}, \eqref{eq:517b} with \eqref{eq:beta 3rd}, we can obtain \eqref{eq:lem51 1}. Similarly, by virtue of \eqref{eq:52c}, \eqref{eq:z14} and \eqref{eq:z15}, we can derive \eqref{eq:lem51 2}. After straightforward calculations, it can be verified that the determinant of the coefficient matrix ${\mathcal B}_n$ of \eqref{eq:lem51 2} is given by
\begin{equation}\label{eq:det AB}
\begin{split}
\left|{\mathcal B}_n
\right| &=-k\eta_2^2\left(\frac{n+1}{2n+1}\right)^3\frac{n\sqrt{n(n+1)}}{2}\big(c_n^1\big)^2c_n^0\sin^2\phi_0 \cos^2\phi_0,
\end{split}
\end{equation}
where $c_n^0,c_n^1$ are nonzero constants defined in \eqref{sphe harmonic}. Recall that $\boldsymbol{\eta}_\ell$, $\ell=1,2$, have the expansions \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} respectively, where the coefficients of $r^0$ in \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} are the nonzero numbers $\eta_1$ and $\eta_2$.
Since $\alpha\in(0,2)$ with $\alpha\neq 1$, $\alpha \neq 1/2$ and $\alpha\neq 3/2$, we have $\sin\phi_0\cos\phi_0\neq 0$; together with $\eta_2 \neq 0$ and $k\in \mathbb{R}_+$, we conclude from \eqref{eq:det AB} that ${\mathcal B}_n$ is nonsingular, which readily implies \eqref{eq:lem41 zero}.
\end{proof}
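The determinant computation behind \eqref{eq:det AB} can be reproduced with a computer algebra system; a minimal symbolic check (not part of the proof), with $c_n^0$, $c_n^1$, $\eta_1$, $\eta_2$ treated as free symbols:

```python
# Symbolic verification of the determinant formula (eq:det AB) for the
# coefficient matrix B_n of the linear system (eq:lem51 2).
import sympy as sp

n, k = sp.symbols('n k', positive=True)
eta1, eta2, phi0, c0, c1 = sp.symbols('eta_1 eta_2 phi_0 c0 c1')
s, c = sp.sin(phi0), sp.cos(phi0)

# common factors of the entries beta_{ij}^n
A = n*(n + 1)**2*c1/(2*(2*n + 1)*sp.sqrt(n*(n + 1)))
B = k*sp.sqrt(n*(n + 1))*c0/(2*n + 1)

Bn = sp.Matrix([
    [-eta2*A*c**2,        sp.I*eta2*A*s*c,  sp.I*B*c],
    [ eta2*A*s*c,         sp.I*eta2*A*s**2, sp.I*B*s],
    [-A*(-eta1 + eta2*c), sp.I*eta2*A*s,    0],
])

claimed = -k*eta2**2*((n + 1)/(2*n + 1))**3*(n*sp.sqrt(n*(n + 1))/2)*c1**2*c0*s**2*c**2
assert sp.simplify(Bn.det() - claimed) == 0
```

Note that the determinant is independent of $\eta_1$, since the cofactor of $\beta_{31}^n$ vanishes.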
The following two important lemmas reveal the recursive relationships for $a_n^{\pm m}$ and $b_n^{\pm m}$, where $m=0,1,\ldots, n$, which will be used to characterize the vanishing order of $\mathbf E$ with respect to the corresponding dihedral angle of the edge-corner ${\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2, \boldsymbol {l} ) \Subset \Omega$ in Theorem \ref{th:two imp}.
\begin{lemma}\label{lem:54}
Let $\mathbf{E}$ be a solution to \eqref{eq:eig}, whose radial wave expansion in $B_{\rho_0}(\mathbf{0}) $ is given by \eqref{mix pi21}. Consider a generalized impedance edge-corner ${\mathcal E}(\widetilde{ \Pi}_1, \widetilde{ \Pi}_2, \boldsymbol {l} ) \Subset \Omega$ with $\angle(\Pi_{1},\Pi_2)=\phi_0=\alpha \pi$, where $\alpha\in(0,2)$ and $\alpha \neq 1$. Suppose that the generalized impedance parameters $ \boldsymbol{ \eta}_j$ on $\widetilde{ \Pi}_j $, $j=1, 2$, are given by \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} respectively. Assume that there exists $n\in \mathbb N \backslash\{1\}$ such that
\begin{equation}\label{eq:lem54 cond}
a_l^{m}=b_l^{m}=0, \quad l =1,\ldots, n-1, \mbox{ and } m\in [l]_0.
\end{equation}
Then we have the following recursive linear equations:
\begin{equation}\label{c2}
\left\{
\begin{split}
0=&\mathbf{i}k\frac{\sqrt{n(n+1)}}{2n+1}c_n^0a_n^0-\frac{\eta_1(n+1) }{2(2n+1) }\frac{c_n^1(n+1)n}{\sqrt{n(n+1)}}(b_n^1+b_n^{-1}),
\\
0=&\mathbf{i}k\frac{\sqrt{n(n+1)}}{2n+1}c_n^1(a_n^1+a_n^{-1})-\frac{\eta_1(n+1)}{2(2n+1)}\frac{c_n^2}{\sqrt{n(n+1)}}(n+2)(n-1)(b_n^2+b_n^{-2})
\\
&+ \frac{\eta_1 (n+1) }{2n+1}\frac{c_n^0}{\sqrt{n(n+1)}}b_n^0,
\\
0=&\mathbf{i}k\frac{\sqrt{n(n+1)}}{2n+1}c_n^{m}(a_n^{m}+a_n^{-m})- \frac{\eta_1(n+1)}{2(2n+1)}\frac{c_n^{m+1}}{\sqrt{n(n+1)}} (n+m+1)(n-m)
\\
&\times (b_n^{m+1}+b_n^{-(m+1)}) +\frac{\eta_1(n+1)}{2(2n+1) }\frac{c_n^{m-1}}{\sqrt{n(n+1)}}(b_n^{m-1}+b_n^{-(m-1)}), \quad m=2,3,\ldots,n-1,
\\
0=&\mathbf{i}k\frac{\sqrt{n(n+1)}}{2n+1}c_n^n(a_n^n+a_n^{-n})+\frac{\eta_1(n+1)}{2(2n+1) }\frac{c_n^{n-1}}{\sqrt{n(n+1)}}(b_n^{n-1}+b_n^{-(n-1)}),
\end{split}
\right.
\end{equation}
and
\begin{equation}\label{m2}
\left\{
\begin{split}
0=&\mathbf{i}k\frac{n+1}{2(2n+1)}\frac{c_n^1}{\sqrt{n(n+1)}}(n+1)n(a_n^1+a_n^{-1})+\eta_1 \frac{c_n^0\sqrt{n(n+1)}}{2n+1}b_n^0,\\
0=&\mathbf{i}k\frac{n+1}{2(2n+1) }\frac{c_n^2}{\sqrt{n(n+1)}}(n+2)(n-1)(a_n^2+a_n^{-2})-\mathbf{i}k \frac{n+1}{2n+1}\frac{c_n^0 }{\sqrt{n(n+1)}}a_n^0\\
&+\eta_1\frac{c_n^1 \sqrt{n(n+1)}}{2n+1}(b_n^1+b_n^{-1}), \\
0=&\mathbf{i}k\frac{n+1}{2(2n+1)}\frac{c_n^m}{\sqrt{n(n+1)}} (n+m)(n-m+1)(a_n^m+a_n^{-m})-\mathbf{i}k\cdot\frac{n+1}{2(2n+1)} \frac{c_n^{m-2}}{\sqrt{n(n+1)}}\\
&\times (a_n^{m-2}+a_n^{-(m-2)})+\eta_1\frac{c_n^{m-1}\sqrt{n(n+1)}}{2n+1}(b_n^{m-1}+b_n^{-(m-1)}),\quad m=3,4,\ldots,n, \\
0=&-\mathbf{i}k\frac{n+1}{2(2n+1)}\frac{c_n^{n-1}}{\sqrt{n(n+1)}} (a_n^{n-1}+a_n^{-(n-1)})+\eta_1\frac{c_n^n \sqrt{n(n+1)}}{2n+1}(b_n^n+b_n^{-n}).
\end{split}
\right.
\end{equation}
\end{lemma}
\begin{proof}
Since the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_1$ is imposed on $\widetilde {\Pi}_1$, substituting \eqref{eq:lem54 cond} into \eqref{ss1}, by virtue of \eqref{uu}, we derive that
\begin{equation}\label{eq:im bd1n}
\begin{split}
\mathbf{0}
=
&\sum_{l=n}^{\infty}\sum_{m=-l }^{l} \frac{1}{\sqrt{l(l+1)}}\Bigg\{\bigg(
\mathbf{i}ka_l^ml(l+1)p_l(kr)c_l^mP_l^m+\boldsymbol{ \eta}_1 a_l^mj_l(kr) c_l^m \frac{\sgn(m)}{2} \big[P_{l-1}^{|m|+1}(\cos\theta)\\
&+(l+|m|-1)(l+|m|)P_{l-1}^{|m|-1}(\cos\theta)\big]-\boldsymbol{ \eta}_1 b_l^mq_l(kr)\frac{c_l^m}{2}\big[(l+|m|)(l-|m|+1)\\
&\times P_l^{|m|-1}(\cos\theta)-P_l^{|m|+1}(\cos\theta)\big]\bigg) \boldsymbol{e_1}(\theta,0) +\bigg(\mathbf{i}kb_l^m j_l(kr) c_l^m \frac{\sgn(m)}{2} \big[P_{l-1}^{|m|+1}(\cos\theta)\\
&+(l+|m|-1)(l+|m|)P_{l-1}^{|m|-1}(\cos\theta)\big]+\mathbf{i}ka_l^m
q_l(kr) \frac{c_l^m }{2}\big[(l+|m|)(l-|m|+1)\\
&\times P_l^{|m|-1}(\cos\theta)-P_l^{|m|+1}(\cos\theta)\big]
+\boldsymbol{ \eta}_1 b_l^ml(l+1)p_l(kr)c_l^mP_l^m \bigg) \boldsymbol{e_2}(\theta,0)\Bigg\},
\end{split}
\end{equation}
where $\boldsymbol{e_1}(\theta, 0)$ and $\boldsymbol{e_2}(\theta,0)$ are defined in \eqref{eq:e1e2}.
Recall that $\boldsymbol{\eta}_\ell$, $\ell=1,2$, have the expansions \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} respectively, where the coefficients of $r^0$ in \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} are the nonzero numbers $\eta_1$ and $\eta_2$. The lowest order term in \eqref{eq:im bd1n} with respect to the power of $r$ is $r^{n-1}$, which is contributed by $p_n(kr)$ and $q_n(kr)$ from Remark \ref{i2}. Furthermore, it is noted that the coefficients of $r^{n-1}$ in $ p_n(kr)$ and $ q_n(kr)$ are $\frac{k^{n-1} }{(2n+1) (2n-1)!!} $ and $\frac{(n+1)k^{n-1} }{(2n+1) (2n-1)!!} $ respectively. Due to the fact that $ \boldsymbol{e}_{1}\left(\theta, \phi\right)$ and $ \boldsymbol{e}_{2}\left(\theta, \phi\right)$ are linearly independent for any $\theta$ and $\phi$, from Lemma \ref{lem:coeff0}, comparing the coefficients of $r^{n-1}$ on both sides of \eqref{eq:im bd1n} associated with $ \boldsymbol{e}_{1}\left(\theta, \phi\right)$ for $\phi=0$, we have
\begin{align}
0= &\mathbf{i}k\sum_{m=-n\atop m\neq 0 }^{n} a_n^m\frac{\sqrt{n(n+1)}}{2n+1}c_n^mP_n^m(\cos\theta)-\eta_1\sum_{m=-n}^{n}b_n^m\cdot\frac{n+1}{2n+1}\frac{1}{\sqrt{n(n+1)}}\frac{c_n^m}{2}
\cdot\bigg((n+m)\notag\\
&\quad \times (n-m+1)P_n^{m-1}(\cos\theta)-P_n^{m+1}(\cos\theta)\bigg) +\eta_1b_n^0 \frac{n+1}{2n+1}\frac{c_n^0}{\sqrt{n(n+1)}}
P_n^{1}(\cos\theta),\label{eq:457pi1}
\end{align}
where for the index $m=0$ in \eqref{eq:im bd1n} we use the property \eqref{eq:pnm neg}, and $c_n^m$, $m=0,1,\ldots,n$, are nonzero constants defined in \eqref{sphe harmonic}. Utilizing the orthogonality condition \eqref{ortho3}, from \eqref{eq:457pi1} we can deduce \eqref{c2}.
Similarly, comparing the coefficients of $r^{n-1}$ on both sides of \eqref{eq:im bd1n} associated with $ \boldsymbol{e}_{2}\left(\theta, \phi\right)$ for $\phi=0$, from Lemma \ref{lem:coeff0}, we obtain the following $n+1$ equations:
\begin{align}
0=&\mathbf{i}k\sum_{m=-n \atop m\neq 0}^{n}a_n^m\cdot\frac{n+1}{2n+1}\frac{1}{\sqrt{n(n+1)}}c_n^m\bigg((n+m)(n-m+1)P_n^{m-1}(\cos\theta)-P_n^{m+1}(\cos\theta)\bigg)\notag\\
&-\mathbf{i}ka_n^0\cdot\frac{n+1}{2n+1}\frac{1}{\sqrt{n(n+1)}}c_n^0P_n^{1}(\cos\theta) +\eta_1\sum_{m=-n}^{n}b_n^m\frac{\sqrt{n(n+1)}}{2n+1}c_n^mP_n^m(\cos\theta) ,\label{eq:460 anbn}
\end{align}
where for the index $m=0$ in \eqref{eq:im bd1n} we use the property \eqref{eq:pnm neg}. By virtue of \eqref{eq:460 anbn}, utilizing the orthogonality condition \eqref{ortho3}, we can obtain \eqref{m2}.
\end{proof}
\begin{lemma}\label{lem:55}
Under the same setup as in Lemma~\ref{lem:54}, and assuming that there exists $n\in \mathbb N \backslash\{1\}$ such that \eqref{eq:lem54 cond} is fulfilled, we have the following recursive linear equations:
\begin{align}
0=&\mathbf{i}k\frac{\sqrt{n(n+1)}}{2n+1}c_n^0a_n^0-\frac{\eta_2(n+1) }{2(2n+1) }\frac{c_n^1(n+1)n}{\sqrt{n(n+1)}}(b_n^1e^{{\mathbf{i}} \alpha\cdot\pi} +b_n^{-1} e^{-{\mathbf{i}} \alpha\cdot\pi}), \notag
\\
0=&\mathbf{i}k\frac{\sqrt{n(n+1)}}{2n+1}c_n^1(a_n^1 e^{{\mathbf{i}} \alpha\cdot\pi}+a_n^{-1}e^{-{\mathbf{i}} \alpha\cdot\pi})-\frac{\eta_2(n+1)}{2(2n+1)}\frac{c_n^2 (n+2)(n-1)}{\sqrt{n(n+1)}}\notag
\\
&\times (b_n^2 e^{{\mathbf{i}} 2\alpha\cdot\pi}+b_n^{-2}e^{-{\mathbf{i}} 2\alpha\cdot\pi})+ \frac{\eta_2 (n+1) }{2n+1}\frac{c_n^0}{\sqrt{n(n+1)}}b_n^0, \notag
\\
0=&\mathbf{i}k\frac{\sqrt{n(n+1)}}{2n+1}c_n^{m}(a_n^{m} e^{{\mathbf{i}} m\alpha\cdot\pi}+a_n^{-m}e^{-{\mathbf{i}} m\alpha\cdot\pi})- \frac{\eta_2(n+1)}{2(2n+1)}\frac{c_n^{m+1}}{\sqrt{n(n+1)}} (n+m+1)\notag
\\
&\times (n-m) (b_n^{m+1}e^{{\mathbf{i}} (m+1)\alpha\cdot\pi}+b_n^{-(m+1)}e^{-{\mathbf{i}} (m+1)\alpha\cdot\pi}) +\frac{\eta_2(n+1)}{2(2n+1) }\frac{c_n^{m-1}}{\sqrt{n(n+1)}} \notag
\\
&\times (b_n^{m-1} e^{{\mathbf{i}} (m-1)\alpha\cdot\pi} +b_n^{-(m-1)}e^{-{\mathbf{i}} (m-1)\alpha\cdot\pi} ), \quad m=2,3,\ldots,n-1, \label{c22}
\\
0=&\mathbf{i}k\frac{\sqrt{n(n+1)}}{2n+1}c_n^n(a_n^n e^{{\mathbf{i}} n\alpha\cdot\pi}+a_n^{-n}e^{-{\mathbf{i}} n \alpha\cdot\pi} )+\frac{\eta_2(n+1)}{2(2n+1) }\frac{c_n^{n-1}}{\sqrt{n(n+1)}} \notag
\\
&\times (b_n^{n-1}e^{{\mathbf{i}} (n-1)\alpha \pi}+b_n^{-(n-1)}e^{-{\mathbf{i}} (n-1)\alpha \pi}),\notag
\end{align}
and
\begin{align}
0=&\mathbf{i}k\frac{n+1}{2(2n+1)}\frac{c_n^1}{\sqrt{n(n+1)}}(n+1)n(a_n^1 e^{{\mathbf{i}}\alpha \cdot \pi}+a_n^{-1} e^{-{\mathbf{i}} \alpha \cdot \pi})+\eta_2 \frac{c_n^0\sqrt{n(n+1)}}{2n+1}b_n^0,\notag\\
0=&\mathbf{i}k\frac{n+1}{2(2n+1) }\frac{c_n^2}{\sqrt{n(n+1)}}(n+2)(n-1)(a_n^2 e^{2 {\mathbf{i}}\alpha \cdot \pi}+a_n^{-2} e^{-2 {\mathbf{i}}\alpha \cdot \pi} )-\mathbf{i}k \frac{n+1}{2n+1}\notag\\
&\times \frac{c_n^0 }{\sqrt{n(n+1)}}a_n^0 +\eta_2\frac{c_n^1 \sqrt{n(n+1)}}{2n+1}(b_n^1 e^{{\mathbf{i}}\alpha \cdot \pi} +b_n^{-1} e^{-{\mathbf{i}}\alpha \cdot \pi}), \label{m22} \\
0=&\mathbf{i}k\frac{n+1}{2(2n+1)}\frac{c_n^m (n+m)(n-m+1)}{\sqrt{n(n+1)}} (a_n^m e^{ {\mathbf{i}} m \alpha \cdot \pi} +a_n^{-m} e^{-{\mathbf{i}} m\alpha \cdot \pi} )-\mathbf{i}k\cdot\frac{n+1}{2(2n+1)}\notag\\
&\times \frac{c_n^{m-2}}{\sqrt{n(n+1)}} (a_n^{m-2} e^{{\mathbf{i}} (m-2)\alpha \cdot \pi} +a_n^{-(m-2)} e^{-{\mathbf{i}} (m-2)\alpha \cdot \pi} )+\eta_2\frac{c_n^{m-1}\sqrt{n(n+1)}}{2n+1}\notag \\
&\times (b_n^{m-1} e^{{\mathbf{i}} (m-1)\alpha \cdot \pi} +b_n^{-(m-1)}e^{-{\mathbf{i}} (m-1)\alpha \cdot \pi} ),\quad m=3,4,\ldots,n,\notag \\
0=&-\mathbf{i}k\frac{n+1}{2(2n+1)}\frac{c_n^{n-1}}{\sqrt{n(n+1)}} (a_n^{n-1} e^{{\mathbf{i}} (n-1)\alpha \cdot \pi} +a_n^{-(n-1)} e^{-{\mathbf{i}} (n-1) \alpha \cdot \pi} )+\eta_2\frac{c_n^n \sqrt{n(n+1)}}{2n+1}\notag\\
&\times (b_n^n e^{{\mathbf{i}} n \alpha \cdot \pi}+b_n^{-n} e^{-{\mathbf{i}} n \alpha \cdot \pi} ).\notag
\end{align}
\end{lemma}
\begin{proof} Since the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_2$ is imposed on $\widetilde {\Pi}_2$, substituting \eqref{eq:lem54 cond} into \eqref{ss2}, by virtue of \eqref{uu}, we derive that
\begin{equation}\label{eq:im bd2n}
\begin{split}
\mathbf{0}=
&\sum_{l=n}^{\infty}\sum_{m=-l}^{l} \frac{e^{{\mathbf{i}} m\alpha\cdot\pi}}{\sqrt{l(l+1)}}\Bigg\{\bigg(
\mathbf{i}ka_l^ml(l+1)p_l(kr)c_l^mP_l^m +\boldsymbol{ \eta}_2 a_l^mj_l(kr) c_l^m \frac{\sgn(m)}{2} \big[P_{l-1}^{|m|+1}(\cos\theta)\\
&+(l+|m|-1)(l+|m|)P_{l-1}^{|m|-1}(\cos\theta)\big]-\boldsymbol{ \eta}_2 b_l^mq_l(kr)\frac{c_l^m}{2}\big[(l+|m|)(l-|m|+1)\\
&\times P_l^{|m|-1}(\cos\theta)-P_l^{|m|+1}(\cos\theta)\big]\bigg)
\boldsymbol{e_1}(\theta,\phi_0) +\bigg(\mathbf{i}kb_l^mj_l(kr) c_l^m \frac{\sgn(m)}{2} \big[P_{l-1}^{|m|+1}(\cos\theta)\\
&+(l+|m|-1)(l+|m|)P_{l-1}^{|m|-1}(\cos\theta)\big]+\mathbf{i}ka_l^m
q_l(kr) \frac{c_l^m}{2}\big[(l+|m|)(l-|m|+1)\\
&\times P_l^{|m|-1}(\cos\theta)-P_l^{|m|+1}(\cos\theta)\big]+\boldsymbol{ \eta}_2 b_l^ml(l+1)p_l(kr)c_l^mP_l^m\bigg)
\boldsymbol{e_2}(\theta,\phi_0)\Bigg\},
\end{split}
\end{equation}
where $\boldsymbol{e_1}(\theta,\phi_0)$ and $\boldsymbol{e_2}(\theta,\phi_0)$ are defined in \eqref{eq:e1e2}.
Recall that $\boldsymbol{\eta}_\ell$, $\ell=1,2$, have the expansions \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} respectively, where the coefficients of $r^0$ in \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} are the nonzero numbers $\eta_1$ and $\eta_2$. Comparing the coefficient of $r^{n-1}$ on both sides of \eqref{eq:im bd2n} associated with $ \boldsymbol{e}_{1}\left(\theta, \phi\right)$ and $ \boldsymbol{e}_{2}\left(\theta, \phi\right)$ for $\phi=\phi_0$ respectively, utilizing the orthogonality condition \eqref{ortho3}, we can derive \eqref{c22} and \eqref{m22}.
\end{proof}
The next theorem characterises the vanishing order of the solution $\mathbf{E}$ to \eqref{eq:eig} at ${\mathbf 0} \in {\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2, \boldsymbol {l} )$ with $\boldsymbol \eta_j\in\mathcal{A}( \boldsymbol {l} )$.
\begin{theorem}\label{th:two imp}
Let $\mathbf{E}$ be a solution to \eqref{eq:eig}, whose radial wave expansion in $B_{\rho_0}(\mathbf{0}) $ is given by \eqref{mix pi21}. Consider a generalized impedance edge-corner ${\mathcal E}(\widetilde{ \Pi}_1, \widetilde{ \Pi}_2, \boldsymbol {l} ) \Subset \Omega$ with $\angle(\Pi_{1},\Pi_2)=\phi_0=\alpha \pi$, where $\alpha\in(0,2)$ and $\alpha \neq 1$. Suppose that the generalized impedance parameters $ \boldsymbol{ \eta}_j$ on $\widetilde{ \Pi}_j $, $j=1, 2$, are given by \eqref{eq:eta1 ex} and \eqref{eq:eta2 ex} respectively. Then $\mathbf{E}$ vanishes at $\mathbf{0}$ to the following order:
\begin{equation}\notag
\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq \left\{\begin{split}
&1,\quad \mbox{if }\alpha \neq \frac{1}{2} \mbox{ and } \alpha \neq \frac{3}{2} , \\
& N \in\mathbb{N}\backslash\{1\} ,\, \mbox{if } \alpha \neq \frac{q }{p}, \, p=1,\ldots, N, \mbox{ and for a fixed } p, q=1,\ldots, 2p-1.
\end{split}\right.
\end{equation}
\end{theorem}
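As an illustration, the exclusion condition in Theorem \ref{th:two imp} can be enumerated explicitly for small $N$ (this enumeration is recorded only for the reader's orientation and is not needed in the proof):
\begin{align*}
N=2:\quad &\alpha\notin\Big\{\tfrac{1}{2},\,1,\,\tfrac{3}{2}\Big\}
\ \Longrightarrow\ \mathrm{Vani}(\mathbf{E};\mathbf{0})\geq 2,\\
N=3:\quad &\alpha\notin\Big\{\tfrac{1}{3},\,\tfrac{1}{2},\,\tfrac{2}{3},\,1,\,\tfrac{4}{3},\,\tfrac{3}{2},\,\tfrac{5}{3}\Big\}
\ \Longrightarrow\ \mathrm{Vani}(\mathbf{E};\mathbf{0})\geq 3.
\end{align*}
In particular, since only the rational angles $\alpha=q/p$ are excluded, every irrational $\alpha\in(0,2)$ fulfils the condition for all $N\in\mathbb N$.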
\begin{proof}
We prove this theorem by induction. Assume that
\begin{equation}\label{eq:alpha 12}
\alpha \neq \frac 1 2 \mbox{ and } \alpha \neq \frac 3 2 .
\end{equation}
Since the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_1$ is imposed on $\widetilde {\Pi}_1$, it yields \eqref{eq:im bd1n} with the summation starting from $n=1$. Comparing the coefficient of $r^0$ associated with $\boldsymbol{e_2}(\theta,0)$ on both sides of \eqref{eq:im bd1n} for $n=1$, from Lemma \ref{base21}, we obtain
\begin{equation}\label{eq:543 a1}
0=\mathbf{i}k\frac{4c_1^1}{6\sqrt{2}}(a_1^1+a_1^{-1})+\eta_1\frac{\sqrt{2}c_1^0}{3}b_1^0.
\end{equation}
Combining \eqref{eq:543 a1} with \eqref{eq:lem51 a1} and \eqref{eq:lem51 a2} from Lemma \ref{base2}, we derive that
\begin{align}\label{eq:543 matrix}
{\mathcal A}_1 \begin{bmatrix}
a_1^1+a_1^{-1} \\ a_1^1-a_1^{-1} \\ b_1^0
\end{bmatrix} &=\mathbf{0},\quad {\mathcal A}_1 =\left(\alpha_{ij}^1\right)_{i,j=1}^3,
\end{align}
where
\begin{equation}\notag
\begin{split}
&\alpha_{11}^1=\frac{4 \mathbf{i}kc_1^1\sin^2\phi_0}{6\sqrt{2}},\quad\alpha_{12}^1=-\frac{4kc_1^1\sin\phi_0\cos\phi_0}{6\sqrt{2}},\quad\alpha_{13}^1=\frac{(-\eta_2\cos\phi_0-\eta_1)\sqrt{2}c_1^0}{3},\\
&\alpha_{21}^1=-\frac{4\mathbf{i}kc_1^1\sin\phi_0\cos\phi_0}{6\sqrt{2}},\quad\alpha_{22}^1=-\frac{4kc_1^1\sin^2\phi_0}{6\sqrt{2}},\quad
\alpha_{23}^1=-\frac{\eta_2\sqrt{2} c_1^0\sin\phi_0}{3},\\
&\alpha_{31}^1=\frac{4\mathbf{i}kc_1^1}{6\sqrt{2}},\quad\alpha_{32}^1=0,\quad \alpha_{33}^1=\frac{\sqrt{2}c_1^0 \eta_1}{3}.
\end{split}
\end{equation}
By direct calculations, it yields that
\[
\left|
{\mathcal A}_1
\right|
=-\mathbf{i}k^2\eta_1\left(\frac{2}{3}\right)^3\frac{\sqrt{2}}{2}\big(c_1^1\big)^2c_1^0\sin^2(\alpha \pi).
\]
Hence, since $\alpha\in(0,2)$ with $\alpha\neq 1$, $\eta_1\neq 0$ and $k\in \mathbb R_+$, we have $|{\mathcal A}_1|\neq 0$, and by virtue of \eqref{eq:543 matrix} it can be derived that $a_1^{\pm 1}=b_1^0=0$. Recall that \eqref{eq:lem51 1} is given by Lemma \ref{base2}. In view of \eqref{eq:alpha 12}, $\alpha \in (0,2)$ with $\alpha\neq 1$, $k\in \mathbb R_+$ and $\eta_2\neq 0$, using the fact that
\[
\left|{\mathcal B}_1
\right| =-k\eta_2^2\left(\frac{2}{3}\right)^3\frac{\sqrt{2}}{2}\big(c_1^1\big)^2c_1^0\sin^2 (\alpha \pi) \cos^2(\alpha \pi )\neq 0,
\]
where ${\mathcal B}_1$ is defined in \eqref{eq:lem51 1}, we can obtain that $b_1^{\pm 1}=a_1^0=0$. Therefore, from Lemma \ref{lem:vani}, we prove that $\mathrm{Vani}(\mathbf{E}; \mathbf{0}) \geq 1$ under conditions \eqref{eq:alpha 12} and $\eta_\ell \neq 0$, $\ell=1,2$.
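For the reader's convenience, we record a sketch of the direct calculation behind $|{\mathcal A}_1|$; the determinants $|{\mathcal A}_2|$ and $|{\mathcal A}_n|$ appearing below are computed in exactly the same way. Abbreviating $A=\frac{4kc_1^1}{6\sqrt{2}}$ and $B=\frac{\sqrt{2}c_1^0}{3}$, and expanding $|{\mathcal A}_1|$ along its third row (recall that $\alpha_{32}^1=0$), we find
\begin{align*}
|{\mathcal A}_1|
&=\alpha_{31}^1\big(\alpha_{12}^1\alpha_{23}^1-\alpha_{13}^1\alpha_{22}^1\big)
+\alpha_{33}^1\big(\alpha_{11}^1\alpha_{22}^1-\alpha_{12}^1\alpha_{21}^1\big)\\
&=\mathbf{i}A\cdot\big(-AB\eta_1\sin^2\phi_0\big)
+\eta_1B\cdot\big(-\mathbf{i}A^2\sin^2\phi_0\big)
=-2\mathbf{i}A^2B\,\eta_1\sin^2\phi_0,
\end{align*}
which coincides with the stated value of $|{\mathcal A}_1|$, since $2A^2B=k^2\left(\frac{2}{3}\right)^3\frac{\sqrt{2}}{2}\big(c_1^1\big)^2c_1^0$ and $\phi_0=\alpha\pi$.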
Suppose that $N=2$; from the assumption of this theorem we know that \eqref{eq:alpha 12} still holds. Since $a_1^{\pm 1}=b_{1}^{\pm 1} =a_1^0=b_1^0=0$, from Lemmas \ref{lem:54} and \ref{lem:55} we know that \eqref{c2}, \eqref{m2}, \eqref{c22} and \eqref{m22} for $n=2$ hold. Therefore we have
\begin{equation}\label{12equations1}
\left\{
\begin{split}
0=&\frac{\sqrt{6}c_2^0\mathbf{i}k}{5}a_2^0-\frac{18c_2^1\eta_1 }{10\sqrt{6} }(b_2^1+b_2^{-1}),
\\
0=&\frac{\sqrt{6}c_2^1\mathbf{i}k}{5}(a_2^1+a_2^{-1})-\frac{12c_2^2\eta_1}{10\sqrt{6}}(b_2^2+b_2^{-2})+ \frac{3c_2^0\eta_1 }{5\sqrt{6}}b_2^0,
\\
0=&\mathbf{i}k\frac{\sqrt{6}}{5}c_2^2(a_2^2+a_2^{-2})+\frac{3c_2^{1}\eta_1}{10\sqrt{6} }(b_2^{1}+b_2^{-1}),
\end{split}
\right.
\end{equation}
and
\begin{equation}\label{12equations2}
\left\{
\begin{split}
0=&\frac{18c_2^1\mathbf{i}k}{10\sqrt{6}}(a_2^1+a_2^{-1})+ \frac{\sqrt{6}c_2^0\eta_1}{5}b_2^0,\\
0=&\frac{12c_2^2\mathbf{i}k}{10\sqrt{6} }(a_2^2+a_2^{-2})- \frac{3c_2^0\mathbf{i}k}{5\sqrt{6}}a_2^0+\frac{\sqrt{6}c_2^1\eta_1}{5}(b_2^1+b_2^{-1}), \\
0=&-\frac{3c_2^{1}\mathbf{i}k}{10\sqrt{6}}(a_2^{1}+a_2^{-1})+\frac{\sqrt{6}c_2^2\eta_1}{5}(b_2^2+b_2^{-2}).
\end{split}
\right.
\end{equation}
Furthermore, it holds that
\begin{equation} \label{12equations3}
\left\{
\begin{split}
0=&\frac{\sqrt{6}c_2^0\mathbf{i}k}{5}a_2^0-\frac{18c_2^1\eta_2 }{10\sqrt{6} }(b_2^1e^{{\mathbf{i}} \alpha\cdot\pi} +b_2^{-1} e^{-{\mathbf{i}} \alpha\cdot\pi}),
\\
0=&\frac{\sqrt{6}c_2^1\mathbf{i}k}{5}(a_2^1 e^{{\mathbf{i}} \alpha\cdot\pi}+a_2^{-1}e^{-{\mathbf{i}} \alpha\cdot\pi})-\frac{12c_2^2\eta_2}{10\sqrt{6}} (b_2^2 e^{{\mathbf{i}} 2\alpha\cdot\pi}+b_2^{-2}e^{-{\mathbf{i}} 2\alpha\cdot\pi})+ \frac{3c_2^0\eta_2 }{5\sqrt{6}}b_2^0,
\\
0=&\frac{\sqrt{6}c_2^2\mathbf{i}k}{5}(a_2^2 e^{{\mathbf{i}} 2\alpha\cdot\pi}+a_2^{-2}e^{-{\mathbf{i}} 2 \alpha\cdot\pi} )+\frac{3c_2^{1}\eta_2}{10\sqrt{6} } (b_2^{1}e^{{\mathbf{i}} \alpha \pi}+b_2^{-1}e^{-{\mathbf{i}} \alpha \pi}),
\end{split}
\right.
\end{equation}
\begin{equation} \label{12equations4}
\left\{
\begin{split}
0=&\frac{18c_2^1\mathbf{i}k}{10\sqrt{6}}(a_2^1 e^{{\mathbf{i}}\alpha \cdot \pi}+a_2^{-1} e^{-{\mathbf{i}} \alpha \cdot \pi})+ \frac{\sqrt{6}c_2^0\eta_2}{5}b_2^0,\\
0=&\frac{12c_2^2\mathbf{i}k}{10\sqrt{6} }(a_2^2 e^{2 {\mathbf{i}}\alpha \cdot \pi}+a_2^{-2} e^{-2 {\mathbf{i}}\alpha \cdot \pi} )- \frac{3c_2^0\mathbf{i}k}{5\sqrt{6}}a_2^0 +\frac{ \sqrt{6}c_2^1\eta_2}{5}(b_2^1 e^{{\mathbf{i}}\alpha \cdot \pi} +b_2^{-1} e^{-{\mathbf{i}}\alpha \cdot \pi}), \\
0=&-\frac{3c_2^{1}\mathbf{i}k}{10\sqrt{6}} (a_2^{1} e^{{\mathbf{i}} \alpha \cdot \pi} +a_2^{-1} e^{-{\mathbf{i}} \alpha \cdot \pi} )+\frac{\sqrt{6}c_2^2\eta_2 }{5}(b_2^2 e^{{\mathbf{i}} 2 \alpha \cdot \pi}+b_2^{-2} e^{-{\mathbf{i}} 2 \alpha \cdot \pi} ).
\end{split}
\right.
\end{equation}
From Lemma \ref{lem:imp pi12}, \eqref{eq:52a} and \eqref{eq:52b} for $n=2$ can be written as
\begin{align}\label{eq:545 n=2}
\begin{split}
0&=\frac{18c_2^1\sin^2\phi_0\mathbf{i}k}{10\sqrt{6}}(a_2^1+a_2^{-1})-\frac{18c_2^1\sin\phi_0\cos\phi_0k}{10\sqrt{6}}(a_2^1-a_2^{-1})+\frac{\sqrt{6}c_2^0(-\eta_2\cos\phi_0-\eta_1)}{5} b_2^0, \\
0&=-\frac{18c_2^1\sin\phi_0\cos\phi_0\mathbf{i}k}{10\sqrt{6}}
(a_2^1+a_2^{-1})-\frac{18c_2^1\sin^2\phi_0k}{10\sqrt{6}}(a_2^1-a_2^{-1})-\frac{\eta_2\sqrt{6} c_2^0\sin\phi_0}{5}b_2^0.
\end{split}
\end{align}
Combining the first equation of \eqref{12equations2} with \eqref{eq:545 n=2}, we have
\begin{equation}\label{eq:546 linear}
{\mathcal A}_2 \begin{bmatrix}
a_2^1+a_2^{-1} \\ a_2^1-a_2^{-1} \\ b_2^0
\end{bmatrix} =\mathbf{0},\quad {\mathcal A}_2 =\left(\alpha_{ij}^2\right)_{i,j=1}^3,
\end{equation}
where
\begin{equation}\notag
\begin{split}
&\alpha_{11}^2=\frac{18c_2^1\sin^2\phi_0\mathbf{i}k}{10\sqrt{6}},\quad\alpha_{12}^2=-\frac{18c_2^1\sin\phi_0\cos\phi_0k}{10\sqrt{6}},\quad\alpha_{13}^2=\frac{\sqrt{6}c_2^0(-\eta_2\cos\phi_0-\eta_1)}{5}\\
&\alpha_{21}^2=-\frac{18c_2^1\sin\phi_0\cos\phi_0\mathbf{i}k}{10\sqrt{6}},\quad\alpha_{22}^2=-\frac{18c_2^1\sin^2\phi_0k}{10\sqrt{6}},\quad
\alpha_{23}^2=-\frac{\eta_2\sqrt{6} c_2^0\sin\phi_0}{5}\\
&\alpha_{31}^2=\frac{18c_2^1\mathbf{i}k}{10\sqrt{6}},\quad\alpha_{32}^2=0,\quad \alpha_{33}^2=\frac{\sqrt{6}c_2^0\eta_1}{5}.
\end{split}
\end{equation}
It can be computed directly that
\begin{equation}
\begin{split}
\left|{\mathcal A}_2
\right| &=-\mathbf{i}k^2\eta_1\left(\frac{3}{5}\right)^3\frac{2\sqrt{6}}{2}\big(c_2^1\big)^2c_2^0\sin^2(\alpha \pi).
\end{split}
\end{equation}
Since $\alpha \in (0,2)$, $\alpha\neq 1$, $\eta_1\neq 0$ and $k\in \mathbb R_+$, in view of \eqref{eq:546 linear} we obtain $a_2^{\pm1}=b_2^0=0$.
Recall that $\mathbf{E}$ has the radial wave expansion \eqref{mix pi21} at $\mathbf{0}$. Since $\eta_1 \neq 0$, under the assumption \eqref{eq:alpha 12}, by virtue of \eqref{eq:lem41 zero} in Lemma \ref{base2}, we have
\begin{equation}\label{eq:a1b10 new}
a_1^0=b_1^{\pm1}=0.
\end{equation}
We now proceed to the general inductive step. Suppose that if $\alpha \neq \frac{q }{p}$, where $p=1,\ldots, n-1$ and, for a fixed $p$, $ q=1,2,\ldots, 2p-1$, then
$$
\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq n-1.
$$
From Lemma \ref{lem:vani}, we know that
\begin{equation}\label{eq:alm=0}
a_l^m=b_l^m=0,\quad m\in [l]_0, \quad l=1,2,\ldots,n.
\end{equation}
Therefore, from Lemmas \ref{lem:54} and \ref{lem:55}, we know that \eqref{c2}, \eqref{m2}, \eqref{c22} and \eqref{m22} hold. In the following, under the assumption
\begin{equation}\label{eq:455 cond}
\eta_\ell \neq 0 \mbox{ for } \ell=1, 2 \mbox{ and } \alpha \neq \frac{q }{p}, \, p=1,\ldots, n,
\end{equation}
where for a fixed $p$, $ q=1,2,\ldots, 2p-1$, we shall show that
\begin{equation}\label{eq:4567 anm}
a_n^m=b_n^m=0,\quad \forall m \in [n]_0
\end{equation}
by utilizing the recursive equations of \eqref{c2}, \eqref{m2}, \eqref{c22} and \eqref{m22}. Indeed, combining the first equation of \eqref{m2} with \eqref{eq:lem51 a1} and \eqref{eq:lem51 a2}, we have
\begin{equation}\label{eq:556 matrix}
{\mathcal A}_n \begin{bmatrix}
a_n^1+a_n^{-1} \\ a_n^1-a_n^{-1} \\ b_n^0
\end{bmatrix} =\mathbf{0},\quad {\mathcal A}_n =\left(\alpha_{ij}^n\right)_{i,j=1}^3,
\end{equation}
where
\begin{equation}\notag
\begin{split}
&\alpha^n_{11}=\frac{\mathbf{i}kn(n+1)^2c_n^1\sin^2\phi_0}{2(2n+1)\sqrt{n(n+1)}},\quad \alpha^n_{12}=-\frac{kn(n+1)^2c_n^1\sin\phi_0\cos\phi_0}{2(2n+1)\sqrt{n(n+1)}},\\
&\alpha^n_{13}=\frac{(-\eta_2\cos\phi_0-\eta_1)\sqrt{n(n+1)}c_n^0}{2n+1},\quad \alpha^n_{21}=-\frac{\mathbf{i}kn(n+1)^2c_n^1\sin\phi_0\cos\phi_0}{2(2n+1)\sqrt{n(n+1)}},\\
&\alpha^n_{22}=-\frac{kn(n+1)^2c_n^1\sin^2\phi_0}{2(2n+1)\sqrt{n(n+1)}},\quad
\alpha^n_{23}=-\frac{\eta_2\sqrt{n(n+1)} c_n^0\sin\phi_0}{2n+1}, \\
&\alpha^n_{31}=\mathbf{i}k\frac{n(n+1)^2c_n^1}{2(2n+1)\sqrt{n(n+1)}},\quad\alpha^n_{32}=0,\quad \alpha^n_{33}=\eta_1\frac{\sqrt{n(n+1)}c_n^0}{2n+1}.
\end{split}
\end{equation}
It can be derived that
\begin{equation}\label{eq:557 deter}
\left|{\mathcal A}_n
\right| =-\mathbf{i}k^2\eta_1\left(\frac{n+1}{2n+1}\right)^3\frac{n\sqrt{n(n+1)}}{2}\big(c_n^1\big)^2c_n^0\sin^2(\alpha \pi) .
\end{equation}
Since $\alpha\in (0,2)$, $\alpha \neq 1$, $\alpha \neq \frac{1}{2}$, $\alpha \neq \frac{3}{2}$ and $\eta_\ell \neq 0$, $\ell=1,2$, by virtue of \eqref{eq:556 matrix}, \eqref{eq:557 deter} and Lemma \ref{base2}, we have
\begin{equation}\label{eq:462 an1bn1+}
a_n^{\pm1}=a_n^{0}=b_n^{\pm1}=b_n^{0}=0.
\end{equation}
Substituting \eqref{eq:462 an1bn1+} into the second equation of \eqref{c2}, \eqref{m2}, \eqref{c22} and \eqref{m22}, since $k\in \mathbb R_+$, $\eta_\ell \neq 0$ for $\ell=1,2$ and $c_n^2\neq 0$, we obtain that
\begin{equation}\notag
\left\{\begin{array}{l}
a_n^2 +a_n^{-2}=0,\\
a_n^2 e^{2 {\mathbf{i}}\alpha \cdot \pi}+a_n^{-2} e^{-2 {\mathbf{i}}\alpha \cdot \pi}=0,
\end{array} \right. \quad
\left\{\begin{array}{l}
b_n^2 +b_n^{-2}=0,\\
b_n^2 e^{2 {\mathbf{i}}\alpha \cdot \pi}+b_n^{-2} e^{-2 {\mathbf{i}}\alpha \cdot \pi}=0,
\end{array} \right.
\end{equation}
which implies that $a_n^{\pm 2} = b_n^{\pm 2}=0$, since
\[
\left|\begin{array}{cc}
1 & 1 \\
e^{{\mathbf{i}} 2\alpha\cdot\pi} & e^{-{\mathbf{i}} 2\alpha\cdot\pi}
\end{array}\right|
=-2{\mathbf{i}}\sin (2\alpha\pi)\neq0,
\]
under \eqref{eq:455 cond}. Substituting
$$
a_n^{\pm 1} = b_n^{\pm 1}=a_n^{\pm 2} = b_n^{\pm 2}=0
$$
into the third equation of \eqref{c2}, \eqref{m2}, \eqref{c22} and \eqref{m22}, since $k\in \mathbb R_+$, $\eta_\ell \neq 0$ for $\ell=1,2$ and $c_n^3 \neq 0$, we get that
\begin{equation}\notag
\left\{\begin{array}{l}
a_n^3 +a_n^{-3}=0,\\
a_n^3 e^{3 {\mathbf{i}}\alpha \cdot \pi}+a_n^{-3} e^{-3 {\mathbf{i}}\alpha \cdot \pi}=0,
\end{array} \right. \quad
\left\{\begin{array}{l}
b_n^3 +b_n^{-3}=0,\\
b_n^3 e^{3 {\mathbf{i}}\alpha \cdot \pi}+b_n^{-3} e^{-3 {\mathbf{i}}\alpha \cdot \pi}=0,
\end{array} \right.
\end{equation}
which implies that $a_n^{\pm 3} = b_n^{\pm 3}=0$, since
\[
\left|\begin{array}{cc}
1 & 1 \\
e^{{\mathbf{i}} 3\alpha\cdot\pi} & e^{-{\mathbf{i}} 3\alpha\cdot\pi}
\end{array}\right|
=-2{\mathbf{i}}\sin (3\alpha\pi)\neq0,
\]
under \eqref{eq:455 cond}. Repeating the above procedure step by step and utilizing the recursive property of \eqref{c2}, \eqref{m2}, \eqref{c22} and \eqref{m22}, we can prove \eqref{eq:4567 anm}. More precisely, assume that we have proved that
\begin{equation}\notag
a_n^{\pm m}=b_n^{\pm m}=0 \mbox{ for }m=0,1,\ldots, \ell-1.
\end{equation}
Substituting $a_n^{\pm (\ell-1)}=b_n^{\pm (\ell-2)}=0$ into the $\ell$-th equation of \eqref{c2} and \eqref{c22}, we can obtain that
\begin{equation}\label{eq:560}
\left\{\begin{array}{l}
b_n^\ell +b_n^{-\ell}=0,\\
b_n^\ell e^{{\mathbf{i}} \ell \alpha \cdot \pi}+b_n^{-\ell} e^{-{\mathbf{i}}\ell \alpha \cdot \pi}=0,
\end{array} \right.
\end{equation}
under the assumption $\eta_1\neq 0$ and $\eta_2\neq 0$. Substituting $a_n^{\pm (\ell-2)}=b_n^{\pm (\ell-1)}=0$ into the $\ell$-th equation of \eqref{m2} and \eqref{m22}, we can get that
\begin{equation}\label{eq:561}
\left\{\begin{array}{l}
a_n^\ell +a_n^{-\ell }=0,\\
a_n^\ell e^{ {\mathbf{i}} \ell \alpha \cdot \pi}+a_n^{-\ell} e^{- {\mathbf{i}} \ell \alpha \cdot \pi}=0.
\end{array} \right.
\end{equation}
Hence from \eqref{eq:560} and \eqref{eq:561}, under \eqref{eq:455 cond} it follows that $a_n^{\pm \ell}=b_n^{\pm \ell}=0 $.
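We record, for completeness, that all the $2\times 2$ determinants invoked in the above recursive argument are of one and the same form: for every $m=1,2,\ldots,n$,
\[
\left|\begin{array}{cc}
1 & 1 \\
e^{{\mathbf{i}} m\alpha\cdot\pi} & e^{-{\mathbf{i}} m\alpha\cdot\pi}
\end{array}\right|
=-2{\mathbf{i}}\sin (m\alpha\pi)\neq0,
\]
since $\sin(m\alpha\pi)=0$ together with $\alpha\in(0,2)$ would force $m\alpha\in\{1,\ldots,2m-1\}$, i.e.\ $\alpha=\frac{q}{m}$ for some $q\in\{1,\ldots,2m-1\}$, which is excluded by \eqref{eq:455 cond}.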
Therefore, due to \eqref{eq:4567 anm}, by virtue of Lemma \ref{lem:vani},
we prove that
$$
\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq n,
$$
which completes the proof of this theorem.
\end{proof}
\section{Vanishing orders for an edge-corner ${\mathcal E} ( \widetilde{ \Pi}_1, \widetilde{ \Pi}_2, \boldsymbol {l} )$ with $\boldsymbol{\eta}_j\in \mathcal{A}( \boldsymbol {l} )$ or $\boldsymbol{\eta}_j=0, \infty$}\label{sec:6}
In this section, we investigate the vanishing order of the solution $\mathbf{E}$ to \eqref{eq:eig} at an edge-corner point $\mathbf 0 \in {\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2, \boldsymbol {l} )$, where the generalized impedance edge-corner ${\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2, \boldsymbol {l} ) \Subset \Omega$ satisfies $\angle(\Pi_{1},\Pi_2)=\phi_0=\alpha \pi$ with $\alpha\in(0,2)$ and $\alpha \neq 1$. The generalized impedance conditions \eqref{eq:imp2} on $\widetilde \Pi_1$ and $\widetilde \Pi_2$ are now different. Namely, the generalized impedance parameters associated with the edge-corner ${\mathcal E}(\Pi_1, \Pi_2, \boldsymbol {l} )$ in Theorem \ref{thm:pec pmc} are $\boldsymbol{\eta}_1\equiv \infty$ and $\boldsymbol{\eta}_2\equiv 0$, where we utilize Lemma \ref{lem:31} to reveal the vanishing order of $\mathbf{E}$ at $\mathbf 0$. On the other hand, in Theorems \ref{thm:imp pec} and \ref{thm:imp pmc}, we consider the case that $\boldsymbol{\eta}_2\in {\mathcal A}( \boldsymbol {l} )$ has the expansion \eqref{eq:eta2 ex}, whereas the generalized impedance parameter $\boldsymbol{\eta}_1$ is either $\infty$ or $0$. There, the reflection principle \cite{Liu3,Liu09} is adopted to transform the corresponding edge-corner into a generalized impedance edge-corner formed by two plane cells on which the generalized impedance condition \eqref{eq:imp2} holds with both parameters belonging to $\mathcal{A}( \boldsymbol {l} )$.
\begin{lemma}\label{lem:31}
Let $\mathbf{E}$ be a solution to \eqref{eq:eig}, whose radial wave expansion in $B_{\rho_0}(\mathbf{0}) $ is given by \eqref{mix pi21}. Consider a generalized impedance edge-corner ${\mathcal E}(\widetilde{ \Pi}_1, \widetilde{ \Pi}_2, \boldsymbol {l} ) \Subset \Omega$ with $\angle(\Pi_{1},\Pi_2)=\phi_0=\alpha \pi$, where $\alpha\in(0,2)$ and $\alpha \neq 1$. Suppose that the generalized impedance parameter $ \boldsymbol{ \eta}_1$ on $\widetilde{ \Pi}_1 $ satisfies (ii) in \eqref{eq:imp1} and that $ \boldsymbol{ \eta}_2$ on $\widetilde{ \Pi}_2 $ satisfies (i) in \eqref{eq:imp1}. It holds that
\begin{subequations}
\begin{align}
& b_1^1+b_1^{-1}=0, \quad b_1^0=0, \label{eq:b1b10 pec}\\
&a_1^1-a_1^{-1}=0,\label{eq:d1 pec}\\
& b_{2}^m+b_{2}^{-m}=0,\quad m=1, 2,\mbox{ and } b_{2}^0=0, \label{eq:39 b1b10 pec}
\end{align}
\end{subequations}
and
\begin{subequations}
\begin{align}
&
a_1^1e^{{\mathbf{i}}\alpha\cdot\pi}+a_1^{-1}e^{-{\mathbf{i}}\alpha\cdot\pi}=0, \quad a_1^0=0, \label{eq:pmc 49} \\
&
b_1^1e^{{\mathbf{i}}\alpha\cdot\pi}-b_1^{-1}e^{-{\mathbf{i}}\alpha\cdot\pi}=0, \label{d1}\\
& a_{2}^me^{{\mathbf{i}} m\alpha\cdot\pi}+a_{2}^{-m}e^{-{\mathbf{i}} m\alpha\cdot\pi}=0,\quad m=1, 2,\mbox{ and } a_{2}^0=0. \label{eq:41c}
\end{align}
\end{subequations}
Assume that there exists an $n\in \mathbb N$ such that
\begin{equation}\label{eq:lem31 cond}
a_l^m=b_l^m=0,\quad l=1,2,\ldots,n-1,\quad m\in [l]_0,
\end{equation}
then we have
\begin{subequations}
\begin{align}
&b_n^m+b_n^{-m}=0, \quad m=1,\ldots, n, \mbox{ and } b_n^0=0,\label{eq:lem31 33} \\
& a_n^me^{{\mathbf{i}} m\alpha\cdot\pi}+a_n^{-m}e^{-{\mathbf{i}} m\alpha\cdot\pi}=0, \quad m=1,\ldots, n, \mbox{ and } a_n^0=0, \label{eq:lem41 412}
\end{align}
\end{subequations}
and
\begin{subequations}
\begin{align}
& \sum_{m=1}^{n}mc_n^m(a_n^m-a_n^{-m})\frac{P_n^m(\cos\theta)}{\sin \theta}+\sum_{m=-(n+1)}^{n+1}\frac{c_{n+1}^m(n+2)}{2n+3}b_{n+1}^m\frac{\partial Y_{n+1}^m}{\partial\theta}\Big|_{\phi=0}=0, \label{eq:lem31 34} \\
&\sum_{m=1}^{n}mc_n^m(b_n^me^{{\mathbf{i}} m\alpha\cdot\pi}-b_n^{-m}e^{-{\mathbf{i}} m\alpha\cdot\pi})\frac{P_n^m(\cos\theta)}{\sin \theta}+\sum_{m=-(n+1)}^{n+1}\frac{c_{n+1}^m(n+2)}{2n+3}a_{n+1}^m\frac{\partial Y_{n+1}^m}{\partial\theta}\Big|_{\phi=\phi_0}=0, \label{eq:lem41 43}
\end{align}
\end{subequations}
where $c_n^m$ are nonzero constants defined in \eqref{sphe harmonic} for $m=0,1,\ldots,n$. Furthermore, we have
\begin{subequations}
\begin{align}
&b_{n+1}^m+b_{n+1}^{-m}=0,\quad m=1,\ldots, n+1,\mbox{ and } b_{n+1}^0=0, \label{eq:bm1} \\
& a_{n+1}^me^{{\mathbf{i}} m\alpha\cdot\pi}+a_{n+1}^{-m}e^{-{\mathbf{i}} m\alpha\cdot\pi}=0,\quad m=1,\ldots, n+1,\mbox{ and } a_{n+1}^0=0,\label{eq:b7}
\end{align}
\end{subequations}
where $c_{n+1}^m$ are nonzero constants defined in \eqref{sphe harmonic} for $m=0,1,\ldots,n+1$.
\end{lemma}
\begin{proof}
We first derive \eqref{eq:b1b10 pec}, \eqref{eq:d1 pec} and \eqref{eq:39 b1b10 pec}. Since the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_1$ is imposed on $\widetilde {\Pi}_1$ where $\boldsymbol{\eta}_1\equiv \infty $, using \eqref{mix pi2}, we have
\begin{equation}\label{eq:38 pec}
\begin{split}
& \mathbf{0}= \sum_{l=1}^{\infty}\sum_{m=-l}^{l}-\frac{1}{\sqrt{l(l+1)}}\Bigg\{b_l^ml(l+1)p_l(kr)Y_l^m\Big|_{\phi=0}
\boldsymbol{e_1}(\theta,0)\\
& \hspace{2cm} +\bigg(
a_l^mj_l\big(kr\big)\frac{m}{\sin\theta}Y_l^m\Big|_{\phi=0}+b_l^m\cdot q_l(kr)
\frac{\partial{Y_l^m}}{\partial\theta}\Big|_{\phi=0}
\bigg)\boldsymbol{e_2}(\theta,0)\Bigg\},
\end{split}
\end{equation}
where $ \boldsymbol{e}_{1}\left(\theta, 0\right)$ and $ \boldsymbol{e}_{2}\left(\theta, 0\right)$ are defined in \eqref{eq:e1e2}. From Remark \ref{i2}, the lowest order term of \eqref{eq:38 pec} with respect to the power of $r$ is $r^0$, which is contributed by $p_1(kr)$ and $q_1(kr)$. Similarly, the second lowest order term of \eqref{eq:38 pec} with respect to the power of $r$ is $r^1$, which is contributed by $j_1(kr)$, $p_2(kr)$ and $q_2(kr)$. Comparing the coefficients of $r^0$ and $r^1$ associated with $ \boldsymbol{e}_{1}\left(\theta, 0 \right)$ on both sides of \eqref{eq:38 pec}, utilizing the orthogonality property \eqref{ortho3}, we can obtain \eqref{eq:b1b10 pec} and \eqref{eq:39 b1b10 pec}.
Substituting \eqref{eq:39 b1b10 pec} into \eqref{eq:38 pec}, comparing the coefficient of $r^1$ in the resulting equation associated with $ \boldsymbol{e}_{2}\left(\theta, 0\right)$, using Lemma \ref{lem:coeff0}, we deduce that
\begin{equation}\label{ii1}
\begin{split}
&(a_1^1c_1^1-a_1^{-1}c_1^{-1})P_1^1(\cos\theta)=0,
\end{split}
\end{equation}
where $c_1^{\pm 1}$ are nonzero constants defined in \eqref{sphe harmonic}.
In view of \eqref{ii1}, from \eqref{ortho3} and $c_1^1=c_1^{-1}\neq 0$, we obtain \eqref{eq:d1 pec}.
Since the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_2$ is imposed on $\widetilde {\Pi}_2$ where $\boldsymbol{\eta}_2\equiv 0 $, by virtue of \eqref{gg} it yields that
\begin{equation}\label{eq:331 pec a1b1}
\begin{split}
\mathbf{0} ={\mathbf{i}k}&\sum_{l=1}^{\infty}\sum_{m=-l}^{l}\frac{1}{\sqrt{l(l+1)}}\Bigg\{a_l^ml(l+1)p_l(kr)Y_l^m \Big|_{\phi=\phi_0}
\cdot\boldsymbol{e_1}(\theta,\phi_0)\\
&+\bigg( -b_l^mj_l(kr)\cdot\frac{m}{\sin\theta}Y_l^m \Big|_{\phi=\phi_0}+a_l^m
q_l(kr)\cdot\frac{\partial{Y_l^m}}{\partial\theta}\Big|_{\phi=\phi_0}\bigg)
\cdot\boldsymbol{e_2}(\theta,\phi_0)\Bigg\},
\end{split}
\end{equation}
where $ \boldsymbol{e}_{1}\left(\theta, \phi_0\right)$ and $ \boldsymbol{e}_{2}\left(\theta, \phi_0\right)$ are defined in \eqref{eq:e1e2}. From Remark \ref{i2}, the lowest order term of \eqref{eq:331 pec a1b1} with respect to the power of $r$ is $r^0$, which is contributed by $p_1(kr)$ and $q_1(kr)$. Similarly, the second lowest order term of \eqref{eq:331 pec a1b1} with respect to the power of $r$ is $r^1$, which is contributed by $j_1(kr)$, $p_2(kr)$ and $q_2(kr)$. Comparing the coefficients of $r^0$ and $r^1$ associated with $ \boldsymbol{e}_{1}\left(\theta, \phi_0\right)$ on both sides of \eqref{eq:331 pec a1b1}, utilizing the orthogonality property \eqref{ortho3}, we can obtain \eqref{eq:pmc 49} and \eqref{eq:41c}.
Comparing the coefficient of $r^1$ in \eqref{eq:331 pec a1b1} associated with $ \boldsymbol{e}_{2}\left(\theta, \phi_0\right)$, using Lemma \ref{lem:coeff0}, we deduce that
\begin{equation}\label{iii1}
\begin{split}
&(b_1^1c_1^1e^{{\mathbf{i}}\alpha\cdot\pi}-b_1^{-1}c_1^{-1}e^{-{\mathbf{i}}\alpha\cdot\pi})P_1^1(\cos\theta)=0,
\end{split}
\end{equation}
where $c_1^{\pm 1}$ are nonzero constants defined in \eqref{sphe harmonic}.
In view of \eqref{iii1}, from \eqref{ortho3} and $c_1^1=c_1^{-1}\neq 0$, we obtain \eqref{d1}.
Now we are in the position to prove \eqref{eq:lem31 33}, \eqref{eq:lem31 34} and \eqref{eq:bm1} under the assumption \eqref{eq:lem31 cond}. Since the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_1$ is imposed on $\widetilde {\Pi}_1$ where $\boldsymbol{\eta}_1\equiv \infty $, substituting \eqref{eq:lem31 cond} into \eqref{mix pi2} it yields that
\begin{equation}\label{eq:33 pec}
\begin{split}
& \mathbf{0}= \sum_{l=n}^{\infty}\sum_{m=-l}^{l}-\frac{1}{\sqrt{l(l+1)}}\Bigg\{b_l^ml(l+1)p_l(kr)Y_l^m\Big|_{\phi=0}
\boldsymbol{e_1}(\theta,0)\\
& \hspace{2cm} +\bigg(
a_l^mj_l\big(kr\big)\frac{m}{\sin\theta}Y_l^m\Big|_{\phi=0}+b_l^m\cdot q_l(kr)
\frac{\partial{Y_l^m}}{\partial\theta}\Big|_{\phi=0}
\bigg)\boldsymbol{e_2}(\theta,0)\Bigg\},
\end{split}
\end{equation}
where $ \boldsymbol{e}_{1}\left(\theta, 0\right)$ and $ \boldsymbol{e}_{2}\left(\theta, 0\right)$ are defined in \eqref{eq:e1e2}.
The lowest order term in \eqref{eq:33 pec} with respect to the power of $r$ is $r^{n-1}$, which is contributed by $p_n(kr)$ and $q_n(kr)$ from Remark \ref{i2}. Since $ \boldsymbol{e}_{1}\left(\theta, \phi\right)$ and $ \boldsymbol{e}_{2}\left(\theta, \phi\right)$, defined in \eqref{eq:e1e2}, are linearly independent for any $\theta$ and $\phi$, from Lemma \ref{lem:coeff0}, comparing the coefficient of $r^{n-1}$ on both sides of \eqref{eq:33 pec} associated with $ \boldsymbol{e}_{1}\left(\theta, 0\right)$, we can obtain
\begin{equation}\notag
\begin{split}
&\sum_{m=0}^{n}c_n^m(b_n^m+b_n^{-m})P_n^m(\cos\theta)=0.
\end{split}
\end{equation}
Utilizing the orthogonality property \eqref{ortho3}, since $c_n^m\neq 0$ for $m\in [n]_0$, \eqref{eq:lem31 33} holds.
From Remark \ref{i2} we know that the second lowest order term in \eqref{eq:33 pec} with respect to the power of $r$ is $r^{n}$, which is related to $j_n(kr)$, $p_{n+1}(kr)$ and $q_{n+1}(kr)$. Since $ \boldsymbol{e}_{1}\left(\theta, \phi\right)$ and $ \boldsymbol{e}_{2}\left(\theta, \phi\right)$ are linearly independent for any $\theta$ and $\phi$, comparing the coefficient of $r^{n}$ on both sides of \eqref{eq:33 pec} associated with $ \boldsymbol{e}_{1}\left(\theta, 0\right)$, we can obtain
\begin{equation}\notag
\begin{split}
&\sum_{m=0}^{n+1}c_{n+1}^m(b_{n+1}^m+b_{n+1}^{-m})P_{n+1}^m(\cos\theta)=0.
\end{split}
\end{equation}
Using the orthogonality property \eqref{ortho3}, together with the fact that $c_{n+1}^m\neq 0$ for $m\in [n+1]_0$, we see that \eqref{eq:bm1} holds.
Similarly, in view of Remark \ref{i2}, comparing the coefficient of $r^{n}$ on both sides of \eqref{eq:33 pec} associated with $ \boldsymbol{e}_{2}\left(\theta, 0\right)$, we know that \eqref{eq:lem31 34} holds.
We proceed to derive \eqref{eq:lem41 412}, \eqref{eq:lem41 43} and \eqref{eq:b7} under the assumption \eqref{eq:lem31 cond}. Since the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_2$ is imposed on $\widetilde {\Pi}_2$ where $\boldsymbol{\eta}_2 \equiv 0$, substituting \eqref{eq:lem31 cond} into \eqref{gg} it yields that
\begin{equation}\label{eq:331 pec}
\begin{split}
\mathbf{0}={\mathbf{i}k}&\sum_{l=n}^{\infty}\sum_{m=-l}^{l}\frac{1}{\sqrt{l(l+1)}}\Bigg\{a_l^ml(l+1)p_l(kr)Y_l^m \Big|_{\phi=\phi_0}
\cdot\boldsymbol{e_1}(\theta,\phi_0)\\
&+\bigg( -b_l^mj_l(kr)\cdot\frac{m}{\sin\theta}Y_l^m \Big|_{\phi=\phi_0}+a_l^m
q_l(kr)\cdot\frac{\partial{Y_l^m}}{\partial\theta}\Big|_{\phi=\phi_0}\bigg)
\cdot\boldsymbol{e_2}(\theta,\phi_0)\Bigg\},
\end{split}
\end{equation}
where $ \boldsymbol{e}_{1}\left(\theta, \phi_0\right)$ and $ \boldsymbol{e}_{2}\left(\theta, \phi_0\right)$ are defined in \eqref{eq:e1e2}.
The lowest order term in \eqref{eq:331 pec} with respect to the power of $r$ is $r^{n-1}$, which is contributed by $p_n(kr)$ and $q_n(kr)$ from Remark \ref{i2}. Since $ \boldsymbol{e}_{1}\left(\theta, \phi\right)$ and $ \boldsymbol{e}_{2}\left(\theta, \phi\right)$, defined in \eqref{eq:e1e2}, are linearly independent for any $\theta$ and $\phi$, from Lemma \ref{lem:coeff0}, comparing the coefficient of $r^{n-1}$ on both sides of \eqref{eq:331 pec} associated with $ \boldsymbol{e}_{1}\left(\theta, \phi_0\right)$,
we can obtain
\begin{equation}\label{eq:lem31 33 n}
\begin{split}
&\sum_{m=0}^{n}c_n^m(a_n^me^{{\mathbf{i}} m\alpha\cdot\pi}+a_n^{-m}e^{-{\mathbf{i}} m\alpha\cdot\pi})P_n^m(\cos\theta)=0.
\end{split}
\end{equation}
Using the orthogonality property \eqref{ortho3}, together with the fact that $c_n^m\neq 0$ for $m\in [n]_0$, we can obtain \eqref{eq:lem41 412}.
From Remark \ref{i2} we know that the second lowest-order term in \eqref{eq:331 pec} with respect to the power of $r$ is $r^{n}$, which is related to $j_n(kr)$, $p_{n+1}(kr)$ and $q_{n+1}(kr)$. Since $ \boldsymbol{e}_{1}\left(\theta, \phi\right)$ and $ \boldsymbol{e}_{2}\left(\theta, \phi\right)$ are linearly independent for any $\theta$ and $\phi$, comparing the coefficients of $r^{n}$ on both sides of \eqref{eq:331 pec} associated with $ \boldsymbol{e}_{1}\left(\theta, \phi_0\right)$, we obtain
\begin{equation}\notag
\begin{split}
&\sum_{m=0}^{n+1}c_{n+1}^m(a_{n+1}^me^{{\mathbf{i}} m\alpha\cdot\pi}+a_{n+1}^{-m}e^{-{\mathbf{i}} m\alpha\cdot\pi})P_{n+1}^m(\cos\theta)=0.
\end{split}
\end{equation}
Utilizing the orthogonality property \eqref{ortho3}, since $c_{n+1}^m\neq 0$ for $m\in [n+1]_0$, we derive \eqref{eq:b7}.
Similarly, in view of Remark \ref{i2}, comparing the coefficients of $r^{n}$ on both sides of \eqref{eq:331 pec} associated with $ \boldsymbol{e}_{2}\left(\theta, \phi_0\right)$, we know that \eqref{eq:lem41 43} holds.
The proof is complete.
\end{proof}
\begin{theorem}\label{thm:pec pmc}
Under the same setup as in Lemma~\ref{lem:31}, we have that
\begin{align}\notag
&\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq N,\quad \mbox{if } \alpha \neq \frac{q }{2p}, \, p=1,\ldots, N,
\end{align}
where $N\in\mathbb{N}$ and for a fixed $p$, $ q=1,2,\ldots, 4p-1.$
\end{theorem}
\begin{proof}
We prove this theorem by induction. First, assume that
\begin{equation}\label{eq:61 cond}
\alpha \neq \frac{1}{2} \mbox{ and } \alpha \neq \frac{3}{2},
\end{equation}
we shall prove that $\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq 1$. Since the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_1$ is imposed on $\widetilde {\Pi}_1$ where $\boldsymbol{\eta}_1\equiv \infty $, from Lemma \ref{lem:31} we know that \eqref{eq:b1b10 pec} and \eqref{eq:d1 pec} hold. Similarly, since the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_2$ is imposed on $\widetilde {\Pi}_2$ where $\boldsymbol{\eta}_2 \equiv 0$, it follows from Lemma \ref{lem:31} that \eqref{eq:pmc 49} and \eqref{d1} hold.
Combining \eqref{eq:b1b10 pec}, \eqref{eq:d1 pec}, \eqref{eq:pmc 49} and \eqref{d1}, we obtain
\begin{equation}\label{eq:66 eqn}
\left\{\begin{array}{l}
a_1^1 -a_1^{-1}=0,\\
a_1^1 e^{{\mathbf{i}}\alpha \cdot \pi}+a_1^{-1} e^{- {\mathbf{i}}\alpha \cdot \pi}=0,
\end{array} \right.
\left\{\begin{array}{l}
b_1^1 +b_1^{-1}=0,\\
b_1^1 e^{{\mathbf{i}}\alpha \cdot \pi}-b_1^{-1} e^{- {\mathbf{i}}\alpha \cdot \pi}=0.
\end{array} \right.
\end{equation}
Under \eqref{eq:61 cond} we have
$$
\left|\begin{array}{cc}
1 & -1 \\
e^{{\mathbf{i}} \alpha\cdot\pi} & e^{-{\mathbf{i}} \alpha\cdot\pi}
\end{array}\right|
=2\cos (\alpha\cdot\pi) \neq0,
$$
which implies that $a_1^{\pm1}=b_1^{\pm1}=0$ from \eqref{eq:66 eqn}. Since $a_1^0=b_1^0=0$, from Lemma \ref{lem:vani}, we prove $\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq 1$ under the assumption \eqref{eq:61 cond}.
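For completeness, the determinant of the second (the $b$-coefficient) system in \eqref{eq:66 eqn} is also nonvanishing under \eqref{eq:61 cond}:

```latex
\det\begin{pmatrix}
1 & 1 \\
e^{{\mathbf{i}} \alpha\cdot\pi} & -e^{-{\mathbf{i}} \alpha\cdot\pi}
\end{pmatrix}
=-\left(e^{-{\mathbf{i}} \alpha\cdot\pi}+e^{{\mathbf{i}} \alpha\cdot\pi}\right)
=-2\cos (\alpha\cdot\pi) \neq 0,
```

so the trivial solution $b_1^{\pm 1}=0$ is indeed the only one.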
Next, assume that
\begin{equation}\label{eq:67 assump}
\alpha \neq \frac{1}{2},\quad \alpha \neq \frac{1}{4}, \quad \alpha \neq \frac{3}{4},\quad \alpha \neq \frac{5}{4}, \quad \alpha \neq \frac{3}{2} \mbox{ and } \alpha \neq \frac{7}{4},
\end{equation}
which implies that $\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq 1$ by the first step, since \eqref{eq:67 assump} contains \eqref{eq:61 cond}. Hence we have
\begin{equation}\label{eq:67 a1b1=0}
a_1^{\pm1}=b_1^{\pm1}=a_1^0=b_1^0=0
\end{equation}
from Lemma \ref{lem:vani}. Since the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_1$ is imposed on $\widetilde {\Pi}_1$ where $\boldsymbol{\eta}_1\equiv \infty $, from Lemma \ref{lem:31} we have
\begin{equation}\label{eq:68 a2b2=0}
b_2^0=0,\quad b_2^1+b_2^{-1}=0,\quad b_2^2+b_2^{-2}=0
\end{equation}
by \eqref{eq:lem31 33} and
\begin{equation}\label{eq:610 b3}
b_{3}^m+b_{3}^{-m}=0, \quad m=1,2, 3, \mbox{ and } b_{3}^0=0
\end{equation}
by \eqref{eq:bm1}. Substituting \eqref{eq:610 b3} into the first equation of \eqref{eq:lem31 34} yields
\begin{equation}\label{eq:610 a21}
a_2^1 -a_2^{-1}=0,\quad a_2^2 -a_2^{-2}=0
\end{equation}
by noting $c_{3}^{m}=c_{3}^{-m} \neq 0$ for $ m=1,2$, where $c_{3}^{m}$ and $c_{3}^{-m}$ are defined in \eqref{sphe harmonic}.
Similarly, in view of \eqref{eq:67 a1b1=0}, using Lemma \ref{lem:31}, we obtain that
\begin{equation}\label{eq:611 a2 pmc}
a_2^0=0,\quad a_2^1 e^{{\mathbf{i}}\alpha \cdot \pi}+a_2^{-1} e^{-{\mathbf{i}}\alpha \cdot \pi}=0,\quad a_2^2 e^{{\mathbf{i}} 2\alpha \cdot \pi}+a_2^{-2} e^{-{\mathbf{i}} 2\alpha \cdot \pi}=0
\end{equation}
by \eqref{eq:lem41 412} and
\begin{equation}\label{eq:613 a3}
a_{3}^m e^{{\mathbf{i}} m \alpha
\pi }+a_{3}^{-m}e^{-{\mathbf{i}} m \alpha
\pi }=0, \quad m=1,2, 3, \mbox{ and } a_{3}^0=0
\end{equation}
by \eqref{eq:b7}. Substituting \eqref{eq:613 a3} into the second equation of \eqref{eq:lem41 43} yields
\begin{equation}\label{eq:612 a2}
b_2^1 e^{{\mathbf{i}}\alpha \cdot \pi}-b_2^{-1} e^{- {\mathbf{i}}\alpha \cdot \pi}=0,\quad b_2^2 e^{{\mathbf{i}} 2\alpha \cdot \pi}-b_2^{-2} e^{-{\mathbf{i}} 2\alpha \cdot \pi}=0
\end{equation}
by using the fact that $c_{3}^{m}=c_{3}^{-m} \neq 0$ for $ m=1,2$ and the definition of $Y_{3}^m(\theta,\phi)$, where $c_{3}^{m}$ and $c_{3}^{-m}$ are defined in \eqref{sphe harmonic}.
Combining \eqref{eq:68 a2b2=0}, \eqref{eq:610 a21} and \eqref{eq:611 a2 pmc} with \eqref{eq:612 a2}, we obtain that
\begin{equation}\label{eq:613 four eqn}
\begin{split}
& \left\{\begin{array}{l}
a_2^1 -a_2^{-1}=0,\\
a_2^1 e^{{\mathbf{i}}\alpha \cdot \pi}+a_2^{-1} e^{- {\mathbf{i}}\alpha \cdot \pi}=0,
\end{array} \right.
\left\{\begin{array}{l}
b_2^1 +b_2^{-1}=0,\\
b_2^1 e^{{\mathbf{i}}\alpha \cdot \pi}-b_2^{-1} e^{- {\mathbf{i}}\alpha \cdot \pi}=0,
\end{array} \right. \\
& \left\{\begin{array}{l}
a_2^2 -a_2^{-2}=0,\\
a_2^2 e^{{\mathbf{i}} 2\alpha \cdot \pi}+a_2^{-2} e^{- {\mathbf{i}} 2\alpha \cdot \pi}=0,
\end{array} \right.
\left\{\begin{array}{l}
b_2^2 +b_2^{-2}=0,\\
b_2^2 e^{{\mathbf{i}} 2\alpha \cdot \pi}-b_2^{-2} e^{- {\mathbf{i}} 2\alpha \cdot \pi}=0.
\end{array} \right.
\end{split}
\end{equation}
Under the assumption \eqref{eq:67 assump} it is easy to see that
$$
\left|\begin{array}{cc}
1 & -1 \\
e^{{\mathbf{i}} \alpha\cdot\pi} & e^{-{\mathbf{i}} \alpha\cdot\pi}
\end{array}\right|
=2\cos (\alpha\cdot\pi) \neq0,\quad \left|\begin{array}{cc}
1 & -1 \\
e^{2{\mathbf{i}} \alpha\cdot\pi} & e^{-2{\mathbf{i}} \alpha\cdot\pi}
\end{array}\right|
=2\cos (2\alpha\cdot\pi) \neq0
$$
which imply that $a_2^{\pm1}=b_2^{\pm1}=a_2^{\pm 2}=b_2^{\pm 2}=0$ in view of \eqref{eq:613 four eqn}. Due to \eqref{eq:68 a2b2=0} and \eqref{eq:611 a2 pmc}, we have $a_2^0=b_2^0=0$, hence from Lemma \ref{lem:vani} we prove $\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq 2$ under the assumption \eqref{eq:67 assump}.
For the induction step, assume that
\begin{equation}\label{eq:614 assump}
\alpha \neq \frac{2q+1 }{2p}, \, p=1,\ldots, n, \mbox{ for a fixed }p, \ \ q=0,1,\ldots, 2p-1.
\end{equation}
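We remark (a small bookkeeping step) that \eqref{eq:614 assump} lists only the irreducible exclusions and is implied by the exclusion condition in the statement of the theorem, since every value excluded in \eqref{eq:614 assump} already occurs among the values $q'/(2p)$ excluded there:

```latex
\frac{2q+1}{2p}=\frac{q'}{2p},\qquad
q'=2q+1\in\{1,3,\ldots,4p-1\}\subset\{1,2,\ldots,4p-1\},\quad p=1,\ldots,n.
```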
Under the assumption \eqref{eq:614 assump}, the induction hypothesis gives $\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq n-1$, which implies that
\begin{equation}\label{eq:615 an-1=0}
a_{l}^m=b_{l}^m=0 \mbox{ for } l=1,\ldots, n-1 \mbox{ and } m \in [l]_0
\end{equation}
from Lemma \ref{lem:vani}.
Due to \eqref{eq:615 an-1=0} and the fact that the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_1$ is imposed on $\widetilde {\Pi}_1$ where $\boldsymbol{\eta}_1 \equiv \infty$, from Lemma \ref{lem:31}, we have
\begin{equation}\label{eq:616 bn1}
b_n^m+b_n^{-m}=0, \quad m=1,\ldots, n, \mbox{ and } b_n^0=0
\end{equation}
by \eqref{eq:lem31 33} and
\begin{equation}\label{eq:616 bn+1}
b_{n+1}^m+b_{n+1}^{-m}=0, \quad m=1,\ldots, n+1, \mbox{ and } b_{n+1}^0=0
\end{equation}
by \eqref{eq:bm1}. Substituting \eqref{eq:616 bn+1} into the first equation of \eqref{eq:lem31 34} yields
\begin{equation}\label{eq:617 an1}
a_n^m -a_n^{-m}=0,\quad m=1,\ldots, n,
\end{equation}
by noting $c_{n+1}^{m}=c_{n+1}^{-m} \neq 0$ for $ m=1,\ldots, n$, where $c_{n+1}^{m}$ and $c_{n+1}^{-m}$ are defined in \eqref{sphe harmonic}.
Similarly, due to \eqref{eq:615 an-1=0} and the fact that the generalized impedance condition \eqref{eq:imp2} associated with $\boldsymbol{ \eta}_2$ is imposed on $\widetilde {\Pi}_2$ where $\boldsymbol{\eta}_2 \equiv 0$, using Lemma \ref{lem:31}, we obtain that
\begin{equation}\label{eq:618 an1}
a_n^m e^{{\mathbf{i}} m \alpha
\pi }+a_n^{-m} e^{-{\mathbf{i}} m \alpha
\pi }=0, \quad m=1,\ldots, n, \mbox{ and } a_n^0=0
\end{equation}
by \eqref{eq:lem41 412} and
\begin{equation}\label{eq:620 an+1}
a_{n+1}^m e^{{\mathbf{i}} m \alpha
\pi }+a_{n+1}^{-m}e^{-{\mathbf{i}} m \alpha
\pi }=0, \quad m=1,\ldots, n+1, \mbox{ and } a_{n+1}^0=0
\end{equation}
by \eqref{eq:b7}.
Substituting \eqref{eq:620 an+1} into the second equation of \eqref{eq:lem41 43} yields
\begin{equation}\label{eq:621 a2}
b_n^m e^{{\mathbf{i}} m \alpha \cdot \pi}-b_n^{-m} e^{- {\mathbf{i}} m\alpha \cdot \pi}=0,\quad m=1,\ldots, n
\end{equation}
by using the fact that $c_{n+1}^{m}=c_{n+1}^{-m} \neq 0$ for $ m=1,\ldots, n$ and the definition of $Y_{n+1}^m(\theta,\phi)$, where $c_{n+1}^{m}$ and $c_{n+1}^{-m}$ are defined in \eqref{sphe harmonic}.
Combining \eqref{eq:616 bn1}, \eqref{eq:617 an1} and \eqref{eq:618 an1} with \eqref{eq:621 a2}, we obtain that
\begin{equation}\label{eq:622 two eqn}
\left\{\begin{array}{l}
a_n^m -a_n^{-m}=0,\\
a_n^m e^{{\mathbf{i}} m\alpha \cdot \pi}+a_n^{-m} e^{- {\mathbf{i}} m\alpha \cdot \pi}=0,
\end{array} \right.
\left\{\begin{array}{l}
b_n^m +b_n^{-m}=0,\\
b_n^m e^{{\mathbf{i}} m\alpha \cdot \pi}-b_n^{-m} e^{- {\mathbf{i}} m\alpha \cdot \pi}=0,
\end{array} \right. \quad m=1,\ldots, n.
\end{equation}
Under the assumption \eqref{eq:614 assump} it is not difficult to see that
$$
\left|\begin{array}{cc}
1 & -1 \\
e^{{\mathbf{i}} m \alpha\cdot\pi} & e^{-{\mathbf{i}} m\alpha\cdot\pi}
\end{array}\right|
=2\cos (m\alpha\cdot\pi) \neq0,
$$
which implies that $a_n^{\pm m}=b_n^{\pm m}=0$ for $m=1,\ldots,n$ in view of \eqref{eq:622 two eqn}. Due to \eqref{eq:616 bn1} and \eqref{eq:618 an1}, we have $a_n^0=b_n^0=0$; hence from Lemma \ref{lem:vani} we prove $\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq n$ under the assumption \eqref{eq:614 assump}.
The proof is complete.
\end{proof}
In the following two theorems, we consider the generalized impedance edge-corner ${\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2, \boldsymbol {l} )$, where the generalized impedance parameter $ \boldsymbol{ \eta}_2$ on $\widetilde{ \Pi}_2 $ satisfies (iii) in \eqref{eq:imp1} and has the expansion \eqref{eq:eta2 ex}, whereas the generalized impedance parameter $ \boldsymbol{ \eta}_1$ on $\widetilde{ \Pi}_1 $ satisfies either (i) or (ii) in \eqref{eq:imp1}. In the sequel, we shall make use of the reflection principles for the Maxwell equations from \cite{Liu3,Liu09}.
For any two-dimensional plane $\Pi \subset \mathbb R^3 $, let $\nu_\Pi$ and ${\mathcal R}_\Pi$ denote, respectively, the unit normal to $\Pi$ and the reflection with respect to $\Pi$ in $\mathbb R^3$.
\begin{lemma}\cite[Theorems 2.1 and 2.2]{Liu09}\label{lem:reflection}
Consider a generalized impedance edge-corner ${\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2, \boldsymbol {l} ) \Subset \Omega$ with $\angle(\Pi_{1},\Pi_2)=\phi_0=\alpha \pi$, where $\alpha\in(0,1)$. Assume that the generalized impedance parameter $ \boldsymbol{ \eta}_2$ on $\widetilde{ \Pi}_2 $ satisfies (iii) in \eqref{eq:imp1} and has the expansion \eqref{eq:eta2 ex}, while the generalized impedance parameter $ \boldsymbol{ \eta}_1$ on $\widetilde{ \Pi}_1 $ satisfies (ii) in \eqref{eq:imp1} (i.e., $ \boldsymbol{ \eta}_1 \equiv \infty $). Recall that $\Pi_1$ is a plane containing $\widetilde \Pi_1$. Let ${\widetilde {\Pi}_2}'={\mathcal R}_{ \Pi_1}(\widetilde \Pi_2)$. Then
\begin{equation}\label{eq:lem62}
\nu_{\widetilde \Pi_2' } \wedge (\nabla\wedge \mathbf{E})+ \widetilde{ \boldsymbol{ \eta}}_2 (\nu_{\widetilde \Pi_2' }\wedge\mathbf{E})\wedge\nu_{\widetilde \Pi_2' } =\mathbf 0 \mbox{ on } \widetilde \Pi_2',
\end{equation}
where $\nu_{\widetilde \Pi_2' } $ is the unit normal to $\widetilde \Pi_2'$ directed to the interior of ${\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2', \boldsymbol {l} ) $ and $\widetilde{ \boldsymbol{ \eta}}_2 (\mathbf x) =\boldsymbol{ \eta}_2({\mathcal R}_{\Pi_1} (\mathbf x) )$ for $\mathbf{x} \in \widetilde{\Pi}_2' $.
Similarly, consider a generalized impedance edge-corner ${\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2, \boldsymbol {l} ) \Subset \Omega$ with $\angle(\Pi_{1},\Pi_2)=\phi_0=\alpha \pi$, where $\alpha\in(0,1)$. Assume that the generalized impedance parameter $ \boldsymbol{ \eta}_2$ on $\widetilde{ \Pi}_2 $ satisfies (iii) in \eqref{eq:imp1} and has the expansion \eqref{eq:eta2 ex}, while the generalized impedance parameter $ \boldsymbol{ \eta}_1$ on $\widetilde{ \Pi}_1 $ satisfies (i) in \eqref{eq:imp1} (i.e., $ \boldsymbol{ \eta}_1 \equiv 0 $). Recall that $\Pi_1$ is a plane containing $\widetilde \Pi_1$. Let ${\widetilde {\Pi}_2}'={\mathcal R}_{ \Pi_1}(\widetilde \Pi_2)$. Then
\begin{equation} \notag
\nu_{\widetilde \Pi_2' } \wedge (\nabla\wedge \mathbf{E})+ \widetilde{ \boldsymbol{ \eta}}_2 (\nu_{\widetilde \Pi_2' }\wedge\mathbf{E})\wedge\nu_{\widetilde \Pi_2' } =\mathbf 0 \mbox{ on } \widetilde \Pi_2',
\end{equation}
where $\nu_{\widetilde \Pi_2' } $ is the unit normal to $\widetilde \Pi_2'$ directed to the interior of ${\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2', \boldsymbol {l} ) $ and $\widetilde{ \boldsymbol{ \eta}}_2 (\mathbf x) =\boldsymbol{ \eta}_2({\mathcal R}_{\Pi_1} (\mathbf x) )$ for $\mathbf{x} \in \widetilde{\Pi}_2' $.
\end{lemma}
\begin{theorem}\label{thm:imp pec}
Let $\mathbf{E}$ be a solution to \eqref{eq:eig}. Consider a generalized impedance edge-corner ${\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2, \boldsymbol {l} ) \Subset \Omega$ with $\angle(\Pi_{1},\Pi_2)=\phi_0=\alpha \pi$, where $\alpha\in(0,2)$ and $\alpha \neq 1$. Assume that the generalized impedance parameter $ \boldsymbol{ \eta}_2$ on $\widetilde{ \Pi}_2 $ satisfies (iii) in \eqref{eq:imp1} and has the expansion \eqref{eq:eta2 ex}, while the generalized impedance parameter $ \boldsymbol{ \eta}_1$ on $\widetilde{ \Pi}_1 $ satisfies (ii) in \eqref{eq:imp1} (i.e., $ \boldsymbol{ \eta}_1 \equiv \infty $). Then
\begin{align}\label{eq:Th44 cond}
&\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq N,\quad \mbox{if } \alpha \neq \frac{q }{2p}, \, p=1,\ldots, N,
\end{align}
where $N\in\mathbb{N}$ and for a fixed $p$, $ q=1,2,\ldots, 4p-1.$
\end{theorem}
\begin{proof}
Let $\widetilde \Pi_2'={\mathcal R}_{{\Pi}_1}(\widetilde \Pi_2)$, where ${\Pi}_1$ is a plane containing $\widetilde \Pi_1$. With the help of Lemma \ref{lem:reflection}, we know that $\mathbf{E}$ satisfies the generalized impedance boundary condition \eqref{eq:lem62} on $\widetilde \Pi_2'$. For $\mathbf{x} \in \widetilde{\Pi}_2 $, the spherical coordinates of $\mathbf{x}$ are $(r, \theta, \phi_0)$, where $0\leq r \leq h$, $\theta \in [-\pi,\pi]$ and $\phi_0=\alpha \pi $. It is clear that the spherical coordinates of ${\mathcal R}_{\Pi_1} (\mathbf x)$, where $\mathbf{x} \in \widetilde{\Pi}_2$, are given by
\begin{equation}
\notag
(r, \theta, \phi_1), \mbox{ where } \phi_1=(2-\alpha)\pi \mbox{ with } 2-\alpha \in (0,2).
\end{equation}
Recall that $ \boldsymbol{ \eta}_2$ has the expansion \eqref{eq:eta2 ex}. Although $\mathbf{x} \in \widetilde{\Pi}_2$ and ${\mathcal R}_{\Pi_1} (\mathbf x) \in \widetilde{\Pi}_2'$ have different azimuthal angles, they have the same polar angle $\theta$; hence from Definition \ref{def:class1}, we know that $\widetilde{\boldsymbol{\eta}}_2 $ has the same expansion \eqref{eq:eta2 ex} as ${\boldsymbol{\eta}}_2 $.
Furthermore, the dihedral angle between $\widetilde \Pi_2$ and $\widetilde \Pi_2'$ satisfies
$$
\angle(\widetilde \Pi_2,\widetilde \Pi_2' )= \begin{cases}
2\alpha \pi \in (0,\pi), \hspace{1.3cm} \alpha \in (0,1/2),\\[5pt]
2(1-\alpha)\pi\in (0,\pi],\quad \alpha \in [1/2,1),\\[5pt]
2(\alpha-1)\pi\in (0,\pi),\quad \alpha \in (1,3/2),\\[5pt]
2(2-\alpha)\pi\in (0,\pi],\quad \alpha \in [3/2,2).
\end{cases}
$$
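As a quick check of the first line (the other cases are analogous), assume without loss of generality that $\Pi_1$ coincides with the half-plane $\{\phi=0\}$; then for $\alpha\in(0,1/2)$,

```latex
\phi\big|_{\widetilde\Pi_2}=\alpha\pi,\qquad
\phi\big|_{\widetilde\Pi_2'}=-\alpha\pi\ (\operatorname{mod} 2\pi),\qquad
\angle(\widetilde\Pi_2,\widetilde\Pi_2')=\alpha\pi-(-\alpha\pi)=2\alpha\pi.
```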
We divide our remaining proof into four separate cases. Recall that the Maxwell system \eqref{eq:eig} is invariant under rigid motions. Without loss of generality, we assume that the generalized impedance edge-corner ${\mathcal E}(\widetilde \Pi_2, \widetilde \Pi_2', \boldsymbol {l} ) \Subset \Omega$ is placed as shown in Figure \ref{fig:coordinate1}.
\medskip
\noindent {\bf Case 1.}~If $\alpha \in (0,1/2)$, then $2\alpha \in (0,1)$. By virtue of Theorem \ref{th:two imp}, if
\begin{equation}\label{eq:623 alpha}
2\alpha \neq \frac{q}{p}, \quad p=1,\ldots,N, \mbox{ for a fixed } p, \, q=1,\ldots, p-1,
\end{equation}
we have $\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq N$. It is easy to see that \eqref{eq:623 alpha} is equivalent to
\begin{equation}\label{eq:624}
\alpha \neq \frac{q}{2p}, \quad p=1,\ldots,N, \mbox{ for a fixed } p, \, q=1,\ldots, p-1.
\end{equation}
\noindent {\bf Case 2.}~If $\alpha \in [1/2,1)$, then $2(1-\alpha) \in (0,1]$. By virtue of Theorem \ref{th:two imp}, if
\begin{equation}\label{eq:624 alpha}
2(1-\alpha) \neq \frac{q}{p}, \quad p=1,\ldots,N, \mbox{ for a fixed } p, \, q=1,\ldots, p,
\end{equation}
we have $\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq N$. It is easy to see that \eqref{eq:624 alpha} is equivalent to
\begin{equation}\label{eq:626}
\alpha \neq \frac{q}{2p}, \quad p=1,\ldots,N, \mbox{ for a fixed } p, \, q=p,\ldots, 2p-1.
\end{equation}
\noindent {\bf Case 3.}~If $\alpha \in (1,3/2)$, then $2(\alpha-1) \in (0,1)$. By virtue of Theorem \ref{th:two imp}, if
\begin{equation}\label{eq:624 alpha1}
2(\alpha-1) \neq \frac{q}{p}, \quad p=1,\ldots,N, \mbox{ for a fixed } p, \, q=1,\ldots, p-1,
\end{equation}
we have $\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq N$. It is easy to see that \eqref{eq:624 alpha1} is equivalent to
\begin{equation}\label{eq:626 1}
\alpha \neq \frac{q}{2p}, \quad p=1,\ldots,N, \mbox{ for a fixed } p, \, q=2p+1,\ldots, 3p-1.
\end{equation}
\noindent {\bf Case 4.}~If $\alpha \in [3/2,2)$, then $2(2-\alpha) \in (0,1]$. By virtue of Theorem \ref{th:two imp}, if
\begin{equation}\label{eq:624 alpha2}
2(2-\alpha) \neq \frac{q}{p}, \quad p=1,\ldots,N, \mbox{ for a fixed } p, \, q=1,\ldots, p,
\end{equation}
we have $\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq N$. It is easy to see that \eqref{eq:624 alpha2} is equivalent to
\begin{equation}\label{eq:626 2}
\alpha \neq \frac{q}{2p}, \quad p=1,\ldots,N, \mbox{ for a fixed } p, \, q=3p,\ldots, 4p-1.
\end{equation}
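As an elementary numerical sanity check (illustrative only, not part of the proof), one can verify that the exclusion sets \eqref{eq:624}, \eqref{eq:626}, \eqref{eq:626 1} and \eqref{eq:626 2} from the four cases jointly rule out exactly the numerators $q\in\{1,\ldots,4p-1\}\setminus\{2p\}$ for each fixed $p$; the missing value $q=2p$ corresponds to $\alpha=1$, which is excluded by assumption.

```python
# Sanity check: for each fixed p, the numerators q excluded by the four cases
# are exactly {1, ..., 4p-1} \ {2p}.
def excluded_numerators(p):
    case1 = set(range(1, p))              # Case 1: q = 1, ..., p-1
    case2 = set(range(p, 2 * p))          # Case 2: q = p, ..., 2p-1
    case3 = set(range(2 * p + 1, 3 * p))  # Case 3: q = 2p+1, ..., 3p-1
    case4 = set(range(3 * p, 4 * p))      # Case 4: q = 3p, ..., 4p-1
    return case1 | case2 | case3 | case4

for p in range(1, 100):
    # q = 2p (i.e. alpha = 1) is the only numerator not excluded by any case
    assert excluded_numerators(p) == set(range(1, 4 * p)) - {2 * p}
```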
\medskip
In view of \eqref{eq:624}, \eqref{eq:626}, \eqref{eq:626 1} and \eqref{eq:626 2}, we finish the proof of this theorem.
\end{proof}
With the help of Lemma \ref{lem:reflection} and an argument similar to that used in proving Theorem \ref{thm:imp pec}, one can prove the following theorem; the detailed proof is omitted.
\begin{theorem}\label{thm:imp pmc}
Let $\mathbf{E}$ be a solution to \eqref{eq:eig}. Consider a generalized impedance edge-corner ${\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2, \boldsymbol {l} ) \Subset \Omega$ with $\angle(\Pi_{1},\Pi_2)=\phi_0=\alpha \pi$, where $\alpha\in(0,1)$. Assume that the generalized impedance parameter $ \boldsymbol{ \eta}_2$ on $\widetilde{ \Pi}_2 $ satisfies (iii) in \eqref{eq:imp1} and has the expansion \eqref{eq:eta2 ex}, while the generalized impedance parameter $ \boldsymbol{ \eta}_1$ on $\widetilde{ \Pi}_1 $ satisfies (i) in \eqref{eq:imp1} (i.e., $ \boldsymbol{ \eta}_1 \equiv 0 $). Then
\begin{align} \notag
&\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq N,\quad \mbox{if } \alpha \neq \frac{q }{2p}, \, p=1,\ldots, N,
\end{align}
where $N\in\mathbb{N}$ and for a fixed $p$, $ q=1,2,\ldots, 4p-1$.
\end{theorem}
\section{Irrational intersections and infinite vanishing orders}\label{sec4}
From the results derived in Sections \ref{sec:5} to \ref{sec:6}, one can see that the vanishing order of the eigenfunction $\mathbf E$ at a generalized impedance edge-corner depends on the dihedral angle of the underlying corner. Next, we introduce irrational and rational edge-corners, and then, based on the results in Sections \ref{sec:5} to \ref{sec:6}, we show that the vanishing order of the eigenfunction at an irrational edge-corner is generically infinite and hence the eigenfunction vanishes identically in $\Omega$; namely, the strong unique continuation principle holds in such a case.
\begin{definition}
Let ${\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2, \boldsymbol {l} )$ be an edge-corner defined in Section \ref{sec:Intro}, with the dihedral angle between $\widetilde \Pi_1$ and $\widetilde \Pi_2$ denoted by $\phi_0=\alpha \pi $, where $\alpha\in(0,2)$ and $\alpha \neq 1$. If $\alpha$ is an irrational number, then the edge-corner is called irrational. If $\alpha$ is a rational number of the form $q/p$ with $p, q\in\mathbb{N}$ and $q/p$ irreducible, the edge-corner is called rational and $p$ is referred to as its rational degree.
\end{definition}
We readily have the following theorem from Theorems~\ref{th:two imp}, \ref{thm:pec pmc}, \ref{thm:imp pec} and \ref{thm:imp pmc}.
\begin{theorem}\label{ir-2nodal}
Let $\mathbf{E}$ be a solution to \eqref{eq:eig}. Consider an irrational generalized impedance edge-corner ${\mathcal E}(\widetilde \Pi_1, \widetilde \Pi_2, \boldsymbol {l} ) \Subset \Omega$ with $\angle(\Pi_{1},\Pi_2)=\phi_0=\alpha \pi$, where $\alpha\in(0,2)$ and $\alpha \neq 1$. Under the same requirement on $\boldsymbol{\eta}_j$, $j=1,2$, to either one from Theorems~\ref{th:two imp}, \ref{thm:pec pmc}, \ref{thm:imp pec} and \ref{thm:imp pmc},
it holds that
\begin{equation*}\label{result1}
\mathrm{Vani}({\mathbf E}; {\mathbf 0})=+\infty,\quad {\mathbf 0}\in \boldsymbol {l} .
\end{equation*}
\end{theorem}
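To spell out the observation behind this (as noted above, the theorem readily follows from Theorems~\ref{th:two imp}, \ref{thm:pec pmc}, \ref{thm:imp pec} and \ref{thm:imp pmc}): an irrational $\alpha$ satisfies every one of the angle exclusions simultaneously,

```latex
\alpha\notin\mathbb{Q}
\;\Longrightarrow\;
\alpha\neq\frac{q}{2p}\quad \mbox{for all } N\in\mathbb{N},\ p=1,\ldots,N,\ q=1,\ldots,4p-1,
```

so $\mathrm{Vani}(\mathbf{E}; \mathbf{0})\geq N$ for every $N\in\mathbb{N}$, i.e. $\mathrm{Vani}(\mathbf{E}; \mathbf{0})=+\infty$.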
\section{Applications to inverse electromagnetic scattering problems}\label{sec5}
In this section, we consider two applications of the UCP results established in the previous sections to inverse electromagnetic scattering problems. In what follows, we first present the mathematical formulation of the inverse problem of determining an impenetrable obstacle from its associated electromagnetic far-field measurement. It is a prototypical model problem for many real-world applications, including radar/sonar, non-destructive testing and medical imaging.
\subsection{Unique identifiability results for inverse obstacle scattering problems}
Let $\Omega\subset\mathbb{R}^3$ be a bounded Lipschitz domain such that $\mathbb{R}^3\backslash\bar{\Omega}$ is connected, and let the
incident electric and magnetic fields be of the form
\begin{equation}\label{eq:p1n}
\mathbf{E}^{{i}}\big(\mathbf{x}\big) :=\mathbf{p}\, {e}^{\mathbf{i} k \mathbf{x} \cdot \mathbf{d}},\quad \mathbf{H}^{{i}}\big(\mathbf{x}\big) :=\frac{1}{\mathbf{i}k}\nabla\wedge \big(\mathbf{p}\, {e}^{\mathbf{i} k \mathbf{x} \cdot \mathbf{d}}\big)=\mathbf{d} \wedge \mathbf{p} \, {e}^{\mathbf{i} k \mathbf{x} \cdot \mathbf{d}},
\end{equation}
which are known as time-harmonic electromagnetic plane waves, with ${\mathbf p} \in \mathbb{R}^{3}\backslash\{\mathbf{0}\}$, $k \in \mathbb R_+$ and ${\mathbf d} \in \mathbb{S}^{2}:=\left\{\mathbf{x} \in \mathbb{R}^{3} ;|\mathbf{x}|=1\right\}$ representing, respectively, the polarization, the wave number and the direction of propagation, and it holds that $\mathbf{p}\perp\mathbf{d}$. The associated forward scattering problem can be described by the following time-harmonic Maxwell equations (cf. \cite{CK}):
\begin{equation}\label{eq:forward}
\begin{cases}
&\nabla\wedge\mathbf{E}-\mathbf{i} k \mathbf{H}=0 \quad \text { in } \quad \mathbb{R}^{3} \backslash \overline{\Omega}, \\
& \nabla\wedge \mathbf{H}+\mathbf{i} k \mathbf{E}=0 \quad \text { in } \quad \mathbb{R}^{3} \backslash\overline{\Omega}, \\
&\mathbf{E}({\mathbf x})=\mathbf{E}^{{i}}({\mathbf x})+\mathbf{E}^{s}({\mathbf x} ),\\
& \mathbf{H}({\mathbf x})=\mathbf{H}^{{i}}({\mathbf x})+\mathbf{H}^{s}({\mathbf x} ),\\
& \mathscr{B}({\mathbf E})={\mathbf 0}\hspace*{1.75cm}\mbox{on}\ \ \partial\Omega,\medskip\\
&\lim _{|{\mathbf x}| \rightarrow \infty}\left(\mathbf{H}^{s} \wedge {\mathbf x}-|{\mathbf x} | \mathbf{E}^{s}\right)={\mathbf 0},
\end{cases}
\end{equation}
where $\mathbf{E}=\left(E_{1}, E_{2}, E_{3}\right)$ and $\mathbf{H}=\left(H_{1}, H_{2}, H_{3}\right)$ are respectively the total electric and magnetic fields, formed by the incident fields $\mathbf{E}^{{i}}({\mathbf x} )$, $\mathbf{H}^{{i}}({\mathbf x} )$ and the scattered fields $\mathbf{E}^{s}({\mathbf x} )$, $\mathbf{H}^{s}({\mathbf x} )$. The last equation of \eqref{eq:forward} is the Silver-M\"uller radiation condition. The boundary condition $\mathscr{B}(\mathbf{E} )$ on $\partial \Omega$ can be any of the following three conditions:
\begin{enumerate}
\item the Dirichlet condition (corresponding to that $\Omega$ is a perfectly electric conducting (PEC) obstacle):
\begin{equation}\label{eq:83}
\mathscr{B}(\mathbf{E})=\nu \wedge \mathbf{E};
\end{equation}
\item the Neumann condition (corresponding to that $\Omega$ is a perfectly magnetic conducting (PMC) obstacle):
\begin{equation}\label{eq:84}
\mathscr{B}(\mathbf{E})=\nu \wedge ( \nabla \wedge \mathbf{E}) ;
\end{equation}
\item the impedance condition (corresponding to that $\Omega$ is an impedance obstacle):
\begin{equation} \label{eq:bound imp}
\mathscr{B}(\mathbf{E})=\nu \wedge ( \nabla \wedge \mathbf{E}) +\boldsymbol \eta( \nu \wedge \mathbf{E}) \wedge \nu ,\ \Re(\boldsymbol \eta)\geq 0 \mbox{ and } \Im(\boldsymbol\eta)<0,
\end{equation}
\end{enumerate}
where $\nu$ denotes the exterior unit normal vector to $\partial\Omega$ and $\boldsymbol\eta\in L^\infty(\partial\Omega)$. We would also like to point out that the conditions $\Re(\boldsymbol\eta)\geq 0 \mbox{ and } \Im(\boldsymbol\eta)<0$ are physical requirements.
In what follows, in order to ease the exposition and in analogy to our notation in \eqref{eq:imp1}, we unify the three types of boundary conditions as
\begin{equation}\label{bound}
\mathscr{B}(\mathbf{E})=\nu \wedge ( \nabla \wedge \mathbf{E}) +\boldsymbol\eta( \nu \wedge \mathbf{E}) \wedge \nu \quad\mbox{on } \partial\Omega,
\end{equation}
where the cases that $\boldsymbol\eta=\infty$ and $\boldsymbol\eta=0$ stand for the Dirichlet and Neumann boundary conditions respectively.
For the forward scattering problem \eqref{eq:forward}, it is known that there exists a unique pair of solutions $({\mathbf E}, {\mathbf H}) \in$ $H_{\mathrm{loc }}(\mathrm{curl} , \mathbb{R}^{3} \backslash\overline{\Omega}) \times H_{\mathrm{loc }}(\mathrm{curl}, \mathbb{R}^{3} \backslash\overline{\Omega})$ (cf. \cite{Ned}). Furthermore, the radiating fields $\mathbf{E}^{s}$ and $\mathbf{H}^{s}$ to \eqref{eq:forward} possess the following asymptotic expansions
\begin{equation}\label{eq:far}
\begin{split}
\mathbf{E}^{s}(\mathbf{x} ; \Omega, k, \mathbf{d}, \mathbf{p})&=\frac{{e}^{\mathbf{i} k |\mathbf{x}| }}{|\mathbf{x} |}\left\{ \mathbf{E}_{\infty}(\hat{\mathbf{x}} ; \Omega, k, \mathbf{d}, \mathbf{p})+\mathcal{O}\left(\frac{1}{|\mathbf{x} |}\right)\right\} \quad \text { as } \quad|\mathbf{x} | \rightarrow \infty, \\
\mathbf{H}^{s}(\mathbf{x} ; \Omega, k, \mathbf{d}, \mathbf{p})&=\frac{{e}^{\mathbf{i} k |\mathbf{x}| }}{|\mathbf{x} |}\left\{\mathbf{H}_{\infty}(\hat{\mathbf{x}} ; \Omega, k, \mathbf{d}, \mathbf{p})+\mathcal{O}\left(\frac{1}{|\mathbf{x} |}\right)\right\} \quad \text { as } \quad|\mathbf{x} | \rightarrow \infty,
\end{split}
\end{equation}
which hold uniformly in the angular variable $\hat{\mathbf{x} }=\mathbf{x} /|\mathbf{x} | \in \mathbb{S}^{2} .$ The functions $\mathbf{E}_{\infty}(\hat{\mathbf{x} })$ and $\mathbf{H}_{\infty}(\hat{\mathbf{x} })$ in \eqref{eq:far} are called, respectively, the electric and magnetic far field patterns, and both are analytic on the entire unit sphere $\mathbb{S}^{2}$. As above and also in what follows, the notation $\mathbf{U}(\mathbf{x} ; \Omega, \mathbf{p}, k, \mathbf{d})$ will be frequently used to specify the dependence of a given function $\mathbf{U}$ on the scatterer $\Omega,$ the polarization $\mathbf{p},$ the wave number $k$ and the incident direction $\mathbf{d}$.
The inverse electromagnetic obstacle scattering problem corresponding to \eqref{eq:forward} is to recover $\Omega$ (and $\boldsymbol \eta$ as well in the impedance case) by the knowledge of the far-field pattern $ \mathbf{E}_{\infty}(\hat{\mathbf{x}} ; \Omega, \mathbf{p}, k, \mathbf{d})$ (or equivalently $ \mathbf{H}_{\infty}(\hat{\mathbf{x}} ; \Omega, \mathbf{p}, k, \mathbf{d})$). By introducing an operator $\mathcal{F}$ which sends the obstacle to the corresponding far-field pattern, defined by the forward scattering system \eqref{eq:forward}, the aforementioned inverse problem can be formulated as
\begin{equation}\label{inverse}
\mathcal{F}(\Omega, \boldsymbol \eta)= \mathbf{E}_{\infty}(\hat{\mathbf{x}} ; \Omega, k, \mathbf{d}, \mathbf{p}).
\end{equation}
It can be directly verified that the inverse problem \eqref{inverse} is nonlinear; moreover, it is ill-posed (cf. \cite{CK}). It is a longstanding problem whether one can establish the one-to-one correspondence for \eqref{inverse} by a single far-field pattern or a finite number of far-field patterns (namely, with a fixed triplet of $k$, $\mathbf{d}$ and $\mathbf{p}$, or a finite number of such triplets); see the recent survey paper \cite{CK18} by Colton and Kress for more discussions on the historical developments of this fundamental problem.
Under the assumption that $\Omega$ is a polyhedral obstacle associated with $\boldsymbol\eta\equiv 0$ or $\boldsymbol \eta\equiv \infty$, the unique correspondence, a.k.a. unique identifiability, for the inverse problem \eqref{inverse} by a single far-field measurement was established in the literature; see \cite{LiuA,LRX,Liu3,Liu09}. However, it is still unclear whether one can establish the unique identifiability for an impedance obstacle of polyhedral shape, even in the case that $\boldsymbol\eta$ is a nonzero constant, and a fortiori in our present setting where $\boldsymbol\eta$ is a generalized impedance parameter which can be $0$, $\infty$ or a variable function. To be more specific about the generalized impedance obstacle, we introduce the following definition.
\begin{definition}\label{ir obstacle}
Let $\Omega$ be an open and bounded polyhedron in $\mathbb{R}^3$. Hence, $\partial\Omega$ possesses finitely many edge-corners that are formed by the intersections of any two adjacent faces of $\partial\Omega$. $\Omega$ is said to be irrational if all of its edge-corners are irrational; otherwise it is called rational, and the smallest degree among the rational degrees of all of its rational edge-corners is referred to as the degree of the polyhedron.
\end{definition}
\begin{definition}\label{def:so1}
$(\Omega, \boldsymbol\eta)$ is said to be an admissible polyhedral obstacle if $\Omega$ is an open bounded polyhedron and $\boldsymbol\eta$ fulfils the following requirements.
\begin{enumerate}
\item For each face of $\partial\Omega$, say $\widetilde\Pi$, and each edge of $\widetilde\Pi$, say $ \boldsymbol {l} $, there exists a neighbourhood $\Sigma_{ \boldsymbol {l} }:=B_\rho( \boldsymbol {l} )\cap \widetilde\Pi$ with $\rho\in\mathbb{R}_+$ and $B_\rho( \boldsymbol {l} ):=\{\mathbf{x}\in\mathbb{R}^3; |\mathbf{x}-\mathbf{x}'|<\rho, \exists\, \mathbf{x}'\in \boldsymbol {l} \}$, such that either $\boldsymbol \eta|_{\Sigma_{ \boldsymbol {l} }}=0$, or $\boldsymbol \eta|_{\Sigma_{ \boldsymbol {l} }}=\infty$, or $\boldsymbol \eta|_{\Sigma_{ \boldsymbol {l} }}\in\mathcal{A}( \boldsymbol {l} )$.
\item On any open subset of $\partial\Omega$ away from the neighbourhoods of the edges introduced in (1), $\boldsymbol \eta$ can be $0$, or $\infty$, or $\boldsymbol\eta\in L^\infty$.
\item In the case $\boldsymbol \eta\in L^\infty$, one has that $\Re(\boldsymbol \eta)\geq 0$ and $\Im(\boldsymbol \eta)<0$.
\end{enumerate}
\end{definition}
\begin{definition}\label{def6}
$\Omega$ is said to be an admissible complex polyhedral obstacle if it consists of finitely many admissible polyhedral obstacles.
That is,
\begin{equation*}\label{eq:r2a}
(\Omega, \boldsymbol \eta)=\bigcup_{j=1}^l (\Omega_j, \boldsymbol \eta_j),
\end{equation*}
where $l\in\mathbb{N}$ and each $(\Omega_j, \boldsymbol \eta_j)$ is an admissible polyhedral obstacle.
Here, we define
\begin{equation*}\label{eq:r2b}
\boldsymbol \eta=\sum_{j=1}^l \boldsymbol \eta_j\chi_{\partial\Omega_j}.
\end{equation*}
Moreover, $\Omega$ is said to be irrational if all of its component polyhedral obstacles are irrational, otherwise it is said to be rational. For the latter case, the smallest degree among all the degrees of its rational components is defined to be the degree of the complex obstacle $\Omega$.
\end{definition}
Next, we derive a local unique identifiability result for determining an admissible complex irrational polyhedral obstacle by a single far-field pattern.
\begin{theorem}\label{thm:uniqueness1}
Consider a fixed triplet of $k\in\mathbb{R}_+$, $\mathbf{d}\in\mathbb{S}^2$ and $\mathbf{p}\in\mathbb{R}^3\backslash\{\mathbf{0}\}$.
Let $(\Omega, \boldsymbol\eta)$ and $(\widetilde\Omega, \widetilde{\boldsymbol\eta})$ be
two admissible complex irrational obstacles, with $\mathbf{E}_\infty$ and $\widetilde{\mathbf{E}}_\infty$ being
their corresponding far-field patterns
and $\mathbf{G}$ being
the unbounded connected component of $\mathbb{R}^3\backslash\overline{(\Omega\cup\widetilde\Omega)}$.
If $\mathbf{E}_\infty$ and $\widetilde{\mathbf{E}}_\infty$ are the same in the sense that
\begin{equation}\label{eq:cond1}
\mathbf{E}_{\infty}(\hat{\mathbf{x}} ; \Omega, k, \mathbf{d}, \mathbf{p})=\widetilde {\mathbf{E}}_{\infty}(\hat{\mathbf{x}} ; \widetilde {\Omega}, k, \mathbf{d}, \mathbf{p}), \ \
\mbox{for all}\ \ \hat{\mathbf x}\in\mathbb{S}^2,
\end{equation}
then
$
(\partial \Omega \backslash \partial \overline{ \widetilde{\Omega }} )\bigcup (\partial \widetilde{\Omega } \backslash \partial \overline{ \Omega } )
$
cannot possess an edge-corner on $\partial \mathbf{G}$.
Moreover,
\begin{equation}\label{eta}
\boldsymbol \eta=\widetilde{\boldsymbol \eta}\quad\mbox{on}\quad \partial\Omega\cap\partial\widetilde{\Omega}\cap\partial\mathbf{G}.
\end{equation}
\end{theorem}
\begin{proof}
We prove the theorem by contradiction. Assume that
$(\partial \Omega \backslash \partial \overline{ \widetilde{\Omega }} )\bigcup (\partial \widetilde{\Omega } \backslash \partial \overline{ \Omega } )$ has an edge-corner $\mathbf x_c$ on $\partial \mathbf{G}$. Then $\mathbf x_c$ is located on either $\partial\Omega$ or $\partial\widetilde\Omega$. Without loss of generality, we assume that $\mathbf x_c$ is an edge-corner of $\partial\widetilde\Omega$, which also indicates that $\mathbf{x}_c$ lies outside $\Omega$. Let $h\in\mathbb{R}_+$ be sufficiently small such that $B_h(\mathbf{x}_c)\Subset\mathbb{R}^3\backslash\overline \Omega $; then we have
\begin{equation*}\label{eq:aa2}
B_h(\mathbf x_c)\cap \partial\widetilde\Omega=\widetilde\Pi_\ell,\quad \ell=1,2,
\end{equation*}
where $\widetilde\Pi_\ell$ are two flat subsets lying on the faces of $\widetilde\Omega$ that intersect at $\mathbf x_c$. Moreover, for the subsequent use, we let $h$ be smaller than $\rho$, where $\rho$ is the parameter in Definition~\ref{def:so1}. Hence we have an edge-corner $\mathcal{E}(\widetilde\Pi_1,\widetilde\Pi_2, \boldsymbol {l} )\in \partial \mathbf G$ with $\mathbf{x}_c\in \boldsymbol {l} $, where $\mathbf{G}$ is the unbounded connected component of $\mathbb{R}^3\backslash\overline{(\Omega\cup\widetilde\Omega)}$. By \eqref{eq:cond1} and the Rellich theorem (cf. \cite{CK}), we know that
\begin{equation}\label{eq:aa3}
\mathbf{E}(\mathbf x; k, \mathbf{d},\mathbf{p})=\widetilde{\mathbf{E}}(\mathbf x; k, \mathbf{d},\mathbf{p}),\quad {\mathbf x} \in\mathbf{G}.
\end{equation}
Since $\widetilde\Pi_\ell \subset\partial\mathbf{G}$, $\ell=1,2$, combining \eqref{eq:aa3} with the generalized boundary condition \eqref{bound} on $\partial\widetilde\Omega$, it is easy to obtain that
\begin{equation}\label{eq:aa4}
\nu_\ell \wedge (\nabla\wedge\mathbf{E})+\widetilde{\boldsymbol\eta} (\nu_\ell \wedge\mathbf{E})\wedge\nu_\ell=\nu_\ell \wedge (\nabla\wedge\mathbf{\widetilde E})+\widetilde{\boldsymbol \eta}(\nu_\ell \wedge\mathbf{\widetilde E})\wedge\nu_\ell=\mathbf 0 \mbox{ on } \widetilde\Pi_\ell.
\end{equation}
We consider the following two separate cases, depending on the values of $\widetilde{\boldsymbol\eta}$ on $\widetilde\Pi_\ell$ associated with the edge-corner $\mathcal{E}(\widetilde\Pi_1, \widetilde\Pi_2, \boldsymbol {l} )$.
\medskip
\noindent{\bf Case 1.}~$\widetilde{\boldsymbol\eta}\big|_{\widetilde\Pi_\ell}=0$ or $\widetilde{\boldsymbol\eta}\big|_{\widetilde\Pi_\ell}=\infty$, $\ell=1, 2$. We only consider the case $\widetilde{\boldsymbol\eta}\big|_{\widetilde\Pi_\ell}=\infty$ and the other case can be treated in a similar manner. First, we note that one has from \eqref{eq:aa4},
\begin{equation}\label{eq:p1}
(\nu_\ell\wedge\mathbf{E})\wedge\nu_\ell=0\quad\mbox{on}\ \ \widetilde\Pi_\ell,\ \ell=1, 2.
\end{equation}
Let $\widehat{\Pi}_\ell$ denote the full flat extension of $\widetilde\Pi_\ell$ within $\mathbb{R}^3\backslash\overline{\Omega}$. We claim that at least one of $\widehat\Pi_\ell$ is bounded. In fact, if, on the contrary, both $\widehat\Pi_1$ and $\widehat\Pi_2$ were unbounded, then one would have from analytic continuation (noting that $\mathbf{E}$ is real analytic in $\mathbb{R}^3\backslash\overline{\Omega}$) and \eqref{eq:p1} that
\begin{equation}\label{eq:p2}
\lim_{|\mathbf{x}|\rightarrow\infty, \mathbf{x}\in\widehat\Pi_\ell}\left| (\nu_\ell\wedge\mathbf{E})\wedge\nu_\ell \right|=0, \ \ell=1, 2.
\end{equation}
Using \eqref{eq:far}, we note that $\mathbf{E}^s(\mathbf{x})\rightarrow \mathbf{0}$ as $|\mathbf{x}|\rightarrow\infty$, and hence we further have from \eqref{eq:p2} that
\begin{equation}\label{eq:p3}
\lim_{|\mathbf{x}|\rightarrow\infty, \mathbf{x}\in\widehat\Pi_\ell}\left| (\nu_\ell\wedge\mathbf{E}^i)\wedge\nu_\ell \right|=0, \ \ell=1, 2,
\end{equation}
which together with the explicit form of the incident field in \eqref{eq:p1n} readily implies that $(\nu_\ell\wedge\mathbf{p})\wedge\nu_\ell=\mathbf{0}$, i.e.\ $\mathbf{p}$ is parallel to $\nu_\ell$ for $\ell=1,2$. But this is impossible since $\nu_1$ and $\nu_2$ are linearly independent. Without loss of generality, we can assume that $\widehat\Pi_1$ is bounded. Clearly, $\widehat\Pi_1$ and part of $\partial\Omega$ form a bounded domain in $\mathbb{R}^3\backslash\overline{\Omega}$, which we denote by $\Omega_1$. It is noted from \eqref{eq:aa4} that one has
\begin{equation}\label{eq:p4}
\nu \wedge (\nabla\wedge\mathbf{E})+\widetilde{\boldsymbol\eta} (\nu \wedge\mathbf{E})\wedge\nu=\mathbf{0}\ \mbox{on}\ \partial\Omega_1\backslash\widehat{\Pi}_1\ \mbox{and}\ \nu \wedge (\nabla\wedge\mathbf{E})=0\ \mbox{on}\ \widehat{\Pi}_1.
\end{equation}
We next show that $\widetilde{\boldsymbol\eta}$ can only take $0$ or $\infty$ on $\partial\Omega_1\backslash\widehat{\Pi}_1$. Indeed, we assume on the contrary that there exists a nonempty open subset $\Lambda_1\subset \partial\Omega_1\backslash\widehat{\Pi}_1$ such that $\widetilde{\boldsymbol\eta} \in L^\infty(\Lambda_1)$ with $\Re(\widetilde{\boldsymbol\eta})\geq 0$ and $\Im(\widetilde{\boldsymbol\eta} )<0$, and on $(\partial\Omega_1\backslash\widehat{\Pi}_1)\backslash\overline{\Lambda_1}$, $\widetilde{\boldsymbol \eta} $ takes either $0$ or $\infty$. Noting that the Maxwell equations, namely the first two equations in \eqref{eq:forward} are satisfied in $\Omega_1$, we have from Green's formula that
\begin{equation}\label{eq:p5}
\begin{split}
\mathbf{i}k\int_{\Omega_1} |\mathbf{H}|^2=&\int_{\Omega_1}(\nabla\wedge\mathbf{E})\cdot\overline{\mathbf{H}}=\int_{\Omega_1}\mathbf{E}\cdot(\nabla\wedge\overline{\mathbf{H}})+\int_{\partial\Omega_1} (\overline{\mathbf{H}}\wedge\nu)\cdot\mathbf{E}\\
=&\mathbf{i}k\int_{\Omega_1}|\mathbf{E}|^2+\int_{\partial\Omega_1}(\overline{\mathbf{H}}\wedge\nu)\cdot\mathbf{E}=\mathbf{i}k\int_{\Omega_1}|\mathbf{E}|^2+\int_{\Lambda_1}(\overline{\mathbf{H}}\wedge\nu)\cdot\mathbf{E},
\end{split}
\end{equation}
where in deriving the last equality, we make use of the fact that $(\overline{\mathbf{H}}\wedge\nu)\cdot\mathbf{E}=0$ on $\partial\Omega_1\backslash\overline{\Lambda_1}$. Using the fact that $\Im(\widetilde{\boldsymbol\eta})<0$ on $\Lambda_1$, one can readily infer from \eqref{eq:p5} that $\nu\wedge\mathbf{E}|_{\Lambda_1}=\mathbf{0}$, which together with \eqref{eq:p4} further implies that $\nu\wedge\mathbf{H}|_{\Lambda_1}=\mathbf{0}$. Hence, by Holmgren's uniqueness principle (cf. \cite{CK}), one has that
\begin{equation}\label{eq:aa51}
{\mathbf E}(\mathbf x; k, \mathbf{d},\mathbf{p}) =\mathbf{0} \mbox{ in } \mathbb{R}^3\backslash\overline{\Omega},
\end{equation}
which in particular yields that
\begin{equation}\label{eq:aa6}
\lim_{|\mathbf x|\rightarrow\infty} \left|{\mathbf E}(\mathbf x; k, \mathbf{d},\mathbf{p})\right|={\mathbf 0}.
\end{equation}
But this contradicts the following fact, which follows from \eqref{eq:far}:
\begin{equation}\label{eq:aa61}
\lim_{|\mathbf x|\rightarrow\infty} \left|{\mathbf E}(\mathbf x; k, \mathbf{d},\mathbf{p})\right|=\lim_{|\mathbf x|\rightarrow\infty} \left|\mathbf{p}{e}^{\mathbf{i} k \mathbf{x} \cdot \mathbf{d}}+\mathbf{E}^{s}(\mathbf{x} ; k, \mathbf{d}, \mathbf{p})\right|=|\mathbf{p}| \neq 0.
\end{equation}
Hence, one can actually find a polyhedral domain $\Omega_1\subset\mathbb{R}^3\backslash\overline{\Omega}$ such that on $\partial\Omega_1$ either $\nu\wedge\mathbf{E}=\mathbf{0}$ or $\nu\wedge\mathbf{H}=\mathbf{0}$ holds. The situation is thus reduced to the one considered in \cite{LiuA} and \cite{Liu09}. It is noted that in \cite{Liu09}, two far-field patterns are used to handle this situation. However, the pair of incident fields $(\mathbf{E}^i, \mathbf{H}^i)$ in \eqref{eq:p1n} in our current case is chosen slightly differently from that in \cite{Liu09}, which enables one to apply the path argument from \cite{LiuA} to arrive at a contradiction by starting from $\Omega_1$.
\medskip
\noindent {\bf Case 2.}~$\widetilde{\boldsymbol \eta}\big|_{\widetilde\Pi_\ell}\in\mathcal{A}( \boldsymbol {l} )$, $\ell=1, 2$; or one of $\widetilde{\boldsymbol \eta}\big|_{\widetilde\Pi_\ell}$ belongs to $\mathcal{A}( \boldsymbol {l} )$, and the other one takes $0$ or $\infty$; or one of $\widetilde{\boldsymbol \eta}\big|_{\widetilde\Pi_\ell}$ is $0$ and the other one is $\infty$. This falls exactly into the situation considered in Theorem~\ref{ir-2nodal}. By the irrationality of the edge-corner as well as the strong unique continuation principle in Theorem~\ref{ir-2nodal}, we readily obtain \eqref{eq:aa51}, which again leads to the contradiction \eqref{eq:aa61}.
\medskip
It remains to prove \eqref{eta}, and we establish it by contradiction.
Let $\Gamma\subset \partial\Omega\cap\partial\widetilde\Omega\cap\partial\mathbf{G}$ be an open subset such that $\boldsymbol \eta\neq \widetilde{\boldsymbol\eta}$ on $\Gamma$. By taking a smaller subset of $\Gamma$ if necessary, we may assume that $\boldsymbol\eta$ (respectively
$\widetilde{\boldsymbol\eta}$) is either an $L^\infty$ function, $0$, or $\infty$ on $\Gamma$. Clearly, one has $\mathbf{E}=\widetilde {\mathbf E}$ in $\mathbf{G}$. Hence it holds that
\begin{equation*}\label{eq:bb6}
( \nu \wedge \mathbf{E}) \wedge \nu=( \nu \wedge \widetilde{ \mathbf{E}} ) \wedge \nu \mbox{ and } \nu \wedge (\nabla \wedge \mathbf{E} ) =\nu \wedge (\nabla \wedge \widetilde{ \mathbf{E}} )\quad\mbox{on}\ \ \Gamma,
\end{equation*}
and
$$
\nu \wedge ( \nabla \wedge \mathbf{E}) +\boldsymbol \eta( \nu \wedge \mathbf{E}) \wedge \nu =\mathbf {0},\ \ \nu \wedge ( \nabla \wedge \widetilde{\mathbf{E}}) +\widetilde{\boldsymbol \eta} ( \nu \wedge \widetilde {\mathbf{E}}) \wedge \nu =\mathbf{0} \quad\mbox{on}\ \ \Gamma.
$$
Combining these with the assumption that $\boldsymbol \eta\neq\widetilde{\boldsymbol \eta}$ on $\Gamma$, we can directly deduce that
\[
\nu\wedge\mathbf{E}=\nu\wedge\mathbf{H}=0\quad\mbox{on}\ \Gamma,
\]
which in turn yields, by Holmgren's uniqueness principle (cf. \cite{CK}), that $\mathbf{E} =\mathbf{0}$ in $\mathbb{R}^3\backslash\overline{\Omega}$. Therefore, we arrive at the same contradiction as in \eqref{eq:aa6} and \eqref{eq:aa61}, which readily proves \eqref{eta}.
The proof is complete.
\end{proof}
It is recalled that the convex hull of $\Omega$, denoted by $\mathcal{CH}(\Omega)$, is the smallest convex set that contains $\Omega$. As a direct consequence of Theorem \ref{thm:uniqueness1}, we next show that the convex hull of a complex irrational obstacle can be uniquely determined by one far-field measurement. Furthermore, the boundary impedance parameter $\boldsymbol\eta$ can be partially identified as well. In fact, we have:
\begin{corollary}\label{co:84}
Consider a fixed triplet of $k\in\mathbb{R}_+$, $\mathbf{d}\in\mathbb{S}^2$ and $\mathbf{p}\in\mathbb{R}^3\backslash\{\mathbf{0}\}$.
Let $(\Omega, \boldsymbol\eta)$ and $(\widetilde\Omega, \widetilde{\boldsymbol\eta})$ be
two admissible complex irrational obstacles, with $\mathbf{E}_\infty$ and $\widetilde{\mathbf{E}}_\infty$ being
their corresponding far-field patterns.
If $\mathbf{E}_\infty$ and $\widetilde{\mathbf{E}}_\infty$ satisfy \eqref{eq:cond1}, then one has that
\begin{equation}\label{eq:cond2}
\mathcal{CH}(\Omega)=\mathcal{CH}(\widetilde\Omega):=\Sigma,
\end{equation}
and
\begin{equation}\label{eq:cond3}
\boldsymbol \eta=\widetilde{\boldsymbol \eta}\ \ \mbox{on}\ \ \partial\Omega\cap\partial\widetilde\Omega\cap \partial\Sigma.
\end{equation}
\end{corollary}
Corollary~\ref{co:84} implies that if the underlying polyhedral obstacle is convex, then one can uniquely determine the obstacle as well as its boundary impedance by a single far-field pattern. As a further application of the UCP results established in this work, we consider the unique determination of a rather general class of non-convex obstacles. To that end, we first introduce the aforesaid class of non-convex obstacles.
In the sequel, we denote by ${\boldsymbol P}_{S}(\mathbf{x})$ the projection of a point $\mathbf{x}\in\mathbb{R}^3$ onto a set $S$. Let $\partial (\mathcal{CH}(\Omega))=\{\Sigma_\ell~|~\ell=1,\ldots, N\}$, where $\Sigma_\ell$, $\ell=1,\ldots, N$, are the finitely many faces of $\mathcal{CH}(\Omega)$. Let $\mathcal{V}(\Omega)$ and $\mathcal{V}(\mathcal{CH}(\Omega))$ denote, respectively, the sets of vertices of $\Omega$ and $\mathcal{CH}(\Omega)$. It is known that $\mathcal{V}(\mathcal{CH}(\Omega))\subset\mathcal{V}(\Omega)$. For any vertex $\mathbf{v} \in \mathcal{V}(\Omega) \backslash \mathcal{V}(\mathcal{CH}(\Omega))$, we consider the projection ${\boldsymbol P}_{\Sigma_j} (\mathbf{v})$, where $\Sigma_j\subset\partial(\mathcal{CH}(\Omega))$ is a face. Here and in what follows, $\mathbf{v}-{\boldsymbol P}_{\Sigma_j}(\mathbf{v})$ denotes the open line segment connecting $\mathbf{v}$ and ${\boldsymbol P}_{\Sigma_j}(\mathbf{v})$. It is assumed that there exists at least one $\Sigma_j$ such that $\mathbf{v}-{\boldsymbol P}_{\Sigma_j}(\mathbf{v})\subset\mathbb{R}^3\backslash\Omega$. Then for a face $\Sigma_\ell\subset\partial(\mathcal{CH}(\Omega))$ we say that $\mathbf{v}\vdash\Sigma_\ell$ if
\begin{equation}\label{eq:projection}
\mathbf{v}-{\boldsymbol P}_{\Sigma_\ell}(\mathbf{v})=\argmin_{\mathbf{v}-{\boldsymbol P}_{\Sigma_j} (\mathbf{v}) \in \mathbb R^3 \backslash \Omega, \forall \Sigma_j\subset\partial(\mathcal{CH}(\Omega)) } \left|\mathbf{v}-{\boldsymbol P}_{\Sigma_j} (\mathbf{v}) \right|.
\end{equation}
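The selection rule \eqref{eq:projection} can be sketched numerically as follows (our illustration, not part of the formal development): each face is idealised as a plane given by a point and a unit normal, and the geometric constraint that the segment $\mathbf{v}-{\boldsymbol P}_{\Sigma_j}(\mathbf{v})$ lies in $\mathbb{R}^3\backslash\Omega$ is abstracted into a user-supplied predicate.

```python
# Illustrative sketch (ours) of the argmin selection rule: project a vertex v
# onto each candidate face, discard faces whose connecting segment fails the
# visibility predicate, and keep the face of minimal projection distance.

def project_onto_plane(v, p0, n):
    """Orthogonal projection of the point v onto the plane through p0 with
    unit normal n; returns (projection point, distance)."""
    d = sum((vi - pi) * ni for vi, pi, ni in zip(v, p0, n))
    return [vi - d * ni for vi, ni in zip(v, n)], abs(d)

def nearest_visible_face(v, faces, segment_outside):
    """Index of the face realizing the argmin in the selection rule, where
    faces is a list of (point, unit normal) pairs and segment_outside(v, p)
    models the constraint that the segment from v to p avoids Omega."""
    best = None
    for j, (p0, n) in enumerate(faces):
        proj, dist = project_onto_plane(v, p0, n)
        if not segment_outside(v, proj):
            continue
        if best is None or dist < best[1]:
            best = (j, dist)
    return None if best is None else best[0]
```

For a vertex at $(0,0,1)$ and candidate faces lying in the planes $\{z=0\}$ and $\{x=3\}$, the rule selects the former, at distance $1$.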
\begin{definition}\label{def:85}
Let $\Omega$ be an admissible polyhedral obstacle, let $\Sigma_l$ be a given face of $\partial(\mathcal{CH}(\Omega))$, and let $\mathcal{V}_\mathcal{C}$ be a given set of finitely many, discrete and distinct points on $\Sigma_l$. $\Omega$ is said to be uniformly concave with respect to $\mathcal{V}_\mathcal{C}$ if for all $\mathbf{v}\in\mathcal{V}(\Omega)\backslash\mathcal{V}(\mathcal{CH}(\Omega))$, one has $\mathbf{v}\vdash\Sigma_l$ and
\[
\{{\boldsymbol P}_{\Sigma_l}(\mathbf{v})~|~\mathbf{v}\in\mathcal{V}(\Omega)\backslash\mathcal{V}(\mathcal{CH}(\Omega))\}=\mathcal{V}_\mathcal{C}.
\]
\end{definition}
\begin{figure}[htbp]
\centering
\vspace*{-1cm} \includegraphics[width=0.3\linewidth]{convex}\\[-25pt]
\caption{Schematic illustration of two different uniformly concave hexahedrons $ABCDE_1$ and $ABCDE_2$ with $\mathcal{CH}(ABCDE_1)=\mathcal{CH}(ABCDE_2)=ABCD$.}
\label{fig:convex}
\end{figure}
As a simple illustrative example of Definition \ref{def:85}, we consider two different uniformly concave hexahedrons $\Omega_1:= ABCDE_1$ and $\Omega_2:=ABCDE_2$ shown in Figure \ref{fig:convex}. It is easy to see that $\Omega_1$ and $\Omega_2$ have the same convex hull, which is the tetrahedron $ABCD$. The vertices $E_1$ and $E_2$ of $\Omega_1$ and $\Omega_2$ have the same projection point on the face $\Sigma:=BCD$ of the convex hull $ABCD$. It is pointed out that the vertex corner $\mathcal{V}(BE_2C,CE_2D,BE_2D, E_2)\in \partial \mathbf G$, where $BE_2C,CE_2D,BE_2D$ are faces of $\Omega_2$ and $\mathbf{G}=\mathbb R^3 \backslash \overline{(\Omega_1 \cup \Omega_2)}$.
\begin{theorem}\label{thm:uniqueness2}
Consider a fixed triplet of $k\in\mathbb{R}_+$, $\mathbf{d}\in\mathbb{S}^2$ and $\mathbf{p}\in\mathbb{R}^3\backslash\{\mathbf{0}\}$.
Let $(\Omega, \boldsymbol\eta)$ and $(\widetilde\Omega, \widetilde{\boldsymbol\eta})$ be
two uniformly concave irrational admissible polyhedral obstacles with respect to the set $\mathcal{V}_{\mathcal C}$, with $\mathbf{E}_\infty$ and $\widetilde{\mathbf{E}}_\infty$ being
their corresponding far-field patterns.
If $\mathbf{E}_\infty$ and $\widetilde{\mathbf{E}}_\infty$ satisfy \eqref{eq:cond1}, then
$$
\Omega =\widetilde \Omega \mbox{ and } \boldsymbol \eta=\widetilde{\boldsymbol \eta}.
$$
\end{theorem}
\begin{proof}
We prove this theorem by contradiction. Assume that $\Omega \neq \widetilde \Omega $ but \eqref{eq:cond1} is still fulfilled. From Corollary \ref{co:84}, we have $\mathcal{CH}(\Omega)= \mathcal{CH}(\widetilde{ \Omega} )$, which implies that the vertices of $\Omega$ contributing to $\mathcal{CH}(\Omega)$ are the same as the corresponding vertices of $\widetilde \Omega$ contributing to $\mathcal{CH}(\widetilde{ \Omega} )$. We shall prove that there must exist an edge-corner $\mathcal{E}(\Pi_1,\Pi_2,\mathbf{x}_c) \in \partial \mathbf G$, where $\mathbf{G}$ is
the unbounded connected component of $\mathbb{R}^3\backslash\overline{(\Omega\cup\widetilde\Omega)}$. Since $\Omega \neq \widetilde \Omega $, there exists an edge $ \boldsymbol {l} \subset \partial \Omega \backslash \partial \widetilde \Omega$ or $ \boldsymbol {l} \subset \partial \widetilde \Omega \backslash \partial \Omega$. Without loss of generality, we assume that $ \boldsymbol {l} \subset \partial \widetilde \Omega \backslash \partial \Omega $.
In the sequel, we let ${\mathbf a}_{\boldsymbol l } $ and ${\mathbf b}_{\boldsymbol{l} }$ denote the two vertices of the line segment $ \boldsymbol {l} $. We divide our remaining proof into two separate cases.
\medskip
\noindent {\bf Case 1.}~Suppose that ${\mathbf a}_{\boldsymbol l } \in \mathcal{V}(\mathcal{CH}(\widetilde \Omega))$ and ${\mathbf b}_{\boldsymbol{l} } \in \mathcal{V}(\mathcal{CH}(\widetilde \Omega)) $. Therefore, $ \boldsymbol {l} \subset \partial \mathbf{G} \cap \partial \widetilde{ \Omega}$. There exists a point $\mathbf{x}_c \in \boldsymbol {l} $ and a sufficiently small $h\in \mathbb R_+$ such that
\begin{equation*}\label{eq:aa2}
B_h(\mathbf x_c)\cap \partial\widetilde\Omega=\widetilde\Pi_\ell,\quad \ell=1,2,
\end{equation*}
where $\widetilde\Pi_\ell$ are two flat subsets lying on the faces of $\widetilde\Omega$ that intersect at $\mathbf x_c$. Clearly, $\mathbf{x}_c \in \boldsymbol {l} $ is an edge-corner point.
\medskip
\noindent {\bf Case 2.}~Suppose that at least one of ${\mathbf a}_{\boldsymbol l }$ and ${\mathbf b}_{\boldsymbol l }$ belongs to $ \mathcal{V} (\widetilde \Omega )\backslash \mathcal{V}(\mathcal{CH}(\widetilde \Omega)) $; namely, $ \mathbf{x}_c\in \mathcal{V} (\widetilde \Omega )\backslash \mathcal{V}(\mathcal{CH}(\widetilde \Omega)) $, where $ \mathbf{x}_c$ could be either ${\mathbf a}_{\boldsymbol l }$ or ${\mathbf b}_{\boldsymbol l }$. Since $\Omega$ and $\widetilde\Omega$ are uniformly concave admissible polyhedral obstacles with respect to the set $\mathcal{V}_{\mathcal C}$, there exists a face $\Sigma_\ell \Subset \partial(\mathcal{CH}(\Omega))$ such that $ \mathbf{x}_c \vdash \Sigma_\ell $ and $ \mathcal{V}_{\mathcal C} \Subset \Sigma_\ell$. Furthermore, we know that there exists a vertex ${\mathbf x}_{c,\Omega } \in \mathcal{V}(\Omega) \backslash \mathcal{V}(\mathcal{CH}(\Omega)) $ such that
$$
{\mathbf x}_{c,\Omega }\vdash \Sigma_\ell, \quad \boldsymbol{ P}_{\Sigma_\ell } \left({\mathbf x}_{c,\Omega }\right)=\boldsymbol{ P}_{\Sigma_\ell } \left({\mathbf x}_{c}\right) \in \mathcal{V}_{\mathcal C}.
$$
Since ${\mathbf x}_{c,\Omega }$ and ${\mathbf x}_{c}$ are distinct, it holds that
$$
{\mathrm d}\left( {\mathbf x}_{c }, \Sigma_\ell \right)\neq {\mathrm d}\left( {\mathbf x}_{c,\Omega }, \Sigma_\ell \right),
$$
where $ \mathrm {d}\left( {\mathbf x}_{c }, \Sigma_\ell \right) $ is the distance between $ {\mathbf x}_{c }$ and $\Sigma_\ell$. Without loss of generality, we may assume that ${\mathrm d}\left( {\mathbf x}_{c }, \Sigma_\ell \right)< {\mathrm d}\left( {\mathbf x}_{c,\Omega }, \Sigma_\ell \right)$. Hence, one can conclude that
$$
{\mathbf x}_{c } \in \partial \mathbf G,
$$
which also indicates that $\mathbf{x}_c$ lies outside $\Omega$. Let $h\in\mathbb{R}_+$ be sufficiently small such that $B_h({\mathbf x}_c)\Subset\mathbb{R}^3\backslash\overline \Omega $; then, due to the fact that the points of $\mathcal{V}_{\mathcal C}$ are discrete and distinct, we can conclude that
\begin{equation*}\label{eq:aa2}
B_h(\mathbf x_c)\cap \partial\widetilde\Omega=\widetilde \Pi_\ell,\quad \ell=1,2,
\end{equation*}
where $\widetilde \Pi_\ell$ are two plane cells lying on the faces of $\widetilde\Omega$ that intersect at $\mathbf x_c$.
The remaining proof is similar to that of Theorem \ref{thm:uniqueness1}, and is therefore omitted.
\end{proof}
Finally, we remark that in this section, we only consider the case that the underlying obstacle is irrational in order to make use of the strong unique continuation principle in Theorem~\ref{ir-2nodal}. That is, in the contradiction argument in proving Theorems~\ref{thm:uniqueness1} and \ref{thm:uniqueness2}, one can find an edge-corner that leads to the vanishing of the total wave field outside the obstacle by the strong unique continuation principle in Theorem~\ref{ir-2nodal}. However, we would like to emphasize that the same argument would work for the case that the underlying obstacle is of a general polyhedral shape, subject to some slight modifications. In fact, in such a case, it may happen that the edge-corner in the contradiction argument is rational, and hence, instead of Theorem~\ref{ir-2nodal}, one would need to make use of the finite vanishing order results in
Theorems~\ref{th:two imp}, \ref{thm:pec pmc}, \ref{thm:imp pec} and \ref{thm:imp pmc} to obtain that the total wave field is ``small'' around the edge-corner (compared to the total vanishing in the irrational case). Hence, a contradiction can be obtained if one requires that the total wave field outside the obstacle is everywhere ``big'', which can be fulfilled in certain scenarios of practical interest; see e.g. \cite{CDL1}. Nevertheless, we shall not explore this direction any further in this paper.
\subsection{Information-encoding for inverse problems and generalised Holmgren's uniqueness principle }
We recall the classical Holmgren's theorem for an elliptic PDO $\mathcal{P}$ with real-analytic coefficients (cf. \cite{TF}): if $\mathcal{P}\mathbf{u}$ is real analytic in a connected open neighbourhood of $\Omega$, then $\mathbf{u}$ is also real-analytic. Applying Holmgren's theorem to $\mathbf{u}=(\mathbf{E},\mathbf{H})$ in \eqref{eq:eig}, we immediately see that $(\mathbf{E},\mathbf{H})$ is real-analytic in $\Omega$. Let $\Gamma$ be an analytic surface in $\Omega$. Suppose that
\begin{equation}\label{eq:cond1l}
\nu\wedge\mathbf{E}=\mathbf{0}\quad\mbox{and}\quad\nu\wedge\mathbf{H}=\mathbf{0}\quad\mbox{on}\ \ \Gamma,
\end{equation}
then by the Cauchy-Kowalevski theorem, one readily has that $\mathbf{E}=\mathbf{H}\equiv \mathbf{0}$ in $\Omega$. This is known as Holmgren's uniqueness principle. In fact, in the proofs of Theorems~\ref{thm:uniqueness1} and \ref{thm:uniqueness2}, we have made use of Holmgren's principle in the case that $\Gamma$ is an open subset of a plane. In the sequel, to ease the exposition and with a slight abuse of notation, we simply refer to $\Gamma$ as a plane in such a case, though it may actually be an open subset of a plane. Our results established in Theorems~\ref{th:two imp}, \ref{thm:pec pmc}, \ref{thm:imp pec}, \ref{thm:imp pmc} and \ref{ir-2nodal} can be regarded as generalizations of Holmgren's uniqueness principle, as discussed in what follows.
Suppose that there are two planes $\widetilde\Pi_1$ and $\widetilde\Pi_2$ which intersect at a line segment $ \boldsymbol {l} $ within $\Omega$ (see Fig.~\ref{fig:coordinate1}), and
\begin{equation}\label{eq:gg1}
\nu\wedge\mathbf{E}=\mathbf{0}\ \ \mbox{on}\ \widetilde\Pi_1\quad\mbox{and}\quad\nu\wedge\mathbf{H}=\mathbf{0}\ \mbox{on}\ \widetilde\Pi_2.
\end{equation}
Let $\angle(\widetilde\Pi_1,\widetilde\Pi_2)=\alpha\pi$ and suppose that $\alpha=1/N$ with $N\in\mathbb{N}$. Then, according to Theorem~\ref{thm:pec pmc}, we know that the vanishing order of $\mathbf{E}$ around $ \boldsymbol {l} $ is at least $N$. Letting $N\rightarrow \infty$, we see that in the limiting case one has \eqref{eq:cond1l} with $\widetilde{\Pi}_1=\widetilde\Pi_2=\Gamma$, and the vanishing order becomes infinite. That is, the classical Holmgren's uniqueness principle associated with a plane $\Gamma$ for the Maxwell system \eqref{eq:eig} is the limiting case of our result in Theorem~\ref{thm:pec pmc}. It is rather remarkable that we have generalised this observation in three aspects. First, the angle between the two intersecting planes need not be infinitesimal, and hence the vanishing order may be finite. Second, if the angle is irrational, not necessarily infinitesimal, the vanishing order is still infinite. Third, the homogeneous condition on the plane can be the much more general impedance condition.
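As a concrete instance of the vanishing-order statement above (our illustrative restatement, reading ``vanishing order at least $N$'' as the corresponding local decay estimate):

```latex
For instance, if $\angle(\widetilde\Pi_1,\widetilde\Pi_2)=\pi/3$, i.e.
$\alpha=1/3$ and $N=3$, then for any $\mathbf{x}_0\in \boldsymbol{l}$,
\[
  |\mathbf{E}(\mathbf{x})|=\mathcal{O}\big(|\mathbf{x}-\mathbf{x}_0|^{3}\big)
  \quad\mbox{as}\ \ \mathbf{x}\rightarrow \mathbf{x}_0,
\]
whereas for an irrational angle, say $\alpha=\sqrt{2}/2$, the vanishing order
is infinite and the real-analyticity of $\mathbf{E}$ in a neighbourhood of
$\boldsymbol{l}$ forces $\mathbf{E}\equiv\mathbf{0}$ there.
```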
The application of the above observation to inverse problems can be described as follows. In inverse problems with electromagnetic probing, one usually sends a pair of incident fields and then collects the corresponding scattered wave data away from the inhomogeneous object; see \eqref{inverse} associated with \eqref{eq:forward}. In the following, we take \eqref{inverse} as a specific example to elucidate the basic idea. Usually, the collection of the data is made on an analytic surface, say $\Gamma$, in the form $(\nu\wedge\mathbf{E}|_{\Gamma}, \nu\wedge\mathbf{H}|_{\Gamma})$. Then, by Holmgren's principle, we know that the information encoded in $(\nu\wedge\mathbf{E}|_{\Gamma}, \nu\wedge\mathbf{H}|_{\Gamma})$ is equivalent to knowing the electromagnetic fields outside the scattering obstacle, namely in $\mathbb{R}^3\backslash\overline{\Omega}$, and hence is equivalent to the far-field pattern $\mathbf{E}_\infty/\mathbf{H}_\infty$. According to Theorem~\ref{ir-2nodal}, the measurement data can also be collected as $(\nu\wedge\mathbf{H}+\boldsymbol \eta_1\nu\wedge\mathbf{E}|_{\widetilde\Pi_1}, \nu\wedge\mathbf{H}+\boldsymbol \eta_2\nu\wedge\mathbf{E}|_{\widetilde\Pi_2})$ as long as $\widetilde\Pi_1$ and $\widetilde\Pi_2$ intersect within $\mathbb{R}^3\backslash\overline{\Omega}$ at an irrational angle. Clearly, due to analytic extension, it is not necessary for $\widetilde\Pi_1$ and $\widetilde\Pi_2$ to actually intersect each other. The irrational intersection may seem too restrictive, and one can relax it to a rational intersection with a large degree. Clearly, this conceptual information-encoding technique also works for other inverse electromagnetic scattering problems where the underlying object is not necessarily an impenetrable obstacle as considered in \eqref{inverse}. We hope that it might find practical applications in some special situations.
\section{Introduction}
\label{sec:intro}
Young L dwarfs are often noted to be redder than their field-age counterparts \citep{kirkpatrick2008, cruz2009, faherty2013, faherty2016, liu2016}. This is partly due to their lower surface gravities, which lead to lower pressures in their atmospheres, and hence reduced collision-induced absorption (CIA) by H$_2$ \citep{lin69}. Further, lower surface gravities can lead to higher altitude clouds, leading to less efficient gravitational settling of condensate particles (e.g., \citealt{madhu2011, helling2014}). Red near-infrared colors have been efficiently utilized to characterize and discover new young brown dwarfs and planetary-mass objects (e.g., \citealt{kellogg2015, schneider2017}). There also exists a population of red L dwarfs that do not have obvious signs of youth (e.g., \citealt{looper2008, kirkpatrick2010, marocco2014}). While the exact reasons for the red colors of these relatively high-gravity objects are not entirely clear, their spectra have been well-reproduced by the presence of micron or submicron-sized grains in their upper atmospheres \citep{marocco2014, hiranaka2016, charnay2018}. This high-altitude dust suppresses emission at shorter wavelengths much more efficiently than longer wavelengths, leading to significantly reddened spectra compared to ``normal'' brown dwarfs. There is evidence that the strength of silicate absorption features in the mid-infrared correlates with the near-infrared colors of L dwarfs \citep{burgasser2008,suarez2022}, indicating that variations in silicate cloud thickness also play a role. Further, viewing angle \citep{vos2017} and variability \citep{ashraf2022} have been shown to be related to the colors of substellar objects. There is also evidence that convective instabilities can produce similar effects as clouds in young red L dwarfs \citep{tremblin2017}. In any case, young red L dwarfs and old reddened L dwarfs have proven to be compelling laboratories for the study of low temperature substellar atmospheres.
The vast majority of the current population of directly-imaged planetary-mass companions are also young and have similar effective temperatures, masses, and radii as young L dwarfs, as well as observed properties, including unusually red near-infrared colors. Examples include 2M1207b \citep{chauvin2004, chauvin2005, patience2010}, HD 206893B \citep{milli2017, delorme2017, krammerer2021, meshkat2021, ward2021}, VHS J125601.92$-$125723.9B \citep{gauza2015}, 2MASS J22362452$+$4751425b \citep{bowler2017}, BD$+$60 1417B \citep{faherty2021}, HR8799bcd \citep{marois2008}, and HD 203030B \citep{metchev2006}. Young, red L dwarfs in the field provide an opportunity to study the physical properties of giant exoplanet-like atmospheres without the technical challenge of blocking host star light.
In this article, we present the discovery of CWISE J050626.96$+$073842.4 (CWISE J0506$+$0738), an exceptionally red brown dwarf discovered as part of the Backyard Worlds: Planet 9 (BYW) citizen science project \citep{kuchner2017}. We detail its discovery in Section \ref{sec:discovery}, present Keck/NIRES spectroscopic follow-up observations in Section \ref{sec:obs}, analyze these data in Section \ref{sec:anal}, and discuss CWISE J0506$+$0738 in the context of other red brown dwarfs in Section \ref{sec:discussion}.
\section{Discovery of CWISE 0506+0738}
\label{sec:discovery}
CWISE J0506$+$0738 was submitted as an object of interest to the BYW project by citizen scientists Austin Rothermich, Arttu Sainio, Sam Goodman, Dan Caselden, and Martin Kabatnik because it had notable motion amongst epochs of WISE observations. BYW uses unWISE images \citep{lang2014, meisner2018} covering the 2010--2016 time frame and is typically sensitive to objects with proper motions $\gtrsim$ 0\farcs05--0\farcs1 yr$^{-1}$. As part of the initial investigation to evaluate whether or not CWISE J0506$+$0738 was a newly discovered substellar object, we gathered available photometry from the Two Micron All-Sky Survey (2MASS) reject catalog \citep{skrutskie2006, tmass2006}, the United Kingdom Infrared Telescope (UKIRT) Hemisphere Survey DR1 (UHS; \citealt{dye2018}), and the CatWISE 2020 main catalog \citep{marocco2021}, and determined a photometric spectral type of $\sim$L7.5 using the method described in \cite{schneider2016a}. It was noted during the initial evaluation of this object that its $J-K$ color, using UHS $J$- and 2MASS $K$-band photometry, was exceptionally red ($J-K$ = 3.17$\pm$0.21 mag), more than half a magnitude redder than the reddest known free-floating L dwarf, PSO J318.5338$-$22.8603 ($J-K$ = 2.64$\pm$0.02 mag; \citealt{liu2013}). An inspection of 2MASS, UHS, WISE, and Pan-STARRS DR2 \citep{magnier2020} images showed no sources of contamination, suggesting that the near-infrared colors accurately reflect the true spectral energy distribution of the source (Figure \ref{fig:finder}).
The astrometry and photometry of CWISE J0506$+$0738 were further analyzed using measurements from the UHS DR2 catalog, which will provide $K$-band photometry for much of the northern hemisphere (Bruursema et al.~in prep.). CWISE J0506$+$0738 was found to have a $K$-band magnitude of 15.513$\pm$0.022 mag, consistent with the previous 2MASS measurement but significantly more precise. This measurement results in a UHS $(J-K)_{\rm MKO}$ color of 3.24$\pm$0.10 mag, slightly redder than, but consistent with, the color derived from UHS $J$- and 2MASS $K$-band photometry. We therefore considered this candidate a high-priority target for follow-up spectroscopic observations.
\begin{figure*}
\plotone{Figure1.pdf}
\caption{Images of CWISE J0506$+$0738 from 2MASS (upper left and center), UHS (bottom left and center), Pan-STARRS (upper right, three-color image with $g/i/y$ bands), and WISE (lower right, three-color image with $W1/W2/W3$ bands). The position of CWISE J0506$+$0738 as determined in the UHS $K$-band images is denoted by a red circle. Note that CWISE J0506$+$0738 is undetected at 2MASS $J$ and in the Pan-STARRS 3-color image, but clearly detected in the 2MASS $K$-band, UHS, and WISE images. The greenish hue of CWISE J0506$+$0738 in the WISE images shows that this object is significantly brighter at WISE channel W2 (4.6 $\mu$m) than WISE channel W1 (3.4 $\mu$m) or W3 (12 $\mu$m), typical of brown dwarfs with late-L or later spectral types.}
\label{fig:finder}
\end{figure*}
\begin{deluxetable}{lcc}
\tablecaption{Properties of CWISE J050626.96$+$073842.4\label{tab:cwise0506}}
\tablehead{
\colhead{Parameter} & \colhead{Value} & \colhead{Ref.}}
\startdata
R.A. (\degr) (epoch=2022.7)\tablenotemark{a} & 76.6124377 & 1 \\
Dec. (\degr) (epoch=2022.7)\tablenotemark{a} & 7.6449299 & 1 \\
R.A. (\degr) (epoch=2017.8)\tablenotemark{a} & 76.6123885 & 2 \\
Dec. (\degr) (epoch=2017.8)\tablenotemark{a} & 7.6450716 & 2 \\
$\mu_{\alpha}$ (mas yr$^{-1}$) & 31.5$\pm$2.6 & 1\\
$\mu_{\delta}$ (mas yr$^{-1}$) & $-$82.7$\pm$2.7 & 1\\
$d$\tablenotemark{b} (pc) & 32$^{+4}_{-3}$ & 1 \\
RV (km s$^{-1}$) & +16.3$^{+8.8}_{-7.7}$ & 1 \\
$J_{\rm MKO}$ (mag) & 18.487$\pm$0.017 & 1 \\
$K_{\rm MKO}$ (mag) & 15.513$\pm$0.022 & 2 \\
W1 (mag) & 14.320$\pm$0.015 & 3 \\
W2 (mag) & 13.552$\pm$0.013 & 3 \\
Sp.~Type & L8$\gamma$--T0$\gamma$ & 1 \\
\enddata
\tablenotetext{a}{R.A. and Dec. values are given in the ICRS coordinate system.}
\tablenotetext{b}{Photometric distance estimate based on the UHS $K_{\rm MKO}$-band magnitude and the absolute magnitude-spectral type relation in \citealt{dupuy2012} (see Section \ref{sec:dist}).}
\tablerefs{ (1) This work; (2) UHS DR2 (\citealt{dye2018}, Bruursema et al.~in prep); (3) CatWISE 2020 \citep{marocco2021} }
\end{deluxetable}
\section{Observations}
\label{sec:obs}
\subsection{UKIRT/WFCAM}
\label{sec:ukirt}
In an effort to refine the astrometry and photometry of CWISE J0506$+$0738, we observed it with the $J_{\rm MKO}$ filter on the infrared Wide-Field Camera (WFCAM; \citealt{casali2007}) on UKIRT on 20 September 2022. Observations were performed using a 3 $\times$ 3 microstepping pattern, with the resulting 9 images interleaved \citep{dye2006} to provide improved sampling over that of a single WFCAM exposure. The microstepping sequence was repeated five times, resulting in 45 single exposures each lasting 20 seconds, for a total exposure time of 900 seconds. We re-registered the world coordinate system (WCS) of each interleaved frame using the Gaia DR3 catalog \citep{gaia2022}. Images were then combined using the \texttt{imstack} routine from the \texttt{CASUTOOLS} package\footnote{http://casu.ast.cam.ac.uk/surveys-projects/software-release} \citep{irwin2004}. The position and photometry of CWISE J0506$+$0738 were extracted using the \texttt{CASUTOOLS} \texttt{imcore} routine.
Combining the position of this $J$-band observation with the UHS $K$-band observation, we calculated proper motion components of $\mu_{\alpha}$ = 31.5$\pm$2.6 mas yr$^{-1}$ and $\mu_{\delta}$ = $-$82.7$\pm$2.7 mas yr$^{-1}$. CatWISE 2020 reports proper motions of $\mu_{\alpha}$ = 44.2$\pm$7.9 mas yr$^{-1}$ and $\mu_{\delta}$ = $-$97.5$\pm$8.4 mas yr$^{-1}$ (with offset corrections applied according to \citealt{marocco2021}). The proper motion calculated from our UKIRT observations is significantly more precise than the proper motion measurements from CatWISE 2020, and we adopt the former for our analysis.
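The two-epoch proper-motion arithmetic can be sketched as follows. This is a minimal illustration, not the astrometric pipeline actually used; it is applied here to the rounded positions and epochs from Table \ref{tab:cwise0506}, so it will not reproduce the quoted measurement exactly.

```python
import math

def proper_motion(ra1, dec1, epoch1, ra2, dec2, epoch2):
    """Proper-motion components in mas/yr from two ICRS positions (deg).

    mu_alpha includes the cos(dec) factor (the mu_alpha* convention
    used for the values quoted in Table 1).
    """
    dt = epoch2 - epoch1                                  # baseline in years
    cosd = math.cos(math.radians(0.5 * (dec1 + dec2)))
    mu_alpha = (ra2 - ra1) * cosd * 3.6e6 / dt            # 1 deg = 3.6e6 mas
    mu_delta = (dec2 - dec1) * 3.6e6 / dt
    return mu_alpha, mu_delta

# Rounded positions/epochs from Table 1 (UHS epoch 2017.8; UKIRT epoch 2022.7):
mu_a, mu_d = proper_motion(76.6123885, 7.6450716, 2017.8,
                           76.6124377, 7.6449299, 2022.7)
```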
We measure a $J_{\rm MKO}$-band magnitude of 18.487$\pm$0.017 mag from these observations, which is $>$2$\sigma$ brighter than the value from the UKIRT Hemisphere Survey (18.76$\pm$0.10 mag). To verify our measured photometry, we compared the photometry of other sources with similar magnitudes (18.4 $< J <$ 18.6 mag) in our images to UHS values. We found the median $J$-band difference for the 52 objects in this sample to be $-$0.03 mag, with a median absolute deviation of 0.07 mag, showing that differences as large as that measured for this object (0.27 mag) are relatively rare. The origin of the difference between these $J$-band measurements is unclear, though we note that variability may be a contributing factor, as young (and red) objects often show larger amplitude variability than field-age objects with similar spectral types (e.g., \citealt{vos2022}). While this new $J$-band measurement results in bluer $(J-K)_{\rm MKO}$ = 2.97$\pm$0.03 mag and $J_{\rm MKO}-$W2 = 4.94$\pm$0.02 mag colors, both remain significantly redder than those of any previously identified free-floating brown dwarf.
All UKIRT photometry and astrometry for CWISE J0506$+$0738 are provided in Table \ref{tab:cwise0506}.
\subsection{Keck/NIRES}
CWISE J0506$+$0738 was observed with the Near-Infrared Echellette Spectrometer (NIRES; \citealt{wilson2004}) mounted on the Keck II telescope on UT 19 January 2022. NIRES provides a resolution $\lambda/\Delta\lambda$ $\approx$ 2700 over five cross-dispersed orders spanning a wavelength range of 0.9--2.45 $\mu$m. CWISE J0506$+$0738 was observed in four 250 second exposures nodded in an ABBA pattern along the slit, which was aligned with the parallactic angle, for a total on-source integration time of 1000 seconds. The spectrum was extracted using a modified version of the SpeXTool package \citep{vacca2003, cushing2004}, with the A0~V star HD 37887 ($V$ = 7.67) used for telluric correction. The large $J-K$ color of CWISE J0506$+$0738 resulted in significant signal-to-noise (S/N) differences across the final reduced spectrum, with S/N $\sim$25 at the $J$-band peak ($\sim$1.3~$\mu$m) and S/N $\sim$200 at the $K$-band peak ($\sim$2.2~$\mu$m).
The inter-band flux calibration for Keck/NIRES orders is occasionally skewed by seeing or differential refraction slit losses. In particular, there is a gap between the third ($K$-band) and fourth ($H$-band) orders spanning 1.86 to 1.89 $\mu$m\footnote{https://www2.keck.hawaii.edu/inst/nires/genspecs.html}, and the overlap between the fourth and fifth ($J$-band) orders lies in a region of strong telluric and stellar H$_2$O absorption. We therefore re-scaled the resulting spectrum to have a $J-K$ synthetic color consistent with UKIRT $J$-band and UHS $K$-band photometry by applying small multiplicative constants to the $H$- and $K$-band portions of the spectrum. The final reduced spectrum is shown in Figure \ref{fig:spectrum}.
\begin{figure*}
\plotone{Figure2.pdf}
\caption{The Keck/NIRES spectrum of CWISE J0506$+$0738, shown
in the original resolution (grey lines) and smoothed to a resolution of $\lambda/\Delta\lambda$ $\approx$ 100 (black lines).
CWISE J0506$+$0738 is compared to the L7 spectral standard 2MASSI J0825196$+$211552 \citep{kirkpatrick2000, cruz2018} in the top panel, and the young L7 VL-G dwarf PSO J318.5338$-$22.8603 \citep{liu2013} in the bottom panel. Both comparisons highlight the extremely red nature of CWISE J0506$+$0738. All spectra are normalized between 1.27 and 1.29 $\mu$m, and prominent absorption features have been labeled.
}
\label{fig:spectrum}
\end{figure*}
\section{Analysis}
\label{sec:anal}
\subsection{Spectral Type}
\label{sec:spt}
As with many of the known, young, late-type red L dwarfs, none of the L dwarf spectral standards \citep{kirkpatrick2010, cruz2018} provide a suitable match to the near-infrared spectrum of CWISE J0506$+$0738. The best match to the $J$-band portion of the spectrum is the L7 standard 2MASSI J0825196$+$211552 \citep{kirkpatrick1999,cruz2018}, which is shown in the top panel of Figure \ref{fig:spectrum}. CWISE J0506$+$0738 shows much stronger H$_2$O absorption around 1.1 $\mu$m, a feature commonly seen in low-gravity L dwarfs. This comparison also shows how red CWISE J0506$+$0738 is compared to a normal, field-age/field-gravity late-L dwarf. The bottom panel of Figure \ref{fig:spectrum} shows a comparison of CWISE J0506$+$0738 with PSO J318.5338$-$22.8603 \citep{liu2013}, which is typed as L7 VL-G in that work. These two objects match relatively well across the $J$-band portion of the spectrum, though the extreme redness of CWISE J0506$+$0738 can still be seen in this comparison via the mismatch in the $H$- and $K$-band portions of their spectra.
We also note that the spectrum of CWISE J0506$+$0738 has a noticeable absorption feature at the $H$-band peak. There is also a second, less-pronounced absorption feature present in the $K$-band portion of CWISE J0506$+$0738's spectrum between 2.2 and 2.3 $\mu$m. While we cannot {\em a priori} rule out systematic noise or a data reduction artifact for these features, we note that no similar features have been seen in Keck/NIRES spectra of L dwarfs obtained and reduced by our group (e.g., \citealt{meisner2021, schapera2022, softich2022, theissen2022}). We also note that these features occur at the approximate locations of CH$_4$ absorption seen in model spectra of low-surface gravity brown dwarfs with effective temperatures $\lesssim$1400 K. Figure \ref{fig:ch4} compares solar-metallicity model spectra from \cite{marley2021} with surface gravity fixed at $\log g = 3.5$ and varying effective temperatures. Prominent methane absorption features can be seen in the $H$- and $K$-bands for \teff\ $\lesssim$1400 K. While these models are informative for (potentially) identifying the source of some of the absorption features seen in the spectrum of CWISE J0506$+$0738, we were unable to find any models that successfully reproduce the overall shape of CWISE J0506$+$0738's spectrum, similar to previous studies of young brown dwarfs (e.g., \citealt{manjavacas2014}).
\begin{figure*}
\plotone{Figure3.pdf}
\caption{Model spectra from \cite{marley2021} with varying effective temperatures and surface gravity fixed at $\log g = 3.5$. The gray bands highlight the approximate regions of the absorption features seen in the spectrum of CWISE J0506$+$0738. }
\label{fig:ch4}
\end{figure*}
The presence of CH$_4$ in the $H$- and $K$-band peaks of CWISE J0506$+$0738's spectrum would suggest that this source is an early T dwarf \citep{burgasser2006}, although these features are fairly weak in strength. \cite{charnay2018} showed that the presence of clouds can greatly reduce the abundance of CH$_4$ in the photospheres of low-gravity objects, a possible explanation for the absence of CH$_4$ bands in the spectra of 2M1207b and HR8799bcd \citep{barman2011a, barman2011b, konopacky2013}. If the same effect holds here, it would argue for a particularly low temperature for CWISE J0506$+$0738, below that of the {\teff} $\approx$ 1200~K planetary-mass L dwarf PSO J318.5338$-$22.8603 and of VHS 1256$-$1257B \citep{liu2013, gauza2015}, which originally showed no indication of CH$_4$ absorption in the 1--2.5~$\mu$m region\footnote{Recent high S/N {\em JWST}/NIRSPEC observations of VHS~1256$-$1257B have revealed the presence of weak 1.6~$\micron$ absorption in its spectrum \citep{miles2022}.}. These two sources do have detectable absorption in the 3.3 $\mu$m $\nu_3$ CH$_4$ fundamental band \citep{miles2018}, and cloud scattering opacity is likely responsible for muting the 1.6~$\mu$m and 2.2~$\mu$m bands in these red L dwarfs \citep{charnay2018, burningham2021}. Indeed, it has been noted previously that PSO J318.5338$-$22.8603 is just on the warmer side of the transition to CH$_4$ becoming the dominant carbon-bearing molecule in its atmosphere \citep{tremblin2017}. We tentatively assert that both the $H$- and $K$-band features in the spectrum of CWISE J0506$+$0738 are due to CH$_4$ absorption, an assertion that may be tested with more detailed analysis (e.g., atmospheric retrievals; \citealt{burningham2017, burningham2021}) and higher S/N moderate-resolution data.
Given the similarity of the $J$-band portion of CWISE J0506$+$0738's spectrum to PSO J318.5338$-$22.8603 (L7 VL-G), and likely detection of CH$_4$ in the $H$- and $K$-bands, we assign a near-infrared spectral type of L8$\gamma$--T0$\gamma$ to CWISE J0506$+$0738, where the $\gamma$ signifies very low surface gravity \citep{kirkpatrick2005}.
\subsection{Spectral Evidence of Youth}
\label{sec:youth}
The characterization of brown dwarfs and planetary mass objects as ``low surface gravity'' or ``young'' typically arises from gravity-sensitive (or more specifically, photosphere pressure-sensitive) spectral features quantified by spectral indices (e.g., \citealt{steele1995, martin1996, luhman1997, gorlova2003, mcgovern2004, kirkpatrick2006, allers2007, manjavacas2020}). Many of these spectral indices, however, are designed for optical spectra (e.g., \citealt{cruz2009}) or are only applicable to objects with spectral types earlier than $\sim$L5 (e.g., \citealt{allers2013, lodieu2018}). The $H$-cont index is a gravity-sensitive index defined in \cite{allers2013} that is one of the few gravity-sensitive indices applicable to spectral types later than L5. This index is designed to approximate the slope of the blue side of the $H$-band peak, with low-gravity objects exhibiting a much steeper slope than field-age brown dwarfs. However, this index is defined using a band centered at 1.67 $\mu$m, which is where a feature potentially attributable to CH$_4$ occurs in our spectrum. Thus the $H$-cont index does not provide an accurate assessment of the slope of the blue side of the $H$-band peak for this object.
We have created a modified slope index for the blue side of the $H$-band peak by computing a simple linear least-squares fit to the 1.45--1.64 $\mu$m region after normalizing to the $J$-band peak between 1.27 and 1.29 $\mu$m. We measured this slope (normalized flux/$\mu$m) for several late-L and early-T dwarfs, both field and young association members, as shown in Figure~\ref{fig:slope-index}. We note that the largest slope for the entire sample belongs to WISE J173859.27$+$614242.1, an object that has been difficult to classify \citep{mace2013}, but is most consistent with an extremely red L9 \citep{thompson2013}. It is unclear if this object is young, has an extremely dusty photosphere, or both. For typical L7--T0 dwarfs, $H$-slope values for field objects span 2--4, while equivalently classified young L dwarfs span 3--5. For CWISE J0506$+$0738, we find a slope of 4.38, significantly larger than that of field-age late-L dwarfs. The known population of young, very red L dwarfs similarly has larger $H$-slope values than their field-age counterparts.
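The modified index amounts to a normalization followed by a linear fit; a minimal sketch (our illustration, assuming wavelengths in microns on a numpy grid) is:

```python
import numpy as np

def h_band_slope(wave, flux):
    """Modified H-band slope index.

    Normalize the spectrum to the mean flux of the J-band peak
    (1.27-1.29 microns), then least-squares fit a line to the
    1.45-1.64 micron region; returns the slope in normalized
    flux per micron.
    """
    wave, flux = np.asarray(wave), np.asarray(flux)
    jpeak = (wave >= 1.27) & (wave <= 1.29)
    norm = flux / np.mean(flux[jpeak])
    hblue = (wave >= 1.45) & (wave <= 1.64)
    slope, _intercept = np.polyfit(wave[hblue], norm[hblue], 1)
    return slope
```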
\begin{figure}
\plotone{Figure4.pdf}
\caption{$H$-band slope index versus spectral type for field late-L and T dwarfs (colored circles) based on data from the SPLAT archive \citep{burgasser2017}, with colors corresponding to spectral type. Young L and T dwarfs are represented by purple squares. CWISE J0506$+$0738 (blue diamond) is an outlier amongst field-age late-Ls, similar to the young, late-type L dwarf population. Small offsets have been added to spectral type values for differentiation purposes.}
\label{fig:slope-index}
\end{figure}
\cite{schneider2014} also showed that the H$_2$($K$) index defined in \cite{canty2013} could distinguish young, low-gravity late-Ls from the field late-L population. The H$_2$($K$) index measures the slope of the $K$-band between 2.17~$\mu$m and 2.24~$\mu$m. CWISE J0506$+$0738 has an H$_2$($K$) value of 1.030, which is again consistent with the known population of low-gravity late-type L dwarfs (1.029 $\leq$ H$_2$($K$) $\leq$ 1.045) compared to field-age L6--L8 brown dwarfs (H$_2$($K$) $\gtrsim$ 1.05).
Other spectral features that have been used to distinguish low-surface gravity late-L dwarfs are the K~I absorption lines between 1.1 and 1.3 $\mu$m \citep{mcgovern2004,allers2013,miles2022}. Our Keck/NIRES spectrum does not have sufficient S/N around the $J$-band peak to investigate these lines. A higher S/N spectrum would help to remove any ambiguity regarding the surface gravity of CWISE J0506$+$0738.
\subsection{Radial Velocity}
\label{sec:rv}
The resolution of the Keck/NIRES data is sufficient to obtain a coarse measure of the radial velocity (RV) of CWISE J0506$+$0738, particularly in the vicinity of strong molecular features. We followed a procedure similar to that described in \citet{burgasser2015} (see also \citealt{blake2010,hsu2021}), forward-modeling the wavelength-calibrated spectrum prior to telluric correction in the 2.26--2.38~$\mu$m region. This spectral band contains the prominent 2.3~$\mu$m CO 2-0 band present in L dwarf spectra, as well as strong telluric features that allow refinement of the spectral wavelength calibration (cf. \citealt{newton2014}). We used a {\teff} = 1300~K, {\logg} = 4.5~dex (cgs) BTSettl atmosphere model ($M[\lambda]$) from \citet{allard2012}, which provides the best match to the CO band strength, and a telluric absorption model ($T[\lambda]$) from \citet{livingston1991}. We forward modeled the data ($D[\lambda]$) using four parameters: the barycentric radial velocity of the star (RV$_\oplus$), the strength of telluric absorption ($\alpha$), the instrumental Gaussian broadening profile width ($\sigma_{broad}$), and the wavelength offset from the nominal SpeXtool solution ($\Delta\lambda$):
\begin{equation}
D[\lambda] = \left(M[\lambda^*+\Delta\lambda]\times{T[\lambda+\Delta\lambda]^\alpha}\right)\otimes\kappa_G(\sigma_{broad})
\end{equation}
with $\lambda^* = \lambda(1+{RV_\oplus}/{c})$ accounting for the radial motion of the star and $\kappa_G$ representing the Gaussian broadening kernel. Preliminary fits that additionally included rotational broadening of the stellar spectrum indicated that this broadening was indistinguishable from the instrumental broadening and is likely unresolved ($v\sin{i}$ $\lesssim$ 65~km/s), so it was excluded from our final fit.
After an initial ``by-eye'' optimization of parameters, we used a simple Markov Chain Monte Carlo (MCMC) algorithm to explore the parameter space, evaluating the goodness of fit between model and data using a $\chi^2$ statistic. Figure~\ref{fig:rv} displays the posterior distributions of our fit parameters after removing the first half of the MCMC chain (``burn-in''); all are normally distributed. There is a small correlation between RV$_\oplus$ and $\Delta\lambda$, which is expected given that stellar and telluric features are intermixed in this region; this correlation increases the uncertainties of both parameters. We find that the best-fit model from this analysis is an excellent match to the NIRES spectrum, with residuals consistent with uncertainties. After correction for barycentric motion ($-$19.2~km/s), we determine a heliocentric radial velocity of +16.3$^{+8.8}_{-7.7}$~km s$^{-1}$ for CWISE J0506$+$0738.
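A stripped-down version of this forward-model fit can be sketched as follows. The stellar and telluric templates are stand-in callables rather than the BTSettl and telluric models used above, the Gaussian broadening is applied in pixel space, and only the RV is sampled while the other three parameters are held fixed; all of these are simplifications of the actual four-parameter fit.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def forward_model(wave, model_flux_fn, telluric_fn, rv, alpha, sigma_broad, dlam):
    """Sketch of Eq. (1): Doppler-shifted stellar model times telluric
    absorption raised to the power alpha, convolved with a Gaussian.

    model_flux_fn and telluric_fn are interpolating functions of
    wavelength; sigma_broad is the Gaussian width in pixels here,
    a simplification of the velocity-space kernel.
    """
    lam_star = wave * (1.0 + rv / C_KMS)          # stellar Doppler shift
    spec = model_flux_fn(lam_star + dlam) * telluric_fn(wave + dlam) ** alpha
    x = np.arange(-25, 26)
    kern = np.exp(-0.5 * (x / sigma_broad) ** 2)  # Gaussian broadening kernel
    kern /= kern.sum()
    return np.convolve(spec, kern, mode="same")

def metropolis_rv(wave, data, unc, model_flux_fn, telluric_fn,
                  alpha, sigma_broad, dlam,
                  rv0=0.0, step=2.0, nsteps=3000, seed=0):
    """Minimal Metropolis sampler over RV alone, using a chi-square
    goodness-of-fit statistic (likelihood proportional to exp(-chi2/2))."""
    rng = np.random.default_rng(seed)

    def chi2(rv):
        m = forward_model(wave, model_flux_fn, telluric_fn,
                          rv, alpha, sigma_broad, dlam)
        return np.sum(((data - m) / unc) ** 2)

    rv, c2 = rv0, chi2(rv0)
    chain = []
    for _ in range(nsteps):
        rv_new = rv + step * rng.normal()
        c2_new = chi2(rv_new)
        # accept downhill moves always, uphill with Metropolis probability
        if c2_new < c2 or rng.random() < np.exp(0.5 * (c2 - c2_new)):
            rv, c2 = rv_new, c2_new
        chain.append(rv)
    return np.array(chain)
```

The first half of the returned chain would be discarded as burn-in, mirroring the treatment described above.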
\begin{figure*}
\plotone{Figure5.pdf}
\caption{MCMC forward model fit of the normalized 2.26--2.38~$\mu$m spectrum of CWISE J0506$+$0738 for RV measurement. The panels along the diagonal show the posterior distributions for our four fitting parameters: the barycentric radial velocity of the star (RV$_\oplus$ in km/s), the strength of the telluric absorption ($\alpha$), the instrumental gaussian broadening profile width ($\sigma_{broad}$ in km/s), and the wavelength offset from the nominal SpeXtool solution ($\Delta\lambda$ in {\AA}). The lower left panels illustrate correlations between parameters; only the RV and $\Delta\lambda$ parameters show a modest inverse correlation, effectively expanding the uncertainty on the RV measurement. The upper right corner shows the NIRES spectrum of CWISE J0506$+$0738 prior to telluric correction (black line) and the best-fit model spectrum (magenta line) composed of stellar model and telluric absorption components (offset lines above fit). Residuals (data minus model, blue line) are consistent with measurement uncertainties (grey band).
}
\label{fig:rv}
\end{figure*}
\begin{figure*}
\plotone{Figure6.pdf}
\caption{Color-color diagrams showing known brown dwarfs recovered in the UKIRT Hemisphere Survey (Schneider et al.~in prep), supplemented with known red L dwarfs from Table \ref{tab:redLs}. CWISE J0506$+$0738 is a clear outlier, being significantly redder than other known L dwarfs both in $J-K$ and $J-$W2 color. }
\label{fig:ccds}
\end{figure*}
\section{Discussion}
\label{sec:discussion}
\subsection{Redder than Red}
\label{sec:red}
CWISE J0506$+$0738 has exceptionally red colors compared to the known brown dwarf population. Figure \ref{fig:ccds} highlights this by comparing CWISE J0506$+$0738 to other UHS DR2 L and T dwarfs (Schneider et al.~in prep.) and red L dwarfs not covered by the UHS survey. Table \ref{tab:redLs} summarizes photometric and spectral type information for all known free-floating L dwarfs with $J-K$ colors greater than 2.2 mag. All photometry is on the MKO system and comes from the VISTA Hemisphere Survey (VHS; \citealt{mcmahon2013}), \cite{liu2016}, or \cite{best2021}. WISE J173859.27$+$614242.1 has no near-infrared MKO photometry in the literature or in available catalogs. For this source, we used its low-resolution near-infrared spectrum published in \citet{mace2013} normalized to its most precise $K$-band photometric measurement (2MASS $K_{\rm S}$; \citealt{skrutskie2006}), and then computed synthetic $J_{\rm MKO}$ and $K_{\rm MKO}$ photometry. Even amongst known red L dwarfs, CWISE J0506$+$0738 stands out as exceptionally red, being $\sim$0.3 mag redder in both $(J-K)_{MKO}$ and $J_{MKO}-$W2 color than all other known free-floating L dwarfs.
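The synthetic-photometry step used for WISE J173859.27$+$614242.1 can be sketched as below, under the photon-counting convention and on a uniform wavelength grid; the filter transmission curve and the band-averaged zero-point (Vega) flux are external inputs not reproduced here, so this is an illustration of the calculation rather than the exact procedure used.

```python
import numpy as np

def synthetic_mag(wave, flux, filt_wave, filt_trans, zp_flux):
    """Synthetic magnitude of a spectrum through a filter curve.

    Photon-counting convention (integrand weighted by wavelength);
    assumes a uniform wavelength grid so plain sums stand in for
    integrals. zp_flux is the band-averaged flux of the zero-point
    reference (e.g., Vega) in the same flux units.
    """
    T = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    band_flux = np.sum(flux * T * wave) / np.sum(T * wave)
    return -2.5 * np.log10(band_flux / zp_flux)
```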
Directly imaged planetary-mass companions also have exceptionally red near-infrared colors. Some of the L-type companions (Table~\ref{tab:redLs}) do not have {\em WISE} W1 (3.4 $\mu$m) and W2 (4.6 $\mu$m) photometry, but have equivalent Spitzer/IRAC photometry in ch1 (3.6 $\mu$m) and ch2 (4.5 $\mu$m). For HD 203030B, we use $J$- and $K$-band photometry from \citet{metchev2006} and \citet{miles2017}, and convert Spitzer/IRAC ch1 and ch2 photometry from \cite{martinez2022} using the Spitzer-WISE relations from \cite{kirkpatrick2021}. For VHS 1256$-$1257B, we use $J$- and $K$-band photometry from \cite{gauza2015}, and convert Spitzer/IRAC ch2 photometry from \cite{zhou2020} to W2 using the \cite{kirkpatrick2021} relation. We chose not to use the published W1 photometry of VHS 1256$-$1257B from \cite{gauza2015} because of its large uncertainty (0.5 mag). For BD$+$60 1417B, all photometry comes directly from \cite{faherty2021}. Both HD 203030B and BD$+$60 1417B are included in both panels of Figure \ref{fig:ccds}, while VHS 1256$-$1257B is included in the left panel of Figure \ref{fig:ccds}. We note that none of these companions have $(J-K)_{MKO}$ or $J_{MKO}-$W2 colors as red as CWISE J0506$+$0738. Of the remaining planetary-mass companions that lack 3--5~$\mu$m photometry, only 2M1207b ($J-K$=3.07$\pm$0.23 mag; \citealt{chauvin2004, chauvin2005, mohanty2007, patience2010}) and HD 206893B ($J-K$=3.36$\pm$0.08 mag; \citealt{milli2017, delorme2017, krammerer2021, meshkat2021, ward2021}) have redder $J-K$ colors than CWISE J0506$+$0738.
\subsection{WISE Photometric Variability}
Young brown dwarfs have been shown to have enhanced photometric variability compared to field-age brown dwarfs \citep{biller2015, metchev2015, schneider2018, vos2020, vos2022}. Most brown dwarfs with detected variability at 3--5 $\mu$m, measured largely with Spitzer/IRAC, have amplitudes of a few percent or less (see compilation in \citealt{vos2020}). Multi-epoch photometry from WISE generally does not have the precision to detect such variability (\citealt{mace2015}, Brooks et al.~submitted). However, objects with extremely high-amplitude variability could be distinguished in multi-epoch WISE data.
\begin{figure*}
\plotone{Figure7.pdf}
\caption{Standard deviation ($\sigma$) versus average magnitude over all single-exposure WISE/NEOWISE W1 (left) and W2 (right) detections of known brown dwarfs. Color contours indicate 16--84\% and 5--95\% confidence intervals in 0.5 magnitude bins. The insets on each panel show the difference between measured $\sigma$ values and polynomial fits to the magnitude trend. 2MASS J2139$+$0220 (dark green square), PSO J318.5338$-$22.8603 (light green circle), 2MASSW J0310599$+$164816 (light purple hexagon), CWISE J0506$+$0738 (cyan diamond), and WISE J052857.68$+$090104.4 (dark purple pentagon) are all highlighted as clear deviants from these trends. }
\label{fig:var}
\end{figure*}
Given tentative evidence of near-infrared photometric variability (see Section \ref{sec:ukirt}), we investigated WISE \citep{wright2010} and NEOWISE \citep{mainzer2011, mainzer2014} data for evidence of mid-infrared variability for CWISE J0506$+$0738. WISE/NEOWISE has been scanning the mid-infrared sky for over 10 years, and a typical location on the sky has been observed with the W1 and W2 filters every six months since early 2010.\footnote{With the exception of a $\sim$3 year gap between the initial WISE mission and reactivation as NEOWISE from February 2011 to December 2013.} During each $\sim$1 day visit, 10--15 individual exposures are typically acquired. We chose to analyze these single exposures as opposed to epochal coadds (e.g., ``unTimely''; \citealt{meisner2022}) both because CWISE J0506$+$0738 is brighter than the nominal threshold at which single-exposure photometry becomes unreliable, especially at W2 ($\sim$14.5 mag; \citealt{schneider2016a}), and because coadded frames would dilute any traces of short-term photometric variability. Such coadded photometry may prove useful for future investigations of long-term/long-period variability.
We gathered photometry from the WISE/NEOWISE Single Exposure Source Catalogs \citep{wise2020a, wise2020b, wise2020c, neowise2020} for CWISE J0506$+$0738 and the same set of known L, T, and Y dwarfs shown in Figure \ref{fig:ccds}. Collectively, these objects should have comparable levels of low-amplitude variability generally undetectable by WISE. For each source, we measured the average and standard deviation of both W1 and W2 magnitudes. We omit frames with {\it qual\_frame} values equal to zero, as these frames likely have contaminated flux measurements. Because single exposure frames are subject to astronomical transients (e.g., cosmic ray hits, satellite streaks), we excluded 4$\sigma$ outliers from the set of single exposure photometry for each source. We also excluded sources that were either blended or contaminated (e.g., bright star halos, diffraction spikes).
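The per-source statistics can be sketched as follows; whether the 4$\sigma$ rejection was applied once or iterated to convergence is our assumption (the sketch iterates), and the {\it qual\_frame} and blending cuts described above are applied before this step.

```python
import numpy as np

def clipped_stats(mags, nsigma=4.0, max_iter=5):
    """Mean and standard deviation of single-exposure magnitudes after
    iteratively rejecting outliers beyond nsigma (e.g., cosmic ray
    hits or satellite streaks)."""
    m = np.asarray(mags, dtype=float)
    m = m[np.isfinite(m)]
    for _ in range(max_iter):
        mu, sd = m.mean(), m.std(ddof=1)
        keep = np.abs(m - mu) <= nsigma * sd
        if keep.all():
            break
        m = m[keep]
    return m.mean(), m.std(ddof=1)
```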
Figure \ref{fig:var} compares mean and standard deviation values, which show clear trends in both W1 and W2 photometry. We immediately identify four objects with magnitudes between 12 and 14.5 that have photometric scatter above the 5--95\% confidence interval ($\gtrsim$2$\sigma$) in either W1 or W2.
\noindent
{\em 2MASS 21392676$+$0220226 (2MASS J2139$+$0220)} is a T1.5 dwarf \citep{burgasser2006} that is well-known for its large-amplitude infrared variability. \cite{radigan2012} monitored 2MASS J2139$+$0220 and found $J$-band variability with a peak-to-peak amplitude of $\sim$26\%, which until recent observations of VHS 1256$-$1257B \citep{zhou2022} was the highest amplitude variability found for any brown dwarf. Since the \cite{radigan2012} study, this object has been the subject of numerous variability investigations \citep{apai2013, khandrika2013, karalidi2015}, with \cite{yang2016} finding variability of 11--12\% in Spitzer/IRAC ch1 and ch2 photometry. The extreme variability of 2MASS J2139$+$0220 is attributed to variations in the thickness of silicate clouds \citep{apai2013, karalidi2015, vos2022b}. This object has also been shown to have a nearly edge-on inclination \citep{vos2017}, and is a kinematic member of the $\sim$200 Myr-old Carina-Near moving group \citep{zhang2021}.
\noindent
{\em WISE J052857.68$+$090104.4 (WISE~J0528$+$0901)} is a clear W1 outlier, originally classified as a late-M giant by \cite{thompson2013} but later reclassified as a very low-gravity L1 brown dwarf member of the $\sim$20 Myr 32 Orionis group \citep{burgasser2016}. This planetary-mass object has an anomalous $J-$W2 color, suggestive of excess flux at 5 $\mu$m, although \cite{burgasser2016} found no evidence of circumstellar material or cool companions. The source may also be variable in the W2 band, but its fainter magnitude there makes it less distinct from comparably bright L and T dwarfs. Nevertheless, these data suggest that WISE~J0528$+$0901 has an unusually dusty and variable atmosphere, making it a compelling source for future photometric monitoring.
\noindent
{\em PSO J318.5338$-$22.8603} is a clear W2 outlier and an exceptionally red $\beta$ Pic member that has been shown to have large-amplitude variability in the infrared \citep{biller2015, vos2019}, with a peak-to-peak amplitude of 3.4\% in Spitzer/IRAC ch2 photometry \citep{biller2018}. Interestingly, PSO J318.5338$-$22.8603 is an outlier in W2 and not in W1, which may indicate cloud depth effects given that the W1 and W2 bands probe different depths in the atmosphere.
\noindent
{\em 2MASSW J0310599$+$164816 (2MASS~J0310+1648AB)} is another W2 outlier, and is optically classified as L8 \citep{kirkpatrick2000}. This object is a resolved (0\farcs2) $\sim$equal brightness binary \citep{stumpf2010} that shows evidence of high-amplitude variability in the near-infrared \citep{buenzli2014}. While the variability observations were not long enough to determine a true amplitude or period, the measured brightening rate of $\sim$2\% per hour was the largest in that sample. While there is no clear evidence of youth for 2MASS~J0310+1648AB in the literature, this object was typed as L9.5 (sl.~red) by \cite{schneider2014}. Further investigation of the potential youth and cloud properties of this object may be warranted.
CWISE J0506$+$0738 joins this group of variability outliers, as one of very few objects with both W1 and W2 scatter outside the 16--84\% confidence interval of comparable-brightness L and T dwarfs. To estimate the amplitude of variability associated with these deviations, we fit tenth-order polynomials to the scatter versus magnitude trends in W1 and W2, and calculated RMS values by finding the magnitude offset (in quadrature) for our outlying targets. Assuming sinusoidal variability, RMS values can be converted to peak-to-peak amplitudes with a multiplicative factor of 2$\sqrt{2}$. Using the 16--84\% confidence region as uncertainties for the predicted values from the polynomial fits, we find peak-to-peak variability on the order of 13$\pm$1\% for W1 and 12$\pm$2\% for W2 for 2MASS J2139$+$0220, which is generally consistent with results from Spitzer \citep{yang2016}. For CWISE J0506$+$0738, we estimate 15$\pm$5\% variability for W1 and 23$\pm$9\% variability for W2. Variability at these levels would certainly be extraordinary; however, we caution that the relatively low precision of WISE/NEOWISE single exposure measurements may inflate these results. Future photometric and/or spectroscopic monitoring would help to explore the variability properties of CWISE J0506$+$0738.
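The conversion from excess scatter to a peak-to-peak amplitude is compact enough to write out; a sketch, with the sinusoidal assumption made explicit:

```python
import numpy as np

def peak_to_peak_amplitude(sigma_obs, sigma_pred):
    """Peak-to-peak variability amplitude (mag) implied by excess scatter.

    The intrinsic RMS is the observed scatter minus the scatter expected
    for a non-variable source of the same magnitude (from the polynomial
    fit), subtracted in quadrature. For a sinusoid of semi-amplitude A,
    RMS = A/sqrt(2), so peak-to-peak = 2A = 2*sqrt(2)*RMS.
    """
    rms = np.sqrt(sigma_obs**2 - sigma_pred**2)
    return 2.0 * np.sqrt(2.0) * rms
```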
\subsection{Distance}
\label{sec:dist}
CWISE J0506$+$0738 is faint at optical wavelengths and was therefore undetected by the Gaia mission \citep{gaia2022}. The currently available astrometry for CWISE J0506$+$0738 is insufficient for a parallax measurement. Because CWISE J0506$+$0738 has such an unusually shaped spectrum, standard spectral-type versus absolute magnitude relations for normal, field-age brown dwarfs are not applicable. There have been efforts to create relations between absolute magnitudes and spectral types for low-gravity brown dwarfs; however, these are typically valid for spectral types earlier than L7 (e.g., \citealt{faherty2016, liu2016}). \cite{faherty2013} found absolute photometry of the young L5 dwarf 2MASS J03552337$+$1133437 was fainter than field L5 dwarfs at wavelengths shorter than $\sim$2.5 $\mu$m, and brighter at longer wavelengths. \cite{schneider2016b} investigated other young, red L dwarfs with measured parallaxes and found that $K$-band photometry produced photometric distances that aligned well with parallactic distances. This trend was also noted in \cite{filippazzo2015}, \cite{faherty2016}, and \cite{liu2016}.
Here, we use nine young, free-floating brown dwarfs (Table \ref{tab:redLs}) with measured parallaxes \citep{liu2016, best2020, kirkpatrick2021, gaia2022} to compare measured distances to photometric distances based on absolute magnitude-spectral type relations for $J_{\rm MKO}$, $K_{\rm MKO}$, W1, and W2 (\citealt{dupuy2012,kirkpatrick2021}; Figure~\ref{fig:dist}). Consistent with prior results, we find that $K_{\rm MKO}$-band photometric distances (average offset $\Delta$d = $-$0.8~pc, scatter $\sigma_d$ = 3.3~pc) are generally more accurate than $J_{\rm MKO}$ ($\Delta$d = $-$10~pc, $\sigma_d$ = 5.1~pc), W1 ($\Delta$d = +2.6~pc, $\sigma_d$ = 3.8~pc), or W2 ($\Delta$d = +4.5~pc, $\sigma_d$ = 4.1~pc) photometric distances. To ensure these values are not biased, we also evaluated the fractional difference for each photometric band, defined as $\Delta$d/d$_{\rm plx}$, and find that $K$-band photometric distances are typically within 5\% for this sample, compared to 52\%, 11\%, and 20\% for $J_{\rm MKO}$, W1, and W2, respectively.
Using the absolute magnitude-spectral type relation from \cite{dupuy2012}, a spectral type of L9$\pm$1, and its measured $K_{\rm MKO}$ photometry, we estimate a photometric distance of 32$^{+4}_{-3}$ pc for CWISE J0506$+$0738. Again, given the exceptional nature of this source, and its unknown multiplicity, we advise that this distance estimate be used with caution until it can be confirmed with a trigonometric parallax.
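The photometric distances discussed above come from inverting the distance modulus; a minimal sketch, with a hypothetical apparent magnitude and an assumed absolute magnitude standing in for the \cite{dupuy2012} relation:

```python
def photometric_distance_pc(m, M):
    """Invert the distance modulus: m - M = 5 log10(d / 10 pc)."""
    return 10.0 ** ((m - M) / 5.0 + 1.0)

# Hypothetical example: apparent K = 15.5 mag against an assumed
# absolute magnitude M_K = 13.0 mag from a spectral-type relation
print(round(photometric_distance_pc(15.5, 13.0), 1))  # → 31.6
```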
\begin{figure}
\plotone{Figure8.pdf}
\caption{A comparison of photometric and parallactic distances for free-floating objects from Table \ref{tab:redLs} with measured parallaxes. Objects are labeled on the x-axis. Dashed lines show average differences between photometric and parallactic distances for each band, with colors corresponding to those given in the legend. }
\label{fig:dist}
\end{figure}
\subsection{Moving Group Membership}
\label{sec:mg}
Young brown dwarfs are often associated both spatially and kinematically with young, nearby moving groups, thereby serving as invaluable age benchmarks.
To assess the potential moving group membership of CWISE J0506$+$0738, we use the BANYAN $\Sigma$ algorithm \citep{gagne2018}, which deploys a Bayesian classifier to assign probabilities of moving group membership through 6D coordinate alignment (position and velocity) to 26 known moving groups in the solar neighborhood. We used the position and proper motion of CWISE J0506$+$0738 from UKIRT and UHS measurements (Table \ref{tab:cwise0506}), and our measured radial velocity from the NIRES spectrum (Section \ref{sec:rv}). With these values alone, we find an 82\% membership probability in the $\beta$ Pictoris moving group (BPMG; \citealt{zuckerman2001}), a 3\% membership probability in the AB Doradus moving group (ABDMG; \citealt{zuckerman2004}), and a 15\% probability of being unassociated with any moving group. The predicted/optimal distances for membership in BPMG and ABDMG are 32~pc and 64~pc, respectively; our estimated distance clearly aligns with the former. If we include the distance estimate in the BANYAN $\Sigma$ algorithm, the probability of BPMG membership goes up to 99\%.
We also tested the kinematic membership of CWISE J0506$+$0738 using the LACEwING analysis code \citep{riedel2017}. Again, using just the position, proper motion, and radial velocity of CWISE J0506$+$0738, we find non-zero probabilities for ABDMG (56\%), the Argus Moving Group (71\%), BPMG (28\%), the Columba Association (52\%), and the Tucana-Horologium Association (6\%). Note that LACEwING is stricter in assigning membership probabilities than BANYAN, with bona fide BPMG members having a maximum membership probability of $\sim$70\% when only proper motion and radial velocity are used \citep{riedel2017}. If we use our photometric distance as an additional constraint, BPMG is returned as the group with the highest probability of membership at 86\%.
Membership in the $\beta$ Pictoris moving group is clearly favored for CWISE J0506$+$0738, although a directly measured distance is necessary for confirmation. If confirmed, CWISE J0506$+$0738 would have the latest spectral type and lowest mass amongst free-floating BPMG members, following PSO J318.5338$-$22.8603 \citep{liu2013}. Several candidate members with L7 or later spectral types have also been proposed (\citealt{best2015, schneider2017, kirkpatrick2021, zhang2021}; however, see \citealt{hsu2021}). PSO J318.5338$-$22.8603 has proven to be an exceptionally valuable laboratory for studying planetary-mass object atmospheres \citep{biller2015, biller2018, allers2016, faherty2016}. A second planetary-mass object in this group that bridges the L/T transition will further contribute to these studies.
Assuming $\beta$ Pic membership, we can use the group age of 22$\pm$6 Myr \citep{shkolnik2017} to estimate the mass of CWISE J0506$+$0738. To do this, we must first estimate the luminosity ($L_{\rm bol}$) or effective temperature (\teff) of the source. For the former, we used the empirical $K$-band bolometric correction/spectral type relation for young brown dwarfs quantified in \cite{filippazzo2015}. Combining this with the UHS $K$-band magnitude and our distance estimate, we infer a bolometric luminosity of $\log$($L_{\rm bol}$/$L_{\odot}$) = $-$4.55$\pm$0.12. We caution that this value is based on our estimated distance from Section \ref{sec:dist}, and will need to be updated when a measured parallax becomes available. We then used the solar metallicity evolutionary models of \cite{marley2021} to infer a mass of 7$\pm$2 $M_{\rm Jup}$. The evolutionary models also provide a radius of 1.32$\pm$0.03 $R_{\rm Jup}$ for these parameters, consistent with the radii of low-gravity late-type L dwarfs \citep{filippazzo2015}. Combining this radius with our bolometric luminosity, we find \teff\ = 1140$\pm$80 K. This is $\sim$130 K cooler than a field-age L9 \citep{kirkpatrick2021}, consistent with previous works showing low-gravity late-Ls tend to be $\sim$100--200 K cooler than field-age objects at the same spectral type \citep{filippazzo2015, faherty2016}. In particular, this temperature is 50--100~K cooler than \teff\ estimates of PSO J318.5338$-$22.8603 \citep{liu2013,miles2018}, consistent with the appearance of CH$_4$ absorption at lower temperatures.
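The effective temperature quoted above follows from the Stefan--Boltzmann law applied to the inferred luminosity and model radius; a quick consistency check (the constants below are standard IAU/SI values, an assumption of this sketch):

```python
import math

SIGMA_SB = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN    = 3.828e26      # IAU nominal solar luminosity, W
R_JUP    = 7.1492e7      # nominal equatorial Jupiter radius, m

def teff_from_lum_radius(log_l_lsun, radius_rjup):
    """Stefan-Boltzmann law: L = 4*pi*R^2 * sigma * Teff^4."""
    lum = (10.0 ** log_l_lsun) * L_SUN
    radius = radius_rjup * R_JUP
    return (lum / (4.0 * math.pi * radius**2 * SIGMA_SB)) ** 0.25

# With log(L/Lsun) = -4.55 and R = 1.32 R_Jup (values from the text),
# the result lands near the quoted 1140 K
t_eff = teff_from_lum_radius(-4.55, 1.32)
```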
The predicted mass of 7$\pm$2 $M_{\rm Jup}$ is well below the deuterium-fusion minimum mass of 14 $M_{\rm Jup}$ commonly used to distinguish brown dwarfs from planetary mass objects. As such, this object helps bridge the mass gap between the lowest mass free-floating $\beta$ Pic members and directly imaged exoplanets, such as 51 Eri b ($\sim$T6.5; \citealt{macintosh2015, rajan2017}). CWISE J0506$+$0738 could also help to constrain the effective temperature of the L/T transition at an age of $\sim$20-25 Myr \citep{binks2014, bell2015, messina2016, nielsen2016, shkolnik2017, miret2020}. CWISE J0506$+$0738 would be one of the youngest objects to join a small but growing number of benchmark substellar objects with known ages at the L/T transition such as HD 203030B (30--150 Myr; \citealt{metchev2006, miles2017}), 2MASS J13243553+6358281 ($\sim$150 Myr; \citealt{looper2007, gagne2018b}), HIP 21152B and other T-type Hyades members ($\sim$650 Myr; \citealt{kuzuhara2022, schneider2022}), $\epsilon$ Indi Ba ($\sim$3.5 Gyr; \citealt{scholz2003, chen2022}), and the white dwarf companion COCONUTS-1 ($\sim$7 Gyr; \citealt{zhang2020}).
\section{Summary}
We have presented the discovery and analysis of an exceptionally red brown dwarf, CWISE J0506$+$0738, identified as part of the Backyard Worlds: Planet 9 citizen science project. The near-infrared spectrum of CWISE J0506$+$0738 is highly reddened and shows signatures of low-surface gravity, as well as weak absorption features that we associate with methane bands. This object has the reddest $J-K$ and $J-$W2 colors of any free-floating L-type brown dwarf, and we tentatively assign a near-infrared spectral type of L8$\gamma$--T0$\gamma$. The exceptionally red color of CWISE J0506$+$0738 may be due to several factors. Objects with low surface gravities have inefficient gravitational settling of silicate dust grains, which can remain high in the atmospheres. Such grains can be directly detected at long wavelengths (e.g., \citealt{cushing2006, burgasser2008, suarez2022}) and could be constrained for CWISE J0506$+$0738 with future long-wavelength observations (e.g., \citealt{miles2022}). The angle at which a brown dwarf is viewed has also been shown to affect its near-infrared colors, with objects viewed equator-on tending to have redder colors than those viewed pole-on \citep{vos2017}. A measurement of CWISE J0506$+$0738's rotational period combined with its rotational velocity (e.g., $v$sin$i$) from a high-resolution spectrum could determine whether or not CWISE J0506$+$0738 is viewed closer to pole-on or equator-on. A high-resolution spectrum would also allow for a higher precision radial velocity measurement and a more detailed probe of gravity-sensitive features.
CWISE J0506$+$0738's astrometry and kinematics point to likely membership in the 22~Myr $\beta$ Pictoris moving group, to be confirmed or rejected with future trigonometric parallax and higher precision radial velocity measurements. If associated, CWISE J0506$+$0738 would be the lowest-mass $\beta$ Pictoris member found to date, with an estimated mass of 7$\pm$2 $M_{\rm Jup}$, well within the planetary-mass regime. The extreme colors of this object, and its relatively low proper motion ($<$100 mas yr$^{-1}$), suggest the existence of other extremely red L dwarfs that may have been missed by previous searches due to assumptions about brown dwarf colors or selection requirements for large proper motions. Recent large-scale near-infrared surveys such as UHS \citep{dye2018} and VHS \citep{mcmahon2013} that push several magnitudes deeper than previous efforts (e.g., 2MASS) may be able to confidently detect the faint $J$-band magnitudes of similar objects.
Because of this object's unique spectroscopic properties, and the fact that young brown dwarfs often display large-amplitude variability (e.g., \citealt{vos2022}), CWISE J0506$+$0738 is an intriguing target for future photometric or spectroscopic variability monitoring. Longer wavelength observations with the James Webb Space Telescope would have the additional advantage of further constraining the existence and abundance of CH$_4$ and analyzing the presence and properties of dust grains through silicate absorption features \citep{miles2022}.
\begin{longrotatetable}
\begin{deluxetable*}{lcccccccccccc}
\label{tab:redLs}
\tablecaption{Infrared Photometry for L Dwarfs with $J-K$ $>$ 2.2 mag}
\tablehead{
\colhead{Name} & \colhead{Disc.} & \colhead{SpT} & \colhead{SpT} & \colhead{$J_{\rm MKO}$} & \colhead{$K_{\rm MKO}$} & \colhead{NIR} & \colhead{W1} & \colhead{W2} & \colhead{$(J-K)_{\rm MKO}$} & \colhead{$J_{\rm MKO}-$W2}\\
\colhead{} & \colhead{Ref.} & \colhead{} & \colhead{Ref.} & \colhead{(mag)} & \colhead{(mag)} & \colhead{Ref.} & \colhead{(mag)} & \colhead{mag} & \colhead{(mag)} & \colhead{(mag)} }
\startdata
\cutinhead{Free Floating}
WISEP J004701.06$+$680352.1 & 1 & L6--L8$\gamma$ & 2 & 15.490$\pm$0.070 & 13.010$\pm$0.030 & 3 & 11.768$\pm$0.010 & 11.242$\pm$0.008 & 2.480$\pm$0.076 & 4.248$\pm$0.070 \\
PSO 057.2893$+$15.2433 & 4 & L7 red & 4 & 17.393$\pm$0.027 & 14.869$\pm$0.012 & 20 & 13.818$\pm$0.014 & 13.254$\pm$0.012 & 2.524$\pm$0.030 & 4.139$\pm$0.030 \\
2MASS J03552337$+$1133437 & 5 & L3--L6$\gamma$ & 2 & 13.940$\pm$0.003 & 11.491$\pm$0.001 & 20 & 10.617$\pm$0.012 & 10.032$\pm$0.008 & 2.449$\pm$0.003 & 3.908$\pm$0.009 \\
CWISE J050626.96$+$073842.4 & 6 & L8--T0$\gamma$ & 6 & 18.487$\pm$0.017 & 15.513$\pm$0.022 & 6,20 & 14.320$\pm$0.015 & 13.552$\pm$0.013 & 2.974$\pm$0.028 & 4.935$\pm$0.021 \\
WISEA J090258.99$+$670833.1 & 7 & L7 red & 7 & 16.864$\pm$0.246 & 14.305$\pm$0.108 & 8 & 13.192$\pm$0.013 & 12.722$\pm$0.009 & 2.559$\pm$0.269 & 4.142$\pm$0.246 \\
2MASS J11193254$-$1137466 & 9 & L7 VL-G\tablenotemark{a} & 10 & 17.330$\pm$0.029 & 14.751$\pm$0.012 & 21 & 13.540$\pm$0.014 & 12.879$\pm$0.010 & 2.580$\pm$0.032 & 4.451$\pm$0.031 \\
WISEA J114724.10$-$204021.3 & 11 & L7$\gamma$ & 12 & 17.445$\pm$0.028 & 14.872$\pm$0.011 & 21 & 13.677$\pm$0.013 & 13.088$\pm$0.011 & 2.573$\pm$0.030 & 4.357$\pm$0.030 \\
2MASS J16154255$+$4953211 & 13 & L3--L6$\gamma$ & 2 & 16.506$\pm$0.016 & 14.260$\pm$0.070 & 3,20 & 13.225$\pm$0.012 & 12.648$\pm$0.008 & 2.246$\pm$0.072 & 3.858$\pm$0.018\\
WISE J173859.27$+$614242.1 & 14 & L9 pec(red) & 14 & 17.680$\pm$0.110\tablenotemark{c} & 15.237$\pm$0.100\tablenotemark{c} & 6,22 & 14.059$\pm$0.011 & 13.374$\pm$0.009 & 2.443$\pm$0.149\tablenotemark{c} & 4.306$\pm$0.100\tablenotemark{c}\\
WISE J174102.78$-$464225.5 & 15 & L5--L7$\gamma$ & 2 & 15.951$\pm$0.010 & 13.533$\pm$0.005 & 21 & 12.362$\pm$0.027 & 11.802$\pm$0.024 & 2.418$\pm$0.011 & 4.149$\pm$0.026\\
PSO J318.5338$-$22.8603 & 16 & L7 VL-G & 16 & 17.181$\pm$0.018 & 14.540$\pm$0.009 & 21 & 13.210$\pm$ 0.013 & 12.526$\pm$0.010 & 2.640$\pm$0.020 & 4.655$\pm$0.021\\
2MASS J21481628$+$4003593 & 17 & L6.5 pec & 17 & 14.054$\pm$0.003 & 11.745$\pm$0.001 & 20 & 10.801$\pm$0.011 & 10.292$\pm$0.007 & 2.309$\pm$0.003 & 3.762$\pm$0.008 \\
ULAS J222711$-$004547 & 18 & L7 pec & 18 & 17.954$\pm$0.039 & 15.475$\pm$0.014 & 21\tablenotemark{b} & 14.259$\pm$0.014 & 13.663$\pm$0.013 & 2.479$\pm$0.041 & 4.291$\pm$0.041\\
2MASS J22443167$+$2043433 & 19 & L6--L8$\gamma$ & 2 & 16.401$\pm$0.016 & 13.826$\pm$0.006 & 20 & 12.775$\pm$0.012 & 12.130$\pm$0.008 & 2.575$\pm$0.017 & 4.271$\pm$0.018\\
\cutinhead{Companions}
BD$+$60 1417B & 23 & L6--L8$\gamma$ & 23 & 18.53$\pm$0.20 & 15.83$\pm$0.20 & 23 & 14.461$\pm$0.014 & 13.967$\pm$0.013 & 2.70$\pm$0.28 & 4.46$\pm$0.20 \\
HD 203030B & 24 & L7.5 & 24 & 18.77$\pm$0.08 & 16.21$\pm$0.10 & 24,25 & 15.67$\pm$0.02\tablenotemark{d} & 14.77$\pm$0.02\tablenotemark{d} & 2.56$\pm$0.13 & 4.00$\pm$0.08 \\
VHS 1256$-$1257B & 26 & L7.5 & 26 & 17.136$\pm$0.020 & 14.665$\pm$0.010 & 21 & \dots & 12.579$\pm$0.020\tablenotemark{e} & 2.471$\pm$0.022 & 4.557$\pm$0.028 \\
2MASS J1207334$-$393254b & 27,28 & L3 VL-G & 29 & 20.0$\pm$0.2 & 16.93$\pm$0.11 & 27,30 & \dots & \dots & 3.07$\pm$0.23 & \dots \\
HD 206893B & 31 & L4--L8 & 32 & 18.38$\pm$0.03 & 15.02$\pm$0.07 & 32 & \dots & \dots & 3.36$\pm$0.08 & \dots \\
2MASS J22362452+4751425b & 33 & late-L pec & 33 & 19.97$\pm$0.11 & 17.28$\pm$0.04 & 33 & \dots & \dots & 2.69$\pm$0.12 & \dots \\
HR 8799b & 34 & L5--T2 & 35 & 19.46$\pm$0.17 & 16.99$\pm$0.06 & 36,37,38 & \dots & \dots & 2.47$\pm$0.18 & \dots \\
\enddata
\tablenotetext{a}{2MASS J11193254$-$1137466 is a binary \citep{best2017} and the spectral type listed is the unresolved spectral type.}
\tablenotetext{b}{ULAS J222711$-$004547 also has $J$- and $K$-band photometry in the UKIRT Large Area Survey (LAS; \citealt{lawrence2007}). We use the VHS photometric measurements here because they have smaller uncertainties than those in the UKIRT LAS.}
\tablenotetext{c}{Near-infrared photometry for WISE J173859.27$+$614242.1 was determined synthetically from its near-infrared spectrum.}
\tablenotetext{d}{Converted from Spitzer ch1 and ch2 photometry in \cite{miles2017} using relations in \cite{kirkpatrick2021}.}
\tablenotetext{e}{Converted from Spitzer ch2 photometry in \cite{zhou2020} using relations in \cite{kirkpatrick2021}.}
\tablerefs{(1) \cite{gizis2012}; (2) \cite{gagne2015}; (3) \cite{liu2016}; (4) \cite{best2015}; (5) \cite{reid2006}; (6) This work; (7) \cite{schneider2017}; (8) \cite{best2021}; (9) \cite{kellogg2015}; (10) \cite{best2017}; (11) \cite{schneider2016b}; (12) \cite{faherty2016}; (13) \cite{metchev2008}; (14) \cite{mace2013}; (15) \cite{schneider2014}; (16) \cite{liu2013b}; (17) \cite{looper2008}; (18) \cite{marocco2014}; (19) \cite{dahn2002}; (20) UHS (\citealt{dye2018}, Bruursema et al.~in prep.); (21) VHS \citep{mcmahon2013}; (22) 2MASS \citep{skrutskie2006}; (23) \cite{faherty2021}; (24) \cite{metchev2006}; (25) \cite{miles2017}; (26) \cite{gauza2015}; (27) \cite{chauvin2004}; (28) \cite{chauvin2005}; (29) \cite{allers2013}; (30) \cite{mohanty2007}; (31) \cite{milli2017}; (32) \cite{ward2021}; (33) \cite{bowler2017}; (34) \cite{marois2008}; (35) \cite{bowler2010}; (36) \cite{esposito2013}; (37) \cite{oppenheimer2013}; (38) \cite{liu2016} }
\end{deluxetable*}
\end{longrotatetable}
\acknowledgments
The Backyard Worlds: Planet 9 team would like to thank the many Zooniverse volunteers who have participated in this project. We would also like to thank the Zooniverse web development team for their work creating and maintaining the Zooniverse platform and the Project Builder tools. This research was supported by NASA grant 2017-ADAP17-0067. This material is supported by the National Science Foundation under Grant No. 2007068, 2009136, and 2009177. This publication makes use of data products from the UKIRT Hemisphere Survey, which is a joint project of the United States Naval Observatory, The University of Hawaii Institute for Astronomy, the Cambridge University Cambridge Astronomy Survey Unit, and the University of Edinburgh Wide-Field Astronomy Unit (WFAU). UHS is primarily funded by the United States Navy. The WFAU gratefully acknowledges support for this work from the Science and Technology Facilities Council through ST/T002956/1 and previous grants. The authors acknowledge the support provided by the US Naval Observatory in the areas of celestial and reference frame research, including the USNO's postdoctoral program. (Some of) The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. This publication makes use of data products from the {\it Wide-field Infrared Survey Explorer}, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE which is a project of the Jet Propulsion Laboratory/California Institute of Technology. {\it WISE} and NEOWISE are funded by the National Aeronautics and Space Administration. 
Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has benefitted from the SpeX Prism Spectral Libraries, maintained by Adam Burgasser. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
\facilities{UKIRT/WFCAM, Keck/NIRES, WISE, NEOWISE}
\software{
BANYAN~$\Sigma$ \citep{gagne2018},
CASUTOOLS \citep{irwin2004},
LACEwING \citep{riedel2017},
SpeXTool \citep{cushing2004},
SPLAT \citep{burgasser2014},
WiseView \citep{caselden2018}
}
\clearpage
\section{Introduction}
\label{sec:intro}
As one of the most important intra-operative data modalities, the quality of laparoscopic images
is of vital importance for navigation systems and for the operating surgeons~\cite{stoyanov2012surgical}.
Artifacts specific to laparoscopic images include smoke, blood, dynamic illumination conditions, specular reflections, \textit{etc.}~\cite{sdiri2016adaptive}. Smoke significantly reduces the contrast and radiance information for large areas of the scene. Computer vision algorithms' performance and surgeons' visibility inevitably suffer from this degradation. Therefore, smoke removal in laparoscopic images becomes necessary to improve image-guided surgery conditions and to provide a better visualization of the operating field.
\par
To the best of our knowledge, there are only a few recent works related to laparoscopic desmoking~\cite{kotwal2016joint, baid2017joint, tchakaa2017chromaticity, luo2017vision}. In these papers, the image desmoking problem is considered as a dehazing problem, which has been studied for many years in the literature~\cite{tan2008visibility, he2011single}. In such a problem, the atmospheric scattering model presented by Eq. (\ref{scattering}) describes the formation of a hazy image and is widely used in computer vision~\cite{narasimhan2002vision}.
\begin{small}
\begin{equation}
\label{scattering}
\mathbf{I}(x)=\mathbf{J}(x)t(x)+\mathbf{A}(1-t(x)),
\end{equation}
\end{small}
where $\mathbf{I}$ is the observed intensity, $\mathbf{J}$ is the scene radiance representing the haze-free image, $t$ is the medium transmission map, considered to be inversely related to the scene's depth, and $\mathbf{A}$ is the airlight, usually taken as a constant since it represents the global, location-independent atmospheric light.
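For illustration, the forward model of Eq. (\ref{scattering}) can be synthesized directly; this sketch assumes an $H \times W \times 3$ radiance image, a per-pixel transmission map, and a three-channel airlight:

```python
import numpy as np

def apply_scattering_model(J, t, A):
    """Forward model of the scattering equation: I(x) = J(x) t(x) + A (1 - t(x))."""
    t3 = t[..., None]                       # broadcast transmission over channels
    return J * t3 + A[None, None, :] * (1.0 - t3)

# zero transmission (fully opaque haze) returns the airlight everywhere
I_hazy = apply_scattering_model(np.zeros((4, 4, 3)),
                                np.zeros((4, 4)),
                                np.array([0.9, 0.9, 0.9]))
```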
\par
However, while haze density is related to scene depth, smoke concentration is a local phenomenon that does not depend on the scene depth, but rather on the position of the tip of the thermal cutting instrument. Moreover, in laparoscopic images, the light is provided by the instrument and is not evenly distributed, and the organ surface is not a Lambertian surface. These properties violate the assumptions underlying Eq. (\ref{scattering}), which makes it inappropriate for laparoscopic images.
\par
In this paper, we propose a novel smoke removal method for laparoscopic images. More precisely, instead of resorting to the classical physical atmospheric model, we propose another model where the degraded image is separated into two parts: the weighted smoke part and the desmoked image to be recovered. To estimate the smoke, our approach relies on two easily verifiable assumptions: smoke has a low contrast and low inter-channel differences.
\par
The remainder of this paper is organized as follows. In Sec.~\ref{sec:rel_works}, a review of image dehazing as well as laparoscopic image desmoking methods are given. Sec.~\ref{sec:method} describes our proposed approach by defining the energy function and the optimization procedure. Finally, in Sec.~\ref{sec:result}, experimental results are presented and some conclusions are drawn in Sec.~\ref{sec:conclustion}.
\section{RELATED WORKS}
\label{sec:rel_works}
Recently, some works were proposed for desmoking in laparoscopic
images~\cite{kotwal2016joint, baid2017joint, tchakaa2017chromaticity, luo2017vision}. In~\cite{kotwal2016joint}, the authors formulated a joint desmoking and denoising problem as a Bayesian inference problem based on probabilistic graphical model. This work is then extended in~\cite{baid2017joint} for desmoking, denoising and specularity removal.
In~\cite{tchakaa2017chromaticity}, an adapted dark-channel prior combined with histogram equalization method is presented. In~\cite{luo2017vision}, a visibility-driven fusion defogging framework is proposed.
While there are few works related to laparoscopic image smoke removal, a similar problem referred to as image dehazing has been studied extensively in the literature. Many image dehazing works use the atmospheric scattering model and rely on alternately estimating the transmission map $t$ (or the depth map) and the airlight~\cite{zhu2015fast, he2011single, tarel2009fast}. He \textit{et al.} proposed the dark channel approach based on a statistical observation from outdoor haze-free images: for most haze-free natural images, pixel values are very low in at least one channel~\cite{he2011single}. The transmission map $t$ computed with this prior, together with an estimate of $\mathbf{A}$ obtained from the detected most haze-opaque region of the image, is used to invert Eq. (\ref{scattering}), resulting in a haze-free image. This is a well-known, efficient approach, and many recent methods build upon it~\cite{tchakaa2017chromaticity, xu2012fast}.
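A minimal sketch of the dark channel computation described above (the patch size is illustrative; the local minimum filter is implemented here via \texttt{scipy}):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of an H x W x 3 image: per-pixel minimum over the
    color channels, followed by a local minimum over a patch x patch window."""
    per_pixel_min = img.min(axis=2)
    return minimum_filter(per_pixel_min, size=patch)

# A haze-free natural image should have a dark channel close to zero;
# a constant gray image (pure "haze") does not:
dc = dark_channel(np.full((32, 32, 3), 0.7))
```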
\par
Some works have been also developed without estimating transmission or depth maps. Tan \textit{et al.}~\cite{tan2008visibility} tried to enhance the haze image directly by maximizing the local contrast under an airlight smooth constraint.
In~\cite{ancuti2013single}, a multi-scale fusion dehazing method is proposed by deriving a white balance and contrast enhanced inputs.
\par
In~\cite{galdran2014variational}, a variational contrast enhancement framework for image dehazing with a modified gray-world assumption is proposed. Later, in~\cite{galdran2015enhanced}, an improved version is presented, where a saturation term is added to the variational cost function, aiming to maximize contrast and saturation together. In~\cite{galdran2017fusion}, Galdran \textit{et al.} further improved their work by enhancing faraway regions, which normally contain more fog, while preserving nearby low-fog regions. These methods do not rely on a physical atmospheric model, but try to maximize contrast and saturation. However, the modified gray-world assumption and the assumption that pixel intensity is related to depth are violated in laparoscopic images.
\par
\section{Proposed smoke removal approach}
\label{sec:method}
Variational techniques have attracted considerable attention over the last years in the signal/image processing literature. They have been found to be among the most powerful techniques in different fields such as enhancement, restoration, super resolution, and disparity/motion estimation from a sequence of images. For this reason, we propose here a variational approach to remove smoke in laparoscopic images. More precisely, an energy function is first defined and then minimized (i.e., optimized) via an augmented Lagrangian method, as we shall address next.
\par
\subsection{Energy function}
Due to the aforementioned limitations of the atmospheric scattering model in laparoscopic images, we propose to consider another model where the degraded laparoscopic image is assumed to follow this decomposition strategy:
\begin{equation}
\label{model}
\mathbf{I}^{c}=\mathbf{J}^{c}+\alpha^{c} \cdot \mathbf{F}^{c},
\end{equation}
where $c\in\{R,G,B\} $ indicates the RGB channels, $\mathbf{I}$ is the degraded image obtained by the laparoscopic camera, $\mathbf{J}$ contains the color image information, $\mathbf{F}$ is the unwanted smoke component, and $\alpha^{c}$ is a scalar weight for every channel. Thus, the smoked image $\mathbf{I}$ is separated into two parts: the smoke part $\mathbf{F}$ and the enhanced one $\mathbf{J}$.
\par
Based on the observations that smoke part's variation is smooth and the RGB channel differences are low as a result of the whitish property of smoke, we propose to estimate the smoke part by minimizing the following energy function:
\begin{equation}
\label{cost}
E=\frac{\lambda }{2}\left \| \mathbf{F}-\mathbf{I} \right \|^{2}+\left \| \mathbf{F}_{TV} \right \|_{2},
\end{equation}
where $\mathbf{I}$ is the degraded color image in the RGB color space, $\mathbf{F}$ is the smoke part to be estimated, $\lambda$ is a scalar to adjust weights between the two terms of the equation, and $\left \| \mathbf{F} _{TV}\right \|_{2}$ is an isotropic total variation (TV)-norm which is given by:
\begin{equation}
\label{tv}
\left \| \mathbf{F} _{TV}\right \|_{2}=\sum_{i} \sqrt{\beta _{x}^2[\mathbf{D}_{x}\mathbf{F}]_{i}^{2}+\beta _{y}^2[\mathbf{D}_{y}\mathbf{F}]_{i}^{2}+\beta _{c}^2[\mathbf{D}_{c}\mathbf{F}]_{i}^{2}},
\end{equation}
where $\beta_{x}$, $\beta_{y}$, $\beta_{c}$ are three scalar parameters to balance the weights between the gradient of the color image and the inter-channel differences. $\mathbf{D}_{x}$, $\mathbf{D}_{y}$, $\mathbf{D}_{c}$ are the forward differential operators along the three dimensions. Thus, we have $\mathbf{D}_{x}\mathbf{F}=\mathbf{F}(x+1,y,c)-\mathbf{F}(x,y,c)$, $\mathbf{D}_{y}\mathbf{F}=\mathbf{F}(x,y+1,c)-\mathbf{F}(x,y,c)$, and $\mathbf{D}_{c}\mathbf{F}=\mathbf{F}(x,y,c+1)-\mathbf{F}(x,y,c)$. Note that $(x,y,c)$ represents the pixel coordinates of the color image with horizontal and vertical directions $(x,y)$ and channel direction $c$. Using matrix-vector notation, $[\mathbf{D}_{d}\mathbf{F}]_i$, with $d \in \{x,y,c\}$, denotes the $i$-th component of the one dimensional vector obtained from $\mathbf{D}_{d}\mathbf{F}$.
\par
The first term in Eq. (\ref{cost}) aims to keep the similarity between the estimated smoke part and the input degraded image. The second total variation norm part represents the properties of the smoke part: low contrast and low inter-channel differences.
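Eq. (\ref{tv}) can be evaluated directly with forward differences; a sketch assuming zero-padded boundaries (a common convention, not specified in the text) and an $H \times W \times C$ array layout, with the axis-to-direction mapping an implementation choice:

```python
import numpy as np

def tv_norm_3d(F, bx=1.0, by=1.0, bc=1.0):
    """Isotropic TV norm of an H x W x C array F with weighted forward
    differences along x, y, and the channel direction (zero at boundaries)."""
    Dx = np.zeros_like(F); Dx[:-1, :, :] = F[1:, :, :] - F[:-1, :, :]
    Dy = np.zeros_like(F); Dy[:, :-1, :] = F[:, 1:, :] - F[:, :-1, :]
    Dc = np.zeros_like(F); Dc[:, :, :-1] = F[:, :, 1:] - F[:, :, :-1]
    # per-voxel gradient magnitude, summed over the image
    mag = np.sqrt((bx * Dx) ** 2 + (by * Dy) ** 2 + (bc * Dc) ** 2)
    return mag.sum()

# a constant (perfectly smooth, channel-uniform) image has zero TV norm
print(tv_norm_3d(np.ones((8, 8, 3))))  # → 0.0
```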
\par
After the estimation of global smoke $\textbf{F}$, the smoke free image $\textbf{J}$ is then calculated as:
\begin{equation}
\label{final}
\mathbf{J}^{c}=\mathbf{I}^{c}-\alpha^{c} \cdot \mathbf{F}^{c},
\end{equation}
where $\alpha^{c}$ is defined as the mean values of the estimated smoke image over the RGB channels.
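The recovery step of Eq. (\ref{final}) is then a per-channel subtraction; a sketch, where clipping to $[0,1]$ is an added assumption for display purposes:

```python
import numpy as np

def recover_desmoked(I, F):
    """Per-channel recovery J^c = I^c - alpha^c * F^c, with alpha^c
    taken as the mean of the estimated smoke image in each channel."""
    alpha = F.mean(axis=(0, 1))                      # one scalar per channel
    J = np.clip(I - alpha[None, None, :] * F, 0.0, 1.0)
    return J, alpha

# toy example: uniform image with uniform estimated smoke
J_out, alpha = recover_desmoked(np.full((4, 4, 3), 0.8),
                                np.full((4, 4, 3), 0.5))
```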
\par
\begin{comment}
\textbf{Estimation of the local smoke concentration:}
Smoke is not smoothly distributed over the whole image. A tuning step with a fine local smoke concentration estimation is proposed to correct the estimation of thesmoke part. The smoke concentration $\alpha_{c}$ is calculated according to the color attenuation prior statistic proposed in~\cite{zhu2015fast}: the concentration of haze is positively correlated with the difference between saturation and brightness channels. As the phenomenon of smoke has some similarity with haze, we assume that the smoke concentration follows this color attenuation prior. We propose to estimate the smoke concentration in the HSV color space by minimizing an energy function similar to that given by Eq. (\ref{cost2}). More precisely, by considering the inter-channel differences which only include the differences between saturation and brightness (S and V channels), the second energy function is expressed as:
\begin{equation}
\label{cost2}
E_2=
\frac{\lambda_{HSV} }{2}\left \| \mathbf{w}-\mathbf{I}_{HSV} \right \|^{2}+\left \| \mathbf{w}_{TV} \right \|_{2},
\end{equation}
where $\mathbf{I}_{HSV}$ is the input color image in HSV color space, and $\mathbf{w}$ is the three dimensional (3D) representation of the smoke concentration to be estimated. The isotropic total variation $\mathbf{w}_{TV} $ is then defined as follows:
\begin{equation}
\label{tvhsv}
\left \| \mathbf{w}_{TV} \right \|_{2}=\sum_{i} \sqrt{\beta _{x}^{2}[\mathbf{D}_{x}\mathbf{w}]_{i}^{2}+\beta_{y}^{2}[\mathbf{D}_{y}\mathbf{w}]_{i}^{2}+\beta_{c}^2[\mathbf{D}_{sv}\mathbf{w}]_{i}^{2}},
\end{equation}
where $\mathbf{D}_{x}$ and $\mathbf{D}_{y}$ are the forward differential operators which are similar to those used in Eq. (\ref{tv}), and $\mathbf{D}_{sv}\mathbf{w}=\mathbf{w}_{S}-\mathbf{w}_{V}$ is the difference between S and V channels. \\
Then, the gray-scale representation of the estimated 3D representation $\mathbf{w}$ will correspond to smoke concentration and will be denoted by $\alpha_{c}$.\\
Finally, the enhanced image can be deduced as follows:
\begin{equation}
\label{output}
\mathbf{J}=\mathbf{I}-\mathbf{F}=\mathbf{I}-\alpha_{c} \cdot \mathbf{F}.
\end{equation}
In what follows, we will describe the employed optimization algorithms for minimizing Eqs. (\ref{cost}) and (\ref{cost2}) to estimate thesmoke part $\mathbf{F}$ as well as the smoke concentration $\mathbf{w}$ (i.e $\alpha_c$) and deduce the enhanced image using Eq. (\ref{output}).
\end{comment}
\par
\subsection{Optimization method}
The energy function minimization problem can be solved by employing the augmented Lagrangian method~\cite{chan2011augmented}, which will be described in the following. The function, given by Eq. (\ref{cost}), is split by introducing an intermediate new variable $\textbf{u}$:
\begin{equation}
\label{spliting}
\begin{split}
&\displaystyle{\min_{\textbf{F}} \quad \frac{\lambda }{2}\left \| \mathbf{F}-\mathbf{I} \right \|^{2}+\left \| \mathbf{u} \right \|_{2}}, \\
&\textrm{\textit{s. t.}} \quad \mathbf{F}_{TV}-\mathbf{u}=0.
\end{split}
\end{equation}
The augmented Lagrangian for Eq. (\ref{spliting}) is:
\begin{small}
\begin{equation}
\label{split}
L_{\rho}(\mathbf{F},\mathbf{u},\mathbf{y})=\frac{\lambda }{2}\left \| \mathbf{F}-\mathbf{I} \right \|^{2}+\left \| \mathbf{u} \right \|_{2}+\mathbf{y}^{T}(\mathbf{F}_{TV}-\mathbf{u})+\frac{\rho}{2}\left \|\mathbf{F} _{TV}- \mathbf{u}\right \|^{2},
\end{equation}
\end{small}
where $\rho$ is a positive constant called the penalty parameter, $\mathbf{y}=[\mathbf{y}_{x}^{\top},\mathbf{y}_{y}^{\top},\mathbf{y}_{c}^{\top}]^{\top}$ is the vector of Lagrange multipliers, and $\mathbf{u}=[\mathbf{u}_{x}^{\top},\mathbf{u}_{y}^{\top},\mathbf{u}_{c}^{\top}]^{\top}$. Then, the alternating direction method (ADM)~\cite{boyd2011distributed} is used to solve the following minimization sub-problems iteratively:
\begin{footnotesize}
\begin{equation}
\label{subproblem}
\begin{split}
\textbf{F}^{k+1}&:=\argmin_{\textbf{F}} L_{\rho}(\textbf{F},\textbf{u}^{k},\textbf{y}^{k}),\\
&=\argmin_{\textbf{F}} \frac{\lambda }{2}\left \| \mathbf{F}-\mathbf{I} \right \|^{2}+(\mathbf{y}^{k})^{\top}(\mathbf{F}_{TV}-\mathbf{u}^{k})+\frac{\rho}{2}\left \|\mathbf{F} _{TV}- \mathbf{u}^{k}\right \|^{2}, \\
%
\textbf{u}^{k+1}&:=\argmin_{\textbf{u}} L_{\rho}(\textbf{F}^{k+1},\textbf{u},\textbf{y}^{k}),\\
&=\argmin_{\textbf{u}} \left \| \mathbf{u} \right \|_{2}+ (\mathbf{y}^{k})^{\top}(\mathbf{F}_{TV}^{k+1}-\mathbf{u})+\frac{\rho}{2}\left \|\mathbf{F} _{TV}^{k+1}- \mathbf{u}\right \|^{2}, \\
%
\textbf{y}^{k+1}&:= \textbf{y}^{k}+\rho(\textbf{F}_{TV}^{k+1}-\textbf{u}^{k+1}).
\end{split}
\end{equation}
\end{footnotesize}
By introducing the operator $\textbf{D}=[\beta _{x}\textbf{D}_{x}^{\top},\beta _{y}\textbf{D}_{y}^{\top},\beta _{c}\textbf{D}_{c}^{\top}]^{\top}$, the $\textbf{F}$-minimization subproblem leads to the following solution:
\begin{equation}
\label{fproblem}
\mathbf{F}=\mathcal{F}^{-1}[\frac{\mathcal{F}[\lambda \mathbf{I}+\rho\mathbf{D}^{\top}\mathbf{u}-\mathbf{D}^{\top}\mathbf{y}]}{\lambda+\rho(\left | \beta_{x}\mathcal{F}[\textbf{D}_{x}]\right |^{2}+\left |\beta_{y}\mathcal{F}[\textbf{D}_{y}]\right |^{2}+\left|\beta_{c}\mathcal{F}[\textbf{D}_{c}]\right |^{2} )}],
\end{equation}
where $\mathcal{F}$ is the Fourier transform operator. Then, the
$\textbf{u}$ minimization subproblem results in:
\begin{equation}
\label{uproblem}
\mathbf{u}_{x}=\max\left \{ \mathbf{v}-\frac{1}{\rho},0 \right \} \cdot \frac{\mathbf{v}_{x}}{\mathbf{v}},
\end{equation}
where $\mathbf{v}_{x}=\beta_{x} \textbf{D}_{x}\textbf{F}+(\frac{1}{\rho})\textbf{y}_{x}$. Similar definitions apply to $\textbf{v}_{y}$ and $\textbf{v}_{c}$, and $\textbf{v}=\max \{ \sqrt{\left|\textbf{v}_{x}\right|^{2}+\left|\textbf{v}_{y}\right|^{2} +\left|\textbf{v}_{c}\right|^{2}}, \epsilon \}$ with $\epsilon$ a small constant. In the same way, $\textbf{u}_{y}$ and $\textbf{u}_{c}$ are determined to obtain the vector $\mathbf{u}$. More details about these solutions can be found in~\cite{chan2011numerical}.
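To make the scheme concrete, here is a minimal single-channel sketch of these ADM iterations in NumPy. It assumes a grayscale image with periodic boundary conditions and drops the color channel and the $\beta$ weights of Eq. (\ref{tv}), so it illustrates the structure of the updates rather than reproducing the exact implementation used in this paper:

```python
import numpy as np

def tv_admm(I, lam=1.0, rho=5.0, n_iter=80, eps=1e-8):
    """Augmented-Lagrangian / ADM sketch of
    min_F  lam/2 * ||F - I||^2 + ||F||_TV   (isotropic TV),
    for a single-channel image with periodic boundaries."""
    m, n = I.shape
    # Fourier symbols of the periodic forward-difference operators D_x, D_y
    Wx = np.tile(np.exp(2j * np.pi * np.arange(n) / n) - 1.0, (m, 1))
    Wy = np.tile((np.exp(2j * np.pi * np.arange(m) / m) - 1.0)[:, None], (1, n))
    denom = lam + rho * (np.abs(Wx) ** 2 + np.abs(Wy) ** 2)

    F = I.copy()
    ux = np.zeros_like(I); uy = np.zeros_like(I)
    yx = np.zeros_like(I); yy = np.zeros_like(I)
    for _ in range(n_iter):
        # F-subproblem: (lam + rho * D^T D) F = lam * I + D^T (rho * u - y),
        # solved exactly in the Fourier domain
        rhs = (lam * np.fft.fft2(I)
               + np.conj(Wx) * np.fft.fft2(rho * ux - yx)
               + np.conj(Wy) * np.fft.fft2(rho * uy - yy))
        F = np.real(np.fft.ifft2(rhs / denom))
        # u-subproblem: isotropic soft shrinkage of v = D F + y / rho
        Fx = np.roll(F, -1, axis=1) - F
        Fy = np.roll(F, -1, axis=0) - F
        vx, vy = Fx + yx / rho, Fy + yy / rho
        v = np.sqrt(vx ** 2 + vy ** 2)
        shrink = np.maximum(v - 1.0 / rho, 0.0) / np.maximum(v, eps)
        ux, uy = shrink * vx, shrink * vy
        # dual ascent on the Lagrange multipliers
        yx = yx + rho * (Fx - ux)
        yy = yy + rho * (Fy - uy)
    return F
```

The $\mathbf{F}$-update mirrors the Fourier-domain solve of Eq. (\ref{fproblem}), the shrinkage step mirrors Eq. (\ref{uproblem}), and the last two lines are the multiplier update.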
\section{EXPERIMENTAL RESULTS}
\label{sec:result}
In vivo procedure datasets~\cite{giannarou2013probabilistic, ye2017self}, taken from the Hamlyn Centre Laparoscopic/Endoscopic Video Dataset page\footnote{http://hamlyn.doc.ic.ac.uk/vision/}, are used for validation.
\textit{Dataset1} has 96 smoked images and \textit{Dataset2} contains 4031 images. In order to show the benefits of the proposed method, we compare it to the following recent methods~\cite{he2011single,galdran2015enhanced, galdran2017fusion}. The first one is the atmospheric-model-based image dehazing method with the dark channel prior~\cite{he2011single}, which will be designated by DCP. It is important to note here that a similar approach was investigated in~\cite{tchakaa2017chromaticity} to remove smoke by adding thresholding or refining steps. However, the latter approach has not been considered in our evaluation because of its sensitivity to several parameters, which must be empirically selected for the input smoked images of the large experimental dataset.
The second one, which will be denoted by E-VAR, corresponds to the enhanced variational approach developed in~\cite{galdran2015enhanced}. Finally, the third one, designated by F-VAR, is a fusion-based variational technique~\cite{galdran2017fusion}.
The parameter settings used in these experiments are: $\lambda=1$ for Eq.~(\ref{cost}), $\beta_{x}=\beta_{y}=\beta_{c}=1$ for Eq.~(\ref{tv}), and $\rho=5$.
\begin{table}[t]
\resizebox{.48\textwidth}{!}{
\begin{centering} %
\begin{tabular}{cccc}
\hline
\hline
 & FADE~\cite{choi2015referenceless} & JNBM~\cite{ferzli2009no} & RE~\cite{hautiere2011blind}\\ \hline
Input images & $0.40\pm 0.03$ & $1.42\pm 0.12$ & $0$\\
DCP~\cite{he2011single} & $0.27\pm 0.01$ & $1.57\pm 0.14$ & $0.38\pm 0.06$\\
F-VAR~\cite{galdran2017fusion} & $0.43\pm 0.02$ & $1.62\pm 0.12$ & $0.12\pm 0.02$\\
E-VAR~\cite{galdran2015enhanced} & $0.35\pm 0.02$ & $1.50\pm 0.11$ & $0.24\pm 0.05$\\
Proposed & $\mathbf{0.23\pm 0.02}$ & $\mathbf{1.77\pm 0.11}$ & $\mathbf{0.39\pm 0.07}$\\ \hline \hline
\end{tabular}
\end{centering}
}
\caption{Quantitative evaluation results for \textit{Dataset1}.}
\label{result1}
\end{table}
\begin{table}[t]
\resizebox{.48\textwidth}{!}{
\begin{centering} %
\begin{tabular}{cccc}
\hline \hline
 & FADE~\cite{choi2015referenceless} & JNBM~\cite{ferzli2009no} & RE~\cite{hautiere2011blind}\\ \hline
Input images & $0.67\pm 0.16$ & $1.03\pm 0.11$ & $0$\\
DCP~\cite{he2011single} & $0.33\pm 0.05$ & $1.06\pm 0.11$ & $0.88\pm 0.42$\\
F-VAR~\cite{galdran2017fusion} & $0.50\pm 0.09$ & $1.09\pm 0.11$ & $0.41\pm 0.20$\\
E-VAR~\cite{galdran2015enhanced} & $0.36\pm 0.05$ & $1.05\pm 0.10$ & $0.73\pm 0.39$\\
Proposed & $\mathbf{0.30\pm 0.05}$ & $\mathbf{1.16\pm 0.10}$ & $\mathbf{1.19\pm 0.62}$\\ \hline \hline
\end{tabular}
\end{centering}
}
\caption{Quantitative evaluation results for \textit{Dataset2}.}
\label{result2}
\end{table}
\begin{figure*}[htb]
\begin{minipage}[b]{0.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{img192.png}}
\end{minipage}
\begin{minipage}[b]{.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{DCP192.png}}
\end{minipage}
\begin{minipage}[b]{0.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{VARF192.png}}
\end{minipage}
\begin{minipage}[b]{0.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{VARE192.png}}
\end{minipage}
\begin{minipage}[b]{0.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{OUR192.png}}
\end{minipage}
\begin{minipage}[b]{0.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{4003.png}}
\end{minipage}
\begin{minipage}[b]{.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{DCP4003.png}}
\end{minipage}
\begin{minipage}[b]{0.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{VARF4003.png}}
\end{minipage}
\begin{minipage}[b]{0.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{VARE4003.png}}
\end{minipage}
\begin{minipage}[b]{0.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{OUR4003.png}}
\end{minipage}
\begin{minipage}[b]{0.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{3366.png}}
\centerline{(a)}
\end{minipage}
\begin{minipage}[b]{.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{DCP3366.png}}
\centerline{(b)}
\end{minipage}
\begin{minipage}[b]{0.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{VARF3366.png}}
\centerline{(c)}
\end{minipage}
\begin{minipage}[b]{0.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{VARE3366.png}}
\centerline{(d)}
\end{minipage}
\begin{minipage}[b]{0.19\linewidth}
\centering
\centerline{\includegraphics[width=3.3cm]{OUR3366.png}}
\centerline{(e)}
\end{minipage}
\caption{Subjective results for \textit{Dataset1} and \textit{Dataset2}. (a) Input smoked laparoscopic image and the obtained desmoked ones using: (b) DCP~\cite{he2011single}, (c) F-VAR~\cite{galdran2017fusion}, (d) E-VAR~\cite{galdran2015enhanced}, and (e) proposed method.}
\label{fig:res1}
\end{figure*}
\begin{figure*}[htb]
\begin{minipage}[b]{0.33\linewidth}
\centering
\centerline{\includegraphics[width=1.1\linewidth]{fade.png}}
\centerline{(a)}
\end{minipage}
\begin{minipage}[b]{.33\linewidth}
\centering
\centerline{\includegraphics[width=1.1\linewidth]{JNBM.png}}
\centerline{(b)}
\end{minipage}
\begin{minipage}[b]{0.33\linewidth}
\centering
\centerline{\includegraphics[width=1.1\linewidth]{e.png}}
\centerline{(c)}
\end{minipage}
\caption{Plotted metrics for \textit{Dataset2}. (a) FADE~\cite{choi2015referenceless}, (b) JNBM~\cite{ferzli2009no}, (c) RE~\cite{hautiere2011blind}. Note that, for the JNBM, only 500 frames are plotted to provide better illustration.}
\label{fig:res2}
\end{figure*}
\par
\noindent
\textbf{Quantitative evaluation:}
Examples of three images are shown in Fig.~\ref{fig:res1}(a).
As the ground-truth information for a smoked laparoscopic image is not available, we employ two no-reference image quality metrics and a third metric that compares the visibility of edges before and after smoke removal. To evaluate the ability to remove smoke, the referenceless Fog Aware Density Evaluator (FADE) is used to assess the perceptual fog density~\cite{choi2015referenceless, fog}. A lower FADE value means a lower perceptual fog density.
In addition, a just-noticeable-blur-based no-reference objective image sharpness metric (JNBM)~\cite{ferzli2009no} is used to evaluate the perceptual sharpness. A higher value means higher perceptual sharpness (i.e., lower blurriness).
Furthermore, we employ the metric proposed by Hauti\`ere \textit{et al.}~\cite{hautiere2011blind}, which assesses the ability to restore edges (RE) that are not visible in $\mathbf{I}$ but are visible in $\mathbf{J}$ (obtained after smoke removal). This metric will be designated by RE. A higher RE value means better edge restoration.
\par
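For illustration only, the idea behind the RE indicator can be sketched with a crude proxy that counts gradient components above a fixed visibility threshold; this simplified stand-in is our own illustration, not Hauti\`ere's exact metric:

```python
import numpy as np

def visible_edges(img, thresh=0.1):
    """Count horizontal and vertical gradient components above a
    visibility threshold (a crude notion of 'visible edge')."""
    gx = np.abs(np.diff(img, axis=1))
    gy = np.abs(np.diff(img, axis=0))
    return int((gx > thresh).sum() + (gy > thresh).sum())

def edge_restoration(I, J, thresh=0.1):
    """Relative gain of visible edges in the restored image J over the
    degraded image I; positive values mean edges were restored."""
    n_I = visible_edges(I, thresh)
    n_J = visible_edges(J, thresh)
    return (n_J - n_I) / max(n_I, 1)
```

A smoke-degraded (low-contrast) input paired with a sharper restored output yields a positive score, matching the interpretation of RE above.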
Tables~\ref{result1} and~\ref{result2} report the mean and standard deviation of the scores of the different approaches on \textit{Dataset1} and \textit{Dataset2}, and Fig.~\ref{fig:res2} plots the three metrics on \textit{Dataset2}. All three metrics give the best scores to our approach. In terms of the FADE metric, the DCP method removes smoke well; however, it sacrifices perceptual quality, as shown in Fig.~\ref{fig:res1}(b), as a result of the constant-airlight assumption. E-VAR removes more smoke than F-VAR, whose results still exhibit a high smoke density. Our proposed method yields the lowest smoke density: it removes the smooth smoke component of the image, resulting in a contrast-enhanced image that achieves the best scores for JNBM and RE.
\par
\noindent
\textbf{Qualitative evaluation:} In this part, we evaluate the different methods subjectively.
Fig.~\ref{fig:res1} illustrates the results for three laparoscopic images from the two datasets. It can be observed from Fig.~\ref{fig:res1}(b) that the DCP method removes the smoke effectively but causes an unnatural color change in the desmoked images. Moreover, as shown in Fig.~\ref{fig:res1}(c), the smoke is not well removed by the F-VAR approach: smoke is independent of depth in laparoscopic images, whereas this approach tries to preserve image information in nearby regions under the assumption that the haze concentration is related to depth.
The E-VAR~\cite{galdran2015enhanced} method relies only mildly on the physical model and leads to the fine result shown in Fig.~\ref{fig:res1}(d).
Finally, Fig.~\ref{fig:res1}(e) shows that our proposed method removes smoke effectively, leading to an output image with enhanced contrast.
\par
Therefore, all the obtained results confirm the benefits of the proposed desmoking method for laparoscopic images.
\begin{comment}
\begin{figure}[htb]
\begin{minipage}[b]{0.47\linewidth}
\centering
\centerline{\includegraphics[width=3.7cm]{fig//img195}}
\centerline{(a)}
\end{minipage}
\begin{minipage}[b]{.47\linewidth}
\centering
\centerline{\includegraphics[width=3.7cm]{fig//DCP195}}
\centerline{(b)}
\end{minipage}
\begin{minipage}[b]{0.47\linewidth}
\centering
\centerline{\includegraphics[width=3.7cm]{fig//VARF195}}
\centerline{(c)}
\end{minipage}
\begin{minipage}[b]{0.47\linewidth}
\centering
\centerline{\includegraphics[width=3.7cm]{fig//OUR2195}}
\centerline{(d)}
\end{minipage}
\caption{Input smoked laparoscopic image and the obtained desmoked ones using: (b) DCP~\cite{he2011single}, (c) F-VAR~\cite{galdran2017fusion}, and (d) Proposed method}
\label{fig:res1}
\end{figure}
\begin{figure}[htb]
\begin{minipage}[b]{0.47\linewidth}
\centering
\centerline{\includegraphics[width=3.7cm]{fig//000400}}
\centerline{(a)}
\end{minipage}
\begin{minipage}[b]{0.47\linewidth}
\centering
\centerline{\includegraphics[width=3.7cm]{fig//DCP400}}
\centerline{(b)}
\end{minipage}
\begin{minipage}[b]{0.47\linewidth}
\centering
\centerline{\includegraphics[width=3.7cm]{fig//VARF400}}
\centerline{(c)}
\end{minipage}
\begin{minipage}[b]{0.47\linewidth}
\centering
\centerline{\includegraphics[width=3.7cm]{fig//OUR400}}
\centerline{(d)}
\end{minipage}
\caption{Input smoked laparoscopic image and the obtained desmoked ones using: (b) DCP~\cite{he2011single}, (c) F-VAR~\cite{galdran2017fusion}, and (d) Proposed method}
\label{fig:res2}
\end{figure}
\begin{comment}
Fig.~\ref{fig:res3} presents the estimated smoke part and the adaptive weight (smoke concentration) for the degraded image of Fig.~\ref{fig:res1}(a). Brighter values indicate more smoke.
\begin{figure}[htb]
\begin{minipage}[b]{0.47\linewidth}
\centering
\centerline{\includegraphics[width=3.7cm]{fig//smoke175}}
\centerline{(a)}
\end{minipage}
\begin{minipage}[b]{.47\linewidth}
\centering
\centerline{\includegraphics[width=3.7cm]{fig//density175}}
\centerline{(b)}
\end{minipage}
\caption{(a) Estimated smoke $\mathbf{F}$ and (b) density $\alpha_{c}$ for the image of Fig.~\ref{fig:res1}(a).}
\label{fig:res3}
\end{figure}
\end{comment}
\section{Conclusion}
\label{sec:conclustion}
Unlike most natural-image dehazing methods, which rely on a physical model, we have proposed in this paper a variational desmoking method for laparoscopic images. The aim is to remove the smoke from the scene and thus to improve image-guided surgery conditions as well as the surgeon's visibility. Quantitative and qualitative evaluations show that the proposed approach reduces the smoke effectively while preserving the important perceptual information of the image. Future work will include a more robust prior about the smoke.
\begin{small}
\bibliographystyle{IEEEbib}
\section{Introduction\label{sec:Introduction}}
In his 1956 paper on the Foundations of Kinetic Theory (\cite{key-9}),
Mark Kac proposed a probabilistic model describing a system of $N$
one-dimensional, randomly colliding particles. The description is
given by Kac's Master Equation \begin{equation}
\frac{\partial\psi}{\partial t}\left(v_{1},\dots,v_{N},t\right)=-N(I-Q)\psi\left(v_{1},\dots,v_{N},t\right)\label{master}\end{equation}
where \[
Q\phi\left(v_{1},\dots,v_{N}\right)=\frac{1}{2\pi}\cdot\frac{1}{\left(\begin{array}{c}
N\\
2\end{array}\right)}\sum_{i<j}\int_{0}^{2\pi}\phi\left(R_{i,j}(\vartheta)\left(v_{1},\dots,v_{N}\right)\right)d\vartheta\]
with \[
R_{i,j}(\vartheta)\left(v_{1},\dots,v_{N}\right)=\left(v_{1},\dots v_{i}(\vartheta),\dots,v_{j}(\vartheta),\dots,v_{N}\right)\]
\[
v_{i}(\vartheta)=v_{i}\cos\vartheta+v_{j}\sin\vartheta,\,\, v_{j}(\vartheta)=-v_{i}\sin\vartheta+v_{j}\cos\vartheta\ .\]
The function $\psi(v_{1},\dots,v_{N},t)$ is a probability distribution
on the energy sphere and it is formally given by \[
\psi(\cdot,t)=e^{-N(I-Q)t}\psi_{0}\]
for some initial condition $\psi_{0}$. In the same paper, Kac introduced
the notion of chaotic sequences (although he did not use that name)
and showed that this notion is preserved under the time evolution.
This property is now called Propagation of Chaos. Kac went further
and showed that the single-particle marginal of the evolved density
is a solution of the model Boltzmann equation \[
\frac{\partial f}{\partial t}(v,t)=\frac{1}{2\pi}\int_{\mathbb{R}}d\omega\int_{0}^{2\pi}d\vartheta\left(f\left(v\cos\vartheta+\omega\sin\vartheta,t\right)f\left(-v\sin\vartheta+\omega\cos\vartheta,t\right)-f(v,t)f(\omega,t)\right)\]
thus giving a cogent derivation of the spatially homogeneous
Boltzmann equation. For a detailed review the reader may consult \cite{key-7}.
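As a numerical aside not contained in the paper, the collision process behind the master equation is straightforward to simulate: pick a random pair, rotate it by a uniform angle, and repeat. A minimal sketch in NumPy, which conserves the total energy $\sum_{i}v_{i}^{2}$ by construction:

```python
import numpy as np

def kac_step(v, rng):
    """One collision of the Kac walk: choose a random pair (i, j) and a
    uniform angle theta, and apply the rotation R_{i,j}(theta).
    The total energy sum(v**2) is invariant under each step."""
    i, j = rng.choice(v.size, size=2, replace=False)
    th = rng.uniform(0.0, 2.0 * np.pi)
    vi, vj = v[i], v[j]
    v[i] = vi * np.cos(th) + vj * np.sin(th)
    v[j] = -vi * np.sin(th) + vj * np.cos(th)
    return v
```

Iterating `kac_step` drives any initial velocity configuration toward the uniform distribution on the energy sphere, which is the ergodicity statement quantified by the spectral gap.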
The operator $N(I-Q)$ appearing in (\ref{master}) is a bounded
self-adjoint operator on the space $L^{2}\left(\mathbb{S}^{N-1}(\sqrt{N}),d\sigma^{N}\right)$,
where $d\sigma^{N}$ is the normalized uniform measure on the sphere.
It is fairly easy to see that the time evolution defined by (\ref{master})
is ergodic, i.e., the solution approaches the constant function $\psi=1$
as $t\to\infty$. By the spectral theorem, the rate of approach to
the constant function in the sense of $L^{2}$ distance is governed
by the gap \[
\Delta_{N}=\inf\left\{ \left\langle \varphi,N(I-Q)\varphi\right\rangle \,\,:\,\,\left\langle \varphi,1\right\rangle =0,\,\,\left\langle \varphi,\varphi\right\rangle =1\right\} \]
where the infimum is taken over all $\varphi\in L^{2}\left(\mathbb{S}^{N-1}(\sqrt{N}),d\sigma^{N}\right)$.
Kac conjectured that \[
\liminf_{N\rightarrow\infty}\Delta_{N}>0\ .\]
The conjecture was proved to be true by Janvresse in (\cite{key-8})
and the exact value of $\Delta_{N}$ was computed by Carlen, Carvalho,
and Loss in (\cite{key-6}).
The $L^{2}$ distance is rather unsatisfactory. For any reasonable
density $\psi$, in particular a chaotic one, it is easy to see that
\[
\left\Vert \psi(v_{1},\dots,v_{N},0)\right\Vert _{L^{2}\left(\mathbb{S}^{N-1}(\sqrt{N}),d\sigma^{N}\right)}\geq C^{N}\]
where $C>1$, and hence it would take a time of order $N$ to see
a substantial decay of the $L^{2}$ distance. Clearly, this is not what one
considers {}``approach to equilibrium''. A more natural quantity
to use is the entropy \[
H_{N}(\psi)=\int_{\mathbb{S}^{N-1}(\sqrt{N})}\psi\log\psi\, d\sigma^{N}\]
The crucial difference between the $L^{2}$ distance and the entropy
lies in the extensivity of the entropy, namely that if $\psi_{N}\left(v_{1},\dots,v_{N},t\right)$
satisfies $\psi_{N}\left(v_{1},\dots,v_{N},t\right)\approx\Pi_{i=1}^{N}f(v_{i},t)$
in a weak sense, i.e., is chaotic (referred to by Kac as `the Boltzmann
Property'), then \[
H_{N}(\psi_{N})\approx N\int_{\mathbb{R}}f(v,t)\log\left(\frac{f(v,t)}{\gamma(v)}\right)dv=NH(f(v,t)\vert\gamma(v))\]
where $\gamma(v)$ is the normalized Gaussian.
Differentiating the entropy of a solution to the Kac model gives the
time evolution equation:\[
\frac{\partial H_{N}(\psi_{N})}{\partial t}=-\left\langle \log\psi_{N},N(I-Q)\psi_{N}\right\rangle \]
This, along with a known inequality by Csisz\'ar, Kullback, Leibler,
and Pinsker and the extensivity property, allows us to conclude that
\[
\left\Vert \psi_{N}(v_{1},\dots,v_{N},t)d\sigma^{N}-d\sigma^{N}\right\Vert _{\mbox{Total Variation}}^{2}\leq2Ne^{-\Gamma_{N}t}H(f(v,0)\vert\gamma(v))\]
for \[
\Gamma_{N}=\inf\frac{\left\langle \log\left(\psi_{N}\right),N(I-Q)\psi_{N}\right\rangle }{H_{N}(\psi_{N})}\]
where the infimum is taken over all probability densities $\psi_{N}$
on $\mathbb{S}^{N-1}(\sqrt{N})$ which are symmetric in all their
components. $\Gamma_{N}$ is called the \emph{entropy production}.
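For completeness, the chain of estimates behind the total-variation bound can be sketched as follows (with the sign convention that the dissipation term $\left\langle \log\psi_{N},N(I-Q)\psi_{N}\right\rangle$ is nonnegative):

```latex
% Entropy dissipation along the Kac flow, followed by Gronwall's lemma:
\frac{d}{dt}H_{N}\left(\psi_{N}(t)\right)
  =-\left\langle \log\psi_{N},N(I-Q)\psi_{N}\right\rangle
  \le-\Gamma_{N}H_{N}\left(\psi_{N}(t)\right)
  \quad\Longrightarrow\quad
  H_{N}\left(\psi_{N}(t)\right)\le e^{-\Gamma_{N}t}H_{N}\left(\psi_{N}(0)\right).
% The Csiszar-Kullback-Leibler-Pinsker inequality and extensivity,
% H_N(psi_N(0)) ~ N H(f(.,0)|gamma), then give
\left\Vert \psi_{N}(t)\,d\sigma^{N}-d\sigma^{N}\right\Vert _{\mathrm{TV}}^{2}
  \le 2H_{N}\left(\psi_{N}(t)\right)
  \le 2Ne^{-\Gamma_{N}t}H\left(f(\cdot,0)\,\vert\,\gamma\right).
```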
The hope that there exists $C>0$ such that $\Gamma_{N}\geq C$ was
refuted in $2010$ in a paper by Carlen, Carvalho, Le Roux, Loss,
and Villani (\cite{key-7}), where the authors managed to find a sequence
of probability densities $\left\{ \phi_{N}\right\} _{N\in\mathbb{N}}$
with \begin{equation}
\limsup_{N\rightarrow\infty}\frac{\left\langle \log\left(\phi_{N}\right),N(I-Q)\phi_{N}\right\rangle }{H_{N}(\phi_{N})}=0\label{eq:Loss result}\end{equation}
While this means that the time of convergence to equilibrium is not
of logarithmic type, an exact estimate of the entropy production might
still give a better convergence rate than that of the original Kac
model.
The first step towards this goal was taken in $2003$ by Villani in
(\cite{key-10}), who proved that\[
\Gamma_{N}\geq\frac{2}{N-1}\ .\]
Villani conjectured that\[
\Gamma_{N}=O\left(\frac{1}{N}\right),\]
which would not bode well for the approach to equilibrium in the ergodic
sense, but poses an interesting mathematical problem.
The main result of this paper is to show that Villani's conjecture
is essentially true. More precisely, we will show that
\begin{thm*}
For any $0<\beta<\frac{1}{6}$ there exists a constant $C_{\beta}$
depending only on $\beta$ such that\begin{equation}
\Gamma_{N}\leq\frac{C_{\beta}\log N}{N^{1-2\beta}}\label{eq:my result}\end{equation}
\end{thm*}
(See Theorem \ref{thm:tnropy production result} in Section \ref{sec:Entropy-Production}).
Both (\ref{eq:Loss result}) and (\ref{eq:my result}) are proved
with the same idea: creating an $N$-particle symmetric function $F_{N}$
from a one-particle function $f$,\[
F_{N}\left(v_{1},\dots,v_{N}\right)=\frac{\Pi_{i=1}^{N}f(v_{i})}{Z_{N}(f,\sqrt{N})}\]
where \[
Z_{N}(f,r)=\int_{\mathbb{S}^{N-1}(r)}\Pi_{i=1}^{N}f(v_{i})d\sigma_{r}^{N}\]
and $d\sigma_{r}^{N}$ is the uniform probability measure on $\mathbb{S}^{N-1}(r)$.
The main difference between the two proofs lies in the fact that,
while in (\cite{key-7}) $f$ remains fixed, in our paper $f$ changes with
$N$ via a parameter $\delta=\delta_{N}$.
The paper is structured as follows: Section \ref{sec:About-the-Function Z}
reviews known results about the normalization function $Z_{N}(f,r)$.
Section \ref{sec:Centrel-Limit-Theorem} is the main theoretical part
of the paper, dealing with general properties that allow us to
give an asymptotic expression for the normalization function. Section
\ref{sec:Entropy-Production} is where we prove our main result, by picking
a function which is natural to the problem at hand and using the results
of the previous sections along with some involved computations. Section
\ref{sec:Final-Remarks} contains a few final remarks, and the Appendix
collects some simple but very useful computations that we use throughout
the entire paper.
We would like to conclude the introduction by thanking Michael Loss for
his helpful remarks and discussions, which made this paper possible.
\section{The Function $Z_{N}(f,r)$\label{sec:About-the-Function Z}}
The key to the computation of the entropy production lies in the
normalization function $Z_{N}(f,r)$. In this short section we give
a simple probabilistic interpretation of it, along with a formula
that will serve us in the following sections and in the final computation.
This section is a short review of known results from (\cite{key-7}).
\begin{lem}
\label{lem: Definition of h}Let $f$ be a density function for the
real valued random variable $V$. Then the density function of the
random variable $V^{2}$ is given by \[
h(u)=\frac{f(\sqrt{u})+f(-\sqrt{u})}{2\sqrt{u}}\]
\end{lem}
\begin{proof}
For any function $\varphi=\varphi(|x|)=\varphi(r)$ we find that\[
\mathbb{E}\varphi=\int_{0}^{\infty}\varphi(r)\cdot\left(f(r)+f(-r)\right)dr\]
on the other hand\[
\mathbb{E}\varphi=\int_{0}^{\infty}\varphi\left(\sqrt{t}\right)h(t)dt=\int_{0}^{\infty}\varphi(r)\cdot2r\cdot h\left(r^{2}\right)dr\]
Since $\varphi$ was arbitrary we find that\[
2r\cdot h\left(r^{2}\right)=f(r)+f(-r)\]
and the result follows.\end{proof}
\begin{lem}
\label{lem:Definition of s}Let $V_{1},\dots,V_{N}$ be independent
real valued random variables with identical density function $f(v)$.
Then the density function for $S_{N}=\sum_{i=1}^{N}V_{i}^{2}$ is
given by $s_{N}(u)=\frac{|\mathbb{S}^{N-1}|}{2}u^{\frac{N}{2}-1}Z_{N}(f,\sqrt{u})$. \end{lem}
\begin{proof}
Similar to Lemma \ref{lem: Definition of h} for any $\varphi=\varphi(r)$
we find that\[
\mathbb{E}\varphi=\int_{0}^{\infty}\varphi(r)\left(\int_{\mathbb{S}^{N-1}(r)}f(v_{1})\dots f(v_{N})ds_{r}^{N}\right)dr=\int_{0}^{\infty}\varphi(r)|\mathbb{S}^{N-1}|r^{N-1}Z_{N}(f,r)dr\]
on the other hand\[
\mathbb{E}\varphi=\int_{0}^{\infty}\varphi(\sqrt{x})s_{N}(x)dx=\int_{0}^{\infty}\varphi(r)\cdot2r\cdot s_{N}\left(r^{2}\right)dr\]
Since $\varphi$ is arbitrary\[
2r\cdot s_{N}\left(r^{2}\right)=|\mathbb{S}^{N-1}|r^{N-1}Z_{N}(f,r)\]
which implies the result.\end{proof}
\begin{cor}
\label{cor:Expression-for Z}(Expression for $Z_{N}(f,r)$) Under
the conditions of Lemma \ref{lem:Definition of s}\[
Z_{N}(f,\sqrt{r})=\frac{2h^{*N}(r)}{|\mathbb{S}^{N-1}|r^{\frac{N}{2}-1}}\]
where $h^{^{*N}}$ is the $N$-fold convolution of $h$, defined
in Lemma \ref{lem: Definition of h}.\end{cor}
\begin{proof}
This follows immediately from Lemma \ref{lem:Definition of s}, Lemma
\ref{lem: Definition of h}, and the fact that the density of a sum of independent random variables is the convolution of their individual densities.
\end{proof}
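As a sanity check not contained in the original argument, the corollary can be verified numerically in the one case where everything is explicit: for the standard Gaussian $f$, the density $h$ of $V^{2}$ is chi-squared with one degree of freedom, $h^{*N}$ is chi-squared with $N$ degrees of freedom, and $Z_{N}(f,\sqrt{u})=(2\pi)^{-N/2}e^{-u/2}$ can be computed directly, since $\Pi_{i}f(v_{i})$ is constant on the sphere $\mathbb{S}^{N-1}(\sqrt{u})$:

```python
from math import exp, gamma, pi

def surface_area(N):
    """|S^{N-1}|: surface area of the unit sphere in R^N."""
    return 2.0 * pi ** (N / 2) / gamma(N / 2)

def chi2_pdf(u, N):
    """Chi-squared density with N degrees of freedom; this is h^{*N}
    when f is the standard Gaussian (h itself is chi-squared(1))."""
    return u ** (N / 2 - 1) * exp(-u / 2) / (2 ** (N / 2) * gamma(N / 2))

def Z_direct(N, u):
    """Z_N(f, sqrt(u)) for the standard Gaussian f: the integrand
    prod_i f(v_i) = (2 pi)^{-N/2} exp(-u/2) is constant on the sphere."""
    return (2.0 * pi) ** (-N / 2) * exp(-u / 2)

def Z_from_convolution(N, u):
    """Z_N(f, sqrt(u)) via the corollary: 2 h^{*N}(u) / (|S^{N-1}| u^{N/2-1})."""
    return 2.0 * chi2_pdf(u, N) / (surface_area(N) * u ** (N / 2 - 1))
```

Both expressions agree identically, which is exactly the content of the corollary specialized to the Gaussian.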
\section{Central Limit Theorem\label{sec:Centrel-Limit-Theorem}}
In order to compute the entropy production, an asymptotic expression
for $Z_{N}(f,r)$ is needed. As seen in Section \ref{sec:About-the-Function Z},
the function $Z_{N}(f,r)$ is closely related to the $N$-fold convolution
of the density function $h(u)$, and as such we employ standard
techniques to estimate it. The specific function we construct as
a test function for the entropy production has the property that the
Fourier transform of its one-particle function splits the line into
two natural domains: one where we can use an analytic expansion, and
one where the decay is dominated by exponential functions. The radius
of the separating circle depends on a parameter $\delta=\delta_{N}$
that we exploit later on to reach the final conclusion.
While this is the case arising in our specific construction, we believe
that it is a natural way to view the problem. Even though we have yet
to attempt different test functions, we expect a similar situation
to occur in a larger class of functions built from a one-particle
function. As such, a generalization of our computation is presented
in this section.
The reader should keep in mind the following intuition while reading
this section: $g(\xi)$ represents the Fourier transform of the function
$h(u)$, connected to the one-particle function via Lemma \ref{lem: Definition of h}.
The first lemma of the section explores the domain outside the radius
of analyticity, while the second explores the domain where an analytic
expansion is possible. Lastly, the parameter $\delta$ is a function
of $N$, going to zero as $N$ goes to infinity.
\begin{lem}
\label{lem:Non-Analycity Domain}Let $g_{\delta}(\xi)=g_{\delta_{N}}(\xi)$
be such that
$(i)$ for $|\xi|>c\delta$ $|g_{\delta}(\xi)|\leq1-\alpha(\delta)$,
where $\alpha(\delta)>0$.
$(ii)$ $|g_{\delta}(\xi)|\leq1$ for all $\xi$.
Then\[
\int_{|\xi|>c\delta}\left|g_{\delta}^{N}(\xi)-\gamma_{1}^{N}(\xi)\right|d\xi\]
\[
\leq2\int_{|\xi|>c\delta}\left|g_{\delta}(\xi)\right|^{N-1}d\xi+\frac{\left(1-\alpha(\delta)\right)^{\frac{N}{2}-1}}{\pi c\delta\Sigma_{\delta}^{2}}+\frac{1}{\pi c\delta\Sigma_{\delta}^{2}}\cdot e^{-(1+N)\pi^{2}c^{2}\delta^{2}\Sigma_{\delta}^{2}}\]
where $\gamma_{1}(\xi)=e^{-2\pi i\xi}\cdot e^{-2\pi^{2}\xi^{2}\Sigma_{\delta}^{2}}$.\end{lem}
\begin{proof}
We have that\[
\int_{|\xi|>c\delta}\left|g_{\delta}^{N}(\xi)-\gamma_{1}^{N}(\xi)\right|d\xi=\int_{|\xi|>c\delta}\left|g_{\delta}(\xi)-\gamma_{1}(\xi)\right|\cdot\left|\sum_{k=0}^{N-1}g_{\delta}^{N-k-1}(\xi)\gamma_{1}^{k}(\xi)\right|d\xi\]
\[
\leq2\int_{|\xi|>c\delta}\sum_{k=0}^{N-1}\left|g_{\delta}^{N-k-1}(\xi)\right|\left|\gamma_{1}^{k}(\xi)\right|d\xi\]
\[
\leq2\int_{|\xi|>c\delta}\left|g_{\delta}(\xi)\right|^{N-1}d\xi+2\sum_{k=1}^{N-1}\left(1-\alpha(\delta)\right)^{N-k-1}\int_{|\xi|>c\delta}e^{-2k\pi^{2}\xi^{2}\Sigma_{\delta}^{2}}d\xi\]
Using Lemma \ref{lem:Gaussian-integral-estimation} and \ref{lem:Special-Sums-Evaluation}
in the Appendix we find that\[
\sum_{k=k_{0}}^{N-1}\int_{|\xi|>c\delta}e^{-2k\pi^{2}\xi^{2}\Sigma_{\delta}^{2}}d\xi\leq\sum_{k=k_{0}}^{N-1}\frac{\sqrt{2\pi}\cdot e^{-\frac{4k\pi^{2}c^{2}\delta^{2}\Sigma_{\delta}^{2}}{2}}}{\sqrt{4k\pi^{2}\Sigma_{\delta}^{2}}}\]
\[
\leq\frac{1}{2\pi c\delta\Sigma_{\delta}^{2}}\cdot e^{-2k_{0}\pi^{2}c^{2}\delta^{2}\Sigma_{\delta}^{2}}\]
Hence\[
\int_{|\xi|>c\delta}\left|g_{\delta}^{N}(\xi)-\gamma_{1}^{N}(\xi)\right|d\xi\]
\[
\leq2\int_{|\xi|>c\delta}\left|g_{\delta}(\xi)\right|^{N-1}d\xi+2\left(1-\alpha(\delta)\right)^{N-\left[\frac{N}{2}\right]-1}\sum_{k=1}^{\left[\frac{N}{2}\right]}\int_{|\xi|>c\delta}e^{-2k\pi^{2}\xi^{2}\Sigma_{\delta}^{2}}d\xi\]
\[
+2\sum_{k=\left[\frac{N}{2}\right]+1}^{N-1}\int_{|\xi|>c\delta}e^{-2k\pi^{2}\xi^{2}\Sigma_{\delta}^{2}}d\xi\]
\[
\leq2\int_{|\xi|>c\delta}\left|g_{\delta}(\xi)\right|^{N-1}d\xi+\frac{\left(1-\alpha(\delta)\right)^{\frac{N}{2}-1}}{\pi c\delta\Sigma_{\delta}^{2}}+\frac{1}{\pi c\delta\Sigma_{\delta}^{2}}\cdot e^{-(1+N)\pi^{2}c^{2}\delta^{2}\Sigma_{\delta}^{2}}\]
\end{proof}
\begin{lem}
\label{lem:Analytic Domain}Let $g_{\delta}(\xi)=g_{\delta_{N}}(\xi)$
be such that
$(i)$ there exist $M_{0},M_{1},M_{2}>0$ such that $\sup_{|\xi|<c\delta}\left|g_{\delta}(\xi)-\gamma_{1}(\xi)\right|\leq\left(\frac{M_{0}}{\delta^{2}}+\frac{M_{1}}{\delta}+M_{2}\right)|\xi|^{3}$.
$(ii)$ for $c\delta^{1+\beta}<|\xi|<c\delta$ $|g_{\delta}(\xi)|\leq1-\alpha_{\beta}(\delta)$
where $\alpha_{\beta}(\delta)>0$.
$(iii)$ $|g_{\delta}(\xi)|\leq1$ for all $\xi$.
Then \[
\int_{|\xi|<c\delta}\left|g_{\delta}^{N}(\xi)-\gamma_{1}^{N}(\xi)\right|d\xi\leq\frac{c^{4}\delta^{2}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)}{2}\]
\[
+\frac{c^{3}\delta\sqrt{N}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\left(1-\alpha_{\beta}(\delta)\right)^{\frac{N}{2}-1}}{\sqrt{\pi\Sigma_{\delta}^{2}}}+\frac{c^{3}\delta^{1-\beta}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)e^{-\pi^{2}(N-1)c^{2}\delta^{2+2\beta}\Sigma_{\delta}^{2}}}{2\pi c\delta\Sigma_{\delta}^{2}\cdot\sqrt{1-e^{-2\pi^{2}Nc^{2}\delta^{2}\Sigma_{\delta}^{2}}}}\]
\[
+\frac{2c^{3}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\sqrt{N}\delta^{1+3\beta}}{\sqrt{2\pi\Sigma_{\delta}^{2}}}\]
where $\gamma_{1}(\xi)=e^{-2\pi i\xi}\cdot e^{-2\pi^{2}\xi^{2}\Sigma_{\delta}^{2}}$.\end{lem}
\begin{rem}
The coefficients $M_{0}$, $M_{1}$, and $M_{2}$ play a major role in the
estimation. Notice that we get a better result if $M_{0}=0$,
and an even better one if both $M_{0}$ and $M_{1}$ are zero. \end{rem}
\begin{proof}
Similar to Lemma \ref{lem:Non-Analycity Domain} we find that\[
\int_{|\xi|<c\delta}\left|g_{\delta}^{N}(\xi)-\gamma_{1}^{N}(\xi)\right|d\xi\leq\sum_{k=0}^{N-1}\int_{|\xi|<c\delta}\left|g_{\delta}(\xi)-\gamma_{1}(\xi)\right|\left|g_{\delta}(\xi)\right|^{N-k-1}\left|\gamma_{1}(\xi)\right|^{k}d\xi\]
\[
\leq\int_{|\xi|<c\delta}\left(\frac{M_{0}}{\delta^{2}}+\frac{M_{1}}{\delta}+M_{2}\right)|\xi|^{3}d\xi+\sum_{k=1}^{N-1}\int_{|\xi|<c\delta}\left(\frac{M_{0}}{\delta^{2}}+\frac{M_{1}}{\delta}+M_{2}\right)|\xi|^{3}\left|g_{\delta}(\xi)\right|^{N-k-1}\left|\gamma_{1}(\xi)\right|^{k}d\xi\]
\[
=\frac{c^{4}\delta^{2}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)}{2}+\sum_{k=1}^{N-1}\int_{c\delta^{1+\beta}<|\xi|<c\delta}\left(\frac{M_{0}}{\delta^{2}}+\frac{M_{1}}{\delta}+M_{2}\right)|\xi|^{3}\left|g_{\delta}(\xi)\right|^{N-k-1}\left|\gamma_{1}(\xi)\right|^{k}d\xi\]
\[
+\sum_{k=1}^{N-1}\int_{|\xi|<c\delta^{1+\beta}}\left(\frac{M_{0}}{\delta^{2}}+\frac{M_{1}}{\delta}+M_{2}\right)|\xi|^{3}\left|g_{\delta}(\xi)\right|^{N-k-1}\left|\gamma_{1}(\xi)\right|^{k}d\xi\]
We have that\[
\sum_{k=1}^{N-1}\int_{c\delta^{1+\beta}<|\xi|<c\delta}\left(\frac{M_{0}}{\delta^{2}}+\frac{M_{1}}{\delta}+M_{2}\right)|\xi|^{3}\left|g_{\delta}(\xi)\right|^{N-k-1}\left|\gamma_{1}(\xi)\right|^{k}d\xi\]
\[
\leq c^{3}\delta\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\sum_{k=1}^{N-1}\left(1-\alpha_{\beta}(\delta)\right)^{N-k-1}\int_{c\delta^{1+\beta}<|\xi|<c\delta}e^{-2k\pi^{2}\xi^{2}\Sigma_{\delta}^{2}}d\xi\]
\[
\leq c^{3}\delta\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\left(1-\alpha_{\beta}(\delta)\right)^{\frac{N}{2}-1}\sum_{k=1}^{\left[\frac{N}{2}\right]}\int_{|\xi|<c\delta}e^{-2k\pi^{2}\xi^{2}\Sigma_{\delta}^{2}}d\xi\]
\[
+c^{3}\delta\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\sum_{k=\left[\frac{N}{2}\right]+1}^{N-1}\int_{c\delta^{1+\beta}<|\xi|<c\delta}e^{-2k\pi^{2}\xi^{2}\Sigma_{\delta}^{2}}d\xi\]
\[
\leq c^{3}\delta\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\left(1-\alpha_{\beta}(\delta)\right)^{\frac{N}{2}-1}\sum_{k=1}^{\left[\frac{N}{2}\right]}\frac{\sqrt{1-e^{-4\pi^{2}kc^{2}\delta^{2}\Sigma_{\delta}^{2}}}}{\sqrt{2\pi\Sigma_{\delta}^{2}k}}\]
\[
+c^{3}\delta\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\sum_{k=\left[\frac{N}{2}\right]+1}^{N-1}\left(\int_{|\xi|<c\delta}e^{-2k\pi^{2}\xi^{2}\Sigma_{\delta}^{2}}d\xi-\int_{|\xi|<c\delta^{1+\beta}}e^{-2k\pi^{2}\xi^{2}\Sigma_{\delta}^{2}}d\xi\right)\]
\[
\leq c^{3}\delta\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\left(1-\alpha_{\beta}(\delta)\right)^{\frac{N}{2}-1}\sum_{k=1}^{\left[\frac{N}{2}\right]}\frac{1}{\sqrt{2\pi\Sigma_{\delta}^{2}k}}\]
\[
+c^{3}\delta\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\sum_{k=\left[\frac{N}{2}\right]+1}^{N-1}\frac{\left(\sqrt{1-e^{-4\pi^{2}kc^{2}\delta^{2}\Sigma_{\delta}^{2}}}-\sqrt{1-e^{-2\pi^{2}kc^{2}\delta^{2+2\beta}\Sigma_{\delta}^{2}}}\right)}{\sqrt{2\pi k\Sigma_{\delta}^{2}}}\]
\[
\leq\frac{c^{3}\delta\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\left(1-\alpha_{\beta}(\delta)\right)^{\frac{N}{2}-1}}{\sqrt{2\pi\Sigma_{\delta}^{2}}}\cdot\sqrt{4\left[\frac{N}{2}\right]}\]
\[
+\frac{c^{3}\delta\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)}{\sqrt{2\pi\Sigma_{\delta}^{2}}}\sum_{k=\left[\frac{N}{2}\right]+1}^{N-1}\frac{1}{\sqrt{k}}\cdot\frac{e^{-2\pi^{2}kc^{2}\delta^{2+2\beta}\Sigma_{\delta}^{2}}-e^{-4\pi^{2}kc^{2}\delta^{2}\Sigma_{\delta}^{2}}}{\left(\sqrt{1-e^{-4\pi^{2}kc^{2}\delta^{2}\Sigma_{\delta}^{2}}}+\sqrt{1-e^{-2\pi^{2}kc^{2}\delta^{2+2\beta}\Sigma_{\delta}^{2}}}\right)}\]
\[
\leq\frac{c^{3}\delta\sqrt{N}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\left(1-\alpha_{\beta}(\delta)\right)^{\frac{N}{2}-1}}{\sqrt{\pi\Sigma_{\delta}^{2}}}\]
\[
+\frac{c^{3}\delta\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)}{\sqrt{2\pi\Sigma_{\delta}^{2}}}\sum_{k=\left[\frac{N}{2}\right]+1}^{N-1}\frac{1}{\sqrt{k}}\cdot\frac{e^{-2\pi^{2}kc^{2}\delta^{2+2\beta}\Sigma_{\delta}^{2}}}{\sqrt{1-e^{-4\pi^{2}kc^{2}\delta^{2}\Sigma_{\delta}^{2}}}}\]
\[
\leq\frac{c^{3}\delta\sqrt{N}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\left(1-\alpha_{\beta}(\delta)\right)^{\frac{N}{2}-1}}{\sqrt{\pi\Sigma_{\delta}^{2}}}\]
\[
+\frac{c^{3}\delta\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)}{\sqrt{2\pi\Sigma_{\delta}^{2}}\cdot\sqrt{1-e^{-2\pi^{2}Nc^{2}\delta^{2}\Sigma_{\delta}^{2}}}}\sum_{k=\left[\frac{N}{2}\right]+1}^{N-1}\frac{e^{-2\pi^{2}kc^{2}\delta^{2+2\beta}\Sigma_{\delta}^{2}}}{\sqrt{k}}\]
\[
\leq\frac{c^{3}\delta\sqrt{N}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\left(1-\alpha_{\beta}(\delta)\right)^{\frac{N}{2}-1}}{\sqrt{\pi\Sigma_{\delta}^{2}}}+\frac{c^{3}\delta^{1-\beta}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)e^{-\pi^{2}(N-1)c^{2}\delta^{2+2\beta}\Sigma_{\delta}^{2}}}{2\pi c\delta\Sigma_{\delta}^{2}\cdot\sqrt{1-e^{-2\pi^{2}Nc^{2}\delta^{2}\Sigma_{\delta}^{2}}}}\]
Next we find that\[
\sum_{k=1}^{N-1}\int_{|\xi|<c\delta^{1+\beta}}\left(\frac{M_{0}}{\delta^{2}}+\frac{M_{1}}{\delta}+M_{2}\right)|\xi|^{3}\left|g_{\delta}(\xi)\right|^{N-k-1}\left|\gamma_{1}(\xi)\right|^{k}d\xi\]
\[
\leq c^{3}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\delta^{1+3\beta}\cdot\sum_{k=1}^{N-1}\int_{|\xi|<c\delta^{1+\beta}}e^{-2k\pi^{2}\xi^{2}\Sigma_{\delta}^{2}}d\xi\]
\[
\leq c^{3}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\delta^{1+3\beta}\cdot\sum_{k=1}^{N-1}\frac{\sqrt{1-e^{-4k\pi^{2}c^{2}\delta^{2+2\beta}\Sigma_{\delta}^{2}}}}{\sqrt{2\pi k\Sigma_{\delta}^{2}}}\]
\[
\leq\frac{c^{3}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\delta^{1+3\beta}}{\sqrt{2\pi\Sigma_{\delta}^{2}}}\cdot\sum_{k=1}^{N-1}\frac{1}{\sqrt{k}}\]
\[
\leq\frac{2c^{3}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\sqrt{N}\delta^{1+3\beta}}{\sqrt{2\pi\Sigma_{\delta}^{2}}}\]
This completes the proof.\end{proof}
\begin{thm}
\label{thm:uniform approximation of convulotion}Let $h_{\delta}(x)=h_{\delta_{N}}(x)$
be a function such that $g_{\delta}(\xi)=\widehat{h_{\delta}}(\xi)$
satisfies
$(i)$ for $|\xi|>c\delta_{N}$ $|g_{\delta_{N}}(\xi)|\leq1-\alpha(\delta_{N})$,
where $\alpha(\delta_{N})>0$
$(ii)$ there exist $M_{0},M_{1},M_{2}>0$ such that $\sup_{|\xi|<c\delta_{N}}\left|g_{\delta_{N}}(\xi)-\gamma_{1}(\xi)\right|\leq\left(\frac{M_{0}}{\delta_{N}^{2}}+\frac{M_{1}}{\delta_{N}}+M_{2}\right)|\xi|^{3}$
$(iii)$ for $c\delta_{N}^{1+\beta}<|\xi|<c\delta_{N}$ $|g_{\delta_{N}}(\xi)|\leq1-\alpha_{\beta}(\delta_{N})$
where $\alpha_{\beta}(\delta_{N})>0$
$(iv)$ $|g_{\delta_{N}}(\xi)|\leq1$ for all $\xi$
and if\begin{equation}
\begin{array}{c}
\delta_{N},\alpha(\delta_{N})\, and\,\alpha_{\beta}(\delta_{N})\, are\, dominated\, by\, powers\, of\, N\\
\alpha(\delta_{N})N\underset{N\rightarrow\infty}{\longrightarrow}\infty\\
\alpha_{\beta}(\delta_{N})N\underset{N\rightarrow\infty}{\longrightarrow}\infty\\
\Sigma_{\delta_{N}}^{2}\delta_{N}^{2+2\beta}N\underset{N\rightarrow\infty}{\longrightarrow}\infty\\
\delta_{N}^{1+3\beta}N\underset{N\rightarrow\infty}{\longrightarrow}0\\
\sqrt{N}\Sigma_{\delta_{N}}\int_{|\xi|>c\delta_{N}}\left|g_{\delta_{N}}(\xi)\right|^{N-1}d\xi\underset{N\rightarrow\infty}{\longrightarrow}0\\
\delta_{N}^{\frac{3}{2}(1-\beta)}\Sigma_{\delta_{N}}\, is\, bounded\end{array}\label{eq:condition}\end{equation}
then\[
\sup_{x}\left|h_{\delta_{N}}^{*N}(x)-\frac{1}{\sqrt{N}\Sigma_{\delta_{N}}}\cdot\frac{e^{-\frac{\left(x-N\right)^{2}}{2N\Sigma_{\delta_{N}}^{2}}}}{\sqrt{2\pi}}\right|\leq\frac{\epsilon(N)}{\sqrt{N}\Sigma_{\delta_{N}}}\]
where $h_{\delta_{N}}^{*N}(x)$ is the $N$-fold convolution and
$\epsilon(N)\underset{N\rightarrow\infty}{\longrightarrow}0$. \end{thm}
\begin{proof}
It is easy to check that $\widehat{\frac{1}{\sqrt{N}\Sigma_{\delta}}\cdot\frac{e^{-\frac{\left(x-N\right)^{2}}{2N\Sigma_{\delta}^{2}}}}{\sqrt{2\pi}}}(\xi)=\gamma_{1}^{N}(\xi)$.
Using Lemmas \ref{lem:Non-Analycity Domain} and \ref{lem:Analytic Domain}
we find that \[
\sup_{x}\left|h_{\delta}^{*N}(x)-\frac{1}{\sqrt{N}\Sigma_{\delta}}\cdot\frac{e^{-\frac{\left(x-N\right)^{2}}{2N\Sigma_{\delta}^{2}}}}{\sqrt{2\pi}}\right|\leq\int_{\mathbb{R}}\left|g_{\delta}^{N}(\xi)-\gamma_{1}^{N}(\xi)\right|d\xi\]
\[
=\int_{|\xi|<c\delta}\left|g_{\delta}^{N}(\xi)-\gamma_{1}^{N}(\xi)\right|d\xi+\int_{|\xi|>c\delta}\left|g_{\delta}^{N}(\xi)-\gamma_{1}^{N}(\xi)\right|d\xi\]
\[
\leq\frac{1}{\sqrt{N}\Sigma_{\delta}}\left(\frac{c^{4}\sqrt{N\delta^{1+3\beta}}\delta^{\frac{3}{2}(1-\beta)}\Sigma_{\delta}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)}{2}\right.\]
\[
+\frac{c^{3}\delta N\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)\left(1-\alpha_{\beta}(\delta)\right)^{\frac{N}{2}-1}}{\sqrt{\pi}}+\frac{c^{3}\sqrt{N}\delta^{1-\beta}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)e^{-\pi^{2}(N-1)c^{2}\delta^{2+2\beta}\Sigma_{\delta}^{2}}}{2\pi c\delta\Sigma_{\delta}\cdot\sqrt{1-e^{-2\pi^{2}Nc^{2}\delta^{2}\Sigma_{\delta}^{2}}}}\]
\[
+\frac{2c^{3}\left(M_{0}+M_{1}\delta+M_{2}\delta^{2}\right)N\delta^{1+3\beta}}{\sqrt{2\pi}}+2\sqrt{N}\Sigma_{\delta}\int_{|\xi|>c\delta}\left|g_{\delta}(\xi)\right|^{N-1}d\xi\]
\[
\left.+2\left(1-\alpha(\delta)\right)^{\frac{N}{2}-1}\cdot\frac{\sqrt{N}}{2\pi c\delta\Sigma_{\delta}}+\frac{\sqrt{N}}{\pi c\delta\Sigma_{\delta}}\cdot e^{-(1+N)\pi^{2}c^{2}\delta^{2}\Sigma_{\delta}^{2}}\right)\]
Conditions (\ref{eq:condition}) ensure the desired conclusion.\end{proof}
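The theorem above is a uniform local central limit theorem for $N$-fold convolutions. As a sanity check of the type of convergence it asserts, one can watch the scaled sup-norm error decay for a simple explicit family. This is only an illustration with the exponential density, not the $h_{\delta}$ used later, and all names below are ours: the $N$-fold convolution of $h(u)=e^{-u}$ on $[0,\infty)$ (mean $1$, variance $1$) is the Gamma$(N,1)$ density, to be compared with a Gaussian of mean $N$ and variance $N$.

```python
import math

def gamma_pdf(x, n):
    # density of the n-fold convolution of h(u) = exp(-u) on [0, inf)
    if x <= 0:
        return 0.0
    return math.exp((n - 1) * math.log(x) - x - math.lgamma(n))

def gauss_pdf(x, n):
    # the approximating Gaussian with mean n and variance n (Sigma = 1)
    return math.exp(-(x - n) ** 2 / (2 * n)) / math.sqrt(2 * math.pi * n)

def sup_error(n, grid=4000):
    # sup |Gamma(n,1) pdf - Gaussian pdf| over mean +/- 8 standard deviations
    lo = max(0.0, n - 8 * math.sqrt(n))
    hi = n + 8 * math.sqrt(n)
    err = 0.0
    for i in range(grid + 1):
        x = lo + i * (hi - lo) / grid
        err = max(err, abs(gamma_pdf(x, n) - gauss_pdf(x, n)))
    return err

# the scaled error sqrt(N) * Sigma * sup|...| should decay as N grows
for n in (16, 64, 256):
    print(n, math.sqrt(n) * sup_error(n))
```

The analogue of $\epsilon(N)$ here is $\sqrt{N}\,\sup_x|h^{*N}(x)-\text{Gaussian}|$, and the printout shows it shrinking with $N$.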
\begin{rem}
\label{rem:epsilon j}A careful look at the proof of Theorem \ref{thm:uniform approximation of convulotion}
shows that for a fixed $j$ if $\lim_{N\rightarrow\infty}\sqrt{N-j}\Sigma_{\delta_{N}}\int_{|\xi|>c\delta_{N}}\left|g_{\delta_{N}}(\xi)\right|^{N-j-1}d\xi=0$
and conditions (\ref{eq:condition}) are satisfied (with the obvious
change) then\[
\sup_{x}\left|h_{\delta_{N}}^{*N-j}(x)-\frac{1}{\sqrt{N-j}\Sigma_{\delta_{N}}}\cdot\frac{e^{-\frac{\left(x-N+j\right)^{2}}{2(N-j)\Sigma_{\delta_{N}}^{2}}}}{\sqrt{2\pi}}\right|\leq\frac{\epsilon_{j}(N)}{\sqrt{N-j}\Sigma_{\delta_{N}}}\]
where $\epsilon_{j}(N)\underset{N\rightarrow\infty}{\longrightarrow}0$.
\end{rem}
\section{Entropy Production and Villani's Conjecture\label{sec:Entropy-Production}}
In this section we derive a precise estimate of the entropy production.
The idea behind this estimate is to use a superposition of stationary
solutions of the Boltzmann equation: the Maxwellian densities $M_{a}(v)=\frac{e^{-\frac{v^{2}}{2a}}}{\sqrt{2\pi a}}$.
This idea was exploited before by Carlen, Carvalho, Le Roux, Loss, and
Villani (\cite{key-7}) and, earlier still, by Bobylev and Cercignani
(\cite{key-1}).
The basic one-particle function is\[
f_{\delta_{N}}(v)=f_{\delta}(v)=\delta M_{\frac{1}{2\delta}}(v)+(1-\delta)M_{\frac{1}{2(1-\delta)}}(v)\]
This function has the property that both of its parts carry the same
energy,\[
\int_{\mathbb{R}}v^{2}\,\delta M_{\frac{1}{2\delta}}(v)dv=\int_{\mathbb{R}}v^{2}(1-\delta)M_{\frac{1}{2(1-\delta)}}(v)dv=\frac{1}{2}\]
while, as $\delta$ gets smaller, the number of particles represented
by $\delta M_{\frac{1}{2\delta}}(v)$ becomes far smaller than the number
represented by $(1-\delta)M_{\frac{1}{2(1-\delta)}}(v)$. A small number
of very energetic particles trying to equilibrate with a large number
of much less energetic ones causes slow relaxation to equilibrium.
This physical intuition will be confirmed shortly.
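To make the mass/energy split concrete, here is a small numerical check (an illustration only, not part of any proof; the helper names are ours): for $\delta=0.01$, the component $\delta M_{\frac{1}{2\delta}}$ carries only $1\%$ of the mass but half of the energy.

```python
import math

def maxwellian(a, v):
    # M_a(v) = exp(-v^2 / (2a)) / sqrt(2 pi a), a Gaussian of variance a
    return math.exp(-v * v / (2 * a)) / math.sqrt(2 * math.pi * a)

def midpoint(g, lo=-80.0, hi=80.0, n=200000):
    # simple midpoint rule; the integrands below decay like Gaussians
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

delta = 0.01
hot = lambda v: delta * maxwellian(1 / (2 * delta), v)            # few, energetic
cold = lambda v: (1 - delta) * maxwellian(1 / (2 * (1 - delta)), v)

mass_hot = midpoint(hot)                           # should be ~ delta
energy_hot = midpoint(lambda v: v * v * hot(v))    # should be ~ 1/2
energy_cold = midpoint(lambda v: v * v * cold(v))  # should be ~ 1/2
print(mass_hot, energy_hot, energy_cold)
```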
\begin{lem}
\label{lem:Properties of h}Let $h_{\delta}(u)=\frac{f_{\delta}(\sqrt{u})+f_{\delta}(-\sqrt{u})}{2\sqrt{u}}=\frac{f_{\delta}(\sqrt{u})}{\sqrt{u}}$
then
$(i)$ $\int_{0}^{\infty}h_{\delta}(u)du=1$
$(ii)$ $\int_{0}^{\infty}uh_{\delta}(u)du=1$
$(iii)$ $\Sigma_{\delta}^{2}=\int_{0}^{\infty}u^{2}h_{\delta}(u)du-\left(\int_{0}^{\infty}uh_{\delta}(u)du\right)^{2}=\frac{3}{4\delta(1-\delta)}-1$
$(iv)$ $\widehat{h_{\delta}}(\xi)=\frac{\delta}{\sqrt{1+\frac{2\pi i\xi}{\delta}}}+\frac{1-\delta}{\sqrt{1+\frac{2\pi i\xi}{1-\delta}}}$\end{lem}
\begin{proof}
$(i)-(iii)$ follow immediately from the fact that $\int_{0}^{\infty}u^{m}h_{\delta}(u)du=\int_{\mathbb{R}}x^{2m}f_{\delta}(x)dx$
and the fact that\[
\int_{\mathbb{R}}M_{a}(u)du=1,\,\,\int_{\mathbb{R}}u^{2}M_{a}(u)du=a,\,\,\int_{\mathbb{R}}u^{4}M_{a}(u)du=3a^{2}\]
It remains to prove $(iv)$.
It is easy to check that\[
\frac{d}{d\xi}\int_{\mathbb{R}}M_{a}(u)\cdot e^{-2\pi i\xi u^{2}}du=\frac{-2\pi ia}{1+4\pi ia\xi}\int_{\mathbb{R}}M_{a}(u)\cdot e^{-2\pi i\xi u^{2}}du\]
The initial value problem $\frac{d}{d\xi}\varphi(\xi)=\frac{-2\pi ia}{1+4\pi ia\xi}\varphi(\xi),\,\,\xi\in\mathbb{R}$,
$\varphi(0)=1$ has the unique solution\[
\varphi(\xi)=\frac{1}{\sqrt{1+4\pi ia\xi}}\]
Thus, the result follows from the definition of $f_{\delta}$ and
the fact that \[
\widehat{h_{\delta}}(\xi)=\int_{0}^{\infty}h_{\delta}(u)e^{-2\pi i\xi u}du=\int_{\mathbb{R}}f_{\delta}(u)e^{-2\pi i\xi u^{2}}du\]
\end{proof}
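Properties $(i)$-$(iv)$ of the lemma can be verified numerically. The sketch below (an illustration under our own quadrature setup, not part of the proof) checks the normalization, the unit mean, the variance formula $\Sigma_{\delta}^{2}=\frac{3}{4\delta(1-\delta)}-1$, and the closed form of $\widehat{h_{\delta}}$ at a sample point, for $\delta=0.3$.

```python
import cmath, math

def f_delta(delta, x):
    # the two-Maxwellian density f_delta(v) from the text
    a1, a2 = 1 / (2 * delta), 1 / (2 * (1 - delta))
    gauss = lambda a: math.exp(-x * x / (2 * a)) / math.sqrt(2 * math.pi * a)
    return delta * gauss(a1) + (1 - delta) * gauss(a2)

def integrate(g, lo=-20.0, hi=20.0, n=100000):
    # midpoint rule; the integrands decay fast, so +/-20 suffices
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

delta = 0.3
m0 = integrate(lambda x: f_delta(delta, x))           # (i): ~1
m1 = integrate(lambda x: x ** 2 * f_delta(delta, x))  # (ii): ~1
m2 = integrate(lambda x: x ** 4 * f_delta(delta, x))
print(m0, m1)
print(m2 - m1 ** 2, 3 / (4 * delta * (1 - delta)) - 1)  # (iii): both sides

# (iv): hat{h_delta}(xi) = int f_delta(x) exp(-2 pi i xi x^2) dx vs closed form
xi = 0.05
num = integrate(lambda x: f_delta(delta, x)
                * cmath.exp(-2j * math.pi * xi * x * x), n=200000)
closed = (delta / cmath.sqrt(1 + 2j * math.pi * xi / delta)
          + (1 - delta) / cmath.sqrt(1 + 2j * math.pi * xi / (1 - delta)))
print(abs(num - closed))
```

Note that for real $\xi$ the argument of each square root has positive real part, so `cmath.sqrt` (the principal branch) agrees with the continuous branch fixed by $\varphi(0)=1$ in the proof.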
\begin{lem}
\label{lem:Special properties of h}Let $g_{\delta}(\xi)=\widehat{h_{\delta}}(\xi)$
with $\delta<\frac{1}{2}$. Then
$(i)$ for $|\xi|>\frac{\delta}{4\pi}$ $|g_{\delta}(\xi)|\leq1-\delta\left(1-\sqrt[4]{\frac{4}{5}}\right)+\rho_{1}(\delta)$
where $\frac{\rho_{1}(\delta)}{\delta}\underset{\delta\rightarrow0}{\longrightarrow}0$
$(ii)$ there exist $M_{0},M_{1},M_{2}>0$ such that $\sup_{|\xi|<\frac{\delta}{4\pi}}\left|g_{\delta}(\xi)-\gamma_{1}(\xi)\right|\leq\left(\frac{M_{0}}{\delta^{2}}+\frac{M_{1}}{\delta}+M_{2}\right)|\xi|^{3}$.
$(iii)$ for $\frac{\delta^{1+\beta}}{4\pi}<|\xi|<\frac{\delta}{4\pi}$
$|g_{\delta}(\xi)|\leq1-\frac{\delta^{1+2\beta}}{16}+\rho_{2}(\delta)$
where $\frac{\rho_{2}(\delta)}{\delta^{1+2\beta}}\underset{\delta\rightarrow0}{\longrightarrow}0$
$(iv)$ $|g_{\delta}(\xi)|\leq1$ for all $\xi$.
$(v)$ for a fixed $j$, $\int_{|\xi|>\frac{\delta}{4\pi}}\left|g_{\delta_{N}}(\xi)\right|^{N-j-1}d\xi\leq\frac{\left(1-\delta\left(1-\sqrt[4]{\frac{4}{5}}\right)+\rho_{1}(\delta)\right)^{N-j-1}}{\pi}+\frac{2}{\pi(N-j-3)}$\end{lem}
\begin{proof}
$(i)$ For $|\xi|>\frac{\delta}{4\pi}$ \[
\left|g_{\delta}(\xi)\right|\leq\frac{\delta}{\sqrt[4]{1+\frac{4\pi^{2}\xi^{2}}{\delta^{2}}}}+\frac{1-\delta}{\sqrt[4]{1+\frac{4\pi^{2}\xi^{2}}{(1-\delta)^{2}}}}\leq\frac{\delta}{\sqrt[4]{\frac{5}{4}}}+\frac{1-\delta}{\sqrt[4]{1+\frac{\delta^{2}}{4(1-\delta)^{2}}}}\]
\[
=\sqrt[4]{\frac{4}{5}}\delta+(1-\delta)\left(1-\frac{\delta^{2}}{16(1-\delta)^{2}}+\dots\right)=1-\delta\left(1-\sqrt[4]{\frac{4}{5}}\right)+\rho_{1}(\delta)\]
where $\frac{\rho_{1}(\delta)}{\delta}\underset{\delta\rightarrow0}{\longrightarrow}0$.
$(ii)$ Using the expansions for $\frac{1}{\sqrt{1+x}}$ and $e^{x}$
we find that for $|\xi|<\frac{\delta}{4\pi}$\[
\left|g_{\delta}(\xi)-\gamma_{1}(\xi)\right|\leq|\xi|^{3}\left(\frac{8\pi^{3}}{\delta^{2}}\cdot\left|\phi\left(\frac{2\pi i\xi}{\delta}\right)\right|+\frac{8\pi^{3}}{(1-\delta)^{2}}\cdot\left|\phi\left(\frac{2\pi i\xi}{1-\delta}\right)\right|\right.
\[
+\frac{3\pi^{3}}{\delta(1-\delta)}-4\pi^{3}+2\pi^{4}\left(\frac{3}{4\delta(1-\delta)}-1\right)^{2}|\xi|+\frac{3\pi^{4}}{\delta(1-\delta)}|\xi|-4\pi^{4}|\xi|\]
\[
+4\pi^{5}\left(\frac{3}{4\delta(1-\delta)}-1\right)^{2}|\xi|^{2}+4\pi^{6}\left(\frac{3}{4\delta(1-\delta)}-1\right)^{2}|\xi|^{3}\]
\[
\left.+8\pi^{3}\left|\psi\left(-2\pi i\xi\right)\right|+8\pi^{6}\left(\frac{3}{4\delta(1-\delta)}-1\right)^{3}|\xi|^{3}\left|\psi\left(-2\pi^{2}\Sigma_{\delta}^{2}\xi^{2}\right)\right|\right)\]
where $\phi(x)$ is analytic in $|x|<\frac{1}{2}$ and $\psi(x)$
is an entire function. Denoting $M_{\phi}=\sup_{|x|\leq\frac{1}{2}}\left|\phi(x)\right|$
and $M_{\psi}=\sup_{|x|\leq\frac{1}{2}}\left|\psi(x)\right|$ we find
that \[
\left|g_{\delta}(\xi)-\gamma_{1}(\xi)\right|\leq\left(\frac{8\pi^{3}}{\delta^{2}}M_{\phi}+\frac{57\pi^{3}}{8\delta}+\pi^{3}\left(32M_{\phi}+\frac{141}{64}+\frac{539}{64}M_{\psi}\right)\right)|\xi|^{3}
$(iii)$ For $|\xi|>\frac{\delta^{1+\beta}}{4\pi}$ \[
\left|g_{\delta}(\xi)\right|\leq\frac{\delta}{\sqrt[4]{1+\frac{4\pi^{2}\xi^{2}}{\delta^{2}}}}+\frac{1-\delta}{\sqrt[4]{1+\frac{4\pi^{2}\xi^{2}}{(1-\delta)^{2}}}}\leq\frac{\delta}{\sqrt[4]{1+\frac{\delta^{2\beta}}{4}}}+\frac{1-\delta}{\sqrt[4]{1+\frac{\delta^{2+2\beta}}{4(1-\delta)^{2}}}}\]
\[
=\delta\left(1-\frac{\delta^{2\beta}}{16}+\dots\right)+\left(1-\delta\right)\left(1-\frac{\delta^{2+2\beta}}{16(1-\delta)^{2}}+\dots\right)=1-\frac{\delta^{1+2\beta}}{16}+\rho_{2}(\delta)\]
where $\frac{\rho_{2}(\delta)}{\delta^{1+2\beta}}\underset{\delta\rightarrow0}{\longrightarrow}0$.
$(iv)$ This is a general property of the Fourier transform of a density
function.
$(v)$ \[
\int_{|\xi|>\frac{\delta}{4\pi}}\left|g_{\delta_{N}}(\xi)\right|^{N-j-1}d\xi\leq\int_{|\xi|>\frac{\delta}{4\pi}}\left(\frac{\delta}{\sqrt[4]{1+\frac{4\pi^{2}\xi^{2}}{\delta^{2}}}}+\frac{1-\delta}{\sqrt[4]{1+\frac{4\pi^{2}\xi^{2}}{(1-\delta)^{2}}}}\right)^{N-j-1}d\xi\]
\[
=\frac{\delta}{2\pi}\int_{|x|>\frac{1}{2}}\left(\frac{\delta}{\sqrt[4]{1+x^{2}}}+\frac{1-\delta}{\sqrt[4]{1+\frac{\delta^{2}x^{2}}{(1-\delta)^{2}}}}\right)^{N-j-1}dx\]
\[
\leq\frac{\left(1-\delta\left(1-\sqrt[4]{\frac{4}{5}}\right)+\rho_{1}(\delta)\right)^{N-j-1}}{\pi}+\frac{\delta}{\pi}\int_{\frac{1}{\delta}}^{\infty}\left(\frac{\delta^{\frac{3}{2}}}{\sqrt{\delta x}}+\frac{(1-\delta)^{\frac{3}{2}}}{\sqrt{\delta x}}\right)^{N-j-1}dx\]
\[
\leq\frac{\left(1-\delta\left(1-\sqrt[4]{\frac{4}{5}}\right)+\rho_{1}(\delta)\right)^{N-j-1}}{\pi}+\frac{2}{\pi(N-j-3)}\]
\end{proof}
\begin{rem}
\label{rem:eps j is ok}Note that in our case\[
\sqrt{N-j}\Sigma_{\delta_{N}}\int_{|\xi|>c\delta_{N}}\left|g_{\delta_{N}}(\xi)\right|^{N-j-1}d\xi\]
\[
\leq\frac{3\sqrt{N-j}\left(1-\delta\left(1-\sqrt[4]{\frac{4}{5}}\right)+\rho_{1}(\delta)\right)^{N-j-1}}{2\pi\delta}+\frac{3}{\sqrt{N\delta}\cdot\sqrt{1-\frac{j+3}{N}}}\]
so as long as the conditions in (\ref{eq:condition}) are satisfied
we have that $\epsilon_{j}(N)$ defined in Remark \ref{rem:epsilon j}
would satisfy $\epsilon_{j}(N)\underset{N\rightarrow\infty}{\longrightarrow}0$.\end{rem}
\begin{thm}
\label{thm:approximation of Z}Let $f_{\delta_{N}}(v)=f_{\delta}(v)=\delta M_{\frac{1}{2\delta}}(v)+(1-\delta)M_{\frac{1}{2(1-\delta)}}(v)$
such that \begin{equation}
\begin{array}{c}
\delta_{N}\, is\, dominated\, by\, powers\, of\, N\\
\begin{array}{c}
\delta_{N}^{1+2\beta}\cdot N\underset{N\rightarrow\infty}{\longrightarrow}\infty\\
\delta_{N}^{1+3\beta}\cdot N\underset{N\rightarrow\infty}{\longrightarrow}0\end{array}\end{array}\label{eq:delta conditions}\end{equation}
then for a fixed $j$ \[
Z_{N-j}\left(f_{\delta_{N}},\sqrt{u}\right)=\frac{2}{\sqrt{N-j}\cdot\Sigma_{\delta_{N}}\cdot|\mathbb{S}^{N-j-1}|u^{\frac{N-j}{2}-1}}\left(\frac{e^{-\frac{\left(u-N+j\right)^{2}}{2(N-j)\Sigma_{\delta_{N}}^{2}}}}{\sqrt{2\pi}}+\lambda_{j}(N-j,u)\right)\]
where $\sup_{u\in\mathbb{R}}\left|\lambda_{j}(N-j,u)\right|\leq\epsilon_{j}(N)$
and $\lim_{N\rightarrow\infty}\epsilon_{j}(N)=0$. \end{thm}
\begin{proof}
This follows immediately from Corollary \ref{cor:Expression-for Z},
Lemmas \ref{lem:Properties of h} and \ref{lem:Special properties of h},
Theorem \ref{thm:uniform approximation of convulotion}, and Remark
\ref{rem:eps j is ok}.
\end{proof}
We are now ready to compute the entropy production. We start by
estimating its denominator and numerator.
\begin{lem}
\label{lem:denominator of entropy production}Let $F_{N}\left(v_{1},\dots,v_{N}\right)=\frac{\Pi_{i=1}^{N}f_{\delta_{N}}(v_{i})}{Z_{N}(f,\sqrt{N})}$ where
$\delta_{N}$ satisfies conditions (\ref{eq:delta conditions}). Then
\[
\lim_{N\rightarrow\infty}\frac{\int_{\mathbb{S}^{N-1}(\sqrt{N})}F_{N}\log F_{N}d\sigma^{N}}{N}=\frac{\log2}{2}\]
\end{lem}
\begin{proof}
Using the symmetry of the problem, Lemma \ref{lem:Integration-on-the-Sphere II}
from the Appendix, Theorem \ref{thm:approximation of Z} and Stirling's
formula we find that\[
\int_{\mathbb{S}^{N-1}(\sqrt{N})}F_{N}\log F_{N}d\sigma^{N}=\frac{1}{Z_{N}(f_{\delta},\sqrt{N})}\cdot\sum_{k=1}^{N}\int_{\mathbb{S}^{N-1}(\sqrt{N})}\left(\Pi_{i=1}^{N}f_{\delta}(v_{i})\right)\log f_{\delta}(v_{k})d\sigma^{N}-\log Z_{N}(f_{\delta},\sqrt{N})\]
\[
=\frac{N|\mathbb{S}^{N-2}|}{N^{\frac{N-2}{2}}|\mathbb{S}^{N-1}|}\int_{-\sqrt{N}}^{\sqrt{N}}f_{\delta}(v_{1})\log f_{\delta}(v_{1})\left(N-v_{1}^{2}\right)^{\frac{N-3}{2}}\cdot\frac{Z_{N-1}\left(f_{\delta},\sqrt{N-v_{1}^{2}}\right)}{Z_{N}(f_{\delta},\sqrt{N})}dv_{1}-\log Z_{N}(f_{\delta},\sqrt{N})\]
\[
=\frac{N}{\sqrt{1-\frac{1}{N}}\left(1+\sqrt{2\pi}\lambda_{0}\left(N,N\right)\right)}\int_{\mathbb{R}}f_{\delta}(v_{1})\log f_{\delta}(v_{1})\cdot\chi_{[-\sqrt{N},\sqrt{N}]}(v_{1})\]
\[
\cdot\left(e^{-\frac{\left(1-v_{1}^{2}\right)^{2}}{(N-1)\Sigma_{\delta}^{2}}}+\sqrt{2\pi}\lambda_{1}\left(N-1,N-v_{1}^{2}\right)\right)dv_{1}\]
\[
-\left(\log\left(\sqrt{2}\left(1+O\left(\frac{1}{\sqrt{N}}\right)\right)\left(1+\sqrt{2\pi}\lambda_{0}(N,N)\right)\right)-\frac{N}{2}\left(\log2\pi+1\right)-\frac{1}{2}\cdot\log\left(\frac{3}{4\delta(1-\delta)}-1\right)\right)\]
Since $0<f_{\delta}\leq1$ we have that\[
\left|f_{\delta}(v_{1})\log f_{\delta}(v_{1})\cdot\chi_{[-\sqrt{N},\sqrt{N}]}(v_{1})\cdot\left(e^{-\frac{\left(1-v_{1}^{2}\right)^{2}}{(N-1)\Sigma_{\delta}^{2}}}+\sqrt{2\pi}\lambda_{1}\left(N-1,N-v_{1}^{2}\right)\right)\right|\]
\[
\leq\left(1+\sqrt{2\pi}\epsilon_{1}(N)\right)\left(-f_{\delta}(v_{1})\log f_{\delta}(v_{1})\right)\]
\[
\leq\left(1+\sqrt{2\pi}\epsilon_{1}(N)\right)\left(-\delta M_{\frac{1}{2\delta}}(v_{1})\log\left(\delta M_{\frac{1}{2\delta}}(v_{1})\right)-(1-\delta)M_{\frac{1}{2(1-\delta)}}(v_{1})\log\left((1-\delta)M_{\frac{1}{2(1-\delta)}}(v_{1})\right)\right)\]
\[
=g_{\delta}(v_{1})\]
It is easy to check that \[
g_{\delta_{N}}(v)\underset{N\rightarrow\infty}{\longrightarrow}-M_{\frac{1}{2}}(v)\log M_{\frac{1}{2}}(v)\]
and \[
\int_{\mathbb{R}}g_{\delta_{N}}(v)dv\underset{N\rightarrow\infty}{\longrightarrow}-\int_{\mathbb{R}}M_{\frac{1}{2}}(v)\log M_{\frac{1}{2}}(v)dv=\frac{\log\pi}{2}+\frac{1}{2}\]
Since\[
f_{\delta_{N}}(v_{1})\log f_{\delta_{N}}(v_{1})\cdot\chi_{[-\sqrt{N},\sqrt{N}]}(v_{1})\cdot\left(e^{-\frac{4\left(1-v_{1}^{2}\right)^{2}\delta_{N}(1-\delta_{N})}{(N-1)\left(3-4\delta_{N}(1-\delta_{N})\right)}}+\sqrt{2\pi}\lambda_{1}\left(N-1,N-v_{1}^{2}\right)\right)
\[
\underset{N\rightarrow\infty}{\longrightarrow}M_{\frac{1}{2}}(v_{1})\log M_{\frac{1}{2}}(v_{1})\]
we conclude that \[
\frac{\int_{\mathbb{S}^{N-1}(\sqrt{N})}F_{N}\log F_{N}d\sigma^{N}}{N}\underset{N\rightarrow\infty}{\longrightarrow}\int_{\mathbb{R}}M_{\frac{1}{2}}(v_{1})\log M_{\frac{1}{2}}(v_{1})dv_{1}+\frac{1}{2}+\frac{\log2\pi}{2}=\frac{\log2}{2}\]
due to the generalized dominated convergence theorem. \end{proof}
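The numerical identities behind this limit, $-\int_{\mathbb{R}}M_{\frac{1}{2}}(v)\log M_{\frac{1}{2}}(v)dv=\frac{\log\pi}{2}+\frac{1}{2}$ and the resulting value $\frac{\log2}{2}$, can be checked directly. The following sketch is an illustration only; the quadrature setup is ours.

```python
import math

def m_half(v):
    # M_{1/2}(v) = exp(-v^2) / sqrt(pi), the Maxwellian with a = 1/2
    return math.exp(-v * v) / math.sqrt(math.pi)

# midpoint-rule evaluation of the entropy -int p log p
lo, hi, n = -12.0, 12.0, 400000
h = (hi - lo) / n
ent = 0.0
for i in range(n):
    v = lo + (i + 0.5) * h
    p = m_half(v)
    ent -= p * math.log(p) * h

# closed form used in the proof: log(pi)/2 + 1/2
print(ent, math.log(math.pi) / 2 + 0.5)

# the limit of the normalized entropy: -(log(pi)/2 + 1/2) + 1/2 + log(2 pi)/2
print(-ent + 0.5 + math.log(2 * math.pi) / 2, math.log(2) / 2)
```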
\begin{lem}
\label{lem:numerator of the entropy production}Let $F_{N}\left(v_{1},\dots,v_{N}\right)=\frac{\Pi_{i=1}^{N}f_{\delta_{N}}(v_{i})}{Z_{N}(f,\sqrt{N})}$
where $\delta_{N}$ satisfies conditions (\ref{eq:delta conditions}).
Then there exists a constant $C_{type-\delta}$ depending only on
the behavior of $\delta_{N}$ such that \[
\frac{\left\langle \log F_{N},N(I-Q)F_{N}\right\rangle }{N}\leq C_{type-\delta}\left(-\delta_{N}\log\delta_{N}\right)\]
\end{lem}
\begin{proof}
Similar to Lemma \ref{lem:denominator of entropy production} by using
the symmetry of the problem, Lemma \ref{lem:Integration-on-the-Sphere II}
from the Appendix, Theorem \ref{thm:approximation of Z} and Stirling's
formula we find that \[
\left\langle \log F_{N},N(I-Q)F_{N}\right\rangle \]
\[
=\frac{1}{Z_{N}(f_{\delta},\sqrt{N})(N-1)\pi}\sum_{k=1}^{N}\int_{\mathbb{S}^{N-1}(\sqrt{N})}\log f_{\delta}(v_{k})\]
\[
\cdot\left(\sum_{i<j}\int_{0}^{2\pi}\left(f^{\otimes N}\left(v_{1},\dots,v_{N}\right)-f^{\otimes N}\left(R_{i,j}(\vartheta)\left(v_{1},\dots,v_{N}\right)\right)\right)d\vartheta\right)d\sigma^{N}\]
If both $i$ and $j$ differ from $k$, the corresponding integral
vanishes, and so\[
\left\langle \log F_{N},N(I-Q)F_{N}\right\rangle =\frac{1}{Z_{N}(f_{\delta},\sqrt{N})(N-1)\pi}\sum_{k=1}^{N}\sum_{j\not=k}\int_{\mathbb{S}^{N-1}(\sqrt{N})}\log f_{\delta}(v_{k})\]
\[
\cdot\left(\int_{0}^{2\pi}\left(f^{\otimes N}\left(v_{1},\dots,v_{N}\right)-f^{\otimes N}\left(R_{k,j}(\vartheta)\left(v_{1},\dots,v_{N}\right)\right)\right)d\vartheta\right)d\sigma^{N}\]
\[
=\frac{N}{Z_{N}(f_{\delta},\sqrt{N})\pi}\int_{0}^{2\pi}d\vartheta\int_{\mathbb{S}^{N-1}(\sqrt{N})}\left(-\log f_{\delta}(v_{1})\right)\left(f_{\delta}(v_{1}(\vartheta))f_{\delta}(v_{2}(\vartheta))-f_{\delta}(v_{1})f_{\delta}(v_{2})\right)\left(\Pi_{i=3}^{N}f_{\delta}(v_{i})\right)d\sigma^{N}\]
\[
=\frac{N|\mathbb{S}^{N-3}|}{|\mathbb{S}^{N-1}|N^{\frac{N-2}{2}}\pi}\int_{0}^{2\pi}d\vartheta\int_{v_{1}^{2}+v_{2}^{2}\leq N}\left(-\log f_{\delta}(v_{1})\right)\left(f_{\delta}(v_{1}(\vartheta))f_{\delta}(v_{2}(\vartheta))-f_{\delta}(v_{1})f_{\delta}(v_{2})\right)\]
\[
\cdot\left(N-v_{1}^{2}-v_{2}^{2}\right)^{\frac{N-4}{2}}\frac{Z_{N-2}\left(f_{\delta},\sqrt{N-v_{1}^{2}-v_{2}^{2}}\right)}{Z_{N}(f_{\delta},\sqrt{N})}dv_{1}dv_{2}\]
\[
=\frac{N}{\pi\sqrt{1-\frac{2}{N}}}\int_{0}^{2\pi}d\vartheta\int_{v_{1}^{2}+v_{2}^{2}\leq N}\left(-\log f_{\delta}(v_{1})\right)\left(f_{\delta}(v_{1}(\vartheta))f_{\delta}(v_{2}(\vartheta))-f_{\delta}(v_{1})f_{\delta}(v_{2})\right)\]
\[
\cdot\frac{e^{-\frac{\left(2-v_{1}^{2}-v_{2}^{2}\right)^{2}}{(N-2)\Sigma_{\delta}^{2}}}+\sqrt{2\pi}\lambda_{2}\left(N-2,N-v_{1}^{2}-v_{2}^{2}\right)}{1+\sqrt{2\pi}\lambda_{0}(N,N)}dv_{1}dv_{2}\]
Using rotational symmetry and symmetry in the variables we find that
\[
\left\langle \log F_{N},N(I-Q)F_{N}\right\rangle \]
\[
=\frac{N}{4\pi\sqrt{1-\frac{2}{N}}}\int_{0}^{2\pi}d\vartheta\int_{v_{1}^{2}+v_{2}^{2}\leq N}\left(\log f_{\delta}(v_{1}(\vartheta))f_{\delta}(v_{2}(\vartheta))-\log f_{\delta}(v_{1})f_{\delta}(v_{2})\right)\]
\[
\left(f_{\delta}(v_{1}(\vartheta))f_{\delta}(v_{2}(\vartheta))-f_{\delta}(v_{1})f_{\delta}(v_{2})\right)\cdot\frac{e^{-\frac{\left(2-v_{1}^{2}-v_{2}^{2}\right)^{2}}{(N-2)\Sigma_{\delta}^{2}}}+\sqrt{2\pi}\lambda_{2}\left(N-2,N-v_{1}^{2}-v_{2}^{2}\right)}{1+\sqrt{2\pi}\lambda_{0}(N,N)}dv_{1}dv_{2}\]
\[
\leq\frac{N}{4\pi\sqrt{1-\frac{2}{N}}}\int_{0}^{2\pi}d\vartheta\int_{\mathbb{R}^{2}}\left(\log f_{\delta}(v_{1}(\vartheta))f_{\delta}(v_{2}(\vartheta))-\log f_{\delta}(v_{1})f_{\delta}(v_{2})\right)
\[
\cdot\left(f_{\delta}(v_{1}(\vartheta))f_{\delta}(v_{2}(\vartheta))-f_{\delta}(v_{1})f_{\delta}(v_{2})\right)\cdot\frac{1+\sqrt{2\pi}\epsilon_{2}(N)}{1+\sqrt{2\pi}\lambda_{0}(N,N)}dv_{1}dv_{2}\]
\[
=\frac{N\left(1+\sqrt{2\pi}\epsilon_{2}(N)\right)}{\pi\sqrt{1-\frac{2}{N}}\left(1+\sqrt{2\pi}\lambda_{0}(N,N)\right)}\int_{0}^{2\pi}d\vartheta\int_{\mathbb{R}^{2}}\left(-\log f_{\delta}(v_{1})\right)\left(f_{\delta}(v_{1}(\vartheta))f_{\delta}(v_{2}(\vartheta))-f_{\delta}(v_{1})f_{\delta}(v_{2})\right)dv_{1}dv_{2}\]
Since $M_{a}(v_{1}(\vartheta))M_{a}(v_{2}(\vartheta))=M_{a}(v_{1})M_{a}(v_{2})$
we see that\[
f_{\delta}(v_{1}(\vartheta))f_{\delta}(v_{2}(\vartheta))-f_{\delta}(v_{1})f_{\delta}(v_{2})=\delta(1-\delta)\left(M_{\frac{1}{2\delta}}(v_{1}(\vartheta))M_{\frac{1}{2(1-\delta)}}(v_{2}(\vartheta))-M_{\frac{1}{2\delta}}(v_{1})M_{\frac{1}{2(1-\delta)}}(v_{2})\right)\]
\[
+\delta(1-\delta)\left(M_{\frac{1}{2\delta}}(v_{2}(\vartheta))M_{\frac{1}{2(1-\delta)}}(v_{1}(\vartheta))-M_{\frac{1}{2\delta}}(v_{2})M_{\frac{1}{2(1-\delta)}}(v_{1})\right)\]
\[
\leq\delta(1-\delta)\left(M_{\frac{1}{2\delta}}(v_{1}(\vartheta))M_{\frac{1}{2(1-\delta)}}(v_{2}(\vartheta))+M_{\frac{1}{2\delta}}(v_{2}(\vartheta))M_{\frac{1}{2(1-\delta)}}(v_{1}(\vartheta))\right)\]
and along with \[
-\log f_{\delta}(v_{1})\leq-\log\left(\delta M_{\frac{1}{2\delta}}(v_{1})\right)\leq-\frac{3\log\delta}{2}+\frac{\log\pi}{2}+\delta\left(v_{1}^{2}(\vartheta)+v_{2}^{2}(\vartheta)\right)\]
we conclude that \[
\frac{\left\langle \log F_{N},N(I-Q)F_{N}\right\rangle }{N}\]
\[
\leq\frac{4\left(1+\sqrt{2\pi}\epsilon_{2}(N)\right)\delta(1-\delta)}{\sqrt{1-\frac{2}{N}}\left(1+\sqrt{2\pi}\lambda_{0}(N,N)\right)}\int_{\mathbb{R}^{2}}\left(-\frac{3\log\delta}{2}+\frac{\log\pi}{2}+\delta\left(v_{1}^{2}+v_{2}^{2}\right)\right)M_{\frac{1}{2\delta}}(v_{1})M_{\frac{1}{2(1-\delta)}}(v_{2})dv_{1}dv_{2}\]
\[
\leq\frac{4\left(1+\sqrt{2\pi}\epsilon_{2}(N)\right)}{\sqrt{1-\frac{2}{N}}\left(1+\sqrt{2\pi}\lambda_{0}(N,N)\right)}\left(\frac{3}{2}-\frac{\log\pi}{2\log\delta}-\frac{1}{2\log\delta}-\frac{\delta}{2\log\delta}\right)\left(-\delta\log\delta\right)\]
The result follows. \end{proof}
\begin{thm}
\label{thm:big result general}Let $F_{N}\left(v_{1},\dots,v_{N}\right)=\frac{\Pi_{i=1}^{N}f_{\delta_{N}}(v_{i})}{Z_{N}(f,\sqrt{N})}$
where $\delta_{N}$ satisfies conditions (\ref{eq:delta conditions}).
Then there exists a constant $C_{type-\delta}$ and an integer $N_{type-\delta}$
depending only on the behavior of $\delta_{N}$ such that for every
$N>N_{type-\delta}$ \[
\frac{\left\langle \log F_{N},N(I-Q)F_{N}\right\rangle }{\int_{\mathbb{S}^{N-1}(\sqrt{N})}F_{N}\log F_{N}d\sigma^{N}}\leq C_{type-\delta}\left(-\delta_{N}\log\delta_{N}\right)\]
\end{thm}
\begin{proof}
This follows immediately from Lemma \ref{lem:denominator of entropy production}
and \ref{lem:numerator of the entropy production}.\end{proof}
\begin{thm}
\label{thm:big result beta}Let $F_{N}\left(v_{1},\dots,v_{N}\right)=\frac{\Pi_{i=1}^{N}f_{\delta_{N}}(v_{i})}{Z_{N}(f,\sqrt{N})}$
where $\delta_{N}=\frac{1}{N^{1-2\beta}}$ and $0<\beta<\frac{1}{6}$.
Then there exists a constant $C_{\beta}$ and an integer $N_{\beta}$
depending only on $\beta$ such that for every $N>N_{\beta}$ \[
\frac{\left\langle \log F_{N},N(I-Q)F_{N}\right\rangle }{\int_{\mathbb{S}^{N-1}(\sqrt{N})}F_{N}\log F_{N}d\sigma^{N}}\leq\frac{C_{\beta}\log N}{N^{1-2\beta}}\]
\end{thm}
\begin{proof}
This follows immediately from Theorem \ref{thm:big result general}
and the fact that $\delta_{N}=\frac{1}{N^{1-2\beta}}$ satisfies conditions
(\ref{eq:delta conditions}).\end{proof}
From this we conclude our main result:
\begin{thm}
\label{thm:tnropy production result}For any $0<\beta<\frac{1}{6}$
there exists a constant $C_{\beta}$ depending only on $\beta$ such
that\[
\Gamma_{N}\leq\frac{C_{\beta}\log N}{N^{1-2\beta}}\]
\end{thm}
\section{Final Remarks\label{sec:Final-Remarks}}
One question we might ask is the following: can the given proof be
modified to obtain the exact rate in Villani's conjecture? Looking at
the proof, we notice that the result we obtained has very tight
conditions in terms of $\beta$: we needed $\delta_{N}^{1+2\beta}N$ to
diverge to infinity \emph{and} $\delta_{N}^{1+3\beta}N$ to go to zero.
This leaves little room for variation, and leads us to believe that
the family of functions constructed here would not suffice to prove
the exact version of Villani's conjecture. Something more clever must
be done.
Another question we do not know the answer to is the fourth-moment
question. Both in this paper and in \cite{key-8}, the family of
functions constructed has an unbounded fourth moment. Would restricting
the fourth moment lead to a lower bound on the entropy production?
Lastly, can our computation be generalized to interactions more
difficult than Kac's model? Can the same idea be used in different
models of the Boltzmann equation?
While we do not know the answers to these questions, we hope that
this paper sheds some light on the entropy production problem and that
at least some of the above questions will seem more approachable after
reading it.
\section{Introduction}
The formation and distribution of vortices\cite{Toreblad2004} are
important aspects of the study of many-body systems such as
superconductors and Bose-Einstein condensates. Vortices carry
information about the many-body wave function and the correlations
between particles. The famous Laughlin wave
function,\cite{Laughlin1983} whose vortices are concentrated on the
electrons, is the basis for understanding the fractional quantum Hall
effect (FQHE) of the two-dimensional electron gas (2DEG) in strong
magnetic fields. The concentrated vortices keep the electrons far
apart and thus reduce the short-range interaction most effectively. When
an electron moves around a vortex, the phase of the many-body wave
function changes by $2n\pi$, where $n$ is the order of the
zero. In composite-fermion theory,\cite{Jain1989} where the FQHE is
understood in terms of its integer counterpart, the electrons
feel an effective magnetic field because the phase change caused
by the vortices partly cancels the Aharonov-Bohm phase caused by the
external field.
Quantum dots (QDs) containing a few electrons have attracted much
interest in recent years. In magnetic fields, a quantum dot can
be viewed as a precursor of a quantum Hall system. The so-called maximum
density droplet (MDD), which corresponds to filling factor
$\nu=1$, has been demonstrated both experimentally and
theoretically. In addition, electron-transport
experiments in magnetic fields have revealed various transitions of the
charge distribution.\cite{Oosterkamp1999} This has stimulated
growing interest in investigations of the electronic states and
vortices.\cite{Saarikoski2004, Saarikoski2005} The long-range part
of the interaction is more important in quantum dots. When
$\nu\leq1$, it has been found that the vortices are no longer
concentrated on the electrons but bound around
them.\cite{Tavernier2004, Saarikoski2004} The interaction also
makes the liquid-crystal transition\cite{Yannouleas2002,
Reimann2006, Huang2006} in quantum dots much easier than in the
2DEG. As the electrons localize, the vortex distributions
become more dispersed and form vortex clusters.
Owing to the Zeeman splitting, the electrons are taken as fully
polarized in most theoretical discussions of the ground states of QDs in
strong fields. The spin degree of freedom can then be ignored and is
irrelevant to the study of vortices. With the improvement of
nanotechnology, it has recently become possible to fabricate QDs with
negligible Zeeman splitting.\cite{Salis2001, Ellenberger2006} Then,
even in strong magnetic fields, the spin degree of freedom
must be taken into account when the properties of the ground and
low-lying states are concerned. The angular-momentum transitions of
the ground states and of the lowest states with different spins for a
few electrons in a quantum dot have been explored both by the theory
of electron molecules\cite{Maksym1995, Maksym2000} and by exact
diagonalization.\cite{Tavernier2003, Tavernier2006} The formation
and redistribution of vortices in the presence of the spin degree of
freedom are important for understanding the character of the electronic
states and the transport measurements in systems without
Zeeman splitting. In this paper, we study the vortices in
few-electron quantum dots with the spin degree of freedom to explore the
transitions of the electronic states in magnetic fields.
\section{Conditional wave function with spin}
The model Hamiltonian of a few-electron quantum dot in the magnetic
field with parabolic confinement and without the Zeeman splitting is
\begin{equation}\label{EQ:Hamil}
H\!\!=\!\!\sum_{i=1}^N {\left [\frac{1}{2m}\left
(\hat{P}_i+e\vec{A}\right )^2\!\!+V(r_i)\right
]}\!\!+\!\!\sum_{i<j}{\frac{e^2}{4\pi\varepsilon|\vec{r}_i-\vec{r}_j|}},
\end{equation}
where $N$ is the particle number, $\vec{A}$ is the vector potential
of the field, $V(r_i)$ is the confinement potential of the dot,
whose strength equals $2\,\mathrm{meV}$ in the following
discussions, and the last term is the Coulomb interaction between
particles. The effective mass $m$ of the electron and the static
dielectric constant $\varepsilon$ are $0.067m_e$ and 12.4,
respectively, appropriate for GaAs. The eigenstates $\Psi$ of the
Hamiltonian in Eq.~(\ref{EQ:Hamil}) are obtained by exact
diagonalization. Without spin-orbit coupling, such states are common
eigenstates of the total angular momentum $L$, the total spin $S$,
and its $z$-component $S_z$, so in the following discussions we use
the abbreviation $(L,S,S_z)$ to represent them.
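For orientation, the characteristic scales implied by these parameters can be estimated numerically. The sketch below assumes the quoted confinement strength is to be read as $\hbar\omega_0 = 2$ meV; it is an illustration, not part of the diagonalization itself.

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34    # J s
m_e = 9.1093837015e-31    # kg
q_e = 1.602176634e-19     # C
eps0 = 8.8541878128e-12   # F/m

# GaAs parameters from the text
m_eff = 0.067 * m_e       # effective electron mass
eps_r = 12.4              # static dielectric constant
hw0 = 2e-3 * q_e          # confinement strength, assumed hbar*omega0 = 2 meV

# Oscillator length l0 = sqrt(hbar^2 / (m * hbar*omega0))
l0 = math.sqrt(hbar**2 / (m_eff * hw0))

# Coulomb energy at separation l0, converted to meV
E_C = q_e**2 / (4 * math.pi * eps_r * eps0 * l0) / q_e * 1e3

print(f"oscillator length: {l0 * 1e9:.1f} nm")
print(f"Coulomb scale:     {E_C:.1f} meV")
```

Since the Coulomb scale comes out comparable to the confinement energy, the interaction term cannot be treated perturbatively, which is one reason exact diagonalization is used.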
It has been demonstrated that the vortex structures depend on the
range of the interaction between electrons.\cite{Stopa2006} We
therefore also use the Yukawa-screened Coulomb
interaction\cite{Ando1982, Stopa2006}
\begin{equation}
I(r)=\frac{e^2}{4\pi\varepsilon}\frac{\exp(-r/\alpha)}{r}
\end{equation}
instead of the bare Coulomb interaction in some of the following
discussions to understand the vortex behaviors.
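As a numerical illustration (a minimal sketch; the screening length $\alpha$ and the units are chosen arbitrarily here), the screened form coincides with the bare Coulomb interaction at short range but is exponentially suppressed at long range:

```python
import math

def coulomb(r):
    """Bare Coulomb interaction, in units where e^2/(4*pi*eps) = 1."""
    return 1.0 / r

def yukawa(r, alpha=1.0):
    """Yukawa-screened interaction I(r) = exp(-r/alpha)/r in the same
    units; alpha is the screening length."""
    return math.exp(-r / alpha) / r

# At short range the two forms nearly coincide; at long range the
# screened interaction is exponentially suppressed.
for r in (0.1, 1.0, 5.0):
    print(f"r = {r:3.1f}   Coulomb = {coulomb(r):7.3f}   Yukawa = {yukawa(r):7.3f}")
```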
Having obtained the many-body eigenstates, we employ the conditional
single-particle wave function\cite{Saarikoski2004} with the spin
degree of freedom to explicitly display the vortices of the wave
functions,
\begin{equation}\label{EQ:Cwaf}
\psi_c({\bf r})=\frac{\Psi({\bf r},\sigma^*,{\bf r}_2^*,\sigma_2^*,
\cdots, {\bf r}_N^*,\sigma_N^*)}{\Psi({\bf r}^*,\sigma^*,{\bf
r}_2^*,\sigma_2^*, \cdots, {\bf r}_N^*,\sigma_N^*)},
\end{equation}
where the $\sigma_i$ represent the spins of the electrons. In this
function, one electron is chosen as the probe electron and the other
electrons are pinned at fixed positions; in Eq.~(\ref{EQ:Cwaf}), the
variables with an asterisk are the fixed ones. The phase angle of
the conditional wave function reveals how the phase of the many-body
function changes when an electron moves around another one. A plot
of the electron density together with the phase of the conditional
wave function therefore gives a picture of the vortices in the
many-body wave function.
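As a toy illustration of how vortices are read off from the phase (not the paper's numerical method, which diagonalizes the full Hamiltonian), consider a hypothetical two-electron Laughlin-like state $(z_1-z_2)^3\,e^{-(|z_1|^2+|z_2|^2)/4}$. Circling the probe around the pinned electron accumulates a phase of $3\times 2\pi$:

```python
import cmath
import math

def laughlin_pair(z1, z2, m=3):
    """Toy two-electron Laughlin-like state (z1 - z2)^m with Gaussian
    factors; m vortices sit on each electron."""
    return (z1 - z2)**m * math.exp(-(abs(z1)**2 + abs(z2)**2) / 4)

def winding_number(psi_c, center, radius=0.5, steps=400):
    """Phase accumulated by the conditional wave function psi_c as the
    probe circles `center`, in units of 2*pi (the vortex count inside)."""
    total = 0.0
    prev = cmath.phase(psi_c(center + radius))
    for k in range(1, steps + 1):
        z = center + radius * cmath.exp(2j * math.pi * k / steps)
        cur = cmath.phase(psi_c(z))
        # unwrap jumps across the branch cut at +/- pi
        total += (cur - prev + math.pi) % (2 * math.pi) - math.pi
        prev = cur
    return round(total / (2 * math.pi))

z_pinned = 1.0 + 0.0j                          # fixed electron
psi_c = lambda z: laughlin_pair(z, z_pinned)   # conditional wave function
print(winding_number(psi_c, z_pinned))         # 3 vortices on the electron
print(winding_number(psi_c, -2.0 + 0.0j))      # none away from it
```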
\begin{figure}[ht]
\includegraphics*[angle=0,width=0.38\textwidth]{Fig.1.eps}
\caption{\label{FIG:DiffSz} (Color online) The vortices of the
four-electron state (-15,1,1) seen with a spin-up (left column) and
a spin-down (right column) probe electron in the conditional
single-particle wave function. The upper and lower rows correspond
to the Coulomb and the screened interactions, respectively. The
density of the probe electron is plotted as contours on a
logarithmic scale. The phase changes from $-\pi$ to $\pi$ as the
shading changes from the darkest gray to white. $+$ and $\times$
indicate the positions of pinned spin-up and spin-down electrons,
respectively.}
\end{figure}
Some explanation of the choices of the probe and pinned electrons is
in order. First, when there is an asymmetry between the spin-up and
spin-down electrons in a state, different choices of the probe
electron's spin may result in different displays of the vortices. In
Fig.~\ref{FIG:DiffSz}, we present the vortices of the four-electron
state (-15,1,1) as an example, with the pinned electrons fixed at
the most probable radius. The state (-15,1,1) contains three spin-up
electrons and only one spin-down electron, so there are two choices
of the probe electron, and they lead to different displays of the
vortices. If the probe electron is spin-up, there are three and two
vortices around each spin-up and spin-down pinned electron,
respectively. If the unique spin-down electron is chosen as the
probe, only two vortices around each spin-up electron can be seen.
For the FQHE, where the vortices are concentrated on the electrons,
Halperin\cite{Halperin1983} suggested a set of trial functions
including the spin degree of freedom. The polynomial parts of these
functions have the form
$\prod_{i>j}(z_i-z_j)^{m_+}\prod_{i>j}(\xi_i-\xi_j)^{m_-}\prod_{i,j}(z_i-\xi_j)^n$,
where $z$ and $\xi$ are the complex coordinates of the spin-up and
spin-down electrons. The orders of the vortices indeed depend on the
spins of the electrons, and from the viewpoint of an electron with a
given spin, only part of the vortices is visible. In a quantum dot,
although the vortices are no longer concentrated on the electrons,
the number of visible vortices can still depend on the spins.
Nevertheless, with an appropriate choice of the spin of the probe
electron, we can obtain useful information about the vortices of a
state. In the following discussions of four-electron states, we
mainly focus on those with $S_z=0$, for which there is no such
difficulty. For the five-electron case with $S_z=0.5$, we choose a
spin-up electron as the probe.
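The spin dependence of the visible vortex count in the Halperin form can be checked directly by counting phase windings. The sketch below uses a hypothetical Halperin $(3,3,2)$ polynomial with two spin-up and one spin-down electron; it is illustrative and not one of the dot states discussed here.

```python
import cmath
import math

def halperin_332(zs, xs):
    """Polynomial part of a Halperin (3,3,2) state; zs and xs are the
    complex coordinates of the spin-up and spin-down electrons."""
    val = 1 + 0j
    for coords, m in ((zs, 3), (xs, 3)):
        for i in range(len(coords)):
            for j in range(i):
                val *= (coords[i] - coords[j])**m
    for z in zs:
        for x in xs:
            val *= (z - x)**2
    return val

def winding(f, center, radius=0.3, steps=500):
    """Phase winding of f along a circle around `center`, in units of 2*pi."""
    total, prev = 0.0, cmath.phase(f(center + radius))
    for k in range(1, steps + 1):
        cur = cmath.phase(f(center + radius * cmath.exp(2j * math.pi * k / steps)))
        total += (cur - prev + math.pi) % (2 * math.pi) - math.pi
        prev = cur
    return round(total / (2 * math.pi))

up_pinned, down_pinned = 1.0 + 0j, -1.0 + 0j
# A spin-up probe sees 3 vortices on the pinned spin-up electron but
# only 2 on the spin-down one:
up_probe = lambda z: halperin_332([z, up_pinned], [down_pinned])
print(winding(up_probe, up_pinned), winding(up_probe, down_pinned))
# A spin-down probe sees only 2 vortices on a spin-up electron:
down_probe = lambda x: halperin_332([up_pinned, 2.0 + 0j], [x])
print(winding(down_probe, up_pinned))
```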
Another feature to be pointed out is that there are vortices in the
system which originate from the long-range interaction; such
vortices move to infinity if the interaction is completely
screened.\cite{Stopa2006} In Fig.~\ref{FIG:DiffSz}(b) we can see two
such vortices, and Fig.~\ref{FIG:DiffSz}(d) shows that they move
away when the interaction is screened.
\begin{figure}[ht]
\includegraphics*[angle=0,width=0.38\textwidth]{Fig.2.eps}
\caption{\label{FIG:diffpin} The vortex displays of the
five-electron state (-30,0.5,0.5) with the electrons fixed in the
neighbor (a) and alternative (b) modes, respectively. $+$ and
$\times$ indicate the positions of pinned spin-up and spin-down
electrons, respectively.}
\end{figure}
Besides the choice of the probe electron, the manner in which the
electrons are pinned may also affect the vortex displays of some
states if there is more than one way to assign spins to the fixed
positions. For instance, if one spin-up electron is selected as the
probe for a five-electron state with $S_z=0.5$ and the remaining
electrons are fixed at the vertices of a square, there are two
inequivalent ways to assign the spin of each pinned electron: the
electrons with the same spin can be fixed at neighboring or at
alternating positions (we call these the neighbor and alternative
modes, respectively). For states with certain angular momenta, the
two ways may then give different vortex numbers.
Fig.~\ref{FIG:diffpin} shows an example: with the two fixation
modes, the displayed vortex numbers of the state (-30,0.5,0.5)
differ by one. Such a phenomenon generally occurs when a
higher-dimensional object is inspected from a lower-dimensional
space, i.e., the visibility of an entity depends on the viewpoint of
the projection. It should be pointed out that such differences only
exist for non-fully-polarized states. For fully polarized states,
the spin and spatial parts of the wave function can be separated, so
the way the electrons with different spins are fixed does not affect
the display of the vortices. For the four-electron case, there is
only one way to fix the electrons at the vertices of an equilateral
triangle, and of course no such difference exists. In the following
counts of the vortices of five-electron states with $S_z=0.5$, we
therefore simply use the fixation mode which gives the larger vortex
number for each state. However, when we discuss the behaviors of the
vortices in subsection III.B, we must inspect the different displays
of the vortices more carefully, and we will return to this topic
there.
\section{Discussion}
\subsection{Energy level structures and spin-dependent vortices}
\begin{figure}[ht]
\includegraphics*[angle=0,width=0.4\textwidth]{Fig.3.eps}
\caption{\label{FIG:Energy} The energy levels of the lowest states
with different spins for the four-electron (a) and five-electron (b)
cases as functions of the magnetic field. The solid and dashed lines
correspond to the energies without and with the Zeeman splitting,
respectively. $N$ times the energy of the lowest Landau level has
been subtracted from the total energy.}
\end{figure}
Before the detailed study of the vortices, we briefly discuss the
energy level structure of the quantum dots. In Fig.~\ref{FIG:Energy}
we show typical energy spectra of four and five electrons in
magnetic fields obtained from exact diagonalization. If the Zeeman
splitting is included, the ground states of the few-electron quantum
dot are fully polarized with maximum $S_z$ in strong magnetic
fields. With increasing field, for both the four- and five-electron
cases there are angular momentum transitions of the ground states,
with an increment between neighboring states equal to the particle
number; the allowed angular momenta are the so-called ``magic
numbers''. If the Zeeman splitting can be ignored, states with the
same $L$ and $S$ but different $S_z$ are degenerate. The ground
states then no longer need to be fully polarized even in strong
fields, and the lowest states with different spins gradually form a
narrow band in which the energies of the different spin states are
nearly degenerate. The ground states at fields corresponding to the
fractional filling factors $\nu=1/(2p+1)$ are still fully polarized,
although they are multiply degenerate due to the different $S_z$.
Along with the gradual formation of the narrow band, there is a
transition from liquid to crystal states.
\begin{figure}[ht]
\includegraphics*[angle=0,width=0.4\textwidth]{Fig.4.eps}
\caption{\label{FIG:vformation} (Color online) The vortex numbers of
the lowest states with different total spins as functions of the
magnetic field. For the four-electron case (a), $S_z=0$ and the
black, red, and green lines correspond to the states with
$S=0,1,\text{ and }2$, respectively. For the five-electron case (b),
$S_z=0.5$ and the black, red, and green lines correspond to the
states with $S=0.5,1.5,\text{ and }2.5$, respectively. The vortex
numbers of the states with $S=0.5$ as a function of the angular
momentum are shown in the inset for clarity.}
\end{figure}
\begin{figure*}[ht!]
\includegraphics*[angle=0,width=0.72\textwidth]{Fig.5.eps}
\caption{\label{FIG:vor4} (Color online) Spin-dependent vortex
distributions of several states with the long-range Coulomb (first
row) and the Yukawa-screened interaction (second row). $+$ and
$\times$ indicate the positions of pinned spin-up and spin-down
electrons, respectively. The arrows indicate the separated
vortices.}
\end{figure*}
With increasing field, there are also angular momentum transitions
for each spin state separately. The rules for the allowed angular
momenta which can appear in the transition sequences can be obtained
from the electron-molecule theory in strong fields\cite{Maksym1995,
Maksym2000} or from exact diagonalization. The exact diagonalization
also reveals that some states whose angular momenta are in
accordance with the rules can be absent from the transition sequence
while the electronic states are still liquidlike. With the
transition from liquid to crystal states, these absences gradually
disappear. For the four- and five-electron cases, the transition
rules and the absent states are listed in the Appendix. In the
four-electron case, only two states are absent from the transition
sequences, because the magnetic field can easily make such a small
number of electrons form a rotating Wigner molecule (RWM). In the
five-electron case, however, the absences are much more numerous and
only disappear in strong fields.
In these transitions, the absolute values of the angular momenta
increase, and the number of vortices also gradually increases. We
illustrate the vortex numbers of the lowest states with different
$S$ and the lowest $S_z$ in Fig.~\ref{FIG:vformation}; the complete
data are listed in the Appendix. In the counts, the vortex numbers
are those displayed by the conditional wave functions, and the
vortices which move to infinity under the screened interaction are
excluded. The vortex numbers of the fully polarized states increase
monotonically, and although there are degeneracies due to the
different $S_z$, the vortices of fully polarized states with
different $S_z$ show no differences. In the following discussions we
will therefore focus on the states with lower $S$. It can be seen
that the vortex numbers in the transition sequences of the
non-fully-polarized states are monotonically non-decreasing when the
magnetic field is not very strong (for the five-electron case
$\nu\gtrsim 1/3$, i.e., $|L|\lesssim30$). This monotonicity is not
preserved when the field becomes strong, especially in the
five-electron case; see also the inset of
Fig.~\ref{FIG:vformation}(b), where the vortex numbers as a function
of $|L|$ are shown for clarity. By inspecting the vortices of all
the states whose angular momenta are in accordance with the
transition rules, we find that the vortex numbers of the
five-electron states with $L=6,11,14,20,23,31$ and $S=0.5$ exceed
those of their next neighbors, and these states are absent from the
transition sequence so as to avoid breaking the monotonicity of the
vortex number with respect to the field or the angular momentum.
There are also other absent states whose total vortex numbers do not
break the monotonicity; in the next subsection we analyze the vortex
distributions and discuss the absences in more depth.
\subsection{Insight into the vortex formation}
In 2DEGs, if the spin degree of freedom is taken into account, the
total spin of the ground state is no longer fully polarized when the
magnetic field deviates from the values corresponding to the
fractional filling factors. Along with the changes of the total
spin, there are also quasi-particle (quasi-electron or quasi-hole)
excitations, namely the reversed-spin quasi-particle
excitations\cite{Oaknin1996, Szlufarska2001} or skyrmions. These
excitations change the many-body wave functions and are reflected in
the formation and redistribution of vortices. Due to the long-range
Coulomb interaction, however, the vortices in QDs are dispersed;
especially when the electrons form rotating Wigner molecules, the
vortices spread over the whole area. We therefore employ the
Yukawa-screened Coulomb interaction\cite{Stopa2006} to clearly
reveal the origins and behaviors of the different vortices in QDs.
In Fig.~\ref{FIG:vor4} we show the vortex distributions of some
states as examples. The first kind of states are those corresponding
to the fractional filling factors $1/(2p+1)$, like the four-electron
state with $L=-18$ and $S=2$, whose filling factor is $1/3$. Such
states with different $S_z$ have the same vortex distribution, as
shown in Fig.~\ref{FIG:vor4}, where the state (-18,2,0) is taken as
an example. Under the screened Coulomb interaction, the dispersed
vortices approach the positions of the electrons; this is just the
scenario of the Laughlin limit in 2DEGs.
When the field deviates from the values corresponding to the
fractional filling factors, states with lower spins can become the
ground states. Along with such a reversal of the total spin of the
ground state, another kind of vortex can be identified. As
illustrated in Fig.~\ref{FIG:vor4}, the states (-17,1,1) and
(-17,1,0) contain not only vortices similar to those in (-18,2,0)
but also separated vortices which do not approach the positions of
the electrons when the interaction is screened, as indicated by the
arrows in the plots. Such vortices, which remain separated from the
others and do not move to infinity when the screened interaction is
considered, are analogous to the quasi-particles in 2DEGs. An
interesting feature is that the two states have different vortex
distributions although they are degenerate in energy and have the
same total spin. In Fig.~\ref{FIG:vor4}, it can be seen that the
vortex numbers of (-17,1,1) and (-17,1,0) are the same as that of
(-18,2,0), because they have the same filling factor. For (-17,1,1),
there is one separated vortex; the state (-17,1,0) has one more
separated vortex than (-17,1,1). The reason is that the two
identical spin-down electrons in (-17,1,0) must carry the same
vortex number, so one of the vortices belonging to a spin-up
electron of (-17,1,1) must leave the electron when it becomes
spin-down. Similar differences in the vortex distributions also
exist in some other degenerate states with different $S_z$. In fact,
these degenerate states also have different entanglement entropies,
because both the vortex distribution and the entropy reflect the
differences in the components of the states.
For the five-electron case, there are also separated vortex
excitations. However, when we identify the separated vortices by
employing the screened interaction, it must be realized that the
displayed behavior of a vortex may depend on the fixation manner of
the pinned electrons. The merging of a vortex with the position of
an electron may be only the result of inspection from a particular
`viewpoint'; that is to say, only those merging behaviors that are
independent of the fixation manner are genuine. We present an
example in Fig.~\ref{FIG:vor4}. For the five-electron state
(-14,0.5,0.5), the total vortex number in the plots is six. If the
pinned electrons are in the alternative mode, it seems that two
vortices approach the position of the spin-down electron. From the
viewpoint of the neighbor mode, however, only one vortex does so,
while the other remains separated from the electron. We thus
conclude that the apparent merging is an artifact of the particular
`viewpoint' and that there are two separated vortices in
(-14,0.5,0.5).
Having classified the vortices, we can now return to the analysis of
the absent states in the angular momentum transitions. The behavior
of the separated vortices implies that they do not help reduce the
short-range interaction between electrons. We know that the
concentrated vortices in the Laughlin wave function are the most
efficient way to reduce the short-range interaction. Thus, as the
short-range interaction becomes more and more important, all the
vortices in the quantum dot except the separated ones approach the
Laughlin limit to reduce the interaction. From the tables in the
Appendix, we find that most of the absent states have more separated
vortices than their neighboring states. For example, the separated
vortex numbers of the four-electron absent state (-6,0,0) and of all
the five-electron absent states with $S=0.5$ exceed those of the
neighboring states, and most of these absent states have no fewer
than three separated vortices. In fact, more separated vortices make
the vortex distribution of these states more dispersed than that of
their neighbors, which is unfavorable in energy while the
short-range interaction is still important; such states may
therefore be absent from the transition sequences of the lowest
states. The states with $S$ above the lowest value are more
complicated. As discussed for Fig.~\ref{FIG:vor4}, states with the
same $L$ and $S$ but different $S_z$ can have different separated
vortices. However, within the states with the same $S_z$, it can
still be seen that the absent states have at least more separated
vortices than the neighboring states with smaller $|L|$. The only
special absent state is (-17,1.5,0.5): it has fewer vortices than
its neighboring states, and too few vortices are likewise of no
advantage in reducing the energy.
As mentioned previously, with increasing magnetic field the
electrons gradually form rotating Wigner molecules. The short-range
interaction then becomes unimportant, and the difference between the
separated and the other vortices can be ignored. Even the states
with more separated vortices are then no longer unfavorable in
energy, and the absences of states from the transition sequences
gradually disappear. In addition, the monotonicity of the vortex
number along the angular momentum transitions need not be preserved.
The data in the tables in the Appendix support these conclusions.
\section{Summary}
In summary, we have investigated the vortex structures of the
electronic states in quantum dots without the Zeeman splitting. The
vortex display with the spin degree of freedom depends on both the
choice of the probe electron and the fixation manner of the pinned
electrons in the conditional single-particle wave function. By
choosing an appropriate way to fix the pinned electrons, we explored
the transition patterns of the vortex number in magnetic fields for
the lowest states with different spins. We found that the vortex
number increases monotonically when the field is not very strong,
and that states with certain angular momenta are absent from the
transition sequences. When the field becomes strong, the absences
disappear and the monotonicity need not be preserved. By examining
the behaviors of the vortices for different ranges of the
interaction between electrons, we identified two kinds of vortices
in the quantum dot, which are respectively analogous to the vortices
of the electrons and of the reversed-spin quasi-particles in the
fractional quantum Hall system. The quasi-particle-like vortices do
not approach the positions of the electrons when the interaction is
screened and do not help reduce the short-range interaction. States
with more such separated vortices are therefore unfavorable in
energy when the field is not very strong and may be absent from the
transition sequences. These results imply that understanding the
vortex structures and their formation with the spin degree of
freedom is important for the study and control of spin-relevant
electronic states in nanostructures in magnetic fields.
\begin{acknowledgments}
Financial support from NSF China (Grant No. 10574077), the ``863''
Programme of China (No. 2006AA03Z0404), and the ``973'' Programme of
China (No. 2005CB623606) is gratefully acknowledged.
\end{acknowledgments}
\section{Introduction}
During the 2016 U.S. presidential primary election season, the political debate
on Twitter about the four presidential candidates Hillary Clinton, Ted Cruz,
Bernie Sanders, and Donald Trump was particularly lively and created a huge
corpus of data. It has been argued that Twitter can be considered a valid
indicator of political opinion~\cite{TumasjanSpSaWe10}, and so various parties,
including journalists, campaign managers, politicians, and social scientists,
are interested in using automated natural language processing tools to mine this
corpus.
Unsupervised learning methods have been used previously to analyze a similar
corpus of 77 million tweets about the 2012 U.S. presidential election and to
create summary statistics such as ``Twitter users mentioned foreign affairs in
connection with Obama more than with Romney'' \cite{GuoVaPaDiIs16}. Supervised
learning methods have also been used, for example, to analyze filtered
``snippets'' of political blogs~\cite{HsuehMeSi09}. However, creating {\em
accurate} learning methods to analyze positive or negative sentiments is
challenging. Political opinions expressed on the internet often contain sarcasm
and mockery~\cite{GuoVaPaDiIs16,HsuehMeSi09}, which are difficult to discern by
machine or human computation~\cite{GonzalezMuWa11,YoungSo12}.
Crowdsourcing has been proposed to collect training data for predictive models
used to classify political sentiments~\cite{HsuehMeSi09,WangCaKaBaNa12}. Out of
concern for the accuracy of human annotation, it is standard practice to collect
multiple labels for the same data point and then use the label that obtained a
majority vote~\cite{KargerOhSh13}. Typically an odd number of crowd workers,
e.g., five or seven, is chosen to create this redundancy. Redundancy, however,
cannot guarantee reliability, i.e., agreement among the raters with each other
about the sentiment present in the text in question. For example, when five
crowd workers analyzed the sentiments expressed in the political snippets
dataset~\cite{HsuehMeSi09}, only a 47\% agreement rate on the three labels
``positive,'' ``negative,'' or ``neutral sentiment'' could be achieved.
Hsueh et al.~\citeyear{HsuehMeSi09} noted that ``not all snippets [of
political blogs] are equally easy to annotate.'' We made the same observation
for our data: sarcastic Twitter messages are more difficult to label. We
therefore propose to allocate crowd resources according to the predicted
difficulty level: the more difficult the sentiment analysis is expected to be,
the more workers our model assigns. By allocating fewer crowd workers to tasks
that are predicted to be easy, we aim to balance the goals of labeling
accuracy and efficiency.
The literature describes techniques for optimal trade-offs between accuracy and
redundancy in crowdsourcing~\cite{KargerOhSh13,TranVeRoJe13}. In these
works, the proposed crowdsourcing mechanism uses a fixed number of crowd workers
per task, and the assignment is agnostic about the latent difficulty level of
each task. If the difficulty of a task can be discerned, easy tasks could be
routed to novice workers and difficult tasks to expert
annotators~\cite{KolobovMaWe13}. Optimal task routing, however, is an NP-hard
problem, and so online schemes for task-to-worker assignments have been
proposed~\cite{BraggKoMaWe14,RajpalGoMa15}. Our work falls into this category
of online crowdsourcing methodology.
\noindent
Our contributions are as follows:
\begin{itemize}
\item We propose a decision-tree approach for {\em dynamically} determining the
number of crowd workers for tasks that require redundant annotations.
\item We provide two versions of this approach: The {\em offline} version
computes the number of workers needed based on the content of the data they
are asked to analyze. The {\em online} version relies on iterative rounds of
crowdsourcing and determines the number based on content and annotation
results in previous rounds.
\item To illustrate and evaluate our approach, we conducted a crowdsourcing
experiment with a dataset of 1,000 tweets that were sent during the 2016
primary election season. We collected 5,075 ratings of the sentiment towards
presidential candidates Clinton, Cruz, Sanders, and Trump in these tweets and
evaluated their accuracy with respect to a gold standard established by
experts in political communication.
\item Comparisons with traditional crowdsourcing strategies show that the
proposed offline and online selection methods intelligently detect ambiguities
in sentiment analysis and recruit more workers to resolve those. We show that
a large portion of the crowdsourcing budget can be saved at a small loss of
accuracy.
\end{itemize}
\section{Method}
We here describe our method to solve the problem of dynamically assigning
crowd workers to analyze the sentiment of political tweets. Our approach
consists of three main components. First, we designed a method to detect
sarcasm in tweets (Section~\ref{sec:sarcasm}). This first step was important
because sarcasm is one of the most confusing and misleading language features to
classify even for a human annotator, especially when a single out-of-context
tweet is being analyzed.
We then constructed a decision tree that assigns to each tweet a fixed number of
crowd workers based on the presidential candidates mentioned in the tweet and
other text properties, in particular, its sarcasm
(Section~\ref{sec:decision-tree}). In designing such a tree, we were motivated
by the following insight: For tweets which are expected to be clear and
straight-forward to analyze, fewer annotators would be required than for tweets
that are sarcastic and complicated. To build the tree, we estimated how
troublesome it would be for a crowd worker to correctly understand what kind of
sentiment is being expressed towards the candidates.
The third component of our approach moves from an offline to an online
determination of how many crowd workers to involve in the labeling process
(Section~\ref{sec:dynamic-allocation}). Based on the inter-rater agreement
between the labels obtained in the first phase of an iterative crowdsourcing
process, our method determines, for tweets which proved challenging to
annotate, how many additional labels to acquire in one or more subsequent
crowdsourcing phases.
Our final methodological contribution is a description of the equivalence
between two crowdsourcing schemes: the traditional 5-worker-per-task scheme and
the dynamic scheme that assigns 3 workers per task in the first round and 2
additional workers in a second round if disagreement is encountered in the
first round. This is a general result about offline versus online
crowdsourcing schemes. It holds for any application and is therefore presented
in Section~\ref{sec:off-vs-on}, separate from the results of our sentiment
analysis of political tweets.
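This equivalence is easy to check in simulation. The sketch below assumes a hypothetical model of independent workers who each give the correct label with a fixed probability; it is illustrative, not the paper's experimental setup.

```python
import random

def majority(votes):
    """Majority label among an odd number of votes."""
    return max(set(votes), key=votes.count)

def simulate(p_correct=0.8, n_tasks=10000, seed=0):
    """Fixed 5-worker scheme vs. dynamic scheme (3 workers, +2 on
    disagreement). Each worker independently gives the true label
    with probability p_correct; True stands for the correct label."""
    rng = random.Random(seed)
    cost5 = cost_dyn = correct5 = correct_dyn = 0
    for _ in range(n_tasks):
        votes = [rng.random() < p_correct for _ in range(5)]
        # Fixed scheme: always pay for 5 labels.
        cost5 += 5
        correct5 += majority(votes)
        # Dynamic scheme: 3 labels, then 2 more only on disagreement.
        first3 = votes[:3]
        if all(first3) or not any(first3):   # unanimous -> stop early
            cost_dyn += 3
            correct_dyn += majority(first3)
        else:
            cost_dyn += 5
            correct_dyn += majority(votes)
    return cost5, correct5, cost_dyn, correct_dyn

c5, a5, cd, ad = simulate()
print(f"fixed-5 : cost {c5}, correct {a5}")
print(f"dynamic : cost {cd}, correct {ad}")
```

A unanimous first round of three already constitutes a majority of five, so the two schemes can never return different labels; the saving comes entirely from the tasks that stop after three votes.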
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{diagram.png}
\caption{The Static Decision Tree (SDT) model used to determine the number of
crowd workers (leaves) to engage in analyzing tweets about four presidential
election candidates. The intensity of the leaf shading visualizes costs,
e.g. pale green corresponds to low costs. The sarcasm score is computed
according to Eq.~\ref{eq:sarcasm-score}. Experimental results are shown
under each leaf as the number of tweets processed (red).
}
\label{figure:decision-tree-diagram}
\end{figure*}
\subsection{Sarcasm Detection}
\label{sec:sarcasm}
Our first step was to predict whether a given tweet is sarcastic or not.
We used a Bayesian approach to estimate the likelihood of sarcasm based on
training data provided by domain experts. Our training data contain the label
``sarcasm present'' or ``sarcasm not present'' for 800 tweets about the four
presidential candidates Clinton, Cruz, Sanders, and Trump.
We looked for general features that are usually clues for the presence of
sarcasm in a sentence~\cite{GonzalezMuWa11,DavidovTsRa10} and grouped them into 7 categories:
\begin{enumerate}
\item Quotes: People often copy a candidate's words to make fun of them.
\item Question marks, exclamation or suspension points.
\item All capital letters: Tweeters sometimes highlight sarcasm by writing words
or whole sentences with all-capital letters.
\item Emoticons like ':)', ':('
\item Words expressing a laugh, or other texting lingo, such as 'ahah,' 'lol,'
'rofl,' 'OMG,' 'eww,' etc.
\item The words 'yet' and 'sudden.'
\item Comparisons: Many tweeters use comparisons to make fun of a candidate,
using words such as 'like' and 'would'.
\end{enumerate}
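A sketch of the feature scan is shown below. The regular expressions are illustrative stand-ins; the paper does not specify the exact lexicons, so every pattern here is an assumption.

```python
import re

# One illustrative pattern per clue category; all are assumptions.
PATTERNS = [
    r'"[^"]+"',                              # 1. quoted material
    r"[?!]|\.\.\.",                          # 2. ?, ! or suspension points
    r"\b[A-Z]{3,}\b",                        # 3. all-capital words
    r"[:;]-?[)(DPp]",                        # 4. emoticons such as :) and :(
    r"(?i)\b(?:a?haha?|lol|rofl|omg|eww)\b", # 5. laughter / texting lingo
    r"(?i)\b(?:yet|sudden)\b",               # 6. 'yet' and 'sudden'
    r"(?i)\b(?:like|would)\b",               # 7. comparison words
]

def feature_vector(tweet):
    """Boolean 7-component feature vector f for a tweet."""
    return [1 if re.search(p, tweet) else 0 for p in PATTERNS]

print(feature_vector('So "presidential"... LOL, like that would EVER work!'))
```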
The sarcasm detecting algorithm that we designed scans the tweet text for those
features and returns the list of sarcastic clues. The clues are represented by
a 7-component feature vector $f$ that contains a Boolean value for each of the
categories listed above -- ``1'' indicates ``presence'' of the feature, ``0''
otherwise.
Given a tweet $t$ and its feature vector $f$, our method computes the
probability that the tweet~$t$ contains sarcasm by using Bayes rule:
\begin{eqnarray}
P(t \: {\rm is \: sarcastic} \,|\, f_n)
= \frac{P(f_n \,|\, t \: {\rm is \: sarcastic})\: P(t \: {\rm is \: sarcastic})}{P(f_n)} \nonumber \\
= \frac{\# {\rm \: of \: sarcastic \: tweets \: with} \: f_n}{\# {\rm \: of \: tweets \: with \: feature} \: f_n}.
\end{eqnarray}
To weigh the presence of the $n$-th feature in sarcastic tweets appropriately,
our method computes a weight vector $w$ whose $n$-th component is the
probability of sarcasm given feature $f_n$, normalized over all seven
features:
\begin{equation}
w_n = \frac{P(t \: {\rm is \: sarcastic}|f_n)}{\sum_{n=1}^7 P(t \: {\rm
is \: sarcastic}|\, f_n)}.
\end{equation}
Our sarcasm score for each tweet is then defined to be the dot product
\begin{equation}
w^T \!\! f
\label{eq:sarcasm-score}
\end{equation}
of the weight and feature vectors.
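The weight computation and score can be summarized as follows. The counting interface is an assumed sketch (the training set is the 800 expert-labeled tweets); the function name is ours:

```python
# Sketch of the sarcasm score of Eq. (sarcasm-score): per-feature conditional
# probabilities P(sarcastic | f_n) are estimated by counting on the labeled
# training tweets, normalized into weights w, and the score is w^T f.
def sarcasm_score(f, sarcastic_with_feature, total_with_feature):
    # P(t is sarcastic | f_n), estimated by counting as in the Bayes-rule formula
    p = [s / t for s, t in zip(sarcastic_with_feature, total_with_feature)]
    z = sum(p)                                  # normalizer over the 7 features
    w = [p_n / z for p_n in p]                  # weight vector, sums to 1
    return sum(w_n * f_n for w_n, f_n in zip(w, f))
```

With equal per-feature probabilities the weights are uniform, so a tweet exhibiting two of the seven clues scores $2/7$.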
\subsection{Decision Tree}
\label{sec:decision-tree}
The decision tree we designed maps a tweet to a number of crowd workers that
will be asked to label the tweet. To gain insight into the properties of a
tweet that could cause a crowd worker to struggle in sentiment classification
and warrant additional crowd work, we obtained gold standard data and conducted
a formative crowdsourcing study.
\subsubsection{Expert Labels}
We used 1,000 tweets about the four presidential candidates Clinton, Cruz,
Sanders, and Trump. For these tweets, we had gold standard labels about two
categories, provided by experts in political communication. The first category
was whether each of the four candidates was mentioned in the tweet. The second
category described whether the tweet was in general ``positive,'' ``neutral'' or
``negative'' about each candidate mentioned in the tweet. If more than one
candidate was mentioned in a tweet, the sentiment towards each candidate was
labeled.
\subsubsection{Formative Crowdsourcing Experiment}
We asked 5 crowd workers to analyze each tweet, calling our experiment the
{\em ``Trad 5 baseline''} (the details on the crowdsourcing methodology are given
in Section~\ref{exp-methodology}). We asked the workers to indicate who among
the four candidates Sanders, Trump, Clinton and Cruz was mentioned and the
attitude that the tweeter expressed towards them on a three-point scale
``positive,'' ``neutral,'' or ``negative.''
\subsubsection{Decision Tree Design}
We designed our decision tree (see Fig.~\ref{figure:decision-tree-diagram})
based on the properties we observed that influence the accuracy with which a
worker interprets the sentiment of the tweet. The first branching of the tree
accounts for whether one or more candidates are mentioned in the tweet text, the
most relevant factor in its sentiment analysis. Tweets in which several
candidates are mentioned are more difficult to classify because annotators can
become confused by the different attitudes that the writer expresses towards
each of the candidates or by the presence of comparisons between them. We here
provide three examples:
\noindent
Tweet 1
\begin{quote}
{\em @BecketAdams @JPTruss @GayPatriot except Cruz now realises Trump's power and is
debating him. Rubio is still hiding from Trump on stage}
\end{quote}
is ``positive'' towards Trump and ``neutral'' towards Cruz, according to expert opinion.
Four crowd workers agreed that the message was ``neutral'' towards both candidates,
and one labeled it ``positive'' towards Trump and ``neutral'' towards Cruz.
\noindent Tweet 2
\begin{quote}
{\em Bernie's Super PAC Hypocrisy: Twice as Much Outside Money Spent Supporting
Sanders as Promoting Clinton https://t.co/RVAi7X4shS}
\end{quote}
is ``positive'' towards Clinton and ``negative'' towards Sanders, according to
expert opinion. All five crowd workers agreed with each other, but not fully with
the expert labels -- they selected a negative sentiment towards Sanders and a neutral one for Clinton.
\noindent Tweet 3
\begin{quote}
{\em Has Trump mentioned that he doesn't think Cruz is eligible to be President
recently? That seemed like a go-to for him}
\end{quote}
misled annotators both because sarcasm is present and because two candidates are
mentioned. As a consequence, only 3 workers out of 5 agreed on a negative overall
feeling towards both candidates.
The next branching accounts for whether Clinton or Trump was mentioned in the
tweet. Opinions towards these two candidates are usually more challenging to
interpret, as tweeters express very disparate and often unclear attitudes towards them.
The next layer of the decision tree accounts for the length of the tweet and the
presence of a link. We consider a tweet short if it contains fewer than 10
proper words. Tweets that contain a webpage address are not always fully
understandable by themselves as they refer to the content of the link or they
are a response to another tweet, and therefore their context is not always
clear.
Finally, the terminating decision layer in the tree is based on the sarcastic
score that was produced by the sarcasm predictor. The decision tree uses the
sarcasm score as defined in Eq.~\ref{eq:sarcasm-score} to determine the
likelihood of sarcasm in the particular tweet.
We assigned a fixed number of crowd workers to each leaf of the tree, which
specifies the number of annotations needed for a particular tweet. In this first
model we grouped the tweets into 4 categories (very easy, easy, medium and hard)
and assigned 2, 3, 5, or 7 workers to them respectively. We call the model
``Static Decision Tree'' (SDT) because the number of crowd workers
depends only on the content analysis of the tweet (and not dynamically on the
workers' labels, as described below). With this tree, the number of crowd
workers to be queried for each tweet can be computed {\em offline} -- in advance
of any crowdsourcing experiment (i.e., the numbers shown in
Fig.~\ref{figure:decision-tree-diagram} with a green-shaded background).
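The offline allocation can be sketched as a nested-conditional function. The mapping from difficulty class to worker count (2/3/5/7) follows the text above, but the branch combination logic and the 0.5 sarcasm threshold here are illustrative assumptions, simplified from the full tree of the figure:

```python
# Simplified sketch of the Static Decision Tree (SDT). Function names,
# branch combinations, and the sarcasm threshold are our own assumptions.
WORKERS = {"very easy": 2, "easy": 3, "medium": 5, "hard": 7}

def sdt_difficulty(n_candidates, mentions_clinton_or_trump,
                   short_or_has_link, sarcasm_score, threshold=0.5):
    if n_candidates > 1:                      # multiple candidates: hardest cases
        return "hard" if sarcasm_score >= threshold else "medium"
    if mentions_clinton_or_trump or short_or_has_link:
        return "medium" if sarcasm_score >= threshold else "easy"
    return "easy" if sarcasm_score >= threshold else "very easy"

def sdt_workers(*args, **kwargs):
    return WORKERS[sdt_difficulty(*args, **kwargs)]
```

Because the decision depends only on the tweet text, this function can be evaluated offline for the whole dataset before any crowd work is requested.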
\subsection{Dynamic Worker Assignment}
\label{sec:dynamic-allocation}
We here propose an {\em online} scheme for determining the number of crowd
workers to be queried for each tweet. This approach cannot be computed in
advance of the crowdsourcing experiment; it is an iterative method that relies
on the results of the crowd work.
Our idea is to request a low number of workers to provide the sentiment analysis
of each tweet in a first round of crowdsourcing, and then perform one or more
rounds of crowdsourcing for the tweets for which workers disagreed. In this
way, the difficulty of the tweet is observed directly as a measure of
disagreement in the first round of crowdsourcing, and we do not risk wasting
effort on tweets that are trivial to classify. To evaluate our approach, we
designed two instantiations of our idea involving two rounds of crowdsourcing:
\subsubsection{Dynamic Decision Tree 1 (DDT1)}
The first dynamic tree assigns
2 workers to the 'very easy' and 'easy' difficulty classes, 3 for 'medium' and 5
for 'hard.' If the 2 workers disagree on classifying a 'very easy' or 'easy'
tweet, we conduct a second round of crowdsourcing on that tweet so that we can
get a majority vote. If some annotators disagree for a 'medium'-class tweet, 2
more workers are involved. The number of workers for 'hard' tweets stays fixed.
\subsubsection{Dynamic Decision Tree 2 (DDT2)} Finally, we pushed the dynamic
assignment design even further and set up a tree that starts with a very small
number of annotators in order to minimize the number of crowdsourced tasks. This
tree initially assigns 2 workers to the 'very easy' and 'easy' classes and
requires 3 more annotators if the initial workers disagree. The tweets in the
'medium' and 'hard' categories were first only analyzed by 3 workers, and this number
is increased by 2 workers if at least one disagreement is observed.
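Both dynamic trees share the same two-round mechanism, which can be sketched as follows. The label-collection callable and function names are our own abstractions:

```python
from collections import Counter

# Two-round dynamic allocation shared by DDT1 and DDT2: collect an initial
# batch of labels; on any disagreement, request `extra` more, then take the
# majority label. `get_label` abstracts querying one crowd worker.
def dynamic_labels(get_label, first_round, extra):
    labels = [get_label() for _ in range(first_round)]
    if len(set(labels)) > 1:                  # disagreement triggers round two
        labels += [get_label() for _ in range(extra)]
    majority = Counter(labels).most_common(1)[0][0]
    return majority, len(labels)              # label and number of tasks used
```

Under DDT1 an 'easy' tweet would use `first_round=2`; under DDT2 a 'medium' tweet would use `first_round=3, extra=2`.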
\subsection{Equivalency of Traditional Static versus Proposed Dynamic Worker Allocation}
\label{sec:off-vs-on}
Past work showed that the probability $p$ that a crowd worker $w$ correctly
performs a task $t$ according to a gold standard label can be described as a
function $p(t,w)$ of the task difficulty and the worker skill~\cite{HoVa12}.
For simplicity of our analysis, we omit the dependence on the worker.
For a generic task, we can compute the probability~$P_M$ that the gold standard
is successfully obtained by majority voting for a set of crowdsourcing baseline
schemes as a function of $p$. For example, the probability $P_M$ that the
traditional 3-worker-per-task crowdsourcing scheme yields the correct results is
the probability that at least 2 out of 3 performed the task correctly,
which is
\begin{eqnarray}
P_M = \sum_{i=2}^3 P(i {\rm \: workers \: are \: correct}) \nonumber\\
= \sum_{i=2}^3 \binom{3}{i}p^i(1-p)^{(3-i)} = p^2[3(1-p) + p].
\end{eqnarray}
Similarly, with the traditional 5-worker-per-task crowdsourcing scheme, we
attain $P_M=$
\begin{eqnarray}
\label{eq:prob-5}
\sum_{i=3}^5 \! P(i \, {\rm workers \: are \: correct})
\!=
\! \sum_{i=3}^5 \! \binom{5}{i}p^i(1\!\!-\!\!p)^{(5-i)} \nonumber \\
= \: p^3[10(1-p)^2+5p(1-p)+p^2].
\end{eqnarray}
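The two closed forms above can be checked numerically against the general majority-vote sum (a sketch; the function name is ours):

```python
from math import comb

# Probability that a strict majority of n independent workers, each correct
# with probability p, produces the gold label; numeric check of the closed
# forms derived above for n = 3 and n = 5.
def majority_prob(n: int, p: float) -> float:
    k = n // 2 + 1                            # smallest winning majority
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.8
assert abs(majority_prob(3, p) - p**2 * (3*(1 - p) + p)) < 1e-12
assert abs(majority_prob(5, p) - p**3 * (10*(1 - p)**2 + 5*p*(1 - p) + p**2)) < 1e-12
```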
Next we simulate the dynamic assignment of workers with 3 initial workers,
where 2 more workers are involved if disagreement is encountered. The
probability that this model produces the correct result by majority voting is
the sum of three probabilities: (1) the probability that the three initial
workers agree on the correct result, (2) the probability that one initial worker
performs the task incorrectly and at least one new worker correctly, and (3) the
probability that only one initial worker performs the task correctly and both
the new workers follow up correctly:
\begin{eqnarray}
\label{eq:prob-dyn}
\binom{3}{3}p^3+\left[\binom{3}{2}p^2(1-p)\right](1-(1-p)^2) \nonumber \\
+\left[\binom{3}{1}p(1-p)^2\right]p^2 = & \nonumber\\
p^3[1+3(1-p)(2-p) + 3(1-p)^2] =& \nonumber\\
p^3[10(1-p)^2+5p(1-p)+p^2].
\end{eqnarray}
The derivations in Eqs.~\ref{eq:prob-5} and~\ref{eq:prob-dyn} result in the same
formula. We can therefore infer that a dynamic 3(+2) allocation method for
workers achieves the same prediction accuracy as the traditional 5-worker
crowdsourcing scheme. As we will describe in more detail below, by running
such a model on all tweets in our dataset we were able to obtain optimal results
from crowdsourcing with only 4,058 tasks. This result is notable because it
shows that we can reach exactly the same accuracy level and save 18.84\% of our
budget simply by running two ``smart'' rounds of crowdsourcing.
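The algebraic identity between Eqs.~\ref{eq:prob-5} and~\ref{eq:prob-dyn} can also be confirmed numerically (a sketch; function names are ours):

```python
from math import comb, isclose

# Numeric confirmation that the dynamic 3(+2) scheme and the static 5-worker
# majority give identical accuracy for every worker-correctness probability p.
def p_trad5(p):
    return sum(comb(5, i) * p**i * (1 - p)**(5 - i) for i in range(3, 6))

def p_dyn_3_plus_2(p):
    agree = p**3                                        # all 3 initial workers correct
    one_wrong = 3 * p**2 * (1 - p) * (1 - (1 - p)**2)   # >= 1 of the 2 extras correct
    two_wrong = 3 * p * (1 - p)**2 * p**2               # both extras correct
    return agree + one_wrong + two_wrong

assert all(isclose(p_trad5(k / 100), p_dyn_3_plus_2(k / 100)) for k in range(101))
```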
\section{Experimental Methodology}
\label{exp-methodology}
Our data consists of 1,000 tweets about the four presidential candidates
Clinton, Cruz, Sanders, and Trump sent during the primary election season in
February 2016. We selected these candidates because, at the time of data
collection, they were the two leading candidates in the polls from each major
U.S. political party (Republican and Democratic). The data were collected by
using the Crimson Hexagon ForSight social media analytics platform
(http://www.crimsonhexagon.com/platform).
The tweets were labeled by two domain experts with a background
in political communication in a two-phase process. In the first phase, the experts
determined the sentiment towards each candidate mentioned in each tweet
independently. In the second phase, they came to a consensus on the tweets that
they had initially disagreed on.
For our crowdsourcing experiments, we used the Amazon Mechanical Turk (AMT)
Internet marketplace to recruit workers. We accepted all workers from the U.S.
who had previously completed 100 HITs and maintained at least a 92\% approval
rating. We paid each worker \$0.05 per completed task. We conducted two
crowdsourcing studies, a formative and a summative study, involving 200 and 800
tweets respectively.
{\bf Formative Study.} We gave the following instruction before presenting
every tweet:
\begin{quote} {\em Carefully read through each tweet and decide the author's attitude
toward each mentioned presidential candidate (support, neutral, or against).}
\end{quote}
We verified that short tweets (fewer than 10 proper words) were very difficult
to tag. Tweets with links to an external page were also difficult to analyze.
It is likely that the sentiment of the tweet heavily relies on the content of
the referenced webpage. Workers may have tried to follow the link or may
have selected a random sentiment instead of following the link. In our
instructions for our summative study, we therefore specifically asked the crowd
workers not to click on any external link for completing the task. We also
adjusted the label for positive and negative sentiments towards a candidate.
{\bf Summative Study.} We updated the instructions as follows:
\begin{quote} {\em Read through the tweet and answer the following questions. Do
NOT click on any links.
Read the tweet and decide whether the candidate was mentioned at all or
not. Note that the reference of Twitter user names (e.g.,
@realDonaldTrump, @HillaryClinton) or hashtags (e.g.,\#Trump2016,
\#HillaryClinton2016) is also counted as a mention.
Express which sentiment was manifested by the writer towards them:
positive, neutral, or negative.}
\end{quote}
We collected ratings from a traditional crowdsourcing scheme that involves 5
independent workers per tweet. We call this the ``Trad 5'' baseline. For 15
tweets that were deemed 'hard' to analyze by our decision tree and thus required
the ratings from 7 workers, we needed to collect additional ratings. Instead of
simply collecting two more, we asked for 5 additional ratings per tweet from
which we could then draw additional samples randomly for analysis. This resulted
in a total of 5,075 tasks.
To simulate a crowdsourcing experiment that employs a fixed number of three
crowd workers per tweet (our traditional Trad~3 baseline), we randomly sample
the results produced by 5 crowd workers. To simulate the crowdsourcing
experiments that use the decision tree we designed (SDT, DDT1, DDT2), we
similarly use random samples from our Trad~5 baseline. To obtain the results of
our decision trees, we averaged the collected metrics over 5 different model
runs to attenuate potential noise generated by the randomness in selecting crowd
workers.
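This subsampling protocol can be sketched as follows. The data layout (one list of five collected ratings per tweet) is an assumed simplification, and raw agreement is used here instead of the Kappa score we actually report:

```python
import random
from collections import Counter

# Sketch of the evaluation protocol: smaller schemes are simulated by randomly
# subsampling k of the 5 collected ratings per tweet; the majority label is
# compared to the gold label, and the metric is averaged over several runs.
def subsampled_accuracy(ratings_per_tweet, gold, k, runs=5, seed=0):
    rng = random.Random(seed)
    accuracies = []
    for _ in range(runs):
        correct = 0
        for ratings, g in zip(ratings_per_tweet, gold):
            majority = Counter(rng.sample(ratings, k)).most_common(1)[0][0]
            correct += (majority == g)
        accuracies.append(correct / len(gold))
    return sum(accuracies) / runs
```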
\subsubsection{Evaluation Measures}
We use two metrics for evaluating our work. They are meaningful for
understanding the trade-off between accuracy and budget concerns, which is the
focus of our work.
\begin{itemize}
\item {\bf Number of crowd worker tasks:} This is the total number of Human
Intelligence Tasks requested by our decision tree model. The number provides
an indication of the budget needs of a crowd experiment. To find the monetary
costs of crowdsourcing, we can multiply this number by the price per task (we
used \$0.05/task).
\item {\bf Accuracy of the labeling:} The accuracy of the crowdsourced sentiment
analysis can be determined by how much agreement exists between the majority
crowdsourced opinion and the gold standard opinion provided by experts. Our
main measure of accuracy is Cohen's Kappa score $\kappa$ for measuring
inter-rater reliability (IRR). Cohen's Kappa score accounts for the
possibility that raters are guessing and that agreement is thus obtained by chance.
\end{itemize}
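Cohen's Kappa for two label sequences can be computed as follows (a standard formulation of the statistic, not our exact evaluation script):

```python
# Cohen's Kappa between two label sequences (e.g., crowd-majority vs. expert):
# observed agreement p_o corrected by the chance-agreement rate p_e.
def cohens_kappa(a, b):
    labels = sorted(set(a) | set(b))
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)
```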
\section{Results}
\subsubsection{Sarcasm Detection}
Our experiments showed that the clues we used for sarcasm detection are very
diverse, and were used in different ways according to the topic of the tweet.
We found that smileys were not used at all, while the most meaningful element
for sarcasm detection was the presence of expressions like 'lol', 'hahaha,' for
example, in the following tweet:
\begin{quote} {\em RT @rickygervais If Trump was a teacher he'd be fired for
publicly saying the things he says. Luckily he isn't a teacher. Just the
next president. Hahaha.}
\end{quote}
The presence of sarcasm was indeed a factor that increased the difficulty of
tweet classification: in our dataset, sarcastic tweets had a 71.2\%
inter-rater agreement. This metric increased to 78.3\% for
non-sarcastic tweets.
It turned out that the presence of sarcasm was not as ubiquitous as we had
expected, as only 73 messages out of 800 were judged to be sarcastic by the
domain experts, and a surprising 68.5\% of them concern Donald Trump (see
Table~\ref{table:sarcasm}). The last row of the table shows that even after
normalizing by the number of tweets that mentioned each candidate, Donald
Trump still leads, with 12\% of the tweets mentioning him being sarcastic.
Regarding the sentiment that is usually associated with sarcasm, the last column
of the table shows that sarcasm is usually associated with a negative feeling
towards a candidate. In fact, this language feature is usually employed to make
fun of a candidate and to criticize his statements or actions.
\begin{table}
\small
\centering
\begin{tabular}{l||c|c|c|c|r}
& \bf Clinton & \bf Cruz & \bf Sanders & \bf Trump & \\
\hline \hline
Positive & 0 & 0 & 2 & 3 & 5 \\
Neutral & 5 & 6 & 4 & 15 & 30 \\
Negative & 6 & 4 & 6 & 32 & 48 \\
\hline
& 11 & 10 & 12 & 50 & \\
Sarcastic & 5.9\% & 6.2\% & 6.8\% & 12.0\% &
\end{tabular}
\caption{
These results show the number of sarcastic tweets
addressed to each candidate and the sentiment that they showed according to the gold
standard provided by experts in political communication.
In the dataset of 800 tweets, 73 tweets were sarcastic.
The last row shows the ratio of
sarcastic tweets over the total tweets in which each candidate was mentioned.}
\label{table:sarcasm}
\end{table}
\subsubsection{Differences Based on Specific Candidates}
As expected, we found that which presidential candidate was mentioned in a tweet
had an impact on how difficult it was to discern the tweeter's opinion about the
candidate. The sentiments that tweeters expressed towards Hillary Clinton and
Donald Trump were often unclear or veiled by sarcasm. To illustrate this point
qualitatively, we give an example tweet about Trump that confused the crowd
workers:
\begin{quote} {\em I was watching the Texas gop debate on snapchat lol and this
is the only state where I've seen people actually rally against trump YOUNG
PPL.}
\end{quote}
One crowd worker labeled the tweet to show ``a positive attitude,'' 2 crowd
workers labeled it as ``neutral'' and the remaining 2 agreed on a ``negative''
sentiment towards the candidate. In this case, it is impossible to determine a
result by majority vote, and a final label can be assigned by a reasonable
random choice. We here chose randomly between ``neutral'' and ``negative.''
To illustrate the issue quantitatively, we here provide the inter-rater
reliability values among 5 crowd workers of our formative study when classifying
sentiments towards each candidate and report both the relative observed
agreement among crowd workers and Cohen's Kappa score $\kappa$: \vspace{0.2cm}
\begin{tabular}{lcr}
Candidate & Agreement & Kappa IRR\\
Bernie Sanders & 83.05\% & $\kappa$ = 0.74 \\
Ted Cruz & 87.78\% & $\kappa$ = 0.78 \\
Hillary Clinton & 63.41\% & $\kappa$ = 0.41 \\
Donald Trump & 78.13\% & $\kappa$ = 0.66\\
\end{tabular}
\vspace{0.2cm}
It is evident from the above numbers that annotators disagreed much more often
when Clinton or Trump were mentioned. For our summative study, we therefore
designed an offline model that accounts for this observation and involves more
workers to label tweets that mention these two candidates.
\begin{table}
\small
\centering
\begin{tabular}{l||l|l||r|r|r}
& \bf Trad 3 & \bf Trad 5 & \bf SDT & \bf DDT1 & \bf DDT2 \\
\hline
Efficiency & 3,000 & 5,000 & 3,907 & 3,206 & 3,608\\
\hspace*{0.2cm} Imprv. & & & 22\% & 36\% & 28\% \\
Accuracy & 0.612 & 0.653 & 0.624 & 0.630 & 0.643\\
\hspace*{0.2cm} Loss & & & 4.4 pp & 3.5 pp & 1.0 pp\\
\hline
\end{tabular}
\caption{
Comparison of results of five methods with
respect to their efficiency and accuracy. The number of crowd workers
engaged (i.e., efficiency or costs) and the accuracy of their sentiment
labeling (Cohen's Kappa IRR rate) compared to the gold standard established by experts are given
for each method. For the first two methods, each
tweet is analyzed by the same fixed number of crowd workers, i.e., 3
crowd workers (Trad 3) or 5 crowd workers (Trad 5). For the methods that use the
decision tree (DT), the number of crowd workers engaged depends on the
content of the tweet and results in significant improvements (Imprv.) in
efficiency with respect to the 5
crowd-worker models (row 2), without much loss of accuracy (row 4, given in
percent points, pp).
\label{table:results}
}
\end{table}
\subsubsection{Results for Traditional Fixed-Allocation Model}
The first two models that we considered use a single crowdsourcing round with
the same number of workers for every tweet. With a total of 3 annotators we
requested 3,000 ratings and achieved a 0.612 Kappa value (see
Table~\ref{table:results}). Increasing the number of crowd workers by 2
requires 5,000 tasks and yields a 0.653 reliability measure. These results
align with previous observations that the task of sentiment analysis is
challenging even for human annotators~\cite{YoungSo12,TumasjanSpSaWe10}. Despite
the significantly higher costs of requesting 2,000 additional labels from crowd
workers (40\% of the Trad~5 budget), the average agreement between the majority of crowd
contributions and expert labels improved by only 6.3 percent (or, equivalently,
by a difference of Kappa values of 4.1 percent points).
\subsubsection{Results for the Proposed Static Decision Tree}
For the static decision tree (SDT), 3,907 labels were requested, on average, and
an IRR score of 0.624 was obtained. The numbers of tweets routed to each
leaf by the text analysis and decision rules of the tree are shown in red
in Figure~\ref{figure:decision-tree-diagram}. With this static decision tree,
22\% of the budget would be saved with respect to the traditional
5-worker-per-task model (Trad 5). The loss in accuracy is 4.4 percent points.
\subsubsection{Results for the Proposed Dynamic Decision Trees}
The first dynamic tree (DDT1) showed a meaningful improvement as it involves
only 3,206 tasks on average and has an IRR score of 0.630. This model costs
36\% less than the fixed one with 5 workers and only 6.9\% more than the model
with 3 annotators, while the gain in accuracy with respect to the latter is
substantial (2.9\%). This model would be preferable in low-budget situations.
The second dynamic tree (DDT2) is a bit more expensive as it requires 3,608
tasks on average, but the Cohen's Kappa IRR rate improves to 0.643. Even
this classifier is much cheaper than the fixed 5-worker scheme, as it saves almost 28\%
of the budget while the accuracy is comparable (the difference between Kappa
scores is only 1 percent point). We propose that this predictor is suitable if
we are willing to spend a bit more in order to achieve a very good performance.
Both dynamic trees produce notably better results than the static decision tree
in both cost and accuracy. This shows that the difficulty of a tweet can be
inferred from the crowdsourcing outcomes themselves, whereas heuristic rules for
determining it are extremely complex and hard to formulate. Correct results can
be obtained by a second round of annotations, set up
accordingly, thus saving a meaningful amount of budget.
\subsubsection{Cost Savings of Dynamic versus Static Worker Assignment}
The traditional 5-worker-per-task allocation model Trad 5 performs exactly the
same as a dynamic model which assigns 3 annotators +2 more if there is
disagreement, as described in Section~\ref{sec:off-vs-on}. This result shows
that our model allows the same accuracy but at a much lower cost. A
visualization of the differences in accuracy and efficiency between traditional
static crowdsourcing schemes and the proposed dynamic schemes is given in
Figure~\ref{figure:prob}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{both-plots.png}
\caption{Performance Analysis -- Accuracy and Costs. Left: The probability
$P_M(p)$ that a given crowdsourcing scheme produces the correct label by
majority vote as a function of the probability that a certain tweet is
labeled correctly by a worker. We compare the performance of four
traditional crowdsourcing baselines (with 1, 3, 5 or 7 crowd workers for
each tweet) and our dynamic prediction models DDT1 and DDT2. For tweets
that are easy to annotate, the accuracy of all methods is similar. When
tweets are more difficult to analyze, and thus, more workers are engaged, the
performance gains in accuracy of the DDT1 and DDT2 models compared to the
traditional models ``Trad~3'' become apparent. The DDT2 model almost
reaches the performance of the baseline ``Trad~5.'' Right: The proposed
dynamic models DDT1 and DDT2 provide large budget savings.}
\label{figure:prob}
\end{figure*}
\subsubsection{Analysis of Crowd Work Properties}
We submitted 5,075 tasks to Mechanical Turk for an overall cost of \$253.75.
The number of MT workers who contributed labels across all tweets was 218, an
average of 23 annotations per worker.
We analyzed how much time workers spent in labeling a single tweet, which is
illustrated in Figure~\ref{figure:time}. Annotators spent an average of 85.1
seconds classifying a single message, but some workers were very meticulous
and used up to 10 minutes to complete a single task. For example, one of the
most thorough annotators labeled 217 tweets with an average of 212
seconds per task, which sums up to almost 13 hours spent on the platform. On the
other hand, other annotators were very quick; for instance, one worker
labeled 42 tweets, spending on average less than 9 seconds per message.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{time.png}
\caption{A distribution of tasks (HITs) as a function of task time, ranging
from 1 to 600 seconds. This distribution was computed over the total 5,075
tasks that were submitted to Amazon Mechanical Turk during our crowdsourcing
experiment.}
\label{figure:time}
\end{figure*}
\subsubsection{Sample Results on Political Tweets}
Analysis of the annotations of our 1,000 tweet dataset provides some fascinating
observations about political opinions. We can report the overall sentiment that
people showed towards candidates, as rated by the crowd workers
(Table~\ref{table:crowdsentiment}) and by the experts in political communication
(Table~\ref{table:expertsentiment}). We found that Trump is the ``most
popular'' candidate to tweet about, considering that more than half of the total
tweets mentioned him, while the other candidates were evenly referred to on
average. Furthermore, it is clear that tweeters who discuss candidates for
presidential elections often express negative feelings and complain about
them, since there are about twice as many negative messages as positive
ones in our entire dataset. The main difference between the crowd worker and
expert annotations was the tendency of the crowd worker to label fewer tweets as
``neutral.''
\begin{table}
\small
\centering
\begin{tabular}{l||c|c|c|c|r}
& \bf Clinton & \bf Cruz & \bf Sanders & \bf Trump & \\
\hline \hline
Positive & 33 & 44 & 78 & 123 & 278 \\
Neutral & 86 & 67 & 104 & 189 & 446 \\
Negative & 99 & 99 & 56 & 201 & 455 \\
\hline
& 218 & 210 & 238 & 513 &
\end{tabular}
\caption{
Number of tweets, out of a total of 800, grouped according to crowd-sourced sentiment label per candidate.
The last row and columns display the sums over the columns and rows respectively
in the table.}
\label{table:crowdsentiment}
\end{table}
\begin{table}
\small
{\centering
\begin{tabular}{l||c|c|c|c|r}
& \bf Clinton & \bf Cruz & \bf Sanders & \bf Trump & \\
\hline \hline
Positive & 25 & 37 & 58 & 90 & 210 \\
Neutral & 109 & 85 & 123 & 208 & 525 \\
Negative & 91 & 89 & 55 & 212 & 447 \\
\hline
& 225 & 211 & 236 & 510 &
\end{tabular}}
\caption{
Number of tweets, out of a total of 800, grouped according to expert-provided
sentiment label per candidate.
The last row and columns display the sums over the columns and rows respectively
in the table.}
\label{table:expertsentiment}
\end{table}
\section{Discussion and Conclusions}
As crowdsourcing becomes more and more popular for large scale information
retrieval, the cost of this human computation is becoming relevant. Example
applications are real-time sentiment analysis to provide fast indications of
changes in public opinion or collection of a sufficiently large training data
for machine learning methods for big data analytics~\cite{WangCaKaBaNa12}.
Investigations such as ours into how to balance the goals of efficiency and
accuracy in crowdsourcing are therefore particularly timely.
A few works have explored dynamic approaches to crowdsourcing that rely on
iterative rounds of crowdsourcing and determine the number of worker assignments
based on content and annotation results in previous
rounds~\cite{BraggKoMaWe14,HoVa12,KolobovMaWe13}. Connections to active and
reactive learning~\cite{YanRoFuDy11,LinMaWe15} have been made. While prior
work involves theoretical analysis and simulation studies, we here provide a
concrete solution to the problem of analyzing the sentiment of political twitter
messages using a dynamic worker allocation framework.
We proposed a dynamic two-round crowdsourcing scheme that we embedded into a
decision tree classifier. Other types of classifiers may be used, and, in
future work, we will explore additional learning methods.
Analysis of political tweets is challenging due to the short text and unknown
context. Sentiment analysis is particularly difficult. Existing off-the-shelf
text analysis systems can only provide a single sentiment label for a given text
automatically. We found that they fail to distinguish the separate sentiments
that were expressed when more than one presidential candidate was mentioned in a
tweet. The presence of sarcasm exacerbated the problem. Our proposed solution
is to design a classifier that early in the analysis makes a decision about the
number of sentiments that must be revealed. Our new dataset may inspire other
researchers to develop text analysis tools that address the difficult problem of
multi-sentiment analysis and sarcasm detection.
Our corpus of 1,000 Twitter messages is unique because
it includes information about (1) the presence/absence of sarcasm and (2) a
label about the specific sentiment for each candidate mentioned in the tweet
(positive, neutral, negative), as determined by consensus of two domain experts.
It is notable that our study involved communication researchers in many aspects
of the research, such as the development and refinement of crowdsourcing task
instructions and the design of the Mechanical Turk interface. The intervention
of domain experts greatly helped improve the validity and performance of our
crowdsourcing method.
Likewise, the proposed approach has the potential to make a significant
contribution to communication research. Traditionally, communication researchers
use manual content analysis, a method that usually relies on two or three human
coders, to analyze text in different media outlets or that of public
opinion~\cite{RiffeLaFi14}. However, the traditional method is tedious, time
consuming, and limited by the nature of human subjectivity. Arguably, the use of
the dynamic online crowdsourcing framework introduced in this study allows
communication researchers to process larger datasets in a more efficient and
reliable manner. Given the results of the study, future research should also
consider cross-disciplinary collaboration to advance theories and methods for
large-scale text analysis.
\section*{Acknowledgments}
The authors would like to thank the Boston University Rafik B. Hariri Institute
for Computing and Computational Science and Engineering for financial support
and the crowd workers for their annotations.
\section{Introduction and main results}
\subsection{Background and motivation}
In this paper we consider a pair of Markov
processes whose Laplace transforms are related by dual representations.
These representations exchange the time arguments of one process and the argument of its Laplace transform on one side of the equation with
the time arguments of the second process and the argument of its Laplace transform on the other side.
Ref. \cite{Bryc-Wang-2017} established such representations for the Laplace transforms of Markov processes which are squares of the radial part of a 3-dimensional Cauchy process \cite[Corollary 1]{kyprianou2020doob},
and the Laplace transforms of Brownian excursion and generalized meanders. Such identities were needed to identify limiting fluctuations for the open ASEP in \cite{Bryc-Wang-2017ASEP}. A different (univariate) example of such a dual representation is
\cite[formula (2)]{Bertoin-Yor2001subordinators}. At this time there is no complete understanding of how such identities may arise in general.
In this paper we establish a dual representation formula for a pair of processes that are related to stationary measures for the open KPZ equation.
We first explain why there is a need for the dual representation that we derive in this paper.
In their seminal study of stationary measures for the KPZ equation, Corwin and Knizel \cite{CorwinKnizel2021} determine the multipoint Laplace transform
of a unique WASEP-stationary measure for the open KPZ equation, i.e., a random function $H:[0,1]\to\r$ described therein. The law of this spatial process $H$
depends on two real parameters $u,v$ that enter the general inhomogeneous Neumann boundary conditions \cite[(1.2)]{CorwinKnizel2021}; in this paper we use $\C=2u$, $\A=2v$ so that in \cite[(1.5)]{CorwinKnizel2021} the parameters are $A={q}^{\A/2}$ and $C={q}^{\C/2}$.
We will treat $H$ as a stochastic process on the finite time interval $[0,1]$. Transcribed into stochastic-process notation, Corwin and Knizel \cite{CorwinKnizel2021} show that for $0=t_0<t_1<\dots<t_d\leq t_{d+1}=1$ and $\A+\C>0$
the multipoint Laplace transform of $H$, evaluated at the increments of a decreasing sequence $s_1>s_2>\dots>s_{d}>s_{d+1}=0$, factors as
\begin{equation}
\label{pre-psi}
\e\left[\exp \left(-\sum_{k=1}^d (s_{k}-s_{k+1})H(t_k)\right)\right] = \exp\left(\frac14\sum_{k=1}^{d+1}(t_k-t_{k-1})s_k^2\right) \times \psi\topp {1/4}(\vv s,\vv t),
\end{equation}
where $\vv s=(s_1,\dots,s_d)$ and $\vv t=(t_1,\dots,t_d)$, $d\geq 1$.
To describe $\psi\topp \tau(\vv s,\vv t)$, Corwin and Knizel
\cite[Section 7.3]{CorwinKnizel2021} introduce the continuous dual Hahn process
$(\mathbb{T}_s)$, and use it to express this
non-Gaussian
factor as
\begin{equation}
\label{psi}
\psi\topp \tau(\vv s,\vv t)=\frac{1}{\Kab}\int_\r \e\left[\exp\left(-\tau \sum_{k=1}^{d+1}(t_k-t_{k-1})\mathbb{T}_{s_k}\right)\middle|\mathbb{T}_{s_{d+1}}=x\right]\mathfrak p_{s_{d+1}}(\d x),
\end{equation}
where the constant
\begin{equation}\label{Kab2p0}
\Kab =\int_\r e^{-\tau x} \mathfrak p_{0}(\d x)
\end{equation} is computed by a limiting procedure based on \cite{corwin2018open}, the parameter is $\tau=1/4$, and $\mathfrak p_s(\d x)$ is a family \eqref{their-nu} of infinite measures on $\r$, interpreted as the
``univariate distributions''
for the Markov process $(\mathbb{T}_{s})$.
The question then arises of identifying a stochastic process that corresponds to this non-Gaussian component.
Our goal is to express
\[
\psi\topp \tau(\vv s,\vv t) = \e\left(\exp\left(-\sum_{k=1}^d(s_k-s_{k+1}) \widetilde Y_{t_k}\right)\right)
\]
for some process $\widetilde Y$.
In this paper, we study expression \eqref{psi} in its own right, without directly relating it to the KPZ equation, which now serves mainly as motivation.
We use an auxiliary parameter $\tau>0$, which is a convenient scale parameter that facilitates comparison with \cite[Theorem 1.4 (5)]{CorwinKnizel2021}, where $\tau=1/4$.
We
consider expression \eqref{psi} for decreasing sequences $\{s_{k}\}$ from the interval $(-\A,\C)$ which, under our standing assumption that $\A+\C>0$, is non-empty.
The continuous dual Hahn process $(\mathbb{T}_s)_{s\in(-\A,\C)}$ is then well defined and absolutely continuous, see \cite[Section 3]{Bryc:2009}.
Unfortunately, when $\A<0$, this range can be disjoint from the range $[0,\min\{2,\C\})$ used in \cite[(1.10)]{CorwinKnizel2021} when $\C>0$. This does not affect our results, but it hinders their applicability to the identification of the law of stationary measures for the open KPZ equation.
\subsection{Main results}
The multipoint Laplace transform $\psi\topp \tau(\vv s,\vv t)$ in variable $\vv t$ has a dual representation as the Laplace transform in variable $\vv s$ of a Markov process
based on the Yakubovich heat kernel
\begin{equation}\label{p_t}
p_t(x,y)= \int_0^{\infty} e^{-t u^2} K_{\i u}(e^x) K_{\i u}(e^y)\, \mu(\d u),\quad x,y\in\r,\; t>0,
\end{equation}
where
\begin{equation}\label{mu}
\mu(\d u):= \frac{2}{\pi}\frac{\d u}{|\Gamma(\i u)|^2}=\frac{2}{\pi^2} u\sinh(\pi u)\, \d u,
\end{equation}
and
\begin{equation}
\label{K-def}
K_{\i u}(x)=\int_0^\infty e^{-x\cosh w}\cos(u w)\, \d w
\end{equation}
is the modified Bessel function defined for $u\in\r$ and $x>0$.
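For real $u$ and $x>0$ the integral \eqref{K-def} is real and can be evaluated by quadrature; the $u=0$ case reduces to the classical $K_0$, and the identity $|\Gamma(\i u)|^2=\pi/(u\sinh(\pi u))$ behind the second equality in \eqref{mu} can be checked the same way. A Python sketch, for illustration only (the sample values of $u$ and $x$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma

def K_iu(u, x):
    """Modified Bessel K of imaginary order i*u, via the integral (K-def)."""
    val, _ = quad(lambda w: np.exp(-x * np.cosh(w)) * np.cos(u * w), 0, np.inf)
    return val

# u = 0 reduces to the ordinary K_0, available in SciPy:
k0_quad = K_iu(0.0, 1.0)
k0_ref = kv(0, 1.0)

# |Gamma(iu)|^2 = pi / (u sinh(pi u)), the identity behind the measure mu:
u = 0.7
lhs = abs(gamma(1j * u)) ** 2
rhs = np.pi / (u * np.sinh(np.pi * u))
```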
The following result specifies the finite dimensional distributions of the process, which is all that is needed for the dual representation of the Laplace transforms.
\begin{theorem}\label{T1.1}
Fix $\tau>0$ and $\A,\C\in\r$ such that $\A+\C>0$. Then there exists a Markov process $(Y_t)_{t\in [0,\tau]}$ such that
for $0=t_0<t_1<\dots<t_{d}<t_{d+1}=\tau$ and $d\geq 0$ the finite dimensional distributions of
$(Y_{t_j})$ have joint density
\begin{equation}\label{joint-density}
f_{\vv t}(\vv x)=\frac{1}{ C\topp \tau_{\A,\C}} e^{\C x_0 + \A x_{d+1}}\prod_{k=1}^{d+1}p_{t_k-t_{k-1}}(x_{k-1},x_{k}),
\end{equation}
where $\vv t=(t_0,\dots,t_{d+1})$, $\vv x=(x_0,\dots,x_{d+1})\in\r^{d+2}$ and
\begin{equation}
\label{C-formula}
C\topp \tau_{\A,\C}:=\int_{\r} \int_{\r} e^{\A x+\C y} p_\tau(x,y)\d x \d y.
\end{equation}
\end{theorem}
The proof of Theorem \ref{T1.1} consists of the construction of the Markov process, and includes verification that $C\topp \tau_{\A,\C}<\infty$.
Of course, dual representations for the Laplace transforms are bi-directional, so one can start with the process $(Y_t)$ and determine the process on the other side of the dual representation; we will do that in Section \ref{Sect: DMS}.
It turns out that for $s\in(-\A,\C)$ the natural dual process to $(Y_t)$ is not $(\mathbb{T}_s)$ but its square root $(Z_s)$, which we now describe without explicitly referring to $(\mathbb{T}_s)$.
We use standard notation for the products of Gamma functions and the Pochhammer symbols:
$$\Gamma(a,b,\dots,c)= \Gamma(a)\Gamma(b)\dots\Gamma(c),\quad (a)_n=a(a+1)\dots(a+n-1), \quad (a,b,\dots,c)_n=(a)_n(b)_n\dots (c)_n.$$
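As a quick numerical sanity check of this notation, the Pochhammer symbol satisfies $(a)_n=\Gamma(a+n)/\Gamma(a)$; a minimal Python illustration (the sample values are arbitrary):

```python
from math import gamma, prod

def poch(a, n):
    """Rising factorial (a)_n = a (a+1) ... (a+n-1); empty product for n = 0."""
    return prod(a + k for k in range(n))

a, n = 1.5, 4
direct = poch(a, n)                  # 1.5 * 2.5 * 3.5 * 4.5
via_gamma = gamma(a + n) / gamma(a)  # Gamma(a+n) / Gamma(a)
```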
Let $(Z_s)_{s\in(-\infty, \C)}$ be a Markov process with state space $(0,\infty)$ and transition probabilities $P(Z_t=\d v|Z_s=u)$ given by the densities
\begin{equation}\label{ICAK-q}
q_{s,t}(u,v)= \frac{|\Gamma(\frac{\C-t+\i v}{2},\frac{t-s+\i (u+v)}{2},\frac{t-s+\i (u-v)}{2})|^2}{4\pi\Gamma(t-s)|\Gamma(\frac{\C-s+\i u}{2},\i v)|^2},\;\; -\infty< s<t<\C.
\end{equation}
We have the following dual representation for the Laplace transforms of $(Z_s^2)$ and $(Y_t)$.
\begin{theorem}\label{T1.2} For $\A+\C>0$, $d\geq 0$,
$\C>s_1>\dots>s_d>s_{d+1}>-\A$,
and $t_0=0<t_1<\dots<t_d<t_{d+1}=\tau$,
\begin{equation}\label{ICAK-A1}
\int_0^\infty \e\left[\exp\left(-\sum_{k=1}^{d+1}(t_k-t_{k-1})Z_{s_k}^2\right)\middle|Z_{s_{d+1}}=u\right]\varphi_{s_{d+1}}(u)\d u
=
\e\Big[ e^{\sum\limits_{k=1}^{d+1} s_{k} (Y_{ t_{k}}-Y_{t_{k-1}}) } \Big]
\end{equation}
where $(Y_t)_{t\in[0,\tau]}$ is the Markov process from Theorem \ref{T1.1} and
\begin{equation}
\label{nu_s}
\varphi_s(u)=\frac{2^{\A+\C}}{ 8\pi C\topp \tau_{\A,\C}}\frac{|\Gamma(\tfrac{\A+s+\i u}{2},\tfrac{\C-s+\i u}{2})|^2}{|\Gamma(\i u)|^2},
\end{equation}
is a non-integrable function.
\end{theorem}
We remark that the left hand side of \eqref{ICAK-A1} differs from \eqref{psi} in the normalizing constant, and in addition we cannot apply \eqref{ICAK-A1} to $s_{d+1}=0$ when $\A<0$.
Both issues are addressed by our next result.
In this result we need to consider the continuous dual Hahn process on the interval $[0,\C)$, which may be larger than the interval in \cite[Definitions 7.8 and 7.9]{CorwinKnizel2021}. However, using arguments similar to \cite[Section 3.2]{Bryc-Wesolowski-08} one can check \cite{Bryc-2021} that the continuous dual Hahn process $(\TT_s)$ with infinite invariant distributions $\mathfrak p_s(\d x)$ is well defined for $-\infty<s<\infty$.
\begin{theorem}\label{T1.3}
Fix $\tau>0$, $d\geq 1$, and $\A+\C>0$ with $\C>0$.
Let
$\C>s_1>\dots>s_{d}>\max\{-\A,0\}$,
and $t_0=0<t_1<\dots<t_d=1$, augmented with $t_{d+1}=1$ and $s_{d+1}=0$.
Then
\begin{equation}\label{psi-dual}
\psi\topp \tau(\vv s,\vv t)=\e\Big[ e^{\sum\limits_{k=1}^{d} (s_{k}-s_{k+1}) (Y_{ \tau t_{k}}-Y_{0}) } \Big].
\end{equation}
\end{theorem}
\begin{remark}
\label{Rem-extend}
It is plausible that restrictions on the parameters in Theorem \ref{T1.3} could be relaxed by an analytic continuation argument as in the proof of Theorem \ref{Thm-L-Const}. We do not pursue this here, as this would require consideration of atoms in the transition probabilities for the dual continuous Hahn process $(\mathbb{T}_s)$.
\end{remark}
\begin{remark}It might be interesting to point out that identity \eqref{ICAK-A1} can formally be written in integral form.
For
$\C>s_1>\dots>s_d>s_{d+1}>-\A$,
and $t_0=0<t_1<\dots<t_d<t_{d+1}=\tau$
define the step function
\begin{equation}
\label{step-s}
\sigma (t)=\sum_{j=1}^{d+1} s_j \mathbf{1}_{(t_{j-1},t_{j}]}(t).
\end{equation}
By linearity, the integral of $\sigma$ is well defined for any random signed measure on a field generated by left-open right-closed intervals. An example of such a measure is $(\alpha,\beta]\mapsto Y_\beta-Y_\alpha$ induced by process $(Y_t)_{t\in[0,\tau]}$.
Then \eqref{ICAK-A1} can be written as
\begin{equation}\label{IbyP}
\e \left[e^{\int_0^\tau \sigma(t)\d Y_t}\right]=\int_0^\infty \e\left[ e^{- \int_0^\tau Z_{\sigma(t)}^2 \d t }\middle | Z_{s_{d+1}}=u\right] \varphi_{s_{d+1}}(u)\d u.
\end{equation}
A different integral representation for a related identity is discussed in \cite[Section 2.1.4]{CorwinKnizel2021}.
\end{remark}
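As a concrete illustration of the pathwise identity behind \eqref{IbyP} (not used in the proofs): for the step function \eqref{step-s}, the integral of $\sigma$ against the increment measure of $(Y_t)$ reduces to the telescoping sum $\sum_{k} s_k (Y_{t_k}-Y_{t_{k-1}})$. The following Python sketch checks this on a simulated random-walk path; the grid sizes and the sample values of $s_k, t_k$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 1.0
t = np.array([0.0, 0.3, 0.6, 1.0])   # coarse partition t_0 < t_1 < t_2 < t_3 = tau
s = np.array([0.9, 0.5, 0.2])        # sigma(x) = s_j on (t_{j-1}, t_j]

fine = np.union1d(np.linspace(0.0, tau, 501), t)   # fine grid refining {t_k}
incr = rng.normal(0.0, 1.0, fine.size - 1) * np.sqrt(np.diff(fine))
Y = np.concatenate([[0.0], np.cumsum(incr)])       # random-walk path, Y_0 = 0

def sigma(x):
    """Value of the step function (step-s) at points x in (0, tau]."""
    j = np.searchsorted(t, x, side="left") - 1     # interval index containing x
    return s[np.clip(j, 0, s.size - 1)]

# Riemann--Stieltjes sum over the fine grid (sigma is constant on each fine cell)
stieltjes = np.sum(sigma(fine[1:]) * np.diff(Y))
# telescoping sum over the coarse partition
Yc = Y[np.searchsorted(fine, t)]
telescoping = np.sum(s * np.diff(Yc))
```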
\begin{remark} Expression \eqref{psi} can also be written in the following more probabilistic form:
$$\psi\topp \tau(\vv s,\vv t) =\e\left[\exp\left( -\tau \sum_{k=1}^{d} t_k (\mathbb{T}_{s_k}-\mathbb{T}_{s_{k+1}})\right) \right]$$
where $(\mathbb T_s)_{s\ge 0}$ is a continuous dual Hahn process with the initial law given by the probability measure
$$ P(\mathbb T_0=\d x)= \frac{1}{\Kab} e^{-\tau x} \mathfrak p_{0}(\d x).$$
\end{remark}
\subsection{Application to stationary measure for the KPZ equation}
Recall that we denoted by $(H(t))_{t\in[0,1]}$ the stationary measure for the KPZ equation, which was denoted by $(H_{u,v}(x))_{x\in[0,1]}$ in \cite[(1.12)]{CorwinKnizel2021}, with $u=\C/2$, $v=\A/2$ and $H_{u,v}(0)=0$. In this section, $(Y_t)_{t\in[0,1/4]}$ is the Markov process from Theorem \ref{T1.1} with parameter
$\tau=1/4$, and $(W_t)_{t\geq 0}$ is an independent
Brownian motion.
\begin{proposition}
When $\A+\C>0$ with $\min\{\A,\C\}>-2$, the law of the stationary solution $(H(t))_{t\in[0,1]}$ for the KPZ equation is the law of the process
$(W_{t/2}-Y_{t/4}+Y_0)_{t\in[0,1]}$.
\end{proposition}
\begin{proof}
We first consider the case $\C>0$. We apply Theorem \ref{T1.3} with the sequence $s_1>\dots>s_d$ from the (nonempty) interval $(-\A,\min\{2,\C\})$ and with $\tau=1/4$. This interval lies within the admissible range of parameters for \cite[Theorem 1.4 (5)]{CorwinKnizel2021}, so we can invoke \cite[formula (1.12)]{CorwinKnizel2021}, which we cite in the form
\eqref{pre-psi}.
Since in Theorem \ref{T1.3} we put $t_d=t_{d+1}=1$ and $s_{d+1}=0$, using the Abel transformation
we get
\begin{multline}\label{H2Y}
\e\left[e^{ -\sum_{k=1}^d (s_{k}-s_{k+1})H(t_k)}\right] \stackrel{\eqref{pre-psi}}{=}
\e\Big[e^{-\sum_{k=1}^{d} s_k(W_{t_k/2}-W_{t_{k-1}/2})}\Big]\psi\topp{1/4}(\vv s, \vv t)
\\=\e\Big[e^{-\sum_{k=1}^{d} (s_k-s_{k+1})W_{t_k/2}}\Big] \psi\topp{1/4}(\vv s, \vv t)
\stackrel{\eqref{psi-dual}}{=} \e\Big[ e^{ \sum\limits_{k=1}^{d} (s_{k}-s_{k+1}) (Y_{t_{k}/4}-Y_{0}-W_{t_k/2})} \Big].
\end{multline}
Noting the sign change and using the fact that $d$-tuples $(s_1-s_2,s_2-s_3,\dots,s_{d-1}-s_{d},s_d)$ can be selected arbitrarily from an open subset of $\r^d$, the first and last expressions in \eqref{H2Y} establish the identity of the laws by uniqueness of the Laplace transform \cite[Theorem 2.1]{farrell2006techniques}.
By symmetry, this identification extends to the case when $\C$ is negative with $\C>-2$. For this argument we need to indicate dependence on parameters $\A,\C$ explicitly, so
by $Y\topp{{\A,\C}}$ we denote the process with joint density \eqref{joint-density},
and by $H\topp{{\A,\C}}$ we denote the process $(H_{\C/2,\A/2}(x))_{x\in[0,1]}$ in \cite[(1.12)]{CorwinKnizel2021}.
The argument is based on the fact that time reversal exchanges the roles of parameters $\A,\C$. From \eqref{joint-density} and \eqref{p_t} it follows that process $(Y\topp{{\A,\C}}_{1-t})$ has the same law as the process $(Y\topp{\C,\A}_t)$,
and
by \cite[Theorem 1.4 (4)]{CorwinKnizel2021}, process $(H\topp{\A,\C}(1-t)-H\topp{\A,\C}(1))_{t\in[0,1]}$ has the same law as $(H\topp{\C,\A}(t))_{t\in[0,1]}$.
So if $\C$ is negative, i.e., $\A>0$, applying the already established result to the time reversals, we see that the process
$(H\topp{{\C,\A}}(t))_{t\in[0,1]}$ has the same law as the process $(\widetilde H(t))_{t\in[0,1]}$ with $\widetilde H(t)=W_{t/2}-Y_{t/4}\topp{{\C,\A}}+Y_0\topp{{\C,\A}}$. Applying the time reversals again, we see that
process
$(H\topp{{\A,\C}}(t))_{t\in[0,1]}$ has the same law as the process defined by the reverse transformation of $\widetilde H$, i.e.,
\begin{multline*}
\widetilde H(1-t)-\widetilde H(1)=
(W_{(1-t)/2}-Y\topp{\C,\A}_{1/4-t/4}+Y\topp{\C,\A}_{0})-(W_{1/2}-Y\topp{\C,\A}_{1/4}+Y\topp{\C,\A}_{0}) \\=
(W_{(1-t)/2}-W_{1/2})+(Y\topp{\C,\A}_{1/4}-Y\topp{\C,\A}_{1/4-t/4}).
\end{multline*} This ends the proof, as
$(W_{(1-t)/2}-W_{1/2})_{t\in[0,1]}\stackrel{\mathcal{L}}{\simeq} (W_{t/2})_{t\in[0,1]}$ and $(Y\topp{\C,\A}_{1/4}-Y\topp{\C,\A}_{1/4-t/4})_{t\in[0,1]}\stackrel{\mathcal{L}}{\simeq}
(Y\topp{\A,\C}_{0}-Y\topp{\A,\C}_{t/4})_{t\in[0,1]}$.
\end{proof}
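The Brownian identity $(W_{(1-t)/2}-W_{1/2})_{t\in[0,1]}\stackrel{\mathcal{L}}{\simeq} (W_{t/2})_{t\in[0,1]}$ used in the last step is a statement about centered Gaussian processes, so it reduces to matching covariances: $\cov(W_{(1-s)/2}-W_{1/2},\,W_{(1-t)/2}-W_{1/2})=\min(s,t)/2=\cov(W_{s/2},W_{t/2})$. This is pure arithmetic with $\cov(W_a,W_b)=\min(a,b)$, and a minimal Python check over a sample grid reads:

```python
import itertools

def cov_bm(a, b):
    """Covariance of standard Brownian motion: E[W_a W_b] = min(a, b)."""
    return min(a, b)

grid = [0.1, 0.25, 0.5, 0.75, 0.9]
max_err = 0.0
for s, t in itertools.product(grid, grid):
    a, b, c = (1 - s) / 2, (1 - t) / 2, 0.5
    # Cov(W_a - W_c, W_b - W_c), expanded by bilinearity
    lhs = cov_bm(a, b) - cov_bm(a, c) - cov_bm(c, b) + cov_bm(c, c)
    rhs = cov_bm(s / 2, t / 2)          # Cov(W_{s/2}, W_{t/2}) = min(s, t)/2
    max_err = max(max_err, abs(lhs - rhs))
```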
In view of \cite[Remark 8.2]{CorwinKnizel2021} and Remark \ref{Rem-extend}, it is plausible that stochastic process $(W_{t/2}-Y_{t/4}+Y_0)_{t\in[0,1]}$ has the same law as the stationary solution $(H(t))_{t\in[0,1]}$ for the KPZ equation for the full range of parameters $\A+\C>0$.
\subsection{Overview of the paper} We begin with a short self-contained proof of Theorem \ref{T1.2} in Section \ref{Sect:SecondProof}.
A sizeable part of the paper deals with the proof of Theorem \ref{T1.1}. In Section \ref{Sect:Yakubovich} we develop background material, including a non-standard version of the Kontorovich-Lebedev transform, discuss properties of the Yakubovich heat kernel \eqref{p_t} and its semigroup,
and give an analytic continuation argument that is needed to relate the normalizing constant \eqref{C-formula} to the constant from \eqref{Kab2p0}.
With all the ingredients in place, in Section \ref{Sect:Recipr} we construct a Markov process for Theorem \ref{T1.1}.
In Section \ref{Sec:ProofT1.3} we prove Theorem \ref{T1.3}.
In Section \ref{Sect: DMS} we provide additional insight into the fundamental role played by Kontorovich-Lebedev transform in the dual representation identity of Theorem \ref{T1.2}.
Using the Kontorovich-Lebedev transform, we define a dual semigroup to the Yakubovich semigroup. After converting this non-integrable semigroup into a Markov semigroup, we recover transition probabilities \eqref{ICAK-q} for process $(Z_t)$.
We then use the Kontorovich-Lebedev transform to prove conditional and unconditional dual representations. The unconditional dual representation in Theorem \ref{T2} yields Theorem \ref{T1.2}, but we suppressed that argument in favor of a more direct ad-hoc argument that appears in Section \ref{Sect:SecondProof}.
In Section \ref{Sect: Rel-HW} we discuss the Hartman-Watson density and its relation to the Yakubovich heat kernel. Using this relation, we derive a closed form formula for the Laplace transform, with respect to argument $\tau$, for the normalizing constant \eqref{C-formula} in the strip $0<\A+\C<2$.
In the Appendix we collect some known results in the form that we need, sometimes with sketches of proofs for completeness.
\section{Proof of Theorem \ref{T1.2}} \label{Sect:SecondProof}
This proof consists of verification of the integral identity \eqref{ICAK-A1}, assuming Theorem \ref{T1.1}. The two key identities are
\begin{equation} \label{K-Melin}
\int_{\r } e^{t x} K_{\i u}(e^x) K_{\i v} (e^x) \d x=\frac{2^{-3+t}}{\Gamma(t)}
|\Gamma((t+\i (u+v))/2,(t+\i (u-v))/2)|^2,
\end{equation}
which holds for $u,v\in\r$ and $t>0$ (see \cite[6.8 (48)]{erdelyi1954fg}), and
\begin{equation}\label{K-Mellin2}
2^{s-2} |\Gamma(\tfrac{s+\i u}{2})|^2=
\int_{\r} e^{sx} K_{\i u}(e^x)\d x,
\end{equation}
which holds for all $u\in\r$ and $s>0$ (see
\cite[6.8 (26)]{erdelyi1954fg}).
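Both identities can be spot-checked numerically at $u=v=0$, where $K_{\i u}$ reduces to the ordinary $K_0$ and, after the substitution $y=e^x$, \eqref{K-Mellin2} and \eqref{K-Melin} become classical Mellin transforms of $K_0$ and $K_0^2$. A Python sketch with arbitrary sample values of $s$ and $t$ (illustration only):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma

# (K-Mellin2) at u = 0, substituting y = e^x:
#   int_0^inf y^{s-1} K_0(y) dy = 2^{s-2} Gamma(s/2)^2
s = 3.0
lhs1, _ = quad(lambda y: y ** (s - 1) * kv(0, y), 0, np.inf)
rhs1 = 2 ** (s - 2) * gamma(s / 2) ** 2

# (K-Melin) at u = v = 0, substituting y = e^x:
#   int_0^inf y^{t-1} K_0(y)^2 dy = 2^{t-3} Gamma(t/2)^4 / Gamma(t)
t = 1.0
lhs2, _ = quad(lambda y: y ** (t - 1) * kv(0, y) ** 2, 0, np.inf)
rhs2 = 2 ** (t - 3) * gamma(t / 2) ** 4 / gamma(t)
```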
For ease of reference, we denote the left-hand side of \eqref{ICAK-A1} by
\begin{equation}
\label{post-psi} \Psi(\vv s,\vv t):= \int_0^\infty \e\left[\exp\left(-\tau \sum_{k=1}^{d+1}(t_k-t_{k-1})Z_{s_k}^2\right)\middle|Z_{s_{d+1}}=u\right]\varphi_{s_{d+1}}(u)\d u,
\end{equation}
and expand it as an explicit multivariate integral
\begin{equation}
\label{ICAK-2018}
\Psi(\vv s,\vv t) =
\int_0^\infty \varphi_{s_{d+1}}(u_{d+1}) \d u_{d+1} \int_{\r_+^d} e^{-\tau \sum_{k=1}^{d+1}(t_k-t_{k-1})u_k^2} \prod_{k=1}^{d} q_{s_{k+1},s_{k}}(u_{k+1},u_k) \d u_1\dots\d u_d,
\end{equation}
where $\C>s_1>\dots>s_d>s_{d+1}>-\A$ with $\A+\C>0$, and $q_{s,t}(u,v)$ is given by \eqref{ICAK-q}.
Denote $\Delta_k=t_{k}-t_{k-1}$.
Inserting \eqref{nu_s} and \eqref{ICAK-q} into \eqref{ICAK-2018}, after cancellations we get
\begin{multline}\label{Yizao}
\frac{ 8\pi C\topp \tau_{\A,\C}}{2^{\A+\C}}\cdot\Psi(\vv s,\vv t)= \int_{\r_+^{d+1}}\exp\left({-\tau\sum_{k=1}^{d+1}u_k^2\Delta_k}\right)\frac{|\Gamma(\frac{\A+s_{d+1}+\i u_{d+1}}{2},\frac{\C-s_{1}+\i{u_{1}}}{2})|^2}{\left|\Gamma(\i{u_{d+1}})\right|^2}\\
\times\prod_{k=1}^{d}\frac{\left|\Gamma(\frac{s_k-s_{k+1}+\i( {u_k-u_{k+1})}}{2}, \frac{s_k-s_{k+1}+\i({u_k+u_{k+1})}}{2})\right|^2}{4\pi\Gamma(s_k-s_{k+1})|\Gamma(\i{ u_k})|^2}d\vv u.
\end{multline}
Inserting \eqref{K-Mellin2} twice and \eqref{K-Melin} $d$ times into \eqref{Yizao}, we get
\begin{align} \label{MultiIntegrals}
\frac{ 8\pi C\topp \tau_{\A,\C}}{2^{\A+\C}}\cdot \Psi(\vv s,\vv t)& = \int_{\r_+^{d+1}}\exp\left({-\tau\sum_{k=1}^{d+1}u_k^2\Delta_k}\right)\frac{2^4}{\left|\Gamma(\i{u_{d+1}})\right|^2 2^{\A+\C+s_{d+1}-s_1}} \\
& \quad \times
\int_\r e^{(\A+s_{d+1}) x_{d+1}}K_{\i u_{d+1}}(e^{x_{d+1}})\d x_{d+1} \int_\r e^{(\C-s_1)x_{0}} K_{\i u_1}(e^{x_{0}})\d x_{0}\nonumber\\
& \quad \times\prod_{k=1}^{d}\frac{2}{\pi|\Gamma(\i {u_k})|^2 2^{s_k-s_{k+1}}} \int_\r e^{(s_k-s_{k+1})x_k}
K_{\i u_k}\left(e^{x_k}\right)
K_{\i u_{k+1}}\left(e^{x_k}\right)\d x_k\d\vv u \nonumber
\\& =
\frac{8\pi}{2^{\A+\C}} \int_{\r_+^{d+1}}\exp\left({-\tau\sum_{ k=1}^{d+1}u_k^2\Delta_k}\right)\int_\r e^{(\A +s_{d+1})x_{d+1}} K_{\i u_{d+1}}\left(e^{x_{d+1}}\right)\d x_{d+1} \int_\r e^{(\C-s_1)x_{0}} K_{\i u_1}\left(e^{x_{0}}\right)\d x_{0}\nonumber\\
& \quad \times\left(\prod_{k=1}^{d} \int_\r e^{(s_k-s_{k+1})x_k} K_{\i u_k}\left(e^{x_k}\right)
K_{\i u_{k+1}}\left(e^{x_k}\right) \d x_k\right) \prod_{k=1}^{d+1}\mu(\d u_k).\nonumber
\end{align}
Recalling \eqref{mu} and \eqref{K-def}, up to a multiplicative constant, the expression under the multiple integrals in \eqref{MultiIntegrals} is bounded by a product of two integrable expressions:
$$ \prod_{k=1}^{d+1}e^{-\tau u_k^2\Delta_k} u_k\sinh(\pi u_k),
$$
which is integrable with respect to $\vv u$ as $\Delta_k=t_k-t_{k-1}>0$, and
$$
e^{(\A+s_{d+1}) x_{d+1}} K_0\left(e^{x_{d+1}}\right)e^{(\C-s_1)x_0} K_0\left(e^{x_{0}}\right)\prod_{k=1}^d e^{(s_k-s_{k+1})x_k} K_0^2\left(e^{x_k}\right),
$$
which is integrable over $\r^{d+2}$ with respect to $\d\vv x$, since the functions $e^{\eps x}K_0(e^{x})$ and $e^{\eps x}K_0^2(e^{x})$ are integrable for any $\eps>0$ by the well-known bounds \eqref{K0-bd1} and \eqref{K0-bd2}.
Using Fubini's theorem, we rearrange the order of integrals in \eqref{MultiIntegrals}. We get
\begin{multline} \label{MultiIntegrals2}
C\topp \tau_{\A,\C} \Psi(\vv s,\vv t) =
\int_{\r^{d+2}}e^{(\C-s_1)x_{0} +\sum_{k=1}^d(s_k-s_{k+1})x_k +(\A +s_{d+1})x_{d+1}}
\\
\times \int_{\r_+^{d+1}}e^{-\tau\sum_{ k=1}^{d+1}u_k^2\Delta_k} K_{\i u_{d+1}}\left(e^{x_{d+1}}\right) K_{\i u_1}\left(e^{x_{0}}\right) \prod_{k=1}^{d} K_{\i u_k}\left(e^{x_k} \right)
K_{\i u_{k+1}}\left(e^{x_k}\right) \prod_{k=1}^{d+1}\mu(\d u_k)
\d x_{0} \d x_{1} \dots \d x_{d+1}.
\end{multline}
Noting that
$$
(\C-s_1)x_{0} +\sum_{k=1}^d(s_k-s_{k+1})x_k +(\A +s_{d+1})x_{d+1}=
\C x_0+\A x_{d+1}+\sum_{k=1}^{d+1} s_k(x_k-x_{k-1}),$$
we have
\begin{multline} \label{MultiIntegrals3}
C\topp \tau_{\A,\C}
\Psi(\vv s,\vv t)=
\int_{\r^{d+2}}e^{\sum_{k=1}^{d+1} s_k(x_k-x_{k-1})}e^{\C x_{0} + \A x_{d+1}}
\\
\times
\left(\prod_{k=1}^{d+1}\int_{\r_+}e^{-\tau u_k^2(t_k-t_{k-1})} K_{\i u_k}\left(e^{x_k} \right) K_{\i u_k}\left(e^{x_{k-1}} \right) \mu(\d u_k)\right)
\d x_{0} \d x_{1} \dots \d x_{d+1} \\
= C\topp \tau_{\A,\C} \int_{\r^{d+2}} \exp\left(\sum_{k=1}^{d+1} s_k(x_k-x_{k-1})\right)
f_{\vv t}(\vv x) \d \vv x.\nonumber
\end{multline}
Thus \eqref{ICAK-A1} holds.
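The index rearrangement used before \eqref{MultiIntegrals3} is a summation-by-parts identity and can be confirmed numerically on random data (illustration only; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
A, C = 0.7, 1.3
s = np.sort(rng.uniform(-A, C, d + 1))[::-1]   # s_1 > ... > s_{d+1}
x = rng.normal(size=d + 2)                     # x_0, ..., x_{d+1}

# (C - s_1) x_0 + sum_{k=1}^d (s_k - s_{k+1}) x_k + (A + s_{d+1}) x_{d+1}
lhs = ((C - s[0]) * x[0]
       + np.sum((s[:-1] - s[1:]) * x[1:-1])
       + (A + s[-1]) * x[-1])
# C x_0 + A x_{d+1} + sum_{k=1}^{d+1} s_k (x_k - x_{k-1})
rhs = C * x[0] + A * x[-1] + np.sum(s * (x[1:] - x[:-1]))
```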
\section{Killed Brownian motion and the Yakubovich heat kernel}
\label{Sect:Yakubovich}
In this section we introduce notation, collect results from the literature, and prove auxiliary facts that we need for Section \ref{Sect:Recipr}, where we prove Theorem \ref{T1.1} and for Section \ref{Sec:ProofT1.3}, where we prove Theorem \ref{T1.3}.
\subsection{Kontorovich--Lebedev transform} \label{Sec:Bkg}
According to \cite[10.45(v)]{NIST2010}, papers \cite{Yakubovich2004,Yakubovich2011}, or the book \cite{yakubovich1996index},
the Kontorovich--Lebedev integral transform is defined as
\begin{equation*}
\mathbb K_{\i u} [f]:=\int_0^{\infty} K_{\i u} (x) f(x) \d x.
\end{equation*}
It is an isometry between $L_2((0,\infty), x\d x)$ and $L_2((0,\infty), \mu(\d u))$, where $\mu$ is given by \eqref{mu}.
If $g(u)=\mathbb K_{\i u} [f]$, then the inverse transform is
\begin{equation*}
f(x)=\frac{1}{x} \int_0^{\infty} K_{\i u} (x) g(u) \mu(\d u),
\end{equation*}
see \cite[Section 4.15]{titchmarsh2011elgenfunction} and \cite{Yakubovich2004,Yakubovich2011}.
We will use a slightly different version of the Kontorovich--Lebedev transform, defined as
\begin{equation}\label{K-L+}
{\mathcal K}f (u)=\int_{\r} f(x) K_{\i u}(e^x) \d x.
\end{equation}
Note the difference between $K$, $\mathbb K$, and curly ${\mathcal K}$ in the notation.
From the above facts we conclude that ${\mathcal K}$ is an isometry between $L_2(\r, \d x)$ and $L_2((0,\infty), \mu(\d u))$ and the inverse transform is given by
\begin{equation}\label{K-inv}
{\mathcal K}^{-1} g(x)= \int_0^{\infty} K_{\i u}(e^{x}) g(u) \mu(\d u).
\end{equation}
We adopt the convention that $f$ is a function of $x\in\r$ and $g$ is a function of $u>0$. Then $\kk f$ is a function of $u$ and $\kk^{-1}g$ is a function of $x$. When we need to apply $\kk$ to an explicit expression in variable $x$, we will write $\kk[f(x)]$ or $\kk[f(x)](u)$. In this notation, the left-hand side of \eqref{K-L+} is an abbreviated form of the more precise expression $\kk[f(x)](u)$. We will use similar conventions in our notation for the operators.
As integral operators, both $\mathcal{K}$ and $\kk^{-1}$ extend to functions which are not necessarily square integrable, and we will need to apply them to such functions (even though they may then cease to be inverses of each other).
It is clear that $\mathcal{K}$ is well defined on $L_1(\r, K_0(e^x)\d x)$, which we will abbreviate as $L_1(K_0)$; compare \cite[(1.2)]{Yakubovich2003kontorovich}. It is clear that $\kk^{-1}$ is well defined on $L_1((0,\infty),\d \mu)$, which we will abbreviate as $L_1(\d \mu)$.
\subsection{The semigroup with kernel $p_t(x,y)$}
Recall definition \eqref{p_t}. We can now explain how the Yakubovich heat kernel $p_t(x,y)$ defines a sub-Markov process.
(Note that process $(Y_t)$ in Theorem \ref{T1.1} is Markov, not sub-Markov.)
The Kontorovich--Lebedev transform defines a semigroup of contractions
\begin{equation}\label{PP}
\widetilde {\pp}_t ={\mathcal K}^{-1} e^{-t u^2} {\mathcal K}
\end{equation}
which, at first, act on $L_2(\r,\d x)$.
Here we understand $e^{-t u^2}$ to be the multiplication operator on $L_2((0,\infty), \mu(\d u))$ that maps a function $g(u)$ to $e^{-t u^2} g(u)$.
Taking into account \eqref{K-L+} and \eqref{K-inv}, it is apparent that $\widetilde{\pp_t}$ is an integral operator with the Yakubovich heat kernel \eqref{p_t}. Sousa and Yakubovich
\cite[Example 3.7]{SousaYakubovich2018} relate this kernel to Sturm--Liouville theory of differential operators, so the operators $\widetilde \pp_t$ as integral operators with kernel \eqref{p_t} act also on bounded continuous functions, and define transition probabilities of a sub-Markov process.
The following is a restatement of these facts.
\begin{theorem}\label{T-YS}
The Yakubovich heat kernel $p_{t}(x,y)$ defined in \eqref{p_t} has the following properties.
\begin{enumerate}
[(i)]
\item For $t>0$ and $x,y\in\r$, the kernel is symmetric, $p_t(x,y)=p_t(y,x)$ and positive, $p_t(x,y)>0$.
\item For $t>0$ and $x\in\r$, $p_t(x,y)$ defines a sub-probability density function, $\int_\r p_t(x,y)\d y <1$.
\item For $s,t>0$ and $x,z\in \r$, the Chapman--Kolmogorov (semigroup) property holds: $\int_\r p_t(x,y)p_s(y,z)\d y=p_{s+t}(x,z)$.
\end{enumerate}
\end{theorem}
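Theorem \ref{T-YS} can be illustrated numerically by evaluating \eqref{p_t} with nested quadrature. A Python sketch (illustration only; the parameter values are arbitrary) checks positivity at a sample point and that $t\mapsto p_t(x,x)$ decreases, which is immediate from \eqref{p_t} since the integrand is positive and decreasing in $t$:

```python
import numpy as np
from scipy.integrate import quad

def K_iu(u, x):
    """Modified Bessel K of imaginary order i*u, via the integral (K-def)."""
    val, _ = quad(lambda w: np.exp(-x * np.cosh(w)) * np.cos(u * w), 0, np.inf)
    return val

def p(t, x, y):
    """Yakubovich heat kernel (p_t), with mu(du) = (2/pi^2) u sinh(pi u) du."""
    ex, ey = np.exp(x), np.exp(y)
    def integrand(u):
        if t * u * u > 700.0:   # e^{-t u^2} underflows; avoid 0 * inf from sinh
            return 0.0
        return (np.exp(-t * u * u) * K_iu(u, ex) * K_iu(u, ey)
                * 2.0 / np.pi ** 2 * u * np.sinh(np.pi * u))
    val, _ = quad(integrand, 0, np.inf)
    return val

p1 = p(1.0, 0.0, 0.0)
p2 = p(2.0, 0.0, 0.0)   # p_t(x, x) decreases in t, since the integrand does
```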
Since we could not locate an appropriate reference for Theorem \ref{T-YS}, we sketch a probabilistic justification
by relating the semigroup $\widetilde \pp_t$ in \eqref{PP} to the killed Brownian motion.
Let $(\widehat X_t)_{t\geq 0}$ be the process defined by the Markov generator
\begin{equation}
{\mathcal L}^{\widehat X}f(x)=f''(x)-e^{2x} f(x).
\end{equation}
We can identify $\widehat X$ with the $\sqrt{2}$ multiple of Brownian motion that is killed at rate $k(x)=e^{2x}$. More precisely, if $W$ is a standard Brownian motion, then the semigroup of the process $\widehat X$ is given by
\begin{equation}\label{P_t-semi}
\widehat {\mathcal {P}}_t f(x)=\e_x[f(\widehat X_t)]=\e\Big[ e^{-\int_0^t e^{2 \sqrt{2} W_s} \d s} f(\sqrt{2} W_t) \Big| W_0=\frac{x}{\sqrt{2}} \Big].
\end{equation}
It is then clear that $\widehat {\mathcal {P}}_t$ is positive and sub-Markovian, $\widehat \pp_t 1 <1$ for $t>0$.
Here is how one can see that the semigroup defined by \eqref{P_t-semi} is the same semigroup that we defined in \eqref{PP}.
The modified Bessel function $K_{\i u}(x)$ satisfies the differential equation
$$
x^2 f''(x)+xf'(x)-x^2 f(x)=-u^2 f(x),
$$
thus the function $f(x)=K_{\i u}(e^x)$ satisfies the differential equation
$$
f''(x)-e^{2x} f(x)=-u^2 f(x).
$$
This shows that functions $f_{u}(x)=K_{\i u}(e^x)$ are eigenfunctions of the Markov generator ${\mathcal L}^{\widehat{X}}$ and of the Markov semigroup $\widehat{\pp}_t $:
\begin{equation}
{\mathcal L}^{\widehat X} f_{u}(x)=-u^2 f_{u}(x), \;\;\; \widehat{\pp }_t f_{u}(x) =f_{u}(x) e^{-t u^2} .
\end{equation}
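This eigenfunction relation can be confirmed numerically: with $f_u(x)=K_{\i u}(e^x)$ computed from \eqref{K-def}, a central finite difference for $f_u''$ should satisfy $f_u''(x)-e^{2x}f_u(x)\approx -u^2 f_u(x)$. A short Python check, with arbitrary sample values of $u$, $x$, and step size (illustration only):

```python
import numpy as np
from scipy.integrate import quad

def K_iu(u, x):
    """Modified Bessel K of imaginary order i*u, via the integral (K-def)."""
    val, _ = quad(lambda w: np.exp(-x * np.cosh(w)) * np.cos(u * w), 0, np.inf)
    return val

u, x0, h = 0.8, 0.2, 1e-2
f = lambda x: K_iu(u, np.exp(x))

# central second difference approximating f''(x0)
f2 = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h ** 2
# residual of the eigenvalue equation f'' - e^{2x} f = -u^2 f
residual = f2 - np.exp(2 * x0) * f(x0) + u ** 2 * f(x0)
```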
We conclude that the integral operator ${\mathcal K}$ (our modified version of the Kontorovich--Lebedev transform) diagonalizes the Markov semigroup $\widehat \pp_t$, thus extending the action of semigroup
\eqref{PP} beyond the initial domain $L_2(\r,\d x)$. We therefore identify $\widetilde \pp _t$ and $\widehat \pp _t$.
(We will apply $\widetilde \pp _t$ to functions of the form $e^{ax}$ which are not in $L_2(\r,\d x)$.)
\subsection{The normalizing constants}
In view of Theorem \ref{T-YS}(iii), it is clear that the normalizing constant for the density \eqref{joint-density} is given by the bivariate integral \eqref{C-formula} which is symmetric in $\A,\C$. We need to show that this integral is finite, and that it is given by an expression that we will use in the proof of Theorem \ref{T1.3}.
\begin{theorem}\label{Thm-L-Const}
If $\A+\C>0$ and $\tau>0$ then the integral \eqref{C-formula}
is finite and
\begin{equation}
\label{C2K}
C\topp \tau_{\A,\C}= \frac{2^{\A+\C}\KabLa}{8\pi},
\end{equation}
where
\begin{equation} \label{K-normalize}
\KabLa=\CabLa+ \DabLa+\DbaLa,
\end{equation}
with
\begin{eqnarray}
\label{C-normalize}
\CabLa &=& \int_0^\infty e^{-\tau u^2} \frac{|\Gamma(\tfrac{\A+\i u}{2},\tfrac{\C+\i u}{2})|^2}{|\Gamma(\i u)|^2}\d u, \\
\DabLa&=&\begin{cases}
\frac{4\pi\Gamma(\frac{\C+\A}{2},\frac{\C-\A}{2})}{\A \Gamma(-\A )}\sum_{\{k\geq 0:\; \A+2k<0 \}} e^{\tau(\A+2k)^2}(\A+2k)\frac{(\A,\frac{\A+\C}{2})_k}{k!(1+\frac{\A-\C}{2})_k} & \A<0, \\
0 &\A\geq 0.
\end{cases}
\label{D-normalize}
\end{eqnarray}
\end{theorem}
In view of symmetry, without loss of generality we can take $\C>0$.
The proof consists of the following steps: we begin by proving \eqref{C2K} for $\A,\C>0$, then we show that the integral \eqref{C-formula} is finite for all $\A+\C>0$, and finally prove that \eqref{C2K} holds also if $\A<0$.
\subsubsection{Case $\A,\C>0$}
We use \eqref{p_t} to write the left hand side of \eqref{C-formula} as an iterated integral
$$
C\topp \tau_{\A,\C}=\int_{\r} \int_{\r} e^{\A x+ \C y} \frac{2}{\pi}\left(\int_0^{\infty} e^{-\tau u^2} K_{\i u}(e^x) K_{\i u}(e^y) \frac{\d u}{{|\Gamma(\i u)|^2}} \right)\d x \d y.
$$
Since $| K_{\i u}(e^x)|\leq K_0(e^x)$ and $\A,\C>0$, the integrand is integrable, see \eqref{K0-bd1}--\eqref{K0-bd2}, and we can use Fubini's theorem to change the order of the integration.
From the integral identity \eqref{K-Mellin2}
we see that $C\topp \tau_{\A,\C}$ is equal to
\begin{equation}
\label{C-X}
\frac{2^{\A+\C}}{8\pi} \int_0^\infty e^{-\tau u^2}\frac{|\Gamma(\tfrac{\A+\i u}{2},\tfrac{\C+\i u}{2})|^2}{|\Gamma(\i u)|^2}\d u,
\end{equation}
which is
\eqref{C2K} when $\A,\C>0$.
\subsubsection{Finiteness for all $\A+\C>0$}
Let $\tau>0$ be a fixed constant. We define a function of two variables
\begin{equation}\label{eqn1}
f(\A,\C)= \frac{1}{2\pi \i} \int_{\i \r} e^{\tau z^2} \frac{\Gamma((\A+ z)/2,(\A- z)/2,(\C+z)/2,(\C- z)/2)}
{\Gamma(z) \Gamma(-z)} \d z.
\end{equation}
The integral is taken over the imaginary line in the complex plane. Writing $z=\i u$ we obtain an equivalent representation
\begin{equation}\label{eqn1b}
f(\A,\C)= \frac{1}{2\pi} \int_{\r} e^{-\tau u^2} \frac{\Gamma((\A+ \i u)/2,(\A- \i u)/2,(\C+\i u)/2,(\C- \i u)/2)}
{|\Gamma(\i u)|^2} \d u.
\end{equation}
Note that the integrand in \eqref{eqn1} (as a function of $z$) is meromorphic and has poles at points
$$
\{\pm (\A+2n), \pm (\C+2n) \; : \; n=0,1,2,3,\dots\},
$$
and it is clear that $f(\A,\C)$ is an analytic function of two variables $(\A,\C)$ in the domain
$$
D_0:=\{(\A,\C) \in \c^2 \; : \; \re(\A)>0, \re(\C)>0\}.
$$
\begin{theorem}\label{thm1}
The function $f(\A,\C)$ can be analytically continued to a function holomorphic in
$$
\Omega=\{(\A,\C) \in \c^2 \; : \; \re(\A+\C)>0\}.
$$
\end{theorem}
\begin{proof}
Let $\C$ be fixed and consider $\A$ in the strip $0<\re(\A)<\re(\C)$. Take any number $c_1$ from the interval
$(\re(\A),\min(\re(\C),\re(\A)+2))$. We shift the contour of integration in
\eqref{eqn1} from $\i \r$ to $c_1+\i \r$ and collect the residue at $z=\A$ (where the integrand has a simple pole) to obtain the following expression
\begin{align}\label{eqn2}
f(\A,\C)&=\frac{1}{2\pi \i} \int_{c_1+\i \r} e^{\tau z^2} \frac{\Gamma((\A+ z)/2,(\A- z)/2,(\C+z)/2,(\C- z)/2)}
{\Gamma(z)\Gamma(-z)} \d z\\ \nonumber
&\quad +2 e^{\tau \A^2}\frac{\Gamma((\C+\A)/2,(\C- \A)/2)}{\Gamma(-\A)}.
\end{align}
The poles at $\A$, $\A+2$ and $-\A$ must not touch the contour of integration $c_1+\i\r$. This gives the conditions
$\re(\A)<c_1<\re(\A)+2$ and $-\re(\A)<c_1<\re(\C)$; in particular, $-\re(\A)<\re(\A)+2$, i.e., $-1<\re(\A)$. The second term in \eqref{eqn2} is analytic if $\re(\C \pm \A)>0$.
We conclude that \eqref{eqn2} gives an analytic continuation of $f(\A,\C)$ into the following domain:
$$
\widetilde D_1:=\{(\A,\C)\in \c^2 \; : \; -1<\re(\A)<\re(\C), \re(\A)+\re(\C)>0\}.
$$
Now take $(\A,\C)\in \widetilde D_1$ with $\re(\A)<0$. We shift the contour of integration from $c_1+\i \r$ back to $\i \r$, but now we need to take into account the pole at $z=-\A$:
\begin{align}\label{eqn2b}
f(\A,\C)&=\frac{1}{2\pi \i} \int_{\i \r} e^{\tau z^2} \frac{\Gamma((\A+ z)/2,(\A- z)/2,(\C+z)/2,(\C- z)/2)}
{\Gamma(z)\Gamma(-z)} \d z\\ \nonumber
&\quad +4 e^{\tau \A^2}\frac{\Gamma((\C+\A)/2,(\C- \A)/2)}{\Gamma(-\A)}.
\end{align}
Again, the poles at $\A,\A+2,-\A-2,-\A$ must not touch the contour of integration $\i\r$, which gives the conditions
$-2< \re(\A) < 0$ and $\re(\C)>0$.
Again, the second term in \eqref{eqn2b} is analytic as long as $\re(\C\pm \A)>0$. Therefore,
formula \eqref{eqn2b} gives us an analytic continuation of $f(\A,\C)$ in the domain
$$
D_2:=\{(\A,\C)\in \c^2 \; : \; -2<\re(\A)<0, \re(\A)+\re(\C)>0\}.
$$
Now we repeat this procedure. Take $(\A,\C)\in D_2$ such that $-2<\re(\A)<-1$ and $\re(\A)+2<\re(\C)$. Choose any $c_2$ in the interval $(\re(\A)+2,\min(\re(\C),\re(\A)+4))$. Shifting the contour of integration in \eqref{eqn2b} from $\i \r$ to $c_2+\i \r$ and collecting the residue at $z=\A+2$, we get
\begin{align}\label{eqn3}
f(\A,\C)&=\frac{1}{2\pi \i} \int_{c_2+\i \r} e^{\tau z^2} \frac{\Gamma((\A+ z)/2,(\A- z)/2,(\C+z)/2,(\C- z)/2)}
{\Gamma(z)\Gamma(-z)} \d z\\ \nonumber
&\quad +
4 e^{\tau \A^2} \frac{\Gamma((\C+\A)/2,(\C- \A)/2)}{\Gamma(-\A)} \\ \nonumber
&\quad - 2 e^{\tau(\A+2)^2} \frac{\Gamma(\A+1,(\C+\A+2)/2,(\C- \A-2)/2)}{\Gamma(\A+2)\Gamma(-\A-2)}.
\end{align}
The poles at $\A+2$, $\A+4$, $-\A-2$, $-\A$
must not touch the contour of integration $c_2+\i\r$, so
the above expression gives an analytic continuation of $f(\A,\C)$ into the following domain:
$$
D_3:=\{(\A,\C)\in \c^2 \; : \; -3<\re(\A)<-1, \re(\A)+\re(\C)>0\}.
$$
Now take $(\A,\C)\in D_3$ such that $-3<\re(\A)<-2$. We shift the contour of integration from $c_2+\i \r$ back to $\i \r$, but now we need to take into account the pole at $z=-\A-2$:
\begin{align}\label{eqn3b}
f(\A,\C)&=\frac{1}{2\pi \i} \int_{\i \r} e^{\tau z^2} \frac{\Gamma((\A+ z)/2,(\A- z)/2,(\C+z)/2,(\C- z)/2)}
{\Gamma(z)\Gamma(-z)} \d z\\ \nonumber
&\quad +
4 e^{\tau \A^2} \frac{\Gamma((\C+\A)/2,(\C- \A)/2)}{\Gamma(-\A)} \\ \nonumber
&\quad - 4 e^{\tau(\A+2)^2} \frac{\Gamma(\A+1,(\C+\A+2)/2,(\C- \A-2)/2)}{\Gamma(\A+2)\Gamma(-\A-2)}.
\end{align}
Again, the poles at $\A+2,\A+4,-\A-4,-\A-2$ must not touch the contour of integration $\i\r$, so
the above expression gives an analytic continuation of $f(\A,\C)$ into the following domain:
$$
D_4:=\{(\A,\C)\in \c^2 \; : \; -4<\re(\A)<-2, \re(\A)+\re(\C)>0\}.
$$
Repeating the above procedure, the function $f(\A,\C)$ is analytically continued into the domains $D_5$, $D_6$, etc., where
$$
D_n:=\{(\A,\C)\in \c^2 \; : \; -n<\re(\A)<-n+2, \re(\A)+\re(\C)>0\}.
$$
Note
$$\widetilde D_1 \cup D_2 \cup D_3 \cup D_4 \cup \dots =
\{(\A,\C) \in \c^2 \; : \; \re(\A)+\re(\C)>0, \re(\C)-\re(\A)>0\},
$$
thus the function $f(\A,\C)$ is analytic in the above domain. Since $f(\A,\C)=f(\C,\A)$
and this function is analytic in
$$
\{(\A,\C) \in \c^2 \; : \; \re(\A)>0, \re(\C)>0\}
$$
we conclude that $f(\A,\C)$ can be analytically continued to a function holomorphic in
$\Omega$.
\end{proof}
We recall the following well-known result about Laplace transforms.
Let $\eta$ be a positive measure on $\r$ such that the Laplace transform
\begin{equation}\label{eqn_F}
F(\lambda):=\int_{\r} e^{\lambda x} \eta(\d x)
\end{equation}
exists (the integral converges) for $l<\lambda<r$. Then $F$ is holomorphic in the strip $l<\re(\lambda)<r$. There is also a converse result: Assume that $F$ can be analytically continued to a function holomorphic in a wider strip $L<\re(\lambda)<R$ for some $L$ and $R$ such that $[l,r] \subset [L,R]$. Then the integral in \eqref{eqn_F} converges to $F(\lambda)$ for $L<\lambda<R$.
This result follows from corresponding results for characteristic functions in \cite{lukacs1952}.
Using the above-mentioned result, Theorem \ref{thm1}, and the identity $C\topp \tau_{\A,\C}=2^{\A+\C-3} f(\A,\C)$, we obtain
\begin{corollary}\label{cor:fini}
Let $\tau>0$.
The integral \eqref{C-formula}
is finite if $\A+\C>0$.
\end{corollary}
(Corollary \ref{cor:fini} can also be deduced from the fact that the integral \eqref{C-formula}
is finite in the upper quadrant $\A,\C>0$ by \eqref{C-X}, and in the strip $0<\A+\C<2$ by Theorem \ref{Prop-LapC}. So it is finite in the convex hull of the union of these two sets.)
\subsubsection{Proof of formula \eqref{C2K}}
The gamma function $\Gamma(z)$ has simple poles at the points $z=-k$, $k=0,1,2,\dots$, with residue $\frac{(-1)^k}{k!}$ at $z=-k$. Thus the residue of the integrand in \eqref{eqn1} at the point $z=\A+2k$ is given by
\begin{align*}
-2 \frac{(-1)^k}{k!} e^{\tau (\A+2k)^2} \frac{\Gamma(\A+k) \Gamma((\C+\A)/2+k) \Gamma((\C-\A)/2-k)}{\Gamma(\A+2k) \Gamma(-\A-2k)}.
\end{align*}
This expression can be simplified to
\begin{align*}
-2 \frac{\Gamma((\C+\A)/2,(\C-\A)/2)}{\A\Gamma(-\A)} e^{\tau(\A+2k)^2}(\A+2k)\frac{(\A,(\C+\A)/2)_k}{k!(1+(\A-\C)/2)_k}.
\end{align*}
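The passage from the raw residue to the simplified form can be spot-checked numerically; the sketch below (ours, using mpmath, with sample values $\A=-0.3$, $\C=2$, $\tau=1$) compares the two displays for $k=0,\dots,3$.

```python
import mpmath as mp

mp.mp.dps = 30
A, C, tau = mp.mpf('-0.3'), mp.mpf('2'), mp.mpf(1)
g = mp.gamma

def raw(k):
    # residue expression before simplification
    return (-2*(-1)**k/mp.factorial(k)*mp.exp(tau*(A+2*k)**2)
            * g(A+k)*g((C+A)/2+k)*g((C-A)/2-k)/(g(A+2*k)*g(-A-2*k)))

def simplified(k):
    # simplified expression; the Pochhammer symbol (a)_k is mp.rf(a, k)
    return (-2*g((C+A)/2)*g((C-A)/2)/(A*g(-A))
            * mp.exp(tau*(A+2*k)**2)*(A+2*k)
            * mp.rf(A, k)*mp.rf((C+A)/2, k)
            / (mp.factorial(k)*mp.rf(1+(A-C)/2, k)))

max_rel_err = max(abs(raw(k)-simplified(k))/abs(raw(k)) for k in range(4))
```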
\arxiv{
One can use here identities
$$
\frac{\Gamma(a+n)}{\Gamma(a)}=(a)_n, \; a\not\in\{\dots,-1,0\},\; n=0,1,\dots
$$
and
$$
\frac{\Gamma(a-n)}{\Gamma(a)}=\frac{(-1)^n}{(1-a)_n}, \; a\not\in\{\dots,-1, 0,1,\dots,n\},\; n=0,1,\dots
$$
}
Similarly, the residue at $z=-\A-2k$ is given by
\begin{align*}
2 \frac{(-1)^k}{k!} e^{\tau(\A+2k)^2} \frac{\Gamma(\A+k) \Gamma((\C+\A)/2+k) \Gamma((\C-\A)/2-k)}{\Gamma(\A+2k) \Gamma(-\A-2k)}.
\end{align*}
This expression can be simplified to
\begin{align*}
2 \frac{\Gamma((\C+\A)/2,(\C-\A)/2)}{\A\Gamma(-\A)} e^{\tau(\A+2k)^2}(\A+2k)\frac{(\A,(\C+\A)/2)_k}{k!(1+(\A-\C)/2)_k}.
\end{align*}
Thus if $-2n<\re(\A)<-2(n-1)$ and $\re(\A)+\re(\C)>0$ we have
\begin{align}\label{eqn4}
f(\A,\C)&=\frac{1}{2\pi \i} \int_{\i \r} e^{\tau z^2} \frac{\Gamma((\A+ z)/2,(\A- z)/2,(\C+z)/2,(\C- z)/2)}
{\Gamma(z)\Gamma(-z)} \d z\\ \nonumber
&\quad+ 4 \frac{\Gamma((\C+\A)/2,(\C-\A)/2)}{\A\Gamma(-\A)}
\sum\limits_{k=0}^{n-1} e^{\tau(\A+2k)^2}(\A+2k)\frac{(\A,(\C+\A)/2)_k}{k!(1+(\A-\C)/2)_k}.
\end{align}
Let us express this identity in a different way. Changing the variable $z=\i u$ in the integral and using the symmetry of the integrand with respect to $u \mapsto -u$, we obtain
\begin{align}\label{eqn4.5}
f(\A,\C)&=\frac{1}{\pi} \int_{0}^{\infty} e^{-\tau u^2} \frac{\Gamma((\A+ \i u)/2,(\A- \i u)/2,(\C+\i u)/2,(\C- \i u)/2)}
{\Gamma(\i u)\Gamma(-\i u)} \d u\\ \nonumber
&\quad + 4 \frac{\Gamma((\C+\A)/2,(\C-\A)/2)}{\A\Gamma(-\A)}
\sum\limits_{k=0}^{n-1} e^{\tau(\A+2k)^2}(\A+2k)\frac{(\A,(\C+\A)/2)_k}{k!(1+(\A-\C)/2)_k}.
\end{align}
Thus for $-2n<\re(\A)<-2(n-1)$ and $\re(\A)+\re(\C)>0$ we have
\begin{align*}\label{eqn5}
C\topp \tau_{\A,\C}=2^{\A+\C-3}f(\A,\C)&=\frac{2^{\A+\C-3}}{\pi} \int_{0}^{\infty} e^{-\tau u^2} \frac{\Gamma((\A+ \i u)/2,(\A- \i u)/2,(\C+\i u)/2,(\C- \i u)/2)}
{\Gamma(\i u)\Gamma(-\i u)} \d u\\ \nonumber
&\quad + 2^{\A+\C-1} \frac{\Gamma((\C+\A)/2,(\C-\A)/2)}{\A\Gamma(-\A)}
\sum\limits_{k=0}^{n-1} e^{\tau(\A+2k)^2}(\A+2k)\frac{(\A,(\C+\A)/2)_k}{k!(1+(\A-\C)/2)_k}\\
&=\frac{2^{\A+\C}}{8\pi} (\CabLa+\DabLa).
\end{align*}
\section{Proof of Theorem \ref{T1.1}}
\label{Sect:Recipr}
Recall the semigroup \eqref{PP} and the (symmetric) Yakubovich heat kernel \eqref{p_t}.
From the formula for the joint density \eqref{joint-density} it is clear that the time inversion $Y_t=X_{\tau-t}$ preserves the form of the density but swaps the roles of the parameters $\A,\C$.
We give the proof for the case $\C>0$ and construct a Markov process $(X_t)$ which yields the process $(Y_t)$ by time inversion. (Otherwise, we would use $\A>0$ in the construction and construct the process $(Y_t)$ directly.)
From the semigroup property of $\widetilde {\pp }_t$, we see that
\begin{equation}\label{H}
H_t(x):= \int_\r p_{\tau-t}(x,y)e^{\C y}\d y, \quad 0\leq t<\tau,
\end{equation}
is a positive space-time harmonic function, so we can use it for Doob's $h$-transform. Identity \eqref{K-Mellin2} shows that
\begin{equation}
\label{eq:H_t}
H_t(x)=
2^{\C-2}\int_0^\infty K_{\i u}(e^x)e^{-u^2(\tau-t)}\left|\Gamma\left(\frac{\C+\i u}{2}\right)\right|^2\mu(\d u)
\end{equation}
with $\mu$ given by \eqref{mu}, but we do not need to use this explicit formula.
With
$H_\tau(x):=e^{\C x}$,
for $0\leq s<t\leq \tau$, we define probability measures
\begin{equation}\label{P_st}
P_{s,t}(x,dy)=
\frac{H_t(y)}{H_s(x)} p_{t-s}(x,y)\d y.
\end{equation}
It is clear that $P_{s,t}(x,\d y)$ are transition probabilities of a Markov process $(X_t)_{0\leq t\leq \tau}$ with state space $\r$.
The second real parameter, $\A$, enters the initial distribution. By Theorem \ref{Thm-L-Const} and Fubini's theorem,
function $H_0(x) e^{\A x}$ is integrable, with the integral
$$\int_\r H_0(x) e^{\A x}\d x=\int_\r\int_\r e^{\A x+\C y} p_{\tau}(x,y)\d x \d y =C\topp \tau_{\A,\C}.$$
We normalize this function and take it as the initial distribution for $X_0$,
\begin{equation}\label{X-ini}
P(X_0\in \d x)=\frac{1}{C\topp \tau_{\A,\C}}H_0(x) e^{\A x}\d x.
\end{equation}
This completes the construction of the Markov process $(X_t)_{0\leq t\leq \tau}$.
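The normalization of \eqref{P_st} is automatic: since $H_t$ is space-time harmonic, each transition kernel integrates to one. A toy finite-state sketch (illustrative only; the substochastic matrix $Q$ stands in for the sub-Markovian semigroup and the vectors $H[t]$ for \eqref{H}) makes this transparent.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 4, 3   # number of states and terminal time of the toy chain

# Substochastic one-step kernel (rows sum to less than 1): the discrete
# stand-in for the sub-Markovian semigroup.
Q = rng.random((n, n))
Q = Q / (1.5 * Q.sum(axis=1, keepdims=True))

# Space-time harmonic function: positive terminal data H[T], then H[t] = Q H[t+1].
H = [None] * (T + 1)
H[T] = rng.random(n) + 0.5
for t in range(T - 1, -1, -1):
    H[t] = Q @ H[t + 1]

# Doob h-transform P_{t,t+1}(x,y) = H_{t+1}(y) Q(x,y) / H_t(x): rows sum to 1
# exactly because H_t(x) = sum_y Q(x,y) H_{t+1}(y).
P = [np.diag(1.0 / H[t]) @ Q @ np.diag(H[t + 1]) for t in range(T)]
row_sums = np.concatenate([p.sum(axis=1) for p in P])
```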
From \eqref{P_st} and \eqref{X-ini}, it is clear that the joint law of $X_0,X_\tau$ is $$\frac{1}{C\topp \tau_{\A,\C}}e^{\A x} p_\tau(x,y)e^{\C y}\d x \d y.$$
More generally, for $0=t_0<t_1<\dots<t_{n-1}<t_n=\tau$, the joint distribution of $(X_{t_0},\dots,X_{t_n})$ is
\begin{equation}\label{joint-law0}
\frac{1}{C\topp \tau_{\A,\C}} e^{\A x_0} \d x_0\prod_{j=1}^n p_{t_j-t_{j-1}}(x_{j-1},x_j) \d x_1\dots \d x_{n-1}e^{\C x_n} \d x_n.
\end{equation}
This is \eqref{joint-density}, except that the parameters $\A,\C$ are at ``incorrect locations''. We swap them by time inversion.
A calculation shows that the finite dimensional distributions of the process $Y_t=X_{\tau-t}$ have density \eqref{joint-density}. This proves Theorem \ref{T1.1}.
\arxiv{
To verify that the finite dimensional distributions for $Y_t=X_{\tau-t}$ are given by density \eqref{joint-density}, fix $d\geq 0$ and
$0=t_0<t_1<\dots<t_{d+1}=\tau$. Let $\bar t_j=\tau-t_{d+1-j}$ and $\bar x_j=x_{d+1-j}$, $0\leq j\leq d+1$. We have
$0=\bar t_0<\bar t_1<\dots<\bar t_d<\bar t_{d+1}=\tau$
and the joint density of $(Y_{t_0},\dots,Y_{t_{d+1}})$ at $\vv x=(x_0,\dots,x_{d+1})$ is the joint density of
$(X_{\bar t_0},\dots,X_{\bar t_{d+1}})$
at $\bar{\vv x}=(\bar x_j)$. We apply \eqref{joint-law0} with $\{\bar t_j\}$ and $\{\bar x_j\}$. Using symmetry $p_t(x,y)=p_t(y,x)$, we get
\begin{align*}
\frac{1}{C\topp \tau_{\A,\C}} e^{\A \bar x_0} \d \bar x_0 & \prod_{j=1}^{d+1} p_{\bar t_j-\bar t_{j-1}}(\bar x_{j-1},\bar x_j) \d \bar x_1\dots \d \bar x_{d}e^{\C \bar x_{d+1}} \d \bar x_{d+1}
\\
& = \frac{1}{C\topp \tau_{\A,\C}} e^{\A x_{d+1}} \d x_{d+1} \prod_{j=1}^{d+1} p_{t_{d+2-j}- t_{d+1-j}}( x_{d+2-j}, x_{d+1-j}) \d x_1\dots \d x_{d}e^{\C x_{0}} \d x_{0}
\\
& = \frac{1}{C\topp \tau_{\A,\C}} e^{\A x_{d+1}} \d x_{d+1} \prod_{j=1}^{d+1} p_{t_{j}- t_{j-1}}( x_{j}, x_{j-1}) \d x_1\dots \d x_{d}e^{\C x_{0}} \d x_{0}
\\
& = \frac{1}{C\topp \tau_{\A,\C}} e^{\A x_{d+1}} \d x_{d+1} \prod_{j=1}^{d+1} p_{t_{j}- t_{j-1}}( x_{j-1}, x_{j}) \d x_1\dots \d x_{d}e^{\C x_{0}} \d x_{0},
\end{align*}
which is \eqref{joint-density}.
}
We remark that the construction of the process $(X_t)$ is a special case of the standard construction of a {\em reciprocal Markov process} (also known as Schr\"odinger or Bernstein processes \cite{jamison1974reciprocal,jamison1975markov}) corresponding to a sub-Markovian semigroup. A concise and readable overview of this construction and the literature appears in the first two sections of \cite{dawson1990schrodinger}.
\section{Proof of Theorem \ref{T1.3}}\label{Sec:ProofT1.3}
In this proof we rely heavily on formulas from \cite{CorwinKnizel2021}. We will cite some of their formulas in the original parametrization by $u,v$, and then convert them to our parametrization
$u=\C/2$, $v=\A/2$.
The goal is to show that $\psi\topp \tau(\vv s,\vv t)$ matches the left hand side of \eqref{ICAK-A1}, and to rewrite the right hand side of \eqref{ICAK-A1} by an Abel transformation.
The first step is to relate the normalizing constant $\Kab$ from \eqref{Kab2p0} to the normalizing constant $C_{\A,\C}\topp \tau$ from \eqref{C-formula}. In fact, it will be more convenient to compare $\Kab$ directly with
$\KabLa$, which is related to $C_{\A,\C}\topp \tau$ via \eqref{C2K}, but also has the
representation \eqref{K-normalize}.
\begin{proposition} If $\A+\C>0$, then the constants \eqref{Kab2p0}, \eqref{C2K} and \eqref{C-formula} are related as follows
\begin{equation}\label{K2K}
\Kab =\frac{(\A+\C)(\A+\C+2)}{16\pi}\KabLa=\frac{(\A+\C)(\A+\C+2)}{2^{\A+\C+1}}C_{\A,\C}\topp \tau.
\end{equation}
\end{proposition}
\begin{proof} By symmetry, without loss of generality we assume $\C>0$.
Recall that the parameters in
\cite[(1.12)]{CorwinKnizel2021} are $u=\C/2$, $v=\A/2$.
The univariate distribution for the continuous dual Hahn process in \cite[Definition 7.8]{CorwinKnizel2021} is
\begin{align}
\label{their-nu}
\mathfrak p_{s}(\d x)&=\frac{(u+v)(u+v+1)}{
8\pi} \frac{|\Gamma(\frac{s+2v+\i \sqrt{x}}{2},\frac{2u-s+\i \sqrt{x}}{2} )|^2}{\sqrt{x}|\Gamma(\i \sqrt{x})|^2}1_{x>0}\d x+
\sum_{j:\; j+v+s/2<0} p_j(s) \delta_{-4(v+j+s/2)^2}(\d x)
\\
&=
\frac{(u+v)(u+v+1)}{
4\pi} \frac{|\Gamma(\frac{s+\A+\i \sqrt{x}}{2},\frac{\C-s+\i \sqrt{x}}{2} )|^2}{2\sqrt{x}|\Gamma(\i \sqrt{x})|^2}1_{x>0}\d x+
\sum_{j:\; 2j+\A+s<0} p_j(s) \delta_{-(\A+2j+s)^2}(\d x),\nonumber
\end{align}
with discrete masses given by
\begin{align}
\label{their-atoms}
p_j(s)
& =\frac{\Gamma(u-v-s,u+v+2)}{\Gamma(-2v-s)}\cdot \frac{(v+j+s/2)(2v+s,u+v)_j}{(v+s/2)j!(1-u+v+s)_j}\\
& =(u+v)(u+v+1)\frac{\Gamma(u-v-s,u+v)}{\Gamma(-2v-s)}\cdot \frac{(v+j+s/2)(2v+s,u+v)_j}{(v+s/2)j!(1-u+v+s)_j}\nonumber\\
& =(u+v)(u+v+1)\frac{\Gamma(\frac{\C-\A-2s}{2},\frac{\A+\C}{2})}{\Gamma(-\A-s)}\cdot \frac{(\A+2j+s)(\A+s,\frac{\A+\C}{2})_j}{(\A+s)j!(1+\frac{2s+\A-\C}{2})_j}.\nonumber
\end{align}
The normalizing constant \eqref{Kab2p0}, compare \cite[(1.12)]{CorwinKnizel2021}, is
\begin{align}
\Kab& = \int_\r e^{-\tau x} \mathfrak p_0(\d x)\\
& =
\frac{(u+v)(u+v+1)}{4\pi}\int_0^\infty e^{-\tau x} \frac{|\Gamma(\frac{\A+\i \sqrt{x}}{2},\frac{\C+\i \sqrt{x}}{2} )|^2}{2\sqrt{x}|\Gamma(\i \sqrt{x})|^2}1_{x>0}\d x + \sum_{j: 2j+\A<0} e^{\tau(\A+2j)^2} p_j(0)\nonumber
\\
& = \frac{(u+v)(u+v+1)}{4\pi}\int_0^\infty e^{-\tau x} \frac{|\Gamma(\frac{\A+\i \sqrt{x}}{2},\frac{\C+\i \sqrt{x}}{2} )|^2}{2\sqrt{x}|\Gamma(\i \sqrt{x})|^2}1_{x>0}\d x\nonumber\\
& \quad + \frac{(u+v)(u+v+1)}{4\pi} \frac{4\pi\Gamma(\frac{\C-\A}{2},\frac{\A+\C}{2})}{\A\Gamma(-\A)} \sum_{j: 2j+\A<0} e^{\tau(\A+2j)^2} \frac{(\A+2j)(\A,\frac{\A+\C}{2})_j}{ j!(1+\frac{\A-\C}{2})_j}.\nonumber
\end{align}
Substituting $\sqrt{x}$ as a new variable in the integral and invoking formula \eqref{K-normalize}, we see that the first equality in \eqref{K2K} holds. The second equality follows from \eqref{C2K}.
\end{proof}
\subsection{Proof of Theorem \ref{T1.3}}
With $t_d=t_{d+1}=1$ in \eqref{psi},
we get
\begin{align*}
\psi\topp \tau(\vv s,\vv t)&=\frac{1}{\Kab}\int_\r \e\left[\exp\left(-\tau \sum_{k=1}^{d}(t_k-t_{k-1})\mathbb{T}_{s_k}\right)\middle|\mathbb{T}_{0}=x\right]\mathfrak p_{0}(\d x)
\\ &=
\frac{1}{\Kab}\int_\r \e\left[\exp\left(-\tau \sum_{k=1}^{d}(t_k-t_{k-1})\mathbb{T}_{s_k}\right)\middle|\mathbb{T}_{s_{d}}=y\right]\int_\r\mathfrak p_{0,s_d}(x, \d y) \mathfrak p_{0}(\d x).
\end{align*}
We now use the invariance of
measures $\mathfrak p_s(\d x)$ under the semigroup of the continuous dual Hahn process,
$\int_\r\mathfrak p_{0,s_d}(x, \d y) \mathfrak p_{0}(\d x)=\mathfrak p_{s_d}(\d y)$. (When $\A>-2$ and $s_d<2$, this is
\cite[Lemma 7.11]{CorwinKnizel2021}; for the general case, see \cite{Bryc-2021}.)
We get
\begin{equation}\label{psi+}
\psi\topp\tau(\vv s,\vv t)=\frac{1}{\Kab}\int_\r \e\left[\exp\left(-\tau \sum_{k=1}^{d}(t_k-t_{k-1})\mathbb{T}_{s_k}\right)\middle|\mathbb{T}_{s_d}=x\right]\mathfrak p_{s_d}(\d x).
\end{equation}
Since $s_d>-\A$, from \eqref{their-nu} we see that the measure $\mathfrak p_{s_d}(\d x)$ is absolutely continuous. Furthermore,
canceling the common factor
$$\frac{(u+v)(u+v+1)}{
8\pi}= \frac{(\A+\C)(\A+\C+2)}{16\pi}$$
in $\mathfrak p_s(\d x)$ and $\Kab$, for $s+\A>0$ we get
\begin{equation}
\label{p2phi}
\frac{1}{\Kab}\mathfrak p_s(\d x)=\frac{\varphi_s(\sqrt{x})}{2\sqrt{x}}\d x,
\end{equation}
where $\varphi_s$ is \eqref{nu_s}. (Recall \eqref{C2K}.)
For $s<t<\C$, transition probabilities for the continuous dual Hahn process are absolutely continuous with density
\begin{equation}
\label{p2q}
\mathfrak p_{s,t}(x,y) = \frac{1}{2\sqrt{y}} q_{s,t}(\sqrt{x},\sqrt{y}).
\end{equation}
This may be hard to see from \cite[Definition 7.9]{CorwinKnizel2021} because of the additional constraint $0\leq s<t<\min\{2,\C\}$ there.
But we can refer to \cite[Section 3]{Bryc:2009} or \cite{Bryc-2021} for this case.
This shows that \eqref{psi+} can be written as a multivariate integral
\begin{equation*}
\psi\topp\tau (\vv s,\vv t)=
\int_0^\infty \frac{\varphi_{s_d}(\sqrt{x_d})}{2\sqrt{x_d}}\d x_d \int_{\r_+^{d-1}} e^{-\tau \sum_{k=1}^{d}(t_k-t_{k-1})x_k}
\prod_{k=1}^{d-1} \frac{q_{s_{k+1},s_{k}}(\sqrt{x_{k+1}},\sqrt{x_k})}{2\sqrt{x_k}} \d \vv x.
\end{equation*}
The substitution $x_j=u_j^2$ gives
\begin{equation}
\psi\topp\tau(\vv s,\vv t)= \int_0^\infty \e\left[\exp\left(-\sum_{k=1}^{d}(t_k-t_{k-1})Z_{s_k}^2\right)\middle|Z_{s_{d}}=u\right]\varphi_{s_{d}}(u)\d u,
\end{equation}
compare \eqref{ICAK-2018}, applied to $d-1$ instead of $d$. We now use \eqref{ICAK-A1}, again
applied to $d-1\geq 0$ instead of $d$. By Abel transformation
\begin{equation*}
\label{Abel}
\sum\limits_{k=1}^{d} a_{k} (b_{k}-{b_{k-1}})=\sum\limits_{k=1}^{d-1} (a_{k}-a_{k+1}) (b_{k}-b_{0})+a_{d}(b_{d}-b_{0}),
\end{equation*}
applied to the right hand side of \eqref{ICAK-A1}, we get \eqref{psi-dual}.
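The Abel transformation is just summation by parts; a quick numerical spot check (illustrative only) with random sequences:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
a = rng.standard_normal(d + 1)   # entries a[1..d] are used; a[0] is a dummy
b = rng.standard_normal(d + 1)   # entries b[0..d]

# left and right hand sides of the Abel transformation displayed above
lhs = sum(a[k] * (b[k] - b[k - 1]) for k in range(1, d + 1))
rhs = (sum((a[k] - a[k + 1]) * (b[k] - b[0]) for k in range(1, d))
       + a[d] * (b[d] - b[0]))
```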
\section{The pair of dual semigroups and dual representation for the Laplace transforms}
\label{Sect: DMS}
This section uses the Kontorovich-Lebedev transform $\kk$ to explain the dual representations for the Laplace transforms in Theorem \ref{T1.2} by relating the
processes $(X_t)$ and $(Z_s)$. We discuss how the semigroups of these two processes are connected, and then we use the Kontorovich-Lebedev transform to relate their Laplace transforms.
\subsection{Dual semigroups}
Recall that in Section \ref{Sect:Recipr} we used the sub-Markovian semigroup $(\widetilde \pp_t)_{t>0}$ defined by \eqref{PP}
to construct the Markovian family
$\pp_{s,t}=\frac{1}{H_s}\widetilde\pp_{t-s}H_t$, i.e.,
\begin{equation}\label{P-H}
\pp _{s,t}[f]=\frac{1}{H_s}\widetilde {\pp }_{t-s}[H_t f] , \quad 0\leq s<t\leq \tau,
\end{equation}
with $H_t=\kk^{-1}e^{-u^2(\tau-t)}\kk e^{\C x}$.
(The relation between these two objects is $\pp_{s,t}[f](x)=\int_\r f(y)P_{s,t}(x,\d y)$.)
Now we use the Kontorovich-Lebedev transform to define another semigroup, acting on $L_2((0,\infty),\mu(\d u))$ by reversing the order of $\kk^{-1}$ and $\kk$. For $s>0$, let
\begin{equation}\label{tilde Q}
\widetilde {\qq}_s={\mathcal K} e^{sx} {\mathcal K}^{-1},
\end{equation}
where $e^{sx}$ is a multiplication operator on $L_2((0,\infty), \d x)$.
Using the integral identity \eqref{K-Melin}
we find that the operator $\widetilde{\qq}_t$ has kernel
\begin{equation}\label{q_t}
\widetilde q_t(u,v)= \frac{2^{t}}{4\pi\Gamma(t)|\Gamma(\i v)|^2}
|\Gamma((t+\i (u+v))/2,(t+\i (u-v))/2)|^2,
\end{equation}
i.e., $\widetilde \qq_t[f](u)=\int \widetilde q_t (u,v) f(v) \d v$.
Note the difference in notation: the kernel $\widetilde q$ of $\widetilde \qq$ carries a tilde, while $q_{s,t}$ in \eqref{ICAK-q} does not.
It is clear that $(\widetilde \qq_s)_{s>0}$ satisfies the semigroup property
$\widetilde {\qq}_{t+s}=\widetilde {\qq}_t \widetilde{\qq}_s$
on $L_2(\d \mu)$, and the kernel is positive. However, there is a major problem: $\widetilde {\qq}_t 1 = +\infty$ for all $t>0$. In other words, the measures $\widetilde q_t(u,v) \d v$ are not probability measures, so we will need to do some further work to turn $\widetilde {\qq}_t$ into a Markov semigroup.
To define the Markov semigroup $\qq_{s,t}$, we proceed as in \eqref{P-H}. For $s<\C$ and $u>0$ we introduce the functions
\begin{equation}\label{h_s}
h_s(u)= \mathcal{K} [e^{(\C-s) x}](u) = 2^{\C-s-2} |\Gamma(\tfrac{\C-s+\i u}{2})|^2,
\end{equation}
where we used identity \eqref{K-Mellin2}.
For $s<t<\C$ we introduce operators
\begin{equation}\label{Qst0}
{\qq}_{s,t}=\frac{1}{h_{s}} \widetilde {\qq}_{t-s} h_{t}.
\end{equation}
Informally, one can think of ${\qq}_{s,t}$ as Doob's $h$-transform of $\widetilde {\qq}_t$.
Now it is clear that operators ${\qq}_{s,t}$ have an integral kernel
\begin{align} \label{QH2}
q_{s,t}(u,v)&=\frac{h_{t}(v)}{h_{s}(u)} \widetilde q_{t-s}(u,v)\\ \nonumber
&=\frac{|\Gamma((\C-t+\i v)/2,(t-s+\i (u+v))/2,(t-s+\i (u-v))/2)|^2}{4\pi\Gamma(t-s)|\Gamma((\C-s+\i u)/2,\i v)|^2} ,
\end{align}
matching \eqref{ICAK-q}.
The following proposition, which says that $q_{s,t}(u,v)$ is the kernel of a Markov semigroup, is known; we include its proof in Appendix \ref{Sect:SemiQ} for completeness.
\begin{proposition}\label{Prop-Semi}
${\qq}_{s,t} 1=1$, and for $t_1<t_2<t_3<\C$ we have
$$
{\qq}_{t_1,t_2} {\qq}_{t_2,t_3} = {\qq}_{t_1,t_3}.
$$
\end{proposition}
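In particular, Proposition \ref{Prop-Semi} asserts that $\int_0^\infty q_{s,t}(u,v)\,\d v=1$. This can be spot-checked numerically from \eqref{QH2}, using $|\Gamma(\i v)|^{-2}=v\sinh(\pi v)/\pi$; the parameter values below are ours, chosen so that $s<t<\C$.

```python
import mpmath as mp

mp.mp.dps = 25
C, s, t, u = mp.mpf('1.5'), mp.mpf('0.2'), mp.mpf('0.8'), mp.mpf('0.7')

def ag2(z):
    # |Gamma(z)|^2 for complex z
    return abs(mp.gamma(z))**2

def q(v):
    # transition kernel q_{s,t}(u,v) from (QH2); the factor 1/|Gamma(i v)|^2
    # is written as v*sinh(pi*v)/pi, which is regular (and vanishes) at v = 0
    num = ag2((C-t+1j*v)/2) * ag2((t-s+1j*(u+v))/2) * ag2((t-s+1j*(u-v))/2)
    return num * v*mp.sinh(mp.pi*v)/mp.pi / (4*mp.pi*mp.gamma(t-s)*ag2((C-s+1j*u)/2))

total = mp.quad(q, [0, 2, 10, 40])   # the integrand is negligible beyond v = 40
```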
Thus the operators $\qq_{s,t}$ are well-defined Markov transition operators for all real $-\infty<s<t<\C$.
To define the ``initial distributions'' of the Markov process $(Z_s)$, for $-\A<s<\C$, we let
\begin{equation}
\label{gsu}
g_s(u) = \kk[ e^{(s+\A) x}](u) =2^{\A+s-2} |\Gamma(\tfrac{\A+s+\i u}{2})|^2
\end{equation}
(recall \eqref{K-Mellin2}) and we introduce the family of $\sigma$-finite positive measures
\begin{equation}
\label{Z-ini}
\widetilde\nu_s(\d u)=\frac{1}{C\topp \tau_{\A,\C}}h_{s}(u)g_{s}(u) \mu(\d u)
=\frac{8\pi}{2^{\A+\C}C\topp \tau_{\A,\C}}
\frac{|\Gamma(\tfrac{\A+s+\i u}{2},\tfrac{\C-s+\i u}{2})|^2}{|\Gamma(\i u)|^2}\d u = \varphi_s(u)\d u.
\end{equation}
The normalization of the infinite measure $\widetilde\nu_s(\d u)$ is chosen to match \eqref{X-ini},
explaining the unusual normalization in
\eqref{nu_s}.
It is known that the measures $\widetilde \nu_s$ are $\qq_{s,t}$-invariant; compare the equivalent statement in \cite[Lemma 7.11]{CorwinKnizel2021}.
\begin{lemma}\label{Rem:Yizao} For $-\A<s<t<\C$
$$
\widetilde \nu_s\qq_{s,t}=\widetilde \nu_t.
$$
Equivalently,
$$\int_0^\infty q_{s,t}(u,v)g_s(u)h_s(u)\mu(\d u) \d v=g_t(v)h_t(v)\mu(\d v).$$
\end{lemma}
\begin{proof}
We give a proof for completeness.
Using the symmetry of \eqref{q_t} with respect to $u,v$ and the explicit expression \eqref{mu}, the density on the left hand side is
\begin{align*}
\int_0^\infty h_s(u)q_{s,t}(u,v)g_s(u)\mu(\d u)& =
\frac{h_t(v)}{|\Gamma(\i v)|^2} \int_0^\infty |\Gamma(\i v)|^2 \widetilde q_{t-s}(u,v)g_s(u)\mu(\d u)
\\
&=\frac{2}{\pi} \frac{h_t(v)g_t(v)} {|\Gamma(\i v)|^2} \int_0^\infty \frac{1}{g_t(v)}\frac{|\Gamma(\i v)|^2\widetilde q_{t-s}(u,v)}{|\Gamma(\i u)|^2}g_s(u)\d u
\\
&=\frac{2}{\pi} \frac{h_t(v)g_t(v)}{|\Gamma(\i v)|^2} \int_0^\infty \frac{1}{g_t(v)}\widetilde q_{t-s}(v,u) g_s(u) \d u.
\end{align*}
Formal calculations indicate that the last integral should be one, as it is equal to
$$\frac{1}{g_t}\left(\kk e^{(t-s)x}\kk^{-1}\right)\kk [e^{(s+\A)x}].$$
To avoid justification of the associative law in this setting, we just write down the integral explicitly
$$
\int_0^\infty \frac{|\Gamma(\frac{\A+s+\i u}{2})|^2|\Gamma((t-s+\i (u+v))/2,(t-s+\i (u-v))/2)|^2}{4\pi \Gamma(t-s)|\Gamma(\frac{\A+t+\i v}{2},\i u)|^2} \d u,
$$
and note that its value is indeed 1, as this is the integral of the kernel \eqref{ICAK-q} (in a different parametrization).
\end{proof}
\begin{summary} \label{Sec:Z}
To summarize, in this section we re-introduced the Markov process $(Z_s)_{-\infty<s<\C}$ with state space $(0,\infty)$. For $s<t$, the transition probabilities $q_{s,t}(u,v)\d v$ depend on the parameter $\C$ and are given
by \eqref{ICAK-q}. For $-\A<s_0<\C$, we use the (infinite) measure $\widetilde\nu_{s_0}=\varphi_{s_0}(u)\d u$ as the ``initial law'' of $Z_{s_0}$, and then the ``univariate law'' of $Z_s$ for $s\in[s_0,\C)$ is $\widetilde\nu_s$.
\end{summary}
\subsection{Dual representations for the Laplace transforms}\label{Sect:DualReps}
As we mentioned in the introduction, the goal of dual representations is to swap the arguments of the multipoint Laplace transform and the time arguments of the processes. Our first duality formula is conditional, so it does not rely on the choice of the initial distributions for the processes $(X_t)$ and $(Z_s)$. It serves as a lemma for the second duality formula, but it is interesting in its own right.
\begin{theorem}\label{T1} Let $(X_t)_{t\in[0,\tau]}$ be the Markov process with finite dimensional distributions \eqref{joint-law0} and let $(Z_s)_{s\in(-\infty, \C)}$ be the Markov process described in \ref{Sec:Z}.
Let $-\infty<s_0<s_1<\dots<s_{n}<\C$ and $0=t_0<t_1<t_2<\dots<t_{n}<t_{n+1}=\tau$.
Define for $x\in\r$
\begin{equation}\label{F-W}
F(x)= \e \Big[e^{-\sum\limits_{k=0}^n s_{k}(X_{t_{k+1}}-X_{t_k})}\Big| X_0=x \Big]
\end{equation}
and for $u>0$ let
\begin{equation}
\label{Gu}
G_{s_0}(u)=\e\left[
e^{-\sum_{k=1}^n (t_{k+1}-t_k)Z_{s_k}^2} \middle |{Z_{s_0}=u} \right].
\end{equation}
Then $e^{-t_1u^2}h_{s_0}(u)G_{s_0}(u)$ is in $L_1(\d \mu)$ and
\begin{equation}\label{Alexeys-F-G}
e^{-s_0 x} H_{0}(x) F(x)=\kk^{-1}\left[ e^{-t_1u^2}h_{s_0}(u)G_{s_0}(u)\right].
\end{equation}
\end{theorem}
\begin{proof}Recall \eqref{PP}. We use a backward recursion to define an auxiliary family of functions $F_1,\dots,F_{n+1}$ in $L_1(K_0)$.
Let
$$F_{n+1}(x)=e^{(\C-s_n) x}.$$
Clearly, $F_{n+1}\in L_1(K_0)$,
as $s_n<\C$.
For $m=1,\dots,n$,
define
$$
F_{m}(x)=e^{(s_m-s_{m-1})x}\kk^{-1}e^{-u^2 (t_{m+1}-t_m)}\kk [F_{m+1}]$$
with $F_{m+1}\in L_1(K_0)$.
Then by Lemma \ref{L0}\eqref{L0i}, $\kk(F_{m+1})$ is bounded, and $e^{-u^2 (t_{m+1}-t_m)}\in L_1(\d \mu)$.
So $F_m\in L_1(K_0)$ by Lemma \ref{L0}\eqref{L0-ii}, and the backward recursion continues until we reach $F_1$.
Since
\begin{equation}
\label{mini-step}
\kk^{-1}e^{-u^2 (t_{m+1}-t_m)}\kk=H_{t_m}(x)\pp_{t_m,t_{m+1}}\frac{1}{H_{t_{m+1}}}
\end{equation}
and $H_{t_{n+1}}(x)=H_\tau(x)=e^{\C x}$ (recall that this is how we extended \eqref{H} to match \eqref{P_st} at $t=\tau$),
we observe that
$$F_m(x)= e^{(s_m-s_{m-1}) x}H_{t_m}(x) \e \Big[e^{\sum\limits_{k=m+1}^n (s_{k}-s_{k-1})X_{t_k}} e^{-s_n X_\tau}\Big| X_{t_m}=x \Big],$$
and in particular
$$F_1(x)=e^{(s_1-s_0)x}H_{t_1}(x)\e \Big[e^{\sum\limits_{k=2}^n (s_{k}-s_{k-1})X_{t_k}} e^{-s_n X_\tau}\Big| X_{t_1}=x \Big].$$
Next we rewrite \eqref{F-W} as
\begin{align*}
F(x)&=e^{s_0x} \e_{x} \Big[e^{\sum\limits_{k=1}^n (s_{k}-s_{k-1})X_{t_k}} e^{-s_n X_\tau}\Big]
=e^{s_0x} \pp_{t_0,t_1} \left[ e^{(s_1-s_0)x}\e \Big[e^{\sum\limits_{k=2}^n (s_{k}-s_{k-1})X_{t_k}} e^{-s_n X_\tau}\Big| X_{t_{1}}=x \Big]\right]
\\&=e^{s_0x} \frac{1}{H_0(x)}\widetilde \pp_{t_1}H_{t_1}(x) e^{(s_1-s_0)x}\e \Big[e^{\sum\limits_{k=2}^n (s_{k}-s_{k-1})X_{t_k}} e^{-s_n X_\tau}\Big| X_{t_{1}}=x \Big]
\\
&=e^{s_0x} \frac{1}{H_0(x)}\widetilde \pp_{t_1}F_1.
\end{align*}
So using \eqref{mini-step} again, we get
\begin{equation}
\label{F1F2} e^{-s_0x}H_0(x)F(x)=\kk^{-1} e^{-t_1u^2}\kk F_1.
\end{equation}
We now use the associative laws from Lemma \ref{L0}\eqref{L0-iv}-\eqref{L0-v}
to rewrite the expression on the right hand side, unraveling the backward recursion, starting with $F_{n+1}=e^{(\C-s_n)x}$. We get
\begin{align*}\kk F_1&=
\kk \prod_{m=1}^n\left(e^{(s_m-s_{m-1})x}\kk^{-1}e^{-u^2 (t_{m+1}-t_m)}\kk \right) e^{(\C-s_n) x}
\\&=
\kk e^{(s_1-s_0)x}\kk^{-1} e^{-u^2(t_2-t_1)}
\kk e^{(s_2-s_1)x}\kk^{-1} e^{-u^2(t_3-t_2)}
\cdots
\kk e^{ (s_n-s_{n-1})x} \kk^{-1}e^{-u^2(t_{n+1}-t_n)}\kk e^{(\C-s_n) x}
\\
&= \widetilde \qq_{s_1-s_0} e^{-u^2(t_2-t_1)} \widetilde \qq_{s_2-s_1}e^{-u^2(t_3-t_2)}
\cdots
\widetilde\qq_{s_n-s_{n-1}}e^{-u^2(t_{n+1}-t_n)}h_{s_n}(u) .
\end{align*}
Using \eqref{h_s}, and $\widetilde \qq_{s_k-s_{k-1}}=h_{s_{k-1}}\qq_{s_{k-1},s_k}\frac{1}{h_{s_k}}$ we get
\begin{align*}
\kk F_1&=h_{s_0}(u) \qq_{s_0,s_1}e^{-u^2(t_2-t_1)}\qq_{s_1,s_2}e^{-u^2(t_3-t_2)}\cdots
\qq_{s_{n-1},s_n}e^{-u^2(t_{n+1}-t_n)}
\\
&= h_{s_0}(u) \e\left[
e^{-\sum_{k=1}^n (t_{k+1}-t_k)Z_{s_k}^2} \middle |{Z_{s_0}=u} \right]=h_{s_0}(u) G_{s_0}(u).
\end{align*}
This ends the proof by \eqref{F1F2}.
\end{proof}
Recall that the semigroup $\qq_{s,t}$ is well defined for all $-\A<s<t<\C$. This interval is non-empty when $\A+\C>0$, but it can contain negative numbers.
\begin{theorem}\label{T2} Fix $\A+\C>0$. Let $-\A<s_0<s_1<\dots<s_{n}<\C$ and $0=t_0<t_1<t_2<\dots<t_{n}<t_{n+1}=\tau$. Let $(X_t)_{t\in[0,\tau]}$ be the Markov process with finite dimensional distributions \eqref{joint-law0}
and let $(Z_s)_{s\in(-\A, \C)}$ be the Markov process described in \ref{Sec:Z}.
Then
\begin{equation}\label{For-proof}
\e\Big[ e^{\sum\limits_{k=0}^n - s_{k} (X_{t_{k+1}}-X_{t_k}) } \Big]
=\int_0^\infty\e\left[e^{-\sum\limits_{k=0}^{n} (t_{k+1}-t_k) Z_{s_k}^2}\middle| Z_{s_0}=u\right] \widetilde \nu_{s_0}(\d u),
\end{equation}
where the $\sigma$-finite measure
$\widetilde\nu_{s_0}(\d u)$
is given by \eqref{Z-ini}.
\end{theorem}
\begin{proof}
We use \eqref{Alexeys-F-G} and then the Parseval identity \eqref{Prsvl+} to compute the left hand side of \eqref{For-proof}. Recall notation \eqref{F-W} and \eqref{X-ini}.
To shorten the formulas, we write $\mathfrak C$ for the normalizing constant (which is $\KabLa$ from \eqref{C2K}) in the next display.
\begin{align}\label{F1}
\e\Big[ e^{\sum\limits_{k=0}^n - s_{k} (X_{t_{k+1}}-X_{t_k}) } \Big]
& \stackrel{\eqref{F-W}}{=}\e \left[F(X_0)\right]=\frac{1}{\mathfrak C}\int_\r H_0(x)e^{\A x} F(x)\d x
=\frac{1}{\mathfrak C}\int_\r e^{(\A+s_0)x} e^{-s_0x}H_0(x)F(x)\d x
\\&\stackrel{\eqref{Alexeys-F-G}}{=} \frac{1}{\mathfrak C}\int_\r e^{(\A+s_0)x} \mathcal{K}^{-1}\left[ e^{-t_1u^2}h_{s_0}(u)\e\big[
e^{-\sum_{k=1}^n (t_{k+1}-t_k)Z_{s_k}^2} \big |{Z_{s_0}=u} \big]\right] \d x\nonumber
\\
& \stackrel{\eqref{Prsvl+}}{=} \frac{1}{\mathfrak C}\int_0^\infty \mathcal{K}\left[e^{(\A+s_0)x}\right] e^{-(t_1-t_0)u^2}h_{s_0}(u)\e\big[
e^{-\sum_{k=1}^n (t_{k+1}-t_k)Z_{s_k}^2} \big |{Z_{s_0}=u} \big] \mu(\d u)\nonumber
\\ & \stackrel{\eqref{gsu}}{=} \frac{1}{\mathfrak C}\int_0^\infty h_{s_0}(u)g_{s_0}(u) \e\left[
e^{-\sum_{k=0}^n (t_{k+1}-t_k)Z_{s_k}^2} \middle |{Z_{s_0}=u} \right] \mu(\d u)\nonumber
\\& \stackrel{\eqref{Z-ini}}{=} \int_0^\infty \e\left[
e^{-\sum_{k=0}^n (t_{k+1}-t_k)Z_{s_k}^2} \middle |{Z_{s_0}=u} \right] \widetilde \nu_{s_0}(\d u).\nonumber
\end{align}
\end{proof}
We remark that by Lemma \ref{Rem:Yizao} we can replace the right hand side of \eqref{For-proof} by
$$
\int_0^\infty\e\left[e^{-\sum\limits_{k=0}^{n} (t_{k+1}-t_k) Z_{s_k}^2}\middle| Z_{s_*}=u\right] \widetilde \nu_{s_*}(\d u)
$$
for an arbitrary $s_*\in(-\A,s_0)$.
\arxiv{
To clarify how we proceed with infinite measures, in the next to last line we write
$$h_{s_0}(u)g_{s_0}(u)=\int_0^\infty q_{0,s_0}(v,u)g_0(v)h_0(v)\mu(\d v),$$
and apply Fubini's theorem to the non-negative integrand. We get
\begin{align*}
\int_0^\infty h_{s_0}(u)g_{s_0}(u) G_{s_0}(u) \mu(\d u) &=
\int_0^\infty \int_0^\infty q_{0,s_0}(v,u)g_0(v)h_0(v)\mu(\d v) G_{s_0}(u) \mu(\d u)
\\& =\int_0^\infty g_0(v)h_0(v) \int_0^\infty q_{0,s_0}(v,u) G_{s_0}(u) \mu(\d u)\,\mu(\d v)
\\&=\int_0^\infty g_0(v)h_0(v) G_{0}(v) \mu(\d v)
=\mathfrak C \int_0^\infty G_{0}(v) \widetilde \nu_0(\d v).
\end{align*}
}
\section{Relation to Hartman-Watson density }\label{Sect: Rel-HW}
The (unnormalized) Hartman-Watson density function $t\mapsto \theta(r,t)$ is defined as
\begin{equation}\label{formula_theta}
\theta(r,t)=\frac{re^{\pi^2/(2t)}}{\sqrt{2\pi^3 t}} \int_0^{\infty} e^{-y^2/(2t)-r \cosh(y)}
\sinh(y) \sin(\pi y/t) \d y, \;\;\; t>0, \; r>0.
\end{equation}
In some papers this function is denoted by $\theta_r(t)$, but here we will follow the notation from \cite{MatsumotoYor2005I,MatsumotoYor2005II}.
An alternative integral representation was obtained by Yakubovich in \cite[formula (3.2), slightly corrected]{Yakubovich2013}:
\begin{equation}\label{Yak-theta}
\theta(r,t)=\frac{1}{\pi} \int_0^{\infty} e^{- t u^2/2} K_{\i u}(r) \frac{\d u}{|\Gamma(\i u)|^2}.
\end{equation}
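As a numerical sanity check (ours, not taken from the cited works), the two representations \eqref{formula_theta} and \eqref{Yak-theta} can be compared directly. The sketch below uses the classical identities $|\Gamma(\i u)|^{-2}=u\sinh(\pi u)/\pi$ and $K_{\i u}(r)=\int_0^\infty e^{-r\cosh s}\cos(us)\d s$, together with a plain trapezoidal rule; all function names are ours.

```python
import math

def trap(f, a, b, n):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return s * h

def K_iu(u, r):
    # K_{iu}(r) = int_0^inf exp(-r cosh s) cos(u s) ds  (real for r > 0)
    return trap(lambda s: math.exp(-r * math.cosh(s)) * math.cos(u * s),
                0.0, 9.0, 800)

def theta_def(r, t):
    # the defining integral (formula_theta)
    f = lambda y: (math.exp(-y * y / (2 * t) - r * math.cosh(y))
                   * math.sinh(y) * math.sin(math.pi * y / t))
    pref = r * math.exp(math.pi ** 2 / (2 * t)) / math.sqrt(2 * math.pi ** 3 * t)
    return pref * trap(f, 0.0, 12.0, 2400)

def theta_yak(r, t):
    # Yakubovich's representation; 1/|Gamma(iu)|^2 = u sinh(pi u) / pi
    f = lambda u: (math.exp(-t * u * u / 2) * K_iu(u, r)
                   * u * math.sinh(math.pi * u) / math.pi)
    return trap(f, 0.0, 12.0, 900) / math.pi
```

For moderate $t$ the two quadratures agree to several digits; for small $t$ the defining integral suffers catastrophic cancellation (the prefactor $e^{\pi^2/(2t)}$ blows up), which is one practical reason to prefer \eqref{Yak-theta}.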
The scaled version of this density function was introduced by Hartman and Watson in \cite{hartman1974normal} and the function
$\theta(r,t)$ was studied extensively by Marc Yor and many other researchers (see \cite{MatsumotoYor2005I,MatsumotoYor2005II} and the references therein). The interest in the function $\theta(r,t)$ can be explained by its close relation to the exponential functionals of Brownian motion and to the pricing of Asian options in the Black--Scholes model. Let $B=\{B_t\}_{t\ge 0}$ be a one-dimensional Brownian motion starting from zero and define $B^{(\mu)}=\{B^{(\mu)}_t=B_t+\mu t\}_{t\ge 0}$ to be the Brownian motion with drift $\mu \in \r$. Define
$$
A^{(\mu)}_t=\int_0^t e^{2 B_s^{(\mu)}} \d s, \; \; \; t \ge 0.
$$
Then for $t>0$, $y>0$ and $x\in \r$ it holds that
\begin{equation}
\p(A^{(\mu)}_t \in \d y, B^{(\mu)}_t \in \d x) = e^{\mu x - \mu^2 t/2}
\exp\big( -(1+e^{2x})/(2y)\big) \theta(e^x/y,t) \frac{\d y \d x}{y},
\end{equation}
see \cite[Theorem 4.1]{MatsumotoYor2005I}.
The function $\theta(r,t)$ has the following Laplace transforms (see Proposition 4.2 and Theorem A.1 in \cite{MatsumotoYor2005I})
\begin{align}
&\int_0^{\infty} e^{-\la^2 t/2} \theta(r,t) \d t=I_{\la}(r), \;\;\; \la>0, \; r>0, \label{MY1}\\
&\int_0^{\infty} e^{-x r} \theta(r,t) \frac{\d r}{r}=\frac{1}{\sqrt{2\pi t}}
\exp\Big(-{\textrm{Argcosh}}(x)^2/(2t) \Big), \;\;\; x\ge 1, \; t>0. \label{MY2}
\end{align}
The Yakubovich heat kernel $p_t(x,y)$ can be expressed in terms of $\theta(r,t)$ as follows
\begin{equation}\label{Yak-Hart-Wat}
p_t(x,y)= \int_0^{\infty} \exp\Big(-r \cosh(x-y) - \frac{e^{x+y}}{2r} \Big) \theta(r,2t)
\frac{\d r}{r}.
\end{equation}
The above identity is a simple re-write of
\cite[formula (50)]{SousaYakubovich2018}, and makes positivity of $p_t(x,y)$ obvious.
Yakubovich's formula \eqref{Yak-Hart-Wat} gives an unexpected closed-form formula for the Laplace transform
$$L_\la(\A,\C)= \int_0^\infty e^{-\la^2 \tau} \KabLa \d \tau$$
of
the normalizing constant \eqref{K-normalize}.
\begin{theorem}\label{Prop-LapC}
If $0<\A+\C<2$
then for $\la>\max\{-\C,-\A\}$ we have
\begin{equation}
\label{LapTC}
L_\la(\A,\C)= \frac{\pi^2 }{\sin (\pi\frac{\A+\C}{2})} \frac{\Gamma(\frac{\A+\la}{2}, \frac{\C+\la}{2})}{\Gamma(\frac{\la+2-\A}{2},\frac{\la+2-\C}{2})}.
\end{equation}
\end{theorem}
\begin{proof}
Putting \eqref{Yak-Hart-Wat} into \eqref{C-formula}, and noting that the integrand is positive, we write
\begin{equation}
C_{\A,\C}^\tau = \int_0^{\infty} {\mathcal I}(r) \theta(r,2\tau)
\frac{\d r}{r},
\end{equation}
where
\begin{align}\label{Ir-def}
{\mathcal I}(r)&:=\int_{\r} \int_{\r} e^{\C x+\A y} \exp\left(-\tfrac12(e^{r+x-y}+e^{r+y-x}+e^{x+y-r})\right) \d x \d y.
\end{align}
We will need the following two identities
\cite[3.471.9 and 6.596.3]{gradshteyn2007table} or \cite[6.3(17) and 6.8(32)]{erdelyi1954fg}:
\begin{equation}\label{id1}
\int_0^{\infty} x^{\nu-1} e^{-\beta/x-\gamma x} \d x=2 (\beta/\gamma)^{\nu/2} K_{\nu}(2\sqrt{\beta \gamma}), \;\;\; \re(\beta)>0, \; \re(\gamma)>0,
\end{equation}
and
\begin{equation}\label{id2}
\int_{0}^{\infty} K_{\nu}(\sqrt{x^2+z^2}) \frac{x^{2\mu+1}}{(x^2+z^2)^{\nu/2}} \d x
=\frac{2^{\mu} \Gamma(\mu+1)}{z^{\nu-\mu-1}} K_{\nu-\mu-1}(z), \;\;\; \re(\mu)>-1.
\end{equation}
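Both table identities can be spot-checked numerically before being used. The sketch below (ours; parameter values are arbitrary admissible choices) evaluates each side with a trapezoidal rule, computing $K_\nu(x)=\int_0^\infty e^{-x\cosh t}\cosh(\nu t)\d t$ directly from its integral representation.

```python
import math

def trap(f, a, b, n):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return s * h

def K(nu, x):
    # K_nu(x) = int_0^inf exp(-x cosh t) cosh(nu t) dt
    return trap(lambda t: math.exp(-x * math.cosh(t)) * math.cosh(nu * t),
                0.0, 10.0, 2000)

def lhs_id1(nu, beta, gamma):
    # substitute x = e^s so the integrand decays double-exponentially at both ends
    g = lambda s: math.exp(nu * s - beta * math.exp(-s) - gamma * math.exp(s))
    return trap(g, -12.0, 12.0, 3000)

def rhs_id1(nu, beta, gamma):
    return 2 * (beta / gamma) ** (nu / 2) * K(nu, 2 * math.sqrt(beta * gamma))

def lhs_id2(nu, mu, z):
    g = lambda x: (K(nu, math.sqrt(x * x + z * z)) * x ** (2 * mu + 1)
                   / (x * x + z * z) ** (nu / 2))
    return trap(g, 0.0, 40.0, 1500)

def rhs_id2(nu, mu, z):
    return 2 ** mu * math.gamma(mu + 1) * K(nu - mu - 1, z) / z ** (nu - mu - 1)
```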
We do a change of variables $e^x=w$, $e^y=z$, $e^r=R$ and obtain
\begin{align*}
{\mathcal I}(r)&=\int_0^{\infty} \int_0^{\infty} w^{\C-1} z^{\A-1}
\exp\left(-\tfrac12(Rw/z+Rz/w+zw/R)\right) \d z \d w.
\end{align*}
The inner integral is computed via \eqref{id1} as follows
\begin{align*}
\int_0^{\infty} z^{\A-1}
\exp\left(-\tfrac12(w/R+R/w)z-\tfrac12 Rw/z\right) \d z=
2 R^\A w^\A \frac{K_{\A}(\sqrt{R^2+w^2})}{(R^2+w^2)^{\A/2}}.
\end{align*}
The above formula holds for all real $\A$.
Next we compute the outer integral using \eqref{id2}
\begin{align}\label{id3}
{\mathcal I}(r)&=2 R^\A \int_0^{\infty} w^{\A+\C-1}
\frac{K_{\A}(\sqrt{R^2+w^2})}{(R^2+w^2)^{\A/2}} \d w
=2 R^\A \frac{2^{\frac{\A+\C}{2}-1} \Gamma\Big(\frac{\A+\C}{2}\Big)}{R^{(\A-\C)/2}} K_{(\A-\C)/2}(R)
\\&=2^{\frac{\A+\C}{2}} \Gamma\Big(\frac{\A+\C}{2}\Big) e^{r(\A+\C)/2} K_{(\A-\C)/2}(e^r).\nonumber
\end{align}
From \eqref{id3} and \eqref{MY1} we get
\begin{align*}
\int_0^\infty e^{-\la^2 \tau} C_{\A,\C}^ \tau \d \tau &=2^{\frac{\A+\C}{2}} \Gamma\Big(\frac{\A+\C}{2}\Big) \int_\r\int_0^\infty e^{-\la^2 t} e^{r(\A+\C)/2} \theta(e^r,2t) \d t K_{(\A-\C)/2}(e^r) \d r
\\&=2^{\frac{\A+\C}{2}-1} \Gamma\Big(\frac{\A+\C}{2}\Big) \int_\r e^{r(\A+\C)/2} I_\la(e^r) K_{(\A-\C)/2}(e^r) \d r \\
& =2^{\frac{\A+\C}{2}-1} \Gamma\Big(\frac{\A+\C}{2}\Big) \int_0^\infty x^{\frac{\A+\C}{2}-1} I_\la(x) K_{(\A-\C)/2}(x) \d x,
\end{align*}
where in the last step we substituted $x=e^r$.
Formula \cite[6.8(43)]{erdelyi1953higher}
$$
\int_0^\infty x^{s-1} I_\nu(x)K_\mu(x)\d x= \frac{\Gamma(\frac{s+\mu+\nu}{2})B(1-s,\frac{s+\nu-\mu}{2})}{2^{2-s} \Gamma(\frac{\mu+\nu-s}{2}+1)},
$$ which holds if $\re(-\nu\pm\mu)<\re (s) <1$, used with $s=(\A+\C)/2$, $\mu=(\A-\C)/2$, $\nu=\la$
gives
$$
\int_0^\infty e^{-\la^2 \tau} C_{\A,\C}^ \tau \d \tau = \frac{2^{\A+\C}}{8} \frac{\Gamma(\frac{\A+\la}{2},\frac{2-\A-\C}{2},\frac{\C+\la}{2})}{\Gamma(\frac{\la+2-\A}{2},\frac{\la+2-\C}{2})}\Gamma(\frac{\A+\C}{2}).$$
By Euler's reflection formula, this simplifies to
$$
\int_0^\infty e^{-\la^2 \tau} C_{\A,\C}^ \tau \d \tau = \frac{\pi 2^{\A+\C}}{8\sin (\pi\frac{\A+\C}{2})} \frac{\Gamma(\frac{\A+\la}{2}, \frac{\C+\la}{2})}{\Gamma(\frac{\la+2-\A}{2},\frac{\la+2-\C}{2})}.$$
Formula \eqref{C2K} now gives \eqref{LapTC}.
\end{proof}
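The Gamma-function bookkeeping in the last two displays of the proof can be verified numerically with a few lines of code (ours; the sample values of $\A$, $\C$, $\la$ are arbitrary points satisfying the hypotheses of the theorem). The check confirms that substituting $s=\frac{\A+\C}{2}$, $\mu=\frac{\A-\C}{2}$, $\nu=\la$ into the Erd\'elyi formula and applying Euler's reflection formula yields the stated closed form.

```python
import math

def B(a, b):
    # Euler Beta function via Gamma
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def mellin_rhs(s, mu, nu):
    # right-hand side of Erdelyi 6.8(43)
    return (math.gamma((s + mu + nu) / 2) * B(1 - s, (s + nu - mu) / 2)
            / (2 ** (2 - s) * math.gamma((mu + nu - s) / 2 + 1)))

def laplace_C(A, C, lam):
    # intermediate expression: 2^{(A+C)/2-1} Gamma((A+C)/2) * Mellin integral
    return (2 ** ((A + C) / 2 - 1) * math.gamma((A + C) / 2)
            * mellin_rhs((A + C) / 2, (A - C) / 2, lam))

def laplace_C_final(A, C, lam):
    # simplified form after Euler's reflection formula
    g = math.gamma
    return (math.pi * 2 ** (A + C) / (8 * math.sin(math.pi * (A + C) / 2))
            * g((A + lam) / 2) * g((C + lam) / 2)
            / (g((lam + 2 - A) / 2) * g((lam + 2 - C) / 2)))
```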
\arxiv{
\subsection{Using GIG and gamma laws} Some integral identities can be replaced by more probabilistic arguments that rely on special properties of generalized inverse Gaussian (GIG) distributions.
Here we show how to re-derive \eqref{id3} by this technique.
\newcommand{\R}{\r}
Let $\mathrm{GIG}(p,\alpha,\beta)$, $p\in\R$, $\alpha,\beta>0$, be the generalized inverse Gaussian distribution \cite{jorgensen2012statistical} defined by the density
\begin{equation}\label{GIG}
f(y)=\left(\tfrac{\alpha}{\beta}\right)^{p/2}\tfrac{1}{2K_p(\sqrt{\alpha\beta})}\,y^{p-1}\,e^{-\tfrac{\alpha y}{2}-\tfrac{\beta }{2y}}\,I_{(0,\infty)}(y).
\end{equation}
Let $\mathrm G(q,\gamma)$, $q,\gamma>0$, be the gamma distribution defined by the density
\begin{equation}\label{gamm}
g(x)=\tfrac{\gamma^q}{\Gamma(q)}\,x^{q-1}\,e^{-\gamma x}\,I_{(0,\infty)}(x).
\end{equation}
Let $X\sim \mathrm G(q,\tfrac{1}{2c})$ and $Y\sim \mathrm{GIG}(p,c,c)$ be independent.
Consider a function $\psi$ defined by $$\psi(x,y)=\left(\sqrt{xy},\,\sqrt{\tfrac{x}{y}}\right),\quad x,y>0.$$ Note that it is a diffeomorphism from $(0,\infty)^2$ onto itself with the Jacobian of $\psi^{-1}(z,w)=(zw,\tfrac{z}{w})$ of the form
$$
J_{\psi^{-1}}(z,w)=\tfrac{2z}{w}.
$$
Therefore the density of the random vector $$(Z,W)=\left(\sqrt{XY},\,\sqrt{\tfrac{X}{Y}}\right)$$ is
\begin{align*}
f_{Z,W}(z,w)&=J_{\psi^{-1}}(z,w)f_X(zw)f_Y(z/w)\\
&=
\tfrac{2z}{w}\,\tfrac{1}{(2c)^q\Gamma(q)}\,(zw)^{q-1}e^{-\tfrac{zw}{2c}}\,\tfrac{1}{2K_p(c)}\,(z/w)^{p-1}\,e^{-\tfrac{c(z/w)}{2}-\tfrac{c}{2(z/w)}}\\
&=\tfrac{z^{q+p-1}w^{q-p-1}\,\exp\left(-\tfrac{zw}{2c}-\tfrac{cz}{2w}-\tfrac{cw}{2z}\right)}{C(p,q,c)},
\end{align*}
where $C(p,q,c)=(2c)^q\Gamma(q)K_{p}(c)$ is the normalizing constant.
Thus the density of $(U,V)=(\log\,Z,\,\log W)$ is
$$
f_{U,V}(u,v)=\tfrac{e^{(p+q)u+(q-p)v}\,\exp\left(-\tfrac{1}{2}\left(c^{-1}e^{u+v}+c e^{v-u}+ce^{u-v}\right)\right)}{C(p,q,c)},\quad u,v\in\R.
$$
Summing up, $\mathcal I(r)$, as defined in \eqref{Ir-def},
is the normalizing constant of the joint density of $$(U,V)=\tfrac{1}{2}(\log X +\log Y,\,\log X-\log Y)$$
where $X\sim \mathrm G(\tfrac{\A+\C}{2},\,\tfrac{1}{2e^r})$ and $Y\sim\mathrm{GIG}(\tfrac{\A-\C}{2},e^r,e^r)$ are independent random variables.
Thus
$$
\mathcal I(r)=C\left(\tfrac{\A-\C}{2},\tfrac{\A+\C}{2},e^r\right)=(2e^r)^{\tfrac{\A+\C}{2}}\,\Gamma(\tfrac{\A+\C}{2})\,K_{\tfrac{\A-\C}{2}}(e^r)
$$
as given in \eqref{id3}.
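This normalizing-constant identity, i.e.\ \eqref{id3}, is also easy to confirm by brute force (our sketch; the sample parameters are arbitrary). After the substitution $w=e^s$, $z=e^t$ the double integral over $(0,\infty)^2$ becomes a rapidly decaying integral over the plane, which a 2-D trapezoidal rule handles well.

```python
import math

def I_double(A, C, R, L=14.0, n=600):
    # int int w^{C-1} z^{A-1} exp(-(R w/z + R z/w + z w/R)/2) dz dw
    # after w = e^s, z = e^t; 2-D trapezoidal rule on [-L, L]^2
    h = 2 * L / n
    total = 0.0
    for i in range(n + 1):
        s = -L + i * h
        ws = 1.0 if 0 < i < n else 0.5
        for j in range(n + 1):
            t = -L + j * h
            wt = 1.0 if 0 < j < n else 0.5
            e = 0.5 * (R * math.exp(s - t) + R * math.exp(t - s)
                       + math.exp(s + t) / R)
            total += ws * wt * math.exp(C * s + A * t - e)
    return total * h * h

def K(nu, x, n=800):
    # K_nu(x) = int_0^inf exp(-x cosh t) cosh(nu t) dt, trapezoidal rule;
    # the t = 10 endpoint underflows harmlessly to zero
    h = 10.0 / n
    s = 0.5 * math.exp(-x)
    for k in range(1, n + 1):
        t = k * h
        s += math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return s * h
```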
Properties of the GIG distribution can also replace Macdonald's formula in the proof of \eqref{Yak-Hart-Wat}.
}
\arxiv{
\begin{note}
\cite[Theorem 3.1]{craddock2014integral} gives a formula for Yakubovich heat kernel that in our notation simplifies to:
$$
p_t(x,y)=\frac{1}{4\sqrt{\pi}t^{3/2}} \int_{|x-y|}^\infty u e^{-\frac{u^2}{4t}} J_0(e^{x+y+u}+e^{x+y-u}- e^{2x}-e^{2y})\d u.
$$
\end{note}
}
\section*{References}}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{mathrsfs}
\usepackage{graphicx}
\usepackage{epstopdf}
\usepackage{float}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{bm}
\usepackage{bbm}
\usepackage{mathrsfs}
\usepackage{cleveref}
\usepackage{soul}
\usepackage{multirow}
\usepackage{xcolor}
\usepackage{framed}
\usepackage{nomencl}
\makenomenclature
\biboptions{sort&compress}
\journal{Acta Materialia}
\makeatletter
\def\@author#1{\g@addto@macro\elsauthors{\normalsize%
\def\baselinestretch{1}%
\upshape\authorsep#1\unskip\textsuperscript{%
\ifx\@fnmark\@empty\else\unskip\sep\@fnmark\let\sep=,\fi
\ifx\@corref\@empty\else\unskip\sep\@corref\let\sep=,\fi
}%
\def\authorsep{\unskip,\space}%
\global\let\@fnmark\@empty
\global\let\@corref\@empty
\global\let\sep\@empty}%
\@eadauthor={#1}
}
\makeatother
\begin{document}
\begin{frontmatter}
\title{Analysis of the influence of microstructural traps on hydrogen assisted fatigue}
\author{Rebeca Fern\'{a}ndez-Sousa \fnref{Uniovi}}
\author{Covadonga Beteg\'{o}n \fnref{Uniovi}}
\author{Emilio Mart\'{\i}nez-Pa\~neda\corref{cor1}\fnref{IC}}
\ead{[email protected]}
\address[Uniovi]{Department of Construction and Manufacturing Engineering, University of Oviedo, Gij\'{o}n 33203, Spain}
\address[IC]{Department of Civil and Environmental Engineering, Imperial College London, London SW7 2AZ, UK}
\cortext[cor1]{Corresponding author.}
\begin{abstract}
We investigate the influence of microstructural traps on hydrogen diffusion and embrittlement in the presence of cyclic loads. A mechanistic, multi-trap model for hydrogen transport is developed, implemented into a finite element framework, and used to capture the variation of crack tip lattice and trapped hydrogen concentrations as a function of the loading frequency, the trap binding energies and the trap densities. We show that the maximum value attained by the lattice hydrogen concentration during the cyclic analysis exhibits a notable sensitivity to the ratio between the loading frequency and the effective diffusion coefficient. This is observed for both hydrogen pre-charged samples (closed-systems) and samples exposed to a permanent source of hydrogen (open-systems). Experiments are used to determine the critical concentration for embrittlement, by mapping the range of frequencies where the output is the same as testing in inert environments. We then quantitatively investigate and discuss the implications of developing materials with higher trap densities in mitigating embrittlement in the presence of cyclic loads. It is shown that, unlike the static case, increasing the density of "beneficial traps" is a viable strategy in designing alloys resistant to hydrogen assisted fatigue for both closed- and open-systems.
\end{abstract}
\begin{keyword}
Hydrogen embrittlement \sep Hydrogen diffusion \sep Fatigue \sep Microstructural traps \sep Coupled deformation-diffusion modelling
\end{keyword}
\end{frontmatter}
\begin{framed}
\nomenclature{$a$}{crack length}
\nomenclature{$b$}{Burgers vector}
\nomenclature{$C$}{total hydrogen concentration}
\nomenclature{$C_L, \, C_T$}{hydrogen concentration in lattice and trapping sites}
\nomenclature{$C_0$}{initial hydrogen concentration}
\nomenclature{${C_L}_{max,\theta=0^\circ}, \, {C_T}_{max,\theta=0^\circ} , \, {C}_{max,\theta=0^\circ}$}{maximum value of the lattice, trapped and total hydrogen concentrations attained at any material point along the extended crack plane ($r, \theta=0^\circ$) for a specific time instant}
\nomenclature{${C_L}_{max,N}$}{maximum value of the lattice hydrogen concentration attained within each cycle along the extended crack plane ($r, \theta=0^\circ$)}
\nomenclature{$D, D_e$}{lattice and effective diffusion coefficients}
\nomenclature{$D_g$}{mean grain size}
\nomenclature{$d_j$}{diameter of carbide particle $j$}
\nomenclature{$E$}{Young's modulus}
\nomenclature{$f$}{load frequency}
\nomenclature{$\bm{J}$}{hydrogen flux}
\nomenclature{$K_T^{(i)}$}{equilibrium constant for the $i$th type of trapping sites}
\nomenclature{$\Delta K$}{stress intensity factor amplitude}
\nomenclature{$K_{min}, \, K_m, \, K_{max}$}{minimum, mean and maximum stress intensity factor}
\nomenclature{$\ell$}{material gradient length scale}
\nomenclature{$L$}{average distance between the carbide particles}
\nomenclature{$M$}{Taylor's factor}
\nomenclature{$N$}{number of cycles}
\nomenclature{$\mathcal{N}$}{strain hardening exponent}
\nomenclature{$N_A$}{Avogadro's number}
\nomenclature{$N_L$}{number of lattice sites per unit volume}
\nomenclature{$N_T^{(c)}$}{number of carbide trapping sites per unit volume}
\nomenclature{$N_T^{(d)}$}{number of dislocation trapping sites per unit volume}
\nomenclature{$N_T^{(m)}$}{number of martensitic interfaces trapping sites per unit volume}
\nomenclature{$\mathcal{R}$}{universal gas constant}
\nomenclature{$R$}{load ratio}
\nomenclature{$R_p$}{reference size of the plastic zone}
\nomenclature{$r_0$}{initial crack tip blunting radius}
\nomenclature{$\bar{r}$}{Nye's factor}
\nomenclature{$r$,\, $\theta$}{polar coordinates}
\nomenclature{$T$}{absolute temperature}
\nomenclature{$u , \, v$}{horizontal and vertical components of the displacement field}
\nomenclature{$\bar{V}_H$}{partial molar volume of hydrogen}
\nomenclature{$V_M$}{molar volume of the host lattice}
\nomenclature{$W_B^{(i)}$}{binding energy for the $i$th type of trapping sites}
\nomenclature{$\beta$}{number of lattice sites per solvent atom}
\nomenclature{$\varepsilon^p$}{equivalent plastic strain}
\nomenclature{$\eta^p$}{effective plastic strain gradient}
\nomenclature{$\theta_L, \, \theta_T^{(i)}$}{occupancy of lattice and $i$th type of trapping sites}
\nomenclature{$\mu$}{shear modulus}
\nomenclature{$\nu$}{Poisson's ratio}
\nomenclature{$\rho$}{dislocation density}
\nomenclature{$\sigma_H$}{hydrostatic stress}
\nomenclature{$\sigma_f$}{tensile flow stress}
\nomenclature{$\tau$}{shear flow stress}
\printnomenclature
\end{framed}
\section{Introduction}
\label{Sec:Intro}
Hydrogen originating from water vapour, aqueous electrolytes or gaseous environments significantly increases cracking susceptibility and fatigue crack growth rates in metals \cite{Gangloff2003,Gangloff2012}. As a consequence, there is an increasing interest in developing reliable prognosis methodologies based on a mechanistic understanding of this so-called hydrogen embrittlement phenomenon. In this realm, efforts include insightful experimentation \cite{Wang2014,Girardin2015,Harris2018,Nagumo2019} and the development of theoretical and numerical models for hydrogen transport \cite{Sofronis1989,Krom1999,CS2020b}, fracture \cite{Kirchheim2015,Nagao2018,CMAME2018,Tehranchi2019,Shishvan2020,JMPS2020} and fatigue \cite{Moriconi2014,EFM2017}; see Ref. \cite{Djukic2019} for a comprehensive review.\\
Hydrogen atoms can reside at interstitial lattice sites and microstructural trapping sites, such as dislocations, grain boundaries, voids, carbides and interfaces \cite{Hirth1980,Pressouyre1979}. Traps act as hydrogen sinks, slowing diffusion, and are typically characterised by their binding energy $W_B$ and density $N_T$. The energy barrier that must be overcome for the hydrogen to detrap increases with $|W_B|$; hydrogen will be strongly retained in deep traps ($|W_B|>60$ kJ/mol) but can be easily released from shallow traps ($|W_B|<30$ kJ/mol). Quantifying this partitioning of hydrogen atoms between lattice and trapping sites is of utmost importance in predicting diffusion and embrittlement; see, e.g., \cite{Li2004,Pundt2006a,Novak2010,Turnbull2015} and references therein. Moreover, understanding the interaction of multiple trap states with diffusible hydrogen is a key step in imbuing materials with intrinsic resilience \cite{Spencer1998,Yamasaki2006,Bhadeshia2016,Chen2017,Breen2020,Chen2020}. The ambition is to design hydrogen embrittlement-resistant alloys by engineering microstructures with a high density of \emph{beneficial} traps, which will retain the hydrogen and hinder diffusion to the fracture process zone. One strategy involves incorporating finely dispersed nano-scale carbides \cite{Ramjaun2018,Turk2018}. Vanadium carbides have been successfully used to mitigate hydrogen embrittlement in refinery pressure vessels and other \emph{closed-systems}, where hydrogen entry is essentially a one-off process. However, the works by Dadfarnia \textit{et al.} \cite{Dadfarnia2011} and Hosseini \textit{et al.} \cite{Hosseini2017} have shown that increasing the density of traps to sequester hydrogen is not a viable strategy to mitigate embrittlement in \emph{open-systems}, where there is a permanent source of hydrogen.
Increasing the density of one type of trap decreases the effective diffusion of hydrogen and delays the time required to achieve the steady state but has no effect on the content of hydrogen in the lattice or any other type of trap once the steady state is reached. Moreover, even for high trap densities, the steady state is attained in days, with the lattice hydrogen reaching 98\% of its steady state magnitude in minutes - a very short time frame relative to the lifetime of an engineering component \cite{Dadfarnia2011}. But notably, the analyses of Dadfarnia \textit{et al.} \cite{Dadfarnia2011} and Hosseini \textit{et al.} \cite{Hosseini2017} are limited to monotonic/static loading conditions. In fatigue, each loading cycle is significantly faster than the time required to achieve steady state and experiments show that embrittlement is precluded if the ratio between the loading frequency and the (effective) diffusion coefficient is sufficiently low \cite{Murakami2010a,Fassina2013,Tazoe2017,Alvaro2019,Peral2019}.\\
In this work, we combine numerical analysis and experimental data to gain insight into the influence of microstructural traps in hydrogen assisted fatigue. A multi-trap model based on Oriani's equilibrium \cite{Oriani1974} and Taylor's dislocation model \cite{Taylor1938} is developed and used to analyse conditions relevant to both open and closed-systems. We shed light on the competition between multiple types of traps in governing hydrogen diffusion ahead of fatigue cracks and reveal that increasing the density of a specific trap type is a viable strategy for extending the range of loading frequencies where embrittlement is not observed. In addition, the influence of crack tip plastic strain gradients is incorporated into the modelling of multi-trap systems and hydrogen assisted fatigue for the first time.
\section{Theory}
\label{Sec:Theory}
\subsection{Multi-trap model for hydrogen transport}
\label{Sec:hydrogenTransportModel}
Denote the lattice hydrogen concentration as $C_L$, which is given by
\begin{equation}\label{eq:CL}
C_L=\theta_{L} N_L,
\end{equation}
\noindent where $N_L$ is the number of lattice sites per unit volume and $\theta_L$ is the lattice site occupancy. The former is a function of the molar volume of the host lattice $V_M$, the number of interstitial sites per solvent atom $\beta$ and Avogadro's number $N_A$ as $N_L=\beta N_A/V_M$. The choice of $\beta=6$ (as for bcc, see Ref. \cite{Krom1999}) leads to $N_L=5.1 \times 10^{29}$ sites/m$^3$ \cite{Sofronis1989,DiLeo2013}. On the other hand, the trapped hydrogen concentration for the $i$th type of trapping sites is given by
\begin{equation}\label{eq:CT}
C_T^{(i)} = \theta_T^{(i)} N_T^{(i)},
\end{equation}
\noindent where $N_T$ is the trap density (trapping sites per unit volume) and $\theta_T$ is the fraction of occupied trapping sites.\footnote{Alternatively, one can define $C_T=\theta_T \alpha N_T$, with $N_T$ being the number of traps per unit volume and $\alpha$ the number of atom sites per trap. Since commonly $\alpha=1$ \cite{Krom1999}, we choose to denote the trap density as the number of trapping sites per unit volume $N_T \equiv \alpha N_T$.} The trap density is a material property that remains constant throughout the analysis for traps such as carbides or grain boundaries but that evolves with mechanical loading for the case of dislocation traps; a Taylor-based formulation is presented below to determine $N_T^{(d)}$. We adopt Oriani's equilibrium theory \cite{Oriani1974}, resulting in the following Fermi-Dirac relation between the occupancy of the $i$th type of trapping sites and the fraction of occupied lattice sites
\begin{equation}\label{eq:Oriani}
\frac{\theta_T^{(i)}}{1 - \theta_T^{(i)}} = \frac{\theta_L}{1- \theta_L} K_T^{(i)},
\end{equation}
\noindent with $K_T^{(i)}$ being the equilibrium constant for the $i$th type of trap with binding energy $W_B^{(i)}$; given by
\begin{equation}\label{eq:KT}
K_T^{(i)}=\exp \left( \frac{-W_B^{(i)}}{\mathcal{R}T} \right).
\end{equation}
\noindent Here, $\mathcal{R}=8.3145$ J/(mol$\cdot$K) is the universal gas constant and $T$ is the absolute temperature. The implications of Oriani's equilibrium are illustrated in Fig. \ref{fig:Oriani} by combining (\ref{eq:CL}), (\ref{eq:Oriani}) and (\ref{eq:KT}) to plot the contours of trap occupancy $\theta_T$ as a function of the trap binding energy $W_B$ and the lattice hydrogen concentration $C_L$. It is observed that traps with binding energies $W_B<-50$ kJ/mol saturate at very low $C_L$ values, increasing the $C_T/C_L$ ratio for a given trap density. On the other hand, shallow traps with binding energies larger than $-20$ kJ/mol are effectively empty ($\theta_T \approx 0$) unless $C_L$ is very high, on the order of 10 wt ppm ($4.68 \times 10^{25}$ at H/m$^3$) or higher.
\begin{figure}[H]
\makebox[\textwidth][c]{\includegraphics[width=0.9\textwidth]{Oriani.eps}}
\caption{Implications of Oriani's equilibrium; sensitivity of the trap occupancy $\theta_T$ to the lattice hydrogen concentration $C_L$ and the trap binding energy $W_B$.}
\label{fig:Oriani}
\end{figure}
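The trend shown in the contour map follows directly from combining (\ref{eq:CL}), (\ref{eq:Oriani}) and (\ref{eq:KT}). The short sketch below (ours; it assumes room temperature, $T=300$ K, and a lattice hydrogen content of roughly 1 wt ppm) reproduces the two limiting behaviours: deep traps saturate while shallow traps remain essentially empty.

```python
import math

R_GAS = 8.3145   # universal gas constant, J/(mol K)
T = 300.0        # absolute temperature, K (room temperature, assumed)
N_L = 5.1e29     # lattice sites per m^3

def theta_T(C_L, W_B):
    """Trap occupancy from Oriani's equilibrium.
    C_L in sites/m^3, W_B in J/mol (negative for binding)."""
    theta_L = C_L / N_L
    K_T = math.exp(-W_B / (R_GAS * T))            # equilibrium constant
    x = theta_L / (1.0 - theta_L) * K_T           # theta_T / (1 - theta_T)
    return x / (1.0 + x)

C_L = 4.96e24                    # roughly 1 wt ppm of lattice hydrogen in steel
deep = theta_T(C_L, -50e3)       # deep trap, W_B = -50 kJ/mol: near saturation
shallow = theta_T(C_L, -20e3)    # shallow trap, W_B = -20 kJ/mol: nearly empty
```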
Finally, upon the common assumption of low occupancy conditions $\theta_L << 1$, an effective diffusion coefficient can be defined by,
\begin{equation}\label{eq:De}
D_e = D \frac{C_L}{C_L+ \sum_i C_T^{(i)} \left( 1 - \theta_T^{(i)} \right)} \, ,
\end{equation}
\noindent and the hydrogen transport equation reads
\begin{equation}
\frac{D}{D_e}\frac{\partial C_L}{\partial t}=D \nabla^2 C_L-\nabla \cdot \left( \frac{D C_L}{\mathcal{R}T} \bar{V}_H \nabla \sigma_H \right ) ,
\label{Dif}
\end{equation}
\noindent where $\bar{V}_H$ is the partial molar volume of hydrogen in solid solution and $\sigma_H$ is the hydrostatic stress.
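To illustrate the magnitude of the trapping effect, the effective diffusivity can be evaluated for the trap populations characterised later in Section 3.1 (our sketch; it assumes $T=300$ K and evaluates the expression for the effective diffusion coefficient at a fixed lattice concentration).

```python
import math

R_GAS, T = 8.3145, 300.0   # gas constant [J/(mol K)], temperature [K] (assumed)
N_L = 5.1e29               # lattice sites per m^3
D = 1.3e-9                 # lattice diffusion coefficient, m^2/s

# (W_B [J/mol], N_T [sites/m^3]) for the three trap types of Table 1
traps = {"dislocations": (-35.2e3, 4.93e23),
         "carbides":     (-21.4e3, 3.61e23),
         "interfaces":   (-24.7e3, 5.06e25)}

def D_eff(C_L):
    """Effective diffusivity: D * C_L / (C_L + sum_i C_T_i (1 - theta_T_i))."""
    theta_L = C_L / N_L
    denom = C_L
    for W_B, N_T in traps.values():
        K_T = math.exp(-W_B / (R_GAS * T))
        x = theta_L / (1.0 - theta_L) * K_T
        th_T = x / (1.0 + x)
        denom += th_T * N_T * (1.0 - th_T)
    return D * C_L / denom

De = D_eff(4.96e24)   # evaluated at the initial concentration C_0
```

With these inputs the traps retard diffusion appreciably, i.e. $D_e$ is a sizeable fraction below the lattice value $D$.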
\subsection{A Taylor-based model for plastic flow and trapping}
\label{Sec:MSG}
Capturing how the trap density for dislocations $N_T^{(d)}$ evolves with the applied load requires estimating the dislocation density $\rho$. Predicting the dislocation density \textit{via} a micromechanics approach is also relevant for providing a more precise description of crack tip deformation \cite{Hutchinson1997,Komaragiri2008,IJP2016}. Here, we follow Taylor's \cite{Taylor1938} dislocation model and accordingly relate the shear flow stress to $\rho$, the shear modulus $\mu$ and the Burgers vector $b$ as
\begin{equation}\label{eq:tau}
\tau = 0.5 \mu b \sqrt{\rho}.
\end{equation}
\noindent The dislocation density $\rho$ comprises the sum of the density $\rho_{SSD}$ for statistically stored dislocations (SSDs) and the density $\rho_{GND}$ for geometrically necessary dislocations (GNDs):
\begin{equation}\label{eq:rho}
\rho=\rho_{SSD}+\rho_{GND} .
\end{equation}
\noindent The GND density is defined by:
\begin{equation}\label{eq:rhoG}
\rho_{GND} = \bar{r} \frac{\eta^p}{b},
\end{equation}
\noindent where $\bar{r}$ is the Nye-factor and $\eta^p$ is the effective plastic strain gradient, which is defined as follows \cite{Gao1999,IJSS2015}:
\begin{equation}
\eta^p = \sqrt{\frac{1}{4} \eta_{ijk}^p \eta_{ijk}^p} \,\,\,\, \text{with} \, \, \eta_{ijk}^p=\varepsilon_{ik,j}^p + \varepsilon_{jk,i}^p - \varepsilon_{ij,k}^p \, ,
\end{equation}
\noindent where $\varepsilon_{ij}^p$ is the plastic strain tensor.
The tensile flow stress $\sigma_{f}$ is proportionally related to $\tau$ \textit{via} the Taylor factor $M$ such that, considering (\ref{eq:tau})-(\ref{eq:rhoG}),
\begin{equation}\label{eq:sigma_f}
\sigma_{f} = M \tau =0.5 M \mu b \sqrt{\rho_{SSD} + \bar{r} \frac{\eta^p}{b}}.
\end{equation}
\noindent Here, $M=2.9$ for bcc metals. The SSD density $\rho_{SSD}$ can be determined from (\ref{eq:sigma_f}) knowing the relation in uniaxial tension $(\eta = 0)$ between the flow stress and the material stress-strain curve as follows
\begin{equation}\label{eq:rhoS}
\rho_{SSD} = \left( \frac{\sigma_{ref} f \left( \varepsilon^p \right)}{0.5 M \mu b} \right)^2,
\end{equation}
\noindent where $\sigma_{ref}$ is a reference stress and $f(\varepsilon^p)$ is a non-dimensional function determined from the uniaxial stress-strain curve. Substituting back into (\ref{eq:sigma_f}), one reaches
\begin{equation}\label{eq:sF_msg}
\sigma_f = \sigma_{ref} \sqrt{f^2 \left( \varepsilon^p \right) + \ell \eta^p}
\end{equation}
\noindent where $\ell$ is the intrinsic material length. If the length parameter is set to zero or $f^2 \left( \varepsilon^p \right)$ outweighs the GND contribution $\ell \eta^p$, the model recovers conventional von Mises plasticity. For the sake of clarity, we have chosen to show results first for the case of conventional plasticity ($\rho=\rho_{SSD}$, $\ell=0$) and assess later the implications of accounting for the role of plastic strain gradients. This allows for validating the coupled deformation-diffusion model for multi-trapping with the static results of Dadfarnia \textit{et al.} \cite{Dadfarnia2011} (not shown). In both conventional and strain gradient plasticity models, relating the dislocation density with macroscopic quantities such as $\varepsilon^p$ and $\eta^p$ will allow us to estimate the evolution of the dislocation trap density $N_T^{(d)}$.
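The limiting behaviour described above can be sketched in a few lines (ours; the reference stress and hardening function below are illustrative stand-ins for the uniaxial fit introduced in the next section): with $\ell=0$ the gradient-enhanced flow stress reduces to the conventional curve, while a non-zero $\ell\eta^p$ elevates it near the crack tip.

```python
import math

def sigma_f(eps_p, eta_p, ell, sigma_ref, f):
    # gradient-enhanced flow stress: sigma_ref * sqrt(f(eps)^2 + ell * eta)
    return sigma_ref * math.sqrt(f(eps_p) ** 2 + ell * eta_p)

# illustrative (assumed) uniaxial inputs
sigma_ref = 1.0e9                       # reference stress, Pa
f = lambda e: (e + 0.003) ** 0.1        # non-dimensional hardening function

s_conv = sigma_f(0.1, 1e4, 0.0, sigma_ref, f)    # ell = 0: conventional plasticity
s_grad = sigma_f(0.1, 1e4, 5e-6, sigma_ref, f)   # ell = 5 um, steep gradient
```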
\section{Methodology}
\label{Sec:Met}
\subsection{Experiments}
\label{Sec:Material}
We build our analysis on the experimental characterisation of the fatigue behaviour of hydrogen pre-charged 42CrMo4 steel samples \cite{Peral2019}. The 42CrMo4 steel under consideration was austenitized at 845$^\circ$C for 40 min, quenched in water and tempered at 700$^\circ$C for two hours. As described elsewhere \cite{Zafra2018}, the mechanical properties of the material are obtained from uniaxial tension tests, giving a yield stress of $\sigma_y=622$ MPa. The material work hardening is characterised by means of an isotropic hardening power law:
\begin{equation}
\sigma = \sigma_y \left( 1 + \frac{E \varepsilon^p}{\sigma_y} \right)^\mathcal{N}
\end{equation}
\noindent where $\mathcal{N}=0.1$ is the hardening coefficient and $\varepsilon^p$ is the effective plastic strain. The reference stress in Eq. (\ref{eq:sF_msg}) will correspond to $\sigma_{ref}=\sigma_y (E / \sigma_y)^\mathcal{N}$ and $f \left( \varepsilon^p \right)= \left( \varepsilon^p + \sigma_y / E \right)^\mathcal{N}$.
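One can readily verify that this choice of $\sigma_{ref}$ and $f(\varepsilon^p)$ reproduces the power law exactly, since $\sigma_y (E/\sigma_y)^\mathcal{N} (\varepsilon^p + \sigma_y/E)^\mathcal{N} = \sigma_y (1 + E\varepsilon^p/\sigma_y)^\mathcal{N}$. A minimal check (ours):

```python
# material parameters of the 42CrMo4 steel considered here
E, sy, N = 220e9, 622e6, 0.1

def sigma_powerlaw(eps_p):
    # isotropic hardening power law
    return sy * (1.0 + E * eps_p / sy) ** N

sigma_ref = sy * (E / sy) ** N           # reference stress
def f(eps_p):
    # non-dimensional hardening function
    return (eps_p + sy / E) ** N
```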
The Young's modulus equals $E=220$ GPa and Poisson's ratio is $\nu=0.3$. Permeation tests are used to determine trap binding energies \cite{Zafra2020}, resulting in three values that are assigned to dislocations, carbides and martensitic interfaces - see Table \ref{tab:energy}. We proceed then to estimate the trap densities for each trap type. First, following Taha and Sofronis \cite{Taha2001} in assuming one trap site per atomic plane threaded by a dislocation, the dislocation trap density is given by
\begin{equation}\label{eq:N_T}
N_T^{(d)}=\frac{\sqrt{2} \rho}{a}
\end{equation}
\noindent where $a=0.2867$ nm is the lattice parameter \cite{Nagao2018}. Since $\rho$ is defined as a function of plastic strains and plastic strain gradients, see Section \ref{Sec:MSG}, an initial dislocation trap density $N_{T,0}^{(d)}$ is defined for the unstressed state. Assuming a recrystallised microstructure, $N_{T,0}^{(d)}$ can be estimated \textit{via} (\ref{eq:N_T}) from an initial dislocation density of $\rho_0=10^{14}$ m$^{-2}$. Regarding the trap density for carbides, we follow Nagao \textit{et al.} \cite{Nagao2018} and infer the volume density of carbide particles and the number of hydrogen trap sites per particle from SEM micrographs. Namely, the volume density is given by $(1/L^3)$ with $L=125$ nm being the average distance between the carbide particles, and the trap site density for carbide sites can then be estimated from each carbide particle diameter $d_j$ and associated frequency $f_j$ as follows:
\begin{equation}\label{eq:NTcarb}
N_T^{(c)}=\left(\sum_j \pi d_j^2 f_j \right) \frac{4}{a^2} \frac{1}{L^3}
\end{equation}
\noindent The value obtained is listed in Table \ref{tab:energy}; the sensitivity to $N_T^{(c)}$ will be explored, as increasing the carbide content is the main strategy in designing materials with \emph{beneficial} traps and intrinsic resilience \cite{Ramjaun2018,Turk2018}.\\
Finally, the trap density associated with martensitic interfaces, $N_T^{(m)}$, is estimated following the work by Galindo-Nava \textit{et al.} \cite{Galindo-Nava2017}. Thus, $N_T^{(m)}$ can be given as a function of the Burgers vector, the lattice site density and the mean grain size $D_g=2.5$ $\mu$m as
\begin{equation}
N_{T}^{(m)}=\frac{b}{D_g}N_L
\label{NTint}
\end{equation}
Martensitic interfaces are assumed to have a similar size to that of the mean grain size, a simplification that would allow for an alternative interpretation of the permeation data. Thus, if lath boundaries were to be of low misorientation and difficult to distinguish from dislocations in the context of permeation and desorption data, one could effectively re-interpret $N_{T}^{(m)}$ as the trap density of prior austenite grain boundaries. The binding energies and trap densities for each trap type are listed in Table \ref{tab:energy}, where the trap density for dislocations corresponds to that of the unstressed state, $N_{T,0}^{(d)}$.
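The dislocation and interface trap density estimates above can be reproduced directly (our sketch; the Burgers vector for bcc iron is taken as $b = a\sqrt{3}/2 \approx 0.248$ nm, an assumption consistent with the lattice parameter quoted above):

```python
import math

a = 0.2867e-9    # lattice parameter, m
b = 0.248e-9     # Burgers vector, m (assumed: a * sqrt(3) / 2 for bcc)
N_L = 5.1e29     # lattice sites per m^3
D_g = 2.5e-6     # mean grain size, m
rho0 = 1e14      # initial dislocation density, m^-2

N_T_disl = math.sqrt(2) * rho0 / a   # dislocation traps in the unstressed state
N_T_mart = b / D_g * N_L             # martensitic interface traps
```

Both values match the entries reported in the table below for dislocations and martensitic interfaces.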
\begin{table}[H]
\centering
\caption{Binding energies $W_B$ and trap densities $N_T$ measured for 42CrMo4 steel. The trap density for dislocations corresponds to that of the unstressed state, $N_{T,0}^{(d)}$.}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Trap type} & \textbf{$W_B$ [kJ/mol]} & \multicolumn{1}{c|}{\textbf{$N_T$ [sites/m$^{3}$]}} \\
\hline
Dislocations & -35.2 & $4.93 \times 10^{23}$ \\
Carbides & -21.4 & $3.61 \times 10^{23}$ \\
Martensitic interfaces & -24.7 & $5.06 \times 10^{25}$ \\
\hline
\end{tabular}
\label{tab:energy}
\end{table}
The lattice diffusion coefficient is measured using permeation and estimated to be $D_L=1.3 \times 10^{-9}$ m$^2$/s \cite{Zafra2020}. All the specimens are pre-charged with gaseous hydrogen in a high-pressure hydrogen reactor for 21 h at 450$^\circ$C under a pressure of 19.5 MPa of pure hydrogen to ensure that the samples are saturated with hydrogen (10 mm thickness) \cite{Peral2019}. Thermal Desorption Spectroscopy (TDS) is used in combination with diffusion modelling to estimate the initial lattice hydrogen concentration, which equals $C_0=1.06$ wt ppm ($4.96 \times 10^{24}$ at H/m$^3$). The fatigue crack growth experiments were conducted using compact tension (CT) specimens with a width of 48 mm and a thickness of 10 mm, see Ref. \cite{Peral2019} for details. Before hydrogen pre-charging, the samples were first fatigue pre-cracked at a load ratio of $R=0.1$ and 10 Hz until reaching a crack length to width ratio of $a/W=0.2$, following the ASTM E647 standard. The results obtained in both uncharged and pre-charged samples loaded at different frequencies are shown in Fig. \ref{fig:Propagation} in terms of crack growth rates $da/dN$ versus load amplitude $\Delta K$. A load ratio of $R=K_{min} / K_{max}=0.1$ is used and experiments are conducted at room temperature. The experimental results reveal that the behaviour of the hydrogen-free samples is recovered in the hydrogen-charged experiments if the loading frequency is higher than 1 Hz. For lower frequencies, hydrogen embrittles the material and accelerates crack growth rates. The existence of a \emph{safe} regime of loading frequencies, where hydrogen has no effect, has also been demonstrated in other experimental works \cite{Murakami2010a,Fassina2013,Tazoe2017,Alvaro2019}.
\begin{figure}[H]
\makebox[\textwidth][c]{\includegraphics[width=0.9\textwidth]{Propagacion.eps}}
\caption{Experimental results in 42CrMo4 steel subjected to a load ratio of $R=0.1$ \cite{Peral2019}. Crack growth rates $da/dN$ versus load amplitude $\Delta K$. It is shown that there is a frequency threshold below which hydrogen has no effect on the fatigue behaviour.}
\label{fig:Propagation}
\end{figure}
\subsection{Numerical model}
\label{Subsec:FE}
Hydrogen transport during cyclic loading and its implications for embrittlement are investigated using a finite element model. Both qualitative and quantitative insights are gained, using the fatigue experiments on 42CrMo4 steel by Peral \textit{et al.} \cite{Peral2019} for the latter (see Section \ref{Sec:Material}). Small scale yielding conditions apply and accordingly crack tip fields are computed using a boundary layer formulation \cite{Sofronis1989}. Hence, as described in Fig. \ref{fig:Boundary}, the crack region is contained within a circular zone and a \emph{cyclic} remote Mode I load $\Delta K$ is applied by prescribing the horizontal $u$ and vertical $v$ displacement components of the nodes at the remote circular boundary:
\begin{equation}
\Delta u(r,\theta)=\Delta K \frac{1+\nu}{E}\sqrt{\frac{r}{2\pi}}\cos\left(\frac{\theta}{2}\right)(3-4\nu-\cos\theta)
\label{despu}
\end{equation}
\begin{equation}
\Delta v(r,\theta)= \Delta K \frac{1+\nu}{E}\sqrt{\frac{r}{2\pi}}\sin\left(\frac{\theta}{2}\right)(3-4\nu-\cos\theta)
\label{despv}
\end{equation}
\noindent where $r$ and $\theta$ denote the radial and angular coordinates of a polar coordinate system centred at the crack tip. Plane strain conditions and finite deformations are considered. An initial crack tip blunting radius of $r_0=0.5$ $\mu$m is defined, rendering an initial crack tip opening displacement of $b_0=1$ $\mu$m \cite{Dadfarnia2011}. The outer radius is chosen to be 300,000 times larger than $r_0$. As depicted in Fig. \ref{fig:Boundary}, cyclic loading is imposed by scaling the external load in time $t$ with a sinusoidal function of amplitude $\Delta K = K_{max} - K_{min}$ and load ratio $R=K_{min}/K_{max}$. A load ratio of $R=0.1$ is used throughout the study and the number of cycles is denoted by $N$. Mimicking the experiments, an initial hydrogen concentration $C_0$ is prescribed uniformly in the entire sample. We will also model the case of an open-system, where we prescribe a constant chemical potential, as described in Section \ref{sec:ConvFreq} below. Mechanical and diffusion properties are those measured in Section \ref{Sec:Material}, with the partial molar volume of hydrogen taken to be $\bar{V}_H=2 \times 10^{-6}$ m$^3$/mol.
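The prescribed boundary displacements and the sinusoidal load history can be sketched as follows. The elastic constants are assumed typical values for a quenched and tempered steel (they are not quoted in this section).

```python
import numpy as np

# Elastic constants assumed typical of 42CrMo4 steel (not quoted here).
E, nu = 210e9, 0.3   # Young's modulus (Pa), Poisson's ratio (-)

def mode_I_boundary_displacements(dK, r, theta):
    """Displacement increments of the two equations above, applied on the
    remote circular boundary of the boundary layer model."""
    c = dK * (1.0 + nu) / E * np.sqrt(r / (2.0 * np.pi))
    du = c * np.cos(theta / 2.0) * (3.0 - 4.0 * nu - np.cos(theta))
    dv = c * np.sin(theta / 2.0) * (3.0 - 4.0 * nu - np.cos(theta))
    return du, dv

def applied_K(t, dK=35e6, R=0.1, f=1.0):
    """Sinusoidal load history K(t) with amplitude dK = K_max - K_min and
    load ratio R = K_min / K_max (units: Pa*sqrt(m), s, Hz)."""
    K_max = dK / (1.0 - R)
    K_min = R * K_max
    K_mean = 0.5 * (K_max + K_min)
    return K_mean + 0.5 * dK * np.sin(2.0 * np.pi * f * t)
```

For $\Delta K=35$ MPa$\sqrt{m}$ and $R=0.1$, the load oscillates between $K_{min}\approx 3.9$ and $K_{max}\approx 38.9$ MPa$\sqrt{m}$, consistent with Fig. \ref{fig:Boundary}.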
\begin{figure}[H]
\makebox[\textwidth][c]{\includegraphics[width=1.2\textwidth]{BCs.eps}}
\caption{Sketch of the numerical model: boundary layer formulation, with mechanical and diffusion boundary conditions, and applied $K$ as a function of time for the case of $R=0.1$, $\Delta K=35$ MPa$\sqrt{m}$, and $f=1$ Hz.}
\label{fig:Boundary}
\end{figure}
The multi-trap hydrogen transport and micromechanics constitutive models described in Section \ref{Sec:Theory} are implemented in the commercial finite element package Abaqus using, respectively, a UMATHT and a UMAT subroutine. A DISP subroutine is employed to prescribe a constant chemical potential at the crack faces \cite{IJHE2016,Diaz2016b}. The coupling between the different user subroutines is described in Fig. \ref{fig:Flow}. The model is discretised using 5238 quadrilateral quadratic elements with reduced integration. The use of a finer mesh leads to convergence problems for high values of $\Delta K$ due to large element distortions. However, at low $\Delta K$ values, the present mesh appears to give results that are quantitatively similar to those obtained with finer meshes.
\begin{figure}[H]
\makebox[\textwidth][c]{\includegraphics[width=0.7\textwidth]{FlowChartABAQUS.eps}}
\caption{Flow chart describing the coupling between the user subroutines employed in the numerical implementation.}
\label{fig:Flow}
\end{figure}
\section{Results}
\label{Sec:Results}
The influence of cyclic loading on the lattice and trapped hydrogen distributions is investigated first, Section \ref{Sec:TrapANDLattice}. We then proceed, in Section \ref{sec:ConvFreq}, to shed light on the role of frequency and boundary conditions (closed-system versus open-system). In Section \ref{Sec:BeneficialTraps} we quantify the implications of engineering alloys with a higher density of carbide trapping sites. Finally, the sensitivity of the results to the binding energy and the role of plastic strain gradients are addressed in Sections \ref{Sec:BindingEnergy} and \ref{Sec:CMSG}, respectively.
\subsection{Cyclic behaviour of trap and lattice concentrations}
\label{Sec:TrapANDLattice}
Consider the 42CrMo4 steel characterised in Section \ref{Sec:Material}. The behaviours of the lattice ($C_L$) and trapped ($C_T$) hydrogen concentrations are shown in Fig. \ref{fig:TrapLattice} for a sample pre-charged with $C_0=1.06$ wt ppm, as in the experiments. A frequency of $f=1$ Hz is considered, as this corresponds to the frequency level at which the same experimental response is observed with and without hydrogen, and $\Delta K=35$ MPa$\sqrt{m}$. First, Fig. \ref{fig:TrapLattice}a shows the lattice hydrogen concentration at three different stages of a representative cycle ($N$=10): the maximum load $K_{max}$, the minimum load $K_{min}$ and the mean load $K_m=(K_{max}+K_{min})/2$. In agreement with expectations, the hydrogen concentration follows qualitatively the trend depicted by the applied load, with the three curves merging far away from the crack tip (where $\sigma_H$ is small). The variation of the maximum values of $C_L$ and $C_T$ ahead of the crack ($\theta=0^{\circ}$) is shown in Fig. \ref{fig:TrapLattice}b as a function of time (number of cycles), where $C_T$ includes the contributions from all trap sites. This quantity, denoted ${C_L}_{max,\theta=0^\circ}$ (or ${C_T}_{max,\theta=0^\circ}$), is the maximum magnitude of $C_L$ (or $C_T$) attained for a given instant of time across all material points ahead of the crack tip; i.e., ${C_L}_{max,\theta=0^\circ}=\text{max}(C_L(r,\theta=0^{\circ},t))$. Consistent with Fig. \ref{fig:TrapLattice}a and Oriani's equilibrium, the results in Fig. \ref{fig:TrapLattice}b reveal a cyclic variation of ${C_L}_{max,\theta=0^\circ}$ and ${C_T}_{max,\theta=0^\circ}$. The lattice hydrogen concentration exhibits an almost periodic response while the trapped hydrogen concentration increases with time. The trend depicted by $C_T$ is due to the evolution of the dislocation trap density $N_T^{(d)}$ with plastic deformation; this is shown in Fig. \ref{fig:TrapLattice}c, where the individual contributions of each trap type are plotted.
More hydrogen is trapped in martensitic interface trapping sites as the trap density is substantially higher, see Table \ref{tab:energy}.
\begin{figure}[H]
\makebox[\textwidth][c]{\includegraphics[width=1.3\textwidth]{TrapLattice.eps}}
\caption{Cyclic behaviour of the hydrogen concentrations; (a) lattice hydrogen distribution ahead of the crack for the 10th cycle, and time/cycle dependency of the maximum lattice and trapped hydrogen concentration: (b) summed contribution from all trap sites, and (c) individual contributions. Fatigue loading, $f=1$ Hz, $R=0.1$, $\Delta K=35$ MPa$\sqrt{m}$.}
\label{fig:TrapLattice}
\end{figure}
We will draw implications for embrittlement by focusing on the maximum lattice concentration ${C_L}_{max}$. Oriani's equilibrium provides a one-to-one relation between the lattice and trapped hydrogen concentrations and accordingly $C_L$ can be used to construct a unique failure locus \cite{Ayas2014}. In this way, we refrain from making any mechanistic choices regarding the damage process, retaining the generality of the analysis.
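The one-to-one mapping provided by Oriani's equilibrium (the concrete relation is defined in the theory section, not reproduced here) can be sketched in its standard occupancy form. The lattice site density $N_L$ below is a handbook value for bcc iron assumed for illustration, not a parameter quoted in this section.

```python
import numpy as np

Rg, T = 8.314, 293.0   # gas constant (J/(mol K)), room temperature (K)
NL = 8.46e28           # assumed lattice site density for bcc Fe (sites/m^3)

def trapped_concentration(CL, NT, WB):
    """Oriani equilibrium between lattice and trap occupancies:
    theta_T / (1 - theta_T) = theta_L * exp(-WB / (Rg T)), theta_L << 1."""
    KT = np.exp(-WB / (Rg * T))      # equilibrium constant, > 1 for WB < 0
    theta_L = CL / NL                # dilute lattice occupancy
    theta_T = theta_L * KT / (1.0 + theta_L * KT)
    return NT * theta_T

# At the pre-charge level, deeper traps sit closer to saturation:
CL0 = 4.96e24
CT_disl = trapped_concentration(CL0, 4.93e23, -35.2e3)   # dislocations
CT_carb = trapped_concentration(CL0, 3.61e23, -21.4e3)   # carbides
```

With these assumed numbers the dislocation traps ($|W_B|=35.2$ kJ/mol) are nearly saturated while the shallower carbide traps retain a much lower occupancy, which is why $C_L$ alone suffices to define a failure locus.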
\subsection{Influence of frequency and boundary conditions}
\label{sec:ConvFreq}
We proceed to shed light on the influence of the loading frequency and the hydrogen charging conditions. Dimensional analysis shows that the role of the frequency scales with the diffusion coefficient, such that a normalised frequency can be defined as
\begin{equation}\label{eq:f_bar}
\bar{f} = \frac{f R_p^2}{D_e} \, ,
\end{equation}
\noindent where $R_p$ is the fracture process zone size, given by the Irwin approximation as
\begin{equation}\label{eq:Rp}
R_p = \frac{1}{3 \pi} \left( \frac{K_I}{\sigma_y} \right)^2 \, .
\end{equation}
\noindent For simplicity, we define $\bar{f}$ using $D_L$, as it remains constant throughout the analysis, and use $K_{max}$ in (\ref{eq:Rp}). Accordingly, a normalised time can be given by $\bar{t}=D_L t/R_p^2$. The evolution of the maximum lattice hydrogen concentration is computed for different frequencies in two scenarios: a closed-system, where the sample is pre-charged with $C_0$, and an open-system, where the sample is pre-charged with $C_0$ and continuously exposed to a permanent source of hydrogen. In the latter case, the appropriate boundary condition on the crack faces is to prescribe a constant chemical potential \cite{DiLeo2013}. Using the concentration as a degree of freedom, this equates to the following boundary condition
\begin{equation}
C_b = C_0 \exp \left( \frac{\bar{V}_H \sigma_H}{RT} \right) \, .
\end{equation}
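The three relations above can be evaluated numerically for orientation. The diffusivity and partial molar volume are the measured/stated values; the yield strength and crack tip hydrostatic stress below are illustrative assumptions, as neither is quoted in this section.

```python
import numpy as np

# Measured / stated parameters
D_L = 1.3e-9          # lattice diffusion coefficient (m^2/s)
V_H = 2e-6            # partial molar volume of hydrogen (m^3/mol)
Rg, T = 8.314, 293.0  # gas constant (J/(mol K)), room temperature (K)

# Illustrative assumptions (not quoted in this section)
sigma_y = 600e6       # yield strength (Pa)
sigma_H = 1000e6      # crack tip hydrostatic stress (Pa)

def irwin_Rp(K, sy):
    """Irwin estimate of the fracture process zone size (second eq. above)."""
    return (K / sy) ** 2 / (3.0 * np.pi)

def normalised_frequency(f, K, sy, D=D_L):
    """f_bar = f Rp^2 / D, defined here with D_L and K_max as in the text."""
    return f * irwin_Rp(K, sy) ** 2 / D

K_max = 10e6 / (1.0 - 0.1)   # K_max for dK = 10 MPa sqrt(m), R = 0.1

f_bar = normalised_frequency(1.0, K_max, sigma_y)

# Constant chemical potential condition expressed as a concentration ratio:
Cb_over_C0 = np.exp(V_H * sigma_H / (Rg * T))
```

With the assumed yield strength, $\bar{f}$ is of order unity at $f=1$ Hz, i.e. the diffusion and loading time scales are comparable, and $C_b \approx 2.3\,C_0$ at a hydrostatic stress of 1 GPa.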
The results are shown in Fig. \ref{fig:Frequency}. In both closed- and open-systems the hydrogen concentration follows the cyclic behaviour of the applied load. The maximum value attained by $C_L$ is practically constant after a number of cycles. Some humps are observed for the highest frequencies as a larger number of cycles is considered. However, based on the crack growth rates of Fig. \ref{fig:Propagation}, crack extension is likely to occur at a lower number of cycles, re-distributing crack tip fields. More importantly, the magnitude of the maximum $C_L$ attained depends on the $f/D_e$ ratio; if we load at high frequencies or use materials with low (effective) diffusion coefficients, ${C_L}_{max,\theta=0^\circ}$ will be lower. This is quantified as a function of time in Figs. \ref{fig:Frequency}a and \ref{fig:Frequency}b for closed and open-systems, respectively. The results show that the magnitude of ${C_L}_{max,\theta=0^\circ}$ varies cyclically and shows a high sensitivity to $\bar{f}$; critical values of the hydrogen concentration may not be attained if the loading frequency is sufficiently high or $D_e$ is sufficiently low. It is important to emphasize that this behaviour is observed for both closed- and open-systems, Figs. \ref{fig:Frequency}a and \ref{fig:Frequency}b respectively, albeit to a lesser degree in the latter. Calculations conducted for open-systems with the same $C_b$ but a smaller (or zero) initial hydrogen concentration exhibit the same qualitative trends, although the effect is smaller. Moreover, the observed sensitivity of the maximum value of $C_L$ to the loading frequency is also the expected qualitative behaviour at large time scales (number of cycles). If $D_e$ is small relative to the time required to complete one loading cycle, the steady state behaviour of $C_L$ will not be governed by the maximum value of $\sigma_H$ but by the mean. 
In other words, the capacity of the hydrogen distribution to reach its upper limit, given by the steady state solution for the $\sigma_H$ distribution associated with $K_{max}$, is governed by the ratio between the loading frequency $f$ and the effective diffusion coefficient $D_e$. The implications are profound: if alloys can be engineered to reduce the effective diffusion coefficient, resistance to hydrogen assisted fatigue can be gained over a larger range of loading frequencies and in all applications. This will be quantified in the following section.
\begin{figure}[H]
\makebox[\textwidth][c]{\includegraphics[width=1.\textwidth]{Frequency.eps}}
\caption{Influence of loading frequency; variation in time of the maximum value of $C_L$ ahead of the crack for: (a) a closed-system, and (b) an open-system. Fatigue loading, $R=0.1$, $\Delta K=10$ MPa$\sqrt{m}$.}
\label{fig:Frequency}
\end{figure}
\subsection{Hydrogen-trap interaction: can it be used to mitigate fatigue?}
\label{Sec:BeneficialTraps}
Let us gain quantitative insight by correlating with the experiments described in Section \ref{Sec:Material}. Fig. \ref{fig:Frequency} reveals that the maximum value attained by $C_L$ along the extended crack plane rapidly reaches a plateau in time, with the magnitude of this plateau value being highly sensitive to the frequency. As discussed in Section \ref{Sec:TrapANDLattice}, we will assume that a threshold value for $C_L$ exists that determines the onset of embrittlement. The maximum value of $C_L$ attained in each cycle is plotted in Fig. \ref{fig:BeneficialTraps} for the three loading frequencies considered in the experiments. This quantity, estimated once per cycle, is denoted as ${C_L}_{max,N}$ to differentiate it from ${C_L}_{max,\theta=0^\circ}$ (the maximum value of $C_L$ at each time instant, which varies cyclically). Recall that, as shown in Fig. \ref{fig:Propagation}, a loading frequency of 1 Hz or higher does not lead to any embrittlement while an increase in fatigue crack growth rates can be observed for frequencies of 0.1 Hz or lower. Thus, the critical value of $C_L$ at which embrittlement is observed must lie between the plateau values of ${C_L}_{max,N}$ predicted for $f=0.1$ and $f=1$ Hz. Results for the \emph{standard} carbide trap density, $N_T^{(c)}=3.61 \times 10^{23}$ sites/m$^3$, reveal plateau values of 2.5$C_0$ and 1.6$C_0$ for $f=0.1$ and $f=1$ Hz, respectively. Taking the average, we stipulate that the critical $C_L$ for embrittlement equals 2.05$C_0$, as depicted in Fig. \ref{fig:BeneficialTraps}.
\begin{figure}[H]
\makebox[\textwidth][c]{\includegraphics[width=1.1\textwidth]{BeneficialTraps.eps}}
\caption{Influence of increasing the carbide trap density $N_T^{(c)}$. Maximum $C_L$ in each cycle as a function of time (number of cycles) for three frequencies and two carbide trap densities. Fatigue loading, $R=0.1$, $\Delta K=35$ MPa$\sqrt{m}$. The region of embrittlement is inferred from the experimental results on 42CrMo4 steel.}
\label{fig:BeneficialTraps}
\end{figure}
Fig. \ref{fig:BeneficialTraps} also includes results obtained assuming that the density of carbides can be engineered. Namely, the density of carbide trapping sites is increased to $N_T^{(c)}=3.61 \times 10^{26}$ sites/m$^3$ \cite{Ramjaun2018}.
Increasing the trap density decreases the diffusion coefficient, see (\ref{eq:De}), and accordingly, the diffusion of lattice hydrogen within each cycle is reduced, leading to a lower value of ${C_L}_{max,N}$. As shown in Fig. \ref{fig:BeneficialTraps}, the maximum value of hydrogen concentration attained with $f=0.05$ Hz in an alloy with additional traps is similar to that obtained in a \emph{standard} alloy for a frequency of $f=0.1$ Hz. Moreover, the simulations for the trap-enhanced material show no embrittlement within the $f=0.1-1$ Hz regime, effectively extending the regime of safe frequencies at which hydrogen has no effect by an order of magnitude.
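Eq. (\ref{eq:De}) is defined in the theory section; assuming it takes the standard Oriani-based form $D_e = D_L/(1+\sum_i \partial C_T^{(i)}/\partial C_L)$, the drop in $D_e$ produced by the enhanced carbide density can be estimated from the Table \ref{tab:energy} values. The lattice site density and the dilute occupancy level used here are illustrative assumptions.

```python
import numpy as np

Rg, T = 8.314, 293.0
D_L = 1.3e-9              # measured lattice diffusivity (m^2/s)
NL = 8.46e28              # assumed lattice site density for bcc Fe (sites/m^3)
theta_L = 4.96e24 / NL    # dilute lattice occupancy at the pre-charge level

def D_eff(traps):
    """Oriani-based effective diffusivity, assuming the standard form
    De = DL / (1 + sum_i dCT_i/dCL), for a list of (NT, WB) traps."""
    s = 0.0
    for NT, WB in traps:
        KT = np.exp(-WB / (Rg * T))
        s += (NT / NL) * KT / (1.0 + theta_L * KT) ** 2
    return D_L / (1.0 + s)

# (NT, WB) for dislocations, carbides and martensitic interfaces
base = [(4.93e23, -35.2e3), (3.61e23, -21.4e3), (5.06e25, -24.7e3)]
enhanced = [(4.93e23, -35.2e3), (3.61e26, -21.4e3), (5.06e25, -24.7e3)]
ratio = D_eff(base) / D_eff(enhanced)
```

Under these assumptions, raising the carbide density by three orders of magnitude lowers $D_e$ by roughly a factor of five, consistent with the reduced per-cycle hydrogen accumulation shown in Fig. \ref{fig:BeneficialTraps}.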
\subsection{Influence of the binding energy}
\label{Sec:BindingEnergy}
Traps are characterised by their binding energy and trap density. In the following, we examine the implications of the binding energy $W_B$ for the above conclusions. The inverse problem of interpreting TDS data to determine $W_B$ for each trap type is ill-posed, bringing uncertainties into the analysis. Thus, we now consider carbides to be the strongest trap in the model, exchanging their binding energy with that of dislocations ($W_B=-35.2$ kJ/mol, see Table \ref{tab:energy}). The results are shown in Fig. \ref{fig:InfluenceBindingEnergy} assuming a high carbide trap density material, $N_T^{(c)}=3.61 \times 10^{26}$ sites/m$^3$. Predictions are compared to those obtained with the previous binding energy estimate $W_B^{(c)}=-21.4$ kJ/mol; the higher $|W_B|$, the lower the maximum value of $C_L$ attained. The results do not exhibit the high sensitivity shown by the trap density, but the variation in $W_B$ is also small (13.8 kJ/mol). Given that $21.4$ kJ/mol is on the lower side of the $|W_B|$ values reported for carbides \cite{Song2015}, it is expected that the conclusions drawn in the previous section will hold to a greater degree. Moreover, the results suggest that the gains derived from an increase in trap density could be significantly enhanced if the density of deep traps ($|W_B| > 50$ kJ/mol) is increased.
\begin{figure}[H]
\makebox[\textwidth][c]{\includegraphics[width=1.1\textwidth]{InfluenceBindingEnergy.eps}}
\caption{Influence of the binding energy. Maximum $C_L$ in each cycle as a function of time (number of cycles) for three frequencies and two carbide binding energies in a high carbide trap density material. Fatigue loading, $R=0.1$, $\Delta K=35$ MPa$\sqrt{m}$. The region of embrittlement is inferred from the experimental results on 42CrMo4 steel.}
\label{fig:InfluenceBindingEnergy}
\end{figure}
\subsection{Influence of plastic strain gradients}
\label{Sec:CMSG}
To ease the interpretation of the results, conventional $J_2$ plasticity has been used for the computations reported so far. However, it has been shown that plastic strain gradients are very high near the crack tip and elevate dislocation density and local strength \cite{Wei1997,Komaragiri2008,IJSS2015}. This local dislocation hardening associated with GNDs can be accounted for using strain gradient plasticity theories \cite{Fleck1994,Gao1999,Anand2005,JMPS2019}. Considering the dislocation-based strain gradient plasticity model formulated in Section \ref{Sec:Theory}, two implications can be foreseen. First, common to other gradient plasticity models, crack tip stresses will be significantly higher than those predicted by conventional plasticity, which in turn increases the lattice hydrogen content close to the crack tip \cite{IJHE2016}. Secondly, total dislocation density $\rho$ predictions (and the associated trap density $N_T^{(d)}$) can differ significantly. It is expected that the statistically stored dislocation (SSD) density $\rho_{SSD}$, predicted via (\ref{eq:rhoS}), will be lower than in conventional plasticity as local hardening reduces $\varepsilon^p$, but there will be an additional contribution to the total dislocation density as GNDs are accounted for ($\rho_{GND}$). The influence of these features on the cyclic behaviour of the lattice and trapped hydrogen concentrations is investigated here for the first time. Note that, unlike conventional plasticity, strain gradient plasticity does not predict a peak $\sigma_H$ (and $C_L$) at a finite distance ahead of the crack tip \cite{AM2016,EJMAS2019}; accordingly, we do not compute the maximum value of the variables under consideration along the crack ligament but sample them at a critical distance for embrittlement.
As shown by Gangloff \cite{Gangloff2003a,Gangloff1990} using $D_e$ and stage II $da/dt$ data, this critical distance is of a few microns in many alloys, which further motivates the use of strain gradient plasticity models; $x_{crit}=2$ $\mu$m is assumed. Also, the material gradient length scale is assumed to be equal to $\ell=5$ $\mu$m, an intermediate value within the range of length scales reported in the literature from micro-scale experiments \cite{IJES2020}.
Fig. \ref{fig:SGP}a shows the evolution of the dislocation density with the number of loading cycles, where the maximum value of $\rho_i$ at $x_{crit}=2$ $\mu$m is plotted. As expected, local strain gradient hardening reduces $\rho_{SSD}$ and the density of statistically stored dislocations is higher in the conventional plasticity case ($\ell=0$). However, the total dislocation density $\rho$ is substantially higher in the case of the strain gradient plasticity model as the GND density is notably larger than the total dislocation density predicted with conventional plasticity. This, in turn, translates into a significantly higher trap density for dislocation sites $N_T^{(d)}$ as the number of cycles increases. We proceed to examine if the conclusions drawn in previous sections are still applicable in view of this notably different crack tip behaviour. As with the case of conventional plasticity, the maximum value of $C_L$ per cycle remains practically constant after a certain number of cycles. Fig. \ref{fig:SGP}b shows the variation in time of the maximum value of $C_L$ per cycle at $x_{crit}$ for strain gradient plasticity, three selected values of the loading frequency and two carbide trap densities. The qualitative trends are the same as those obtained so far with conventional plasticity; increasing the density of carbide trapping sites reduces the maximum lattice hydrogen concentration attained. A quantitative comparison with the results from conventional plasticity is shown in Fig. \ref{fig:SGP}c. Strain gradient plasticity predicts a higher value of the maximum hydrogen concentration in all cases due to the higher crack tip stresses; this would translate into a higher experimentally-calibrated critical $C_L$ for embrittlement. The drop in $C_{Lmax}$ with increasing $N_T^{(c)}$ is quantitatively similar to that predicted with conventional plasticity.
Therefore, the use of more accurate micromechanics-based descriptions of crack tip fields does not change the conclusions drawn before with conventional plasticity.
\begin{figure}[H]
\makebox[\textwidth][c]{\includegraphics[width=1.35\textwidth]{SGP.eps}}
\caption{Influence of plastic strain gradients; (a) dislocation density evolution with time (number of cycles) for conventional and strain gradient plasticity ($f=1$ Hz), (b) maximum $C_L$ in each cycle as a function of time (number of cycles) for three frequencies and two carbide trap densities, (c) maximum value attained by $C_L$ in the analysis for conventional and strain gradient plasticity, two carbide trap densities and three frequencies. All quantities have been sampled at $x_{crit}=2$ $\mu$m from the crack tip. Fatigue loading, $R=0.1$, $\Delta K=10$ MPa$\sqrt{m}$.}
\label{fig:SGP}
\end{figure}
\section{Conclusions}
\label{Sec:ConcludingRemarks}
We have presented a micromechanics-based multi-trap model for stress-assisted hydrogen diffusion. The model is used to investigate the competing role of the loading frequency $f$ and the effective diffusion coefficient $D_e$ on hydrogen assisted fatigue in the presence of multiple microstructural traps. Experiments on 42CrMo4 steel are used to gain quantitative insight by inferring a critical hydrogen concentration for embrittlement based on the frequency range where hydrogen has no effect on crack growth rates. The main findings are:\\
\noindent (i) The trap and lattice hydrogen concentration vary cyclically following the variation of the mechanical load. The maximum concentration value attained in each cycle by the hydrogen trapped at dislocations rises with time as the associated trap density $N_T^{(d)}$ increases with plastic deformation. Contrarily, the maximum values of the lattice hydrogen concentration $C_L$ and the hydrogen trapped at other traps such as carbides or interfaces remain practically constant after a few cycles.\\
\noindent (ii) The maximum hydrogen concentration attained ahead of the crack is highly sensitive to the $D_e/f$ ratio. A lower peak in the $C_L$ distribution is observed if the effective diffusion coefficient is small relative to the time required to complete a loading cycle. This behaviour is observed in both closed-systems (one-off hydrogen entry) and open-systems (permanent source of hydrogen). \\
\noindent (iii) Increasing the density of ``beneficial'' traps not involved in the fracture process is a viable strategy for mitigating hydrogen assisted fatigue. An increase in the density of carbide trapping sites $N_T^{(c)}$ reduces the maximum hydrogen concentration values attained for a given frequency and extends the regime of \emph{safe} frequencies where embrittlement is not predicted.\\
\noindent (iv) The maximum concentration values predicted for a given frequency can be further reduced by increasing the density of the trapping sites with stronger binding energy $|W_B|$. Quantitatively, the effect is lower than varying the trap density, as the range of binding energies is more limited. \\
\noindent (v) The use of strain gradient plasticity to better resolve crack tip fields shows a higher dislocation trap density and lattice hydrogen concentration relative to conventional plasticity predictions. However, no significant qualitative or quantitative differences are observed regarding the role of an increased carbide trap density in mitigating hydrogen assisted fatigue.\\
Given that most engineering components are subjected to cyclic loads, these insights could have important implications in the design of hydrogen-resistant alloys.
\section{Acknowledgements}
\label{Sec:Acknowledgeoffunding}
The authors would like to acknowledge helpful discussions with F.J. Belzunce and A. Zafra (University of Oviedo) in regards to the experiments and the trap density measurements. The authors acknowledge funding from the Regional Government of Asturias (grant FC-GRUPIN-IDI/2018/000134) and the IUTA (grant SV-19-GIJON-1-19). E. Mart\'{\i}nez-Pa\~neda also acknowledges financial support from EPSRC funding under grant No. EP/R010161/1 and from the UKCRIC Coordination Node, EPSRC grant number EP/R017727/1, which funds UKCRIC's ongoing coordination.
\bibliographystyle{elsarticle-num}
\section{INTRODUCTION}
Advances in the networking, intelligence, and media available in urban areas attract people towards a more comfortable lifestyle. Urbanization at an unprecedented scale and speed incurs significant challenges for city administrators, urban planners and policy makers. In order to efficiently manage city functions and be responsive to dynamic transitions, surveillance systems are essential for situational awareness (SAW) \cite{liu2014adaptive}, \cite{wu2015pseudo}. Nowadays, a prohibitively large amount of surveillance data is generated every second by ubiquitously distributed video sensors. For example, North America alone had more than 62 million cameras in 2016. These cameras are connected to powerful data centers through communication networks, and the delivery of surveillance video streams creates a heavy burden on the network. Researchers have shown that video streaming accounted for 74\% of the total online traffic in 2017 \cite{chen2017enabling}.
Since the first generation of video surveillance systems, known as Closed-Circuit TV (CCTV), was introduced in the 1960s, urban surveillance mechanisms have adapted to changing technology \cite{surette2005thinking}. Compared with today's edge computing paradigm, CCTV-like surveillance systems are limited because:
\begin{itemize}
\item The network is ``best effort'' based, which means that not only does the transmission of the video data suffer delays and jitter, but the data may also get lost or dropped because of network congestion.
\item The raw-data transmission is ``dedicated'', which wastes resources in the communication network and at the data center, because not all data is globally significant or worth storing for a long time.
\item An agent needs to pay ``full attention'' to the video to capture any emergency in real-time. Obviously, this naïve approach is not scalable. Several architectures have been introduced that rely on computer vision techniques and make decisions using machine learning algorithms; however, to date no system is able to meet performance requirements such as real-time operation, good scalability, and robustness \cite{tsakanikas2017video}.
\item An agent's ``working memory'' and computing capabilities afford only searching for a specific target of interest or focusing on a special feature. Meanwhile, today's multimedia forensics demands real-time or near real-time search by scanning through large surveillance video archives.
\end{itemize}
It is very challenging to immediately analyze the objects of interest or zoom in on suspicious actions from thousands of video frames. Making the big data indexable is critical to tackling the object analytics problem \cite{aved2015multi}, \cite{blasch2015dynamic}. Ideally, pattern indexes are generated in a real-time, on-site manner on the video stream instead of depending on batch processing at cloud centers. The modern edge-fog-cloud computing paradigm allows the implementation of time sensitive tasks at the network edge. In this paper, a novel event-oriented indexable and queryable intelligent surveillance (EIQIS) system is introduced, leveraging on-site edge devices that collect the information sensed in the form of frames and extract useful features to enhance situation awareness.
The rest of this paper is organized as follows. Section 2 briefly discusses background knowledge and related work. Section 3 highlights the main challenges in real-time surveillance. Section 4 introduces the rationale of the proposed indexable and queryable surveillance system. A preliminary study is presented in Section 5, which validates the concept and shows the feasibility of the system architecture. Finally, Section 6 concludes the paper with future research directions.
\section{Background Knowledge and Related Work}
Today, most available surveillance systems archive streaming video footage to be used off-line for forensics analysis \cite{chen2018smart}. Communication delays and uncertainties associated with the data transfer from image sensors to a remote computing facility limit the implementation of online surveillance tasks. However, delay sensitive applications require on-line processing. Thanks to the recent development of lightweight machine learning (ML) algorithms that require less computing power and storage space, more processing can be migrated to the edge of the network \cite{ouaddah2016fairaccess}, where no additional delay is incurred for data transmission. For tasks like anomalous behavior detection that are not affordable at the edge, instead of directly outsourcing the job to the remote cloud, near-site fog nodes are powerful enough for complex data analytics tasks.
For instance, in a smart transportation application following a hierarchical system architecture, data is accessed by the sensors implemented on buses and transferred to a fog node where contextualization and decision making happen \cite{chamasemani2013systematic}. For video surveillance systems, the remote cloud is mainly used for profile building, pattern analysis, and long term historical record analysis.
In general, a smart surveillance system includes three layers, as shown in Fig. \ref{fig:arch}. In the first layer, image analysis, the input camera frame is given to an edge device and low-level features are extracted \cite{nikouei2018real}, \cite{penmetsa2014autonomous}. The edge devices are able to conduct object detection and object tracking tasks \cite{khanezaei2014framework}, \cite{yu2018survey}. The intermediate level, considered the fog stratum, is in charge of pattern recognition tasks such as action recognition, behavior understanding, and abnormal event detection. Finally, the high level, the cloud center, is focused on system analysis, including historical profile building, global statistical analysis, and narrative reporting. Connections among the edge, fog and cloud nodes present challenges in terms of the overall platform, connections, quality of service (QoS) requirements, and preserving privacy and security.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figs/Figure1.jpg}
\caption{Layered smart surveillance system hierarchy using the edge-fog-cloud computing paradigm.}
\label{fig:arch}
\vspace{-10pt}
\end{figure}
The first step of a video surveillance system is simultaneous tracking and identification (STID) of the objects of interest in the video \cite{blasch2005multiresolution}, \cite{blasch2000data}. STID remains a challenging task when performed at the edge of the network \cite{nikouei2018intelligent}. Nowadays, once an event occurs, operators need to spend a considerable amount of time going through footage from different cameras in order to find a specific target. Even in next generation surveillance systems that incorporate image processing techniques for better decision making, performing a search in real-time or near real-time is very challenging \cite{blasch2014context}, \cite{tsakanikas2017video}.
Ideally, a surveillance system should be able to quickly and automatically identify the clips of interest based on a given query. Earlier researchers proposed video parsing techniques that automatically extract index data from video and store the index data in relational tables \cite{blasch2014quest}, \cite{hammoud2014automatic}, \cite{hampapur2007searching}. The index is then queried via SQL to retrieve events of interest quickly. However, this approach cannot meet the performance requirements of online, real-time, operator-in-the-loop interactions. Future smart surveillance video streams have to be indexable and queryable such that the operator is able to obtain the information of interest instantly.
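The relational-table indexing approach described above can be sketched with Python's standard-library `sqlite3` module. The schema, column names, and example rows are hypothetical, chosen only to illustrate how extracted index data could be queried via SQL.

```python
import sqlite3

# Hypothetical schema: one row per indexed video event.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE event_index ("
    " camera_id INTEGER, timestamp TEXT,"
    " object_class TEXT, object_count INTEGER, clip_path TEXT)"
)
rows = [
    (1, "2019-03-01T23:10:00", "person", 12, "cam1/clip_0042.mp4"),
    (1, "2019-03-01T14:05:00", "person", 3,  "cam1/clip_0017.mp4"),
    (2, "2019-03-01T23:12:00", "vehicle", 1, "cam2/clip_0009.mp4"),
]
conn.executemany("INSERT INTO event_index VALUES (?, ?, ?, ?, ?)", rows)

# Retrieve clips showing more than ten people late at night.
hits = conn.execute(
    "SELECT clip_path FROM event_index"
    " WHERE object_class = 'person' AND object_count > 10"
    "   AND time(timestamp) >= '22:00:00'"
).fetchall()
print(hits)  # -> [('cam1/clip_0042.mp4',)]
```

As the text notes, this offline table-and-query approach is fast for historical search but does not by itself meet real-time, operator-in-the-loop requirements.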
\section{Real-time Queryable Surveillance: Architecture and Challenges}
This section introduces an edge-fog-cloud computing based system architecture to achieve event-based indexable and queryable intelligent surveillance (EIQIS). It is non-trivial to extract features in real-time and use them as indexes to conduct online queries on surveillance video streams \cite{palaniappan2010efficient}. Advances in machine learning, multi-modal data fusion, and physics-based and human-derived information fusion (PHIF) show promise for EIQIS. Current systems are designed to support user responsibilities including security, surveillance, and forensics. Typically, the user provides a standing query against which the image processing generates event triggers \cite{aved2015multi}, \cite{blasch2015urref}. The user would like the system to perform these functions autonomously; however, the ultimate design would include a combination of humans in, on, or out of the loop (HIL, HON, HOON).
In order for a smart surveillance system to raise an alarm when something abnormal is detected, each processed frame requires knowledge of the preceding frames. A three-layer edge-fog-cloud hierarchical architecture reduces the delays incurred when frames are transferred to a remote cloud center. The more processing that is migrated to the network edge, the faster the features are obtained and indexes are constructed, because of the close proximity of the edge node to the geo-location of the camera. Meanwhile, due to the constraints on computation and storage capacity at the edge devices, more computing- or data-intensive tasks are outsourced to the more powerful cloud.
The first layer is the edge camera. It should be mentioned that the most reliable detection and tracking algorithms are dedicated to specific surveillance applications. Running them in a resource-constrained environment, which requires a lightweight version of the original algorithm, degrades accuracy. Thus, finding better methods is a contemporary research topic \cite{li2017dynamic}.
Once a frame is captured by the image sensor, it is either transferred to an edge device connected via a local area network (LAN) or processed on-site if the camera itself is a smart camera (edge device) with sufficient computing power. The edge node has limited computing power, so computing-intensive event detection cannot be fully executed at this level. The edge device conducts pre-processing using a convolutional neural network (CNN), which identifies the objects of interest and gives their positions in the image frame. Even with small architectures with few layers that reduce the overall computation complexity, CNNs are heavy for the edge device \cite{nikouei2018intelligent}. The edge device cannot afford to execute the CNN more than a couple of times per second. Therefore, in order to reach a higher resolution of detection, the bounding box around the object of interest is given to a tracker algorithm that uses online learning to follow the object in each frame until it moves out of view. Each time the CNN runs, the newly found bounding boxes are handed to a fast tracker such as the Kernelized Correlation Filter (KCF), improving the speed. It should be noted that although newer and more powerful edge nodes appear every day, extracting more features requires a longer processing time. Consequently, the key for real-time application is a trade-off between speed and the number of features extracted in each frame.
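The detect-then-track scheduling described above can be sketched as follows. The CNN detector and KCF-style tracker are stand-in stubs (a real system would use trained models); only the scheduling pattern — run the heavy detector every N-th frame, propagate boxes with the cheap tracker in between — is the point.

```python
# Sketch of the edge-side scheduling: a heavy CNN detector runs only every
# N-th frame, and a lightweight KCF-style tracker propagates the bounding
# boxes in between. Detector and tracker here are illustrative stubs.

DETECT_EVERY = 15  # the CNN is affordable only a couple of times per second

def cnn_detect(frame):
    """Stand-in for the CNN: returns fresh bounding boxes (x, y, w, h)."""
    return [(10 + frame, 20, 40, 80)]

class StubTracker:
    """Stand-in for a KCF-style tracker: shifts the box by a fixed motion."""
    def __init__(self, box):
        self.box = box
    def update(self, frame):
        x, y, w, h = self.box
        self.box = (x + 1, y, w, h)
        return self.box

trackers = []
boxes_per_frame = []
for frame in range(30):
    if frame % DETECT_EVERY == 0:
        # Re-initialize trackers from the latest CNN detections.
        trackers = [StubTracker(b) for b in cnn_detect(frame)]
        boxes = [t.box for t in trackers]
    else:
        boxes = [t.update(frame) for t in trackers]
    boxes_per_frame.append(boxes)

print(boxes_per_frame[15])  # -> [(25, 20, 40, 80)]
```

Lowering `DETECT_EVERY` improves accuracy at the cost of edge CPU time, which is exactly the speed-versus-features trade-off noted in the text.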
After each object is detected and tracked, features can be extracted. These features might include, but are not limited to, the object's current position, walking speed, and walking direction, and other physical features such as the angles formed by the upper-body parts, which define the pedestrian's pose \cite{turaga2008machine}. For each detected pedestrian, there is a table, updated with each frame, of keys and values for the features extracted from the video. The actual video may not need to be transferred to the fog-level device where the decision making code is executed.
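A minimal sketch of such a per-pedestrian key-value feature table is shown below. The specific feature names and the pixel-based units are assumptions for illustration; only this compact table (not the raw frames) would be sent to the fog.

```python
# Minimal sketch of the per-pedestrian key-value feature table kept at the
# edge. Only the table, not the raw video, needs to reach the fog node.

def update_feature_table(table, pid, frame_no, position):
    """Update one pedestrian's entry with features derived from a new frame."""
    prev = table.get(pid)
    if prev is None:
        speed, direction = 0.0, (0.0, 0.0)
    else:
        dx = position[0] - prev["position"][0]
        dy = position[1] - prev["position"][1]
        dt = frame_no - prev["frame"]
        speed = (dx * dx + dy * dy) ** 0.5 / dt
        direction = (dx / dt, dy / dt)
    table[pid] = {
        "frame": frame_no,
        "position": position,
        "speed": speed,          # pixels per frame (illustrative unit)
        "direction": direction,  # walking direction as a velocity vector
    }

table = {}
update_feature_table(table, "ped_1", 0, (100.0, 50.0))
update_feature_table(table, "ped_1", 10, (130.0, 90.0))
print(table["ped_1"]["speed"])  # -> 5.0
```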
The edge device is designed to conduct time-critical tasks such as feature extraction, while advanced analytics are outsourced to a more powerful, near-site node. Several edge devices from several camera feeds can be connected to a fog node, which conducts feature contextualization, indexing, and storage. One of the challenges in a surveillance system is the security of the connection between the edge and the fog. Although there are promising new technologies to address privacy and security, such as blockchain \cite{xu2018blendcac}, more development is needed to make them lightweight and robust for smaller, low-power networks. The features transmitted to the fog node can be contextualized to support decision making \cite{snidaro2016context}. Valuable data for contextualization include the location of the camera, the time of the footage, terrain information, semantic ontologies of descriptors, etc. For example, while it is normal for people to walk and stand in a campus building, it can be considered abnormal late at night when the building should be closed. Also, connecting several cameras in the same area to the same fog node gives the fog the ability to view the monitored area from different perspectives, illuminations, and contexts.
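The campus-building example above can be sketched as a fog-side contextualization rule. The rule, the opening hours, and the metadata fields are illustrative assumptions; the point is that the same raw feature becomes normal or abnormal depending on context.

```python
from datetime import datetime

# Hedged sketch of fog-side contextualization: "people present in a campus
# building" is normal during the day but flagged abnormal late at night.
# Rules, hours, and metadata fields are illustrative, not a fixed design.

BUILDING_OPEN, BUILDING_CLOSE = 7, 22  # illustrative opening hours

def contextualize(feature, camera_meta, timestamp):
    hour = timestamp.hour
    after_hours = not (BUILDING_OPEN <= hour < BUILDING_CLOSE)
    abnormal = (
        camera_meta["terrain"] == "campus_building"
        and feature["people_count"] > 0
        and after_hours
    )
    return {**feature, "location": camera_meta["location"],
            "after_hours": after_hours, "abnormal": abnormal}

meta = {"location": "Library East", "terrain": "campus_building"}
day = contextualize({"people_count": 4}, meta, datetime(2019, 3, 1, 14, 0))
night = contextualize({"people_count": 4}, meta, datetime(2019, 3, 1, 23, 30))
print(day["abnormal"], night["abnormal"])  # -> False True
```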
Another challenge that the surveillance community faces is the decision support algorithm, which may be supervised, unsupervised, or semi-supervised. The lack of labeled data for unknown situations requires semi-supervised training methods to better characterize abnormal situations. The answer may depend on the location, several other factors, and the sequence of events leading to the abnormal behavior. Also, the camera placement and the function of the monitored place differ from one deployment to another, which makes it very difficult to differentiate between normal and abnormal activity.
Historical analysis, profile building, and situation analysis are conducted by the most powerful node in the edge-fog-cloud architecture hierarchy, the cloud. The decisions made, detected false alarms, and the features that raised each alarm are sent to the cloud for future fine-tuning of the algorithms and for analytical studies. Fig. \ref{fig:arch} shows the interconnections of the nodes in the network described in this section.
\section{Making the Video Streams Indexable}
The usability of any exploited video depends on what is stored and indexed for fast retrieval, such as content-based image retrieval. The surveillance video streamed to the edge device enables feature extraction for decision making. Decision making is based on the real-time search query. Real-time video search makes the operator's job easier by returning the instances of video that are requested in a query to the system. The search string is the query that is given to the fog node. The fog node, where contextualized information from nearby cameras is stored, is the ideal level to handle search requests. The following describes how a query is handled at the fog layer:
\begin{enumerate}
\item The fog node receives the query and will check the eligibility of the machine asking for the information. The access level of the nodes in such a network is defined in a smart contract in a blockchain enabled security platform.
\item The fog node searches the index table for entries matching the query to find the corresponding camera, timestamp, and other information based on the real-time features provided, and selects them if any exist.
\item The fog node answers the search requester based on the information found.
\item Then the operator selects the cameras with the query and has the live feed or recorded clips (it is assumed that the operator has access to the edge device in charge of the camera of interest if he/she has access to the higher-level fog).
\end{enumerate}
The operator thus can search the video streams in real-time.
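The four steps above can be sketched as a single fog-side handler. The access-control set, the index entries, and the names are stand-ins (the text delegates eligibility checking to a blockchain smart contract, which is abstracted here as a lookup).

```python
# Sketch of the four-step fog-side query handling. Access control and the
# index structure are stand-ins; names and entries are illustrative.

AUTHORIZED = {"operator_7"}  # step 1: eligibility (e.g., via smart contract)

INDEX = [  # step 2: contextualized index entries from nearby cameras
    {"camera": "cam_3", "timestamp": "23:10", "people_count": 12},
    {"camera": "cam_5", "timestamp": "14:05", "people_count": 2},
]

def handle_query(requester, predicate):
    if requester not in AUTHORIZED:               # step 1: check eligibility
        return {"error": "access denied"}
    matches = [e for e in INDEX if predicate(e)]  # step 2: search the index
    return {"matches": matches}                   # step 3: answer requester
    # step 4: the operator then pulls the live feed or clip from the edge.

resp = handle_query("operator_7", lambda e: e["people_count"] > 10)
print(resp["matches"][0]["camera"])  # -> cam_3
```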
Indexing requires the association of complementary information (hashed, correlated, and linked) with the video frame for storage. Using the mapping table affords fast information retrieval. Treating the index table the same as the feature table simplifies the search operation. While there are many features extracted from the video, there might be several different indexes required by the system administrator. Features are generated in order to make a decision about the actions of the object in the video. However, indexes that are based on features might include more options. Two scenarios are plausible. First, the fog node uses the same features and adds context to make the data usable as the index table. Second, the fog node uses several edge devices (performing as microservices) to extract the required features and creates a table of indexes based on the resulting features.
\subsection{Indexing}
In order to provide faster search results, one known method used today in search engines and operating systems is to create an index table that is later used to answer search queries. Indexing means maintaining a key-value table of features of interest; once the keys are searched for (in query format), the corresponding values are the search results, identifying the files that contain the query. This way, the search is faster and there is no need to scan all files for the key values being searched for. The same principle applied to the video captured by the surveillance cameras results in efficient, real-time operations. Because the index table points to the corresponding edge device, the live camera feed or recorded footage clips are identified and sent to the query sender.
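The key-value indexing principle can be sketched as a small inverted index: each feature key maps to the set of clips containing it, so a query intersects index entries rather than scanning video files. The clip identifiers and feature keys are illustrative.

```python
from collections import defaultdict

# Sketch of the key/value indexing principle: feature keys map to the clips
# that contain them, so a query scans the table, never the video files.

def build_index(clips):
    index = defaultdict(set)
    for clip_id, feature_keys in clips.items():
        for key in feature_keys:
            index[key].add(clip_id)
    return index

clips = {  # illustrative clip -> extracted-feature-keys mapping
    "cam1_0042": {"person", "crowd", "night"},
    "cam1_0017": {"person", "day"},
    "cam2_0009": {"vehicle", "night"},
}
index = build_index(clips)

# A query is a set intersection over the keys.
result = index["person"] & index["night"]
print(sorted(result))  # -> ['cam1_0042']
```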
Once the camera captures each frame, an edge device extracts features in real-time or near real-time from the video and the features are transferred to a fog node. After the contextualization of the features, they can be used as indexes for querying when the operator needs to find something instantly. For example, suppose the operator is looking for moments of congestion of people on campus in the late-night hours. The search can be directed to the exact hours and locations, then filtered for features that report ten or more people in the same frame. Using the query-based parameters inherent in the index table leads to the corresponding video clips faster, and the operator can look for incidents that match the exact search keys. The EIQIS method is clearly more efficient than checking all camera footage in conventional security systems to find the imagery of interest.
\subsection{Features vs. Indexes}
\begin{figure}[t]
\centering
\includegraphics[width=0.425\textwidth]{figs/fig_2}
\caption{Edge feature extraction as microservices for indexing purposes.}
\label{fig:fig_2}
\vspace{-10pt}
\end{figure}
Creating indexes from the extracted features that are useful for video search supports historical analytics. However, the features of interest in abnormal behavior detection may not support an operator search, may not be sufficient, or may not be exactly the same as the indexes (key values) applicable in a typical search. Figure \ref{fig:fig_2} shows a scenario in which more feature extraction from the video is needed. The job can be divided among more than one edge device and each feature can be handled as a microservice \cite{nagothu2018microservice}. A microservice is a separate program that provides a service to a larger application. In this case, feature extraction can be considered the microservice used in the video indexing platform. More features can be extracted as a result of this architecture. If any indexes need to be added, simply adding the corresponding service to the platform expands the scope of the indexes that are used.
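The "one extractor per microservice" idea can be sketched as a registry: each feature extractor registers under a name, and every registered service contributes one field to the index. The extractor names and frame fields are hypothetical.

```python
# Sketch of the microservice view: each feature extractor is a separately
# registered service, and adding one expands the scope of the indexes.

SERVICES = {}

def register(name):
    def wrap(fn):
        SERVICES[name] = fn
        return fn
    return wrap

@register("count")
def count_people(frame):
    return len(frame["boxes"])

@register("speed")
def mean_speed(frame):
    speeds = frame["speeds"]
    return sum(speeds) / len(speeds) if speeds else 0.0

def extract_all(frame):
    # Each registered microservice contributes one index field.
    return {name: fn(frame) for name, fn in SERVICES.items()}

frame = {"boxes": [(0, 0, 5, 9), (3, 4, 5, 9)], "speeds": [1.0, 3.0]}
print(extract_all(frame))  # -> {'count': 2, 'speed': 2.0}
```

Registering a new extractor function is all that is needed to add a new index field, mirroring the expandability claimed in the text.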
\section{A Preliminary Case Study}
A preliminary proof-of-concept prototype has been built to validate the feasibility of EIQIS \cite{nikouei2018realb}. It shows that the edge devices are capable of extracting and sending features in real-time to the fog layer. The features are written into a text file and sent to the fog through a secure channel. The features are synchronized with every node of the network for added security. Figure \ref{fig:fig_3} is an example of features stored in the fog in a key-value manner, and Fig. \ref{fig:fig_4} shows the graphical output of the edge device, where the device adds a bounding box around the object of interest (e.g., person, vehicle, other) and the box follows the object. Figure \ref{fig:fig_4} presents several moments that are challenging to detect, demonstrating acceptable performance of the edge device.
\begin{figure}[t]
\centering
\includegraphics[width=0.425\textwidth]{figs/fig_3}
\caption{Example feature table for each camera.}
\label{fig:fig_3}
\vspace{-10pt}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.425\textwidth]{figs/fig_4}
\caption{Visualized features in real-time.}
\label{fig:fig_4}
\vspace{-10pt}
\end{figure}
A real-world environment validates the feasibility of the proposed system. The prototype ran on two Asus Tinker Boards configured as follows: a 1.8 GHz 32-bit quad-core ARM Cortex-A17 CPU, 2 GB of LPDDR3 dual-channel memory, and the TinkerOS operating system based on the Linux kernel. The fog layer functions are implemented on a laptop with a 2.3 GHz Intel Core i7 processor (8 cores), 16 GB of RAM, and the Ubuntu 16.04 operating system. A private blockchain network is implemented to secure the feature data transferred from edge to fog. Our private Ethereum network includes four miners, distributed across four desktops running Ubuntu 16.04, each with a 3 GHz Intel Core (2 cores) processor and 4 GB of memory. Each miner uses two CPU cores for the mining task to maintain the private blockchain network, and the resulting blocks are synchronized through the whole network so every node has a copy of the latest block. The data transfer between the fog node and the miner is carried out through an encrypted channel, so that no adversary can tamper with the surveillance data before the fog node secures the features. Python socket programming is used on both ends of the channel. More details of the prototype are reported in \cite{nikouei2018realb}.
\section{CONCLUSIONS}
Many surveillance systems available today cannot meet the performance requirements arising from real-time, human-in-the-loop interactive operations. The event-oriented indexable and queryable intelligent surveillance (EIQIS) edge-fog-cloud hierarchical architecture is promising for real-time or near real-time applications, as it allows instant querying of online surveillance video streams to give more time to first responders. In this paper, the architecture toward an event-oriented, indexable, queryable smart surveillance system is introduced. The proposed system enables querying of video in real-time based on an index table, which is created on top of the features extracted on-site by edge computing nodes. This intelligent surveillance system enables the operator to search for scenes or events of interest instantly. A preliminary study has validated the feasibility of the proposed architecture.
\addtolength{\textheight}{-12cm}
\bibliographystyle{IEEEtranS}
\subsection{Interface Visualization}
\label{sec:HCI_CollectiveVisualization}
The \textit{Collective Interface} enabled an operator to interact with multiple collectives simultaneously and each collective was required to choose the highest valued target within its search range. A screenshot of the \textit{Collective Interface} is provided in Fig. \ref{fig:hciscreenshot}. Collectives and discovered targets were geo-located on the map area in the center of the interface. The Collective Requests area (lower left) enabled the operator to change individual collective entities' behavior states in a manner explained in Section \ref{sec:HCI_CollectiveControls}. The Collective Assignments area (upper right) provided a summary of active and inactive requests sent to the respective collectives. Finally, the System Messages area (lower right) provided text alerts of collective activities (e.g., finding targets) and human input error feedback.
Four collectives and sixteen targets are shown on the map in Fig. \ref{fig:hciscreenshot}.
Box symbols represented collectives and targets. Collectives were white boxes identified by a Roman numeral above four quadrants labelled for the individual collective entity states (as described in Section \ref{sec:CAS_Model}): \textit{uncommitted} ($U$), \textit{favoring} a target ($F$), \textit{committed} to a target ($C$), or \textit{executing} a move to a target ($X$). The executing state included collective individuals in the previously described initiating ($I$) and done ($D$) states. The opacity of each quadrant indicated the percentage of the population in that state. Collective I, in Figure \ref{fig:hciscreenshot}, is primarily uncommitted or favoring, as indicated by the largely opaque $U$ and $F$ quadrants. Targets were green and blue boxes, labelled in the top right with an integer value. The opacity of a target's green upper portion indicated the quality of the target, with brighter green indicating higher quality. The opacity of the blue area of each target indicated the highest percentage of favoring agents supporting that target. Target 0, above Collective I in Figure \ref{fig:hciscreenshot}, is a high valued target (bright green top) with some support (semi-transparent blue bottom). Newly discovered targets were initially transparent, but displayed a green color as soon as two individual collective entities favored the target and returned to the hub. Early in the decision process, target values fluctuated as a result of the collective members' noisy estimates of target value. A target was highlighted with a blue outline when more than $30$\% of a collective's individuals supported it, indicating a collective was about to commit to a target (see Target 0 in Fig. \ref{fig:hciscreenshot}). The target was highlighted with a green outline as individual collective entities began to execute the movement to a new location (see Target 8 in Fig. \ref{fig:hciscreenshot}). 
The Collective II symbol indicates that the collective is executing a movement from its current location to Target 4. The movement of Collective II is represented by the green box, which moves from the collective's current position to the chosen target's location. At the end of the movement, the Collective II symbol replaces the Target 4 symbol.
Left clicking on a hub, or a target, selected it as the designated object of the operator's collective request (see Section \ref{sec:HCI_CollectiveControls}). Left clicking on a hub also highlighted targets that were in range of the collective and were supported (white outline) or not supported (yellow outline). Right clicking on targets revealed an estimate of each collectives' support for the target, as shown with Target 2. Right clicking on a hub revealed a detail flag, as shown for Collective III, with an estimate of the number of collective members in each of the four previously described states.
The abstract visualization enabled the operator to quickly identify the state of the collective's decision making process. The operator supervised collective decisions using the Collective Interface. The operator influenced the decision making process using available behavior selection controls.
\subsection{Collective Controls}
\label{sec:HCI_CollectiveControls}
The operator adjusted the collective's autonomy by completing \textit{investigate}, \textit{abandon}, or \textit{decide} requests using the Collective Request area located on the lower left hand side of Fig. \ref{fig:hciscreenshot}. Operator activities that were not related to requests were considered observation actions. Recorded collective observation actions included determining which targets were in range of a particular collective and extra left clicks on targets. Completed operator requests were added to the Collective Assignments area shown at the top right of Figure \ref{fig:hciscreenshot}. Erroneous requests were identified in order to prevent the operator from attempting to have collectives investigate targets outside their range, abandon targets that had not yet been evaluated, or decide for the collective prior to the collective exploring its search space. Erroneous requests were ignored and caused an error message to be displayed to the operator in the Systems Messages Area, which was located at the bottom right of the simulation.
The operator issued an \textit{investigate} request to increase a collective's support for a specific target by transitioning ten uncommitted entities (5\% of the collective population) to the favoring state supporting the chosen target. Additional support for the same target was achieved by reissuing the investigate request. Investigate requests for targets outside the collective's range were unacceptable and reported to the operator as errors. The \textit{abandon} request reduced a collective's support for a specific target by transitioning favoring individual entities in the decision making hub to the uncommitted state. The abandon request only needed to be issued once in order for the collective to ignore a target. Abandon requests for targets with less than $2$ favoring agents were considered erroneous and reported to the operator. The \textit{decide} request committed two individual collective entities ($1$\% of the population) to the target designated by the operator. These individuals then committed other individuals through interactions that rapidly drove the collective to execute a movement to the operator's chosen target. Issuing a decide request before at least $30$\% of the collective's population favored that target resulted in an error message.
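The three request-validation rules above can be sketched as a single checker. The population size of 200 agents is derived from the percentages in the text (10 agents = 5\%, 2 agents = 1\%); the data structures are stand-ins for the simulator's internal state.

```python
# Sketch of the request-validation rules for investigate/abandon/decide.
# POPULATION = 200 is derived from the text (10 agents = 5%, 2 agents = 1%).

POPULATION = 200

def validate_request(kind, collective, target_id):
    support = collective["support"].get(target_id, 0)  # favoring agents
    if kind == "investigate":
        if target_id not in collective["in_range"]:
            return "error: target outside collective range"
    elif kind == "abandon":
        if support < 2:
            return "error: target not yet evaluated"
    elif kind == "decide":
        if support < 0.30 * POPULATION:  # 30% support required to decide
            return "error: insufficient support to decide"
    return "ok"

c = {"in_range": {0, 4, 8}, "support": {0: 70, 4: 1}}
print(validate_request("decide", c, 0))       # -> ok (70 >= 60 favoring)
print(validate_request("investigate", c, 9))  # out-of-range error
print(validate_request("abandon", c, 4))      # fewer than 2 favoring agents
```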
Issuing a request required the operator to select the desired request type from the drop down menu, select the desired collective and target, and select the commit button to execute the request (see lower left of Fig. \ref{fig:hciscreenshot}). The reset button cleared the request information without sending the request. Recently executed requests (e.g., Collective I: Abandon Target 3) were displayed in the upper right hand corner of the monitor area, in the Collective Assignments area. Green and red circles next to each request signified whether the request was in progress (green) or completed (red). Investigate request circles changed from green to red after ten individual entities received and acknowledged the investigate request. Abandon requests remained active (green), once issued. Once a collective reached a decision, all prior requests associated with that particular collective were removed from the collective assignments area. Abandon requests were the only requests the operator was able to cancel, which required selecting the desired request in the Collective Assignments area and pushing the Cancel Assignment button.
\subsection{Experiment 1: Independent Collective Action Selection Models}
\label{sec:Experimental_Design_Best_of_N}
The first experiment tested hypothesis $H_{1}$ by comparing the explicit action selection models described in Section \ref{sec:CAS_Model} in a series of target selection decisions with no human influence. The primary independent variable was the original ($M_{1}$), or bias-reducing ($M_{2}$) collective action selection models. Each model was evaluated 10 times for 28 trials, as described at the beginning of this section. Secondary independent variables included the locations of the targets with respect to each collective's decision making hub, the targets' values, and decision difficulty (e.g., easy or difficult). Each collective made six decisions in each trial section, for a total of twelve decisions per trial.
The dependent variables for comparing the models' performance were success rate and decision time. Success rate was the ratio of the correct number of decisions made by the collective to the total number of decisions. A collective made correct decisions by identifying and moving to the highest valued target within its search area. Decision time was the time from the start of a collective decision to the completion of the collective's movement to its chosen target. The success rate and decision times were averaged over the ten runs of each of the 28 trials in order to retain a similar sample size when comparing to the human performance in the second experiment.
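The two dependent variables defined above can be computed as follows; the example decision and timing values are illustrative, not experimental data.

```python
# Sketch of the two dependent variables: success rate and decision time.

def success_rate(decisions):
    """Fraction of decisions where the collective chose the highest-valued
    target within its search area. Each item is (chosen, best)."""
    correct = sum(1 for chosen, best in decisions if chosen == best)
    return correct / len(decisions)

def mean_decision_time(times):
    """Average of (movement-complete time - decision-start time) pairs."""
    return sum(end - start for start, end in times) / len(times)

decisions = [(4, 4), (7, 7), (2, 5)]               # illustrative values
times = [(0.0, 90.0), (10.0, 130.0), (5.0, 95.0)]  # seconds
print(round(success_rate(decisions), 3), mean_decision_time(times))
# -> 0.667 100.0
```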
\subsection{Experiment 2: Collective Action Selection Models in Human-Collective Teams}
\label{sec:Experimental_HCI_Teams}
The second experiment evaluated the two collective action selection models and a baseline model within human-collective teams using the Collective Interface Visualization. The primary and secondary independent variables were identical to the first experiment; however, a baseline model, called $M_{3}$, was created. Unlike models $M_{1}$ (original) and $M_{2}$ (bias reduction), $M_{3}$ did not make independent decisions or deliberate between targets (e.g., recruitment and inhibition interactions were disabled). The human operator increased support for targets and made decisions for the $M_{3}$ collective using investigate and decide requests, respectively. The $M_{3}$ model established operators' baseline performance with a low autonomy collective decision making algorithm.
Each operator completed three twenty minute trials, with each trial corresponding to one of the collective action selection models. The trials were assigned such that half the operators completed the evaluation with $M_{1}$ followed by $M_{2}$, and the other half experienced the $M_{2}$ model first. Additionally, the trials were assigned such that half the operators for each model started the trial with difficult initial decisions and the other half began with easy initial decisions. All operators conducted the baseline $M_{3}$ trial last, with the easy initial decisions preceding the hard decisions. The baseline model was expected to benefit from the learning effects related to the problem, interface, and collective behavior when compared to the primary models. Each trial required 12 collective target selections to be made by the four collectives simultaneously, which ensured at least two collectives made two consecutive decisions in each trial section.
The dependent variables were success rate, decision time, request frequency, and intervention rate. Success rate and decision time were defined in Section \ref{sec:Experimental_Design_Best_of_N}. Request frequency was the number of requests, per decision, per minute issued by the human operators during the trials. The frequency of each request type (e.g., investigate, abandon, and decide) was also gathered. Intervention rate was the number of interventions for 12 decisions per operator. An intervention occurred when the collective had achieved at least $10$\% support for a target and the operator issued an abandon request. Interventions indicated that the operator overrode the collective's decision making process in order for the collective to choose a different target.
Additional objective metrics included the operator actions associated with observing a collective's state (e.g., clicking on a collective or target in order to view state information). During the trials, operators were required to answer probe questions \cite{Curtis} used to determine their level of situational awareness \cite{SA_Endsley}. Situational Awareness (SA) probe questions can determine a human operator's situational awareness during critical tasks \cite{Curtis} according to \citeauthor{SA_Endsley}'s \cite{SA_Endsley} three levels of situational awareness: perception, comprehension, and projection. Perception (Level 1) questions determined the operators' ability to perceive the targets and collectives as well as attributes associated with each (e.g., \enquote{Which collective is investigating Target 1?}). Comprehension (Level 2) questions determined operators' understanding of perceived elements in relation to collective decisions (e.g., \enquote{Which target is the best choice for Collective III?}). Finally, projection (Level 3) questions determined operators' ability to estimate the collective's future state based on their perception and comprehension of the current state (e.g., \enquote{Which collective will make the next decision?}). Operators were asked four Level 1, five Level 2, and three Level 3 SA probes for each model. The SA probes were similar across operators, but were written as templates and the specific collective or target was added during the experiment. The Level 1 question, \enquote{Which Collective is investigating Target \_?}, for example, was completed with an applicable target number during the trial, before asking the SA probe question.
Recorded subjective data included answers to a demographic questionnaire, performance on \citeauthor{MRT_vandenberg_mental_1978}'s \cite{MRT_vandenberg_mental_1978} Mental Rotations Test (MRT), and a post experiment questionnaire that rank ordered the models. Each trial ended with a post trial questionnaire, the NASA Task Load Index (NASA-TLX) and a 3-D Situation Awareness Rating Technique (3-D SART) \cite{SART_selcon1991workload}. The NASA-TLX provided a workload estimate, which included the weighted summation of a variety of workload components including mental demand, effort, and frustration. The 3-D SART score for situational awareness was calculated using the perceived Situational Understanding (SU), Demands on Attentional Resources (DAR), and Supply of Attentional Resources (SAR), according to the following equation: SART Score = SU - (DAR - SAR). The post-trial questionnaires focused on the perceived performance and the responsiveness of the collective during the trial. The post experiment questionnaire required the operators to rank order the different collective models according to responsiveness, performance, and ease of comprehension.
Twenty-eight operators from the Vanderbilt University campus and surrounding area completed the experiment. The 15 female and 13 male operators were predominantly in the 18 to 30 year age range, although four operators were between the ages of 31 and 50. The operators had at least completed high school. More than half the operators had completed (13 operators), or were completing (11 operators) an undergraduate degree. Each operator began the experiment by completing the informed consent paperwork, the demographic questionnaire, and the MRT. Once these items were completed, the operators received a scripted introduction to the experiment and the simulator ($5$ minutes). Prior to each trial, the operators conducted $5$ minute training sessions with the specific model that consisted of two collectives, with one collective required to make an easy decision and one required to make a difficult decision. Operators responded to the SA probes at increments of approximately one probe per minute during each trial. At the end of each trial, the operators completed the post-trial questionnaire, the NASA-TLX, and the 3-D SART. After all trials and post-trial data collection, the operators completed the post-experiment questionnaire.
\subsection{Experiment 2: Collective Action Selection Models in Human-Collective Teams}
\label{sec:HumanCollectiveResults}
The results of the Human-Collective Team experiment are presented in four sections. Section \ref{sec:HumanCollective_Independent_Comparison} compares the success rates and decision times of the independent collectives ($M_{1} SIM$ and $M_{2} SIM$) and the human-collective teams ($M_{1}$, $M_{2}$, and $M_{3}$). Section \ref{sec:Human_Collective_Actions} presents the human actions observed during the experiment with models $M_{1}$, $M_{2}$, and the baseline model, $M_{3}$. Section \ref{sec:SA_HC_Comparison} presents the observation actions and the responses to the Situational Awareness probe questions. Finally, Section \ref{sec:Subjective_HC_Comparison} presents the subjective results.
\subsubsection{Comparison Between the Independent Collectives and Human-Collective Team Experiments}
\label{sec:HumanCollective_Independent_Comparison}
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Objective Results: Success Rate Performance]{Success Rate (\%) Per Decision.}
\begin{tabular}{c|c|c"c|c"c|c|}
\cline{2-7}
& \multicolumn{2}{c"}{Overall} & \multicolumn{2}{c"}{Easy} & \multicolumn{2}{c|}{Difficult} \\ \cline{2-7}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}78.57 & \cellcolor{Gray}100 & \cellcolor{Gray}95.31 & \cellcolor{Gray}100 & \cellcolor{Gray}56.25 & \cellcolor{Gray}100 \\
\multicolumn{1}{|c|}{} & (41.09) & (0/100) & (21.19) & (0/100) & (49.78) & (0/100) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}88.39 & \cellcolor{Gray}100 & \cellcolor{Gray}94.44 & \cellcolor{Gray}100 & \cellcolor{Gray}81.41 & \cellcolor{Gray}100 \\
\multicolumn{1}{|c|}{} & (32.08) & (0/100) & (22.97) & (0/100) & (39.03) & (0/100) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}92.86 & \cellcolor{Gray}100 & \cellcolor{Gray}95.94 & \cellcolor{Gray}100 & \cellcolor{Gray}88.49 & \cellcolor{Gray}100 \\
\multicolumn{1}{|c|}{} & (25.79) & (0/100) & (19.79) & (0/100) & (32.03) & (0/100) \\ \hline
\end{tabular}
\label{table:HC_SR}
\end{table}
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Objective Results: Decision Time Performance]{Decision Time (minutes) Per Decision.}
\begin{tabular}{c|c|c"c|c"c|c|}
\cline{2-7}
& \multicolumn{2}{c"}{Overall} & \multicolumn{2}{c"}{Easy} & \multicolumn{2}{c|}{Difficult} \\ \cline{2-7}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}3.01 & \cellcolor{Gray}2.48 & \cellcolor{Gray}2.1 & \cellcolor{Gray}1.88 & \cellcolor{Gray}4.22 & \cellcolor{Gray}4.03 \\
\multicolumn{1}{|c|}{} & (1.56) & (1.16/8.58) & (0.75) & (1.16/4.79) & (1.54) & (1.6/8.58) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}3.97 & \cellcolor{Gray}3.64 & \cellcolor{Gray}3.37 & \cellcolor{Gray}3.09 & \cellcolor{Gray}4.67 & \cellcolor{Gray}4.57 \\
\multicolumn{1}{|c|}{} & (1.37) & (1.83/9.94) & (1.23) & (1.83/9.94) & (1.2) & (2.46/8.81) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}5.32 & \cellcolor{Gray}4.78 & \cellcolor{Gray}4.67 & \cellcolor{Gray} 4.2 & \cellcolor{Gray}6.24 & \cellcolor{Gray}6.03 \\
\multicolumn{1}{|c|}{} & (2.22) & (1.48/13.25) & (1.96) & (1.48/12.78) & (2.24) & (2.07/13.25) \\ \hline
\end{tabular}
\label{table:HC_DT}
\end{table}
The success rate and decision time descriptive statistics for the human-collective teams are provided in Tables \ref{table:HC_SR} and \ref{table:HC_DT} \cite{Roundtree20191}. A Kruskal-Wallis test across models $M_{1} SIM$, $M_{2} SIM$, $M_{1}$, $M_{2}$, and $M_{3}$ identified significant effects for success rate in overall decisions, \textit{$\chi^{2}$(4, N = 1680) = 523.39, $\rho$ $<$ 0.001}, easy decisions, \textit{$\chi^{2}$(4, N = 1147) = 381.3, $\rho$ $<$ 0.001}, and difficult decisions, \textit{$\chi^{2}$(4, N = 895) = 388.94, $\rho$ $<$ 0.001}. Significant effects for decision times were observed in overall decisions, \textit{$\chi^{2}$(4, N = 1680) = 687.89, $\rho$ $<$ 0.001}, easy decisions, \textit{$\chi^{2}$(4, N = 1147) = 683.52, $\rho$ $<$ 0.001}, and difficult decisions, \textit{$\chi^{2}$(4, N = 895) = 376.4, $\rho$ $<$ 0.001}.
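All of the omnibus comparisons above are Kruskal-Wallis tests, which rank the pooled samples and compare mean ranks across groups. A minimal pure-Python sketch of the H statistic (with the standard tie correction) and of the chi-square tail probability for even degrees of freedom follows; it is illustrative only, and the reported analyses presumably used a standard statistics package.

```python
import math
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic with tie correction; returns (H, df),
    where H is referred to a chi-square distribution with df = k - 1."""
    pooled = sorted(chain.from_iterable(groups))
    n = len(pooled)
    ranks = {}                       # value -> mid-rank (ties share a rank)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + j + 1) / 2   # average of ranks i+1 .. j
        i = j
    h = 12 / (n * (n + 1)) * sum(
        sum(ranks[x] for x in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
    ties = sum(t ** 3 - t for t in (pooled.count(v) for v in set(pooled)))
    if ties:                         # divide by the tie-correction factor
        h /= 1 - ties / (n ** 3 - n)
    return h, len(groups) - 1

def chi2_sf_even_df(x, df):
    """P(X > x) for a chi-square variable with even df (closed form)."""
    assert df % 2 == 0 and df > 0
    term, total = 1.0, 1.0
    for j in range(1, df // 2):
        term *= (x / 2) / j
        total += term
    return math.exp(-x / 2) * total

h, df = kruskal_wallis_h([1, 2, 3], [4, 5, 6])
print(round(h, 3), df)  # -> 3.857 1
```

For the five-model comparisons reported above, df $= 4$, and an H value in the hundreds corresponds to a vanishingly small tail probability, consistent with the reported $\rho$ $<$ 0.001.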
\def 0.49 {0.49}
\begin{figure*}[t!]
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio,height=2.5in]{Figures/Success_Human_SIM_Compare.png}
\captionsetup{width=\linewidth}
\caption{Success Rate Comparison}
\label{fig:Compare_SR}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio,height=2.5in]{Figures/DecTime_Human_SIM_Compare.png}
\captionsetup{width=\linewidth}
\caption{Decision Time Comparison}
\label{fig:Compare_DT}
\end{subfigure}
\caption{A comparison of the Success Rate and Decision times for the independent collectives, $M_{1} SIM$ and $M_{2} SIM$, and the human-collective teams, $M_{1}$ (original), $M_{2}$ (bias reducing), and $M_{3}$ (baseline).}
\label{fig:Compare_DT_and_SR}
\end{figure*}
Pairwise comparisons between the independent collectives and human-collective teams indicated significant human influence. A Tukey--Kramer test revealed significant effects between $M_{1} SIM$ and $M_{1}$ for success rates in overall decisions (\textit{$\rho$ $<$ 0.001}), easy decisions (\textit{$\rho$ $<$ 0.001}), and difficult decisions (\textit{$\rho$ $<$ 0.001}). The same test also revealed significant effects between $M_{1} SIM$ and $M_{1}$ for decision times in overall decisions (\textit{$\rho$ $<$ 0.001}) and difficult decisions (\textit{$\rho$ $<$ 0.001}), but not for easy decisions. The pairwise comparison between $M_{2} SIM$ and $M_{2}$ identified significant effects for success rates in overall decisions (\textit{$\rho$ $<$ 0.001}), easy decisions (\textit{$\rho$ $<$ 0.001}), and difficult decisions (\textit{$\rho$ $<$ 0.001}). Differences in decision times between $M_{2} SIM$ and $M_{2}$ were also significant in overall decisions (\textit{$\rho$ $<$ 0.001}), easy decisions (\textit{$\rho$ $<$ 0.001}), and difficult decisions (\textit{$\rho$ $<$ 0.001}).
The operators' influence is evident in Figure \ref{fig:Compare_DT_and_SR}, which compares the success rates and decision times of the independent models ($M_{1} SIM$ and $M_{2} SIM$) to the operators teamed with the same models ($M_{1}$ and $M_{2}$) and to the baseline, human-only, model ($M_{3}$). Operators improved the success rates of both collective models, although success rates were clearly higher for $M_{2}$ than for $M_{1}$ when making difficult decisions, as shown on the right-hand side of Figure \ref{fig:Compare_DT_and_SR} (\subref{fig:Compare_SR}). Human influence enabled a success rate for difficult decisions more than ten times higher than that of the independent collective, $M_{1} SIM$. Operators decreased decision times with both collective action selection models for easy decisions, but achieving higher difficult-decision accuracy with $M_{1}$ required slowing the original model down, as shown on the right-hand side of Figure \ref{fig:Compare_DT_and_SR} (\subref{fig:Compare_DT}). The operators using $M_{2}$ increased success rates while decreasing decision times in all decisions.
Pairwise comparisons also indicated that the collective action selection model significantly affected human-collective team performance. A Tukey--Kramer test revealed significant effects between $M_{1}$ and $M_{2}$ for success rates in overall decisions (\textit{$\rho$ $=$ 0.04}) and difficult decisions (\textit{$\rho$ $<$ 0.001}). Significant effects were also observed between $M_{1}$ and $M_{3}$ for success rates in overall decisions (\textit{$\rho$ $<$ 0.001}) and difficult decisions (\textit{$\rho$ $<$ 0.001}). No significant effects were observed for success rates between $M_{2}$ and $M_{3}$ for any decision category. A Tukey--Kramer test revealed significant effects for decision times between all human-collective teams ($M_{1}$, $M_{2}$, $M_{3}$) in overall and easy decisions (\textit{$\rho$ $<$ 0.001} for each comparison). Significant effects were observed in pairwise comparisons between each of the collective action selection models ($M_{1}$ and $M_{2}$) and $M_{3}$ for decision times in difficult decisions (\textit{$\rho$ $<$ 0.001}), but no significant effects were observed for the same metric between $M_{1}$ and $M_{2}$ in difficult decisions.
The teams with the bias reducing model, $M_{2}$, achieved approximately 9\% higher accuracy overall and 25\% higher accuracy during difficult decisions when compared to the $M_{1}$ teams. The $M_{2}$ teams required more than a minute longer for each easy decision than the $M_{1}$ teams, although the gap in decision times for difficult decisions was noticeably smaller (less than 30 seconds on average). A moderate positive correlation was found between the decision time and success rates for the $M_{1}$ teams when making difficult decisions, \textit{r = 0.51, $\rho$ $<$ 0.001}. Weak correlations were observed for the $M_{1}$ teams in overall decisions, \textit{r = -0.19, $\rho$ $<$ 0.001}, for the $M_{2}$ teams in overall, \textit{r = -0.11, $\rho$ = 0.05}, easy, \textit{r = -0.18, $\rho$ = 0.02}, and difficult decisions, \textit{r = 0.18, $\rho$ = 0.03}, as well as for the $M_{3}$ teams in difficult decisions, \textit{r = 0.25, $\rho$ $<$ 0.01}. The analysis suggests that the human-collective teams improved difficult decision accuracy with longer decision times and that this relationship was most common in the $M_{1}$ teams during difficult decisions. The operators made the fastest decisions using $M_{1}$, but achieved the lowest accuracy in difficult decisions even after slowing the collective decision process down. The operators using the baseline model, $M_{3}$, were the most accurate, but also the slowest. Finally, the $M_{2}$ human-collective teams made more accurate decisions than the $M_{1}$ teams and made these decisions faster than operators with the $M_{3}$ teams.
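The time-accuracy relationships above are summarized by correlation coefficients r between per-decision time and success. The exact correlation statistic used in the analysis is not stated in this section; the sketch below assumes a Pearson r over per-decision (time, success) pairs, with illustrative data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-decision data: longer decision times paired with
# successes (1) more often than failures (0) yield a positive r.
times   = [2.1, 3.4, 4.0, 5.2, 6.1]
success = [0, 0, 1, 1, 1]
print(round(pearson_r(times, success), 2))  # -> 0.83
```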
\subsubsection{Human Actions in the Human-Collective Team Experiment}
\label{sec:Human_Collective_Actions}
The significant influence of the operators, reported in Section \ref{sec:HumanCollective_Independent_Comparison}, resulted from the actions and awareness of the operators themselves. This activity was critical to the success of the baseline model, $M_{3}$, which required the human operators to control the collectives without the aid of a collective decision making process. The results provided in Tables \ref{table:HC_SR} and \ref{table:HC_DT}, as well as in Figure \ref{fig:Compare_DT_and_SR}, show that the human operators achieved the highest success rates with the baseline model, although these rates were not significantly greater than those of the $M_{2}$ human-collective team. The baseline decision times, however, were significantly longer than those of the other two models for all decisions.
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Objective Results: Request Frequency Performance]{Request Frequency (Number of Requests Per Decision/Decision Time).}
\begin{tabular}{c|c|c"c|c"c|c|}
\cline{2-7}
& \multicolumn{2}{c"}{Overall} & \multicolumn{2}{c"}{Easy} & \multicolumn{2}{c|}{Difficult} \\ \cline{2-7}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}0.57 & \cellcolor{Gray}0.48 & \cellcolor{Gray}0.61 & \cellcolor{Gray}0.54 & \cellcolor{Gray}0.5 & \cellcolor{Gray}0.4 \\
\multicolumn{1}{|c|}{} & (0.56) & (0/2.56) & (0.61) & (0/2.56) & (0.47) & (0/1.98) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}0.66 & \cellcolor{Gray}0.55 & \cellcolor{Gray}0.7 & \cellcolor{Gray}0.6 & \cellcolor{Gray}0.6 & \cellcolor{Gray}0.51 \\
\multicolumn{1}{|c|}{} & (0.49) & (0/2.6) & (0.56) & (0/2.6) & (0.4) & (0/2.03) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}1.33 & \cellcolor{Gray}1.18 & \cellcolor{Gray}1.4 & \cellcolor{Gray}1.21 & \cellcolor{Gray}1.22 & \cellcolor{Gray}1.08 \\
\multicolumn{1}{|c|}{} & (0.79) & (0.11/4.75) & (0.87) & (0.11/4.75) & (0.65) & (0.3/3.66) \\ \hline
\end{tabular}
\label{table:HC_Requests_Per_Decision_Time}
\end{table}
The operators influenced the collectives' behavior directly by issuing the \textit{investigate}, \textit{abandon}, and \textit{decide} requests described in Section \ref{sec:HCI_CollectiveControls}. The descriptive statistics for the frequency of all requests issued per decision per minute are summarized in Table \ref{table:HC_Requests_Per_Decision_Time}. The operators issued requests least frequently with $M_{1}$ and slightly more often with the bias reducing model, $M_{2}$. Requests were most frequent with the baseline model, $M_{3}$, which required consistent human control. A Kruskal-Wallis test revealed significant effects between the three models for request frequency in overall, \textit{$\chi^{2}$ (2, N = 1008) = 239.65, $\rho$ $<$ 0.001}, easy, \textit{$\chi^{2}$ (2, N = 569) = 121.84, $\rho$ $<$ 0.001}, and difficult decisions, \textit{$\chi^{2}$ (2, N = 439) = 120.3, $\rho$ $<$ 0.001}. A positive correlation was found between request frequency and success rate with $M_{1}$ for all decisions, \textit{r = 0.35, $\rho$ $<$ 0.001}, but the correlation was strongest for difficult decisions, \textit{r = 0.58, $\rho$ $<$ 0.001}. A weak negative correlation was found for $M_{3}$ during difficult decisions, \textit{r = -0.19, $\rho$ = 0.03}. These results suggest that $M_{1}$ was more likely to make accurate decisions when the operator issued additional requests to the collectives, especially during difficult decisions. Differences in operator request frequencies did not correspond to changes in success rates for the bias reducing model, $M_{2}$, or the baseline model, $M_{3}$. The $M_{3}$ teams achieved the highest success rates, but required twice as many requests and at least a minute longer per decision than either the original or the bias reducing collective action selection model.
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Objective Results: Number of Investigate Requests Per Decision Per Minute Performance]{Number of Investigate Requests Per Decision Per Minute.}
\begin{tabular}{c|c|c"c|c"c|c|}
\cline{2-7}
& \multicolumn{2}{c"}{Overall} & \multicolumn{2}{c"}{Easy} & \multicolumn{2}{c|}{Difficult} \\ \cline{2-7}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}0.44 & \cellcolor{Gray}0.32 & \cellcolor{Gray}0.47 & \cellcolor{Gray}0.43 & \cellcolor{Gray}0.38 & \cellcolor{Gray}0.29 \\
\multicolumn{1}{|c|}{} & (0.49) & (0/2.56) & (0.54) & (0/2.56) & (0.42) & (0/1.65) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}0.47 & \cellcolor{Gray}0.35 & \cellcolor{Gray}0.49 & \cellcolor{Gray}0.34 & \cellcolor{Gray}0.46 & \cellcolor{Gray}0.37 \\
\multicolumn{1}{|c|}{} & (0.44) & (0/2.27) & (0.5) & (0/2.27) & (0.37) & (0/1.63) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}1.07 & \cellcolor{Gray}0.95 & \cellcolor{Gray}1.12 & \cellcolor{Gray}0.97 & \cellcolor{Gray}1 & \cellcolor{Gray}0.92 \\
\multicolumn{1}{|c|}{} & (0.71) & (0/4.3) & (0.79) & (0/4.3) & (0.58) & (0/2.81) \\ \hline
\end{tabular}
\label{table:HC_Investigate_Per_Dec_Time}
\end{table}
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Objective Results: Number of Decide Requests Per Decision Per Minute Performance]{Number of Decide Requests Per Decision Per Minute.}
\begin{tabular}{c|c|c"c|c"c|c|}
\cline{2-7}
& \multicolumn{2}{c"}{Overall} & \multicolumn{2}{c"}{Easy} & \multicolumn{2}{c|}{Difficult} \\ \cline{2-7}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}0.1 & \cellcolor{Gray}0 & \cellcolor{Gray}0.12 & \cellcolor{Gray}0 & \cellcolor{Gray}0.08 & \cellcolor{Gray}0 \\
\multicolumn{1}{|c|}{} & (0.19) & (0/0.8) & (0.22) & (0/0.8) & (0.12) & (0/0.43) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}0.15 & \cellcolor{Gray}0.14 & \cellcolor{Gray}0.19 & \cellcolor{Gray}0.22 & \cellcolor{Gray}0.1 & \cellcolor{Gray}0 \\
\multicolumn{1}{|c|}{} & (0.16) & (0/0.55) & (0.18) & (0/0.55) & (0.13) & (0/0.41) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}0.23 & \cellcolor{Gray}0.21 & \cellcolor{Gray}0.26 & \cellcolor{Gray}0.24 & \cellcolor{Gray}0.2 & \cellcolor{Gray}0.17 \\
\multicolumn{1}{|c|}{} & (0.12) & (0/1.15) & (0.12) & (0.08/1.15) & (0.12) & (0/0.9) \\ \hline
\end{tabular}
\label{table:HC_Decide_Per_Decision_Time}
\end{table}
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Objective Results: Number of Abandon Requests Per Decision Per Minute Performance]{Number of Abandon Requests Per Decision Per Minute.}
\begin{tabular}{c|c|c"c|c"c|c|}
\cline{2-7}
& \multicolumn{2}{c"}{Overall} & \multicolumn{2}{c"}{Easy} & \multicolumn{2}{c|}{Difficult} \\ \cline{2-7}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}0.02 & \cellcolor{Gray}0 & \cellcolor{Gray}0.02 & \cellcolor{Gray}0 & \cellcolor{Gray}0.03 & \cellcolor{Gray}0 \\
\multicolumn{1}{|c|}{} & (0.09) & (0/0.6) & (0.08) & (0/0.6) & (0.1) & (0/0.5) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}0.02 & \cellcolor{Gray}0 & \cellcolor{Gray}0.02 & \cellcolor{Gray}0 & \cellcolor{Gray}0.02 & \cellcolor{Gray}0 \\
\multicolumn{1}{|c|}{} & (0.07) & (0/0.38) & (0.07) & (0/0.38) & (0.07) & (0/0.3) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}0.03 & \cellcolor{Gray}0 & \cellcolor{Gray}0.03 & \cellcolor{Gray}0 & \cellcolor{Gray}0.03 & \cellcolor{Gray}0 \\
\multicolumn{1}{|c|}{} & (0.07) & (0/0.36) & (0.07) & (0/0.36) & (0.07) & (0/0.35) \\ \hline
\end{tabular}
\label{table:HC_Abandon_Per_Dec_Time}
\end{table}
The type of request issued to the collectives characterizes the kind of influence the operators exerted. Tables \ref{table:HC_Investigate_Per_Dec_Time}, \ref{table:HC_Decide_Per_Decision_Time}, and \ref{table:HC_Abandon_Per_Dec_Time} provide the number of investigate, decide, and abandon requests issued per decision per minute. Investigate requests were the most common, given that the abandon and decide requests resulted in a persistent change to the collective's state and were required less frequently. A Kruskal-Wallis test identified significant effects for investigate requests in overall, \textit{$\chi^{2}$(2, N = 1008) = 223.38, $\rho$ $<$ 0.001}, easy, \textit{$\chi^{2}$(2, N = 708) = 114.41, $\rho$ $<$ 0.001}, and difficult decisions, \textit{$\chi^{2}$(2, N = 550) = 111.64, $\rho$ $<$ 0.001}. The success rates and investigate requests were weakly correlated for all decisions when using $M_{1}$, \textit{r = 0.36, $\rho$ $<$ 0.001}, and $M_{2}$, \textit{r = 0.11, $\rho$ = 0.04}, as well as for difficult decisions using $M_{3}$, \textit{r = -0.19, $\rho$ = 0.03}. Investigate requests were strongly correlated with success rate for difficult decisions when using $M_{1}$, \textit{r = 0.62, $\rho$ $<$ 0.001}, indicating that greater human influence improved decision accuracy. The number of investigate requests was significantly higher for $M_{3}$ than for the other models, but changes in investigate request frequencies for $M_{2}$ and $M_{3}$ did not correspond to changes in success rates for easy or difficult decisions.
Decide requests occurred most frequently with $M_{3}$, as expected given that this model did not make independent decisions. A Kruskal-Wallis test identified significant effects between models for decide requests in overall, \textit{$\chi^{2}$(2, N = 1008) = 164.34, $\rho$ $<$ 0.001}, easy, \textit{$\chi^{2}$(2, N = 569) = 80.24, $\rho$ $<$ 0.001}, and difficult decisions, \textit{$\chi^{2}$(2, N = 439) = 66.07, $\rho$ $<$ 0.001}. A moderate positive correlation was observed between decide requests and success rate with $M_{1}$ in both overall, \textit{r = 0.2, $\rho$ $<$ 0.001}, and difficult, \textit{r = 0.36, $\rho$ $<$ 0.001}, decisions. A moderate negative correlation was also observed for difficult decisions, \textit{r = -0.21, $\rho$ $<$ 0.01}, with $M_{3}$. The positive correlation for $M_{1}$ suggests that the operators often directed the collectives to accurate choices in difficult decisions.
Abandon requests were also less common than investigate requests, although a Kruskal-Wallis test revealed significant effects for abandon requests for overall, \textit{$\chi^{2}$(2, N = 1008) = 11.27, $\rho$ $<$ 0.01}, and easy decisions, \textit{$\chi^{2}$(2, N = 569) = 14.10, $\rho$ $<$ 0.001}. A weak, negative correlation was observed between abandon request frequency and success rate for $M_{1}$ decisions overall, \textit{r = -0.13, $\rho$ = 0.02}, but no other correlations were identified.
The differences between the models with respect to abandon request frequency were less informative than the differences observed in the intervention rate, which captures the abandon requests used to force a collective to ignore a target that had already gained at least 10\% of the collective population's support. The descriptive statistics for the number of interventions per operator are presented in Table \ref{table:HC_Interventions_Per_Decision_Time}. A Kruskal-Wallis test identified significant effects across the models for the number of interventions in overall decisions, \textit{$\chi^{2}$(2, N = 84) = 10.35, $\rho$ $<$ 0.01}.
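Operationally, an intervention is an abandon request filtered by the support level the target had already accrued. A sketch, assuming a hypothetical record format of (target id, support fraction at the time of the request):

```python
def count_interventions(abandon_requests, support_threshold=0.10):
    """Count the abandon requests that qualify as interventions: the
    target had already gained at least `support_threshold` of the
    collective's support when the request was issued."""
    return sum(1 for _, support in abandon_requests
               if support >= support_threshold)

# Two of the four abandon requests below targeted options with >= 10% support.
requests = [("t3", 0.02), ("t7", 0.15), ("t1", 0.10), ("t4", 0.05)]
print(count_interventions(requests))  # -> 2
```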
\begin{table}[h!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Objective Results: Number of Interventions Per Participant]{Number of Interventions (Abandoned Targets with 10\% Support) Per Participant.}
\begin{tabular}{c|c|c|}
\cline{2-3}
& \multicolumn{2}{c|}{Overall} \\ \cline{2-3}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}4.79 & \cellcolor{Gray}4 \\
\multicolumn{1}{|c|}{} & (2.97) & (0/12) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}2.21 & \cellcolor{Gray}1.5 \\
\multicolumn{1}{|c|}{} & (1.99) & (0/7) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}5 & \cellcolor{Gray}3.5 \\
\multicolumn{1}{|c|}{} & (5.11) & (0/18) \\ \hline
\end{tabular}
\label{table:HC_Interventions_Per_Decision_Time}
\end{table}
These results indicate that although requests were generally less frequent with $M_{1}$, the requests that were issued were more likely to correspond to changes in the human-collective team's success rate than with the $M_{2}$ and $M_{3}$ models. The positive correlations between $M_{1}$'s success rate and the number of requests, investigate requests, interventions, and decide requests suggest that the operators pushed the collectives towards accurate choices in difficult decisions. Using the available controls, the operators were able to take advantage of $M_{1}$'s speed during easy decisions, but were required to take control in difficult decisions in order to match the success rate that $M_{2} SIM$ achieved on difficult decisions without human interaction. Further, the high success rate observed with the baseline model, $M_{3}$, indicates that the operators were capable of driving the collectives to the correct decision, but these decisions took significantly longer to make and required many more operator actions to complete.
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Objective Results: Collective Observations Performance]{Collective Observations (\%) Per Decision.}
\begin{tabular}{c|c|c"c|c"c|c|}
\cline{2-7}
& \multicolumn{2}{c"}{Overall} & \multicolumn{2}{c"}{Easy} & \multicolumn{2}{c|}{Difficult} \\ \cline{2-7}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}77.98 & \cellcolor{Gray}100 & \cellcolor{Gray}65.62 & \cellcolor{Gray}100 & \cellcolor{Gray}94.44 & \cellcolor{Gray}100 \\
\multicolumn{1}{|c|}{} & (41.5) & (0/100) & (47.62) & (0/100) & (22.99) & (0/100) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}86.01 & \cellcolor{Gray}100 & \cellcolor{Gray}80 & \cellcolor{Gray}100 & \cellcolor{Gray}92.95 & \cellcolor{Gray}100 \\
\multicolumn{1}{|c|}{} & (34.74) & (0/100) & (40.11) & (0/100) & (25.68) & (0/100) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}90.18 & \cellcolor{Gray}100 & \cellcolor{Gray}88.32 & \cellcolor{Gray}100 & \cellcolor{Gray}92.81 & \cellcolor{Gray}100 \\
\multicolumn{1}{|c|}{} & (29.8) & (0/100) & (32.19) & (0/100) & (25.93) & (0/100) \\ \hline
\end{tabular}
\label{table:HC_Collective_Observations}
\end{table}
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Objective Results: Target Observations Performance]{Target Observations (\%) Per Decision.}
\begin{tabular}{c|c|c"c|c"c|c|}
\cline{2-7}
& \multicolumn{2}{c"}{Overall} & \multicolumn{2}{c"}{Easy} & \multicolumn{2}{c|}{Difficult} \\ \cline{2-7}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}20.54 & \cellcolor{Gray}0 & \cellcolor{Gray}10.94 & \cellcolor{Gray}0 & \cellcolor{Gray}33.33 & \cellcolor{Gray}0 \\
\multicolumn{1}{|c|}{} & (40.46) & (0/100) & (31.29) & (0/100) & (47.3) & (0/100) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}22.62 & \cellcolor{Gray}0 & \cellcolor{Gray}21.11 & \cellcolor{Gray}0 & \cellcolor{Gray}24.36 & \cellcolor{Gray}0 \\
\multicolumn{1}{|c|}{} & (41.9) & (0/100) & (40.92) & (0/100) & (43.06) & (0/100) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}41.07 & \cellcolor{Gray}0 & \cellcolor{Gray}41.12 & \cellcolor{Gray}0 & \cellcolor{Gray}41.01 & \cellcolor{Gray}0 \\
\multicolumn{1}{|c|}{} & (49.27) & (0/100) & (49.33) & (0/100) & (49.36) & (0/100) \\ \hline
\end{tabular}
\label{table:HC_Target_Observations}
\end{table}
\subsubsection{Situational Awareness}
\label{sec:SA_HC_Comparison}
Operators using $M_{1}$, $M_{2}$, and $M_{3}$ maintained varying levels of situational awareness while interacting with the collectives. The descriptive statistics for the operators' observation actions, including collective observations and target observations, are presented in Tables \ref{table:HC_Collective_Observations} and \ref{table:HC_Target_Observations}, respectively. The displayed percentages are the ratio of the number of collective and target observation actions to the total actions for each decision. The observation percentages differed significantly across the human-collective teams. A Kruskal-Wallis test identified significant effects between the models for collective observations in overall, \textit{$\chi^{2}$(2, N = 1008) = 19.95, $\rho$ $<$ 0.001}, and easy decisions, \textit{$\chi^{2}$(2, N = 569) = 29.77, $\rho$ $<$ 0.001}. Weak negative correlations were found between success rate and collective observations in overall decisions for $M_{1}$, \textit{r = -0.12, $\rho$ = 0.03}, and $M_{2}$, \textit{r = -0.12, $\rho$ = 0.03} (the calculated values were identical). A Kruskal-Wallis test identified significant effects between the models for target observations per decision in overall, \textit{$\chi^{2}$(2, N = 1008) = 42.47, $\rho$ $<$ 0.001}, easy, \textit{$\chi^{2}$(2, N = 569) = 49.38, $\rho$ $<$ 0.001}, and difficult, \textit{$\chi^{2}$(2, N = 439) = 9.31, $\rho$ $<$ 0.01}, decisions. Weak positive correlations were found between target observations and success rate with the $M_{3}$ teams for overall, \textit{r = 0.14, $\rho$ = 0.01}, and difficult, \textit{r = 0.16, $\rho$ = 0.05}, decisions.
The operators made the most collective observation actions with the baseline model, $M_{3}$, as expected, although the number of collective observations during difficult decisions was similar across the models. Target observations were notably more common with $M_{3}$ than with the other models across decision difficulties. Operators used fewer observation actions for easy decisions with $M_{1}$ than with $M_{2}$, but this relationship was reversed for difficult decisions. The increased target observations with $M_{1}$ during difficult decisions were not correlated with that model's success rates; however, as with the collective observations, target observations with $M_{1}$ were significantly more common for difficult decisions than for the corresponding easy decisions, indicating that operators inspected $M_{1}$ more closely when decisions were difficult.
The final objective metric was the operators' responses to the SA probe questions, which evaluated the operators' understanding of the scenario while interacting with the collectives. The descriptive statistics for the correct responses at each SA level are provided in Table \ref{table:HC_SA_Responses} \cite{Roundtree20191}. No significant effects were observed between the models for overall SA probe response accuracy, which exceeded 85\% for all models. The response accuracy was lowest for $SA_{3}$, which required the operators to forecast a model's future behavior (see Section \ref{sec:Experimental_Design_Best_of_N}). A Kruskal-Wallis test identified significant effects between the models for the $SA_{3}$ probe questions, \textit{$\chi^{2}$(2, N = 84) = 7.57, $\rho$ = 0.02}. The operators achieved a significantly higher $SA_{3}$ correct response rate when using $M_{2}$, as compared to the other models. Kruskal-Wallis tests also identified significant effects between the SA levels within each model: $M_{1}$, \textit{$\chi^{2}$(2, N = 56) = 64.63, $\rho$ $<$ 0.001}; $M_{2}$, \textit{$\chi^{2}$(2, N = 56) = 73.68, $\rho$ $<$ 0.001}; and $M_{3}$, \textit{$\chi^{2}$(2, N = 56) = 52.34, $\rho$ $<$ 0.001}. The human-collective team correct response percentage dropped more than 8\% between $SA_{1}$ and $SA_{3}$ when using $M_{1}$ and 18\% for $M_{3}$. The human-collective teams using $M_{2}$, in contrast, experienced less than a 2\% reduction between $SA_{1}$ and $SA_{3}$. The SA probe correct response percentages demonstrate that the human operators maintained a consistent ability across all models to access information about the collectives ($SA_{1}$) and understand the collectives' processes ($SA_{2}$).
The $SA_{3}$ correct response percentages suggest that the use of the $M_{2}$ model either improved the human operator's ability to forecast the collective state, or afforded the human operator more opportunities to properly respond.
\begin{table}[h!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Objective Results: SA Probe]{Percent Correct SA Probe Questions by SA Level.}
\begin{tabular}{c|c|c"c|c"c|c"c|c|}
\cline{2-9}
& \multicolumn{2}{c"}{$SA_{0}$} & \multicolumn{2}{c"}{$SA_{1}$} & \multicolumn{2}{c"}{$SA_{2}$} & \multicolumn{2}{c|}{$SA_{3}$}\\ \cline{2-9}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}85.12 & \cellcolor{Gray}91.67 & \cellcolor{Gray}86.31 & \cellcolor{Gray}100 & \cellcolor{Gray}87.32 & \cellcolor{Gray}100 & \cellcolor{Gray}78.57 & \cellcolor{Gray}100 \\
\multicolumn{1}{|c|}{} & (17.18) & (8.33/100) & (21.06) & (0/100) & (18.23) & (25/100) & (31.38) & (0/100) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}89.88 & \cellcolor{Gray}91.67 & \cellcolor{Gray}91.67 & \cellcolor{Gray}100 & \cellcolor{Gray}88.39 & \cellcolor{Gray}100 & \cellcolor{Gray}89.88 & \cellcolor{Gray}100 \\
\multicolumn{1}{|c|}{} & (10.96) & (58.33/100) & (11.11) & (66.67/100) & (14.6) & (60/100) & (20.46) & (33.33/100) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}87.2 & \cellcolor{Gray}91.67 & \cellcolor{Gray}94.05 & \cellcolor{Gray}100 & \cellcolor{Gray}91.43 & \cellcolor{Gray}100 & \cellcolor{Gray}76.79 & \cellcolor{Gray}75 \\
\multicolumn{1}{|c|}{} & (10.75) & (58.33/100) & (13) & (66.67/100) & (12.68) & (60/100) & (16.57) & (50/100) \\ \hline
\end{tabular}
\label{table:HC_SA_Responses}
\end{table}
The situational awareness results demonstrate that the operators needed to take many more observation actions with $M_{3}$ than with the collective action selection models. Across decision difficulty, the observation activity of the $M_{2}$ and $M_{3}$ teams was consistent, but the observation behavior of the $M_{1}$ teams noticeably increased between easy and difficult decisions. The SA probe correct response rates were higher than anticipated for all three models, which indicates that the Collective Interface Visualization was sufficient to enable operator situational awareness. The consistently better $SA_{3}$ response percentages for human-collective teams using $M_{2}$ strongly suggest an advantage of this model over the other two for the human operator. Due to the simultaneous decision-making during the trials, associating individual SA probe questions with decision problem difficulty was not possible; however, these results suggest that future examination of SA probe responses under different decision difficulties is likely to further distinguish these models.
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Subjective Results: SART]{SART Results for Overall Score, Situational Understanding (SU), Demand on Attentional Resources (DAR), and Supply of Attentional Resources (SAR) with Overall = SU - (DAR - SAR).}
\begin{tabular}{c|c|c"c|c"c|c"c|c|}
\cline{2-9}
& \multicolumn{2}{c"}{Overall Score} & \multicolumn{2}{c"}{SU} & \multicolumn{2}{c"}{DAR} & \multicolumn{2}{c|}{SAR}\\ \cline{2-9}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}6 & \cellcolor{Gray}6 & \cellcolor{Gray}5.75 & \cellcolor{Gray}6 & \cellcolor{Gray}5 & \cellcolor{Gray}5 & \cellcolor{Gray}5.25 & \cellcolor{Gray}5.5 \\
\multicolumn{1}{|c|}{} & (2.28) & (1/13) & (1) & (3/7) & (1.22) & (1/7) & (1.48) & (2/7) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}6.68 & \cellcolor{Gray}6.5 & \cellcolor{Gray}6.07 & \cellcolor{Gray}6 & \cellcolor{Gray}5.07 & \cellcolor{Gray}5 & \cellcolor{Gray}5.68 & \cellcolor{Gray}6 \\
\multicolumn{1}{|c|}{} & (2.26) & (3/13) & (0.9) & (4/7) & (1.18) & (1/6) & (1.09) & (3/7) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}6.39 & \cellcolor{Gray}6 & \cellcolor{Gray}6.07 & \cellcolor{Gray}6 & \cellcolor{Gray}5.04 & \cellcolor{Gray}5 & \cellcolor{Gray}5.36 & \cellcolor{Gray}5 \\
\multicolumn{1}{|c|}{} & (2.08) & (4/11) & (0.98) & (4/7) & (1.43) & (1/7) & (1.31) & (3/7) \\ \hline
\end{tabular}
\label{table:HC_SART}
\end{table}
\subsubsection{Subjective Results}
\label{sec:Subjective_HC_Comparison}
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Subjective Results: NASA-TLX]{Significant NASA-TLX results for Overall Score, Mental Demand, and Temporal Demand. No significant effects were observed for the Physical, Performance, Effort, and Frustration NASA-TLX components (omitted).}
\begin{tabular}{c|c|c"c|c"c|c|}
\cline{2-7}
& \multicolumn{2}{c"}{Overall} & \multicolumn{2}{c"}{Mental} & \multicolumn{2}{c|}{Temporal} \\ \cline{2-7}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median &\cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}58.31 & \cellcolor{Gray}60.67 & \cellcolor{Gray}22.19 & \cellcolor{Gray}23.33 & \cellcolor{Gray}11.55 & \cellcolor{Gray}9.67 \\
\multicolumn{1}{|c|}{} & (17.63) & (9/89.33) & (6.38) & (1.00/31.67) & (8.41) & (0.00/28.33) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}57.06 & \cellcolor{Gray}56.83 & \cellcolor{Gray}23.58 & \cellcolor{Gray}25.00 & \cellcolor{Gray}10.94 & \cellcolor{Gray}10.33\\
\multicolumn{1}{|c|}{} & (16.47) & (5.67/83.33) & (6.28) & (3.00/31.67)
& (7.60) & (0.00/24.00)\\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}50.63 & \cellcolor{Gray}54.17 & \cellcolor{Gray}16.54 & \cellcolor{Gray}18.50 & \cellcolor{Gray}7.49 & \cellcolor{Gray}5.33\\
\multicolumn{1}{|c|}{} & (17.56) & (9.33/80.33) & (9.10) & (0.00/33.33) & (6.57) & (0.00/22.67) \\ \hline
\end{tabular}
\label{table:HC_NASA_TLX}
\end{table}
This section presents the subjective data including the operator's reported situational awareness, workload, post-trial questionnaires, post-experiment questionnaires, and the Mental Rotation Tests. The descriptive statistics for the 3-D SART and NASA-TLX are presented in Tables \ref{table:HC_SART} and \ref{table:HC_NASA_TLX} \cite{Roundtree20191}, respectively. A Kruskal-Wallis test did not reveal significant effects between the models for the overall 3-D SART score, or any of the 3-D SART components. The scores were similar across the models, but slightly higher for $M_{2}$.
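The 3-D SART composite in Table \ref{table:HC_SART} combines the three components as Overall = SU - (DAR - SAR). A quick arithmetic check, using the mean component scores reported in the table, confirms that the reported overall means are consistent with this formula:

```python
# Recompute the 3-D SART composite, Overall = SU - (DAR - SAR), from the
# mean component scores reported in the SART table.
def sart_overall(su, dar, sar):
    return su - (dar - sar)

# (SU, DAR, SAR, reported Overall) means per model
reported = {
    "M1": (5.75, 5.00, 5.25, 6.00),
    "M2": (6.07, 5.07, 5.68, 6.68),
    "M3": (6.07, 5.04, 5.36, 6.39),
}
for model, (su, dar, sar, overall) in reported.items():
    # each recomputed composite matches the reported overall mean
    assert abs(sart_overall(su, dar, sar) - overall) < 0.01
```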
A Kruskal-Wallis test revealed no significant effects for the overall NASA-TLX score, but identified significant effects between the models for mental demand, \textit{$\chi^{2}$(2, N = 84) = 22.166, $\rho$ $<$ 0.001}, and temporal demand, \textit{$\chi^{2}$(2, N = 84) = 8.8327, $\rho$ = 0.012}. A pairwise comparison using a Tukey-Kramer test revealed significant effects between $M_{1}$ and $M_{3}$ (\textit{$\rho$ = 0.003}) and between $M_{2}$ and $M_{3}$ (\textit{$\rho$ $<$ 0.001}) for mental demand. A similar test revealed significant effects for temporal demand between $M_{1}$ and $M_{3}$ (\textit{$\rho$ = 0.023}) and between $M_{2}$ and $M_{3}$ (\textit{$\rho$ = 0.032}). The operators reported higher mental and temporal demand when using models $M_{1}$ and $M_{2}$. Higher mental demand indicates greater required perceptual or decision-making activity. The higher mental demand reported for $M_{1}$ and $M_{2}$ suggests that operators experienced greater demand when sharing decision-making tasks with the collectives. When making the decisions themselves, as with $M_{3}$, the operators did not need to consider what the collective was doing. The higher temporal demand is likely due to the fact that both $M_{1}$ and $M_{2}$ made independent decisions, whether or not the human operator intervened. These models likely placed additional pressure on the human operator to act quickly in order to influence each collective's decisions before the collective made an independent decision. $M_{3}$ did not impose similar pressure on the human operator to act.
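The omnibus statistic used throughout this section can be sketched as follows. The Kruskal-Wallis H statistic compares mean ranks across the three model conditions and is chi-square distributed with $k-1$ degrees of freedom under the null hypothesis. The ratings below are hypothetical placeholders (the study's raw per-operator data are not reproduced here), and the tie correction is omitted because all values in the sketch are distinct:

```python
# Sketch of the Kruskal-Wallis omnibus test. Ratings are hypothetical
# placeholders, chosen distinct so no tie correction is needed.
def kruskal_wallis(*groups):
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}   # 1-based ranks
    n = len(pooled)
    # H = 12/(N(N+1)) * sum over groups of n_i * (mean rank - (N+1)/2)^2
    return 12.0 / (n * (n + 1)) * sum(
        len(g) * (sum(rank[x] for x in g) / len(g) - (n + 1) / 2) ** 2
        for g in groups
    )

m1 = [22, 25, 20, 24, 23, 26, 21, 27]   # hypothetical workload ratings
m2 = [34, 36, 33, 35, 37, 32, 38, 31]
m3 = [15, 18, 12, 19, 16, 14, 13, 17]

h = kruskal_wallis(m1, m2, m3)          # here h = 20.48
# h exceeds the chi-square(2) critical value 5.99 (alpha = 0.05), so a
# pairwise post-hoc comparison (Tukey-Kramer in the study) would follow.
```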
The post-trial questionnaires required Likert scale responses, on a scale of 1 (worst) to 7 (best), regarding the effectiveness of each type of request, the collective model's responsiveness to requests, the collective model's independent target selection ability (Performance), and ease of understanding (Comprehension). The descriptive statistics are shown in Tables \ref{table:HC_post_trial_response} and \ref{table:HC_post_trial_performance_understanding}, respectively. A Kruskal-Wallis test identified significant effects between the models for the effectiveness of the Abandon request, \textit{$\chi^{2}$(2, N = 84) = 6.33, $\rho$ = 0.04}, and the models' independent performance, \textit{$\chi^{2}$(2, N = 84) = 6.8, $\rho$ = 0.03}. The fact that the operators rated the effectiveness of the Abandon request lowest for the baseline model is not surprising, since that model only persistently investigated targets dictated by the human. The low performance reported for $M_{1}$ is consistent with the lower performance of the independent model, $M_{1} SIM$, compared to $M_{2} SIM$, and with the lower performance of the $M_{1}$ human-collective teams compared to the $M_{2}$ human-collective teams.
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Subjective Results: Post Trial Request Evaluation]{Post Trial Request Type Effectiveness Ranking (1-low, 7-high).}
\begin{tabular}{c|c|c"c|c"c|c|}
\cline{2-7}
& \multicolumn{2}{c"}{Investigate} & \multicolumn{2}{c"}{Abandon} & \multicolumn{2}{c|}{Decide} \\ \cline{2-7}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}5.07 & \cellcolor{Gray}5 & \cellcolor{Gray}6.04 & \cellcolor{Gray}6 & \cellcolor{Gray}5.71 & \cellcolor{Gray}7 \\
\multicolumn{1}{|c|}{} & (1.25) & (2/7) & (1.35) & (1/7) & (1.94) & (1/7) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}4.75 & \cellcolor{Gray}5 & \cellcolor{Gray}6.18 & \cellcolor{Gray}7 & \cellcolor{Gray}5.57 & \cellcolor{Gray}6 \\
\multicolumn{1}{|c|}{} & (1.53) & (2/7) & (1.42) & (1/7) & (1.99) & (1/7) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}5.18 & \cellcolor{Gray}5.5 & \cellcolor{Gray}5.29 & \cellcolor{Gray}5.5 & \cellcolor{Gray}6.54 & \cellcolor{Gray}7 \\
\multicolumn{1}{|c|}{} & (1.68) & (1/7) & (1.76) & (1/7) & (0.92) & (4/7) \\ \hline
\end{tabular}
\label{table:HC_post_trial_response}
\end{table}
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Subjective Results: Post Trial Performance and Understanding]{Post Trial Performance and Understanding Model Ranking (1-low, 7-high).}
\begin{tabular}{c|c|c"c|c|}
\cline{2-5}
& \multicolumn{2}{c"}{Performance} & \multicolumn{2}{c|}{Understanding} \\ \cline{2-5}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}5.11 & \cellcolor{Gray}5 & \cellcolor{Gray}5.39 & \cellcolor{Gray}5.5 \\
\multicolumn{1}{|c|}{} & (0.99) & (3/7) & (1.31) & (2/7) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}5.54 & \cellcolor{Gray}6 & \cellcolor{Gray}5.82 & \cellcolor{Gray}6 \\
\multicolumn{1}{|c|}{} & (1.29) & (3/7) & (1.16) & (3/7) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}5.75 & \cellcolor{Gray}6 & \cellcolor{Gray}5.93 & \cellcolor{Gray}7 \\
\multicolumn{1}{|c|}{} & (1.43) & (2/7) & (1.46) & (3/7) \\ \hline
\end{tabular}
\label{table:HC_post_trial_performance_understanding}
\end{table}
\begin{table}[bp!]
\centering
\captionsetup{aboveskip=3pt}
\caption[Human Trials Subjective Results: Post Experiment Evaluation]{Post Experiment Model Ranking (1-best, 3-worst).}
\begin{tabular}{c|c|c"c|c"c|c|}
\cline{2-7}
& \multicolumn{2}{c"}{Responsiveness} & \multicolumn{2}{c"}{Performance} & \multicolumn{2}{c|}{Comprehension} \\ \cline{2-7}
& \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median & \cellcolor{Gray}Mean & \cellcolor{Gray}Median \\
& (SD) & (Min/Max) & (SD) & (Min/Max) & (SD) & (Min/Max) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{1}$}} & \cellcolor{Gray}1.5 & \cellcolor{Gray}1.5 & \cellcolor{Gray}2 & \cellcolor{Gray}2 & \cellcolor{Gray}2.5 & \cellcolor{Gray}2.5 \\
\multicolumn{1}{|c|}{} & (0.51) & (1/2) & (1.02) & (1/3) & (0.51) & (2/3) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{2}$}} & \cellcolor{Gray}1.5 & \cellcolor{Gray}1.5 & \cellcolor{Gray}2 & \cellcolor{Gray}2 & \cellcolor{Gray}2.5 & \cellcolor{Gray}2.5 \\
\multicolumn{1}{|c|}{} & (0.51) & (1/2) & (1.02) & (1/3) & (0.51) & (2/3) \\ \thickhline
\multicolumn{1}{|c|}{\multirow{2}{*}{$M_{3}$}} & \cellcolor{Gray}3 & \cellcolor{Gray}3 & \cellcolor{Gray}2 & \cellcolor{Gray}2 & \cellcolor{Gray}1 & \cellcolor{Gray}1 \\
\multicolumn{1}{|c|}{} & (0) & (3/3) & (0) & (2/2) & (0) & (1/1) \\ \hline
\end{tabular}
\label{table:HC_post_experiment_ranking}
\end{table}
The post-experiment questionnaire required the operators to rank order the models according to overall Responsiveness to requests, overall Performance, and overall Comprehension. The descriptive statistics for the post-experiment rankings are summarized in Table \ref{table:HC_post_experiment_ranking}. A Kruskal-Wallis test revealed significant effects for Responsiveness, \textit{$\chi^{2}$(2, N = 28) = 62.25, $\rho$ $<$ 0.001}, and Comprehension, \textit{$\chi^{2}$(2, N = 28) = 62.25, $\rho$ $<$ 0.001} (identical values). The overall ranking of the baseline model, $M_{3}$, is interesting: it was consistently ranked the lowest for Responsiveness, the highest for Comprehension, and between the other models for Performance.
The final subjective metric was the Mental Rotations Test (MRT) scores. The MRT has a minimum score of $0$ and a maximum possible score of $24$. The operators' MRT scores had a mean of 10.9 (standard deviation = $\pm 5.5$, median = 10, minimum = 1, maximum = 24) \cite{Roundtree20191}. These results are virtually identical to those of a large study comprised of 636 participants, with a mean score of 10.8 and a standard deviation of $\pm 5$ \cite{MRT_peters1995redrawn}. The MRT results were compared to the human-collective team success rates for each problem type. A Spearman correlation test identified only weak to moderate positive correlations between the MRT results and success rates with model $M_{1}$ for overall, \textit{r = 0.2, $\rho$ $<$ 0.001}, easy, \textit{r = 0.15, $\rho$ = 0.03}, and difficult decisions, \textit{r = 0.36, $\rho$ $<$ 0.001}. These findings suggest that operators with higher spatial awareness were slightly more likely to make accurate decisions with the $M_{1}$ collective action selection model.
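The Spearman rank correlation used above can be sketched as follows; with no tied values it reduces to $r = 1 - 6\sum_i d_i^2 / (n(n^2-1))$, where $d_i$ is the rank difference for operator $i$. The MRT scores and success rates below are hypothetical illustrations, not the study's data:

```python
# Sketch of the Spearman rank correlation relating MRT scores to success
# rates. With no tied values, rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)).
def spearman_rho(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rnk, i in enumerate(order, start=1):
            r[i] = rnk
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

mrt     = [4, 9, 11, 14, 17, 21]                 # hypothetical MRT scores
success = [0.60, 0.55, 0.70, 0.58, 0.66, 0.64]   # hypothetical success rates

rho = spearman_rho(mrt, success)   # about 0.37: weak-to-moderate, positive
```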
\section{Introduction}
\label{sec:HCI_Introduction}
\input{HCI_ECAS_1_Intro}
\section{Related Work}
\label{sec:Related_Work}
\input{HCI_ECAS_2_Related}
\section{Explicit Collective Action Selection Model}
\label{sec:CAS_Model}
\input{HCI_ECAS_3_CAS}
\section{Human-Collective Interface}
\label{sec:HCI_Interface}
\input{HCI_ECAS_5_HCI}
\section{Experimental Design}
\label{sec:HumanCollectiveExperimentalDesign}
\input{HCI_ECAS_6_Experiment_Design}
\section{Results}
\label{sec:OverallResults}
\input{HCI_ECAS_7_Results}
\section{Discussion}
\label{sec:HumanCollectiveDiscussion}
\input{HCI_ECAS_8_Discussion}
\section{Conclusion}
\label{sec:Conclusion}
\input{HCI_ECAS_9_Conclusion}
\section{Acknowledgments}
\label{sec:Acknowledgments}
This work was partially funded by the US Office of Naval Research Awards N000141210987, N00014161302, and N000141613025. The work of Jason R. Cody was supported by the United States Military Academy and the United States Army Advanced Civil Schooling (ACS) program. The views and conclusions contained herein are those of the authors and are not to be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the United States Military Academy, the U.S. Army, or the U.S. Government.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction and statement of results}\label{intro} We prove that stable Arakelov invariants of a curve over a number field are polynomial in the Belyi degree. We apply our results to give algorithmic, geometric and Diophantine applications.
\subsection{Bounds for Arakelov invariants of three-point covers} Let $\overline{\mathbb{Q}}$ be an algebraic closure of the field of rational numbers $\mathbb{Q}$.
Let $X$ be a smooth projective connected curve over $\overline{\mathbb{Q}}$ of genus~$g$. Belyi \cite{Belyi} proved that there exists a finite morphism $X\to \mathbb{P}^1_{\overline{\mathbb{Q}}}$ ramified over at most three points. Let $\deg_B(X)$ denote the Belyi degree of $X$, i.e., the minimal degree of a finite morphism $X\to \mathbb{P}^1_{\overline{\mathbb{Q}}}$ unramified over $\mathbb{P}^1_{\overline{\mathbb{Q}}}\backslash\{0,1,\infty\}$. Since the topological fundamental group of the projective line $\mathbb{P}^1(\mathbb{C})$ minus three points is finitely generated, the set of $\overline{\mathbb{Q}}$-isomorphism classes of curves with bounded Belyi degree is finite.
We prove that, if $g\geq 1$,
the Faltings height $h_{\Fal}(X)$,
the Faltings delta invariant $\delta_{\Fal}(X)$,
the discriminant $\Delta(X)$ and the self-intersection of
the dualizing sheaf $e(X)$ are bounded by a polynomial in $\deg_B(X)$; the precise definitions of these Arakelov invariants of $X$ are given in Section \ref{invariants}.
\begin{thm}\label{mainthm} For any smooth projective connected curve $X$ over $\overline{\mathbb{Q}}$ of genus $g\geq 1$,
\[ \begin{array}{ccccc} -\log(2\pi) g & \leq & h_{\Fal}(X) & \leq & 13\cdot 10^6 g\deg_{B}(X)^5\\
0 & \leq & e(X) & \leq & 3\cdot 10^7 (g-1) \deg_{B}(X)^5 \\
0 & \leq & \Delta(X) & \leq & 5\cdot 10^8 g^2 \deg_{B}(X)^5 \\
-10^8 g^2 \deg_{B}(X)^5 & \leq & \delta_{\Fal}(X) & \leq & 2\cdot 10^8 g\deg_{B}(X)^5. \end{array} \]
\end{thm}
The Arakelov invariants in Theorem \ref{mainthm} all have a different flavour to them.
For example, the Faltings height $h_{\Fal}(X)$ plays a key role in Faltings' proof of his finiteness theorem on abelian varieties; see \cite{Faltings2}. On the other hand, the strict positivity of $e(X)$ (when $g\geq 2$) is related to the Bogomolov conjecture; see \cite{Szpiro7}.
The discriminant $\Delta(X)$ ``measures'' the bad reduction of the curve $X/\overline{\mathbb{Q}}$, and appears in Szpiro's discriminant conjecture for semi-stable elliptic curves; see \cite{Szpiro6}. Finally, as was remarked by Faltings in his introduction to \cite{Faltings1}, Faltings' delta invariant $\delta_{\Fal}(X)$ can be viewed as the minus logarithm of a ``distance''
to the boundary of the moduli space of compact connected Riemann surfaces of genus~$g$.
We were first led to investigate this problem by work of Edixhoven, de Jong and Schepers on covers of complex algebraic surfaces with fixed branch locus; see \cite{EdJoSc}.
They conjectured an arithmetic analogue (\cite[Conjecture 5.1]{EdJoSc}) of their main theorem (Theorem 1.1 in \emph{loc. cit.}). We use our results to prove this conjecture; see Section \ref{conjecture} for a more precise statement.
\subsection{Outline of proof}
To prove Theorem \ref{mainthm} we will use Arakelov theory for curves over a number field $K$. To apply Arakelov theory in this context, we will work with \textit{arithmetic surfaces} associated to such curves, i.e., regular projective models over the ring of integers $O_K$ of $K$. We refer the reader to Section \ref{arakelovs} for precise definitions and basic properties of Arakelov's intersection pairing on an arithmetic surface. Then, for any smooth projective connected curve $X$ over $\overline{\mathbb{Q}}$ of genus $g\geq 1$, we define the Faltings height $h_{\Fal}(X)$, the discriminant $\Delta(X)$, Faltings' delta invariant $\delta_{\Fal}(X)$ and the self-intersection of the dualizing sheaf $e(X)$ in Section \ref{invariants}. These are the four Arakelov invariants appearing in Theorem \ref{mainthm}.
We introduce two functions on $X(\overline{\mathbb{Q}})$ in Section \ref{invariants}: the canonical Arakelov height function and the Arakelov norm of the Wronskian differential. We show that, to prove Theorem \ref{mainthm}, it suffices to bound the canonical height of some non-Weierstrass point and the Arakelov norm of the Wronskian differential at this point; see Theorem \ref{upperboundinv} for a precise statement.
We estimate Arakelov-Green functions and Arakelov norms of Wronskian differentials on finite \'etale covers of the modular curve $Y(2)$ in Theorem \ref{MerklResult} and Proposition \ref{Wronskian2}, respectively. In our proof we use an explicit version of a result of Merkl on the Arakelov-Green function; see Theorem \ref{Merkl}. This version of Merkl's theorem was obtained by Peter Bruin in his master's thesis. The proof of this version of Merkl's theorem is reproduced in the appendix by Peter Bruin.
In Section \ref{belyiheights} we prove the existence of a non-Weierstrass point on $X$ of bounded height; see Theorem \ref{heightboundlast}. The proof of Theorem \ref{heightboundlast} relies on our bounds for Arakelov-Green functions (Theorem \ref{MerklResult}), the existence of a ``wild'' model (Theorem \ref{model}) and Lenstra's generalization of Dedekind's discriminant conjecture for discrete valuation rings of characteristic 0 (Proposition \ref{different0}).
A precise combination of the above results constitutes the proof of Theorem \ref{mainthm} given in Section \ref{proofofmaintheorem}.
\subsection{Arakelov invariants of covers of curves with fixed branch locus}\label{coversofcurves} We apply Theorem \ref{mainthm} to prove explicit bounds for the height of a cover of curves. Let us be more precise.
For any finite subset $B\subset \mathbb{P}^1(\overline{\mathbb{Q}})$ and integer $d\geq 1$, the set of smooth projective connected curves $X$ over $\overline{\mathbb{Q}}$ such that there exists a finite morphism $X\to \mathbb{P}^1_{\overline{\mathbb{Q}}}$ \'etale over $\mathbb{P}^1_{\overline{\mathbb{Q}}}-B$ of degree $d$ is finite. In particular, the Faltings height of $X$ is bounded by a real number depending only on $B$ and $d$. In this section we give an explicit version of this statement. To state our result we need to define the height of $B$.
For any finite set $B\subset \mathbb{P}^1(\overline{\mathbb{Q}})$, define the (exponential) height as $H_B= \max \{ H(\alpha): \alpha \in B\}$, where the height $H(\alpha)$ of an element $\alpha$ in $\overline{\mathbb{Q}}$ is defined as $H(\alpha) = \left( \prod_v \max(1,\Vert \alpha\Vert_v) \right)^{1/[K:\mathbb{Q}]}$.
Here $K$ is a number field containing $\alpha$ and the product runs over the set of normalized valuations $v$ of $K$. (As in \cite[Section 2]{Khadjavi} we require our normalization to be such that the product formula holds.)
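For example, for a rational number $\alpha = a/b$ in lowest terms (so one may take $K=\mathbb{Q}$), this definition collapses to $H(a/b)=\max(|a|,|b|)$: the archimedean place contributes $\max(1,|a|/|b|)$, the places dividing $b$ together contribute $|b|$, and the remaining places contribute $1$. A minimal sketch of this special case:

```python
# Height of a rational number: for alpha = a/b in lowest terms, the
# product over all normalized places of Q reduces to max(|a|, |b|).
from fractions import Fraction

def height(alpha):
    alpha = Fraction(alpha)   # Fraction normalizes to lowest terms
    return max(abs(alpha.numerator), abs(alpha.denominator))

assert height(Fraction(3, 5)) == 5
assert height(Fraction(-7, 2)) == 7
assert height(Fraction(6, 4)) == 3    # 6/4 = 3/2 in lowest terms
```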
\begin{thm}\label{mainthmintro}
Let $U$ be a non-empty open subscheme in $\mathbb{P}^1_{\overline{\mathbb{Q}}}$ with complement $B\subset \mathbb{P}^1(\overline{\mathbb{Q}})$. Let $N$ be the number of elements in the orbit of $B$ under the action of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$. Then, for any finite morphism $\pi:Y\to \mathbb{P}^1_{\overline{\mathbb{Q}}}$ \'etale over $U$, where $Y$ is a smooth projective connected curve over $\overline{\mathbb{Q}}$ of genus $g\geq 1$,
\[ \begin{array}{ccccc} -\log(2\pi)g & \leq & h_{\Fal}(Y) & \leq & 13\cdot 10^6 g(4NH_B)^{45N^3 2^{N-2}N!}(\deg \pi)^5 \\
0 &\leq & e(Y) & \leq & 3\cdot 10^7(g-1)(4NH_B)^{45N^3 2^{N-2}N!}(\deg \pi)^5 \\
0 &\leq & \Delta(Y) & \leq & 5\cdot 10^8 g^2(4NH_B)^{45N^3 2^{N-2}N!}(\deg \pi)^5 \\
- 10^8 g^2 (4NH_B)^{45N^3 2^{N-2}N!}(\deg \pi)^5 & \leq & \delta_{\Fal}(Y) & \leq & 2\cdot 10^8 g (4NH_B)^{45 N^3 2^{N-2}N!} (\deg \pi)^5.
\end{array} \]
\end{thm}
Theorem \ref{mainthmintro} is a consequence of Theorem \ref{mainthm2}. Note that in Theorem \ref{mainthm2} we consider branched covers of any curve over $\overline{\mathbb{Q}}$ (i.e., not only $\mathbb{P}^1_{\overline{\mathbb{Q}}}$). We use Theorem \ref{mainthmintro} to prove \cite[Conjecture 5.1]{EdJoSc}.
\subsection{Diophantine application}
Explicit bounds for Arakelov invariants of curves of genus $g\geq 2$ over a number field $K$ and with bad reduction outside a finite set $S$ of finite places of $K$ imply famous conjectures in Diophantine geometry such as the \textit{effective Mordell conjecture} and the \textit{effective Shafarevich conjecture}; see \cite{Remo} and \cite{Szpiro1}. We note that Theorem \ref{mainthm} shows that one ``could'' replace Arakelov invariants by the Belyi degree to prove these conjectures. We use this philosophy to deal with cyclic covers of prime degree. In fact, in \cite{JvK}, joint with von K\"anel, we utilize Theorem \ref{mainthm} and the theory of logarithmic forms to prove Szpiro's small points conjecture (\cite[p. 284]{Szpiro3} and \cite{Szpiro4}) for curves that are cyclic covers of the projective line of prime degree; see \cite[Theorem 3.1]{JvK} for a precise statement. In particular, we prove Szpiro's small points conjecture for hyperelliptic curves.
\subsection{Modular curves, Fermat curves, Hurwitz curves and Galois Belyi curves}
Let $X$ be a smooth projective connected curve over $\overline{\mathbb{Q}}$ of genus $g\geq 2$. We say that $X$ is a Fermat curve if there exists an integer $n\geq 4$ such that $X$ is isomorphic to the planar curve $\{x^n+y^n =z^n\}$. Moreover, we say that $X$ is a Hurwitz curve if $\#\mathrm{Aut}(X) = 84(g-1)$. Also, we say that $X$ is a Galois Belyi curve if the quotient $X/\mathrm{Aut}(X)$ is isomorphic to $\mathbb{P}^1_{\overline{\mathbb{Q}}}$ and the morphism $X\to X/\mathrm{Aut}(X)$ is ramified over exactly three points; see \cite[Proposition 2.4]{ClVo}, \cite{Wolfart1} or \cite{Wolfart2}. Note that Fermat curves and Hurwitz curves are Galois Belyi curves. Finally, we say that $X$ is a modular curve if $X_\mathbb{C}$ is a classical congruence modular curve with respect to some (hence any) embedding $\overline{\mathbb{Q}}\to \mathbb{C}$.
If $X$ is a Galois Belyi curve, we have $\deg_B(X) \leq 84(g-1)$. In \cite{Zograf} Zograf proved that, if $X$ is a modular curve, then $\deg_B(X) \leq 128(g+1)$. Combining these bounds with Theorem \ref{mainthm} we obtain the following corollary.
\begin{cor}\label{modferwol}
Let $X$ be a smooth projective connected curve over $\overline{\mathbb{Q}}$ of genus $g\geq 1$. Suppose that $X$ is a modular curve or Galois Belyi curve. Then \[ \max( h_{\Fal}(X),e(X),\Delta(X), \vert \delta_{\Fal}(X)\vert) \leq 2\cdot 10^{19} g^2(g+1)^5 .\]
\end{cor}
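The constant $2\cdot 10^{19}$ in Corollary \ref{modferwol} can be checked by substituting the larger degree bound, $\deg_B(X)\leq 128(g+1)$, into Theorem \ref{mainthm}: since $128^5 = 2^{35} \approx 3.44\cdot 10^{10}$ and $g\leq g^2$, $g-1\leq g^2$ for $g\geq 1$, each bound is dominated by $2\cdot 10^{19}\, g^2(g+1)^5$. A quick numerical check of the leading constants:

```python
# Each leading constant from Theorem 1.1, multiplied by the (128)^5 factor
# coming from deg_B(X) <= 128*(g+1), stays below the corollary's constant
# 2e19 (the remaining g-dependence is absorbed into g^2 * (g+1)^5).
factor = 128 ** 5                      # = 2**35
for c in (13e6, 3e7, 5e8, 1e8, 2e8):   # constants from Theorem 1.1
    assert c * factor <= 2e19
# the discriminant bound 5e8 * 128^5 ~ 1.72e19 is the tightest case
assert 5e8 * factor > 1.7e19
```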
\begin{opm}
Let $\Gamma \subset \mathrm{SL}_2(\mathbb{Z})$ be a finite index subgroup, and let $X$ be the compactification of $\Gamma\backslash \mathbb{H}$ obtained by adding the cusps, where $\Gamma$ acts on the complex upper half-plane $\mathbb{H}$ via M\"obius transformations. Let $X(1)$ denote the compactification of $\mathrm{SL}_2(\mathbb{Z})\backslash \mathbb{H}$. The inclusion $\Gamma\subset \mathrm{SL}_2(\mathbb{Z})$ induces a morphism $X\to X(1)$. For $\overline{\mathbb{Q}}\subset \mathbb{C}$ an embedding, there is a unique finite morphism $Y\to \mathbb{P}^1_{\overline{\mathbb{Q}}}$ of smooth projective connected curves over $\overline{\mathbb{Q}}$ corresponding to $X\to X(1)$. The Belyi degree of $Y$ is bounded from above by the index $d$ of $\Gamma$ in $\mathrm{SL}_2(\mathbb{Z})$. In particular, \[ \max( h_{\Fal}(Y),e(Y),\Delta(Y), \vert \delta_{\Fal}(Y)\vert) \leq 10^{9} d^7. \]
\end{opm}
\begin{opm}
Non-explicit versions of Corollary \ref{modferwol} were previously known for certain modular curves. Firstly, polynomial bounds for Arakelov invariants of $X_0(n)$ with $n$ squarefree were previously known; see \cite[Th\'eor\`eme 1.1]{Ullmo}, \cite[Corollaire 1.3]{Ullmo}, \cite{AbUl}, \cite[Th\'eor\`eme 1.1]{MicUll} and \cite{JorKra2}. The proofs of these results rely on the theory of modular curves. Also, similar results for Arakelov invariants of $X_1(n)$ with $n$ squarefree were shown in \cite{EdJo3} and \cite{Mayer}. Finally, bounds for the self-intersection of the dualizing sheaf of a Fermat curve of prime exponent are given in \cite{CuKu} and \cite{Ku}.
\end{opm}
\subsection{The Couveignes-Edixhoven-Bruin algorithm}
Corollary \ref{modferwol} guarantees that, under the Riemann hypothesis for $\zeta$-functions of number fields, the Couveignes-Edixhoven-Bruin algorithm to compute coefficients of modular forms runs in polynomial time; see Theorem \ref{CoEdBr} for a more precise statement.
\subsection*{Conventions} By $\log$ we mean the principal value of the natural logarithm. Finally, we define the maximum of the empty set and the product taken over the empty set as 1.
\subsection*{Acknowledgements} I would like to thank Peter Bruin, Bas Edixhoven and Robin de Jong. They introduced me to Arakelov theory and Merkl's theorem, and I am grateful to them for many inspiring discussions and their help in writing this article. Also, I would like to thank Rafael von K\"anel and Jan Steffen M\"uller for motivating discussions about this article. I would like to thank Jean-Beno\^it Bost and Gerard Freixas for discussions on Arakelov geometry, Yuri Bilu for inspiring discussions, J\"urg Kramer for discussions on Faltings' delta invariant, Hendrik Lenstra and Bart de Smit for their help in proving Proposition \ref{different0}, Qing Liu for answering our questions on models of finite morphisms of curves and Karl Schwede for helpful discussions about the geometry of surfaces.
\section{Arakelov geometry of curves over number fields}
We are going to apply Arakelov theory to smooth projective geometrically connected curves~$X$ over number fields~$K$. In~\cite{Arakelov} Arakelov defined an intersection theory on the \emph{arithmetic surfaces} attached to such curves. In~\cite{Faltings1} Faltings extended Arakelov's work. In this section we aim to give the definitions and results that we need later (and, at the very least, to fix our notation).
We start with some preparations concerning Riemann surfaces and arithmetic surfaces. In Section \ref{invariants} we define the (stable) Arakelov invariants of $X$ appearing in Theorem \ref{mainthm}. Finally, we prove bounds for the Arakelov invariants of $X$ in terms of the height and the Arakelov norm of the Wronskian differential at a non-Weierstrass point; see Theorem \ref{upperboundinv}.
\subsection{Arakelov invariants of Riemann surfaces} \label{admissible}
Let $X$ be a compact connected Riemann surface of genus $g\geq 1$. The space of holomorphic differentials $\mathrm{H}^0(X,\Omega_X^1)$ carries a natural hermitian inner product:
\begin{eqnarray*}\label{eqn_nat_inner_pro} (\omega,\eta) &\mapsto& \frac{i}{2} \int_X \omega \wedge \overline{\eta}. \end{eqnarray*} For any orthonormal basis $(\omega_1,\ldots,\omega_g)$ with respect to this inner product, the Arakelov $(1,1)$-form is the smooth positive real-valued $(1,1)$-form $\mu$ on~$X$ given by $\mu =\frac{i}{2g} \sum_{k=1}^g \omega_k \wedge \overline{\omega_k}$. Note that $\mu$ is independent of the choice of orthonormal basis. Moreover, $\int_X \mu=1$.
Let $\gr_X$ be the Arakelov-Green function on $(X\times X)\backslash \Delta$, where $\Delta \subset X\times X$ denotes the diagonal; see \cite{Arakelov}, \cite{deJo}, \cite{EdJo1} or \cite{Faltings1}. The Arakelov-Green functions determine certain metrics whose curvature forms are multiples of $\mu$, called \textit{admissible metrics}, on all line bundles~$\mathcal{O}_X(D)$, where $D$ is a divisor on~$X$, as well as on the holomorphic cotangent bundle~$\Omega^1_X$. Explicitly: for $D=\sum_P n_P P$ a divisor on~$X$, the metric $\| {\cdot}\|$ on $\mathcal{O}_X(D)$ satisfies $\log\|1\|(Q) = \gr_X(D,Q)$ for all $Q$ away from the support of~$D$, where $\gr_X(D,Q) := \sum_P n_P \gr_X(P,Q)$. Furthermore, for a local coordinate $z$ at a point $a$ in $X$, the metric $\Vert \cdot \Vert_{\mathrm{Ar}}$ on the sheaf $\Omega^1_{X}$ satisfies \[ -\log \Vert dz \Vert_{\mathrm{Ar}}(a) = \lim_{b\to a}\left( \gr_{X}(a,b) - \log \vert z(a) - z(b) \vert \right). \] We will work with these metrics on~$\mathcal{O}_X(P)$ and $\Omega_X^1$ (as well as on tensor product combinations of them) and refer to them as \textit{Arakelov metrics}. A metrised line bundle $\mathcal{L}$ is called \textit{admissible} if, up to a constant scaling factor, it is isomorphic to one of the admissible
bundles~$\mathcal{O}_X(D)$. The line bundle $\Omega^1_X$ endowed with the above metric is admissible; see \cite{Arakelov}.
For any admissible line bundle~$\mathcal{L}$, we endow the determinant of cohomology \[\lambda(\mathcal{L}) = \det \mathrm{H}^0(X,\mathcal{L}) \otimes \det \mathrm{H}^1(X,\mathcal{L})^\vee\] of the underlying line bundle with the Faltings metric; see \cite[Theorem 1]{Faltings1}. We normalize this metric so that the metric on $\lambda(\Omega^1_X) =\det \mathrm{H}^0(X,\Omega^1_X)$ is induced by the hermitian inner product on~$\mathrm{H}^0(X,\Omega_X^1)$ given above.
Let $\mathbb{H}_g$ be the Siegel upper half space of complex symmetric $g$-by-$g$-matrices with positive definite imaginary part. Let $\tau$ in~$\mathbb{H}_g$ be the period matrix attached to a symplectic basis of $\mathrm{H}_1(X,\mathbb{Z})$ and consider the analytic Jacobian $J_\tau(X) = \mathbb{C}^g /(\mathbb{Z}^g + \tau \mathbb{Z}^g)$ attached to~$\tau$. On $\mathbb{C}^g$ one has a theta function $\vartheta(z;\tau)=\vartheta_{0,0}(z;\tau) = \sum_{n\in\mathbb{Z}^g} \exp(\pi i\,{}^t\hspace{-0.1em}n \tau n + 2\pi i\, {}^t\hspace{-0.1em}n z)$, giving rise to a reduced effective divisor~$\Theta_0$ and a line bundle
$\mathcal{O}(\Theta_0)$ on~$J_\tau(X)$. The function $\vartheta$ is not well-defined on~$J_\tau(X)$. Instead, we consider the function
\begin{eqnarray}\label{eqn_thetanorm}
\|\vartheta\|(z;\tau) &=&
(\det \Im(\tau))^{1/4} \exp(-\pi\,{}^t\hspace{-0.1em}y
(\Im(\tau))^{-1} y)|\vartheta(z;\tau)|,
\end{eqnarray}
with $y = \Im(z)$. One can check that $\|\vartheta\|$ descends to a function on~$J_\tau(X)$. Now consider on the other hand the set $\mathrm{Pic}_{g-1}(X)$ of divisor classes of degree $g-1$ on~$X$. It comes with a canonical subset $\Theta$ given by the classes of effective divisors and a canonical bijection $\mathrm{Pic}_{g-1}(X)\;\tilde{\longrightarrow}\; J_\tau(X)$ mapping $\Theta$ onto~$\Theta_0$. As a result, we can equip $\mathrm{Pic}_{g-1}(X)$ with the structure of a compact complex manifold, together with a divisor $\Theta$ and a line bundle~$\mathcal{O}(\Theta)$. Note that we obtain $\|\vartheta\|$ as a function on~$\mathrm{Pic}_{g-1}(X)$. It can be checked that this function is independent of the choice of~$\tau$. Furthermore, note that $\|\vartheta\|$ gives a canonical way to put a metric on the line bundle $\mathcal{O}(\Theta)$ on~$\mathrm{Pic}_{g-1}(X)$.
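As a purely illustrative sanity check (not part of the argument), one can verify numerically in the case $g=1$ that $\|\vartheta\|$ is invariant under translation by the lattice $\mathbb{Z}+\tau\mathbb{Z}$, so that it descends to $J_\tau(X)$. The following Python sketch uses a truncated theta series; the sample values of $z$ and $\tau$ are arbitrary.

```python
import cmath
import math

def theta(z, tau, N=40):
    # Truncated Riemann theta sum for g = 1:
    # theta(z; tau) = sum_n exp(pi*i*n^2*tau + 2*pi*i*n*z)
    return sum(cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

def theta_norm(z, tau):
    # ||theta||(z; tau) = (Im tau)^(1/4) * exp(-pi*y^2 / Im tau) * |theta(z; tau)|
    y = z.imag
    return (tau.imag ** 0.25) * math.exp(-math.pi * y * y / tau.imag) * abs(theta(z, tau))

tau = 0.3 + 1.1j   # an arbitrary point of the upper half-plane
z = 0.17 + 0.42j   # an arbitrary argument
v = theta_norm(z, tau)
# Invariance under the lattice Z + tau*Z:
assert abs(theta_norm(z + 1, tau) - v) < 1e-9
assert abs(theta_norm(z + tau, tau) - v) < 1e-9
```

The invariance under $z\mapsto z+\tau$ rests on the functional equation $\vartheta(z+\tau;\tau)=\exp(-\pi i\tau-2\pi iz)\vartheta(z;\tau)$, whose modulus is exactly cancelled by the Gaussian factor in (\ref{eqn_thetanorm}).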
For any line bundle $\mathcal{L}$ of degree~$g-1$ there is a canonical isomorphism from $\lambda(\mathcal L)$ to $\mathcal{O}(-\Theta)[\mathcal{L}]$, the fibre of $\mathcal{O}(-\Theta)$ at the point $[\mathcal{L}]$ in $\mathrm{Pic}_{g-1}(X)$ determined by~$\mathcal{L}$. Faltings proves that when we give both sides the metrics discussed above, the norm of this isomorphism is a constant independent of~$\mathcal{L}$; see \cite[Section 3]{Faltings1}. We will write this norm as $\exp(\delta_{\Fal}(X)/8)$ and refer to $\delta_{\Fal}(X)$ as Faltings' delta invariant of $X$.
Let $S(X)$ be the invariant of $X$ defined in \cite[Definition 2.2]{deJo}. More explicitly, by \cite[Theorem 2.5]{deJo},
\begin{eqnarray}\label{SX} \log S(X) &=& -\int_X \log \| \vartheta \| (gP-Q) \cdot \mu(P), \end{eqnarray} where $Q$ is any point on $X$. It is related to Faltings' delta invariant $\delta_{\Fal}(X)$. In fact, let $(\omega_1,\ldots,\omega_g)$ be
an orthonormal basis of $\mathrm{H}^0(X,\Omega_X^1)$. Let $b$ be a point on $X$ and let $z$ be a local
coordinate about $b$. Write $\omega_k = f_k dz$ for $k=1,\ldots,g$. We have a holomorphic function
\[W_z(\omega) = \det\left( \frac{1}{(l-1)!} \frac{d^{l-1}f_k}{dz^{l-1}}\right)_{1\leq k,l\leq g}\]
locally about $b$ from which we build the $g(g+1)/2$-fold holomorphic differential $ W_z(\omega) (dz)^{\otimes g(g+1)/2}$.
It is readily checked that this holomorphic differential is independent of the choice of local coordinate and orthonormal basis.
Thus, the holomorphic differential $ W_z(\omega) (dz)^{\otimes g(g+1)/2}$ extends over $X$ to give a non-zero global section, denoted by $\mathrm{Wr}$, of the line bundle $\Omega^{\otimes g(g+1)/2}_{X}$. The divisor of the non-zero global section $\mathrm{Wr}$, denoted by $\mathcal{W}$, is the divisor of Weierstrass points. This divisor is effective of degree $g^3-g$. We follow \cite[Definition 5.3]{deJo} and denote the constant norm of the canonical isomorphism of (abstract) line bundles \[\Omega_X^{g(g+1)/2} \otimes_{\mathcal{O}_X}\left( \Lambda^g \mathrm{H}^0(X,\Omega^1_X) \otimes_{\mathbb{C}} \mathcal{O}_X \right)^{\vee}\longrightarrow \mathcal{O}_X(\mathcal{W}) \] by $R(X)$. Then, \begin{eqnarray}\label{Sinvariant} \log S(X) & =& \frac{1}{8}\delta_{\Fal}(X) + \log R(X). \end{eqnarray} Moreover, for any non-Weierstrass point $b$ in $X$,\begin{eqnarray}\label{Wronskian} \gr_X(\mathcal W,b) - \log R(X) &=& \log \Vert \mathrm{Wr}\Vert_{\mathrm{Ar}}(b).\end{eqnarray}
\subsection{Arakelov's intersection pairing on an arithmetic surface} \label{arakelovs}
Let $K$ be a number field with ring of integers $O_K$, and let $S=\Spec O_K$. Let $p:\mathcal{X}\to S$ be an arithmetic surface, i.e., an integral regular flat projective $S$-scheme of relative dimension 1 with geometrically connected fibres. For the sake of clarity, let us note that $p:\mathcal{X}\to S$ is a regular projective model of the generic fibre $\mathcal{X}_K \to \Spec K$ in the sense of \cite[Definition~10.1.1]{Liu2}.
In this section, we will assume the genus of the generic fibre $\mathcal{X}_K$ to be positive. An Arakelov divisor $D$ on $\mathcal{X}$ is a divisor $D_{\fin}$ on $\mathcal{X}$, together with a contribution at infinity $D_{\inff} = \sum_\sigma \alpha_{\sigma} F_\sigma$, where $\sigma$ runs over the embeddings $\sigma:K\longrightarrow \mathbb{C}$ of $K$ into the complex numbers. Here the $\alpha_\sigma$ are real numbers and the $F_\sigma$ are formally the ``fibers at infinity'', corresponding to the Riemann surfaces $\mathcal{X}_\sigma$ associated to the algebraic curves $\mathcal{X}\times_{O_K,\sigma} \mathbb{C}$. We let $\widehat{\Div}(\mathcal{X})$ denote the group of Arakelov divisors on $\mathcal{X}$. To a non-zero rational function $f$ on $\mathcal{X}$, we associate an Arakelov divisor $\widehat{\divv}(f) := (f)_{\fin} + (f)_{\inff}$ with $(f)_{\fin}$ the usual divisor associated to $f$ on $\mathcal{X}$, and $(f)_{\inff} = \sum_\sigma v_\sigma(f) F_\sigma$, where $v_\sigma(f):= -\int_{\mathcal{X}_\sigma} \log\vert f\vert_\sigma \cdot \mu_\sigma$. Here $\mu_\sigma$ is the Arakelov $(1,1)$-form on $\mathcal{X}_\sigma$. We will say that two Arakelov divisors on $\mathcal{X}$ are linearly equivalent if their difference is of the form $\widehat{\divv}(f)$ for some non-zero rational function $f$ on $\mathcal{X}$. We let $\widehat{\Cl}(\mathcal{X})$ denote the group of Arakelov divisors modulo linear equivalence on $\mathcal{X}$.
In \cite{Arakelov} Arakelov showed that there exists a unique symmetric bilinear map $(\cdot, \cdot):\widehat{\Cl}(\mathcal{X})\times \widehat{\Cl}(\mathcal{X})\longrightarrow \mathbb{R}$ with the following properties:
\begin{itemize}
\item if $D$ and $E$ are effective divisors on $\mathcal{X}$ without common component, then \[(D,E) = (D,E)_{\fin} -\sum_{\sigma:K\to \mathbb{C}} \gr_{\mathcal{X}_\sigma}(D_\sigma,E_\sigma), \] where $\sigma$ runs over the complex embeddings of $K$. Here $(D,E)_{\fin}$ denotes the usual intersection number of $D$ and $E$ as in \cite[Section~9.1]{Liu2}, i.e., \[(D,E)_{\fin} = \sum_{s \in \vert S\vert} i_s(D,E) \log \# k(s),\] where $s$ runs over the set $\vert S \vert$ of closed points of $S$, $i_s(D,E)$ is the intersection multiplicity of $D$ and $E$ at $s$ and $k(s)$ denotes the residue field of $s$. Note that if $D$ or $E$ is vertical, the sum $\sum_{\sigma:K\to \mathbb{C}} \gr_{\mathcal{X}_\sigma}(D_\sigma,E_\sigma)$ is zero;
\item if $D$ is a horizontal divisor of generic degree $n$ over $S$, then $(D,F_\sigma) = n$ for every $\sigma:K\longrightarrow \mathbb{C}$;
\item if $\sigma_1,\sigma_2:K\to \mathbb{C}$ are complex embeddings, then $(F_{\sigma_1}, F_{\sigma_2}) =0 $.
\end{itemize}
An \textit{admissible line bundle} on $\mathcal{X}$ is the datum of a line bundle $\mathcal{L}$ on $\mathcal{X}$, together with admissible metrics on the restrictions $\mathcal{L}_\sigma$ of $\mathcal{L}$ to the $\mathcal{X}_\sigma$. Let $\widehat{\textrm{Pic}}(\mathcal{X})$ denote the group of isomorphism classes of admissible line bundles on $\mathcal{X}$. To any Arakelov divisor $D= D_{\fin} + D_{\inff}$ with $D_{\inff} = \sum_{\sigma} \alpha_\sigma F_\sigma$, we can associate an admissible line bundle $\mathcal{O}_{\mathcal{X}}(D)$. In fact, for the underlying line bundle of $\mathcal{O}_{\mathcal{X}}(D)$ we take $\mathcal{O}_{\mathcal{X}}(D_{\fin})$. Then, we make this into an admissible line bundle by equipping the pull-back of $\mathcal{O}_{\mathcal{X}}(D_{\fin})$ to each $\mathcal{X}_\sigma$ with its Arakelov metric, multiplied by $\exp(-\alpha_\sigma)$. This induces an isomorphism \[\xymatrix{\widehat{\Cl}(\mathcal{X})\ar[r]^{\sim} &\widehat{\textrm{Pic}}(\mathcal{X}).}\] In particular, the Arakelov intersection of two admissible line bundles on $\mathcal{X}$ is well-defined.
Recall that a metrised line bundle $(\mathcal{L},\|{\cdot}\|)$ on $\Spec O_K$ corresponds to an invertible $O_K$-module, $L$, say, with hermitian metrics on the $L_\sigma:=\mathbb{C}\otimes_{\sigma,O_K}L$. The \emph{Arakelov degree} of~$(\mathcal{L},\|{\cdot}\|)$ is the real number defined by:
\begin{eqnarray*}\label{eqn_ar_degree}
\widehat{\deg}(\mathcal{L})= \widehat{\deg}(\mathcal{L},\|{\cdot}\|) =
\log\#(L/O_Ks) -\sum_{\sigma\colon K\to\mathbb{C}}\log\|s\|_\sigma,
\end{eqnarray*}
where $s$ is any non-zero element of~$L$ (independence of the choice of~$s$ follows from the product formula).
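To illustrate the independence of the choice of $s$ in the simplest possible situation, take $K=\mathbb{Q}$ and $L=\mathbb{Z}$, with the metric $\|x\|=|x|/c$ at the single archimedean place for some hypothetical scaling factor $c>0$. Then $\widehat{\deg}=\log\#(\mathbb{Z}/s\mathbb{Z})-\log\|s\|=\log c$ for every non-zero $s$, as the following sketch (a toy example, not part of the text) confirms.

```python
import math

# Toy example over K = Q: L = Z as an O_K-module, with the metric
# ||x|| = |x| / c at the single archimedean place (c > 0 an arbitrary scaling).
c = 2.5

def arakelov_degree(s):
    # deg^(L, ||.||) = log #(L / O_K s) - sum_sigma log ||s||_sigma, with #(Z/sZ) = |s|
    assert s != 0
    return math.log(abs(s)) - math.log(abs(s) / c)

# Independence of the choice of s (an instance of the product formula):
vals = [arakelov_degree(s) for s in (1, 2, -7, 360)]
assert all(abs(v - math.log(c)) < 1e-12 for v in vals)
```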
Note that the relative dualizing sheaf $\omega_{\mathcal{X}/O_K}$ of $p:\mathcal{X} \to S$ is an admissible line bundle on $\mathcal{X}$ if we endow the restrictions $\Omega^1_{\mathcal{X}_\sigma}$ of $\omega_{\mathcal{X}/O_K}$ to the $\mathcal{X}_\sigma$ with their Arakelov metric. Furthermore, for any section $P:S\to \mathcal{X}$, we have \[\widehat{\deg} P^\ast \omega_{\mathcal{X}/O_K} = (\mathcal{O}_X(P), \omega_{\mathcal{X}/O_K}) =: (P,\omega_{\mathcal{X}/O_K}),\] where we endow the line bundle $P^\ast \omega_{\mathcal{X}/O_K}$ on $\Spec O_K$ with the pull-back metric.
\begin{defn}\label{semi-stable}
We say that $\mathcal{X}$ is \textit{semi-stable (or nodal) over $S$} if every geometric fibre of $\mathcal{X}$ over $S$ is reduced and has only ordinary double singularities; see \cite[Definition~10.3.1]{Liu2}. We say that $\mathcal{X}$ is \textit{(relatively) minimal} if it does not contain any exceptional divisor; see \cite[Definition~9.3.12]{Liu2}.
\end{defn}
\begin{opm}
Suppose that $\mathcal{X}$ is semi-stable over $S$ and minimal. The blowing-up $\mathcal{Y}\to\mathcal{X}$ along a smooth closed point on $\mathcal{X}$ is semi-stable over $S$, but no longer minimal.
\end{opm}
\subsection{Arakelov invariants of curves }\label{invariants}
Let $X$ be a smooth projective connected curve over $\overline{\mathbb{Q}}$ of genus $g\geq 1$. Let $K$ be a number field such that $X$ has a semi-stable minimal regular model $p:\mathcal{X}\to \Spec O_K$; see Theorems 10.1.8, 10.3.34.a and 10.4.3 in \cite{Liu2}. (Note that we implicitly chose an embedding $K\to \overline{\mathbb{Q}}$.)
The \textit{Faltings delta invariant} of $X$, denoted by $\delta_{\Fal}(X)$, is defined as \[\delta_{\Fal}(X) =\frac{1}{[K:\mathbb{Q}]}\sum_{\sigma:K\to \mathbb{C}} \delta_{\Fal}(\mathcal{X}_\sigma),\] where $\sigma$ runs over the complex embeddings of $K$ into $\mathbb{C}$. Similarly, we define \[ \Vert \vartheta \Vert_{\textrm{max}}(X) = \left(\prod_{\sigma:K\to \mathbb{C}} \max_{\mathrm{Pic}_{g-1}(\mathcal{X}_\sigma)}\Vert \vartheta\Vert\right)^{1/[K:\mathbb{Q}]}.\] Moreover, we define \[R(X) = \left(\prod_{\sigma:K\to \mathbb{C}} R(\mathcal{X}_\sigma)\right)^{1/[K:\mathbb{Q}]}, \quad S(X) = \left(\prod_{\sigma:K\to \mathbb{C}} S(\mathcal{X}_\sigma)\right)^{1/[K:\mathbb{Q}]}.\]
The \emph{Faltings height} of $X$ is defined by \[h_{\Fal}(X) = \frac{\widehat{\deg} \det p_\ast \omega_{\mathcal{X}/O_K}}{[K:\mathbb{Q}]} = \frac{\widehat{\deg} \det R^\cdot p_\ast \mathcal{O}_{\mathcal{X}}}{[K:\mathbb{Q}]},\] where we endow the determinant of cohomology with the Faltings metric; see Section \ref{admissible}. Note that $h_{\Fal}(X)$ coincides with the stable Faltings height of the Jacobian of $\mathcal{X}_K$; see \cite[Lemme~3.2.1, Chapitre~I]{Szpiroa}. Furthermore, we define the \textit{self-intersection of the dualizing sheaf} of $X$, denoted by $e(X)$, as \[e(X):= \frac{(\omega_{\mathcal{X}/O_K},\omega_{\mathcal{X}/O_K})}{[K:\mathbb{Q}]},\]
where we use Arakelov's intersection pairing on the arithmetic surface $\mathcal{X}/O_K$. The \textit{discriminant} of $X$, denoted by $\Delta(X)$, is defined as \[\Delta(X) = \frac{\sum_{\mathfrak{p}\subset O_K} \delta_{\mathfrak{p}} \log\# k(\mathfrak{p})}{[K:\mathbb{Q}]},\] where $\mathfrak{p}$ runs through the maximal ideals of $O_K$ and $\delta_{\mathfrak{p}}$ denotes the number of singularities in the geometric fibre of $p:\mathcal{X}\to \Spec O_K$ over $\mathfrak{p}$. These invariants of $X$ are well-defined; see \cite[Section 5.4]{Moret-Bailly3}.
To bound the above Arakelov invariants, we introduce two functions on $X(\overline{\mathbb{Q}})$: the height and the Arakelov norm of the Wronskian differential.
More precisely, let $b\in X(\overline{\mathbb{Q}})$ and suppose that $b$ induces a section $P$ of $\mathcal{X}$ over $O_K$.
Then we define the \textit{height of $b$}, denoted by $h(b)$, to be \[h(b) = \frac{\widehat{\deg}P^\ast \omega_{\mathcal{X}/O_K}}{[K:\mathbb{Q}]} = \frac{(P,\omega_{\mathcal{X}/O_K})}{[K:\mathbb{Q}]}.\]
Note that the height of $b$ is the stable canonical height of a point, in the Arakelov-theoretic sense, with respect to the admissible line bundle $\omega_{\mathcal{X}/O_K}$. We define the Arakelov norm of the Wronskian differential at $b$ as \[\Vert \mathrm{Wr}\Vert_{\mathrm{Ar}}(b) = \left(\prod_{\sigma:K\to \mathbb{C}} \Vert \mathrm{Wr}\Vert_{\mathrm{Ar}}(b_\sigma)\right)^{1/[K:\mathbb{Q}]}. \] These functions on $X(\overline{\mathbb{Q}})$ are well-defined; see \cite[Section 5.4]{Moret-Bailly3}.
Changing the model for $X$ might change the height of a point. Let us show that the height of a point does not become smaller if we take another regular model over $O_K$.
\begin{lem}\label{heightbigger}
Let $\mathcal{Y}\to \Spec O_K$ be an arithmetic surface. Assume that $\mathcal{Y}$ is a model for $\mathcal{X}_K$. If $Q$ denotes the section of $\mathcal{Y}$ over $O_K$ induced by $b\in X(\overline{\mathbb{Q}})$, then \[h(b) \leq \frac{(Q,\omega_{\mathcal{Y}/O_K})}{[K:\mathbb{Q}]}.\]
\end{lem}
\begin{proof}
By the minimality of $\mathcal{X}$, there is a unique birational morphism $\phi:\mathcal{Y}\to \mathcal{X}$; see \cite[Corollary~9.3.24]{Liu2}. By the factorization theorem, this morphism is made up of a finite sequence \[\xymatrix{\mathcal{Y} = \mathcal{Y}_n \ar[r]^{\phi_n} & \mathcal{Y}_{n-1} \ar[r]^{\phi_{n-1}} & \ldots \ar[r]^{\phi_1} & \mathcal{Y}_0 = \mathcal{X}}\] of blowing-ups along closed points; see \cite[Theorem~9.2.2]{Liu2}. For $i=1,\ldots,n$, let $E_i \subset \mathcal{Y}_i$ denote the exceptional divisor of $\phi_i$. Since the line bundles $\omega_{\mathcal{Y}_i/O_K}$ and $\phi^\ast_i\omega_{\mathcal{Y}_{i-1}/O_K}$ agree on $\mathcal{Y}_i - E_i$, there is an integer $a$ such that \[\omega_{\mathcal{Y}_i/O_K} = \phi_i^\ast \omega_{\mathcal{Y}_{i-1}/O_K}\otimes_{\mathcal{O}_{\mathcal{Y}_i}} \mathcal{O}_{\mathcal{Y}_i}(aE_i).\] Applying the adjunction formula, we see that $a=1$. Since $\phi_i$ restricts to the identity morphism on the generic fibre, we have a canonical isomorphism of admissible line bundles \[\omega_{\mathcal{Y}_i/O_K} = \phi_i^\ast \omega_{\mathcal{Y}_{i-1}/O_K}\otimes_{\mathcal{O}_{\mathcal{Y}_i}} \mathcal{O}_{\mathcal{Y}_i}(E_i).\] Let $Q_i$ denote the section of $\mathcal{Y}_i$ over $O_K$ induced by $b\in X(\overline{\mathbb{Q}})$. Then \[(Q_i,\omega_{\mathcal{Y}_i/O_K}) = (Q_{i},\phi^\ast_i \omega_{\mathcal{Y}_{i-1}/O_K}) + (Q_i,E_i) \geq (Q_i, \phi^\ast_i\omega_{\mathcal{Y}_{i-1}/O_K}) = (Q_{i-1},\omega_{\mathcal{Y}_{i-1}/O_K}),\] where we used the projection formula in the last equality. Therefore, we conclude that \[(Q,\omega_{\mathcal{Y}/O_K}) = (Q_n,\omega_{\mathcal{Y}_n/O_K}) \geq (Q_0, \omega_{\mathcal{Y}_0/O_K}) = (P,\omega_{\mathcal{X}/O_K})=h(b)[K:\mathbb{Q}]. \qedhere \]
\end{proof}
\subsection{Bounding Arakelov invariants in the height of a non-Weierstrass point}\label{invariants2}
In this section we prove bounds for Arakelov invariants of curves in terms of the height of a non-Weierstrass point and the Arakelov norm of the Wronskian differential at that point.
\begin{thm}\label{upperboundinv} Let $X$ be a smooth projective connected curve over $\overline{\mathbb{Q}}$ of genus $g\geq 1 $. Let $b\in X(\overline{\mathbb{Q}})$. Then
\[ \begin{array}{ccc} e(X) & \leq & 4g(g-1) h(b), \\ \delta_{\Fal}(X) & \geq & -90 g^3 - 4g(2g-1)(g+1) h(b) .
\end{array} \] Suppose that $b$ is not a Weierstrass point. Then
\[ \begin{array}{ccc}
h_{\Fal}(X) & \leq & \frac{1}{2} g(g+1) h(b) +\log \Vert \mathrm{Wr}\Vert_{\mathrm{Ar}}(b), \\
\delta_{\Fal}(X) & \leq & 6 g(g+1) h(b) + 12\log \Vert\mathrm{Wr}\Vert_{\mathrm{Ar}}(b)+ 4g\log(2\pi), \\ \Delta(X) &\leq & 2g (g+1)(4g+1) h(b) + 12 \log \Vert \mathrm{Wr}\Vert_{\mathrm{Ar}} (b) + 93g^3.
\end{array} \]
\end{thm}
This theorem is essential to the proof of Theorem \ref{mainthm} given in Section \ref{heightboundlastsection}. We give a proof of Theorem \ref{upperboundinv} at the end of this section.
\begin{lem}\label{theta}
For a smooth projective connected curve $X$ over $\overline{\mathbb{Q}}$ of genus $g\geq 1$, \[\log \Vert \vartheta \Vert_{\max}(X) \leq \frac{g}{4} \log \max(1,h_{\Fal}(X)) + (4g^3+5g+1)\log 2.\]
\end{lem}
\begin{proof} We thank R. de Jong for sharing this proof with us. We follow the idea of \cite[Section 2.3.2]{Graf}; see also \cite[Appendice]{Davi}. Let $\mathcal{F}_g$ be the Siegel fundamental domain of dimension $g$ in the Siegel upper half-space $\mathbb{H}_g$, i.e., the space of complex $(g\times g)$-matrices $\tau$ in $\mathbb{H}_g$ satisfying the following properties. Firstly, for every element $u_{ij}$ of $u=\Re(\tau)$, we have
$\vert u_{ij} \vert \leq 1/2$. Secondly, for every $\gamma$ in $\mathrm{Sp}(2g,\mathbb{Z})$, we have $\det \Im(\gamma \cdot \tau)\leq \det \Im(\tau)$. Finally, $\Im(\tau)$ is Minkowski-reduced, i.e., for all $\xi = (\xi_1,\ldots,\xi_g) \in \mathbb{Z}^g$ and for all $i$ such that $\xi_i,\ldots,\xi_g$ are non-zero, we have ${}^t\hspace{-0.1em}\xi \,\Im(\tau)\, \xi \geq (\Im(\tau))_{ii}$ and, for all $1\leq i \leq g-1$, we have $(\Im(\tau))_{i,i+1} \geq 0$. One can show that $\mathcal{F}_g$ contains a representative of each $\mathrm{Sp}(2g,\mathbb{Z})$-orbit in $\mathbb{H}_g$.
Let $K$ be a number field such that $X$ has a model $X_K$ over $K$. For every embedding $\sigma:K\to \mathbb{C}$, let $\tau_\sigma$ be an element of $\mathcal{F}_g$ such that $\mathrm{Jac}(X_{K,\sigma}) \cong \mathbb{C}^g/(\tau_\sigma \mathbb{Z}^g+\mathbb{Z}^g)$ as principally polarized abelian varieties, the matrix of the Riemann form induced by the polarization of $\mathrm{Jac}(X_{K,\sigma})$ being $\Im(\tau_\sigma)^{-1}$ on the canonical basis of $\mathbb{C}^g$. By a result of Bost (see \cite[Lemme 2.12]{Graf} or \cite{Pa}), we have \begin{eqnarray}\label{yeah} \frac{1}{[K:\mathbb{Q}]} \sum_{\sigma:K\to \mathbb{C}} \log \det(\Im(\tau_\sigma)) & \leq & g \log \max(1,h_{\Fal}(X)) + (2g^3+2)\log(2). \end{eqnarray} Here we used that the Faltings height of $X$ equals the Faltings height of its Jacobian.
Now, let $\vartheta(z;\tau)$ be the Riemann theta function as in Section \ref{admissible}, where $\tau$ is in $\mathcal{F}_g$ and $z=x+iy$ is in $\mathbb{C}^g$ with $x,y\in \mathbb{R}^g$. Combining (\ref{yeah}) with the upper bound \begin{eqnarray}\label{yeah2} \exp(-\pi\,{}^t\hspace{-0.1em}y (\Im(\tau))^{-1} y) \vert \vartheta (z;\tau)\vert &\leq & 2^{3g^3+5g} \end{eqnarray} implies the result. Let us prove (\ref{yeah2}). Note that, if we write $y= \Im(z) = (\Im(\tau)) \cdot b$ for $b$ in $\mathbb{R}^g$, \[\exp(-\pi\,{}^t\hspace{-0.1em}y(\Im(\tau))^{-1} y) \vert \vartheta(z;\tau)\vert \leq \sum_{n\in \mathbb{Z}^g} \exp( -\pi\,{}^t\hspace{-0.1em}(n+b) (\Im(\tau))(n+b)).\] Since $\Im(\tau)$ is Minkowski reduced, we have ${}^t\hspace{-0.1em}m\,\Im(\tau)\,m \geq c(g) \sum_{i=1}^g m_i^2 (\Im(\tau))_{ii}$ for all $m$ in $\mathbb{R}^g$. Here $c(g) = \left(\frac{4}{g^3}\right)^{g-1} \left(\frac{3}{4}\right)^{g(g-1)/2}$. Also, $(\Im(\tau))_{ii} \geq \sqrt{3}/2$ for all $i=1,\ldots,g$ (see \cite[Chapter V.4]{Igus} for these facts). We deduce that \begin{eqnarray*} \sum_{n\in \mathbb{Z}^g} \exp(-\pi\,{}^t\hspace{-0.1em}(n+b)(\Im(\tau))(n+b)) &\leq &
\sum_{n\in \mathbb{Z}^g} \exp\left( -\sum_{i=1}^g \pi c(g) (n_i+b_i)^2 (\Im(\tau))_{ii} \right) \\ &\leq & \prod_{i=1}^g \sum_{n_i \in \mathbb{Z}} \exp(-\pi c(g)(n_i + b_i)^2 (\Im(\tau))_{ii}) \\ &\leq & \prod_{i=1}^g \frac{2}{1-\exp(-\pi c(g) (\Im(\tau))_{ii})} \leq 2^g\left(1+\frac{2}{\pi \sqrt{3} c(g)}\right)^g.
\end{eqnarray*}
This proves (\ref{yeah2}).
\end{proof}
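Since the proof ends by asserting that the last product is at most $2^{3g^3+5g}$, it may be reassuring to check this final estimate numerically for small genera. The following sketch does so for $1\leq g\leq 8$ (an arbitrary range), computing in log space to avoid overflow; it is an illustration, not a proof for all $g$.

```python
import math

def log2_lhs(g):
    # log_2 of 2^g * (1 + 2/(pi*sqrt(3)*c(g)))^g, where
    # c(g) = (4/g^3)^(g-1) * (3/4)^(g(g-1)/2), computed in log space
    log_c = (g - 1) * math.log(4 / g**3) + (g * (g - 1) / 2) * math.log(3 / 4)
    inner = math.log1p(2 / (math.pi * math.sqrt(3)) * math.exp(-log_c))
    return g + g * inner / math.log(2)

# Check the final estimate of the proof of (yeah2) for 1 <= g <= 8:
for g in range(1, 9):
    assert log2_lhs(g) <= 3 * g**3 + 5 * g
```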
\begin{lem}\label{ineq} Let $a\in \mathbb{R}_{>0}$ and $b\in \mathbb{R}_{\leq 1}$. Then, for all real numbers $x\geq b$, \[x - a\log\max(1,x) =\frac{1}{2}x +\frac{1}{2}(x-2a\log\max(1,x)) \geq \frac{1}{2}x + \min(\frac{1}{2}b,a-a\log (2a)).\]
\end{lem}
\begin{proof}
It suffices to prove that $x-2a\log\max(1,x) \geq \min(b,2a-2a\log(2a))$ for all $x\geq b$. To prove this, let $x\geq b$. Then, if $2a\leq 1$, we have $x-2a\log \max (1,x)\geq b\geq \min(b,2a-2a\log(2a))$. (To prove that $x-2a \log \max(1,x)\geq b$, we may assume that $x\geq 1$. It is easy to show that $x-2a\log x$ is a non-decreasing function for $x\geq 1$. Therefore, for all $x\geq 1$, we conclude that $x-2a\log x \geq 1 \geq b$.) If $2a>1$, the function $x-2a\log (x)$ attains its minimum value at $x=2a$ on the interval $[1,\infty)$.
\end{proof}
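A brute-force numerical check of Lemma \ref{ineq} on a sample grid (the grid of values for $a$, $b$ and $x$ is arbitrary) can be sketched as follows; note that equality is attained at $x=2a$ when $2a\geq 1$ and $b$ is small, so a small tolerance is used.

```python
import math

def lhs(x, a):
    return x - a * math.log(max(1.0, x))

def rhs(x, a, b):
    return 0.5 * x + min(0.5 * b, a - a * math.log(2 * a))

# Check the lemma on a grid: a > 0, b <= 1, x >= b.
for a in (0.1, 0.5, 1.0, 3.0, 10.0):
    for b in (-5.0, -1.0, 0.0, 1.0):
        x = b
        while x <= 50.0:
            assert lhs(x, a) >= rhs(x, a, b) - 1e-12
            x += 0.1
        # also probe the interior minimum x = 2a of x - 2a*log(x) when 2a >= 1
        if 2 * a >= 1 and 2 * a >= b:
            assert lhs(2 * a, a) >= rhs(2 * a, a, b) - 1e-12
```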
\begin{lem}{ \bf (Bost)} \label{bost}
Let $X$ be a smooth projective connected curve over $\overline{\mathbb{Q}}$ of genus $g\geq 1$. Then \[h_{\Fal}(X) \geq -\log(2\pi)g.\]
\end{lem}
\begin{proof}
See \cite[Corollaire 8.4]{GaRe}. (Note that the Faltings height $h(X)$ utilized by Bost, Gaudron and R\'emond is bigger than $h_{\Fal}(X)$ due to a difference in normalization. In fact, we have $h(X) = h_{\Fal}(X) +g\log(\sqrt{\pi})$. In particular, the slightly stronger lower bound $h_{\Fal}(X) \geq -\log(\sqrt{2}\pi)g$ holds.)
\end{proof}
\begin{lem}\label{logs_plus_h}
Let $X$ be a smooth projective connected curve over $\overline{\mathbb{Q}}$ of genus $g\geq 1$. Then \[ \log S(X) + h_{\Fal}(X) \geq \frac{1}{2}h_{\Fal}(X) - (4g^3+5g+1)\log 2 +\min\left(-\frac{g}{2}\log(2\pi), \frac{g}{4}-\frac{g}{4}\log\left(\frac{g}{2}\right)\right).\]
\end{lem}
\begin{proof}
By the explicit formula (\ref{SX}) for $S(X)$ in Section \ref{admissible} and our bounds on theta functions (Lemma \ref{theta}), \[\log S(X) + h_{\Fal}(X) \geq -\frac{g}{4}\log \max(1,h_{\Fal}(X)) -(4g^3+5g+1)\log 2 +h_{\Fal}(X) .\] Since $h_{\Fal}(X) \geq -g\log(2\pi)$, the statement follows from Lemma \ref{ineq} (with $x=h_{\Fal}(X)$, $a= g/4$ and $b=-g \log(2\pi)$).
\end{proof}
\begin{lem}\label{Rdejong} Let $X$ be a smooth projective connected curve of genus $g\geq 2$ over $\overline{\mathbb{Q}}$. Then
\[ \frac{(2g-1)(g+1)}{8 (g-1)}e(X) +\frac{1}{8}\delta_{\Fal}(X) \geq \log S(X) +h_{\Fal}(X) . \]
\end{lem}
\begin{proof}
By \cite[Proposition 5.6]{deJo}, \begin{eqnarray*} e(X) &\geq & \frac{8(g-1)}{(g+1)(2g-1)}\left(\log R(X) +h_{\Fal}(X) \right). \end{eqnarray*} Note that $\log R(X) = \log S(X) - \delta_{\Fal}(X)/8$; see (\ref{Sinvariant}) in Section \ref{admissible}. This implies the inequality.
\end{proof}
\begin{lem} {\bf (Noether formula)}
Let $X$ be a smooth projective connected curve over $\overline{\mathbb{Q}}$ of genus $g\geq 1$. Then \[12 h_{\Fal}(X) = e(X) + \Delta(X) + \delta_{\Fal}(X) - 4g \log (2\pi).\]
\end{lem}
\begin{proof}
This is well-known; see \cite[Theorem 6]{Faltings1} and \cite[Th\'eor\`eme 2.2]{Moret-Bailly2}.
\end{proof}
\begin{prop}\label{faltingsomega1}
Let $X$ be a smooth projective connected curve of genus $g\geq 2$ over $\overline{\mathbb{Q}}$. Then
\[\begin{array}{ccc} h_{\Fal}(X) & \leq & \frac{(2g-1)(g+1)}{4 (g-1)}e(X)+\frac{1}{4}\delta_{\Fal}(X)+ 20g^3 \\ -g\log(2\pi) & \leq & \frac{(2g-1)(g+1)}{4 (g-1)}e(X)+\frac{1}{4}\delta_{\Fal}(X)+ 20g^3 \\ \Delta(X) & \leq & \frac{3(2g-1)(g+1)}{g-1} e(X) + 2\delta_{\Fal}(X) + 248g^3. \end{array}\]
\end{prop}
\begin{proof}
Firstly, by Lemma \ref{Rdejong},
\[ \frac{(2g-1)(g+1)}{8 (g-1)}e(X) +\frac{1}{8}\delta_{\Fal}(X) \geq \log S(X) +h_{\Fal}(X) . \]
To obtain the upper bound for $h_{\Fal}(X)$, we proceed as follows. By Lemma \ref{logs_plus_h}, \[ \log S(X) + h_{\Fal}(X) \geq \frac{1}{2}h_{\Fal}(X) - (4g^3+5g+1)\log 2 + \min\left(-\frac{g}{2}\log(2\pi), \frac{g}{4}-\frac{g}{4}\log\left(\frac{g}{2}\right)\right). \] From these two inequalities, we deduce that \[\frac{1}{2}h_{\Fal}(X) \leq \frac{(2g-1)(g+1)}{8 (g-1)}e(X) +\frac{1}{8}\delta_{\Fal}(X) + (4g^3+5g+1)\log 2 +\max\left(\frac{g}{2}\log(2\pi), \frac{g}{4}\log\left(\frac{g}{2}\right) -\frac{g}{4}\right). \] Finally, it is straightforward to verify the inequality \[(4g^3+5g+1)\log 2 +\max\left(\frac{g}{2}\log(2\pi), \frac{g}{4}\log\left(\frac{g}{2}\right) -\frac{g}{4}\right)\leq 10g^3.\] This concludes the proof of the upper bound for $h_{\Fal}(X)$.
The second inequality follows from the first inequality of the proposition and the lower bound $h_{\Fal}(X)\geq -g\log (2\pi)$ of Bost (Lemma \ref{bost}).
Finally, to obtain the upper bound of the proposition for the discriminant of $X$, we eliminate the Faltings height of $X$ in the first inequality using the Noether formula and obtain \[ \Delta(X) + e(X) + \delta_{\Fal}(X) - 4g\log(2\pi) \leq \frac{3(2g-1)(g+1)}{ (g-1)}e(X)+3\delta_{\Fal}(X)+ 240g^3. \] In \cite[Theorem 5]{Faltings1} Faltings showed that $e(X)\geq 0$. Therefore, we conclude that \[ \Delta(X) + \delta_{\Fal}(X) - 4g\log(2\pi) \leq \frac{3(2g-1)(g+1)}{ (g-1)}e(X)+3\delta_{\Fal}(X)+ 240g^3. \qedhere\]
\end{proof}
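The ``straightforward'' verification at the end of the proof can be confirmed numerically; the following sketch checks it for $2\leq g\leq 50$ (an arbitrary range; recall that the proposition assumes $g\geq 2$).

```python
import math

# Check (4g^3+5g+1)*log(2) + max(g/2*log(2*pi), g/4*log(g/2) - g/4) <= 10g^3
# for 2 <= g <= 50:
for g in range(2, 51):
    lhs = (4 * g**3 + 5 * g + 1) * math.log(2) + max(
        (g / 2) * math.log(2 * math.pi),
        (g / 4) * math.log(g / 2) - g / 4,
    )
    assert lhs <= 10 * g**3
```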
We are now ready to prove Theorem \ref{upperboundinv}. \\
\noindent \emph{Proof of Theorem \ref{upperboundinv}.}
The proof is straightforward. The upper bound $e(X) \leq 4g(g-1)h(b)$ is well-known; see \cite[Theorem 5]{Faltings1}.
Let us prove the lower bound for $\delta_{\Fal}(X)$. If $g\geq 2$, the lower bound for $\delta_{\Fal}(X)$ can be deduced from the second inequality of Proposition \ref{faltingsomega1} and the upper bound $e(X)\leq 4g(g-1)h(b)$. When $g=1$, this follows from a result of Szpiro (\cite[Proposition 7.2]{deJo2}) and the non-negativity of $h(b)$.
From now on, we suppose that $b$ is a non-Weierstrass point. The upper bound $h_{\Fal}(X) \leq \frac{1}{2} g(g+1) h(b) +\log \Vert \mathrm{Wr}\Vert_{\mathrm{Ar}}(b)$ follows from Theorem 5.9 in \cite{deJo} and (\ref{Wronskian}) in Section \ref{admissible}.
We deduce the upper bound $\delta_{\Fal}(X) \leq 6 g(g+1) h(b) + 12\log \Vert\mathrm{Wr}\Vert_{\mathrm{Ar}}(b)+ 4g\log(2\pi)$ as follows. Since $e(X)\geq 0$ and $\Delta(X)\geq 0$, the Noether formula implies that \[\delta_{\Fal}(X) \leq 12 h_{\Fal}(X) + 4g \log(2\pi).\] Thus, the upper bound for $\delta_{\Fal}(X)$ follows from the upper bound for $h_{\Fal}(X)$.
The upper bound $$\Delta(X)\leq 2g (g+1)(4g+1) h(b) + 12 \log \Vert \mathrm{Wr}\Vert_{\mathrm{Ar}} (b) + 93g^3$$ follows from the inequality $$\Delta(X) \leq 12h_{\Fal}(X) -\delta_{\Fal}(X) + 4g\log(2\pi)$$ and the preceding bounds. (One could also use the last inequality of Proposition \ref{faltingsomega1} to obtain a similar result.) \qed
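The bookkeeping behind the $\Delta(X)$ bound can be checked mechanically. The following sketch verifies, for $g\geq 2$, the identity for the coefficient of $h(b)$ obtained by combining the three displayed inequalities, and the absorption of the constant term into $93g^3$ (for $g=1$ the lower bound on $\delta_{\Fal}(X)$ comes from Szpiro's result instead, so that case is not covered by this check).

```python
import math

# Combine Delta <= 12*h_Fal - delta_Fal + 4g*log(2*pi) with
# h_Fal <= (1/2)g(g+1)h(b) + log||Wr|| and -delta_Fal <= 90g^3 + 4g(2g-1)(g+1)h(b):
for g in range(2, 30):
    # coefficient of h(b): 12*(1/2)g(g+1) + 4g(2g-1)(g+1) = 2g(g+1)(4g+1)
    assert 6 * g * (g + 1) + 4 * g * (2 * g - 1) * (g + 1) == 2 * g * (g + 1) * (4 * g + 1)
    # constant term: 90g^3 + 4g*log(2*pi) <= 93g^3 for g >= 2
    assert 90 * g**3 + 4 * g * math.log(2 * math.pi) <= 93 * g**3
```

The coefficient of $\log\Vert\mathrm{Wr}\Vert_{\mathrm{Ar}}(b)$ is simply $12\cdot 1=12$, matching the stated bound.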
\section{Bounds for Arakelov-Green functions of Belyi covers}\label{ArakelovGreen}
Our aim is to give explicit bounds for the Arakelov-Green function on a Belyi cover of $X(2)$.
Such bounds have been obtained for certain Belyi covers using spectral methods in \cite{JorKra3}.
The results in \textit{loc. cit.} do not apply to our situation since the smallest positive eigenvalue of the Laplacian can go to zero in a tower of Belyi covers; see \cite[Theorem 4]{Lo}.
Instead, we use a theorem of Merkl to prove explicit bounds for the Arakelov-Green function on a Belyi cover in Theorem \ref{MerklResult}.
More precisely, we construct a ``Merkl atlas'' for an arbitrary Belyi cover.
Our construction uses an explicit version of a result of Jorgenson and Kramer (\cite{JorKra1}) on the Arakelov $(1,1)$-form due to Bruin.
We use our results to estimate the Arakelov norm of the Wronskian differential in Proposition \ref{Wronskian2}.
Merkl's theorem (\cite[Theorem 10.1]{Merkl}) was used to prove bounds for Arakelov-Green functions of the modular curve $X_1(5p)$ in \cite{EdJo3}. It is also used by David Holmes \cite{Holmes} to construct ``weak-pseudo-metrics'' on hyperelliptic curves.
\subsection{Merkl's theorem}\label{StatingMerkl}
Let $X$ be a compact connected Riemann surface of positive genus and recall that $\mu$ denotes the Arakelov $(1,1)$-form on $X$.
\begin{defn}\label{MerklAtlas} A \emph{Merkl atlas} for $X$ is a quadruple $(\{(U_j,z_j)\}_{j=1}^n, r_1, M,c_1)$, where $\{(U_j,z_j)\}_{j=1}^n$ is a finite atlas for $X$, $\frac{1}{2}<r_1<1$, $M\geq 1$ and $c_1>0$ are real numbers such that the following properties are satisfied.
\begin{enumerate}
\item Each $z_j(U_j)$ is the open unit disc.
\label{hyp:open-unit-disc}
\item The open sets $U_j^{r_1} := \{x\in U_j : \vert z_j(x) \vert < r_1\}$ with $1\leq j\leq n$ cover $X$.
\label{hyp:covering}
\item For all $1\leq j,j^\prime\leq n$, the function $\vert dz_j/dz_{j^\prime}\vert$ on $U_j\cap U_{j^\prime}$ is bounded from above by $M$.
\label{hyp:glueing-function-bound}
\item For $1\leq j \leq n$, write $\mu = iF_j\, dz_j \wedge d\overline{z_j}$ on $U_j$. Then $0 \leq F_j(x) \leq c_1$ for all $x\in U_j$.
\label{hyp:mu-bound}
\end{enumerate}
\end{defn}
Given a Merkl atlas $(\{(U_j,z_j)\}_{j=1}^n, r_1, M,c_1)$ for $X$, the following result provides explicit bounds for Arakelov-Green functions in $n$, $r_1$, $M$ and $c_1$.
\begin{thm}[Merkl]
\label{Merkl}
Let $(\{(U_j,z_j)\}_{j=1}^n, r_1, M, c_1)$ be a Merkl atlas for $X$. Then
\[ \sup_{X\times X\backslash \Delta} \gr_X \leq \frac{330 n}{(1-r_1)^{3/2}} \log\frac{1}{1-r_1} + 13.2nc_1 +(n-1) \log M. \] Furthermore, for every index $j$ and all $x\neq y\in U_j^{r_1}$, we have \[ \vert \gr_X(x,y) -\log \vert z_j(x) - z_j(y) \vert \vert \leq \frac{330 n}{(1-r_1)^{3/2}}\log\frac{1}{1-r_1} + 13.2nc_1 +(n-1) \log M. \]
\end{thm}
\begin{proof}
Merkl proved this theorem without explicit constants and without the dependence on $r_1$ in \cite{Merkl}. A proof of the theorem in a more explicit form was given by P. Bruin in his master's thesis. This proof is reproduced, with minor modifications, in the appendix.
\end{proof}
\subsection{An atlas for a Belyi cover of $X(2)$}\label{Atlas}
Let $\mathbb{H}$ denote the complex upper half-plane. Recall that $\SL_2(\mathbb{R})$ acts on $\mathbb{H}$ via M\"obius transformations. Let $\Gamma(2)$ denote the subgroup of $\textrm{SL}_2(\mathbb{Z})$ defined as \[\Gamma(2) =\left\{ \left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right) \in \textrm{SL}_2(\mathbb{Z}) : a \equiv d \equiv 1 \mod 2 \ \textrm{and} \ b\equiv c\equiv 0 \mod 2\right\}.\] The Riemann surface $Y(2) = \Gamma(2)\backslash \mathbb{H}$ is not compact. Let $X(2)$ be the compactification of $Y(2)$ obtained by adding the cusps $0$, $1$ and $\infty$. The Riemann surface $X(2)$ is known as the \textit{compact modular curve associated to the congruence subgroup $\Gamma(2)$ of $\mathrm{SL}_2(\mathbb{Z})$}. The modular lambda function $\lambda:\mathbb{H}\to \mathbb{C}$ induces an analytic isomorphism $\lambda: X(2)\to \mathbb{P}^1(\mathbb{C})$; see Section \ref{modular_lambda_function} for details. In particular, the genus of $X(2)$ is zero. For a cusp $\kappa \in \{0,1,\infty\}$, we fix an element $\gamma_\kappa$ in $\mathrm{SL}_2(\mathbb{Z})$ such that $\gamma_\kappa (\kappa) = \infty$.
We construct an atlas for the compact connected Riemann surface $X(2)$. Let $\dot{B}_\infty$ be the open subset given by the image of the strip \[\dot{S}_{\infty} := \left\{x+iy : -1\leq x <1, y>\frac{1}{2}\right\} \subset \mathbb{H}\] in $Y(2)$ under the quotient map $\mathbb{H}\longrightarrow \Gamma(2)\backslash \mathbb{H}$ defined by $\tau\mapsto \Gamma(2)\tau$. The quotient map $\mathbb{H}\longrightarrow \Gamma(2)\backslash \mathbb{H}$ induces a bijection from this strip to $\dot{B}_\infty$. More precisely, suppose that $\tau$ and $\tau^\prime$ in $\dot{S}_{\infty}$ lie in the same orbit under the action of $\Gamma(2)$. Then, there exists an element \[\gamma = \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \in \Gamma(2)\] such that $\gamma \tau = \tau^\prime$. If $c\neq 0$, by definition, $c$ is a non-zero integral multiple of $2$. Thus, $c^2 \geq 4$. Therefore, \[ \frac{1}{2} < \Im \tau^\prime =\frac{\Im \tau}{\vert c\tau+d\vert^2} \leq \frac{1}{4 \Im \tau} < \frac{1}{2}.\] This is clearly impossible. Thus, $c=0$ and $\tau^\prime = \tau \pm b$. By definition, $b=2k$ for some integer $k$. Since $\tau$ and $\tau^\prime$ lie in the above strip, we conclude that $b=0$. Thus $\tau=\tau^\prime$.
Consider the morphism $z_\infty:\mathbb{H}\longrightarrow \mathbb{C}$ given by $\tau \mapsto \exp(\pi i \tau+\frac{\pi}{2})$. The image of the strip $\dot{S}_{\infty}$ under $z_\infty$ in $\mathbb{C}$ is the punctured open unit disc $\dot{B}(0,1)$. Now, for any $\tau$ and $\tau^\prime$ in the strip $\dot{S}_{\infty}$, the equality $z_\infty(\tau) =z_\infty(\tau^\prime)$ holds if and only if $\tau^\prime = \tau + 2k$ for some integer $k$. But then $k=0$ and $\tau =\tau^\prime$. We conclude that $z_\infty$ factors injectively through $\dot{B}_\infty$. Let $z_\infty:B_\infty\longrightarrow B(0,1)$ denote, by abuse of notation, the induced chart at $\infty$, where $B_\infty := \dot{B}_\infty\cup \{\infty\}$ and $B(0,1)$ is the open unit disc in $\mathbb{C}$. We translate our neighbourhood $B_\infty$ at $\infty$ to a neighbourhood of $\kappa$, where $\kappa$ is a cusp of $X(2)$. More precisely, for any $\tau$ in $\mathbb{H}$, define $z_{\kappa}(\tau) = \exp(\pi i \gamma_{\kappa}^{-1}\tau+\pi /2)$. Let $\dot{B}_\kappa$ be the image of $\dot{S}_{\infty}$ under the map $\mathbb{H}\longrightarrow Y(2)$ given by $\tau\mapsto \Gamma(2)\gamma_\kappa \tau$. We define $B_{\kappa}=\dot{B}_{\kappa}\cup \{\kappa\}$. We let $z_{\kappa}: B_\kappa \to B(0,1)$ denote the induced chart (by abuse of notation).
Since the open subsets $B_{\kappa}$ cover $X(2)$, we have constructed an atlas $\{(B_\kappa,z_\kappa)\}_{\kappa}$ for $X(2)$, where $\kappa$ runs through the cusps $0$, $1$ and $\infty$.
\begin{defn}\label{belyidef} A \textit{Belyi cover} of $X(2)$ is a morphism of compact connected Riemann surfaces $Y\longrightarrow X(2)$ which is unramified over $Y(2)$. The points of $Y$ not lying over $Y(2)$ are called \emph{cusps}.
\end{defn}
\begin{lem}\label{genusofbelyi}
Let $\pi:Y\longrightarrow X(2)$ be a Belyi cover with $Y$ of genus~$g$. Then, $g\leq \deg \pi$.
\end{lem}
\begin{proof} This is trivial for $g\leq 1$. For $g\geq 2$, the statement follows from the Riemann-Hurwitz formula: since $\pi$ is unramified outside the cusps and $X(2)$ has genus zero, \[2g-2 = -2\deg \pi + \sum_{y} (e_y-1) \leq -2\deg \pi + 3\deg \pi - 3 = \deg \pi - 3,\] where $y$ runs over the cusps of $Y$ and we use that $\sum_{\pi(y)=\kappa} e_y = \deg \pi$ for each of the three cusps $\kappa$ of $X(2)$, and that each of these cusps has at least one point of $Y$ lying over it. Thus $g\leq \deg \pi$.
\end{proof}
Let $\pi:Y\longrightarrow X(2)$ be a Belyi cover. We are going to ``lift'' the atlas $\{(B_\kappa,z_\kappa)\}$ for $X(2)$ to an atlas for $Y$.
Let $\kappa$ be a cusp of $X(2)$. The branched cover $\pi^{-1}(B_{\kappa}) \longrightarrow B_{\kappa}$ restricts to a finite degree topological cover $\pi^{-1}(\dot{B}_{\kappa}) \longrightarrow \dot{B}_\kappa$. In particular, the composed morphism \[\xymatrix{\pi^{-1}\dot{B}_\kappa \ar[rr] & & \dot{B}_\kappa \ar[rr]^{\sim}_{z_\kappa|_{\dot{B}_\kappa}} & & \dot{B}(0,1) }\] is a finite degree topological cover of $\dot{B}(0,1)$.
Recall that the fundamental group of $\dot{B}(0,1)$ is isomorphic to $\mathbb{Z}$. More precisely, for any connected finite degree topological cover $V\to\dot{B}(0,1)$, there is a unique integer $e\geq 1$ such that $V\to \dot{B}(0,1)$ is isomorphic to the cover $\dot{B}(0,1)\longrightarrow \dot{B}(0,1)$ given by $x\mapsto x^e$.
For every cusp $y$ of $Y$ lying over $\kappa$, let $\dot{V}_y$ be the unique connected component of $\pi^{-1}\dot{B}_\kappa$ whose closure $V_y$ in $\pi^{-1}(B_\kappa)$ contains $y$. Then, for any cusp $y$, there is a positive integer $e_y$ and an isomorphism $\xymatrix{ w_y:\dot{V}_y \ar[r]^{\sim} & \dot{B}(0,1)}$ such that $w_y^{e_y} = z_\kappa \circ \pi|_{\dot{V}_y}$. The isomorphism $w_y:\dot{V}_y\longrightarrow \dot{B}(0,1)$ extends to an isomorphism $w_y:V_y\longrightarrow B(0,1)$ such that $w_y^{e_y} = z_\kappa \circ \pi|_{V_y}$. This shows that $e_y$ is the ramification index of $y$ over $\kappa$. Note that we have constructed an atlas $\{(V_{y},w_y)\}$ for $Y$, where $y$ runs over the cusps of $Y$.
\subsection{The Arakelov $(1,1)$-form and the hyperbolic metric}\label{cofin}
Let \[\mu_{\mathrm{hyp}}(\tau) = \frac{i}{2} \frac{1}{\Im(\tau)^2}d\tau d\overline{\tau} \] be the hyperbolic metric on $\mathbb{H}$. A Fuchsian group is a discrete subgroup of $\SL_2(\mathbb{R})$. For any Fuchsian group $\Gamma$, the quotient space $\Gamma\backslash \mathbb{H}$ is a connected Hausdorff topological space and can be made into a Riemann surface in a natural way. The hyperbolic metric $\mu_{\mathrm{hyp}}$ on $\mathbb{H}$ induces a measure on $\Gamma\backslash \mathbb{H}$, given by a smooth positive real-valued $(1,1)$-form outside the set of fixed points of elliptic elements of $\Gamma$. If the volume of $\Gamma\backslash \mathbb{H}$ with respect to this measure is finite, we call $\Gamma$ a \emph{cofinite Fuchsian group}.
Let $\Gamma$ be a cofinite Fuchsian group, and let $X$ be the compactification of $\Gamma\backslash \mathbb{H}$ obtained by adding the cusps. We assume that $\Gamma$ has no elliptic elements and that the genus~$g$ of $X$ is positive. There is a unique smooth function $F_\Gamma: X\longrightarrow [0,\infty)$ which vanishes at the cusps of $\Gamma$ such that \begin{eqnarray}\label{mumuhyp}
\mu &=& \frac{1}{g} F_\Gamma \mu_{\mathrm{hyp}}.
\end{eqnarray} A detailed description of $F_\Gamma$ is not necessary for our purposes.
\begin{defn}\label{GammaBelyi}
Let $\pi:Y\longrightarrow X(2)$ be a Belyi cover. Then we define the cofinite Fuchsian group $\Gamma_Y$ (or simply $\Gamma$) associated to $\pi:Y\to X(2)$ as follows. Since the topological fundamental group of $Y(2)$ equals $\Gamma(2)/\{\pm 1\} $, we have $\pi^{-1}(Y(2))=\Gamma^\prime\backslash \mathbb{H}$ for some subgroup $\Gamma^\prime\subset \Gamma(2)/\{\pm 1\}$ of finite index. We define $\Gamma \subset \Gamma(2)$ to be the inverse image of $\Gamma^\prime$ under the quotient map $\Gamma(2) \longrightarrow \Gamma(2)/\{\pm 1\}$. Note that $\Gamma$ is a cofinite Fuchsian group without elliptic elements.
\end{defn}
\begin{thm}\label{JK}{\bf (Jorgenson-Kramer)} For any Belyi cover $\pi:Y\longrightarrow X(2)$, where $Y$ has positive genus, \[ \sup_{Y} F_\Gamma \leq 64 \max_{y}(e_y)^2 \leq 64 (\deg \pi)^2,\] where the maximum is taken over the cusps $y$ of $Y$. \end{thm}
\begin{proof} This is shown in \cite{Bruin0}.
More precisely, in the notation of \textit{loc. cit.}, Bruin shows that, with $a=1.44$, we have $N_{\mathrm{SL}_2(\mathbb{Z})}(z,2a^2-1) \leq 58$. In particular, $\sup_{z\in Y} N_\Gamma(z,z,2a^2-1) \leq 58$; see Section 8.2 in \textit{loc. cit.}. Now, we apply Proposition 6.1 and Lemma 6.2 (with $\epsilon = 2\deg \pi$) in \textit{loc. cit.} to deduce the sought inequality.
\end{proof}
\begin{opm}
Jorgenson and Kramer prove a stronger (albeit non-explicit) version of Theorem \ref{JK}; see \cite{JorKra1}.
\end{opm}
\subsection{A Merkl atlas for a Belyi cover of $X(2)$}\label{MerklAtlasSection}
In this section we prove bounds for Arakelov-Green functions of Belyi covers.
Recall that we constructed an atlas $\{(B_\kappa,z_\kappa)\}_{\kappa}$ for $X(2)$. For a cusp $\kappa$ of $X(2)$, let $$y_\kappa:~\mathbb{H}~\longrightarrow~(0,\infty)$$ be defined by \[\tau \mapsto \Im(\gamma_{\kappa}^{-1} \tau)=\frac{1}{2}-\frac{\log\vert z_\kappa(\tau)\vert}{\pi}.\] This induces a function $\dot{B}_\kappa\longrightarrow (0,\infty)$ also denoted by $y_\kappa$.
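Indeed, the second equality is immediate from the definition of $z_\kappa$: since $z_\kappa(\tau) = \exp(\pi i \gamma_\kappa^{-1}\tau + \pi/2)$, we have \[\vert z_\kappa(\tau)\vert = \exp\left(\frac{\pi}{2} - \pi\, \Im(\gamma_\kappa^{-1}\tau)\right), \quad \textrm{so that} \quad \Im(\gamma_\kappa^{-1}\tau) = \frac{1}{2}-\frac{\log\vert z_\kappa(\tau)\vert}{\pi}.\]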
\begin{lem}\label{transition}
For any two cusps $\kappa$ and $\kappa^\prime$ of $X(2)$, we have \[\left\vert \frac{dz_\kappa}{dz_{\kappa^\prime}}\right\vert \leq 4\exp(3\pi/2)\] on $B_\kappa \cap B_{\kappa^\prime}$.
\end{lem}
\begin{proof} We work on the complex upper half-plane $\mathbb{H}$. We may and do assume that $\kappa \neq \kappa^\prime$. By applying $\gamma^{-1}_{\kappa^\prime}$, we may and do assume that $\kappa^\prime =\infty$. On $B_\kappa\cap B_\infty$, we have \[dz_\kappa(\tau) = \pi i \exp(\pi i \gamma_\kappa^{-1} \tau + \pi/2) d(\gamma_\kappa^{-1} \tau), \quad dz_\infty(\tau) = \pi i \exp(\pi i \tau + \pi/2) d( \tau).\] Therefore, \[\frac{dz_\kappa}{dz_{\infty}}(\tau) = \exp(\pi i (\gamma_\kappa^{-1}\tau - \tau))\frac{d(\gamma_\kappa^{-1} \tau) }{d( \tau)}.\] It follows from a simple calculation that, for $\gamma_\kappa^{-1} = \left( \begin{array}{cc} a & b \\ c & d\end{array}\right)$ with $c\neq 0$, \[ \left\vert \frac{dz_\kappa}{dz_{\infty}}\right\vert(\tau) =\frac{1}{\vert c \tau+d\vert^2} \exp(\pi(y_\infty(\tau) - y_\kappa(\tau))). \] For $\tau$ lying over $B_\kappa \cap B_\infty$, one has $y_\infty(\tau)>1/2$ and $y_\kappa(\tau)>1/2$. From $\vert c\tau +d\vert \geq y_\infty(\tau) = \Im(\tau)$, it follows that \[y_\kappa(\tau)=\Im(\gamma_\kappa^{-1}\tau) = \Im\left(\frac{a\tau + b }{c\tau + d}\right) = \frac{\Im \tau}{\vert c\tau +d\vert^2} \leq \frac{\Im \tau}{(\Im \tau)^2}\leq 2, \] and similarly $y_\infty(\tau) \leq 2 $. The statement follows.
\end{proof}
Let $\pi:Y\longrightarrow X(2)$ be a Belyi cover, and let $V=\pi^{-1}(Y(2))$ be the complement of the set of cusps in $Y$. Recall that we constructed an atlas $\{(V_y,w_y)\}$ for $Y$. We assume that the genus~$g$ of $Y$ is positive and, as usual, we let $\mu$ denote the Arakelov $(1,1)$-form on $Y$.
\begin{lem}\label{formula}
For a cusp $y$ of $\pi:Y\to X(2)$ with $\kappa = \pi(y)$, the equality \[idw_y d\overline{w_y} = \frac{2\pi^{2} y_\kappa^2 \vert w_y\vert^{2}}{e_y^2 } \mu_{\mathrm{hyp}}\] holds on $\dot{V}_y$. \end{lem}
\begin{proof} Let $\kappa = \pi(y)$ in $X(2)$. We work on the complex upper half-plane.
By the chain rule, we have\[d(z_\kappa)= d(w_y^{e_y}) = e_y w_y^{e_y-1} dw_y.\] Therefore,
\[ e_y^2 \vert w_y\vert^{2e_y-2} dw_y d\overline{w_y} = dz_\kappa d\overline{z_\kappa}.\]
Note that $dz_\kappa =\pi i z_\kappa d(\gamma_{\kappa}^{-1} ),$ where we view $\gamma_{\kappa}^{-1}:\mathbb{H}\longrightarrow \mathbb{C}$ as a function. Therefore, \[
e_y^2 \vert w_y\vert^{2e_y-2} dw_y d\overline{w_y} = \pi^{2} \vert z_\kappa\vert^{2} d(\gamma^{-1}_\kappa)d(\overline{\gamma^{-1}_\kappa }) .\]
Since $\vert w_y^{e_y}\vert = \vert z_\kappa \vert$, we have \begin{eqnarray*} i dw_y d\overline{w_y} &=&\frac{i\pi^{2} \vert w_y\vert^{2}}{e_y^2 }d(\gamma_{\kappa}^{-1})d(\overline{\gamma_\kappa^{-1}}) = \frac{2\pi^{2} y_\kappa^2 \vert w_y\vert^{2}}{e_y^2 } \frac{i d(\gamma_{\kappa}^{-1})d(\overline{\gamma_\kappa^{-1}})}{2y_\kappa^2} =\frac{2\pi^{2} y_\kappa^2 \vert w_y\vert^{2}}{e_y^2 }\left(\mu_{\mathrm{hyp}}\circ \gamma_\kappa^{-1}\right) . \end{eqnarray*} Since $\mu_{\mathrm{hyp}}$ is invariant under the action of $\textrm{SL}_2(\mathbb{Z})$, this concludes the proof.
\end{proof}
\begin{prop}\label{fy} Let $y$ be a cusp of $\pi:Y\to X(2)$. Write $\mu = i F_y dw_y d\overline{w_y}$ on $V_y$. Then $F_y$ is a subharmonic function on $V_y$ and \[0\leq F_y \leq \frac{128 \exp(3\pi)(\deg \pi)^4}{\pi^2 g}.\]
\end{prop}
\begin{proof} The first statement follows from \cite[page 8]{JorKra1}; see also \cite[page 58]{Bruin}.
The lower bound for $F_y$ is clear from the definition. Let us prove the upper bound for $F_y$.
For a cusp $\kappa$ of $X(2)$, let $\dot{B}_\kappa(2)\subset \dot{B}_\kappa$ be the image of the strip $\{x+iy:-1 \leq x < 1, y>2\}$ in $Y(2)$ under the map $\mathbb{H}\longrightarrow Y(2)$ given by $\tau \mapsto \Gamma(2)\gamma_{\kappa}\tau$. For a cusp $y$ of $Y$ lying over $\kappa$, define $\dot{V}_y(2)=\pi^{-1}(\dot{B}_\kappa(2))$ and $V_y(2) = \dot{V}_y(2)\cup \{y\}$. Since the boundary $\partial V_y(2)$ of $V_y(2)$ is contained in $V_y - V_y(2)$, by the maximum principle for subharmonic functions,
\begin{eqnarray*}
\sup_{ V_y} F_y = \max( \sup_{ V_y(2)} F_y, \sup_{V_y-V_y(2)} F_y) = \max(\sup_{\partial V_y(2)} F_y , \sup_{V_y-V_y(2)} F_y) = \sup_{V_y-V_y(2)} F_y.
\end{eqnarray*}
By Lemma \ref{formula}, Definition \ref{GammaBelyi} and (\ref{mumuhyp}) in Section \ref{cofin}, \begin{eqnarray}\label{fything} F_y &=& F_\Gamma \frac{e_y^2}{2g\pi^2 y_\kappa^2 \vert w_y\vert^2}.\end{eqnarray} Note that $y_\kappa^{-2} < 4$ on $V_y$. Furthermore, \[\sup_{V_y-V_y(2)}
\vert w_y\vert^{-2}\leq \sup_{B_\kappa-B_\kappa(2)} \vert z_\kappa\vert^{-2} = \exp(-\pi) \sup_{B_\kappa-B_\kappa(2)} \exp(2\pi y_\kappa) \leq \exp(3\pi). \] Thus, the proposition follows from Jorgenson-Kramer's upper bound for $F_\Gamma$ (Theorem \ref{JK}).
\end{proof}
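\begin{opm}
For the reader's convenience, we record how the constant in Proposition \ref{fy} is assembled. By (\ref{fything}), we have $F_y = F_\Gamma e_y^2/(2g\pi^2 y_\kappa^2 \vert w_y\vert^2)$ on $\dot{V}_y$. Bounding $F_\Gamma \leq 64(\deg \pi)^2$ by Theorem \ref{JK}, and using $e_y \leq \deg \pi$, $y_\kappa^{-2}<4$ and $\vert w_y\vert^{-2} \leq \exp(3\pi)$ on $V_y - V_y(2)$, we obtain \[ F_y \leq 64 (\deg \pi)^2 \cdot \frac{(\deg \pi)^2}{2g\pi^2}\cdot 4 \cdot \exp(3\pi) = \frac{128\exp(3\pi)(\deg\pi)^4}{\pi^2 g}.\]
\end{opm}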
\begin{defn}\label{r1} Define $s_1 = \sqrt{1/2} $. Note that $\frac{1}{2} <s_1<1$. For any cusp $\kappa$ of $X(2)$, let $B_{\kappa}^{s_1}$ be the open subset of $B_{\kappa}$ whose image under $z_\kappa$ is $\{x\in \mathbb{C} : \vert x\vert < s_1\}$. Moreover, define the positive real number $r_1$ by the equation $r_1^{\deg \pi} = s_1$. Note that $\frac{1}{2} < r_1 < 1$. For all cusps $y$ of $\pi:Y\to X(2)$, define the subset $V_{y}^{r_1}\subset V_y$ by $V_{y}^{r_1} = \{x\in V_y: \vert w_y(x)\vert < r_1 \}$.
\end{defn}
\begin{thm}\label{MerklResult} Let $\pi:Y\longrightarrow X(2)$ be a Belyi cover such that $Y$ is of genus $g\geq 1$. Then
\begin{eqnarray*} \sup_{Y\times Y\backslash \Delta} \gr_Y &\leq & 6378027\frac{(\deg \pi)^5}{g}. \end{eqnarray*} Moreover, for every cusp $y$ and all $x\neq x^\prime $ in $V_y^{r_1}$,
\begin{eqnarray*} \left \vert \gr_Y(x,x^\prime) - \log\vert w_{y}(x)-w_y(x^\prime)\vert \right \vert &\leq & 6378027\frac{(\deg \pi)^5}{g}.\end{eqnarray*}
\end{thm}
\begin{proof} Write $d=\deg \pi$. Let $s_1$ and $r_1$ be as in Definition \ref{r1}. We define real numbers \[n:=\#(Y-V), \quad M :=4d\exp(3\pi), \quad c_1:= \frac{128 \exp(3\pi)d^4}{\pi^2 g} .\]
Since $n$ is the number of cusps of $Y$, we have $ n \leq 3d$. Moreover \[\frac{1}{1-r_1} \leq \frac{d}{1-s_1}.\]
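Indeed, since $s_1 = r_1^d$ and $0<r_1<1$, we have \[1-s_1 = 1-r_1^d = (1-r_1)(1+r_1+\cdots +r_1^{d-1}) \leq d(1-r_1),\] which gives the claimed inequality $\frac{1}{1-r_1}\leq \frac{d}{1-s_1}$.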
Note that \[ \frac{330 n}{(1-r_1)^{3/2}}\log\frac{1}{1-r_1} + 13.2nc_1 +(n-1) \log M \leq 6378027\frac{d^5}{g}.\] Therefore, by Theorem \ref{Merkl}, it suffices to show that \[(\{(V_y,w_y)\}_{y}, r_1, M, c_1), \] where $y$ runs over the cusps of $\pi:Y\to X(2)$, constitutes a Merkl atlas for $Y$.
The first condition of Definition \ref{MerklAtlas} is satisfied: $w_y V_{y}$ is the open unit disc in $\mathbb{C}$.
To verify the second condition of Definition \ref{MerklAtlas}, we have to show that the open sets $V_y^{r_1}$ cover $Y$. For any $x\in V_{y}$, we have $x\in V_y^{r_1}$ if $\pi(x) \in B_{\kappa}^{s_1}$. In fact, for any $x$ in $V_{y}$, we have $\vert w_y( x)\vert < r_1$ if and only if \[\vert z_\kappa(\pi(x))\vert=\vert w_y (x)\vert^{e_y} < r_1^{e_y}.\] Since $r_1<1$, we see that $s_1=r_1^d\leq r_1^{e_y}$. Therefore, if $\pi(x)$ lies in $B_{\kappa}^{s_1}$, we see that $x$ lies in $V_y^{r_1}$. Now, since $s_1 < \frac{\sqrt{3}}{2}$, we have $X(2) = \cup_{\kappa \in \{0,1,\infty\}} B_\kappa^{s_1}$. Thus, we conclude that $Y=\cup_y V_y^{r_1}$, where $y$ runs through the cusps.
Since we have already verified the fourth condition of Definition \ref{MerklAtlas} in Proposition \ref{fy}, it suffices to verify the third condition to finish the proof. Let $y$ and $y^\prime$ be cusps of $Y$ lying over the cusps $\kappa$ and $\kappa^\prime$ of $X(2)$, respectively. We may and do assume that $\kappa \neq \kappa^\prime$. Now, as usual, we work on the complex upper half-plane. By the chain rule,
\[ \left\vert\frac{dw_y}{dw_{y^\prime}}\right\vert \leq \frac{d}{\vert w_y\vert^{e_y-1}} \sup_{B_\kappa\cap B_{\kappa^\prime}}\left \vert \frac{dz_\kappa}{dz_{\kappa^\prime}} \right\vert\] on $V_{y}\cap V_{y^\prime}$.
Note that $\vert w_y(\tau)\vert^{e_y-1} \geq \vert w_y(\tau)\vert^{e_y}= \vert z_\kappa(\tau)\vert $ for any $\tau$ in $\mathbb{H}$. Therefore,
\[ \left\vert\frac{dw_y}{dw_{y^\prime}}\right\vert\leq \frac{d}{\vert z_{\kappa}\vert} \sup_{B_\kappa\cap B_{\kappa^\prime}}\left \vert \frac{dz_\kappa}{dz_{\kappa^\prime}} \right\vert \leq M,\] where we used Lemma \ref{transition} and the inequality $\vert z_\kappa\vert > \exp(-3\pi /2)$ on $B_\kappa\cap B_{\kappa^\prime}$.
\end{proof}
\subsection{The Arakelov norm of the Wronskian differential}
\begin{prop}\label{Wronskian2} Let $\pi:Y\longrightarrow X(2)$ be a Belyi cover with $Y$ of genus $g\geq 1$. Then \[\sup_{Y-\mathrm{Supp} \mathcal{W}}\log\Vert \mathrm{Wr} \Vert_{\mathrm{Ar}} \leq 6378028 g(\deg \pi)^5.\]
\end{prop}
\begin{proof}
Let $b$ be a non-Weierstrass point on $ Y$ and let $y$ be a cusp of $Y$ such that $b$ lies in $V_y^{r_1}$. Let $\omega=(\omega_1,\ldots,\omega_g)$ be an orthonormal basis of $\mathrm{H}^0(Y,\Omega^1_Y)$. Then, as in Section \ref{admissible}, \[\log \Vert \mathrm{Wr}\Vert_{\mathrm{Ar}}(b) = \log\vert W_{w_y}(\omega)(b)\vert+ \frac{g(g+1)}{2}\log\Vert dw_y\Vert_{\mathrm{Ar}}(b).\] By Theorem \ref{MerklResult}, \begin{eqnarray*} \frac{g(g+1)}{2}\log \Vert dw_y\Vert_{\mathrm{Ar}}(b) &\leq & 6378027 g(\deg \pi)^5. \end{eqnarray*} Let us show that $\log \vert W_{w_y}(\omega)(b)\vert \leq g (\deg \pi)^5$. Write $\omega_k = f_k dw_y$ on $V_y$. Note that $\omega_k \wedge \overline{\omega_k} = \vert f_k\vert^2 dw_y \wedge d\overline{w_y}$. Therefore, \[ \mu = \frac{i}{2g} \sum_{k=1}^g \omega_k \wedge \overline{\omega_k} = \frac{i}{2g} \sum_{k=1}^g \vert f_k\vert^2 dw_y \wedge d\overline{w_y}.\] We deduce that $\sum_{k=1}^g \vert f_k\vert^2 = 2g F_y$, where $F_y$ is the unique function on $V_y$ such that $\mu = i F_y dw_y \wedge d\overline{w_y}$. By our upper bound for $F_y$ (Proposition \ref{fy}), for any $j=1,\ldots, g$, \[\sup_{V_y} \vert f_j\vert^2 \leq \sup_{V_y} \sum_{k=1}^g
\vert f_k\vert^2 =2gF_y \leq \frac{256\exp(3\pi)(\deg \pi)^4}{\pi^2 }.\] By Hadamard's inequality, \[ \log\vert W_{w_y}(\omega)(b)\vert \leq \sum_{l=0}^{g-1} \log\left(\sum_{k=1}^g \left\vert \frac{d^{l}f_k}{dw_y^{l}}\right\vert^2(b) \right)^{1/2}.\] Let $r_1 < r<1$ be a real number. By Cauchy's integral formula, for any $0\leq l \leq g-1$, \[\left \vert \frac{d^{l}f_k}{dw_y^{l}}\right\vert(b) = \left\vert \frac{l!}{2\pi i} \int_{\vert w_y\vert =r}\frac{f_k}{(w_y-w_y(b))^{l+1}}dw_y\right\vert \leq \frac{l!\, r}{(r-r_1)^{l+1}} \sup_{V_y} \vert f_k\vert. \] Letting $r$ tend to $1$, we obtain \[\left \vert \frac{d^{l}f_k}{dw_y^{l}}\right\vert(b) \leq \frac{l!}{(1-r_1)^{l+1}} \sup_{V_y} \vert f_k\vert \leq \frac{g!}{(1-r_1)^{g}} \sup_{V_y} \vert f_k\vert. \] By the preceding estimates, since $g! \leq g^g$ and $\frac{1}{1-r_1} \leq \frac{\deg\pi}{1-s_1}$, we obtain that
\begin{eqnarray*}
\log\vert W_{w_y}(\omega)(b)\vert &\leq & \sum_{l=0}^{g-1} \log\left(\frac{g!}{(1-r_1)^{g}} \left(\sum_{k=1}^g \sup_{V_y} \vert f_k\vert^2 \right)^{1/2}\right)
\\ & \leq & \sum_{l=0}^{g-1} \log \left( \frac{g!}{(1-r_1)^g} \left(\sum_{k=1}^g \frac{256 \exp(3\pi)(\deg \pi)^4}{\pi^2 } \right)^{1/2}\right) \\ &=& g\log(g!) +g^2 \log \left(\frac{1}{1-r_1}\right) +\frac{g}{2} \log\left(\frac{ 256g\exp(3\pi)}{\pi^2} \right) + 2g\log(\deg \pi) \\ &\leq & \left(4.5+\log\left(\frac{1}{1-s_1}\right)+\frac{1}{2}\log\left(\frac{256\exp(3\pi)}{\pi^2} \right)\right) g^2 \log (\deg \pi) \\
& \leq & 13 g (\deg \pi)^2.
\end{eqnarray*} Since $g\geq 1$ and $\pi:Y\to X(2)$ is a Belyi cover, the inequality $\deg \pi \geq 3$ holds. Thus, \[13g (\deg \pi)^2 \leq \frac{13 g (\deg \pi)^5}{27} \leq g (\deg \pi)^5. \qedhere \]
\end{proof}
\section{Points of bounded height}\label{belyiheights}
\subsection{Lenstra's generalization of Dedekind's discriminant bound}\label{lenstra_section}
Let $A$ be a discrete valuation ring of characteristic zero with fraction field $K$. Let $\mathrm{ord}_A$ denote the valuation on $A$. Let $L/K$ be a finite field extension of degree $n$, and let $B$ be the integral closure of $A$ in $L$. Note that $L/K$ is separable, and $B/A$ is finite; see \cite[Proposition I.4.8]{Serre}.
The inverse different $\mathfrak{D}_{B/A}^{-1}$ of $B$ over $A$ is the fractional ideal \[\{x\in L : \mathrm{Tr}(xB)\subset A\},\] where $\mathrm{Tr}$ is the trace of $L$ over $K$. The inverse of the inverse different, denoted by $\mathfrak{D}_{B/A}$, is the different of $B$ over $A$. Note that $\mathfrak{D}_{B/A}$ is actually an integral ideal of $L$.
The following proposition (which we would like to attribute to H.W. Lenstra jr.) is a generalization of Dedekind's discriminant bound (\cite[Proposition III.6.13]{Serre}).
\begin{prop}\label{different0}{ \bf (H.W. Lenstra jr.)} Let $A$ be a discrete valuation ring of characteristic zero with fraction field $K$, and let $B$ be the integral closure of $A$ in a finite field extension $L/K$ of degree $n$. Suppose that $B$ is a discrete valuation ring of ramification index $e$ over $A$. Then, the valuation $r$ of the different ideal $\mathfrak{D}_{B/A}$ on $B$ satisfies the inequality \[ r \leq e - 1+e\cdot \mathrm{ord}_A(n) .\]
\end{prop}
\begin{proof} Let $x$ be a uniformiser of $A$. Since $A$ is of characteristic zero, we may define $y:=\frac{1}{nx}$; note that $y$ is an element of $K$. The trace of $y$ (as an element of $L$) is $\frac{1}{x}$. Since $1/x$ is not in $A$, the element $y$ does not lie in the inverse different $\mathfrak{D}_{B/A}^{-1}$. As the fractional ideals of the discrete valuation ring $B$ are totally ordered by inclusion, this implies that the inverse different $\mathfrak{D}_{B/A}^{-1}$ is strictly contained in the fractional ideal $yB$. (Otherwise $yB$ would be contained in $\mathfrak{D}_{B/A}^{-1}$, so that $\mathrm{Tr}(yB)\subset A$, contradicting $\mathrm{Tr}(y)=1/x \notin A$.) In particular, the different $\mathfrak{D}_{B/A}$ strictly contains the fractional ideal $(nx)$. Therefore, the valuation $\mathrm{ord}_B(\mathfrak{D}_{B/A})$ on $B$ of $\mathfrak{D}_{B/A}$ is strictly less than the valuation of $nx$. Thus, \[\mathrm{ord}_B(\mathfrak{D}_{B/A}) < \mathrm{ord}_B(n x) = e \cdot \mathrm{ord}_A(nx) = e (\mathrm{ord}_A(n) + 1) = e \cdot \mathrm{ord}_A(n) + e .\] This concludes the proof of the inequality.
\end{proof}
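\begin{opm}
As an illustration of Proposition \ref{different0} (this example is not needed in the sequel), take $A = \mathbb{Z}_2$ and let $B=\mathbb{Z}_2[i]$ be the integral closure of $A$ in $L=\mathbb{Q}_2(i)$. Here $n=e=2$ and $1+i$ is a uniformiser of $B$. Since $2 = -i(1+i)^2$, the different $\mathfrak{D}_{B/A} = (2i) = (1+i)^2$ has valuation $r=2$, in accordance with the bound \[r \leq e - 1+e\cdot \mathrm{ord}_A(n) = 1 + 2 = 3.\]
\end{opm}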
\begin{opm}
If the extension of residue fields of $B/A$ is separable, Proposition \ref{different0} follows from the \textit{Remarque} following Proposition III.6.13 in \cite{Serre}. (The result in \textit{loc. cit.} was conjectured by Dedekind and proved by Hensel when $A=\mathbb{Z}$.) The reader will see that, in the proof of Proposition \ref{upperbound2}, we have to deal with imperfect residue fields.
\end{opm}
\begin{prop}\label{different1} Let $A$ be a discrete valuation ring of characteristic zero with fraction field $K$, and let $B$ be the integral closure of $A$ in a finite field extension $L/K$ of degree $n$. Suppose that the residue characteristic $p$ of $A$ is positive. Let $m$ be the biggest integer such that $p^m\leq n$. Then, for $\beta\subset B$ a maximal ideal of $B$ with ramification index $e_\beta$ over $A$, the valuation $r_\beta$ of the different ideal $\mathfrak{D}_{B/A}$ at $\beta$ satisfies the inequality \[ r_\beta \leq e_\beta - 1+e_\beta\cdot \mathrm{ord}_A(p^m) .\]
\end{prop}
\begin{proof}
To compute $r_\beta$, we localize $B$ at $\beta$, and then take the completions $\widehat{A}$ and $\widehat{B_\beta}$ of $A$ and $B_\beta$, respectively. Let $d$ be the degree of $\widehat{B_\beta}$ over $\widehat{A}$. Then, by Lenstra's result (Proposition \ref{different0}), the inequality \[r_\beta \leq e_\beta -1+e_\beta\cdot \mathrm{ord}_{\widehat{A}}(d) \] holds. Since $d\leq n$, the largest power of $p$ dividing $d$ is at most $p^m$, so that $\mathrm{ord}_{\widehat A} (d)=\mathrm{ord}_A(d)\leq \mathrm{ord}_A(p^m)$. This concludes the proof.
\end{proof}
\subsection{Covers of arithmetic surfaces with fixed branch locus}\label{someupperbound}
Let $K$ be a number field with ring of integers $O_K$, and let $S=\Spec O_K$. Let $D$ be a reduced effective divisor on $\mathcal{X} = \mathbb{P}^1_{S}$, and let $U$ denote the complement of the support of $D$ in $\mathcal{X}$. Let $\mathcal{Y} \to S$ be an integral normal 2-dimensional flat projective $S$-scheme with geometrically connected fibres, and let $\pi:\mathcal{Y}\longrightarrow \mathcal{X}$ be a finite surjective morphism of $S$-schemes which is \'etale over $U$. Note that $\pi:\mathcal{Y}\longrightarrow \mathcal{X}$ is a flat morphism. (The source is normal of dimension two, and the target is regular.) Let $\psi:\mathcal{Y}^\prime \to \mathcal{Y}$ be the minimal resolution of singularities (\cite[Proposition 9.3.32]{Liu2}). We have the following diagram of morphisms \[\xymatrix{\mathcal{Y}^\prime \ar[r]^{\psi} & \mathcal{Y} \ar[r]^\pi & \mathcal{X} \ar[r] & S. }\] Consider the prime decomposition $D= \sum_{i\in I} D_i$, where $I$ is a finite index set. Let $D_{ij}$ be an irreducible component of $\pi^{-1}(D)$ mapping onto $D_i$, where $j$ is in the index set $J_i$. We define $r_{ij}$ to be the valuation of the different ideal of $\mathcal{O}_{\mathcal{Y},D_{ij}}/\mathcal{O}_{\mathcal{X},D_i}$. We define the \textit{ramification divisor} $R$ to be $\sum_{i\in I}\sum_{j\in J_i} r_{ij} D_{ij}$. We define $B:=\pi_\ast R$. (We call $B$ the \textit{branch divisor} of $\pi:\mathcal{Y}\to \mathcal{X}$.)
We apply \cite[6.4.26]{Liu2} to obtain that there exists a dualizing sheaf $\omega_{\mathcal{Y}/S}$ for $\mathcal{Y}\to S$, and a dualizing sheaf $\omega_\pi$ for $\pi:\mathcal{Y}\to \mathcal{X}$ such that the adjunction formula \[\omega_{\mathcal{Y}/S} = \pi^\ast \omega_{\mathcal{X}/S}\otimes\omega_{\pi}\] holds. Since the local ring at the generic point of a divisor on $\mathcal{X}$ is of characteristic zero, basic properties of the different ideal imply that $\omega_\pi$ is canonically isomorphic to the line bundle $\mathcal{O}_{\mathcal{Y}}(R)$. We deduce the \textit{Riemann-Hurwitz} formula \[\omega_{\mathcal{Y}/S} = \pi^\ast \omega_{\mathcal{X}/S}\otimes\mathcal{O}_{\mathcal{Y}}(R).\]
Let $K_{\mathcal{X}}=-2\cdot[\infty]$ be the divisor defined by the tautological section of $\omega_{\mathcal{X}/O_K}$. Let $K_{\mathcal{Y}^\prime}$ denote the Cartier divisor on $\mathcal{Y}^\prime$ defined by the rational section $d(\pi\circ \psi)$ of $\omega_{\mathcal{Y}^\prime/S}$. We define the Cartier divisor $K_{\mathcal{Y}}$ on $\mathcal{Y}$ analogously, i.e., $K_{\mathcal{Y}}$ is the Cartier divisor on $\mathcal{Y}$ defined by $d\pi$. Note that $K_{\mathcal{Y}} = \psi_\ast K_{\mathcal{Y}^\prime}$. Also, the Riemann-Hurwitz formula implies the following equality of Cartier divisors \[K_\mathcal{Y} = \pi^\ast K_{\mathcal{X}} + R.\]
Let $E_1,\ldots, E_s$ be the exceptional components of $\psi: \mathcal{Y}^\prime \longrightarrow \mathcal{Y}$. Note that the pull-back of the Cartier divisor $\psi^\ast K_{\mathcal{Y}}$ coincides with $K_{\mathcal{Y}^\prime}$ on $$\mathcal{Y}^\prime - \bigcup_{i=1}^s E_i.$$ Therefore, there exist integers $c_i$ such that \[K_{\mathcal{Y}^\prime} = \psi^\ast K_{\mathcal{Y}} + \sum_{i=1}^s c_i E_i,\] where this is an equality of Cartier divisors (\textbf{not only} modulo linear equivalence). Note that $(\psi^\ast K_{\mathcal{Y}},E_i) =0 $ for all $i$. In fact, $K_{\mathcal{Y}}$ is linearly equivalent to a Cartier divisor with support disjoint from the singular locus of~$\mathcal{Y}$.
\begin{lem}\label{c_i}
For all $i=1,\ldots,s$, we have $c_i \leq 0$.
\end{lem}
\begin{proof}
We have the following local statement. Let $y$ be a singular point of $\mathcal{Y}$, and let $E_1,\ldots, E_r$ be the exceptional components of $\psi$ lying over $y$. We define \[V_+ = \sum_{\substack{1\leq i \leq r \\ c_i >0}} c_i E_i\] to be the sum over those $i$ with $c_i>0$. To prove the lemma, it suffices to show that $V_+ =0$. Since the intersection form on the exceptional locus of $\mathcal{Y}^\prime\to \mathcal{Y}$ is negative definite (\cite[Proposition~9.1.27]{Liu2}), to prove $V_+ =0$, it suffices to show that $(V_+,V_+) \geq 0$. Clearly, to prove the latter inequality, it suffices to show that, for all $i$ such that $c_i>0$, we have $(V_+,E_i)\geq 0$. To do this, fix $i\in \{1,\ldots,r\}$ with $c_i>0$. Since $\mathcal{Y}^\prime\to \mathcal{Y}$ is minimal, we have that $E_i$ is not a $(-1)$-curve. In particular, by the adjunction formula, the inequality $(K_{\mathcal{Y}^\prime},E_i) \geq 0$ holds. We conclude that \[(V_+, E_i) = (K_{\mathcal{Y}^\prime},E_i) - \sum_{\substack{1\leq j \leq r \\ c_j<0}} c_j (E_j,E_i) \geq 0, \] where, in the last inequality, we used that, for all $j$ such that $c_j<0$, we have that $E_j \neq E_i$.
\end{proof}
\begin{prop}\label{hulp} Let $P^\prime:S\to \mathcal{Y}^\prime$ be a section, and let $Q:S\to \mathcal{X}$ be the induced section. If the image of $P^\prime$ is not contained in the support of $K_{\mathcal{Y}^\prime}$, then \[( K_{\mathcal{Y}^\prime},P^\prime)_{\fin} \leq (B,Q)_{\fin}.\]
\end{prop}
\begin{proof} Note that, by the Riemann-Hurwitz formula, we have $K_{\mathcal{Y}} = \pi^\ast K_{\mathcal{X}} + R$. Therefore, by Lemma \ref{c_i}, we get that
\begin{eqnarray*}
( K_{\mathcal{Y}^\prime},P^\prime)_{\fin} &=& (\psi^\ast K_{\mathcal{Y}}+\sum c_i E_i,P^\prime)_{\fin} \\ &=& (\psi^\ast \pi^\ast K_{\mathcal{X}}+\psi^\ast R + \sum_{i=1}^s c_i E_i, P^\prime)_{\fin} \\
& \leq & (\psi^\ast \pi^\ast K_{\mathcal{X}},P^\prime)_{\fin}+(\psi^\ast R,P^\prime)_{\fin}.
\end{eqnarray*}
Since the image of $P^\prime$ is not contained in the support of $K_{\mathcal{Y}^\prime}$,
we can apply the projection formula for the composed morphism $\pi\circ \psi:\mathcal{Y}^\prime\to \mathcal{X}$
to $( \psi^\ast \pi^\ast K_{\mathcal{X}}, P^\prime)_{\fin}$ and $(\psi^\ast R,P^\prime)_{\fin}$; see~\cite[Section~9.2]{Liu2}. This gives \[(K_{\mathcal{Y}^\prime},P^\prime)_{\fin} \leq (\psi^\ast \pi^\ast K_{\mathcal{X}},P^\prime)_{\fin}+(\psi^\ast R,P^\prime)_{\fin}= (K_{\mathcal{X}},Q)_{\fin}+ (\pi_\ast R,Q)_{\fin}.\]
Since $K_{\mathcal{X}} = -2\cdot [\infty]$, the inequality $(K_{\mathcal{X}},Q)_{\fin} \leq 0$ holds. By definition, $B=\pi_\ast R$. This concludes the proof.
\end{proof}
We introduce some notation. For $i$ in $I$ and $j$ in $J_i$, let $e_{ij}$ and $f_{ij}$ be the ramification index and residue degree of $\pi$ at the generic point of $D_{ij}$, respectively. Moreover, let $\mathfrak{p}_i\subset O_K$ be the maximal ideal corresponding to the image of $D_i$ in $\Spec O_K$. Then, note that $e_{ij}$ is the multiplicity of $D_{ij}$ in the fibre of $\mathcal{Y}$ over $\mathfrak{p}_i$. Now, let $e_{\mathfrak{p}_i}$ and $f_{\mathfrak{p}_i}$ be the ramification index and residue degree of $\mathfrak{p}_i$ over $\mathbb{Z}$, respectively. Finally, let $p_i$ be the residue characteristic of the local ring at the generic point of $D_i$ and, if $p_i>0$, let $m_i$ be the biggest integer such that $p_i^{m_i} \leq \deg \pi$, i.e., $m_i = \lfloor \log (\deg \pi)/\log (p_i)\rfloor$.
\begin{lem}\label{different_thesis} Let $i$ be in $I$ such that $0<p_i \leq \deg \pi$. Then, for all $j$ in $J_i$, \[r_{ij} \leq 2e_{ij}m_i e_{\mathfrak{p}_i}.\]
\end{lem}
\begin{proof} Let $\mathrm{ord}_{D_i}$ be the valuation on the local ring at the generic point of $D_i$. Then, by Proposition \ref{different1}, the inequality \[r_{ij} \leq e_{ij} - 1 +e_{ij} \cdot \mathrm{ord}_{D_i}(p_i^{m_i})\] holds. Note that $\mathrm{ord}_{D_i}(p_i^{m_i}) = m_i e_{\mathfrak{p}_i}$. Since $p_i \leq \deg \pi$, we have that $m_i \geq 1$. Therefore, \[r_{ij}\leq e_{ij} - 1 +e_{ij} m_i e_{\mathfrak{p}_i} \leq 2e_{ij} m_i e_{\mathfrak{p}_i}.\qedhere\]
\end{proof}
Let us introduce a bit more notation. Let $I_1$ be the set of $i$ in $I$ such that $D_i$ is horizontal (i.e., $p_i =0$) or $p_i > \deg \pi $. Let $D_1 = \sum_{i\in I_1} D_i$. We are now finally ready to combine our results to bound the ``non-archimedean'' part of the height of a point.
\begin{prop}\label{upperbound2} Let $P^\prime:S\to \mathcal{Y}^\prime$ be a section, and let $Q:S\to \mathcal{X}$ be the induced section. If the image of $P^\prime$ is not contained in the support of $K_{\mathcal{Y}^\prime}$, then \[( K_{\mathcal{Y}^\prime},P^\prime)_{\fin} \leq \deg \pi(D_1,Q)_{\fin} + 2(\deg \pi)^2 \log(\deg \pi)[K:\mathbb{Q}].\]
\end{prop}
\begin{proof}
Note that \[B= \sum_{i\in I} \left(\sum_{j\in J_i} r_{ij} f_{ij}\right) D_i.\] Let $I_2 $ be the complement of $I_1$ in $I$. Let $D_2 = \sum_{i\in I_2} D_i$, and note that $D= D_1+D_2$. In particular, \begin{eqnarray*}(B,Q)_{\fin} &=& \sum_{i\in I} \sum_{j\in J_i} r_{ij}f_{ij} (D_i,Q)_{\fin} \\ &= & \sum_{i\in I_1} \sum_{j\in J_i} r_{ij}f_{ij} (D_i,Q)_{\fin} + \sum_{i\in I_2}\sum_{j\in J_i} r_{ij}f_{ij} (D_{i},Q)_{\fin}. \end{eqnarray*} Note that, for all $i$ in $I_1$ and $j$ in $J_i$, the ramification of $D_{ij}$ over $D_i$ is tame, i.e., the equality $r_{ij} = e_{ij}-1$ holds.
Note that, for all $i$ in $I$, we have $\sum_{j\in J_i} e_{ij} f_{ij} = \deg \pi$. Thus,
\[ \sum_{i\in I_1} \sum_{j\in J_i} r_{ij}f_{ij} (D_i,Q)_{\fin} \leq \sum_{i\in I_1} \sum_{j\in J_i} e_{ij}f_{ij} (D_i,Q)_{\fin} = \deg \pi (D_1,Q)_{\fin}. \] We claim that \[\sum_{i\in I_2}\sum_{j\in J_i} r_{ij}f_{ij} (D_{i},Q)_{\fin} \leq 2(\deg\pi)^2\log(\deg \pi)[K:\mathbb{Q}]. \] In fact, by Lemma \ref{different_thesis}, for all $i$ in $I_2$ and $j$ in $J_i$, the inequality \[r_{ij} \leq 2e_{ij} m_i e_{\mathfrak p_i}\] holds, so that \begin{eqnarray*} \sum_{i\in I_2}\sum_{j\in J_i} r_{ij}f_{ij} (D_{i},Q)_{\fin} &\leq & 2\sum_{i\in I_2} m_ie_{\mathfrak{p}_i}(D_{i},Q)_{\fin}\left(\sum_{j\in J_i} e_{ij} f_{ij}\right) \\ &=& 2(\deg \pi)\sum_{i\in I_2} m_i e_{\mathfrak{p}_i} (D_i,Q)_{\fin}.\end{eqnarray*} Note that $(D_i,Q)_{\fin} = \log (\# k(\mathfrak{p}_i)) = f_{\mathfrak p_i}\log p_i$. We conclude that \begin{eqnarray*} \sum_{i\in I_2} m_i e_{\mathfrak{p}_i} (D_i,Q)_{\fin} &=& \sum_{p \textrm{ prime}} \left(\sum_{i\in I_2, p_i =p} e_{\mathfrak{p}_i}f_{\mathfrak{p}_i}\right) \left\lfloor \frac{\log(\deg \pi)}{\log p}\right\rfloor \log(p) \\ &\leq& [K:\mathbb{Q}] \sum_{\mathcal X_p\cap \vert D_2\vert \neq \emptyset} \left\lfloor \frac{\log(\deg \pi)}{\log p}\right\rfloor \log(p), \end{eqnarray*} where the last sum runs over all prime numbers $p$ such that the fibre $\mathcal X_p$ contains an irreducible component of the support of $D_2$. Thus,
\[(B,Q)_{\fin} \leq(\deg \pi)(D_1,Q)_{\fin} + 2(\deg \pi) [K:\mathbb{Q}] \sum_{\mathcal X_p\cap \vert D_2\vert \neq \emptyset} \left\lfloor \frac{\log(\deg \pi)}{\log p}\right\rfloor \log(p). \] Note that \[\sum_{\mathcal X_p\cap \vert D_2\vert \neq \emptyset} \left\lfloor \frac{\log(\deg \pi)}{\log p}\right\rfloor \log(p) \leq \sum_{\mathcal X_p\cap \vert D_2\vert \neq \emptyset} \log(\deg \pi) \leq \deg\pi \log(\deg \pi),\] where we used that $\mathcal X_p \cap \vert D_2\vert \neq \emptyset$ implies that $p\leq \deg \pi$, so that there are at most $\deg \pi$ such primes.
In particular, \[(B,Q)_{\fin} \leq (\deg \pi) ( D_1,Q)_{\fin} + 2(\deg \pi)^2 \log(\deg \pi) [K:\mathbb{Q}].\] By Proposition \ref{hulp}, we conclude that \[(K_{\mathcal{Y}^\prime},P^\prime)_{\fin} \leq (\deg \pi) ( D_1,Q)_{\fin} + 2(\deg \pi)^2 \log(\deg \pi) [K:\mathbb{Q}].\qedhere\]
\end{proof}
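The elementary estimate $\sum_{p\leq \deg\pi} \lfloor \log(\deg\pi)/\log p\rfloor \log p \leq \deg\pi\log(\deg\pi)$ used at the end of the proof can be checked numerically; the following quick sketch (Python; our own illustration, not part of the text) verifies it for small degrees:

```python
from math import log

def prime_sum(d: int) -> float:
    """Sum over primes p <= d of floor(log d / log p) * log p."""
    total = 0.0
    for p in range(2, d + 1):
        if all(p % k for k in range(2, int(p ** 0.5) + 1)):  # p is prime
            m, pk = 0, p
            while pk <= d:  # m = largest exponent with p^m <= d
                m, pk = m + 1, pk * p
            total += m * log(p)
    return total

# The estimate used in the proof: the sum is at most d * log(d).
for d in range(2, 300):
    assert prime_sum(d) <= d * log(d) + 1e-9
```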
\subsection{Models of covers of curves}
In this section, we give a general construction for a model of a cover of the projective line. Let $K$ be a number field with ring of integers $O_K$, and let $S=\Spec O_K$.
\begin{prop}\label{semi_stable_generalization} Let $\mathcal Y\to \Spec O_K$ be a flat projective morphism with geometrically connected fibres of dimension one, where $\mathcal{Y}$ is an integral normal scheme. Then, there exists a finite field extension $L/K$ such that the minimal resolution of singularities of the normalization of $\mathcal Y \times_{O_K} O_L$ is semi-stable over $O_L$.
\end{prop}
\begin{proof}
This follows from \cite[Corollary 2.8]{Liu1}.
\end{proof}
The main result of this section reads as follows.
\begin{thm}\label{model} Let $K$ be a number field, and let $Y$ be a smooth projective geometrically connected curve over $K$. Then, for any finite morphism $\pi_K:Y\to \mathbb{P}^1_K$, there exists a number field $L/K$ such that:
\begin{itemize}
\item the normalization $\pi:\mathcal{Y}\to \mathbb{P}^1_{O_L}$ of $\mathbb{P}^1_{O_L}$ in the function field of $Y_L$ is finite flat surjective;
\item the minimal resolution of singularities $\psi:\mathcal{Y}^\prime \longrightarrow \mathcal{Y}$ is semi-stable over $O_L$;
\item each irreducible component of the vertical part of the branch locus of the finite flat morphism $\pi:\mathcal{Y}\to \mathbb{P}^1_{O_L}$ is of characteristic less than or equal to $\deg \pi$. (The characteristic of a prime divisor $D$ on $\mathbb{P}^1_{O_L}$ is the residue characteristic of the local ring at the generic point of $D$.)
\end{itemize}
\end{thm}
\begin{proof} By Proposition \ref{semi_stable_generalization}, there exists a finite field extension $L/K$ such that the minimal resolution of singularities $\psi:\mathcal{Y}^\prime \longrightarrow \mathcal{Y}$ of the normalization of $\mathbb{P}^1_{O_L}$ in the function field of $Y_L$ is semi-stable over $O_{L}$. Note that the finite morphism $\pi:\mathcal{Y}\to \mathbb{P}^1_{O_L}$ is flat. (The source is normal of dimension two, and the target is regular.) Moreover, since the fibres of $\mathcal{Y}^\prime\to \Spec O_L$ are reduced, the fibres of $\mathcal{Y}$ over $O_L$ are reduced. Let $\mathfrak p\subset O_L$ be a maximal ideal of residue characteristic strictly bigger than $\deg \pi$, and note that the ramification of $\pi:\mathcal{Y}\to \mathbb{P}^1_{O_L}$ over (each prime divisor of $\mathbb{P}^1_{O_L}$ lying over) $\mathfrak p$ is tame. Since the fibres of $\mathcal{Y}\to \Spec O_L$ are reduced, we see that the finite morphism $\pi$ is unramified over $\mathfrak p$. In fact, since the ramification is tame, the valuation on $\mathcal{O}_D$ of the different ideal $\mathcal{D}_{\mathcal{O}_D/\mathcal{O}_{\pi(D)}}$ of an irreducible component $D$ of $\mathcal Y_{\mathfrak p}$ lying over $\pi(D)$ in $\mathcal{X}$ equals $e-1$, where $e$ is the ramification index of $D$ over $\pi(D)$; moreover, since $\mathbb{P}^1_{O_L}\to \Spec O_L$ has reduced (even smooth) fibres, $e$ is precisely the multiplicity of $D$ in $\mathcal Y_{\mathfrak p}$, which equals one. (Here we let $\mathcal{O}_D$ denote the local ring at the generic point of $D$, and $\mathcal{O}_{\pi(D)}$ the local ring at the generic point of $\pi(D)$.) Thus, each irreducible component of the vertical part of the branch locus of $\pi:\mathcal{Y}\to \mathbb{P}^1_{O_L}$ is of characteristic less than or equal to $\deg \pi$.
\end{proof}
\subsection{The modular lambda function}\label{modular_lambda_function}
The modular function $\lambda:\mathbb{H}\to \mathbb{C}$, where $\mathbb{H}$ denotes the complex upper half-plane, is defined as
\[\lambda(\tau) =\frac{\wp\left(\frac{1}{2}+\frac{\tau}{2}\right)- \wp\left(\frac{\tau}{2} \right)}{ \wp\left(\frac{\tau}{2}\right)-\wp\left(\frac{1}{2}\right)},\] where $\wp$
denotes the Weierstrass elliptic function for the lattice $\mathbb{Z}+\tau \mathbb{Z}$ in $\mathbb{C}$.
The function $\lambda$ is $\Gamma(2)$-invariant. More precisely, $\lambda$
factors through the $\Gamma(2)$-quotient map $\mathbb{H}\rightarrow Y(2)$ and
an analytic isomorphism $Y(2)\overset{\sim}{\longrightarrow} \mathbb{C}-\{0,1\}$.
Thus, the modular function $\lambda$ induces an analytic isomorphism $X(2) \to \mathbb{P}^1(\mathbb{C})$.
Let us note that $\lambda(i\infty) =0$, $\lambda(1) = \infty$ and $\lambda(0) =1$.
The restriction of $\lambda$ to the imaginary axis $\{iy: y>0\}$ in $\mathbb{H}$
induces a homeomorphism, also denoted by $\lambda$, from $\{iy: y>0\}$ to
the open interval $(0,1)$ in $\mathbb{R}$. In fact, for $\alpha$ in the open interval $(0,1)$,
\[\lambda^{-1}(\alpha) = i \frac{\mathrm{M}(1,\sqrt{1-\alpha})}{\mathrm{M}(1,\sqrt{\alpha})},\]
where $\mathrm{M}$ denotes the arithmetic--geometric mean.
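This inversion formula is easy to test numerically, since the arithmetic--geometric mean converges quadratically and $\lambda$ has a rapidly convergent product expansion. A quick sketch (Python; our own illustration, not part of the text, using the classical product formula $\lambda = 16q\prod_{n\geq 1}\bigl((1+q^{2n})/(1+q^{2n-1})\bigr)^8$ with $q=e^{\pi i\tau}$):

```python
from math import exp, pi, sqrt

def agm(a: float, b: float) -> float:
    """Arithmetic-geometric mean M(a, b) of two positive reals."""
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def lam_inverse_im(alpha: float) -> float:
    """Imaginary part y of lambda^{-1}(alpha) = i M(1, sqrt(1-alpha)) / M(1, sqrt(alpha))."""
    return agm(1.0, sqrt(1 - alpha)) / agm(1.0, sqrt(alpha))

def lam(y: float, terms: int = 50) -> float:
    """lambda(iy) via the classical product formula, q = e^{-pi y}."""
    q = exp(-pi * y)
    val = 16 * q
    for n in range(1, terms + 1):
        val *= ((1 + q ** (2 * n)) / (1 + q ** (2 * n - 1))) ** 8
    return val

y = lam_inverse_im(2.0 / 3.0)
assert abs(y - 0.8546) < 1e-3          # lambda^{-1}(2/3) is approximately 0.85i
assert abs(lam(y) - 2.0 / 3.0) < 1e-9  # round trip through lambda
assert abs(lam(1.0) - 0.5) < 1e-9      # lambda(i) = 1/2
```

The value $\lambda^{-1}(2/3)\approx 0.85i$ computed here is exactly the one used later in the proof of Theorem \ref{heightofpoint1}.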
\begin{lem}\label{Clambda} For $\tau$ in $\mathbb{H}$, let $q(\tau) = \exp(\pi i \tau)$ and let $\lambda(\tau) = \sum_{n=1}^\infty a_n q^{n}(\tau)$ be the $q$-expansion of $\lambda$ on $\mathbb{H}$.
Then, for any real number $4/5\leq y\leq 1$, \[ -\log \vert \sum_{n=1}^\infty na_n q^{n}(iy)\vert \leq 2. \] \end{lem}
\begin{proof}
Note that \[\sum_{n=1}^\infty na_n q^{n} = q \frac{d\lambda }{d q}. \] Since $-\log(3/20) \leq 2$, it suffices to show that $\vert q\, d\lambda/dq\vert \geq 3/20$. We will use the product formula for $\lambda$.
Namely, \[ \lambda(q) = 16q\prod_{n=1}^\infty f_n(q)^8, \quad f_n(q) := \frac{1+q^{2n}}{1+q^{2n-1}}.\] Write $ f^\prime_n(q) = df_n(q)/ dq$.
Then, \[q\frac{d\lambda}{d q} = \lambda \left(1 +8q\sum_{n=1}^\infty \frac{f_n^\prime (q)}{f_n(q)}\right) = \lambda\left(1+8q \sum_{n=1}^\infty \frac{d}{dq} (\log f_n(q))\right). \]
Note that, for any positive integer $n$
and $4/5\leq y \leq 1$, \[\left(\frac{d}{dq} \log f_n(q)\right)(iy)\leq 0. \] Moreover, since $\lambda(i) =1/2$, $\lambda(0) =1$, and $\lambda$ is monotone on the imaginary axis, the inequality $\lambda(iy) \geq 1/2$ holds for all $0< y\leq 1$. Also, for $4/5 \leq y \leq 1$, \[\left(-8q \sum_{n=1}^\infty \frac{d}{dq} \log f_n(q)\right)(iy) \leq \frac{7}{10} .\]
In fact,
\begin{eqnarray*}
\sum_{n=1}^\infty \frac{d}{dq} \left(\log f_n(q)\right) &=& \sum_{n=1}^\infty \frac{2nq^{2n-1}}{1+q^{2n}} - \sum_{n=1}^\infty \frac{(2n-1)q^{2n-2}}{1+q^{2n-1}}
\end{eqnarray*} It is straightforward to verify that, for all $4/5\leq y\leq 1$, the inequality \[ \sum_{n=1}^\infty \frac{2nq^{2n-1}(iy)}{1+q^{2n}(iy)} - \sum_{n=1}^\infty \frac{(2n-1)q^{2n-2}(iy)}{1+q^{2n-1}(iy)} \geq \frac{100}{109}\sum_{n=1}^\infty 2n q^{2n-1}(iy) - \sum_{n=1}^\infty (2n-1)q^{2n-2}(iy)\] holds. Finally, utilizing the classical formulas $\sum_{n=1}^\infty 2nq^{2n-1} = 2q/(1-q^2)^2$ and $\sum_{n=1}^\infty (2n-1)q^{2n-2} = (1+q^2)/(1-q^2)^2$ for geometric series, for all $4/5\leq y\leq 1$, \begin{eqnarray*}
q(iy)\sum_{n=1}^\infty \frac{d}{dq} \left(\log f_n(q)\right)(iy)
&\geq & q(iy)\left( \frac{200 q(iy)}{109(1-q^2(iy))^2} - \frac{1+q^2(iy)}{(1-q^2(iy))^2}\right)\geq -\frac{7}{80}.
\end{eqnarray*}
We conclude that \[\left\vert q \frac{d\lambda}{dq} \right\vert \geq \frac{1}{2}\left(1-\frac{7}{10}\right) = \frac{3}{20}.\qedhere\]
\end{proof}
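As a numerical sanity check on the constants in Lemma \ref{Clambda}, one can evaluate $\lambda$ through the classical product formula and differentiate numerically over the interval $4/5\leq y\leq 1$. The sketch below (Python; our own verification, not part of the original argument) confirms $\vert q\,d\lambda/dq\vert \geq 3/20$ on a fine grid:

```python
from math import exp, log, pi

def lam_of_q(q: float, terms: int = 60) -> float:
    """lambda as a function of q = exp(pi*i*tau), restricted to real q in (0,1),
    via the classical product formula 16q * prod ((1+q^{2n})/(1+q^{2n-1}))^8."""
    val = 16 * q
    for n in range(1, terms + 1):
        val *= ((1 + q ** (2 * n)) / (1 + q ** (2 * n - 1))) ** 8
    return val

def q_dlam_dq(q: float, h: float = 1e-7) -> float:
    """q * (dlambda/dq), computed by central finite differences."""
    return q * (lam_of_q(q + h) - lam_of_q(q - h)) / (2 * h)

# Scan 4/5 <= y <= 1, i.e. q = exp(-pi*y).
worst = min(abs(q_dlam_dq(exp(-pi * (0.8 + 0.2 * k / 200)))) for k in range(201))
assert worst >= 3 / 20   # the bound proved in the lemma
assert -log(worst) <= 2  # hence -log|sum n a_n q^n| <= 2
```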
\subsection{A non-Weierstrass point with bounded height}\label{heightboundlastsection}
The logarithmic height of a non-zero rational number $a=p/q$ is given by \[h_{\textrm{naive}}(a) = \log\max(\vert p\vert,\vert q\vert),\] where $p$ and $q$ are coprime integers and $q> 0$.
\begin{thm}\label{heightofpoint1}
Let $\pi_{\overline{\mathbb{Q}}}:Y\longrightarrow \mathbb{P}^1_{\overline{\mathbb{Q}}}$ be a finite morphism of degree $d$, where $Y/\overline{\mathbb{Q}}$ is a smooth projective connected curve of genus $g\geq 1$. Assume that $\pi_{\overline{\mathbb{Q}}}:Y\to \mathbb{P}^1_{\overline{\mathbb{Q}}}$ is unramified over $\mathbb{P}^1_{\overline{\mathbb{Q}}} - \{0,1,\infty\}$. Then, for any rational number $0< a \leq 2/3$ and any $b \in Y(\overline{\mathbb{Q}})$ lying over $a$, \[h(b) \leq 3 h_{\textrm{naive}}(a) d^2 +6378031 \frac{d^5}{g} .\]
\end{thm}
\begin{proof}
By Theorem \ref{model}, there exist a number field $K$ and a model $$\pi_K:Y\longrightarrow \mathbb{P}^1_K$$ for $\pi_{\overline{\mathbb{Q}}}:Y\longrightarrow \mathbb{P}^1_{\overline{\mathbb{Q}}}$ with the following three properties: the minimal resolution of singularities $\psi:\mathcal{Y}^\prime \longrightarrow \mathcal{Y}$ of the normalization $\pi:\mathcal{Y}\longrightarrow \mathbb{P}^1_{O_K}$ of $\mathbb{P}^1_{O_K}$ in the function field of $Y$ is semi-stable over $O_K$; each irreducible component of the vertical part of the branch locus of $\pi:\mathcal{Y} \to \mathbb{P}^1_{O_K}$ is of characteristic less than or equal to $\deg \pi$; and, after possibly enlarging $K$, every point in the fibre of $\pi_K$ over $a$ is $K$-rational. Also, the morphism $\pi:\mathcal{Y}\to \mathbb{P}^1_{O_K}$ is finite flat surjective.
Let $b\in Y(K)$ lie over $a$. Let $P^\prime$ be the closure of $b$ in $\mathcal{Y}^\prime$. By Lemma \ref{heightbigger}, the height of $b$ is ``minimal'' on the minimal regular model. That is, \[h(b) \leq \frac{(P^{\prime}, \omega_{\mathcal{Y}^\prime/O_K})}{[K:\mathbb{Q}]}.\] Recall the following notation from Section \ref{someupperbound}. Let $\mathcal{X} = \mathbb{P}^1_{O_K}$. Let $K_{\mathcal{X}} = -2 \cdot [\infty]$ be the divisor defined by the tautological section. Let~$K_{\mathcal{Y}^\prime}$ be the divisor on $\mathcal{Y}^\prime$ defined by $d(\pi_K)$ viewed as a rational section of $\omega_{\mathcal{Y}^\prime/O_K}$. Since the support of $K_{\mathcal{Y}^\prime}$ on the generic fibre is contained in $\pi_K^{-1}(\{0,1,\infty\})$, the section $P^\prime$ is not contained in the support of $K_{\mathcal{Y}^\prime}$. Therefore, we get that \[h(b) [K:\mathbb{Q}] \leq (P^{\prime}, \omega_{\mathcal{Y}^\prime/O_K})= (P^\prime,K_{\mathcal{Y}^\prime})_{\fin} + \sum_{\sigma:K\longrightarrow \mathbb{C}} (-\log \Vert d\pi_K\Vert_\sigma)(\sigma(b)).\]
Let $D$ be the branch locus of $\pi: \mathcal{Y} \longrightarrow \mathcal{X}$ endowed with the reduced closed subscheme structure. Write $D= 0+1+\infty+D_{\textrm{ver}}$, where $D_{\textrm{ver}}$ is the vertical part of $D$. Note that, in the notation of Section \ref{someupperbound}, we have that $D_1 = 0+1+\infty$. Thus, if $Q$ denotes the closure of $a$ in $\mathcal{X}$, by Proposition \ref{upperbound2}, we get \[( P^\prime,K_{\mathcal{Y}^\prime})_{\fin} \leq (\deg \pi)(0+1+\infty,Q)_{\fin} + 2(\deg \pi)^2 \log(\deg \pi)[K:\mathbb{Q}].\] Write $a=p/q$, where $p$ and $q$ are coprime positive integers with $q>p$. Note that \begin{eqnarray*} (0+1+\infty,Q)_{\fin} &=& [K:\mathbb{Q}]\log (p q(q-p)) \\ & \leq & 3\log(q)[K:\mathbb{Q}] \\ &=& 3h_{\textrm{naive}}(a)[K:\mathbb{Q}] .\end{eqnarray*} We conclude that \[\frac{(P^\prime,K_{\mathcal{Y}^\prime})_{\fin}}{[K:\mathbb{Q}]} \leq 3h_{\textrm{naive}}(a)(\deg \pi)^2 + 2( \deg \pi)^3.\]
It remains to estimate $\sum_{\sigma:K\longrightarrow \mathbb{C}} (-\log \Vert d\pi_K\Vert_\sigma)(\sigma(b))$. We will use our bounds for Arakelov-Green functions.
Let $\sigma:K\to \mathbb{C}$ be an embedding. The composition \[\xymatrix{ Y_{\sigma} \ar[rr]^{\pi_{\sigma}} & & \mathbb{P}^1(\mathbb{C}) \ar[rr]^{\lambda^{-1}} & & X(2) }\] is a Belyi cover (Definition \ref{belyidef}). By abuse of notation, let $\pi$ denote the composed morphism $Y_{\sigma}\longrightarrow X(2)$. Note that $\lambda^{-1}(2/3) \approx 0.85i$. In particular, $\Im(\lambda^{-1}(a)) \geq \Im(\lambda^{-1}(2/3)) > s_1$. (Recall that $s_1 = \sqrt{1/2}$.) Therefore, the element $\lambda^{-1}(a)$ lies in $\dot{B}_{\infty}^{s_1}$. Since $V_y^{r_1}\supset V_y\cap \pi^{-1}B_{\infty}^{s_1}$, there is a unique cusp $y$ of $Y_\sigma\to X(2)$ lying over $\infty$ such that $\sigma(b)$ lies in $ V_y^{r_1}$.
Note that $q =z_\infty\exp(-\pi/2)$. Therefore, since $\lambda = \sum_{j=1}^\infty a_j q^{j}$ on $\mathbb{H}$, \[\lambda \circ \pi= \sum_{j=1}^\infty a_j \exp(-j\pi/2) (z_\infty\circ \pi)^{j} = \sum_{j=1}^\infty a_j \exp(-j\pi/2) w_y^{e_yj} \] on $V_y$. Thus, by the chain rule, \[d(\lambda\circ \pi) = e_y\sum_{j=1}^\infty ja_{j} \exp(-j\pi/2) w_y^{e_yj-1} d(w_y). \] By the trivial inequality $e_y \geq 1$, the inequality $\vert w_y \vert \leq 1$ and Lemma \ref{Clambda},
\begin{eqnarray*}
-\log \Vert d(\lambda\circ \pi)\Vert_{\mathrm{Ar}}(\sigma(b)) &=& -\log \Vert dw_y\Vert_{\mathrm{Ar}}(\sigma(b)) - \log \vert e_y\sum_{j=1}^\infty ja_j \exp(-j\pi/2) w_y^{e_y j -1}(\sigma(b))\vert \\
& \leq & -\log \Vert dw_y\Vert_{\mathrm{Ar}}(\sigma(b)) - \log \vert \sum_{j=1}^\infty ja_j \exp(-j\pi/2) w_y^{e_y j}(\sigma(b))\vert \\ &\leq & -\log \Vert dw_y\Vert_{\mathrm{Ar}}(\sigma(b)) +2 . \end{eqnarray*} Thus, by Theorem \ref{MerklResult}, we conclude that \[ \frac{\sum_{\sigma:K\to \mathbb{C}} (-\log \Vert d\pi_K\Vert_\sigma)(\sigma(b))}{[K:\mathbb{Q}]} \leq 6378027\frac{(\deg \pi)^5}{g} +2. \qedhere\]
\end{proof}
\begin{thm}\label{heightboundlast}
Let $Y$ be a smooth projective connected curve over $\overline{\mathbb{Q}}$ of genus $g\geq 1$.
For any finite morphism $\pi:Y\to \mathbb{P}^1_{\overline{\mathbb{Q}}}$ ramified over exactly three points, there exists a non-Weierstrass point $b$ on $Y$ such that
\[h(b) \leq 6378033\frac{(\deg\pi)^5}{g}.\]
\end{thm}
\begin{proof} Define the sequence $(a_n)_{n=1}^\infty$ of rational numbers by $a_1 = 1/2$ and $a_n = n/(2n-1)$ for $n\geq 2$.
Note that $1/2 \leq a_n \leq 2/3$, and that $h_{\textrm{naive}}(a_n) \leq \log(2n)$. We may and do assume that $\pi:Y\to \mathbb{P}^1_{\overline{\mathbb{Q}}}$ is unramified over $\mathbb{P}^1_{\overline{\mathbb{Q}}}-\{0,1,\infty\}$.
By Theorem \ref{heightofpoint1}, for all $x\in \pi^{-1}(\{a_n\})$,
\begin{eqnarray}\label{heighty} h(x) \leq 3\log(2n) (\deg \pi)^2 + 6378031 \frac{(\deg \pi)^5}{g} . \end{eqnarray}
Since the number of Weierstrass points on $Y$ is at most $g^3-g$, there exists an integer $1\leq i \leq (\deg \pi)^2$
such that the fibre $\pi^{-1}(a_i)$ contains a non-Weierstrass point, say $b$. Indeed, the fibres $\pi^{-1}(a_1),\ldots,\pi^{-1}(a_{(\deg\pi)^2})$ are pairwise disjoint and each consists of exactly $\deg \pi$ points, as $\pi$ is \'etale over each $a_i$; together they contain $(\deg \pi)^3 > g^3-g$ points, since $g < \deg \pi$ by the Riemann--Hurwitz formula. Applying (\ref{heighty}) to $b$, we conclude that \[h(b) \leq 3\log\left(2 (\deg \pi)^2\right) (\deg \pi)^2 + 6378031 \frac{(\deg \pi)^5}{g} \leq 2 \frac{(\deg \pi)^5}{g} + 6378031 \frac{(\deg \pi)^5}{g}. \qedhere\]
\end{proof}
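The two elementary facts about the sequence $(a_n)$ used in the proof above, namely $1/2\leq a_n\leq 2/3$ and $h_{\textrm{naive}}(a_n)\leq \log(2n)$, are immediate since $\gcd(n,2n-1)=1$; a throwaway check (Python; our own illustration, not part of the text):

```python
from fractions import Fraction
from math import log

def h_naive(a: Fraction) -> float:
    """Logarithmic height of a nonzero rational p/q in lowest terms."""
    return log(max(abs(a.numerator), a.denominator))

# a_1 = 1/2 and a_n = n/(2n-1) for n >= 2, as in the proof above.
seq = [Fraction(1, 2)] + [Fraction(n, 2 * n - 1) for n in range(2, 500)]
assert all(Fraction(1, 2) <= a <= Fraction(2, 3) for a in seq)
assert all(h_naive(a) <= log(2 * (i + 1)) for i, a in enumerate(seq))
# The a_n are pairwise distinct, so their fibres are pairwise disjoint.
assert len(set(seq)) == len(seq)
```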
\subsection{}\label{proofofmaintheorem}
For a smooth projective connected curve $X$ over $\overline{\mathbb{Q}}$, we let $\deg_{B}(X)$ denote the Belyi degree of $X$. \\
\noindent \emph{Proof of Theorem \ref{mainthm}.}
The inequality $\Delta(X)\geq 0$ is trivial, the lower bound $e(X)\geq 0$ is due to Faltings (\cite[Theorem 5]{Faltings1}) and the lower bound $h_{\Fal}(X)\geq -g\log(2\pi)$ is due to Bost (Lemma \ref{bost}).
For the remaining bounds, we proceed as follows. By Theorem \ref{heightboundlast}, there exists a non-Weierstrass point $b$ in $X(\overline{\mathbb{Q}})$ such that \[h(b) \leq 6378033\frac{\deg_B(X)^5}{g}.\] By our bound on the Arakelov norm of the Wronskian differential in Proposition \ref{Wronskian2}, we have \[\log \Vert \mathrm{Wr}\Vert_{\mathrm{Ar}}(b) \leq 6378028 g \deg_B(X)^5.\] To obtain the theorem, we combine these bounds with Theorem \ref{upperboundinv}. \qed
\section{Computing coefficients of modular forms}\label{modularforms}
Let $\Gamma\subset \mathrm{SL}_2(\mathbb{Z})$ be a congruence subgroup, and let $k$ be a positive integer. A modular form $f$ of weight $k$ for the group $\Gamma$ is determined by $k$ and its $q$-expansion coefficients $a_m(f)$ for $0 \leq m \leq k \cdot [\mathrm{SL}_2(\mathbb{Z}):\{\pm 1\} \Gamma]/12$. In this section we follow \cite{Bruin2} and give an algorithmic application of the main result of this paper. More precisely, the goal of this section is to complete the proof of the following theorem. The proof is given at the end of this section.
\begin{thm} {\bf (Couveignes--Edixhoven--Bruin)} \label{CoEdBr} Assume the Riemann hypothesis for $\zeta$-functions of number fields. Then there exists a probabilistic algorithm that, given
\begin{itemize}
\item a positive integer $k$,
\item a number field $K$,
\item a congruence subgroup $\Gamma \subset \mathrm{SL}_2(\mathbb{Z})$,
\item a modular form $f$ of weight $k$ for $\Gamma$ over $K$, and
\item a positive integer $m$ in factored form,
\end{itemize} computes $a_m(f)$, and whose expected running time is bounded by a polynomial in the length of the input.
\end{thm}
\begin{opm} We should make precise how the number field $K$, the congruence subgroup $\Gamma$ and the modular form $f$ should be given to the algorithm, and how the algorithm returns the coefficient $a_m(f)$. We should also explain what ``probabilistic'' means in this context. For the sake of brevity, we refer the reader to \cite[p. 20]{Bruin2} for the precise definitions. Following the definitions there, the above theorem becomes a precise statement.
\end{opm}
\begin{opm}
The algorithm in Theorem \ref{CoEdBr} is due to Bruin, Couveignes and Edixhoven. Assuming the Riemann hypothesis for $\zeta$-functions
of number fields, it was shown that the algorithm runs in polynomial time for \textbf{certain} congruence subgroups; see \cite[Theorem 1.1]{Bruin2}. Bruin did not have enough information about the semi-stable bad reduction of the modular curve $X_1(n)$ at primes $p$ such that $p^2$ divides $n$ to show that the algorithm runs in polynomial time.
Nevertheless, our bounds on the discriminant of a curve can be used to show that the algorithm runs in polynomial time for \textbf{all} congruence subgroups.
\end{opm}
\noindent \emph{Proof of Theorem \ref{CoEdBr}.} We follow Bruin's strategy \cite[Chapter V.1, p. 165]{Bruin}. In fact, Bruin notes that,
to assure that the algorithm runs in polynomial time for all congruence subgroups, it suffices to show that, for all positive integers $n$, the discriminant $\Delta(X_1(n))$ is polynomial in $n$ (or equivalently the genus of $X_1(n)$). The latter follows from Corollary \ref{modferwol}. In fact, the Belyi degree of $X_1(n)$ is at most the index of $\Gamma_1(n)$ in $\mathrm{SL}_2(\mathbb{Z})$. Since $$[\mathrm{SL}_2(\mathbb{Z}):\Gamma_1(n)] = n^2\prod_{p\vert n} (1-1/p^2) \leq n^2 ,$$ we conclude that $\Delta(X_1(n)) \leq 5\cdot 10^8 n^{14} $. \qed
\section{Bounds for heights of covers of curves}\label{conjecture}
Let $X$ be a smooth projective connected curve over $\overline{\mathbb{Q}}$. We prove that Arakelov invariants of (possibly ramified) covers of $X$ are polynomial in the degree. Let us be more precise.
\begin{thm}\label{mainthm2} Let $X$ be a smooth projective connected curve over $\overline{\mathbb{Q}}$, let $U$ be a non-empty open subscheme of $X$, let $B_f\subset \mathbb{P}^1(\overline{\mathbb{Q}})$ be a finite set, and let $f:X\to \mathbb{P}^1_{\overline{\mathbb{Q}}}$ be a finite morphism unramified over $\mathbb{P}^1_{\overline{\mathbb{Q}}} - B_f$. Define $B:=f(X-U)\cup B_f$. Let $N$ be the number of elements in the orbit of $B$ under the action of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ and let $H_B$ be the height of $B$ as defined in Section \ref{coversofcurves}. Define
\[ c_B := (4NH_B)^{45N^3 2^{N-2}N!}.\]
Then, for any finite morphism $\pi:Y\to X$ \'etale over $U$, where $Y$ is a smooth projective connected curve over $\overline{\mathbb{Q}}$ of genus $g\geq 1$,
\[ \begin{array}{ccccc} -\log(2\pi)g & \leq & h_{\Fal}(Y) & \leq & 13\cdot 10^6 g c_B(\deg f)^5(\deg \pi)^5 \\
0 &\leq & e(Y) & \leq & 3\cdot 10^7(g-1)c_B (\deg f)^5(\deg \pi)^5 \\
0 &\leq & \Delta(Y) & \leq & 5\cdot 10^8 g^2c_B (\deg f)^5(\deg \pi)^5 \\
-10^8 g^2 c_B (\deg f)^5(\deg \pi)^5 & \leq & \delta_{\Fal}(Y) & \leq & 2\cdot 10^8 g c_B (\deg f)^5 (\deg \pi)^5.
\end{array} \]
\end{thm}
\begin{proof} We apply Khadjavi's effective version of Belyi's theorem. More precisely, by \cite[Theorem 1.1.c]{Khadjavi}, there exists a finite morphism $R:\mathbb{P}^1_{\overline{\mathbb{Q}}}\to \mathbb{P}^1_{\overline{\mathbb{Q}}}$ \'etale over $\mathbb{P}^1_{\overline{\mathbb{Q}}} - \{0,1,\infty\}$ such that
$R(B) \subset \{0,1,\infty\}$ and $$\deg R \leq (4NH_B)^{9N^3 2^{N-2}N!}.$$ Note that the composed morphism \[\xymatrix{R\circ f\circ \pi:Y \ar[rr]^\pi & & X \ar[r]^f & \mathbb{P}^1_{\overline{\mathbb{Q}}} \ar[r]^R & \mathbb{P}^1_{\overline{\mathbb{Q}}}}\] is unramified over $\mathbb{P}^1_{\overline{\mathbb{Q}}} -\{0,1,\infty\}$. We conclude by applying Theorem \ref{mainthm} to the composition $R\circ f \circ \pi$.
\end{proof}
Note that Theorem \ref{mainthm2} implies Theorem \ref{mainthmintro} (with $X=\mathbb{P}^1_{\overline{\mathbb{Q}}}$, $B_f$ the empty set, and $f:X\to \mathbb{P}^1_{\overline{\mathbb{Q}}}$ the identity morphism).
In the proof of Theorem \ref{mainthm2}, we used Khadjavi's effective version of Belyi's theorem. Khadjavi's bounds are not optimal; see \cite[Lemme 4.1]{Litcanu} and \cite[Theorem 1.1.b]{Khadjavi} for better bounds when $B$ is contained in $\mathbb{P}^1(\mathbb{Q})$. Actually, the use of Belyi's theorem makes the dependence on the branch locus enormous in Theorem \ref{mainthm2}. It should be possible to avoid the use of Belyi's theorem and improve the dependence on the branch locus in Theorem \ref{mainthm2}. This is not necessary for our present purposes.
\begin{opm} Let us mention the quantitative Riemann existence theorem due to Bilu and Strambi; see \cite{BiSt}.
Bilu and Strambi give explicit bounds for the naive logarithmic height of a cover of $\mathbb{P}^1_{\overline{\mathbb{Q}}}$ with fixed branch locus. Although their bound on the naive height is exponential in the degree, the dependence on the height of the branch locus in their result is logarithmic.
\end{opm}
Let us show that Theorem \ref{mainthmintro} implies the following:
\begin{thm} {\bf (\cite[Conjecture 5.1]{EdJoSc})}\label{EdjS}
Let $U\subset \mathbb{P}^1_\mathbb{Z}$ be a non-empty open subscheme. Then there are integers $a$ and $b$
with the following property. For any prime number $\ell$, and for any connected finite \'etale cover $\pi:V\to U_{\mathbb{Z}[1/\ell]}$, the Faltings height of the normalization of $\mathbb{P}^1_\mathbb{Q}$ in the function field of $V$ is bounded by $(\deg \pi)^a\ell^b$. \end{thm}
\begin{proof} We claim that this conjecture holds with $b=0$ and an integer $a$ depending only on the generic fibre $U_\mathbb{Q}$ of $U$. In fact, let $\pi:Y\to \mathbb{P}^1_\mathbb{Q}$ denote the normalization of $\mathbb{P}^1_\mathbb{Q}$ in the function field of $V$. Note that $\pi$ is \'etale over $U_\mathbb{Q}$. Let $B=\mathbb{P}^1_\mathbb{Q} - U_\mathbb{Q}\subset \mathbb{P}^1(\overline{\mathbb{Q}})$ and let $N$ be the number of elements in the orbit of $B$ under the action of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$. By Theorem \ref{mainthmintro}, \[h_{\Fal}(Y):=\sum_{X\subset Y_{\overline{\mathbb{Q}}}} h_{\Fal}(X) \leq (\deg \pi)^a, \] where the sum runs over all connected components $X$ of $Y_{\overline{\mathbb{Q}}}:=Y\times_\mathbb{Q} \overline{\mathbb{Q}}$, and \[ a = 6+\log \left(13\cdot 10^6 N (4NH_B)^{45N^3 2^{N-2}N!}\right).\] Here we used that $g\leq N\deg\pi$ and \[13\cdot 10^6 g(4NH_B)^{45N^3 2^{N-2}N!} \leq (\deg \pi)^{1+\log\left(13\cdot 10^6 N(4NH_B)^{45N^3 2^{N-2}N!}\right)}.\] This concludes the proof.
\end{proof}
Let us briefly mention the context in which these results will hopefully be applied. Let $S$ be a smooth projective geometrically connected surface over $\mathbb{Q}$. As is explained in Section 5 of \cite{EdJoSc}, it seems reasonable to suspect that there exists an algorithm which, on input of a prime $\ell$, computes the \'etale cohomology groups $\mathrm{H}^i(S_{\overline{\mathbb{Q}},\textrm{et}},\mathbb{F}_\ell)$ with their $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$-action in time \textbf{polynomial} in $\ell$ for all $i=0, \ldots,4$.
\section*{Appendix: Merkl's method of bounding Green functions}
\ifamslatex
\centerline{by Peter Bruin}
\else
\leftline{by Peter Bruin}
\fi
\medbreak
\noindent
The goal of this appendix is to prove Theorem~\ref{Merkl}. Let $X$ be
a compact connected Riemann surface, and let $\mu$ be a smooth
non-negative $(1,1)$-form on~$X$ such that $\int_X\mu=1$. Let $*$
denote the star operator on 1-forms on~$X$, given with respect to a
holomorphic coordinate $z=x+iy$ by
$$
*dx=dy,\quad *dy=-dx,
$$
or equivalently
$$
*dz=-i\,dz,\quad *d\bar z=i\,d\bar z.
$$
The \emph{Green function\/} for~$\mu$ is the unique smooth function
$$
\gr_\mu\colon X\times X\setminus\Delta\to\mathbb{R},
$$
with a logarithmic singularity along the diagonal $\Delta$, such that for fixed
$w\in X$ we have, in a distributional sense,
$$
{1\over2\pi}d*d\gr_\mu(z,w)=\delta_w(z)-\mu(z)\quad\hbox{and}\quad
\int_{z\in X\setminus\{w\}}\gr_\mu(z,w)\mu(z)=0.
$$
For all $a,b\in X$, we write $g_{a,b}$ for the unique smooth function
on $X\setminus\{a,b\}$ satisfying
\begin{equation}
d*dg_{a,b}=\delta_a-\delta_b\quad\hbox{and}\quad
\int_{X\setminus\{a,b\}} g_{a,b}\mu=0.
\label{eq:def-gab}
\end{equation}
Then for all $a\in X$, we consider the function $g_{a,\mu}$ on
$X\setminus\{a\}$ defined by
\begin{equation}
g_{a,\mu}(x)=\int_{b\in X\setminus\{x\}}g_{a,b}(x)\mu(b).
\label{eq:def-gamu}
\end{equation}
A straightforward computation using Fubini's theorem shows that this
function satisfies
$$
d*dg_{a,\mu}=\delta_a-\mu\quad\hbox{and}\quad
\int_{X\setminus\{a\}} g_{a,\mu}\mu=0.
$$
This implies that $2\pi g_{a,\mu}(b)=\gr_\mu(a,b)$, where $\gr_\mu$ is
the Green function for~$\mu$ defined above.
We begin by restricting our attention to one of the charts of our
atlas, say $(U,z)$. By assumption, $z$ is an isomorphism from $U$ to
the open unit disc in ${\bf C}$. Let $r_2$ and~$r_4$ be real numbers
with
$$
r_1<r_2<r_4<1,
$$
and write
$$
r_3=(r_2+r_4)/2.
$$
We choose a smooth function
$$
\tilde\chi\colon{\bf R}_{\ge0}\to[0,1]
$$
such that $\tilde\chi(r)=1$ for $r\le r_2$ and $\tilde\chi(r)=0$ for
$r\ge r_4$.
We also define a smooth function $\chi$ on~$X$ by putting
$$
\chi(x)=\tilde\chi(|z(x)|)\quad\hbox{for }x\in U
$$
and extending by~0 outside~$U$. Furthermore, we put
$$
\chi^{\rm c}=1-\chi.
$$
For $0<r<1$, we write
$$
U^r=\{x\in U \ : \ |z(x)|<r\}.
$$
For all $a,b\in U^{r_1}$, the function
$$
f_{a,b}={1\over2\pi}\log\left|{(z-z(a))(\overline{z(a)}z-r_4^2)\over
(z-z(b))(\overline{z(b)}z-r_4^2)}\right|
$$
is defined on $U\setminus\{a,b\}$. Moreover, $f_{a,b}$ is harmonic on
$U\setminus\{a,b\}$, since the logarithm of the modulus of a
holomorphic function is harmonic. We extend $\chi^{\rm c}f_{a,b}$ to
a smooth function on $U$ by defining it to be zero at $a$~and~$b$.
We consider the open annulus
$$
A=U^{r_4}\setminus\overline{U^{r_2}}.
$$
Let $(\rho,\phi)$ be polar coordinates on $A$ such that
$z=\rho\exp(i\phi)$. A straightforward calculation shows that in
these coordinates the star operator is given by
$$
*d\rho=\rho\,d\phi,\quad*d\phi=-{d\rho\over\rho}.
$$
We consider the inner product
$$
\langle\alpha,\beta\rangle_A=\int_A\alpha\wedge *\beta
$$
on the ${\bf R}$-vector space of square-integrable real-valued 1-forms
on~$A$. Furthermore, we write
$$
\|\alpha\|_A^2=\langle\alpha,\alpha\rangle_A.
$$
\begin{lem}
\label{maxmin}
For every real harmonic function $g$ on $A$ such that $\|dg\|_A$
exists,
$$
\max_{|z|=r_3}g-\min_{|z|=r_3}g\le{2\sqrt{\pi}\over r_4-r_2}\|dg\|_A.
$$
\end{lem}
\begin{proof}
By the formula for the star operator in polar coordinates,
\begin{align*}
dg\wedge*dg&=(\partial_\rho g\,d\rho+\partial_\phi g\,d\phi)\wedge
(\rho\partial_\rho g\,d\phi-\rho^{-1}\partial_\phi g\,d\rho)\\
&=\bigl((\partial_\rho g)^2+(\rho^{-1}\partial_\phi g)^2\bigr)
\rho\,d\rho\,d\phi.
\end{align*}
Using the mean value theorem, we can bound the left-hand side of the
inequality we need to prove by
\begin{align*}
\max_{|z|=r_3}g-\min_{|z|=r_3}g&\le
\pi\max_{|z|=r_3}|\partial_\phi g|\\
&=\pi|\partial_\phi g|(x)
\quad\hbox{for some $x$ with }|z(x)|=r_3.
\end{align*}
We write $R=(r_4-r_2)/2$, and we consider the open disc
$$
D=\bigl\{z\in U\bigm| |z-z(x)|<R\bigr\}
$$
of radius $R$ around $x$; this lies in~$A$ because $r_3=(r_4+r_2)/2$.
Let $(\sigma,\psi)$ be polar coordinates on~$D$ such that
$z-z(x)=\sigma\exp(i\psi)$. Because $g$ is harmonic, so is
$\partial_\phi g$, and Gauss's mean value theorem implies that
$$
\partial_\phi g(x)={1\over\pi R^2}
\int_D\partial_\phi g\,\sigma\,d\sigma\,d\psi.
$$
On the space of real continuous functions on $D$, we have the inner
product
$$
(h_1,h_2)\mapsto\int_D h_1h_2\,\sigma\,d\sigma\,d\psi.
$$
Applying the Cauchy--Schwarz inequality with
$h_1=\rho^{-1}\partial_\phi g$ and $h_2=\rho$ gives
\begin{align*}
\left|\int_D\partial_\phi g\,\sigma\,d\sigma\,d\psi\right|&\le
\left[\int_D\left(\rho^{-1}\partial_\phi g\right)^2
\sigma\,d\sigma\,d\psi\right]^{1/2}\cdot
\left[\int_D\rho^2\sigma\,d\sigma\,d\psi\right]^{1/2}\\
&\le\left[\int_A(\rho^{-1}\partial_\phi g)^2
\rho\,d\rho\,d\phi\right]^{1/2}\cdot
\left[\int_D\sigma\,d\sigma\,d\psi\right]^{1/2}\\
&\le\left[\int_A dg\wedge *dg\right]^{1/2}[\pi R^2]^{1/2}\\
&=\sqrt{\pi}\,R\Vert dg\Vert_A.
\end{align*}
Combining the above results finishes the proof.
\end{proof}
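As a quick numerical sanity check (not part of the proof), one can evaluate both sides of the inequality for the harmonic function $g=\mathrm{Re}\,z$, for which $\max g-\min g$ on the circle $|z|=r_3$ and $\|dg\|_A$ have closed forms; the sample radii below are our own choice.

```python
import math

def maxmin_check(r2, r4):
    """Check Lemma `maxmin` for the harmonic function g = Re z.

    On |z| = r3 = (r2+r4)/2, max g - min g = 2*r3 = r2 + r4, and since
    |grad g| = 1 everywhere, ||dg||_A^2 is the area pi*(r4^2 - r2^2) of A.
    """
    lhs = r2 + r4
    dg_norm = math.sqrt(math.pi * (r4 ** 2 - r2 ** 2))
    rhs = 2.0 * math.sqrt(math.pi) / (r4 - r2) * dg_norm
    return lhs, rhs

lhs, rhs = maxmin_check(0.7, 0.9)
assert lhs <= rhs
```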
\begin{lem}
\label{tildegab}
For all $a,b\in U^{r_1}$, there exists a smooth function $\tilde
g_{a,b}$ on~$X$ such that
$$
d*d\tilde g_{a,b}=\begin{cases}
d*d(\chi^{\rm c}f_{a,b})& \text{on }U\\
0&\text{on }X\setminus\overline{U}.\end{cases}
$$
It is unique up to an additive constant and fulfills
$$
\|d\tilde g_{a,b}\|_A\le\|d(\chi^{\rm c}f_{a,b})\|_A.
$$
\end{lem}
\begin{proof}
First we note that the expression on the right-hand side of the
equality defines a smooth 2-form on~$X$, because $d*d(\chi^{\rm
c}f_{a,b})(z)$ vanishes for $|z|>r_4$; this follows from the choice
of~$\chi$ and the fact that $f_{a,b}$ is harmonic for $|z|>r_1$.
Since moreover $\chi^{\rm c}f_{a,b}=0$ on $U^{r_2}$, we see that the
support of this 2-form is contained in the closed annulus $\bar A$.
By Stokes's theorem,
$$
\int_{\bar A} d*d(\chi^{\rm c}f_{a,b})
=\int_{\partial\bar A}*d(\chi^{\rm c}f_{a,b}).
$$
Notice that $f_{a,b}$ is invariant under the substitution $z\mapsto
r_4^2/\bar z$; this implies that $\partial_\rho f_{a,b}(z)=0$ for
$|z|=r_4$. Furthermore, $\chi^{\rm c}(z)=1$ and $d\chi^{\rm c}(z)=0$
for $|z|=r_4$, so we see that
$$
d(\chi^{\rm c}f_{a,b})(z)=\chi^{\rm c}(z)df_{a,b}(z)
=(\partial_\phi f_{a,b}\,d\phi)(z)\quad\hbox{if }|z|=r_4.
$$
Likewise, since $\chi^{\rm c}=0$ and $d\chi^{\rm c}(z)=0$ for
$|z|=r_2$,
$$
d(\chi^{\rm c}f_{a,b})(z)=\chi^{\rm c}(z)df_{a,b}(z)=0
\quad\hbox{if }|z|=r_2.
$$
This means that for $z$ on the boundary of $\bar A$,
$$
*d(\chi^{\rm c}f_{a,b})(z)=\begin{cases}
-(\partial_\phi f_{a,b}\,d\rho)(z)& \text{if }|z|=r_4\\
0& \text{if }|z|=r_2.\end{cases}
$$
In particular, $*d(\chi^{\rm c}f_{a,b})$ vanishes when restricted to
the submanifold $\partial\bar A$ of~$X$. From this we conclude that
$$
\int_{\bar A}d*d(\chi^{\rm c}f_{a,b})=
\int_{\partial\bar A}*d(\chi^{\rm c}f_{a,b})=0.
$$
This implies that a function $\tilde g_{a,b}$ with the required
property exists.
\bgroup
To prove the inequality $\Vert d\tilde g_{a,b}\Vert_A\le\Vert d(\chi^{\rm c}f_{a,b})\Vert_A$, we
note that
\begin{align*}
\|d(\chi^{\rm c}f_{a,b})\|_A^2&=\|d\tilde g_{a,b}+d(\chi^{\rm c}f_{a,b}-\tilde g_{a,b})\|_A^2\\
&=\|d\tilde g_{a,b}\|_A^2+2\langle d\tilde g_{a,b},d(\chi^{\rm c}f_{a,b}-\tilde g_{a,b})\rangle_A+\|d(\chi^{\rm c}f_{a,b}-\tilde g_{a,b})\|_A^2.
\end{align*}
The last term is clearly non-negative. Furthermore, integration by
parts using Stokes's theorem gives
\begin{align*}
\langle d\tilde g_{a,b},d(\chi^{\rm c}f_{a,b}-\tilde g_{a,b})\rangle_A&=\int_A d\tilde g_{a,b}\wedge *d(\chi^{\rm c}f_{a,b}-\tilde g_{a,b})\\
&=\int_{\partial\bar A}\tilde g_{a,b}\,{*d(\chi^{\rm c}f_{a,b}-\tilde g_{a,b})}-\int_A\tilde g_{a,b}\,d*d(\chi^{\rm c}f_{a,b}-\tilde g_{a,b}).
\end{align*}
The second term vanishes because $d*d\tilde g_{a,b}=d*d(\chi^{\rm c}f_{a,b})$ on~$A$. From our
earlier expression for $*d(\chi^{\rm c}f_{a,b})(z)$ on the boundary
of~$A$, we see that
$$
\int_{\partial\bar A}\tilde g_{a,b}\,{*d(\chi^{\rm c}f_{a,b})}=0.
$$
Finally, because $\partial\bar A$ is also the (negatively oriented)
boundary of $X\setminus A$ and because $d*d\tilde g_{a,b}=0$ on $X\setminus A$,
$$
-\int_{\partial\bar A}\tilde g_{a,b}\,{*d\tilde g_{a,b}}=\int_{X\setminus A}d\tilde g_{a,b}\wedge *d\tilde g_{a,b}
\ge 0.
$$
Thus we have
$$
\langle d\tilde g_{a,b},d(\chi^{\rm c}f_{a,b}-\tilde g_{a,b})\rangle_A\ge 0,
$$
which proves the inequality.\egroup
\end{proof}
\begin{lem}
\label{maxmintildegab}
Let $\lambda=\max_{r_2\le r\le r_4}|\tilde\chi'(r)|$. Then
$$
\max_X\tilde g_{a,b}-\min_X\tilde g_{a,b}\le c_3(r_1,r_2,r_4,\lambda),
$$
where
\begin{align*}
c_3(r_1,r_2,r_4,\lambda)&=4\sqrt{\frac{r_4+r_2}{r_4-r_2}}\left(
\frac{\lambda}{2}\log\frac{(r_1+r_4)^2}{(r_2-r_1)(r_4-r_1)}
+{1\over r_2-r_1}+{r_1\over r_4(r_4-r_1)}\right)\\
&\quad+{2\over\pi}\log\frac{(r_1+r_4)^2}{(r_2-r_1)(r_4-r_1)}.
\end{align*}
\end{lem}
\begin{proof}
First, we note that
\begin{align*}
\max_X\tilde g_{a,b}&=\max\biggl\{\sup_{U^{r_3}}\tilde g_{a,b},
\sup_{X\setminus U^{r_3}}\tilde g_{a,b}\biggr\},\\
\min_X\tilde g_{a,b}&=\min\biggl\{\inf_{U^{r_3}}\tilde g_{a,b},
\inf_{X\setminus U^{r_3}}\tilde g_{a,b}\biggr\}.
\end{align*}
Furthermore,
\begin{align*}
\sup_{U^{r_3}}\tilde g_{a,b}&\le\sup_{U^{r_3}}(\tilde g_{a,b}
-\chi^{\rm c}f_{a,b})+\sup_{U^{r_3}}\chi^{\rm c}f_{a,b}\\
&=\max_{|z|=r_3}(\tilde g_{a,b}-\chi^{\rm c}f_{a,b})
+\max_{r_2\le|z|\le r_3}\chi^{\rm c}f_{a,b}
\end{align*}
because of the maximum principle ($\tilde g_{a,b}-\chi^{\rm c}f_{a,b}$
is harmonic on $U$) and because $\chi^{\rm c}(z)=0$ for $|z|<r_2$. In
the same way, we find
$$
\inf_{U^{r_3}}\tilde g_{a,b}\ge
\min_{|z|=r_3}(\tilde g_{a,b}-\chi^{\rm c}f_{a,b})
+\min_{r_2\le|z|\le r_3}\chi^{\rm c}f_{a,b}.
$$
We extend $\chi f_{a,b}$ to a smooth function on $X\setminus\{a,b\}$
by putting $(\chi f_{a,b})(x)=0$ for $x\not\in U$. Then $\tilde
g_{a,b}+\chi f_{a,b}$ is harmonic on $X\setminus\{a,b\}$, and the same
method as above gives us
\begin{align*}
\sup_{X\setminus U^{r_3}}\tilde g_{a,b}&\le
\max_{|z|=r_3}(\tilde g_{a,b}+\chi f_{a,b})
-\min_{r_3\le|z|\le r_4}\chi f_{a,b}\\
&\le\max_{|z|=r_3}(\tilde g_{a,b}-\chi^{\rm c}f_{a,b})
+\max_{|z|=r_3}f_{a,b}-\min_{r_3\le|z|\le r_4}\chi f_{a,b}
\end{align*}
and
$$
\inf_{X\setminus U^{r_3}}\tilde g_{a,b}\ge
\min_{|z|=r_3}(\tilde g_{a,b}-\chi^{\rm c}f_{a,b})
+\min_{|z|=r_3}f_{a,b}-\max_{r_3\le|z|\le r_4}\chi f_{a,b}.
$$
These bounds imply that
\begin{align*}
\max_X\tilde g_{a,b}&\le
\max_{|z|=r_3}(\tilde g_{a,b}-\chi^{\rm c}f_{a,b})
+2\sup_A|f_{a,b}|,\\
\min_X\tilde g_{a,b}&\ge
\min_{|z|=r_3}(\tilde g_{a,b}-\chi^{\rm c}f_{a,b})
-2\sup_A|f_{a,b}|,
\end{align*}
and hence
$$
\max_X\tilde g_{a,b}-\min_X\tilde g_{a,b}\le
\max_{|z|=r_3}(\tilde g_{a,b}-\chi^{\rm c}f_{a,b})
-\min_{|z|=r_3}(\tilde g_{a,b}-\chi^{\rm c}f_{a,b})
+4\sup_A|f_{a,b}|.
$$
By Lemma~\ref{maxmin} and Lemma~\ref{tildegab},
\begin{align*}
\max_{|z|=r_3}(\tilde g_{a,b}-\chi^{\rm c}f_{a,b})
-\min_{|z|=r_3}(\tilde g_{a,b}-\chi^{\rm c}f_{a,b})
&\le{2\sqrt{\pi}\over r_4-r_2}
\|d\tilde g_{a,b}-d(\chi^{\rm c}f_{a,b})\|_A\\
&\le{2\sqrt{\pi}\over r_4-r_2}
(\|d\tilde g_{a,b}\|_A+\|d(\chi^{\rm c}f_{a,b})\|_A)\\
&\le{4\sqrt{\pi}\over r_4-r_2}\|d(\chi^{\rm c}f_{a,b})\|_A.
\end{align*}
We have
\begin{align*}
\|d(\chi^{\rm c}f_{a,b})\|_A&\le\|d(\chi^{\rm c})f_{a,b}\|_A
+\|\chi^{\rm c}df_{a,b}\|_A\\
&\le\|\tilde\chi'(\rho)f_{a,b}\,d\rho\|_A
+\|df_{a,b}\|_A\\
&\le\lambda\|d\rho\|_A\sup_A|f_{a,b}|+\|df_{a,b}\|_A.
\end{align*}
Now
\begin{align*}
\|d\rho\|_A^2&=\int_A d\rho\wedge *d\rho\\
&=\int_A\rho\,d\rho\wedge d\phi\\
&=\pi(r_4^2-r_2^2).
\end{align*}
Furthermore, for all $a,b\in U^{r_1}$ we have
$$
|f_{a,b}(z)|={1\over2\pi}\left|
\log|z-z(a)|+\log|\overline{z(a)}z-r_4^2|
-\log|z-z(b)|-\log|\overline{z(b)}z-r_4^2|\right|.
$$
For all $a\in U^{r_1}$ and all $z\in A$, the triangle inequality gives
$$
r_2-r_1<|z-z(a)|<r_4+r_1\quad\hbox{and}\quad
r_4(r_4-r_1)<|\overline{z(a)}z-r_4^2|<r_4(r_4+r_1).
$$
From this we deduce that for all $a,b\in U^{r_1}$,
$$
\sup_A|f_{a,b}|\le {1\over2\pi}\log\frac{(r_1+r_4)^2}{(r_2-r_1)(r_4-r_1)}.
$$
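As an aside, the bound on $\sup_A|f_{a,b}|$ is easy to spot-check numerically from the displayed formula for $f_{a,b}$; the radii and the random sampling below are illustrative choices, not part of the argument.

```python
import cmath
import math
import random

def f_ab(z, a, b, r4):
    """f_{a,b}(z) as in the displayed formula (signed, before taking | |)."""
    return (math.log(abs(z - a)) + math.log(abs(a.conjugate() * z - r4 ** 2))
            - math.log(abs(z - b)) - math.log(abs(b.conjugate() * z - r4 ** 2))) / (2 * math.pi)

def sup_bound(r1, r2, r4):
    """The right-hand side of the bound on sup_A |f_{a,b}|."""
    return math.log((r1 + r4) ** 2 / ((r2 - r1) * (r4 - r1))) / (2 * math.pi)

random.seed(0)
r1, r2, r4 = 0.5, 0.7, 0.9
for _ in range(200):
    # Sample a, b in the disc of radius r1 and z in the annulus A.
    a = cmath.rect(r1 * random.random(), 2 * math.pi * random.random())
    b = cmath.rect(r1 * random.random(), 2 * math.pi * random.random())
    z = cmath.rect(r2 + (r4 - r2) * random.random(), 2 * math.pi * random.random())
    assert abs(f_ab(z, a, b, r4)) <= sup_bound(r1, r2, r4)
```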
Finally we bound the quantity $\|df_{a,b}\|_A$. Because $f_{a,b}$ is
a real function, we have
$$
df_{a,b}=\partial_z f_{a,b}\,dz+\overline{\partial_z f_{a,b}}\,d\bar z.
$$
Therefore,
\begin{align*}
\|df_{a,b}\|_A^2&=\int_A df_{a,b}\wedge *df_{a,b}\\
&=2i\int_A|\partial_z f_{a,b}|^2\,dz\wedge d\bar z\\
&=4\int_0^{2\pi}\!\!\!\int_{r_2}^{r_4}
|\partial_z f_{a,b}|^2\,\rho\,d\rho\,d\phi\\
&\le 4\pi(r_4^2-r_2^2)\sup_A|\partial_z f_{a,b}|^2.
\end{align*}
A straightforward computation gives
$$
\partial_z f_{a,b}={1\over4\pi}\left({1\over z-z(a)}
+{\overline{z(a)}\over\overline{z(a)}z-r_4^2}-{1\over z-z(b)}
-{\overline{z(b)}\over\overline{z(b)}z-r_4^2}\right).
$$
Our previous bounds for $|z-z(a)|$ and $|\overline{z(a)}z-r_4^2|$ yield
$$
\sup_A|\partial_z f_{a,b}|\le{1\over2\pi}
\left({1\over r_2-r_1}+{r_1\over r_4(r_4-r_1)}\right).
$$
From this we obtain
$$
\|df_{a,b}\|_A\le\sqrt{r_4^2-r_2^2\over\pi}
\left({1\over r_2-r_1}+{r_1\over r_4(r_4-r_1)}\right).
$$
Combining the bounds for $\sup_A|f_{a,b}|$ and $\|df_{a,b}\|_A$ yields
the lemma.
\end{proof}
From now on we impose the normalisation condition
$$
\int_X\tilde g_{a,b}\mu=0
$$
on~$\tilde g_{a,b}$ for all $a,b\in U^{r_1}$; this can be attained by
adding a suitable constant to~$\tilde g_{a,b}$. Then for all $a,b\in
U^{r_1}$, the function $g_{a,b}$ defined earlier is equal to
\begin{equation}
g_{a,b}=\tilde g_{a,b}+\chi f_{a,b}-\int_X\chi f_{a,b}\mu.
\label{eq:gab-expr}
\end{equation}
Indeed, by the definition of~$\tilde g_{a,b}$, the right-hand side
satisfies \eqref{eq:def-gab}. Furthermore, for all $a\in U^{r_1}$ we
define a smooth function $l_a$ on~$X\setminus\{a\}$ by
$$
l_a=\begin{cases}{\chi\over2\pi}\log|z-z(a)|& \text{on }U\\
0& \text{on }X\setminus\overline{U};\end{cases}
$$
this is bounded from above by ${1\over2\pi}\log(r_4+r_1)$.
\begin{lem}
\label{gablalb}
For all $a,b\in U^{r_1}$, we have
$$
\max_X|g_{a,b}-l_a+l_b|<c_4(r_1,r_2,r_4,\lambda,c_1),
$$
where
$$
c_4(r_1,r_2,r_4,\lambda,c_1)=c_3(r_1,r_2,r_4,\lambda)
+{1\over2\pi}\log{r_4+r_1\over r_4-r_1}
+\left({8\over3}\log2-{1\over4}\right)\frac{c_1}{r_4^2}.
$$
\end{lem}
\begin{proof}
By \eqref{eq:gab-expr} and the definitions of $f_{a,b}$ and $l_a$, we
get
$$
g_{a,b}-l_a+l_b=\tilde g_{a,b}-\int_X\chi f_{a,b}\mu
+{\chi\over2\pi}\log\left|{\overline{z(a)}z-r_4^2\over
\overline{z(b)}z-r_4^2}\right|,
$$
where the last term is extended to zero outside~$U$. We bound each of
the terms on the right-hand side. From $\int_X\tilde g_{a,b}\mu=0$
and the non-negativity of~$\mu$ it follows that
$$
\max_X\tilde g_{a,b}\ge0\ge\min_X\tilde g_{a,b}.
$$
Together with the bound for $\max_X\tilde g_{a,b}-\min_X\tilde
g_{a,b}$ from Lemma~\ref{maxmintildegab}, this implies
$$
\max_X|\tilde g_{a,b}|\le c_3(r_1,r_2,r_4,\lambda).
$$
Because the support of~$\chi$ is contained in $U^{r_4}$, the
hypothesis~\ref{hyp:mu-bound} of Definition~\ref{MerklAtlas} together
with the definition of~$f_{a,b}$ gives
$$
\int_X\chi f_{a,b}\mu=\int_{U^{r_4}}{\chi\over2\pi}\left(
\log\left|\frac{z-z(a)}{r_4}\right|
+\log\left|\frac{\overline{z(a)}z}{r_4^2}-1\right|
-\log\left|\frac{z-z(b)}{r_4}\right|
-\log\left|\frac{\overline{z(b)}z}{r_4^2}-1\right|\right)\mu.
$$
Writing $w=z/r_4$ and $t=z(a)/r_4$, we have
\begin{align*}
\int_{U^{r_4}}{\chi\over2\pi}\log\left|\frac{z-z(a)}{r_4}\right|\mu
&\le{c_1\over2\pi r_4^2}
\int_{\lower1ex\hbox{$\mkern-8mu{|w|<1\atop|w-t|>1}$}}
\mkern-12mu\log|w-t|\,i\,dw\wedge d\bar w.
\end{align*}
We note that $t$ satisfies $|t|<r_1/r_4$; for simplicity, we relax
this to $|t|\le1$. Then it is easy to see that the above expression
attains its maximum for $|t|=1$; by rotational symmetry we can take
$t=1$. We now have to integrate over the crescent-shaped domain
$\bigl\{w\in{\bf C}\bigm| |w|<1\hbox{ and }|w-1|>1\bigr\}$,
which is contained in
$\bigl\{1+r\exp(i\phi)\bigm| 1<r<2,2\pi/3<\phi<4\pi/3\bigr\}$.
We get
\begin{align*}
\int_{U^{r_4}}{\chi\over2\pi}\log\left|\frac{z-z(a)}{r_4}\right|\mu
&<{c_1\over\pi r_4^2}\int_{2\pi/3}^{4\pi/3}\!\!\int_1^2\log(r)\,r\,dr\,d\phi\\
&=\left({4\over3}\log2-{1\over2}\right)\frac{c_1}{r_4^2}.
\end{align*}
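For completeness, the inner integral is evaluated using the antiderivative ${r^2\over2}\log r-{r^2\over4}$ of $r\log r$:
$$
\int_{2\pi/3}^{4\pi/3}\!\!\int_1^2\log(r)\,r\,dr\,d\phi
={2\pi\over3}\left[{r^2\over2}\log r-{r^2\over4}\right]_1^2
={2\pi\over3}\left(2\log2-{3\over4}\right)
=\pi\left({4\over3}\log 2-{1\over2}\right).
$$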
In a similar way, we obtain
\begin{align*}
\int_{U^{r_4}}{\chi\over2\pi}\log\left|\frac{z-z(a)}{r_4}\right|\mu&\ge-\frac{c_1}{2r_4^2},\\
\int_{U^{r_4}}{\chi\over2\pi}\log\left|\frac{\overline{z(a)}z}{r_4^2}-1\right|\mu&<\left({4\over3}\log2-{1\over2}\right)\frac{c_1}{r_4^2},\\
\int_{U^{r_4}}{\chi\over2\pi}\log\left|\frac{\overline{z(a)}z}{r_4^2}-1\right|\mu&\ge-\frac{c_1}{4r_4^2}.
\end{align*}
The same bounds hold for~$b$. Combining everything, we get
$$
\left|\int_X\chi f_{a,b}\mu\right|\le
\left({8\over3}\log2-{1\over4}\right)\frac{c_1}{r_4^2}.
$$
Finally, we have
\begin{align*}
\max_X{\chi\over2\pi}\log\left|
\frac{\overline{z(a)}z-r_4^2}{\overline{z(b)}z-r_4^2}\right|
&\le{1\over2\pi}\sup_{U^{r_4}}
\log\left|\frac{r_4-\overline{z(a)}z/r_4}{r_4-\overline{z(b)}z/r_4}\right|\\
&\le{1\over2\pi}\log\frac{r_4+r_1}{r_4-r_1},
\end{align*}
which finishes the proof.
\end{proof}
We will now apply Lemma~\ref{gablalb}, which holds for any chart
$(U,z)$ satisfying the hypotheses \ref{hyp:open-unit-disc}
and~\ref{hyp:mu-bound} of Definition~\ref{MerklAtlas}, to our atlas
$\{(U_j,z_j)\mid 1\le j\le n\}$. Besides including the index $j$ in
the notation for the coordinates, we denote by $l_a^{(j)}$ and
$\chi^{(j)}$ the functions $l_a$ and $\chi$ defined for the coordinate
$(U_j,z_j)$. We obtain the following generalisation of
Lemma~\ref{gablalb} to the situation where $a$~and~$b$ are arbitrary
points of~$X$.
\begin{lem}
\label{gablalb2}
For all $a,b\in X$ and all $j,k$ such that $a\in U_j^{r_1}$ and $b\in
U_k^{r_1}$,
$$
\sup_X\bigl|g_{a,b}-l^{(j)}_{a\vphantom b}+l^{(k)}_b\bigr|
\le c_5(r_1,r_2,r_4,\lambda,c_1,n,M),
$$
where
$$
c_5(r_1,r_2,r_4,\lambda,c_1,n,M)=
n c_4(r_1,r_2,r_4,\lambda,c_1)
+{n-1\over2\pi}\log\left(M\frac{r_4+r_1}{r_2-r_1}\right).
$$
\end{lem}
\begin{proof}
We first show that for any two coordinate indices $j$~and~$k$ and for
all $a\in U_k^{r_1}\cap U_j^{r_1}$,
\begin{equation}
\sup_X\bigl|l_a^{(k)}-l_a^{(j)}\bigr|\le{1\over2\pi}
\log\left(M\frac{r_4+r_1}{r_2-r_1}\right).
\label{eq:star}
\end{equation}
To prove this, let $y\in X$. We distinguish three cases to prove that
$l_a^{(k)}(y)-l_a^{(j)}(y)$ is bounded from above by the right-hand
side of~\eqref{eq:star}; the inequality then follows by interchanging
$j$ and~$k$.
\smallskip\noindent{\it Case 1:}\enspace Suppose $y\in U_j$ with
$|z_j(y)-z_j(a)|<(r_2-r_1)/M$. In this case we have
$$
|z_j(y)|<|z_j(a)|+{r_2-r_1\over M}<r_2,
$$
hence $a,y\in U_j^{r_2}$. Let $[a,y]^j$ denote the line segment
between $a$ and $y$ in the $z_j$-coordinate, i.e.\ the curve in
$U_j^{r_2}$ whose $z_j$-coordinate is parametrised by
$$
\hat z_j(t)=(1-t)z_j(a)+tz_j(y)\quad(0\le t\le 1).
$$
We claim that this line segment also lies inside $U_k^{r_2}$. Suppose
this is not the case; then, because the `starting point'
$z_j^{-1}\bigl(\hat z_j(0)\bigr)=a$ does lie in $U_k^{r_2}$,
there exists a smallest $t\in(0,1)$ for which the point
$$
y'=z_j^{-1}\bigl(\hat z_j(t)\bigr)\in U_j^{r_2}
$$
lies on the boundary of $U_k^{r_2}$. It follows from the
hypothesis~\ref{hyp:glueing-function-bound} of
Definition~\ref{MerklAtlas} that
$$
|z_k(y')-z_k(a)|\le M|z_j(y')-z_j(a)|.
$$
On the other hand,
\begin{align*}
|z_j(y')-z_j(a)|&=t|z_j(y)-z_j(a)|\\
&<(r_2-r_1)/M,
\end{align*}
by assumption, and
$$
|z_k(y')-z_k(a)|>r_2-r_1
$$
by the triangle inequality. This implies
$$
|z_k(y')-z_k(a)|>M|z_j(y')-z_j(a)|,
$$
a contradiction. Therefore, the line segment $[a,y]^j$ lies inside
$U_j^{r_2}\cap U_k^{r_2}$. By
hypothesis~\ref{hyp:glueing-function-bound} of
Definition~\ref{MerklAtlas}, we have
$$
|z_k(y)-z_k(a)|\le M|z_j(y)-z_j(a)|.
$$
Because $\chi^{(j)}(y)=\chi^{(k)}(y)=1$, we find
\begin{align*}
l_a^{(k)}(y)-l_a^{(j)}(y)&={1\over2\pi}\log\left|
{z_k(y)-z_k(a)\over z_j(y)-z_j(a)}\right|\\
&\le{1\over2\pi}\log M,
\end{align*}
which is bounded by the right-hand side of~\eqref{eq:star}.
\smallskip\noindent{\it Case 2:}\enspace Suppose $y\not\in U_j$.
Then $l_a^{(j)}(y)=0$, and thus
$$
l_a^{(k)}(y)-l_a^{(j)}(y)=l_a^{(k)}(y)\le{\log(r_4+r_1)\over2\pi}.
$$
\smallskip\noindent{\it Case 3:}\enspace Suppose $y\in U_j$ and
$|z_j(y)-z_j(a)|\ge(r_2-r_1)/M$. Then
$$
l_a^{(k)}(y)-l_a^{(j)}(y)\le{\log(r_4+r_1)\over2\pi}
-{\chi^{(j)}(y)\over2\pi}\log{r_2-r_1\over M},
$$
which is also bounded by the right-hand side in~\eqref{eq:star}.

By hypothesis~\ref{hyp:covering} of Definition~\ref{MerklAtlas}, the
open sets $U_j^{r_1}$ cover $X$. Furthermore, $X$ is connected. For
arbitrary $a,b\in X$ and indices $j$ and $k$ such that $a\in
U_j^{r_1}$ and $b\in U_k^{r_1}$, we can therefore choose a finite
sequence of indices $j=j_1$, $j_2$, \dots, $j_m=k$ with $m\le n$ and
points $a=a_0$, $a_1$, \dots, $a_m=b$ such that $a_i\in
U_{j_i}^{r_1}\cap U_{j_{i+1}}^{r_1}$ for $1\le i\le m-1$. Using
$$
g_{a,b}=\sum_{i=1}^m g_{a_{i-1},a_i}
$$
we get
\begin{align*}
\sup_X\bigl|g_{a,b}-l_{a\vphantom b}^{(j)}+l_b^{(k)}\bigr|&=\sup_X\left|
\sum_{i=1}^m\left(g_{a_{i-1},a_i}-l_{a_{i-1}}^{(j_i)}
+l_{a_i}^{(j_i)}\right)+\sum_{i=1}^{m-1}\left(
l_{a_i}^{(j_{i+1})}-l_{a_i}^{(j_i)}\right)\right|\\
&\le\sum_{i=1}^m\sup_X
\left|g_{a_{i-1},a_i}-l_{a_{i-1}}^{(j_i)}+l_{a_i}^{(j_i)}\right|
+\sum_{i=1}^{m-1}\sup_X
\left|l_{a_i}^{(j_{i+1})}-l_{a_i}^{(j_i)}\right|.
\end{align*}
The lemma now follows from Lemma~\ref{gablalb} and the
inequality~\eqref{eq:star}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Merkl}]
We choose a continuous partition of unity $\{\phi^j\}_{j=1}^n$
subordinate to the covering $\{U_j^{r_1}\}_{j=1}^n$. Let $a\in X$ and
let $j$ be an index such that $a\in U_j^{r_1}$. By the definition of
$g_{a,\mu}$ we have
\begin{align*}
g_{a,\mu}(x)-l_a^{(j)}(x)
&=\int_{b\in X}g_{a,b}(x)\mu(b)-l_a^{(j)}(x)\\
&=\sum_{k=1}^n\int_{b\in U_k^{r_1}}\phi^k(b)
\bigl(g_{a,b}(x)-l_a^{(j)}(x)\bigr)\mu(b)\\
&=\sum_{k=1}^n\int_{b\in U_k^{r_1}}\phi^k(b)
\bigl(g_{a,b}(x)-l^{(j)}_a(x)+l_b^{(k)}(x)\bigr)\mu(b)
-\sum_{k=1}^n\int_{b\in U_k^{r_1}}\phi^k(b)l_b^{(k)}(x)\mu(b).
\end{align*}
In a similar way as in the proof of Lemma~\ref{gablalb}, one can check
that for every index~$k$ and all $x\in X$ we have
$$
-\frac{c_1}{2}\le
\int_{b\in U_k^{r_1}} \phi^k(b)l_b^{(k)}(x)\mu(b)
\le\left(\frac{4}{3}\log{2}-\frac{1}{2}\right)c_1,
$$
so that
$$
\sup_{x\in X}\left|
\int_{b\in U_k^{r_1}} \phi^k(b)l_b^{(k)}(x)\mu(b)\right|
\le\frac{c_1}{2}.
$$
Together with Lemma~\ref{gablalb2}, this gives the inequality
\begin{align*}
\sup_X \bigl|g_{a,\mu}-l_a^{(j)}\bigr|&\le
c_5(r_1,r_2,r_4,\lambda,c_1,n,M)
\sum_{j=1}^n\int_{b\in U_j^{r_1}}\phi^j(b)\mu(b)
+\sum_{j=1}^n\frac{c_1}{2}\\
&=c_5(r_1,r_2,r_4,\lambda,c_1,n,M)+\frac{nc_1}{2}.
\end{align*}
We also have
\begin{align*}
\sup_X g_{a,\mu}&\le\sup_X\bigl(g_{a,\mu}-l_a^{(j)}\bigr)
+\sup_X l_a^{(j)}\\
&\le\sup_X\bigl(g_{a,\mu}-l_a^{(j)}\bigr)+{\log(r_4+r_1)\over2\pi}.
\end{align*}
By varying the choice of $r_4$ and~$\tilde\chi$, we can let $r_4$ tend
to~1 and $\lambda$ to $1/(1-r_2)$.
This leads to
\begin{align*}
c_3(r_1,r_2,1,1/(1-r_2))&=4\sqrt{\frac{1+r_2}{1-r_2}}\left(
\frac{1}{2(1-r_2)}\log\frac{(r_1+1)^2}{(r_2-r_1)(1-r_1)}
+{1\over r_2-r_1}+{r_1\over 1-r_1}\right)\\
&\qquad+{2\over\pi}\log\frac{(r_1+1)^2}{(r_2-r_1)(1-r_1)},\\
c_4(r_1,r_2,1,1/(1-r_2),c_1)&=c_3(r_1,r_2,1,1/(1-r_2))
+{1\over2\pi}\log{1+r_1\over 1-r_1}
+\left({8\over3}\log2-{1\over4}\right){c_1},\\
c_5&=n c_4(r_1,r_2,1,1/(1-r_2),c_1)
+{n-1\over2\pi}\log\left(M\frac{1+r_1}{r_2-r_1}\right).
\end{align*}
We take
$$
r_2=0.39+0.61r_1.
$$
Then for $r_1>1/2$ one can check numerically that
$$
c_5\le 52.4 \frac{n}{(1-r_1)^{3/2}}\log\frac{1}{1-r_1}
+1.60nc_1+\frac{n-1}{2\pi}\log M.
$$
From this the theorem follows.
\end{proof}
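The numerical check invoked at the end of the proof can be scripted directly from the displayed formulas. The sketch below evaluates the specialised constants $c_3$, $c_4$, $c_5$ (with $r_4=1$, $\lambda=1/(1-r_2)$ and $r_2=0.39+0.61r_1$) and compares $c_5$ with the claimed bound; the sample values of $r_1$, $n$, $c_1$, $M$ are our own choice and do not constitute a proof.

```python
import math

def c3(r1, r2):
    """c_3(r1, r2, 1, 1/(1-r2)) as displayed in the proof."""
    L = math.log((r1 + 1) ** 2 / ((r2 - r1) * (1 - r1)))
    return (4 * math.sqrt((1 + r2) / (1 - r2))
            * (L / (2 * (1 - r2)) + 1 / (r2 - r1) + r1 / (1 - r1))
            + 2 * L / math.pi)

def c5(r1, n, c1, M):
    """c_5 for r_4 = 1 and lambda = 1/(1-r2), with r2 = 0.39 + 0.61*r1."""
    r2 = 0.39 + 0.61 * r1
    c4 = (c3(r1, r2) + math.log((1 + r1) / (1 - r1)) / (2 * math.pi)
          + (8 * math.log(2) / 3 - 0.25) * c1)
    return n * c4 + (n - 1) / (2 * math.pi) * math.log(M * (1 + r1) / (r2 - r1))

def claimed_bound(r1, n, c1, M):
    return (52.4 * n / (1 - r1) ** 1.5 * math.log(1 / (1 - r1))
            + 1.60 * n * c1 + (n - 1) / (2 * math.pi) * math.log(M))

for r1 in (0.6, 0.7, 0.8, 0.9, 0.99):
    for n, c1, M in ((1, 1.0, 2.0), (5, 10.0, 100.0)):
        assert c5(r1, n, c1, M) <= claimed_bound(r1, n, c1, M)
```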
\section{Introduction}
Question answering (QA) and question generation (QG) are two fundamental tasks in natural language processing \cite{Manning1999,Jurafsky2000}.
Both tasks involve reasoning between a question sequence $q$ and an answer sentence $a$.
In this work, we take answer sentence selection \cite{yang2015wikiqa} as the QA task; it is a fundamental QA task that is important for many applications such as search engines and conversational bots.
The task of QA takes a question sentence $q$ and a list of candidate answer sentences as the input, and finds the top relevant answer sentence from the candidate list.
The task of QG takes a sentence $a$ as input, and generates a question sentence $q$ which could be answered by $a$.
It is obvious that the input and the output of these two tasks are (almost) reverse, which is referred to as ``duality'' in this paper.
This duality connects QA and QG, and could potentially help the two tasks improve each other.
Intuitively, QA could improve QG by measuring the relevance between the generated question and the answer.
This QA-specific signal could push the QG model to generate questions that are not only literally similar to the reference question, but also answerable by the given answer sentence.
In turn, QG could improve QA by providing additional signal which stands for the probability of generating a question given the answer.
Moreover, QA and QG have probabilistic correlation as both tasks relate to the joint probability between $q$ and $a$.
Given a question-answer pair $\langle q, a \rangle$, the joint probability $P(q, a)$ can be computed in two equivalent ways.
\begin{equation}\label{equation:pqa}
P(q, a) = P(a) P(q|a) = P(q)P(a|q)
\end{equation}
The conditional distribution $P(q|a)$ is exactly the QG model, and the conditional distribution $P(a|q)$ is closely related to the QA model\footnote{In this work, our QA model is $f_{qa}(a,q;\theta_{qa})$. The conditional distribution $P(a|q)$ could be derived from the QA model, which will be detailed in the next section.}.
Existing studies typically learn the QA model and the QG model separately by minimizing their own loss functions, while ignoring the probabilistic correlation between them.
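The identity in Equation~\ref{equation:pqa} is simply the product rule of probability; the toy discrete example below (with made-up numbers of our own) makes the two equivalent factorizations concrete. The regularizer introduced next penalizes learned models for violating this identity.

```python
# A toy joint distribution P(q, a) with made-up numbers.
P = {("q1", "a1"): 0.3, ("q1", "a2"): 0.1,
     ("q2", "a1"): 0.2, ("q2", "a2"): 0.4}

# Marginals P(q) and P(a) obtained by summing out the other variable.
P_q = {"q1": 0.4, "q2": 0.6}
P_a = {"a1": 0.5, "a2": 0.5}

for (q, a), p in P.items():
    p_q_given_a = p / P_a[a]          # QG-style conditional P(q|a)
    p_a_given_q = p / P_q[q]          # QA-style conditional P(a|q)
    # Both factorizations recover the same joint probability P(q, a).
    assert abs(P_a[a] * p_q_given_a - P_q[q] * p_a_given_q) < 1e-12
```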
Based on these considerations, we introduce a training framework that exploits the duality of QA and QG to improve both tasks.
There might be different ways of exploiting the duality of QA and QG.
In this work, we leverage the probabilistic correlation between QA and QG as the regularization term to influence the training process of both tasks.
Specifically, the training objective of our framework is to jointly learn the QA model parameterized by $\theta_{qa}$ and the QG model parameterized by $\theta_{qg}$ by minimizing their loss functions subject to the following constraint.
\begin{equation}\label{equation:regular}
P_a(a) P(q|a;\theta_{qg}) = P_q(q)P(a|q;\theta_{qa})
\end{equation}
$P_a(a)$ and $P_q(q)$ are the language models for answer sentences and question sentences, respectively.
We examine the effectiveness of our training criterion by applying it to strong neural network based QA and QG models.
Specifically, we implement a generative QG model based on sequence-to-sequence learning, which takes an answer sentence as input and generates a question sentence in an end-to-end fashion.
We implement a discriminative QA model based on recurrent neural networks, where both the question and the answer are represented as continuous vectors in a sequential way.
As every component in the entire framework is differentiable, all the parameters could be conventionally trained through back propagation.
We conduct experiments on three datasets \cite{yang2015wikiqa,rajpurkar-EtAl:2016:EMNLP2016,nguyen2016ms}. Empirical results show that our training framework improves both QA and QG tasks. The improved QA model performs comparably with strong baseline approaches on all three datasets.
\section{The Proposed Framework}
In this section, we first formulate the task of QA and QG, and then present the proposed algorithm for jointly training the QA and QG models.
We also describe
the connections and differences between this work and existing studies.
\subsection{Task Definition and Notations}
This work involves two tasks, namely question answering (QA) and question generation (QG).
There are different kinds of QA tasks in natural language processing community.
In this work, we take answer sentence selection \cite{yang2015wikiqa} as the QA task, which takes a question $q$ and a list of candidate answer sentences $A = \{a_1, a_2, ... , a_{|A|}\}$ as input, and outputs one answer sentence $a_i$ from the candidate list which has the largest probability to be the answer. This QA task is typically viewed as a ranking problem. Our QA model is abbreviated as $f_{qa}(a,q;\theta_{qa})$, which is parameterized by $\theta_{qa}$ and the output is a real-valued scalar.
The task of QG takes a sentence $a$ as input, and outputs a question $q$ which could be answered by $a$.
In this work, we regard QG as a generation problem and develop a generative model based on sequence-to-sequence learning. Our QG model is abbreviated as $P_{qg}(q|a;\theta_{qg})$, which is parameterized by $\theta_{qg}$ and the output is the probability of generating a natural language question $q$.
\subsection{Algorithm Description}
We describe the proposed algorithm in this subsection.
Overall, the framework includes three components, namely a QA model, a QG model and a regularization term that reflects the duality of QA and QG.
Accordingly, the training objective of our framework includes three parts, which is described in Algorithm 1.
The QA specific objective aims to minimize the loss function $l_{qa}(f_{qa}(a,q;\theta_{qa}), label)$,
where $label$ is 0 or 1 that indicates whether $a$ is the correct answer of $q$ or not.
Since the goal of a QA model is to predict whether a question-answer pair is correct or not, it is necessary to use negative QA pairs whose labels are zero.
The details about the QA model will be presented in the next section.
For each correct question-answer pair, the QG specific objective is to minimize the following loss function,
\begin{equation}
l_{qg}(q, a) = -log P_{qg}(q|a;\theta_{qg})
\end{equation}
where $a$ is the correct answer of $q$.
The negative QA pairs are not necessary because the goal of a QG model is to generate the correct question for an answer.
The QG model will be described in the following section.
\begin{algorithm}[tb]
\caption{Algorithm Description}
\label{alg:example}
\begin{algorithmic}
\STATE {\bfseries Input:} Language models $P_a(a)$ and $P_q(q)$ for answer and question, respectively; hyper parameters $\lambda_q$ and $\lambda_a$; optimizer $opt$
\STATE {\bfseries Output:} QA model $f_{qa}(a,q)$ parameterized by $\theta_{qa}$; QG model $P_{qg}(q|a)$ parameterized by $\theta_{qg}$
\STATE
\STATE Randomly initialize $\theta_{qa}$ and $\theta_{qg}$
\REPEAT
\STATE Get a minibatch of positive QA pairs $\langle q^p_i, a^p_i \rangle_{i=1}^m$, where $a^p_i$ is the answer of $q^p_i$;
\STATE Get a minibatch of negative QA pairs $\langle q^n_i, a^n_i \rangle_{i=1}^m$, where $a^n_i$ is not the answer of $q^n_i$;
\STATE Calculate the gradients for $\theta_{qa}$ and $\theta_{qg}$.
\vspace{-0.3cm}
\STATE \begin{align}\nonumber G_{qa} = \triangledown_{\theta_{qa}} &\frac{1}{m}\sum_{i = 1}^{m}[l_{qa}(f_{qa}(a^p_i,q^p_i;\theta_{qa}), 1) \\
&\nonumber + l_{qa}(f_{qa}(a^n_i,q^n_i;\theta_{qa}),0) \\
& +\lambda_al_{dual}(a^p_i,q^p_i;\theta_{qa}, \theta_{qg})]\end{align}
\vspace{-0.8cm}
\STATE \begin{align}\nonumber G_{qg} = \triangledown_{\theta_{qg}} &\frac{1}{m}\sum_{i = 1}^{m}[\ l_{qg}(q^p_i,a^p_i) \\& + \lambda_ql_{dual}(q^p_i,a^p_i;\theta_{qa}, \theta_{qg})]\end{align}
\STATE Update $\theta_{qa}$ and $\theta_{qg}$:
\STATE $\theta_{qa} \leftarrow opt(\theta_{qa}, G_{qa})$, $\theta_{qg} \leftarrow opt(\theta_{qg}, G_{qg})$
\UNTIL{models converged}
\end{algorithmic}
\end{algorithm}
The third objective is the regularization term which satisfies the probabilistic duality constrains as given in Equation~\ref{equation:regular}.
Specifically, given a correct $\langle q, a \rangle$ pair, we would like to minimize the following loss function,
\begin{align} \nonumber
l_{dual}(a,q;\theta_{qa}, \theta_{qg}) &= [logP_a(a) + log P(q|a;\theta_{qg}) \\
& - logP_q(q) - logP(a|q;\theta_{qa})]^2
\end{align}
where $P_a(a)$ and $P_q(q)$ are marginal distributions, which can be easily obtained through language models. The conditional probability $P(q|a;\theta_{qg})$ can be calculated directly with the chain rule of probability:
$P(q|a;\theta_{qg}) = \prod_{t=1}^{|q|} P(q_t|q_{<t}, a;\theta_{qg})$, where $P(q_t|q_{<t}, a;\theta_{qg})$ is given by the decoder of the QG model (detailed in the following section).
However, the conditional probability $P(a|q;\theta_{qa})$ is different from the output of the QA model $f_{qa}(a,q;\theta_{qa})$. To address this, given a question $q$, we sample a set of answer sentences $A'$, and derive the conditional probability $P(a|q;\theta_{qa})$ based on our QA model with the following equation.
\begin{align}\nonumber
&P(a|q;\theta_{qa}) = \\
&\dfrac{exp(f_{qa}(a,q;\theta_{qa}))}{exp(f_{qa}(a,q;\theta_{qa})) + \sum_{a' \in A'} exp(f_{qa}(a',q;\theta_{qa}))}
\end{align}
In this way, we learn the models of QA and QG by minimizing the weighted combination between the original loss functions and the regularization term.
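To make the regularizer concrete, the following sketch (our own illustration, with hypothetical log-probabilities and scores) computes $l_{dual}$ and derives $\log P(a|q;\theta_{qa})$ by a softmax over the QA scores of the correct answer and a set of sampled answers:

```python
import math

def log_p_a_given_q(scores, idx):
    """log P(a|q): softmax over the QA-model scores of candidate answers."""
    m = max(scores)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return scores[idx] - log_z

def dual_loss(log_pa, log_q_given_a, log_pq, log_a_given_q):
    """l_dual: squared violation of log P(a) + log P(q|a) = log P(q) + log P(a|q)."""
    return (log_pa + log_q_given_a - log_pq - log_a_given_q) ** 2

# Hypothetical QA scores: index 0 is the correct answer, the rest are sampled.
scores = [2.0, 0.5, -1.0, 0.3]
log_a_given_q = log_p_a_given_q(scores, 0)
loss = dual_loss(log_pa=-5.0, log_q_given_a=-4.0,
                 log_pq=-6.0, log_a_given_q=log_a_given_q)
assert loss >= 0.0
```

In training, the gradient of this term with respect to both $\theta_{qa}$ and $\theta_{qg}$ is what couples the two models.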
\subsection{Relationships with Existing Studies}
Our work differs from \cite{yang2017semi} in that they regard reading comprehension (RC) as the main task, and regard question generation as the auxiliary task to boost the main task RC.
In our work, the roles of QA and QG are the same, and our algorithm enables QA and QG to improve the performance of each other simultaneously.
Our approach differs from Generative Domain-Adaptive Nets \cite{yang2017semi} in that we do not pretrain the QA model. Our QA and QG models are jointly learned from random initialization.
Moreover, our QA task differs from RC in that the answer in our task is a sentence rather than a text span from a sentence.
Our approach is inspired by dual learning \cite{xia2016dual,xia2017dual}, which leverages the duality between two tasks to improve each other. Different from the dual learning \cite{xia2016dual} paradigm, our framework learns both models from scratch and does not need task-specific pretraining.
The recently introduced supervised dual learning \cite{xia2017dual} has been successfully applied to image recognition, machine translation and sentiment analysis. Our work could be viewed as the first to leverage the idea of supervised dual learning for question answering.
Our approach differs from Generative Adversarial Nets (GAN) \cite{goodfellow2014generative} in two respects.
On one hand, the goal of original GAN is to learn a powerful generator, while the discriminative task is regarded as the auxiliary task. The roles of the two tasks in our framework are the same.
On the other hand, the discriminative task of GAN aims to distinguish between the real data and the artificially generated data, while we focus on the real QA task.
\section{The Question Answering Model}
We describe the details of the question answering (QA) model in this section.
Overall, a QA model could be formulated as a function $f_{qa}(q, a;\theta_{qa})$ parameterized by $\theta_{qa}$ that maps a question-answer pair to a scalar. In the inference process, given a $q$ and a list of candidate answer sentences, $f_{qa}(q, a;\theta_{qa})$ is used to calculate the relevance between $q$ and every candidate $a$.
The top ranked answer sentence is regarded as the output.
We develop a neural network based QA model.
Specifically, we first represent each word as a low dimensional and real-valued vector, also known as word embedding \cite{Bengio2003,Mikolov2013a,Pennington2014}.
Afterwards, we use a recurrent neural network (RNN) to map a question of variable length to a fixed-length vector.
To alleviate the vanishing-gradient problem, we use gated recurrent units (GRU) \cite{cho-EtAl:2014:EMNLP2014} as the basic computation unit.
The approach recursively calculates the hidden vector ${h}_{i}$ based on the current word vector $e^q_{i}$ and the previous hidden vector ${h}_{i-1}$:
\begin{align}
&z_i = \sigma(W_{z}e^q_{i} + U_{z}{h}_{i-1}) \\
&r_i = \sigma(W_{r}e^q_{i} + U_{r}{h}_{i-1}) \\
&\widetilde{h}_i = \tanh(W_{h}e^q_{i} + U_{h}(r_i \odot {h}_{i-1})) \\
&{h}_{i} = z_i \odot \widetilde{h}_i + (1-z_i) \odot {h}_{i-1}
\end{align}
where $z_i$ and $r_i$ are the update and reset gates of the GRU, $\odot$ stands for element-wise multiplication, and $\sigma$ is the sigmoid function.
We use a bi-directional RNN to get the meaning of a question from both directions, and use the concatenation of two last hidden states as the final question vector $v_q$. We compute the answer sentence vector $v_a$ in the same way.
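As an illustration, the GRU recurrence above can be sketched in plain NumPy as follows. This is our own minimal sketch, not the paper's implementation: the weight shapes and the `encode` helper are assumptions, and a real model would be trained with an autodiff framework.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(e, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    """One GRU step: gates computed from the current word vector e
    and the previous hidden state h_prev, as in the equations above."""
    z = sigmoid(W_z @ e + U_z @ h_prev)              # update gate
    r = sigmoid(W_r @ e + U_r @ h_prev)              # reset gate
    h_tilde = np.tanh(W_h @ e + U_h @ (r * h_prev))  # candidate state
    return z * h_tilde + (1.0 - z) * h_prev          # interpolated new state

def encode(embeddings, d_hidden, params):
    """Run the recurrence over a sentence; return the last hidden state."""
    h = np.zeros(d_hidden)
    for e in embeddings:
        h = gru_step(e, h, *params)
    return h
```

The bi-directional variant simply runs the same recurrence over the reversed word sequence as well and concatenates the two final states.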
After obtaining $v_q$ and $v_a$, we implement a simple yet effective way to calculate the relevance between a question-answer pair.
Specifically, we represent a question-answer pair as the concatenation of four vectors, namely $v(q, a) = [v_q; v_a; v_q \odot v_a ; e_{c(q,a)}]$, where $c(q,a)$ is the number of words that co-occur in $q$ and $a$.
We observe that incorporating the embedding of the word co-occurrence count $e_{c(q,a)}$ empirically improves the QA performance.
We use an additional embedding matrix $L_c \in \mathbb{R}^{d_c \times |V_c|}$, where $d_c$ is the dimension of the word co-occurrence embedding and $|V_c|$ is the number of distinct co-occurrence counts.
The values of $L_c$ are jointly learned during training. The output scalar $f_{qa}(q,a)$ is calculated by feeding $v(q,a)$ to a linear layer followed by $tanh$.
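A minimal sketch of assembling the pair representation $v(q,a)$; the function name is ours, and clipping the co-occurrence count to the embedding table size is an assumption not stated in the paper.

```python
import numpy as np

def pair_features(v_q, v_a, q_words, a_words, L_c):
    """Concatenate [v_q; v_a; v_q * v_a; e_c], where e_c is the embedding
    (a column of L_c) of the number of words shared by question and answer,
    clipped to the co-occurrence vocabulary size."""
    c = min(len(set(q_words) & set(a_words)), L_c.shape[1] - 1)
    e_c = L_c[:, c]
    return np.concatenate([v_q, v_a, v_q * v_a, e_c])
```

The scalar score would then be obtained by a learned linear layer over this vector followed by a tanh.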
We feed $f_{qa}(q,a)$ to a $softmax$ layer and use the negative log-likelihood as the QA-specific loss function. The basic idea of this objective is to classify whether a given question-answer pair is correct or not.
We also implemented a ranking-based loss function $max(0, 1 - f_{qa}(q,a) + f_{qa}(q,a^*))$, where $a^*$ is a randomly selected answer; its basic idea is to assign the correct QA pair a higher score than a randomly selected one. However, our empirical results showed that the ranking loss performed worse, so we use the negative log-likelihood as the QA loss function in the experiments.
\begin{table*}[t]
\centering
\begin{tabular}{l|ccc|ccc|ccc}
\hline
\multirow{2}{*}{} & \multicolumn{3}{c|}{MARCO} & \multicolumn{3}{c|}{SQUAD} & \multicolumn{3}{c}{WikiQA} \\
\cline{2-10}
& Train & Dev & Test & Train & Dev & Test& Train & Dev & Test \\
\hline
{\# questions} & 82,326 & 4,806 & 5,241 & 87,341 & 5,273 & 5,279 & 1,040 & 140 & 293\\
\# question-answer pairs & 676,193 & 39,510 & 42,850 & 440,573 & 26,442 & 26,604 & 20,360 & 2,733 & 6,165\\
{Avg \# answers per question} & 8.21& 8.22&8.18 & 5.04 & 5.01 & 5.04 & 19.57 & 19.52 & 21.04\\
{Avg length of questions} & 6.05& 6.05& 6.10 & {11.37} &11.57 &11.46 & 6.40& 6.46& 6.42\\
{Avg length of answers} & 82.73& 82.54& 82.89 & {27.80} & 28.70 & 28.66 & 30.04& 29.65& 28.91\\
\hline
\end{tabular}
\caption{Statistics of the MARCO, SQUAD and WikiQA datasets for answer sentence selection.}
\label{table:statistic}
\end{table*}
\section{The Question Generation Model}
We describe the question generation (QG) model in this section.
The model is inspired by the recent success of sequence-to-sequence learning in neural machine translation.
Specifically, the QG model first calculates the representation of the answer sentence with an encoder, and then takes the answer vector to generate a question in a sequential way with a decoder.
We will present the details of the encoder and the decoder, respectively.
The goal of the encoder is to represent a variable-length answer sentence ${a}$ as a fixed-length continuous vector.
The encoder could be implemented with different neural network architectures such as convolutional neural network \cite{kalchbrenner-blunsom:2013:EMNLP,meng2015encoding} and recurrent neural network (RNN) \cite{bahdanau2014neural,sutskever2014sequence}.
In this work, we use a bidirectional RNN with GRU units, which is consistent with our QA model described in Section 3.
The concatenation of the last hidden vectors from both directions is used as the output of the encoder, which is also used as the initial hidden state of the decoder.
The decoder takes the output of the encoder and generates the question sentence.
We implement an RNN-based decoder, which works sequentially and generates one question word at each time step.
The decoder generates a word $q_{t}$ at each time step $t$ based on the representation of $a$ and the previously predicted question words $q_{<t}=\{q_1,q_2,...,q_{t-1}\}$.
This process is formulated as follows.
\begin{equation}
p(q|a)=\prod^{|q|}_{t=1}p(q_{t}|q_{<t},a)
\end{equation}
Specifically, we use an attention-based architecture \cite{luong-pham-manning:2015:EMNLP}, which selectively finds relevant information from the answer sentence when generating the question word. Therefore, the conditional probability is calculated as follows.
\begin{equation}
p(q_{t}|q_{<t},a)=f_{dec}(q_{t-1},s_{t}, c_t)
\end{equation}
where $s_{t}$ is the hidden state of GRU based RNN at time step $t$, and $c_t$ is the attention state at time step $t$.
The attention mechanism assigns a probability/weight to each hidden state in the encoder at one time step, and calculates the attention state $c_t$ through weighted averaging the hidden states of the encoder: $c_{t}=\sum^{|a|}_{i=1}\alpha_{\langle t,i\rangle}h_i$.
When calculating the attention weight of $h_i$ at time step $t$, we also take into account the attention distribution from the previous time step. This potentially lets the model remember which contexts from the answer sentence have already been used, so that it does not repeatedly use the same words to generate question words.
\begin{align}
\alpha_{\langle t,i\rangle}=\frac{\exp{[z(s_{t},h_i,\sum^{|a|}_{j=1}\alpha_{\langle t-1,j\rangle}h_j)]}}{\sum^{|a|}_{i'=1}\exp{[z(s_{t},h_{i'},\sum^{|a|}_{j=1}\alpha_{\langle t-1,j\rangle}h_{j})]}}
\end{align}
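One attention step of this history-aware scheme can be sketched as below. The score function $z$ is left abstract here (in practice it would be a small learned network); the lambda in the test is purely illustrative.

```python
import numpy as np

def softmax(x):
    x = x - x.max()          # numerical stability
    e = np.exp(x)
    return e / e.sum()

def attention_step(s_t, H, alpha_prev, z):
    """Attention weights over encoder states H (|a| x d) at decoder step t.
    The score function z also sees the previous attention summary, so the
    model can avoid re-attending to already-used answer words."""
    history = alpha_prev @ H                       # sum_j alpha_{t-1,j} h_j
    scores = np.array([z(s_t, h_i, history) for h_i in H])
    alpha = softmax(scores)
    c_t = alpha @ H                                # attention/context state
    return alpha, c_t
```

The context $c_t$ is then concatenated with $s_t$ and fed to the output layer, as described below.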
Afterwards, we feed the concatenation of $s_t$ and $c_t$ to a linear layer followed by a $softmax$ function.
The output dimension of the $softmax$ layer is equal to the number of top frequent question words (e.g. 30K or 50K) in the training data.
The output values of the $softmax$ layer form the probability distribution of the question words to be generated.
Furthermore, we observe that question sentences typically include informative but low-frequency words such as named entities or numbers.
These low-frequency words are closely related to the answer sentence but could not be well covered in the target vocabulary.
To address this, we add a simple yet effective post-processing step which replaces each ``unknown word'' with the most relevant word from the answer sentence.
Following \cite{luong-EtAl:2015:ACL-IJCNLP}, we use the attention probability as the relevance score of each word from the answer sentence.
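The post-processing step can be sketched as follows, assuming the per-step attention distributions over answer words have been recorded during decoding (function and token names are our own).

```python
def replace_unknowns(question_tokens, answer_tokens, attention):
    """Replace each '<unk>' in a generated question with the answer word
    receiving the highest attention weight at that decoding step.
    attention[t][i] is the weight on answer word i at generation step t."""
    out = []
    for t, w in enumerate(question_tokens):
        if w == "<unk>":
            best = max(range(len(answer_tokens)), key=lambda i: attention[t][i])
            out.append(answer_tokens[best])
        else:
            out.append(w)
    return out
```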
Copying mechanism \cite{gulcehre2016pointing,gu2016incorporating} is an alternative solution that adaptively determines whether the generated word comes from the target vocabulary or from the answer sentence.
Since every component of the QG model is differentiable, all the parameters could be learned in an end-to-end way with back propagation.
Given a question-answer pair $\langle q,a\rangle$, where $a$ is the correct answer of the question $q$, the training objective is to minimize the following negative log-likelihood.
\begin{equation}
l_{qg}(q,a)=-\sum^{|q|}_{t=1}\log[p(q_{t}|q_{<t},a)]
\end{equation}
In the inference process, we use beam search to get the top-$K$ confident results, where $K$ is the beam size.
The inference process stops when the model generates the symbol $\langle eos \rangle$ which stands for the end of sentence.
\begin{table*}[t]
\centering
\begin{tabular}{l|ccc|ccc}
\hline
\multirow{2}{*}{Method} & \multicolumn{3}{c|}{MARCO} & \multicolumn{3}{c}{SQUAD} \\
\cline{2-7}
& MAP & MRR & P@1 & MAP & MRR & P@1 \\
\hline
WordCnt & 0.3956 &0.4014&0.1789 & 0.8089&0.8168&0.6887\\
WgtWordCnt & 0.4223& 0.4287&0.2030 & 0.8714&0.8787&0.7958 \\
CDSSM \cite{shen2014CDSSM} & 0.4425 &0.4489 &0.2284 & 0.7978 & 0.8041 &0.6721 \\
ABCNN \cite{yin2015abcnn} & 0.4691 & 0.4767 & 0.2629 & 0.8685 & 0.8750 & 0.7843 \\
\hline
Basic QA & 0.4712 & 0.4783 & 0.2628 & 0.8580 & 0.8647 & 0.7740 \\
Dual QA & 0.4836 & 0.4911 & 0.2751 & 0.8643 & 0.8716 & 0.7814 \\
\hline
\end{tabular}
\caption{QA Performance on the MARCO and SQUAD datasets.}
\label{table:results-qa}
\end{table*}
\section{Experiment}
We describe the experimental setting and report empirical results in this section.
\subsection{Experimental Setting}
We conduct experiments on three datasets, including MARCO \cite{nguyen2016ms}, SQUAD \cite{rajpurkar-EtAl:2016:EMNLP2016}, and WikiQA \cite{yang2015wikiqa}.
The MARCO and SQUAD datasets
are originally developed for the reading comprehension (RC) task, the goal of which is to answer a question with a text span from a document.
Although our QA task (answer sentence selection) differs from RC, we use these two datasets for two reasons. The first is that, to our knowledge, they are the QA datasets containing the largest numbers of manually labeled question-answer pairs.
The second is that we can derive two answer sentence selection datasets from the original MARCO and SQUAD datasets, under the assumption that answer sentences containing the correct answer span are correct and all other sentences are incorrect.
We believe that our training framework could be easily applied to the RC task, but that is outside the focus of this work.
We also conduct experiments on WikiQA \cite{yang2015wikiqa}, which is a benchmark dataset for answer sentence selection.
Although its size is relatively small compared with MARCO and SQUAD, we still apply our algorithm to this dataset and report empirical results to further compare with existing methods.
It is worth noting that a common characteristic of MARCO and SQUAD is that the ground truth of the test set is not publicly available.
Therefore, we randomly split the original validation set into the dev set and the test set.
The statistics of SQUAD and MARCO datasets are given in Table \ref{table:statistic}.
We use the official split of the WikiQA dataset.
We apply exactly the same model to these three datasets.
We evaluate our QA system with three standard evaluation metrics: \textit{Mean Average Precision (MAP)}, \textit{Mean Reciprocal Rank (MRR)} and \textit{Precision@1 (P@1)} \cite{manning2008ir}.
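For reference, the three metrics can be computed from per-question score/label lists as sketched below; this is our own illustrative implementation, not an official evaluation script.

```python
def rank_metrics(queries):
    """queries: list of (scores, labels) per question, where scores[i] is the
    model score of candidate i and labels[i] is 1 for a correct answer.
    Returns (MAP, MRR, P@1) averaged over questions with >= 1 correct answer."""
    ap_sum = rr_sum = p1_sum = n = 0
    for scores, labels in queries:
        order = sorted(range(len(scores)), key=lambda i: -scores[i])
        ranked = [labels[i] for i in order]
        if not any(ranked):
            continue                       # skip questions with no correct answer
        n += 1
        p1_sum += ranked[0]                # Precision@1
        hits, ap = 0, 0.0
        for r, rel in enumerate(ranked, 1):
            if rel:
                hits += 1
                ap += hits / r             # precision at each relevant rank
                if hits == 1:
                    rr_sum += 1.0 / r      # reciprocal rank of first hit
        ap_sum += ap / hits                # average precision for this question
    return ap_sum / n, rr_sum / n, p1_sum / n
```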
It is hard to find a perfect way to automatically evaluate the performance of a QG system. In this work, we use BLEU-4~\cite{papineni2002bleu} score as the evaluation metric, which measures the overlap between the generated question and the ground truth.
\subsection{Implementation Details}
We train the parameters of the QA model and the QG model simultaneously.
We randomly initialize the parameters in both models using the fan-in and fan-out heuristic \cite{glorot2010understanding}.
The parameters of word embedding matrices are shared in the QA model and the QG model.
In order to learn question and answer specific word meanings, we use two different embedding matrices for question words and answer words. The vocabularies are the most frequent 30K words from the questions and answers in the training data.
We set the dimension of word embedding as 300, the hidden length of encoder and decoder in the QG model as 512, the hidden length of GRU in the QA model as 100, the dimension of word co-occurrence embedding as 10, the vocabulary size of the word co-occurrence embedding as 10, the hidden length of the attention layer as 30.
We initialize the learning rate as 2.0, and use AdaDelta~\cite{zeiler2012adadelta} to adaptively decrease the learning rate.
We use mini-batch training, and empirically set the batch size as 64.
We fetch 10 batches (640 instances) at a time and sort them by answer length to accelerate training. Negative samples are drawn from these 640 instances, so the sampled answer sentences come from different passages.
In this work, we use smoothed bigram language models as $p_a(a)$ and $p_q(q)$. We also tried trigram language model but did not get improved performance.
Alternatively, one could also implement neural language model and jointly learn the parameters in the training process.
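For concreteness, a bigram language model of this kind can be sketched as follows. We use add-one smoothing here purely for illustration; the smoothing method used for $p_a(a)$ and $p_q(q)$ is not specified above.

```python
from collections import Counter
import math

class BigramLM:
    """Add-one smoothed bigram language model, usable as p_q(q) or p_a(a)."""
    def __init__(self, sentences):
        self.uni, self.bi = Counter(), Counter()
        vocab = set()
        for s in sentences:
            toks = ["<s>"] + list(s) + ["</s>"]
            vocab.update(toks)
            self.uni.update(toks[:-1])                 # context counts
            self.bi.update(zip(toks[:-1], toks[1:]))   # bigram counts
        self.V = len(vocab)

    def logprob(self, sentence):
        """Log p(sentence) under add-one smoothed bigram probabilities."""
        toks = ["<s>"] + list(sentence) + ["</s>"]
        lp = 0.0
        for w1, w2 in zip(toks[:-1], toks[1:]):
            lp += math.log((self.bi[(w1, w2)] + 1) / (self.uni[w1] + self.V))
        return lp
```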
\subsection{Results and Analysis}
We first report results on the MARCO and SQUAD datasets.
As the datasets are split by ourselves, we do not have previously reported results for comparison.
We compare with the following four baseline methods.
It has been proven that word co-occurrence is a very simple yet effective feature for this task \cite{yang2015wikiqa,yin2015abcnn}, so the first two baselines are based on the word co-occurrence between a question sentence and the candidate answer sentence.
\textbf{WordCnt} and \textbf{WgtWordCnt} use unnormalized and normalized word co-occurrence.
The rankers in these two baselines are trained with FastTree, which performs better than SVMRank and linear regression in our experiments.
We also compare with \textbf{CDSSM} \cite{shen2014CDSSM}, which is a very strong neural network approach to model the semantic relatedness of a sentence pair.
We further compare with \textbf{ABCNN} \cite{yin2015abcnn}, which has been proven very powerful in various sentence matching tasks.
\textbf{Basic QA} is our QA model which does not use the duality between QA and QG.
Our ultimate model is abbreviated as \textbf{Dual QA}.
The QA performance on MARCO and SQUAD datasets are given in Table \ref{table:results-qa}.
We can find that CDSSM performs better than the word co-occurrence based method on MARCO dataset.
On the SQUAD dataset, Dual QA achieves the best performance among all these methods.
On the MARCO dataset, Dual QA performs comparably with ABCNN.
We can find that Dual QA still yields better accuracy than Basic QA, which shows the effectiveness of the joint training algorithm.
It is interesting that word co-occurrence based method (WgtWordCnt) is very strong and hard to beat on the MARCO dataset. Incorporating sophisticated features might obtain improved performance on both datasets, however, this is not the focus of this work and we leave it to future work.
\begin{table}[h]
\centering
\begin{tabular}{l|cc}
\hline
Method & MRR & MAP \\
\hline
CNN \cite{yang2015wikiqa} &0.6652& 0.6520 \\
APCNN \cite{dos2016attentive} & 0.6957& 0.6886 \\
NASM \cite{miao2016neural} &0.7069 & 0.6886 \\
ABCNN \cite{yin2015abcnn} & 0.7018 & 0.6921 \\
\hline
Basic QA & 0.6914 & 0.6747 \\
Dual QA & 0.7002 & 0.6844 \\
\hline
\end{tabular}
\caption{QA performance on the WikiQA dataset.}
\label{table:results-qa-wikiqa}
\end{table}
\begin{table*}[t]\small
\centering
\begin{tabular}{p{3cm}|p{6.5cm}|p{3cm}|p{3cm}}
\hline
\textbf{question} & \textbf{correct answer} & \textbf{question generated by \ \ \ \ \ Dual QG} & \textbf{question generated by Basic QG}\\
\hline
\textit{what 's the name of the green space north of the center of newcastle ?} & \textit{Another green space in Newcastle is the Town Moor , lying immediately north of the city centre .} & \textit{what is the name of the green building in the city ?} &\textit{ what is the name of the city of new haven ? }\\
\hline
\textit{for what purpose do organisms make peroxide and superoxide ?} & \textit{Parts of the immune system of higher organisms create peroxide , superoxide , and singlet oxygen to destroy invading microbes .} & \textit{what is the purpose of the immune system ?} & \textit{what is the main function of the immune system ?} \\
\hline
\textit{how much money was spent on other festivities in the bay area to help celebrate the coming super bowl 50 ?} & \textit{In addition , there are \$ 2 million worth of other ancillary events , including a week - long event at the Santa Clara Convention Center , a beer , wine and food festival at Bellomy Field at Santa Clara University , and a pep rally .}& \textit{how much of the beer is in the santa monica convention center ?} & \textit{what is the name of the beer in the santa monica center ?} \\
\hline
\end{tabular}
\caption{Sampled examples from the SQUAD dataset.}
\label{table:results-example}
\end{table*}
Results on the WikiQA dataset is given in Table \ref{table:results-qa-wikiqa}.
On this dataset, previous studies typically report results based on their deep features plus the number of words that occur both in the question and in the answer \cite{yang2015wikiqa,yin2015abcnn}. We follow the same experimental protocol. We can find that our basic QA model is simple yet effective, and the Dual QA model performs comparably to strong baseline methods.
To give a quantitative evaluation of our training framework on the QG model, we report BLEU-4 scores on MARCO and SQUAD datasets.
The results of our QG model with or without using joint training are given in Table \ref{table:results-qg}.
We can find that, although the overall BLEU-4 scores are relatively low, our training algorithm improves the performance of the QG model.
\begin{table}[h]
\centering
\begin{tabular}{lccc}
\hline
{Method} & MARCO & SQUAD &WikiQA \\
\hline
Basic QG & 8.87 & 4.34 & 2.91\\
Dual QG & 9.31 & 5.03 & 3.15\\
\hline
\end{tabular}
\caption{QG performance (BLEU-4 scores) on MARCO, SQUAD and WikiQA datasets.}
\label{table:results-qg}
\end{table}
We would like to investigate how the joint training process improves the QA and QG models. To this end, we analyze the results of development set on the SQUAD dataset.
We randomly sample several cases that the Basic QA model gets the wrong answers while the Dual QA model obtains the correct results.
Examples are given in Table \ref{table:results-example}.
From these examples, we can find that the questions generated by Dual QG tend to have more word overlap with the correct question, although the focus of the question is sometimes incorrect.
For example, compared with the Basic QG model, the Dual QG model generates more informative words, such as ``green'' in the first example, ``purpose'' in the second example, and ``how much'' in the third example.
We believe this helps QA because the QA model is trained to assign a higher score to the question which looks similar with the generated question.
It also helps QG because the QA model is trained to give a higher score to the real question-answer pair, so generating question words related to the answer gives the generated question a higher QA score.
Despite the proposed training framework obtains some improvements on QA and QG, we believe the work could be further improved from several directions.
We find that our QG model does not always find the focus of the reference question.
This is not surprising because the questions from these two reading comprehension datasets only focus on some spans of a sentence, rather than the entire sentence. Therefore, the source side (answer sentence) carries more information than the target side (question sentence).
Moreover, we do not use the answer position information in our QG model.
Accordingly, the model may pay attention to a point different from the annotator's intention, and generate totally different questions.
We are aware that incorporating the position of the answer span could improve performance \cite{zhou2017neural}; however, the focus of this work is a sentence-level QA task rather than reading comprehension.
Therefore, although MARCO and SQUAD are large-scale, they are not ideal datasets for investigating the duality of our QA and QG tasks.
Pushing forward this area also requires large scale sentence level QA datasets.
\subsection{Discussion}
We would like to discuss our understanding about the duality of QA and QG, and also present our observations based on the experiments.
In this work, ``duality'' means that the QA task and the QG task are equally important. This characteristic makes our work different from Generative Domain-Adaptive Nets \cite{yang2017semi} and Generative Adversarial Nets (GAN) \cite{goodfellow2014generative}, both of which have a main task and regard another task as the auxiliary one.
There are different ways to leverage the ``duality'' of QA and QG to improve both tasks.
We categorize them into two groups. The first group is about the training process and the second group is about the inference process.
From this perspective, dual learning \cite{xia2016dual} is a solution that leverages the duality in the training process.
In particular, dual learning first pretrains the models for two tasks separately, and then iteratively fine-tunes the models.
Our work also belongs to the first group.
Our approach uses the duality as a regularization item to guide the learning of QA and QG models simultaneously from scratch.
After the QA and QG models are trained, we could also use the duality to improve the inference process, which falls into the second group.
The process could be conducted on separately trained models or on models jointly trained with our approach.
This is reasonable because the QA model could directly add one feature measuring the similarity between $q$ and $q'$, where $q'$ is the question generated by the QG model.
The first example in Table \ref{table:results-example} also motivates this direction.
Similarly, the QA model could score each pair $\langle q', a \rangle$, and this score could be assigned to each generated question $q'$ to rerank the QG outputs.
In this work we do not apply the duality in the inference process. We leave it as a future plan.
This work could be improved by refining every component involved in our framework.
For example, we use a simple yet effective QA model, which could be improved by using more complex neural network architectures \cite{hu2014convolutional,yin2015abcnn} or more external resources.
We use a smoothed language model for both question and answer sentences, which could be replaced by neural language models whose parameters are jointly learned together with the parameters of the QA and QG models.
The QG model could be improved as well, for example, by developing more complex neural network architectures to take into account more information about the answer sentence in the generation process.
In addition, it is also very important to investigate an automatic evaluation metric to effectively measure the performance of a QG system.
BLEU score only measures the literal similarity between the generated question and the ground truth.
However, it does not measure whether the question really looks like a question or not.
A desirable evaluation system should also have the ability to judge whether the generated question could be answered by the input sentence, even if the generated question uses totally different words to express the meaning.
\section{Related Work}
Our work relates to existing studies on question answering (QA) and question generation (QG).
There are different types of QA tasks including text-level QA \cite{yu2014deep}, knowledge based QA \cite{berant2013semantic}, community based QA \cite{qiu2015convolutional} and the reading comprehension \cite{rajpurkar-EtAl:2016:EMNLP2016,nguyen2016ms}.
Our work belongs to text based QA where the answer is a sentence.
In recent years, neural network approaches \cite{hu2014convolutional,yu2014deep,yin2015abcnn} show promising ability in modeling the semantic relation between sentences and achieve strong performances on QA tasks.
Question generation has also drawn a lot of attention in recent years.
QG is useful in real applications, as creating large-scale QA datasets is always time-consuming.
In the literature, \cite{yao2010question} use Minimal Recursion Semantics (MRS) to represent the meaning of a sentence, and then realize the MRS structure into a natural language question.
\cite{heilman2011automatic} present an overgenerate-and-rank framework consisting of three stages. They first transform a sentence into a simpler declarative statement, and then transform the statement into candidate questions by executing well-defined syntactic transformations. Finally, a ranker is used to select the high-quality questions.
\cite{chali2015towards} focus on generating questions from a topic.
They first get a list of texts related to the topic, and then generate questions by exploiting the named entity information and the predicate argument structures of the texts.
\cite{labutov2015deep} propose an ontology-crowd-relevance approach to generate questions from novel text.
They encode the original text in a low-dimensional ontology, and then align the question templates obtained via crowd-sourcing to that space. A final ranker is used to select the top relevant templates.
There also exists some studies on generating questions from knowledge base \cite{song2016domain,serban-EtAl:2016:P16-1}.
For example, \cite{serban-EtAl:2016:P16-1} develop a neural network approach which takes a knowledge fact (including a subject, an object, and a predicate) as input, and generates the question with a recurrent neural network.
Recent studies also investigate question generation for the reading comprehension task \cite{du2017question,zhou2017neural}.
The approaches are typically based on the encoder-decoder framework, which could be conventionally learned in an end-to-end way.
As the answer is a text span from the sentence/passage, it is helpful to incorporate the position of the answer span \cite{zhou2017neural}.
In addition, the computer vision community also pays attention to generating natural language questions about an image \cite{mostafazadeh2016generating}.
\section{Conclusion}
We focus on jointly training the question answering (QA) model and the question generation (QG) model in this paper.
We exploit the ``duality'' of QA and QG tasks, and introduce a training framework to leverage the probabilistic correlation between the two tasks.
In our approach, the ``duality'' is used as a regularization term to influence the learning of QA and QG models.
We implement simple yet effective QA and QG models, both of which are neural network based approaches.
Experimental results show that the proposed training framework improves both QA and QG on three datasets.
\subsubsection{Acknowledgments.}
We sincerely thank Wenpeng Yin for running the powerful ABCNN model on our setup.
\section{Introduction}
While fashion is a multi-billion dollar global industry it comes with severe \textbf{environmental and social costs} worldwide. The fashion industry is considered to be the world's second largest polluter, after oil and gas. Fashion accounts for 20 to 35 percent of microplastic flows into the ocean and outweighs the combined carbon footprint of international flights and maritime shipping \cite{bof-article-2020-the-year-ahead}. Every stage in a garment's life threatens the planet and its resources. For example, it can take more than 20,000 liters of water to produce 1kg of cotton, equivalent to a single t-shirt and pair of jeans. Up to 8,000 different chemicals are used to turn raw materials into clothes, including a range of dyeing and finishing processes. This also has social costs, with factory workers being underpaid and exposed to unsafe workplace conditions, particularly when handling materials like cotton and leather that require extensive processing \cite{mckinsey-article-2016-style}. Since fashion is heavily trend-driven and most retailers operate by season (for example, spring/summer, autumn/winter, holiday etc.), at the end of each season any unsold inventory is generally liquidated. While smaller retailers generally move the merchandise to second-hand shops, large brands resort to recycling or destroying the merchandise.
In recent years this has led to addressing \textbf{sustainability} challenges as a core agenda for most fashion companies. Increasing pressure from investors, governments and consumer groups is leading companies to adopt sustainable practices to reduce their carbon footprint. Moreover, several companies may have sustainability targets (due to government regulations and/or self-imposed) to honor, which may lead to significant changes in the entire fashion supply/value chain. Sustainable practices can be adopted at various stages of the fashion value chain and several efforts are underway, including more sustainable farming practices for growing fabric (for example, cotton), material innovation for alternatives (to cotton fabric, leather, dyes etc.), end-to-end transparency/visibility in the entire supply chain, sourcing from sustainable suppliers, better recycling technologies and a sustainability index for measuring the full life-cycle impact of an apparel. In this paper, our main focus is to address sustainability challenges in the pre-season assortment planning activity in the fashion supply chain.
\textbf{Assortment planning} is a common pre-season planning done by buyers and merchandisers. Typically a fashion retailer has a large set of products under consideration to be potentially launched for the upcoming season. These could be a combination of \textit{existing products} from the earlier seasons along with the \textit{new products} that are designed for the next season. The designers interpret the fashion trends to design and develop a certain number of products for each category as specified in the option plan. The final products (both existing and new) are presented to the buyer and merchandiser who then curate/select a subset of them as the assortment for the next season. This assortment planning is typically based on her estimation of how well the product will sell (based on historical sales data and her interpretation of trends). During the initial planning the team works only with the initial designs or sometimes a sample procured from a vendor. Once the assortment has been selected the buyer then works with the sourcing team and the vendors to procure the products. The choice of the final assortment is a crucial decision since it has a big impact on the sell through rate, unsold inventory and eventually the revenue for the next season.
In practice, the merchandiser has to actually select a different assortment for each region or store, referred to as, \textbf{hyper-local assortment planning}. While a retailer has a large set of products to offer, due to budget and space constraints only a smaller number of products can be stocked at each store. In this context, one of the most crucial planning tasks for most retail merchandisers is to decide the right assortment for a store, that is, what set of products to stock at each store.
The current practice for assortment planning is heavily spreadsheet driven and relies on the expertise and intuition of the merchandisers, coupled with trends identified from the past sales transactions data. While it is still manageable for a merchandiser to plan an assortment for a single store, it is not scalable when a merchandiser has to do planning for hundreds of stores. Typically stores are grouped into store clusters and an assortment is planned for each cluster rather than each store. A sub-optimal assortment results in excess leftover inventory for the unpopular items, increasing the inventory costs, and stock outs of popular items, resulting in lost demand and unsatisfied customers. With better assortment planning algorithms, retailers are now open to more algorithmic, store-level automated assortments.
The task of hyper-local assortment planning is to determine the optimal subset of products (from a larger set of products) to be stocked in each store so that the revenue/profit is maximized under various constraints and at the same time the assortment is localized to the preferences of the customers shopping in that store. The notion of a store can be generalized to a location and can potentially include store, region, country, channel, distribution center etc.
Existing approaches to assortment planning only maximize the expected revenue under certain store and budget constraints. Along with the revenue the choice of the final assortment has also an environmental cost associated with it. The final environmental impact of an assortment is eventually the sum of the environmental impact of each of the products in the assortment. In this paper, we address the notion of \textbf{sustainable assortments} and optimize the assortments under additional sustainability constraints.
To achieve this we need a metric to measure the environmental impact of an apparel. One of the main deciding factors is the fabric or the kind of material used in the apparel. For example, cotton, accounting for about 30 percent of all textile fiber consumption, is usually grown using a lot of water, pesticides, and fertilizer, and making 1 kilogram of fabric generates an average of 23 kilograms of greenhouse gases. In this work we use the \textbf{Higg Material Sustainability Index} (MSI) score, which is the apparel industry's most trusted tool to accurately measure the environmental sustainability impacts of materials \cite{higg-msi}. The Higg MSI score allows us to quantify the effect of using different materials; for example, while cotton fabric has a score of 98, viscose/rayon is a more sustainable fabric with a score of 62. While we demonstrate our algorithms with the Higg MSI score, any other suitable sustainability metric can be incorporated in our framework.
While designers and merchandisers strive to make sustainable fabric choices during the design phase, there is always a trade-off between sustainable choices and achieving high sell-through rates. Moreover, the choice is typically made at the individual product level, and it is hard for the designer or buyer to assess the environmental impact of the assortment as a whole. We balance the trade-off between revenue and environmental impact through a multi-objective optimization approach that yields a Pareto front of optimal assortments for merchandisers to choose from.
The rest of the paper is organized as follows. In \S~\ref{ref:assortment-planning} we define the problem of hyper-local assortment planning. In \S~\ref{ref:sustainability-scores}, we present the sustainability score calculations. In \S~\ref{ref:sustainable-assortment-planning}, we outline our approach to do a sustainable assortment planning. In \S~\ref{ref:experiments}, we present experimental results of our approach.
\section{Hyper-local assortment planning}
\label{ref:assortment-planning}
We define a (hyper-local) assortment for a store as a subset of $k$ products carried in the store, chosen from the total of $n$ (potential) products. The task of \textbf{assortment planning} is to determine the optimal subset of $k$ products to be stocked in each store so that the \textit{assortment is localized to the preferences of the customers shopping in that store}. The optimization maximizes sales or gross margin subject to financial (limited budget for each store), store-space (limited shelf space for displaying products) and other constraints.
Broadly, there are three aspects to assortment planning: (1) the choice of the \textbf{demand model}, (2) \textbf{estimating the parameters} of the chosen demand model, and (3) using the demand estimates in an \textbf{assortment optimization} setup.
\subsection{Demand Models}
The starting point for any assortment planning is an accurate store-level demand forecast for each product the retailer is planning to introduce in the season. The demand for a product depends on the assortment present in the store when the purchase was made. Several models have been proposed in the literature to model this demand. The forecast demand is then used in a suitable stochastic optimization algorithm to do the assortment planning and refinement.
Given a set of $n$ substitutable products $\mathcal{N} = \{1,2,...,n\}$ and $m$ stores $\mathcal{S} = \{1,2,...,m\}$, let $d_{js}(\mathbf{q}_s)$ be the \textbf{demand} for product $j \in \mathcal{N}$ at store $s \in \mathcal{S}$ when the assortment offered at the store was $\mathbf{q}_s \subset \mathcal{N}$. An alternate construct is to specify it as a customer \textbf{choice} model $p_{js}(\mathbf{q}_s)$ which is the probability that a random customer chooses/prefers the product $j$ at store $s$ over other products in the assortment offered at the store.
\textbf{Independent demand model} The simplest approach is to assume product demand to be independent of the offer set or the assortment, that is, the demand for a product does not depend on other available products. This model can therefore be specified by a discrete probability distribution over each of the products.
\begin{equation}
p_{js}(\mathbf{q}_s) = \mu_{js} \quad \text{if } j \in \mathbf{q}_s, \qquad \text{where } \sum_{j \in \mathcal{N}} \mu_{js} = 1
\end{equation}
This is the simplest demand model and has traditionally been used in retail operations; it assumes no substitution behavior. In practice, the demand for a product is heavily influenced by the assortment on offer, mainly due to \textbf{product substitution} (cannibalization) and \textbf{product complementarity} (halo effect). The literature here mainly focuses on various parametric and non-parametric discrete choice models to capture product substitution, including multinomial logit and variants \cite{kok-2008}, the exponomial discrete choice model \cite{alptekinoglu-2016}, deep neural choice models \cite{otsuka-2016, mottini-2017} and non-parametric rank-list models \cite{farias-2017}.
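As an illustration of the substitution effect that the independent model misses, the following sketch computes multinomial logit (MNL) choice probabilities; the utility values are hypothetical, and MNL is just one of the choice-model families cited above.

```python
import math

def mnl_choice_probs(utilities, assortment):
    """Multinomial logit choice probabilities over an offered assortment.

    utilities: dict product -> deterministic utility v_j (hypothetical values).
    assortment: list of offered products; the no-purchase option has utility 0.
    Returns dict product -> P(customer chooses j | assortment).
    """
    denom = 1.0 + sum(math.exp(utilities[j]) for j in assortment)  # 1.0 = no-purchase
    return {j: math.exp(utilities[j]) / denom for j in assortment}

# Dropping a product shifts its demand to the remaining products (substitution),
# unlike the independent demand model where probabilities are fixed.
v = {"tee": 1.0, "shirt": 0.5, "top": 0.2}
p_full = mnl_choice_probs(v, ["tee", "shirt", "top"])
p_drop = mnl_choice_probs(v, ["shirt", "top"])
```

Under MNL, removing the most attractive product strictly increases the choice probability of every remaining product, which is exactly the cannibalization behavior the independent model cannot express.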
Since the main focus of this paper is to address the notion of sustainability in assortments, for ease of exposition we mainly use this simple independent demand model and ignore the effects of substitution. In general, any demand model can be plugged into the optimization framework.
\subsection{Estimating demand models}
Once an appropriate demand/choice model is chosen, its parameters have to be estimated from historical sales and inventory data. Each demand model comes with its own challenges and computational complexities in parameter estimation; common approaches include least squares, standard gradient-based optimization, column generation, and EM algorithms that maximize the likelihood. Berbeglia et al. \cite{berbeglia-2018} present a good overview and a comparative empirical study of different choice-based demand models and their parameter estimation algorithms.
For the independent demand model, we mainly rely on the historical store-level sales data to get an estimate of $d_{js}$ and multiply it by a suitable scalar to capture the trend increase or decrease for that year.
\begin{itemize}
\item For existing products that were historically carried at a store, this is essentially the number of units of the products sold in the last season.
\item However, in general, not all products are historically carried at all stores. For existing products that were not carried at the store, we use \textbf{matrix factorization} approaches to estimate the demand by modeling the problem as a product $\times$ store matrix and filling in the missing entries via matrix completion. This is described in more detail in Section \nameref{MF}.
\item For completely new products without any previous sales history, we use their visual and textual attributes to get a multi-modal embedding, and based on that we forecast the store-wise potential sales \cite{ekambaram-kdd-2020}.
\end{itemize}
\subsection{Matrix Factorization} \label{MF}
Matrix factorization (MF), popularized in the collaborative filtering and recommender systems literature \cite{koren-2009}, is commonly used to impute missing data. Let $\mathbf{X}$ be a $\texttt{product} \times \texttt{store}$ matrix of dimension $n \times m$ where each element $X_{ij}$ represents the metric (for example, total sales) associated with product $i$ at store $j$. This matrix is sparse, with elements missing for products not carried at the store. MF decomposes this sparse matrix into two lower-dimensional matrices $\mathbf{U} \in \mathbb{R}^{n \times D}$ and $\mathbf{V} \in \mathbb{R}^{m \times D}$, such that the rows of $\mathbf{U}$ and $\mathbf{V}$ encapsulate the product and store embeddings of dimension $D$. These $D$-dimensional embeddings (latent vectors) $\mathbf{U}_{i}$ and $\mathbf{V}_{j}$ are expected to capture the underlying hidden structure that influences the sales for product $i$ and store $j$, respectively. A common approach to MF is the Alternating Least Squares (ALS) algorithm; other regularized extensions have also been characterized at length in the literature. In this paper, we adopt the Alternating Least Squares approach and minimize the following loss function.
\begin{align}\begin{split}
\mathbf{L}(\mathbf{X},\mathbf{U},\mathbf{V}) = \sum_{i,j}c_{ij}(X_{ij} - \mathbf{U}_{i}\mathbf{V}_j^{T} - \beta_{i}-\gamma_{j})^{2} + \lambda\Big(\sum_{i}(\|\mathbf{U}_{i}\|^{2}+\beta_{i}^{2}) + \\ \sum_{j}(\|\mathbf{V}_{j}\|^{2}+\gamma_{j}^{2})\Big)
\end{split}\end{align}
where $\boldsymbol\beta$ and $\boldsymbol\gamma$ are product and store bias vectors of dimensions $n$ and $m$, respectively, and $c_{ij}$ is the confidence weight given to each observed entry. Once the loss function is minimized, we estimate the unseen entries $X_{ij}^{*}$ as follows.
\begin{equation}
X_{ij}^{*} = \mathbf{U}_{i}\mathbf{V}_j^{T} + \beta_{i} + \gamma_{j}
\end{equation}
Thus, the matrix $\mathbf{X}$, which was initially sparse, is now completely filled and is fed into our assortment planning module.
\subsection{Assortment optimization}
The forecast demand will then be used in a suitable stochastic optimization algorithm to do the assortment planning. The task of assortment optimization is to choose an optimal subset of products to maximize the expected revenue \textbf{subject to various constraints}.
\begin{equation}
\mathbf{q}_s^{*} = \argmax_{\mathbf{q}_s \subset \mathcal{N}} \sum_{j \in \mathbf{q}_s} \pi_{js} d_{js}(\mathbf{q}_s)
\end{equation}
where $\pi_{js}$ is the expected revenue when the product $j$ is sold at store $s$. Some of the commonly used constraints include,
\textbf{Cardinality constraints} The number of products to be included in an assortment is specified via a coarse range plan (sometimes also called an option plan or buy plan) for a store. The range plan specifies either the count of products or the total budget the retailer plans for a particular season, at the granularity of category, brand, attribute, and price point.
\textbf{Diversity constraints} For some domains it is important to ensure that the selected assortment is \textit{diverse}, to offer greater variety to the consumer. Without diversity constraints, the assortment tends to prefer products that are similar to each other. The general framework is to define a \textbf{product similarity function} that measures the similarity between two products and use it as an additional constraint in the optimization.
\textbf{Complementarity constraints} The other important aspect is that a good assortment contains products that are frequently bought together. Product complementarity (sometimes referred to as the halo effect) refers to the behavior where a customer buys another product (say, \textit{blue jeans}) that typically goes well with a chosen product (say, a \textit{white top}).
In this paper, we mainly focus on cardinality constraints. Our main contribution is to introduce environmental impact as additional constraints to the assortment optimization problem. As a result, this helps in making optimal assortment decisions in supply chains while accounting for both the economic and the environmental impact.
\section{Sustainability scores}
\label{ref:sustainability-scores}
We need a metric to measure the environmental impact of an apparel product. One of the main deciding factors is the fabric or the kind of material used.
We calculate the sustainability score for a product using the \textbf{Higg Material Sustainability Index (MSI)} developed by the Sustainable Apparel Coalition \cite{higg-msi}. The Higg MSI quantifies an impact score for each fabric by taking into account the various processes involved in manufacturing fabrics, such as raw material procurement, yarn formation, textile formation, dyeing, etc. It measures the impact on climate change, eutrophication, resource depletion, water scarcity and chemistry; the score is calculated for each impact area, normalized, and then combined via a weighted average. The Higg MSI score allows us to quantify the effect of using different materials; for example, while cotton has a score of 98, viscose/rayon is a more sustainable fabric with a score of 62.
The Higg MSI value corresponds to the consolidated environmental impact of 1 kg of a given material, and products made of these materials typically have different weights. Thus, we adjust the Higg MSI of a product based on its weight.
For blended fabrics, we take a weighted average of Higg MSI of individual fabrics in the same proportions as they are in the blend.
\begin{equation}
h_j = \Big(\sum_{f \in F} H_f \, p_f\Big) \times w_j
\end{equation}
where $F$ is the set of fabrics in the blend, $p_f$ is the percentage of fabric $f$ in the blend, $H_f$ is the Higg MSI for fabric $f$, $w_j$ is the weight of the product in $\texttt{kg}$, and $h_j$ is the sustainability score for product $j$. For an assortment $\mathbf{q}$ of $N$ products, the sustainability score can be calculated as
\begin{equation}
h_{\mathbf{q}} = \frac{1}{N} \sum_{j \in \mathbf{q}} h_j
\end{equation}
It should be noted that the Higg MSI is a cradle-to-gate index and does not consider downstream processes such as the impact of laundry, wear and tear, etc.
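The weight-adjusted blend score above can be sketched as follows. The cotton and viscose values are the ones quoted earlier; the polyester value is a placeholder (the text only states that polyester had the lowest Higg MSI in our data), so treat all numbers as illustrative.

```python
# Per-kg Higg MSI values: cotton and viscose as quoted in the text;
# the polyester value is a hypothetical placeholder.
HIGG_MSI = {"cotton": 98.0, "viscose": 62.0, "polyester": 44.0}

def product_score(blend, weight_kg):
    """Weight-adjusted sustainability score h_j of a (possibly blended) product.

    blend: dict fabric -> fraction (fractions sum to 1),
           e.g. {"cotton": 0.6, "polyester": 0.4}.
    """
    per_kg = sum(HIGG_MSI[f] * p for f, p in blend.items())  # blend-weighted MSI per kg
    return per_kg * weight_kg                                # scale by product weight

def assortment_score(products):
    """Mean score h_q over the products (blend, weight_kg) chosen for an assortment."""
    scores = [product_score(blend, w) for blend, w in products]
    return sum(scores) / len(scores)
```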
\section{Sustainable assortment planning}
\label{ref:sustainable-assortment-planning}
Once we have the store-wise, product-wise demand forecasts and the Higg MSI score for each product, we can formulate assortment planning as a multi-objective optimization problem: we want to select products with better sales forecasts that, at the same time, result in a sustainable assortment. Moreover, instead of just one solution, we would like to give the user a set of solutions near the Pareto-optimal front so that the user can visualize and select whichever assortment satisfies her criteria. We solve the following multi-objective problem for each store $s$.
\begin{equation}
\mathbf{x}_s^{*} = \argmax_{\mathbf{x}_s \in \{0,1\}^{n},\; \|\mathbf{x}_s\|_{1} \le k} \frac{(1- \lambda)}{k} \underbrace{\sum_{j \in \mathcal{N}} \pi_{js} d_{js} x_{js}}_{\text{revenue}} - \frac{\lambda}{k}\underbrace{\sum_{j \in \mathcal{N}} h_j x_{js}}_{\text{sustainability}}
\end{equation}
where $x_{js}$ is a binary variable denoting the presence or absence of product $j$ in the assortment at store $s$, $d_{js}$ is the demand for product $j$ at store $s$, $h_j$ is the weighted Higg MSI score for that product, and $\lambda$ is a parameter through which the user can specify the relative importance of each objective. In the results section we show the optimal Pareto frontier obtained by varying $\lambda$.
\subsection{Multi-objective optimization}
As described in the earlier section, the objective of the assortment planning problem is to determine optimal assortments that have the least Higg MSI score (least environmental impact) with minimal impact on sales. Optimizing the two objectives individually (maximizing sales, or minimizing the Higg MSI score) will likely yield fundamentally different assortments that achieve superior sales at a high Higg MSI (high environmental impact), or vice versa. To address this trade-off, we formulate the assortment planning problem as a multi-objective optimization problem that optimizes sales and the Higg MSI score at the same time.
Multi-objective optimization problems have been formulated and solved using classical methods as well as meta-heuristics in literature \cite{deb-2001}. Of the available methods, the weighted sum method is employed to formulate and solve the assortment planning problem, due to its simplicity in configuration and use. In this method, relative importance of different objectives, as represented by multiplicative coefficients of the objective functions, is continually changed; and for each realization of these coefficients, a single-objective optimization problem is solved yielding an optimal assortment. Solving the single-optimization problem for multiple coefficient realizations yields a family of Pareto-optimal assortments that are non-dominated with respect to each other in the objective function space of sales and the Higg MSI score. The merchandiser can then choose from these optimal assortment solutions, depending on the preferred balance between sales and environmental impact.
In the proposed formulation, for a given $\lambda$, we compute the single objective function score for each product, which is a weighted combination of the sustainability and quality (revenue) scores. We then choose the top $k$ products as the assortment.
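The scalarized selection for a given $\lambda$, and the sweep over $\lambda$ values that traces out the Pareto front, can be sketched as below. The $1/k$ factors in the objective are constant per store and do not change the top-$k$ selection, so they are dropped here.

```python
def assortment_for_lambda(revenue, higg, k, lam):
    """Top-k products by the scalarized score (1-lam)*revenue_j - lam*higg_j.

    revenue[j]: forecast revenue pi_js * d_js; higg[j]: weighted Higg MSI h_j.
    The common 1/k scaling is omitted since it does not affect the ranking.
    """
    scored = [((1 - lam) * revenue[j] - lam * higg[j], j) for j in range(len(revenue))]
    scored.sort(reverse=True)                 # best scalarized score first
    return sorted(j for _, j in scored[:k])   # chosen product indices

def pareto_sweep(revenue, higg, k, lambdas):
    """One optimal assortment per lambda; together they trace the Pareto front."""
    return {lam: assortment_for_lambda(revenue, higg, k, lam) for lam in lambdas}
```

Because the objective is separable across products once $\lambda$ is fixed, each single-objective subproblem reduces to sorting, which is what makes the weighted-sum sweep cheap in practice.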
\section{Experimental Validation}
\label{ref:experiments}
For our experimental validation, the main goal was to visualize the effect of including sustainability in the assortment. We used a dataset obtained from a leading fashion retailer, consisting of 3484 products and their sales over the Spring-Summer 2018 season. Product weight and fabric composition were used to calculate the Higg MSI score for each product. We analyzed the \textit{upper} category, which mainly consists of \textit{t-shirts, shirts, tops} (1600 products in total), and calculated the Higg MSI score and the quality score (sales forecast) using the methods outlined in the previous sections.
\subsection{Sustainability and Quality Distribution}
Before planning the assortments, we visualized the distribution of sustainability and quality scores across products (Figures \ref{fig:higg-dist}, \ref{fig:quality-dist}). The quality scores are evenly distributed; however, there are three peaks in the Higg MSI scores. On investigating further, we found that these correspond to products whose fabric composition is 100\% cotton, 100\% viscose or 100\% polyester, where cotton has the highest Higg MSI (least sustainable) and polyester the lowest (most sustainable).
\begin{figure}
\includegraphics[width=0.4\textwidth]{images/histogram-sust.png}
\caption{Histogram distribution of product Higg MSI scores.}
\label{fig:higg-dist}
\end{figure}
\begin{figure}
\includegraphics[width=0.4\textwidth]{images/histogram-quality.png}
\caption{Histogram distribution of product quality scores.}
\label{fig:quality-dist}
\end{figure}
\subsection{Pareto Front for Assortment Optimization}
We ran our optimization algorithm for multiple assortment sizes and plotted the Pareto-optimal front by varying $\lambda$, the relative importance weight given to sustainability versus revenue (quality), from 0 to 1 (Figure \ref{fig:pareto}).
\begin{figure}[]
\centering
\subfigure[Assortment size 1: All products are plotted.]{\includegraphics[scale=0.3]{images/size-1-allproducts.png}}
\subfigure[Assortment size 10: All points on Pareto Front are plotted, besides 2000 randomly chosen assortments.]{\includegraphics[scale=0.3]{images/sum-size-10.png}}
\caption{Pareto Optimal fronts for varying assortment sizes.}
\label{fig:pareto}
\end{figure}
In the plots, the blue curve corresponds to the optimal Pareto frontier. We can see that as we increase the assortment size, the Pareto frontier and the assortment cluster shrink \textit{relative to the frontier}: as we aggregate the scores of more products, the consolidated scores move closer to their mean. Also, the 3 horizontal clusters in the assortment-size-1 plot are consistent with our observation that the Higg MSI score distribution contains 3 peaks, corresponding to 100\% cotton, 100\% viscose and 100\% polyester products, respectively.
\subsection{Fabric composition variation}
We further investigated the assortment compositions for 3 points on the Pareto-optimal frontier for assortment size 100, corresponding to $\lambda = 0.0, 0.5, 1.0$; the resulting fabric distributions are plotted in Figure \ref{fig:fabric-comp}.
\begin{figure}[!t]
\centering
\subfigure[Pareto optimal for $\lambda = 0.0$]{\includegraphics[scale=0.2]{images/lambda0_size100.png}}
\subfigure[Pareto optimal for $\lambda = 0.5$]{\includegraphics[scale=0.2]{images/lambdamid_size100.png}}
\subfigure[Pareto optimal for $\lambda = 1.0$]{\includegraphics[scale=0.2]{images/lambdamax_size100.png}}
\caption{Fabric Composition of extreme and middle points on the Pareto Optimal Frontier for assortment size 100.}
\label{fig:fabric-comp}
\end{figure}
We can see that for $\lambda=1.0$ (maximum importance to sustainability), the assortment consists mostly of polyester products, since polyester has the lowest Higg MSI and is thus the most sustainable fabric. For $\lambda=0.0$ and $\lambda=0.5$, viscose is dominant, since its Higg MSI is lower than cotton's and it also achieves the best quality scores.
\section{Conclusions and Future Work}
In this work, we have proposed a method of assortment planning that jointly optimizes the environmental impact of an assortment and its revenue. We formulated the problem as a multi-objective optimization problem whose optimal solutions lie on the Pareto-optimal front. The proposed approach allows retailers to meet their sustainability targets with minimal impact on revenue. In future work, we would like to consider cannibalization and halo effects in demand modeling, as well as diversity and complementarity of the products in the optimization formulation. Another extension would be to use a cradle-to-grave sustainability metric for assortment planning.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Dynamic allocation of communication resources such as bandwidth or transmission power is a central
issue in multiple access channels in view of the time-varying nature of the channel and
interference effects. Most of the existing literature focuses on specific communication schemes such
as TDMA (time-division multiple access) \cite{TDMA} and CDMA (code-division multiple access)
\cite{CDMA1,CDMA3} systems. An exception is the work by Tse \emph{et al.} \cite{Tse}, who consider
the notion of \emph{throughput capacity} for the fading channel with Channel State Information
(CSI). That is the notion of Shannon capacity applied to the fading channel, where the codeword
length can be arbitrarily long to average over the fading of the channel. The points on the
boundary of the capacity region are attained by dynamically allocating the resources with the goal
of maximizing a \emph{linear} utility function.
In this paper, we consider the problem of rate and power allocation in a multiple access channel
with perfect CSI. Contrary to the linear case in \cite{Tse}, we consider maximizing a general
utility function of transmission rates over the throughput capacity region. Such a general concave
utility function allows us to capture different performance metrics such as fairness or delay (c.f.
Shenker \cite{She95}, Srikant \cite{Srikant}). Our contributions can be summarized as follows.
We first consider the case where channel statistics are known and power can be controlled at the
transmitters. Owing to strict convexity of the capacity region, we show that the resource
allocation problem for a general concave utility is equivalent to another problem with a linear
utility. Hence, the optimal resource allocation policies are obtained by applying the results in
\cite{Tse} for the linear utility. Given a general utility function, the conditional gradient
method is used to obtain the corresponding linear utility. Second, we consider the case where the
transmitters do not have the power control feature and channel statistics are not known. In this
case, a greedy policy, which maximizes the utility function for any given channel state, is
suboptimal. However, we can bound the performance difference between the optimal and the greedy
policies. We show that this bound is tight in the sense that it goes to zero either as the utility
function tends to a linear function of the rates or as the channel variations vanish.
Other than the papers cited above, our work is also related to the work of Vishwanath \emph{et al.}
\cite{Vishwanath} which builds on \cite{Tse} and takes a similar approach to the resource
allocation problem for linear utility functions. Other works address different criteria for
resource allocation including minimizing the weighted sum of transmission powers \cite{power_min},
and considering Quality of Service (QoS) constraints \cite{QoS}. In contrast to this literature, we
consider the utility maximization framework for general concave utility functions.
The remainder of this paper is organized as follows: In Section II, we introduce the model and
describe the capacity region of a fading multiple-access channel. In Section III, we address the
resource allocation problem with power control and known channel statistics. In Section IV, we
consider the same problem without power control and channel statistics. Finally, we give our
concluding remarks in Section V.
Regarding the notation, we denote by $x_i$ the $i$-th component of a vector $\boldsymbol x$. A vector $\boldsymbol
x$ is positive when $x_i> 0$ for all components $i$ of $\boldsymbol x$. We denote the nonnegative orthant
by $\mathbb{R}^n_+$, i.e., $\mathbb{R}^n_+ = \{\boldsymbol x\in \mathbb{R}^n\mid \boldsymbol x\ge 0\}$. We write
$\boldsymbol x'$ to denote the transpose of a vector $\boldsymbol x$.
\section{System Model}
We consider $M$ transmitters sharing the same media to communicate to a single receiver. We model
the channel as a Gaussian multiple access channel with flat fading effects
\begin{equation}\label{fading_model}
Y(n) = \sum_{i=1}^M \sqrt{H_i(n)} X_i(n) + Z(n),
\end{equation}
where $X_i(n)$ and $H_i(n)$ are the transmitted waveform and the fading process of the
\textit{i}-th transmitter, respectively, and $Z(n)$ is white Gaussian noise with variance $N_0$. We
assume that the fading processes of all transmitters are jointly stationary and ergodic, and the
stationary distribution of the fading process has continuous density. We also assume that all the
transmitters and the receiver have instant access to channel state information. In practice, the
receiver measures the channels and feeds back the channel information to the transmitters. The
implicit assumption in this model is that the channel variations are much slower than the data
rate, so that the channel can be measured accurately at the receiver and the amount of feedback
bits is negligible compared to that of transmitting information.
First, consider the non-fading case where the channel gains are fixed. The capacity region of the
Gaussian multiple-access channel with no power control is described as follows \cite{cover}
\begin{eqnarray}\label{Cg}
C_g(\boldsymbol P, \boldsymbol h) &=& \bigg\{ \boldsymbol R \in \mathbb{R}^M_+:
\sum_{i \in S} R_i \leq C\Big(\sum_{i \in S} h_i P_i, N_0\Big), \nonumber \\
&& \quad \textrm{for all}\ S \subseteq \mathcal M = \{1, \ldots, M\} \bigg\},
\end{eqnarray}
where $P_i$ and $R_i$ are the \emph{i}-th transmitter's power and rate, respectively. $C(P,N)$
denotes Shannon's formula for the capacity of AWGN channel given by
\begin{equation}\label{C_AWGN}
C(P,N) = \frac{1}{2}\log(1+\frac{P}{N}) \quad \textrm{nats}.
\end{equation}
For a multiple-access channel with fading, but fixed transmission powers $P_i$, the
\emph{throughput} capacity region is given by averaging the instantaneous capacity regions with
respect to the fading process \cite{Shamai},
\begin{eqnarray}\label{Ca}
C_a(\boldsymbol P) &=& \bigg\{ \boldsymbol R \in \mathbb{R}^M_+: \sum_{i \in S} R_i
\leq \mathbb{E}_{\boldsymbol H} \bigg[ C\Big(\sum_{i \in S} H_i P_i, N_0\Big) \bigg], \nonumber \\
&& \qquad \qquad \qquad \qquad \textrm{for all} \ S \subseteq \mathcal M \bigg\},
\end{eqnarray}
where $\boldsymbol H$ is a random vector with the stationary distribution of the fading process.
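As an illustrative sketch (not part of the development here), the expectations defining $C_a(\boldsymbol P)$ can be estimated by Monte Carlo, giving a numerical membership test for a candidate rate vector; the exponential fading distribution in the usage below is an arbitrary example choice.

```python
import itertools
import math
import random

def awgn_capacity(P, N0):
    """Shannon's formula C(P, N) = 0.5 * log(1 + P/N), in nats."""
    return 0.5 * math.log(1.0 + P / N0)

def in_throughput_region(R, P, N0, fading_sampler, n_samples=5000, seed=1):
    """Monte Carlo check of R in C_a(P): for every nonempty subset S of users,
    sum_{i in S} R_i <= E_H[ C(sum_{i in S} H_i P_i, N0) ].

    fading_sampler(rng) returns one draw of the fading vector H.
    """
    rng = random.Random(seed)
    M = len(R)
    draws = [fading_sampler(rng) for _ in range(n_samples)]
    for r in range(1, M + 1):
        for S in itertools.combinations(range(M), r):
            bound = sum(awgn_capacity(sum(h[i] * P[i] for i in S), N0)
                        for h in draws) / n_samples      # estimate of the RHS expectation
            if sum(R[i] for i in S) > bound:
                return False
    return True
```

Note the check visits all $2^M - 1$ subsets, so it is only practical for small $M$; it is meant to make the region definition concrete, not to be an allocation algorithm.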
A power control policy $\mathcal{P}$ is a map from any given fading state $\boldsymbol h$ to
$\mathcal{P}(\boldsymbol h) =(\mathcal{P}_1(\boldsymbol h), \ldots, \mathcal{P}_M(\boldsymbol h))$, the powers allocated to
the transmitters. Similarly, we can define the rate allocation policy, $\mathcal R$, as a map from
the fading state $\boldsymbol h$ to the transmission rates, $\mathcal R(\boldsymbol h)$. For any given
power-control policy $\mathcal{P}$, the capacity region follows from (\ref{Ca}) as
\begin{eqnarray}\label{Cf}
C_f(\mathcal{P}) &=& \bigg\{\boldsymbol R \in \mathbb{R}^M_+: \sum_{i \in S} R_i \leq \nonumber \\ &&
\mathbb{E}_{\boldsymbol H} \bigg[ C\Big(\sum_{i \in S} H_i \mathcal{P}_i(\boldsymbol H),
N_0\Big) \bigg], \textrm {for all}\ S \subseteq \mathcal M \bigg\}. \nonumber
\end{eqnarray}
Tse \emph{et al.} \cite{Tse} have shown that the throughput capacity of a multiple access fading channel is given by
\begin{equation}\label{C_power_ctrl}
C(\bar{\boldsymbol P}) = \bigcup_{\mathcal{P} \in \mathcal{G}} C_f(\mathcal{P}),
\end{equation}
where $\mathcal{G} = \{ \mathcal{P}: \mathbb{E}_{\boldsymbol H} [\mathcal{P}_i(\boldsymbol H)] \leq \bar{P}_i,
\textrm{for all}\ i\} $ is the set of all power control policies satisfying the average power
constraint. Let us define the notion of boundary or dominant face for any of the capacity regions
defined above.
\begin{dom_face}\label{dom_face}
The \emph{dominant face} or \emph{boundary} of a capacity region, denoted by $\mathcal{F}(\cdot)$,
is defined as the set of all $M$-tuples in the capacity region such that no component can be
increased without decreasing others while remaining in the capacity region.
\end{dom_face}
\section{Rate Allocation with Power Control}
In this section, we assume that the channel statistics are known a priori. The goal of
optimal resource allocation is to find feasible rate and power allocation policies denoted
by $\mathcal{R}^*$ and $\mathcal{P}^*$, respectively, such that $\mathcal{R}^*(\boldsymbol H) \in C_g\big(\mathcal{P}^*(\boldsymbol H),\boldsymbol
H\big)$, and $\mathcal{P}^* \in \mathcal G$. Moreover,
\begin{eqnarray}\label{RAC_pctrl}
\mathbb{E}_{\boldsymbol{H}} [\mathcal{R}^*(\boldsymbol H)] = &\boldsymbol R^* =& \textrm{argmax} \quad u(\boldsymbol R) \nonumber \\ && \ \textrm{subject to} \quad \boldsymbol R \in C(\bar{\boldsymbol P})
\end{eqnarray}
where $u(\cdot)$ is a given utility function and is assumed to be a continuously differentiable concave function of $\boldsymbol R$, and
monotonically increasing in each component $R_i$ for all $i$.
For the case of a linear utility function, i.e., $u(\boldsymbol R) = \boldsymbol \mu '
\boldsymbol R $ for some $\boldsymbol \mu \in \mathbb{R} _{+}^M $, Tse \emph{et al.} \cite{Tse}
have shown that the optimal rate and power allocation policies are given by the optimal solution to a
linear program, i.e.,
\begin{eqnarray}\label{LP_RAC_pctrl}
\left(\mathcal R^*(\boldsymbol h), \mathcal P^*(\boldsymbol h)\right) &=& \textrm{arg}\max_{\boldsymbol r , \boldsymbol p} \left( \boldsymbol \mu ' \boldsymbol r - \boldsymbol
\lambda ' \boldsymbol p \right) \nonumber \\ &&\ \textrm{subject to} \quad \boldsymbol r \in
C_g(\boldsymbol h, \boldsymbol p),
\end{eqnarray}
where $\boldsymbol h$ is the channel state realization, and $\boldsymbol \lambda \in \mathbb{R}
_{+}^M$ is a Lagrange multiplier satisfying the average power constraint, i.e.,
$\boldsymbol \lambda$ is the unique solution of the following equations
\begin{eqnarray}\label{lambda_mu}
&& \int_0^{\infty} \frac{1}{h} \int_{\frac{2\lambda_i (N_0+z)}{\mu_i}}^{\infty} \nonumber \\
&& \quad \prod_{k \neq i} F_k \left( \frac{2 \lambda_k h (N_0 + z)}{2 \lambda_i (N_0+z) + (\mu_k - \mu_i) h} \right) f_i(h)\, \mathrm{d}h\, \mathrm{d}z = \bar{P}_i \nonumber \\
\end{eqnarray}
where $F_k$ and $f_k$ are cumulative distribution function (CDF) and probability density
function (PDF) of the stationary distribution of the channel state process for transmitter $k$,
respectively.
Exploiting the polymatroid structure of the capacity region, problem (\ref{LP_RAC_pctrl}) can be
solved by a simple greedy algorithm (see Lemma 3.2 of \cite{Tse}). It is also shown in \cite{Tse}
that for positive $\boldsymbol \mu$ the optimal solution, $\boldsymbol R^*$, to the problem in
(\ref{RAC_pctrl}) is \emph{uniquely} obtained. Given the distribution of channel state process,
denoted by $F_k$ and $f_k$, we have
\begin{eqnarray}\label{R_mu}
R_i^*(\boldsymbol \mu) &=& \int_0^{\infty}\!\!\!\! \frac{1}{2(N_0+z)} \int_{\frac{2
\lambda_i (N_0+z) }{\mu_i}}^{\infty} \nonumber \\
&& \!\!\!\!\!\!\!\prod_{k \neq i} F_k \left( \frac{2 \lambda_k h
(N_0 + z)}{2 \lambda_i (N_0+z) + (\mu_k - \mu_i) h} \right) f_i(h) \mathrm{d}h
\mathrm{d}z, \nonumber \\
\end{eqnarray}
The uniqueness of $\boldsymbol R^*$ follows from the fact that the stationary
distribution of the fading process has continuous density \cite{Tse}. It is worth
mentioning that (\ref{R_mu}) parametrically describes the \emph{boundary} of the capacity
region, and hence, there is a one-to-one correspondence between the boundary of
$C(\boldsymbol{\bar{P}})$ and the positive vectors $\boldsymbol \mu$ with unit norm.
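To make the greedy step concrete, the following sketch (our illustration, not the implementation of \cite{Tse}) maximizes a linear utility $\boldsymbol \mu' \boldsymbol r$ over a polymatroid by assigning, in decreasing order of the weights, each user's marginal contribution to the submodular sum-rate function; the fixed-power Gaussian sum-rate used for $f$ below is an assumed stand-in for the rank function of the capacity region:

```python
# Hedged sketch: greedy maximization of mu' r over a polymatroid
# {r >= 0 : sum_{i in S} r_i <= f(S) for all S}, following the greedy
# structure described in the text (Lemma 3.2 of Tse et al.). The
# submodular function f below (Gaussian MAC sum-rate bounds for
# fixed powers) is an illustrative assumption.
import math

def f(S, h, P, N0=1.0):
    """Sum-rate bound (1/2) log(1 + sum_{i in S} h_i P_i / N0)."""
    return 0.5 * math.log(1.0 + sum(h[i] * P[i] for i in S) / N0)

def greedy_rates(mu, h, P):
    """Assign marginal rates in decreasing order of the weights mu."""
    order = sorted(range(len(mu)), key=lambda i: -mu[i])
    rates, S = [0.0] * len(mu), []
    for i in order:
        rates[i] = f(S + [i], h, P) - f(S, h, P)  # marginal capacity
        S.append(i)
    return rates

# Example: two users; the user with the larger weight is decoded last
# and therefore receives its interference-free single-user rate.
r = greedy_rates(mu=[2.0, 1.0], h=[1.0, 1.0], P=[1.0, 1.0])
```

By construction the resulting rate vector is a vertex of the dominant face, and the sum rate always telescopes to $f(\mathcal M)$.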
Now consider a general concave utility function. We use an iterative method to compute the optimal solution, $\boldsymbol R^*$, of problem
(\ref{RAC_pctrl}). Note that by monotonicity of the utility function, $\boldsymbol R^*$ always lies
on the \emph{boundary} of the capacity region, $C(\boldsymbol{\bar{P}})$. Once $\boldsymbol R^*$ is known,
then in view of the one-to-one correspondence between the boundary of
$C(\boldsymbol{\bar{P}})$ and the positive vectors $\boldsymbol \mu$, there exists a positive vector $\boldsymbol
\mu^*$ such that
\begin{equation}\label{R_mu_corresp}
\boldsymbol R^* = \textrm{argmax} \quad (\boldsymbol\mu ^*)' \boldsymbol R \quad \textrm{subject to} \quad \boldsymbol R \in
C(\bar{\boldsymbol P}).
\end{equation}
Therefore, the optimal rate and power allocation policies can be obtained by applying the greedy
policies of Tse \emph{et al.} \cite{Tse} to the linear utility function $u(\boldsymbol R) = (\boldsymbol\mu ^*)' \boldsymbol
R$.
We use the conditional gradient method \cite{nlp} in order to iteratively compute the optimal
solution of problem (\ref{RAC_pctrl}). The $k$-th iteration of the method is given by
\begin{equation}\label{frank-wolfe}
\boldsymbol R^{k+1} = \boldsymbol R^k + \alpha^k(\boldsymbol{\bar{R}}^k - \boldsymbol R^k),
\end{equation}
where $\alpha^k $ is the stepsize and $\boldsymbol{\bar{R}}^k$ is obtained as
\begin{equation}\label{frank-wolfe2}
\boldsymbol{\bar{R}}^k \in \textrm{arg}\!\!\!\!\max_{\boldsymbol{R} \in C(\boldsymbol{\bar{P}})} \left( \nabla u(\boldsymbol{R}^k)'(\boldsymbol
R - \boldsymbol R^k)\right).
\end{equation}
Since the utility function is monotonically increasing, the gradient vector is always
positive and, hence, the unique optimal solution to the above sub-problem is obtained by
(\ref{R_mu}), in which $\boldsymbol \mu$ is replaced by $\nabla u(\boldsymbol{R}^k)$. By concavity of the
utility function and convexity of the capacity region, the iteration (\ref{frank-wolfe})
converges to the optimal solution of (\ref{RAC_pctrl}) for appropriate stepsize
selection rules such as the Armijo rule or the limited maximization rule (cf. \cite{nlp}, pp.~220--222).
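A minimal sketch of iterations (\ref{frank-wolfe})--(\ref{frank-wolfe2}) follows. The box-shaped feasible region, the oracle `argmax_linear`, and the classic diminishing stepsize are illustrative assumptions standing in for the capacity region, the greedy solution of (\ref{R_mu}), and the stepsize rules mentioned above:

```python
# Hedged sketch of the conditional gradient (Frank-Wolfe) method.
# In the paper the linear sub-problem is solved by the greedy policy
# over the capacity region; here a toy box region stands in for it,
# and a diminishing stepsize replaces the Armijo rule.
import numpy as np

def argmax_linear(c, upper):
    """Maximize c'R over the box {0 <= R <= upper} (toy oracle)."""
    return np.where(c > 0, upper, 0.0)

def frank_wolfe(grad_u, upper, R0, steps=200):
    R = np.array(R0, dtype=float)
    for k in range(steps):
        Rbar = argmax_linear(grad_u(R), upper)  # linear sub-problem
        alpha = 2.0 / (k + 2.0)                 # diminishing stepsize
        R = R + alpha * (Rbar - R)              # conditional-gradient step
    return R

# Maximize u(R) = sum_i log(1 + R_i) over 0 <= R_i <= 1; since u is
# increasing, the maximizer is the upper corner (1, 1).
grad = lambda R: 1.0 / (1.0 + R)
R_star = frank_wolfe(grad, upper=np.ones(2), R0=np.zeros(2))
```

Each iterate remains a convex combination of feasible points, so feasibility is preserved without a projection step.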
Note that our goal is to determine rate and power allocation policies. Finding $\boldsymbol R^*$
allows us to determine such policies via the greedy policy in (\ref{LP_RAC_pctrl}) for $\boldsymbol \mu
^* = \nabla u(\boldsymbol{R}^*)$. It is worth mentioning that all the computations for obtaining $\boldsymbol
R^*$ are performed once, during the setup of the communication session, so the convergence rate of the conditional
gradient method is generally not critical.
\section{Rate Allocation without Power Control}
In this section we assume that the channel statistics are not known and that the
transmission powers are fixed to $\boldsymbol P$. In practice, this scenario arises when the
transmission power is limited by environmental constraints, such as human presence,
or by hardware limitations.
The capacity region of the multiple access channel for this scenario is a polyhedron and is
given by (\ref{Ca}). Similarly to the previous case, the optimal rate allocation
policy, $\mathcal{R}^*(\cdot)$, is such that $\mathcal{R}^*(\boldsymbol H) \in C_g(\boldsymbol P, \boldsymbol H)$, and
\begin{eqnarray}\label{RAC_npctrl}
\mathbb{E}_{\boldsymbol{H}} [\mathcal{R}^*(\boldsymbol H)] = &\boldsymbol R^* &\in \textrm{argmax} \quad u(\boldsymbol R)\nonumber \\ && \quad \textrm{subject to} \quad \boldsymbol R \in C_a(\boldsymbol
P).
\end{eqnarray}
It is worth mentioning that the approach used to find the optimal resource allocation
policies in the previous case does not apply to this scenario, because $C_g(\boldsymbol
P, \boldsymbol h)$ is a polyhedron and, hence, the uniqueness of $\boldsymbol R^*$ for a positive vector
$\boldsymbol \mu$ no longer holds.
Here we present a \emph{greedy} rate allocation policy and compare its performance with the
unknown optimal policy. The performance of a particular rate allocation policy is defined
as the utility at the average rate achieved by that policy. The greedy policy, denoted by
$\bar{\mathcal{R}}(\cdot)$, maximizes the utility function for each channel realization,
i.e.,
\begin{equation}\label{R_greedy}
\bar{\mathcal{R}}(\boldsymbol h) = \textrm{argmax}_{\boldsymbol R \in C_g(\boldsymbol P, \boldsymbol h)} \quad
u(\boldsymbol R).
\end{equation}
Consider the following relations
\begin{eqnarray}\label{jensen}
\mathbb{E}_{\boldsymbol H}\big[u\big(\mathcal{R}^*(\boldsymbol H)\big)\big] &\leq& \mathbb{E}_{\boldsymbol H}\big[u\big(\bar{\mathcal{R}}(\boldsymbol H)\big)\big] \nonumber \\
&\leq& u\big(\mathbb{E}_{\boldsymbol H}\big[\bar{\mathcal{R}}(\boldsymbol H)\big]\big) \nonumber \\
&\leq& u\big(\mathbb{E}_{\boldsymbol H}\big[\mathcal{R}^*(\boldsymbol H)\big]\big),
\end{eqnarray}
where the first inequality follows from the definition of the greedy policy in (\ref{R_greedy}), the second from Jensen's inequality and the concavity of the utility function, and the third from the optimality of $\boldsymbol R^*$ over $C_a(\boldsymbol P)$.
In the case of a linear utility function we have $u\big(\mathbb{E}_{\boldsymbol H}\big[\mathcal{R}^*(\boldsymbol
H)\big]\big) = \mathbb{E}_{\boldsymbol H}\big[u\big(\mathcal{R}^*(\boldsymbol H)\big)\big]$, so equality holds throughout in (\ref{jensen}) and
$\bar{\mathcal{R}}(\cdot)$ is indeed the optimal rate allocation policy. For
nonlinear utility functions, the greedy policy can be strictly suboptimal.
However, the greedy policy is not arbitrarily worse than the optimal one. In view of (\ref{jensen}), we can bound the
performance difference, $u(\boldsymbol{R}^*) - u\big(\mathbb{E}_{\boldsymbol H}\big[\bar{\mathcal{R}}(\boldsymbol H)\big]\big)$, by
bounding $\Big|u\big(\mathbb{E}_{\boldsymbol H}\big[\mathcal{R}^*(\boldsymbol H)\big]\big) - u\big(\mathbb{E}_{\boldsymbol H}\big[\bar{\mathcal{R}}(\boldsymbol H)\big]\big)\Big|$ or
$\Big|u\big(\mathbb{E}_{\boldsymbol H}\big[\mathcal{R}^*(\boldsymbol H)\big]\big) - \mathbb{E}_{\boldsymbol H}\big[u\big(\mathcal{R}^*(\boldsymbol
H)\big)\big]\Big|$ from above. We show that the first bound goes to zero as the channel variations
become small, and that the second bound vanishes as the utility function becomes closer to
linear.
Before stating the main theorems, let us introduce some useful definitions and lemmas.
\begin{expansion}\label{expansion_def}
Let $Q$ be a polyhedron described by a set of linear constraints, i.e.,
\begin{equation}\label{polyhedron}
Q = \left\{\boldsymbol x \in \mathbb{R}^n: A \boldsymbol x \leq \boldsymbol b \right\}.
\end{equation}
Define the \emph{expansion} of $Q$ by $\delta$, denoted by $\mathcal{E}_\delta(Q)$, as the polyhedron
obtained by relaxing all the constraints in (\ref{polyhedron}) by $\delta$, i.e., $ \mathcal{E}_\delta(Q) = \left\{\boldsymbol x \in \mathbb{R}^n: A \boldsymbol x \leq \boldsymbol b + \delta\mathbf{1}
\right\},$
where $\mathbf{1}$ is the vector of all ones.
\end{expansion}
\begin{Hausdorff}\label{Hausdorff_def}
Let $X$ and $Y$ be two polyhedra described by a set of linear constraints. Let
$\mathcal{E}_d(X)$ be an \emph{expansion} of $X$ by relaxing its constraints by $d$. The distance
$d_H(X,Y)$ between $X$ and $Y$ is defined as the minimum scalar $d$ such that $X \subseteq \mathcal{E}_d(Y)$ and $ Y \subseteq
\mathcal{E}_d(X)$.
\end{Hausdorff}
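As a small numerical illustration of Definitions \ref{expansion_def} and \ref{Hausdorff_def} (a sketch under the assumption that both polyhedra share the same constraint matrix $A$): if $\boldsymbol b_X \leq \boldsymbol b_Y + d\mathbf{1}$ componentwise, then every point of $X$ satisfies the relaxed constraints of $Y$, so $X \subseteq \mathcal{E}_d(Y)$ and the distance is bounded by the largest componentwise gap:

```python
# Hedged illustration of the expansion-based distance: for polyhedra
# {x : A x <= b_X} and {x : A x <= b_Y} with a common matrix A,
# b_X <= b_Y + d*1 implies X is inside the expansion E_d(Y), so
# max_i |b_Xi - b_Yi| upper-bounds the distance of Definition 2.
import numpy as np

def expansion_bound(bX, bY):
    """Upper bound on the distance for a shared constraint matrix."""
    return float(np.max(np.abs(np.asarray(bX) - np.asarray(bY))))

# Two 1-D "capacity regions" {0 <= x <= b}: relaxing the smaller one
# by 0.3 makes each region contain the other.
d = expansion_bound([1.0], [0.7])
```

When the constraints are tight (non-redundant), this bound is attained with equality.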
Lemma \ref{region_chebyshev} extends
Chebyshev's inequality to capacity regions. It states that, with high probability, the time-varying
capacity region does not deviate much from its mean.
\begin{region_chebyshev} \label{region_chebyshev}
Let $\boldsymbol H$ be a random vector with the stationary distribution of the fading process
with mean $\boldsymbol{\bar{H}}$ and covariance matrix $K$. Then
\begin{equation}\label{capacity_cheby}
\textbf{\textrm{Pr}} \Big\{ d_H \left(C_g(\boldsymbol{P},\boldsymbol{H}), C_a(\boldsymbol{P}) \right) > \delta \Big\}
\leq \frac{\sigma_H^2}{\delta^2},
\end{equation}
where $\sigma_H^2$ is defined as
\begin{eqnarray}\label{sigma_H}
&& \sigma_H^2 \triangleq \frac{1}{4}\sum_{S \subseteq \{1,\ldots, M\}} \boldsymbol{\Gamma}_S' K \boldsymbol \Gamma_S \Bigg(1+ \nonumber \\
&& \left[(1+\boldsymbol{\Gamma}'_S \boldsymbol{\bar{H}})(\sqrt{2 \log(1+\boldsymbol{\Gamma}'_S \boldsymbol{\bar{H}})} -
\frac{\sqrt{\boldsymbol \Gamma_S' K \boldsymbol \Gamma_S}}{2})\right]^2\Bigg), \nonumber \\
\end{eqnarray}
where
\begin{equation}\label{P_indicator}
{(\boldsymbol \Gamma_S)}_i = \left\{ \begin{array}{ll}
\frac{P_i}{N_0}, & \textrm{$i \in S$}\\
0, & \textrm{otherwise.}
\end{array} \right.
\end{equation}
\end{region_chebyshev}
\begin{proof} Define random variables $Y_S$ and $Z_S$ as the following:
\begin{equation}\label{Y_S}
Y_S = \frac{1}{2} \log\big(1+\sum_{i \in S}\frac{H_i P_i}{N_0}\big) = \frac{1}{2} \log(1+Z_S), \quad \textrm{for all}\ S \subseteq
\mathcal M.
\end{equation}
The facet defining constraints of $C_g(\boldsymbol P, \boldsymbol H)$ and $C_a(\boldsymbol P)$ are of the form of $\sum_{i \in S}R_i \leq
Y_S$ and $\sum_{i \in S}R_i \leq \mathbb{E}[Y_S]$, respectively. Hence, by
Definition \ref{Hausdorff_def}, we have $d_H \left(C_g(\boldsymbol{P},\boldsymbol{H}), C_a(\boldsymbol{P}) \right) >
\delta$ if and only if $|Y_S - \mathbb{E}[Y_S]| > \delta$ for some $S \subseteq \mathcal M$. The following
relations can then be verified by employing the union bound and Chebyshev's
inequality:
\begin{eqnarray}\label{chebyshev_pf_eqn}
\textbf{\textrm{Pr}} \Big\{ d_H \left(C_g(\boldsymbol{P},\boldsymbol{H}), C_a(\boldsymbol{P}) \right) > \delta \Big\}
&=& \textbf{Pr}\Big\{ \max_{S} |Y_S - \mathbb{E}[Y_S]| > \delta \Big\} \nonumber \\
&\leq& \frac{1}{\delta^2} \sum_{S \subseteq \mathcal M} \sigma^2_{Y_S},
\end{eqnarray}
where $\sigma^2_{Y_S}$ denotes the variance of $Y_S$, which can be bounded from above
as follows (cf. Appendix II of \cite{tech_report}):
\begin{equation}\label{sigma_Y}
\sigma^2_{Y_S} \leq \frac{\sigma_{Z_S}^2}{4} \left(1+ \left[(1+\bar{Z_S})(\sqrt{2 \log(1+\bar{Z_S})} -
\frac{\sigma_{Z_S}}{2})\right]^2\right),
\end{equation}
where
$$ \bar{Z}_S = \mathbb{E}\Big[ \sum_{i \in S}\frac{H_i P_i}{N_0}\Big] = \boldsymbol \Gamma'_S \boldsymbol{\bar{H}}, $$
$$ \sigma_{Z_S}^2 = \textrm{var}\Big( \sum_{i \in S}\frac{H_i P_i}{N_0}\Big) = \boldsymbol\Gamma_S' K \boldsymbol \Gamma_S. $$
The desired result is concluded by substituting $\bar{Z}_S$ and $\sigma_{Z_S}^{2}$ in
(\ref{sigma_Y}) and combining the result with (\ref{chebyshev_pf_eqn}).
\end{proof}
The system parameter $\sigma_H^2$ in Lemma \ref{region_chebyshev} scales with the
channel variations, and it vanishes as those variations become small. The
following lemma ensures that the distance between the optimal solutions of the utility
maximization problem over two regions is small, provided that the regions are close to each
other.
\begin{opt_dist}\label{opt_dist}
Let $\boldsymbol R_1^*$ and $\boldsymbol R_2^*$ be the optimal solutions of maximizing the utility over $C_g(\boldsymbol P,
\boldsymbol H_1)$ and $C_g(\boldsymbol P, \boldsymbol H_2)$, respectively. If there exist positive scalars $A$ and $B$ such
that
\begin{eqnarray}\label{AB_hyp}
|u(\boldsymbol R_1) - u(\boldsymbol R_2)| &\leq& B \|\boldsymbol R_1- \boldsymbol R_2\|, \nonumber \\
&\textrm{for all }& \boldsymbol R_i \in \mathcal{F}(C_g(\boldsymbol P, \boldsymbol H_i)) , \quad i=1,2. \nonumber \\
|u(\boldsymbol R_i^*) - u(\boldsymbol R_i)| &\geq& A \|\boldsymbol R_i^* - \boldsymbol R_i\|^2, \nonumber \\
&\textrm{for all }& \boldsymbol R_i \in C_g(\boldsymbol P, \boldsymbol H_i), \quad i=1,2, \nonumber \\
\end{eqnarray}
and moreover if
$$d_H\big(C_g(\boldsymbol P, \boldsymbol H_1), C_g(\boldsymbol P, \boldsymbol H_2)\big) \leq \delta$$
then, we have
\begin{equation}\label{opt_dist_result}
\|\boldsymbol R_1^* - \boldsymbol R_2^*\| \leq
{\delta}^{\frac{1}{2}}\left[{\delta}^{\frac{1}{2}}+\Big(\frac{B}{A}\Big)^{\frac{1}{2}}\right].
\end{equation}
\end{opt_dist}
\begin{proof}
Without loss of generality assume that $u(\boldsymbol R_2^*) \geq u(\boldsymbol R_1^*)$. To
simplify notation, let $C_1 = C_g\big(\boldsymbol P, \boldsymbol H_1\big)$ be a \emph{polymatroid}, i.e.,
\begin{equation}\label{polymatroid}
C_1 = \bigg\{ \boldsymbol R \in \mathbb{R}^M_+: \sum_{i \in S} R_i \leq f(S),\
\textrm{for all}\ S \subseteq \mathcal M \bigg\},
\end{equation}
for some submodular function $f(S)$, and let $C_2$ be an \emph{expansion} of $C_1$ by
$\delta$ as defined in Definition \ref{expansion_def}. We first show that for every $\boldsymbol R \in \mathcal{F}(C_2)$, there exists a vector
$\boldsymbol R' \in \mathcal{F}(C_1)$ such that $\|\boldsymbol R - \boldsymbol R'\| \leq \delta$, where $\mathcal
F(\cdot)$ denotes the dominant face of a capacity region as in Definition \ref{dom_face}.
Assume first that $\boldsymbol R$ is a vertex of $C_2$. Then the polymatroid structure of $C_2$ implies
that $\boldsymbol R$ is the intersection of $M$ constraints corresponding to a chain of subsets
of $\mathcal{M}$. Hence, there is some $k \in \mathcal{M}$ such that $ R_k = f(\{k\}) + \delta
$. Choose $\boldsymbol R'$ as follows:
\begin{equation}\label{R'_R}
R'_i = \left\{ \begin{array}{ll}
R_i - \delta, & \textrm{$i = k$}\\
R_i, & \textrm{otherwise.}
\end{array} \right.
\end{equation}
$\boldsymbol R'$ is obviously in a $\delta$-neighborhood of $\boldsymbol R$. Moreover, the
constraint corresponding to the set $\mathcal{M}$ is active for $\boldsymbol R'$, so we
just need to show that $\boldsymbol R'$ is feasible in order to prove that it is on the
dominant face. First, let us consider the sets $S$ that contain $k$. We have
\begin{equation}\label{k_in_S}
\sum_{i \in S} R'_i = \sum_{i \in S} R_i - \delta \leq f(S).
\end{equation}
Second, consider the case that $k \notin S$.
\begin{eqnarray}
\sum_{i \in S} R'_i &=& \sum_{i \in S \cup \{k\}} R'_i - R_k + \delta \nonumber \\
&\leq& f(S \cup \{k\}) + \delta - R_k \nonumber \\
&\leq& f(S) + f(\{k\}) + \delta - R_k \nonumber \\
&=& f(S), \nonumber
\end{eqnarray}
where the first inequality comes from (\ref{k_in_S}), and the second inequality is
valid because of the submodularity of the function $f(\cdot)$.
The previous argument establishes that the claim is true for each vertex $\boldsymbol R_j$ of the dominant face. But
every other point $\boldsymbol R$ on the dominant face can be represented as
a convex combination of the vertices, i.e.,
$$ \boldsymbol R = \sum_j \alpha_j \boldsymbol R_j, \qquad \sum_j \alpha_j = 1, \alpha_j \geq 0.$$
Using the convexity of the norm function, it is quite straightforward to show that the desired $\boldsymbol R'$
is given by
$$ \boldsymbol R' = \sum_j \alpha_j \boldsymbol R'_j,$$
where $\boldsymbol R'_j$ is obtained for each $\boldsymbol R_j$ in the same manner as in
(\ref{R'_R}).
So we have verified that there exists a point, $\boldsymbol R$, on the dominant face of $C_1 = C_g(\boldsymbol P, \boldsymbol H_1)$ such
that $\|\boldsymbol R_2^* - \boldsymbol R\| \leq \delta$. By monotonicity of the utility function the
optimal solution $\boldsymbol R_2^*$ lies on the dominant face of $C_g(\boldsymbol P, \boldsymbol H_2)$,
hence, from the hypothesis and the fact that $u(\boldsymbol R_2^*) \geq u(\boldsymbol R_1^*) \geq
u(\boldsymbol R)$, we conclude
\begin{equation}\label{opt_dist1}
u(\boldsymbol R_2^*) - u(\boldsymbol R) = |u(\boldsymbol R_2^*) - u(\boldsymbol R)| \leq B\|\boldsymbol R_2^* - \boldsymbol R\| \leq B
\delta.
\end{equation}
Now suppose that $\|\boldsymbol R_1^* - \boldsymbol R\| > (\frac{B}{A} \delta)^{\frac{1}{2}}$. By the hypothesis in (\ref{AB_hyp}) we
can write
\begin{equation}\label{opt_dist2}
u(\boldsymbol R_1^*) - u(\boldsymbol R) = |u(\boldsymbol R_1^*) - u(\boldsymbol R)| > B \delta.
\end{equation}
By subtracting (\ref{opt_dist1}) from (\ref{opt_dist2}) we obtain $u(\boldsymbol R_2^*) <
u(\boldsymbol R_1^*)$ which is a contradiction. Therefore, $\|\boldsymbol R_1^* - \boldsymbol R\| \leq
(\frac{B}{A} \delta)^{\frac{1}{2}}$, and the desired result follows immediately by invoking the
triangle inequality.
\end{proof}
The following theorem combines the results of the above lemmas to obtain a bound on the performance
difference between the greedy and the optimal policies.
\begin{Bound2}\label{Bound2}
Let $\boldsymbol R^*$ be the optimal solution to (\ref{RAC_npctrl}), and $\bar{\mathcal{R}}(\cdot)$ the greedy
rate allocation policy as defined in (\ref{R_greedy}) for a non-negative concave utility function $u(\cdot)$. Then for every $\epsilon \in
(0,1]$,
\begin{eqnarray}\label{Bound2_eps}
u(\boldsymbol{R}^*) &\!\!\!- u\big(\mathbb{E}_{\boldsymbol H}\big[\bar{\mathcal{R}}(\boldsymbol H)\big]\big)& \leq \epsilon u(\boldsymbol{R}^*) +
(1-\epsilon)B(\epsilon) \nonumber \\
&&\Big[\Big(\frac{\sigma_H}{\sqrt{\epsilon}}\Big)^{\frac{1}{2}}+\Big(\frac{B(\epsilon)}{A(\epsilon)}\Big)^{\frac{1}{2}}\Big]\Big(\frac{\sigma_H}{\sqrt{\epsilon}}\Big)^{\frac{1}{2}},
\nonumber \\
\end{eqnarray}
where $B(\epsilon)$ and $A(\epsilon)$ are positive functions of
$\epsilon$, such that for all $\boldsymbol H$ with $d_H(C_g(\boldsymbol P,
\boldsymbol H), C_a(\boldsymbol P)) \leq \frac{\sigma_H}{\sqrt{\epsilon}}$, they satisfy the
following conditions.
\begin{eqnarray}\label{Bound2_hyp1}
|u(\boldsymbol{R_a}) - u(\boldsymbol{R_g})| &\leq& B(\epsilon) \|\boldsymbol{R_a} - \boldsymbol{R_g}\|, \nonumber \\ &&
\textrm{for all } \boldsymbol{R_a} \in \mathcal{F}(C_a(\boldsymbol P)), \nonumber \\
&&\textrm{for all } \boldsymbol{R_g} \in \mathcal{F}(C_g(\boldsymbol P, \boldsymbol H)), \nonumber \\
\\ \label{Bound2_hyp}
|u(\bar{\mathcal{R}}(\boldsymbol H)) - u(\boldsymbol{R})| &\geq& A(\epsilon) \|\bar{\mathcal{R}}(\boldsymbol H) - \boldsymbol{R}\|^2, \nonumber \\&& \textrm{for all } \ \boldsymbol{R} \in C_g(\boldsymbol P, \boldsymbol
H). \\ \nonumber
\end{eqnarray}
\end{Bound2}
\begin{proof}
Pick any $\epsilon \in (0,1]$. Define the event $\mathcal{V}$ as
$$d_H(C_g(\boldsymbol P, \boldsymbol H), C_a(\boldsymbol P)) \leq \frac{\sigma_H}{\sqrt{\epsilon}}.$$
By Lemma \ref{region_chebyshev}, the probability of this event is at least
$1-\epsilon$. Conditioned on $\mathcal{V}$, we have the following
\begin{eqnarray}\label{Bound2_ch0}
\big|u(\boldsymbol{R}^*) -u\big(\bar{\mathcal{R}}(\boldsymbol H)\big) \big| \leq&& \!\!\!\!\!\!\!\!\!\!\! B(\epsilon) \|\bar{\mathcal{R}}(\boldsymbol H) - \boldsymbol{R}^*\| \nonumber \\
\leq&& \!\!\!\!\!\!\!\!\!\!\! B(\epsilon)\Big[\Big(\frac{\sigma_H}{\sqrt{\epsilon}}\Big)^{\frac{1}{2}}+\Big(\frac{B(\epsilon)}{A(\epsilon)}\Big)^{\frac{1}{2}}\Big]\Big(\frac{\sigma_H}{\sqrt{\epsilon}}\Big)^{\frac{1}{2}},
\nonumber \\
\end{eqnarray}
where the first inequality follows from monotonicity of the utility function and (\ref{Bound2_hyp1}). The second inequality is a direct result of applying Lemma \ref{opt_dist}.
Using Jensen's inequality as in (\ref{jensen}) we can bound the left-hand side of
(\ref{Bound2_eps}) as follows
\begin{eqnarray}\label{bound2_ch1}
&&u(\boldsymbol{R}^*) - u\big(\mathbb{E}_{\boldsymbol H}\big[\bar{\mathcal{R}}(\boldsymbol H)\big]\big) \nonumber \\
&&\qquad \leq u(\boldsymbol{R}^*) - \mathbb{E}_{\boldsymbol H}\big[u\big(\bar{\mathcal{R}}(\boldsymbol H)\big)\big] \nonumber \\
&&\qquad \leq u(\boldsymbol{R}^*) - (1-\epsilon)\mathbb{E}_{\boldsymbol H}\Big[u(\bar{\mathcal{R}}(\boldsymbol H))\Big|
\mathcal{V}\Big] \nonumber \\
&& \quad \qquad - \mathbf{Pr}(\mathcal{V}^c)\mathbb{E}_{\boldsymbol H}\Big[u(\bar{\mathcal{R}}(\boldsymbol H))\Big| \mathcal{V}^c\Big] \nonumber \\
&&\qquad \leq \epsilon u(\boldsymbol{R}^*) + (1-\epsilon)\mathbb{E}_{\boldsymbol H}\Big[ |u(\boldsymbol{R}^*) -u(\bar{\mathcal{R}}(\boldsymbol H)) | \Big| \mathcal{V}
\Big]. \nonumber \\
\end{eqnarray}
In the above relations, the second inequality follows from $\mathbf{Pr}(\mathcal{V})
\geq 1- \epsilon$, and the third inequality is obtained from the non-negativity of the
utility function after some manipulation. Substituting (\ref{Bound2_ch0}) into
(\ref{bound2_ch1}) gives the desired upper bound.
\end{proof}
Theorem \ref{Bound2} provides a bound parameterized by $\epsilon$. For very small channel
variations, $\sigma_H$ tends to zero, and we can choose $\epsilon$ proportional to
$\sigma_H$ such that the bound in (\ref{Bound2_eps}) approaches zero. Figure
\ref{Bound2_fig} illustrates the behavior of the parameterized bound provided in
(\ref{Bound2_eps}) for different values of $\sigma_H$. For each value of $\sigma_H$, the
upper bound is minimized by a specific choice of $\epsilon$, illustrated as a dot
in Figure \ref{Bound2_fig}. As demonstrated in the figure, smaller channel variations
yield a tighter bound, and the minimizing parameter decreases.
The next theorem
provides another bound demonstrating the impact of the structure of the utility
function on the performance of the greedy policy.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{bound2}\\
\caption{Parameterized upper bound on the performance difference between the greedy and optimal policies.}\label{Bound2_fig}
\end{figure}
\begin{Bound1}\label{Bound1}
Let $\boldsymbol R^*$ be the optimal solution to (\ref{RAC_npctrl}) for the non-negative utility function $u(\boldsymbol R)$. Also let ${\mathcal{R}^*}(\cdot)$ and ${\bar{\mathcal{R}}}(\cdot)$ be the
optimal and greedy rate allocation policies, respectively. Then for every $\epsilon \in (0,1]$,
\begin{equation}\label{Bound1_eps}
u(\boldsymbol{R}^*) - u\big(\mathbb{E}_{\boldsymbol H}\big[\bar{\mathcal{R}}(\boldsymbol H)\big]\big) \leq \epsilon
u(\boldsymbol{R}^*) +
\frac{1}{2}(1-\epsilon) r(\epsilon)^2 \Omega,
\end{equation}
where $\Omega$ satisfies the following
\begin{equation}\label{bound2_hyp1}
\lambda_{\max}(- \nabla^2 u(\boldsymbol \xi)) \leq \Omega, \qquad \textrm{for all}\ \boldsymbol \xi, \|\boldsymbol \xi - \boldsymbol
R^*\| \leq r(\epsilon),
\end{equation}
and $r(\epsilon)$ is given by
\begin{eqnarray}\label{bound1_hyp2}
&&\!\!\!\!\!\!\!\!\!\!r(\epsilon) = \sqrt{M}\frac{\sigma_H}{\sqrt{\epsilon}} + \nonumber \\
&& \left[ \sum_{i=1}^M \mathbb{E}_{\boldsymbol H}\left[\frac{1}{2} \log \left(\frac{(1+H_i P_i)(1+\sum_{j \neq i} H_j P_j)}{1+\sum_{j=1}^M H_j P_j}\right)\right]^2
\right]^\frac{1}{2}. \nonumber \\
\end{eqnarray}
\end{Bound1}
\begin{proof}
Pick any $\epsilon \in (0,1]$. Define the event $\mathcal{V}$ similarly to the proof of
Theorem \ref{Bound2}.
Because of monotonicity of the utility function, we know that $\boldsymbol R^*$ lies on the dominant face of $C_a(\boldsymbol P)$. Since the region $C_a(\boldsymbol
P)$ is the average of all regions $C_g(\boldsymbol P, \boldsymbol H)$, the optimal policy ${\mathcal{R}^*}(\boldsymbol
H)$ should give a point on the dominant face of $C_g(\boldsymbol P, \boldsymbol H)$, for almost all $\boldsymbol H$. Therefore, conditioned on $\mathcal{V}$, we can
bound the set in which ${\mathcal{R}^*}(\boldsymbol H)$ ranges, i.e., $\|{\mathcal{R}}^*(\boldsymbol H) - \boldsymbol R^*\| \leq
r(\epsilon)$, after some straightforward manipulations.
Now let us write the Taylor expansion of the function $u(\cdot)$ at $\boldsymbol R^*$. We
have
\begin{eqnarray}
u(\boldsymbol R) &=& u(\boldsymbol R^*) + \nabla u(\boldsymbol R^*)'(\boldsymbol R - \boldsymbol R^*) \nonumber \\
&&- \frac{1}{2}(\boldsymbol R - \boldsymbol R^*)'(-\nabla^2u(\boldsymbol \xi))(\boldsymbol R - \boldsymbol R^*) \nonumber \\
&\geq& u(\boldsymbol R^*) + \nabla u(\boldsymbol R^*)'(\boldsymbol R - \boldsymbol R^*) \nonumber \\
&&- \frac{1}{2}\|\boldsymbol R - \boldsymbol R^*\|^2 \lambda_{\max}(-\nabla^2u(\boldsymbol \xi)), \nonumber \\
&& \qquad \qquad \textrm{for some} \ \boldsymbol \xi, \|\boldsymbol \xi - \boldsymbol R^*\| \leq \|\boldsymbol R -\boldsymbol
R^*\|. \nonumber \\
\end{eqnarray}
By replacing $\boldsymbol R$ by $\mathcal{R}^*(\boldsymbol H)$ and conditioning on $\mathcal{V}$ we
have the following
\begin{eqnarray}
u(\mathcal{R}^*(\boldsymbol H))&\geq& u(\boldsymbol R^*) + \nonumber \\
&& \nabla u(\boldsymbol R^*)'( \mathcal{R}^*(\boldsymbol H)- \boldsymbol R^*)
- \frac{1}{2} r(\epsilon)^2 \Omega. \nonumber
\end{eqnarray}
Now we can bound the left-hand side of (\ref{Bound1_eps}) by bounding the difference
$u(\boldsymbol{R}^*) - \mathbb{E}_{\boldsymbol H}[u(\mathcal{R}^*(\boldsymbol H))]$. After some manipulation similar to
(\ref{bound2_ch1}), we have
\begin{eqnarray}\label{bound1_ch1}
&&u(\boldsymbol{R}^*) - u\big(\mathbb{E}_{\boldsymbol H}\big[\bar{\mathcal{R}}(\boldsymbol H)\big]\big) \nonumber \\
&&\qquad \qquad\qquad \leq \ u(\boldsymbol{R}^*) - \mathbb{E}_{\boldsymbol H}\big[u\big(\mathcal{R}^*(\boldsymbol H)\big)\big] \nonumber \\
&&\qquad\qquad\qquad \leq \ u(\boldsymbol{R}^*) - (1-\epsilon)\mathbb{E}_{\boldsymbol H}\Big[u({\mathcal{R}^*}(\boldsymbol H))\Big| \mathcal{V}\Big] \nonumber \\
&& \qquad\qquad\qquad\quad - \mathbf{Pr}(\mathcal{V}^c)\mathbb{E}_{\boldsymbol H}\Big[u({\mathcal{R}^*}(\boldsymbol H))\Big| \mathcal{V}^c\Big] \nonumber \\
&&\qquad\qquad\qquad \leq\ u(\boldsymbol{R}^*) - (1-\epsilon)\Big(u(\boldsymbol R^*) - \frac{1}{2} r(\epsilon)^2 \Omega \Big) \nonumber \\
&&\qquad\qquad\qquad = \ \epsilon u(\boldsymbol{R}^*) + \frac{1}{2}(1-\epsilon) r(\epsilon)^2 \Omega. \nonumber
\end{eqnarray}
\end{proof}
Similarly to Theorem \ref{Bound2}, Theorem \ref{Bound1} provides a bound parameterized by $\epsilon$, which goes to zero for a proper choice of $\epsilon$ as $\Omega$
becomes smaller and the utility function becomes closer to linear. The
behavior of this parameterized upper bound is similar to the one illustrated in Figure
\ref{Bound2_fig}.
In summary, the performance difference between the greedy and the optimal policy is
bounded from above by the minimum of the bounds provided by Theorem \ref{Bound2} and
Theorem \ref{Bound1}.
\section{Conclusion}
We addressed the problem of optimal resource allocation in a fading multiple access channel from an
information theoretic point of view. We formulated the problem as a utility maximization problem
for a more general class of utility functions.
We considered two different scenarios. First, we assumed that the transmitters are capable of power
control and that the channel statistics are known a priori. In this case, the optimal rate and power
allocation policies are obtained by greedily maximizing a properly defined linear utility function.
In the second scenario, power control and channel statistics are not available. In this case, the
greedy policy is not optimal for nonlinear utility functions. However, by bounding the performance
difference, we showed that its utility is not arbitrarily worse than that of the optimal
policy. The provided bound tends to zero as the channel variations
become small or the utility function becomes closer to linear.
The greedy policy may itself be computationally expensive. A computationally efficient algorithm
can be employed to allocate rates close to those allocated by the greedy policy. This algorithm
takes only one iteration of the gradient projection method at each time slot. Under slow fading
conditions, it can be shown that this method tracks the greedy policy very closely, and its
performance is close to that of the optimal policy.
\bibliographystyle{unsrt}
\section{Introduction}
Convolutional layers have made a considerable improvement in computer vision tasks. Before the introduction of convolutional layers, deep learning was known for using fully connected (FC) layers. One of the main problems of FC layers was their large number of parameters (weights), which prevented developers from extending the number of layers due to the risk of overfitting and the need for large memory storage.
The most important novelty of convolutional layers was to decrease the number of weights while improving learning efficiency, so developers could design models with more layers that achieved more reliable results on different benchmarks.
LeNet\cite{726791} and AlexNet\cite{krizhevsky2012imagenet} pioneered convolutional models in image classification tasks. After them, VGG \cite{simonyan2015deep} designed an architecture made of convolutional layers and obtained a good improvement in image classification on the ImageNet benchmark\cite{5206848}.
VGG models generate a 7×7×512 feature map from an input image with 224×224×3 resolution. VGG is made of almost 14 million weights for the feature extraction base. At the final layers, VGG architectures flatten the feature map to a 25088-neuron tensor and use two FC layers, each with 4096 neurons, to connect the flattened feature map to the final classification layer. The biggest issue in this conversion is the increase in the number of weights from 14 million to almost 138 million. Such an increase for just three layers is far from optimal.
At the next level, ResNet models \cite{he2015deep}, inspired by the architecture of VGG, introduced new residual layers and designed a new series of convolution models.
These models generate a 7×7×2048 feature map from an input image with 224×224×3 resolution. Given this feature map size, the authors could not use FC layers at the end of their models, because the number of parameters would have become very high and the performance would have degraded.
Another innovation of the ResNet models was using the Global Average Pooling (GAP) layer instead of FC layers at the final layers.
GAP was previously proposed by \cite{lin2013network}, which introduced the idea of averaging the spatial values of each channel of the extracted feature map to compress the features.
For a tensor array with a shape equal to 7×7×2048, we call the 7×7 windows and the 2048 axis the kernels and channels, respectively. This tensor is thus made of channel data and spatial data. Spatial data refers to each of the 7×7=49 elements of each channel, while channel data refers to each spatial element across the different channels (from 1 to 2048).
Utilizing the GAP layer in ResNet models allowed the authors to extend the number of channels and reduce the computational cost. Although this approach is efficient, it causes the model to lose spatial data due to the averaging over each channel's spatial resolution.
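The loss of spatial information can be illustrated with a minimal numpy sketch (our example, not code from the cited works): GAP collapses a 7×7×2048 feature map to a 2048-vector, and its output is invariant to any shuffling of the 49 spatial positions:

```python
import numpy as np

# Global Average Pooling: average over the 7x7 spatial axes of a
# (7, 7, 2048) feature map, leaving one value per channel.
feature_map = np.random.rand(7, 7, 2048)
gap = feature_map.mean(axis=(0, 1))   # shape: (2048,)

# Any spatial permutation of the map yields the same GAP output,
# which is exactly the loss of spatial information described above.
shuffled = feature_map.reshape(49, 2048)[np.random.permutation(49)]
gap_shuffled = shuffled.reshape(7, 7, 2048).mean(axis=(0, 1))
```

Since the pooled vector cannot distinguish where in the 7×7 grid each activation occurred, all positional information is discarded before classification.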
After ResNet, other proposed architectures in the future like DenseNet \cite{huang2018densely}, NasNet \cite{zoph2018learning}, Inception \cite{szegedy2016rethinking, szegedy2016inceptionv4, chollet2017xception}, MobileNet \cite{howard2017mobilenets}, and EfficientNet \cite{tan2020efficientnet} families also applied the GAP layer at the end of their models.
Another revolution in deep learning was the introduction of separable convolutional layers \cite{chollet2017xception}.
The separable convolutional layer is based on the idea that each filter of a convolutional layer does not have to be applied to both spatial and channel data simultaneously. It is constructed from a depthwise convolutional layer followed by a pointwise convolutional layer.
This approach did for convolutional layers almost what convolutional layers did for FC layers, by remarkably decreasing the number of weights.
Here, we mention an example for better understanding. Consider applying a convolution layer with 512 filters and 3×3 kernel to a tensor array with 20×20×256 shape to get an output array with shape 20×20×512. If we use a simple convolutional layer, then for each filter of the convolution layer, 3×3×256 weights should be trained, and in total, the number of weights would become 3×3×256×512=1179648 weights.
Instead, if we use a separable convolutional layer, it applies a 3×3 kernel of weights for each of the 256 channels separately. A pointwise convolutional layer will be then applied to the depthwise convolution layer's output to achieve a tensor array with a shape of 20×20×512. So, the total number of weights will be equal to 3×3×256 (for the depthwise convolutional layer) + 256×512 (for the pointwise convolutional layer) =133376 weights. Note we did not count the bias parameters.
Comparing 1,179,648 weights with 133,376 weights makes clear that separable convolutional layers significantly reduce the number of model weights. Since their introduction, models such as Xception \cite{chollet2017xception} and NasNet \cite{zoph2018learning} have used them and achieved remarkable results.
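The weight counts above can be checked with a short helper (a hypothetical sketch with illustrative function names, not code from any library; biases are excluded, as in the example):

```python
def standard_conv_weights(kernel, in_ch, out_ch):
    """Weights of a standard kernel x kernel convolution (biases excluded)."""
    return kernel * kernel * in_ch * out_ch

def separable_conv_weights(kernel, in_ch, out_ch):
    """Depthwise kernels (one per input channel) plus a 1x1 pointwise layer."""
    return kernel * kernel * in_ch + in_ch * out_ch

# The example from the text: 3x3 kernel, 256 input channels, 512 filters.
print(standard_conv_weights(3, 256, 512))   # 1179648
print(separable_conv_weights(3, 256, 512))  # 133376
```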
In this work, we address the problem of losing spatial data (caused by GAP layers) without increasing computational cost. Our goal is to let models learn how to combine the extracted feature-map data and feed it to the final classification layer without losing information. Inspired by the architecture of depthwise convolutional layers, we have designed a series of layers that enable models to analyze both the spatial and channel information of the extracted feature map. We call our novel architecture Wise-SrNet.
We replace the GAP layer with a depthwise convolutional layer with a specific configuration. This layer gives the model the ability to learn weights for compressing the final feature map without losing spatial information.
During this process, we observed that the model may overfit due to the large kernel size of the depthwise convolutional layer.
To resolve this issue, we added constraints and other layers to the depthwise convolutional layer; together, these layers considerably improve classification accuracy without increasing computational cost. Our proposed techniques can be applied to any deep convolutional model, enabling more robust models and better benchmark results on ImageNet and other challenges.
In our experiments, we observed that using the GAP layer on datasets with many classes and large images is not reliable and may fail in many circumstances. Under these conditions, our proposed architecture can greatly help image classification tasks.
The works \cite{qiu2018global,Card_2019} proposed Global Weighted Average Pooling (GWAP) layers instead of GAP. In those studies, the GWAP weights are determined from the values of the previous layers. The main difference between these works and ours is that their weighted-averaging weights are shared across all channels of the feature map: the same weighted-averaging matrix is applied to every channel. In contrast, our architecture trains different weights for each channel, so the model learns how each channel should contribute independently to the output array. Another difference is that our work is not based on averaging the final feature-map data; we let the model learn how to process the feature map during training. The function the model creates for processing the feature-map data can be anything and is not limited to a single operation such as averaging.
Another study \cite{peeples2021histogram} proposed a histogram-based architecture to replace the GAP layer and improve classification accuracy. Their architecture consists of two main convolutional layers and an average pooling layer, alongside other layers for extracting histogram information from the final feature map. This histogram feature map is then fed to the final classification layer.
The main drawback of their work is that the two convolutional layers increase the number of weights; especially when the number of classes rises, the computational cost increases notably.
Our paper is organized as follows: Section \ref{2} describes the neural networks, their limits, and our proposed architectures; Section \ref{3} presents the utilized datasets and our experimental results; Section \ref{4} discusses the paper; and Section \ref{5} concludes it.
\section{Materials and Methods}
\label{2}
\subsection{Architectures}
\label{22}
\subsubsection{Global Average Pooling}
\label{221}
Global Average Pooling (GAP) \cite{lin2013network} is a technique that averages the feature map's spatial values to compress the size of feature maps. After the VGG models, most of the proposed deep convolutional classifiers implemented this layer at the end of their architecture. Although this layer decreases the computational cost, it removes part of the image information, resulting in lower accuracy. The architecture of Global Average Pooling (GAP) is depicted in figure \ref{gap}.
The noteworthy point is that the larger the spatial size of the feature map, the more spatial information is lost. For example, for a 7×7×2048 feature map, the GAP layer calculates the mean of 7×7=49 values for each channel, whereas for a 16×16×2048 feature map, it averages 16×16=256 values per channel. Averaging 256 values removes much more information than averaging 49 values. This means the weakness of GAP layers grows as the input images, and thus the feature maps, become larger.
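As a concrete illustration, a minimal pure-Python sketch of GAP (hypothetical helper name, not the Keras implementation) shows how each H×W×C feature map collapses to C per-channel means, and why distinct spatial layouts become indistinguishable:

```python
def global_average_pooling(feature_map):
    """Collapse an H x W x C feature map (nested lists) to C channel means."""
    h, w = len(feature_map), len(feature_map[0])
    c = len(feature_map[0][0])
    sums = [0.0] * c
    for row in feature_map:
        for pixel in row:
            for k, v in enumerate(pixel):
                sums[k] += v
    return [s / (h * w) for s in sums]

# Two 2x2 single-channel maps with different spatial layouts but the same
# mean: GAP cannot tell them apart, which is the information it discards.
a = [[[1.0], [2.0]], [[3.0], [4.0]]]
b = [[[4.0], [3.0]], [[2.0], [1.0]]]
print(global_average_pooling(a), global_average_pooling(b))  # [2.5] [2.5]
```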
\begin{figure}[!ht]
\centering
\subfloat[Classification with Global Average Pooling]{\label{gap}\includegraphics[width=\linewidth]{gap.pdf}}
\newline
\subfloat[Classification with Global Weighted Average Pooling]{\label{gwap}\includegraphics[width=\linewidth]{gwap.pdf}}
\newline
\subfloat[Extracting spatial data with depthwise convolutional layer]{\label{depthwise}\includegraphics[width=\linewidth]{depthwise.pdf}}
\caption{The architectures of different classification layers with GAP, GWAP, and depthwise convolutional layer are portrayed in this figure.}
\label{3base}
\end{figure}
\subsubsection{Flattening and Fully Connected Layers}
\label{222}
Many other researchers preferred using fully connected (FC) layers instead of Global Average Pooling (GAP) layers. To use them, they first flatten the extracted feature map to a one-dimensional array and then use an FC layer to connect the flattened feature map to the final classification layer. This way, the model sees all the features and does not lose any data. Figure \ref{dense} presents the architecture of this method.
But what is wrong with this technique? It is only applicable when there are few classes and the input images are not large, because with more classes it makes the number of weights much higher. Consider a feature map with 7×7×2048 resolution (extracted from a 224×224 image) and 100 classes of images. The flattened feature map will have 7×7×2048=100,352 neurons, and the number of weights for the final FC layer will be 7×7×2048×100=10,035,200. The total number of weights thus increases by more than 10 million.
If the number of classes is even higher, e.g., the ImageNet dataset includes 1000 classes, then the number of weights will be nearly 100 million more! Also, if the input image resolution is higher than 224 (which it can be in many cases), the output feature map will be even larger, and the number of weights will grow further.
This increment will weaken the model because of two reasons:
\begin{enumerate}
\item We will need much more RAM space to train and
test the models, and the training process will be much more time-consuming.
\item Having more weights for just one layer will result in the model's overfitting or underfitting and make the model
unable to learn, causing the output results to be even worth compared to the GAP layer.
\end{enumerate}
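The weight counts above can be verified with a short helper (a hypothetical sketch; `flatten_fc_weights` is illustrative and not part of any library):

```python
def flatten_fc_weights(h, w, c, classes, bias=False):
    """Weights of an FC layer applied to a flattened h x w x c feature map."""
    n = h * w * c * classes
    return n + classes if bias else n

# 7x7x2048 feature map from a 224x224 image:
print(flatten_fc_weights(7, 7, 2048, 100))   # 10035200  (100 classes)
print(flatten_fc_weights(7, 7, 2048, 1000))  # 100352000 (ImageNet's 1000)
```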
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{dense.pdf}
\caption{Classification with Flatten and Fully-Connected layers}
\label{dense}
\end{figure}
\subsubsection{Depthwise Convolutional Layer for Spatial Resolution Analysis}
\label{223}
In this research, we aim to explore a solution that helps classifiers avoid losing the feature map's spatial data while keeping almost the same computational cost. In other words, we wish to propose an idea that matches the GAP layer in computational cost but is more accurate because it processes the spatial values of the feature map wisely.
The best way to realize this idea is to create a layer that allows deep classifiers to learn how to combine and weighted-sum the final feature map's data and compress it into a smaller array. The GAP layer's behavior is fixed: it only calculates the average of the feature map's spatial values (e.g., GAP converts a 7×7×2048 array to a one-dimensional array with 2048 values, averaging the 49 spatial values of each channel).
Suppose the model could learn to allocate weights over the feature map's spatial resolution while training, finding the best way to process and compress the feature map. In that case, the model first processes the feature map's spatial data and then processes the channel data in the next layer (the layer that connects it to the final classification layer). Figures \ref{depthwise} and \ref{224-depthwise} illustrate this idea.
We present the idea of using a specific depthwise convolutional layer after the final feature map, which gives the model the ability to work with the feature map's spatial data without losing information. What distinguishes our proposed architecture from the Global Weighted Average Pooling (GWAP) layer \cite{qiu2018global} is that GWAP allocates a single set of weights to the spatial data of all channels (figure \ref{gwap}): for a 7×7×2048 feature map, 49 weights are learned for processing the spatial values, and these weights are shared across all 2048 channels. Applying the same weights to all channels makes the model deficient because channels differ from each other, and some may be more important than others. The weights applied to the spatial values should therefore vary from channel to channel, and these are the parameters the model shall learn while training.
Our proposed architecture solves this problem by making the model capable of learning different weights for each channel separately. For a 7×7×2048 feature map, the model learns 49 distinct weights per channel (figure \ref{depthwise}); in other words, it investigates each of the 2048 channels independently and learns 49 weights for processing the spatial data of each channel differently. This helps the classifier exploit all the channels' spatial data alongside the channel data to make the best prediction. It is noteworthy that our architecture's spatial processing is not based on averaging at all: the model learns how to combine and sum these data with whatever operation it finds best, and it is not limited to a single function like averaging. Figure \ref{depthwise} illuminates these points.
To implement our proposed idea, in the first stage, after obtaining the feature map, we feed it to a depthwise convolutional layer with a kernel size equal to the feature map's spatial shape. For example, if the feature map's shape is 7×7×2048, we feed it to a depthwise convolutional layer with a 7×7 kernel. This layer applies a 7×7 set of learnable weights to each channel, processing each channel's data separately; its output is a 1×1×2048 tensor.
After that, we apply a flatten layer to reduce the output tensor's dimensions (converting the 1×1×2048 array to an array with 2048 neurons). We then connect this one-dimensional layer to the final classification layer, which is a fully connected layer containing as many neurons as there are classes (e.g., for a 70-class dataset, the final layer is an FC layer with 70 neurons). Figure \ref{224-depthwise} shows the explained architecture.
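To make the per-channel operation concrete, the following pure-Python sketch collapses each channel with its own H×W weight matrix (an illustration of the computation only; in Keras terms this corresponds to a `DepthwiseConv2D` layer whose kernel size equals the feature map's spatial size, and `depthwise_collapse` is a hypothetical name):

```python
def depthwise_collapse(feature_map, kernels):
    """Collapse an H x W x C feature map to C values.

    kernels[k] is an H x W weight matrix learned for channel k, so every
    channel is reduced by its own weighted sum instead of a shared average.
    """
    h, w = len(feature_map), len(feature_map[0])
    c = len(feature_map[0][0])
    out = []
    for k in range(c):
        s = 0.0
        for i in range(h):
            for j in range(w):
                s += feature_map[i][j][k] * kernels[k][i][j]
        out.append(s)
    return out

# 2x2x2 feature map: channel 0 keeps only the top-left value, channel 1
# takes a uniform average. GAP is the special case where every kernel
# holds the constant 1 / (h * w).
fmap = [[[1.0, 10.0], [2.0, 20.0]], [[3.0, 30.0], [4.0, 40.0]]]
kernels = [[[1.0, 0.0], [0.0, 0.0]],
           [[0.25, 0.25], [0.25, 0.25]]]
print(depthwise_collapse(fmap, kernels))  # [1.0, 25.0]
```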
\subsubsection{Constraints for Overfitting Reduction}
\label{224}
After several investigations, we noticed that although the architecture described in \ref{223} led to higher accuracy, the models overfit after several training epochs. Based on our experiments, the overfitting mainly comes from the large kernel size of the depthwise convolutional layers. Moreover, if we train the models on larger images, e.g., 512×512 resolution, the spatial size of the generated feature map, and hence the depthwise convolutional layer's kernel size, becomes 16×16, making the overfitting even worse. We therefore propose two solutions:
\begin{enumerate}
\item Applying a constraint to the depthwise convolutional layer so that its weights do not become negative during training. As the depthwise convolutional layer's weights are applied to the feature map's neurons, it is better that they do not become negative, because summing negative weights with positive ones can cancel out information. This constraint helps a lot in diminishing model overfitting.
\item Placing an average pooling layer before the depthwise convolutional layer. This shrinks the spatial size of the feature map and therefore the kernel size of the subsequent depthwise convolutional layer. Reducing the depthwise kernel to 3×3, 4×4, or 5×5 decreases the chance of overfitting, so the whole model learns more reliably.
For 224×224 images, we set the average pooling kernel to 2×2, changing the feature map's, and thus the depthwise convolutional layer's, kernel size from 7×7 to 3×3. For 512×512 images, where the feature map's spatial size is 16×16, we set the average pooling kernel to 3×3 so that the output feature map's spatial size becomes 5×5.
\end{enumerate}
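The first solution amounts to projecting the depthwise kernel onto non-negative values after each update; in Keras this can be done by passing a constraint such as `NonNeg` through the layer's `depthwise_constraint` argument. A minimal sketch of the projection itself (`project_nonneg` is a hypothetical helper):

```python
def project_nonneg(weights):
    """Clip every weight below zero to zero (non-negative projection).

    Keras-style weight constraints apply this kind of projection after
    each optimizer step, so negative and positive contributions of the
    same channel can never cancel each other out.
    """
    return [w if w > 0.0 else 0.0 for w in weights]

print(project_nonneg([0.3, -0.1, 0.0, 1.2]))  # [0.3, 0.0, 0.0, 1.2]
```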
Researchers can also test other values for the average pooling layer's kernel size, which may lead to better performance. Our goal in this paper is to show that implementing these techniques improves learning effectiveness.
It must be noted that, based on our experiments, using bias parameters in the depthwise convolutional layers and omitting activation functions obtains better results. No regularization algorithm was utilized either. The final proposed architecture is shown in figure \ref{224-avg-depthwise-const}.
\subsubsection{Computational Costs}
\label{225}
In Table \ref{cost}, we explore and compare the computational costs of our proposed architectures for the Xception \cite{chollet2017xception}, ResNet50 \cite{he2015deep}, and DenseNet121 \cite{huang2018densely} models. The numbers in this table are based on 224×224 input images and 70 classes.
From this table, we can see that models with our architecture contain almost the same number of parameters as models with the GAP layer. For example, ResNet50 with our architecture has 23,751,622 trainable parameters, while ResNet50 with the GAP layer contains 23,731,142, which is almost equal.
This means that our architecture processes all the feature map's data, losing no spatial information, without increasing computational cost.
Based on Table \ref{cost}, the ResNet50, DenseNet121, and Xception models with our architecture have 0.08\%, 0.14\%, and 0.09\% more weights than the same models with GAP, respectively. Increasing model weights by less than 0.2\% does not affect the computational cost.
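The parameter counts in Table \ref{cost} can be reproduced with a back-of-the-envelope calculation (a sketch under our stated configuration: per-channel depthwise kernels with biases, no activation, and a final FC layer; `head_params` is a hypothetical helper):

```python
def head_params(h, w, c, classes, pool=None):
    """Parameters a classification head adds on top of the base model.

    pool=None -> the depthwise kernel covers the full h x w feature map;
    pool=p    -> a p x p average pooling (valid padding, floor division)
                 first shrinks the map to (h // p) x (w // p).
    """
    kh, kw = (h // pool, w // pool) if pool else (h, w)
    depthwise = kh * kw * c + c        # per-channel kernels + channel biases
    fc = c * classes + classes         # final FC weights + biases
    return depthwise + fc

base = 23_587_712                      # ResNet50 backbone (Table values)
print(base + 2048 * 70 + 70)                       # GAP head: 23731142
print(base + head_params(7, 7, 2048, 70))          # depthwise: 23833542
print(base + head_params(7, 7, 2048, 70, pool=2))  # avg+depthwise: 23751622
```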
\begin{table*}[!ht]
\centering
\large
\caption{This table shows the number of parameters (weights) of three models using different classification layers. These numbers are based on 70 classes of images with 224×224 resolution.
Base Model refers to the feature extraction part without any classification layers. Base Model with Global Average Pooling applies the GAP layer in the classification section for feature compression. Base Model with Depthwise Convolution is the model with the depthwise convolutional layer for analyzing spatial data.
Base Model with Average Pooling and Depthwise Convolution uses a pre-averaging layer with a 2×2 kernel to diminish the feature map's spatial size and prevent the depthwise convolutional layer from overfitting.}
\begin{adjustbox}{width=1\linewidth}
\begin{tabular}{|l|l|l|l|l|}
\hline
\begin{tabular}[c]{@{}l@{}}Model\\ Name\end{tabular} & \begin{tabular}[c]{@{}l@{}}Base Model\end{tabular} & \begin{tabular}[c]{@{}l@{}}Base Model\\ with\\ Global Average Pooling\end{tabular} & \begin{tabular}[c]{@{}l@{}}Base Model \\ with\\ Depthwise Convolution\end{tabular} & \begin{tabular}[c]{@{}l@{}}Base Model \\ with Average Pooling and\\ Depthwise Convolution\end{tabular} \\ \hline
ResNet50 & 23,587,712 & 23,731,142 & 23,833,542 & 23,751,622 \\ \hline
Xception & 20,861,480 & 21,004,910 & 21,107,310 & 21,025,390 \\ \hline
DenseNet121 & 7,037,504 & 7,109,254 & 7,160,454 & 7,119,494 \\ \hline
\end{tabular}
\end{adjustbox}
\label{cost}
\end{table*}
\newpage
\section{Experimental Results}
\label{3}
\subsection{Dataset}
\label{21}
We have used three datasets for our investigations:
\begin{enumerate}
\item A part of the ImageNet dataset \cite{5206848} \footnote{ This dataset is shared at \url{https://www.kaggle.com/mohammadrahimzadeh/imagenet-70classes}}
\item Intel image classification challenge \cite{intel} \footnote{ This dataset is shared at \url{https://www.kaggle.com/puneet6060/intel-image-classification}}
\item MIT Indoors Scenes \cite{quattoni2009recognizing} \footnote{ This dataset is shared at \url{https://www.kaggle.com/itsahmad/indoor-scenes-cvpr-2019}}
\end{enumerate}
The details of these datasets are presented in Table \ref{dataset}.
We utilized three different datasets to investigate various criteria and show that our architecture enhances classification results in general.
The ImageNet dataset \cite{5206848} is a popular, large dataset and is usually the reference for evaluating deep convolutional models. Developers who train their models on it typically have access to several powerful GPUs; otherwise, training would be impossible or very time-consuming due to the roughly 1.3 million images in the dataset. As we did not have such facilities, we selected 70 of its 1000 classes for our experiments. The full ImageNet dataset carries between 1000 and 1500 training images and 50 validation images per class; our selected subset contains 500 training images and 50 validation images per class, for 31,500 training images and 3500 validation images in total. Although we have fewer training images than the full dataset, the number of validation images per class is the same.
Although we explored our models on two other datasets, since ImageNet is the reference dataset for image classification tasks, we also wished to evaluate our architecture on part of it to show our proposed methods' full ability.
The Intel Image Classification dataset \cite{intel} is another dataset we used for our investigations. Intel Corporation provided it to create another benchmark in image classification. It contains 14,034 training images and 3000 validation images belonging to six classes: buildings, forest, glacier, mountain, sea, and street. The default images of this dataset are 150×150 pixels. We utilized the whole dataset for the training and validation procedures.
MIT has also introduced a dataset for image classification benchmarks named MIT Indoors Scenes \cite{quattoni2009recognizing}. It includes 67 categories of images from different scenes and views, such as bookstore, garage, gym, library, restaurant, and office. There are 5360 training images and 1340 validation images in this dataset. Its main challenge is the small number of training images (80 per class) compared to other datasets, which makes classification difficult and learning weak. We investigated our models on this dataset to show our architecture's performance under challenging circumstances.
\begin{table*}
\centering
\large
\caption{Details of the utilized datasets are presented in this table.}
\begin{tabular}{|l|l|l|l|}
\hline
Dataset & Number of Classes & Training Images & Test Images \\ \hline
Sub-ImageNet & 70 & 31500 & 3500 \\ \hline
Intel Image Classification Challenge & 6 & 14034 & 3000 \\ \hline
MIT Indoors Scenes & 67 & 5360 & 1340 \\ \hline
\end{tabular}
\label{dataset}
\end{table*}
We implemented our models on \href{https://colab.research.google.com/}{Google Colaboratory Notebooks}, which allocated a Tesla P100 GPU, a 2.00GHz Intel Xeon CPU, and 12GB of RAM on an Ubuntu server. We used the Keras library \cite{chollet2015keras} with the TensorFlow backend \cite{tensorflow2015-whitepaper} for developing and running the models.
Data augmentation was used for all the datasets to enhance the training procedure. All the models used the same data augmentation techniques, as detailed in Table \ref{data augment}. To show our architecture's full capability, we selected models from three major families of convolutional neural networks, the ResNet, DenseNet, and Inception families, for our experiments. We wish to show that our proposed ideas can improve every deep convolutional network.
\begin{table*}
\centering
\large
\caption{This table shows the data augmentation techniques that we utilized for training the models.}
\begin{tabular}{|l|l|}
\hline
Technique & Range / Usage \\ \hline
Horizontal Flip & Yes \\ \hline
Vertical Flip & Yes \\ \hline
Zoom & 20 \% \\ \hline
Rotation & 360 degree \\ \hline
Width Shift & 10 \% \\ \hline
Height Shift & 10\% \\ \hline
Channel Shift & 50 pixels \\ \hline
Brightness Change & 0 - 1.2 \\ \hline
Preprocessing & Yes (Caffe) \\ \hline
\end{tabular}
\label{data augment}
\end{table*}
We have divided our experiments into two categories based on the resolution of the input images: 224×224 and 512×512.
\subsection{Experiments on Images with 224×224 Pixels Resolution}
\label{31}
In the ImageNet image classification benchmark, it is common to resize the images to 224×224 resolution before running the models.
Although some models like InceptionV3 \cite{szegedy2016rethinking} converted ImageNet images to a different size (299×299), most of the proposed deep convolutional networks chose 224 as the image side length.
Accordingly, at this stage of our work, the images of all three datasets were resized to 224×224 resolution.
The parameters used for each dataset are listed in Table \ref{param}.
\begin{table*}[!hb]
\centering
\large
\caption{This table shows the parameters adopted for training the models on 224x224 images.}
\begin{adjustbox}{width=1\linewidth}
\begin{tabular}{l|l|l|l|}
\cline{2-4}
& Sub-ImageNet & MIT Indoors Scenes & Intel Image Classification \\ \hline
\multicolumn{1}{|l|}{Batch Size} & 70 & 70 & 70 \\ \hline
\multicolumn{1}{|l|}{Image Size} & 224x224 & 224x224 & 224x224 \\ \hline
\multicolumn{1}{|l|}{Optimizer} & SGD (momentum:0.9) & SGD (momentum:0.9) & SGD (momentum:0.9) \\ \hline
\multicolumn{1}{|l|}{Initial Learning rate} & 0.045 & 0.045 & 0.045 \\ \hline
\multicolumn{1}{|l|}{Learning rate decay} & 0.94 every 2 epochs & 0.94 every 2 epochs & 0.94 every 2 epochs \\ \hline
\multicolumn{1}{|l|}{Epochs} & 148 & 200 & 220 \\ \hline
\multicolumn{1}{|l|}{Transfer Learning} & No & No & No \\ \hline
\multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}Base Model\\ Weight Initializer\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Keras \\ Random Initialization\end{tabular} & \begin{tabular}[c]{@{}l@{}}Keras \\ Random Initialization\end{tabular} & \begin{tabular}[c]{@{}l@{}}Keras \\ Random Initialization\end{tabular} \\ \hline
\multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}Depthwise Convolution Layer\\ Kernel Weight Initializer\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Random Normal Initialization\\ (Mean:0, Std:0.01)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Random Normal Initialization\\ (Mean:0, Std:0.01)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Random Normal Initialization\\ (Mean:0, Std:0.01)\end{tabular} \\ \hline
\multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}Depthwise Convolution Layer\\ Bias Initializer\end{tabular}} & Zero Bias Initialization & Zero Bias Initialization & Zero Bias Initialization \\ \hline
\end{tabular}
\end{adjustbox}
\label{param}
\end{table*}
As described in Table \ref{param}, we set the batch size to 70 and used the SGD optimizer with a momentum of 0.9. We selected an adaptive learning rate inspired by \cite{chollet2015keras}, with an initial value of 0.045 and a decay of 0.94 every two epochs.
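This schedule can be written as a one-line staircase function (a sketch with a hypothetical helper name; in Keras it could be wired up through a `LearningRateScheduler` callback):

```python
def learning_rate(epoch, initial=0.045, decay=0.94, every=2):
    """Staircase schedule: multiply the rate by `decay` every `every` epochs."""
    return initial * decay ** (epoch // every)

print(learning_rate(0))  # 0.045
print(learning_rate(2))  # 0.045 * 0.94, i.e. about 0.0423
```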
The important point is that we did not use pre-trained weights at the beginning of training, because the available pre-trained weights come from models trained on the ImageNet dataset with 224×224 images using the GAP layer. Those weights were trained with the GAP layer for many epochs, meaning the whole model's weights are tuned to suit global averaging at the end.
As we want to compare the performance of our architecture with the normal condition (using the GAP layer), the comparison would not be fair if the initial weights had been produced and converged using the GAP layer.
We wish to prove that training the models with our architecture from scratch results in better learning and classification accuracy, which can create a novel development in image classification benchmarks. Therefore, all the weights were initialized using random initialization functions.
In this section, we investigate six versions of classifiers for each model:
\begin{enumerate}
\item Base model with GAP layer: This is the simple classification method of using the GAP layer for compressing the features and feeding them to the final classification layer. The architecture is depicted in figure \ref{224-gap}. \label{v1}
\item Base model with GAP layer and 50\% Dropout:
This architecture adds a dropout layer to the Base model with GAP layer. figure \ref{224-gap-dp} presents this architecture. \label{v2}
\item Base model with Depthwise Convolutional layer: This architecture shows the usage of our proposed depthwise convolutional layer for extracting spatial data and feeding the enhanced and compressed feature map to the final FC layer. figure \ref{224-depthwise} clarifies the idea. \label{v3}
\item Base model with Depthwise Convolutional layer and Non-negative constraint: This architecture is related to applying the non-negative constraint to the depthwise convolutional layer, which forces its weights not to become negative while training. This technique was implemented to improve learning and decrease overfitting. figure \ref{224-depthwise-const} represents the architecture. \label{v4}
\item Base model with Averaging, Depthwise Convolutional layer, and Non-negative constraint:
In this architecture, a pre-averaging layer and a non-negative constraint are applied to the depthwise convolutional layer to enhance learning effectiveness and prevent overfitting. Figure \ref{224-avg-depthwise-const} expresses the architecture. \label{v5}
\item Base model with Averaging, Depthwise Convolutional layer, Non-negative constraint, and 50\% Dropout: This is the same as the previous architecture, with the difference of utilizing a dropout layer (figure \ref{224-avg-depthwise-const-dp}). \label{v6}
\end{enumerate}
In the following, we describe the results for each dataset separately. For each dataset, three popular deep convolutional networks from the DenseNet, ResNet, and Inception families were chosen for training and validating our architectures.
\begin{figure}[!ht]
\centering
\subfloat[Deep convolution model with the GAP layer at its classification section.]{\label{224-gap}\includegraphics[width=\linewidth]{224-gap.pdf}}
\newline
\subfloat[Deep convolution model with the GAP and dropout layers at its classification section.]{\label{224-gap-dp}\includegraphics[width=\linewidth]{224-gap-dp.pdf}}
\newline
\subfloat[Deep convolution model with the proposed depthwise convolutional layer for fixing spatial data loss problem.]{\label{224-depthwise}\includegraphics[width=\linewidth]{224-depthwise.pdf}}
\newline
\subfloat[Deep convolution model with the proposed depthwise convolutional layer for extracting feature map's spatial and channel data and a non-negative constraint to prevent overfitting.]{\label{224-depthwise-const}\includegraphics[width=\linewidth]{224-depthwise-const.pdf}}
\newline
\subfloat[Deep convolution model with the proposed depthwise convolutional layer for extracting feature map's spatial and channel data and a non-negative constraint and pre-averaging layer to prevent overfitting.]{\label{224-avg-depthwise-const}\includegraphics[width=\linewidth]{224-avg-depthwise-const.pdf}}
\newline
\subfloat[Deep convolution model with the proposed depthwise convolutional layer for extracting feature map's spatial and channel data. This model is enhanced by non-negative constraint, pre-averaging, and dropout layers.]{\label{224-avg-depthwise-const-dp}\includegraphics[width=\linewidth]{224-avg-depthwise-const-dp.pdf}}
\caption{These figures represent the six architectures we implemented for our experiments on images with 224×224 resolution. The red boxes are the model's computational layers, and the blue boxes show the output arrays.}
\label{224-all}
\end{figure}
\subsubsection{Sub-ImageNet Dataset}
\label{311}
We selected 70 classes of the ImageNet dataset, including 31,500 training images and 3500 validation images, for evaluating the models' performance. This selected dataset can be accessed at \href{https://www.kaggle.com/mohammadrahimzadeh/imagenet-70classes}{https://www.kaggle.com/mohammadrahimzadeh/imagenet-70classes}.
We trained the models for 148 epochs on this dataset.
Three models (DenseNet121 \cite{huang2018densely}, ResNet50 \cite{he2015deep}, and Xception \cite{chollet2017xception}) were selected for experimenting on this dataset. For each of these three models, we evaluated six different versions:
\begin{itemize}
\item Base model with GAP layer (version \ref{v1}, figure \ref{224-gap})
\item Base model with GAP layer and 50\% Dropout (version \ref{v2}, figure \ref{224-gap-dp})
\item Base model with Depthwise Convolutional layer (version \ref{v3}, figure \ref{224-depthwise})
\item Base model with Depthwise Convolutional layer and Non-negative constraint (version \ref{v4}, figure \ref{224-depthwise-const})
\item Base model with Averaging, Depthwise Convolutional layer, and Non-negative constraint (version \ref{v5}, figure \ref{224-avg-depthwise-const})
\item Base model with Averaging, Depthwise Convolutional layer, Non-negative constraint, and 50\% Dropout (version \ref{v6}, figure \ref{224-avg-depthwise-const-dp})
\end{itemize}
The architectures of the implemented models are depicted in the referenced figures. The results of this stage of our work are presented in Table \ref{224-imagenet} and figure \ref{224-imagenet-fig}.
\begin{table*}[!ht]
\centering
\large
\caption{In this table, Top-1 and Top-5 accuracies and the number of parameters (weights) are reported for the models trained on the sub-ImageNet dataset. For each model, six different versions of the classification layers have been investigated. Model+GAP is the default classification model that uses the Global Average Pooling layer (figure \ref{224-gap}). Model+GAP+DP is the same as Model+GAP but utilizes a 50\% dropout layer (figure \ref{224-gap-dp}).
Model+Depthwise Conv stands for using our depthwise convolutional layer for spatial data analysis (figure \ref{224-depthwise}). Model+Depthwise Conv+Constraints uses our depthwise convolutional layer alongside the non-negative constraint (figure \ref{224-depthwise-const}). Model+Avg+Depthwise Conv+Constraints shows the usage of a 2×2 average pooling before the depthwise convolutional layer with constraints (figure \ref{224-avg-depthwise-const}). Model+Avg+Depthwise Conv+Constraints+DP applies a 50\% dropout layer to Model+Avg+Depthwise Conv+Constraints (figure \ref{224-avg-depthwise-const-dp}).}
\begin{adjustbox}{width=1\linewidth}
\begin{tabular}{|l|l|l|l|l|}
\hline
Model name & Model + Classification type & Parameters & Top-1 accuracy & Top-5 accuracy \\ \hline
& Xception+GAP & 21,004,910 & 0.6806 & 0.8769 \\ \cline{2-5}
& Xception+GAP+DP & 21,004,910 & 0.6909 & 0.8920 \\ \cline{2-5}
Xception & Xception+Depthwise Conv & 21,107,310 & 0.6760 & 0.8751 \\ \cline{2-5}
& Xception+Depthwise Conv+Constraints & 21,107,310 & 0.7063 & 0.8929 \\ \cline{2-5}
& Xception+Avg+Depthwise Conv+Constraints & 21,025,390 & 0.7128 & 0.8989 \\ \cline{2-5}
& Xception+Avg+Depthwise Conv+Constraints+DP & 21,025,390 & \textbf{0.7223} & \textbf{0.9106} \\ \hline
& ResNet50+GAP & 23,731,142 & 0.5789 & 0.8066 \\ \cline{2-5}
& ResNet50+GAP+DP & 23,731,142 & 0.5829 & 0.8194 \\ \cline{2-5}
ResNet50 & ResNet50+Depthwise Conv & 23,833,542 & 0.5969 & 0.8240 \\ \cline{2-5}
& ResNet50+Depthwise Conv+Constraints & 23,833,542 & 0.6149 & 0.8506 \\ \cline{2-5}
& ResNet50+Avg+Depthwise Conv+Constraints & 23,751,622 & 0.6229 & 0.8634 \\ \cline{2-5}
& ResNet50+Avg+Depthwise Conv+Constraints+DP & 23,751,622 & \textbf{0.6377} & \textbf{0.8677} \\ \hline
& DenseNet121+GAP & 7,109,254 & 0.6289 & 0.8508 \\ \cline{2-5}
& DenseNet121+GAP+DP & 7,109,254 & 0.6303 & 0.8520 \\ \cline{2-5}
DenseNet121 & DenseNet121+Depthwise Conv & 7,160,454 & 0.6171 & 0.8446 \\ \cline{2-5}
& DenseNet121+Depthwise Conv+Constraints & 7,160,454 & 0.6399 & 0.8651 \\ \cline{2-5}
& DenseNet121+Avg+Depthwise Conv+Constraints & 7,119,494 & 0.6495 & 0.8694 \\ \cline{2-5}
& DenseNet121+Avg+Depthwise Conv+Constraints+DP & 7,119,494 & \textbf{0.6654} & \textbf{0.8740} \\ \hline
\end{tabular}
\end{adjustbox}
\label{224-imagenet}
\end{table*}
\begin{figure}[!ht]
\centering
\subfloat[DenseNet121 validation accuracy per epochs]{\label{224-imagenet-fig-densenet121}\includegraphics[width=0.5\linewidth]{val_accuracy-imagenet-DenseNet121.pdf}}
\subfloat[ResNet50 validation accuracy per epochs]{\label{224-imagenet-fig-resnet50}\includegraphics[width=0.5\linewidth]{val_accuracy-imagenet-ResNet50.pdf}}
\newline
\subfloat[Xception validation accuracy per epochs]{\label{224-imagenet-fig-xception}\includegraphics[width=1\linewidth]{val_accuracy-imagenet-Xception.pdf}}
\newline
\caption{This figure shows the validation accuracy of the trained models on the sub-ImageNet dataset. For each family, six versions of models (based on classification techniques) have been trained and evaluated to clarify our proposed architecture's performance. The images were resized to 224×224 pixels. The Averaging layer's kernel size was 2×2, and DP refers to a dropout layer with a 50\% dropout rate.}
\label{224-imagenet-fig}
\end{figure}
Based on the results in figure \ref{224-imagenet-fig} and Table \ref{224-imagenet}, using a single depthwise convolutional layer yields higher validation accuracy in the first epochs of training but falls into overfitting in the following epochs. Applying a non-negative constraint to the depthwise convolutional layer decreases the overfitting effects, and the model's classification accuracy rises. The results of the models with a depthwise convolutional layer and constraints are therefore significantly better than those using the GAP layer.
Utilizing an averaging layer before the depthwise convolutional layer further reduces overfitting and enhances learning quality. We also investigated the use of dropout in our final proposed model to show that our architecture can be further improved by dropout and does not depend on all features.
Our final proposed architecture, called Wise-SrNet (figure \ref{224-avg-depthwise-const-dp}), increased the Top-1 accuracy of the Xception, ResNet50, and DenseNet121 models with GAP and dropout layers by 3.14\%, 5.48\%, and 3.51\%, respectively. We also report the number of weights of each model in Table \ref{224-imagenet} to show that our architectures do not increase computational costs compared to the GAP layer.
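As a sanity check, the parameter differences between the heads in Table \ref{224-imagenet} can be reproduced with simple arithmetic. The only assumption in this sketch is the backbone output shape (a 7×7 map with 2048 channels for Xception/ResNet50 and 1024 channels for DenseNet121, as in the standard Keras implementations):

```python
def depthwise_head_extra(channels, k):
    # A DepthwiseConv2D with a k x k kernel adds k*k weights + 1 bias per channel
    return channels * (k * k + 1)

# Xception+GAP has 21,004,910 parameters (Table values):
assert 21_004_910 + depthwise_head_extra(2048, 7) == 21_107_310  # +Depthwise Conv
assert 21_004_910 + depthwise_head_extra(2048, 3) == 21_025_390  # +Avg(2x2)+Depthwise
# DenseNet121 emits 1024 channels, hence the smaller overhead:
assert 7_109_254 + depthwise_head_extra(1024, 7) == 7_160_454
```

The 2×2 averaging shrinks the 7×7 map to 3×3, which is why the averaged variant adds only 20,480 parameters instead of 102,400 for Xception.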
Here, a question may arise: why is the reported Top-1 classification accuracy of the models with the GAP layer lower than what their developers reported on the whole ImageNet dataset, while the Top-5 accuracy of our models is higher? There are two answers to this question:
\begin{enumerate}
\item The ImageNet dataset includes around 1000-1500 training images and 50 validation images per class, while our sub-ImageNet dataset consists of 500 training images and 50 validation images per class. As our training set contains almost one-third of the training images of the ImageNet dataset while the number of validation images per class is the same, it is clear that we cannot reach the results that the developers of the same models with the GAP layer reached on the whole ImageNet dataset.
\item The developers of deep convolutional models like Xception and ResNet had access to powerful hardware and many GPUs, allowing them to train the models on ImageNet with a high batch size (usually 256). As we did not have many GPUs available, we could not train our models on the whole ImageNet dataset, and the limited GPU RAM forced us to set the batch size to 70 for 224×224 images. A smaller batch size certainly decreases learning quality and efficiency.
\end{enumerate}
For these two reasons, the Top-1 classification accuracy we obtained with the GAP layer is lower than what the default models' authors reported. However, as we had fewer classes than the whole ImageNet dataset (70 vs. 1000), the Top-5 accuracy of our trained models (with the GAP layer) is higher than that of the default models with the GAP layer.
Our aim is to show that using our architecture significantly increases classification accuracy, and our results support this. Researchers with access to more GPUs can train deep convolutional models using our architecture on the whole ImageNet dataset and report the improved results. We believe our method can yield a substantial improvement on classification benchmarks.
\subsubsection{Intel Image Classification Challenge}
\label{312}
Intel Image Classification Dataset \cite{intel} consists of 14034 training images and 3000 test images belonging to 6 classes. The default resolution of this dataset's images is 150×150 pixels, which we resized to 224×224. For this dataset, we chose Xception \cite{chollet2017xception}, DenseNet169 \cite{huang2018densely}, and ResNet50 \cite{he2015deep} models for further experiments.
We trained the models for 220 epochs.
On this dataset, we evaluated three versions of each model:
\begin{itemize}
\item Base model with GAP layer (version \ref{v1}, figure \ref{224-gap})
\item Base model with Depthwise Convolutional layer and Non-negative constraint (version \ref{v4}, figure \ref{224-depthwise-const})
\item Base model with Averaging, Depthwise Convolutional layer and Non-negative constraint (version \ref{v5}, figure \ref{224-avg-depthwise-const})
\end{itemize}
The results of our trained models on the Intel Image Classification dataset are detailed in Table \ref{224-intel} and figure \ref{224-intel-all}.
\begin{table*}[!hb]
\centering
\large
\caption{This table shows the results of the trained models on the Intel Image Classification dataset. Xception, ResNet50, and DenseNet169 models were selected for evaluating our architecture. For each of these models, three versions were validated. Model+GAP is the default classification model that uses the Global Average Pooling layer (figure \ref{224-gap}). Model+Depthwise Conv+Constraints uses our depthwise convolutional layer alongside the non-negative constraint (figure \ref{224-depthwise-const}). Model+Avg+Depthwise Conv+Constraints shows the usage of an average pooling before the depthwise convolutional layer with constraints (figure \ref{224-avg-depthwise-const}).}
\begin{tabular}{|l|l|l|l|}
\hline
Model name & Model + Classification type & Parameters & Top-1 accuracy \\ \hline
& Xception+GAP & 20,873,774 & 0.8787 \\ \cline{2-4}
Xception & Xception+Depthwise Conv+Constraints & 20,976,174 & 0.8977 \\ \cline{2-4}
& Xception+Avg+Depthwise Conv+Constraints & 20,894,254 & \textbf{0.9013} \\ \hline
& ResNet50+GAP & 23,600,006 & 0.8203 \\ \cline{2-4}
ResNet50 & ResNet50+Depthwise Conv+Constraints & 23,702,406 & 0.8713 \\ \cline{2-4}
& ResNet50+Avg+Depthwise Conv+Constraints & 23,620,486 & \textbf{0.8773} \\ \hline
& DenseNet169+GAP & 12,652,870 & 0.8457 \\ \cline{2-4}
DenseNet169 & DenseNet169+Depthwise Conv+Constraints & 12,736,070 & 0.8713 \\ \cline{2-4}
& DenseNet169+Avg+Depthwise Conv+Constraints & 12,669,510 & \textbf{0.8783} \\ \hline
\end{tabular}
\label{224-intel}
\end{table*}
\begin{figure}[!ht]
\centering
\subfloat[DenseNet169 validation accuracy per epochs]{\label{224-intel-val-densenet169}\includegraphics[width=0.5\linewidth]{val_accuracy-intel-DenseNet169.pdf}}
\subfloat[DenseNet169 training accuracy per epochs]{\label{224-intel-acc-densenet169}\includegraphics[width=0.5\linewidth]{accuracy-intel-DenseNet169.pdf}}
\newline
\subfloat[ResNet50 validation accuracy per epochs]{\label{224-intel-val-resnet50}\includegraphics[width=0.5\linewidth]{val_accuracy-intel-ResNet50.pdf}}
\subfloat[ResNet50 training accuracy per epochs]{\label{224-intel-acc-resnet50}\includegraphics[width=0.5\linewidth]{accuracy-intel-ResNet50.pdf}}
\newline
\subfloat[Xception validation accuracy per epochs]{\label{224-intel-val-xception}\includegraphics[width=0.5\linewidth]{val_accuracy-intel-Xception.pdf}}
\subfloat[Xception training accuracy per epochs]{\label{224-intel-acc-xception}\includegraphics[width=0.5\linewidth]{accuracy-intel-Xception.pdf}}
\caption{In this figure, the validation and training accuracies of the trained models on the Intel Image Classification dataset for 220 epochs are presented. The images were resized to 224×224 pixels. The Averaging layer's kernel size was 2×2.}
\label{224-intel-all}
\end{figure}
As mentioned, three different classification techniques were investigated for each of these networks: The GAP layer, the Depthwise convolutional layer with constraints, and Averaging with the depthwise convolutional layer and the non-negative constraint.
Figures \ref{224-intel-acc-densenet169}, \ref{224-intel-acc-resnet50}, and \ref{224-intel-acc-xception} present the training accuracy of the trained models on this dataset. The figures clearly show that using a depthwise convolutional layer with the non-negative constraint enhances the training process. An important point revealed by these figures is that the ResNet50 network performs much better with our architecture than with the GAP layer.
Figures \ref{224-intel-val-densenet169}, \ref{224-intel-val-resnet50}, and \ref{224-intel-val-xception} and Table \ref{224-intel} show the Top-1 accuracy of the trained models. The information in this table and these figures also confirms that our architecture can significantly improve classification results. Using averaging before the depthwise convolutional layer further aids the classification process by preventing the models from overfitting.
Our proposed architecture increases the Top-1 accuracy of the Xception, ResNet50, and DenseNet169 models by 2.26\%, 5.7\%, and 3.26\% on the Intel Image Classification dataset, respectively. The reported parameter counts show that the models with our architecture contain almost the same number of parameters as those with the GAP layer.
It is noteworthy that dropout layers can also improve classification results, but our goal in this research is to show that our architecture performs better than the old GAP method. We nevertheless investigated the use of dropout on the other datasets.
\subsubsection{MIT Indoors Scenes}
\label{313}
The MIT Indoors Scenes dataset \cite{quattoni2009recognizing} includes 67 classes of images of different scenes and views. It consists of 5360 training images and 1340 validation images. The native resolutions of its images vary, but they are generally of good quality. At this stage, we resized all the images to 224×224 for our experiments.
Xception, DenseNet169, and ResNet50 networks were chosen as our reference models in this phase. For each of these networks, five models with different classification layers were evaluated:
\begin{itemize}
\item Base model with GAP layer (version \ref{v1}, figure \ref{224-gap})
\item Base model with GAP layer and 50\% Dropout (version \ref{v2}, figure \ref{224-gap-dp})
\item Base model with Depthwise Convolutional layer and Non-negative constraint (version \ref{v4}, figure \ref{224-depthwise-const})
\item Base model with Averaging, Depthwise Convolutional layer, and Non-negative constraint (version \ref{v5}, figure \ref{224-avg-depthwise-const})
\item Base model with Averaging, Depthwise Convolutional layer, Non-negative constraint and 50\% Dropout (version \ref{v6}, figure \ref{224-avg-depthwise-const-dp})
\end{itemize}
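For intuition, the head under comparison can be sketched in a few lines of NumPy. This is a simplified forward pass only (the real layer is a trained Keras \texttt{DepthwiseConv2D} with a non-negative kernel constraint); it illustrates that the GAP layer is just one fixed point in the search space our layer can learn over:

```python
import numpy as np

def gap(fmap):
    # Global Average Pooling: (H, W, C) -> (C,), uniform spatial weights
    return fmap.mean(axis=(0, 1))

def depthwise_head(fmap, weights, bias=0.0):
    # Depthwise conv whose kernel covers the whole map: a learned,
    # per-channel weighted sum over spatial positions, (H, W, C) -> (C,)
    return (fmap * weights).sum(axis=(0, 1)) + bias

fmap = np.random.default_rng(0).random((7, 7, 2048))

# With uniform non-negative weights 1/(H*W) and zero bias, the depthwise
# head reduces exactly to GAP; trained weights can instead emphasize
# informative spatial positions per channel.
uniform = np.full(fmap.shape, 1.0 / 49)
assert np.allclose(depthwise_head(fmap, uniform), gap(fmap))
```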
Table \ref{224-mit} and figure \ref{224-mit-fig} show our obtained results on this dataset.
\begin{table*}
\centering
\large
\caption{In this table, the Top-1 accuracy and Top-5 accuracy of the trained models with different classification layers are presented. The number of each model's parameters is also given to indicate the models' computational costs.}
\begin{adjustbox}{width=1\linewidth}
\begin{tabular}{|l|l|l|l|l|}
\hline
Model name & Model + Classification type & Parameters & Top-1 accuracy & Top-5 accuracy \\ \hline
& Xception+GAP & 20,998,763 & 0.3552 & 0.6485 \\ \cline{2-5}
& Xception+GAP+DP & 20,998,763 & 0.3571 & 0.6515 \\ \cline{2-5}
Xception & Xception+Depthwise Conv+Constraints & 21,101,163 & 0.3734 & 0.6622 \\ \cline{2-5}
& Xception+Avg+Depthwise Conv+Constraints & 21,019,243 & 0.4003 & 0.6943 \\ \cline{2-5}
& Xception+Avg+Depthwise Conv+Constraints+DP & 21,019,243 & \textbf{0.4125} & \textbf{0.7011} \\ \hline
& ResNet50+GAP & 23,724,995 & 0.2306 & 0.5179 \\ \cline{2-5}
& ResNet50+GAP+DP & 23,724,995 & 0.2313 & 0.5321 \\ \cline{2-5}
ResNet50 & ResNet50+Depthwise Conv+Constraints & 23,827,395 & 0.2955 & 0.5582 \\ \cline{2-5}
& ResNet50+Avg+Depthwise Conv+Constraints & 23,745,475 & 0.2988 & 0.5799 \\ \cline{2-5}
& ResNet50+Avg+Depthwise Conv+Constraints+DP & 23,745,475 & \textbf{0.3075} & \textbf{0.5806} \\ \hline
& DenseNet169+GAP & 12,754,435 & 0.2679 & 0.5328 \\ \cline{2-5}
& DenseNet169+GAP+DP & 12,754,435 & 0.2709 & 0.5478 \\ \cline{2-5}
DenseNet169 & DenseNet169+Depthwise Conv+Constraints & 12,837,635 & 0.2986 & 0.5588 \\ \cline{2-5}
& DenseNet169+Avg+Depthwise Conv+Constraints & 12,771,075 & 0.3125 & 0.5590 \\ \cline{2-5}
& DenseNet169+Avg+Depthwise Conv+Constraints+DP & 12,771,075 & \textbf{0.3212} & \textbf{0.5981} \\ \hline
\end{tabular}
\end{adjustbox}
\label{224-mit}
\end{table*}
\begin{figure}[!ht]
\centering
\subfloat[DenseNet169 validation accuracy per epochs]{\label{224-mit-fig-densenet169}\includegraphics[width=0.5\linewidth]{val_accuracy-MIT-DenseNet169.pdf}}
\subfloat[ResNet50 validation accuracy per epochs]{\label{224-mit-fig-resnet50}\includegraphics[width=0.5\linewidth]{val_accuracy-MIT-ResNet50.pdf}}
\newline
\subfloat[Xception validation accuracy per epochs]{\label{224-mit-fig-xception}\includegraphics[width=1\linewidth]{val_accuracy-MIT-Xception.pdf}}
\caption{In this figure, the validation accuracy of the trained models with different classification layers on the MIT Indoors Scenes dataset for 200 epochs is presented. The images were resized to 224×224 pixels. The Averaging layer's kernel size was 2×2, and DP refers to a dropout layer with a 50\% dropout rate.}
\label{224-mit-fig}
\end{figure}
The validation data in Table \ref{224-mit} and figure \ref{224-mit-fig} show that applying a depthwise convolutional layer with constraints results in higher classification accuracy. Our final architecture (figure \ref{224-avg-depthwise-const-dp}) increases the Top-1 validation accuracy of the Xception, ResNet50, and DenseNet169 models with the GAP layer by 5.54\%, 7.62\%, and 5.03\%, respectively.
Why are the classification results on the MIT Indoors Scenes dataset not higher? This dataset contains 5360 training images across 67 classes, which means there are only about 80 images per class. This number is too low for a model to learn from scratch. Since the number of classes is considerable, the models cannot achieve better results and the learning cannot proceed properly. Moreover, we did not use any transfer learning, so the models were trained from scratch; for all these reasons, the models cannot converge to better learning points.
Of course, there are many modifications that could improve the results, such as increasing the image resolution and using transfer learning; we implement these in Section \ref{32} and reach promising results.
\subsection{Experiments on Images with 512×512 Pixels Resolution}
\label{32}
In this stage of our work, we wish to present some interesting findings. In computer vision tasks, utilizing larger input images usually yields higher accuracy. The image resolution shrinks as it passes through the layers of a deep convolutional model, and at the end a compressed feature map is delivered. In the process, some image information, especially information related to smaller objects and areas, may be lost or removed.
So, when the input image is larger, the objects and areas inside the image are larger too; therefore, the possibility of losing important information is lower, and the models show higher precision.
When training on the ImageNet dataset, it is common to resize the images to 224×224 pixels because this is a large dataset, and working with larger images requires much more hardware and is time-consuming and difficult. In other cases, however, many developers choose to work with larger images to obtain better results. Besides, in many medical image analysis cases, infections or hotspots occupy small areas, forcing researchers to use larger images to achieve acceptable results.
This section shows the effect of our architecture when the input images' size is 512×512. Here, we only work with the MIT Indoors Scenes dataset because the original images of this dataset are in good resolution, and the number of classes of this dataset is suitable for our work.
In this section, unlike the previous steps, we used transfer learning from the ImageNet pre-trained weights to accelerate training speed and convergence.
A critical point mentioned before is that using transfer learning from the pre-trained weights of models trained with the Global Average Pooling (GAP) layer would make our comparison unfair.
The available weights were produced by models with the GAP layer that were trained on 224×224 images for many epochs and converged to an optimum.
Training a model with the GAP layer affects all of its weights; in effect, the model learns to export a final feature map suited to global averaging. If we applied these pre-trained weights at the beginning of training when comparing our architecture with the GAP layer, the comparison would not be correct: since the weights were produced for the GAP layer on 224×224 images, GAP-based models would already know how to deal with images of that size in a way completely suited to global averaging, a condition that does not exist for our architecture.
In the current setting, however, the images have 512×512 resolution while the pre-trained weights were produced for 224×224 images, so the two conditions are not very similar. Although these weights were created for models with the GAP layer, we will show that our architecture still works much better than the GAP layer, even with transfer learning from these weights.
We used the same data augmentation techniques as reported in Table \ref{data augment}. All the implemented parameters are given in Table \ref{512-param}. We changed the batch size from 70 to 15 due to the larger RAM requirements of bigger images. We trained for 130 epochs, fewer than the 220 epochs used on the same dataset for 224×224 images, because applying transfer learning at the beginning of training makes the models converge in fewer epochs.
Other parameters are almost the same.
The MIT Indoors Scenes dataset was fully described in section \ref{313}. In this section, we selected Xception \cite{chollet2017xception}, ResNet50V2 \cite{he2016identity}, and DenseNet201 \cite{huang2018densely} for training and investigation. In this phase, we do not compare all of our proposed architectures, because the results of the previous steps show that our final proposed architectures (Wise-SrNet, figures \ref{224-avg-depthwise-const-dp} and \ref{224-avg-depthwise-const}) are more robust than the others.
Most deep convolutional neural networks produce a feature map whose spatial size is 1/32 of the input image resolution. When the image size is 512×512, the extracted feature map will be 16×16. The channel count varies between models; the Xception and ResNet families produce 2048 channels, and the DenseNet121, DenseNet169, and DenseNet201 models generate 1024, 1664, and 1920 channels, respectively.
Now, if we consider the Xception model, the feature map for a 512×512 input image will be 16×16×2048, while a 224×224 image yields a 7×7×2048 feature map. The Global Average Pooling (GAP) layer averages the 16×16=256 values of each channel for 512×512 images, but only the 7×7=49 values of each channel for 224×224 images. Averaging over 256 values can destroy much more spatial information than averaging over 49 spatial values.
This shows that using large images with the GAP layer, especially when there are many classes, is not efficient and cannot yield proper results.
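The counts above follow directly from the 1/32 downsampling of the backbone; a two-line sketch makes the contrast explicit (the downsampling factor is the only assumption):

```python
def values_averaged(image_size, downsample=32):
    # Number of spatial values GAP averages per channel, assuming the
    # usual 1/32 spatial downsampling of the backbone
    side = image_size // downsample
    return side * side

assert values_averaged(224) == 49    # 7x7 feature map
assert values_averaged(512) == 256   # 16x16 feature map
```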
In this phase, we trained, evaluated, and compared three versions of models for each of the Xception, ResNet50V2, and DenseNet201 networks:
\begin{itemize}
\item Base model with GAP layer (figure \ref{512-gap})
\item Base model with Averaging and FC layer (figure \ref{512-avg-dense})
\item Base model with Averaging, Depthwise Convolutional layer, and Non-negative constraint (figure \ref{512-avg-depthwise-const})
\end{itemize}
The architectures of the base model with the GAP layer and of the base model with Averaging, Depthwise Convolutional layer, and Non-negative constraint are depicted in figures \ref{512-gap} and \ref{512-avg-depthwise-const}, respectively.
The input images' resolution is 512×512, so the feature map in most models (Xception and ResNet) is 16×16×2048. Accordingly, we set the averaging kernel size to 3×3 so that the output feature map becomes 5×5×2048. We then set the depthwise convolutional kernel size to 5×5; hence the output is a 1×1×2048 array, which is flattened to a one-dimensional array and fed to the final classification layer as explained in previous sections. Many configurations of the average pooling and depthwise convolutional kernel sizes could be explored for better results, but tuning them is not our intention in this research. Here, we wish to show that using these kinds of architectures instead of the GAP layer increases classification accuracy.
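The shape arithmetic can be checked with a few lines, assuming 'valid' padding and a pooling stride equal to the pool size (the Keras defaults for \texttt{AveragePooling2D}):

```python
def out_side(size, kernel, stride):
    # output side length of a conv/pooling layer with 'valid' padding
    return (size - kernel) // stride + 1

side = 512 // 32                           # backbone output: 16x16
side = out_side(side, kernel=3, stride=3)  # 3x3 average pooling -> 5x5
assert side == 5
side = out_side(side, kernel=5, stride=1)  # 5x5 depthwise conv -> 1x1
assert side == 1
```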
The base model with Averaging and FC layer is another architecture we explore at this stage; it is presented in figure \ref{512-avg-dense}. It is inspired by figure \ref{dense}, which shows a classic classification approach, but with one difference: we could not feed the flattened feature map directly to the final fully connected classification layer. When the image size is 512×512, the feature map shape is 16×16×2048, so the flattened feature map has 524,288 neurons; feeding this array to a final FC layer with 67 classes would add about 35 million (524,288×67 = 35,127,296) weights. Adding this many trainable parameters to the model would cause overfitting and learning deficiency. So we took another approach, as shown in figure \ref{512-avg-dense}: first we applied a 3×3 averaging layer to the feature map to produce a 5×5×2048 array, then flattened it and fed it to the final FC layer. This way, the number of model parameters does not grow so large. Our goal in using this architecture is to show that our proposed architecture surpasses this classic architecture in both learning efficiency and computational cost; in other words, our architecture shows the best performance.
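These head sizes can be verified against the parameter counts in Table \ref{512-mit}; the sketch below assumes only the Xception output shape (16×16×2048) and 67 classes, both stated above:

```python
classes = 67
# Flattening the 16x16x2048 map straight into the FC layer:
direct = 16 * 16 * 2048 * classes     # 35,127,296 weights -- impractical
# After 3x3 average pooling, the flattened map is 5x5x2048 = 51,200 wide:
pooled = 5 * 5 * 2048 * classes       # 3,430,400 weights
# A GAP head's Dense layer sees only 2048 features per image:
gap_head = 2048 * classes             # 137,216 weights

assert direct == 35_127_296
# Bias terms cancel, so the head difference matches the Table exactly:
# Xception+Avg+Dense (24,291,947) minus Xception+GAP (20,998,763)
assert pooled - gap_head == 24_291_947 - 20_998_763
```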
Table \ref{512-mit} and figure \ref{512-mit-all} show the obtained results on the MIT Indoors Scenes dataset.
\begin{table*}[!ht]
\centering
\large
\caption{This table displays all the parameters used for training the models on 512x512 images.}
\begin{tabular}{l|l|}
\cline{2-2}
& MIT Indoors Scene \\ \hline
\multicolumn{1}{|l|}{Batch Size} & 15 \\ \hline
\multicolumn{1}{|l|}{Image Size} & 512x512 \\ \hline
\multicolumn{1}{|l|}{Optimizer} & SGD (momentum:0.9) \\ \hline
\multicolumn{1}{|l|}{Initial Learning rate} & 0.045 \\ \hline
\multicolumn{1}{|l|}{Learning rate decay} & 0.94 every 2 epochs \\ \hline
\multicolumn{1}{|l|}{Epochs} & 130 \\ \hline
\multicolumn{1}{|l|}{Transfer Learning} & Yes (ImageNet Pre-trained Weights) \\ \hline
\multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}Depthwise Convolution Layer\\ Kernel Weight Initializer\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Random Normal Initialization\\ (Mean:0, Std:0.01)\end{tabular} \\ \hline
\multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}Depthwise Convolution Layer\\ Bias Initializer\end{tabular}} & Zero Bias Initialization \\ \hline
\end{tabular}
\label{512-param}
\end{table*}
\begin{table*}[!ht]
\centering
\large
\caption{This table presents the results of our experiments on 512x512 images of the MIT Indoors Scenes dataset. For each deep convolutional model, three classification architectures have been studied. Model+GAP, Model+Avg+Dense, and Model+Avg+Depthwise Conv+Constraints are fully described in figures \ref{512-gap}, \ref{512-avg-dense}, and \ref{512-avg-depthwise-const}, respectively. Parameters show the number of weights of each model.}
\begin{adjustbox}{width=1\linewidth}
\begin{tabular}{|l|l|l|l|l|}
\hline
Model name & Model + Classification type & Parameters & Top-1 accuracy & Top-5 accuracy \\ \hline
& Xception+GAP & 20,998,763 & 0.0157 & 0.0754 \\ \cline{2-5}
Xception & Xception+Avg+Dense & 24,291,947 & 0.7041 & 0.9106 \\ \cline{2-5}
& Xception+Avg+Depthwise Conv+Constraints & 21,052,011 & \textbf{0.7362} & \textbf{0.9457} \\ \hline
& ResNet50V2+GAP & 23,702,083 & 0.4933 & 0.7813 \\ \cline{2-5}
ResNet50V2 & ResNet50V2+Avg+Dense & 26,995,267 & 0.2052 & 0.503 \\ \cline{2-5}
& ResNet50V2+Avg+Depthwise Conv+Constraints & 23,755,331 & \textbf{0.5388} & \textbf{0.8149} \\ \hline
& DenseNet201+GAP & 18,450,691 & 0.3433 & 0.6313 \\ \cline{2-5}
DenseNet201 & DenseNet201+Avg+Dense & 21,538,051 & 0.1149 & 0.3321 \\ \cline{2-5}
& DenseNet201+Avg+Depthwise Conv+Constraints & 18,500,611 & \textbf{0.6037} & \textbf{0.8597} \\ \hline
\end{tabular}
\end{adjustbox}
\label{512-mit}
\end{table*}
\begin{figure}[!ht]
\centering
\subfloat[DenseNet201 training accuracy per epochs]{\label{512-mit-acc-DenseNet201}\includegraphics[width=0.5\linewidth]{512-accuracy-MIT-DenseNet201.pdf}}
\subfloat[DenseNet201 validation accuracy per epochs]{\label{512-mit-val-DenseNet201}\includegraphics[width=0.5\linewidth]{512-val_accuracy-MIT-DenseNet201.pdf}}
\newline
\subfloat[ResNet50V2 training accuracy per epochs]{\label{512-mit-acc-ResNet50V2}\includegraphics[width=0.5\linewidth]{512-accuracy-MIT-ResNet50V2.pdf}}
\subfloat[ResNet50V2 validation accuracy per epochs]{\label{512-mit-val-ResNet50V2}\includegraphics[width=0.5\linewidth]{512-val_accuracy-MIT-ResNet50V2.pdf}}
\newline
\subfloat[Xception training accuracy per epochs]{\label{512-mit-acc-Xception}\includegraphics[width=0.5\linewidth]{512-accuracy-MIT-Xception.pdf}}
\subfloat[Xception validation accuracy per epochs]{\label{512-mit-val-Xception}\includegraphics[width=0.5\linewidth]{512-val_accuracy-MIT-Xception.pdf}}
\caption{This figure displays the validation and training accuracies of the trained models with different classification layers on the MIT Indoors Scenes dataset for 130 epochs. In this experiment, the images were resized to 512×512 pixels. The Depthwise convolutional layer and the Averaging layer's kernel size were 5×5 and 3×3, respectively. }
\label{512-mit-all}
\end{figure}
\begin{figure}[!ht]
\centering
\subfloat[Deep convolution model with the GAP layer at its classification section.]{\label{512-gap}\includegraphics[width=\linewidth]{512-gap.pdf}}
\newline
\subfloat[In this architecture, the extracted features are first passed through an average pooling layer for reducing the feature map's size. Then the flattened features will be fed to an FC layer for performing the classification task.]{\label{512-avg-dense}\includegraphics[width=\linewidth]{512-avg-dense.pdf}}
\newline
\subfloat[Architecture of a model using depthwise convolutional layer for extracting spatial and channel data with pre-averaging and non-negative constraint to prevent overfitting.]{\label{512-avg-depthwise-const}\includegraphics[width=\linewidth]{512-avg-dpethwise-const.pdf}}
\caption{Our designed architectures for working with 512×512 images have been illustrated in these figures. The red boxes are the model computational layers, and the blue boxes show the output arrays.}
\label{512-all}
\end{figure}
The left plots in figure \ref{512-mit-all} present the training accuracy growth of each model. Based on these figures, the results are mixed and interesting. Xception failed to train with the GAP layer, which means the GAP layer (the traditional classification approach) cannot be applied to the Xception model when the images are large and the number of classes is considerable (we had 67 classes). Looking at the validation accuracy of Xception in figure \ref{512-mit-all} and Table \ref{512-mit}, we see that this model with the averaging and fully connected (FC) layer (figure \ref{512-avg-dense}) can be trained and learns the image features. Even so, our proposed architecture obtains 3.21\% higher Top-1 accuracy while containing 3 million fewer parameters. While both our architecture and averaging with an FC layer are capable of learning spatial information, our architecture is more compact; having fewer parameters, it overfits less than the other model and shows better results.
Although Xception with the Averaging and FC layer can learn image features, this architecture fails on ResNet50V2 and DenseNet201 due to severe underfitting: the large number of parameters in the final classification layer prevents these models from learning. Figures \ref{512-mit-acc-DenseNet201} and \ref{512-mit-acc-ResNet50V2} confirm this.
DenseNet201 trains only poorly with the GAP layer (figure \ref{512-mit-acc-DenseNet201}), so we can consider that the GAP layer does not work well on this model either. DenseNet201 with our architecture achieves nearly 26\% higher Top-1 accuracy than the same model with the GAP layer.
The ResNet50V2 training plot (figure \ref{512-mit-acc-ResNet50V2}) shows that this model can be trained properly with the GAP layer, but our architecture improves the Top-1 accuracy by 4.55\% at the same computational cost (Table \ref{512-mit}).
Based on these results, neither the GAP layer nor the averaging-with-FC architecture is reliable when the image resolution is high and the number of classes is large: each may work on some models and fail on others. In such situations (large images and many classes), our proposed architecture is not merely an improvement; it may be the only workable way to train the models.
We can also conjecture that one reason developers did not use larger images on the ImageNet dataset (which includes 1000 classes) is that deep models with the GAP layer fail to learn in that setting and could not reach acceptable results.
\subsubsection{Feature Visualization}
\label{321}
To better show the performance of our architecture on convolutional models, we adopted the Grad-CAM algorithm \cite{Selvaraju_2019} to highlight the feature maps on the images. We applied this algorithm to some of the validation images of the MIT Indoors Scenes dataset classified by the ResNet50V2 or DenseNet201 models. For each image, we ran the model once with our architecture and once with the GAP layer to compare the behavior of the two methods.
Figure \ref{grad} shows the feature heatmaps plotted on the input images.
It must be mentioned that the analyzed images were all correctly classified by both our architecture and the GAP. The goal of this comparison is to show that even for correctly classified images, the extracted feature heatmaps confirm that models with our architecture are more robust at extracting the spatial content of images.
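The Grad-CAM heatmap itself is a simple computation once the gradients are available: the class-score gradients are averaged over all spatial positions to give one importance weight per channel, and the heatmap is the ReLU of the resulting channel-weighted sum of the feature maps. The following numpy sketch illustrates this; the array shapes and random inputs are purely illustrative, as in practice both arrays come from the trained network.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from the last convolutional layer.

    feature_maps: (H, W, C) activations of the last conv layer.
    gradients:    (H, W, C) gradients of the class score w.r.t. those maps.
    """
    # Channel importance weights: global average of the gradients.
    weights = gradients.mean(axis=(0, 1))                          # (C,)
    # Weighted sum over channels, then ReLU to keep positive evidence only.
    cam = np.maximum((feature_maps * weights).sum(axis=-1), 0.0)   # (H, W)
    # Normalize to [0, 1] so it can be overlaid on the image as a heatmap.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random arrays; a real run would use network outputs.
rng = np.random.default_rng(0)
heat = grad_cam(rng.standard_normal((7, 7, 64)),
                rng.standard_normal((7, 7, 64)))
```

The normalized heatmap is then upsampled to the input resolution and blended over the image, which is how the overlays in figure \ref{grad} are produced.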
\subsection{Methods that didn't work}
\label{33}
In section \ref{224}, we mentioned that one effective way to limit overfitting is to place an average pooling layer before the depthwise convolutional layer, reducing the kernel size needed for the feature map. This method is especially practical when the input images are large. We tried several other methods besides the average pooling layer, but this layer performed better than all of them. Here we describe some of the other methods we implemented that did not work:
\begin{enumerate}
\item Applying two depthwise convolutional layers instead of one. Here the first depthwise convolutional layer replaces the average pooling layer, and the second one extracts the final compressed features. Since there are two depthwise convolutional layers, the kernel of each becomes smaller, so the chance of overfitting decreases. Although this method seems more effective, our experiments showed that this architecture causes underfitting, and the models could not be trained.
\item Utilizing interpolation algorithms for reducing the size of the feature maps. This idea was based on the assumption that resizing feature maps with interpolation could preserve the information more faithfully than average pooling. Bilinear, bicubic, nearest, and area interpolations were studied and tested. Our investigations showed that none of these interpolation algorithms results in higher accuracy than average pooling.
\item Adding padding to the average pooling layer. As averaging with zero or unit padding changes the actual value of border features, it weakens the results.
\item Applying strides on the average pooling layer. With strides we could use smaller averaging kernels and increase the quality of the averaged feature map. However, because strided averaging mixes neighboring features and increases the correlation between adjacent averaged values, it does not work well and degrades the results.
\end{enumerate}
Based on these experiments, we found that adding a small average pooling layer before the depthwise convolutional layer yields the best performance.
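To make the contrast with the GAP layer concrete, the following numpy sketch compares GAP, which collapses every channel with the same fixed averaging weights, against a per-channel learnable weighted sum of the kind computed by a depthwise convolution whose kernel covers the whole feature map. The shapes and random weights are illustrative placeholders, not our trained values.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, C = 7, 7, 16
fmap = rng.standard_normal((H, W, C))       # final feature map of a CNN

# Global Average Pooling: the same fixed weight 1/(H*W) at every position.
gap = fmap.mean(axis=(0, 1))                # (C,)

# Depthwise weighted sum: an independent HxW weight grid for each channel.
# In Wise-SrNet these weights are trained; here they are random placeholders.
w = rng.standard_normal((H, W, C))
wise = (fmap * w).sum(axis=(0, 1))          # (C,)

# GAP is recovered as the special case where every weight equals 1/(H*W).
gap_as_weighted_sum = (fmap * np.full((H, W, C), 1.0 / (H * W))).sum(axis=(0, 1))
```

Both heads output one value per channel, so the classification layer that follows has the same size either way; the difference is that the learned weights can emphasize informative spatial positions instead of averaging them away.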
\section{Discussion}
\label{4}
This research focuses on solving the problem of losing spatial resolution of images caused by the Global Average Pooling (GAP) layers.
Our idea is to replace the GAP layer with a new set of layers that lets the model learn how to weighted-sum the values of the feature map with trained weights and squeeze it into a compressed array wisely.
Our final proposed architecture, called Wise-SrNet, is described in sections \ref{223} and \ref{224} and depicted in figure \ref{224-avg-depthwise-const-dp}.
Based on the input images' resolution, our analyses were divided into two categories: 224×224 and 512×512. We explored three image classification datasets: a part of the ImageNet dataset, the MIT Indoors Scenes dataset, and the Intel Image Classification dataset. Our architecture was implemented on various models of three well-known convolutional families: ResNet, DenseNet, and Inception. The DenseNet121, DenseNet169, DenseNet201, ResNet50, ResNet50V2, and Xception models were utilized in our research.
On the 224×224 resolution images, our architecture showed 3\%-6\%, 2\%-6\%, and 5\%-8\% higher Top-1 accuracy on the sub-ImageNet, Intel Image Classification, and MIT Indoors Scenes datasets, respectively, compared to the GAP layer.
Among all of the evaluated models, ResNet50 adapts best to our method, and our architecture delivers a larger improvement on this model than on the others: ResNet50 with our architecture showed nearly 5\%-8\% growth in validation accuracy.
Considering the 512×512 resolution images, we ran our investigations on the MIT Indoors Scenes dataset.
ResNet50V2, DenseNet201, and Xception models were chosen for running at this stage of our work. Three classification versions for each model were studied: Our architecture, the GAP layer, and the classic classification model based on flattening and FC layers (figure \ref{512-avg-dense}).
After running tests on 512×512 resolution images, we observed that neither the GAP layer nor the classic method can be trusted when there are many classes or the images are high resolution. Both of these architectures failed on some models under these circumstances: the GAP layer did not work on Xception and worked very weakly on DenseNet201, while the classic method did not work on ResNet50V2 and DenseNet201. Our architecture, in contrast, performed well on all the models and achieved acceptable results. In this setting, our architecture is not merely an accuracy improvement; it can be the main solution for obtaining reliable results.
Our architecture improved the Top-1 accuracy by 3\% to 26\% on the MIT Indoors Scenes dataset, which includes 67 classes of images at 512×512 resolution.
Our proposed technique can be applied to many networks and can create more accurate models on ImageNet or other popular benchmarks. As there are 1000 classes in the ImageNet dataset, increasing the image resolution may be impossible for developers because with the GAP layer the models may fail to learn in this condition. Our architecture, by contrast, proved adaptable and still achieved better accuracy under challenging conditions such as many classes and large images.
Accordingly, using our methods, developers can also increase the image resolution on the ImageNet dataset and reach much better classification accuracy.
Now, we wish to explain one of our important observations. Although we report the results of six deep convolutional models in this paper, we also evaluated our approach on many other families of convolutional models. Our architecture showed improvement in most cases, as expected, but for the EfficientNet family \cite{tan2020efficientnet} we did not witness remarkable progress. Our reasoning and observations indicate that analyzing the spatial resolution with our architecture should enhance classification performance, so why does this not hold for the EfficientNet family?
The main reason can be summarized as follows.
The EfficientNet architectures are not hand-designed; they were developed by a Neural Architecture Search (NAS) engine \cite{zoph2017neural}.
These models' components are inspired by the MobileNet architecture \cite{howard2017mobilenets}, and the depth, width, and resolution of their layers have been scaled for best performance. The essential point is that NAS designed the EfficientNet architectures and scaled the layers' parameters while the Global Average Pooling layer was placed at the end of the models for feature compression. We therefore conclude that the EfficientNet models were produced by a neural search engine to acquire the best performance with the GAP layer, which has shaped their architectures and layer-scaling parameters.
This is the main reason EfficientNet did not achieve the same improvement with our architecture as the other models: it is fine-tuned for working with the GAP layer. For future work, researchers can use the same approach as EfficientNet but replace the GAP layer with our classification layers to create the next stage of EfficientNet models, more robust and accurate than the existing ones.
\section{Conclusion}
\label{5}
Global Average Pooling (GAP) layers are currently the most common method for compressing the feature maps before the classification layers. After the VGG models, most of the following deep convolutional models utilized the GAP layer in their architecture for diminishing computational costs.
Although the GAP layer minimizes the number of model weights, its disadvantage is removing the spatial resolution of feature maps.
This paper addresses this problem by introducing Wise-SrNet, a novel architecture that processes the feature maps' spatial data without increasing computational cost.
Our proposed method allows the model to create an independent equation for each channel of the feature map through various sets of trainable weights for analyzing the spatial values.
Unlike many other works, our architecture is not limited to any function like averaging and does not apply the same weights for all the feature map channels.
As our architecture can process both the spatial and channel data and carry almost the same computational cost as the GAP layer, it can be the best replacement for this classic classification technique.
The central core of Wise-SrNet is our proposed depthwise convolutional layer with some specific parameters that help the model analyze all the feature maps' data. However, this layer cannot work alone because it makes the model overfit.
So we have also added some other layers and constraints to the depthwise convolutional layer to improve its learning ability and decrease overfitting.
In this research, three datasets with different resolutions were investigated in our experiments: a part of the ImageNet dataset, the MIT Indoors Scenes dataset, and the Intel Image Classification challenge. The DenseNet121, DenseNet169, DenseNet201, ResNet50, ResNet50V2, and Xception models were studied and compared using our architecture and other classification architectures, including the GAP layer.
Our architecture improved the Top-1 accuracy by 2\% to 8\% for images with 224×224 resolution and by 3\% to 26\% for images with 512×512 resolution.
Our observations confirm that adopting the classic methods for compressing the feature maps and classifying is not reliable on datasets with many classes and large images. As our novel architecture is optimized and performs accurately in these challenging circumstances, it can greatly improve image classification benchmarks.
\section{Code Availability}
\label{6}
All the main code of our paper has been shared online for public use at \href{https://github.com/mr7495/image-classification-spatial}{https://github.com/mr7495/image-classification-spatial}.
\begin{figure}[!ht]
\centering
\subfloat[Plotted features produced by models utilizing the GAP layer]{\label{grad-normal}\includegraphics[width=\linewidth]{grad-cam-normal.pdf}}
\newline
\subfloat[Plotted features produced by models utilizing our proposed architecture]{\label{grad-ours}\includegraphics[width=\linewidth]{grad-cam-ours.pdf}}
\caption{Comparison of the quality of the features extracted with the GAP layer and with our architecture. The feature heatmaps were produced by the Grad-CAM algorithm. All the images in this figure were classified correctly.}
\label{grad}
\end{figure}
\bibliographystyle{abbrv}
\section{Introduction}
Ricci solitons are self-similar solutions of the Ricci flow equation. They play an important role in the study of Ricci flow and they often arise as singularity models.
In particular, a steady gradient Ricci soliton is a smooth complete Riemannian manifold $(M,g)$ together with a smooth function $f$ on $M$ which satisfies
\begin{equation}
\textnormal{Ric}=\nabla^2 f,
\end{equation}
where $f$ is called the potential function.
The soliton generates a Ricci flow for all time by $g(t)=\phi_{-t}^*(g)$, where $\{\phi_t\}_{t\in(-\infty,\infty)}$ is the one-parameter group of diffeomorphisms generated by $\nabla f$ with $\phi_0$ the identity.
In dimension 2, the only non-flat steady gradient Ricci soliton is Hamilton's cigar soliton, which is rotationally symmetric \cite{cigar}.
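For concreteness, the cigar can be written down explicitly; the following standard formulas (a sketch included here for illustration, not taken from this paper) exhibit the soliton structure on $\mathbb{R}^2$:

```latex
g_c \;=\; \frac{dx^2 + dy^2}{1 + x^2 + y^2}, \qquad
f \;=\; -\log\bigl(1 + x^2 + y^2\bigr), \qquad
R \;=\; \frac{4}{1 + x^2 + y^2},
```

and a direct computation verifies $\textnormal{Ric}=\nabla^2 f$. The scalar curvature attains its maximum at the tip and decays to $0$ at infinity, so the cigar opens up into a cylinder of finite circumference.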
In dimension $3$, the only non-flat rotationally symmetric steady gradient Ricci soliton is the Bryant soliton \cite{bryant}.
Another example is the product of $\mathbb{R}$ and the cigar soliton.
Note that the asymptotic cones of the Bryant soliton and $\mathbb{R}\times\textnormal{Cigar}$ are a ray and a half-plane respectively.
Therefore, Hamilton conjectured that there exist 3D steady gradient Ricci solitons whose asymptotic cones are sectors with angles $\theta\in(0,\pi)$, which are called flying wings \cite{CaoHD,infinitesimal,HaRF,Catino,Chow2007a,DZ}.
In \cite{Lai2020_flying_wing}, the author confirmed this conjecture by constructing a family of $\mathbb{Z}_2\times O(2)$-symmetric 3D flying wings, which are parametrized by the ratio of Ricci curvature eigenvalues at a point.
However, it remains unknown whether the asymptotic cone angles of these flying wings can take all values in $(0,\pi)$.
Our first theorem gives a positive answer to this question, and thus completes the discussion of the existence of 3D flying wings.
Throughout the paper, the quadruple $(M,g,f,p)$ denotes a steady gradient Ricci soliton, where $f$ is the potential function and $p$ is a critical point of $f$.
\begin{theorem}[Existence]\label{t: flying wing with prescribed angles}
For any $\theta\in(0,\pi)$, there exists a $\mathbb{Z}_2\times O(2)$-symmetric 3D flying wing $(M,g,f,p)$ which is asymptotic to a sector with angle $\theta$.
\end{theorem}
With Theorem \ref{t: flying wing with prescribed angles} now proved, what remains for the classification of all 3D steady gradient Ricci solitons is the uniqueness. More precisely, it remains to determine whether the soliton is unique for each asymptotic cone angle $\theta\in[0,\pi]$. For $\theta=0$, the uniqueness of the Bryant soliton as the only soliton asymptotic to a ray was confirmed very recently by the author in \cite[Theorem 1.1]{Lai2022_O(2)}. For $\theta=\pi$, it is easy to see that the soliton must be isometric to $\mathbb{R}\times\textnormal{Cigar}$.
So the uniqueness question is reduced to that of the 3D flying wings for each $\theta\in(0,\pi)$, which all satisfy the $O(2)$-symmetry due to the author \cite[Theorem 1.2]{Lai2022_O(2)}.
We also remark that
in the mean curvature flow,
the analogs of 3D steady gradient Ricci solitons on $\mathbb{R}^3$ are convex translators in $\mathbb{R}^3$, where the term flying wing denotes translators that are graphs over finite slabs in $\mathbb{R}^2$.
It has been proved that for each $\theta\in(0,\pi)$, there exists a unique mean curvature flow flying wing in $\mathbb{R}^3$ with asymptotic angle $\theta$ \cite{Wangxujia,Spruck2020CompleteTS,white,langford1}.
Let $\mathcal{M}$ be the space of all $\mathbb{Z}_2\times O(2)$-symmetric 3D steady gradient Ricci solitons on $\mathbb{R}^3$ that are pointed at the critical point where the scalar curvature $R$ is equal to $1$, and consider the map $\tau:\mathcal{M}\rightarrow[0,\pi]$ from the solitons to their asymptotic cone angles.
Then
Theorem \ref{t: flying wing with prescribed angles} shows that $\tau$ is surjective. Moreover, our second theorem shows that the space $\mathcal{M}$ is subsequentially compact under the smooth topology, and the map $\tau$ is continuous.
\begin{theorem}[Compactness]\label{t: theorem compactness}
Let $\{(M_i,g_i,f_i,p_i)\}_{i=1}^{\infty}$ be a sequence of $\mathbb{Z}_2\times O(2)$-symmetric 3D steady gradient Ricci solitons with asymptotic cone angles $\alpha_i\in[0,\pi]$, and $R(p_i)=1$.
Then there exists a subsequence smoothly converging to a 3D steady gradient Ricci soliton.
Moreover,
suppose $\lim_{i\rightarrow\infty}\alpha_i=\alpha$. Then any subsequential limit of $(M_i,g_i,f_i,p_i)$ has asymptotic cone angle equal to $\alpha$.
In particular, the limit is isometric to the Bryant soliton when $\alpha=0$, and to $\mathbb{R}\times\textnormal{Cigar}$ when $\alpha=\pi$.
\end{theorem}
Note that if we assume the uniqueness of 3D flying wings for each asymptotic cone angle, then Theorem \ref{t: theorem compactness} will also imply the continuity of the inverse of the map $\tau$, and hence the continuity of the parametrization of 3D steady gradient Ricci solitons by their asymptotic cone angles.
Theorem \ref{t: flying wing with prescribed angles} and \ref{t: theorem compactness} can also be generalized to higher dimensional $O(n-2)\times O(2)$-symmetric flying wings for any $n\ge3$, see \cite{Lai2021Thesis} for the definition and construction.
The main ideas to prove Theorem \ref{t: flying wing with prescribed angles} are the following: By the asymptotic uniqueness theorem in \cite{Lai2020_flying_wing}, we know that the asymptotic cone angles are uniquely determined by the supremum of $R$ at infinity.
So we will construct a sequence of 3D flying wings with $R=R_0$ at a sequence of points going to infinity, for some fixed $R_0>0$. Then we will show that $\sup R$ on $M\setminus B_g(p,r)$ barely decreases for all sufficiently large $r$ in any 3D flying wing $(M,g,f,p)$. So the sequence of flying wings will converge to a flying wing in which $\limsup_{x\rightarrow\infty} R=R_0$.
To analyze the asymptotic behavior of $R$, we need a dimension reduction theorem, which says that the geometry looks sufficiently like $\mathbb{R}\times\textnormal{Cigar}$, if $R\ge R_0$ and the points are far enough away from $p$.
Since $R$ and the warping function in $\mathbb{R}\times\textnormal{Cigar}$ determine each other, we can reduce the asymptotic analysis of $R$ to that of the warping function, which can then be studied by the distance distortion estimates developed by the author in \cite{Lai2022_O(2)}.
A key ingredient to prove the uniform dimension reduction is the existence of a critical point in all 3D steady gradient Ricci solitons, which was proved in \cite[Theorem 1.4]{Lai2022_O(2)}.
This paper was written during the author's visit at Beijing International Center for Mathematical Research in summer 2022, and she thanks Gang Tian and Xiaohua Zhu for conversations and their hospitality.
She also wants to thank Richard Bamler for comments.
\section{Preliminaries}
In this section, we introduce some useful notions and recall several results from \cite{Lai2020_flying_wing} and \cite{Lai2022_O(2)}.
First, to measure the closeness from the pointed manifolds to the smooth limits, we adopt the following notion of $\epsilon$-isometry and $\epsilon$-closeness.
\begin{defn}[$\epsilon$-isometry between manifolds]\label{d: epsilon isometry}
Let $\epsilon>0$ and $m\in\mathbb{N}$.
Let $(M^n_i,g_i)$, $i=1,2$, be $n$-dimensional Riemannian manifolds, and $x_i\in M_i$. We say a smooth map $\phi:B_{g_1}(x_1,\epsilon^{-1})\rightarrow M_2$, $\phi(x_1)=x_2$, is an $\epsilon$-isometry in the $C^m$-norm if it is a diffeomorphism onto its image, and
\begin{equation*}
|\nabla^{k}(\phi^*g_2-g_1)|\le \epsilon\quad \textit{on}\quad B_{g_1}(x_1,\epsilon^{-1}),\quad k=0,1,...,m,
\end{equation*}
where the covariant derivatives and norms are taken with respect to $g_1$.
We also say $(M_2,g_2,x_2)$ is $\epsilon$-close to $(M_1,g_1,x_1)$ in the $C^m$-norm.
In particular, if $m=[\epsilon^{-1}]$, then we simply say $(M_2,g_2,x_2)$ is $\epsilon$-close to $(M_1,g_1,x_1)$ and $\phi$ is an $\epsilon$-isometry.
\end{defn}
In a non-negatively curved complete non-compact Riemannian manifold, we can equip a length metric on the space of geodesic rays. Moreover, a blow-down sequence of this manifold converges to the metric cone over the space of rays in the Gromov-Hausdorff sense, which we call the asymptotic cone of the manifold, see e.g. \cite[Prop 5.31]{MT}.
We know that 3D steady gradient Ricci solitons have non-negative sectional curvature, and a 3D steady gradient Ricci soliton is isometric to a quotient of $\mathbb{R}\times\textnormal{Cigar}$ if the curvature is not strictly positive everywhere.
If the curvature is strictly positive, then the soliton is diffeomorphic to $\mathbb{R}^3$.
By a simple adaptation of \cite[Lemma 4.2]{Lai2020_flying_wing} we see that the asymptotic cone of any 3D steady gradient Ricci soliton is isometric to a metric cone over an interval $[0,\alpha]$, for some $\alpha\in[0,\pi]$.
In \cite{Lai2022_O(2)} the author showed that all 3D steady gradient Ricci solitons $(M,g)$ are $O(2)$-symmetric. So there is a complete unit speed geodesic $\Gamma:(-\infty,\infty)\rightarrow M$ fixed by the $O(2)$-action, $\Gamma(0)=p$, such that the metric on $M\setminus\Gamma$ is a warped product metric with $S^1$-fibers over a 2D totally geodesic submanifold.
Moreover, there is a quantitative relation between the asymptotic cone angle $\alpha$ and the limit of $R$ along $\Gamma$.
\begin{lem}[Asymptotic Uniqueness]
(\cite[Theorem 1.6]{Lai2022_O(2)})
\label{l: quantitative relation}
Let $(M,g,f,p)$ be a 3D steady gradient Ricci soliton on $\mathbb{R}^3$, whose asymptotic cone is a metric cone over the interval $[-\frac{\alpha}{2},\frac{\alpha}{2}]$ for some $\alpha\in[0,\pi]$.
Let $\Gamma:(-\infty,\infty)\rightarrow M$ be the complete geodesic fixed by the $O(2)$-isometry, then
\begin{equation}
\lim_{s\rightarrow\infty}R(\Gamma(s))=R(p)\sin^2\frac{\alpha}{2}.
\end{equation}
\end{lem}
It is clear that in the soliton $\mathbb{R}\times\textnormal{Cigar}$, the potential function $f$ can be chosen as the direct sum of a linear function in the $\mathbb{R}$-direction and the potential function on $\textnormal{Cigar}$. So $f$ has no critical point when the linear function is non-constant.
The following lemma shows that the critical point of the potential function always exists in all the other 3D steady gradient Ricci solitons. Moreover, by the soliton identity
\begin{equation*}
R+|\nabla f|^2=\textnormal{const.},
\end{equation*}
it is easy to see that the critical point is also a maximum point of $R$.
Moreover, the critical point is unique when the curvature is positive.
\begin{lem}(cf. \cite[Theorem 1.6]{Lai2022_O(2)})\label{l: max}
Let $(M,g,f)$ be a 3D steady gradient Ricci soliton with positive curvature, then there exists a point $p\in M$ which is a critical point of the potential function $f$.
\end{lem}
In \cite[Lemma 4.3]{Lai2020_flying_wing} and \cite[Theorem 1.7]{Lai2022_O(2)}, we proved that the scalar curvature decays quadratically in the distance to $\Gamma$ in a 3D steady gradient Ricci soliton on $\mathbb{R}^3$. In the following lemma, we show that this curvature estimate is uniform for all 3D steady gradient Ricci solitons on $\mathbb{R}^3$.
\begin{lem}[Quadratic curvature decay]\label{l: curvature upper bound initial}
There exists a constant $C>0$ such that the following holds:
Let $(M,g)$ be a 3D steady gradient Ricci soliton on $\mathbb{R}^3$.
Then for any $x\in M\setminus\Gamma$, we have
\begin{equation*}
R(x)\le\frac{C}{d_g^2(x,\Gamma)}.
\end{equation*}
\end{lem}
\begin{proof}
By Perelman's curvature estimate for non-collapsed Ricci flows with non-negative curvature operators \cite[Corollary 11.6]{Pel1}, it suffices to find a constant $\kappa>0$ (independent of $(M,g)$) such that for all $x\in M\setminus\Gamma$ in the universal covering $(\widetilde{B_g(x,d_g(x,\Gamma))},\widetilde{g},\widetilde{x})$ of $(B_g(x,d_g(x,\Gamma)),g,x)$, we have
\begin{equation}\label{e: kappa}
vol(B_{\widetilde{g}}(\widetilde{x},d_g(x,\Gamma)))\ge\kappa\, d^3_g(x,\Gamma).
\end{equation}
To see this, let $y\in\Gamma$ be a point such that $d_g(x,y)=d_g(x,\Gamma)$.
Then take $z=\Gamma(s)$ for some sufficiently large $s$ so that $d_g(y,z)\ge100\,d_g(x,y)$,
and hence $d_g(x,z)\ge 99\,d_g(x,y)$ by triangle inequality. This implies
\begin{equation}\label{e: small angle}
\widetilde{\measuredangle}xzy\le 0.1.
\end{equation}
Moreover, let $\sigma_1,\sigma_2:[0,1]\rightarrow M$ be minimizing geodesics from $y$ to $x$, and $y$ to $z$ respectively.
By the first variation formula, $\sigma'_1(0)$ is orthogonal to $\Gamma$ at $y$. Therefore, by the $O(2)$-symmetry of the soliton we may replace $\sigma_2$ by its image under a suitable isometry in the $O(2)$-action, and assume $\measuredangle(\sigma'_1(0),\sigma'_2(0))\le\frac{\pi}{2}$.
By angle comparison this implies $\widetilde{\measuredangle}xyz\le\frac{\pi}{2}$,
which combining with \eqref{e: small angle} implies
\begin{equation}\label{e: pi20.1}
\widetilde{\measuredangle}yxz\ge\frac{\pi}{2}-0.1.
\end{equation}
Let $N$ be a totally geodesic surface in $M$ so that the metric on $M\setminus\Gamma$ can be written as $g=g_N+\varphi^2d\theta^2$, where $\varphi$ is a positive smooth function on $N$.
We may assume $x\in N$ and the two minimizing geodesics $xy$ and $xz$ are both contained in $N$.
Let $y'$ and $z'$ be two points on $xy$ and $xz$ such that $d_g(x,y')=d_g(x,z')=\frac{1}{2}d_g(x,y)$.
Then by the angle monotonicity and \eqref{e: pi20.1} we obtain
\begin{equation*}
\widetilde{\measuredangle}y'xz'\ge\widetilde{\measuredangle}yxz\ge\frac{\pi}{2}-0.1,
\end{equation*}
and hence
\begin{equation*}
|\partial B_N(x,\tfrac{1}{2}d_g(x,y))|\ge d_g(y',z')\ge C^{-1}d_g(x,y).
\end{equation*}
So by volume comparison on $N$ this implies
\begin{equation}\label{e: Nvolume}
vol(B_N(x,\tfrac{1}{2}d_g(x,y)))\ge C^{-1}\,d^2_g(x,y).
\end{equation}
Then as in \cite[Lemma 4.3]{Lai2020_flying_wing}, we can show that \eqref{e: Nvolume} implies \eqref{e: kappa}, which hence proves the lemma.
\end{proof}
\section{Compactness of 3D steady gradient Ricci solitons}
In this section, we prove several compactness results of 3D steady gradient Ricci solitons, and study the asymptotic behavior of the scalar curvature at infinity along $\Gamma$.
Since the subset $\Gamma\setminus\{p\}$ is a union of two integral curves of $\nabla f$, it follows by the soliton identity $\langle\nabla R,\nabla f\rangle=-\textnormal{Ric}(\nabla f,\nabla f)$ that
$R(\Gamma(s))$ decreases in $s$ on $[0,\infty)$, and increases on $(-\infty,0]$.
The main result in this section is Proposition \ref{l: R does not change too much}, which shows that $R$ barely decreases along $\Gamma$ starting from $\Gamma(s_0)$, if $R(\Gamma(s_0))$ has a lower bound and $s_0$ is sufficiently large. This is the key ingredient in the proofs of Theorem \ref{t: flying wing with prescribed angles} and \ref{t: theorem compactness}.
We will also assume that $dr^2+g_c$ is the product metric on $\mathbb{R}\times\textnormal{Cigar}$ such that $R(r,x_{tip})=1$, $r\in\mathbb{R},x_{tip}\in\textnormal{Cigar}$, and $g_{stan}$ is the flat product metric on $\mathbb{R}^2\times S^1$ such that the length of each $S^1$-factor is equal to $4\pi$. Note that for any sequence of points $q_i\in \textnormal{Cigar}$, $q_i\rightarrow\infty$, the manifolds $(\textnormal{Cigar},g_c,q_i)$ smoothly converge to $\mathbb{R}\times S^1$ where the length of the $S^1$-factor is equal to $4\pi$.
\begin{lem}\label{compactness to a steady soliton}
Let $(M_i,g_i,f_i,p_i)$ be a sequence of 3D steady gradient Ricci solitons on $\mathbb{R}^3$ satisfying $R(p_i)=1$.
Suppose there exist $\epsilon>0$ and $q_i=\Gamma_i(s_i)\in M_i$ for some $s_i\in\mathbb{R}$ such that $R(q_i)\ge\epsilon$ for all $i$.
Then after passing to a subsequence, the solitons $(M_i,g_i,f_i-f_i(q_i),q_i)$ smoothly converge to a non-flat 3D steady gradient Ricci soliton $(M_{\infty},g_{\infty},f_{\infty},q_{\infty})$ on $\mathbb{R}^3$, and $q_{\infty}\in\Gamma_{\infty}$, where $\Gamma_{\infty}$ is the unit speed complete geodesic fixed by the $O(2)$-isometry and $f_{\infty}(q_{\infty})=0$.
\end{lem}
\begin{proof}
First, since $(M_i,g_i)$ is complete and non-compact, the curvature is positive, and $R_{max}=R(p_i)=1$, by a well-known fact of Gromoll and Meyer (see \cite{CheegerEbin}) we have the injectivity radius lower bound
\begin{equation}\label{e: v_0}
\inf_{x\in M_i}\textnormal{inj}_{g_i}(x)\ge\frac{\pi}{\sqrt{R_{\max}}}=\pi.
\end{equation}
Therefore, by Hamilton's compactness of Ricci flow \cite{Hamilton_compactness} we may pass to a subsequence and assume that the Ricci flows $(M_i,g_i(t),q_i)$, $t\in(-\infty,0]$, smoothly converge to a Ricci flow $(M_{\infty},g_{\infty}(t),q_{\infty})$, $t\in(-\infty,0]$.
Next, we show that $(M_{\infty},g_{\infty}(t),q_{\infty})$ is the Ricci flow of a steady gradient Ricci soliton.
To see this, let $\widetilde{f}_i=f_i-f_i(q_i)$, then $\widetilde{f}_i(q_i)=0$. Moreover, by $|\nabla \widetilde{f}_i|=|\nabla f_i|\le 1$ and the soliton equation
\begin{equation*}
\nabla^2 \widetilde{f}_i=\nabla^2 f_i=\textnormal{Ric}_{g_i},
\end{equation*}
we can deduce that for any integer $k\ge1$, there is $C_k>0$ such that $|\nabla^k \widetilde{f}_i|\le C_k$.
Therefore, after passing to a subsequence we may assume the functions $\widetilde{f}_i$ on $M_i$ smoothly
converge to a smooth function $f_{\infty}$ on $M_{\infty}$, which satisfies $\textnormal{Ric}_{g_{\infty}(0)}=\nabla^2 f_{\infty}$.
So $(M_{\infty},g_{\infty}(0),q_{\infty})$ is a 3D steady gradient Ricci soliton which satisfies
\begin{equation*}
R(q_{\infty})=\lim_{i\rightarrow\infty}R(q_i)\ge\epsilon>0.
\end{equation*}
It remains to show that $M_{\infty}$ is diffeomorphic to $\mathbb{R}^3$. For this, it suffices to show that $(M_{\infty},g_{\infty}(0),q_{\infty})$ is not isometric to a rescaling of $S^1\times\textnormal{Cigar}$.
Suppose it is isometric to a rescaling of $S^1\times\textnormal{Cigar}$; then
let $\psi_i:(S^1\times\textnormal{Cigar},g_{\infty}(0),q_{\infty})\rightarrow (M_i,g_i,q_i)$ be an $\epsilon_i$-isometry, where $\epsilon_i\rightarrow0$ as $i\rightarrow\infty$.
Let $V=S^1\times\{x_{tip}\}\subset S^1\times\textnormal{Cigar}$, and $B_r(V)=\{x\in S^1\times\textnormal{Cigar}: d_{g_{\infty}(0)}(x,V)< r\}$ for any $r>0$.
We claim that $\Gamma_i(-\infty,\infty)\subset \psi_i(B_1(V))$.
To see this, note that the $O(2)$-isometries on $(M_i,g_i)$ converge to the $O(2)$-isometry on $S^1\times\textnormal{Cigar}$ that fixes $S^1\times\{x_{tip}\}$. Letting $X_i$ and $X_{\infty}$ be the corresponding Killing fields on $M_i$ and $S^1\times\textnormal{Cigar}$, it follows that $(\psi^{-1}_{i})_*(X_i)$ smoothly converge to $X_{\infty}$ as $i\rightarrow\infty$.
First, we have $q_i=\Gamma_i(s_i)=\psi_i(q_{\infty})\in \psi_i(B_1(V))$.
Next, suppose $\Gamma_i(s)\in \psi_i(B_1(V))$ for some $s\in\mathbb{R}$, then by $X_i(\Gamma_i(s))=0$ we see that $\Gamma_i(s)\in\psi_i(B_{\frac{1}{10}}(V))$, and hence $\Gamma_i([s-\frac{1}{2},s+\frac{1}{2}])\subset B_{g_i}(\Gamma_i(s),\frac{1}{2})\subset \psi_i(B_1(V))$. Therefore, by induction we can deduce $\Gamma_i(-\infty,\infty)\subset \psi_i(B_1(V))$, which contradicts the non-compactness of $\Gamma_i$.
\end{proof}
Next, we show a special case of the compactness Lemma \ref{compactness to a steady soliton}, where the limit is $\mathbb{R}\times\textnormal{Cigar}$.
For a fixed 3D flying wing, we know that it converges to a rescaling of $\mathbb{R}\times\textnormal{Cigar}$ along $\Gamma$, see \cite{Lai2020_flying_wing,Lai2022_O(2)}.
Lemma \ref{l: |nabla f| small} and \ref{l: looks like RxCigar} provide sufficient conditions, under which the geometry along $\Gamma$ is close to $\mathbb{R}\times\textnormal{Cigar}$.
These conditions are uniform for all 3D steady gradient Ricci solitons.
First, Lemma \ref{l: |nabla f| small} shows that the closeness to $\mathbb{R}\times\textnormal{Cigar}$ is guaranteed when $|\nabla f|(\Gamma(s))$ is sufficiently small and $s$ is sufficiently large.
\begin{lem}\label{l: |nabla f| small}
For any $\epsilon>0$, there exist $D(\epsilon),\delta(\epsilon)>0$ such that the following holds:
Let $(M,g,f,p)$ be a 3D steady gradient Ricci soliton on $\mathbb{R}^3$ with $R(p)=1$.
If $|\nabla f|(\Gamma(s))\le\delta(\epsilon)$ for some $|s|>D(\epsilon)$, then the pointed manifold $(M,g,\Gamma(s))$ is $\epsilon$-close to $(\mathbb{R}\times\textnormal{Cigar},(0,x_{tip}))$.
\end{lem}
\begin{proof}
Without loss of generality we may assume $s>0$.
Suppose this is not true for some $\epsilon>0$. Then we can find a sequence of numbers $D_i\rightarrow\infty$ and $\delta_i\ri0$ and a sequence of 3D steady gradient Ricci solitons $(M_i,g_i,f_i,p_i)$ with $R(p_i)=1$, such that $|\nabla f_i|(\Gamma_i(D_i))\le \delta_i$, but the pointed manifolds $(M_i,g_i,\Gamma_i(D_i))$ are not $\epsilon$-close to $\mathbb{R}\times\textnormal{Cigar}$.
Since for any $s\ge0$ we have
\begin{equation}
\frac{d^2}{ds^2}f(\Gamma(s))=\nabla^2 f(\Gamma'(s),\Gamma'(s))=\textnormal{Ric}(\Gamma'(s),\Gamma'(s))\ge 0,
\end{equation}
it follows that $|\nabla f|(\Gamma(s))$ is non-decreasing in $s\ge0$, and thus $|\nabla f_i|(\Gamma_i(s))\le\delta_i$ for all $s\in[0,D_i]$.
By Lemma \ref{compactness to a steady soliton} we may assume the manifolds $(M_i,g_i,f_i-f_i(\Gamma_i(D_i)),\Gamma_i(D_i))$ converge to a steady gradient Ricci soliton $(M_{\infty},g_{\infty},f_{\infty},p_{\infty})$ on $\mathbb{R}^3$.
Let $\Gamma_{\infty}:(-\infty,\infty)\rightarrow M_{\infty}$ be the complete geodesic fixed by the $O(2)$-isometry with $\Gamma_{\infty}(0)=p_{\infty}$; then it is easy to see that $|\nabla f_{\infty}|(\Gamma_{\infty}(s))\equiv0$ for all $s\le0$, and $R(p_{\infty})=1$.
In particular, this implies $\textnormal{Ric}(\Gamma'_{\infty}(0),\Gamma'_{\infty}(0))=\nabla^2 f_{\infty}(\Gamma'_{\infty}(0),\Gamma'_{\infty}(0))=0$, and hence $(M_{\infty},g_{\infty},p_{\infty})$ is isometric to $\mathbb{R}\times\textnormal{Cigar}$, which is a contradiction for large $i$.
\end{proof}
We say an $n$-dimensional Riemannian manifold $M$ dimension reduces to an $(n-1)$-dimensional manifold $N$ along a sequence of points $x_i\in M$, if the manifolds $(M,R(x_i)g,x_i)$ smoothly converge to $\mathbb{R}\times N$.
We know that a 3D flying wing always dimension reduces to $\textnormal{Cigar}$. However, for a fixed $\epsilon>0$, the $\epsilon$-closeness to $\mathbb{R}\times\textnormal{Cigar}$ may only be achieved at arbitrarily large distances from the critical point, because the soliton may be very close to the Bryant soliton.
In the following we prove a dimension reduction which is uniform for all 3D flying wings.
It shows that there is a uniform distance for the $\epsilon$-closeness to be achieved, as long as $R$ is uniformly bounded away from zero. A key ingredient in the proof is Lemma \ref{l: max} (the existence of the critical point).
\begin{lem}[Uniform dimension reduction]\label{l: looks like RxCigar}
For any $R_0\in(0,1],\epsilon>0$, there exists $D(R_0,\epsilon)>0$ such that the following holds:
Let $(M,g,f,p)$ be a 3D steady gradient Ricci soliton on $\mathbb{R}^3$.
Suppose $R(p)=1$.
Then for any $|s|>D(R_0,\epsilon)$, if $R(\Gamma(s))\ge R_0$, then the manifold $(M, R(\Gamma(s))g,\Gamma(s))$ is $\epsilon$-close to $(\mathbb{R}\times\textnormal{Cigar},(0,x_{tip}))$.
\end{lem}
\begin{proof}
Suppose the lemma is false for some $\epsilon>0$, then we can find a sequence of $(M_i,g_i,f_i,p_i)$ which are 3D steady gradient Ricci solitons on $\mathbb{R}^3$, and a sequence of numbers $D_i\rightarrow\infty$ such that $R(p_i)=1$, $R(\Gamma_i(D_i))\ge R_0$, but $(M_i, R(\Gamma_i(D_i))g_i,\Gamma_i(D_i))$ is not $\epsilon$-close to $(\mathbb{R}\times\textnormal{Cigar},(0,x_{tip}))$ for each $i$.
First, since $R_0>0$, by Lemma \ref{compactness to a steady soliton} we may assume after passing to a subsequence that $(M_i, R(\Gamma_i(D_i))g_i,\Gamma_i(D_i))$ smoothly converge to a steady gradient Ricci soliton $(M_{\infty},g_{\infty},f_{\infty},p_{\infty})$ on $\mathbb{R}^3$, and $R(p_{\infty})=1$.
If $|\nabla f_i|(\Gamma_i(D_i))\ri0$, then we get a contradiction by Lemma \ref{l: |nabla f| small}.
So we may assume $\limsup_{i\rightarrow\infty}|\nabla f_i|(\Gamma_i(D_i))>0$. Therefore, by the soliton identity $R+|\nabla f|^2=1$ and passing to a subsequence we may assume
\begin{equation}\label{e: Rinfinity}
R_0\le\lim_{i\rightarrow\infty}R(\Gamma_i(D_i))<1.
\end{equation}
Then we divide the discussion into two cases depending on whether there is a critical point of $f_{\infty}$ on the unit speed complete geodesic $\Gamma_{\infty}:(-\infty,\infty)\rightarrow M_{\infty}$ fixed by the $O(2)$-isometry, and $\Gamma_{\infty}(0)=p_{\infty}$.
\textbf{Case 1:} Suppose there exists $s_0\in\mathbb{R}$ such that $|\nabla f_{\infty}|(\Gamma_{\infty}(s_0))=0$.
Then it follows that $|\nabla f_i|(\Gamma_i(D_i+s_0))\ri0$ as $i\rightarrow\infty$.
Since $D_i+s_0\rightarrow\infty$, by Lemma \ref{compactness to a steady soliton} we see that $(M_i,g_i,\Gamma_i(D_i+s_0))$ smoothly converge to $(\mathbb{R}\times\textnormal{Cigar},(0,x_{tip}))$, and the geodesics $\widetilde{\Gamma}_i(s):=\Gamma_i(s+D_i+s_0)$, $s\in\mathbb{R}$, in $(M_i,g_i)$ smoothly converge to the line $\mathbb{R}\times\{x_{tip}\}$ in $\mathbb{R}\times\textnormal{Cigar}$ modulo the diffeomorphisms.
In particular, this implies
\begin{equation*}
\lim_{i\rightarrow\infty}R(\Gamma_i(D_i))=\lim_{i\rightarrow\infty}R(\widetilde{\Gamma}_i(-s_0))=1,
\end{equation*}
which contradicts the assumption \eqref{e: Rinfinity}.
\textbf{Case 2:} Suppose $|\nabla f_{\infty}|(\Gamma_{\infty}(s))>0$ for all $s\in\mathbb{R}$. We claim that $(M_{\infty},g_{\infty})$ must be isometric to $\mathbb{R}\times\textnormal{Cigar}$. Suppose not; then it has positive curvature, and by Lemma \ref{l: max} there exists a unique critical point $q$ of $f_{\infty}$, which is also the unique maximum point of $R$. Therefore, $q$ is fixed by the $O(2)$-isometry and hence must lie on $\Gamma_{\infty}$, which contradicts our assumption. So $(M_{\infty},g_{\infty},p_{\infty})$ is isometric to $(\mathbb{R}\times\textnormal{Cigar},(0,x_{tip}))$, which contradicts the assumption that $(M_i, R(\Gamma_i(D_i))g_i,\Gamma_i(D_i))$ is not $\epsilon$-close to it for large $i$.
\end{proof}
Assume $f(p)=0$ where $p$ is the critical point. Let $\Sigma=f^{-1}(s)$ be a level set of $f$ for some $s>0$.
Then $\Sigma$ is a compact, $O(2)$-symmetric, smooth 2D Riemannian manifold. Moreover, since the second fundamental form satisfies $\mathrm{I\!I}=-\left.\frac{\nabla ^2 f}{|\nabla f|}\right|_{\Sigma}=-\left.\frac{\textnormal{Ric}}{|\nabla f|}\right|_{\Sigma}\le 0$,
it follows by the Gauss equation that $\Sigma$ has positive Gaussian curvature.
Moreover, $\Sigma$ intersects $\Gamma$ at two different points.
In the following lemma, we obtain a lower bound on $R$ at the two points, in terms of the warping function at certain points in $\Sigma$.
\begin{lem}\label{l: geometry of level set near tip}
For any $D>0$, there exists $C(D)>0$ such that the following holds:
Let $(M,g,f,p)$ be a 3D steady gradient Ricci soliton with positive curvature.
Suppose $R(p)=1$. For any point $x\in M$ with $\varphi(x)\ge D^{-1}$, let $y$ be the one of the two intersection points of $\Gamma$ with the level set $f^{-1}(f(x))$ that is closer to $x$ with respect to the induced metric on $\Sigma$.
Suppose $d_g(x,\Gamma)\ge4\,\varphi(x)$.
Then we have
\begin{equation*}
R(y)\ge C^{-1}\, \varphi^{-2}(x).
\end{equation*}
\end{lem}
\begin{proof}
Suppose this is false, then we can find a sequence of 3D steady gradient Ricci solitons $(M_i,g_i,f_i,p_i)$ on $\mathbb{R}^3$, with $R(p_i)=1$, and $x_i,y_i\in M_i$ satisfying the assumptions, but
\begin{equation}\label{e: R and varphi}
R(y_i)\varphi_i^{2}(x_i)\ri0\quad\textit{as}\quad i\rightarrow\infty.
\end{equation}
By $\varphi_i(x_i)\ge D^{-1}$ we have $R(y_i)\ri0$ as $i\rightarrow\infty$. So by the identity $R+|\nabla f|^2=1$ we may assume $|\nabla f_i|(y_i)\ge\frac{1}{2}$ for all $i$.
Consider the rescaled metrics $\widetilde{g}_i=\varphi_i^{-2}(x_i)g_i$ and the rescaled functions $\widetilde{f}_i:=\frac{f_i-f_i(y_i)}{\varphi_i(x_i)}$; then $\widetilde{f}_i$ satisfies $\widetilde{f}_i(y_i)=0$ and
\begin{equation}\label{e: higher derivatives of f}
\widetilde{\nabla}^2\widetilde{f}_i=\nabla^2\widetilde{f}_i=\frac{\nabla^2 f_i}{\varphi_i(x_i)}=\frac{\textnormal{Ric}}{\varphi_i(x_i)}=\frac{\widetilde{\textnormal{Ric}}}{\varphi_i(x_i)},
\end{equation}
and also $|\widetilde{\nabla}\widetilde{f}_i|_{\widetilde{g}_i}=
\varphi_i(x_i)|\nabla\widetilde{f}_i|_{g_i}=
|\nabla f_i|_{g_i}\le 1$.
In particular, at $y_i$ we have
\begin{equation}\label{e: the norm of gradient}
|\widetilde{\nabla}\widetilde{f}_i|_{\widetilde{g}_i}(y_i)=|\nabla f_i|(y_i)\in\left[\frac{1}{2},1\right].
\end{equation}
By \eqref{e: higher derivatives of f} and \eqref{e: the norm of gradient}, the derivatives of $\widetilde{f}_i$ are uniformly bounded.
Using \eqref{e: R and varphi} we can show by a limiting argument as in \cite[Lemma 3.3]{Lai2020_flying_wing} that
the manifolds $(M_i,\varphi^{-2}_i(x_i)g_i,y_i)$ smoothly converge to the 3D Euclidean space $(\mathbb{R}^3,0)$.
Moreover, a subsequence of the functions $\widetilde{f}_i$ converge to a smooth function $f_{\infty}$ on $\mathbb{R}^3$ with $f_{\infty}(0)=0$.
By \eqref{e: higher derivatives of f} and \eqref{e: the norm of gradient} it satisfies $\nabla^2f_{\infty}=0$ and $|\nabla f_{\infty}|(0)>0$.
So $f_{\infty}$ is a non-constant linear function.
In particular, $0$ is a regular value of $f_{\infty}$, so the
level sets $(\Sigma_i,\varphi_i^{-2}(x_i)g_{\Sigma_i},y_i)$ of $\widetilde{f}_i$ with the induced metrics smoothly converge to the level set $(f_{\infty}^{-1}(0),0)$,
which is isometric to the 2D Euclidean space with the induced metric.
Let $\sigma_i:[0,1]\rightarrow\Sigma_i$ be a $\Sigma_i$-minimizing geodesic from $y_i$ to $x_i$.
Since $y_i$ is the closer to $x_i$ of the two points in $\Gamma_i\cap \Sigma_i$, by the concavity of $\varphi_i$ on $\Sigma_i$ it is easy to see that
\begin{equation}\label{e: largest circle}
\sup_{s\in[0,1]}\varphi_i(\sigma_i(s))\le 2\varphi_i(x_i).
\end{equation}
By the assumption $d_{g_i}(x_i,\Gamma_i)\ge 4\,\varphi_i(x_i)$, we have that
\begin{equation}\label{e: distance of the circle is not too small compare to the circle length}
d_{\Sigma_i}(y_i,x_i)\ge d_{g_i}(y_i,x_i)\ge d_{g_i}(x_i,\Gamma_i) \ge 4\,\varphi_i(x_i).
\end{equation}
Write the induced metric on $f^{-1}_{\infty}(0)$ in the warped product form $dr^2+\varphi_{\infty}^2(r)d\theta^2$ so that $r=0$ at $0\in f^{-1}_{\infty}(0)$.
Then $\varphi_{\infty}(r)=r$ because $f^{-1}_{\infty}(0)$ is isometric to $\mathbb{R}^2$.
But \eqref{e: largest circle} and \eqref{e: distance of the circle is not too small compare to the circle length} imply that $\varphi_{\infty}(r)\le2$ for all $r\le4$, which is a contradiction.
\end{proof}
In some of the following results,
we will moreover assume for simplicity that the soliton is $\mathbb{Z}_2$-symmetric, by which we mean that there is a $\mathbb{Z}_2$-isometry $\tau$ on $M$ which fixes the critical point $p$, and whose differential $\tau_{*p}$ is a reflection in $T_pM$ that maps the vector $\Gamma'(0)$ to $-\Gamma'(0)$.
The next lemma shows that if $R\ge R_0$ at some point on $\Gamma$ that is sufficiently far away from the critical point, then $R$ has a uniform lower bound at the infinity of $\Gamma$, which depends only on $R_0$.
Lemma \ref{l: looks like RxCigar} is needed in the proof: First,
it allows us to reduce the estimate of $R$ to that of the warping function.
Second, it implies the initial condition needed to apply the ODE estimate for the warping function. So we can obtain an upper bound on the warping function, which in turn implies a lower bound on $R$.
\begin{lem}\label{l: a rough lower bound C}
For any $R_0\in(0,1)$, there exist $D(R_0),C(R_0)>0$ such that the following holds:
Let $(M,g,f,p)$ be a $\mathbb{Z}_2\times O(2)$-symmetric 3D steady gradient Ricci soliton with positive curvature.
Suppose $R(p)=1$.
Suppose also that there is $s_0>D(R_0)$ such that $R(\Gamma(s_0))\ge R_0$.
Then for all $|s|\ge s_0$,
\begin{equation*}
R(\Gamma(s))\ge C^{-1}>0.
\end{equation*}
\end{lem}
\begin{proof}
We can write the metric $g$ on $M\setminus\Gamma$ as $g=g_N+\varphi^2d\theta^2$ where $N$ is a totally geodesic surface.
Let $N_0$ be the 2D submanifold fixed by the $\mathbb{Z}_2$-isometry, then $M\setminus N_0$ has two connected components $N_+,N_-$. It is not hard to see that for any point $x\in N_+$ (or $N_-$), the point $\phi_t(x)\in N_+$ (or $N_-$) for all $t\in\mathbb{R}$. Without loss of generality we may assume $\Gamma_+=\Gamma(0,\infty)\subset N_+$. So $d_g(x,\Gamma)=d_g(x,\Gamma_+)$ when $x\in N_+$.
In the following $C$ denotes all positive constants depending only on $R_0$, whose values may change from line to line.
Let $C_0>10$ be a constant whose value will be determined later. Let $\epsilon>0$ be sufficiently small.
By Lemma \ref{l: looks like RxCigar} we can find constant $D>0$ depending on $R_0$ and $\epsilon$, so that $(M,R(\Gamma(s_0))g,\Gamma(s_0))$ is $\epsilon$-close to $\mathbb{R}\times\textnormal{Cigar}$. In particular, we can find a point $x$ such that $d_g(x,\Gamma)=d_g(x,\Gamma_+)\ge C_0\,\varphi(x)$
and also
\begin{equation}\label{e: h(0)}
1\le 1.9\,R_0^{-1/2}\le \varphi(x)\le 2.1\,R_0^{-1/2}.
\end{equation}
Let $H(t)=d_g(\phi_t(x),\Gamma)$ and $h(t)=\varphi(\phi_t(x))$. Then we have
\begin{equation}
\frac{H(0)}{h(0)}=\frac{d_g(x,\Gamma)}{\varphi(x)}\ge C_0.
\end{equation}
Let
\begin{equation*}
a=\sup\{t\ge0: H(t)\ge4\,h(t)\}\in(0,\infty].
\end{equation*}
We will show $a=\infty$.
First, since $x\notin\Gamma$, it follows that $\phi_t(x)\notin\Gamma$. In particular, $f(\phi_t(x))>0$ and the level set $\Sigma_t=f^{-1}(f(\phi_t(x)))$ for each $t\ge0$ is a compact 2D submanifold. Let $y_t$ be the point at which $\Sigma_t$ intersects $\Gamma_+$.
Since $\phi_t(x)\in N_+$, there is a point $q_t\in\Gamma_+$ such that $d_g(q_t,\phi_t(x))=d_g(\phi_t(x),\Gamma)$. Then by \cite[Lemma 3.29]{Lai2022_O(2)} we see that $f(q_t)\le f(\phi_t(x))=f(y_t)$.
The conditions $d_{\Sigma_t}(\phi_t(x),\Gamma\cap\Sigma_t)=d_{\Sigma_t}(\phi_t(x),y_t)$, $\varphi(\phi_t(x))\ge\varphi(x)\ge 1$, and $H(t)\ge4h(t)$ allow us to apply
Lemma \ref{l: geometry of level set near tip} and deduce $R(y_t)\ge C^{-1}h^{-2}(t)$. Then by the monotonicity of $R$ along $\Gamma_+$, we see that the following holds in $[0,a]$,
\begin{equation}\label{e: R(q)}
R(q_t)\ge R(y_t)\ge C^{-1}h^{-2}(t).
\end{equation}
For any point $\Gamma(s)\in\Gamma$, let $e_1,e_2\in T_{\Gamma(s)}M$ be two unit vectors which together with $e_3:=\Gamma'(s)$ form an orthonormal basis.
Then by the $O(2)$-symmetry we have $K(e_1,e_3)=K(e_2,e_3)$, and hence $R=2K(e_1,e_2)+4K(e_1,e_3)$. Together with $\textnormal{Ric}(e_1,e_1)=K(e_1,e_2)+K(e_1,e_3)$ and $\textnormal{Rm}\ge0$ this implies
\begin{equation}\label{e: quarter}
\textnormal{Ric}(e_1,e_1)=\textnormal{Ric}(e_2,e_2)\ge\frac{1}{4}R.
\end{equation}
Now let $\gamma_t:[0,H(t)]\rightarrow M$ be a minimizing geodesic connecting $q_t$ and $\phi_t(x)$; then by \eqref{e: quarter}, \eqref{e: R(q)}, and the uniform bound on the curvature derivatives we have
\begin{equation*}
\textnormal{Ric}(\gamma_t'(r),\gamma_t'(r))\ge C^{-1}\cdot h^{-2}(t)
\end{equation*}
for all $r\in[0,C^{-1}h(t)]$.
Since $H(t)\ge4\,h(t)$, it follows that
\begin{equation*}
H'(t)=\int_0^{H(t)}\textnormal{Ric}(\gamma_t'(r),\gamma_t'(r))\,dr\ge C^{-1}\cdot h^{-1}(t).
\end{equation*}
So there are constants $C_1,C_2>0$ that only depend on $R_0$ such that the following inequalities hold for all $t\in[0,a]$,
\begin{equation}\label{e: ODE derivative assump}
\begin{cases}
H'(t)\ge C_1^{-1}\cdot h^{-1}(t)\\
h'(t)\le C_2\cdot H^{-2}(t)\cdot h(t),
\end{cases}
\end{equation}
where the second inequality is a consequence of the Ricci flow equation and Lemma \ref{l: curvature upper bound initial}.
We may assume $\frac{2C_1C_2}{e}\ge5$ and take $C_0=2C_1C_2$, then it follows by the ODE estimates \cite[Lemma 3.37]{Lai2022_O(2)} that
\begin{equation}\label{e: caseHh}
\begin{cases}
H(t)\ge C_3t+H(0)\\
h(t)\le h(0)e^{\frac{C_2}{C_3H(0)}},
\end{cases}
\end{equation}
for all $t\in[0,a]$, where $C_3=C_1^{-1} h^{-1}(0)-C_2H^{-1}(0)>0$.
So $h(t)\le h(0)\,e$, and hence $\frac{H(t)}{h(t)}\ge\frac{1}{e}\frac{H(0)}{h(0)}\ge5$.
Since $a$ is defined as a supremum, this implies $a=\infty$.
So by \eqref{e: R(q)} and \eqref{e: h(0)} we obtain $R(q_t)\ge C^{-1}$
for all $t\ge0$. Note that by \cite[Lemma 3.19]{Lai2022_O(2)} we have that $q_t\rightarrow\infty$ as $t\rightarrow\infty$.
So this implies
$\lim_{s\rightarrow\infty}R(\Gamma(s))\ge C^{-1}$, and thus proves the lemma.
\end{proof}
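As a quick illustration (not part of the proof), the comparison system behind \eqref{e: ODE derivative assump} and \eqref{e: caseHh} can be integrated numerically: with a large initial ratio $H(0)/h(0)$, the distance $H$ grows roughly linearly while the warping value $h$ stays bounded, so the ratio $H/h$ never drops below a fixed threshold. All constants and initial data below are arbitrary sample values, not the constants $C_1,C_2$ of the proof.

```python
# Illustrative numerical check of the comparison ODE system behind the
# bootstrap argument:
#   H'(t) = c1 / h(t),            (worst case of H' >= C1^{-1} h^{-1})
#   h'(t) = c2 * h(t) / H(t)**2   (worst case of h' <= C2 H^{-2} h)
# The constants and initial data are arbitrary sample values.

def integrate(H0, h0, c1, c2, T=100.0, n=200_000):
    """Forward-Euler integration of the comparison system on [0, T]."""
    dt = T / n
    H, h = H0, h0
    for _ in range(n):
        H, h = H + dt * c1 / h, h + dt * c2 * h / (H * H)
    return H, h

c1, c2 = 1.0, 1.0
H0, h0 = 20.0, 1.0   # large initial ratio H(0)/h(0), as arranged in the proof
H, h = integrate(H0, h0, c1, c2)

# Expected behavior: H grows roughly linearly, h stays bounded,
# so the ratio H/h stays bounded away from the threshold.
print(H, h, H / h)
```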
The next lemma shows that the distance between any two points that are not on $\Gamma$ will stay bounded under the backwards Ricci flow, see also \cite[Theorem 3.39]{Lai2022_O(2)}.
\begin{lem}\label{l: two points stay bounded}
Let $(M,g,f,p)$ be a 3D steady gradient Ricci soliton on $\mathbb{R}^3$. Then for any $x_1,x_2\in M\setminus\Gamma$, there exists $C>0$ (which may depend on $x_1,x_2$ and $(M,g)$) such that $d_{g(t)}(x_1,x_2)=d_g(\phi_t(x_1),\phi_t(x_2))<C$ for all $t\ge0$.
\end{lem}
\begin{proof}
Let $C>0$ denote all constants whose values may change from line to line.
First, by $\textnormal{Rm}\ge0$ we see that $d_g(\phi_t(x_j),\Gamma)$ increases in $t$, for $j=1,2$. Moreover, by \cite[Theorem 1.5]{Lai2022_O(2)} it is not hard to see that
\begin{equation}\label{e: grows linearly}
d_g(\phi_t(x_j),\Gamma)\ge d_g(x_j,\Gamma)+C^{-1}t\ge C^{-1}(t+1).
\end{equation}
So by Lemma \ref{l: curvature upper bound initial} (quadratic curvature decay) we see that $R(x)\le\frac{C}{(t+1)^2}$ holds for all $x\in B_g(\phi_t(x_j),C^{-1}(t+1))$, for each $j=1,2$. So by Perelman's distance distortion estimate \cite[8.3(b)]{Pel1} we have
\begin{equation*}
\frac{d}{dt}d_g(\phi_t(x_1),\phi_t(x_2))\le\frac{C}{t+1},
\end{equation*}
integrating which we obtain
\begin{equation}\label{e: lnt}
d_g(\phi_t(x_1),\phi_t(x_2))\le d_g(x_1,x_2)+C\ln (t+1).
\end{equation}
Therefore, for any sufficiently large $t$, let $\gamma_t:[0,1]\rightarrow M$ be a minimizing geodesic between $\phi_t(x_1),\phi_t(x_2)$, by \eqref{e: grows linearly} and the triangle inequality we have
\begin{equation*}
d_g(\gamma_t([0,1]),\Gamma)\ge d_g(\phi_t(x_1),\Gamma)-d_g(\phi_t(x_1),\phi_t(x_2))> C^{-1}(t+1).
\end{equation*}
So by Lemma \ref{l: curvature upper bound initial} we have $\sup_{s\in[0,1]}R(\gamma_t(s))\le \frac{C}{(t+1)^2}$, and hence \eqref{e: lnt} implies
\begin{equation*}
\frac{d}{dt}d_g(\phi_t(x_1),\phi_t(x_2))\le\int_{\gamma}\textnormal{Ric}(\gamma_t'(s),\gamma_t'(s))\,ds\le \frac{C\ln (t+1)}{(t+1)^2}\le \frac{C}{(t+1)^{\frac{3}{2}}},
\end{equation*}
integrating which proves the lemma.
\end{proof}
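The convergence used in the last step can be checked numerically: even the intermediate bound $\frac{d}{dt}d_g\le C\ln(t+1)/(t+1)^2$ already integrates to a finite total change of distance, since $\int_0^{\infty}\ln(t+1)/(t+1)^{2}\,dt=1$. The script below (with $C=1$) is only an illustration of this integrability.

```python
import math

# Illustrative check (with C = 1) that the derivative bound
#   d/dt d_g(phi_t(x_1), phi_t(x_2)) <= ln(t+1) / (t+1)^2
# integrates to a finite total change of distance: the exact value of the
# integral over [0, infinity) is 1, so the distance stays bounded.

def total_change(T=1.0e4, n=1_000_000):
    """Midpoint-rule approximation of the integral of ln(t+1)/(t+1)^2 on [0, T]."""
    dt = T / n
    s = 0.0
    for k in range(n):
        t = (k + 0.5) * dt
        s += math.log(t + 1.0) / (t + 1.0) ** 2 * dt
    return s

total = total_change()
print(total)  # close to 1; the tail beyond T contributes about ln(T)/T
```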
Lastly, we prove the main result in this section, which gives a condition for $R$ to be stable along $\Gamma$.
More precisely, it says that if $R(\Gamma(s_0))\ge R_0$ for some sufficiently large $s_0$ depending on $R_0$, then the value of $R(\Gamma(s))$ barely drops for $s\in [s_0,\infty)$, so that $\lim_{s\rightarrow\infty}R(\Gamma(s))$ is sufficiently close to $R_0$.
The proof relies on Lemma \ref{l: looks like RxCigar} which allows us to convert the comparison of $R$ to that of the warping functions.
\begin{prop}\label{l: R does not change too much}
For any $R_{\#}\in(0,1],\epsilon>0$, there exists $D(R_{\#},\epsilon)>0$ such that the following holds:
Let $(M,g,f,p)$ be a $\mathbb{Z}_2\times O(2)$-symmetric 3D steady gradient Ricci soliton with positive curvature.
Suppose $R(p)=1$.
Suppose also that there is $s_0>D(R_{\#},\epsilon)$ such that $R(\Gamma(s_0))=R_0\ge R_{\#}$.
Then for all $s\in\mathbb{R}$, $|s|\ge s_0$, we have
\begin{equation*}
R_0(1-\epsilon)\le R(\Gamma(s))\le R_0.
\end{equation*}
\end{prop}
\begin{proof}
Let $\delta>0$ be a constant that we shall take arbitrarily small, and let $\epsilon>0$ denote all constants such that $\epsilon\ri0$ as $\delta\ri0$. Let $D,C>0$ denote all constants depending on $R_{\#}$ and $\delta$.
Let $R_{\infty}:=\lim_{s\rightarrow\infty}R(\Gamma(s))=\lim_{s\rightarrow-\infty}R(\Gamma(s))$.
First, by Lemma \ref{l: a rough lower bound C} we see that $R_{\infty}\ge C^{-1}>0$.
So by Lemma \ref{l: looks like RxCigar} we may assume $D$ to be sufficiently large so that for any $|s|>D$, the manifold $(M, R(\Gamma(s))g,\Gamma(s))$ is $\delta$-close to $(\mathbb{R}\times\textnormal{Cigar},(0,x_{tip}))$.
So we can find two points $x_1,x_2\in M$ such that $d_g(x_1,\Gamma),d_g(x_2,\Gamma)\ge\epsilon^{-1}$, and
\begin{equation}\label{e: h(x) and R}
|\varphi(x_1)-2(R_0)^{-1/2}|\le\epsilon,\quad\textit{and}\quad |\varphi(x_2)-2(R_{\infty})^{-1/2}|\le\epsilon.
\end{equation}
Next, by using $R_{\infty}\ge C^{-1}$ and \eqref{e: quarter} we can deduce $\frac{d}{dt}d_g(\phi_t(x_j),\Gamma)\ge C^{-1}$,
integrating which we have $d_g(\phi_t(x_j),\Gamma)\ge d_g(x_j,\Gamma)+C^{-1}t$, for $j=1,2$.
Combining this with Lemma \ref{l: curvature upper bound initial} (quadratic curvature decay), we obtain
\begin{equation}\label{e: R(phi_t(x_j))}
R(\phi_t(x_j))\le\frac{C}{d_g^2(\phi_t(x_j),\Gamma)}\le\frac{C}{(C^{-1}t+d_g(x_j,\Gamma))^2}\le\frac{C}{(C^{-1}t+\epsilon^{-1})^2}.
\end{equation}
Since $2\pi\cdot\varphi(\phi_t(x))$ is equal to the $g$-length of the $S^1$-orbit at $\phi_t(x)$, which is equal to the $g(-t)$-length of the $S^1$-orbit at $x$, it follows by the Ricci flow equation and $\textnormal{Rm}\ge0$ that
\begin{equation*}
0\le\frac{d}{dt}\varphi(\phi_t(x_j))\le C\,R(\phi_t(x_j))\,\varphi(\phi_t(x_j)),
\end{equation*}
integrating which and using \eqref{e: R(phi_t(x_j))} we obtain
\begin{equation}\label{e: compare h(x) and h(phi_t(x))}
\varphi(x_j)\le\varphi(\phi_t(x_j))\le(1+\epsilon)\varphi(x_j).
\end{equation}
Since $d_g(\phi_t(x_j),\Gamma)\rightarrow\infty$ for $j=1,2$, by \cite[Theorem 1.5]{Lai2022_O(2)} we see that
the manifold is $\epsilon$-close to $\mathbb{R}^2\times S^1$ at $\phi_t(x_1)$ for all sufficiently large $t$. So by Lemma \ref{l: two points stay bounded} it is easy to see that
\begin{equation*}
(1-\epsilon)\varphi(\phi_t(x_2))\le \varphi(\phi_t(x_1))\le(1+\epsilon)\varphi(\phi_t(x_2)).
\end{equation*}
Combining this with \eqref{e: compare h(x) and h(phi_t(x))} and \eqref{e: h(x) and R} we obtain
\begin{equation}\label{e: suffices}
R_0(1-\epsilon)\le R_{\infty}\le R_0(1+\epsilon),
\end{equation}
Since $R$ is monotone along $\Gamma$, for $|s|\ge s_0$ we have $R_{\infty}\le R(\Gamma(s))\le R(\Gamma(s_0))=R_0$, which together with \eqref{e: suffices} proves the proposition.
\end{proof}
\section{Proof of main results}
In this section we prove Theorem \ref{t: flying wing with prescribed angles} and \ref{t: theorem compactness}.
\begin{proof}[Proof of Theorem \ref{t: flying wing with prescribed angles}]
First, as in the proof of \cite[Theorem 1.1]{Lai2020_flying_wing},
we can find a sequence of smooth families of $\mathbb{Z}_2\times O(2)$-symmetric expanding gradient Ricci solitons $(M_{i,\mu},g_{i,\mu},p_{i,\mu}),\mu\in[0,1]$, $i\in\mathbb{N}$, with positive curvature operator \cite{De15}, which satisfies the following conditions,
\begin{enumerate}
\item $R(p_{i,\mu})=1$ for all $i\in\mathbb{N}$ and $\mu\in[0,1]$;
\item $(M_{i,0},g_{i,0},p_{i,0})$ are rotationally symmetric for all $i$, and $(M_{i,0},g_{i,0},p_{i,0})$ smoothly converge to the Bryant soliton as $i\rightarrow\infty$;
\item $(M_{i,1},g_{i,1},p_{i,1})$ smoothly converge to $\mathbb{R}\times\textnormal{Cigar}$ as $i\rightarrow\infty$;
\item For any sequence $\mu_i\in[0,1]$, a subsequence of $(M_{i,\mu_i},g_{i,\mu_i},p_{i,\mu_i})$ smoothly converges to a $\mathbb{Z}_2\times O(2)$-symmetric 3D steady gradient Ricci soliton on $\mathbb{R}^3$.
\end{enumerate}
By abuse of notation, we will use $\Gamma$ to denote the unit speed complete geodesic in any expanding gradient soliton $(M_{i,\mu},g_{i,\mu},p_{i,\mu})$ that is fixed by the $O(2)$-isometry.
For any $\theta\in(0,\pi)$, let $R_0=\sin^2\frac{\theta}{2}\in(0,1)$, we now construct a 3D flying wing $(M_{\infty},g_{\infty},p_{\infty})$ such that $R(p_{\infty})=1$ and $\lim_{s\rightarrow\infty}R(\Gamma(s))=\lim_{s\rightarrow-\infty}R(\Gamma(s))=R_0$.
First,
by Proposition \ref{l: R does not change too much} we can choose a sequence of numbers $\{s_k\}_{k=0}^{\infty}$ so that $s_k\rightarrow\infty$ as $k\rightarrow\infty$, and if $R(\Gamma(s_j))\ge R_0$ holds in a $\mathbb{Z}_2\times O(2)$-symmetric 3D steady gradient Ricci soliton on $\mathbb{R}^3$, then the following will hold for all $s\ge s_j$,
\begin{equation}\label{e: choice of s_j}
R(\Gamma(s))\ge (1+(j+1)^{-1})^{-1}R(\Gamma(s_j)).
\end{equation}
Since $\lim_{s\rightarrow\infty}R(\Gamma(s))=0$ in the Bryant soliton,
we may take $s_k$ to be sufficiently large so that $R(\Gamma(s_k))<R_0$ in the Bryant soliton. We also see that $R(\Gamma(s_k))=1>R_0$ in $\mathbb{R}\times\textnormal{Cigar}$. So by conditions (2) and (3), we can find a $\mu_{i,k}\in(0,1)$ for each fixed $k$ and all sufficiently large $i$ so that $R_{g_{i,\mu_{i,k}}}(\Gamma(s_k))=R_0$.
By conditions (1) and (4), for each fixed $k$, we may assume after passing to a subsequence that the expanding gradient Ricci solitons $(M_{i,\mu_{i,k}},g_{i,\mu_{i,k}},p_{i,\mu_{i,k}})$ smoothly converge to a $\mathbb{Z}_2\times O(2)$-symmetric 3D flying wing
$(M_k,g_k,p_k)$, which satisfies $R_{g_{k}}(p_k)=1$ and $R_{g_{k}}(\Gamma(s_k))=R_0$.
So by the monotonicity of $R$ along $\Gamma$ we have $R_{g_k}(\Gamma(s))\ge R_0$, and hence by \eqref{e: choice of s_j} we obtain that for each $j=1,...,k$, and for all $s_{j-1}\le s\le s_k$,
\begin{equation*}
R_0\le R_{g_k}(\Gamma(s))\le R_{g_k}(\Gamma(s_{j-1}))\le(1+j^{-1})R_{g_k}(\Gamma(s_k))=(1+j^{-1})R_0.
\end{equation*}
By Lemma \ref{compactness to a steady soliton} we may assume after passing to a subsequence that the 3D flying wings $(M_k,g_k,p_k)$ smoothly converge to a $\mathbb{Z}_2\times O(2)$-symmetric 3D flying wing $(M_{\infty},g_{\infty},p_{\infty})$, which satisfies $R_{g_{\infty}}(p_{\infty})=1$ and the following holds for all $j\in\mathbb{N}_+$,
\begin{equation*}
R_0\le R_{g_{\infty}}(\Gamma(s))\le(1+j^{-1})R_0, \quad\textit{for all}\quad s\ge s_{j-1}.
\end{equation*}
In particular, this implies $\lim_{s\rightarrow\infty}R_{g_{\infty}}(\Gamma(s))=\lim_{s\rightarrow-\infty}R_{g_{\infty}}(\Gamma(s))=R_0$.
So by Lemma \ref{l: quantitative relation} we see that $(M_{\infty},g_{\infty})$ is asymptotic to a sector with angle $\theta$.
\end{proof}
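The correspondence $R_0=\sin^2\frac{\theta}{2}$ between the asymptotic cone angle and the limit of $R$ along $\Gamma$ (via Lemma \ref{l: quantitative relation}, with $R$ normalized to $1$ at the critical point) can be made concrete in a short script; this is only a restatement of the formula used above, with no additional assumptions.

```python
import math

# The angle/curvature correspondence from the construction: a flying wing
# asymptotic to a sector of angle theta has
#   lim_{s -> infinity} R(Gamma(s)) = R_0 = sin^2(theta / 2).

def limit_scalar_curvature(theta):
    """R_0 as a function of the asymptotic cone angle theta in (0, pi]."""
    return math.sin(theta / 2.0) ** 2

def cone_angle(R0):
    """Inverse map: the asymptotic cone angle realized by a given R_0."""
    return 2.0 * math.asin(math.sqrt(R0))

# Endpoints: theta -> pi gives R_0 = 1 (the R x Cigar limit), and
# theta -> 0 gives R_0 = 0 (the Bryant soliton limit).
print(limit_scalar_curvature(math.pi), cone_angle(0.5))
```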
Now we prove Theorem \ref{t: theorem compactness}.
\begin{proof}[Proof of Theorem \ref{t: theorem compactness}]
Let $(M_i,g_i,f_i,p_i)$ be a sequence of 3D steady gradient Ricci solitons whose asymptotic cone angles are $\alpha_i$ and $\lim_{i\rightarrow\infty}\alpha_i=\alpha$.
Then by Lemma \ref{compactness to a steady soliton}, any converging subsequence of $(M_i,g_i,f_i,p_i)$ converges to a 3D steady gradient Ricci soliton $(M,g,f,p)$ on $\mathbb{R}^3$.
First, assume $\alpha=0$. Suppose by contradiction that the asymptotic cone angle of $(M,g,f,p)$ is equal to some $\beta>0$, which by Lemma \ref{l: quantitative relation} implies $\lim_{s\rightarrow\infty}R(\Gamma(s))=\sin^2\frac{\beta}{2}>0$. So for any $s_0>0$, we have
\begin{equation*}
\lim_{i\rightarrow\infty}R_{g_i}(\Gamma_i(s_0))=R(\Gamma(s_0))\ge\sin^2\frac{\beta}{2}.
\end{equation*}
So by Proposition \ref{l: R does not change too much} there exists $C>0$ such that $\lim_{s\rightarrow\infty}R_{g_i}(\Gamma_i(s))\ge C^{-1}$ holds for all sufficiently large $i$. So by Lemma \ref{l: quantitative relation} we have $\liminf_{i\rightarrow\infty}\alpha_i>0$, which contradicts the assumption $\alpha=0$.
Therefore, we have $\beta=0$.
Moreover, by \cite[Theorem 1.1]{Lai2022_O(2)}, it follows that $(M,g,f,p)$ is isometric to the Bryant soliton.
So we may assume $\alpha>0$. Then by Lemma \ref{l: quantitative relation} we have $\lim_{i\rightarrow\infty}\lim_{s\rightarrow\infty}R_{g_i}(\Gamma_i(s))=\sin^2\frac{\alpha}{2}$.
Therefore, by applying Proposition \ref{l: R does not change too much} in each $(M_i,g_i,f_i,p_i)$ we see that for any $\epsilon>0$, there exists $s_0>0$ and $N\in\mathbb{N}$ such that for all $s\ge s_0$ and $i\ge N$, we have
\begin{equation*}
\left|R_{g_i}(\Gamma_i(s))-\sin^2\frac{\alpha}{2}\right|\le\epsilon.
\end{equation*}
Passing this to the limit we obtain
\begin{equation*}
|R(\Gamma(s))-\sin^2\frac{\alpha}{2}|\le\epsilon
\end{equation*}
for all $s\ge s_0$ in $(M,g,f,p)$. Letting $\epsilon\ri0$, we get $\lim_{s\rightarrow\infty}R(\Gamma(s))=\sin^2\frac{\alpha}{2}$, which proves the theorem by Lemma \ref{l: quantitative relation}.
\end{proof}
\section{Introduction}
Atoms and molecules in electronically excited states are considered ``metastable" if transitions to lower-lying energy states are forbidden by electric-dipole selection rules. Because of their long natural lifetimes ($\geq 10^{-5}$ s) and their high internal energy ($\geq$ 10 eV), the metastable states of the noble gases play an important role in a variety of environments including planetary atmospheres, flames and plasmas \cite{Makabe1992,Falcinelli2015}.
Over the years, there has been continuing interest in the use of metastable species for a variety of applications \cite{Gay1996}. For example, metastable noble-gas species serve as sensitive probes in surface analysis techniques, such as metastable atom electron spectroscopy (MAES) and metastable de-excitation spectroscopy (MDS) \cite{Harada1997, Onellion1984}. Metastable noble gases have laser-accessible, electric-dipole-allowed transitions to higher-lying states which also make them particularly suitable for use in laser cooling \cite{Vassen2012}. Applications of such laser-cooled species include the production of ultracold, quantum degenerate gases, precision spectroscopy and atomic interferometry \cite{Vassen2012} (and references therein), \cite{Vassen2016}. Traditionally, metastable helium atoms have also been popular test systems for new laser cooling schemes, e.g., for sub-Doppler cooling by velocity-selective coherent population trapping \cite{Aspect1988, Hack2000}, for white-light cooling \cite{Rasel1999} and for bichromatic cooling \cite{Cashen2001,Partlow2004,Corder2015}. The velocity of metastable helium atoms in a supersonic beam has also been successfully manipulated using the Zeeman deceleration technique which does not rely on laser cooling \cite{Dulitz2015a}.
Metastable noble gases have been the subject of many fundamental studies of autoionizing collisions at thermal and at ultracold temperatures \cite{Vassen2012, Siska1993}. For example, recent studies of reactive collisions with metastable noble gases in merged supersonic beams have enabled the observation of quantum resonances and stereodynamic effects \cite{Henson2012, Gordon2017}. In fact, quantum degeneracy of metastable noble gases could only be achieved by electron-spin polarizing the atoms, which suppresses ionizing collisions \cite{Fedichev1996, Herschbach2000}.
Metastable noble gases can be produced in a variety of ways, including electron-beam bombardment, discharge excitation, charge transfer, optical pumping, and thermal excitation \cite{Gay1996}. Among the sources, the electron-beam bombardment and discharge sources are most commonly used. Since the electron excitation energies are typically well above threshold in order to maximize the metastable production rate \cite{Rundel1974a,Dunning1975}, two metastable states are populated simultaneously, the 2$^3$S$_1$ and 2$^1$S$_0$ states in the case of helium and the $^3$P$_0$ and $^3$P$_2$ states in the case of the heavier noble gases. However, many experiments require either pure beams of a single metastable species or a precise knowledge of the relative state populations. Beam purification can be achieved in several ways, e.g., by beam deflection in a magnetic field \cite{Weiser1987}, coherent momentum transfer \cite{Theuer1998} or by transverse optical deflection \cite{Aspect1990}. State purification is also achieved by optical pumping (often denoted as ``quenching"). In this scheme, the population of one metastable state is transferred to the electronic ground state via optical excitation to a higher-lying electronic state, whose decay to the electronic ground state is strongly allowed for electric-dipole radiation. There exist efficient optical pumping schemes for atomic beams of metastable helium, neon, argon and krypton \cite{Fry1969, Hotop1969a, Hotop1981, Harada1987,Dunning1975, Gaily1980, Verheijen1986, Brand1992, Kau1998, Thiel2004}. The de-excitation of metastable helium atoms in the 2$^1$S$_0$ state has thus far only been attained in a supersonic beam by illuminating the atomic beam with 2058 nm light from a helium discharge lamp, which is resonant with the 2$^1$P$_1 \leftarrow 2^1$S$_0$ transition \cite{Fry1969, Kato2012, Hotop1969a, Hotop1981, Harada1987}. 
The use of a discharge lamp requires a complex, bulky setup with a coil-shaped, water-cooled lamp that encloses the supersonic beam inside the vacuum chamber. Apart from its complexity, such a setup can strongly perturb the supersonic flow and thus degrade the performance of the beam source. If the water-cooling is insufficient, the lamp may overheat, preventing continuous operation.
In this work, we demonstrate an original and very efficient approach to the optical depletion of He(2$^1$S$_0$) in a supersonic beam using optical excitation via the 4$^1$P$_1 \leftarrow 2^1$S$_0$ transition at a vacuum wavelength of 396.58509 nm \cite{Martin1960}. This scheme is based on simple and inexpensive diode laser technology, which makes it suitable for many different experimental applications. We provide a detailed experimental characterization of the technique, including an examination of the quenching efficiency as a function of laser power, detuning from resonance and helium beam velocity. We also give an account of the numerical calculations used for the determination of the depletion efficiency.
\section{Experiments}
A schematic drawing of the experimental setup, which consists of the optical system and a vacuum apparatus, is shown in Fig. \ref{fig:setup}. In the following, both parts will be described in detail.
\begin{figure}[h!]
\includegraphics[width=\linewidth]{fig1.eps}
\caption{Schematic drawing of the experimental setup which consists of the optical system (blue box) and a vacuum apparatus (red box). Abbreviations: OI = 30 dB optical isolator, L1--L5 = lenses, M1--M6 = mirrors, HWP = half-wave plate, QWP = quarter-wave plate, PBS = polarizing beamsplitter, FPI = Fabry-P\'{e}rot interferometer, PD = photodiode, MS = mechanical shutter, NDF = neutral-density filter, PID = proportional-integral-derivative controller, GND = chassis ground, FC = Faraday cup.}
\label{fig:setup}
\end{figure}
\subsection{Vacuum Apparatus}
Experiments are carried out in a set of three vacuum chambers. Inside the first chamber, a pulsed supersonic beam of metastable helium atoms is produced by a He gas expansion into vacuum and subsequent discharge excitation. The supersonic beam is generated by a high-intensity, short-pulse solenoid valve (CRUCS, $d = 100$ $\mu$m orifice diameter, $40^{\circ}$ cone, copper body) whose characteristics are described in Ref. \cite{Grzesiak2018}. The temperature of the valve is controlled by a cryocooler (CTI, 350CP) and is typically regulated to a set value to within 0.1 K using proportional-integral-derivative (PID)-controlled resistive heating (LakeShore Model 325). The discharge unit used for the excitation of ground-state helium atoms into the metastable 2$^3$S$_1$ and 2$^1$S$_0$ states is described in a previous publication \cite{Grzesiak2019} and is thus not detailed here. A pulse duration of $\leq 30\,\mu$s is inferred from the sudden decrease of the plate voltage upon discharge excitation. At each valve temperature, the He stagnation pressure and the settings at the valve driver were adjusted to maximize the metastable helium signal intensity and to avoid bouncing of the valve plunger. For the measurements on optical quenching described here, valve stagnation pressures between 10--30 bar were used.
Laser excitation is done at a distance of $z_0 \approx 130$ mm from the valve exit, in close proximity to the skimmer ($b = 1$ mm diameter, see Fig. \ref{fig:setup}). This arrangement avoids a repopulation of the 2$^1$S$_0$ state by the discharge and it ensures a good spatial overlap between the laser beam and the He beam, with the tip of the skimmer serving as a handy tool for the alignment of the laser beam with respect to the He beam. After laser excitation, the beam of metastable helium atoms is monitored on a copper plate, which serves as a Faraday-cup-type detector \cite{Hotop1996} (denoted as FC 1 in Fig. \ref{fig:setup}), inside the second chamber. The signal is amplified using a transimpedance amplifier (Femto, DLPCA-200, $10^6$ V/A gain, 500 kHz). The detection efficiency of such a detector is not well known ($\approx 50$ \% \cite{Siska1993}). However, only relative signal intensities with and without laser irradiation are measured so that the absolute detection efficiency is not important for this work. To avoid the detection of ions and atoms in Rydberg states produced during the discharge process, the skimmer is biased to a voltage of -300 V relative to chassis ground.
Another set of Faraday-cup-type detectors (FC 2 and FC 3 in Fig. \ref{fig:setup}) is placed inside a third chamber. Since their relative distance is accurately known (224.5$\pm$0.5 mm), the longitudinal velocity of the supersonic beam of metastable helium atoms can be inferred from the difference $\Delta t$ of the time-of-flight signal intensities at the two detectors \cite{Grzesiak2018}.
During the experimental runs, the three vacuum chambers are held at pressures of approximately \mbox{$2 \cdot 10^{-6}$ mbar}, \mbox{$2 \cdot 10^{-7}$ mbar} and \mbox{$3 \cdot 10^{-10}$ mbar}, respectively.
\subsection{Optical System}
Laser radiation at 397 nm, used to drive the 4$^1$P$_1 \leftarrow 2^1$S$_0$ transition, is generated by a standard, home-built external-cavity diode laser system (NDU4316 laser diode by Nichia). To allow for straightforward frequency tuning of the diode laser, the laser frequency is locked to the output of a wavelength meter (High Finesse, WS7, 60 MHz absolute accuracy, 2 MHz relative accuracy) using digital PID control. For this, the difference between the measured wavelength and the user-set wavelength is converted into an error signal and a corresponding analog voltage using MATLAB code and a digital-to-analog converter (DAC). This voltage is used to control the piezo element which adjusts the laser grating, and hence, the laser wavelength. The mean sampling rate and the stability of the laser lock are $\approx 100$ Hz and 2 MHz, respectively.\footnote{In this estimate, frequency drifts caused by thermal fluctuations inside the wavelength meter itself are disregarded.} The absolute frequency of the wavelength meter was initially calibrated by comparing the wavenumbers of the two hyperfine-structure components and the crossover resonance in the $^7$Li D$_2$ line \cite{Sansonetti1995}, obtained by Doppler-free frequency-modulation spectroscopy, with the output of the wavelength meter. During the optical depletion experiments, the resonance frequency is obtained from the measured line profile of the 4$^1$P$_1 \leftarrow 2^1$S$_0$ transition.
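The digital lock described above can be sketched as a discrete PID controller update. The sketch below is a generic illustration only: the gains, the time step, and the class name are illustrative assumptions, and the interfaces to the wavelength meter and the DAC are left out (the actual lock runs as MATLAB code at a $\approx 100$ Hz sampling rate).

```python
# Minimal sketch of the discrete PID update used for a wavelength-meter
# lock. Gains and time step are illustrative, not the experimental values.

class WavelengthLockPID:
    """Discrete PID controller: frequency error in -> piezo correction out."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """One lock iteration (~100 Hz in the experiment described above)."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

In the actual lock, `error` would be the difference between the wavelength-meter reading and the user-set wavelength, and the returned value would be written to the DAC driving the grating piezo.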
A Fabry-P\'{e}rot interferometer (FPI, finesse $\mathcal{F} \approx 48$) and a photodiode (PD) are used to monitor the single-mode operation of the laser. A lens is attached to the entrance of the FPI unit to achieve the mode-matching for the cavity (not shown in Fig. \ref{fig:setup}). In addition to that, a quarter-wave plate (QWP) is inserted in front of the interferometer to prevent back-reflections into the laser diode.
The remaining laser radiation is overlapped with the He beam at right angles to minimize Doppler broadening of the 4$^1$P$_1 \leftarrow 2^1$S$_0$ transition originating from the longitudinal velocity distribution of the supersonic beam. To increase the interaction time with the sample, the laser beam is retro-reflected through the He beam using a high-reflectivity mirror (M6 in Fig. \ref{fig:setup}). The mirror is located inside the vacuum chamber to avoid transmission losses through the optical windows of the vacuum chamber. The incoming light is sent at a very small vertical angle with respect to the retro-reflected beam to avoid back reflections into the diode laser.
To minimize the influence of shot-to-shot fluctuations of the helium beam source, the laser beam was toggled between open and closed between subsequent shots of the pulsed valve using a fast mechanical shutter (SRS, SR475). A servo-motor-controlled neutral-density filter wheel allowed for a fast and accurate setting of the laser power (between 0--50 mW) admitted into the chamber. The laser power was measured on a power meter (Coherent, OP-2 VIS) outside the vacuum chamber. Transmission losses at the entrance window into the vacuum chamber (9 \%) were taken into account in the analysis. The semi-axes of the elliptical laser beam parallel and perpendicular to the He beam propagation axis were determined at the position of the interaction region as $a = 0.54$ mm and $b = 0.23$ mm (intensity full width at half maximum (FWHM)), respectively, using a beam profiler (LaserCam-HR, Coherent).
\section{Numerical Calculations}
To quantify the depletion efficiency, we have numerically solved the rate equations for the system using Mathematica and Matlab codes following standard procedures. Even though excitation to the 4$^1$P$_1$ state involves decay routes via eight electronic states, as shown in Fig. \ref{fig:ratemodel}, the process can be accurately described by a three-level model (marked in blue color in Fig. \ref{fig:ratemodel}). This can be rationalized by considering that the Einstein A coefficient for the 4$^1$P$_1 \rightarrow 1^1$S$_0$ transition is a factor of 35 higher than for all other transitions from the 4$^1$P$_1$ state. A direct comparison also shows that the population difference obtained from the solutions to the nine-level model and to the three-level model is negligible (less than 1 \%) under all conditions studied here.
\begin{figure}[h!]
\includegraphics[width=\linewidth]{fig2.eps}
\caption{Schematic representation of the He energy levels and the transitions involved in the optical depletion scheme. The 4$^1$P$_1 \leftarrow 2^1$S$_0$ transition at 397 nm is shown as an arrow in red and blue color. All electric-dipole-allowed decay routes from the 4$^1$P$_1$ state are also indicated as arrows. The linestyle of these arrows provides a rough estimate of the transition probability, expressed as Einstein A coefficients. All transitions taken into account in the three-level model are shown in blue color.}
\label{fig:ratemodel}
\end{figure}
The rate equations for interaction with non-polarized light can be written as follows \cite{Metcalf1999, Steck2001, Budker2004}:
\begin{subequations}\label{eq:rateeq}
\begin{align}
\frac{\mathrm{d}N_{i}}{\mathrm{d}t} &= -\Gamma_{ie}N_{i}(t) +A_{ei}N_{e}(t) +\Gamma_{ei}N_{e}(t)\\
\frac{\mathrm{d}N_{e}}{\mathrm{d}t} &= +\Gamma_{ie}N_{i}(t) -A_{ei}N_{e}(t) -\Gamma_{ei}N_{e}(t) -A_{ef}N_{e}(t)\\
\frac{\mathrm{d}N_{f}}{\mathrm{d}t} &= +A_{ef}N_{e}(t),
\end{align}
\end{subequations}
where $N_{i}$, $N_{e}$ and $N_{f}$ denote the populations of the 2$^1$S$_0$ state, the 4$^1$P$_1$ state and the 1$^1$S$_0$ state, respectively. At $t = 0$, all the population is assumed to be in state $i$, i.e., $N_{i} = 1$. The Einstein A coefficients were taken from the NIST Atomic Spectra Database \cite{NIST_ASD}. During laser irradiation, the pump rate $\Gamma_{ei} = \Gamma_{ie}$ is given by
\begin{equation}\label{eq:gamma}
\Gamma_{ei} = \frac{3c^2}{2h\pi\nu_0^3}\frac{P_\mathrm{l}}{A_\mathrm{l}}\frac{1}{1+(4\pi\delta\nu/\left(A_{ei}+A_{ef}\right))^2}.
\end{equation}
It depends on the incident laser power $P_\mathrm{l}$, the transition frequency $\nu_0$, the frequency detuning from resonance $\delta\nu$ and the interaction area $A_\mathrm{l}$. To simplify the calculation, an elliptically shaped light beam with a homogeneous intensity distribution is assumed, so that $A_\mathrm{l} = \pi a b$, where $a$ and $b$ are the semi-axes of the laser beam at FWHM (see above). A solution of the rate equations in the presence of the light field, and taking into account the population relaxation into a new equilibrium after laser irradiation (where $\Gamma_{ei} = 0$), results in a Lorentzian line profile which accounts for the natural linewidth and for power broadening.
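As a rough illustration of this rate-equation model, the three coupled equations above can be integrated numerically with simple Euler stepping. In the sketch below, the pump rate, the Einstein A coefficients and the interaction time are order-of-magnitude placeholders chosen for illustration; they are not the NIST values or the exact conditions used in the actual calculations.

```python
# Illustrative Euler integration of the three-level rate equations above.
# All rate constants are order-of-magnitude placeholders, NOT NIST values.

def integrate_rate_equations(gamma, a_ei, a_ef, t_int, n_steps=20000):
    """Return (N_i, N_e, N_f) after an interaction time t_int.

    gamma -- pump rate Gamma_ei = Gamma_ie during laser irradiation (1/s)
    a_ei  -- Einstein A coefficient for 4^1P_1 -> 2^1S_0 decay (1/s)
    a_ef  -- Einstein A coefficient for 4^1P_1 -> 1^1S_0 decay (1/s)
    """
    dt = t_int / n_steps
    n_i, n_e, n_f = 1.0, 0.0, 0.0   # all population starts in 2^1S_0
    for _ in range(n_steps):
        dn_i = -gamma * n_i + (a_ei + gamma) * n_e
        dn_e = gamma * n_i - (a_ei + gamma + a_ef) * n_e
        dn_f = a_ef * n_e
        # dn_i + dn_e + dn_f = 0, so the total population is conserved
        n_i += dn_i * dt
        n_e += dn_e * dt
        n_f += dn_f * dt
    return n_i, n_e, n_f

# Example: strong resonant pumping over a ~1 us transit time transfers the
# 2^1S_0 population almost completely into the ground state.
populations = integrate_rate_equations(5e7, 7e6, 2.5e8, 1e-6)
```

Scanning the pump rate of such an integration over detuning reproduces the Lorentzian depletion profile described in the text.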
The line profile obtained from the procedure above is then convolved with a Gaussian distribution of standard deviation $\sigma = \sqrt{\sigma_{\mathrm{D}}^2 + \sigma_{\mathrm{l}}^2}$ to take into account Doppler broadening ($\sigma_{\mathrm{D}}$) and the laser linewidth ($\sigma_{\mathrm{l}}$) as additional line-broadening mechanisms. In our experiment, the contribution from Doppler broadening is due to the transverse velocity component of the supersonic beam, and it depends linearly on the helium beam velocity $v$. The Doppler width at FWHM is taken as \cite{Hollenstein2003}
\begin{equation}\label{eq:Doppler}
\Delta \nu_{\mathrm{D}} = 2\sqrt{2\ln(2)}\,\nu_0\frac{v\sin(\beta)}{c},
\end{equation}
where $c$ is the speed of light, and the opening angle
\begin{equation}\label{eq:openingangle}
\beta = \arctan{\left( \frac{1}{2} \frac{d+b}{z_0}\right) }
\end{equation}
depends on the orifice diameter $d$, the skimmer diameter $b$ and the distance $z_0$ between nozzle and skimmer. A Doppler shift of the spectral line is not taken into account owing to the close-to-perpendicular geometry between the supersonic beam and the laser beam. Contributions by other line-broadening mechanisms are negligibly small and thus not taken into account.
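The two expressions above can be evaluated directly. The sketch below uses the nozzle-skimmer geometry quoted earlier ($d = 100$ $\mu$m orifice, $b = 1$ mm skimmer, $z_0 = 130$ mm) and is illustrative only; with the fitted effective orifice diameter discussed in the Results section, the same expressions yield a correspondingly larger width.

```python
import math

# Sketch of the Doppler FWHM and opening angle from the two equations
# above, using the nozzle-skimmer geometry quoted in the text.

C = 299_792_458.0           # speed of light (m/s)
NU0 = C / 396.58509e-9      # 4^1P_1 <- 2^1S_0 transition frequency (Hz)

def opening_angle(d, b, z0):
    """Opening angle beta of the metastable beam."""
    return math.atan(0.5 * (d + b) / z0)

def doppler_fwhm(v, beta):
    """Doppler width (FWHM, in Hz) for a beam velocity v (m/s)."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * NU0 * v * math.sin(beta) / C

beta = opening_angle(100e-6, 1e-3, 130e-3)
width = doppler_fwhm(1070.0, beta)   # ~27 MHz for the geometric orifice size
```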
\section{Results and Discussion}
Fig. \ref{fig:TOF} (a) shows time-of-flight (TOF) traces of metastable helium atoms in the presence (blue line, signal intensity $I_{\mathrm{q}}$) and in the absence (black line, signal intensity $I_{0}$) of laser radiation resonant with the 4$^1$P$_1 \leftarrow 2^1$S$_0$ transition. Since Faraday-cup detection measures both He(2$^1$S$_0$) and He(2$^3$S$_1$) signal contributions, the signal intensity does not go to zero in the presence of laser light, even though the laser power of 38 mW used here leads to a full depletion of the 2$^1$S$_0$ state population at $v$ = 1070 m/s (see discussion below). When measuring under conditions at which the population in the $2^1$S$_0$ state is fully depleted, the remaining signal intensity originates from the He(2$^3$S$_1$) state only. Therefore, the signal ratio $I_{\mathrm{q}}/I_{0}$ can be directly related to the helium singlet-to-triplet ratio in the supersonic beam
\begin{equation}\label{eq:ratio}
r = \frac{I_{0}-I_{\mathrm{q}}}{I_{\mathrm{q}}}
\end{equation}
if the same detection efficiency is assumed for both metastable states.
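The ratio defined above follows directly from the integrated signal intensities. In the sketch below, the quenched intensity of 0.592 (relative to the unquenched signal) is an illustrative value chosen to reproduce the reported average ratio of $\approx 0.69$; it is not a measured intensity.

```python
# Singlet-to-triplet ratio from the expression above, assuming full
# depletion of the 2^1S_0 state and equal detection efficiencies for
# both metastable states. Signal values are illustrative only.

def singlet_to_triplet_ratio(i_0, i_q):
    """Ratio r = (I_0 - I_q) / I_q of He(2^1S_0) to He(2^3S_1)."""
    return (i_0 - i_q) / i_q

r = singlet_to_triplet_ratio(1.0, 0.592)   # ~0.69
```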
As can be seen from Fig. \ref{fig:TOF} (b), the singlet-to-triplet ratio is much higher at the rising edge of the helium gas pulse than at other times. This effect may be related to a more efficient \textit{collisional} quenching of He(2$^1$S$_0$) by thermal electrons in the central, higher-density part of the beam. These thermal electrons are produced during the discharge process and may lead to a de-excitation of He(2$^1$S$_0$) to He(2$^3$S$_1$) \cite{Phelps1955}. To test this assumption, we have also measured the pressure dependence of the helium singlet-to-triplet ratio (Fig. \ref{fig:stratio}). The results from this measurement clearly show that the ratio decreases as the valve stagnation pressure $p$ is increased, which suggests a higher He(2$^1$S$_0$) $\rightarrow$ He(2$^3$S$_1$) conversion rate under these conditions.
Since the optical quenching efficiency does not depend on the absolute value of the singlet-to-triplet ratio, integrated signal intensities were used for the further analysis (integration range between 250--365 $\mu$s). Using Eq. \ref{eq:ratio}, an average singlet-to-triplet ratio $\bar{r}$ of 0.69 is obtained at $v = 1070$ m/s ($p = 10$ bar). The average singlet-to-triplet ratios at higher beam velocities (0.69 at $v = 1500$ m/s and $p = 15$ bar; 0.43 at $v = 1750$ m/s and $p = 30$ bar) were inferred by comparison with the results from numerical calculations (see discussion below). The different singlet-to-triplet ratios are attributed to changes in the valve characteristics and beam properties caused by the use of other valve stagnation pressures and valve temperatures.
\begin{figure}[h!]
\includegraphics[width=\linewidth]{fig3.eps}
\caption{(a) Time-of-flight traces of metastable helium atoms ($v$ = 1070 m/s) measured at FC 1 in the presence (blue line) and in the absence (black line) of 38 mW of laser light resonant with the 4$^1$P$_1 \leftarrow 2^1$S$_0$ transition. The time delay is given with respect to the valve trigger pulse. (b) Helium singlet-to-triplet ratio (obtained using Eq. \ref{eq:ratio}) as a function of beam time-of-flight. Ratios obtained from low signal intensities are not shown for clarity. The dotted line at $r = 1$ is for visibility only.}
\label{fig:TOF}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=\linewidth]{fig4.eps}
\caption{Average helium singlet-to-triplet ratio as a function of valve stagnation pressure at $v$ = 1070 m/s and at a laser power of 30 mW.}
\label{fig:stratio}
\end{figure}
\begin{figure*}
\includegraphics[width=\linewidth]{fig5.eps}
\caption{\label{fig:quenchresults} (a, c, e) He(2$^1$S$_0$) state population obtained from experimental measurements (markers) and from numerical calculations (solid lines) as a function of detuning from resonance at different laser powers (indicated in the legend). (b, d, f) Power-dependence of the He(2$^1$S$_0$) state populations obtained from experimental measurements (markers) and from numerical calculations (solid lines) at two different detunings from resonance (indicated in the legend). These laser detunings are also marked as vertical dashed lines in (a, c, e). The helium beam velocities were $v = 1070$ m/s (a, b), $v = 1500$ m/s (c, d), and $v = 1750$ m/s (e, f). The light-blue-colored markers in (d) are obtained from a measurement at zero detuning in which the back reflection of the laser light at mirror M6 was blocked. To allow for a comparison with the other measurements, the plotted laser power was divided by a factor of two.}
\end{figure*}
To quantify the optical depletion efficiency, measurements were done at various laser detunings, laser powers and at three different helium beam velocities. The experimental results were then converted to He(2$^1$S$_0$) state populations taking into account the average helium singlet-to-triplet ratio. From the experimental results at $v = 1070$ m/s at a laser power of 38 mW in Fig. \ref{fig:quenchresults} (a), it can be seen that the state population barely changes over a range of about 230 MHz around resonance. This range is much larger than the natural linewidth of the transition ($\Delta \nu = 40$ MHz), which is a clear indication that all of the He(2$^1$S$_0$) state population is depleted over a large range of detunings. We observe that the FWHM of the depletion curves in Fig. \ref{fig:quenchresults} (a, c, e) is reduced as the laser power is decreased, which is also consistent with the measured power dependence of the He(2$^1$S$_0$) state population in Fig. \ref{fig:quenchresults} (b). This figure illustrates that the depletion efficiency increases with increasing laser power. On resonance, the population reaches a constant value at laser powers $\geq$ 10 mW, which is another clear indication of complete population transfer out of the He(2$^1$S$_0$) state and justifies the use of Eq. \ref{eq:ratio} for the determination of the singlet-to-triplet ratio at $v = 1070$ m/s. At a laser detuning of $\delta\nu =$ 170 MHz, the population does not reach a constant value, even at the highest laser power used in the experiment. At $v = 1500$ m/s and $v = 1750$ m/s, we observe similar wavelength and power dependences as for $v = 1070$ m/s. However, the quenching efficiency at low laser powers and at non-zero detunings is reduced owing to the increase in Doppler width at higher beam velocities (cf. Eq. \ref{eq:Doppler}).
To interpret the experimental results, a global fit procedure with two adjustable fit parameters was used for the numerical calculations. The first parameter, an effective orifice diameter $d_{\mathrm{eff}}$, was used to factor in the spatially broad distribution of metastable helium atoms which results from discharge excitation. The second parameter, an effective optical intensity $I_\mathrm{eff}$, was assumed to account for the inhomogeneous intensity distribution of the laser beam in the interaction zone. The inhomogeneous beam profile also results in a variation of the laser interaction time with the atoms in the supersonic beam. However, a change in laser interaction time is analogous to a variation of the laser intensity. Therefore, the laser interaction time was kept at a constant value of $t_{\mathrm{int}} = N\left(2a \right)/v$ at each helium beam velocity, where $N$ is the number of passes of the laser beam through the interaction volume. From a global fit to all the experimental datasets shown in Fig. \ref{fig:quenchresults}, we obtain $I_\mathrm{eff} = 0.004 \cdot A_\mathrm{l}/P_\mathrm{l}$ and $d_{\mathrm{eff}} = 48 d = 4.8$ mm. Given the approximations used for the model and for the fit, the results obtained from the numerical calculations (shown as solid lines in Fig. \ref{fig:quenchresults}) are in good overall agreement with the experimental data.
The Doppler linewidths resulting from $d_{\mathrm{eff}} = 4.8$ mm lie between $\sigma_{\mathrm{D}} = 99$ MHz (at $v = 1070$ m/s) and 157 MHz (at $v = 1750$ m/s). The Doppler widths are thus much larger than the laser linewidth, which we estimate as $\sigma_{\mathrm{l}} \leq 20$ MHz from the FPI fringe pattern. The true laser linewidth is probably much smaller than that, but it could not be determined to a higher accuracy owing to the low finesse of the FPI. Therefore, the uncertainty of the laser linewidth does not significantly alter the outcome of the calculations. Deviations between theory and experiment can be attributed to the assumptions made in the numerical simulation, e.g., the use of a uniform laser intensity distribution. Experimental factors, such as small wavelength drifts inside the wavelength meter due to thermal fluctuations, can also not be ruled out. The use of a double-pass configuration (mirror M6 in Fig. \ref{fig:setup}) indeed leads to a two-fold increase of the interaction time, as can be seen from the experimental results of a single-pass measurement (light-blue markers in Fig. \ref{fig:quenchresults} (d)). Since the measurements were taken on different days, the deviation from the double-pass measurement (dark-blue markers in Fig. \ref{fig:quenchresults} (d)) can be related to changes in the helium singlet-to-triplet ratio or to a different alignment of the laser beam through the chamber.
The uncertainty of the singlet-to-triplet ratio provides a lower limit to the maximum quenching efficiency at each beam velocity. At $v = 1070$ m/s, the ratio can be directly inferred from Eq. \ref{eq:ratio} (see above) and the resulting quenching efficiency is thus known to a high accuracy. At $v = 1500$ m/s and at $v = 1750$ m/s, Eq. \ref{eq:ratio} may not hold owing to the decreased depletion efficiency (as a result of the increased Doppler width). In these cases, the average singlet-to-triplet ratios were determined by comparison with the results from numerical calculations. At zero detuning, unphysical negative He(2$^1$S$_0$) state populations would be obtained at the highest laser powers if the ratios were assumed to be 10 \% lower. If the ratios were 10 \% higher, the experimental results and the numerically calculated values would not agree. Therefore, the maximum on-resonance quenching efficiencies at a laser power of 38 mW are $100^{+1}_{-0}$ \% at $v = 1070$ m/s, $98^{+2}_{-5}$ \% at $v = 1500$ m/s and $97^{+3}_{-5}$ \% at $v = 1750$ m/s.
\section{Conclusion}
In this paper, we have described an original and very efficient technique for the optical depletion of He($2^1$S$_0$) state population in a supersonic beam. This scheme is comparably inexpensive and easily implemented using commercial or home-built diode laser systems.
In the future, we are planning to use a laser lock based on saturated absorption spectroscopy. This will ensure a continuous on-resonance operation of the diode laser, and it will avoid the use of an expensive wavelength meter.
The optical depletion scheme will be especially beneficial for collision experiments. For example, in our laboratory, this setup will be used for the study of quantum-state-controlled reactive collisions between metastable helium atoms and lithium atoms. The optical quenching of He($2^1$S$_0$) will allow us to elucidate the relative contributions of the two metastable states of helium to the reaction, and it will thus make it possible to accurately describe the different reaction channels.
\section*{Acknowledgements}
We thank the group of S. Willitsch (University of Basel) and L. Petralia (University of Oxford) for technical advice on the diode laser design and on the implementation of the wavelength meter lock, respectively. This work is funded by the German Research Council (DFG) under projects DU1804/1-1 and GRK 2079. J. Grzesiak is thankful for additional financial support by the International Graduate Academy (IGA) of the Freiburg Research Services. K. Dulitz acknowledges support by the Chemical Industry Fund (FCI) through a Liebig Fellowship.
\section{Introduction}
\label{intro}
The central role of the agricultural sector is to provide adequate and high-quality food to an increasing human population,
which is expected to grow by more than 30\% by 2050 \citep{UNFood}. Meeting this demand requires a significant increase in food production.
Because of its importance and relevance, agriculture is a major focus of policy agendas worldwide.
Agriculture is considered an important contributor to soil deterioration, water contamination, air pollution and climate change \citep{bruinsma2003world, vu2007survey}.
Intensive agriculture has been linked to excessive accumulation of soil contaminants \citep{teira2003method},
and significant groundwater pollution with nitrates \citep{stoate2009ecological, garnier1998integrated}.
In particular, intensive livestock farming could have severe negative environmental effects \citep{heinrich2014meat}.
Livestock farms produce large amounts of animal manure, which, if not properly managed, can contaminate nearby underground and aboveground water bodies \citep{cheng2007non, infascelli2010environmental, vu2007survey}.
The autonomous community of Catalonia, located at the north-east part of Spain near the borders with France (see Figure \ref{fig:Catalonia}), is facing this challenge, as livestock farming, mainly swine, has
contributed to the pollution of the physical environment of the area during the last decades \citep{Kamilaris2017AgriBigCat}.
The high density of livestock in some areas, linked to insufficient accessible arable land, has resulted in severe groundwater pollution with nitrates \citep{directive1991council}.
Catalonia is one of the European regions with the highest livestock density\footnote{According to the agricultural statistics for 2016, provided by the Ministry of Agriculture, Government of Catalonia.},
with reported numbers of around 7M pigs, 1M cattle and 32M poultry in a geographical area of 32,108 km{$^2$}.
If handled and distributed properly, manure can be applied as organic fertilizer in crop fields that produce different types of fruits and cereals, nuts and vegetables. In this way, the potential contamination of soil and water created by animal manure could be mitigated \citep{he1998preliminary, teira2003method, paudel2009geographic},
while a positive effect on soil acidity and nutrient availability is possible \citep{whalen2000cattle}.
Hence, if animal manure is efficiently exported to nearby or distant crop fields at specific seasons of the year, it can eventually become a valuable resource rather than a waste product \citep{keplinger2006economics, teenstra2014global, oenema2007nutrient}.
To achieve this aim in an optimal manner, the costs of transporting large quantities of manure must be taken into account as a limiting factor in the process of nutrients' transfer from livestock farms to agricultural fields.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{images/catalonia.png}
\caption{Geographical map of Catalonia, Spain.}
\label{fig:Catalonia}
\end{figure}
This paper proposes two methods to solve the issue of transporting manure from livestock farms to crop fields,
to be used as fertilizer in the territory of Catalonia. The first one is a centralized approach, based on an adapted version of Dijkstra's algorithm for finding shortest paths together with origin-destination cost matrices \citep{dijkstra1959note}. The second one is a decentralized approach, inspired by the foraging behaviour of ants: while traversing pheromone trails, ants deposit additional pheromone near food sources, attracting more ants to follow their trajectory and thereby reinforcing, in a synergistic way, paths that are promising in terms of discovering food \citep{bonabeau1999swarm, garnier2007biological, paredes2017milk}. Intuitively, the same mechanism can be applied to discovering crop farms in need of fertilizer, similar to the way it has been applied in the past to solve a milk collection problem \citep{paredes2017milk}.
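The centralized approach builds on Dijkstra's shortest-path algorithm; a minimal sketch is given below. The graph, node names and edge costs are invented for illustration; the actual COA additionally operates on origin-destination cost matrices over the Catalan road network.

```python
import heapq

# Minimal sketch of Dijkstra's shortest-path algorithm, the building block
# of the centralized approach (COA). The toy road network is illustrative.

def dijkstra(graph, source):
    """Return a dict of minimal travel costs from source to every node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy network: a livestock farm, a road junction and two crop fields.
roads = {
    "farm": {"junction": 3.0, "crop_b": 9.0},
    "junction": {"crop_a": 4.0, "crop_b": 2.0},
}
costs = dijkstra(roads, "farm")
```

Repeating this search from every livestock farm yields the origin-destination cost matrix used to assign manure transfers.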
Our contribution in this paper is two-fold: on the one hand, we have solved the problem of transferring animal manure in both centralized and decentralized ways, addressing some limitations of related work (see Section \ref{relWOrk}).
On the other hand, we have proposed and developed a decentralized, nature-inspired technique for a domain (i.e. smart agriculture)
where swarm intelligence methods are still under-exploited, although there is a growing research interest from a computational science perspective \citep{KamilarisSept2018NatComp}.
It is the first attempt to use an ant-inspired algorithm (AIA) for this particular and challenging real-world problem.
The rest of the paper is organized as follows:
Section \ref{relWOrk} describes related work on manure management based on geospatial analysis and on ant-inspired applications in agriculture,
while Section \ref{Methodology} presents our methodology regarding a centralized optimal algorithm (COA), an ant-inspired modelling approach (AIA), as well as a neighbour-based method (NBS). The NBS method constitutes the existing practice used today in an ad hoc, uncoordinated manner in Catalonia \citep{teira1999case, flotats2009manure}.
Section \ref{Results} analyzes the overall findings after applying the proposed methods in the Catalonian context,
and Section \ref{Discussion} discusses the results and comments on the perspectives of this research. Finally, Section \ref{Conclusion} concludes the paper and lists future work.
\section{Related Work}
\label{relWOrk}
Related work involves two main research areas: manure management based on geospatial analysis, facilitated by Geographical Information Systems (GIS) \citep{Kamilaris2018CNNAgri},
as well as applications of ant-inspired techniques in agriculture, facilitated by ant colony optimization (ACO) \citep{dorigo1996ant, dorigo1997ant}. Less relevant work is about network flow solutions applied to other agricultural problems, such as dealing with transportation of live animals to slaughterhouses \citep{oppen2008tabu}, the routing of vehicles for optimized livestock feed distribution \citep{kandiller2017multi} or for biomass transportation \citep{gracia2014application} etc.
Related work in the two main research areas mentioned above is presented below.
\subsection{Transport of Manure for Nutrient Use}
\label{transpManureRelWork}
The idea of transporting surplus manure beyond individual farms for nutrient utilization was proposed in \citep{he1998preliminary},
focusing on animal manure distribution in Michigan.
Teira-Esmatges and Flotats (2003) proposed a methodology to apply manure at a regional and municipal scale in an agronomically correct way,
i.e. by balancing manure distribution to certain crops, based on territorial nitrogen needs and also based on predictions of future needs and availability considering changes in land use.
ValorE \citep{acutis2014valore} is a GIS-based decision support system for livestock manure management,
with a small case study performed at municipality level in the Lombardy region, northern Italy,
indicating the feasibility of manure transfer.
Other researchers proposed approaches to select sites for safe application of animal manure as fertilizer to agricultural land.
Site suitability maps have been created using a GIS-based model in the Netherlands \citep{van1992computer} and in Queensland, Australia \citep{basnet2001selecting}.
Van Lanen and Wopereis (1992) found that 40\% to 60\% of Dutch rural land was suitable for slurry injection.
Basnet et al. (2001) presented a method of selecting sites for the safe application of animal waste as fertiliser to agricultural land, concluding that 16\% of the area under study was suitable for animal manure application.
A minimum cost spatial GIS-based model for the transportation of dairy manure was proposed in \citep{paudel2009geographic}.
The model incorporated land use types, locations of dairy farms and farmlands, road networks, and distances
from each dairy farm to receiving farmlands, to identify dairy manure transportation routes that minimize costs relative to environmental and economic constraints.
Finally, an application of ACO to solve the milk blending problem with collection points, determining where the collection points should be located and which milk producers should be allocated to them for delivery, is described in \citep{paredes2017milk}.
\subsection{Ant-Inspired Techniques in Agriculture}
Not much research has been done on applying ant-inspired techniques in agriculture; only a few approaches applying ACO to agricultural problems have been recorded.
ACO is a probabilistic technique in which artificial ants (i.e. simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. ACO generally works by searching for optimal paths in a graph, based on the behaviour of ants seeking a path between their colony and sources of food.
We note that ACO is different from the ant-inspired technique applied in this paper (see Section \ref{AIA}), due to the fact that the agents/ants in our context need to seek multiple paths, in a probabilistic travelling salesman manner.
Paredes-Belmar et al. (2017) applied ACO to solve the milk blending problem described in the previous section.
Optimal land allocation was investigated in \citep{liu2012multi}, where the ants represented candidate solutions for different types of land use allocation.
Li et al. (2010) used an ACO algorithm for feature selection in a weed recognition problem.
Optimization of field coverage plans for harvesting operations was performed by means of ACO \citep{bakhtiari2013operations}. Finally, ACO was used for
feature selection and classification of hyperspectral remote sensing images \citep{zhou2009feature}, an operation highly relevant to agriculture.
\subsection{Assumptions in Related Work}
The aforementioned related work, presented in Section \ref{transpManureRelWork}, has adopted various assumptions:
\begin{itemize}
\item aggregating geographical areas at county-level \citep{he1998preliminary};
\item selecting generally suitable sites (i.e. crop and pasture areas) to apply animal manure \citep{van1992computer, basnet2001selecting};
\item not considering transportation distances between livestock and crop farms \citep{he1998preliminary, teira2003method};
\item not calculating the particular needs of crop fields in nitrogen that depend on the land area and the type of the crop \citep{basnet2001selecting, paudel2009geographic};
\item not including actual costs involved with the proposed solution \citep{he1998preliminary, paudel2009geographic, teira2003method, basnet2001selecting};
\item not finding a balanced, fair solution that minimizes the average distance that needs to be covered by the livestock farmers (all aforementioned papers);
\item approximating the problem by means of only centralized strategies (all aforementioned papers).
\end{itemize}
\section{Problem Modelling and Methods Description}
\label{Methodology}
The overall goal is to solve the problem of how to find an optimal and economic way to distribute animal manure in order to fulfil agricultural fertilization needs.
The purpose of this section is to describe how the problem was modelled using the area of Catalonia as a case study (Section \ref{problemmodel}) and to explain how the objective function was defined (Section \ref{objFunctionDescription}).
Furthermore, this section presents the methods adopted to solve the problem under study. These methods are the centralized optimal algorithm (COA) (Section \ref{COA}), the ant-inspired algorithm (AIA) (Section \ref{AIA}), as well as a method based on neighbour search (NBS) (Section \ref{NBS}). NBS constitutes the prevalent method currently used in the territory \citep{teira1999case, flotats2009manure}, and it has been implemented for comparison purposes.
\subsection{Problem Modelling}
\label{problemmodel}
To simplify the problem, the geographical area of Catalonia has been divided into a two-dimensional grid, as shown in Figure \ref{fig:CataloniaModel} (left).
In this way, the distances between livestock farms (i.e. the origin grid cell) and crop fields (i.e. the destination grid cell) are easier to compute, using grid-cell Manhattan distance as the metric rather than the actual distance through the existing transportation network. The centre of the crop field is used for calculations. An approximation to real-world distances is attempted in Section \ref{objFunctionDescription}.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{l}
\begin{minipage}{\linewidth}
\begin{minipage}{0.50\linewidth}
\includegraphics[width=\linewidth]
{images/catalonia_grid3.png}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\includegraphics[width=\linewidth]
{images/catalonia_grid4.png}
\end{minipage}
\end{minipage}
\end{tabular}
\end{center}
\vspace{-0.3cm}
\caption{Division of the territory of Catalonia in cells of 1 square kilometre each (left). Snapshot is from the area of Cambrils, Reus and Tarragona. Demonstration of livestock/crop farms at grid cells in a dense agricultural area of the region (right). This is a zoom of the map shown on the left. Snapshot is from the area of Reus. Livestock farms are shown as brown circles, and crop fields as blue polygons. The majority of livestock farms raise pigs.}
\label{fig:CataloniaModel}
\end{figure}
Each crop and livestock farm has been assigned to the grid cell where the farm is physically located, as depicted in Figure \ref{fig:CataloniaModel} (right).
Details about livestock farms (i.e. animal types and census, location etc.) have been provided by the Ministry of Agriculture of Catalonia (Departamento de Agricultura, Ganadería, Pesca y Alimentación, Generalitat de Cataluña) for the year 2016, after signing a confidentiality agreement.
Details about crop fields (i.e. crop type, hectares, irrigation method, location, etc.) have been downloaded from the website of the Ministry\footnote{Ministry of Agriculture of Catalonia. \url{http://agricultura.gencat.cat/ca/serveis/cartografia-sig/aplicatius-tematics-geoinformacio/sigpac/}},
for the year 2015.
For every livestock farm, the yearly amount of manure produced and its equivalent in nitrogen as fertilizer have been
calculated, depending on the type and number of animals on the farm, based on the IPCC guidelines (TIER1) \citep{IPCC2006} and the work in \citep{borhan2012greenhouse}.
Similarly, for every crop field, the yearly needs in nitrogen have been computed, depending on the crop type and total hectares of land,
according to \citep{RuralCatdossier}.
The estimated total nitrogen needs of crop fields (i.e. 81,960 tons of nitrogen) were lower than the availability of nitrogen from animal manure (i.e. 116,746 tons of nitrogen). This surplus of nitrogen is evident in Catalonia and has
contributed to the pollution of the physical environment during the last decades \citep{Kamilaris2017AgriBigCat}.
This means that the produced amount of manure/nitrogen from livestock agriculture has the potential to completely satisfy the total needs of crop farms. This would be particularly important in areas corresponding to the vulnerable zones
defined by the nitrogen EU directive\footnote{The Nitrates Directive of the European Commission. \url{http://ec.europa.eu/environment/water/water-nitrates/index_en.html}}.
Summing up, the total area of Catalonia has been divided into 74,970 grid cells, each representing a $1 \times 1$ square kilometre of physical land.
Every cell has a unique ID and $(x,y)$ coordinates, ranging between $[1,315]$ for the $x$ coordinate and $[1,238]$ for the $y$ coordinate.
For each grid cell, we are aware of the crop and livestock farms located inside that cell, the manure/nitrogen production (i.e. from the livestock farms) and the needs in nitrogen (i.e. of the crop fields). All types of livestock farms and crop fields have been taken into account.
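The grid representation described above can be sketched minimally as follows; the class and field names are hypothetical, chosen only for illustration:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    """One 1 x 1 km grid cell of the Catalonia model (hypothetical names)."""
    x: int                        # column coordinate, 1..315
    y: int                        # row coordinate, 1..238
    nitrogen_supply: float = 0.0  # kg/year produced by livestock farms in the cell
    nitrogen_need: float = 0.0    # kg/year required by crop fields in the cell

def manhattan(a: Cell, b: Cell) -> int:
    """Grid-cell Manhattan distance, the transport metric used in the model."""
    return abs(a.x - b.x) + abs(a.y - b.y)
```

For instance, `manhattan(Cell(1, 1), Cell(4, 5))` yields a distance of 7 grid cells.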
\subsection{Objective Function}
\label{objFunctionDescription}
The problem under study is a single-objective problem, with the overall goal of optimizing the logistics process of satisfying nutrient needs of crops by means of livestock waste. This goal has the following conflicting sub-objectives:
\begin{enumerate}
\item The total nitrogen needs at the crop fields have to be satisfied as much as possible.
\item The total aggregated travel distance covered from the livestock farms to the crop fields, in order to deposit the manure/fertilizer, needs to be as short as possible.
\end{enumerate}
These two sub-objectives can be reformulated as a single one by combining them linearly, assuming the following:
\begin{itemize}
\item The price of fuel in Catalonia, Spain is 1.27 Euro per liter\footnote{GlobalPetrolPrices. \url{http://es.globalpetrolprices.com/Spain/gasoline_prices/} (for May 2019)}.
\item The fuel consumption of manure tankers is 0.203 liters per kilometre (i.e. 20.3 liters per 100 kilometres)\footnote{Natural Resources Canada. \url{http://www.nrcan.gc.ca/energy/efficiency/transportation/cars-light-trucks/buying/16745}}.
\item Based on the price of fuel in Spain, as given above, the transportation cost per kilometre is 0.257 Euro.
\item Based on the local monthly average prices for fertilizers in Catalonia\footnote{Ministry of Agriculture of Catalonia. \url{http://agricultura.gencat.cat/ca/departament/dar_estadistiques_observatoris} (ammonium sulphate in May 2019)},
the value of nitrogen is 0.225 Euro per kilogram.
\end{itemize}
Based on the aforementioned assumptions, the general objective function to be maximized is defined as:
\begin{equation}
\label{combinedObjective}
GO = (NT \times 0.225 \times l) - (TD \times 0.257 \times g)
\end{equation}
where $NT$ is the total nitrogen transferred in kilograms, and $TD$ is the total distance in kilometres
covered to transport manure, from the livestock to the crop farms. The parameter $l$ aims to capture the nutrient losses of manure during its storage time,
i.e. the time when the manure is stored at the livestock farm until it is transferred to the crop field. Depending on animal type and storage method, nutrient losses vary.
We selected a value of $l=0.60$, which is the average percentage of nitrogen remaining available in manure according to the animal census of Catalonia,
at an expected storage time of up to three months as solid or liquid manure \citep{rotz2004management}.
Further, the parameter $g$ is a corrective factor aiming to approximate real-world distances, considering that our calculations are based on Manhattan distances between the livestock and the crop farms. The parameter $g$ weights the calculated Manhattan distance by a factor of $g = 1.30$, a value which has been found to be appropriate for approximating real-world distances in semi-rural landscapes \citep{wenzel2017comparing}.
The objective $GO$ is assumed to be in Euro, as it represents a simplified cost/benefit relationship of the manure transfer problem, i.e. benefit of selling nitrogen to the crop fields and cost of transport needed in order to transfer the nitrogen.
The overall goal is to maximize $GO$, whose value can be translated to gains or losses of each solution of the problem.
$GO$ can take also negative values, which means that some solution would have produced a loss. In this case, the transaction is not executed, since it is not rewarding. For every possible transaction, there is a minimum amount of nitrogen which yields a positive value of the objective function $GO$ (see Table \ref{tab:ParametersAIS}). The simulator compares this minimum amount to the available amount for the transfer and rejects the transfer in case the available content is less than the minimum amount. Thus, for all three methods (COA, AIA and NBS), a transfer is allowed only if the objective $GO$ gives a positive value, based on the current amount of nitrogen and the estimated travel distance, which defines the minimum amount of nitrogen required. Practically, at larger distances, it might not be beneficial to transport manure due to high transportation costs. For example, for a distance of 20 kilometres, there has to be a transfer of at least 51 kilograms of nitrogen for the transfer to be rewarding.
Moreover, there is a hard constraint set by the Ministry of Agriculture, demanding that the maximum distance travelled for manure deposit is $50$ kilometres. The reasoning is that, beyond this distance, the travel time required for the transfer would become significant and would have to be included in the calculations. Finally, the Ministry asked to keep the average travel distance and its standard deviation, from every livestock farm to the crop fields, as small as possible, i.e. to keep the proposed solution \textit{well-balanced and fair} for all livestock farms.
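A minimal sketch of the objective function and the minimum-nitrogen threshold it implies, assuming the constants quoted above (the 51 kg figure quoted for 20 km suggests the original simulator applies a slightly different rounding convention than this sketch):

```python
import math

FUEL_COST_PER_KM = 0.257  # Euro per kilometre travelled
NITROGEN_PRICE = 0.225    # Euro per kilogram of nitrogen
L_LOSS = 0.60             # l: fraction of nitrogen still available after storage
G_DETOUR = 1.30           # g: Manhattan-to-road distance correction factor

def go(nt_kg: float, td_km: float) -> float:
    """Objective GO: value of the transferred nitrogen minus transport cost."""
    return nt_kg * NITROGEN_PRICE * L_LOSS - td_km * FUEL_COST_PER_KM * G_DETOUR

def min_nitrogen(td_km: float) -> int:
    """Smallest whole number of kilograms of nitrogen with GO > 0 at td_km."""
    breakeven = td_km * FUEL_COST_PER_KM * G_DETOUR / (NITROGEN_PRICE * L_LOSS)
    return math.floor(breakeven) + 1
```

Under these constants, `min_nitrogen(20)` evaluates to 50 kg, close to the 51 kg threshold quoted in the text.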
\subsection{Centralized Optimal Algorithm}
\label{COA}
A centralized optimal approach has been developed based on the following algorithm, which generalizes and adapts the well-known Dijkstra's shortest-path algorithm \citep{cherkassky1996shortest, dijkstra1959note}, together with origin-destination (OD) cost matrices, as used in the travelling salesman problem for choosing the best routes \citep{lin1973effective}.
\begin{figure}[ht!]
\centering
\vspace{-0.4cm}
\includegraphics[width=1.0\linewidth]{images/OD_matrix.png}
\caption{Concept of the COA algorithm illustrated.}
\label{fig:COAconcept}
\vspace{-0.1cm}
\end{figure}
Each livestock farm aims to maximize a \textit{local $GO$}, which is the objective function applied only to this farm. In case of conflicts with other livestock farms for common use of resources, the solution that maximizes the \textit{global $GO$}, as defined in Equation \ref{combinedObjective}, wins.
The concept of the algorithm in the context of the problem under study is illustrated in Figure \ref{fig:COAconcept}. Let us assume that the "travelling salesman" is the livestock farm at the red circle. This farm builds its own OD cost matrix, based on the possible values of the local objective function $GO$, applied at each nearby grid cell, up to a Manhattan distance of 50. For reasons of simplicity, Figure \ref{fig:COAconcept} shows the matrix up to a Manhattan distance of 4. We may observe that, generally, grid cells at larger distances have smaller rewards. However, some crop fields located far away might have larger demands in nitrogen, which gives larger values to the local $GO$. It is also possible that crop fields located near competing livestock farms might have reduced demands in nitrogen, as they might have already received nitrogen/fertilizer from these competing farms. After the livestock farm at the red circle builds its OD matrix, it uses Dijkstra's algorithm to find the path that maximizes the local $GO$. In the example of Figure \ref{fig:COAconcept}, this is the path shown by the yellow circles and arrows, which gives a value of $GO=33$. In case of a conflict with another livestock farm (i.e. the two farms share the same grid cell in their paths), the solution maximizing the global objective $GO$ would be considered.
In detail, the algorithm works as follows:
\begin{enumerate}
\item Every livestock farm makes a complete plan, having visibility of the whole grid in regards to where to transfer manure/nitrogen. The most rewarding paths from the source (i.e. initial position) to all other cells in the grid where crop farms are located, up to a maximum distance of 50 kilometres are calculated, producing an origin-destination cost matrix. The cost or reward of every path is calculated based on the objective function $GO$, considering both the actual transportation distances, and the possible transfer of nitrogen.
\item Similar to a travelling salesman problem, the possible routes passing through more than one candidate crop farm (i.e. until the availability of manure is exhausted or the hard constraint of 50 kilometres is reached) are added to the origin-destination cost matrix. The goal is to maximize the local $GO$, as it applies to the current livestock farm. The selected travel plan involves all the cells that must be visited, starting from the nearest one, which has the highest local $GO$.
\item If a conflict appears between the selected travel plans of two livestock farms (i.e. at cell $(x, y)$, where some crop farm is located), the livestock farm involved in the solution that maximizes the global $GO$ wins the conflict. Naturally, if the need for manure/nitrogen at this cell $(x, y)$ is higher than the combined availability of nitrogen of the two livestock farms, then no conflict occurs.
\item If the conflict still exists, the livestock farm which has failed in the conflict needs to recalculate a plan that maximizes its local $GO$, this time without considering the cell $(x, y)$ or considering only the remaining need of manure/nitrogen at the crop farm(s) at this cell (i.e. assuming that the livestock farm winning the conflict will deposit its nitrogen there).
\item Steps 1-4 continue iteratively till there is a global consensus, i.e. no livestock farm can find a better plan to transfer its manure. At the time of a consensus, both the global $GO$ and the individual objective functions for each livestock farm (local $GO$s) have been maximized and cannot be further improved. Any more efforts for conflict resolution do not yield a higher global $GO$.
\end{enumerate}
Summing up, the COA solves the problem using the classic Dijkstra's algorithm \citep{dijkstra1959note}, considering a shortest-path problem on an undirected, non-negative, weighted graph. To use the algorithm within the context of the problem under study, it has been modified to respect the necessary configurations and constraints, i.e. by modelling the weights of the graph to represent both transport distances and the crop farms' nitrogen needs, combined using the linear function $GO$. All combinations of visits to nearby farms within 50 kilometres are added to an origin-destination cost matrix, from which the most profitable route in terms of maximizing $GO$ is selected. In contrast to the typical travelling salesman problem, here the possible stop locations vary depending on which combinations of candidate crop farms maximize $GO$.
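As an illustration only, the route-selection idea can be sketched with a simplified greedy variant that assumes one trip per delivery from the farm, omitting the travelling-salesman chaining of stops and the inter-farm conflict resolution of the full COA; `best_plan` and its candidate format are hypothetical:

```python
def local_go(nt_kg: float, td_km: float) -> float:
    # Equation (1) with l = 0.60 and g = 1.30 folded into the constants
    return nt_kg * 0.225 * 0.60 - td_km * 0.257 * 1.30

def best_plan(supply_kg, candidates, max_dist=50):
    """Serve candidate crop cells in order of decreasing reward.
    candidates: list of (distance_km, need_kg) tuples for one livestock farm.
    Returns the total local GO achieved and a list of (distance, kg) moves."""
    plan, left, total = [], supply_kg, 0.0
    ranked = sorted(candidates,
                    key=lambda c: -local_go(min(supply_kg, c[1]), c[0]))
    for dist, need in ranked:
        if left <= 0 or dist > max_dist:   # Ministry hard constraint: 50 km
            continue
        transfer = min(left, need)
        gain = local_go(transfer, dist)
        if gain > 0:                       # only rewarding transfers occur
            plan.append((dist, transfer))
            total += gain
            left -= transfer
    return total, plan
```

For example, with 1,000 kg of supply and candidate fields at 5, 30 and 60 km, the 60 km field is skipped by the hard constraint and the remaining two are served.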
\subsection{Ant-Inspired Algorithm}
\label{AIA}
In general, the synergistic pheromone laying behaviour of ants when discovering food sources
is used as a form of indirect communication, in order to influence the movement of other ants \citep{bonabeau1999swarm, garnier2007biological}.
Pheromone laying was modelled (among others) in the Ant System \citep{dorigo1996ant, dorigo1997ant}, a probabilistic population technique
for combinatorial optimization problems where the search space can be represented by a graph.
The technique exploits the behaviour of ants following links on the graph, constructing paths between their colony and sources of food, to incrementally discover optimal paths, which would form the solution.
In the particular context of the manure transport problem, the foraging behaviour of ants has been adapted to the problem under study. Each ant (i.e. livestock farm) selects its next position from its current grid position successively
and pseudo-randomly, where the probability of next move depends on the pheromone amounts at the neighbouring grid cells.
At each iteration of the algorithm, each ant is allowed to move at a Manhattan distance of maximum one neighbouring grid cell.
Each ant examines the nitrogen needs of the crop fields in its neighbourhood,
and drops pheromone at its current grid cell, proportional to the local needs in nitrogen, in order to inform other ants of the demand for manure at nearby crop fields.
In detail, the modelling of the problem according to ant foraging is as follows:
\begin{enumerate}
\item Every livestock farm simulates an ant.
\item Every crop field is considered as a potential source of food, analogous to its needs in nitrogen. At the beginning, the pheromone amount at each grid cell is initialized proportionally to the initial needs in nitrogen by the crop fields physically located inside the grid cell.
\item Pheromone at each grid cell is updated by pheromone deposits. Ants perform local pheromone updates to the grid cell where they are currently located while moving around, proportional to the amount of food available (i.e. nitrogen needs) in their grid-based neighbourhood of Manhattan distance (i.e. radius) $n$. The pheromone value at each grid cell increases when one or more ants reside at the cell at some point, depositing pheromone, but also evaporates with time.
\item Each ant chooses the next link of its path based on information provided by other ants, in the form of pheromone deposits at every grid cell.
\item Whenever an ant discovers a crop field with nitrogen needs at its current position (i.e. some grid cell), a transfer of nitrogen is performed from the livestock farm represented by the ant, to the crop field located at that grid cell. In this case, the need for nitrogen at that particular grid cell is reduced accordingly. The manure transaction is recorded by the system as part of the final solution.
\item If the ant still carries some manure/nitrogen, then it continues to move in the grid up to a maximum Manhattan cell-distance of $m=50$ km from its initial position.
\item Steps 3-6 continue iteratively till there is a global consensus, i.e. no livestock farm can find a better plan to transfer its manure. At the time of a consensus, the objective function $GO$ has been maximized and cannot be further improved.
\end{enumerate}
The amount of pheromone laid by each ant is calculated based on the amount of existing nitrogen needs at each neighbouring cell within radius $n$. The biological interpretation of $n$ is that it is the distance over which some ant can \textit{sniff} pheromone content released by other ants.
The Manhattan distance calculated is used to penalize neighbours at larger distances, reducing their \textit{contribution} to the pheromone deposits.
The amount of pheromone $\tau_{xy}$, laid by each ant located at grid cell $(x,y)$ at every iteration $t$ of the algorithm, is calculated using:
\begin{equation}
\label{pheromonecreation}
\tau_{xy}(t) \, = \, \tau_{xy}(t-1) + \sum_{i=x-n}^{x+n} \sum_{j=y-n}^{y+n} NN_{ij} \times \frac{1}{ d_{ijxy}}
\end{equation}
where $\tau_{xy}(t-1)$ is the previous concentration of pheromone at grid cell $(x,y)$,
$NN_{ij}$ represents the food (i.e. needs in nitrogen of the crop field in kilograms) located at grid cell $(i,j)$,
and $d_{ijxy}$ is the Manhattan distance between the ant (i.e. livestock farm) and the food (i.e. crop field).
The parameter $n$ defines which neighbours at the grid structure would be involved in the calculations of pheromone (i.e. neighbours up to $n$-cell distance).
The probability $p_{kl}$ of an ant to move from grid cell $(x,y)$ to $(k,l)$, is calculated as:
\begin{equation}
\label{antmove}
p_{kl} \, = \, \frac{\tau_{kl}} {\sum_{i=x-1}^{x+1}\sum_{j=y-1}^{y+1} \tau_{ij} }
\end{equation}
Note that paths with a higher pheromone concentration have higher probability of selection.
At each iteration $t$ of the algorithm, the pheromone concentration $\tau_{xy}(t)$ at every grid cell $(x,y)$ decays/evaporates to promote exploration:
\begin{equation}
\label{pheromoneevap}
\tau_{xy}(t) \, = \, (1-\varrho) \times \tau_{xy}(t-1)
\end{equation}
where $\varrho$ is the percentage of \textit{pheromone evaporation}.
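The three update rules above can be sketched in plain Python as follows; the $d=0$ term (the ant's own cell) is skipped, since $1/d$ is undefined there, which we assume is the intended reading of Equation \ref{pheromonecreation}:

```python
def deposit(tau, needs, ants, n):
    """Deposit rule: each ant at (x, y) adds the nitrogen needs of cells
    within Manhattan radius n, weighted by 1/distance, to tau[x][y]."""
    rows, cols = len(tau), len(tau[0])
    for x, y in ants:
        for i in range(max(0, x - n), min(rows, x + n + 1)):
            for j in range(max(0, y - n), min(cols, y + n + 1)):
                d = abs(i - x) + abs(j - y)
                if 0 < d <= n:
                    tau[x][y] += needs[i][j] / d

def move_probs(tau, x, y):
    """Move rule: probability of stepping to each cell of the 3x3 window
    around (x, y), proportional to its pheromone concentration."""
    window = [(i, j) for i in (x - 1, x, x + 1) for j in (y - 1, y, y + 1)]
    total = sum(tau[i][j] for i, j in window)
    if total == 0:
        return {c: 1 / 9 for c in window}  # uniform if no pheromone yet
    return {(i, j): tau[i][j] / total for i, j in window}

def evaporate(tau, rho=0.85):
    """Evaporation rule: pheromone decays by a factor (1 - rho) per iteration."""
    for row in tau:
        for j in range(len(row)):
            row[j] *= (1 - rho)
```

With the selected $\varrho=0.85$, 85\% of the pheromone at every cell evaporates at each iteration, which promotes exploration of the grid.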
\subsection{Neighbour-Based Search}
\label{NBS}
For comparison purposes, the method currently used in the Catalonian context was implemented \citep{teira1999case, flotats2009manure}.
What happens today is that each livestock farmer acts selfishly, trying to find the most appropriate crop field(s) based on the objective $GO$ (see Section \ref{objFunctionDescription}) to deposit the produced animal manure.
In our implementation, we refer to this method as neighbour-based search (NBS). In reality, the outcome is not optimal, because some farmers might not make the most rational choice. However, we have implemented the NBS method assuming the best possible outcome, as if all farmers made the optimal choice.
The NBS method is described as follows:
\begin{enumerate}
\item First, for some cell $(x, y)$, try to transfer nitrogen from the livestock farm to the crop fields located at this same cell (i.e. Manhattan distance 0). Do this for all the livestock farms/grid cells.
\item Then, if availability of nitrogen still exists, try to transfer nitrogen from the cell $(x, y)$ to the crop fields located at the nearby grid cells $[x \pm 1, y \pm 1]$ (i.e. Manhattan distance 1). Perform this 1-distance calculation for all the livestock farms.
\item If the livestock farm cannot find suitable crop farms in the neighbouring cells of Manhattan distance 1, then continue this procedure for grid cells located at increasing distance $k$ each time from cell $(x, y)$.
At each step $k$, do this $k$-distance calculation for all livestock farms, before moving to a distance $k+1$ (for reasons of fairness).
\item If some suitable crop farm has been found at distance $k$, then perform the transfer of nitrogen, setting the new position of the livestock farm as the one at the grid cell of distance $k$, where the transfer happened. Then, move to Step 2.
\item If no suitable crop farm has been found at distance $k$, then Steps 3-4 are repeated until either a suitable crop farm has been found at a larger distance, the availability of nitrogen is completely exhausted, or the maximum distance of $m=50$ grid cells has been reached.
\item Steps 2-5 are repeated for all livestock farms.
\end{enumerate}
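A minimal sketch of one farm's NBS ring search, under the simplifying assumption that the farm's position stays fixed at its origin (the full method in Step 4 updates the position after each transfer); the function name and data layout are hypothetical:

```python
def nbs_transfer(supply_kg, needs, x, y, max_dist=50):
    """One livestock farm's NBS pass: scan rings of increasing Manhattan
    distance k around (x, y) and deposit nitrogen at the first crop cells
    found, until the supply is exhausted or max_dist is reached.
    needs: dict mapping (x, y) -> remaining nitrogen need in kg (mutated)."""
    transfers = []
    for k in range(0, max_dist + 1):
        ring = [c for c in needs
                if abs(c[0] - x) + abs(c[1] - y) == k and needs[c] > 0]
        for cell in ring:
            if supply_kg <= 0:
                return transfers
            amount = min(supply_kg, needs[cell])
            needs[cell] -= amount
            supply_kg -= amount
            transfers.append((cell, k, amount))
    return transfers
```

In a full simulation this pass would be interleaved across all livestock farms, one ring distance at a time, to preserve the fairness described in Step 3.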
\section{Empirical Analysis}
\label{Results}
This section first explains the reasoning towards the tuning of the control parameters of the AIA. Then, it presents and compares the findings obtained by solving the problem of manure transport optimization, using the three methods described in Sections \ref{COA}, \ref{AIA} and \ref{NBS}.
\subsection{AIA Control Parameter Tuning}
\label{AISparameterSetting}
The ant-inspired algorithm introduces the control parameters $n$ and $\varrho$.
Additionally, two more parameters involved in our model are the \textit{maximum cell-distance} $m$ and the \textit{maximum number of iterations}. The former refers to the maximum Manhattan distance
between livestock and crop farms, where nitrogen transfer could be allowed, while the latter defines the maximum number of iterations until the algorithm stops.
The algorithm could stop earlier if no more transfers occur, i.e. all needs are satisfied or no more manure is available.
All parameters involved in the AIA algorithm are listed in Table \ref{tab:ParametersAIS}.
\begin{table}[ht!]
\caption{Control parameters for the AIA algorithm.}
\label{tab:ParametersAIS}
\begin{tabular}{| p{2.6cm} | p{5.5cm} | p{2.6cm} |}
\hline\noalign{\smallskip}
\bf{Parameter Name} & \bf{Description} & \bf{Value(s)} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Pheromone evaporation, $\varrho$ & The decay of pheromone deposited by the ants, at each iteration of the algorithm. & 0-100\% \\
\hline
Neighbourhood radius, $n$ & The maximum Manhattan distance, at which neighbouring cells will contribute in calculating pheromone that would be released by the ant. All the cells up to a cell distance $n$ participate in the calculations. & 1-50 grid cells (values up to 65 have been allowed only for testing purposes)\\
\hline
Minimum nitrogen & The minimum amount of nitrogen in kilograms for a transfer to occur, yielding a positive value of the objective $GO$. & 1-150 kilograms, depending on the Manhattan distance between farms. \\
\hline
Maximum cell-distance, $m$ & The maximum Manhattan distance over which transport of animal manure/nitrogen is allowed. & 50 grid cells (values up to 60 have been allowed only for testing purposes)\\
\hline
Maximum iterations & The maximum number of iterations of the AIA algorithm. & 3,000 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
From the parameters listed in Table \ref{tab:ParametersAIS}, the ones whose values need to be defined are the neighbourhood radius $n$ and the pheromone evaporation coefficient $\varrho$. The former takes values in the range $[0,65]$, ignoring here, for reasons of comparison, the hard constraint of 50 kilometres, while the latter takes values in the range $[0,100]$.
Figure \ref{fig:paramsAIA} depicts the different values of the objective $GO$, at different values of distance $n$ and percentages of $\varrho$. Note that, because the AIA algorithm is stochastic, the results presented below have been averaged over 10 independent runs of the algorithm, with different value pairs of control parameters. The maximum value was recorded for each value pair. Differences between experiments with the same value pairs were very small.
Based on the results presented in Figure \ref{fig:paramsAIA}, a value of pheromone evaporation $\varrho=85\%$
and a neighbourhood radius $n=50$ cells-distance
were selected. These parameter values provided a value of $GO = 6,718.069$. We note that values of $n$ larger than the hard constraint of 50 kilometres did not improve $GO$, and have been included for comparisons. We also note that values of $\varrho \in [85,95]$ and $n \in [50,65]$ resulted in very small differences in the $GO$ value.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{images/heatmap.png}
\caption{Impact of pheromone evaporation $\varrho$ and neighbourhood radius $n$ on the objective $GO$.}
\label{fig:paramsAIA}
\end{figure}
\subsection{Comparison of COA, AIA and NBS }
Figure \ref{fig:nitroexchanged} illustrates the total nitrogen transported from livestock to crop farms, for different grid cell Manhattan distances.
COA performs slightly better than AIA, managing to achieve a transfer of 55.3 K-tons of nitrogen (47.4\% of total availability), in comparison to 51.1 K-tons (43.8\% of total availability) for the AIA. NBS transfers less nitrogen than both COA and AIA (47.8 K-tons, 40.9\% of total availability). Hence, in terms of nitrogen transfer, the COA algorithm is 1.08 times more efficient than the AIA algorithm. At the same time, the AIA is 1.07 times more efficient than the NBS.
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\linewidth]{images/nitrogen.png}
\caption{A comparison between COA, AIA and NBS for the total nitrogen (in kilos) transferred from livestock to crop farms at different Manhattan distances.}
\label{fig:nitroexchanged}
\vspace{-0.3cm}
\end{figure}
For all three approaches, most of the nitrogen transfer happens up to a Manhattan distance of 20 grid cells, after which nitrogen transfer becomes quite low. COA and AIA transfer larger quantities at lower Manhattan distances (i.e. up to 30 grid cells), in comparison to NBS.
Figure \ref{fig:transportdistance} presents the transportation distance covered between livestock and crop farms for every successful transfer of nitrogen, i.e. the Manhattan distance recorded for each transfer that took place,
for all three algorithms. NBS is the least efficient, with a linear increase of transportation distance at larger distances between livestock and crop farms. COA requires 27\% less distance to be covered than AIA, while AIA needs
57\% less distance than NBS. Thus, AIA outperforms NBS, while COA is more efficient than AIA.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\linewidth]{images/transport.png}
\caption{Total transportation distance covered between livestock and crop farms, using COA, AIA and NBS at different Manhattan distances.}
\label{fig:transportdistance}
\vspace{-0.1cm}
\end{figure}
The total transactions of animal manure performed at different Manhattan distances are presented in Figure \ref{fig:transactions}. The graph reads as follows: a value of $x$ transactions at Manhattan distance $y$ means that $x$ transactions occurred during the simulation between livestock farms and crop fields located at a Manhattan distance of $y$ from each other. COA is the most efficient approach, performing fewer transactions while transferring more manure. AIA performs more transactions than COA at almost all Manhattan distances, especially 3-8, 27-37 and 41-50, but is still much more efficient than NBS. Due to the selfish and competitive behaviour of the livestock farmers in the NBS case, there are numerous transactions of smaller amounts of animal manure, which cause transactions to increase with distance, especially up to a Manhattan distance of 23.
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\linewidth]{images/transactions.png}
\caption{Total transactions of animal manure between livestock and crop farms, using COA, AIA and NBS at different Manhattan distances.}
\label{fig:transactions}
\vspace{-0.1cm}
\end{figure}
A counter-productive example of the operation of NBS is illustrated in Figure \ref{fig:NBSexample}.
In this example, a livestock farmer physically located at position (1)
moves east to transfer some manure to position (2), where a crop field is located, expecting to deposit the rest of the available manure at position (3).
However, at the next iteration of the algorithm, the need for nitrogen at position (3) has already been satisfied by a rival livestock farmer. Thus, the farmer has to move west of the farm's initial position at the next step of the algorithm (i.e. to position (4)) in order to deposit the remaining manure/nitrogen. This behaviour increases the overall transportation distance that needs to be covered by the farmer, as indicated in Figure \ref{fig:transportdistance}. The probability of such scenarios is small for the AIA, because the pheromones coordinate the movement of ants across the Catalonian grid in a more balanced way. This probability is zero for COA, because the livestock farms select their strategy a priori, having complete information of the grid, i.e. subject to the distance constraint of 50 kilometres.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.7\linewidth]{images/NBS_bhaviour.png}
\caption{An example of counter-productive behaviour of the NBS method.}
\label{fig:NBSexample}
\vspace{-0.1cm}
\end{figure}
Table \ref{tab:summary} summarizes the results of the experiments, including the calculations of the objective $GO$. $GO$ shows that the AIA method is 1.115 times more profitable than NBS, but 8.5\% less efficient than COA.
The last two rows of the table give the average total Manhattan distance that needs to be travelled by each livestock farmer in order to perform transfer(s) of animal manure, together with its standard deviation. This average distance is 62 for COA (std. deviation 32), 57 for AIA (std. deviation 25) and 112 for NBS (std. deviation 78). This relates to the requirement stated in Section \ref{objFunctionDescription}, i.e. that the proposed solution must be well-balanced and fair for all livestock farms. The results show that the AIA method is the most well-balanced in terms of transport distance travelled, followed by COA.
\begin{table}[ht!]
\caption{Summarized values of the experiments performed using COA, AIA and NBS.}
\label{tab:summary}
\centering
\begin{tabular}{| p{5.5cm} | c | c | c | }
\hline\noalign{\smallskip}
\bf{Objective} & \bf{COA} & \bf{AIA} & \bf{NBS}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
Nitrogen transferred (K-tons) & 55.385 & 51.124 & 47.786 \\
\hline
Transportation (Manhattan distance) & 402,379 & 549,829 & 1,276,371 \\
\hline
Objective $GO$ (Euro) & 7,342.535 & 6,718.069 & 6,024.735 \\
\hline
Average transportation distance of each livestock farm (Manhattan distance) & 62 & 57 & 112 \\
\hline
Standard deviation of the average transportation distance of each livestock farm (Manhattan distance) & 32 & 25 & 78 \\
\hline
Running time (minutes) & 34 & 38 & 31 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Discussion}
\label{Discussion}
The results indicate that COA is the most efficient solution, outperforming AIA by 8.5\% with respect to the linear objective function $GO$.
This makes sense, because COA has complete information of the problem and computes an optimal solution. However, AIA can be employed to solve the animal manure transport problem in a slightly fairer manner, in terms of balanced transportation distances covered by the livestock farmers. Both COA and AIA solve the problem by effectively reducing the overall transportation distance between livestock farms and crop fields, while keeping nitrogen transfer at high percentages.
The problem solved by COA belongs to the class of network flow problems, which can be approximated by integer linear programming (ILP). COA runs on a simulator developed by the authors, using an adapted generalization of Dijkstra's algorithm for shortest paths, together with origin-destination cost matrices for choosing optimal paths, as used in the travelling salesman problem. We decided to develop a simulator from scratch because the scale, conditions, objectives and constraints of the problem under study made the use of popular ILP solvers (e.g. CPLEX, GLPK, Gurobi) difficult. The fact that more constraints are expected to be added in the future (see future work in Section \ref{FutureWork} below) also influenced this decision, for reasons of flexibility and freedom in future work.
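The origin-destination matrix idea can be illustrated with a rough sketch (not the authors' actual simulator): one Dijkstra pass per livestock farm over a 4-connected grid with unit edge costs (cells of 1 km), truncated at the 50-kilometre hard constraint, yields one row of the matrix:

```python
import heapq

def od_matrix(grid_w, grid_h, sources, targets, max_cost=50):
    """Origin-destination cost matrix on a 4-connected unit-cost grid.

    A sketch only: one truncated Dijkstra pass per source, mirroring
    the 50-kilometre hard constraint of the problem.
    """
    def dijkstra(src):
        dist = {src: 0}
        heap = [(0, src)]
        while heap:
            d, (x, y) = heapq.heappop(heap)
            if d > dist[(x, y)] or d >= max_cost:
                continue  # stale entry, or frontier beyond the cut-off
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < grid_w and 0 <= ny < grid_h:
                    nd = d + 1
                    if nd < dist.get((nx, ny), float("inf")):
                        dist[(nx, ny)] = nd
                        heapq.heappush(heap, (nd, (nx, ny)))
        return dist

    matrix = {}
    for s in sources:
        dist = dijkstra(s)
        matrix[s] = {t: dist.get(t) for t in targets}  # None if out of reach
    return matrix
```

On a plain grid this reduces to Manhattan distance; the Dijkstra formulation matters once edge costs or obstacles differ per cell.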
The last row of Table \ref{tab:summary} shows the running time of each algorithm in minutes, on a laptop machine (2.8 GHz Intel Core i7, 6 GB 2133 MHz LPDDR3 RAM). All three algorithms have similar running times, with AIA being the slowest (38 minutes) due to the continuous movement of the ants in the Catalonian virtual grid, until they find a solution or the constraint of 50 kilometres has been reached. COA also has a considerable running time (34 minutes), because each livestock farm needs to calculate shortest paths to all nearby farms within a radius of 50 kilometres, as well as an origin-destination cost matrix with all possible options. This matrix needs to be created only once, unless conflicts appear (see Section \ref{COA}), in which case some re-calculations need to take place for the livestock farm that has lost the conflict. Because few conflicts appeared (fewer than 400), COA was not particularly
computationally intensive in the context of the Catalonian area.
The findings indicated that a cut-off Manhattan distance of $50$ was the most appropriate one for the case of Catalonia.
This cut-off distance is larger than the 30-kilometre cut-off distance selected by \citet{basnet2001selecting} for dairy manure application in Louisiana, USA. A reason for this could be differences in the concentration and topology of the farming industry in the two areas.
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\linewidth]{images/map_result.png}
\caption{The map of Catalonia after the COA has been applied, showing remaining needs in manure (orange color) and remaining availability of manure (green color). The color intensity indicates different needs or availability of manure. For example, darker colours of green and orange correspond to larger availability or needs of manure at some farm. Please note that this map depicts only manure availability and needs of farms after the application of COA. This means that livestock farms whose manure availability is zero and/or crop farms whose needs in manure as fertilizer are zero, do not appear on the map.}
\label{fig:AIAappliedCat}
\vspace{-0.1cm}
\end{figure}
Figure \ref{fig:AIAappliedCat} illustrates how the application of COA in the area of Catalonia affects availability (green colour) and needs (orange colour) of manure/nitrogen. We can observe that the algorithm creates separate regions of green- and orange-coloured spots (i.e. livestock and crop farms respectively). The distance between spots of different colour is either larger than 50 kilometres, or there is not enough manure available for a transaction to be gainful, i.e. to give positive values to the $GO$ function. Note that darker shades of green and orange correspond to larger availability and needs of manure respectively. Figure \ref{fig:AIAappliedCat} is another indication that COA solves the problem effectively. A very similar map was produced for the AIA case (although AIA was 8.5\% less efficient).
As mentioned before, AIA constitutes an important contribution of this paper, due to its decentralized nature.
AIA has potential as an efficient optimization tool in similar problems of a distributed, geospatial nature, and it could well support a dynamic, real-world scenario where supplies in manure and demand in manure/nitrogen could change continuously.
This scenario could be feasible provided that the livestock and crop farmers would be willing to share
information about their animals, manure and crops. In this case, the AIA algorithm would need to be re-designed with faster pheromone evaporation. This is a subject of future work.
Moreover, we note that this study constitutes only a demonstration that COA and AIA could be employed for addressing this important problem.
A complete Life-Cycle Analysis (LCA) \citep{curran2008life}, together with Life-Cycle Costing (LCC) \citep{swarr2011environmental}, would provide a more comprehensive coverage of the problem.
For example,
the profits gained by the algorithms, as summarized in Table \ref{tab:summary}, would be re-considered, taking into account the extra costs needed to maintain the vehicles used for the transfers, i.e. to compensate for the extra kilometres,
as well as the extra time spent by the livestock farmers or the personnel in charge of realizing the transfers of animal manure. For NBS in particular, whose transport needs are more than triple those of COA and more than double those of AIA, this extra cost would be high under a complete LCA.
LCA/LCC could also focus on environmental parameters, incorporating actual costs and comparisons with alternatives. Transporting large volumes of manure has environmental consequences of its own, which are not examined in this paper.
Through this study, we observed that there are considerable differences between larger and smaller livestock farms in terms of the production of animal manure and their overall environmental impact. It would be interesting to compare or enhance our simulator with a hybrid approach/scenario, where larger farms employ local or neighbouring manure processing units and smaller ones participate in this animal manure transfer scheme.
Finally, it is important to note that most countries around the world have national policies related to manure management \citep{teenstra2014global}. However, these policies have inconsistencies or are not well regulated in many countries, especially developing ones \citep{vu2007survey}. Achieving reductions of GHG emissions, meeting renewable energy targets and lowering energy costs at farm level are key drivers of manure-related policies, which differ between countries in terms of storage, treatment, digestion, discharge and application \citep{oenema2007nutrient}. A general observation is that manure is not optimally used by farmers around the world, especially in developing countries \citep{teenstra2014global, vu2007survey, oenema2007nutrient}. Our work aims to contribute to the efforts towards an effective solution to the problem, via the application of manure as fertilizer to crop farms, giving insights into the implications of the problem and of its potential solutions.
\subsection{Assumptions and Limitations}
The work in this paper has addressed all the assumptions made in related work (see Section \ref{relWOrk}), providing a more detailed and complete treatment. Moreover, the AIA solution is completely decentralized, and could be extended to a dynamic scenario (i.e. future work).
However, both the related work and this paper made some additional simplifying assumptions, not taking into account the following:
\begin{itemize}
\item Variation in availability of manure in different periods of the year.
\item Possibility of larger quantity of manure than the vehicle's capacity to carry, where multiple routes would be needed for the transfer.
\item Varying crop demands in manure at different seasons.
\item Aspects of vehicles' purchase, maintenance and depreciation costs, labour costs, etc.; the objective function optimized is a simplified one, based on a general estimation of nitrogen value and transport cost (i.e. cost of fuel).
\item The possibility that manure could undergo some \textit{concentration treatment} (e.g. dry cleaning) \citep{teira2003method} in order to reduce the volume transported.
\item Phosphorus, another fundamental crop nutrient present in manure.
\end{itemize}
An additional important assumption was the modelling via grid cells and Manhattan distances instead of actual, real-world distances. This assumption was adopted due to the overall computational complexity of the problem. We tried to mitigate this issue by approximating real-world distances using the corrective factor $g$ in the objective function (see Section \ref{objFunctionDescription}), but this is only a simplified approximation. Factors such as faster vs. shorter routes, the quality of the roads, obstacles such as mountains and city centres requiring additional kilometres of travel, the slope of each route, traffic in rush hours, speed limits on different roads, constraints on the routes that trucks are allowed to take, etc., have not been taken into account. Transportation distance also translates into time costs, which have not been considered either. These, together with the assumptions mentioned before, are important aspects of future work, which is discussed below.
\subsection{Future Work}
\label{FutureWork}
Future work will continue to explore the application of the COA and the AIA to this problem, addressing the assumptions made in this paper.
More realistic transportation distances and travel times among farms for manure transport will be considered, as well as dynamic changes in the production of and need for nitrogen.
This will include the possibility of various routes during the year to transfer manure, calculating more precisely the seasonal effect on the nitrogen content available in the manure, which decreases over time.
The seasonal differences between various crops will also be studied, as these might make some crop fields unavailable for manure application at certain periods of the year.
Moreover, the costs of the trucks involved in the transport (i.e. purchase, maintenance, depreciation, etc.) will be included in the objective function, although this is a complicated topic due to the possible subsidies that might be provided by the government in order to implement such a manure transport scheme.
Finally, we plan to investigate the use of local or neighbouring manure processing units in selected livestock farms. The complete environmental consequences of the problem under study would be considered too, including the pollution produced by the transportation of manure between farms.
\section{Conclusion}
\label{Conclusion}
This paper addressed the problem of the surplus of animal manure from livestock agriculture, which creates important environmental problems. The paper investigated and suggested a sustainable approach based on nutrient redistribution, where manure was transported as fertilizer from livestock farms to crop fields. Two approaches have been developed: a centralized approach (COA) based on an adapted version of Dijkstra's algorithm for finding shortest paths; as well as a
decentralized one inspired by ant foraging behaviour (AIA). AIA addressed the problem by modelling livestock farms as ants and crop fields as sources of food for the ants.
A comparison between the (centralized) COA approach and the cooperative, decentralized AIA algorithm showed that COA was 8.5\% more efficient, based on a single-objective function. Both COA and AIA significantly outperformed an (individualist) Neighbour-Based Search (NBS) approach, which resembles the existing practice used today for transport of manure in the region of Catalonia, Spain. The AIA approach was fairer for the farmers and more balanced in terms of the average transportation distance that needs to be covered by each livestock farmer to transport manure.
Our work constitutes a new application of ant-inspired algorithms to an interesting real-world problem, in a domain where swarm intelligence methods are still under-exploited.
\begin{acknowledgements}
Special thanks to Mr. Jaume Boixadera Llobet and Mr. Mario Carrillo Salagre from the Ministry of Agriculture, Government of Catalonia.
Their feedback, help and advice has been very important in terms of understanding the problem of livestock agriculture in Catalonia and seeking together ways to reduce it.
This research has been supported by the P-SPHERE project, which has received
funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 665919,
and also by the CERCA Programme/Generalitat de Catalunya.
Andreas Kamilaris has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 739578 complemented by the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development.
Francesc X. Prenafeta-Bold{\'u} belongs to the Consolidated Research Group TERRA (2017 SGR 1290), funded by the Generalitat de Catalunya.
\end{acknowledgements}
\bibliographystyle{plainnat}
\section{Introduction}
\label{intro}
The central role of the agricultural sector is to provide adequate and high-quality food to an increasing human population,
which is expected to grow by more than 30\% by 2050 \citep{UNFood}. This means that a significant increase in food production must be achieved.
Because of its importance and relevance, agriculture is a major focus of policy agendas worldwide.
Agriculture is considered an important contributor to the deterioration of soil, water contamination, as well as air pollution and climate change \citep{bruinsma2003world, vu2007survey}.
Intensive agriculture has been linked to excessive accumulation of soil contaminants \citep{teira2003method},
and significant groundwater pollution with nitrates \citep{stoate2009ecological, garnier1998integrated}.
In particular, intensive livestock farming could have severe negative environmental effects \citep{heinrich2014meat}.
Livestock farms produce large amounts of animal manure, which, if not properly managed, can contaminate nearby underground and aboveground water bodies \citep{cheng2007non, infascelli2010environmental, vu2007survey}.
The autonomous community of Catalonia, located in the north-east of Spain near the border with France (see Figure \ref{fig:Catalonia}), is facing this challenge, as livestock farming, mainly swine, has
contributed to the pollution of the physical environment of the area during the last decades \citep{Kamilaris2017AgriBigCat}.
The high density of livestock in some areas, linked to insufficient accessible arable land, has resulted in severe groundwater pollution with nitrates \citep{directive1991council}.
Catalonia is one of the European regions with the highest livestock density\footnote{According to the agricultural statistics for 2016, provided by the Ministry of Agriculture, Government of Catalonia.},
with reported numbers of around 7M pigs, 1M cattle and 32M poultry in a geographical area of 32,108 km{$^2$}.
If handled and distributed properly, manure can be applied as organic fertilizer in crop fields that produce different types of fruits and cereals, nuts and vegetables. In this way, the potential contamination of soil and water created by animal manure could be mitigated \citep{he1998preliminary, teira2003method, paudel2009geographic},
while a positive effect on soil acidity and nutrient availability is possible \citep{whalen2000cattle}.
Hence, if the animal manure is efficiently exported at specific seasons of the year to nearby or distant crop fields, manure can eventually become a valuable resource rather than waste \citep{keplinger2006economics, teenstra2014global, oenema2007nutrient}.
To achieve this aim in an optimal manner, the costs of transporting large quantities of manure must be taken into account as a limiting factor in the process of nutrients' transfer from livestock farms to agricultural fields.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{images/catalonia.png}
\caption{Geographical map of Catalonia, Spain.}
\label{fig:Catalonia}
\end{figure}
This paper proposes two methods to solve the issue of transporting manure from livestock farms to crop fields,
to be used as fertilizer in the territory of Catalonia. The first one is a centralized approach, based on an adapted version of Dijkstra's algorithm for finding shortest paths, together with origin-destination cost matrices \citep{dijkstra1959note}. The second one is a decentralized approach, motivated by the foraging behaviour of ants: ants follow pheromone trails and deposit more pheromone on the trails they traverse, so that trails near food sources accumulate pheromone and attract more ants to follow them.
In this synergistic way, promising paths for discovering food emerge \citep{bonabeau1999swarm, garnier2007biological, paredes2017milk}. Intuitively, the same mechanism can be applied in our context to discover crop farms in need of fertilizer, similar to the way it has been applied in the past to solve a milk collection problem \citep{paredes2017milk}.
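The two pheromone mechanics just mentioned, evaporation and deposition, can be condensed into a single update step. The sketch below uses our own notation and illustrative rate values, not the exact update rule of the AIA:

```python
def update_pheromone(trails, visited, rho=0.85, deposit=1.0):
    """One pheromone update step on a dict mapping grid cell -> level.

    rho is the fraction of pheromone that evaporates per iteration;
    cells traversed by ants this iteration receive a fixed deposit.
    Both values here are illustrative placeholders.
    """
    for cell in trails:
        trails[cell] *= (1.0 - rho)                     # evaporation
    for cell in visited:
        trails[cell] = trails.get(cell, 0.0) + deposit  # deposition
    return trails
```

Evaporation lets stale paths fade while deposition reinforces recently useful ones, which is what produces the emergent trails described above.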
Our contribution in this paper is two-fold: on the one hand, we have solved the problem of transferring animal manure in both centralized and decentralized ways, addressing some limitations of related work (see Section \ref{relWOrk}).
On the other hand, we have proposed and developed a decentralized, nature-inspired technique for a domain (i.e. smart agriculture)
where swarm intelligence methods are still under-exploited, although there is a growing research interest from a computational science perspective \citep{KamilarisSept2018NatComp}.
It is the first attempt to use an ant-inspired algorithm (AIA) for this particular and challenging real-world problem.
The rest of the paper is organized as follows:
Section \ref{relWOrk} describes related work on manure management based on geospatial analysis and on ant-inspired applications in agriculture,
while Section \ref{Methodology} presents our methodology regarding a centralized optimal algorithm (COA), an ant-inspired modelling approach (AIA), as well as a neighbour-based method (NBS). The NBS method constitutes the existing practice used today in an ad hoc, uncoordinated manner in Catalonia \citep{teira1999case, flotats2009manure}.
Section \ref{Results} analyzes the overall findings after applying the proposed methods in the Catalonian context,
and Section \ref{Discussion} discusses the results and comments on the perspectives of this research. Finally, Section \ref{Conclusion} concludes the paper and lists future work.
\section{Related Work}
\label{relWOrk}
Related work involves two main research areas: manure management based on geospatial analysis, facilitated by Geographical Information Systems (GIS) \citep{Kamilaris2018CNNAgri},
as well as applications of ant-inspired techniques in agriculture, facilitated by ant colony optimization (ACO) \citep{dorigo1996ant, dorigo1997ant}. Less relevant work concerns network flow solutions applied to other agricultural problems, such as the transportation of live animals to slaughterhouses \citep{oppen2008tabu}, the routing of vehicles for optimized livestock feed distribution \citep{kandiller2017multi} and biomass transportation \citep{gracia2014application}.
Related work in the two main research areas mentioned above is presented below.
\subsection{Transport of Manure for Nutrient Use}
\label{transpManureRelWork}
The idea of transporting surplus manure beyond individual farms for nutrient utilization was proposed in \citep{he1998preliminary},
focusing on animal manure distribution in Michigan.
Teira-Esmatges and Flotats (2003) proposed a methodology to apply manure at a regional and municipal scale in an agronomically correct way,
i.e. by balancing manure distribution to certain crops, based on territorial nitrogen needs and also based on predictions of future needs and availability considering changes in land use.
ValorE \citep{acutis2014valore} is a GIS-based decision support system for livestock manure management,
with a small case study performed at municipality level in the Lombardy region, northern Italy,
indicating the feasibility of manure transfer.
Other researchers proposed approaches to select sites for safe application of animal manure as fertilizer to agricultural land.
Site suitability maps have been created using a GIS-based model in the Netherlands \citep{van1992computer} and in Queensland, Australia \citep{basnet2001selecting}.
Van Lanen and Wopereis (1992) found that 40\% to 60\% of Dutch rural land was suitable for slurry injection.
Basnet et al. (2001) presented a method of selecting sites for the safe application of animal waste as fertiliser to agricultural land, concluding that 16\% of the area under study was suitable for animal manure application.
A minimum cost spatial GIS-based model for the transportation of dairy manure was proposed in \citep{paudel2009geographic}.
The model incorporated land use types, locations of dairy farms and farmlands, road networks, and distances
from each dairy farm to receiving farmlands, to identify dairy manure transportation routes that minimize costs relative to environmental and economic constraints.
Finally, an application of ACO to solve the milk blending problem with collection points, determining where the collection points should be located and which milk producers would be allocated to them for delivery is described in \citep{paredes2017milk}.
\subsection{Ant-Inspired Techniques in Agriculture}
Not much research has been done on applying ant-inspired techniques in agriculture; only a few approaches applying ACO to agricultural problems have been recorded.
ACO is a probabilistic technique in which artificial ants (i.e. simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. ACO generally works by searching for optimal paths in a graph, based on the behaviour of ants seeking a path between their colony and sources of food.
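The probabilistic move choice at the heart of ACO can be sketched as pheromone-weighted roulette selection. This is the classic ACO transition rule in simplified form, with illustrative parameter values, not the rule of any of the cited works:

```python
import random

def choose_next(candidates, pheromone, heuristic, alpha=1.0, beta=2.0,
                rng=random):
    """Pick the next node with probability proportional to
    pheromone[c] ** alpha * heuristic[c] ** beta (classic ACO rule)."""
    weights = [pheromone[c] ** alpha * heuristic[c] ** beta
               for c in candidates]
    total = sum(weights)
    r = rng.random() * total
    for c, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return c
    return candidates[-1]  # guard against floating-point round-off
```

Higher pheromone or heuristic desirability makes a node proportionally more likely to be chosen, while the randomness keeps the colony exploring alternative paths.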
We note that ACO is different from the ant-inspired technique applied in this paper (see Section \ref{AIA}), because the agents/ants in our context need to seek multiple paths, in a probabilistic travelling salesman manner.
Paredes-Belmar et al. (2017) applied ACO to solve the milk blending problem described in the previous section.
Optimal land allocation was investigated in \citep{liu2012multi}, where the ants represented candidate solutions for different types of land use allocation.
Li et al. (2010) used an ACO algorithm for feature selection in a weed recognition problem.
Optimization of field coverage plans for harvesting operations was performed by means of ACO \citep{bakhtiari2013operations}. Finally, ACO was used for
feature selection and classification of hyperspectral remote sensing images \citep{zhou2009feature}, an operation highly relevant to agriculture.
\subsection{Assumptions in Related Work}
The aforementioned related work, presented in Section \ref{transpManureRelWork}, has adopted various assumptions:
\begin{itemize}
\item aggregating geographical areas at county-level \citep{he1998preliminary};
\item selecting generally suitable sites (i.e. crop and pasture areas) to apply animal manure \citep{van1992computer, basnet2001selecting};
\item not considering transportation distances between livestock and crop farms \citep{he1998preliminary, teira2003method};
\item not calculating the particular needs of crop fields in nitrogen that depend on the land area and the type of the crop \citep{basnet2001selecting, paudel2009geographic};
\item not including actual costs involved with the proposed solution \citep{he1998preliminary, paudel2009geographic, teira2003method, basnet2001selecting};
\item not finding a balanced, fair solution that minimizes the average distance that needs to be covered by the livestock farmers (all aforementioned papers);
\item approximating the problem by means of only centralized strategies (all aforementioned papers).
\end{itemize}
\section{Problem Modelling and Methods Description}
\label{Methodology}
The overall goal is to solve the problem of how to find an optimal and economic way to distribute animal manure in order to fulfil agricultural fertilization needs.
The purpose of this section is to describe how the problem was modelled using the area of Catalonia as a case study (Section \ref{problemmodel}) and to explain how the objective function was defined (Section \ref{objFunctionDescription}).
Furthermore, this section presents the methods adopted to solve the problem under study. These methods are the centralized optimal algorithm (COA) (Section \ref{COA}), the ant-inspired algorithm (AIA) (Section \ref{AIA}), as well as a method based on neighbour search (NBS) (Section \ref{NBS}). NBS constitutes the prevalent method currently used in the territory \citep{teira1999case, flotats2009manure}, and it has been implemented for comparison purposes.
\subsection{Problem Modelling}
\label{problemmodel}
To simplify the problem, the geographical area of Catalonia has been divided into a two-dimensional grid, as shown in Figure \ref{fig:CataloniaModel} (left).
In this way, the distances between livestock farms (i.e. the origin grid cells) and crop fields (i.e. the destination grid cells) are easier to compute, using grid-cell Manhattan distance as the metric, rather than the actual distance through the existing transportation network. The centre of the crop field is used for the calculations. An approximation to real-world distances is attempted in Section \ref{objFunctionDescription}.
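The grid-cell metric, together with the corrective-factor approximation just mentioned, can be written directly. The value of $g$ below is a placeholder only; the actual factor is defined with the objective function in Section \ref{objFunctionDescription}:

```python
def manhattan(a, b):
    """Manhattan distance between grid cells a = (x1, y1), b = (x2, y2)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def approx_road_distance(a, b, g=1.3):
    """Real-world road distance approximated via a corrective factor g.

    g = 1.3 is an illustrative placeholder, not the value used in the
    study's objective function.
    """
    return g * manhattan(a, b)
```

With 1 km-square cells, one unit of Manhattan distance corresponds to roughly one kilometre before the correction is applied.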
\begin{figure}[htb]
\begin{center}
\begin{tabular}{l}
\begin{minipage}{\linewidth}
\begin{minipage}{0.50\linewidth}
\includegraphics[width=\linewidth]
{images/catalonia_grid3.png}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\includegraphics[width=\linewidth]
{images/catalonia_grid4.png}
\end{minipage}
\end{minipage}
\end{tabular}
\end{center}
\vspace{-0.3cm}
\caption{Division of the territory of Catalonia in cells of 1 square kilometre each (left). Snapshot is from the area of Cambrils, Reus and Tarragona. Demonstration of livestock/crop farms at grid cells in a dense agricultural area of the region (right). This is a zoom of the map shown on the left. Snapshot is from the area of Reus. Livestock farms are shown as brown circles, and crop fields as blue polygons. The majority of livestock farms raise pigs.}
\label{fig:CataloniaModel}
\end{figure}
Each crop and livestock farm has been assigned to the grid cell where the farm is physically located, as depicted in Figure \ref{fig:CataloniaModel} (right).
Details about livestock farms (i.e. animal types and census, location etc.) have been provided by the Ministry of Agriculture of Catalonia (Departamento de Agricultura, Ganadería, Pesca y Alimentación, Generalitat de Cataluña) for the year 2016, after signing a confidentiality agreement.
Details about crop fields (i.e. crop type, hectares, irrigation method, location, etc.) have been downloaded from the website of the Ministry\footnote{Ministry of Agriculture of Catalonia. \url{http://agricultura.gencat.cat/ca/serveis/cartografia-sig/aplicatius-tematics-geoinformacio/sigpac/}},
for the year 2015.
For every livestock farm, the yearly amount of manure produced and its equivalent in nitrogen as fertilizer have been
calculated, depending on the type and number of animals on the farm, based on the IPCC guidelines (TIER1) \citep{IPCC2006} and the work in \citep{borhan2012greenhouse}.
Similarly, for every crop field, the yearly needs in nitrogen have been computed, depending on the crop type and total hectares of land,
according to \citep{RuralCatdossier}.
The estimated total nitrogen needs of crop fields (i.e. 81,960 tons of nitrogen) were lower than the availability of nitrogen from animal manure (i.e. 116,746 tons of nitrogen). This surplus of nitrogen is evident in Catalonia and has
contributed to the pollution of the physical environment during the last decades \citep{Kamilaris2017AgriBigCat}.
This means that the produced amount of manure/nitrogen from livestock agriculture has the potential to completely satisfy the total needs of crop farms. This would be particularly important in areas corresponding to the vulnerable zones
defined by the nitrogen EU directive\footnote{The Nitrates Directive of the European Commission. \url{http://ec.europa.eu/environment/water/water-nitrates/index_en.html}}.
Summing up, the total area of Catalonia has been divided into 74,970 grid cells, each representing a $1 \times 1$ square kilometre of physical land.
Every cell has a unique ID and $(x,y)$ coordinates, ranging between $[1,315]$ for the $x$ coordinate and $[1,238]$ for the $y$ coordinate.
For each grid cell, we are aware of the crop and livestock farms located inside that cell, the manure/nitrogen production (i.e. from the livestock farms) and the needs in nitrogen (i.e. of the crop fields). All types of livestock farms and crop fields have been taken into account.
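As an illustration, the grid model described above can be sketched in a few lines of Python. The \texttt{Cell} structure and its field names are our own assumptions for illustration, not the authors' implementation:

```python
# Illustrative sketch of the grid model (data structures are an assumption).
from dataclasses import dataclass

GRID_X, GRID_Y = 315, 238  # grid extent reported above for Catalonia

@dataclass
class Cell:
    x: int                        # 1 <= x <= GRID_X
    y: int                        # 1 <= y <= GRID_Y
    nitrogen_supply: float = 0.0  # kg/year produced by livestock farms in the cell
    nitrogen_demand: float = 0.0  # kg/year needed by crop fields in the cell

def manhattan(a: Cell, b: Cell) -> int:
    """Grid-cell Manhattan distance, the transport metric used throughout."""
    return abs(a.x - b.x) + abs(a.y - b.y)

farm = Cell(10, 12, nitrogen_supply=5000.0)
field = Cell(14, 9, nitrogen_demand=3000.0)
print(manhattan(farm, field))  # → 7
```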
\subsection{Objective Function}
\label{objFunctionDescription}
The problem under study is a single-objective problem, with the overall goal of optimizing the logistics process of satisfying nutrient needs of crops by means of livestock waste. This goal has the following conflicting sub-objectives:
\begin{enumerate}
\item The total nitrogen needs at the crop fields have to be satisfied as much as possible.
\item The total aggregated travel distance covered from the livestock farms to the crop fields, in order to deposit the manure/fertilizer, needs to be as short as possible.
\end{enumerate}
These two sub-objectives can be reformulated as a single one by combining them linearly, assuming the following:
\begin{itemize}
\item The price of fuel in Catalonia, Spain is 1.27 Euro per liter\footnote{GlobalPetrolPrices. \url{http://es.globalpetrolprices.com/Spain/gasoline_prices/} (for May 2019)}.
\item The fuel consumption of tanker trucks is 20.3 liters per 100 kilometres\footnote{Natural Resources Canada. \url{http://www.nrcan.gc.ca/energy/efficiency/transportation/cars-light-trucks/buying/16745}}.
\item Based on the price of fuel and the consumption given above, the transportation cost per kilometre is 0.257 Euro.
\item Based on the local monthly average prices for fertilizers in Catalonia\footnote{Ministry of Agriculture of Catalonia. \url{http://agricultura.gencat.cat/ca/departament/dar_estadistiques_observatoris} (ammonium sulphate in May 2019)},
the value of nitrogen is 0.225 Euro per kilogram.
\end{itemize}
Based on the aforementioned assumptions, the general objective function to be maximized is defined as:
\begin{equation}
\label{combinedObjective}
GO = (NT \times 0.225 \times l) - (TD \times 0.257 \times g)
\end{equation}
where $NT$ is the total nitrogen transferred in kilograms, and $TD$ is the total distance in kilometres
covered to transport manure from the livestock to the crop farms. The parameter $l$ captures the nutrient losses of manure during its storage time,
i.e. the period during which the manure is stored at the livestock farm before being transferred to the crop field. Nutrient losses vary depending on animal type and storage method.
We selected a value of $l=0.60$, which is the average percentage of nitrogen remaining available in manure, according to the animal census of Catalonia,
at an expected storage time of up to three months as solid or liquid manure \citep{rotz2004management}.
Further, the parameter $g$ is a corrective factor aiming to approximate real-world distances, considering that our calculations are based on Manhattan distances between the livestock and the crop farms. The parameter $g$ weights the calculated Manhattan distance by a factor of $g = 1.30$, a value which has been found to be appropriate for approximating real-world distances in semi-rural landscapes \citep{wenzel2017comparing}.
The objective $GO$ is assumed to be in Euro, as it represents a simplified cost/benefit relationship of the manure transfer problem, i.e. benefit of selling nitrogen to the crop fields and cost of transport needed in order to transfer the nitrogen.
The overall goal is to maximize $GO$, whose value can be translated to gains or losses of each solution of the problem.
$GO$ can also take negative values, meaning that a solution would produce a loss. In that case, the transaction is not executed, since it is not rewarding. For every possible transaction, there is a minimum amount of nitrogen that yields a positive value of the objective function $GO$ (see Table \ref{tab:ParametersAIS}). The simulator compares this minimum amount to the amount available for transfer, and rejects the transfer if the available content is below the minimum. Thus, for all three methods (COA, AIA and NBS), a transfer is allowed only if the objective $GO$ is positive, given the current amount of nitrogen and the estimated travel distance, which together define the minimum amount of nitrogen required. In practice, at larger distances it might not be beneficial to transport manure, due to high transportation costs. For example, at a distance of 20 kilometres, at least 51 kilograms of nitrogen must be transferred for the transfer to be rewarding.
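As a sketch, the objective of Equation \ref{combinedObjective} and the minimum-nitrogen check can be expressed as follows. The constants are the ones listed above; the function names are ours, and with these constants the break-even amount at 20 cells comes out near the $\sim$51 kg quoted in the text (small differences stem from rounding assumptions):

```python
import math

# Constants from the assumptions listed above.
NITROGEN_PRICE = 0.225   # Euro per kg of nitrogen (ammonium sulphate, May 2019)
TRANSPORT_COST = 0.257   # Euro per km
L_LOSS = 0.60            # parameter l: fraction of nitrogen still available
G_CORR = 1.30            # parameter g: Manhattan-to-road corrective factor

def go(nitrogen_kg: float, manhattan_km: float) -> float:
    """Objective GO: benefit of nitrogen sold minus transport cost."""
    return nitrogen_kg * NITROGEN_PRICE * L_LOSS - manhattan_km * TRANSPORT_COST * G_CORR

def min_nitrogen(manhattan_km: float) -> int:
    """Smallest integer amount of nitrogen (kg) giving GO > 0 at this distance."""
    return math.floor(manhattan_km * TRANSPORT_COST * G_CORR / (NITROGEN_PRICE * L_LOSS)) + 1

print(min_nitrogen(20))  # → 50 (close to the ~51 kg quoted above; rounding differs)
print(go(51, 20) > 0)    # → True: 51 kg is rewarding at a 20-cell distance
```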
Moreover, there is a hard constraint set by the Ministry of Agriculture, demanding that the maximum distance travelled for manure deposit is $50$ kilometres. The reasoning is that, at longer distances, the travel time required for the transfer would become significant and would have to be included in the calculations. Finally, the Ministry asked us to keep the average travel distance from every livestock farm to the crop fields, and its standard deviation, as small as possible, i.e. to keep the proposed solution \textit{well-balanced and fair} for all livestock farms.
\subsection{Centralized Optimal Algorithm}
\label{COA}
A centralized optimal approach has been developed, based on the following algorithm, which generalizes and adapts the well-known Dijkstra's algorithm for finding shortest paths \citep{cherkassky1996shortest, dijkstra1959note}, together with origin-destination (OD) cost matrices, as used in the travelling salesman problem, for choosing the best routes \citep{lin1973effective}.
\begin{figure}[ht!]
\centering
\vspace{-0.4cm}
\includegraphics[width=1.0\linewidth]{images/OD_matrix.png}
\caption{Concept of the COA algorithm illustrated.}
\label{fig:COAconcept}
\vspace{-0.1cm}
\end{figure}
Each livestock farm aims to maximize a \textit{local $GO$}, which is the objective function applied only to this farm. In case of conflicts with other livestock farms over common resources, the solution that maximizes the \textit{global $GO$}, as defined in Equation \ref{combinedObjective}, wins.
The concept of the algorithm in the context of the problem under study is illustrated in Figure \ref{fig:COAconcept}. Assume that the ``travelling salesman'' is the livestock farm at the red circle. This farm builds its own OD cost matrix, based on the possible values of the local objective function $GO$ applied at each nearby grid cell, up to a Manhattan distance of 50. For simplicity, Figure \ref{fig:COAconcept} shows the matrix up to a Manhattan distance of 4. Observe that, in general, grid cells at larger distances have smaller rewards. However, some crop fields located far away might have larger demands in nitrogen, which yields larger values of the local $GO$. It is also possible that crop fields located near competing livestock farms have reduced demands in nitrogen, as they might have already received nitrogen/fertilizer from those competing farms. After the livestock farm at the red circle builds its OD matrix, it uses Dijkstra's algorithm to find the path that maximizes the local $GO$. In the example of Figure \ref{fig:COAconcept}, this is the path indicated by the yellow circles and arrows, which gives a value of $GO=33$. In case of a conflict with another livestock farm (i.e. when the two farms share the same grid cell in their paths), the solution maximizing the global objective $GO$ is considered.
In detail, the algorithm works as follows:
\begin{enumerate}
\item Every livestock farm makes a complete plan, having visibility of the whole grid with regard to where to transfer manure/nitrogen. The most rewarding paths from the source (i.e. the farm's initial position) to all other grid cells where crop farms are located, up to a maximum distance of 50 kilometres, are calculated, producing an origin-destination cost matrix. The cost or reward of every path is calculated based on the objective function $GO$, considering both the actual transportation distances and the possible transfer of nitrogen.
\item Similar to a travelling salesman problem, the possible routes passing through more than one candidate crop farm (i.e. until the availability of manure is exhausted or the hard constraint of 50 kilometres is reached) are added to the origin-destination cost matrix. The goal is to maximize the local $GO$, as it applies to the current livestock farm. The selected travel plan, which involves all the cells that must be visited starting from the nearest one, is the one with the highest local $GO$.
\item If a conflict appears between the selected travel plans of two livestock farms (i.e. at a cell $(x, y)$ where some crop farm is located), the livestock farm involved in the solution that maximizes the global $GO$ wins the conflict. Note that if the need for manure/nitrogen at this cell $(x, y)$ is higher than the combined availability of nitrogen of the two livestock farms, no conflict occurs.
\item If the conflict persists, the livestock farm that lost the conflict recalculates a plan that maximizes its local $GO$, this time either without considering the cell $(x, y)$ or considering only the remaining need for manure/nitrogen at the crop farm(s) in this cell (i.e. assuming that the livestock farm winning the conflict will deposit its nitrogen there).
\item Steps 1-4 continue iteratively until there is a global consensus, i.e. no livestock farm can find a better plan to transfer its manure. At the time of consensus, both the global $GO$ and the individual objective functions of each livestock farm (local $GO$s) have been maximized and cannot be further improved. Further conflict-resolution efforts do not yield a higher global $GO$.
\end{enumerate}
Summing up, COA solves the problem using the classic Dijkstra's algorithm \citep{dijkstra1959note}, considering a shortest-path problem on an undirected, non-negative, weighted graph. To use the algorithm in the context of the problem under study, it has been modified to respect the necessary configurations and constraints, i.e. the weights of the graph model both transport distances and crop farms' nitrogen needs, combined using the linear function $GO$. All combinations of visits to nearby farms within 50 kilometres are added to an origin-destination cost matrix, from which the most profitable route in terms of maximizing $GO$ is selected. In contrast to the typical travelling salesman problem, here the possible stop locations vary depending on which combinations of candidate crop farms maximize $GO$.
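A minimal sketch of the first COA step, building the origin-destination reward matrix for a single livestock farm, could look as follows. The data structures and the toy $GO$ function are illustrative assumptions; multi-stop route combinations and conflict resolution are omitted:

```python
# Sketch of COA step 1: one farm's origin-destination reward matrix.
MAX_DIST = 50  # hard constraint on Manhattan distance (grid cells)

def od_matrix(farm_xy, supply_kg, demand, go):
    """Map each candidate crop cell to the local GO of serving it from farm_xy.

    demand: dict {(x, y): nitrogen needed in kg}
    go:     objective function go(nitrogen_kg, distance)
    """
    fx, fy = farm_xy
    matrix = {}
    for (cx, cy), need in demand.items():
        d = abs(cx - fx) + abs(cy - fy)
        if need > 0 and d <= MAX_DIST:
            transferred = min(supply_kg, need)
            reward = go(transferred, d)
            if reward > 0:            # only rewarding transfers are kept
                matrix[(cx, cy)] = reward
    return matrix

# Toy GO with the per-kg benefit and per-km cost used in the paper's constants:
toy_go = lambda n, d: n * 0.135 - d * 0.334
m = od_matrix((0, 0), supply_kg=500, demand={(3, 4): 400, (30, 30): 50}, go=toy_go)
print(max(m, key=m.get))  # → (3, 4): the nearby high-demand cell wins
```

The far cell at $(30, 30)$ is discarded outright, since its Manhattan distance of 60 exceeds the 50-cell hard constraint.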
\subsection{Ant-Inspired Algorithm}
\label{AIA}
In general, the synergistic pheromone laying behaviour of ants when discovering food sources
is used as a form of indirect communication, in order to influence the movement of other ants \citep{bonabeau1999swarm, garnier2007biological}.
Pheromone laying was modelled (among others) in the Ant System \citep{dorigo1996ant, dorigo1997ant}, a probabilistic population technique
for combinatorial optimization problems where the search space can be represented by a graph.
The technique exploits the behaviour of ants following links on the graph, constructing paths between their colony and sources of food, to incrementally discover optimal paths, which would form the solution.
In the particular context of the manure transport problem, the foraging behaviour of ants has been adapted to the problem under study. Each ant (i.e. livestock farm) selects its next position from its current grid position successively
and pseudo-randomly, where the probability of next move depends on the pheromone amounts at the neighbouring grid cells.
At each iteration of the algorithm, each ant is allowed to move at a Manhattan distance of maximum one neighbouring grid cell.
Each ant examines the nitrogen needs of crop fields in its neighbourhood,
and drops pheromone at its current grid cell, proportional to the local needs in nitrogen, in order to inform other ants of the demand for manure at nearby crop fields.
In detail, the modelling of the problem according to ant foraging is as follows:
\begin{enumerate}
\item Every livestock farm simulates an ant.
\item Every crop field is considered as a potential source of food, analogous to its needs in nitrogen. At the beginning, the pheromone amount at each grid cell is initialized proportionally to the initial needs in nitrogen by the crop fields physically located inside the grid cell.
\item Pheromone at each grid cell is updated by pheromone deposits. Ants perform local pheromone updates to the grid cell where they are currently located while moving around, proportional to the amount of food available (i.e. nitrogen needs) in their grid-based neighbourhood of Manhattan distance (i.e. radius) $n$. The pheromone value at each grid cell increases when one or more ants reside at the cell at some point, depositing pheromone, but also evaporates with time.
\item Each ant chooses the next link of its path based on information provided by other ants, in the form of pheromone deposits at every grid cell.
\item Whenever an ant discovers a crop field with nitrogen needs at its current position (i.e. some grid cell), a transfer of nitrogen is performed from the livestock farm represented by the ant, to the crop field located at that grid cell. In this case, the need for nitrogen at that particular grid cell is reduced accordingly. The manure transaction is recorded by the system as part of the final solution.
\item If the ant still carries some manure/nitrogen, then it continues to move in the grid up to a maximum Manhattan cell-distance of $m=50$ km from its initial position.
\item Steps 3-6 continue iteratively until there is a global consensus, i.e. no livestock farm can find a better plan to transfer its manure. At the time of consensus, the objective function $GO$ has been maximized and cannot be further improved.
\end{enumerate}
The amount of pheromone laid by each ant is calculated based on the existing nitrogen needs at each neighbouring cell within radius $n$. The biological interpretation of $n$ is the distance over which an ant can \textit{sniff} pheromone content released by other ants.
The Manhattan distance is used to penalize neighbours at larger distances, reducing their \textit{contribution} to the pheromone deposits.
The amount of pheromone $\tau_{xy}$, laid by each ant located at grid cell $(x,y)$ at every iteration $t$ of the algorithm, is calculated using:
\begin{equation}
\label{pheromonecreation}
\tau_{xy}(t) \, = \, \tau_{xy}(t-1) + \sum_{i=x-n}^{x+n} \sum_{j=y-n}^{y+n} NN_{ij} \times \frac{1}{ d_{ijxy}}
\end{equation}
where $\tau_{xy}(t-1)$ is the previous concentration of pheromone at grid cell $(x,y)$,
$NN_{ij}$ represents the food (i.e. needs in nitrogen of the crop field in kilograms) located at grid cell $(i,j)$,
and $d_{ijxy}$ is the Manhattan distance between the ant (i.e. livestock farm) and the food (i.e. crop field).
The parameter $n$ defines which neighbours at the grid structure would be involved in the calculations of pheromone (i.e. neighbours up to $n$-cell distance).
The probability $p_{kl}$ of an ant to move from grid cell $(x,y)$ to $(k,l)$, is calculated as:
\begin{equation}
\label{antmove}
p_{kl} \, = \, \frac{\tau_{kl}} {\sum_{i=x-1}^{x+1}\sum_{j=y-1}^{y+1} \tau_{ij} }
\end{equation}
Note that paths with a higher pheromone concentration have higher probability of selection.
At each iteration $t$ of the algorithm, the pheromone concentration $\tau_{xy}(t)$ at every grid cell $(x,y)$ decays/evaporates to promote exploration:
\begin{equation}
\label{pheromoneevap}
\tau_{xy}(t) \, = \, (1-\varrho) \times \tau_{xy}(t-1)
\end{equation}
where $\varrho$ is the percentage of \textit{pheromone evaporation}.
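The three update rules above can be sketched as follows. This is a minimal sketch under our own assumptions: pheromone is kept in a sparse dictionary, and the ant's own cell, for which $d_{ijxy}=0$, is skipped in the deposit sum to avoid division by zero:

```python
import random

def deposit(tau, pos, needs, n):
    """Pheromone laying at the ant's cell: nitrogen needs within the
    (2n+1) x (2n+1) square of the double sum, weighted by 1/d.
    The ant's own cell (d = 0) is skipped, an assumption of this sketch."""
    x, y = pos
    for i in range(x - n, x + n + 1):
        for j in range(y - n, y + n + 1):
            d = abs(i - x) + abs(j - y)
            if d > 0 and needs.get((i, j), 0.0) > 0:
                tau[pos] = tau.get(pos, 0.0) + needs[(i, j)] / d

def evaporate(tau, rho):
    """Pheromone decay: every cell keeps a fraction (1 - rho) per iteration."""
    for cell in tau:
        tau[cell] *= (1.0 - rho)

def next_cell(tau, pos):
    """Pseudo-random move: probability proportional to pheromone in the
    3x3 neighbourhood, as in the move-probability equation."""
    x, y = pos
    neigh = [(i, j) for i in (x - 1, x, x + 1) for j in (y - 1, y, y + 1)]
    weights = [tau.get(c, 0.0) for c in neigh]
    if sum(weights) == 0:
        return random.choice(neigh)  # no information: explore uniformly
    return random.choices(neigh, weights=weights)[0]

tau = {}
deposit(tau, (5, 5), {(6, 5): 100.0}, n=2)  # one food cell at distance 1
print(tau[(5, 5)])  # → 100.0
evaporate(tau, rho=0.85)
```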
\subsection{Neighbour-Based Search}
\label{NBS}
For comparison reasons, the method currently used in the Catalonian context was implemented \citep{teira1999case, flotats2009manure}.
In practice, each livestock farmer acts selfishly, trying to find the most appropriate crop field(s), based on the objective $GO$ (see Section \ref{objFunctionDescription}), at which to deposit the produced animal manure.
In our implementation, we refer to this method as neighbour-based search (NBS). In reality, the outcome is not optimal, because some farmers might not make the most rational choice. However, we have implemented the NBS method assuming the best possible outcome, as if all farmers made the best possible choice.
The NBS method is described as follows:
\begin{enumerate}
\item First, for some cell $(x, y)$, try to transfer nitrogen from the livestock farm to the crop fields located at this same cell (i.e. Manhattan distance 0). Do this for all the livestock farms/grid cells.
\item Then, if availability of nitrogen still exists, try to transfer nitrogen from the cell $(x, y)$ to the crop fields located at the nearby grid cells $[x \pm 1, y \pm 1]$ (i.e. Manhattan distance 1). Perform this 1-distance calculation for all the livestock farms.
\item If the livestock farm cannot find suitable crop farms in the neighbouring cells of Manhattan distance 1, then continue this procedure for grid cells located at increasing distance $k$ each time from cell $(x, y)$.
At each step $k$, do this $k$-distance calculation for all livestock farms, before moving to a distance $k+1$ (for reasons of fairness).
\item If some suitable crop farm has been found at distance $k$, then perform the transfer of nitrogen, setting the new position of the livestock farm as the one at the grid cell of distance $k$, where the transfer happened. Then, move to Step 2.
\item If no suitable crop farm has been found at distance $k$, Steps 3-4 are repeated until either a new crop farm is found at a larger distance, or the availability of nitrogen is completely satisfied, or the maximum distance of $m=50$ (i.e. grid-cell distance) has been reached.
\item Steps 2-5 are repeated for all livestock farms.
\end{enumerate}
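The ring search above can be sketched for a single farm as follows. This is a simplification: in the actual method, each distance step $k$ is processed for all livestock farms before moving to $k+1$, for reasons of fairness; the function and variable names are ours:

```python
def nbs_transfer(farm_xy, supply_kg, demand, max_dist=50):
    """Sketch of NBS for a single farm: scan rings of increasing Manhattan
    distance and deposit manure at the first suitable crop cells found.

    demand: dict {(x, y): nitrogen needed in kg}, mutated as transfers happen.
    Returns the list of transfers and the leftover supply.
    """
    fx, fy = farm_xy
    transfers = []
    for k in range(0, max_dist + 1):
        if supply_kg <= 0:
            break
        # All demand cells at exactly Manhattan distance k from the farm.
        ring = [(x, y) for (x, y) in demand
                if abs(x - fx) + abs(y - fy) == k and demand[(x, y)] > 0]
        for cell in ring:
            if supply_kg <= 0:
                break
            amount = min(supply_kg, demand[cell])
            demand[cell] -= amount
            supply_kg -= amount
            transfers.append((cell, amount))
    return transfers, supply_kg

demand = {(0, 0): 30.0, (2, 1): 100.0}
transfers, left = nbs_transfer((0, 0), 80.0, demand)
print(transfers)  # → [((0, 0), 30.0), ((2, 1), 50.0)]
print(left)       # → 0.0
```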
\section{Empirical Analysis}
\label{Results}
This section first explains the reasoning towards the tuning of the control parameters of the AIA. Then, it presents and compares the findings obtained by solving the problem of manure transport optimization, using the three methods described in Sections \ref{COA}, \ref{AIA} and \ref{NBS}.
\subsection{AIA Control Parameter Tuning}
\label{AISparameterSetting}
The ant-inspired algorithm introduces the control parameters $n$ and $\varrho$.
Additionally, two more parameters involved in our model are the \textit{maximum cell-distance} $m$ and the \textit{maximum number of iterations}. The former refers to the maximum Manhattan distance
between livestock and crop farms, where nitrogen transfer could be allowed, while the latter defines the maximum number of iterations until the algorithm stops.
The algorithm could stop earlier if no more transfers occur, i.e. all needs are satisfied or no more manure is available.
All parameters involved in the AIA algorithm are listed in Table \ref{tab:ParametersAIS}.
\begin{table}[ht!]
\caption{Control parameters for the AIA algorithm.}
\label{tab:ParametersAIS}
\begin{tabular}{| p{2.6cm} | p{5.5cm} | p{2.6cm} |}
\hline\noalign{\smallskip}
\bf{Parameter Name} & \bf{Description} & \bf{Value(s)} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Pheromone evaporation, $\varrho$ & The decay of pheromone deposited by the ants, at each iteration of the algorithm. & 0-100\% \\
\hline
Neighbourhood radius, $n$ & The maximum Manhattan distance, at which neighbouring cells will contribute in calculating pheromone that would be released by the ant. All the cells up to a cell distance $n$ participate in the calculations. & 1-50 grid cells (values up to 65 have been allowed only for testing purposes)\\
\hline
Minimum nitrogen & The minimum amount of nitrogen in kilograms for a transfer to occur, yielding a positive value of the objective $GO$. & 1-150 kilograms, depending on the Manhattan distance between farms. \\
\hline
Maximum cell-distance, $m$ & The maximum Manhattan distance over which transport of animal manure/nitrogen is allowed. & 50 grid cells (values up to 60 have been allowed only for testing purposes)\\
\hline
Maximum iterations & The maximum number of iterations of the AIA algorithm. & 3,000 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
From the parameters listed in Table \ref{tab:ParametersAIS}, the ones whose values need to be defined are the neighbourhood radius $n$ and the pheromone evaporation coefficient $\varrho$. The former takes values in the range $[0,65]$, ignoring here, for reasons of comparison, the hard constraint of 50 kilometres, while the latter takes values in the range $[0,100]$.
Figure \ref{fig:paramsAIA} depicts the different values of the objective $GO$, at different values of distance $n$ and percentages of $\varrho$. Note that, because the AIA algorithm is stochastic, the results presented below have been averaged over 10 independent runs of the algorithm, with different value pairs of control parameters. The maximum value was recorded for each value pair. Differences between experiments with the same value pairs were very small.
Based on the results presented in Figure \ref{fig:paramsAIA}, a value of pheromone evaporation $\varrho=85\%$
and a neighbourhood radius of $n=50$ cells
were selected. These parameter values provided a value of $GO = 6{,}718{,}069$ Euro. We note that values of $n$ larger than the hard constraint of 50 kilometres did not improve $GO$; they have been included for comparison. We also note that values of $\varrho \in [85,95]$ and $n \in [50,65]$ resulted in very small differences in the $GO$ value.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{images/heatmap.png}
\caption{Impact of pheromone evaporation $\varrho$ and neighbourhood radius $n$ on the objective $GO$.}
\label{fig:paramsAIA}
\end{figure}
\subsection{Comparison of COA, AIA and NBS }
Figure \ref{fig:nitroexchanged} illustrates the total nitrogen transported from livestock to crop farms, for different grid cell Manhattan distances.
COA performs slightly better than AIA, managing to transfer 55.3 K-tons of nitrogen (47.4\% of the total availability), compared to 51.1 K-tons (43.8\% of the total availability) for the AIA. NBS transfers less nitrogen than both COA and AIA (47.8 K-tons, 40.9\% of the total availability). Hence, in terms of nitrogen transfer, the COA algorithm is 1.08 times more efficient than the AIA algorithm, while the AIA is 1.07 times more efficient than the NBS.
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\linewidth]{images/nitrogen.png}
\caption{A comparison between COA, AIA and NBS for the total nitrogen (in kilos) transferred from livestock to crop farms at different Manhattan distances.}
\label{fig:nitroexchanged}
\vspace{-0.3cm}
\end{figure}
For all three approaches, most of the nitrogen transfer happens up to a Manhattan distance of 20 grid cells, after which nitrogen transfer becomes quite low. COA and AIA transfer larger quantities at lower Manhattan distances (i.e. up to 30 grid cells) than NBS.
Figure \ref{fig:transportdistance} presents the transportation distance covered between livestock and crop farms for every successful transfer of nitrogen, i.e. at each Manhattan distance recorded for a transfer, for all three algorithms. NBS is the least efficient, with a linear increase of transportation distance at larger distances between livestock and crop farms. COA requires 27\% less distance to be covered than AIA, while AIA needs
57\% less distance than NBS. Thus, AIA outperforms NBS, while COA is more efficient than AIA.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\linewidth]{images/transport.png}
\caption{Total transportation distance covered between livestock and crop farms, using COA, AIA and NBS at different Manhattan distances.}
\label{fig:transportdistance}
\vspace{-0.1cm}
\end{figure}
The total transactions of animal manure performed at different Manhattan distances are presented in Figure \ref{fig:transactions}. The graph reads as follows: $x$ transactions at some Manhattan distance $y$ means that $x$ transactions occurred during the simulation in which the livestock farm involved was located at a Manhattan distance $y$ from the crop field involved. COA is the most efficient, performing fewer transactions while transferring more manure. AIA performs more transactions than COA at almost all Manhattan distances, especially 3-8, 27-37 and 41-50, yet is still much more efficient than NBS. Due to the selfish and competitive behaviour of the livestock farmers in the NBS case, there are numerous transactions of smaller amounts of animal manure, which causes transactions to increase with distance, especially up to a Manhattan distance of 23.
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\linewidth]{images/transactions.png}
\caption{Total transactions of animal manure between livestock and crop farms, using COA, AIA and NBS at different Manhattan distances.}
\label{fig:transactions}
\vspace{-0.1cm}
\end{figure}
A counter-productive example of the operation of NBS is illustrated in Figure \ref{fig:NBSexample}.
In this example, a livestock farmer physically located at position (1)
moves east to transfer some manure to position (2), where a crop field is located, expecting that the rest of the available manure would then be placed at position (3).
However, at the next iteration of the algorithm, the nitrogen need at position (3) is satisfied by a rival livestock farmer. Thus, the farmer has to move west of the farm's initial position at the next step of the algorithm (i.e. to position (4)) in order to deposit the remaining manure/nitrogen. This behaviour increases the overall transportation distance that needs to be covered by the farmer, as indicated in Figure \ref{fig:transportdistance}. The probability of such scenarios is small for the AIA, thanks to the pheromones that coordinate the movement of the ants along the Catalonian grid in a more balanced way. This probability is zero for COA, because the livestock farms select their strategy a priori, having complete information of the grid within the distance constraint of 50 kilometres.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.7\linewidth]{images/NBS_bhaviour.png}
\caption{An example of counter-productive behaviour of the NBS method.}
\label{fig:NBSexample}
\vspace{-0.1cm}
\end{figure}
Table \ref{tab:summary} summarizes the results of the experiments, including the calculations of the objective $GO$. $GO$ shows that the AIA method is 1.115 times more gainful than the NBS one; however, it is 8.5\% less efficient than the COA.
The fourth and fifth rows of the table report the average total Manhattan distance that needs to be travelled by each livestock farmer in order to perform transfer(s) of animal manure, and its standard deviation. This average distance is 62 for COA (std. deviation 32), 57 for AIA (std. deviation 25) and 112 for NBS (std. deviation 78). This relates to the requirement stated in Section \ref{objFunctionDescription}, i.e. that the proposed solution must be well-balanced and fair for all livestock farms. The results show that the AIA method is the most well-balanced in terms of transport distance travelled, followed by COA.
\begin{table}[ht!]
\caption{Summarized values of the experiments performed using COA, AIA and NBS.}
\label{tab:summary}
\centering
\begin{tabular}{| p{5.5cm} | c | c | c | }
\hline\noalign{\smallskip}
\bf{Objective} & \bf{COA} & \bf{AIA} & \bf{NBS}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
Nitrogen transferred (K-tons) & 55.385 & 51.124 & 47.786 \\
\hline
Transportation (Manhattan distance) & 402,379 & 549,829 & 1,276,371 \\
\hline
Objective $GO$ (Euro) & 7,342,535 & 6,718,069 & 6,024,735 \\
\hline
Average transportation distance of each livestock farm (Manhattan distance) & 62 & 57 & 112 \\
\hline
Standard deviation of the average transportation distance of each livestock farm (Manhattan distance) & 32 & 25 & 78 \\
\hline
Running time (minutes) & 34 & 38 & 31 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Discussion}
\label{Discussion}
The results indicate that COA is the most efficient solution, outperforming AIA by 8.5\% with respect to the linear objective function $GO$.
This makes sense because COA has complete information of the problem, giving an optimal solution. However, AIA can be employed to solve the animal manure transport problem in a slightly fairer manner, in terms of balanced transportation distances covered by the livestock farmers. Both COA and AIA solve the problem by reducing effectively the overall transportation distance that needs to be covered from the livestock farms to the crop farm fields, keeping the nitrogen transfer at high percentages.
COA belongs to the class of network flow problems approximated by integer linear programming (ILP). COA runs on a simulator developed by the authors, using an adapted generalization of Dijkstra's algorithm for shortest paths, together with origin-destination cost matrices for choosing optimal paths, as used in the travelling salesman problem. We decided to develop the simulator from scratch because the scale, conditions, objectives and constraints of the problem under study made the use of popular ILP solvers (e.g. CPLEX, GLPK, Gurobi) difficult. Moreover, since more constraints are expected to be added in the future (see future work in Section \ref{FutureWork} below), a purpose-built simulator offers more flexibility and freedom for that future work.
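As an illustrative sketch (our own simplification in Python, not the simulator's actual code), an origin-destination cost matrix restricted to a cut-off can be built directly from Manhattan distances on the grid; all farm coordinates here are hypothetical:

```python
def manhattan(a, b):
    """Manhattan distance between two grid cells given as (row, col)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def od_cost_matrix(livestock_farms, crop_farms, cutoff=50):
    """Origin-destination cost matrix: Manhattan distance from each
    livestock farm to each crop farm, keeping only the pairs within
    the cut-off distance (50 grid cells in our experiments)."""
    matrix = {}
    for i, src in enumerate(livestock_farms):
        for j, dst in enumerate(crop_farms):
            d = manhattan(src, dst)
            if d <= cutoff:
                matrix[(i, j)] = d
    return matrix

# Two livestock farms and two crop farms on the virtual grid:
print(od_cost_matrix([(0, 0), (100, 100)], [(10, 20), (90, 95)]))
# {(0, 0): 30, (1, 1): 15}
```

In the simulator, such a matrix is computed once per livestock farm and only revised when a conflict is lost (see Section \ref{COA}).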
The last row of Table \ref{tab:summary} shows the running time of each algorithm in minutes, on a laptop machine (2.8 GHz Intel Core i7, 6 GB 2133 MHz LPDDR3 RAM). All three algorithms have similar running times, with AIA being the slowest (38 minutes) due to the continuous movement of the ants in the Catalonian virtual grid, until they find a solution or the constraint of 50 kilometres is reached. COA also has a considerable running time (34 minutes), because each livestock farm needs to calculate shortest paths to all nearby farms within a radius of 50 kilometres, as well as an origin-destination cost matrix with all possible options. This matrix needs to be created only once, unless conflicts appear (see Section \ref{COA}), in which case some re-calculations need to take place for the livestock farm that has lost the conflict. Since not many conflicts appeared (i.e. fewer than 400), COA was not very computationally intensive in the context of the Catalonian area.
The findings indicated that a cut-off Manhattan distance of $50$ was the most appropriate one for the case of Catalonia.
This cut-off distance is larger than the 30-kilometre cut-off distance selected by \citet{basnet2001selecting} for dairy manure application in the case of Louisiana, USA. A reason for this could be differences in the concentration and topology of the farming industry in the two areas.
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\linewidth]{images/map_result.png}
\caption{The map of Catalonia after COA has been applied, showing remaining needs for manure (orange colour) and remaining availability of manure (green colour). The colour intensity indicates the magnitude of the need or availability: darker shades of green and orange correspond to larger availability or needs of manure at a farm. Please note that this map depicts only the manure availability and needs of farms after the application of COA; livestock farms whose manure availability is zero and/or crop farms whose needs for manure as fertilizer are zero do not appear on the map.}
\label{fig:AIAappliedCat}
\vspace{-0.1cm}
\end{figure}
Figure \ref{fig:AIAappliedCat} illustrates how the application of COA in the area of Catalonia affects availability (green colour) and needs (orange colour) of manure/nitrogen. We can observe that the algorithm creates separate regions of green- and orange-coloured spots (livestock and crop farms respectively). The distance between spots of different colour is either larger than 50 kilometres, or there is not enough manure available for the transaction to be gainful, i.e. to give a positive value to the $GO$ function. Note that darker shades of green and orange correspond to larger availability/needs of manure at a farm. Figure \ref{fig:AIAappliedCat} is another indication that COA solves the problem effectively. A very similar map was produced for the AIA case (although AIA was 8.5\% less efficient).
As mentioned before, AIA constitutes an important contribution of this paper, due to its decentralized nature.
AIA has potential as an efficient optimization tool in similar problems of a distributed, geospatial nature, and it could well support a dynamic, real-world scenario where supplies in manure and demand in manure/nitrogen could change continuously.
This scenario could be feasible provided that the livestock and crop farmers were willing to share
information about their animals, manure and crops respectively. In that case, the AIA algorithm would need to be re-designed with faster pheromone evaporation; this is a subject of future work.
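A minimal sketch of the pheromone update in question (Python, with hypothetical parameter values): each evaporation step scales the pheromone on every edge, so a higher rate makes the colony forget stale routes sooner, as a dynamic scenario would require.

```python
def evaporate(pheromone, rate):
    """One evaporation step: tau <- (1 - rate) * tau on every edge.
    The rate value used below is a placeholder, not a tuned parameter."""
    return {edge: (1.0 - rate) * tau for edge, tau in pheromone.items()}

trails = {("farm_a", "field_1"): 1.0, ("farm_b", "field_2"): 0.5}
print(evaporate(trails, rate=0.5))
# {('farm_a', 'field_1'): 0.5, ('farm_b', 'field_2'): 0.25}
```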
Moreover, we note that this study constitutes only a demonstration that COA and AIA could be employed for addressing this important problem.
A complete Life-Cycle Analysis (LCA) \citep{curran2008life}, together with Life-Cycle Costing (LCC) \citep{swarr2011environmental}, would provide a more comprehensive coverage of the problem.
For example,
the profits gained by the algorithms, as summarized in Table \ref{tab:summary}, would be re-considered, taking into account the extra costs needed to maintain the vehicles used for the transfers, i.e. to compensate for the extra kilometres,
as well as the extra time spent by the livestock farmers or the personnel in charge of realizing the transfers of animal manure. Especially for NBS, whose transport needs are more than three times those of COA and more than double those of AIA, this extra cost should be considered high under a complete LCA.
LCA/LCC could also focus on environmental parameters, incorporating actual costs and comparisons with alternatives. Moving large volumes of manure by road also has environmental consequences, which were not examined in this paper.
Through this study, we observed that there are considerable differences between larger and smaller livestock farms in terms of the production of animal manure and their overall environmental impact. It would be interesting to compare or enhance our simulator with a hybrid approach/scenario, where larger farms employ local or neighbouring manure processing units and smaller ones participate in this animal manure transfer scheme.
Finally, it is important to note that most countries around the world have national policies related to manure management \citep{teenstra2014global}. However, these policies have inconsistencies or are not well regulated in many countries, especially developing ones \citep{vu2007survey}. Achieving reductions of GHG emissions, meeting renewable energy targets, and lowering energy costs at the farm level are key drivers of manure-related policies, which differ from country to country among storage, treatment, digestion, discharge and application \citep{oenema2007nutrient}. A general observation is that manure is not optimally used by farmers around the world, especially in developing countries \citep{teenstra2014global, vu2007survey, oenema2007nutrient}. Our work aims to contribute to the efforts towards an effective solution to the problem, via the application of manure as fertilizer to crop farms, giving insights into the implications of the problem and of its potential solutions.
\subsection{Assumptions and Limitations}
The work in this paper has addressed all the assumptions made in related work (see Section \ref{relWOrk}), providing a more detailed and complete treatment. Moreover, the AIA solution is completely decentralized, and could be extended to a dynamic scenario (i.e. future work).
However, both the related work and this paper made some additional assumptions, not taking into account the following:
\begin{itemize}
\item Variation in the availability of manure in different periods of the year.
\item The possibility of a quantity of manure larger than the vehicle's carrying capacity, in which case multiple trips would be needed for the transfer.
\item Varying crop demands for manure in different seasons.
\item The simplification of the objective function being optimized, which is based on a general estimate of nitrogen value and transport cost (i.e. cost of fuel); vehicle purchase, maintenance and depreciation costs, labour costs, etc. have not been considered.
\item The possibility that manure could undergo some \textit{concentration treatment} (e.g. dry cleaning) \citep{teira2003method} in order to reduce the volume transported.
\item Phosphorus, another fundamental crop nutrient present in manure.
\end{itemize}
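To make the fourth point concrete, the simplified per-transfer gain can be sketched as follows (a Python illustration; the coefficients are hypothetical placeholders, not the values of the actual $GO$ function of Section \ref{objFunctionDescription}):

```python
def gain(nitrogen_kg, distance_km, nitrogen_value=1.0, fuel_cost_per_km=0.3):
    """Simplified gain of one transfer: estimated value of the nitrogen
    delivered minus the fuel cost of the trip.  Vehicle purchase,
    maintenance, depreciation and labour costs are deliberately ignored,
    as in the paper's objective.  Coefficient values are placeholders."""
    return nitrogen_value * nitrogen_kg - fuel_cost_per_km * distance_km

print(gain(100.0, 40.0))  # 88.0 -- a gainful transfer
print(gain(10.0, 45.0))   # -3.5 -- not worth the trip
```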
An additional important assumption was the modelling via grid cells and Manhattan distances instead of actual, real-world distances. This assumption was made due to the overall computational complexity of the problem. We tried to mitigate this issue by approximating real-world distances using the corrective factor $g$ in the objective function (see Section \ref{objFunctionDescription}), but this is only a simplified approximation. Factors such as faster versus shorter routes, the quality of the roads, obstacles such as mountains and city centres requiring additional kilometres of travel, the slope of each route, traffic at rush hours, speed limits on different roads, constraints on the routes that trucks are allowed to take, etc. have not been taken into account. Transportation distance also relates to time spent, which has likewise not been considered. These, together with the assumptions mentioned before, are important aspects of the future work discussed below.
\subsection{Future Work}
\label{FutureWork}
Future work will continue to explore the application of COA and AIA to this problem, addressing the assumptions made in this paper.
More realistic transportation distances and travel times among farms will be considered, as well as dynamic changes in the production of and need for nitrogen.
This will include the possibility of various routes during the year to transfer manure, calculating more precisely the seasonal effect on the nitrogen content available in the manure, which decreases over time.
Also, the seasonal differences of various crops will be studied, which might make some crop fields unavailable for manure application at some periods of the year.
Moreover, the costs of the trucks involved in the transport (i.e. purchase, maintenance, depreciation, etc.) will be considered in the objective function, although this is a complicated topic due to the possible subsidies that might be provided by the government in order to implement such a manure transport scheme.
Finally, we plan to investigate the use of local or neighbouring manure processing units in selected livestock farms. The complete environmental consequences of the problem under study will be considered too, including the pollution produced by the transportation of manure between farms.
\section{Conclusion}
\label{Conclusion}
This paper addressed the problem of the surplus of animal manure from livestock agriculture, which creates important environmental problems. The paper investigated and suggested a sustainable approach based on nutrient redistribution, where manure was transported as fertilizer from livestock farms to crop fields. Two approaches have been developed: a centralized approach (COA) based on an adapted version of Dijkstra's algorithm for finding shortest paths, and a
decentralized one inspired by ant foraging behaviour (AIA). AIA addressed the problem by modelling livestock farms as ants and crop fields as sources of food for the ants.
A comparison between the centralized COA approach and the cooperative, decentralized AIA algorithm showed that COA was 8.5\% more efficient, based on a single-objective function. Both COA and AIA significantly outperformed an (individualist) Neighbour-Based Search (NBS) approach, which resembles the existing practice used today for transporting manure in the region of Catalonia, Spain. The AIA approach was fairer for the farmers and more balanced in terms of the average transportation distance that needs to be covered by each livestock farmer to transport manure.
Our work constitutes a new application of ant-inspired algorithms to an interesting real-world problem, in a domain where swarm intelligence methods are still under-exploited.
\begin{acknowledgements}
Special thanks to Mr. Jaume Boixadera Llobet and Mr. Mario Carrillo Salagre from the Ministry of Agriculture, Government of Catalonia.
Their feedback, help and advice have been very important in understanding the problem of livestock agriculture in Catalonia and in seeking together ways to reduce it.
This research has been supported by the P-SPHERE project, which has received
funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 665919,
and also by the CERCA Programme/Generalitat de Catalunya.
Andreas Kamilaris has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 739578 complemented by the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development.
Francesc X. Prenafeta-Bold{\'u} belongs to the Consolidated Research Group TERRA (2017 SGR 1290), funded by the Generalitat de Catalunya.
\end{acknowledgements}
\bibliographystyle{plainnat}
\section{Introduction}
A 1-symmetric basis can be viewed as a rearrangement-invariant function space over $\mathbb{N}$ (equipped with the counting measure). Some bases, though, are only 1-subsymmetric, and we would like to explore an analogous property for certain nonatomic function spaces corresponding to this generalization. For instance, in the case where we have a function space $X$ on $(0,\infty)$, we say that it is {\it subrearrangement-invariant} whenever the following holds:
\begin{quotation}\noindent For every function $f\in X$, every measurable $F\subseteq(0,\infty)$, and every strictly increasing bijection $m:(0,\infty)\to F$ such that $m$ and $m^{-1}$ are both measure-preserving, we have $\|f\circ m\|_X=\|f\textbf{1}_F\|_X$.\end{quotation}
Later, in \S2, we give a broader definition for other nonatomic function spaces besides $(0,\infty)$.
D.J.H.\ Garling was the first to prove that not every subsymmetric basis is symmetric, by publishing a counterexample in 1968 (\cite[\S5]{Ga68}). Garling's sequence space, then, already furnishes us with a subrearrangement-invariant function space on $\mathbb{N}$ which is not rearrangement-invariant under any equivalent norm. It is also well-known that there exist 1-unconditional bases which are not subsymmetric, and hence function spaces on $\mathbb{N}$ which fail to admit an equivalent subrearrangement-invariant norm. In \S3, we extend these results to a purely nonatomic case, exhibiting an example of a function space on $(0,\infty)$ which is not essentially subrearrangement-invariant, and another example which is subrearrangement-invariant but not essentially rearrangement-invariant. Some geometric properties of these ``Garling function spaces'' are then explored in \S4.
All Banach spaces and function spaces are taken over the real field $\mathbb{R}$, and all measure spaces are assumed to be countably additive and $\sigma$-finite. If $\theta$ and $\phi$ are real-valued functions, we use the notation $\theta(x)\approx_\epsilon\phi(y)$ to mean that for any $\epsilon>0$, the arguments $x$ and $y$ can be chosen such that
$$
\phi(y)-\epsilon<\theta(x)<\phi(y)+\epsilon.
$$
Beyond that, all notation and terminology is either standard (such as appears, for instance, in \cite{LT77}) or defined as encountered. In \S5 we give an appendix with proofs of some measure-theoretic results needed for the main results; these are probably already known, but we could not locate them in the published literature.
\section{Subrearrangement-invariant function spaces}
For the following definition, $\beta$ denotes the Borel measure.
\begin{definition}Let $(\Omega,\mu)$ be a $\sigma$-finite measure space, and let $\mathcal{M}_0^+(\Omega)$ denote the cone of (nonnegative) $(\mu,\beta)$-measurable functions $f:\Omega\to[0,\infty]$. Suppose $\rho:\mathcal{M}_0^+(\Omega)\to[0,\infty]$ satisfies the following properties for all $a\in(0,\infty)$ and all $f,g\in\mathcal{M}_0^+(\Omega)$:
\begin{itemize}
\item[(i)] $\rho(f+g)\leq\rho(f)+\rho(g)$;
\item[(ii)] $\rho(af)=a\rho(f)$; and
\item[(iii)] $\rho(f)=0$ if and only if $f\equiv 0$ almost everywhere.
\end{itemize}
We can then define a normed linear space $(X,\|\cdot\|_X)$ consisting of the a.e.-equivalence classes of measurable functions $f:\Omega\to[-\infty,\infty]$ satisfying $\|f\|_X:=\rho(|f|)<\infty$. In this case we say that $\rho$ is a {\bf function norm} on $\Omega$, and $X$ is a {\bf function space} on $\Omega$ with respect to $\rho$.
\end{definition}
\noindent Note that our definition differs from other classes of function spaces such as Banach function spaces defined in \cite[\S1]{BS88} or K\"othe function spaces. It is suitable for the present purposes, however.
\begin{remark}
If $(e_i)_{i=1}^\infty$ is a 1-unconditional basis for a Banach space $X$, we can define a function norm $\rho_X$ by setting, for all $f:\mathbb{N}\to[0,\infty]$,
$$\rho_X(f)=\left\{\begin{array}{ll}\displaystyle\left\|\sum_{i=1}^\infty f(i)e_i\right\|_X&\text{ if }\sum_{i=1}^\infty f(i)e_i\text{ converges, and}\\\\\infty&\text{ otherwise.}\end{array}\right.$$
In this way, $X$ can be viewed as a function space on $\mathbb{N}$, with respect to $(e_i)_{i=1}^\infty$.
\end{remark}
Let $(\Omega,\mu)$ be a $\sigma$-finite measure space, and $f:\Omega\to[-\infty,\infty]$ a $(\mu,\beta)$-measurable function (where $\beta$ is the Borel measure). The {\bf distribution function} $\text{dist}_f:[0,\infty]\to[0,\infty]$ of $f$ is given by the rule
$$\text{dist}_f(s)=\mu\left\{x\in\Omega:|f(x)|>s\right\}.$$
Two such measurable functions $f$ and $g$ are said to be {\bf equimeasurable} whenever $\text{dist}_f=\text{dist}_g$. In this case we write $f\sim g$.
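As a quick illustration of these notions, if $A\subseteq\Omega$ is measurable and $f=\boldsymbol{1}_A$, then
$$\text{dist}_{\boldsymbol{1}_A}(s)=\mu(A)\boldsymbol{1}_{[0,1)}(s),$$
so that the indicator functions of any two sets of equal measure are equimeasurable.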
\begin{definition}
Let $(\Omega,\mu)$ be a $\sigma$-finite measure space. A function space $X$ on $\Omega$ is called {\bf rearrangement-invariant} iff $\|f\|_X=\|g\|_X$ for all equimeasurable functions $f,g\in X$. It is {\bf essentially rearrangement-invariant} provided it admits an equivalent rearrangement-invariant norm.
\end{definition}
\begin{remark}
The above definition follows \cite{BS88} rather than the somewhat more stilted definition of rearrangement-invariance found, for instance, in \cite{LT79}.
\end{remark}
Let $(E,\mu_E)$ and $(F,\mu_F)$ be measure spaces. A map $m:E\to F$ is called {\bf $\boldsymbol{(\mu_E,\mu_F)}$-measurable} (or, when $\mu_E$ and $\mu_F$ are clear from context, simply, {\bf measurable}) if whenever $A$ is a measurable subset of $F$, the set $m^{-1}(A)$ is measurable in $E$. The map $m$ is called a {\bf measure-preserving transformation} if whenever $A$ is a measurable subset of $F$, the set $m^{-1}(A)$ is measurable with $\mu_E(m^{-1}(A))=\mu_F(A)$. If furthermore $m$ is bijective with $m^{-1}$ also measure-preserving, we say that it is a {\bf measure-isomorphism}.
\begin{definition}
If $E$ and $F$ are totally-ordered measure spaces, we denote by $\mathbb{MO}(E,F)$ the set of all maps $m:E\to F$ such that $m$ is both strictly increasing and a measure-isomorphism. Any such $m\in\mathbb{MO}(E,F)$ is called an {\bf$\boldsymbol{\mathbb{MO}}$-isomorphism} between $E$ and $F$.
\end{definition}
Let us now introduce the main subject under study.
\begin{definition}
Let $(\Omega,\mu)$ be a totally-ordered $\sigma$-finite measure space satisfying $\mu(\Omega)=\infty$. We say that a function space $X$ on $\Omega$ is {\bf subrearrangement-invariant} if for every measurable $F\subseteq\Omega$, every $m\in\mathbb{MO}(\Omega,F)$, and every $f\in X$, we have $\|f\circ m\|_X=\|f\textbf{1}_F\|_X$. We say that $X$ is {\bf essentially subrearrangement-invariant} whenever it admits an equivalent subrearrangement-invariant norm.
\end{definition}
\noindent Here, the restriction $\mu(\Omega)=\infty$ has been included since $\mathbb{MO}(\Omega,F)$ would be empty otherwise, whenever $\mu(F)\neq\mu(\Omega)$, and that would make every function space on $\Omega$ trivially subrearrangement-invariant.
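For instance, on $\Omega=(0,\infty)$ with Lebesgue measure, take $F=(c,\infty)$ for some $c>0$ and let $m(t)=t+c$ be the corresponding shift, so that $m\in\mathbb{MO}((0,\infty),(c,\infty))$. Subrearrangement-invariance then demands
$$\|f(\,\cdot\,+c)\|_X=\|f\boldsymbol{1}_{(c,\infty)}\|_X\qquad\text{for every }f\in X;$$
the shift maps used in the example at the end of this section are exactly of this form.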
The following well-known fact is proved in the appendix.
\begin{proposition}\label{1-symmetric}
A 1-unconditional basis $(e_i)_{i=1}^\infty$ for a real Banach space $X$ is 1-symmetric if and only if $X$ is rearrangement-invariant as a function space on $\mathbb{N}$ with respect to $(e_i)_{i=1}^\infty$. It is symmetric if and only if $X$ is essentially rearrangement-invariant.
\end{proposition}
Next, we give a result which in some sense justifies our definition of subrearrangement-invariance.
\begin{proposition}\label{1-subsymmetric}
A 1-unconditional basis $(e_i)_{i=1}^\infty$ for a real Banach space $X$ is 1-subsymmetric if and only if $X$ is subrearrangement-invariant as a function space on $\mathbb{N}$ with respect to $(e_i)_{i=1}^\infty$. It is subsymmetric if and only if $X$ is essentially subrearrangement-invariant.
\end{proposition}
\begin{proof}
($\Rightarrow$): Suppose $(e_i)_{i=1}^\infty$ is 1-subsymmetric. Let $F\subseteq\mathbb{N}$ and $m\in\mathbb{MO}(\mathbb{N},F)$, and select any $f\in X$. By 1-subsymmetry of $(e_i)_{i=1}^\infty$ we have
\begin{multline*}
\|f\circ m\|_X
=\left\|\sum_{i=1}^\infty f(m(i))e_i\right\|_X
=\left\|\sum_{i\in F}f(i)e_{m^{-1}(i)}\right\|_X
\\=\left\|\sum_{i\in F}f(i)\textbf{1}_F(i)e_{m^{-1}(i)}\right\|_X
=\left\|\sum_{i=1}^\infty f(i)\textbf{1}_F(i)e_i\right\|_X
=\|f\textbf{1}_F\|_X.
\end{multline*}
Hence, $X$ is subrearrangement-invariant with respect to $(e_i)_{i=1}^\infty$.
($\Leftarrow$): Suppose now that $X$ is subrearrangement-invariant with respect to $(e_i)_{i=1}^\infty$. Let $(e_{i_k})_{k=1}^\infty$ be a subsequence and $f\in X$. Define $m(k)=i_k$ for $k\in\mathbb{N}$, and $F:=\{i_k:k\in\mathbb{N}\}$. Clearly, $m\in\mathbb{MO}(\mathbb{N},F)$. Define $g:\mathbb{N}\to[0,\infty]$ by letting $g(i)=(|f|\circ m^{-1})(i)$ if $i\in F$ and $g(i)=0$ otherwise. We will need to check that $g\in X$, but this follows from the computation below, together with the identity $g\boldsymbol{1}_F=g$. Now, by subrearrangement-invariance and 1-unconditionality we have
\begin{multline*}
\left\|\sum_{k=1}^\infty f(k)e_{i_k}\right\|_X
=\left\|\sum_{k=1}^\infty(f\circ m^{-1})(i_k)e_{i_k}\right\|_X
=\left\|\sum_{i\in F}(f\circ m^{-1})(i)e_{i}\right\|_X
\\=\left\|\sum_{i=1}^\infty g(i)\textbf{1}_F(i)e_{i}\right\|_X
=\|g\textbf{1}_F\|_X
=\|g\circ m\|_X
\\=\|f\circ m^{-1}\circ m\|_X
=\|f\|_X
=\left\|\sum_{i=1}^\infty f(i)e_i\right\|_X.
\end{multline*}
That $(e_n)_{n=1}^\infty$ is subsymmetric if and only if it is essentially subrearrangement-invariant follows easily by considering the equivalent norm $|||x|||=\sup_{(n_k)\in\mathbb{N}^\uparrow}\|\sum_{k=1}^\infty e_k^*(x)e_{n_k}\|$.
\end{proof}
It is well-known that every 1-symmetric basis is 1-subsymmetric. Similarly, it is easy to show that rearrangement-invariance implies subrearrangement-invariance. We just need a quick preliminary fact before we do.
\begin{proposition}[{\cite[Proposition 2.7.2]{BS88}}]\label{mpt-equimeasurable}
Let $m:E\to F$ be a measure-preserving transformation between $\sigma$-finite measure spaces $(E,\mu_E)$ and $(F,\mu_F)$. If $f:F\to[0,\infty]$ is a $(\mu_F,\beta)$-measurable function on $F$, then $f\circ m:E\to[0,\infty]$ is a $(\mu_E,\beta)$-measurable function on $E$, and $f$ and $f\circ m$ are equimeasurable.
\end{proposition}
\begin{proposition}\label{ri-is-sri}
Let $(\Omega,\mu)$ be a totally-ordered $\sigma$-finite measure space satisfying $\mu(\Omega)=\infty$. If $X$ is a rearrangement-invariant function space on $\Omega$, then it is also subrearrangement-invariant.
\end{proposition}
\begin{proof}
Select any $f\in X$, measurable $F\subseteq\Omega$, and $m\in\mathbb{MO}(\Omega,F)$. Notice that $f|_F\circ m=f\circ m$ so that, by Proposition \ref{mpt-equimeasurable}, $f\circ m\sim f|_F$. We also clearly have $f\textbf{1}_F\sim f|_F$, and hence $f\circ m\sim f\textbf{1}_F$. By rearrangement-invariance this means $\|f\circ m\|_X=\|f\textbf{1}_F\|_X$.
\end{proof}
Let us close this section by discussing the nontriviality of essential subrearrangement-invariance. There are, after all, well-known examples of 1-unconditional bases which are not subsymmetric under any renorming, for instance the basis for the Tsirelson space. This furnishes us with examples of function spaces on $\mathbb{N}$ which are not essentially subrearrangement-invariant. The following example---a simple modification of the Schreier sequence space---gives us a function space on the purely nonatomic measure space $(0,\infty)$ which fails to be essentially subrearrangement-invariant.
\begin{example}
Denote by $\mathcal{A}$ the family of all subsets $A$ of $(0,\infty)$ satisfying $\lambda(A)\leq\sqrt{\inf A}$. For a nonnegative $(\lambda,\beta)$-measurable function $f:(0,\infty)\to[0,\infty]$, we set
$$\rho_Y(f)=\sup_{A\in\mathcal{A}}\int_Af(t)\;dt.$$
Then $\rho_Y$ is a function norm, and we can denote by $Y$ the function space it generates.
Furthermore, $Y$ is a Banach space which fails to be essentially subrearrangement-invariant.
\end{example}
\begin{proof}
That $\rho_Y$ is a function norm is clear from the definition, and it is routine (via an argument such as in Proposition \ref{is-banach}) to show completeness. So we need only prove that it fails to be essentially subrearrangement-invariant. Select any $b\in(0,\infty)$. When selecting $A\in\mathcal{A}$ to estimate $\|\boldsymbol{1}_{(0,b]}\|_Y$, we may assume without loss of generality that $\inf A\leq b$, as otherwise $\int_A\boldsymbol{1}_{(0,b]}(t)\;dt=0$. Hence,
$$
\int_A\boldsymbol{1}_{(0,b]}(t)\;dt
\leq\lambda(A)
\leq\sqrt{\inf A}
\leq\sqrt{b}
$$
so that $\|\boldsymbol{1}_{(0,b]}\|_Y\leq\sqrt{b}$. On the other hand, if $c\geq b^2$ then $(c,c+b]\in\mathcal{A}$, whence
$
\|\boldsymbol{1}_{(c,c+b]}\|_Y
=b.
$
It is clear that $\boldsymbol{1}_{(c,c+b]}\circ m=\boldsymbol{1}_{(0,b]}$ for the shift map $m\in\mathbb{MO}((0,\infty),(c,\infty))$ defined by $m(t)=t+c$. Hence,
$$
\frac{\|\boldsymbol{1}_{(c,c+b]}\|_Y}{\|\boldsymbol{1}_{(c,c+b]}\circ m\|_Y}
=\frac{\|\boldsymbol{1}_{(c,c+b]}\|_Y}{\|\boldsymbol{1}_{(0,b]}\|_Y}
\geq\frac{b}{\sqrt{b}}=\sqrt{b}.
$$
As $b\in(0,\infty)$ was arbitrary, it follows that $Y$ is not essentially subrearrangement-invariant.
\end{proof}
\section{Garling function spaces}
The converse of Proposition \ref{ri-is-sri} fails to hold in general, as can be seen from the following example. If $1\leq p<\infty$ and $w=(w(k))_{k=1}^\infty$ is a nonincreasing sequence of positive real numbers satisfying $w\in c_0\setminus\ell_1$, then we can define the Garling sequence space $g(w,p)$ as the space of all scalar sequences $f:\mathbb{N}\to[-\infty,\infty]$ satisfying
$$
\|f\|_g:=\sup_{(i(k))_{k=1}^\infty\in\mathbb{N}^\uparrow}\left(\sum_{k=1}^\infty|f(i(k))|^pw(k)\right)^{1/p}
<\infty,
$$
where $\mathbb{N}^\uparrow$ denotes the family of all increasing sequences in $\mathbb{N}$. (We usually also impose the condition that $w(1)=1$, but this is not always necessary.) It is known from \cite[Proposition 2.4]{AAW18} and \cite[Lemma 3.1]{AALW18} that the unit vectors in $g(w,p)$ form a 1-unconditional basis which is 1-subsymmetric but not symmetric. In particular, viewed thus as a function space on $\mathbb{N}$, by Propositions \ref{1-symmetric} and \ref{1-subsymmetric}, it is subrearrangement-invariant but fails to be rearrangement-invariant, or even just essentially rearrangement-invariant. Nevertheless, it remains to be seen whether essential subrearrangement-invariance is a strictly weaker condition than essential rearrangement-invariance in the nonatomic setting. We devote this section, therefore, to exhibiting a function space on $(0,\infty)$ which is subrearrangement-invariant but fails to be essentially rearrangement-invariant.
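To see the failure of symmetry concretely, one can compute the $g(w,p)$ norm of a finitely supported sequence by brute force over all increasing index subsequences. The following Python sketch (exponential in the support size, purely illustrative) uses the harmonic weights $w(k)=1/k$, which are nonincreasing and lie in $c_0\setminus\ell_1$:

```python
from itertools import combinations

def garling_norm(f, w, p=1.0):
    """Brute-force g(w, p) norm of a finitely supported sequence f: the
    sup over increasing index subsequences i(1) < ... < i(r) of
    (sum_k |f(i(k))|^p * w(k))^(1/p)."""
    best = 0.0
    for r in range(1, len(f) + 1):
        for idx in combinations(range(len(f)), r):  # already increasing
            best = max(best, sum(abs(f[i]) ** p * w[k]
                                 for k, i in enumerate(idx)))
    return best ** (1.0 / p)

w = [1.0 / k for k in range(1, 11)]  # w(k) = 1/k
print(garling_norm([1.0, 2.0], w))   # 2.0
print(garling_norm([2.0, 1.0], w))   # 2.5
```

Permuting the entries changes the norm, so the canonical basis is not 1-symmetric, while (by 1-subsymmetry) pushing a sequence forward along a subsequence never changes it.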
To accomplish this, we shall simply generalize Garling's construction. In fact, we will use the very same ``split into two sums'' trick that Garling did in his original paper \cite[\S5]{Ga68}. However, in order for this strategy to work, we need to make some adaptations. Part of that will involve using the measure-theoretic results from \S2 of the present paper. Also, we need to characterize Garling sequence spaces slightly differently.
\begin{proposition}
Fix a nonincreasing function $w:\mathbb{N}\to(0,\infty)$ with $w\in c_0\setminus\ell_1$. For each function $f:\mathbb{N}\to[0,\infty]$ we define
$$\rho_g(f)=\sup_{\substack{E,F\subseteq\mathbb{N}\\m\in\mathbb{MO}(E,F)}}\left(\sum_{k\in E}(f\circ m)(k)^pw(k)\right)^{1/p}.$$
Then $\rho_g$ is a function norm generating the space $g(w,p)$.
\end{proposition}
\begin{proof}
Let $(i(k))_{k=1}^\infty\in\mathbb{N}^\uparrow$. By taking $E=\mathbb{N}$, $F=(i(k))_{k=1}^\infty$, and $m(k)=i(k)$, it is clear that $\rho_g(f)\geq\|f\|_g$. For the reverse inequality, let $E,F\subseteq\mathbb{N}$ and $m\in\mathbb{MO}(E,F)$. We may assume without loss of generality that $E$ and $F$ are both infinite. Thus, there is a unique $n\in\mathbb{MO}(\mathbb{N},E)$, and this satisfies $m\circ n\in\mathbb{MO}(\mathbb{N},F)$. Since $w$ is nonincreasing, we have
\begin{multline*}
\sum_{k\in E}(f\circ m)(k)^pw(k)
=\sum_{j=1}^\infty(f\circ m\circ n)(j)^pw(n(j))
\\\leq\sum_{j=1}^\infty(f\circ m\circ n)(j)^pw(j)
\leq\|f\|_g^p.
\end{multline*}
That $\rho_g$ is a function norm generating $g(w,p)$ follows immediately.
\end{proof}
\begin{definition}
Let $\mathbb{W}$ denote the set of all nonincreasing $(\lambda,\beta)$-measurable functions $W:(0,\infty)\to(0,\infty)$ satisfying the following conditions:
\begin{itemize}\item[(W1)] $\displaystyle\lim_{t\to\infty}W(t)=0$,
\vspace{0.1cm}
\item[(W2)] $\int_0^\infty W(t)\;dt=\infty$, and
\vspace{0.2cm}
\item[(W3)] $\int_0^1W(t)\;dt<\infty$.
\end{itemize}
We also denote by $\lambda$ the Lebesgue measure and by $\Lambda$ the family of Lebesgue-measurable subsets of $(0,\infty)$. (Recall that $\beta$ is the Borel measure.) For each $(\lambda,\beta)$-measurable $f:(0,\infty)\to[0,\infty]$, set
$$
\rho_G(f)=\sup_{\substack{E,F\in\Lambda\\m\in\mathbb{MO}(E,F)}}\left(\int_E(f\circ m)(t)^pW(t)\;dt\right)^{1/p},
$$
where $W\in\mathbb{W}$ and $1\leq p<\infty$. We then define a {\bf Garling function space}, denoted $G_{W,p}(0,\infty)$, as the space of all a.e.-equivalence classes of measurable functions $f:(0,\infty)\to[-\infty,\infty]$ satisfying $\|f\|_G:=\rho_G(|f|)<\infty$.
\end{definition}
\begin{remark}
Conditions (W1) and (W2) are the only ones we use in \S3 and the proof of Proposition \ref{is-banach}. However, for the other results in \S4, condition (W3) is needed.
\end{remark}
\noindent It is clear that $\rho_G$ is a function norm, and hence $G_{W,p}(0,\infty)$ is a function space on $(0,\infty)$. We will show later in \S4 that it is in fact a Banach space, i.e.\ that it is complete.
\begin{proposition}
Fix $1\leq p<\infty$, and let $W\in\mathbb{W}$. Then $G_{W,p}(0,\infty)$ is subrearrangement-invariant.
\end{proposition}
\begin{proof}
Fix $D\in\Lambda$ and $n\in\mathbb{MO}((0,\infty),D)$, and $f\in G_{W,p}(0,\infty)$. Observe that there are $E,F\in\Lambda$ and $m\in\mathbb{MO}(E,F)$ such that
\begin{multline*}
\|f\textbf{1}_D\|_G^p
\approx_\epsilon\int_E((f\textbf{1}_D)\circ m)(t)^pW(t)\;dt
=\int_E(f\circ m)(t)^p(\textbf{1}_D\circ m)(t)W(t)\;dt
\\=\int_{m^{-1}(D)\cap E}(f\circ m)(t)^pW(t)\;dt
\leq\int_{m^{-1}(D)}(f\circ m)(t)^pW(t)\;dt
\\=\int_{m^{-1}(D)}((f\circ n)\circ(n^{-1}\circ m))(t)^pW(t)\;dt
\leq\|f\circ n\|_G^p,
\end{multline*}
where the last inequality follows from the fact that $n^{-1}\circ m$ is an $\mathbb{MO}$-isomorphism from $m^{-1}(D)$ onto its image. On the other hand, there are $A,B\in\Lambda$ and $\ell\in\mathbb{MO}(A,B)$ such that
\begin{multline*}
\|f\circ n\|_G^p
\approx_\epsilon\int_A(f\circ n\circ\ell)(t)^pW(t)\;dt
=\int_A(f\circ n\circ\ell)(t)^p(\textbf{1}_D\circ n\circ\ell)(t)W(t)\;dt
\\=\int_A((f\textbf{1}_D)\circ(n\circ\ell))(t)^pW(t)\;dt
\leq\|f\textbf{1}_D\|_G^p,
\end{multline*}
where the first equality follows from the fact that $\textbf{1}_D\circ n\circ\ell$ is identically equal to $1$ on $A$, and the final inequality follows from the fact that $n\circ\ell$ is an $\mathbb{MO}$-isomorphism from $A$ onto its image.
\end{proof}
To show that a Garling function space fails to admit an equivalent rearrangement-invariant norm, we need the following intuitively obvious lemma.
\begin{lemma}\label{technical}
Fix $p\in[1,\infty)$ and $r\in(0,\infty)$. Let $W\in\mathbb{W}$ and let $f:(0,\infty)\to[0,\infty]$ be a measurable function which is nondecreasing on $(0,r)$ and zero everywhere else. Then there is $s\in[0,r]$ so that
$$
\|f\|_G=\left(\int_0^sf(t+r-s)^pW(t)\;dt\right)^{1/p}.
$$
\end{lemma}
\noindent Unfortunately, it requires a somewhat technical proof. We begin with some preliminaries.
\begin{proposition}[{\cite[Theorem 2.9.3]{Bo07}}]\label{distribution-integral}
Let $(\Omega,\mu)$ be a measure space and $f:\Omega\to[-\infty,\infty]$ a $(\mu,\beta)$-measurable function. Then the $\mu$-integrability of $f$ is equivalent to the Lebesgue integrability of the function $t\mapsto\text{dist}_f(t)$, and
$$\int_\Omega|f|\;d\mu=\int_0^\infty\text{dist}_f(t)\;dt.$$
\end{proposition}
\begin{corollary}\label{mpt-integral}
If $E$ and $F$ are measurable subsets of $(0,\infty)$, and $f:(0,\infty)\to[0,\infty]$ is a (nonnegative) $(\lambda,\beta)$-measurable function, then for any measure-preserving transformation $m:E\to F$ we have
$$\int_E(f\circ m)(t)\;dt=\int_F f(t)\;dt.$$
\end{corollary}
\begin{proof}
By Proposition \ref{mpt-equimeasurable}, $f$ and $f\circ m$ are equimeasurable, which is to say that $\text{dist}_f=\text{dist}_{f\circ m}$. Now by Proposition \ref{distribution-integral} we have
$$\int_E(f\circ m)(t)\;dt=\int_0^\infty\text{dist}_{f\circ m}(t)\;dt
=\int_0^\infty\text{dist}_f(t)\;dt=\int_F f(t)\;dt.$$
\end{proof}
The proof of the following is given in the appendix.
\begin{proposition}\label{mo-isomorphism}
Let $E$ be a measurable subset of $[-\infty,\infty]$ with $\lambda(E)<\infty$. Then there is a measure-zero subset $E_0$ of $E$, a measure-zero subset $D_0$ of $[0,\lambda(E)]$, and an $\mathbb{MO}$-isomorphism between $E\setminus E_0$ and $[0,\lambda(E)]\setminus D_0$.
\end{proposition}
\begin{proof}[Proof of Lemma \ref{technical}]
First, observe that since the map
$$b\mapsto\int_0^bf(t+r-b)^pW(t)\;dt$$
is continuous on the compact set $[0,r]$, we can find $s\in[0,r]$ so that
\begin{equation}\label{4}
\int_0^sf(t+r-s)^pW(t)\;dt=\sup_{b\in[0,r]}\int_0^bf(t+r-b)^pW(t)\;dt.
\end{equation}
Let $E,F\in\Lambda$ and $m\in\mathbb{MO}(E,F)$ be such that
\begin{equation}\label{3}
\|f\|_G^p\approx_\epsilon\int_E(f\circ m)(t)^pW(t)\;dt.
\end{equation}
Without loss of generality we may assume that $F\subseteq(0,r)$, and set $b:=\lambda(F)\leq r$. By Proposition \ref{mo-isomorphism} we can find measure-zero subsets $E_0$ of $E$ and $D_0$ of $(0,b)$, and an $\mathbb{MO}$-isomorphism
$$n:(0,b)\setminus D_0\to E\setminus E_0.$$
We claim that
\begin{equation}\label{1}(f\circ m\circ n)(t)^p\leq f(t+r-b)^p,\end{equation}
or, equivalently, $b-t\leq r-(m\circ n)(t)$, for each $t\in(0,b)\setminus D_0$. Indeed, as $m\circ n$ is order-preserving, we have, for $c\in(t,b)\setminus D_0$,
$$(m\circ n)((t,c)\setminus D_0)\subseteq[(m\circ n)(t),(m\circ n)(c)]$$
and since $m\circ n$ is a measure isomorphism from $(0,b)\setminus D_0$ onto its image, we also have
\begin{multline*}
c-t
=\lambda(t,c)
=\lambda((t,c)\setminus D_0)
=\lambda((m\circ n)((t,c)\setminus D_0))
\\\leq\lambda[(m\circ n)(t),(m\circ n)(c)]
=(m\circ n)(c)-(m\circ n)(t).
\end{multline*}
For $\epsilon>0$ we are free to choose $c\in(t,b)\setminus D_0$ so that $b-c<\epsilon$. Hence,
$$b-t<c-t+\epsilon\leq(m\circ n)(c)-(m\circ n)(t)+\epsilon\leq r-(m\circ n)(t)+\epsilon.$$
As $\epsilon>0$ was arbitrary, this means $b-t\leq r-(m\circ n)(t)$ as desired.
Next we claim that
\begin{equation}\label{2}(W\circ n)(t)\leq W(t),\end{equation}
or, equivalently, $t\leq n(t)$, for each $t\in(0,b)\setminus D_0$. Indeed, for $\delta\in(0,t)\setminus D_0$ we have
$$n((\delta,t)\setminus D_0)\subseteq[n(\delta),n(t)]$$
and hence
$$t-\delta=\lambda(n((\delta,t)\setminus D_0))\leq\lambda[n(\delta),n(t)]=n(t)-n(\delta)\leq n(t).$$
As $\delta\in(0,t)\setminus D_0$ can be chosen arbitrarily close to zero, this means $t\leq n(t)$ as claimed.
From \eqref{1} and \eqref{2} we obtain that
$$(f\circ m\circ n)(t)^p(W\circ n)(t)\leq f(t+r-b)^pW(t)$$
for all $t\in(0,b)\setminus D_0$, and hence, by the above together with \eqref{4}, \eqref{3}, and Corollary \ref{mpt-integral}, we have
\begin{multline*}
\|f\|_G^p
\approx_\epsilon\int_E(f\circ m)(t)^pW(t)\;dt
=\int_{E\setminus E_0}(f\circ m)(t)^pW(t)\;dt
\\=\int_{(0,b)\setminus D_0}(f\circ m\circ n)(t)^p(W\circ n)(t)\;dt
\leq\int_{(0,b)\setminus D_0}f(t+r-b)^pW(t)\;dt
\\=\int_0^bf(t+r-b)^pW(t)\;dt
\leq\int_0^sf(t+r-s)^pW(t)\;dt
\leq\|f\|_G^p.
\end{multline*}
\end{proof}
We are now set to prove the main result of this section.
\begin{theorem}
If $W(t)=(t+1)^{-1/2}$ then $G_{W,1}(0,\infty)$ fails to admit an equivalent rearrangement-invariant norm.
\end{theorem}
\begin{proof}
Fix $r\in(0,\infty)$ and let $f_r:(0,\infty)\to[0,\infty]$ and $f_r^*:(0,\infty)\to[0,\infty]$ be defined by
$$
f_r(t)=\left\{\begin{array}{ll}(r+1-t)^{-1/2}&\text{ if }0<t<r,\\0&\text{ if }r\leq t<\infty\end{array}\right.
$$
and
$$
f_r^*(t)=\left\{\begin{array}{ll}(t+1)^{-1/2}&\text{ if }0<t<r,\\0&\text{ if }r\leq t<\infty.\end{array}\right.
$$
We claim that $f_r$ and $f_r^*$ are equimeasurable. Indeed, it is clear that $\text{dist}_{f_r}(s)=\text{dist}_{f_r^*}(s)=r$ for all $0\leq s\leq(1+r)^{-1/2}$ and $\text{dist}_{f_r}(s)=\text{dist}_{f_r^*}(s)=0$ for all $1\leq s\leq\infty$. Now select $(1+r)^{-1/2}<s<1$. We have $f_r(t)>s$ if and only if both $0<t<r$ and $(r+1-t)^{-1/2}>s$, or, equivalently, $r+1-s^{-2}<t<r$. In this case we have
$$\text{dist}_{f_r}(s)=\lambda\{t\in(0,\infty):f_r(t)>s\}=\lambda(r+1-s^{-2},r)=s^{-2}-1.$$
Similarly, $f_r^*(t)>s$ if and only if both $0<t<r$ and $(t+1)^{-1/2}>s$, or, equivalently, $0<t<s^{-2}-1$. This gives us
$$\text{dist}_{f_r^*}(s)=\lambda\{t\in(0,\infty):f_r^*(t)>s\}=\lambda(0,s^{-2}-1)=s^{-2}-1$$
so that $f_r$ and $f_r^*$ are equimeasurable as claimed.
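As a quick numerical sanity check of this equimeasurability claim (an illustrative aside, not part of the proof; the grid resolution and the choice $r=10$ are arbitrary), one can approximate both distribution functions by midpoint Riemann sums and compare them with the piecewise formula just computed:

```python
def dist(f, s, a, b, n=100000):
    # approximate dist_f(s) = lambda{ t : f(t) > s } on (a, b) by a midpoint sum
    h = (b - a) / n
    return sum(h for k in range(n) if f(a + (k + 0.5) * h) > s)

r = 10.0
f_r      = lambda t: (r + 1 - t) ** -0.5 if 0 < t < r else 0.0  # increasing on (0, r)
f_r_star = lambda t: (t + 1) ** -0.5 if 0 < t < r else 0.0      # decreasing on (0, r)

for s in (0.1, 0.5, 0.9):
    d1, d2 = dist(f_r, s, 0.0, r + 1), dist(f_r_star, s, 0.0, r + 1)
    # piecewise formula from the proof: dist = r below the threshold, s^{-2}-1 above
    expected = r if s <= (1 + r) ** -0.5 else s ** -2 - 1
    assert abs(d1 - d2) < 1e-2 and abs(d1 - expected) < 1e-2
```

Both numerical distribution functions agree with each other and with the closed-form expression, as the proof predicts.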
Note that
$$
\|f_r^*\|_G
\geq\int_0^r(t+1)^{-1}\;dt
=\log(r+1)\to\infty
$$
as $r\to\infty$. Thus, to complete the proof, it is enough to show that $\|f_r\|_G$ is bounded by a number not depending on $r$.
Now we apply Garling's own ``split into two sums'' trick, except in our case the ``sums'' are actually integrals. Since $f_r$ is increasing on its support $(0,r)$, and $W\in\mathbb{W}$, by Lemma \ref{technical} there is $s\in[0,r]$ so that
\begin{multline*}
\|f_r\|_G
=\int_0^sf_r(t-s+r)W(t)\;dt
=\int_0^s(1-t+s)^{-1/2}(t+1)^{-1/2}\;dt
\\=\int_0^{s/2}(1-t+s)^{-1/2}(t+1)^{-1/2}\;dt+\int_{s/2}^s(1-t+s)^{-1/2}(t+1)^{-1/2}\;dt.
\end{multline*}
Hence, it suffices to show that each of these pieces is bounded by a number not depending on $s$. For the first piece, note that if $t\in(0,s/2]$ then $(1-t+s)^{-1/2}\leq(s/2+1)^{-1/2}$. Hence,
\begin{multline*}
\int_0^{s/2}(1-t+s)^{-1/2}(t+1)^{-1/2}\;dt
\leq(s/2+1)^{-1/2}\int_0^{s/2}(t+1)^{-1/2}\;dt
\\=(s/2+1)^{-1/2}\cdot 2\left[(s/2+1)^{1/2}-1\right]
\leq 2.
\end{multline*}
For the second piece, note that if $t\in[s/2,s]$ then $(t+1)^{-1/2}\leq(s/2+1)^{-1/2}$, so that
\begin{multline*}
\int_{s/2}^s(1-t+s)^{-1/2}(t+1)^{-1/2}\;dt
\leq(s/2+1)^{-1/2}\int_{s/2}^s(1-t+s)^{-1/2}\;dt
\\=(s/2+1)^{-1/2}\cdot 2\left[(s/2+1)^{1/2}-1\right]
\leq 2.
\end{multline*}
\end{proof}
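Numerically, the contrast driving this proof is easy to see: the split bounds each piece by $2$, hence the shifted integral from Lemma \ref{technical} by $4$, for every cut point $b$, while $\int_0^r f_r^*(t)W(t)\,dt=\log(r+1)$ grows without bound. The following sketch (an illustrative aside, not part of the proof; the Riemann-sum resolution is an arbitrary choice) checks this contrast:

```python
import math

def shifted_integral(b, n=20000):
    # midpoint Riemann sum for integral_0^b (1+b-t)^{-1/2} (1+t)^{-1/2} dt,
    # i.e. the quantity produced by the cut point s = b in the proof above;
    # note it does not depend on r
    h = b / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += ((1 + b - t) * (1 + t)) ** -0.5 * h
    return total

# each of the two pieces is bounded by 2, so the whole integral stays <= 4,
# no matter how large b is ...
for b in (1.0, 10.0, 100.0, 1000.0):
    assert shifted_integral(b) <= 4.0

# ... while the integral of f_r^* against W, namely log(r+1), is unbounded in r
assert math.log(1000.0 + 1) > 4.0
```

In fact the substitution $u=t-b/2$ turns the integrand into $((1+b/2)^2-u^2)^{-1/2}$, so the shifted integral equals $2\arcsin(b/(b+2))\leq\pi$, consistent with the uniform bound above.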
\section{Geometric properties of Garling function spaces}
In this section we show that Garling function spaces are Banach spaces containing $(1+\epsilon)$-isomorphic copies of $\ell_p$. As a consequence, the space $G_{W,1}(0,\infty)$ is nonreflexive. It remains an open question as to whether $G_{W,p}(0,\infty)$ is reflexive when $1<p<\infty$.
\begin{proposition}\label{is-banach}
Fix $1\leq p<\infty$ and $W\in\mathbb{W}$. Then the space $G_{W,p}(0,\infty)$ is complete.
\end{proposition}
\begin{proof}
Let $(f_i)_{i=1}^\infty$ be a Cauchy sequence in $G_{W,p}$. Let $E,F\in\Lambda$ and $m\in\mathbb{MO}(E,F)$. Observe that
$$
\|f_i-f_j\|_G^p
\geq\int_0^\infty|f_i(t)-f_j(t)|^pW(t)\;dt
\geq\||f_i|W^{1/p}-|f_j|W^{1/p}\|_{L_p(0,\infty)}^p
$$
so that $(|f_i|W^{1/p})_{i=1}^\infty$ is Cauchy in $L_p(0,\infty)$. As such, it converges a.e.-pointwise to $g\in L_p(0,\infty)$. Similarly,
\begin{align*}
\|f_i-f_j\|_G^p
&\geq\int_E|(f_i\circ m)(t)-(f_j\circ m)(t)|^pW(t)\;dt
\\&\geq\||f_i\circ m|W^{1/p}-|f_j\circ m|W^{1/p}\|_{L_p(E)}^p
\end{align*}
so that $(|f_i\circ m|W^{1/p})_{i=1}^\infty$ converges both in $L_p(E)$ and a.e.-pointwise to some $g_E\in L_p(E)$. Set $f:=gW^{-1/p}$ so that $(|f_i|)_{i=1}^\infty$ converges a.e.-pointwise to $f$. As $(|f_i\circ m|W^{1/p})_{i=1}^\infty$ now converges a.e.-pointwise to $|f\circ m|W^{1/p}$, it follows that $|f\circ m|W^{1/p}$ and $g_E$ are a.e.-identical.
Since $(f_i)_{i=1}^\infty$ is Cauchy, we can find $M\in(0,\infty)$ so that $\|f_i\|_G^p\leq M$ for all $i\in\mathbb{N}$. Furthermore, we can find $i_0\in\mathbb{N}$ so that $\|g_E-|f_{i_0}\circ m|W^{1/p}\|_{L_p(E)}^p\leq 1$.
Then, using the elementary inequality $(a+b)^p\leq 2^{p-1}(a^p+b^p)$ for $a,b\geq0$,
\begin{multline*}
\int_E|f\circ m|(t)^pW(t)\;dt
\leq 2^{p-1}\left(\int_E\left|\left(|f|-|f_{i_0}|\right)\circ m\right|(t)^pW(t)\;dt+\int_E|f_{i_0}\circ m|(t)^pW(t)\;dt\right)
\\=2^{p-1}\left(\|g_E-|f_{i_0}\circ m|W^{1/p}\|_{L_p(E)}^p+\int_E|f_{i_0}\circ m|(t)^pW(t)\;dt\right)
\\\leq 2^{p-1}\left(\|g_E-|f_{i_0}\circ m|W^{1/p}\|_{L_p(E)}^p+\|f_{i_0}\|_G^p\right)
\leq 2^{p-1}(1+M).
\end{multline*}
As $E,F,m$ were arbitrary, we have $\rho_G(|f|)\leq\left(2^{p-1}(1+M)\right)^{1/p}<\infty$ so that $f\in G_{W,p}(0,\infty)$.
Next, select $\epsilon>0$ and find $N\in\mathbb{N}$ so that $\|f_i-f_j\|_G^p<\epsilon/2$ for all $i,j\geq N$. Select $j_0\geq N$ so that $\|g_E-|f_{j_0}\circ m|W^{1/p}\|_{L_p(E)}^p<\epsilon/2$. Then for $i\geq N$ we have
\begin{multline*}
\int_E|(f-f_i)\circ m|(t)^pW(t)\;dt
\leq 2^{p-1}\left(\int_E\left|\left(|f|-|f_{j_0}|\right)\circ m\right|(t)^pW(t)\;dt+\int_E|(f_i-f_{j_0})\circ m|(t)^pW(t)\;dt\right)
\\\leq 2^{p-1}\left(\|g_E-|f_{j_0}\circ m|W^{1/p}\|_{L_p(E)}^p+\|f_i-f_{j_0}\|_G^p\right)
<2^{p-1}\epsilon.
\end{multline*}
Again as $E,F,m$ were arbitrary and independent of $N$, it follows that $\|f-f_i\|_G^p\leq 2^{p-1}\epsilon$ for all $i\geq N$. As $\epsilon>0$ was also arbitrary, $f_i\to f$ in $G_{W,p}(0,\infty)$.
\end{proof}
To close, we will show that when $1\leq p<\infty$ and $W\in\mathbb{W}$, the space $G_{W,p}(0,\infty)$ contains a copy of $\ell_p$. To do this, we will use a basic sequence of characteristic functions as an auxiliary structure. Let us gather some facts about it in the next lemma. In what follows, we denote $\boldsymbol{1}_i=\boldsymbol{1}_{(i-1,i]}$ for each $i\in\mathbb{N}$.
\begin{lemma}\label{characteristic-basis}
Fix $1\leq p<\infty$ and $W\in\mathbb{W}$, and set $K=\int_0^1W(t)\;dt$. Then the sequence $(\boldsymbol{1}_i/K)_{i=1}^\infty$ is a normalized, monotone, 1-unconditional and 1-subsymmetric basic sequence in $G_{W,p}(0,\infty)$ which 1-dominates the unit vector basis $(g_i)_{i=1}^\infty$ of the Garling sequence space $g(w,p)$, where $w=(w(i))_{i=1}^\infty$ is formed by letting $w(i)=K^{-1}\int_{i-1}^iW(t)\;dt$ for each $i\in\mathbb{N}$. Furthermore, they are isometrically equivalent for constant coefficients.
\end{lemma}
\begin{proof}
By replacing $W$ with $K^{-1}W$ if necessary, we may assume without loss of generality that $K=1$.
It's clear that $(\boldsymbol{1}_i)_{i=1}^\infty$ is normalized. It is also clear that if $M<N\in\mathbb{N}$ and $(a_i)_{i=1}^\infty$ is any sequence of scalars then we have
$$
\left\|\sum_{i=1}^Ma_i\boldsymbol{1}_i\right\|_G
\leq\left\|\sum_{i=1}^Na_i\boldsymbol{1}_i\right\|_G,
$$
which is precisely the criterion for forming a monotone basic sequence.
Next we show that it is 1-unconditional. Let $(a_i)_{i=1}^\infty,(b_i)_{i=1}^\infty\in c_{00}$ and satisfy $|a_i|\leq|b_i|$ for all $i\in\mathbb{N}$. Then we can find $E,F\in\Lambda$ and $m\in\mathbb{MO}(E,F)$ such that, setting $U_j=m^{-1}(F\cap(j-1,j])$ for each $j\in\mathbb{N}$ so that $U_1<U_2<\cdots$ with $E=\bigcup_{j=1}^\infty U_j$ and each $\boldsymbol{1}_j\circ m=\boldsymbol{1}_{U_j}|_E$,
\begin{multline*}
\left\|\sum_{i=1}^\infty a_i\boldsymbol{1}_i\right\|_G^p
\approx_\epsilon\int_E\left|\sum_{i=1}^\infty a_i\boldsymbol{1}_i(m(t))\right|^pW(t)\;dt
=\int_E\sum_{i=1}^\infty|a_i|^p\boldsymbol{1}_i(m(t))W(t)\;dt
\\=\sum_{j=1}^\infty\int_{U_j}\sum_{i=1}^\infty|a_i|^p\boldsymbol{1}_i(m(t))W(t)\;dt
=\sum_{j=1}^\infty\int_{U_j}\sum_{i=1}^\infty|a_i|^p\boldsymbol{1}_{U_i}(t)W(t)\;dt
\\=\sum_{j=1}^\infty\int_{U_j}|a_j|^p\boldsymbol{1}_{U_j}(t)W(t)\;dt
\leq\sum_{j=1}^\infty\int_{U_j}|b_j|^p\boldsymbol{1}_{U_j}(t)W(t)\;dt.
\end{multline*}
By an analogous argument we have
$$
\sum_{j=1}^\infty\int_{U_j}|b_j|^p\boldsymbol{1}_{U_j}(t)W(t)\;dt
=\int_E\left|\sum_{i=1}^\infty b_i\boldsymbol{1}_i(m(t))\right|^pW(t)\;dt
$$
whence
$$
\left\|\sum_{i=1}^\infty a_i\boldsymbol{1}_i\right\|_G^p
\leq\left\|\sum_{i=1}^\infty b_i\boldsymbol{1}_i\right\|_G^p
$$
so that $(\boldsymbol{1}_i)_{i=1}^\infty$ is 1-unconditional.
Let us show that it is 1-subsymmetric. Indeed, if $(a_i)_{i=1}^\infty\in c_{00}$ and $(\boldsymbol{1}_{i_k})_{k=1}^\infty$ is some subsequence, then we can find $E,F\in\Lambda$ and $m\in\mathbb{MO}(E,F)$ such that
$$
\left\|\sum_{k=1}^\infty a_k\boldsymbol{1}_{i_k}\right\|_G^p
\approx_\epsilon\int_E\left|\sum_{k=1}^\infty a_k\boldsymbol{1}_{i_k}(m(t))\right|^pW(t)\;dt.
$$
Set $E'=\bigcup_{k=1}^\infty m^{-1}(F\cap(i_k-1,i_k])$, and define an $\mathbb{MO}$-isomorphism $\ell:(0,\infty)\to\bigcup_{k=1}^\infty(i_k-1,i_k]$ by gluing together the shift maps $(k-1,k]\mapsto(i_k-1,i_k]$. Then $\ell^{-1}\circ m$ is an $\mathbb{MO}$-isomorphism between $E'$ and its image, and for each $k\in\mathbb{N}$ and $t\in E'$ we have $\boldsymbol{1}_{i_k}(m(t))=\boldsymbol{1}_k(\ell^{-1}(m(t)))$. Furthermore, $\boldsymbol{1}_{i_k}(m(t))=0$ for each $k\in\mathbb{N}$ and $t\in E\setminus E'$. Hence,
\begin{multline*}
\left\|\sum_{k=1}^\infty a_k\boldsymbol{1}_{i_k}\right\|_G^p
\approx_\epsilon\int_E\left|\sum_{k=1}^\infty a_k\boldsymbol{1}_{i_k}(m(t))\right|^pW(t)\;dt
=\int_{E'}\left|\sum_{k=1}^\infty a_k\boldsymbol{1}_{i_k}(m(t))\right|^pW(t)\;dt
\\=\int_{E'}\left|\sum_{k=1}^\infty a_k\boldsymbol{1}_k(\ell^{-1}(m(t)))\right|^pW(t)\;dt
\leq\left\|\sum_{k=1}^\infty a_k\boldsymbol{1}_k\right\|_G^p.
\end{multline*}
To show the reverse inequality, we instead choose $E,F,m$ so that
$$
\left\|\sum_{k=1}^\infty a_k\boldsymbol{1}_k\right\|_G^p
\approx_\epsilon\int_E\left|\sum_{k=1}^\infty a_k\boldsymbol{1}_k(m(t))\right|^pW(t)\;dt.
$$
Define $\ell$ as before so that $\ell\circ m$ is an $\mathbb{MO}$-isomorphism between $E$ and its image, and $\boldsymbol{1}_k(m(t))=\boldsymbol{1}_{i_k}(\ell(m(t)))$ for each $k\in\mathbb{N}$ and $t\in E$. Then
\begin{multline*}
\int_E\left|\sum_{k=1}^\infty a_k\boldsymbol{1}_k(m(t))\right|^pW(t)\;dt
\\=\int_E\left|\sum_{k=1}^\infty a_k\boldsymbol{1}_{i_k}(\ell(m(t)))\right|^pW(t)\;dt
\leq\left\|\sum_{k=1}^\infty a_k\boldsymbol{1}_{i_k}\right\|_G^p.
\end{multline*}
It follows that $(\boldsymbol{1}_i)_{i=1}^\infty$ is 1-subsymmetric.
To show that it 1-dominates the unit vector basis of $g(w,p)$, we again let $(a_i)_{i=1}^\infty\in c_{00}$. Select any subsequence $(a_{i_k})_{k=1}^\infty$. As before, there is an $\mathbb{MO}$-isomorphism $\ell:(0,\infty)\to\bigcup_{k=1}^\infty(i_k-1,i_k]$ defined by gluing together the shift maps $(k-1,k]\mapsto(i_k-1,i_k]$. Note that $\boldsymbol{1}_{i_k}\circ\ell=\boldsymbol{1}_k$ for each $k\in\mathbb{N}$. We now have
\begin{multline*}
\sum_{k=1}^\infty|a_{i_k}|^pw(k)
=\sum_{k=1}^\infty|a_{i_k}|^p\int_{k-1}^kW(t)\;dt
=\sum_{k=1}^\infty|a_{i_k}|^p\int_0^\infty\boldsymbol{1}_k(t)W(t)\;dt
\\=\int_0^\infty\sum_{k=1}^\infty|a_{i_k}|^p\boldsymbol{1}_k(t)W(t)\;dt
=\int_0^\infty\sum_{k=1}^\infty|a_{i_k}|^p\boldsymbol{1}_{i_k}(\ell(t))W(t)\;dt
\\=\int_0^\infty\left|\sum_{i=1}^\infty a_i\boldsymbol{1}_i(\ell(t))\right|^pW(t)\;dt
\leq\left\|\sum_{i=1}^\infty a_i\boldsymbol{1}_i\right\|_G^p.
\end{multline*}
By taking the supremum over all subsequences we obtain
$$
\|(a_i)_{i=1}^\infty\|_g
\leq\left\|\sum_{i=1}^\infty a_i\boldsymbol{1}_i\right\|_G.$$
Finally, we consider the last part of the lemma, about being isometrically equivalent for constant coefficients to $(g_i)_{i=1}^\infty$. Indeed, as $(\boldsymbol{1}_i)_{i=1}^\infty$ already 1-dominates it as shown above, we need only show the reverse inequality, i.e. that $(g_i)_{i=1}^\infty$ 1-dominates $(\boldsymbol{1}_i)_{i=1}^\infty$ for constant coefficients. To that end, fix $N\in\mathbb{N}$ and let $E,F\in\Lambda$ and $m\in\mathbb{MO}(E,F)$ be such that
$$
\left\|\sum_{i=1}^N\boldsymbol{1}_i\right\|_G^p
\approx_\epsilon\int_E\sum_{i=1}^N\boldsymbol{1}_i(m(t))W(t)\;dt.
$$
For each $i=1,\cdots,N$, define $A_i:=m^{-1}(F\cap(i-1,i])$, and then set $A:=\bigcup_{i=1}^NA_i$. It is clear that $\lambda(A)<\infty$, so by Proposition \ref{mo-isomorphism} we can find measure-zero subsets $D_0$ of $[0,\lambda(A)]$ and $A_0$ of $A$, and an $\mathbb{MO}$-isomorphism $n$ from $D:=[0,\lambda(A)]\setminus D_0$ onto $A\setminus A_0$. We claim that $t\leq n(t)$ for all $t\in D$. Indeed, if we set $b:=\inf n(D)$ then since $b\geq 0$ and $n(D_{\leq t})\subseteq[b,n(t)]$ we have
$$t=\lambda[0,t]=\lambda(D_{\leq t})=\lambda(n(D_{\leq t}))\leq\lambda[b,n(t)]=n(t)-b\leq n(t).$$
As $W$ is nonincreasing it follows that $W(n(t))\leq W(t)$ for all $t\in D$. Note also that $\lambda(A)\leq N$ so that $D\subseteq[0,N]$. Furthermore, it is clear that $\boldsymbol{1}_i(m(t))=0$ for all $i=1,\cdots,N$ and all $t\in E\setminus A$. Together with Corollary \ref{mpt-integral} we now obtain
\begin{multline*}
\left\|\sum_{i=1}^N\boldsymbol{1}_i\right\|_G^p
\approx_\epsilon\int_E\sum_{i=1}^N\boldsymbol{1}_i(m(t))W(t)\;dt
=\int_A\sum_{i=1}^N\boldsymbol{1}_i(m(t))W(t)\;dt
\\=\sum_{j=1}^N\int_{A_j}\sum_{i=1}^N\boldsymbol{1}_i(m(t))W(t)\;dt
=\sum_{j=1}^N\int_{A_j}\boldsymbol{1}_j(m(t))W(t)\;dt
=\sum_{j=1}^N\int_{A_j}W(t)\;dt
\\=\int_AW(t)\;dt
=\int_DW(n(t))\;dt
\leq\int_DW(t)\;dt
\leq\int_0^NW(t)\;dt
=\sum_{k=1}^Nw(k)
=\left\|\sum_{i=1}^Ng_i\right\|_g^p.
\end{multline*}
As $(g_i)_{i=1}^\infty$ and $(\boldsymbol{1}_i)_{i=1}^\infty$ are both 1-subsymmetric, we are done.
\end{proof}
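For a concrete feel for the weight sequence appearing in Lemma \ref{characteristic-basis}, take the weight $W(t)=(t+1)^{-1/2}$ used in the previous section. The induced weights $w(i)=K^{-1}\int_{i-1}^iW(t)\,dt$ have a closed form; the snippet below (an illustrative aside, not used in any proof) checks that they are normalized and nonincreasing, and that $K\sum_{k=1}^Nw(k)=\int_0^NW(t)\,dt$, the identity used at the end of the proof above:

```python
import math

def w_raw(i):
    # integral_{i-1}^{i} (t+1)^{-1/2} dt, via the antiderivative 2*sqrt(t+1)
    return 2 * (math.sqrt(i + 1) - math.sqrt(i))

K = w_raw(1)                                   # K = integral_0^1 W(t) dt = 2(sqrt(2)-1)
w = [w_raw(i) / K for i in range(1, 21)]

assert abs(w[0] - 1.0) < 1e-12                           # w(1) = 1 after normalization
assert all(w[i] > w[i + 1] for i in range(len(w) - 1))   # strictly decreasing here

N = 10
partial = sum(w_raw(k) for k in range(1, N + 1))         # telescoping sum
assert abs(partial - 2 * (math.sqrt(N + 1) - 1)) < 1e-9  # = integral_0^N W(t) dt
```

The telescoping identity $\sum_{k=1}^N 2(\sqrt{k+1}-\sqrt{k})=2(\sqrt{N+1}-1)$ is exactly the equality $\sum_{k=1}^Nw(k)=\int_0^NW(t)\,dt$ in the normalized case $K=1$.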
\begin{theorem}
Fix $1\leq p<\infty$ and let $W\in\mathbb{W}$. Then for any $\epsilon>0$ the basic sequence $(\boldsymbol{1}_i)_{i=1}^\infty$ admits a normalized constant coefficient block basic sequence which is $(1+\epsilon)$-equivalent to $\ell_p$, and which is 2-complemented in $[\boldsymbol{1}_i]_{i=1}^\infty$.
\end{theorem}
\begin{proof}
Let $g(w,p)$, $(g_i)_{i=1}^\infty$, and $K$ be as in Lemma \ref{characteristic-basis}, so that $(\boldsymbol{1}_i/K)_{i=1}^\infty$ is isometrically equivalent to $(g_i)_{i=1}^\infty$ for constant coefficients. It was shown in \cite[\S3]{AAW18} that there exists a constant coefficient block basic sequence of $(g_n)_{n=1}^\infty$ which is $(1+\epsilon)$-equivalent to $\ell_p$, for any $\epsilon>0$. In particular, we can select
$$y'_i=\sum_{n=k_i}^{k_{i+1}-1}g_n\;\;\;\text{ and }\;\;\;y_i=\frac{y'_i}{\|y'_i\|_g}\;\;\;\text{ for each }i\in\mathbb{N},$$
where $1=k_1<k_2<k_3<\cdots\in\mathbb{N}$, so that $(y_i)_{i=1}^\infty$ is $(1+\epsilon)$-equivalent to $\ell_p$.
Next, write
$$x'_i=\sum_{n={k_i}}^{k_{i+1}-1}\boldsymbol{1}_n/K\;\;\;\text{ and }\;\;\;x_i:=\frac{x'_i}{\|x'_i\|_G}\;\;\;\text{ for each }i\in\mathbb{N},$$
with the same indices $1=k_1<k_2<k_3<\cdots\in\mathbb{N}$ as above.
We claim that $(x_i)_{i=1}^\infty$ is 1-dominated by the unit vector basis of $\ell_p$. Indeed, if $(a_i)_{i=1}^\infty\in c_{00}$ then we can find $E,F\in\Lambda$ and $m\in\mathbb{MO}(E,F)$ such that
\begin{multline*}
\left\|\sum_{i=1}^\infty a_ix_i\right\|_G^p
\approx_\epsilon \int_E\left|\sum_{i=1}^\infty a_ix_i(m(t))\right|^pW(t)\;dt
=\int_E\sum_{i=1}^\infty|a_i|^px_i(m(t))^pW(t)\;dt
\\=\sum_{i=1}^\infty|a_i|^p\int_Ex_i(m(t))^pW(t)\;dt
\leq\sum_{i=1}^\infty|a_i|^p\|x_i\|_G^p
=\sum_{i=1}^\infty|a_i|^p
\end{multline*}
so that $(x_i)_{i=1}^\infty\lesssim_1\ell_p$ as claimed.
By Lemma \ref{characteristic-basis}, $(\boldsymbol{1}_i/K)_{i=1}^\infty$ is isometrically equivalent to $(g_i)_{i=1}^\infty$ for constant coefficients, and so $\|y'_i\|_g=\|x'_i\|_G$ for each $i\in\mathbb{N}$. Again from Lemma \ref{characteristic-basis}, we know that $(g_i)_{i=1}^\infty$ is 1-dominated by $(\boldsymbol{1}_i/K)_{i=1}^\infty$. It follows that
$$\ell_p\approx_{1+\epsilon}(y_i)_{i=1}^\infty\lesssim_1(x_i)_{i=1}^\infty\lesssim_1\ell_p.$$
That $(x_i)_{i=1}^\infty$ spans a 2-complemented subspace of $[\boldsymbol{1}_i]_{i=1}^\infty$ follows from the fact that constant-coefficient block basic sequences of a 1-subsymmetric basis are always 2-complemented (see, for instance, \cite[Proposition 3.a.4]{LT77}).
\end{proof}
\begin{remark}
Although $\ell_p$ is complemented in $[\boldsymbol{1}_i]_{i=1}^\infty$, we do not yet know if it is complemented in $G_{W,p}(0,\infty)$.
\end{remark}
\begin{corollary}
Fix $1\leq p<\infty$ and $W\in\mathbb{W}$. Then for every $\epsilon>0$, the space $G_{W,p}(0,\infty)$ contains a subspace which is $(1+\epsilon)$-isomorphic to $\ell_p$. Hence, in particular, the space $G_{W,1}(0,\infty)$ is nonreflexive.
\end{corollary}
\section{Appendix}
\begin{proposition}\label{bijection-lebesgue}
Let $E$ and $F$ be Lebesgue-measurable subsets of $\mathbb{R}$, and let $m:E\to F$ be a bijection which is both order-preserving and measure-preserving. Then $m^{-1}$ is also order-preserving and measure-preserving, i.e.\ $m\in\mathbb{MO}(E,F)$.
\end{proposition}
\begin{proof}
Clearly, it is enough to show that $m^{-1}$ is measurable. To that end, let us fix a measurable set $A\subseteq E$; we claim that $m(A)$ is also measurable, which will complete the proof.
Denote by $\mathcal{B}=\sigma(\tau)$ the Borel $\sigma$-algebra on $\mathbb{R}$, where $\tau$ denotes the usual metric topology on $\mathbb{R}$. Let $\tau_E$ be the subspace topology on $E$, i.e. the topology defined by
$$\tau_E=E\cap\tau:=\left\{E\cap U:U\in\tau\right\}.$$
Similarly, we denote by $\tau_F$ the subspace topology for $F$. It is well-known (and easy to see) that the set
$$E\cap\mathcal{B}:=\{E\cap B:B\in\mathcal{B}\}$$
is a $\sigma$-algebra on $E$, called the {\it trace} $\sigma$-algebra. Since $E\cap\tau\subset E\cap\mathcal{B}$, we obtain
$$\sigma(\tau_E)=\sigma(E\cap\tau)\subseteq\sigma(E\cap\mathcal{B})=E\cap\mathcal{B}.$$
For the reverse inclusion, define
$$\Sigma:=\{Y\subseteq\mathbb{R}:E\cap Y\in\sigma(\tau_E)\}.$$
It is routine to verify that $\Sigma$ is a $\sigma$-algebra on $\mathbb{R}$. Also, it is clear that $\tau\subseteq\Sigma$, since for $U\in\tau$ we have $E\cap U\in\tau_E\subseteq\sigma(\tau_E)$. It follows that $\sigma(\tau)\subseteq\Sigma$, whence also by definition of $\Sigma$ we obtain $E\cap\sigma(\tau)\subseteq\sigma(\tau_E)$. This gives us the reverse inclusion as desired. We now have the identity
$\sigma(\tau_E)=E\cap\mathcal{B},$
and an identical argument shows that
$\sigma(\tau_F)=F\cap\mathcal{B}.$
It's a well-known fact in real analysis that we can find $C\in\mathcal{B}$ such that $A\subseteq C$ and $\lambda(C\setminus A)=0$. Now set $C'=E\cap C\in\sigma(\tau_E)$. Since $\lambda(C\setminus A)=0$ there is a measure-zero set $D\in\mathcal{B}$ with $C'\setminus A\subseteq C\setminus A\subseteq D$. Set $D':=E\cap D\in\sigma(\tau_E)$ so that $C'\setminus A\subseteq D'$ and $\lambda(D')=0$.
By a standard argument found, for instance, in the proof of \cite[Theorem 2.1.2]{Bo07}, it follows that $m(B)$ is Lebesgue-measurable whenever $B\in\sigma(\tau_E)$. Thus we have $\lambda[m(D')]=0$. Observe $m(C')\setminus m(A)=m(C'\setminus A)\subseteq m(D')$ so that (since subsets of measure-zero sets are themselves measure-zero) $\lambda[m(C')\setminus m(A)]=0$ as well. Note also that since $C'\in\sigma(\tau_E)$ we have $m(C')$ measurable. Since $A\subseteq C'$, we obtain $m(A)=m(C')\setminus[m(C')\setminus m(A)]$,
which shows that $m(A)$ is measurable.
\end{proof}
\begin{proposition}\label{bijection-exists}
Let $E$ and $F$ be Lebesgue-measurable subsets of $[-\infty,\infty]$, and let $m:F\to E$ be a surjective measure-preserving transformation which is also order-preserving. Then there is a measure-zero subset $F_0$ of $F$ such that $m$ is a bijection between $F\setminus F_0$ and $E$.
\end{proposition}
\begin{proof}
For each $x\in E$, let $I_x$ be an interval containing $m^{-1}\{x\}$ which is minimal under the relation $\subseteq$. Since $m$ is order-preserving, the $I_x$'s are all disjoint, which means only countably many of them have positive measure. In particular, $m^{-1}\{x\}$ is a singleton for all but countably many $x\in E$. Set
$E_0:=\{x\in E:m^{-1}\{x\}\text{ is not a singleton}\}.$
For each $x\in E_0$, select some $f_x\in m^{-1}\{x\}$. Now set
$F_0:=\bigcup_{x\in E_0}\left(m^{-1}\{x\}\setminus\{f_x\}\right).$
Clearly, $m$ is a bijection between $F\setminus F_0$ and $E$. Observe that each $m^{-1}\{x\}\setminus\{f_x\}$ has measure zero and that $E_0$ is countable. It follows that $F_0$ has measure zero.
\end{proof}
Let $E$ be a totally-ordered set. An {\bf initial segment} of $E$ is any subset $E'$ of $E$ such that $E'<E\setminus E'$.
\begin{proposition}\label{initial-segment-measure}
Let $E$ be a Lebesgue-measurable subset of $[-\infty,\infty]$ with $\lambda(E)<\infty$. Then for each $t\in[0,\lambda(E)]$ there is an initial segment $E_t$ of $E$ such that $\lambda(E_t)=t$.
\end{proposition}
\begin{proof}
Note that if $E_t$ is an initial segment of $E\setminus\{-\infty,\infty\}$ then $E_t\cup\{-\infty\}$ is an initial segment of $E$ with the same measure as $E_t$. Hence, without loss of generality, we may assume $E\subset\mathbb{R}$. We may also assume that $E$ is bounded, since if the result holds in that case then it can be extended to the unbounded case by considering the union of the sets
$$E_n=E\cap\left([-n,-n+1)\cup(n-1,n]\right).$$
Say $E\subseteq[a,b]$ for $-\infty<a<b<\infty$. Define $f:[a,b]\to[0,\lambda(E)]$ by the rule
$$f(x)=\lambda([a,x]\cap E).$$
Observe that if $y<x\in[a,b]$ then
$$|f(x)-f(y)|=\lambda((y,x]\cap E)\leq|x-y|$$
so that $f$ is Lipschitz, in particular, continuous. As $f(a)=0$ and $f(b)=\lambda(E)$, we may now apply the Intermediate Value Theorem.
\end{proof}
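The proof above is effective: $f(x)=\lambda([a,x]\cap E)$ is $1$-Lipschitz and nondecreasing, so the cut point guaranteed by the Intermediate Value Theorem can be located by bisection. Here is a hedged sketch for $E$ a finite union of disjoint intervals (the interval representation and the function names are my own choices, not from the text):

```python
def measure_upto(x, intervals):
    # lambda((-inf, x] ∩ E) for E a finite union of disjoint open intervals (l, r)
    return sum(max(0.0, min(x, r) - l) for l, r in intervals)

def initial_segment_cut(t, intervals, tol=1e-12):
    # bisection for the Intermediate Value Theorem step in the proof:
    # find x with lambda((-inf, x] ∩ E) = t
    lo = min(l for l, _ in intervals)
    hi = max(r for _, r in intervals)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if measure_upto(mid, intervals) < t:
            lo = mid
        else:
            hi = mid
    return hi

E = [(0.0, 1.0), (2.0, 2.5), (4.0, 6.0)]   # lambda(E) = 3.5
x = initial_segment_cut(2.25, E)
# the initial segment E ∩ (-inf, x] then has measure exactly 2.25
assert abs(measure_upto(x, E) - 2.25) < 1e-9
```

Since $f$ is $1$-Lipschitz, a bisection gap below `tol` guarantees the measure of the resulting initial segment is within `tol` of the target $t$.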
\begin{lemma}\label{mo-transformation}
Let $E$ be a measurable subset of $\mathbb{R}$ with $\lambda(E)<\infty$. For each $t\in[0,\lambda(E)]$, let $E_t$ be an initial segment of $E$ with $\lambda(E_t)=t$ (whose existence is guaranteed by Proposition \ref{initial-segment-measure}). Define the map $m:E\to[0,\lambda(E)]$ by the rule
$$ m (x)=\inf\{t\in[0,\lambda(E)]:x\in E_t\}.$$
Then $m$ is both measure-preserving and order-preserving. Furthermore, $m$ can be extended to a map $m:\mathbb{R}\to[0,\lambda(E)]$ defined by
$$m(x)=\lambda((-\infty,x]\cap E).$$
\end{lemma}
\begin{proof}
It is obvious that $ m $ is order-preserving, and it is explicitly proved in \cite[Proposition 2.7.4]{BS88} that it is also measure-preserving.
For $x\in E$ we set $E_{\leq x}:=(-\infty,x]\cap E$, and observe that if $t>\lambda(E_{\leq x})$ then $x\in E_t$ and if $t<\lambda(E_{\leq x})$ then $x\notin E_t$. It follows that $m(x)=\lambda(E_{\leq x})$ for all $x\in E$. Thus, $m$ extends continuously to all of $\mathbb{R}$ via the rule
$m(x)=\lambda((-\infty,x]\cap E).$
\end{proof}
\begin{proof}[Proof of Proposition \ref{mo-isomorphism}]
Since $\{-\infty,\infty\}$ has measure zero, we may assume without loss of generality that $E\subset\mathbb{R}$.
Let $m:\mathbb{R}\to[0,\lambda(E)]$ be as in Lemma \ref{mo-transformation}. It is clear (as in, for instance, the proof of Proposition \ref{initial-segment-measure}) that $m$ is Lipschitz, and hence continuous in the usual sense as well.
Since $\lambda$ is inner-regular, we can find a sequence $(K_n)_{n=1}^\infty$ of compact sets and a measure-zero set $L$ such that
$E=L\cup\bigcup_{n=1}^\infty K_n.$
It is known that the image of a bounded measure-zero set under a Lipschitz function is again measure-zero. Furthermore, the continuous image of a compact set is again compact, and in particular measurable. We now have
$m(E)=m\left(L\cup\bigcup_{n=1}^\infty K_n\right)=m(L)\cup\bigcup_{n=1}^\infty m(K_n).$
It follows that $m(E)$ is measurable.
We can now apply Proposition \ref{bijection-exists} to find a subset $E_0$ of measure zero such that $m$ is a bijection between $E\setminus E_0$ and $m(E)$. Set $D_0=[0,\lambda(E)]\setminus m(E)$. We have $\lambda(D_0)=0$, and by Proposition \ref{bijection-lebesgue}, $m$ is an $\mathbb{MO}$-isomorphism between $E\setminus E_0$ and $[0,\lambda(E)]\setminus D_0$.
\end{proof}
\begin{proposition}\label{ryff}
If $f,g:\mathbb{N}\to[0,\infty)$ are equimeasurable with
$$\lim_{n\to\infty}f(n)=\lim_{n\to\infty}g(n)=0$$
then either they are both identically zero or else there is a measure-isomorphism $m:\text{supp}(f)\to\text{supp}(g)$ such that $g\circ m=f$ on $\text{supp}(f)$.
\end{proposition}
\begin{proof}
Obviously, if one of $f$ and $g$ is identically zero then, since they are equimeasurable, so is the other. So let us assume that neither is identically zero. Let $f^*(n)=\inf\{\lambda:\text{dist}_f(\lambda)\leq n\}$ denote the ``decreasing rearrangement'' of $f$. Since $\lim_{n\to\infty}f(n)=0$ we have also $\lim_{n\to\infty}f^*(n)=0$. Now \cite[Corollary 7.6]{BS88} gives us a measure-preserving transformation $m_f:\text{supp}(f)\to\text{supp}(f^*)$ such that $f=f^*\circ m_f$ on $\text{supp}(f)$, and analogously we get $m_g:\text{supp}(g)\to\text{supp}(g^*)$ with $g=g^*\circ m_g$. Since $f$ and $g$ are equimeasurable, $f^*=g^*$, and hence $g=f^*\circ m_g$. This means $g\circ m_g^{-1}=f^*$ and hence, setting $m=m_g^{-1}\circ m_f$, we obtain $g\circ m=g\circ m_g^{-1}\circ m_f=f^*\circ m_f=f$.
\end{proof}
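The discrete decreasing rearrangement used in this proof is easy to picture: for a finitely supported sequence it is just the values sorted in nonincreasing order, and equimeasurable sequences sort to the same thing. A small self-contained sketch (my own indexing convention, with $f^*(n)=\inf\{s:\text{dist}_f(s)\leq n\}$ for $n=0,1,2,\dots$):

```python
def dist(seq, s):
    # dist_f(s) = #{ n : f(n) > s } for a finitely supported sequence
    return sum(1 for v in seq if v > s)

def decreasing_rearrangement(seq):
    return sorted(seq, reverse=True)

f = [0.2, 3.0, 0.0, 1.5, 3.0, 0.7]
g = [3.0, 0.7, 1.5, 0.2, 3.0, 0.0]    # a permutation of f, hence equimeasurable

assert all(dist(f, s) == dist(g, s) for s in (0.0, 0.1, 0.5, 1.0, 2.0, 3.0))
fstar = decreasing_rearrangement(f)
assert fstar == decreasing_rearrangement(g)

# f*(n) = inf{ s : dist_f(s) <= n }; with finitely many values the infimum
# is attained on the value set itself
for n in range(len(f)):
    inf_s = min(s for s in set(f) if dist(f, s) <= n)
    assert fstar[n] == inf_s
```

This also illustrates why $\lim_nf(n)=0$ matters in the proposition: it guarantees that the distribution function is finite at every positive level, so the rearrangement is well defined.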
\begin{proof}[Proof of Proposition \ref{1-symmetric}]
($\Rightarrow$): Let $(e_i)_{i=1}^\infty$ be 1-symmetric, and suppose $f$ and $g$ are equimeasurable sequences in $X$. Then so are $|f|$ and $|g|$. If $f$ and $g$ are identically zero then $\|f\|_X=0=\|g\|_X$ and we are done. Otherwise by Proposition \ref{ryff} there is a bijection $m:\text{supp}(f)\to\text{supp}(g)$ with $f=g\circ m$ on $\text{supp}(f)$. Now we have, by 1-symmetry and 1-unconditionality
\begin{multline*}
\|f\|_X
=\left\|\sum_{i=1}^\infty f(i)e_i\right\|_X
=\left\|\sum_{i\in\text{supp}(f)}|f(i)|e_i\right\|_X
=\left\|\sum_{i\in\text{supp}(f)}|g(m(i))|e_i\right\|_X
\\=\left\|\sum_{i\in\text{supp}(g)}|g(i)|e_i\right\|_X
=\left\|\sum_{i=1}^\infty g(i)e_i\right\|_X
=\|g\|_X.
\end{multline*}
($\Leftarrow$): Suppose that $X$ is rearrangement-invariant with respect to $(e_i)_{i=1}^\infty$, and select a permutation $\pi$ of $\mathbb{N}$. Then its inverse $\pi^{-1}$ exists and is a measure-preserving transformation. Select any $f\in X$, and note that $|f(i)|<\infty$ for all $i\in\mathbb{N}$. By Proposition \ref{mpt-equimeasurable}, $|f|$ and $|f|\circ\pi^{-1}$ are equimeasurable. Now we have, by 1-unconditionality and rearrangement-invariance
\begin{multline*}
\left\|\sum_{i=1}^\infty f(i)e_{\pi(i)}\right\|_X
=\left\|\sum_{i=1}^\infty|f(i)|e_{\pi(i)}\right\|_X
=\left\|\sum_{i=1}^\infty(|f|\circ\pi^{-1})(i)e_i\right\|_X
\\=\||f|\circ\pi^{-1}\|_X
=\||f|\|_X
=\left\|\sum_{i=1}^\infty |f(i)|e_i\right\|_X
=\left\|\sum_{i=1}^\infty f(i)e_i\right\|_X.
\end{multline*}
\noindent That $(e_n)_{n=1}^\infty$ is symmetric if and only if it is essentially rearrangement-invariant is clear from considering the equivalent norm $|||x|||=\sup_{\sigma\in\Pi_\mathbb{N}}\|\sum_{n=1}^\infty e_n^*(x)e_{\sigma(n)}\|$.
\end{proof}
\noindent {\bf Acknowledgments.} Thanks to Lukas Geyer and Ramiro Affonso de Tadeu Guerreiro for assisting in the proofs of Propositions \ref{bijection-lebesgue} and \ref{mo-isomorphism}, respectively.
\section{Introduction}
The design and discovery of novel drugs for protein targets is powered by an understanding of the underlying principles of protein-compound interaction. Biochemical methods that measure affinity and biophysical methods that describe the interaction in atomistic level detail have provided valuable information toward a mechanistic explanation for bimolecular recognition \cite{schneider2018automating}. However, more often than not, compounds with drug potential are discovered serendipitously or by phenotypic drug discovery \cite{moffat2017opportunities} since this highly specific interaction is still difficult to predict \cite{duarte2019integration}. Protein structure based computational strategies such as docking \cite{sledz2018protein}, ultra-large library docking for discovering new chemotypes \cite{lyu2019ultra}, and molecular dynamics simulations \cite{sledz2018protein} or ligand based strategies such as quantitative structure-activity relationship (QSAR) \cite{schneider2016novo, bosc2019large}, and molecular similarity \cite{eckert2007molecular} have been powerful at narrowing down the list of compounds to be tested experimentally. With the increase in available data, machine learning and deep learning architectures are also starting to play a significant role in cheminformatics and drug discovery \cite{lo2018machine}. These approaches often require extensive computational resources or they are limited by the availability of 3D information. On the other hand, text based representations of biochemical entities are more readily available as evidenced by the 19,588 biomolecular complexes (3D structures) in PDB-Bind \cite{wang2005pdbbind} (accessed on Nov 13, 2019) compared with 561,356 (manually annotated and reviewed) protein sequences in Uniprot \cite{apweiler2004uniprot} (accessed on Nov 13, 2019) or 97 million compounds in Pubchem \cite{bolton2008pubchem} (accessed on Nov 13, 2019). 
Advances in natural language processing (NLP) methodologies make the processing of text-based representations of biomolecules an area of intense research interest.
The discipline of NLP comprises a variety of methods that explore a large amount of textual data in order to bring unstructured, latent (or hidden) knowledge to the fore \cite{manning1999foundations}. Advances in this field are beneficial for tasks that use language (textual data) to build insight. The languages in the domains of bioinformatics and cheminformatics can be investigated under three categories: (i) natural language (mostly English) that is used in documents such as scientific publications, patents, and web pages, (ii) domain-specific language, codified by a systematic set of rules extracted from empirical data and describing the human understanding of that domain (e.g. proteins, chemicals, etc), and (iii) structured forms such as tables, ontologies, knowledge graphs or databases \cite{oliveira2019leveraging}. Processing and extracting information from textual data written in natural languages is one of the major application areas of NLP methodologies in the biomedical domain (also known as \textit{BioNLP}). Information extracted with BioNLP methods is most often shared in structured databases or knowledge graphs \cite{ernst2015knowlife}. We refer the reader to the comprehensive review on \textit{BioNLP} by \citet{krallinger2017information}. Here, we will be focusing on the application of NLP to domain-specific, unstructured biochemical textual representations toward exploration of chemical space in drug discovery efforts.
We can view the textual representation of biomedical/biochemical entities as a domain-specific language. For instance, a genome sequence is an extensive script of four characters (A, T, G, C) constituting a genomic language. In proteins, the composition of 20 different natural amino acids in varying lengths builds the protein sequences. Post-translational modifications expand this 20 letter alphabet and confer different properties to proteins \cite{karve2011small}. For chemicals there are several text based alternatives such as chemical formula, IUPAC International Chemical Identifier (InChI) \cite{heller2013inchi} and Simplified Molecular Input Line Entry Specification (SMILES) \cite{weininger1988smiles}.
Today, the era of ``big data" boosts the ``learning" aspect of computational approaches substantially, with the ever-growing amounts of information provided by publicly available databases such as PubChem \cite{bolton2008pubchem}, ChEMBL \cite{gaulton2011chembl}, UniProt \cite{apweiler2004uniprot}. These databases are rich in biochemical domain knowledge that is in textual form, thus building an efficient environment in which NLP-based techniques can thrive. Furthermore, advances in computational power allow the design of more complex methodologies, which in turn drive the fields of machine learning (ML) and NLP. However, biological and chemical interpretability and explainability remain among the major challenges of AI-based approaches. Data management in terms of access, interoperability and reusability are also critical for the development of NLP models that can be shared across disciplines.
With this review, we aim to provide an outline of how the field of NLP has influenced the studies in bioinformatics and cheminformatics and the impact it has had over the last decade. Not only are NLP methodologies facilitating processing and exploitation of biochemical text, they also promise an ``understanding" of biochemical language to elucidate the underlying principles of bimolecular recognition. NLP technologies are enhancing the biological and chemical knowledge with the final goal of accelerating drug discovery for improving human health. We highlight the significance of an interdisciplinary approach that integrates computer science and natural sciences.
\subsection{NLP Basics}
\citet{chowdhury2003natural} describes NLP on three levels: (i) the word level in which the smallest meaningful unit is extracted to define the morphological structure, (ii) the sentence level where grammar and syntactic validity are determined, and (iii) the domain or context level in which the sentences have global meaning. Similarly, our review is organized in three parts in which bio-chemical data is investigated at: (i) word level, (ii) sentence (text) level, and (iii) understanding text and generating meaningful sequences. Table \ref{tab:nlpconcepts} summarizes important NLP concepts related to the processing of biochemical data. We refer to these concepts and explain their applications in the following sections.
All NLP technology relates to specific AI architectures. In Table \ref{tab:methodologies} we summarize the main ML and deep learning (DL) architectures that will be mentioned throughout the review.
\section{Biochemical Language Processing} \label{section:NLP4BioChem}
The language-like properties of text-based representations of chemicals were recognized more than 50 years ago by Garfield \cite{garfield1961chemico}. He proposed a ``chemico-linguistic" approach to representing chemical nomenclature with the aim of instructing the computer to draw chemical diagrams. Protein sequence has been an important source of information about protein structure and function since Anfinsen's experiment \cite{anfinsen1973principles}. Alignment algorithms, such as Needleman-Wunsh \cite{needleman1970general} and Smith-Waterman \cite{smith1981identification}, rely on sequence information to identify functionally or structurally critical elements of proteins (or genes).
Understanding these sequences is critical for making predictions about the structure and function of compounds or proteins, with the final goal of accelerating drug discovery. Much as a linguist uses the tools of language to bring out hidden knowledge, biochemical sequences can be processed to propose novel solutions, such as predicting interactions between chemicals and proteins or generating new compounds. In this section, we review the applications of some of these NLP concepts to biochemical data in order to solve bio/cheminformatics problems.
\subsection{Textual Chemical Data} \label{section:availabledata}
Information about chemicals can be found in repositories such as PubChem \cite{bolton2008pubchem}, which includes information on around 100 million compounds, or Drugbank \cite{wishart2006drugbank}, which includes information on around 10,000 drugs. The main textual sources used in drug discovery are textual representations of chemicals and proteins. Table \ref{tab:resources} lists some sources that store different types of biochemical information.
Chemical structures can be represented in different forms that can be one-dimensional (1D), 2D, and 3D. Table \ref{tab:ampicillin} depicts different identifiers/representations of the drug \textit{ampicillin}. While the 2D and 3D representations are also used in ML based approaches \cite{lo2018machine}, here we focus on the 1D form, which is the representation commonly used in NLP.
\paragraph{IUPAC name} The International Union of Pure and Applied Chemistry (IUPAC) scheme (i.e. nomenclature) is used to name compounds following pre-defined rules such that the names of the compounds are unique and consistent with each other (\url{iupac.org/}).
\paragraph{Chemical Formula}
The chemical formula is one of the simplest and most widely-known ways of describing chemicals using letters (i.e. element symbols), numbers, parentheses, and (-/+) signs. This representation gives information about which elements and how many of them are present in the compound.
\paragraph{SMILES}
The Simplified Molecular Input Line Entry Specification (SMILES) is a text-based form of describing molecular structures and reactions \cite{weininger1988smiles}. SMILES strings can be obtained by traversing the 2D graph representation of the compound, and therefore SMILES provides more detailed structural information than the chemical formula. Moreover, due to its textual form, SMILES takes 50\% to 70\% less space than other representation methods such as an identical connection table (\url{daylight.com/dayhtml/doc/theory/theory.smiles.html}).
SMILES notation is similar to a language with its own set of rules. Just like it is possible to express the same concept with different words in natural languages, the SMILES notation allows molecules to be represented with more than one unique SMILES. Although this may sound like a significant ambiguity, the possibility of using different SMILES to represent the same molecule was successfully adopted as a data augmentation strategy by various groups (\citet{bjerrum2017smiles, kimber2018synergy, schwaller2018molecular}).
Canonical SMILES can provide a unique SMILES representation. However, different databases such as PubChem and ChEMBL might use different canonicalization algorithms to generate different unique SMILES. OpenSMILES (\url{opensmiles.org/opensmiles.html}) is a new platform that aims to universalize the SMILES notation. In isomeric SMILES, isotopism and stereochemistry information of a molecule is encoded using a variety of symbols (``/", ``\textbackslash", ``@", ``@@").
\paragraph{DeepSMILES} DeepSMILES is a novel SMILES-like notation that was proposed to address two challenges of the SMILES syntax: (i) unbalanced parentheses and (ii) ring closure pairs \cite{OBoyle2018}. It was initially designed to enhance machine/deep-learning based approaches that utilize SMILES data as input (\url{github.com/nextmovesoftware/deepsmiles}). DeepSMILES was adopted in a drug-target binding affinity prediction task in which the findings highlighted the efficacy of DeepSMILES over SMILES in terms of identifying undetectable patterns \cite{ozturk2018chemical}. DeepSMILES was also utilized in a molecule generation task in which it was compared to canonical and randomized SMILES text \cite{arus2019randomized}. Here, the results suggested that DeepSMILES might limit the learning ability of the SMILES-based molecule generation models because its syntax is more grammar sensitive with the ring closure alteration and the use of a single symbol for branching (i.e. ``)") introducing longer sequences.
\paragraph{SELFIES} SELF-referencIng Embedded Strings (SELFIES) is an alternative sequence-based representation that is built upon ``semantically constrained graphs" \cite{krenn2019selfies}. Each symbol in a SELFIES sequence refers to a rule of a recursive Chomsky type-2 grammar, and can thus be used to convert the sequence representation to a unique graph. SELFIES utilizes SMILES syntax to extract words that will correspond to semantically valid graphs (\url{github.com/aspuru-guzik-group/selfies}). \citet{krenn2019selfies} compared SELFIES, DeepSMILES and SMILES representations in terms of validity in cases where random character mutations are introduced. The evaluations on the QM9 dataset yielded results in favor of SELFIES.
\paragraph{InChI} InChI is the IUPAC International Chemical Identifier, which is a non-proprietary and open-source structural representation (\url{inchi-trust.org}) \cite{heller2015inchi}. The InChIKey is a character-based representation that is generated by hashing the InChI strings in order to shorten them. The InChI representation has several layers, each separated by the ``/" symbol.
The software that generates InChI is publicly available and InChI does not suffer from ambiguity problems. However, the less complex structure of SMILES makes it easier to use, as shown in a molecular generation study \cite{gomez2018automatic} and in building meaningful chemical representations with a translation-based system \cite{winter2019learning}. Interestingly, the translation model was able to translate from InChI to canonical SMILES, whereas it failed to translate from canonical SMILES to InChI. \citet{winter2019learning} suggested that the complex syntax of InChI made it difficult for the model to generate a correct sequence.
\paragraph{SMARTS} SMiles ARbitrary Target Specification (SMARTS) is a language that contains specialized symbols and logic operators that enable substructure (pattern) search on SMILES strings \cite{ghersi2014molblocks}. SMARTS can be used in any task that requires pattern matching on a SMILES string such as, querying databases or creating rule dictionaries such as RECAP \cite{lewell1998recap} and BRICS \cite{degen2008art} to extract fragments from SMILES (\url{daylight.com/dayhtml/doc/theory/theory.smarts.html}).
\paragraph{SMIRKS} SMIRKS notation can be used to describe generic reactions (also known as transforms) that comprise one or more changes in atoms and bonds (\url{https://daylight.com/daycgi_tutorials/smirks_examples.html}). These transforms are based on ``reactant to product" notation, and thus make use of the SMILES and SMARTS languages. SMIRKS is utilized in tasks such as constructing an online transform database \cite{avramova2018retrotransformdb} and predicting metabolic transformations \cite{arvidsson2017prediction}. A recent study achieved a performance similar to rule-based systems in classifying chemical reactions by learning directly from SMILES text with transforms via neural networks \cite{schwaller2019data}.
\subsection{Identification of Words/Tokens} \label{subsection:words}
Similar to words in natural languages, we can assume that the ``words" of biochemical sequences convey significant information (e.g. folding, function etc) about the entities. In this regard, each compound/protein is analogous to a sentence, and each compound/protein unit is analogous to a word. Therefore, if we can decipher the grammar of biochemical languages, it would be easier to model bio/cheminformatics problems. However, protein and chemical words are not explicitly known and different approaches are needed to extract syntactically and semantically meaningful biochemical word units from these textual information sources (i.e. sequences). Here, we review some of the most common tokenization approaches used to determine the words of biochemical languages.
\paragraph{$k$-mers ($n$-grams)} One of the simplest approaches in NLP to extract a small language unit is to use $k$-mers, also known as $n$-grams. $k$-mers indicate $k$ consecutive overlapping characters that are extracted from the sequence with a sliding window approach. ``LINGO", which is one of the earliest applications of $k$-mers in cheminformatics, is the name of the overlapping $4$-mers that are extracted from SMILES strings \cite{vidal2005}. $4$-mers of the SMILES of \textit{ampicillin}, ``CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C", can be listed as \{ `CC1(', `C1(C', `1(C(', ..., `O)O)', `)O)C' \}. From a sequence of length $l$, a total of $(l-k)+1$ $k$-mers can be extracted. Extracting LINGOs from SMILES is a simple yet powerful idea that has been successfully used to compute molecular similarities, to differentiate between bioisosteric and random molecular pairs \cite{vidal2005} and in a drug-target interaction prediction task \cite{ozturk2016comparative}, without requiring 2D or 3D information. The results suggested that a SMILES-based approach to compute the similarity of chemicals is not only as good as a 2D-based similarity measurement, but also faster \cite{ozturk2016comparative}.
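For concreteness, the sliding-window extraction above can be reproduced in a few lines of Python (the function and variable names are ours, for illustration only):

```python
def kmers(sequence, k):
    """Extract all overlapping k-mers with a sliding window."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

# LINGOs are the 4-mers of a SMILES string (Vidal et al., 2005).
smiles = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
lingos = kmers(smiles, 4)  # ['CC1(', 'C1(C', '1(C(', ..., ')O)C']
```

A sequence of length $l$ yields exactly $(l-k)+1$ such words, as stated above.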
$k$-mers were successfully utilized as protein \cite{asgari2015continuous} and chemical words \cite{ozturk2018novel} in protein family classification tasks. $3$-mers to $5$-mers were often considered as the words of the protein sequence. \citet{motomura2012word} reported that some $5$-mers could be matched to motifs and protein words are most likely a mixture of different $k$-mers. For the protein function prediction task, \citet{cao2017prolango} decided to choose among the 1000 most frequent words to build the protein vocabulary, whereas \citet{ranjan2019deep} utilized each $k$-mer type separately and showed that $4$-mers provided the best performance. In the latter work, instead of using the whole protein sequence, the words were extracted from different length protein segments, which are also long $k$-mers (i.e. 100-mer, 120-mer) with 30 amino-acid gaps. The use of segmented protein sequences yielded better results than using the whole protein sequence, and important and conserved subsequences were highlighted. $k$-mers were also used as features, along with position specific score matrix features, in the protein fold prediction problem \cite{wei2015enhanced}.
\paragraph{Longest Common Subsequences} The identification of the longest common subsequence (LCS) of two sequences is critical for detecting their similarity. When there are multiple sequences, LCSs can point to informative patterns. LCSs extracted from SMILES sequences performed similarly well to $4$-mers in chemical similarity calculation \cite{ozturk2016comparative}.
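The LCS length is obtained with the classic dynamic program; normalizing it, e.g. by the length of the longer string (one common choice, not necessarily the one used in the cited work), turns it into a similarity score:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            # Extend the LCS on a match, otherwise carry the best so far.
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def lcs_similarity(a, b):
    """Normalize by the longer length (an illustrative choice)."""
    return lcs_length(a, b) / max(len(a), len(b))
```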
\paragraph{Maximum Common Substructure} \citet{cadeddu2014organic} investigated organic chemistry as a language in an interesting study that extracts maximum common substructures (MCS) from the 2D structures of pairs of compounds to build a vocabulary of the molecule corpus. Contrary to the common idea of functional groups (e.g. methyl, ethyl etc.) being ``words" of the chemical language, the authors argued that MCSs (i.e. fragments) can be described as the words of the chemical language \cite{cadeddu2014organic}. A recent work investigated the distribution of these words in different molecule subsets \cite{wozniak2018linguistic}. The ``words" followed \textit{Zipf's Law}, which indicates the relationship between the frequency of a word and its rank (based on the frequency) \cite{zipf1949human}, similar to most natural languages. Their results also showed that drug ``words" are shorter compared to natural product ``words".
\paragraph{Minimum Description Length}
Minimum Description Length (MDL) is an unsupervised compression-based word segmentation technique in which words of an unknown language are detected by compressing the text corpus. In a protein classification task, each protein was assigned to the family in which its sequence is compressed the most, according to the MDL-based representation \cite{ganesan2017protein}. \citet{ganesan2017protein} investigated whether the MDL-based words of the proteins show similarities to PROSITE patterns \cite{hulo2006prosite} and showed that less conserved residues were compressed less by the algorithm. \citet{ganesan2017protein} also emphasized that the integration of domain knowledge, such as the consideration of the hydrophilic and hydrophobic amino acids in the words (i.e. grammar building), might prove effective.
\paragraph{Byte-Pair Encoding}
Byte-Pair Encoding (BPE) generates words based on high frequency subsequences starting from frequent characters \cite{sennrich2015neural}. A recent study adopted a linguistic-inspired approach to predict protein-protein interactions (PPIs) \cite{wang2019high}. Their model was built upon ``words" (i.e. bio-words) of the protein language, in which BPE was utilized to build the bio-word vocabulary. \citet{wang2019high} suggested that BPE-segmented words indicate a language-like behavior for the protein sequences and reported improved accuracy results compared to using $3$-mers as words.
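At its core, BPE is a greedy merge loop: count all adjacent token pairs, fuse the most frequent pair into a new symbol, and repeat. The sketch below (our simplification on a toy protein corpus; production implementations operate on word-frequency tables and add end-of-word markers) illustrates the idea:

```python
from collections import Counter

def bpe_merges(sequences, num_merges):
    """Learn merges: repeatedly fuse the most frequent adjacent token pair."""
    tokenized = [list(seq) for seq in sequences]   # start from single characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for toks in tokenized:
            pairs.update(zip(toks, toks[1:]))      # count adjacent pairs
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]   # most frequent pair
        merges.append(a + b)
        for toks in tokenized:                     # apply the merge everywhere
            i, out = 0, []
            while i < len(toks):
                if i + 1 < len(toks) and toks[i] == a and toks[i + 1] == b:
                    out.append(a + b)
                    i += 2
                else:
                    out.append(toks[i])
                    i += 1
            toks[:] = out
    return merges, tokenized
```

On the toy corpus \{``MKVLAA", ``MKVLSS", ``MKWWAA"\}, the first learned word is the shared prefix ``MK".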
\paragraph{Pattern-based words} Subsequences that are conserved throughout evolution are usually associated with protein structure and function. These conserved sequences can be detected as patterns via multiple sequence alignment (MSA) techniques and Hidden Markov Models (HMM). PROSITE \cite{hulo2006prosite}, a public database that provides information on domains and motifs of proteins, uses regular expressions (i.e. RE or regex) to match these subsequences.
Protein domains have been investigated for their potential of being the words of the protein language. One earlier study suggested that folded domains could be considered as ``phrases/clauses" rather than ``words" because of the higher semantic complexity between them \cite{gimona2006protein}. Later, domains were described as the words, and domain architectures as sentences of the language \cite{scaiewicz2015language, yu2019grammar}.
Protein domains were treated as the words of multi-domain proteins in order to evaluate the semantic meaning behind the domains \cite{buchan2019inferring}. The study supported prior work by \citet{yu2019grammar} suggesting that domains displayed syntactic and semantic features, but there are only a few multi-domain proteins with more than six domains limiting the use of domains as words to build sentences. Protein domains and motifs have also been utilized as words in different drug discovery tasks such as the prediction of drug-target interaction affinity \cite{greenside2017prediction, ozturk2019widedta}. These studies showed that motifs and domains together contribute to the prediction as much as the use of the full protein sequence.
SMARTS is a well-known regex-based querying language that is used to identify patterns in a SMILES string. SMARTS has been utilized to build specific rules for small-molecule protonation \cite{ropp2019dimorphite}, to design novel ligands based on the fragments connected to the active site of a target \cite{cheron2015opengrowth}, and to help generate products in reaction prediction \cite{wei2016neural}. MolBlocks, a molecular fragmentation tool, also adopted SMARTS dictionaries to partition a SMILES string into overlapping fragments \cite{ghersi2014molblocks}. Furthermore, MACCS \cite{durant2002reoptimization} and PubChem \cite{bolton2008pubchem} Fingerprints (FP) are molecular descriptors that are described as binary vectors based on the absence/presence of substructures that are predefined with SMARTS language. A recent study on protein family clustering uses a ligand-centric representation to describe proteins in which ligands were represented with SMILES-based (i.e. 8-mers) representation, MACCS and Extended Connectivity Fingerprint (ECFP6) \cite{ozturk2018novel}. The results indicate that all three ligand representation approaches provide similar performances for protein family clustering.
To the best of our knowledge, there is no comprehensive evaluation of the different word extraction techniques except a comparison by \citet{wang2019high} of the performance of BPE-based words against $k$-mers in a PPI prediction task. Such comparison would provide important insights to the bio/cheminformatics community.
\subsection{Text representation}
The representation of a text (e.g. molecule or protein sequence) aims to capture syntactic, semantic or relational meaning. In the widely used Vector Space Model (VSM), a text is represented by a feature vector of either weighted or un-weighted terms \cite{salton1975vector}. The terms of this vector may correspond to words, phrases, k-grams, characters, or dimensions in a semantic space such as in the distributed word embedding representation models. The similarity between two texts represented in the vector space model is usually computed using the cosine similarity metric \cite{bilenko2003adaptive}, which corresponds to the cosine of the angle between the two vectors.
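The metric itself is a one-liner over VSM vectors; a minimal sketch:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two term vectors; 1 = parallel, 0 = orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```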
Similarly to the one-hot encoding scheme \cite{bishop2006pattern}, in the traditional bag-of-words \cite{turney2010frequency} and term frequency-inverse document frequency (TF-IDF) \cite{jones2004statistical} text representation models, each word corresponds to a different dimension in the vector space. Therefore, the similarity between two words in the vector space is zero, even if they are synonymous or related to each other. In the distributed representation models \cite{mikolov2013distributed} on the other hand, words are represented as dense vectors based on their context. Words that occur in similar contexts have similar vector representations. In this subsection, we review these commonly used text representation models with their applications in cheminformatics.
\paragraph{Bag-of-words representation} In this representation model, a text is represented as a vector of \textit{bag-of-words}, where the multiplicity of the words is taken into account, but the order of the words in the text is lost \cite{turney2010frequency}. For instance, the SMILES of ampicillin ``CC1(C(N2C(S1)C(C2=O)NC(=O)C(\\C3=CC=CC=C3)N)C(=O)O)C" can be represented as a bag-of $8$-mers as follows: \{``CC1(C(N2", ``C1(C(N2C", ``1(C(N2C(", ``(C(N2C(S",...,``N)C(=O)O" ,``)C(=O)O)" ,``C(=O)O)C" \}. We can vectorize it as $S = [1, 1, 1, 1, ...,1, 1, 1]$ in which each number refers to the frequency of the corresponding $8$-mer.
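This vectorization is straightforward to implement; the sketch below (helper names are ours) counts $k$-mers over a shared vocabulary so that two molecules can be compared dimension by dimension:

```python
from collections import Counter

def bag_of_kmers(sequence, k, vocabulary=None):
    """Represent a sequence as k-mer frequencies over a fixed vocabulary."""
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    if vocabulary is None:                 # derive the vocabulary from this sequence
        vocabulary = sorted(counts)
    return [counts[w] for w in vocabulary], vocabulary
```

Passing the same vocabulary to every molecule yields vectors of equal dimension, as the VSM requires.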
Bag-of-words representation was used in molecular similarity computation, in which the SMILES string and the LINGOs extracted from it were treated as the \textit{sentence} and \textit{words}, respectively \cite{vidal2005}. The unique LINGOs were considered for each pair and a Tanimoto coefficient was used to measure the similarity \cite{vidal2005}. Another approach called SMILES Fingerprint (SMIfp) also adopted bag-of-words to create representations of molecules for a ligand-based virtual screening task \cite{schwartz2013smifp}. SMIfp considered 34 unique symbols in SMILES strings to create a frequency-based vector representation, which was utilized to compute molecular similarity. SMIfp provided comparable results to a chemical representation technique that also incorporated polar group and topological information, as well as atom and bond information, in recovering active compounds amongst decoys \cite{schwartz2013smifp}.
\paragraph{TF-IDF}
The bag-of-words model, which is based on counting the terms of the sentence/document, might prioritize insignificant but frequent words. To overcome this issue, a weighting scheme can be integrated into the vector representation in order to give more importance to the rare terms that might play a key role in detecting similarity between two documents. One popular weighting approach is to use term frequency-inverse document frequency (TF-IDF) \cite{jones2004statistical}. TF refers to the frequency of a term in the document, and IDF denotes the logarithm of the total number of documents over the number of documents in which the term appears. IDF is therefore an indicator of uniqueness. For instance, the IDF of ``C3=CC=CC" is lower than that of ``(C(N2C(S", which appears in fewer compounds. Therefore, the existence of ``(C(N2C(S" in a compound may be more informative.
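A minimal implementation of this weighting scheme (our sketch; practical variants differ in smoothing and logarithm base):

```python
import math
from collections import Counter

def tf_idf(documents):
    """documents: list of token lists. Returns one {term: weight} map per document."""
    n_docs = len(documents)
    df = Counter()                         # document frequency of each term
    for doc in documents:
        df.update(set(doc))
    weighted = []
    for doc in documents:
        tf = Counter(doc)                  # raw term frequency
        weighted.append({t: tf[t] * math.log(n_docs / df[t]) for t in tf})
    return weighted
```

Note that a term appearing in every document receives a weight of zero, which is exactly the suppression of frequent but uninformative words described above.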
TF-IDF weighting was utilized to assign weights to LINGOs that were extracted from SMILES in order to compute molecule similarity using cosine similarity \cite{ozturk2016comparative}. Molecular similarities were then used as input for drug-target interaction prediction. A similar performance between TF-IDF weighted LINGO and a graph-based chemical similarity measurement was obtained. \citet{cadeddu2014organic} used TF-IDF weighting on chemical bonds to show that bonds with higher TF-IDF scores have a higher probability of breaking.
\paragraph{One-hot representation}
In one-hot representation, for a given vocabulary of a text, each unique word/character is represented with a binary vector that has a $1$ in the corresponding position, while the vector positions for the remaining words/characters are filled with $0$s \cite{bishop2006pattern}. One-hot encoding is fast to build, but might lead to sparse vectors with large dimensions based on the size of the vocabulary (e.g. one million unique words in the vocabulary means one million dimensional binary vectors filled with zeros except one). It is a popular choice, especially in machine learning-based bio/cheminformatic studies to encode different types of information such as SMILES characters \cite{segler2017generating, kwon2017deepcci}, atom/bond types \cite{preuer2019interpretable, de2018molgan} and molecular properties \cite{mayr2016deeptox}.
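For a SMILES string and a reduced, hypothetical alphabet, one-hot encoding looks as follows (a minimal sketch):

```python
def one_hot(sequence, alphabet):
    """Encode each character as a binary indicator vector over the alphabet."""
    index = {ch: i for i, ch in enumerate(alphabet)}
    encoded = []
    for ch in sequence:
        vec = [0] * len(alphabet)
        vec[index[ch]] = 1                 # single 1 at the character's position
        encoded.append(vec)
    return encoded
```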
\paragraph{Distributed representations}
The one-hot encoding builds discrete representations, and thus does not consider the relationships between words. For instance, the cosine similarity of two different words is 0 even if they are semantically similar. However, if the word (i.e. $8$-mer) ``(C(N2C(S" frequently appears together with the word ``C(C2=O)N" in SMILES strings, this might suggest that they have related ``meanings". Furthermore, two words might have similar semantic meanings even though they are syntactically apart. This is where distributed vector representations come into play.
The distributed word embeddings models gained popularity with the introduction of Word2Vec \cite{mikolov2013distributed} and GloVe \cite{pennington2014glove}. The main motivation behind the Word2Vec model is to build real-valued high-dimensional vectors for each word in the vocabulary based on the context in which they appear. There are two main approaches in Word2Vec: (i) Skip-Gram and (ii) Continuous Bag of Words (CBOW). The aim of the Skip-Gram model is to predict context words given the center word, whereas in CBOW the objective is to predict the target word given the context words. Figure \ref{fig:word2vec} depicts the Skip-gram architecture in Word2Vec \cite{mikolov2013distributed}. For the vocabulary of size $V$, given the target word ``2C(S", the model learns to predict two context words. Both target word and context words are represented as one-hot encoded binary vectors of size $V$. The number of neurons in the hidden layer determines the size of the embedding vectors. The weight matrix between the input layer and the hidden layer stores the embeddings of the vocabulary words. The $i^{th}$ row of the embedding matrix corresponds to the embedding of the $i^{th}$ word.
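The training data of the Skip-Gram variant can be made concrete by enumerating the (target, context) pairs within a window; the sketch below (our simplification, omitting the network and negative sampling) shows the pairs generated from protein $3$-mers:

```python
def skipgram_pairs(tokens, window):
    """(target, context) pairs that the Skip-Gram model is trained to predict."""
    pairs = []
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:                     # every neighbor except the target itself
                pairs.append((target, tokens[j]))
    return pairs
```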
The Word2Vec architecture has inspired a great deal of research in the bio/cheminformatics domains. The Word2Vec algorithm has been successfully applied for determining protein classes \cite{asgari2015continuous} and protein-protein interactions (PPI) \cite{wang2019high}. \citet{asgari2015continuous} treated $3$-mers as the words of the protein sequence and observed that $3$-mers with similar biophysical and biochemical properties clustered together when their embeddings were mapped onto the 2D space. \citet{wang2019high}, on the other hand, utilized BPE-based word segmentation (i.e. bio-words) to determine the words. The authors argued that the improved performance for bio-words in the PPI prediction task might be due to the segmentation-based model providing more distinct words than $k$-mers, which include repetitive segments. Another recent study treated multi-domain proteins as sentences in which each domain was recognized as a word \cite{buchan2019inferring}. The Word2Vec algorithm was trained on the domains (i.e. PFAM domain identifiers) of eukaryotic protein sequences to learn semantically interpretable representations of them. The domain representations were then investigated in terms of the Gene Ontology (GO) annotations that they inherit. The results indicated that semantically similar domains share similar GO terms.
The Word2Vec algorithm was also utilized for representation of chemicals. SMILESVec, a text-based ligand representation technique, utilized Word2Vec to learn embeddings for $8$-mers (i.e. chemical words) that are extracted from SMILES strings \cite{ozturk2018novel}. SMILESVec was utilized in protein representation such that proteins were represented as the average of the SMILESVec vectors of their interacting ligands. The results indicated comparable performances for ligand-based and sequence based protein representations in protein family/superfamily clustering. Mol2Vec \cite{jaeger2018mol2vec}, on the other hand, was based on the identifiers of the substructures (i.e. words of the chemical) that were extracted via Extended Connectivity Fingerprint (ECFP) \cite{rogers2010extended}. The results showed a better performance with Mol2Vec than with the simple Morgan Fingerprint in a solubility prediction task, and a comparable performance to graph-based chemical representation \cite{wu2018moleculenet}. \citet{chakravarti2018distributed} also employed the Word2vec model that was trained on the fragments that are extracted from SMILES strings using a graph traversing algorithm. The results favored the distributed fragment-based ligand representation over fragment-based binary vector representation in a ring system clustering task and showed a comparable performance in the prediction of toxicity against \textit{Tetrahymena} \cite{chakravarti2018distributed}. Figure \ref{fig:scheme} illustrates the pipeline of a text-based molecule representation based on $k$-mers.
FP2Vec is another method that utilizes embedding representation for molecules; however, instead of the Word2Vec algorithm, it depends on a Convolutional Neural Network (CNN) to build molecule representations to be used in toxicity prediction tasks \cite{jeon2019fp2vec}. CNN architectures have also been utilized for drug-target binding affinity prediction \cite{ozturk2018deepdta} and drug-drug interaction prediction \cite{kwon2017deepcci} to build representations for chemicals from raw SMILES strings, as well as for protein fold prediction \cite{hou2017deepsf} to learn representations for proteins from amino-acid sequences. SMILES2Vec adopted different DL architectures (GRU, LSTM, CNN+GRU, and CNN+LSTM) to learn molecule embeddings, which were then used to predict toxicity, affinity and solubility \cite{goh2017smiles2vec}. A CNN+GRU combination was better at the prediction of chemical properties. A recent study compared several DL approaches to investigate the effect of different chemical representations, which were learned through these architectures, on a chemical property prediction problem \cite{paul2018chemixnet}. The authors also combined DL architectures that were trained on SMILES strings with the MACCS fingerprint, proposing a combined representation for molecules (i.e. CheMixNet). The CheMixNet representation outperformed the other representations that were trained on a single data type such as SMILES2Vec (i.e. SMILES) and Chemception (i.e. 2D graph) \cite{goh2017chemception}.
\subsection{Text generation} \label{section:textgeneration}
Text generation is a primary NLP task, where the aim is to generate grammatically and semantically correct text, with many applications ranging from question answering to machine translation \cite{wang2019topic}. It is generally formulated as a language modeling task, where a statistical model is trained using a large corpus to predict the distribution of the next word in a given context. In machine translation, the generated text is the translation of an input text in another language.
Medicinal chemistry campaigns use methods such as scaffold hopping \cite{grisoni2018scaffold} or fragment-based drug design \cite{sledz2018protein} to build and test novel molecules, but the chemotype diversity and novelty may be limited. It is possible to explore uncharted chemical space with text generation models, which learn a distribution from the available data (i.e. SMILES language) and generate novel molecules that share similar physicochemical properties with the existing molecules \cite{segler2017generating}. Molecule generation can then be followed by assessing physicochemical properties of the generated compound or its binding potential to a target protein \cite{segler2017generating}. For a comprehensive review of molecule generation methodologies, including graph-based models, we refer the reader to the review of \citet{elton2019deep}. Machine translation models have also been recently adapted to text-based molecule generation, which start with one ``language'' such as that of reactants and generate a novel text in another ``language'' such as that of products \cite{schwaller2018molecular}. Below, we present recent studies on text-based molecule generation.
RNN models, which learn a probability distribution from a training set of molecules, are commonly used in molecule generation to propose novel molecules similar to the ones in the training data set. For instance, given the SMILES sequence ``C(=O'', the model would predict the next character to be ``)'' with a higher probability than ``(''. The production of valid SMILES strings, however, is a challenge because of the complicated SMILES syntax that utilizes parentheses to indicate branches and ring numbers. The sequential nature of RNNs, which may miss long-range dependencies, is a disadvantage of these models \cite{segler2017generating}. RNN descendants LSTM and GRU, which model long-term dependencies, are better suited for remembering matching rings and branch closures. Motivated by such a hypothesis, \citet{segler2017generating} and \citet{ertl2017silico} successfully pioneered \textit{de novo} molecule generation using LSTM architecture to generate valid novel SMILES. \citet{segler2017generating} further modified their model to generate target-specific molecules by integrating a target bioactivity prediction step to filter out inactive molecules and then retraining the LSTM network. In another study, transfer learning was adopted to fine-tune an LSTM-based SMILES generation model so that structurally similar leads were generated for targets with few known ligands \cite{gupta2018generative}. \citet{olivecrona2017molecular} and \citet{popova2018deep} used reinforcement learning (RL) to bias their model toward compounds with desired properties. \citet{merk2018novo, merk2018tuning} fine-tuned their LSTM model on a target-focused library of active molecules and synthesized some novel compounds. \citet{arus2019exploring} explored how much of the GDB-13 database \cite{blum2009970} they could rediscover by using an RNN-based generative model.
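The next-character prediction described above can be illustrated with a deliberately minimal character-level language model. A simple bigram counter (our own sketch, far simpler than the LSTM models cited, and offered only to make the idea concrete) already yields a conditional distribution over the next SMILES character:

```python
from collections import Counter, defaultdict

def train_bigram_lm(smiles_corpus):
    """Count character transitions in a corpus of SMILES strings."""
    counts = defaultdict(Counter)
    for smiles in smiles_corpus:
        for cur, nxt in zip(smiles, smiles[1:]):
            counts[cur][nxt] += 1
    return counts

def next_char_probs(counts, context_char):
    """Probability distribution over the next character given the last one."""
    transitions = counts[context_char]
    total = sum(transitions.values())
    return {ch: n / total for ch, n in transitions.items()}

model = train_bigram_lm(["CCO", "CCN", "CCO"])
probs = next_char_probs(model, "C")  # P(next='C' | 'C') = 0.5 on this toy corpus
```

An RNN generalizes this by conditioning on the whole preceding sequence rather than a single character, which is what allows it to keep track of open parentheses and ring numbers.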
The variational Auto-encoder (VAE) is another widely adopted text generation architecture \cite{bowman2015generating}. \citet{gomez2018automatic} adopted this architecture for molecule generation. A traditional auto-encoder encodes the input into the latent space, which is then decoded to reconstruct the input. VAE differs from AE by explicitly defining a probability distribution on the latent space to generate new samples. \citet{gomez2018automatic} hypothesized that the variational part of the system integrates noise to the encoder, so that the decoder can be more robust to the large diversity of molecules. However, the authors also reported that the non-context free property of SMILES caused by matching ring numbers and parentheses might often lead the decoder to generate invalid SMILES strings. A grammar variational auto-encoder (GVAE), where the grammar for SMILES is explicitly defined instead of the auto-encoder learning the grammar itself, was proposed to address this issue \cite{kusner2017grammar}. This way, the generation is based on the pre-defined grammar rules and the decoding process generates grammar production rules that should also be grammatically valid. Although syntactic validity would be ensured, the molecules may not have semantic validity (chemical validity). \citet{dai2018syntax} built upon the VAE \cite{gomez2018automatic} and GVAE \cite{kusner2017grammar} architectures and introduced a syntax-directed variational autoencoder (SD-VAE) model for the molecular generation task. The syntax-direct generative mechanism in the decoder contributed to creating both syntactically and semantically valid SMILES sequences. \citet{dai2018syntax} compared the latent representations of molecules generated by VAE, GVAE, and SD-VAE, and showed that SD-VAE provided better discriminative features for druglikeness. \citet{blaschke2018application} proposed an adversarial AE for the same task. 
Conditional VAEs \cite{lim2018molecular, kang2018conditional} were trained to generate molecules conditioned on a desired property. The challenges that SMILES syntax presents inspired the introduction of new syntax such as DeepSMILES \cite{OBoyle2018} and SELFIES \cite{krenn2019selfies} (details in Section \ref{section:availabledata}).
Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network generates novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules \cite{hong2019generative}. In text generation models, the novel molecules are drawn from a distribution, which are then fine-tuned to obtain specific features, whereas adversarial learning utilizes generator and discriminator networks to produce novel molecules \cite{hong2019generative, guimaraes2017objective}. ORGAN \cite{guimaraes2017objective}, a molecular generation methodology, was built upon a sequence generative adversarial network (SeqGAN) from NLP \cite{yu2017seqgan}. ORGAN integrated RL in order to generate molecules with desirable properties such as solubility, druglikeness, and synthetizability through using domain-specific rewards \cite{guimaraes2017objective}.
\paragraph{Machine Translation} Machine translation finds use in cheminformatics in ``translation'' from one language (e.g. reactants) to another (e.g. products). Machine translation is a challenging task because the syntactic and semantic dependencies of each language differ from one another, which may give rise to ambiguities. Neural Machine Translation (NMT) models benefit from the potential of deep learning architectures to build a statistical model that aims to find the most probable target sequence for an input sequence by learning from a corpus of examples \cite{sutskever2014sequence, cho2014learning}. The main advantage of NMT models is that they provide an end-to-end system that utilizes a single neural network to convert the source sequence into the target sequence. \citet{sutskever2014sequence} refer to their model as a sequence-to-sequence (seq2seq) system that addresses a major limitation of DNNs, which can only work with fixed-dimensionality information as input and output. However, in the machine translation task, the length of the input sequences is not fixed, and the length of the output sequences is not known in advance.
The NMT models are based on an encoder-decoder architecture that aims to maximize the probability of generating the target sequence (i.e. the most likely correct translation) for the given source sequence. The first encoder-decoder architectures in NMT performed poorly as the sequence length increased, mainly because the encoder mapped the source sequence into a single fixed-length vector. However, a fixed-size representation may be too small to encode all the information required to translate long sequences \cite{bahdanau2014neural}. To overcome the issue of the fixed context vector (Figure \ref{fig:smilestranslate}a), a new method was developed, in which every source token was encoded into a memory bank independently (Figure \ref{fig:smilestranslate}b). The decoder could then selectively focus on parts of this memory bank during translation \cite{bahdanau2014neural, luong2015effective}. This technique is known as the ``attention mechanism'' \cite{graves2013generating}.
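The selective focus over the memory bank can be sketched as follows: at each decoding step, every encoder state is scored against the current decoder query and the scores are normalized with a softmax. This is a minimal dot-product variant written for illustration; the cited works use learned score functions (additive in Bahdanau-style attention, bilinear in Luong-style attention):

```python
import math

def attention_weights(query, memory_bank):
    """Dot-product attention: score each encoder state against the query,
    then normalize the scores with a softmax."""
    scores = [sum(q * m for q, m in zip(query, state)) for state in memory_bank]
    max_s = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - max_s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(query, memory_bank):
    """Context vector: weighted average of the encoder states."""
    w = attention_weights(query, memory_bank)
    dim = len(memory_bank[0])
    return [sum(w[i] * memory_bank[i][d] for i in range(len(memory_bank)))
            for d in range(dim)]
```

The resulting context vector is what replaces the single fixed-length encoding, and the per-token weights are the same quantities that later studies visualize to obtain atom-wise mappings.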
Inspired by the successes in NMT, the first application of seq2seq models in cheminformatics was for reaction prediction by \citet{nam2016linking}, who proposed to translate the SMILES strings of reactants and separated reagents to the corresponding product SMILES. The authors hypothesized that the reaction prediction problem can be re-modelled as a translation system in which both the inputs and the output are sequences. Their model used GRUs for the encoder-decoder and a Bahdanau \cite{bahdanau2014neural} attention layer in between. \citet{liu2017retrosynthetic}, in contrast, performed the opposite task, single-step retrosynthesis prediction, using a similar encoder-decoder model. When given a product and a reaction class, their model predicted the reactants that would react together to form that product. One major challenge in the retrosynthesis prediction task is the possibility of multiple correct targets, because more than one reactant combination could lead to the same product. Similarly to \citet{nam2016linking}, \citet{schwaller2018found} also adopted a seq2seq model to translate precursors into products, utilizing the SMILES representation for the reaction prediction problem. Their model used a different attention mechanism by \citet{luong2015effective} and LSTMs in the encoder and decoder. By visualizing the attention weights, an atom-wise mapping between the product and the reactants could be obtained and used to understand the predictions better. \citet{schwaller2018found} showed that seq2seq models could compete with graph neural network-based models in the reaction prediction task \cite{jin2017predicting}.
A translation model was also employed to learn a data-driven representation of molecules \cite{winter2019learning}. \citet{winter2019learning} translated between two textual representations of a chemical, InChI and SMILES, to extract latent representations that can integrate the semantic ``meaning'' of the molecule. The results indicated a statistically significant improvement with the latent representations in a ligand-based virtual screening task against fingerprint methods such as ECFP (i.e. the Morgan algorithm). NMT architectures were also adopted in a protein function prediction task for the first time, in which ``words'' that were extracted from protein sequences are translated into GO identifiers using RNNs as encoder and decoder \cite{cao2017prolango}. Although exhibiting a comparable performance to the state-of-the-art protein function prediction methods, the authors argued that the performance of the model could be improved by determining more meaningful ``words'' such as biologically interpretable fragments.
Transformer is an attention-based encoder-decoder architecture that was introduced in NMT by \citet{Vaswani:2017ul}. Although similar to previous studies \cite{sutskever2014sequence, cho2014learning, bahdanau2014neural} in terms of adopting an encoder-decoder architecture, Transformer differs from the others because it only consists of attention and feed-forward layers in the encoder and decoder. As transformers do not contain an RNN, positional embeddings are needed to capture order relationships in the sequences. \citet{schwaller2018molecular} were the first to adopt the Transformer architecture in cheminformatics and designed a \textit{Molecular Transformer} for the chemical reaction prediction task. The Molecular Transformer, which was atom-mapping independent, outperformed the other algorithms (e.g. based on a two-step convolutional graph neural network \cite{coley2019graph}) on commonly used benchmark data sets. Transformer architecture was also adopted to learn representations for chemicals in prediction of drug-target interactions \cite{shin2019self} and molecular properties \cite{wang2019smiles} in which the proposed systems either outperformed the state-of-the-art systems or obtained comparable results.
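Since the Transformer has no recurrence, the positional embeddings mentioned above carry all the order information. A minimal sketch of the sinusoidal encoding of \citet{Vaswani:2017ul}, where each position is mapped to a deterministic vector of sines and cosines at geometrically spaced frequencies:

```python
import math

def positional_encoding(pos, d_model):
    """PE(pos, 2i) = sin(pos / 10000^(2i/d)); PE(pos, 2i+1) = cos(same angle)."""
    pe = []
    for i in range(d_model):
        angle = pos / (10000 ** ((i // 2) * 2 / d_model))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

# Position 0 encodes to the alternating vector [0, 1, 0, 1, ...]
```

These vectors are added to the token embeddings, so the attention layers can distinguish, for example, the order of atoms in a SMILES string without any sequential processing.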
\section{Future Perspectives}
The increase in the biochemical data available in public databases combined with the advances in computational power and NLP methodologies have given rise to a rapid growth in the publication rate in bio/cheminformatics, especially through pre-print servers. As this interdisciplinary field grows, novel opportunities come hand in hand with novel challenges.
\subsection{Challenges}
The major challenges that can be observed from investigating these studies can be summarized as follows: (i) the need for universalized benchmarks and metrics, (ii) reproducibility of the published methodologies, (iii) bias in available data, and (iv) biological and chemical interpretability/explainability of the solutions.
\paragraph{Benchmarking} There are several steps in the drug discovery pipeline, from affinity prediction to the prediction of other chemical properties such as toxicity and solubility. The use of different datasets and different evaluation metrics makes the assessment of model performance challenging. Comprehensive benchmarking platforms that can assess the success of different tools are still lacking. A benchmarking environment rigorously brings together the suitable data sets and evaluation methodologies in order to provide a fair comparison between the available tools. Such environments are available for the molecule generation task from MOSES \cite{polykovskiy2018molecular} and GuacaMol \cite{brown2019guacamol}. \textit{MoleculeNet} is a similar attempt to build a benchmarking platform for tasks such as prediction of binding affinity and toxicity \cite{wu2018moleculenet}.
\paragraph{Reproducibility} Despite the focus on sharing datasets and source codes on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org), it is still a challenge to use data or code from other groups. The use of FAIR (Findable, Accessible, Interoperable and Reusable) (meta)data principles can guide the management of scientific data \cite{wilkinson2016fair}. Automated workflows that are easy to use and do not require programming knowledge encourage the flow of information from one discipline to the other. Platform-free solutions such as Docker (docker.com) in which an image of the source code is saved and can be opened without requiring further installation could accelerate the reproduction process. A recent initiative to provide a unified-framework for predictive models in genomics can quickly be adopted by the medicinal chemistry community \cite{avsec2019kipoi}.
\paragraph{Bias in data} The available data has two significant sources of bias, one related to the limited sampling of chemical space and the other related to the quality and reproducibility of the data. The lack of information about some regions of the protein/chemical landscape limits the current methodologies to the exploitation of data rather than full exploration. The data on protein-compound interactions is biased toward some privileged molecules or proteins because the protein targets are related to common diseases or the molecules are similar to known actives. Hence, not all of chemical space is sampled, and chemical space is expanded based on the similarity of an active compound to others, which is also referred to as inductive bias \cite{cleves2008effects}. Data about proteins or molecules related to rare diseases is limited, and inactive molecules are frequently not reported. Moreover, some experimental measurements are not reproducible across different labs or conditions, which limits their reliability \cite{pogue2018rare}. \citet{sieg2019need} and \citet{zhang2019bayesian} have recently discussed the bias factors in dataset composition. \citet{zhang2019bayesian} have also addressed the sources of bias in the data and proposed to use Bayesian deep learning to quantify uncertainty.
\paragraph{Interpretability} The black-box nature of ML/DL methodologies makes assigning meaning to the results difficult. Explainability of an ML model is especially critical in drug discovery to facilitate the use of these findings by medicinal chemists, who can contribute to the knowledge loop. \textit{Explainable AI} (XAI) is a current challenge that calls for increased interpretability of AI solutions for a given context and includes several factors such as trust, safety, privacy, security, fairness and confidence \cite{holzinger2017we}. Explainability is also critical for the domain experts to assess the reliability of new methodologies. Interpretability is usually classified into two categories: post-hoc (i.e. after) and ante-hoc (i.e. before). Post-hoc approaches explain the predictions of the model, whereas ante-hoc approaches integrate explainability into the model. Recent studies have already aimed to map the semantic meaning behind the models onto the biochemical description. An attentive pooling network, a two-way attention system that extends the attention mechanism by allowing input nodes to be aware of one another, is one approach that has been employed in drug-target interaction prediction \cite{gao2018interpretable}. \citet{preuer2019interpretable} showed that mapping activations of hidden neurons in feed-forward neural networks to pharmacophores, or linking atom representations computed by convolutional filters to substructures in a graph-convolution model, are possible ways of integrating explainability into AI-based drug discovery systems.
\citet{bradshaw2019} also demonstrated a novel approach that combines molecule generation and retrosynthesis prediction to generate synthesizable molecules. Integration of such solutions to drug discovery problems will not only be useful for computational researchers but also for the medicinal chemistry community.
\subsection{Opportunities}
The NLP field has seen tremendous advances in the past five years, starting with the introduction of distributed word embedding algorithms such as Word2Vec \cite{mikolov2013distributed} and GloVe \cite{pennington2014glove}. The concept of contextualized word embeddings (i.e. ELMo) was introduced soon after \cite{peters2018deep}. Here, the embedding of the word is not fixed, but changes according to the context (i.e. sentence) in which it appears. These advances continued with more complicated architectures such as Transformer (i.e. Generative Pre-Training or GPT) \cite{radford2018improving} and BERT \cite{devlin2018bert}, RoBERTa \cite{liu2019roberta}, GPT2 \cite{radford2019language}, Transformer-XL \cite{dai2019transformer}, and XLNet \cite{yang2019xlnet} models. Such models with a focus on context might have significant impact not only on drug discovery, but also on the protein folding problem, which is critical for predicting structural properties of the protein partner. Secondary structure \cite{hanson2019getting, zhu2019predicting, wang2016protein}, domain boundary \cite{shi2019dnn} and fold \cite{wei2015enhanced} prediction studies often use sequence information in combination with similarity to available structures. The recent success of AlphaFold \cite{evans2018novo} in Critical Assessment of Protein Structure Prediction (CASP) competitions (\url{http://predictioncenter.org/}) showed that the enhanced definitions of context, brought about by the advances in machine/deep learning systems, might be useful for capturing the global dependencies in protein sequences to detect interactions between residues separated in sequence space but close together in 3D space \cite{hanson2019getting}.
Unsupervised learning can be used on ``big'' textual data through language models with attention \cite{Vaswani:2017ul} and pre-trained checkpoints from language models \cite{Rothe:2019wo}. Encoder-decoder architectures have also had significant impact on solving text generation and machine translation problems and were successfully applied to the molecule generation problem. As NLP moves forward, the most recent approaches such as Topic-Guided VAE \cite{wang2019topic} and knowledge graphs with graph transformers \cite{koncel2019text} will easily find application in bio/cheminformatics.
Recent NLP models are not domain-specific, and they can help with the generalization of models \cite{radford2019language}. Current studies emphasize multi-task learning, which requires the use of DNNs that share parameters to learn more information from related but individual tasks \cite{ruder2019neural, radford2019language}. Combined with the transferability of contextual word representation models, multi-task learning can also provide solutions to drug discovery which has many interwoven tasks, such as chemical property prediction and molecule generation.
Language has an important power, not only for daily communication but also for the communication of codified domain knowledge. Deciphering the meaning behind text is the primary purpose of NLP, which inevitably has found its way to bio/cheminformatics. The complicated nature of biochemical text makes understanding the semantic construction of the hidden words all the more challenging and interesting. The applications we discussed in this review provide a broad perspective of how NLP is already integrated with the processing of biochemical text. A common theme in all of these applications is the use of AI-based methodologies that drive and benefit from the NLP field. Novel advances in NLP and ML are providing auspicious results to solving long-standing bio/cheminformatics problems.
With this review, we have summarized the impact of NLP on bio/cheminformatics to encourage this already interdisciplinary field to take advantage of recent advances. The communication between researchers from different backgrounds and domains can be enhanced through establishing a common vocabulary toward common goals. This review has been an attempt to facilitate this conversation.
\section*{Acknowledgement}
This work is partially supported by TUBITAK (The Scientific and Technological Research Council of Turkey) under grant number 119E133. HO acknowledges TUBITAK-BIDEB 2211 scholarship program and thanks G\"{o}k\c{c}e Uludo\u{g}an for her comments on figures. EO thanks Prof. Amedeo Caflisch for hosting her at the University of Zurich during her sabbatical.
\section{Introduction}
One of the great breakthroughs in the understanding of physical theories was the construction of Reshetikhin--Turaev
(2+1)-dimensional topological quantum field theories \cite{Turaev10, MR1091619}.
Prior to this, Atiyah axiomatized a TQFT in \cite{MR1001453}.
The simpler (1+1)-dimensional theory was nicely formulated by Dijkgraaf \cite{Dijkgraafthesis}.
The latter construction was lifted to a conformal field theory by Segal \cite{MR981378}.
In all these cases, one has a functor from a cobordism category to an algebraic category.
Going back to Freed and Quinn \cite{MR1240583} and Cardy and Lewellen \cite{MR1107480},
there has been an interest in including boundary conditions/information.
Besides the physical challenges, this poses a mathematical problem as both the geometric and the algebraic category need to be moved into higher categories.
Naively, a cobordism with corners is a 2-category, by viewing the corners as objects, the boundaries as cobordisms between them and the cobordism as a cobordism of cobordisms.
The devil is of course in the details. There have been several approaches to the problem, such as Lurie's \cite{Lurie2009}, \cite{MR2648901}, \cite{MR2713992},
and the project \cite{DSS}.
Taking a step back to the (1+1)-dimensional situation, the TQFTs with boundaries have been nicely characterized and
have given rise to new axioms such as the Cardy axiom; see e.g.\ \cite{MR2395583} for a nice introduction or \cite{MR2242677} for a model-free approach.
Here the objects are not quite cobordisms with corners, but more simply surfaces with boundaries
and points on the boundary.
In this paper we will give a constructive solution to the problem by
augmenting the setup of the Reshetikhin--Turaev theory.
We give a very careful treatment and check all the details.
In order to do this we use an algebraic and a geometric 2-category and define a 2-functor (with anomaly) between them.
The source geometric 2-category $\mathrm{Co}$ is constructed extending the category of decorated cobordisms of the RT TQFT.
The target algebraic (weak) 2-category is that of Kapranov-Voevodsky 2-vector spaces, $2\text{-}\mathrm{Vect}$, defined in \cite{KV1994}.
Great caution has to be used in the definition of gluing of decorated cobordisms with corners.
As objects and 1-morphisms, we fix the \textit{standard circles} and the \textit{standard surfaces}.
Thus, the 2-category $\mathrm{Co}$ on the level of objects and 1-morphisms is combinatoric.
The topological nature of $\mathrm{Co}$ lies in the 2-morphism level.
A 2-morphism of $\mathrm{Co}$ is a decorated 3-manifold with corners.
One of the properties of a decorated 3-manifold is that the boundary of the manifold is parametrized.
Namely, there are several embeddings from the standard surfaces to the boundary of the manifold.
This is where we need to be careful.
The gluing of standard surfaces is not a standard surface, but is homeomorphic to one.
Therefore to define horizontal composition of 2-morphisms, we need to choose and fix a homeomorphism between these spaces with caution.
In order to construct a 2-functor from $\mathrm{Co}$ to $2\text{-}\mathrm{Vect}$, we ``cap off'' the corners of a cobordism to reduce it to a cobordism without corners.
Then we apply the original RT TQFT.
To prove the functoriality of this 2-functor, we develop a technique of representing cobordisms by special ribbon graphs and reduce calculations on manifolds to calculations on special ribbon graphs.
Our work is intended as a bridge between several subjects. The presentation of the categories was chosen to match with the considerations of string topology \cite{MR2597733} in the formalism of \cite{MR2314100,MR2411420}, which will hopefully lead
to some new connections between the two subjects.
Once the classification of \cite{DSPVB} is available, it will be interesting to see how our concrete realization of 3-2-1 TQFT fits.
Their results and those of \cite{DSS} are for a different target category.
We will provide a link in Section \ref{sec:comments}.
Further studies of 3-2-1 extensions but for Turaev--Viro are in \cite{Turaev2010} and in \cite{BalsamI, BalsamII, BalsamIII}.
It would be interesting to relate these to our construction using the relationship between the RT and the TV invariants \cite{Turaev2010}.
The paper is organized as follows.
In Section \ref{sec:A 2-category of cobordisms with corners}, we will introduce the 2-category $\mathrm{Co}$.
The 2-category $\mathrm{Co}$ is our choice of a 2-category of decorated cobordisms with corners.
In Section \ref{sec:A 2-category of the Kapranov-Voevodsky 2-vector spaces}, we will recall the Kapranov-Voevodsky 2-vector spaces 2-$\mathrm{Vect}$ as the target 2-category of the extended TQFT.
Before starting the discussion of an extension of the RT TQFT to cobordisms with corners, we review some of the original construction of the Reshetikhin-Turaev TQFT in Section \ref{sec:Review and Modification of the Reshetikhin-Turaev TQFT}, since we will use the original theory extensively.
This will also serve as a quick reference of the notations and definitions of \cite{Turaev10}.
In Section \ref{sec:An extended TQFT}, we construct an extended TQFT $\mathcal{X}$ from $\mathrm{Co}$ to $2\text{-}\mathrm{Vect}$; all the details of the various compatibilities of gluings, which form the main part of the paper, will be proved in Section \ref{sec:Main Theorem}.
We also show that this is indeed an extension by showing that, when restricted to regular cobordisms, it reproduces the RT TQFT.
In the Appendix, we review B\'{e}nabou's definition of a bicategory and a pseudo 2-functor \cite{MR0220789} and extend it to the definition of a \textit{projective pseudo 2-functor}.
\section*{Acknowledgments}
I would like to express my greatest appreciation to Professor Ralph M.\ Kaufmann, whose advice, guidance, and suggestions were invaluable for my study.
I am also deeply indebted to Professor Alexander A.\ Voronov and Professor Christopher Schommer-Pries for their interest in the current paper and their valuable feedback.
I would like to thank the referee for carefully reading my manuscript and for giving constructive comments that substantially helped improve the quality of the paper.
\section{A 2-category of cobordisms with corners}\label{sec:A 2-category of cobordisms with corners}
In this section we define a 2-category of decorated cobordisms with corners $\mathrm{Co}$ which is the source 2-category of our extended TQFT as a projective pseudo 2-functor.
Let us explain the outline of our construction of a 2-category of decorated cobordisms with corners $\mathrm{Co}$.
We will give the precise definitions later.
The objects are standard circles.
The 1-morphisms are standard surfaces with boundaries.
The 2-morphisms are decorated 3-manifolds with corners with parametrized boundaries.
In the literature, there are many kinds of definitions of a 2-category of cobordisms with corners, but as far as the author knows, there has not been a definition of a 2-category of decorated cobordisms with corners.
(Remark: We found that the definition given by Kerler and Lyubashenko \cite{MR1862634} is close to our definition.
They use a double category instead of a 2-category.)
One difference between our 2-category of cobordisms with corners and others is that the 2-morphism cobordisms are parametrized.
This means that we fix standard surfaces and each cobordism is equipped with homeomorphisms from standard surfaces to its boundary components.
The difficulty with standard surfaces is that the composite of two standard surfaces is not a standard surface.
Thus we need to deal with compositions carefully.
Even though we choose to use standard circles and standard surfaces as our objects and 1-morphisms of $\mathrm{Co}$, the essence is combinatorial.
Namely, we only need the number of components of circles and \textit{decorated type} of a surface.
Topological information lives in the level of 2-morphisms.
Thus our definition of $\mathrm{Co}$ can be regarded as a geometric realization of combinatorial data on objects and 1-morphisms. See the table below.
\begin{center}
\begin{tabular}{ | l | l | l | }
\hline
$\mathrm{Co}$ & Geometric realization & Combinatorial data \\ \hline
Objects & Standard circles & Integers
\\ \hline
1-morphisms & Standard surfaces & Decorated types \\ \hline
2-morphisms & Classes of decorated cobordisms with corners & \\
\hline
\end{tabular}
\end{center}
In our setting, surfaces are restricted to connected ones.
This restriction makes the theory simpler and lets it fit rigorously into a 2-categorical setting.
Also, to avoid non-connected surfaces, we introduce two formal objects $_*\emptyset$ and $\emptyset_*$, which we call the left and the right empty sets.
For a mere 2-category of cobordisms with corners, including non-connected surfaces would be natural and simpler.
However, including general non-connected surfaces makes it complicated to construct an extended TQFT.
In fact, our technique of representing cobordisms by special ribbon graphs does not generalize to the case of non-connected surfaces.
For this reason, the non-connected surfaces will be dealt with in a future paper.
Now we are going to explain the rigorous definitions.
Along with doing so, we need to modify and extend several definitions used in the Reshetikhin-Turaev TQFT.
We define data $\mathrm{Co}$ consisting of objects, 1-morphisms, and 2-morphisms.
It will be shown that the data $\mathrm{Co}$ is indeed a 2-category.
\subsection{Objects of $\mathrm{Co}$}
Let us consider the 1-dimensional circle $S^1=\{(x, y) \in \mathbb{R}^2 \mid x^2+y^2=1\}$ in $\mathbb{R}^2$.
We call the pair $(S^1, (0,-1))$ the \textit{ 1-dimensional standard circle}.
We often omit the point $(0, -1)$ in the notation and just write $S^1$.
For each natural number $n$, the ordered disjoint union of $n$ 1-dimensional standard circles is called the \textit{$n$-standard circle}.
We denote the $n$-standard circle by $S^{\sqcup n}:=(S^1, i_1=(0,-1))\sqcup \cdots \sqcup (S^1, i_n=(0,-1))$, where $i_k$ is called the \textit{$k$-th base point} for $k=1, \dots, n$.
In general, a pair of a connected manifold and a point of the manifold is called a \textit{pointed} manifold and the specified point is called the \textit{base point}.
The disjoint union of several pointed manifolds is called a multi-pointed manifold.
Thus the $n$-standard circle is a multi-pointed manifold.
\begin{Definition}
We define an \textit{object} of the data $\mathrm{Co}$ to be the $n$-standard circle $S^{\sqcup n}$ for each natural number $n$. We also include two formal symbols ${_* \emptyset}$ and $\emptyset_*$. We call them the left and the right empty set, respectively.
These two formal symbols are needed to confine ourselves to connected surfaces.
\end{Definition}
\subsection{1-Morphisms of $\mathrm{Co}$}
In the previous section we defined the objects of $\mathrm{Co}$.
Now we define a 1-morphism between two objects of $\mathrm{Co}$.
A 1-morphism will be defined to be a standard surface, which we define below.
First, we define decorated types and decorated surfaces, which are needed to define the standard surfaces.
\subsubsection{Decorated types and decorated surfaces}
Reshetikhin-Turaev's construction already uses surfaces that are decorated by objects of a modular tensor category.
We extend their definition to include surfaces with boundaries.
Fix a modular category $\mathcal{V}$.
Let
\begin{equation}\label{equ:type}
t=(m,n; a_1, a_2, \dots, a_p)
\end{equation}
be a tuple in which $m$ and $n$ are non-negative integers and, for $i=1, \dots, p$, each $a_i$ is either a non-negative integer or a pair $(W, \nu)$, where $W$ is an object of the modular category $\mathcal{V}$ and $\nu$ is either $1$ or $-1$.
Such a pair is called a \textit{mark} or a \textit{signed object} of $\mathcal{V}$.
The tuple $t$ is called a \textit{decorated type}, or, when confusion is unlikely, simply a \textit{type}.
Let $L(t)=m$ and $R(t)=n$ denote the first and the second integer of the type $t$, respectively.
By an \textit{arc} on a surface $\Sigma$, we mean a simple oriented arc lying in $\Sigma \setminus \partial \Sigma$.
An arc on $\Sigma$ endowed with an object $W$ of $\mathcal{V}$ and a sign $\nu=\pm 1$ is said to be \textit{marked}.
A connected compact orientable surface $\Sigma$ is said to be \textit{decorated} by a decorated type $t=(m,n; a_1, a_2, \dots, a_p)$ if the following conditions are satisfied.
\begin{enumerate}
\item There are $m+n$ boundary components of $\Sigma$ and the boundary components are totally ordered.
The first $m$ components are called \textit{inboundary} or \textit{left boundary} and the last $n$ components are called \textit{outboundary} or \textit{right boundary}.
\item The boundary $\partial \Sigma$ is a multi-pointed manifold.
\item For each signed-object entry $a_i$, the surface $\Sigma$ is equipped with a marked arc carrying the mark $a_i$.
\item The genus of $\Sigma$ is the sum of all the integer entries $a_i$ (the first two integers $m$ and $n$ are not included).
\end{enumerate}
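For example, a connected surface decorated by the type
\[
t=(2,3;\,(W_1,\nu_1),\,1,\,(W_2,\nu_2),\,3,\,(W_3,\nu_3),\,2)
\]
has $2+3=5$ ordered boundary circles, three marked arcs with marks $(W_1,\nu_1)$, $(W_2,\nu_2)$, $(W_3,\nu_3)$, and genus $1+3+2=6$.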
A \textit{$d$-homeomorphism} of decorated surfaces is a degree 1 homeomorphism of the underlying surfaces preserving the order of boundary components, base points, orientation, the distinguished arcs together with their orientations, marks, and order.
There is a natural \textit{negation} of the structure on a decorated surface.
First, for a type $t=(m,n; a_1, a_2, \dots, a_p)$ we define its opposite type $-t=(m,n; b_1, b_2,\dots, b_p)$ as follows.
If $a_i$ is an integer entry, then let $b_i=a_i$.
If $a_i=(W,\nu)$ is a mark, then let $b_i=(W, -\nu)$.
For a decorated surface $\Sigma$, its opposite decorated surface $-\Sigma$ is obtained from $\Sigma$ by reversing the orientation of $\Sigma$, reversing the orientation of its distinguished arcs, and multiplying the signs of all distinguished arcs by $-1$ while keeping the labels and the order of these arcs.
Note that the decorated type of $-\Sigma$ is the opposite type of $\Sigma$.
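For example, for objects $W$ and $V$ of $\mathcal{V}$, the opposite of the type $t=(1,2;\,(W,1),\,3,\,(V,-1))$ is
\[
-t=(1,2;\,(W,-1),\,3,\,(V,1)),
\]
where the integer entries are unchanged and only the signs of the marks are reversed.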
\subsubsection{Remark}\label{subsec:Remark}
In the Reshetikhin-Turaev theory for the cobordisms without corners, a decorated type is denoted by
\[t_{\text{RT}}=(g; (W_1, \nu_1), \dots, (W_m, \nu_m)),\]
where $g$ is an integer indicating a genus and $W_i$ is an object of a modular category and $\nu_i$ is either $1$ or $-1$ for $i=1, \dots, m$.
In our notation, this decorated type is expressed by the type
\[t=(0, 0; (W_1, \nu_1), \dots, (W_m, \nu_m), 1,1,\dots, 1),\]
where the number of $1$'s is $g$.
Thus our theory includes the RT theory.
\subsubsection{Standard surfaces}
For each type $t$ we define the standard surface of type $t$.
Let $t=(m,n; a_1, a_2, \dots, a_p)$ be a decorated type.
To construct the standard surface, we first define a ``block" of a ribbon graph, which can be thought of as an elementary core of the standard surface.
For a mark $a=(W, \nu)$, the block for $(W, \nu)$ is defined to be a $1\times 1$ square coupon in $\mathbb{R}^2$ together with a length-1 band attached to the top of the coupon; the band is colored by $W$ if $\nu=1$ and by $W^*$ if $\nu=-1$.
See Figure \ref{fig:blockmark}.
\begin{figure}[h]
\center
\includegraphics[width=1.2in]{Blockmark.pdf}
\caption{The block for $(W,\nu)$}
\label{fig:blockmark}
\end{figure}
The block for a positive integer $a$ consists of a $1 \times 1$ square coupon in $\mathbb{R}^2$ and a rainbow-like family of $a$ bands on the top of the square.
These bands are not colored and their cores are oriented from right to left.
See Figure \ref{fig:blockrainbow}.
\begin{figure}[h]
\center
\includegraphics[width=2.2in]{Blockrainbow.pdf}
\caption{The block for an integer}
\label{fig:blockrainbow}
\end{figure}
For the first entry integer $m$ of the type $t$, the block for $m$ is defined as in the left figure of Figure \ref{fig:side ribbons}.
There are $m$ bands attached to the top of the square and the bands are bent so that the ends of bands have the same $x$-coordinates as in the figure.
Similarly, for the second entry integer $n$ of the type $t$, the block for $n$ is defined as in the right figure of Figure \ref{fig:side ribbons}.
For each integer, the left and the right ribbon graphs in Figure \ref{fig:side ribbons} are mirror reflections of each other with respect to the $y$-axis.
\begin{figure}[h]
\center
\includegraphics[width=2.2in]{Side-ribbons.pdf}
\caption{The blocks for the first and the second integers}
\label{fig:side ribbons}
\end{figure}
Now construct a ribbon graph in $\mathbb{R}^3$ by arranging, in the strip $\mathbb{R} \times 0 \times [-1, 1] \subset \mathbb{R}^3$,
the block for $m$ so that its top left corner is at $(0,0,0)$,
the block for $a_i$ so that the top left corner of its square is located at $(i,0,0)$ for $i=1,\dots, p$, and the block for $n$ so that its top left corner is at $(p+1,0,0)$.
We delete the joint segments of the coupons and make them a single coupon of length $p+2$.
Let $R_t$ denote the resulting ribbon graph.
See Figure \ref{fig:Rtnew} for an example of $R_t$ with the type
\[t=(2,3; (W_1, \nu_1), 1, (W_2, \nu_2),3, (W_3, \nu_3),2).\]
\begin{figure}[h]
\center
\includegraphics[width=4in]{Rtnew2.pdf}
\caption{The ribbon graph $R_t$}
\label{fig:Rtnew}
\end{figure}
Let $l$ be the number of entries in the type $t$, which is the width of the coupon in $R_t$.
Fix a closed regular neighborhood $U_t$ of the ribbon graph $R_t$ in the strip $[0,l]\times \mathbb{R} \times [-2,1] \subset \mathbb{R}^3$.
We provide $U_t$ with right-handed orientation and provide the boundary surface $\partial U_t$ with the induced orientation.
By shrinking the coupon slightly, we may assume that the graph $R_t$ intersects $\partial U_t$ only at the ends of the short bands.
If a band has a mark $(W_i, \nu_i)$, provide the intersection arc with this mark.
The surface $\partial U_t$ with these intersection arcs with marks and $m+n$ non-marked arcs is called the \textit{capped standard surface} for the type $t$ and denoted by $\hat{\Sigma}_t$.
Fix an embedding of the disjoint union of $m+n$ 2-dimensional disks $D^2$ into $\hat \Sigma_t$ so that each boundary circle encloses exactly one non-marked arc of $\hat \Sigma_t$.
Each image of $[-1/2,1/2]\subset D^2$ is one of the arcs.
Cutting out the image of the interior of these disks we obtain a surface with marked arcs and boundary.
Each boundary component has a base point which is an image of $(0, -1)$.
We assume that the intersections of the planes $\{0\}\times \mathbb{R}^2$ and $\{l\}\times \mathbb{R}^2$ with $U_t$ are exactly those embedded disks.
The resulting surface is called the \textit{standard surface} of type $t$ and denoted by $\Sigma_t$.
The boundary components of $\Sigma_t$ corresponding to the uncolored left $m$ bands are called the left boundary and denoted by $\partial_{L} \Sigma_t$ and the boundary components corresponding to the uncolored right $n$ bands are called the right boundary and denoted by $\partial_{R} \Sigma_t$.
The left boundary circles are ordered according to the order of the left bands ordered from left to right.
The right boundary circles are ordered according to the order of the right bands ordered from right to left.
The 3-manifold $U_t$ with the ribbon graph $R_t$ sitting inside it is called the \textit{standard handlebody} for the type $t$.
\begin{figure}[h]
\center
\includegraphics[width=3in]{Standard-handlebody.pdf}
\caption{The standard handlebody and embedded disks}
\label{fig:capped standard handlebody}
\end{figure}
Analogously, consider the mirror reflection $-R_t:=\mathrm{mir}(R_t)$, where $\mathrm{mir}:\mathbb{R}^3 \to \mathbb{R}^3$ is the reflection with respect to the plane $\mathbb{R}^2\times \{1/2\} \subset \mathbb{R}^3$.
Set $U_t^-=\mathrm{mir}(U_t)$.
We provide $U_t^-$ with right-handed orientation and provide $\partial(U_t^-)$ with the induced orientation.
For the $i$-th arc of the intersection $-R_t \cap \partial (U_t^-)$, we assign marks $(W_i, -\nu_i)$.
Set $\Sigma_t^-:=\partial U_t^-$.
If we confine ourselves to closed surfaces, the definition of the standard surfaces is a minor modification of Turaev's.
For our purpose, we need to consider gluings of surfaces along boundaries.
Thus we need to deal with the composition of these data we defined.
Two types $t=(l,m; a_1, a_2, \dots, a_p)$ and $s=(m',n; b_1, b_2, \dots, b_q)$ are said to be \textit{composable} if $m=m'$.
If they are composable, the composition of $t$ and $s$ is defined to be
\begin{equation}\label{equ:composition of types}
t\circ s =(l,n; a_1, a_2, \dots, a_p, m-1, b_1, b_2,\dots, b_q).
\end{equation}
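For example, for composable types $t=(l,m;\,(W,1),\,2)$ and $s=(m,n;\,(V,-1))$ with $m\geq 1$ and objects $W$, $V$ of $\mathcal{V}$, we obtain
\[
t\circ s=(l,n;\,(W,1),\,2,\,m-1,\,(V,-1)).
\]
The new integer entry $m-1$ records the genus created by gluing two connected surfaces along their $m$ intermediate boundary circles.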
As we will need it later, we also define $D_n$ (Figure \ref{fig:Dn}) to be the disjoint union of $n$ cylinders $D^2\times [0, 1]$, where $D^2=\{ (x,y)\in \mathbb{R}^2 \mid x^2+y^2 \leq 1 \}$, with an uncolored untwisted band $[-1/2, 1/2] \times [0, 1]$ in each cylinder that intersects the boundary of the cylinder only transversally, at the bottom disk $D^2 \times \{0\}$ and the top disk $D^2 \times \{1\}$.
Let $C(n)=\sqcup_n \partial (D^2) \times [0,1]$.
The space $C(n)$ is the boundary of $D_n$ minus the interior of the union of the top boundary $\sqcup_n \partial(D^2) \times \{1\}$ and the bottom boundary $\sqcup_n \partial(D^2) \times \{0\}$.
The points in the boundary of $C(n)$ corresponding to the points $(0,-1) \times \{0\}$ and $(0,-1)\times \{1\}$ in $D^2 \times [0,1]$ are the base points of $C(n)$.
We provide $D_n$ with right-handed orientation and provide the boundary surface $C(n)$ with the induced orientation.
Let $\mathrm{ref}:D_n \to D_n$ be an orientation reversing homeomorphism that is induced by the map sending $(x, y)\times \{t\}$ to $(-x, y) \times \{t\}$ in $D^2 \times [0, 1]$.
Thus the map $\mathrm{ref}$ is the reflection with respect to the $y$-$z$ plane in $\mathbb{R}^3$.
Restricted to $C(n)$, the map $\mathrm{ref}$ induces an orientation-reversing map on $C(n)$, which is also denoted by $\mathrm{ref}$.
\begin{figure}[h]
\center
\includegraphics[width=4.4in]{Solid-cylinders.pdf}
\caption{The cylinder $D_n$}
\label{fig:Dn}
\end{figure}
\begin{Definition}[1-morphisms of $\mathrm{Co}$]\label{def:1-morphism of Co}
Let $X$ and $Y$ be objects of $\mathrm{Co}$.
A 1-morphism from $X$ to $Y$ is defined to be the standard surface $\Sigma_t$ for a decorated type $t$ depending on $X$ and $Y$ as follows.
\begin{enumerate}
\item If $X=\nstand{m}$ and $Y=\nstand{n}$, then $t=(m, n;a_1, a_2, \dots, a_p)$.
\item If $X={_* \emptyset}$ and $Y=\nstand{n}$, then $t=(0, n;a_1, a_2, \dots, a_p)$.
\item If $X=\nstand{m}$ and $Y=\emptyset_*$, then $t=(m, 0;a_1, a_2, \dots, a_p)$.
\item If $X={_* \emptyset}$ and $Y=\emptyset_*$, then $t=(0, 0;a_1, a_2, \dots, a_p)$.
\end{enumerate}
We add formal identity symbols $\mathrm{id}_n:\nstand{n} \to \nstand{n}$ to the set of 1-morphisms for each natural number $n$.
If we adopt the convention that the source object $X=\nstand{0}$ denotes ${_* \emptyset}$ and the target object $Y=\nstand{0}$ denotes $\emptyset_*$, then definitions (2)-(4) are special cases of (1).
\end{Definition}
We will explain the role of the formal symbol $\mathrm{id}_n$ later when we discuss compositions of $\mathrm{Co}$.
\subsection{2-morphisms of $\mathrm{Co}$}
A 2-morphism of $\mathrm{Co}$ will be an equivalence class of a \textit{decorated} cobordism, which we are going to define.
Let
\[ t=(m,n; a_1, a_2, \dots, a_p) \mbox{ and } s=(m,n; b_1, b_2, \dots, b_q)\] be types.
Let $\Sigma_{t}$ and $\Sigma_{s}$ be 1-morphisms from $\nstand{m}$ to $\nstand{n}$.
We define a \textit{decorated cobordism with corner} from $\Sigma_{t}$ to $\Sigma_{s}$ as follows.
Consider a compact oriented 3-manifold $M$ whose boundary decomposes into four pieces as
\[\partial M= \partial_{B} M \cup \partial_{T} M \cup \partial_{L} M \cup \partial_{R} M,\]
such that
\begin{enumerate}
\item $\partial_{B} M \cap \partial_{T} M=\emptyset$ and $\partial_{L} M \cap \partial_{R} M=\emptyset$.
\item The intersections $\partial_{B} M \cap \partial_{L} M$ and $\partial_{T} M \cap \partial_{L} M$ consist of $m$ circles, respectively.
\item The intersections $\partial_{B} M \cap \partial_{R} M$ and $\partial_{T} M \cap \partial_{R} M$ consist of $n$ circles, respectively.
\item The surfaces $\partial_{B} M$ and $\partial_{T} M$ are decorated surfaces of type $-t$ and $-s$, respectively.
\item The surfaces $\partial_{L} M$ and $\partial_{R} M$ are multi-pointed surfaces homeomorphic to $m$ cylinders over a circle and $n$ cylinders over a circle, respectively.
\item The base points of these four surfaces agree on their intersections.
\end{enumerate}
A ribbon graph $\Omega$ in $M$ meets $\partial M$ transversely along the distinguished arcs in $\partial_{B} M \cup \partial_{T} M \subset \partial M$ which are bases of certain bands of $\Omega$.
Such a manifold $M$ together with a $v$-colored ribbon graph $\Omega$ is said to be \textit{decorated} if the surfaces $\partial_{B} M$, $\partial_{T} M$, $\partial_{L} M$, and $\partial_{R} M$ are \textit{parametrized}.
This means that there are $d$-homeomorphisms (for the bottom and the top) and base-point-preserving homeomorphisms (for the left and the right)
\[\phi_B:\Sigma_{t} \to -\partial_{B} M,\]
\[\phi_T: \Sigma_{s}^{-} \to \partial_{T} M,\]
\[\phi_L: C(m) \to \partial_{L} M, \]
\[\phi_R: C(n) \to \partial_{R} M. \]
We call $\phi=(\phi_B,\phi_T,\phi_L,\phi_R)$ a \textit{parametrization} of $\partial M$ (or $M$).
A $d$-homeomorphism of decorated 3-manifolds is a homeomorphism of the underlying 3-manifold preserving all additional structures in question.
In the sequel, we often call $d$-homeomorphism simply homeomorphism when the domain and the range are decorated cobordisms with corners.
We say that such pairs $(M, \phi)$ and $(M', \phi')$ are equivalent if there exists a ($d$-)homeomorphism $f$ from $M$ to $M'$ that commutes with the parametrizations: $f\circ\phi_*= \phi'_*$ for $*=B$, $T$, $L$, $R$.
This is clearly an equivalence relation.
\begin{Definition}[2-morphisms of $\mathrm{Co}$]
Let $\Sigma_{t}$ and $\Sigma_{s}$ be 1-morphisms from $\nstand{m}$ to $\nstand{n}$.
A \textit{2-morphism} from $\Sigma_{t}$ to $\Sigma_{s}$ is the class $[(M,\phi)]$ of a pair of a decorated cobordism with corners from $\Sigma_t$ to $\Sigma_s$ and its parametrization $\phi$.
For each 1-morphism $X$, we add the formal identity symbol $\mathrm{id}_{X}$.
If one of the 1-morphisms is a formal identity 1-morphism $\mathrm{id}_n$, then there is no 2-morphism unless both are formal identity 1-morphisms, in which case there is only one formal identity 2-morphism $\mathrm{id}_{\mathrm{id}_n}$.
\end{Definition}
Let $(M,\phi)$ be a representative of the 2-morphism $[(M,\phi)]$ from $\Sigma_t$ to $\Sigma_s$.
We define the \textit{standard boundary} $\Sigma(\phi)$ for the parametrization $\phi$ to be the surface obtained from $\Sigma_t$, $\Sigma_s^-$, $C(m)$, and $C(n)$ by identifying the boundaries via homeomorphisms of boundaries
\[g_{BL}:=\phi_L^{-1}\circ \phi_B |_{\partial_{L} \Sigma_t},\]
\[g_{BR}:=\phi_R^{-1}\circ \phi_B |_{\partial_{R} \Sigma_t},\]
\[g_{TL}:=\phi_L^{-1}\circ \phi_T |_{\partial_{L} \Sigma^{-}_s},\]
\[g_{TR}:=\phi_R^{-1}\circ \phi_T |_{\partial_{R} \Sigma^{-}_s}.\]
Hence
\[\Sigma(\phi)=(\Sigma_t \sqcup \Sigma^{-}_s) \cup_{\mbox{glue}}( C(m) \sqcup C(n)),\]
where ``glue'' means the identification of boundaries by the homeomorphisms $g_{BL}$, $g_{BR}$, $g_{TL}$, $g_{TR}$.
Then the parametrization $\phi$ of $M$ induces the homeomorphism, also denoted by $\phi$, from $\Sigma(\phi)$ to $\partial M$.
In addition to decorated 3-manifolds with specific parametrizations, we will add formal identities to the set of 2-morphisms.
The details are explained below when we deal with compositions.
We now introduce the notion of \textit{isotopy} in decorated cobordisms with corners.
Recall that if $\Sigma$ is a parametrized $d$-surface, then the cylinder $\Sigma \times [0, 1]$ has a natural structure of a decorated cobordism.
\begin{Definition}
Let $\phi$ and $\phi'$ be two parametrizations of $M$.
We say that $\phi=(\phi_B, \phi_T,\phi_L,\phi_R)$ and $\phi'=(\phi'_B, \phi'_T,\phi'_L,\phi'_R)$ are \textit{isotopic} if the following conditions are satisfied, where $S_*$ denotes the standard boundary piece corresponding to each $*=B$, $T$, $L$, $R$.
\begin{enumerate}
\item $\phi_*$ is equal to $\phi'_*$ on the boundary circles $\partial S_*$ for each $*$.
\item
There is a homeomorphism $F_*:S_* \times [0, 1]\to \partial_* M \times [0,1]$ for each $*$ satisfying the following conditions.
\begin{enumerate}
\item $F_*(x,0)=\phi_*(x) \times \{0\}$ and $F_*(x, 1)=\phi'_*(x)\times \{1\}$.
\item Its restriction to $\partial S_* \times [0,1]$ agrees with $\phi_* \times \mathrm{id}_{[0,1]}$.
\end{enumerate}
\end{enumerate}
\end{Definition}
If two parametrizations $\phi$ and $\phi'$ are isotopic, then we have $\Sigma(\phi)=\Sigma(\phi')$ since the gluing maps are the same by the condition (1).
Condition (2)(b) guarantees that we can combine the four homeomorphisms $F_*$ for $*=B$, $T$, $L$, $R$ into a homeomorphism
\[F:\Sigma(\phi) \times [0,1] \to \partial M \times [0,1]\]
such that $F(x, 0)=\phi(x) \times \{0\}$ and $F(x,1)=\phi'(x) \times \{1\}$.
\begin{lemma}\label{lem:isotopy equivalence}
Let $\phi$ and $\phi'$ be two parametrizations on a decorated cobordism $M$.
Assume that $ \phi $ and $ \phi' $ are isotopic.
Then $(M, \phi)$ is equivalent to $(M, \phi')$.
\end{lemma}
\begin{proof}
Let us just write $\Sigma$ for $\Sigma(\phi)=\Sigma(\phi')$.
Let $F:\Sigma\times [0,1] \to \partial M \times [0, 1]$ be a $d$-homeomorphism that gives an isotopy between $\phi$ and $\phi'$ so that we have $F(x, 0)=\phi(x)\times \{0\}$ and $F(x, 1)=\phi'(x)\times \{1\}$.
Consider a collar neighborhood $U=\partial M \times [-1,0]$ of $\partial M$ in $M$.
Also consider the space $M \cup_{\partial M \times \{0\}} (\partial M \times [0,1])$ obtained by attaching $\partial M \times [0,1]$ to $M$ along $\partial M\times \{0\}=\partial M$.
Let $f:M \to M \cup_{\partial M \times \{0\}} (\partial M \times [0,1])$ be the map that is the identity outside $U$ and sends each point $x \times t \in U$ with $x\in \partial M$ and $t\in[-1, 0]$ to the point
\[ x \times (2t+1)\in U \cup_{\partial M \times \{0\}} (\partial M \times [0,1]) \subset M \cup_{\partial M \times \{0\}} (\partial M \times [0,1]).\]
See Figure \ref{fig:callar f}.
It is easy to see that the map $f$ is a $d$-homeomorphism.
\begin{figure}[h]
\center
\includegraphics[width=4.2in]{Callarhomeof.pdf}
\caption{The $d$-homeomorphism $f$}
\label{fig:callar f}
\end{figure}
Now the proof of the lemma is summarized in the following commutative diagram.
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=3em, row sep=1em]
{ & $M$ & $M \cup_{\partial M \times \{0\}} (\partial M \times [0,1])$ &\\
$\Sigma$ & & & $M\cup_{\phi^{-1}} (\Sigma \times [0,1])$ \\
& $M$ & $M \cup_{\partial M \times \{0\}} (\partial M \times [0,1])$ & \\};
\path[->, font=\scriptsize]
(m-2-1) edge node[above] {$\phi$} (m-1-2);
\path[->, font=\scriptsize]
(m-1-2) edge node[above] {$f$} node[below]{$\sim$} (m-1-3);
\path[->, font=\scriptsize]
(m-1-3.east) edge node[auto] {$\mathrm{id}_M\cup (\phi^{-1} \times \mathrm{id}_{[0,1]})$}
node[below right, sloped] {$\sim$} (m-2-4.north west);
\path[->, font=\scriptsize]
(m-3-3) edge node[above]{$f^{-1}$} node[auto] {$\sim$} (m-3-2);
\path[->, font=\scriptsize]
(m-2-4.south west) edge node[auto] {$\mathrm{id}_M\cup F$}
node[above left, sloped] {$\sim$} (m-3-3.east);
\path[->, font=\scriptsize]
(m-2-1) edge node[below] {$\phi'$} (m-3-2);
\path[->, font=\scriptsize, dashed]
(m-1-2) edge node[above] {} (m-3-2);
\end{tikzpicture}
\end{center}
The collar homeomorphism gives the homeomorphisms at the top and the bottom of the diagram above.
The space $M \cup_{\partial M \times \{0\}} (\partial M \times [0,1])$ is further homeomorphic to $M\cup_{\phi^{-1}} (\Sigma \times [0,1])$, where we identify $\partial M$ with $\Sigma\times \{0\}$ via the inverse of the parametrization, $\phi^{-1}$.
The homeomorphism is given by the identity on $M$ and $[0, 1]$, and $\phi^{-1}$ from $\partial M$ to $\Sigma$.
The next homeomorphism from $M\cup_{\phi^{-1}} (\Sigma \times [0,1])$ to $M \cup (\partial M \times [0,1])$ is given by the identity on $M$ and $F$ on the rest.
The map $\mathrm{id}\cup F$ is compatible with the unions:
every element $x\in \partial M$ is identified with $(\phi^{-1}(x), 0) \in\Sigma \times \{0\}$.
This is, in turn, mapped to $F(\phi^{-1}(x), 0)=(\phi\circ\phi^{-1}(x), 0)=(x, 0) \in \partial M \times \{0\}$ and this is identified with $x=\mathrm{id}(x)\in \partial M$.
Thus the map $\mathrm{id} \cup F$ is well-defined.
Composing these homeomorphisms, we obtain a homeomorphism from $M$ to $M$ (the dashed arrow in the diagram).
Now we show that this homeomorphism commutes with the parametrizations: for each element $x\in \Sigma$, we have the following commutative diagram.
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=3em, row sep=3em]
{ & $\phi(x)\in M$ & $\phi(x) \times \{1\} \in M \cup (\partial M \times [0,1])$ \\
$x \in \Sigma$ & & $x\times \{1\} \in M\cup_{\phi^{-1}} (\Sigma \times [0,1])$ \\
& $\phi'(x)\in M$ & $F(x,1)=\phi'(x)\times \{1\} \in M \cup (\partial M \times [0,1])$ \\};
\path[|->, font=\scriptsize]
(m-2-1) edge node[above] {$\phi$} (m-1-2);
\path[|->, font=\scriptsize]
(m-1-2) edge (m-1-3);
\path[|->, font=\scriptsize]
(m-1-3.south) edge node[auto] {$\mathrm{id}\cup \phi^{-1} \cup \mathrm{id}$}
(m-2-3);
\path[|->, font=\scriptsize]
(m-3-3) edge (m-3-2);
\path[|->, font=\scriptsize]
(m-2-3) edge node[auto] {$\mathrm{id}\cup F$}
(m-3-3.north);
\path[|->, font=\scriptsize]
(m-2-1) edge node[below] {$\phi'$} (m-3-2);
\end{tikzpicture}
\end{proof}
\subsection{Proving that $\mathrm{Co}$ is a 2-category}
Now that we have defined the data $\mathrm{Co}$, in this section we show the following proposition:
\begin{prop}
The data $\mathrm{Co}$ is a 2-category.
\end{prop}
Our convention about 2-categories is summarized in Section \ref{sec:appendix:bicategory}.
To claim that $\mathrm{Co}$ is a 2-category, we need to define several composition rules for 1-morphisms and 2-morphisms among other things.
Let $\Sigma_t :\nstand{l} \to \nstand{m}$ and $\Sigma_s :\nstand{m} \to \nstand{n}$ be two 1-morphisms so that the target object of $\Sigma_t$ is the source object of $\Sigma_s$ (including the cases when $l=0$ or $n=0$).
We define the composition of $\Sigma_t$ and $\Sigma_s$ to be $\Sigma_{t\circ s}$, where $t\circ s$ is the composition of types defined in (\ref{equ:composition of types}).
This composition is associative since the composition of types is associative.
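Indeed, for types $t=(l,m;\,a_1,\dots,a_p)$, $s=(m,n;\,b_1,\dots,b_q)$, and $u=(n,k;\,c_1,\dots,c_r)$, both orders of composition yield
\[
(t\circ s)\circ u=(l,k;\,a_1,\dots,a_p,\,m-1,\,b_1,\dots,b_q,\,n-1,\,c_1,\dots,c_r)=t\circ (s\circ u).
\]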
For each object $\nstand{n}$ with an integer $n$, we let the formal symbol $\mathrm{id}_n$ act as an identity.
Remark: note that the composition of 1-morphisms is not a topological gluing of surfaces along boundaries.
As 1-morphism surfaces, we fixed the standard surfaces and we need that the composite of 1-morphisms is also a standard surface.
Also note that on the 1-morphism level, the boundary circles are not parametrized and hence there is no canonical homeomorphism of boundary circles.
\subsubsection{Vertical Gluing}
We define the vertical composition.
Let $[(M_1, \phi_1)]:\Sigma_{t_1}\Rightarrow \Sigma_{t_2}: \nstand{m} \to \nstand{n}$ and $[(M_2, \phi_2)]:\Sigma_{t_2}\Rightarrow \Sigma_{t_3}: \nstand{m} \to \nstand{n}$ be 2-morphisms of $\mathrm{Co}$ so that the target 1-morphism of $[M_1]$ is equal to the source 1-morphism of $[M_2]$.
We define the vertical composite $[M_1]\cdot [M_2]$ of $[M_1]$ and $[M_2]$.
The vertical composite will be a 2-morphism $[M_1]\cdot [M_2]: \Sigma_{t_1}\Rightarrow \Sigma_{t_3}: \nstand{m} \to \nstand{n}$.
Let us fix representatives $(M_1, \phi_1)$ and $(M_2, \phi_2)$ of these 2-morphisms.
We first glue $M_1$ and $M_2$ along the top boundary $\phi_1(\Sigma_{t_2}^-)$ of $M_1$ and the bottom boundary $\phi_2(\Sigma_{t_2})$ via the homeomorphism obtained from the composition of the following homeomorphisms
\[\partial M_1 \supset \phi_1(\Sigma_{t_2}^-) \xrightarrow{\phi_1^{-1}} \Sigma_{t_2}^- \xrightarrow{(\mathrm{mir})^{-1}} \Sigma_{t_2} \xrightarrow{\phi_2} \phi_2(\Sigma_{t_2}) \subset \partial M_2.\]
Denote the resulting manifold by $M_1 \cdot M_2$.
Now we need to construct a parametrization from $\Sigma_{t_1} \sqcup \Sigma_{t_3}^{-} \sqcup C(m) \sqcup C(n)$ to the boundary of $M_1\cdot M_2$.
There is a natural parametrization, which we denote by $\phi_1\cdot_{\text{v}} \phi_2$, obtained as follows.
The map $\phi_1\cdot_{\text{v}} \phi_2$ restricts to $\phi_1$ and $\phi_2$ on $\Sigma_{t_1}$ and $\Sigma_{t_3}^-$.
This means that $(\phi_1 \cdot_{\text{v}} \phi_2)_B=(\phi_1)_B$ and $(\phi_1 \cdot_{\text{v}} \phi_2)_T=(\phi_2)_T$.
Next we define $(\phi_1 \cdot_{\text{v}} \phi_2)_L$ as follows.
Let $C_1(m)$ and $C_2(m)$ be copies of $C(m)$ with $(\phi_i)_L: C_i(m) \to \partial_{L} M_i$ for $i=1,2$.
We identify the top boundary of the cylinder $C_1(m)$ with the bottom boundary of $C_2(m)$ via the homeomorphism
\begin{equation}\label{equ:zeta vertical gluing map}
\zeta:=(\phi_2)^{-1}_L (\phi_2)_B (\mathrm{mir})^{-1} (\phi_1)_T^{-1} (\phi_1)_L |_{\partial_{T} C_1(m)}.
\end{equation}
The following diagram summarizes the definition of $\zeta$.
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=3em, row sep=1em]
{
$C_1(m)$ & & & $C_2(m)$ \\
& $\Sigma_{t_2}^{-}$ & $\Sigma_{t_2}$ \\
$M_1$ & & & $M_2$ \\
};
\path[->, font=\scriptsize]
(m-1-1) edge node[auto] {$\zeta$} (m-1-4);
\path[->, font=\scriptsize]
(m-1-1) edge node[auto] {$(\phi_1)_L$}
(m-3-1);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$(\phi_1)_T$}
(m-3-1);
\path[->, font=\scriptsize]
(m-1-4) edge node[auto] {$(\phi_2)_L$}
(m-3-4);
\path[->, font=\scriptsize]
(m-2-3) edge node[auto] {$(\phi_2)_B$}
(m-3-4);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$(\mathrm{mir})^{-1}$} (m-2-3);
\end{tikzpicture}
\end{center}
Then there is a natural homeomorphism extending $(\phi_1)_L$ and $(\phi_2)_L$ from $C_1(m) \cup_{\zeta} C_2(m)$ to $\partial_{L} (M_1\cdot M_2)$.
We denote this map by $\phi_1 \cup_{\zeta} \phi_2$.
A problem is that $C_1(m) \cup_{\zeta} C_2(m)$ is not a standard surface.
However, this can be easily remedied thanks to the cylindrical structure of $C(m)=\nstand{m}\times [0,1]$.
Let us define the stretching map $s$ from $C(m)$ to $C_1(m) \cup_{\zeta} C_2(m)$ by sending $(x, t) \in C(m)$ to $(x, 2t) \in C_1(m)$ if $t \leq 1/2$ and to $(\zeta(x), 2t-1) \in C_2(m)$ if $t >1/2$.
We define the parametrization $(\phi_1 \cdot_{\text{v}} \phi_2)_L: C(m) \to \partial_{L}(M_1\cdot M_2)$ to be the composite $(\phi_1 \cup_{\zeta} \phi_2) \circ s$.
Similarly we define the right parametrization.
Next we need to show that a different choice of representative gives rise to an equivalent parametrized manifold.
Let $(N_1, \psi_1)$ and $(N_2, \psi_2)$ be another choice of representatives for $[(M_1, \phi_1)]$ and $[(M_2, \phi_2)]$, respectively.
By the definition of the equivalence, we have homeomorphisms $\alpha:N_1 \to M_1$ and $\beta:N_2\to M_2$ such that the parametrizations commute: $\phi_1= \alpha \circ \psi_1$ and $\phi_2= \beta \circ \psi_2$.
These homeomorphisms induce a homeomorphism $\alpha \cup \beta: N_1\cdot N_2 \to M_1\cdot M_2$ such that $(\alpha \cup \beta)|_{N_1}=\alpha$ and $(\alpha \cup \beta)|_{N_2}=\beta$.
This is well-defined since on the glued components, we have the following commutative diagram.
\[
\begin{CD}
\partial_{T} N_1 @> \psi_2\circ (\mathrm{mir})^{-1}\circ \psi_1^{-1} >> \partial_{B} N_2\\
@VV \alpha V @VV \beta V\\
\partial_{T} M_1 @> \phi_2 \circ (\mathrm{mir})^{-1} \circ \phi_1^{-1} >> \partial_{B} M_2\\
\end{CD}
\]
Then we claim that the homeomorphism $\alpha \cup \beta: N_1 \cdot N_2 \to M_1\cdot M_2$ commutes with the parametrizations: $(\alpha \cup \beta) \circ(\psi_1\cdot_{\text{v}} \psi_2)=\phi_1\cdot_{\text{v}} \phi_2$.
On the bottom and top boundaries, this is clear.
Let us check this equation on the left boundary.
Recall that the left parametrization is defined to be $(\phi_1 \cdot_{\text{v}} \phi_2)|_{C(m)}=(\phi_1 \cup_{\zeta} \phi_2)\circ s$, where $\zeta$ is the gluing map of two copies of cylinders $C_1(m)$ and $C_2(m)$ defined in (\ref{equ:zeta vertical gluing map}) and $s$ is the stretching map.
For the second pair $(N_1, \psi_1)$ and $(N_2, \psi_2)$, we also have $(\psi_1 \cdot_{\text{v}} \psi_2)|_{C(m)}=(\psi_1 \cup_{\eta} \psi_2)\circ s$, where $\eta$ is the gluing map of cylinders defined by
\[\eta:=(\psi_2)^{-1}_L (\psi_2)_B (\mathrm{mir})^{-1} (\psi_1)_T^{-1} (\psi_1)_L |_{\partial_{T} C_1(m)}.\]
Since we have the following commutative diagram, we have in fact $\zeta=\eta$.
The commutativity of the left rectangle in the diagram is the definition of $\eta$, and the commutativity of the right rectangle follows since $\alpha$ and $\beta$ commute with the parametrizations.
Note that the big rectangle is the definition of $\zeta$ since $\alpha \circ (\psi_1)_L=(\phi_1)_L$ and $\beta \circ (\psi_2)_L=(\phi_2)_L$.
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=7em, row sep=1em]
{
$\partial_{B} C_2(m)$ & $\partial_{L} N_2$ & $\partial_{L} M_2$ \\
$\partial_{T} C_1(m)$ & $\partial_{L} N_1$ & $\partial_{L} M_1$ \\
};
\path[->, font=\scriptsize]
(m-2-1) edge node[auto] {$\eta$}
(m-1-1);
\path[->, font=\scriptsize]
(m-2-1) edge node[below] {$(\psi_1)_L$}
(m-2-2);
\path[->, font=\scriptsize]
(m-1-1) edge node[auto] {$(\psi_2)_L$}
(m-1-2);
\path[->, font=\scriptsize]
(m-1-2) edge node[auto] {$\beta$}
(m-1-3);
\path[->, font=\scriptsize]
(m-2-2) edge node[below] {$\alpha$}
(m-2-3);
\path[->, font=\scriptsize]
(m-2-3) edge node[auto] {$\phi_2 (\mathrm{mir})^{-1} \phi_1^{-1}$}
(m-1-3);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$\psi_2 (\mathrm{mir})^{-1} \psi_1^{-1}$}
(m-1-2);
\end{tikzpicture}
\end{center}
The commutativity of this diagram also shows that we have
\[(\alpha \cup \beta) \circ (\psi_1 \cup_{\eta} \psi_2)\circ s=(\phi_1 \cup_{\zeta} \phi_2)\circ s\]
and hence we have
\[(\alpha \cup \beta) \circ (\psi_1 \cdot_{\text{v}} \psi_2)|_{C(m)}=(\phi_1 \cdot_{\text{v}} \phi_2)|_{C(m)}\]
Similarly for the right boundaries.
Thus the definition of the vertical composite
\[ [(M_1, \phi_1)]\cdot [(M_2, \phi_2)]:=[(M_1\cdot M_2, \phi_1 \cdot_{\text{v}} \phi_2)] \]
is independent of the choice of representatives.
\subsubsection{Associativity for the vertical composition}
Let $[(M_1, \phi_1)]:\Sigma_{t_1}\Rightarrow \Sigma_{t_2}:\nstand{m}\to \nstand{n}$, $[(M_2, \phi_2)]:\Sigma_{t_2}\Rightarrow \Sigma_{t_3}:\nstand{m}\to \nstand{n}$, and $[(M_3, \phi_3)]:\Sigma_{t_3}\Rightarrow \Sigma_{t_4}:\nstand{m}\to \nstand{n}$ be 2-morphisms.
The pairs $([M_1], [M_2])$ and $([M_2], [M_3])$ are vertically composable.
We show that associativity holds: the pair
\[\Bigl((M_1\cdot M_2)\cdot M_3, \quad (\phi_1 \cdot_{\text{v}} \phi_2)\cdot_{\text{v}} \phi_3 \Bigr)\]
is equivalent to the pair
\[\Bigl(M_1\cdot (M_2\cdot M_3), \quad \phi_1 \cdot_{\text{v}} (\phi_2 \cdot_{\text{v}} \phi_3) \Bigr).\]
Both sides are equal to $M_1 \cdot M_2 \cdot M_3$ as manifolds.
Thus it suffices to show that the parametrization $(\phi_1 \cdot_{\text{v}} \phi_2)\cdot_{\text{v}} \phi_3$ is isotopic to the parametrization $\phi_1 \cdot_{\text{v}} (\phi_2 \cdot_{\text{v}} \phi_3)$ by Lemma \ref{lem:isotopy equivalence}.
Checking this on the top and the bottom boundaries is again trivial.
Let us check the isotopy on the left boundary.
From the definition of vertical composition, we need to consider three copies of the cylinder $C(m)$.
Name them $C_i(m)$, $i=1,2,3$, one for each $M_i$.
Let $\zeta_1$ be the gluing map for the cylinders $C_1(m)$ and $C_2(m)$ and let $\zeta_2$ be the gluing map for the cylinders $C_2(m)$ and $C_3(m)$ as in (\ref{equ:zeta vertical gluing map}).
(Technically, $\zeta_i$ depends on the order of gluing of the three cylinders, but it is straightforward to see that $\zeta_i$ is in fact independent of the order.)
Now by definition $(\phi_1 \cdot_{\text{v}} \phi_2)\cdot_{\text{v}} \phi_3$ on the left cylinder $C(m)$ is equal to $[ (\phi_1 \cup_{\zeta_1} \phi_2)\circ s_1 \cup_{\zeta_2} \phi_3]\circ s_2 $, where $s_i$ is the stretching map corresponding to $\zeta_i$.
This is equal to $[\phi_1 \cup_{\zeta_1} \phi_2 \cup_{\zeta_2} \phi_3 ] \circ (s_1 \cup \mathrm{id} )\circ s_2$.
Now we see that the map $(s_1 \cup \mathrm{id} )\circ s_2$ is isotopic to the map $(\mathrm{id} \cup s_2)\circ s_1$.
(This is analogous to the proof of associativity for the fundamental group.)
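Explicitly, suppressing the $\nstand{m}$-coordinate (on which the gluing maps $\zeta_i$ act) and writing only the interval coordinate, the two composites are the piecewise-linear reparametrizations
\[
(s_1\cup \mathrm{id})\circ s_2:\ t\longmapsto
\begin{cases}
4t \in C_1(m) & \text{if } t\in[0,\tfrac{1}{4}],\\
4t-1 \in C_2(m) & \text{if } t\in[\tfrac{1}{4},\tfrac{1}{2}],\\
2t-1 \in C_3(m) & \text{if } t\in[\tfrac{1}{2},1],
\end{cases}
\qquad
(\mathrm{id}\cup s_2)\circ s_1:\ t\longmapsto
\begin{cases}
2t \in C_1(m) & \text{if } t\in[0,\tfrac{1}{2}],\\
4t-2 \in C_2(m) & \text{if } t\in[\tfrac{1}{2},\tfrac{3}{4}],\\
4t-3 \in C_3(m) & \text{if } t\in[\tfrac{3}{4},1],
\end{cases}
\]
and linearly interpolating between the two subdivisions of $[0,1]$ gives the required isotopy; only the existence of this isotopy is used below.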
Hence we see that $(\phi_1 \cdot_{\text{v}} \phi_2)\cdot_{\text{v}} \phi_3$ is isotopic to $\phi_1 \cdot_{\text{v}} (\phi_2 \cdot_{\text{v}} \phi_3)$ on the left boundary.
Similarly for the right boundary.
Therefore the associativity follows.
\subsubsection{Units for vertical composition}\label{subsubsec:unit for the vertical composition of Co}
From the above arguments, for each pair of objects $\nstand{m}$, $\nstand{n}$ of $\mathrm{Co}$, we have the semigroupoid (category without identity) $\mathrm{Co}(\nstand{m}, \nstand{n})$ whose objects are 1-morphisms from $\nstand{n}$ to $\nstand{m}$ of $\mathrm{Co}$ and whose morphisms are 2-morphisms between such 1-morphisms in $\mathrm{Co}$.
To make the semigroupoid $\mathrm{Co}(\nstand{m}, \nstand{n})$ a category, we need to specify the identity morphism for each object of $\mathrm{Co}(\nstand{m}, \nstand{n})$.
For each object $X$ of $\mathrm{Co}(\nstand{m}, \nstand{n})$ (a standard surface), we just use a formal unit $\mathrm{id}_{X}$ rather than constructing the identity cobordism.
Thus the formal unit $\mathrm{id}_{X}$ should act as the identity morphism in the category $\mathrm{Co}(\nstand{m}, \nstand{n})$.
Remark: one reason to use formal units is that otherwise we would need to construct a concrete cobordism with a parametrization.
The obvious candidate for the identity cobordism is the cylinder $\Sigma_{t}\times [0,1]$ over the standard surface, where $t$ is the type with $L(t)=m$ and $R(t)=n$.
The bottom boundary $\Sigma_t\times \{0\}$ can be identified with the standard surface $\Sigma_t$ and the identity map can be used as a parametrization.
The top boundary $\Sigma_t \times \{1\}$ also can be identified with $\Sigma_t$ and $\mathrm{mir}:\Sigma_t^- \to \Sigma_t$ can be used as a parametrization.
The problem is to define parametrizations for the left and the right boundaries.
We would need to construct parametrization homeomorphisms from $C(m)$ and $C(n)$ to the side boundaries of $\Sigma_t \times [0,1]$, which are cylinders over the boundary circles of $\Sigma_t$.
However, there is no canonical homeomorphism at hand.
Since this does not seem to give more insight into our theory, we avoid the burden by introducing the formal units.
On the other hand, there is no obstruction in the case $m=n=0$ since there are no side boundaries.
Using the cylindrical neighborhood, we can prove that the cylinder over a standard surface is in fact the identity.
\subsubsection{Horizontal composition}
Next, we define the horizontal composition of 2-morphisms.
Let $X=\nstand{l}$, $Y=\nstand{m}$ and $Z=\nstand{n}$ be objects of $\mathrm{Co}$.
Let $\Sigma_{t_i}: X \to Y$ and $\Sigma_{s_i}: Y \to Z$ be 1-morphisms for $i=1,2$.
Let $[M]: \Sigma_{t_1} \Rightarrow \Sigma_{t_2}: X \to Y$ and $[M']:\Sigma_{s_1} \Rightarrow \Sigma_{s_2}: Y \to Z$ be 2-morphisms.
We define the horizontal composite $[M]\circ [M']$ of $[M]$ and $[M']$ as follows.
The composite $[M]\circ [M']$ will be a 2-morphism from $\Sigma_{t_1\circ s_1}: X\to Z$ to $\Sigma_{t_2 \circ s_2}: X \to Z$.
Pick representatives $(M,\phi)$ and $(M', \phi') $ for $[M]$ and $[M']$, respectively.
Here $\phi: \Sigma(\phi) \to \partial M$ and $\phi': \Sigma(\phi') \to \partial M'$ are parametrizations of boundaries of decorated cobordisms $M$ and $M'$, respectively.
We glue $M$ and $M'$ by identifying $\partial_{R} M$ and $\partial_{L} M'$ via the homeomorphism $\phi'_L \circ \mathrm{ref} \circ \phi_R^{-1}: \partial_{R} M \to \partial_{L} M'$.
Since this homeomorphism is the composite of three orientation-reversing maps, it is orientation reversing.
In the sequel, we omit writing the map $\mathrm{ref}$ to simplify expressions.
Denote the resulting manifold by
\[ M\circ M'=M\cup_{\phi'_L \circ \phi_R^{-1}}M'.\]
The next task is to construct a parametrization of $M\circ M'$ from $C(l)\sqcup C(n) \sqcup\Sigma_{t_1\circ s_1} \sqcup\Sigma_{t_2\circ s_2}^-$; the equivalence class of this pair will then be the horizontal composite.
On the left and right boundaries, the parametrizations are just $\phi$ and $\phi'$, respectively.
We now define a parametrization homeomorphism from the standard surface $\Sigma_{t_1\circ s_1}$ to the bottom boundary of $M\circ M'$.
This is not a straightforward task because the topological gluing of standard surfaces is not a standard surface.
First let us write
\[ g:=\phi'^{-1}_{B} \circ \phi'_L \circ \phi_{R}^{-1}\circ \phi_B: \partial_{R} \Sigma_{t_1} \to \partial_{L} \Sigma_{s_1}.\]
Gluing via this homeomorphism we obtain the surface $\Sigma_{t_1}\cup_g \Sigma_{s_1}$.
Define $\Phi=\Phi(\phi_B,\phi'_B): \Sigma_{t_1} \cup_g \Sigma_{s_1} \to \partial_{B} (M \circ M')$ by
\[\Phi(x)=\Phi(\phi_B,\phi'_B)(x)=
\begin{cases}
\phi_B(x) & \mbox{ if } x \in \Sigma_{t_1} \\
\phi'_B(x) & \mbox{ if } x\in \Sigma_{s_1}
\end{cases}
\]
This is well-defined since $\partial_{B} (M \circ M')=\partial_{B} M \cup_{\phi'_L \circ \phi_R^{-1}} \partial_{B} M'$.
Next, because the surface $\Sigma_{t_1}\cup_g \Sigma_{s_1}$ is not a standard surface, we define a homeomorphism $\Sigma_{t_1 \circ s_1} \to \Sigma_{t_1}\cup_g \Sigma_{s_1}$.
This homeomorphism will depend on several choices.
However, two different choices give homeomorphisms that differ only by an isotopy.
The standard surfaces $\Sigma_{t_1}$ and $\Sigma_{s_1}$ are by definition sitting in $\mathbb{R}^3$.
There is a translation map $\tau$ of $\mathbb{R}^3$ that maps $\partial_{R} \Sigma_{t_1}$ to $\partial_{L} \Sigma_{s_1}$.
Now both $\Sigma_{t_1\circ s_1}$ and $\tau(\Sigma_{t_1})\cup \Sigma_{s_1}$ are in $\mathbb{R}^3$ and they are homeomorphic.
We fix a homeomorphism $h(t_1, s_1):\Sigma_{t_1\circ s_1 }\to \tau(\Sigma_{t_1})\cup \Sigma_{s_1} $ as follows.
Recall that the standard surface $\Sigma_t$ is obtained from the boundary of the standard handlebody $U_t$ in $\mathbb{R}^3$ with several disks removed.
There is an ambient isotopy $F: \mathbb{R}^3 \times [0,1] \to \mathbb{R}^3$ between the handlebodies $U_{t_1 \circ s_1}$ and $\tau(U_{t_1})\cup U_{s_1}$ which maps the ribbon graph in one to the other.
When $F$ is restricted to $\partial_{L} \Sigma_{t_1 \circ s_1}\subset U_{t_1 \circ s_1}$, it is just a translation in the $x$-coordinate of $\mathbb{R}^3$; when $F$ is restricted to $\partial_{R} \Sigma_{t_1 \circ s_1}\subset U_{t_1 \circ s_1}$, it is also a translation in the $x$-direction, possibly by a different amount.
Then we define $h(t_1, s_1)$ to be the restriction of $F$ to $\Sigma_{t_1 \circ s_1}$.
There is a canonical homeomorphism $f_{\tau}:\tau(\Sigma_{t_1}) \cup \Sigma_{s_1} \to \Sigma_{t_1}\cup_{\tau} \Sigma_{s_1} $.
Here the latter space is obtained by regarding $\tau$ as a homeomorphism from $\partial_{R} \Sigma_{t_1}$ to $\partial_{L} \Sigma_{s_1}$ and gluing $\Sigma_{t_1}$ and $\Sigma_{s_1}$ along $\tau$.
Finally we need to choose a homeomorphism $\Sigma_{t_1}\cup_{\tau} \Sigma_{s_1} \to \Sigma_{t_1}\cup_{g} \Sigma_{s_1} $, which seems to be the most arbitrary choice.
First, we have the following result, which follows from the Lemma in Appendix III of \cite{Turaev10}.
\begin{lemma}\label{lem:Turaev Appendix III}
Let $X=\partial_{R} \Sigma_t$ and let $f:X \to X$ be a homeomorphism preserving the orientation and the base points $\{x_i\}$ in each component of $X$.
Then there exists a homeomorphism $\Psi: \Sigma_t \to \Sigma_t$ satisfying the following.
\begin{enumerate}
\item The restriction $\Psi|_{X}=f$.
\item The homeomorphism $\Psi$ is the identity on $(\Sigma_t \setminus \mathrm{int}(U))\cup (\{x_i\}\times [0,1])$, where $U=X \times [0,1]$ is a cylindrical collar neighborhood of $X=X\times \{0\}$ in $\Sigma_t$.
\item The homeomorphism $\Psi$ carries $X \times \{t\} \subset U$ into $X \times \{t\}$ for all $t \in [0,1]$.
\end{enumerate}
Any two such homeomorphisms $\Psi:\Sigma_t \to \Sigma_t$ are isotopic via an isotopy constant on $\partial \Sigma_t$.
\end{lemma}
Now from two homeomorphisms $\tau$ and $g$ from $\partial_{R} \Sigma_{t_1}$ to $\partial_{L} \Sigma_{s_1}$ we obtain the self homeomorphism $f=\tau^{-1}\circ g$ of $X=\partial_{R} \Sigma_{t_1}$.
Lemma \ref{lem:Turaev Appendix III} yields a homeomorphism $\Psi(f):\Sigma_{t_1} \to \Sigma_{t_1}$ that extends $f=\tau^{-1}\circ g$ and any two such homeomorphisms are isotopic.
We fix one such $\Psi(f)$.
Thus $\Psi(f)$ induces a homeomorphism $\Psi(g, \tau):\Sigma_{t_1}\cup_{\tau} \Sigma_{s_1} \to \Sigma_{t_1}\cup_{g} \Sigma_{s_1} $, and a different choice of $\Psi(f)$ induces an isotopic homeomorphism $\Psi(g, \tau)$.
Then we define the parametrization homeomorphism $\phi_B\circ_h \phi'_B$ of $M \circ M'$ to be
\begin{equation}\label{equ:horizontal parametrizations}
\phi_B\circ_h \phi'_B:=\Phi(\phi_B, \phi'_B)\circ \Psi(g, \tau) \circ f_{\tau} \circ h(t_1, s_1): \Sigma_{t_1 \circ s_1} \to \partial_{B} (M \circ M').
\end{equation}
Similarly we obtain the top parametrization $\phi_T\circ_h \phi'_T:\Sigma_{t_2\circ s_2}^{-} \to \partial_{T} (M\circ M')$.
We have now obtained the pair $(M\circ M',\phi\circ_h\phi')$.
Even after choosing the representatives $M$ and $M'$, several choices were made in defining the parametrization.
Namely, $h(t_1, s_1)$ and $\Psi(g, \tau)$ are defined up to isotopy fixing boundaries.
But this ambiguity does not affect the equivalence class by virtue of Lemma \ref{lem:isotopy equivalence}.
The next lemma shows that a different choice of representatives of a 2-morphism gives an equivalent pair.
\begin{lemma}
Suppose that $(M,\phi)$ is equivalent to $(N, \psi)$ and $(M', \phi')$ is equivalent to $(N',\psi')$. Then $(M\circ M', \phi\circ_h \phi')$ is equivalent to $(N\circ N', \psi\circ_h \psi')$.
\end{lemma}
\begin{proof}
By the definition of the equivalence, there are homeomorphisms $\alpha:N \to M$ and $\beta:N'\to M'$ such that $\phi= \alpha \circ \psi$ and $\phi'=\beta \circ \psi'$.
They induce a homeomorphism $\alpha \cup \beta : N\circ N' \to M \circ M'$.
This homeomorphism is well-defined since on the common boundary we have the following commutative diagram:
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=3em, row sep=1.5em]
{$C(m)$ & & & $C(m)$ \\
& $N$ & $N'$ &\\
& $M$ & $M'$ &\\
$C(m)$ & & & $C(m)$\\};
\path[->, font=\scriptsize]
(m-1-1) edge node[auto] {$=$} (m-1-4);
\path[->, font=\scriptsize]
(m-1-1) edge node[auto] {$=$} (m-4-1);
\path[->, font=\scriptsize]
(m-1-1) edge node[auto] {$\psi_R$} (m-2-2);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$\psi'_L \circ \psi^{-1}_R$} (m-2-3);
\path[->, font=\scriptsize]
(m-2-3) edge node[auto] {$\beta$} (m-3-3);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$\alpha$} (m-3-2);
\path[->, font=\scriptsize]
(m-3-2) edge node[auto] {$\phi'_L \circ \phi^{-1}_R$} (m-3-3);
\path[->, font=\scriptsize]
(m-1-4) edge node[auto] {$=$} (m-4-4);
\path[->, font=\scriptsize]
(m-4-1) edge node[auto] {$=$} (m-4-4);
\path[->, font=\scriptsize]
(m-4-1) edge node[auto] {$\phi_R$} (m-3-2);
\path[->, font=\scriptsize]
(m-4-4) edge node[auto] {$\phi_L'$} (m-3-3);
\path[->, font=\scriptsize]
(m-1-4) edge node[auto] {$\psi'_L$} (m-2-3);
\end{tikzpicture}
\end{center}
We need to show that
\begin{equation}\label{equ:horizontal parametrization commutes}
\phi \circ_{\text{h}} \phi'=(\alpha \cup \beta)\circ (\psi \circ_{\text{h}} \psi').
\end{equation}
The right and the left boundary parts follow immediately from the definitions of $\alpha$ and $\beta$.
Let us check this equality on the bottom boundary.
By the remark before the lemma, we can choose the parametrizations as follows.
\[\phi_B\circ_h \phi'_B:=\Phi(\phi_B, \phi'_B)\circ \Psi(g, \tau) \circ f_{\tau} \circ h(t_1, s_1): \Sigma_{t_1 \circ s_1} \to \partial_{B} (M \circ M')\]
and
\[\psi_B\circ_h \psi'_B:=\Phi(\psi_B, \psi'_B)\circ \Psi(g', \tau) \circ f_{\tau} \circ h(t_1, s_1): \Sigma_{t_1 \circ s_1} \to \partial_{B} (N \circ N'),\]
where
\[ g:=\phi'^{-1}_{B} \circ \phi'_L \circ \phi_{R}^{-1}\circ \phi_B: \partial_{R} \Sigma_{t_1} \to \partial_{L} \Sigma_{s_1}\]
and
\[ g':=\psi'^{-1}_{B} \circ \psi'_L \circ \psi_{R}^{-1}\circ \psi_B: \partial_{R} \Sigma_{t_1} \to \partial_{L} \Sigma_{s_1}.\]
Because we have homeomorphisms commuting with the parametrizations, we have the following commutative diagram, and hence in fact $g=g'$.
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=3em, row sep=2em]
{
& $M$ & & $M'$ & \\
$\Sigma_{t_1}$ & & $C(m)$& & $ \Sigma_{s_1}$ \\
& $N$ & & $N'$ & \\};
\path[->, font=\scriptsize]
(m-2-1) edge node[auto] {$\phi_B$} (m-1-2);
\path[->, font=\scriptsize]
(m-2-1) edge node[left]{$\psi_B$} (m-3-2);
\path[->, font=\scriptsize]
(m-2-3) edge node[auto] {$\phi_R$}
(m-1-2);
\path[->, font=\scriptsize]
(m-2-3) edge node[below] {$\psi_R$} (m-3-2);
\path[->, font=\scriptsize]
(m-2-3) edge node[auto] {$\phi'_L$}
(m-1-4);
\path[->, font=\scriptsize]
(m-2-3) edge node[auto] {$\psi'_L$}
(m-3-4);
\path[->, font=\scriptsize]
(m-2-5) edge node[auto] {$\phi'_B$}
(m-1-4);
\path[->, font=\scriptsize]
(m-2-5) edge node[auto] {$\psi'_B$}
(m-3-4);
\path[->, font=\scriptsize]
(m-3-2) edge node[auto] {$\alpha$}
(m-1-2);
\path[->, font=\scriptsize]
(m-3-4) edge node[auto] {$\beta$}
(m-1-4);
\end{tikzpicture}
\end{center}
We also have the following commutative diagram, and hence the equality (\ref{equ:horizontal parametrization commutes}) holds on the bottom boundary.
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=6em, row sep=2em]
{ & & $\partial_{B}(M\circ M')$ \\
$\Sigma_{t_1\circ s_1}$ & $\Sigma_{t_1} \cup_{g} \Sigma_{s_1}$ \\
& & $\partial_{B} (N \circ N')$ \\};
\path[->, font=\scriptsize]
(m-2-1) edge node[auto] {$\Psi(g, \tau) f_{\tau}h(t_1, s_1)$}
(m-2-2);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$\Phi(\phi_B, \phi'_{B})$}
(m-1-3);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$\Phi(\psi_B, \psi'_{B})$}
(m-3-3);
\path[->, font=\scriptsize]
(m-1-3) edge node[auto] {$\alpha \cup \beta $}
(m-3-3);
\end{tikzpicture}
\end{center}
Similarly for the top boundary.
Hence we have $\phi \circ_{\text{h}} \phi'=(\alpha \cup \beta )\circ (\psi \circ_{\text{h}} \psi')$ and conclude that
$(M\circ M', \phi\circ_h \phi')$ is equivalent to $(N\circ N', \psi\circ_h \psi')$.
\end{proof}
\subsubsection{Associativity for horizontal composition}
Let $(M, \phi)$, $(M', \phi')$, and $(M'', \phi'')$ be representatives of 2-morphisms of $\mathrm{Co}$ such that the pairs $(M, \phi)$, $(M', \phi')$ and $(M', \phi')$, $(M'', \phi'')$ are horizontally composable.
We show that the horizontal composition of $\mathrm{Co}$ is associative.
It suffices to show that the map $(\phi\circ_{\text{h}} \phi') \circ_{\text{h}} \phi''$ is isotopic to $\phi \circ_{\text{h}} (\phi' \circ_{\text{h}} \phi'')$.
We check this on the bottom part.
Recall the definition of horizontal composition of parametrizations from (\ref{equ:horizontal parametrizations}).
For the sake of simplicity, we use the letter $\tau$ for translations in $\mathbb{R}^3$, and we
denote by $\bar h$ the composite homeomorphism $f_{\tau}\circ h(t, s): \Sigma_{t\circ s} \to \Sigma_{t} \cup_{\tau} \Sigma_{s}$.
Thus the maps $\tau$ and $\bar h$ should be understood from the context.
Let $g_1$ and $g_2$ be the homeomorphisms defined by the following commutative diagram.
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=3em, row sep=1em]
{
$\partial_{R} \Sigma_{t}$ & & $\partial_{L} \Sigma_{t'}, \partial_{R} \Sigma_{t'}$& & $ \partial_{L} \Sigma_{t''}$ \\
& $C(m)$ & & $C(n)$ & \\
$\partial_{B} M$ & & $\partial_{B} M'$& & $ \partial_{B} M''$ \\
};
\path[->, font=\scriptsize]
(m-1-1) edge node[auto] {$g_1$} (m-1-3);
\path[->, font=\scriptsize]
(m-1-3) edge node[auto]{$g_2$} (m-1-5);
\path[->, font=\scriptsize]
(m-1-1) edge node[auto] {$\phi_B$}
(m-3-1);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$\phi_R$}
(m-3-1);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$\phi'_L$}
(m-3-3);
\path[->, font=\scriptsize]
(m-2-4) edge node[auto] {$\phi'_R$}
(m-3-3);
\path[->, font=\scriptsize]
(m-2-4) edge node[auto] {$\phi''_L$}
(m-3-5);
\path[->, font=\scriptsize]
(m-1-5) edge node[auto] {$\phi''_B$}
(m-3-5);
\path[->, font=\scriptsize]
(m-1-3) edge node[auto] {$\phi'_B$}
(m-3-3);
\end{tikzpicture}
\end{center}
Thus $g_1$ and $g_2$ are the gluing homeomorphisms of standard surfaces induced by the parametrizations.
To calculate $(\phi\circ_{\text{h}} \phi') \circ_{\text{h}} \phi''$ and $\phi \circ_{\text{h}} (\phi' \circ_{\text{h}} \phi'')$, we also need the following gluing homeomorphisms $g_3$ and $g_4$, respectively.
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=7em, row sep=1em]
{
$\partial_{R} \Sigma_{t}$ & & $\partial_{L} \Sigma_{t'\circ t''}$\\
& $C(m)$ &\\
$\partial_{B} M$ & & $\partial_{B} (M'\circ M'') $ \\
};
\path[->, font=\scriptsize]
(m-1-1) edge node[auto] {$g_3$} (m-1-3);
\path[->, font=\scriptsize]
(m-1-1) edge node[auto] {$\phi_B$}
(m-3-1);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$\phi_R$}
(m-3-1);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$(\phi' \circ_{\text{h}} \phi'')_L=\phi'_L$}
(m-3-3);
\path[->, font=\scriptsize]
(m-1-3) edge node[auto] {$(\phi' \circ_{\text{h}} \phi'')_B$}
(m-3-3);
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=7em, row sep=1em]
{
$\partial_{R} (\Sigma_{t\circ t'})$ & & $\partial_{L} \Sigma_{ t''}$\\
& $C(n)$ & \\
$\partial_{B} (M\circ M') $ & & $\partial_{B} M'' $ \\
};
\path[->, font=\scriptsize]
(m-1-1) edge node[auto] {$g_4$} (m-1-3);
\path[->, font=\scriptsize]
(m-1-1) edge node[auto] {$(\phi\circ_{\text{h}} \phi')_B$}
(m-3-1);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$(\phi\circ_{\text{h}} \phi')_R=\phi_R$}
(m-3-1);
\path[->, font=\scriptsize]
(m-2-2) edge node[auto] {$ \phi''_L$}
(m-3-3);
\path[->, font=\scriptsize]
(m-1-3) edge node[auto] {$ \phi''_B$}
(m-3-3);
\end{tikzpicture}
\end{center}
Let $\Psi_i:=\Psi(g_i, \tau)$.
Note that since $(\phi' \circ_{\text{h}} \phi'')_B |_{\partial_{L} (\Sigma_{t' \circ t''})}=\phi'_B \circ \Psi_2\circ \bar h |_{\partial_{L}(\Sigma_{t'\circ t''})}$, we have $\Psi_2 \circ \bar h \circ g_3=g_1$.
Moreover, $\Psi_2 \circ \bar h$ is the identity on the boundary $\partial_{L} \Sigma_{t' \circ t''}$; thus in fact we have $g_1=g_3$, and hence $\Psi_1=\Psi_3$.
Also since $(\phi \circ_{\text{h}} \phi')_B|_{\partial_{R} (\Sigma_{t \circ t'})}=\phi'_B \circ \Psi_1\circ \bar h |_{\partial_{R}(\Sigma_{t \circ t'})}$, we have $g_2\circ \Psi_1 \circ \bar h =g_4$.
Unwinding the definitions of the maps, we obtain the following diagram, where the left path is the parametrization $\phi_B \circ_{\text{h}} (\phi'_B \circ_{\text{h}} \phi''_B)$ and the right path is the parametrization $(\phi_B \circ_{\text{h}} \phi'_B)\circ_{\text{h}} \phi''_B$.
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=2em, row sep=1em]
{
& $\Sigma_{t\circ t' \circ t''}$& \\
$\Sigma_t \cup_{\tau} \Sigma_{t' \circ t''}$ & & $\Sigma_{t \circ t'} \cup_{\tau} \Sigma_{t''}$ \\
$\Sigma_t \cup_{g_3} \Sigma_{t' \circ t''}$ & & $\Sigma_{t \circ t'} \cup_{g_4} \Sigma_{t''} $ \\
$\Sigma_t \cup_{\bar h g_3} (\Sigma_{t'} \cup_{\tau} \Sigma_{t''})$ & &$(\Sigma_{t} \cup_{\tau} \Sigma_{t'}) \cup_{g_4 \bar{h}^{-1}} \Sigma_{t''} $ \\
$\Sigma_t \cup_{\Psi_2 \bar h g_3} (\Sigma_{t'} \cup_{g_2} \Sigma_{t''})$ & & $(\Sigma_{t} \cup_{g_1} \Sigma_{t'}) \cup_{g_4 \bar{h}^{-1} \Psi^{-1}_1} \Sigma_{t''} $ \\
&$\partial_{B} (M\circ M' \circ M'')$ & \\
};
\path[->, font=\scriptsize]
(m-1-2) edge node[above] {$\bar h$} (m-2-1);
\path[->, font=\scriptsize]
(m-1-2) edge node[auto]{$\bar h$} (m-2-3);
\path[->, font=\scriptsize]
(m-2-1) edge node[auto] {$\Psi_3$} (m-3-1);
\path[->, font=\scriptsize]
(m-3-1) edge node[auto]{$\bar h$} (m-4-1);
\path[->, font=\scriptsize]
(m-4-1) edge node[auto]{$\Psi_2$} (m-5-1);
\path[->, font=\scriptsize]
(m-2-3) edge node[auto] {$\Psi_4$} (m-3-3);
\path[->, font=\scriptsize]
(m-3-3) edge node[auto]{$\bar h$} (m-4-3);
\path[->, font=\scriptsize]
(m-4-3) edge node[auto]{$\Psi_1$} (m-5-3);
\path[->, font=\scriptsize]
(m-5-1) edge node[auto] {$\phi_B \cup \phi'_B \cup \phi''_B$} (m-6-2);
\path[->, font=\scriptsize]
(m-5-3) edge node[auto]{$\phi_B \cup \phi'_B \cup \phi''_B$} (m-6-2);
\end{tikzpicture}
\end{center}
On the left path, we have $\bar h \circ \Psi_3=\Psi_3\circ \bar h$ since $\Psi_3$ changes only the $\Sigma_t$ part and $\bar h$ changes only the $\Sigma_{t'\circ t''}$ part.
On the right path, $\Psi_4$ and $\bar h$ do not commute.
This diagram can also be written as follows.
Let $\tilde \Psi_4:=\bar h \Psi_4 \bar{h}^{-1}$.
Then we have $\bar h \circ \Psi_4=\tilde \Psi_4 \circ \bar h$ and thus we have the following diagram.
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=2em, row sep=1em]
{
& $\Sigma_{t\circ t' \circ t''}$& \\
$\Sigma_t \cup_{\tau} \Sigma_{t' \circ t''}$ & & $\Sigma_{t \circ t'} \cup_{\tau} \Sigma_{t''}$ \\
$\Sigma_t \cup_{\bar h \tau} (\Sigma_{t'} \cup_{\tau} \Sigma_{t''})$ & & $(\Sigma_{t} \cup_{\tau} \Sigma_{t'}) \cup_{\tau} \Sigma_{t''} $ \\
$\Sigma_t \cup_{\bar h g_3} (\Sigma_{t'} \cup_{\tau} \Sigma_{t''})$ & &$(\Sigma_{t} \cup_{\tau} \Sigma_{t'}) \cup_{g_4 \bar{h}^{-1}} \Sigma_{t''} $ \\
$\Sigma_t \cup_{\Psi_2 \bar h g_3} (\Sigma_{t'} \cup_{g_2} \Sigma_{t''})$ & & $(\Sigma_{t} \cup_{g_1} \Sigma_{t'}) \cup_{g_4 \bar{h}^{-1} \Psi^{-1}_1} \Sigma_{t''} $ \\
&$\partial_{B} (M\circ M' \circ M'')$ & \\
};
\path[->, font=\scriptsize]
(m-1-2) edge node[above] {$\bar h$} (m-2-1);
\path[->, font=\scriptsize]
(m-1-2) edge node[auto]{$\bar h$} (m-2-3);
\path[->, font=\scriptsize]
(m-2-1) edge node[auto] {$\bar h$} (m-3-1);
\path[->, font=\scriptsize]
(m-3-1) edge node[auto]{$\Psi_3$} (m-4-1);
\path[->, font=\scriptsize]
(m-4-1) edge node[auto]{$\Psi_2$} (m-5-1);
\path[->, font=\scriptsize]
(m-2-3) edge node[auto] {$\bar h$} (m-3-3);
\path[->, font=\scriptsize]
(m-3-3) edge node[auto]{$\tilde \Psi_4$} (m-4-3);
\path[->, font=\scriptsize]
(m-4-3) edge node[auto]{$\Psi_1$} (m-5-3);
\path[->, font=\scriptsize]
(m-5-1) edge node[auto] {$\phi_B \cup \phi'_B \cup \phi''_B$} (m-6-2);
\path[->, font=\scriptsize]
(m-5-3) edge node[auto]{$\phi_B \cup \phi'_B \cup \phi''_B$} (m-6-2);
\end{tikzpicture}
\end{center}
Investigating the gluing maps, we obtain the following diagram.
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=2em, row sep=1em]
{
& $\Sigma_{t\circ t' \circ t''}$& \\
$\Sigma_t \cup_{\tau} \Sigma_{t' \circ t''}$ & & $\Sigma_{t \circ t'} \cup_{\tau} \Sigma_{t''}$ \\
$\Sigma_t \cup_{ \tau} (\Sigma_{t'} \cup_{\tau} \Sigma_{t''})$ & & $(\Sigma_{t} \cup_{\tau} \Sigma_{t'}) \cup_{\tau} \Sigma_{t''} $ \\
$\Sigma_t \cup_{\bar h g_3} (\Sigma_{t'} \cup_{\tau} \Sigma_{t''})$ & &$(\Sigma_{t} \cup_{\tau} \Sigma_{t'}) \cup_{g_4 \bar{h}^{-1}} \Sigma_{t''} $ \\
$\Sigma_t \cup_{g_1} (\Sigma_{t'} \cup_{g_2} \Sigma_{t''})$ & & $(\Sigma_{t} \cup_{g_1} \Sigma_{t'}) \cup_{g_2} \Sigma_{t''} $ \\
&$\partial_{B} (M\circ M' \circ M'')$ & \\
};
\path[->, font=\scriptsize]
(m-1-2) edge node[above] {$\bar h$} (m-2-1);
\path[->, font=\scriptsize]
(m-1-2) edge node[auto]{$\bar h$} (m-2-3);
\path[->, font=\scriptsize]
(m-3-1) edge node[above] {$=$} (m-3-3);
\path[->, font=\scriptsize]
(m-2-1) edge node[auto] {$\bar h$} (m-3-1);
\path[->, font=\scriptsize]
(m-3-1) edge node[auto]{$\Psi_3$} (m-4-1);
\path[->, font=\scriptsize]
(m-4-1) edge node[auto]{$\Psi_2$} (m-5-1);
\path[->, font=\scriptsize]
(m-2-3) edge node[auto] {$\bar h$} (m-3-3);
\path[->, font=\scriptsize]
(m-3-3) edge node[auto]{$\tilde \Psi_4$} (m-4-3);
\path[->, font=\scriptsize]
(m-4-3) edge node[auto]{$\Psi_1$} (m-5-3);
\path[->, font=\scriptsize]
(m-5-1) edge node[auto] {$\phi_B \cup \phi'_B \cup \phi''_B$} (m-6-2);
\path[->, font=\scriptsize]
(m-5-3) edge node[auto]{$\phi_B \cup \phi'_B \cup \phi''_B$} (m-6-2);
\end{tikzpicture}
\end{center}
Here the top pentagon is commutative up to isotopy: this is again similar to the proof of associativity of the fundamental group.
The bottom heptagon is also commutative up to isotopy.
To see this, first note that $\tilde \Psi_4$ is the identity on $\Sigma_t \subset \Sigma_t \cup_{\tau} \Sigma_{t'}$ since $\Psi_4$ is the identity outside of a collar neighborhood of $\partial_{R} \Sigma_{t \circ t'} \subset \bar {h}^{-1} (\Sigma_{t'})$.
Thus the restriction of $\tilde \Psi_4$ to the $\Sigma_{t'}$ part is isotopic to $\Psi_2$ by Lemma \ref{lem:Turaev Appendix III}, since they agree on the boundary $\partial_{R} \Sigma_{t'}$.
Since $\Psi_1$ and $\Psi_2$ commute, the heptagon is commutative up to isotopy.
By Lemma \ref{lem:isotopy equivalence}, the associativity on the level of classes holds.
\subsubsection{Units for horizontal composition}
As in the case of vertical composition, we use formal units for horizontal composition.
This means that for each object $\nstand{n}$ of $\mathrm{Co}$, we add the formal identity object $\mathrm{id}_n$ to $\mathrm{Co}(\nstand{n}, \nstand{n})$, and we also add the formal identity 2-morphism $\mathrm{id}_{\mathrm{id}_n}$ on the object $\mathrm{id}_n$.
These formal identities act as identities for the horizontal composition.
\subsubsection{Interchange law}
We check the interchange law.
For four 2-morphisms
\begin{align*}
&[(M_1, \phi_1)]: \Sigma_{t_1} \Rightarrow \Sigma_{t_2}: \nstand{l} \to \nstand{m}, \qquad &[(M_2, \phi_2)]: \Sigma_{t_2} \Rightarrow \Sigma_{t_3}: \nstand{l} \to \nstand{m},\\
&[(M_1', \psi_1)]: \Sigma_{s_1} \Rightarrow \Sigma_{s_2}: \nstand{m} \to \nstand{n}, \qquad &[(M_2', \psi_2)]: \Sigma_{s_2} \Rightarrow \Sigma_{s_3}: \nstand{m} \to \nstand{n},
\end{align*}
\begin{center}
\begin{tikzcd}
\nstand{l} \arrow[bend left=50]{r}[name=U,below]{}{\Sigma_{t_1}}
\arrow{r}[name=M,below]{}{\Sigma_{t_2}}
\arrow[bend right=50]{r}[name=D]{}{\Sigma_{t_3}}
&\nstand{m}
\arrow[bend left=50]{r}[name=U',below]{}{\Sigma_{s_1}}
\arrow{r}[name=M',below]{}{\Sigma_{s_2}}
\arrow[bend right=50]{r}[name=D']{}{\Sigma_{s_3}}
& \nstand{n}
\end{tikzcd}
\end{center}
the interchange law says that the following equality holds:
\[ ([M_1]\circ[M_1'])\cdot ([M_2] \circ [M_2'])=( [M_1]\cdot [M_2]) \circ ([M_1'] \cdot [M_2']) \]
as a 2-morphism $\Sigma_{t_1\circ s_1}\Rightarrow \Sigma_{t_3\circ s_3}: \nstand{l}\to \nstand{n}$.
As a manifold both sides are the same.
Hence we only need to check whether
\[(\phi_1\circ_{\text{h}} \psi_1)\cdot_{\text{v}} (\phi_2 \circ_{\text{h}}\psi_2) =(\phi_1 \cdot_{\text{v}} \phi_2)\circ_{\text{h}} (\psi_1\cdot_{\text{v}} \psi_2). \]
This equality holds because horizontal composition does not change the parametrizations on the side boundaries and vertical composition does not change the parametrizations on the top and the bottom boundaries.
\section{A bicategory of the Kapranov-Voevodsky 2-vector spaces}\label{sec:A 2-category of the Kapranov-Voevodsky 2-vector spaces}
The target bicategory of our extended TQFT will be the bicategory of Kapranov-Voevodsky (KV) 2-vector spaces.
The reason we chose the KV 2-vector spaces as the target algebraic bicategory is that they form a natural extension of the usual category of vector spaces and the calculations are very explicit.
We recall the relevant definitions.
\begin{Definition}
Let $K$ be a commutative ring.
\begin{enumerate}
\item
A \textit{2-matrix} is an $m\times n$ matrix whose $(i,j)$-component is a projective module over $K$.
\item
A \textit{2-homomorphism} from an $m\times n$ 2-matrix $V$ to an $m \times n$ 2-matrix $W$ is an $m\times n$ matrix of $K$-homomorphisms.
In other words, the $(i, j)$-component of such a matrix is a $K$-homomorphism from $V_{ij}$ to $W_{ij}$.
\item Two 2-matrices $V$ and $W$ are said to be \textit{isomorphic} if there is a 2-homomorphism $T$ from $V$ to $W$ such that each entry of $T$ is an isomorphism.
\end{enumerate}
\end{Definition}
\begin{Definition}
The \textit{Kapranov-Voevodsky 2-vector spaces}, $2\-\mathrm{Vect}$, consist of the following.
\begin{enumerate}
\item The \textit{objects} of $2\-\mathrm{Vect}$ are symbols $\{n\}$ for non-negative integers $n$.
\item A \textit{1-morphism} from $\{m\}$ to $\{n\}$ is an $(m\times n)$ 2-matrix $V$. We denote a 1-morphism from $\{m\}$ to $\{n\}$ by $V: \{m\}\to \{n\}$.
\item A \textit{2-morphism} from a 1-morphism $V:\{m\}\to \{n\}$ to a 1-morphism $W:\{m\}\to \{n\}$ is a 2-homomorphism from $V$ to $W$.
\end{enumerate}
Usual matrix calculations extend to this setting if we replace multiplication by $\otimes$ and addition by $\oplus$.
Horizontal composition is given by matrix multiplication, and vertical composition is given by entrywise composition.
With these composition operations, the Kapranov-Voevodsky 2-vector spaces $2\-\mathrm{Vect}$ is indeed a bicategory.
(For details, see \cite{KV1994} on page 226.)
\end{Definition}
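To make the matrix calculus concrete, the following is a minimal computational sketch of horizontal composition (our own illustration, not taken from \cite{KV1994}). It assumes all entries are free $K$-modules, so that each entry is recorded by its rank, $\oplus$ becomes addition of ranks, and $\otimes$ becomes multiplication; the function names are ours.

```python
# Horizontal composition of 2-matrices, tracked on ranks of free K-modules.
# Entry (i, k) of the composite is the direct sum over j of V[i][j] (x) W[j][k],
# so on ranks it is the ordinary matrix product.

def compose_2matrices(V, W):
    """V is m x n, W is n x p; each entry is the rank of a free K-module."""
    m, n, p = len(V), len(W), len(W[0])
    assert all(len(row) == n for row in V)
    return [[sum(V[i][j] * W[j][k] for j in range(n)) for k in range(p)]
            for i in range(m)]

def identity_2matrix(n):
    """Identity 2-matrix: K (rank 1) on the diagonal, zero module elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]
```

Composing with the identity 2-matrix leaves every entry unchanged, mirroring the isomorphism $K\otimes M\cong M$, which is the unit law of the bicategory at the level of ranks.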
\section{Review and Modification of the Reshetikhin-Turaev TQFT}\label{sec:Review and Modification of the Reshetikhin-Turaev TQFT}
Our construction of a projective pseudo 2-functor from the 2-category $\mathrm{Co}$ to the Kapranov-Voevodsky 2-vector spaces $2\-\mathrm{Vect}$ requires the original Reshetikhin-Turaev theory.
In this section we review the relevant parts of the RT theory.
\subsection{Operator Invariant}\label{subsec:operator invariant}
One of the key ingredients in constructing the RT TQFT is the so-called ``operator invariant''.
Let $\mathcal{V}$ be a modular category. (More generally, the operator invariant exists for a strict ribbon category $\mathcal{V}$.)
Define the category $\mathrm{Rib}_{\V}$ of the ribbon graphs over $\mathcal{V}$ as follows.
The objects of $\mathrm{Rib}_{\V}$ are finite sequences of the form $((V_1, \epsilon_1), \dots, (V_m, \epsilon_m))$, where $V_i$ is an object of the modular category $\mathcal{V}$ and $\epsilon_i$ is either $+1$ or $-1$ for $i=1, \dots, m$.
A morphism $\eta \to \eta'$ in $\mathrm{Rib}_{\V}$ is an isotopy type of a $v$-colored ribbon graph over $\mathcal{V}$ such that $\eta$ (resp. $\eta'$) is the sequence of colors and directions of those bands which hit the bottom (resp. top) boundary intervals.
A band directed downward near the corresponding boundary interval corresponds to $\epsilon=1$, and a band directed upward corresponds to $\epsilon=-1$.
For example, the ribbon graph drawn in Figure \ref{fig:ribbon graph} represents a morphism from $((V_1, -1), (V_2,1), (V_3, 1), (U, 1))$ to $((V_2, -1), (V_1, -1), (V_3, 1), (V, 1))$.
\begin{figure}[h]
\center
\includegraphics[width=2.6in]{Ribv2.pdf}
\caption{$v$-colored Ribbon graph}
\label{fig:ribbon graph}
\end{figure}
The composition of morphisms of $\mathrm{Rib}_{\V}$ is given by concatenation of ribbon graphs.
The juxtaposition of ribbon graphs provides $\mathrm{Rib}_{\V}$ with the structure of a monoidal category.
Then it is a fact that there is a unique monoidal functor $F=F_{\mathcal{V}}: \mathrm{Rib}_{\V} \to \mathcal{V}$ satisfying the following conditions:
\begin{enumerate}
\item $F$ transforms any object $(V, +1)$ into $V$ and any object $(V, -1)$ into $V^*$;
\item $F$ maps a crossing ribbon graph to a braiding, a twist ribbon graph to a twist, and cup-like and cap-like bands to the corresponding duality maps in $\mathcal{V}$.
(The X-shaped ribbon in Figure \ref{fig:ribbon graph} is a crossing ribbon, and the once-curled ribbon in the middle is a twist ribbon. The others are obtained by changing directions and crossings.)
\item For each elementary $v$-colored ribbon graph $\Gamma$, we have $F(\Gamma)=f$, where $f$ is the color of the only coupon of $\Gamma$.
(An example of an elementary $v$-colored ribbon graph is the ribbon with one coupon colored by $f$ on the right in Figure \ref{fig:ribbon graph}. In general, multiple vertical bands may be attached to one coupon.)
\end{enumerate}
(For details, see Theorem I.2.5 in \cite{Turaev10}.)
The morphism $F(\Omega)$ associated to a $v$-colored ribbon graph $\Omega$ is called the \textit{operator invariant} of $\Omega$.
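As a standard example (not specific to our setting, and with the conventions of \cite{Turaev10}), let $O_V$ denote the 0-framed unknotted annulus colored by an object $V$; decomposing it into a cup, a crossing, a twist, and a cap gives
\[
F(O_V) \;=\; d_V \circ c_{V,V^*} \circ (\theta_V \otimes \mathrm{id}_{V^*}) \circ b_V
\;=\; \mathrm{tr}(\mathrm{id}_V) \;=\; \dim(V) \in K,
\]
where $b_V$, $d_V$, $c$, $\theta$ denote the duality morphisms, the braiding, and the twist of $\mathcal{V}$, and $\mathrm{tr}$ is the categorical trace.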
\subsection{Modular categories}
A modular category is the input for the RT TQFT.
By definition, a modular category is a ribbon category with finitely many isomorphism classes of simple objects satisfying several axioms.
(See \cite{Turaev10} for a full definition and refer to \cite{MR1321145} for a ribbon category.)
The objects of a modular category are used as decorations of surfaces, as we saw in Section \ref{sec:A 2-category of cobordisms with corners}.
Here we set several notations and state an important lemma.
Let $\mathcal{V}$ be a modular category with a finite set $\{V_i\}_{i\in I}$ of simple objects, where $I$ is a finite index set $I=\{1,2, \dots, \mathbf{k}\}$.
We assume that $V_1$ is the unit object $\mathbb{1}$ of $\mathcal{V}$.
The ring $K:=\mathrm{Hom}(\mathbb{1}, \mathbb{1})$ is called the \textit{ground ring}.
The ground ring $K$ is known to be commutative.
We assume that $\mathcal{V}$ has an element $\mathcal{D} \in K$, called a \textit{rank} of $\mathcal{V}$, satisfying
\begin{equation*}\label{equ:rank}
\mathcal{D}^2= \sum_{i\in I} \left( \dim(V_i) \right)^2.
\end{equation*}
(This assumption is not essential.)
Besides the rank $\mathcal{D}$, we need another element $\Delta$ defined as follows.
The modular category has a twist morphism $\theta_{V}: V \to V$ for each object $V$ of $\mathcal{V}$.
Since $V_i$ is a simple object, the twist $\theta_{V_i}$ acts in $V_i$ as multiplication by a certain $v_i \in K$.
Since the twist is an isomorphism, the element $v_i$ is invertible in $K$.
We set
\begin{equation*}\label{equ:Delta}
\Delta=\sum_{i \in I} v_i^{-1} \left( \dim(V_i) \right)^2 \in K.
\end{equation*}
The elements $\mathcal{D}$ and $\Delta$ are known to be invertible in $K$.
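As a trivial sanity check (our own remark): if $\mathbb{1}$ is the only simple object, then $\dim(V_1)=1$ and $v_1=1$, so
\[
\mathcal{D}^2=\sum_{i\in I}\big(\dim(V_i)\big)^2=1,
\qquad
\Delta=\sum_{i\in I} v_i^{-1}\big(\dim(V_i)\big)^2=1,
\]
and we may take $\mathcal{D}=1$; the invariants below then degenerate to trivial normalization factors.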
The following lemma is very important.
\begin{lemma}\label{lem:sum over simple 2}
For any objects $V, W$ of the modular category $\mathcal{V}$, there is a canonical $K$-linear splitting
\begin{equation*}
\mathrm{Hom}(\mathbb{1}, V \otimes W)= \bigoplus_{i \in I} \left( \mathrm{Hom}(\mathbb{1}, V\otimes V_{i}^* )\otimes_K \mathrm{Hom}(\mathbb{1}, V_{i} \otimes W) \right).
\end{equation*}
The isomorphism $u$ transforming the right-hand side into the left-hand side is given on the $i$-th summand by the formula
\begin{equation}\label{equ: cap isom}
u_i: x \otimes y \mapsto (\mathrm{id}_V\otimes d_{V_i} \otimes \mathrm{id}_W) (x \otimes y),
\end{equation}
where $x \in \mathrm{Hom}(\mathbb{1}, V\otimes V^*_{i})$, $y\in \mathrm{Hom}(\mathbb{1}, V_{i} \otimes W)$.
The map (\ref{equ: cap isom}) is given graphically as in Figure \ref{fig:the map u_i}.
\end{lemma}
For a proof, see Lemma I\hspace{-.1em}V.2.2.2 in \cite{Turaev10}.
\begin{figure}[h]
\center
\includegraphics[width=3.5in]{Isomu.pdf}
\caption{The map $u_i$}
\label{fig:the map u_i}
\end{figure}
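As a quick consistency check of Lemma \ref{lem:sum over simple 2} (our own remark), take $V=\mathbb{1}$. For $i\neq 1$ the simple objects $V_i$ and $\mathbb{1}$ are non-isomorphic, so $\mathrm{Hom}(\mathbb{1}, V_i^*)\cong \mathrm{Hom}(V_i,\mathbb{1})=0$, and only the summand $i=1$ survives:
\[
\mathrm{Hom}(\mathbb{1}, \mathbb{1}\otimes W)
\;\cong\; \mathrm{Hom}(\mathbb{1}, \mathbb{1}\otimes V_1^*)\otimes_K \mathrm{Hom}(\mathbb{1}, V_1\otimes W)
\;\cong\; K\otimes_K \mathrm{Hom}(\mathbb{1}, W)
\;\cong\; \mathrm{Hom}(\mathbb{1}, W).
\]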
\subsection{Invariants of 3-manifolds with ribbon graphs}
We review an invariant of closed 3-manifolds containing ribbon graphs.
When we construct the RT TQFT below, a cobordism will be turned into a closed 3-manifold, to which we apply this invariant.
Let $M$ be a closed connected oriented 3-manifold.
Let $\Omega$ be a $v$-colored ribbon graph over $\mathcal{V}$ in $M$.
Present $M$ as the result of surgery on $S^3$ along a framed link $L$ with components $L_1, \dots, L_m$, and fix an arbitrary orientation of $L$.
(This choice can be shown to be irrelevant.)
By applying an isotopy to $\Omega$ if necessary, we may assume that $\Omega \subset S^3 \setminus U$, where $U$ is a closed regular neighborhood of $L$ in $S^3$.
Denote by $\mathrm{col}(L)$ the set of all mappings from the set of components of $L$ into the index set $I$.
For each $\lambda\in \mathrm{col}(L)$, the pair $(L, \lambda)$ determines a colored ribbon graph $\Gamma(L, \lambda)$ formed by $m$ annuli.
The cores of these annuli are the oriented circles $L_1, \dots, L_m$, and the normal vector field on the cores transversal to the annuli represents the given framing.
The color of the $i$-th annulus is $V_{\lambda(L_i)}$.
Since the union $\Gamma(L, \lambda)\cup \Omega$ is a $v$-colored ribbon graph with no free ends, its operator invariant lies in the ground ring: $F(\Gamma(L, \lambda) \cup \Omega) \in K=\mathrm{End}(\mathbb{1})$.
Set
\[ \{L, \Omega\}=\sum_{ \lambda \in \mathrm{col}(L)}\dim(\lambda)F(\Gamma(L, \lambda) \cup \Omega) \in K=\mathrm{End}(\mathbb{1}), \]
where
\[\dim(\lambda)=\prod_{n=1}^m \dim\left( \lambda(L_n) \right) \mbox{ with } \dim(i):=\dim(V_i).\]
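The bracket $\{L, \Omega\}$ is a finite state sum over the $\mathbf{k}^m$ colorings of $L$. The following sketch (our own illustration) computes such a sum, assuming the dimensions $\dim(i)$ and the evaluations $F(\Gamma(L,\lambda)\cup\Omega)$ are supplied numerically; `F_eval` is a hypothetical stand-in for the operator invariant.

```python
from itertools import product

def bracket(dims, m, F_eval):
    """Compute {L, Omega} = sum over lam in col(L) = I^m of dim(lam) * F(lam).

    dims: dict mapping each index i in I to a numerical value of dim(V_i);
    m: number of link components;
    F_eval: hypothetical stand-in for lam -> F(Gamma(L, lam) union Omega).
    """
    total = 0
    for lam in product(dims.keys(), repeat=m):   # lam runs over col(L) = I^m
        dim_lam = 1
        for i in lam:                            # dim(lam) = prod of dim(lam(L_n))
            dim_lam *= dims[i]
        total += dim_lam * F_eval(lam)
    return total
```

For instance, for a 0-framed unlink one would take $F$ of each coloring to be $\dim(\lambda)$ itself, and the bracket evaluates to $\big(\sum_i \dim(V_i)^2\big)^m=\mathcal{D}^{2m}$.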
Set
\begin{equation}\label{equ:invariant tau}
\tau(M, \Omega)=\Delta^{\sigma(L)} \mathcal{D}^{-\sigma(L)-m-1} \{L, \Omega\}.
\end{equation}
Here $\sigma(L)$ is the signature of the surgery link $L$.
It is an important fact that $\tau(M, \Omega)$ is a topological invariant of the pair $(M, \Omega)$.
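Two standard sample computations with empty ribbon graph: $S^3$ is surgery on the empty link ($m=0$, $\sigma=0$, $\{\emptyset\}=1$), while $S^1\times S^2$ is surgery on a 0-framed unknot ($m=1$, $\sigma=0$, and $\{L\}=\sum_{i\in I}(\dim V_i)^2=\mathcal{D}^2$). Formula (\ref{equ:invariant tau}) then gives
\[
\tau(S^3)=\mathcal{D}^{-1},
\qquad
\tau(S^1\times S^2)=\mathcal{D}^{-2}\cdot \mathcal{D}^{2}=1.
\]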
The invariant $\tau$ extends to $v$-colored ribbon graphs in any non-connected closed oriented 3-manifold $M$ by the formula
\[\tau(M, \Omega)=\prod_{r} \tau(M_r, \Omega_r),\]
where $M_r$ runs over the connected components of $M$ and $\Omega_r$ denotes the part of $\Omega$ lying in $M_r$.
The invariant $\tau(M, \Omega)$ satisfies the following multiplicativity law:
\begin{equation}\label{equ:tau multiplicative}
\tau(M_1 \# M_2, \Omega_1 \sqcup \Omega_2)=\mathcal{D} \tau(M_1, \Omega_1)\tau(M_2, \Omega_2),
\end{equation}
where $\Omega_1$ and $\Omega_2$ are $v$-colored ribbon graphs in closed connected oriented 3-manifolds $M_1$ and $M_2$, respectively.
This can be seen as follows.
Let $L$ and $L'$ be surgery links for $M_1$ and $M_2$, respectively.
Placing $L\cup \Omega_1$ and $L'\cup \Omega_2$ in disjoint balls inside the same $S^3$, the link $L \sqcup L'$ becomes a surgery link for $M_1 \# M_2$ containing $\Omega_1\sqcup \Omega_2$.
The formula (\ref{equ:tau multiplicative}) then follows from a direct calculation using the defining formula (\ref{equ:invariant tau}).
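Explicitly, since $\sigma(L\sqcup L')=\sigma(L)+\sigma(L')$, the total number of components is $m_1+m_2$, and the state sum factorizes over the two disjoint pieces, $\{L\sqcup L', \Omega_1\sqcup \Omega_2\}=\{L,\Omega_1\}\{L',\Omega_2\}$, the defining formula (\ref{equ:invariant tau}) yields
\[
\tau(M_1\# M_2, \Omega_1\sqcup\Omega_2)
=\Delta^{\sigma(L)+\sigma(L')}\,\mathcal{D}^{-\sigma(L)-\sigma(L')-m_1-m_2-1}\,
\{L,\Omega_1\}\{L',\Omega_2\}
=\mathcal{D}\,\tau(M_1,\Omega_1)\,\tau(M_2,\Omega_2).
\]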
\subsection{Construction of the (non-extended) Reshetikhin-Turaev TQFT}
Now we review the construction of the original RT TQFT for closed surfaces and cobordisms without corners.
Rather than repeating the original construction verbatim, we modify it so that it adapts to our setting.
Let us fix a modular category $\mathcal{V}$.
As we noted in Section \ref{subsec:Remark}, the original theory concerns only decorated types of the form
\[t=(0, 0; (W_1, \nu_1), \dots, (W_m, \nu_m), 1,1,\dots, 1),\]
where $W_i$ is an object of a modular category and $\nu_i$ is either $1$ or $-1$ for $i=1, \dots, m$.
We can modify the theory using more general decorated types
\begin{equation}\label{equ:type 0 0}
t=(0,0; a_1, \dots, a_p),
\end{equation}
where $a_i$ is a non-negative integer or a pair of an object of the modular category $\mathcal{V}$ and a sign $\pm 1$.
Note that the first two entries must be zero since these numbers encode the number of boundary components of surfaces, which is zero in the original RT theory.
In the following, we review the RT theory replacing the original decorated types with types of the form in (\ref{equ:type 0 0}).
Let $t=(0,0; a_1, \dots, a_p)$ and $s=(0,0; b_1, \dots, b_q)$ be decorated types whose first two entries are zero.
Let $[(M, \phi)]: \Sigma_t \Rightarrow \Sigma_s:{_* \emptyset} \to \emptyset_*$ be a 2-morphism of the 2-category $\mathrm{Co}$.
Note that since the first two entries of the types $t, s$ are zero, we have $\Sigma(\phi)=\Sigma_t \sqcup \Sigma_s^-$.
The images $\phi(\Sigma_t)$ and $\phi(\Sigma_s^-)$ are denoted by $\partial_{B} M$ and $\partial_{T} M$, respectively as before.
The Reshetikhin-Turaev TQFT is a pair of assignments $(\tau, \mathcal{T})$ which will be constructed below so that $\tau(M)$ is a $K$-homomorphism from a projective module $\mathcal{T}(\partial_{B} M)$ to a projective module $\mathcal{T}(\partial_{T} M)$.
\subsubsection{Definition of the projective module $\mathcal{T}(S)$}
For a decorated surface $S$ of type $t=(0,0; a_1, \dots, a_p)$, the projective module $\mathcal{T}(S)$ is defined as follows.
In a non-precise but instructive way, we think of $\mathcal{T}(S)$ as the projective module of all possible colors for the coupon of the ribbon graph $R_t$ in Figure \ref{fig:Rtnew}.
To make it precise, we set up several notations.
First, we define an object $H_i^a$ of the modular category $\mathcal{V}$ as follows.
Here $a$ is either a positive integer or a signed object $a=(W, \nu)$ of the modular category $\mathcal{V}$.
For a positive integer $a$ and $i=(i_1,\dots, i_a)\in I^a$, we set
\begin{equation}\label{equ:H_i^a}
H^a_i=V_{i_1}\otimes V_{i_2}\otimes \cdots \otimes V_{i_a} \otimes V^*_{i_a}\otimes \cdots \otimes V_{i_2}^* \otimes V^*_{i_1}.
\end{equation}
If $a=(W, \nu)$ is a signed object of $\mathcal{V}$, we set $I^a$ to be a set of only one element and
set $H^a_i=W^{\nu}$ for $i\in I^a=\{i\}$.
Here we used the letter $i$ for the unique element of $I^a$ to streamline notations.
Note that the tensor product in (\ref{equ:H_i^a}) can be used as a color for rainbow like bands in $R_t$ corresponding to an integer entry $a$.
For a type $t=(m, n; a_1, a_2, \dots, a_p)$, we write
\begin{equation}\label{equ:I^t}
I^t:=I^{a_1} \times \cdots \times I^{a_p}.
\end{equation}
For $\zeta=(\zeta_1, \dots, \zeta_p) \in I^t$ with $\zeta_1=(\zeta_1^1, \dots, \zeta_1^{a_1}), \dots, \zeta_p=(\zeta_p^1, \dots, \zeta_p^{a_p})$, we set
\begin{equation}\label{equ:Phi.t.zeta}
\Phi(t; \zeta)= H^{a_1}_{\zeta_1} \otimes H^{a_2}_{\zeta_2} \otimes \cdots \otimes H^{a_p}_{\zeta_p}.
\end{equation}
Note that each choice of $\zeta=(\zeta_1, \dots, \zeta_p) \in I^t$ determines the color of ribbon graph $R_t$ except for the coupon.
The coupon can be colored by a morphism from the monoidal unit $\mathbb{1}$ to the element $\Phi(t; \zeta)$.
Thus the collection of all possible colors of the coupon of $R_t$, as $\zeta$ varies over $I^t$, is
\begin{equation}\label{equ:T(S)}
\mathcal{T}(S):=\bigoplus_{\zeta \in I^t} \mathrm{Hom} \big(\mathbb{1}, \Phi(t; \zeta) \big)
\end{equation}
and we define it to be $\mathcal{T}(S)$.
Since $\Phi(t;\zeta)$ is an object of the modular category $\mathcal{V}$, $\mathcal{T}(S)$ is a projective module over the ground ring $K=\mathrm{Hom}(\mathbb{1}, \mathbb{1})$.
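For example, if $S$ has type $t=(0,0;1)$, i.e., $S$ is a closed surface of genus one with a single pair of rainbow bands, then $I^t=I$ and $\Phi(t;\zeta)=V_{\zeta}\otimes V_{\zeta}^*$. Since $\mathrm{Hom}(\mathbb{1}, V_i\otimes V_i^*)\cong \mathrm{End}(V_i)\cong K$ for a simple object $V_i$, we obtain
\[
\mathcal{T}(S)=\bigoplus_{\zeta\in I}\mathrm{Hom}\big(\mathbb{1}, V_{\zeta}\otimes V_{\zeta}^*\big)
\;\cong\; K^{\oplus \mathbf{k}},
\]
recovering the standard fact that the module of the torus is free of rank equal to the number of simple objects.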
\subsubsection{Definition of $K$-homomorphism $\tau(M)$}\label{subsec:tau M}
Let $[(M, \phi)]: \Sigma_t \Rightarrow \Sigma_s: {_* \emptyset} \to \emptyset_*$ be a 2-morphism of the 2-category $\mathrm{Co}$ and fix a representative $(M, \phi)$.
We explain the construction of the corresponding $K$-homomorphism $\tau(M)$ from $\mathcal{T}(\partial_{B} M)$ to $\mathcal{T}(\partial_{T} M)$.
Glue the standard handlebody $U_t$ and $U_s^-$ to $M$ along the parametrization $\phi$.
The resulting manifold $\tilde{M}$ is a closed 3-manifold with ribbon graph $\tilde{\Omega}$ sitting inside $\tilde{M}$.
The ribbon graph $\tilde{\Omega}$ is obtained by gluing the ribbon graph in $M$ and the ribbon graph $R_t$ and $-R_s$ sitting inside the standard handlebodies $U_t$ and $U_s^-$.
The ribbon graph $\tilde{\Omega}$ is not $v$-colored, since the cap-like and cup-like rainbow bands and the coupons of $R_t$ and $-R_s$ in the newly glued handlebodies are not colored.
By definition, each element of the module $\mathcal{T}(\partial_{B} M)$ determines a color of $R_t$, and each element of $\mathcal{T}(\partial_{T} M)^*$ determines a color of $-R_s$.
For such a choice of colors $y$ we obtain a $v$-coloring of $\tilde{\Omega}$.
Applying the invariant $\tau$ of $v$-colored ribbon graph in a closed 3-manifold defined in (\ref{equ:invariant tau}), we obtain a certain element $\tau(\tilde{M}, \tilde{\Omega}, y)\in K$.
This induces a $K$-homomorphism $\mathcal{T}(\partial_{B} M) \otimes_K \mathcal{T}(\partial_{T} M)^* \to K$.
Taking adjoints, we get a $K$-homomorphism
\begin{equation}\label{equ:after adjoint}
\mathcal{T}(\partial_{B} M) \to \mathcal{T}(\partial_{T} M).
\end{equation}
To finish the construction of $\tau(M):\mathcal{T}(\partial_{B} M) \to \mathcal{T}(\partial_{T} M)$, we compose the above $K$-homomorphism with an endomorphism $\eta$ defined as follows.
Let $S$ be a connected parametrized $d$-surface of type $t=(0,0; a_1, \dots, a_p)$.
The endomorphism $\eta(S): \mathcal{T}(S) \to \mathcal{T}(S)$ preserves the splitting (\ref{equ:T(S)}) and acts on each summand $\mathrm{Hom}(\mathbb{1}, \Phi(t;\zeta))$ as multiplication by $\mathcal{D}^{1-g}\dim(\zeta)$, where $g$ is the sum of the integer entries of the type $t$ and
\[\dim(\zeta):=\prod_{i=1}^p\dim(\zeta_i) \]
with
\[\dim(\zeta_i):=
\begin{cases}
\prod_{l=1}^{a_i} \dim(\zeta_i^l) & \mbox{ if } a_i \in \Z \\
1 & \mbox{ if } a_i \mbox{ is a mark.}
\end{cases}
\]
Recall that $\dim(\zeta_i^l)$ denotes the dimension of the simple object $V_{\zeta_i^l}$.
Now we complete the construction of $\tau(M):\mathcal{T}(\partial_{B} M) \to \mathcal{T}(\partial_{T} M)$ by composing the $K$-homomorphism (\ref{equ:after adjoint}) with $\eta(\partial_{T} M): \mathcal{T}(\partial_{T} M) \to \mathcal{T}(\partial_{T} M)$.
The pair $(\tau, \mathcal{T})$ is the \textit{Reshetikhin-Turaev TQFT}.
In general, this is not a functor because of the \textit{gluing anomaly}.
\subsubsection{Explicit Formula for the homomorphism $\tau(M)$}\label{subsec:explicit formula for tau(M)}
We develop a technique for presenting a decorated connected 3-cobordism $M$ by a certain ribbon graph in $\mathbb{R}^3$ and give an explicit formula for computing the homomorphism $\tau(M)$ from this ribbon graph.
First, since we are still in the closed case, let $t=(0,0; a_1, \dots, a_p)$ and $s=(0,0; b_1, \dots, b_q)$ be decorated types whose first two entries are zero.
Let $[(M, \phi)]: \Sigma_t \Rightarrow \Sigma_s:{_* \emptyset} \to \emptyset_*$ be a 2-morphism of the 2-category $\mathrm{Co}$ and fix a representative $(M, \phi)$.
We assume that the cobordism $M$ is connected.
As above, we glue the standard handlebodies $U_t$ and $U_s^-$ using the parametrization $\phi$ to $M$.
We obtain the closed connected 3-manifold $\tilde{M}$ and the partially colored ribbon graph $\tilde{\Omega}$ sitting inside $\tilde{M}$.
Present $\tilde{M}$ as the result of surgery on a framed link $L$ in $S^3$.
Namely, we have a homeomorphism from $M_L$ to $\tilde{M}$, where $M_L$ is the resulting 3-manifold of surgery along $L$.
Let $H=U_t \cup U_s^- \subset S^3$.
We may regard $H$ as a subset of $S^3 \setminus T(L)\subset M_L$, where $T(L)$ is a closed tubular neighborhood of the link $L$.
Restricting this homeomorphism, we see that the pair $(M, \phi)$ is equivalent to $(M_L, \mathrm{id})$.
Thus we may assume that $\tilde{\Omega}$ is the union of $R_t$, $-R_s$, a surgery link, and a ribbon graph in $M$, all lying in $\mathbb{R}^2 \times [0, 1] \subset \mathbb{R}^3 \subset S^3$.
Of course, the surgery link might be tangled with $R_t$ and $-R_s$.
By an isotopy, we pull $R_t$ down so that the top of the coupon of $R_t$ lies in $\mathbb{R} \times \{0\}\times \{0\}$.
Similarly, we move $-R_s$ up so that the bottom of the coupon of $-R_s$ lies in $\mathbb{R} \times \{0\} \times \{1\}$, and move the rest of the ribbon graph into $\mathbb{R}^2\times (0, 1)$.
See Figure \ref{fig:special ribbon graph}.
\begin{figure}[h]
\center
\includegraphics[width=3.8in]{presentationbyribbon2.pdf}
\caption{Special ribbon graph}
\label{fig:special ribbon graph}
\end{figure}
Let $\Omega_M$ be a ribbon graph obtained by removing the coupons of $R_t$ and $-R_s$ from $\tilde{\Omega}$.
We call $\Omega_M$ a \textit{special ribbon graph} for $M$.
We now give an explicit formula for computing the homomorphism $\tau(M): \mathcal{T}(\partial_{B} M) \to \mathcal{T}(\partial_{T} M)$ from the operator invariants of the special ribbon graph $\Omega_M$ (after coloring $\Omega_M$).
With respect to the splittings (\ref{equ:T(S)}) of $\mathcal{T}(\partial_{B} M)$ and $\mathcal{T}(\partial_{T} M)$, the homomorphism $\tau(M)$ may be presented by a block matrix $\tau_{\zeta}^{\eta}$, where
\[\zeta=(\zeta_1, \dots, \zeta_p) \in I^t \mbox{ with } \zeta_1=(\zeta_1^1, \dots, \zeta_1^{a_1}) \in I^{a_1}, \dots, \zeta_p=(\zeta_p^1, \dots, \zeta_p^{a_p})\in I^{a_p}\]
and
\[\eta=(\eta_1, \dots, \eta_q) \in I^s \mbox{ with } \eta_1=(\eta_1^1, \dots, \eta_1^{b_1}) \in I^{b_1}, \dots, \eta_q=(\eta_q^1, \dots, \eta_q^{b_q})\in I^{b_q}.\]
Each such $\zeta\in I^t$ determines a coloring of the cap-like rainbow bands of $R_t$ in $\Omega_M$.
Similarly each such $\eta \in I^s$ determines a coloring of the cup-like rainbow bands of $-R_s$ in $\Omega_M$.
Therefore a pair $(\zeta, \eta)\in I^t \times I^s$ determines a coloring of uncolored bands of $\Omega_M$.
Note that the surgery link $L$ in $\Omega_M$ is not colored.
(More precisely, the uncolored part is the ribbon graph obtained by thickening the surgery link along its framing.)
Every element $\lambda \in \mathrm{col}(L)$ determines a coloring of the surgery ribbon.
Thus every element $(\zeta, \eta, \lambda)\in I^t \times I^s \times \mathrm{col}(L)$ determines a $v$-coloring of $\Omega_M$.
Denote the resulting $v$-colored ribbon graph in $\mathbb{R}^3$ by $(\Omega_M, \zeta, \eta, \lambda)$.
Consider its operator invariant $F(\Omega_M, \zeta, \eta, \lambda): \Phi(t; \zeta)\to \Phi(s; \eta)$ defined in Section \ref{subsec:operator invariant}.
The composition of a morphism $\mathbb{1} \to \Phi(t; \zeta)$ with $F(\Omega_M, \zeta, \eta, \lambda)$ defines a $K$-linear homomorphism $\mathrm{Hom}(\mathbb{1}, \Phi(t; \zeta))\to \mathrm{Hom}(\mathbb{1}, \Phi(s;\eta))$ denoted by $F_0(\Omega_M, \zeta, \eta, \lambda)$.
It follows from the very definition of $\tau(M)$ given in Section \ref{subsec:tau M} that
\begin{equation}\label{equ:tau zeta eta}
\tau_{\zeta}^{\eta}= \Delta^{\sigma(L)} \mathcal{D}^{-g^+ -\sigma(L) - m} \dim(\eta) \sum_{\lambda \in \mathrm{col}(L)} \dim(\lambda) F_0(\Omega_M, \zeta, \eta, \lambda),
\end{equation}
where $g^+$ is the sum of the integer entries of $s$ and
\[\dim(\eta):=\prod_{i=1}^q\dim(\eta_i) \]
with
\[\dim(\eta_i):=
\begin{cases}
\prod_{l=1}^{b_i} \dim(\eta_i^l) & \mbox{ if } b_i \in \Z \\
1 & \mbox{ if } b_i \mbox{ is a mark.}
\end{cases}
\]
\section{An extended TQFT $\mathcal{X}$}\label{sec:An extended TQFT}
Now we proceed to construct a projective pseudo 2-functor $\mathcal{X}$ from the 2-category $\mathrm{Co}$ of decorated cobordisms with corners to the Kapranov-Voevodsky 2-vector spaces $2\-\mathrm{Vect}$ that will be our extension of the Reshetikhin-Turaev TQFT functor.
For our convention of the language of 2-category, see Appendix.
As in the Reshetikhin-Turaev theory, we fix a modular category $\mathcal{V}$ with $\mathbf{k}$ simple objects $V_1, \dots, V_{\mathbf{k}}$.
Let $I$ be the index set of simple objects, hence its cardinality is $|I|=\mathbf{k}$.
Let $K$ denote the ground ring $\mathrm{Hom}(\mathbb{1},\mathbb{1})$.
\subsection{$\mathcal{X}$ on objects}
Each object $\nstand{n}$ of $\mathrm{Co}$ for a natural number $n$ is mapped by $\mathcal{X}$ to the object $\{\mathbf{k}^n\}$ of $2\-\mathrm{Vect}$.
The formal symbol objects ${_* \emptyset}$ and $\emptyset_*$ are mapped to the object $\{1\}$.
\subsection{$\mathcal{X}$ on 1-morphisms}\label{sec:X on 1-morphisms}
For a 1-morphism $\Sigma_t: \nstand{m} \to \nstand{n}$ with a type
\[t=(m, n;a_1, a_2, \dots, a_p),\]
we need to define a $\mathbf{k}^m\times \mathbf{k}^n$ 2-matrix $\mathcal{X}(\Sigma_t)$.
Using the lexicographic order on the Cartesian powers $I^m$ and $I^n$, pick the $i$-th element of $I^m$ and the $j$-th element of $I^n$.
Abusing notation, we write the $i$-th element of $I^m$ as $i=(i_1, i_2, \dots, i_m)$ and the $j$-th element of $I^n$ as $j=(j_1, \dots, j_n)$.
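The lexicographic enumeration of $I^m$ used here can be made explicit as follows (a small utility of our own; in Python, `itertools.product` enumerates tuples of a sorted list in exactly this order).

```python
from itertools import product

def lex_elements(I, m):
    """All elements of I^m in lexicographic order; I is given as a sorted list."""
    return list(product(I, repeat=m))

def lex_index(I, elem):
    """0-based position of a tuple elem in the lexicographic order of I^len(elem)."""
    k = len(I)
    pos = {x: r for r, x in enumerate(I)}
    idx = 0
    for x in elem:              # interpret elem as a base-k numeral
        idx = idx * k + pos[x]
    return idx
```

With $|I|=\mathbf{k}$, this identifies $I^m$ with $\{0,1,\dots,\mathbf{k}^m-1\}$, which is exactly how a tuple $i\in I^m$ indexes a row or column of a $\mathbf{k}^m\times \mathbf{k}^n$ 2-matrix.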
For each decorated type $t$, we defined the ribbon graph $R_t$ (Figure \ref{fig:Rtnew}).
In the ribbon graph $R_t$, there are $m=L(t)$ uncolored bands on the left and $n=R(t)$ uncolored bands on the right.
Those bands are bent for the convenience of horizontal gluing.
From now on, we draw vertical bands instead of bent ones to simplify the figures, as in Figure \ref{fig:non bend Rt}.
\begin{figure}[h]
\center
\includegraphics[width=3.4in]{Rtnew.pdf}
\caption{The ribbon graph $R_t$}
\label{fig:non bend Rt}
\end{figure}
Recall that those left bands are ordered from the left and those right bands are ordered from the right.
We color the $k$-th left uncolored band with the simple object $V_{i_k}$ for $k=1, \dots, m$.
Also we color the $l$-th right uncolored band with the simple object $V_{j_l}$ for $l=1, \dots, n$.
Let us denote the ribbon graph obtained in this way by $R_t(i, j)$.
The only uncolored ribbons in $R_t(i,j)$ are the cap-like bands.
See Figure \ref{fig:ijcolored}.
\begin{figure}[h]
\center
\includegraphics[width=4in]{Rtnewcolored.pdf}
\caption{The ribbon graph $R_t(i,j)$}
\label{fig:ijcolored}
\end{figure}
Fixing $i\in I^m$ and $j\in I^n$, we consider all the possible colors of the uncolored cap-like bands by simple objects.
The $(i, j)$-component module $\mathcal{X}(\Sigma_t)_{ij}$ of the 2-matrix $\mathcal{X}(\Sigma_t)$ will be the projective module of all possible colors of the coupon of $R_t(i ,j)$.
Recall the definition of the object $H_i^a$ of the modular category $\mathcal{V}$ defined in (\ref{equ:H_i^a}).
For a positive integer $a$ and $i=(i_1,\dots, i_a)\in I^a$, the object was defined to be
\begin{equation}\label{equ:Hai}
H^a_i=V_{i_1}\otimes V_{i_2}\otimes \cdots \otimes V_{i_a} \otimes V^*_{i_a}\otimes \cdots \otimes V_{i_2}^* \otimes V^*_{i_1}.
\end{equation}
If $a=(W, \nu)$ is a signed object of $\mathcal{V}$, we set $I^a$ to be a set of only one element and
set $H^a_i=W^{\nu}$ for $i\in I^a$.
Note that the tensor product in (\ref{equ:Hai}) can be used as a color for rainbow like bands corresponding to an integer entry $a$ in a decorated type.
For $\zeta=(\zeta_1, \dots, \zeta_p) \in I^t$ with $\zeta_1=(\zeta_1^1, \dots, \zeta_1^{a_1}), \dots, \zeta_p=(\zeta_p^1, \dots, \zeta_p^{a_p})$, recall the notation given in (\ref{equ:Phi.t.zeta}):
\begin{equation*}
\Phi(t; \zeta)= H^{a_1}_{\zeta_1} \otimes H^{a_2}_{\zeta_2} \otimes \cdots \otimes H^{a_p}_{\zeta_p}.
\end{equation*}
The following notation is new.
We set
\begin{equation}\label{equ: Phi for e-type}
\Phi(t; \zeta; i, j)=V^*_{i_1} \otimes V^*_{i_2} \otimes \cdots \otimes V^*_{i_m} \otimes \Phi(t; \zeta) \otimes V_{j_n}\otimes V_{j_{n-1}}\otimes \cdots \otimes V_{j_1}.
\end{equation}
Note that each choice of $\zeta=(\zeta_1, \dots, \zeta_p) \in I^t$ determines a color of the ribbon graph $R_t(i, j)$ via $\Phi(t; \zeta; i, j)$, except for the coupon.
The coupon can be colored by a morphism from the monoidal unit $\mathbb{1}$ to the object $\Phi(t; \zeta; i, j)$.
Thus the collection of all possible colors of the coupon of $R_t(i, j)$, as $\zeta$ varies over $I^t$, is
\begin{equation}\label{equ:tsigma}
\mathcal{X}(\Sigma_t)_{ij}=\bigoplus_{\zeta \in I^t} \mathrm{Hom} \big(\mathbb{1}, \Phi(t; \zeta;i,j) \big)
\end{equation}
and we define this to be the $(i, j)$-component projective module $\mathcal{X}(\Sigma_t)_{ij}$.
We also need to specify the assignment of $\mathcal{X}$ on each formal identity 1-morphism $\mathrm{id}_n: \nstand{n} \to \nstand{n}$ with an integer $n$.
The $\mathbf{k}^n \times \mathbf{k}^n$ 2-matrix $\mathcal{X}(\mathrm{id}_n)$ is defined to be the identity $\mathbf{k}^n \times \mathbf{k}^n$ 2-matrix.
Namely each diagonal entry of the 2-matrix $\mathcal{X}(\mathrm{id}_n)$ is the ground ring $K$ and each entry off the diagonal is zero.
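This assignment is consistent with (\ref{equ:tsigma}) (a consistency check of our own): for the annulus type $t=(1,1;\,)$ with no further entries, the middle factor $\Phi(t;\zeta)$ is the empty tensor product $\mathbb{1}$, so $\Phi(t;\zeta;i,j)\cong V_i^*\otimes V_j$, and for simple objects
\[
\mathcal{X}(\Sigma_t)_{ij}=\mathrm{Hom}(\mathbb{1}, V_i^*\otimes V_j)
\;\cong\;\mathrm{Hom}(V_i, V_j)\;\cong\;
\begin{cases}
K & \text{if } i=j,\\
0 & \text{if } i\neq j,
\end{cases}
\]
so the 2-matrix assigned to this annulus is the identity 2-matrix, matching the assignment for the formal identity $\mathrm{id}_1$.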
\subsection{$\mathcal{X}$ on 2-morphisms}
Let $[M]: \Sigma_{t} \Rightarrow \Sigma_{s}: \nstand{m} \to \nstand{n}$ be a 2-morphism of $\mathrm{Co}$.
We need to define a $K$-homomorphism $\mathcal{X}([M])_{ij}$ from $\mathcal{X}(\Sigma_{t})_{ij}$ to $\mathcal{X}(\Sigma_{s})_{ij}$ for each $i\in I^m$ and $j\in I^n$.
This homomorphism will be obtained by applying the Reshetikhin-Turaev TQFT to the decorated cobordism obtained by ``capping'' or ``filling'' the left and the right boundaries of $M$.
\subsubsection{Filling $M$ by the standard handlebodies}
Let $(M, \phi)$ be a representative of the 2-morphism $[M]: \Sigma_{t} \Rightarrow \Sigma_{s}: \nstand{m} \to \nstand{n}$.
Here $\phi$ is a parametrization of the boundary $\partial M$.
Recall that using the parametrization, we can form the surface $\Sigma(\phi)$ by gluing standard surfaces.
Then the parametrization $\phi$ can be regarded as a homeomorphism from $\Sigma(\phi)$ to $\partial M$.
Consider the standard handlebodies $U_t$, $U_s^-$ and solid cylinders $D_m$, $D_n$ (see Figure \ref{fig:Dn}).
Their boundaries are capped standard surfaces.
Then the gluing map induced by the parametrization extends to the disks enclosed by boundary circles of the standard surfaces.
Gluing $U_t$, $U_s^-$, $D_m$, and $D_n$ along this homeomorphism, we obtain a 3-manifold whose boundary is $\Sigma(\phi)$.
We may also assume that the ribbon graphs glue well under this gluing.
Let $\mathcal{M}(t,s)$ be the manifold obtained by this procedure; we call it the \textit{standard handlebody} for the pair $(t,s)$.
By the Alexander trick, the manifold $\mathcal{M}(t, s)$ is defined up to homeomorphism.
The manifold $\mathcal{M}(t, s)$ is equipped with a ribbon graph obtained from the ribbon graphs $R_t$ and $R_s$ in $U_t$ and $U_s^-$, respectively, together with the vertical bands in $D_m$, $D_n$ joining uncolored bands along the embedded disks.
We denote this ribbon graph by $R(t,s)$ and call it the \textit{standard ribbon graph} for the pair $(t, s)$.
\begin{figure}[h]
\center
\includegraphics[width=3in]{ConnectedM2.pdf}
\caption{The standard handlebody $\mathcal{M}(t, s)$ and the standard ribbon graph $R(t,s)$}
\label{fig:standard handlebody with the standard ribbon}
\end{figure}
Now we glue the manifolds $M$ and $\mathcal{M}(t, s)$ along the boundaries by the parametrization $\phi$ and obtain the closed 3-manifold
\begin{equation}\label{equ: tilde M}
\mathrm{Fill}(M):=M\cup_{\phi} \mathcal{M}(t, s).
\end{equation}
In a sense, we ``filled'' the boundary of $M$ by the standard handlebody $\mathcal{M}(t, s)$.
It is equipped with a ribbon graph coming from the ribbon graph in $M$ and the standard ribbon graph $R(t, s)$ in $\mathcal{M}(t, s)$.
The same manifold can be obtained differently as follows.
First, we glue cylinders $D_m$ and $D_n$ to $M$ via $\phi$, which we denote by
\begin{equation}
\mathrm{Fill}_{\mathrm{c}}(M):=M\cup_{\phi}(D_m\sqcup D_n).
\end{equation}
Namely, we filled only cylindrical parts of $M$.
The subscript $\mathrm{c}$ in $\mathrm{Fill}_{\mathrm{c}}$ stands for ``cylinder''.
Then the boundary of $M\cup_{\phi}(D_m\sqcup D_n)$ is homeomorphic to the disjoint union of the capped standard boundaries $\hat{\Sigma}_t \sqcup \hat{\Sigma}_s^-$.
The gluing homeomorphism is given by the parametrization $\phi$ extended to embedded disks.
Thus $\mathrm{Fill}_{\mathrm{c}}(M)$ is an ordinary parametrized cobordism except that the ribbons in $D_m\sqcup D_n$ are not colored.
If we then glue the standard handlebodies $U_t$ and $U_s^-$ via the parametrization, we obtain the same manifold $\mathrm{Fill}(M)$ as in (\ref{equ: tilde M}).
\subsubsection{Definition of a $K$-homomorphism $\mathcal{X}([M])_{ij}$}
Fix the $i$-th element $i=(i_1, i_2, \dots, i_m)$ of $I^m$ and the $j$-th element $j=(j_1, \dots, j_n)$ of $I^n$ as before.
We will construct a $K$-homomorphism $\mathcal{X}([M])_{ij}$ from $\mathcal{X}(\Sigma_{t})_{ij}$ to $\mathcal{X}(\Sigma_{s})_{ij}$.
Let us give colors to the uncolored ribbon graphs of $D_m \sqcup D_n$ in $\mathrm{Fill}_{\mathrm{c}}(M)=M\cup_{\phi}(D_m\sqcup D_n)$ as follows.
Order the uncolored bands in $D_m$ from the left and order the bands in $D_n$ from the right according to the order of the circles in the boundary surface.
We color the $k$-th left uncolored band with the simple object $V_{i_k}$ for $k=1, \dots, m$.
Also we color the $l$-th right uncolored band with the simple object $V_{j_l}$ for $l=1, \dots, n$.
Then $\mathrm{Fill}_{\mathrm{c}}(M)$ together with this $v$-colored ribbon graph, which we denote by $\mathrm{Fill}_{\mathrm{c}}(M)_{ij}$, is an ordinary decorated cobordism of the Reshetikhin-Turaev type.
Applying the Reshetikhin-Turaev TQFT, we obtain a $K$-homomorphism $\tau(\mathrm{Fill}_{\mathrm{c}}(M)_{ij})$ from $\mathcal{X}(\Sigma_t)_{ij}$ to $\mathcal{X}(\Sigma_s)_{ij}$.
\begin{lemma}
If $(M, \phi)$ is equivalent to $(N, \psi)$, then $\tau(\mathrm{Fill}_{\mathrm{c}}(M)_{ij})=\tau(\mathrm{Fill}_{\mathrm{c}}(N)_{ij})$.
\end{lemma}
\begin{proof}
Since $M$ and $N$ are equivalent, we see that $\mathrm{Fill}_{\mathrm{c}}(M)_{ij}$ is $d$-homeomorphic to $\mathrm{Fill}_{\mathrm{c}}(N)_{ij}$.
Since the Reshetikhin-Turaev TQFT $\tau$ is invariant under $d$-homeomorphisms, we have the result.
(To see this invariance, note that $d$-homeomorphic cobordisms have the same special ribbon graph representation.)
\end{proof}
Thus the $K$-homomorphism $\tau(\mathrm{Fill}_{\mathrm{c}}(M)_{ij})$ is independent of the choice of a representative of $[M]$.
Hence we can define the $(i,j)$-entry of the 2-matrix $\mathcal{X}(M)$ to be
\begin{equation}\label{equ:XM ij}
\mathcal{X}([M])_{ij}:=\tau(\mathrm{Fill}_{\mathrm{c}}(M)_{ij}).
\end{equation}
For each formal identity $\mathrm{id}_{\Sigma_t} \in \mathrm{Co}(\nstand{m}, \nstand{n})$, $\mathcal{X}$ assigns the $\mathbf{k}^m$ by $\mathbf{k}^n$ 2-homomorphism matrix whose $(i,j)$-entry is the identity self-homomorphism of the module $\mathcal{X}(\Sigma_t)_{ij}$.
For the formal horizontal unit 2-morphism $\mathrm{id}_{\mathrm{id}_n}$, $\mathcal{X}$ assigns the $\mathbf{k}^n$ by $\mathbf{k}^n$ identity 2-homomorphism.
Namely, each of its diagonal entries is the identity self-homomorphism of the base ring $K$ and the off-diagonal entries are zero.
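In other words, identifying 2-homomorphisms with matrices of $K$-homomorphisms, the value on the horizontal unit can be written entrywise as
\[
\mathcal{X}(\mathrm{id}_{\mathrm{id}_n})_{ij}=\delta_{ij}\,\mathrm{id}_K, \qquad i, j \in I^n,
\]
where $\delta_{ij}$ denotes the Kronecker delta.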
\subsubsection{Representation by a ribbon graph}
The rest of the paper is devoted to proving that the assignment $\mathcal{X}$ is indeed a projective pseudo 2-functor.
The key ingredient of the proof is an explicit formula for the homomorphism $\mathcal{X}(M)_{ij}$, obtained by representing $M$ by a special ribbon graph as in Section \ref{subsec:explicit formula for tau(M)}.
We define a \textit{special ribbon graph} for $(M, i, j)$ to be a special ribbon graph for $\mathrm{Fill}_{\mathrm{c}}(M)_{ij}$.
In place of $M$, $R_t$, and $-R_s$ in Section \ref{subsec:explicit formula for tau(M)}, we just need to use $\mathrm{Fill}_{\mathrm{c}}(M)_{ij}$, $R_t(i,j)$, and $-R_s(i,j)$.
As noted above, gluing handlebodies to fill the boundary of $\mathrm{Fill}_{\mathrm{c}}(M)_{ij}$ produces $\mathrm{Fill}(M)$ with the uncolored vertical bands colored according to $(i,j)$.
The $(i,j)$-colored $\mathrm{Fill}(M)$ is denoted by $\mathrm{Fill}(M)_{ij}$ and the $(i, j)$-colored standard ribbon graph $R(t,s)$ is denoted by $R(t,s)_{ij}$.
By changing to an equivalent manifold if necessary, we may assume that a special ribbon graph for $(M, i,j)$, which is denoted by $\Omega_{(M, i,j)}$, is a disjoint union of the standard ribbon graph $R(t,s)_{ij}$, a surgery link $L=L_1\cup \cdots\cup L_{\mu}$, and a ribbon graph of $M$.
The ribbon graph obtained by replacing $R(t,s)_{ij}$ by $R(t,s)$ is denoted by $\Omega_M$; it is thus obtained from $\Omega_{(M, i,j)}$ by removing the colors of the left and right vertical bands.
Note that if $M$ has no corners, then $\Omega_M$ agrees with the special ribbon graph defined in Section \ref{subsec:explicit formula for tau(M)}.
In summary, we have the following explicit formula for the $(\zeta, \eta)$-block matrix:
\begin{align}\label{equ:tau zeta eta extended}
(\mathcal{X}(M)_{ij})_{\zeta}^{\eta}&=\tau(\mathrm{Fill}_{\mathrm{c}}(M)_{ij})_{\zeta}^{\eta} \notag \\
&= \Delta^{\sigma(L)} \mathcal{D}^{-g^+ -\sigma(L) - \mu} \dim(\eta) \sum_{\lambda \in \mathrm{col}(L)} \dim(\lambda) \mathcal{F}_0(\Omega_{(M, i, j)}, \zeta, \eta, \lambda),
\end{align}
where $g^+$ is the sum of the integer entries of $s$ and
\[\dim(\eta):=\prod_{i=1}^q\dim(\eta_i) \]
with
\[\dim(\eta_i):=
\begin{cases}
\prod_{l=1}^{b_i} \dim(\eta_i^l) & \mbox{ if } b_i \in \Z \\
1 & \mbox{ if } b_i \mbox{ is a mark.}
\end{cases}
\]
\section{Main Theorem}\label{sec:Main Theorem}
So far we defined the 2-category of decorated cobordisms with corners $\mathrm{Co}$, where cobordisms are decorated by a modular category $\mathcal{V}$.
We constructed the assignment $\mathcal{X}$ from $\mathrm{Co}$ to the Kapranov-Voevodsky 2-vector spaces $2\-\mathrm{Vect}$.
The explicit formula was obtained by expressing a cobordism with corners $M$ by the special ribbon graph $\Omega_M$ in $S^3$.
Now we prove the following main theorem of the current paper.
\begin{thm}\label{thm:main theorem}
The assignment $\mathcal{X}$ defined above is a projective pseudo 2-functor from $\mathrm{Co}$ to $2\-\mathrm{Vect}$.
\end{thm}
For our conventions on the language of 2-categories, see the Appendix.
This theorem follows from several propositions below.
The idea of the proof is that we reduce the gluings of cobordisms to the gluing of special ribbon graphs and work with the explicit formula.
\subsection{Vertical Projective Functor}
The vertical composition is not preserved by $\mathcal{X}$ because of the anomaly in the Reshetikhin-Turaev TQFT.
Instead, we claim that $\mathcal{X}$ is a projective functor on the hom-category $\mathrm{Co}(\nstand{m}, \nstand{n})$.
For a projective functor to exist, the target category should be a $K$-module for some commutative ring $K$.
In our case, the target category is the hom-category $2\-\mathrm{Vect}(\mathbf{k}^m, \mathbf{k}^n)$.
This hom-category is a $K=\mathrm{Hom}(\mathbb{1},\mathbb{1})$-module by multiplying an element $k\in K$ component-wise:
\[ k\cdot (f_{ij})_{ij}:=(kf_{ij})_{ij},\]
where $(f_{ij})_{ij}$ is a 2-morphism in $2\-\mathrm{Vect}$.
First we state results regarding the anomaly of the original RT TQFT.
Let $M_1$ and $M_2$ be composable decorated cobordisms (without corners).
(As always in this paper, we assume the source and the target boundary surfaces are both connected.)
Let $L_1$, $L_2$ and $L$ be surgery links for special ribbon graphs $\Omega_{M_1}$, $\Omega_{M_2}$ and $\Omega_{M_1\cdot M_2}$ of $M_1$, $M_2$ and $M_1\cdot M_2$, respectively.
\begin{lemma}
Using notations above, the vertical concatenation $\Omega_{M_1}\cdot \Omega_{M_2}$ of the special ribbon graphs $\Omega_{M_1}$ and $\Omega_{M_2}$ is a special ribbon graph for $M_1 \cdot M_2$.
\end{lemma}
\begin{lemma}\label{lem:vertical anomaly of original RT}
With the same notations as above, we have
\[\tau(M_1\cdot M_2)= k(M_1, M_2)\tau(M_1)\cdot \tau(M_2),\]
where
\[k(M_1, M_2)=(\mathcal{D} \Delta)^{\sigma(L_1)+\sigma(L_2)-\sigma(L)}. \]
\end{lemma}
The proofs of both lemmas can be found in \cite[Lemma I\hspace{-.1em}V 2.1.2]{Turaev10}.
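As an illustrative special case (not needed in what follows): if the special ribbon graphs can be chosen so that concatenation creates no new surgery components, so that $L=L_1\sqcup L_2$ is an unlinked union, then the linking matrix of $L$ is the block sum of those of $L_1$ and $L_2$, and the gluing is anomaly-free:
\[
k(M_1, M_2)=(\mathcal{D}\Delta)^{\sigma(L_1)+\sigma(L_2)-\sigma(L_1\sqcup L_2)}=(\mathcal{D}\Delta)^{0}=1.
\]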
\begin{prop}[Vertical composition]\label{lem:2vertical composition}
Let $[(M_1, \phi_1)]:\Sigma_{t_1}\Rightarrow \Sigma_{t_2}: \nstand{m} \to \nstand{n}$ and $[(M_2, \phi_2)]:\Sigma_{t_2}\Rightarrow \Sigma_{t_3}: \nstand{m} \to \nstand{n}$ be (non-formal) 2-morphisms of $\mathrm{Co}$ so that the target 1-morphism of $[M_1]$ is equal to the source 1-morphism of $[M_2]$.
Then we have
\[\mathcal{X}(M_1\cdot M_2)=k(M_1, M_2) \mathcal{X}(M_1)\cdot \mathcal{X}(M_2),\]
where $k(M_1, M_2) \in K$ is a gluing anomaly of the pair $(M_1, M_2)$ given as follows.
Let $L_1$, $L_2$ and $L$ be surgery links of $\Omega_{M_1}$, $\Omega_{M_2}$ and $\Omega_{M_1} \cdot \Omega_{M_2}$, respectively.
Then
\[k(M_1, M_2)=(\mathcal{D} \Delta)^{\sigma(L_1)+\sigma(L_2)-\sigma(L)}.\]
\end{prop}
\begin{proof}
It suffices to show that the equality
\begin{equation*}\label{equ:XMM}
\mathcal{X}(M_1 \cdot M_2)_{ij} =k(M_1, M_2) \left( \mathcal{X}(M_1) \cdot \mathcal{X}(M_2) \right)_{ij}
\end{equation*}
holds for $i\in I^m$ and $j\in I^n$.
Note that the surgery links $L_1$, $L_2$ and $L$ are independent of the indices $i$ and $j$, and hence so is $k(M_1, M_2)$.
By the definition of $\mathcal{X}(M)_{ij}$ (see (\ref{equ:XM ij})), the above equality is equivalent to the following equality:
\begin{equation}\label{equ:tau fill MM}
\tau\left(\mathrm{Fill}_{\mathrm{c}}(M_1 \cdot M_2)_{ij}\right)=k(M_1, M_2) \tau\left(\mathrm{Fill}_{\mathrm{c}}(M_1)_{ij}\right) \circ \tau\left(\mathrm{Fill}_{\mathrm{c}}(M_2)_{ij}\right).
\end{equation}
Since filling corners and vertical gluing commute, we have
\[\mathrm{Fill}_{\mathrm{c}}(M_1 \cdot M_2)_{ij} = \mathrm{Fill}_{\mathrm{c}}(M_1)_{ij} \cdot \mathrm{Fill}_{\mathrm{c}}(M_2)_{ij}.\]
By definition, the special ribbon graphs $(\Omega_{M_1})_{ij}$ and $(\Omega_{M_2})_{ij}$ represent the cobordisms $\mathrm{Fill}_{\mathrm{c}}(M_1)_{ij}$ and $\mathrm{Fill}_{\mathrm{c}}(M_2)_{ij}$, respectively.
Note that the surgery links of $(\Omega_{M_1})_{ij}$ and $(\Omega_{M_2})_{ij}$ are the same as those of $\Omega_{M_1}$ and $\Omega_{M_2}$, since the only difference between them is the colors of the left and right vertical bands, which do not belong to the surgery links.
Thus, the equality (\ref{equ:tau fill MM}) follows from Lemma \ref{lem:vertical anomaly of original RT}.
\end{proof}
If one of $M_1$ and $M_2$ is the vertical identity, then we set $k(M_1, M_2)=1 \in K$.
\begin{lemma}\label{lem:vertical anomaly associativity}
Suppose that $M_1$, $M_2$, and $M_3$ are three vertically composable 2-morphisms of $\mathrm{Co}$, so that we can form the 2-morphism $M_1 \cdot M_2 \cdot M_3$.
Then we have
\begin{equation}\label{equ:vertical anomaly}
k(M_1, M_2 \cdot M_3)k(M_2, M_3)=k(M_1 \cdot M_2, M_3)k(M_1, M_2).
\end{equation}
\end{lemma}
\begin{proof}
For $i=1,2,3$ present the cobordism $M_i$ by a special ribbon graph $\Omega_{M_i}$ and let $L_i$ be the surgery link in $\Omega_{M_i}$.
The cobordism $M_1\cdot M_2$ is represented by the special ribbon graph $\Omega_{M_1} \cdot \Omega_{M_2}$.
Let $L_{12}$ be the part of the surgery link of $\Omega_{M_1} \cdot \Omega_{M_2}$ that is not in $L_1 \cup L_2$; namely, $L_{12}$ consists of the ribbons that newly emerge when we concatenate $\Omega_{M_1}$ and $\Omega_{M_2}$.
Write $L:=L_1\cup L_2\cup L_{12}$ for the surgery link of $\Omega_{M_1} \cdot \Omega_{M_2}$.
Similarly, let $L_{23}$ be the part of the surgery link of $\Omega_{M_2} \cdot \Omega_{M_3}$ that is not in $L_2 \cup L_3$, and write $L':=L_2\cup L_3\cup L_{23}$.
Let $L_{123}$ be the surgery link of $\Omega_{M_1}\cdot \Omega_{M_2} \cdot \Omega_{M_3}$, namely the union of all of the above surgery links.
By Proposition \ref{lem:2vertical composition}, anomalies can be computed from signatures of surgery links.
Thus, the equality (\ref{equ:vertical anomaly}) is equivalent to the equality
\begin{align*}
[\sigma(L)+\sigma(L_3)-\sigma(L_{123})] +[ \sigma(L_1)+\sigma(L_2)-\sigma(L)]\\
=[\sigma(L_1)+\sigma(L')-\sigma(L_{123})]+[\sigma(L_2) +\sigma(L_3)-\sigma(L')].
\end{align*}
Since both sides are equal to $\sigma(L_1)+\sigma(L_2)+\sigma(L_3)-\sigma(L_{123})$, the equality holds.
\end{proof}
The results of Proposition \ref{lem:2vertical composition} and Lemma \ref{lem:vertical anomaly associativity} can be summarized as follows.
\begin{prop}\label{prop:vertical projective functor}
The assignment $\mathcal{X}$ is a projective functor from the hom-category $\mathrm{Co}(\nstand{m}, \nstand{n})$ to the hom-category $2\-\mathrm{Vect}(\mathbf{k}^m, \mathbf{k}^n)$.
\end{prop}
\subsection{Horizontal Axioms}
Now we are going to study how the assignment $\mathcal{X}$ behaves on horizontal gluings.
For each type $t=(m,n; a_1, \cdots, a_p)$, recall the following notations from Section \ref{sec:X on 1-morphisms}:
\begin{equation*}
\Phi(t; \zeta)= H^{a_1}_{\zeta_1} \otimes H^{a_2}_{\zeta_2} \otimes \cdots \otimes H^{a_p}_{\zeta_p}
\end{equation*}
and
\begin{equation*}
\Phi(t; \zeta; i, j)=V^*_{i_1} \otimes V^*_{i_2} \otimes \cdots \otimes V^*_{i_m} \otimes \Phi(t, \zeta) \otimes V_{j_n}\otimes V_{j_{n-1}}\otimes \cdots \otimes V_{j_1}.
\end{equation*}
Also recall that we defined the module
\begin{equation*}
\mathcal{X}(\Sigma_t)_{ij}=\bigoplus_{\zeta \in I^t} \mathrm{Hom} \big(\mathbb{1}, \Phi(t; \zeta;i,j) \big)
\end{equation*}
\begin{prop}\label{prop:2-functor on 1-morphisms}
Let $\Sigma_{t_1}: l \to m$ and $\Sigma_{t_2}:m \to n$ be composable 1-morphisms of $\mathrm{Co}$.
Then the 2-matrix $\mathcal{X}(\Sigma_{t_1}\circ \Sigma_{t_2})$ is canonically isomorphic to the 2-matrix $\mathcal{X}(\Sigma_{t_1})\mathcal{X}(\Sigma_{t_2})$.
\end{prop}
\begin{proof}
The $(h, j)$-component of the product of the 2-matrices $\mathcal{X}(\Sigma_{t_1})$ and $\mathcal{X}(\Sigma_{t_2})$ is the module
\begin{align}\label{equ:product of 2matrices}
\notag \left( \mathcal{X}(\Sigma_{t_1}) \circ \mathcal{X}(\Sigma_{t_2}) \right)_{hj}=\bigoplus_{i \in I^m} \mathcal{X}(\Sigma_{t_1})_{h, i}\otimes \mathcal{X}(\Sigma_{t_2})_{i, j}\\
= \bigoplus_{i \in I^m} \left[ \bigoplus_{\zeta \in I^{t_1}} \mathrm{Hom} \big(\mathbb{1}, \Phi(t_1; \zeta;h,i) \big) \otimes \bigoplus_{\eta \in I^{t_2}} \mathrm{Hom} \big(\mathbb{1}, \Phi(t_2; \eta;i,j) \big) \right].
\end{align}
Using Lemma \ref{lem:sum over simple 2}, we sum over $i_1$, and the module (\ref{equ:product of 2matrices}) is isomorphic to
\begin{equation}\label{equ:product of 2matrices second}
\bigoplus_{i=(i_2, \dots, i_{m})\in I^{m-1}} \bigoplus_{\zeta \in I^{t_1}, \eta \in I^{t_2}}\mathrm{Hom}\left(\mathbb{1}, U(i, \zeta, \eta) \right),
\end{equation}
where $U(i, \zeta, \eta)$ is the following module.
\begin{multline*}
V^*_{h_1}\otimes \cdots \otimes V^*_{h_l} \otimes \Phi(t_1, \zeta) \\
\otimes V_{i_m}\otimes V_{i_{m-1}}\otimes \cdots \otimes V_{i_{2}}
\otimes V_{i_{2}}^*\otimes V_{i_{3}}^*
\otimes \cdots \otimes V_{i_m}^* \\
\otimes \Phi(t_2, \eta) \otimes V_{j_n}\otimes \cdots \otimes V_{j_1}.
\end{multline*}
Note that we have the equality
\[\bigoplus_{i=(i_2, \dots, i_{m})\in I^{m-1}} \bigoplus_{\zeta \in I^{t_1}, \eta \in I^{t_2}}=\bigoplus_{\xi \in I^{t_1 \circ t_2} }\]
and for $\xi=(\zeta, i, \eta)\in I^{t_1\circ t_2}$ with $\zeta \in I^{t_1}, i\in I^{m-1}, \eta \in I^{t_2}$, the object $\Phi(t_1 \circ t_2, \xi)$ is equal to
\[\Phi(t_1, \zeta)\otimes V_{i_m}\otimes V_{i_{m-1}}\otimes \cdots \otimes V_{i_{2}}
\otimes V_{i_{2}}^*\otimes V_{i_{3}}^*
\otimes \cdots \otimes V_{i_m}^*
\otimes \Phi(t_2, \eta).\]
Thus the module (\ref{equ:product of 2matrices second}) is equal to the module
\begin{align*}
&\bigoplus_{\xi \in I^{t_1 \circ t_2} }\mathrm{Hom}(\mathbb{1}, V^*_{h_1}\otimes \cdots \otimes V^*_{h_l}\otimes \Phi(t_1 \circ t_2, \xi) \otimes V_{j_n}\otimes \cdots \otimes V_{j_1}) \\
&=\bigoplus_{\xi \in I^{t_1 \circ t_2} }\mathrm{Hom}(\mathbb{1}, \Phi(t_1\circ t_2; \xi; h,j)) =\mathcal{X}(\Sigma_{t_1\circ t_2})_{hj}.
\end{align*}
Note that the isomorphism from $\mathcal{X}(\Sigma_{t_1}) \circ \mathcal{X}(\Sigma_{t_2})$ to $\mathcal{X}(\Sigma_{t_1\circ t_2})$ is given by the isomorphism $u$ of Lemma \ref{lem:sum over simple 2}.
This fact will be used in the proof of Lemma \ref{prop:2horizontal} below.
\end{proof}
We saw that vertical composition of cobordisms corresponds to concatenation of their special ribbon graphs.
This correspondence was the key observation in proving the projective functoriality of $\mathcal{X}$.
Similarly, to investigate how horizontal composition behaves under the map $\mathcal{X}$, we first need to study how horizontal composition of cobordisms can be expressed as an operation on special ribbon graphs.
The obvious guess is to juxtapose two special ribbon graphs.
But juxtaposition does not correspond to horizontal composition of cobordisms.
This can be seen, for instance, by noting that the type of the bottom surface is not the desired one.
Let $[M]: \Sigma_{t_1} \Rightarrow \Sigma_{t_2}: \nstand{l} \to \nstand{m}$ and $[M']:\Sigma_{s_1} \Rightarrow \Sigma_{s_2}: \nstand{m}\to \nstand{n}$ be 2-morphisms which can be glued horizontally.
Let $\Omega_M$ and $\Omega_{M'}$ be special ribbon graphs representing the cobordisms $M$ and $M'$, respectively.
Recall that the special ribbon graph $\Omega_M$ consists of ribbons from $R_{t_1}$ and $-R_{t_2}$ with uncolored vertical bands connected, and a surgery link.
The surgery link may be tangled with $R_{t_1}$ and $-R_{t_2}$ as in Figure \ref{fig:OmegaM}.
\begin{figure}[h]
\center
\includegraphics[width=3.8in]{OmegaM2.pdf}
\caption{The special ribbon graph $\Omega_M$}
\label{fig:OmegaM}
\end{figure}
We may assume that the surgery link in $\Omega_M$ is away from the rightmost vertical band of $\Omega_M$ by pulling a component of the surgery link over the top coupon and bringing it to the other side.
Similarly, we may assume that no component of the surgery link in $\Omega_{M'}$ is tangled with the leftmost uncolored vertical band of $\Omega_{M'}$, as in Figure \ref{fig:nosurgerylink}.
\begin{figure}[h]
\center
\includegraphics[width=4.8in]{Nosurgerylink2.pdf}
\caption{No surgery link tangled at the rightmost and the leftmost}
\label{fig:nosurgerylink}
\end{figure}
We construct a new ribbon graph from these ribbon graphs $\Omega_{M}$ and $\Omega_{M'}$ as follows.
From $\Omega_{M}$, remove the rightmost uncolored vertical band and denote the resulting ribbon graph by $\Omega_{M}^{-}$.
Similarly, remove the leftmost uncolored vertical band from $\Omega_{M'}$ and denote the resulting ribbon graph by ${^{-}\Omega_{M'}}$.
We juxtapose $\Omega_{M}^{-}$ and $^{-}{\Omega_{M'}}$ so that $\Omega_{M}^{-}$ is on the left of ${^{-}\Omega_{M'}}$, namely $\Omega_{M}^{-} \otimes {^{-}\Omega_{M'}}$ in the category $\mathrm{Rib}_{\V}$.
In the middle of the ribbon graph $\Omega_{M}^{-} \otimes {^{-}\Omega_{M'}}$, there are $2(m-1)$ uncolored vertical bands coming from the right uncolored vertical bands of $\Omega_{M}^{-}$ and the left uncolored vertical bands of ${^{-}\Omega_{M'}}$.
For each natural number $n$, let $\omega_{n}$ be a ribbon graph in $\mathbb{R}^2\times [0,1]\subset \mathbb{R}^3$ defined in Figure \ref{fig:omega}.
The number of annulus ribbons in $\omega_{n}$ is $n$.
\begin{figure}[h]
\center
\includegraphics[width=3in]{Omegagraph.pdf}
\caption{Ribbon graph $\omega_{n}$}
\label{fig:omega}
\end{figure}
On the bottom of these $2(m-1)$ bands, we attach the ribbon graph $\omega_{m-1}$ defined in Figure \ref{fig:omega}.
Let $\Omega_{M, M'}$ denote the resulting ribbon graph fitted in $\mathbb{R}^2 \times [0,1]$.
See Figure \ref{fig:Horizontal Special Ribbon } for an example.
\begin{figure}[h]
\center
\includegraphics[width=4in]{HorizontalSpecialRibbon2.pdf}
\caption{Special ribbon graph $\Omega_{M, M'}$ for horizontal gluing}
\label{fig:Horizontal Special Ribbon }
\end{figure}
\begin{lemma}\label{lem:horizontal glue of ribbons}
The ribbon graph $\Omega_{M, M'}$ constructed above represents the horizontally glued cobordism $M\circ M'$.
\end{lemma}
\begin{proof}
First note that since the surgery links of $\Omega_M$ and $\Omega_{M'}$ are away from the neighborhoods of the uncolored vertical bands of $\Omega_M$ and $\Omega_{M'}$, the order of the gluing and the surgery is interchangeable.
From $S^3$ with the ribbon graph $\Omega_M$ in it, let us cut out a regular neighborhood of the bottom coupon and top coupon and rainbow bands attached to them.
Assuming the neighborhood of the top coupon contains the infinity in $S^3=\mathbb{R}^3 \cup \{ \infty \}$, we may assume that the rest of the ribbon graph lies in $S^2 \times [0,1] \subset \mathbb{R}^3$.
Similarly for $\Omega_{M'}$.
The horizontal gluing of $M$ and $M'$ now corresponds to cutting out the regular neighborhoods of the right vertical bands of $\Omega_M\subset S^2 \times [0,1]$ and the left vertical bands of $\Omega_{M'} \subset S^2 \times [0,1]$, identifying their boundaries, and doing surgery.
We decompose this procedure into several steps.
Instead of cutting out those neighborhoods at the same time, we first cut out only the rightmost vertical band of $\Omega_M$ and the leftmost vertical band of $\Omega_{M'}$.
Then we identify the boundary.
This gluing can be realized in $\mathbb{R}^3$ as in Figure \ref{fig:first gluing}.
\begin{figure}[h]
\center
\includegraphics[width=4.6in]{Firstgluing2.pdf}
\caption{Gluing the first corners}
\label{fig:first gluing}
\end{figure}
Thus the horizontally glued cobordism $M \circ M'$ can be obtained from the ribbon graph $\Omega_{M}^{-} \otimes {^{-}\Omega_{M'}}$ sitting in $S^2\times [0,1]$ by removing the neighborhoods of the middle $2(m-1)$ uncolored vertical bands and identifying their boundaries.
Now we start from the ribbon graph $\Omega_{M, M'}$.
Attach coupons on the top and the bottom of the graph $\Omega_{M, M'}$.
Cut out the regular neighborhoods $T$ of the top and the bottom coupons and rainbow bands.
The rest of the ribbon graph lies in $S^2 \times [0,1]\subset \mathbb{R}^3$.
We do surgery along the surgery link of $\omega_{m-1}$.
Let us describe this surgery carefully.
We follow the argument given in \cite[Lemma I\hspace{-.1em}V 2.6]{Turaev10}.
The ribbon graph $\omega_{m-1}$ has $m-1$ annuli along which we do surgery.
Let $A_r$ be the $r$-th annulus of $\omega_{m-1}$ for $r=1,\dots, m-1$.
We present this annulus in the form $A_r=D_r \setminus \mathrm{Int}(D_r')$, where $D_r$ and $D_r'$ are concentric 2-disks in $\mathbb{R}^2 \times [0,1]$ such that $D_r' \subset \mathrm{Int}(D_r)$ and $D_r'$ transversally intersects $\omega_{m-1}$ along two short intervals lying on two bands of $\omega_{m-1}$ linked by the annulus $A_r$, see Figure \ref{fig:annulus in omega}.
\begin{figure}[htbp]
\begin{minipage}{0.4\hsize}
\begin{center}
\includegraphics[width=2in]{figure66.pdf}
\end{center}
\caption{The annulus $A_r$ in $\omega_{m-1}$}
\label{fig:annulus in omega}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=3in]{Figure67.pdf}
\end{center}
\caption{Gluing $F(r)$}
\label{fig:gluing F(r)}
\end{minipage}
\end{figure}
Consider a regular neighborhood $D_r \times [-1, 1]$ in $\mathbb{R}^2 \times (0,1)$ of the larger disk $D_r$.
We think that $D_r$ lies in $D_r\times \{0\}$.
We assume that there are no redundant crossings.
Namely, locally the picture is as in Figure \ref{fig:annulus in omega}.
Let $B_r^{-}$ and $B_r^+$ be small closed disjoint 2-disks in $\mathrm{Int}(D_r')$, and assume that the intersection of $D_r \times [-1, 1]$ and $T$ is the union of the subcylinders $B_r^-\times [-1, 1]$ and $B_r^+ \times [-1,1]$.
The surgery along the framed knot defined by $A_r$ may be described as follows.
Consider the solid torus
\[A_r \times [-1,1]= (D_r \times [-1,1]) \setminus (\mathrm{Int}(D_r') \times [-1,1]) \subset S^3.\]
Its boundary consists of four annuli $A_r \times \{-1\}$, $A_r \times \{1\}$, $\partial D_r \times [-1,1]$, $\partial D_r' \times [-1,1]$.
We remove the interior of $A_r \times [-1,1]$ from $S^3 \setminus \mathrm{Int}(T)$ and glue in its place the standard solid torus $D^2\times S^1$.
The gluing is performed along a homeomorphism $\partial(A_r \times [-1, 1]) \to \partial (D^2 \times S^1)$ carrying each circle $\partial D_r' \times \{t\}$ with $t\in [-1,1]$ onto a circle $\partial D^2 \times \{x\}$ with $x\in S^1$.
Let $E(r)$ denote the solid 3-cylinder formed by the disks $D^2 \times \{x\}$ glued to $\partial D_r' \times \{t\}$ with $t\in [-1,1]$.
Let $F(r)$ denote the complementary solid 3-cylinder $\overline{(D^2 \times S^1) \setminus E(r)}$.
For $r=1, \dots, m-1$ consider the genus 2 handlebody
\[ (D_r' \times [-1,1]) \setminus \mathrm{Int}(T) =(D_r' \setminus (\mathrm{Int}(B_r^- \cup B_r^+))) \times [-1, 1] \]
and glue $E(r)$ to it as specified above.
This gives a 3-cobordism with bases $\partial B_r^- \times [-1, 1]$ and $\partial B_r^+ \times [-1,1]$ lying in the bottom boundary and the top boundary, respectively, of the cobordism represented by $\Omega_{M, M'}$.
This cobordism is a cylinder over $\partial B_r^- \times [-1,1]$.
Indeed, for $t\in [-1,1]$, the disk $D^2 \times \{x\} \subset E(r)$ glued to $\partial D_r' \times \{t\}$ and the disk with two holes $(D_r' \setminus \mathrm{Int}(B_r^- \cup B_r^+)) \times \{t\}$ form an annulus with bases $\partial B_r^- \times \{t\}$ and $\partial B_r^+ \times \{t\}$.
These annuli corresponding to all $t \in [-1,1]$ form the cylinder in question.
When $r$ runs over $1, \dots, m-1$, we get $m-1$ cylinder cobordisms.
We may glue each $F(r)$ inside $D_r \times [-1,1]\subset S^3$.
Then locally this is a complement of two cylinders as in Figure \ref{fig:gluing F(r)}.
Note that the union of these spaces corresponds to the identification of the cylindrical boundaries.
In Figure \ref{fig:identification of boundaries}, the space described on the left is the complement of the cylinders.
The second space is $\partial B_r^- \times [-1,1] \times [0,1]$.
The inner boundary corresponds to $\partial B_r^- \times [-1, 1] \times \{0\}$ and the outer boundary is $\partial B_r^+ \times [-1,1]\times \{1\}$.
For each $s\in[0,1]$, the cylinder $\partial B_r^-\times [-1,1]\times \{s\}$ is glued to the first space.
(The red circles indicate where to glue and the blue line indicates the interval $[0,1]$.)
This gluing of cylinder corresponds to identifying the time $s$ circles of the cylindrical boundaries.
\begin{figure}[h]
\center
\includegraphics[width=4in]{Figure68.pdf}
\caption{Identification of boundaries}
\label{fig:identification of boundaries}
\end{figure}
Thus the surgery along the framed link in $\omega_{m-1}$ is the same as cutting out regular neighborhoods of the middle uncolored vertical bands of $\Omega_{M}^{-} \otimes {^{-}\Omega_{M'}}$ and identifying the boundaries (after absorbing the small top and bottom cylindrical parts into the top and bottom boundaries by isotopy, respectively).
\end{proof}
Now that we have obtained the ribbon graph operation for horizontal gluing, we use it to study the behavior of the assignment $\mathcal{X}$ under horizontal gluing.
Let $[M_1]: \Sigma_{t_1} \Rightarrow \Sigma_{t_2}: \nstand{l} \to \nstand{m}$ and $[M_2]:\Sigma_{s_1} \Rightarrow \Sigma_{s_2}: \nstand{m} \to \nstand{n}$ be 2-morphisms of $\mathrm{Co}$ that can be glued horizontally.
Recall the canonical isomorphisms $u_1:\mathcal{X}(\Sigma_{t_1})\circ\mathcal{X}(\Sigma_{s_1}) \to \mathcal{X}(\Sigma_{t_1} \circ \Sigma_{s_1}) $ and $u_2:\mathcal{X}(\Sigma_{t_2})\circ \mathcal{X}(\Sigma_{s_2}) \to \mathcal{X}(\Sigma_{t_2} \circ \Sigma_{s_2})$ given in the proof of Proposition \ref{prop:2-functor on 1-morphisms}.
The next proposition shows that these isomorphisms intertwine the 2-homomorphisms $\mathcal{X}(M_1 \circ M_2)$ and $\mathcal{X}(M_1) \circ \mathcal{X}(M_2)$.
\begin{prop}[Horizontal composition]\label{prop:2horizontal}
Let $[M_1]: \Sigma_{t_1} \Rightarrow \Sigma_{t_2}: \nstand{l} \to \nstand{m}$ and $[M_2]:\Sigma_{s_1} \Rightarrow \Sigma_{s_2}: \nstand{m} \to \nstand{n}$ be 2-morphisms of $\mathrm{Co}$.
Then we have $\mathcal{X}(M_1\circ M_2)u_1=u_2(\mathcal{X}(M_1)\circ \mathcal{X}(M_2))$.
\end{prop}
\begin{proof}
Let $\Omega_1=\Omega_{M_1}$ and $\Omega_2=\Omega_{M_2}$ be special ribbon graphs representing $M_1$ and $M_2$, respectively, so that no surgery links are tangled with the rightmost vertical band of $\Omega_{1}$ and the leftmost vertical band of $\Omega_{2}$.
Then the special ribbon graph $\Omega=\Omega_{M_1, M_2}$ represents the horizontally glued cobordism $M_1\circ M_2$ by Lemma \ref{lem:horizontal glue of ribbons}.
From $\Omega_{1}$, remove the rightmost uncolored vertical band and denote the resulting ribbon graph by $\Omega_{1}^{-}$.
Similarly, remove the leftmost uncolored vertical band from $\Omega_{2}$ and denote the resulting ribbon graph by ${^{-}\Omega_{2}}$.
For $i=1,2$, we introduce the following notation.
Let $g_i^+$ be the number of cup like bands in $\Omega_{i}$, and let $g^+$ be the number of cup like bands in $\Omega$.
Since there are $m-1$ cup like bands in $\omega_{m-1}$, we have $g^+=g_1^+ + g_2^+ +(m-1)$.
Let $L_i$ be the surgery link in $\Omega_{i}$, and denote by $L_3$ the union of the $m-1$ annuli in $\omega_{m-1}$.
Denote by $L$ the surgery link of the ribbon graph $\Omega=\Omega_{M_1, M_2}$.
Then $L$ is a disjoint (unlinked) union of $L_1$, $L_2$ and $L_3$.
Let $\mu, \mu_1, \mu_2$ be the numbers of components of $L, L_1, L_2$, respectively.
We have $\mu=\mu_1 + \mu_2 + (m-1)$.
The $(h,j)$-component homomorphism $\mathcal{X}(M_1 \circ M_2)_{h,j}: \mathcal{X}(\Sigma_{t_1\circ s_1})_{h, j} \to \mathcal{X}(\Sigma_{t_2 \circ s_2})_{h, j}$ can be calculated by Formula (\ref{equ:tau zeta eta extended}).
Let $\zeta$ and $\eta$ be colors for the cap like bands and the cup like bands of $\Omega$, respectively.
We calculate $(\zeta, \eta)$-block
\[\mathcal{X}_{\zeta}^{\eta}:=\left(\mathcal{X}(M_1 \circ M_2)_{h, j}\right)_{\zeta}^{\eta}.\]
By Formula (\ref{equ:tau zeta eta extended}), we have
\begin{equation*}
\mathcal{X}_{\zeta}^{\eta}=\Delta^{\sigma(L)} \mathcal{D}^{-g^+ -\sigma(L)-\mu} \dim (\eta) \sum_{\lambda \in \mathrm{col}(L)} \dim (\lambda) F_0({_h\Omega_{j}}, \zeta, \eta, \lambda),
\end{equation*}
where ${_h\Omega_{j}}$ is the ribbon graph $\Omega$ with the left vertical bands colored by $h$ and the right vertical bands colored by $j$.
The ribbon graph ${_h\Omega_{j}}$ is the same as $\Omega_{(M_1\circ M_2, h, j)}$ in the notation of Formula (\ref{equ:tau zeta eta extended}).
Note that we have $\sigma(L)=\sigma(L_1)+\sigma(L_2)+\sigma(L_3)=\sigma(L_1)+\sigma(L_2)$, since the annuli of $\omega_{m-1}$ are separated and hence $\sigma(L_3)=0$.
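As a bookkeeping remark, the identities $g^+=g_1^+ + g_2^+ +(m-1)$, $\mu=\mu_1+\mu_2+(m-1)$, and $\sigma(L)=\sigma(L_1)+\sigma(L_2)$ established above show that the global normalization factor splits into the product of the local ones:
\[
\Delta^{\sigma(L)} \mathcal{D}^{-g^+ -\sigma(L)-\mu}
=\left(\Delta^{\sigma(L_1)} \mathcal{D}^{-g_1^+ -\sigma(L_1)-\mu_1}\right)
\left(\Delta^{\sigma(L_2)} \mathcal{D}^{-g_2^+ -\sigma(L_2)-\mu_2}\right)
\mathcal{D}^{-2(m-1)}.
\]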
We write $\eta=\eta_1+\eta_2+\eta_3$, where $\eta_i$ is a color of the cup like bands of $\Omega_i$ for $i=1, 2$, and $\eta_3$ is a color of the cup like bands of $\omega_{m-1}$.
Then we have $\dim(\eta)=\dim(\eta_1)\dim(\eta_2)\dim(\eta_3)$.
Write analogously $\zeta=\zeta_1 +\zeta_2 +\zeta_3$ for the cap like bands.
Similarly, we decompose a color $\lambda=\lambda_1 + \lambda_2 + \lambda_3$, where $\lambda_i$ is a color of $L_i$ for $i=1,2,3$.
Then we have $\dim(\lambda)=\dim(\lambda_1)\dim(\lambda_2)\dim(\lambda_3)$.
Expressing the ribbon graph $\Omega$ as a morphism of $\mathrm{Rib}_{\V}$, we have
\[\Omega=(\Omega_{1}^{-} \otimes {^{-}\Omega_{2}})(\mathrm{id}_1\otimes \omega_{m-1} \otimes \mathrm{id}_2).\]
See Figure \ref{fig:Horizontal Special Ribbon }.
Since the operator invariant $F$ is a monoidal functor on $\mathrm{Rib}_{\V}$, we have $F(_h\Omega_{j}, \zeta, \eta, \lambda)$
\[=\left( F(_h(\Omega_1^-), \zeta_1,\eta_1, \lambda_1) \otimes F(({^-\Omega_2})_j,\zeta_2, \eta_2, \lambda_2) \right) F(\mathrm{id}_1\otimes \omega_{m-1} \otimes \mathrm{id}_2, \zeta_3, \eta_3,\lambda_3).\]
Then $\mathcal{X}_{\zeta}^{\eta}$ is the composition of a morphism $\mathbb{1} \to \Phi(t_1\circ s_1; \zeta; h,j)$ with
\begin{multline*}
\bigg( \Delta^{\sigma(L_1)} \mathcal{D}^{-g_1^+ -\sigma(L_1)-\mu_1} \dim (\eta_1) \sum_{\lambda_1 \in \mathrm{col}(L_1)} \dim (\lambda_1) F(_h(\Omega_1^-), \zeta_1, \eta_1, \lambda_1)\\
\otimes
\Delta^{\sigma(L_2)} \mathcal{D}^{-g_2^+ -\sigma(L_2)-\mu_2} \dim (\eta_2) \sum_{\lambda_2 \in \mathrm{col}(L_2)} \dim (\lambda_2) F(({^-\Omega_2})_j, \zeta_2, \eta_2, \lambda_2) \bigg)\\
\Delta^{\sigma(L_3)} \mathcal{D}^{-(m-1) -\sigma(L_3)-(m-1)} \dim (\eta_3) \sum_{\lambda_3 \in \mathrm{col}(L_3)} \dim (\lambda_3) F(\mathrm{id}_1\otimes \omega_{m-1} \otimes \mathrm{id}_2, \zeta_3, \eta_3, \lambda_3).
\end{multline*}
We compute the last term.
First, since $\sigma(L_3)=0$, the last term reduces to
\begin{equation}
\mathcal{D}^{-2(m-1)} \dim (\eta_3) \sum_{\lambda_3 \in \mathrm{col}(L_3)} \dim (\lambda_3) F(\mathrm{id}_1\otimes \omega_{m-1} \otimes \mathrm{id}_2, \zeta_3, \eta_3, \lambda_3).
\end{equation}
We claim that the sum is zero unless $\zeta_3=\eta_3$.
If $\zeta_3=\eta_3$, then the sum is equal to the operator invariant of $\mathrm{id}_{\zeta}$.
Here $\mathrm{id}_{\zeta}$ denotes the vertical bands whose colors are determined according to $\zeta$.
We postpone the proof of this claim; it will be proved as a consequence of some graphical calculations in Lemma \ref{lem:claim} below.
Assuming the claim, we complete the current proof.
To show $\mathcal{X}(M_1\circ M_2)u_1=u_2(\mathcal{X}(M_1)\circ \mathcal{X}(M_2))$, it suffices to show
\begin{equation}\label{equ:ugg=gu}
\mathcal{X}(M_1\circ M_2)_{h,j}u_1=u_2(\mathcal{X}(M_1)_{h,i}\circ \mathcal{X}(M_2)_{i,j}),
\end{equation}
where $u_i$ ($i=1,2$) are the isomorphisms of Lemma \ref{lem:sum over simple 2}.
The left hand side of (\ref{equ:ugg=gu}) is equal to
\begin{equation}\label{equ:g (Omega Omega)}
\biggl[ \bigoplus_{\zeta, \eta} \Delta^{\sigma(L)}\mathcal{D}^{-g^+ - \sigma(L) - \mu} \dim (\eta) \sum_{\lambda} \dim (\lambda) F_0(_h \Omega_j, \zeta, \eta, \lambda) \biggr]u_1,
\end{equation}
where $\zeta \in I^{t_1\circ s_1}$ and $\eta \in I^{t_2 \circ s_2}$ decompose as $\zeta=\zeta_1+\zeta_2+\zeta_3$ and $\eta=\eta_1+\eta_2+\eta_3$ with notations as above.
By the claim we may assume that $\zeta_3=\eta_3$.
Hence we can rewrite (\ref{equ:g (Omega Omega)}) as follows.
\begin{multline*}
\bigoplus_{i\in I^{m-1} } \bigoplus_{\zeta_1, \zeta_2} \bigoplus_{\eta_1, \eta_2} \\\biggl[
\Delta^{\sigma(L_1)} \mathcal{D}^{-g_1^+ -\sigma(L_1)-\mu_1} \dim (\eta_1) \sum_{\lambda_1 \in \mathrm{col}(L_1)} \dim (\lambda_1) F(_h(\Omega_1^-)_i, \zeta_1, \eta_1, \lambda_1)\\
\otimes
\Delta^{\sigma(L_2)} \mathcal{D}^{-g_2^+ -\sigma(L_2)-\mu_2} \dim (\eta_2) \sum_{\lambda_2 \in \mathrm{col}(L_2)} \dim (\lambda_2) F(_i(^-\Omega_2)_j, \zeta_2, \eta_2, \lambda_2)\biggr]u_1.
\end{multline*}
The above expression is in turn equal to
\begin{multline*}\label{equ:big big bigoplus 2}
u_2\bigoplus_{i\in I^{m} } \bigoplus_{\zeta_1, \zeta_2} \bigoplus_{\eta_1, \eta_2} \\ \biggl[
\Delta^{\sigma(L_1)} \mathcal{D}^{-g_1^+ -\sigma(L_1)-\mu_1} \dim (\eta_1) \sum_{\lambda_1 \in \mathrm{col}(L_1)} \dim (\lambda_1) F(_h(\Omega_1)_i, \zeta_1, \eta_1, \lambda_1)\\
\otimes
\Delta^{\sigma(L_2)} \mathcal{D}^{-g_2^+ -\sigma(L_2)-\mu_2} \dim (\eta_2) \sum_{\lambda_2 \in \mathrm{col}(L_2)} \dim (\lambda_2) F(_i(\Omega_2)_j, \zeta_2, \eta_2, \lambda_2)\biggr].
\end{multline*}
To see this, note that, as a graphical calculation, $u_2$ connects the top of the rightmost band of $\Omega_1$ and the leftmost band of $\Omega_2$ by a cap-like band as in Figure \ref{fig:ugg-gu}.
Since no surgery links are tangled with those bands, we can push down the cap-like bands.
This explains the equality in Figure \ref{fig:ugg-gu}.
\begin{figure}
\center
\includegraphics[width=3in]{Graphicalcalculation.pdf}
\caption{The graphical calculation for (\ref{equ:ugg=gu}) }
\label{fig:ugg-gu}
\end{figure}
Finally by Formula (\ref{equ:tau zeta eta extended}), the above equation is equal to
\[ (\mathcal{X}(M_1)_{h,i} \circ \mathcal{X}(M_2)_{i,j}) u_1.\]
Thus the proof is complete assuming the claim, which we prove below.
\end{proof}
In the following graphical calculations, the dotted equality $\stackrel{\bullet}{=}$ means equality after applying the operator invariant functor $F$ to the ribbon graphs.
Consider the ribbon graphs in Figure \ref{fig:Fig310}.
The label $i$ is an arbitrary element of $I$.
\begin{figure}[h]
\center
\includegraphics[width=4in]{beforelemma611.pdf}
\caption{}
\label{fig:Fig310}
\end{figure}
\begin{lemma}\label{lem:Fig310}
The equality in Figure \ref{fig:Fig310} holds.
If the color of the left vertical strand is replaced with $j\neq i$, then the sum on the left hand side is equal to $0$.
\end{lemma}
\begin{proof}
See \cite[Section II.3, p.~98]{Turaev10}.
\end{proof}
\begin{lemma}\label{lem:claim}
For the ribbon graph $\omega_n$, let $\zeta$ and $\eta$ be the sequences of colors of the bottom and top rainbow-like bands of $\omega_n$.
Then we have
\begin{align}\label{equ:F of omega}
&\mathcal{D}^{-2n} \dim (\eta) \sum_{\lambda \in \mathrm{col}(L)} \dim (\lambda) F(\omega_{n}, \zeta, \eta, \lambda)\\
& \stackrel{\bullet}{=} \begin{cases} \mathrm{id}_{\zeta} &\mbox{if } \zeta=\eta \\ \notag
0 & \mbox{if } \zeta \neq \eta. \end{cases}
\end{align}
\end{lemma}
\begin{proof}
A part of the ribbon graph $\omega_n$ that consists of an annulus and a pair of a cup-like band and a cap-like band can be deformed so that it contains the ribbon graphs that appeared on the left hand side of the equation in Figure \ref{fig:Fig310}.
This deformation is depicted in the first equality in Figure \ref{fig:graphical calculation 2}.
It follows from Lemma \ref{lem:Fig310} that the sum on the left hand side of (\ref{equ:F of omega}) is zero unless $\zeta=\eta$.
If $\zeta=\eta$, then the calculation in Figure \ref{fig:graphical calculation 2} shows that each annulus gives rise to a factor $\mathcal{D}^2$ and the ribbon graph becomes $2n$ vertical bands.
\begin{figure}[h]
\center
\includegraphics[width=7in]{lemma611.pdf}
\caption{}
\label{fig:graphical calculation 2}
\end{figure}
\end{proof}
To wrap up this section, we state here the main theorem (Theorem \ref{thm:main theorem}) again and complete its proof.
\begin{thm}
The assignment $\mathcal{X}$ is a projective pseudo 2-functor from the 2-category $\mathrm{Co}$ of decorated cobordisms with corners to the 2-category $2\-\mathrm{Vect}$ of Kapranov-Voevodsky 2-vector spaces.
\end{thm}
\begin{proof}
We check conditions (1)--(4) and (M.1), (M.2) of a projective pseudo 2-functor from Section \ref{sec:2-functor}.
(Projectivity is defined in Definition \ref{def:projective functor}.)
In the current case $(F, \phi)$ in the notation of Section \ref{sec:2-functor} is $(\mathcal{X}, u)$, where $u$ is the map used in the proof of Proposition \ref{prop:2-functor on 1-morphisms}.
Condition (1) is just the definition of $\mathcal{X}$.
Condition (2) follows from Proposition \ref{prop:vertical projective functor}.
Condition (3), as well as (M.2), is satisfied since we use only formal identities.
Condition (4) follows from Propositions \ref{prop:2-functor on 1-morphisms} and \ref{prop:2horizontal}.
Finally, condition (M.1) follows since the isomorphism $u$ of Lemma \ref{lem:sum over simple 2} satisfies the analogous diagram.
\end{proof}
\subsection{The extended TQFT $\mathcal{X}$}\label{sec:the extended tqft}
\begin{Definition}
An \textit{extended TQFT} is a projective pseudo 2-functor from $\mathrm{Co}$ to $2\-\mathrm{Vect}$.
An extended TQFT \textit{extends} the Reshetikhin-Turaev TQFT if its restriction to the category $\mathrm{Co}({_* \emptyset}, \emptyset_*)$ of cobordisms without corners is the Reshetikhin-Turaev TQFT.
\end{Definition}
Our candidate for an extended TQFT that extends the Reshetikhin-Turaev TQFT is the projective pseudo 2-functor $\mathcal{X}$, which is an extended TQFT by definition.
\begin{prop}
The extended TQFT $\mathcal{X}$ extends the Reshetikhin-Turaev TQFT.
\end{prop}
\begin{proof}
Suppose $(M, \partial_{-} M, \partial_{+} M)$ is a cobordism without corners.
Then $M$ is represented by a special ribbon graph with a bottom type
$t^-=(0,0; a_1, \dots, a_p)$ and a top type $t^+=(0,0; b_1, \dots, b_q)$.
Since there are no left and right circles on boundary surfaces of $M$,
$\mathcal{X}(\partial_{-} M)$ is a $(1 \times 1)$ 2-matrix and we can canonically identify the 2-matrix with its only entry $\bigoplus_{i \in I^{t^-}}\mathrm{Hom}(\mathbb{1}, \Phi(t^-; i))$.
This module is what the RT TQFT assigns to $\partial_{-} M$.
Similarly for $\partial_{+} M$.
Then we can identify $\mathcal{X}(M)$ with a homomorphism from the module $\mathcal{X}(\partial_{-} M)$ to the module $\mathcal{X}(\partial_{+} M)$, which agrees with the RT TQFT by definition.
\end{proof}
\section{Comments}\label{sec:comments}
\subsection{A 2-category of Special Ribbon Graphs}
As we have seen, the construction of the projective pseudo 2-functor $\mathcal{X}$ from $\mathrm{Co}$ to $2\-\mathrm{Vect}$ depends extensively on the technique of representing a cobordism by a special ribbon graph.
On the level of objects, the 2-functor $\mathcal{X}$ extracts just the number of components of each 1-manifold.
On the 1-morphisms, the 2-functor $\mathcal{X}$ reads the type of each surface and outputs the projective module constructed only from the information of the type.
We can hence consider the 2-category $\mathrm{Srg}$ whose objects are integers, whose 1-morphisms are decorated types, and whose 2-morphisms are special ribbon graphs.
We could have defined $\mathcal{X}$ alternatively as the composition of a 2-functor from $\mathrm{Co}$ to $\mathrm{Srg}$ with a 2-functor from $\mathrm{Srg}$ to $2\-\mathrm{Vect}$.
However, to make it meaningful we need to define composition in $\mathrm{Srg}$ independently of $\mathrm{Co}$.
This would digress from the main line of our argument, and thus we have not done so.
\begin{center}
\begin{tabular}{ | l | l | l | }
\hline
& $\mathrm{Co}$ & $\mathrm{Srg}$ \\ \hline
& Geometric realization & Combinatorial data \\ \hline
Objects & Standard circles & Integers
\\ \hline
1-morphisms & Standard surfaces & Decorated types \\ \hline
2-morphisms & Classes of decorated cobordisms with corners & Special ribbon graphs \\
\hline
\end{tabular}
\end{center}
\subsection{A connection to other work}
We chose to let our extended TQFT take values in $2\-\mathrm{Vect}$.
There are 2-functors 2-$\mathrm{Vect} \to \mathrm{Bimod} \to \mathrm{Cat}$.
Here $\mathrm{Bimod}$ is a 2-category whose objects are $K$-algebras, where $K$ is the ground ring, whose 1-morphisms are bimodules, and whose 2-morphisms are equivalence classes of $K$-homomorphisms.
The 2-category $\mathrm{Cat}$ consists of categories for objects, functors for 1-morphisms, and natural transformations for 2-morphisms.
Thus our extended TQFT can also take values in $\mathrm{Bimod}$ or $\mathrm{Cat}$, and this makes a connection with the work of \cite{DSS}.
The 2-functors can be constructed as follows.
First let us define the 2-functor 2-$\mathrm{Vect} \to \mathrm{Bimod}$.
On object level, to each object $n \in \Z$ of 2-$\mathrm{Vect}$ we assign $K^n$.
On 1-morphism level, to each $m\times n$ 2-matrix $(V_{ij})_{ij}$ we assign the $(K^m, K^n)$-bimodule $\bigoplus_{i,j}V_{ij}$, where the bimodule structure is induced by multiplying $(V_{ij})_{ij}$ by a $1 \times m$ matrix from the left and by an $n \times 1$ matrix from the right. On 2-morphism level, if $(T_{ij})$ is a 2-morphism from $(V_{ij})$ to $(W_{ij})$, we assign $\bigoplus_{i,j} T_{ij}: \bigoplus_{i,j}V_{ij} \to \bigoplus_{i,j}W_{ij}$.
These assignments are easily seen to be a 2-functor.
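Under this assignment, composition of 2-matrices, $(V\circ W)_{ik}=\bigoplus_j V_{ij}\otimes W_{jk}$, corresponds to tensoring the associated bimodules over $K^n$; on dimension matrices this is ordinary matrix multiplication (with $\dim\oplus=$ sum and $\dim\otimes=$ product). The following toy sketch records only this dimension-level shadow; the helper names are hypothetical and not part of the paper.

```python
def compose_2matrices(dimV, dimW):
    """Dimension-level shadow of 2-matrix composition:
    (V o W)_{ik} = (+)_j V_{ij} (x) W_{jk}, so dimension matrices
    multiply like ordinary matrices (dim of a sum = sum of dims,
    dim of a tensor product = product of dims)."""
    m, n, p = len(dimV), len(dimW), len(dimW[0])
    assert all(len(row) == n for row in dimV)  # shapes must be composable
    return [[sum(dimV[i][j] * dimW[j][k] for j in range(n)) for k in range(p)]
            for i in range(m)]

def bimodule_dim(dimV):
    """Total dimension of the (K^m, K^n)-bimodule assigned to (V_{ij})."""
    return sum(sum(row) for row in dimV)
```

For instance, composing a $2\times 2$ 2-matrix with entry dimensions $\begin{smallmatrix}1&2\\0&1\end{smallmatrix}$ with one of dimensions $\begin{smallmatrix}1&0\\3&1\end{smallmatrix}$ yields entry dimensions $\begin{smallmatrix}7&2\\3&1\end{smallmatrix}$, exactly as for ordinary matrices.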
The 2-functor from $\mathrm{Bimod}$ to $\mathrm{Cat}$ assigns, on the object level, to each $K$-algebra $A$ the category of right $A$-modules and $A$-homomorphisms. On the 1-morphism level, the assignment is induced by tensoring with a bimodule from the right. On the 2-morphism level, natural transformations are induced by homomorphisms of bimodules.
Composing this 2-functor with the extended TQFT $\mathcal{X}: \mathrm{Co} \to 2\mbox{-}\mathrm{Vect}$, we have a 2-functor from the 2-category of cobordisms with corners to the 2-category $\mathrm{Cat}$.
\section{Appendix: Bicategories}\label{sec:appendix:bicategory}
Here we review the definition of bicategories.
The following definitions of a bicategory and a pseudo 2-functor are excerpted from the paper \cite{MR0220789}.
\subsection{Bicategories}
A \textit{bicategory} $\underbar{S}$ is determined by the following data:
\begin{enumerate}
\item A set $\underbar{S}_0 =\mathrm{Ob}(\underbar{S})$ called the set of objects of $\underbar{S}$.
\item For each pair $(A, B)$ of objects, a category $\underbar{S}(A, B)$.
An object $S$ of $\underbar{S}(A,B)$ is called a \textit{morphism} of $\underbar{S}$, and written $A \xrightarrow{S} B$; the composition sign $\circ$ of maps in $\underbar{S}(A,B)$ will usually be omitted.
A map $s$ from $S$ to $S'$ will be called a \textit{2-morphism} and written $s: S \Rightarrow S'$, or better
will be represented by:
\begin{center}
\begin{tikzpicture}
\node (A) at (-1,0) {$A$};
\node (B) at (1,0) {$B$};
\node at (0,0) {\rotatebox{270}{$\Rightarrow$}};
\path[->,font=\scriptsize,>=angle 90] node[right]{$s$}
(A) edge [bend left] node[above] {$S$} (B)
edge [bend right] node[below] {$S'$} (B);
\end{tikzpicture}
\end{center}
\item For each triple $(A, B, C)$ of objects of $\underbar{S}$, a \textit{composition functor}:
\[c(A, B,C): \underbar{S}(A,B) \times \underbar{S}(B,C) \to \underbar{S}(A,C).\]
We write $S\circ T$ and $s\circ t$ instead of $c(A,B,C)(S,T)$ and $c(A,B,C)(s,t)$ for $(S,T)$ and $(s,t)$ objects and maps of $\underbar{S}(A,B) \times \underbar{S}(B,C)$, and abbreviate $\mathrm{id}_{S} \circ t$ and $s\circ \mathrm{id}_{T}$ into $S\circ t$ and $s\circ T$.
This composition corresponds to the pasting of 2-morphisms.
\item For each object $A$ of $\underbar{S}$ an object $I_A$ of $\underbar{S}(A,A)$ called an \textit{identity morphism} of $A$. The identity map of $I_A$ in $\underbar{S}(A,A)$ is denoted $i_A:I_A \Rightarrow I_A$ and called an \textit{identity 2-morphism} of $A$.
\item For each quadruple $(A,B,C,D)$ of objects of $\underbar{S}$, a natural isomorphism $a(A, B, C, D)$, called an \textit{associativity isomorphism}, between the two composite functors bounding the diagram:
\begin{center}
\begin{tikzcd}[column sep=3cm]
\underbar{S}(A, B) \times \underbar{S}(B, C) \times \underbar{S}(C, D) \arrow{r}{\mathrm{id} \times c(B, C, D)} \arrow{d}[swap]{c(A,B,C) \times \mathrm{id}}
&\underbar{S}(A,B) \times \underbar{S}(B,D) \arrow{d}{c(A,B,D)}\\
\underbar{S}(A,C) \times \underbar{S}(C,D) \arrow[Rightarrow]{ru}{a(A, B, C,D)} \arrow{r}{c(A,C,D)} & \underbar{S}(A,D)
\end{tikzcd}
\end{center}
Explicitly:
\[a(A,B,C,D): c(A,C,D) \circ ( c(A, B, C)\times \mathrm{id}) \to c(A, B,D) \circ (\mathrm{id} \times c(B,C,D)).\]
If $(S,T,U)$ is an object of $\underbar{S}(A, B) \times \underbar{S}(B, C) \times \underbar{S}(C, D)$ the isomorphism
\[a(A, B,C,D)(S,T,U): (S \circ T) \circ U \xrightarrow{\sim} S \circ (T \circ U)\]
in $\underbar{S}(A, D)$ is called the \textit{component} of $a(A,B,C,D)$ at $(S,T,U)$ and is abbreviated into $a(S,T,U)$ or even $a$, except when confusions are possible.
\item For each pair $(A,B)$ of objects of $\underbar{S}$, two natural isomorphisms $l(A,B)$ and $r(A,B)$, called \textit{left} and \textit{right} identities, between the functors bounding the diagrams:
\begin{center}
\begin{tikzcd}[column sep=3cm]
1\times \underbar{S}(A, B) \arrow{r}{I_A \times \mathrm{id}} \arrow{d}[swap]{\mbox{canonical}}
&\underbar{S}(A,A) \times \underbar{S}(A,B) \arrow{d}{c(A,A,B)}\\
\underbar{S}(A,B) \arrow[Rightarrow]{ru}{l(A,B)} \arrow{r}{=} & \underbar{S}(A,B)
\end{tikzcd}
\end{center}
\begin{center}
\begin{tikzcd}[column sep=3cm]
\underbar{S}(A, B)\times 1 \arrow{r}{\mathrm{id} \times I_B} \arrow{d}[swap]{\mbox{canonical}}
&\underbar{S}(A,B) \times \underbar{S}(B,B) \arrow{d}{c(A,B,B)}\\
\underbar{S}(A,B) \arrow[Rightarrow]{ru}{r(A,B)} \arrow{r}{=} & \underbar{S}(A,B)
\end{tikzcd}
\end{center}
If $S$ is an object of $\underbar{S}(A, B)$, the isomorphism, component at $S$ of $l(A,B)$,
\[l(A,B)(S): I_A \circ S \xrightarrow{\sim} S \]
is abbreviated into $l(S)$ or even $l$, and similarly we write:
\[r=r(S)=r(A,B)(S): S \circ I_B \xrightarrow{\sim} S.\]
\end{enumerate}
The families of natural isomorphisms $a(A,B,C,D)$, $l(A,B)$ and $r(A,B)$ are furthermore required to satisfy the following axioms:
\begin{enumerate}
\item[(A.C.)] Associativity coherence: If $(S, T,U,V)$ is an object of
\[\underbar{S}(A,B)\times \underbar{S}(B,C) \times \underbar{S}(C,D)\times \underbar{S}(D,E)\]
the following diagram commutes:
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=3em, row sep=1.5em]
{ $((S\circ T) \circ U)\circ V$ & & $(S\circ (T\circ U)) \circ V$ \\
$(S\circ T) \circ (U \circ V)$ & & $S \circ ((T \circ U) \circ V)$ \\
& $S\circ (T \circ (U \circ V))$ & \\};
\path[->, font=\scriptsize]
(m-1-1) edge node[above] {$a(S,T,U)\circ \mathrm{id}$} (m-1-3);
\path[->, font=\scriptsize]
(m-1-1) edge node[left] {$a(S\circ T, U, V)$} (m-2-1);
\path[->, font=\scriptsize]
(m-1-3) edge node[auto] {$a(S, T\circ U, V)$}
(m-2-3);
\path[->, font=\scriptsize]
(m-2-1) edge node[below left] {$a(S,T, U \circ V)$} (m-3-2);
\path[->, font=\scriptsize]
(m-2-3) edge node[auto] {$\mathrm{id} \circ a(T, U, V)$} (m-3-2);
\end{tikzpicture}
\end{center}
\item[(I. C.)] Identity coherence: If $(S, T)$ is an object of $\underbar{S}(A,B) \times \underbar{S}(B,C)$ the following diagram commutes:
\begin{center}
\begin{tikzpicture}
\matrix (m) [matrix of nodes, column sep=3em, row sep=1.5em]
{ $(S\circ I_B) \circ T$ & & $S\circ (I_B \circ T)$ \\
& $S \circ T$ & \\};
\path[->, font=\scriptsize]
(m-1-1) edge node[above] {$a(S,I_B, T)$} (m-1-3);
\path[->, font=\scriptsize]
(m-1-1) edge node[below left] {$r(S) \circ \mathrm{id}$} (m-2-2);
\path[->, font=\scriptsize]
(m-1-3) edge node[auto] {$\mathrm{id} \circ l(T)$}
(m-2-2);
\end{tikzpicture}
\end{center}
\end{enumerate}
\subsection{Pseudo 2-functors of bicategories}\label{sec:2-functor}
Let $\underbar{S}=( \underbar{S}_0, c, I, a, l, r)$ and $\bar{\underbar{S}}=( \bar{\underbar{S}_0}, \bar{c}, \bar{I}, \bar{a}, \bar{l}, \bar{r})$ be two bicategories.
A \textit{pseudo 2-functor} $\Phi=(F, \phi)$ from $\underbar{S}$ to $\bar{\underbar{S}}$ is determined by the following:
\begin{enumerate}
\item A map $F: \underbar{S}_0 \to \bar{\underbar{S}_0}, A \mapsto FA$.
\item A family of functors
\[ F(A, B): \underbar{S}(A, B) \to \bar{\underbar{S}}(FA, FB), \quad S \mapsto FS, \quad s \mapsto Fs.\]
\item For each object $A$ of $\underbar{S}$, an arrow of $\bar{\underbar{S}}(FA, FA)$ (i.e., a 2-cell of $\bar{\underbar{S}}$)
\[ \phi_A: \bar{I}_{FA} \to F(I_A).\]
\begin{center}
\begin{tikzpicture}
\node (A) at (-1,0) {$FA$};
\node (B) at (1,0) {$FA$};
\node at (0,0) {\rotatebox{270}{$\Rightarrow$}};
\path[->,font=\scriptsize,>=angle 90] node[right] {$\phi_A$}
(A) edge [bend left] node[above] {$\bar{I}_{FA}$} (B)
edge [bend right] node[below] {$F(I_A)$} (B);
\end{tikzpicture}
\end{center}
\item A family of natural transformations:
\[\phi(A, B, C): \bar{c}(FA, FB, FC) \circ (F(A, B) \times F(B, C)) \to F(A, C) \circ c(A, B, C). \]
\begin{center}
\begin{tikzcd}[column sep=3cm]
\underbar{S}(A, C) \arrow[leftarrow]{r}{c(A, B, C)} \arrow{d}{F(A, C)} \arrow[Leftarrow]{rd}{\phi(A, B, C)}
&\underbar{S}(A,B) \times \underbar{S}(B,C) \arrow{d}{F(A, B) \times F(B,C)}\\
\bar{\underbar{S}}(FA,FC) \arrow[leftarrow]{r}{\bar{c}(FA,FB,FC)} & \bar{\underbar{S}}(FA,FB) \times \bar{\underbar{S}}(FB,FC)
\end{tikzcd}
\end{center}
If $(S,T)$ is an object of $\underbar{S}(A,B) \times \underbar{S}(B,C)$ the $(S,T)$-component of $\phi(A,B,C)$
\[FS \circ FT \xrightarrow{\phi(A,B,C)(S,T)} F(S \circ T) \]
shall usually be abbreviated into $\phi(S, T)$ or even $\phi$.
\end{enumerate}
These data are required to satisfy the following coherence axioms:
\begin{enumerate}
\item[(M. 1)] If $(S, T, U)$ is an object of $\underbar{S}(A, B) \times \underbar{S}(B,C) \times \underbar{S}(C,D)$ the following diagram, where indices $A, B, C, D$ have been omitted, is commutative:
\begin{tikzcd}[column sep=large]
(FS \circ FT) \circ FU \arrow{r}{ \phi(S, T) \circ \mathrm{id}} \arrow{d}{ \bar{a}(FS, FT, FU)}
&F(S\circ T) \circ FU \arrow{r}{ \phi(S \circ T, U)} & F((S\circ T) \circ U) \arrow{d}{F(a(S,T,U))} \\
FS\circ (FT \circ FU) \arrow{r}{\mathrm{id} \circ \phi(T, U)} & FS \circ F(T \circ U) \arrow{r}{\phi(S, T \circ U)} & F(S \circ (T \circ U))
\end{tikzcd}
\item[(M. 2)] If $S$ is an object of $\underbar{S}(A,B)$ the following diagrams commute:
\begin{tikzcd}
FS \arrow[leftarrow]{r}{Fr}
& F(S \circ I_B) \\
FS\circ \bar{I}_{FB} \arrow{u}{\bar{r}} \arrow{r}{ \mathrm{id} \circ \phi_B} & FS \circ FI_B \arrow{u}{\phi(S, I_B)}
\end{tikzcd}
\qquad
\begin{tikzcd}
F(I_A \circ S) \arrow{r}{Fl}
& FS \\
FI_A \circ FS \arrow{u}{\phi(I_A, S)}\arrow[leftarrow]{r}{\phi_A \circ \mathrm{id}} & \bar{I}_{FA} \circ FS \arrow{u}{\bar{l}}
\end{tikzcd}
\end{enumerate}
\subsection{Projective Functors and Projective Pseudofunctors}
Let $\mathscr{C}$ and $\mathscr{D}$ be categories.
We introduce the notion of a \textit{projective functor} from $\mathscr{C}$ to $\mathscr{D}$.
This notion deals with the anomaly of the Reshetikhin-Turaev TQFT.
\begin{Definition}\label{def:projective functor}
Assume that the set of morphisms of $\mathscr{D}$ is an $R$-module for some ring $R$.
The following assignment $F$ is called a \textit{projective functor}:
\begin{enumerate}
\item For each object $X\in \mathrm{Obj}(\mathscr{C})$, an object $F(X) \in \mathrm{Obj}(\mathscr{D})$.
\item For a morphism $f: X\to Y$ in $\mathscr{C}$, a morphism $F(f): F(X) \to F(Y)$ in $\mathscr{D}$ satisfying the following conditions:
\begin{enumerate}
\item(Unit) For any object $X$ in $\mathscr{C}$, the identity morphism $\mathrm{id}_X: X \to X$ in $\mathscr{C}$ is mapped to the identity morphism $\mathrm{id}_{F(X)}: F(X) \to F(X)$ in $\mathscr{D}$.
\item(Projectivity) For any two composable morphisms $f$ and $g$ in $\mathscr{C}$, there exists a unique element $k(f,g) \in R$, called the \textit{anomaly}, such that
\begin{equation}
F(f\circ g)=k(f,g) F(f) \circ F(g).
\end{equation}
\item If one of $f$ and $g$ above is the identity morphism, then $k(f, g)$ is the unit element of $R$.
\item (Associativity) For any three composable morphisms $f, g, h$ in $\mathscr{C}$, we have
\begin{equation}
k(f, g\circ h)k(g,h)=k(f \circ g, h)k(f,g).
\end{equation}
\end{enumerate}
\end{enumerate}
\end{Definition}
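As a toy illustration of Definition \ref{def:projective functor} (not taken from the paper), one can view a group as a one-object category and let $F$ be a projective representation. For $\mathbb{Z}/2\times\mathbb{Z}/2$ sent to the Pauli-type matrices $X^aZ^b$, the anomaly is $k((a,b),(c,d))=(-1)^{bc}$, with $R$ the real numbers; the sketch below (all helper names hypothetical) checks projectivity and associativity of the anomaly exhaustively.

```python
import itertools

# Pauli-type generators as 2x2 integer matrices (tuples of tuples)
X = ((0, 1), (1, 0))
Z = ((1, 0), (0, -1))
I2 = ((1, 0), (0, 1))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def scale(c, A):
    return tuple(tuple(c * x for x in row) for row in A)

def F(g):
    """F(a, b) = X^a Z^b for g = (a, b) in Z/2 x Z/2."""
    a, b = g
    M = I2
    if a: M = matmul(M, X)
    if b: M = matmul(M, Z)
    return M

def compose(g, h):
    """Group law of Z/2 x Z/2, playing the role of composition."""
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def k(g, h):
    """Anomaly: Z^b X^c = (-1)^(bc) X^c Z^b."""
    return (-1) ** (g[1] * h[0])

G = list(itertools.product((0, 1), repeat=2))

# Projectivity: F(g o h) = k(g, h) * F(g) F(h)
assert all(F(compose(g, h)) == scale(k(g, h), matmul(F(g), F(h)))
           for g in G for h in G)

# Associativity of the anomaly: k(f, g o h) k(g, h) = k(f o g, h) k(f, g)
assert all(k(f, compose(g, h)) * k(g, h) == k(compose(f, g), h) * k(f, g)
           for f in G for g in G for h in G)
```

The unit condition also holds: $k(g,e)=k(e,h)=1$ since one of the exponents vanishes.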
When every anomaly is the unit, the notion of a projective functor coincides with the usual notion of a functor.
The notion of a natural transformation between functors extends to projective functors provided the anomaly factors of the two functors agree.
We call such a natural transformation between projective functors a \textit{projective natural transformation}.
We then define a \textit{projective pseudo 2-functor} by replacing functors and natural transformations in the definition of a pseudo 2-functor with projective functors and projective natural transformations.
\section{Introduction}\lab{sec-intro}
This note is an addendum to our earlier paper of the same title \cite{BC}.
Our aim here will be to construct invariants for framed
3-dimensional homology spheres $(M,f)$, associated to an acyclic
orthogonal local system $E$ on $M$.
As in our earlier note, we follow the guidelines of the
Axelrod--Singer paper \cite{AS} on the asymptotics of
Chern--Simons theory, and we have again put aside the
physics-inspired aspects of the subject, concentrating
our efforts on the construction of potential configuration-space
integral invariants of $(M,f)$. More precisely we are seeking
invariants that depend on the diffeomorphism type of $M$
and the {\em homotopy class} of the framing $f$.
For simplicity we assume throughout that $M$ is a connected, oriented
3-dim\-en\-sion\-al homology sphere so that---up to
conjugacy---local systems over $M$ are classified by representations
of $\pi_1(M;p)$ where $p$ is some fixed point in $M$.
Our invariants are associated to local systems $E$ which are
induced by an {\em orthogonal} representation $\rho_E$ of $\pi_1(M;p)$
on some ${\mathbb{R}}^m$, and we call such systems orthogonal.
Furthermore, a local system $E$ is called acyclic if
$H^*(M;E)=0$.
With this understood, our principal observation is given by the following
\begin{Thm}
An orthogonal and acyclic local system $E$ over $M$ gives rise
to a purely combinatorial graph cohomology $\mathcal{G}_E$, and if\/
$\Gamma\in\mathcal{G}_E$ is a connected trivalent cocycle of $\mathcal{G}_E$,
then\/ $\Gamma$ determines a numerical invariant $I_\Gamma(M,f)$
of $(M,f)$.
This invariant has the structural form:
\begin{equation}
I_\Gamma(M,f) = A_\Gamma(M) + \phi(\Gamma)\,{\rm CS}(M,f),
\end{equation}
where $A_\Gamma(M)$ denotes a sum of configuration-space integrals
specified by $\Gamma$ and a fixed---but arbitrary---Riemannian
structure $g$ on $M$, $\phi(\Gamma)$ is a number universally
associated to\/ $\Gamma$, and\/ ${\rm CS}(M,f)$ stands for the Chern--Simons
integral of $M$ relative to $f$ and the Levi-Civita connection of $g$.
\lab{thm-1}
\end{Thm}
Combined with the invariants described in \cite{BC}, where we treated
the trivial coefficient system ${\mathbb{R}}$---which is not
acyclic---one is therefore in possession of a large number
of integral invariants of $(M,f)$, and it would be very interesting
to understand their relation to the finite type invariants described
in \cite{FTI} (see also \cite{BGRT})
and whether in their totality they are in any sense exhaustive.
The proof of Theorem \ref{thm-1}, as well as the precise definition
of $\mathcal{G}_E$, will be given in Sections \ref{sec-Theta} and
\ref{sec-hi}, and runs largely along the lines of our earlier paper
\cite{BC}. In fact the acyclicity of $E$ allows for a simplification
of the initial step in our procedure, and we will explain this
phenomenon here and now.
Recall that the compactified configuration space $C_2(M)$ is a manifold
with boundary isomorphic to $M\times M$ with its diagonal,
$\Delta$, blown up. Thus $\partial C_2(M) = S$ is isomorphic
to the unit sphere bundle of the tangent bundle of $M$.
This situation therefore gives rise to the diagram below
consisting of sections of the exact sequences associated
to the pair $(C_2(M),S)$ and its image $(M\times M,\Delta)$
under the natural projection $\pi$ of $C_2(M)$ to $M\times M$.
{\tiny
\begin{equation}
\begin{CD}
@>>> H^2(C_2(M)) @>>> H^2(S) @>{\delta}>> H^3(C_2(M),S)
@>>> H^3(C_2(M)) @>>> \\
@. @A{\pi^*}AA @A{\pi^*}AA @A{\pi^*}A{\thickapprox}A @A{\pi^*}AA \\
@>>> H^2(M\times M) @>>> H^2(\Delta) @>{\delta}>> H^3(M\times M,\Delta)
@>>> H^3(M\times M) @>>>
\end{CD}
\lab{seq}
\end{equation} }
The vertical isomorphism in the third column of \eqref{seq}
follows from excision near the blow-up of $\Delta$.
Note also that this diagram is acted upon by the involution $T$
which exchanges the factors in $M\times M$, and each of the sequences
therefore splits canonically into a $+$ and $-$ part
corresponding to the $\pm1$ eigenvalues of $T$.
In the bottom row $H^*(\Delta)$ is clearly invariant under $T$
so that the antisymmetric part of \eqref{seq} reduces to
\begin{equation}
\begin{CD}
H^2_-(C_2(M)) @>>> H^2_-(S) @>{\delta}>> H^3_-(C_2(M),S)\\
@. @. @A{\pi^*}A{\thickapprox}A \\
@. 0 @>>> H^3_-(M\times M,\Delta)
@>{\thickapprox}>> H^3_-(M\times M)
\end{CD}
\lab{redseq}
\end{equation}
Now in \cite{BC} we showed that the form $\eta$ given by half
the Euler form of the tangent bundle along the fiber in the
fibering $S\to\Delta$ generates $H^2(S;{\mathbb{R}})$
as a module over $H^*(\Delta;{\mathbb{R}})$ and that this $\eta$
is antisymmetric: $T^*\eta=-\eta$. In short $[\eta]$
generates $H^2_-(S;{\mathbb{R}})$.
So far our discussion involved the constant coefficient system ${\mathbb{R}}$.
But the sequences \eqref{seq} and \eqref{redseq} as well as the
action of $T$ remain valid for a general local system $E$ on $M$,
provided we use the local system
$F = \pi_1^{-1}E\otimes\pi_2^{-1}E$
on $M\times M$, and $\pi^{-1}F$ on $C_2(M)$.
This understood, assume now that $E$ is orthogonal and acyclic.
The orthogonality gives rise to an arrow
\begin{equation}
I:{\mathbb{R}}\to E\otimes E,
\end{equation}
defined by sending 1 to $\sum_i e_i\otimes e_i$, where
$\{e_i\}$ is any orthonormal frame in $E$. We may therefore
also apply $I$ to $\eta$ to obtain a {\em closed}
form $I(\eta)\in\Omega^2(S;E\otimes E)$.
Next observe that the K\"unneth formula implies that
\[
H^*(M\times M;\pi_1^{-1}E\otimes\pi_2^{-1}E)=
H^*(M;E)\otimes H^*(M;E).
\]
Hence, under our acyclicity assumption, all the terms on the right
of $\delta$ in \eqref{redseq} vanish! It follows immediately
that the class of $I(\eta)\in\Omega^2(S;E\otimes E)$
is in the image of a class $[\hat\eta]\in H^2(C_2(M);\pi^{-1}F)$.
Actually we need the following slight refinement of this assertion:
\begin{Lem}
Under the orthogonality and acyclicity assumptions
there exists a form $\hat\eta\in\Omega^2(C_2(M);\pi^{-1}F)$
with the following properties:
\begin{enumerate}
\item The restriction of $\hat\eta$ to $S$ is $I(\eta)$:
$i_\partial^*\hat\eta=I(\eta)$.
\item $\hat\eta$ is closed under $d_{\pi^{-1}F}$: $d_{\pi^{-1}F}\hat\eta=0$.
\item $\hat\eta$ is antisymmetric: $T^*\hat\eta=-\hat\eta$.
\end{enumerate}
\lab{lem-heta}
\end{Lem}
The construction of $\hat\eta$ proceeds precisely along the guidelines
given in \cite{BC}.
Let $U$ be a tubular neighborhood of $\Delta$ in $M\times M$,
and let $p:U\to\Delta$ be a projection which
fibers $U$ over $\Delta$
into discs on which $T$ acts linearly as the antipodal map,
and such that $\partial U$ can be identified with $S$.
Then $\widetilde{U}=\pi^{-1}U$ has the structure of $S\times[0,1]$
and hence fibers over $S$ with the unit interval as fiber.
We write $\sigma:\widetilde{U}\to S$ for the projection onto $S$
in this fibering, and note that $T$ acts on $S\times[0,1]$
by the antipodal map on $S$ crossed with the identity on $[0,1]$.
Now choose a smooth function $\chi$ on $[0,1]$ which is
identically $+1$ near 0 and identically 0 near $+1$, and write
$\chi$ also for its pullback to $\widetilde{U}$.
It follows that the form
\[
\widetilde{\eta} = \sigma^*I(\eta)\,\chi
\]
on $\widetilde{U}$ extends by $0$ to a form on all of $C_2(M)$ with values
in $\pi^{-1}F$. It is also clear that $\widetilde{\eta}$ restricts to $I(\eta)$
on $S$, so that $d_{\pi^{-1}F}\widetilde{\eta}$ represents $\delta(I(\eta))$
in the upper sequence. On the other hand $d_{\pi^{-1}F}\widetilde{\eta}$ vanishes
identically near $S$ and so may be considered an antisymmetric
form on $M\times M$. But then, by the acyclicity assumption,
there must exist an antisymmetric form $\alpha$ on $M\times M$
such that
\[
d_F\alpha = d_{\pi^{-1}F}\widetilde{\eta}.
\]
Now $\hat\eta=\widetilde{\eta}-\pi^*\alpha$ has all the desired properties.
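For the reader's convenience, here is the routine verification of the three properties (our spelling-out; the key point for (1) and (3) is that $T$ acts trivially on $\Delta$, so the antisymmetric form $\alpha$ restricts to zero there):

```latex
% (1) Restriction to S: \chi \equiv 1 near S, and \alpha|_\Delta = 0 since
%     T|_\Delta = \mathrm{id} while T^*\alpha = -\alpha; hence
\[
i_\partial^*\hat\eta = i_\partial^*\widetilde{\eta} - i_\partial^*\pi^*\alpha
 = I(\eta) - \pi^*(\alpha|_\Delta) = I(\eta).
\]
% (2) Closedness, by the defining equation d_F\alpha = d_{\pi^{-1}F}\widetilde{\eta}:
\[
d_{\pi^{-1}F}\hat\eta = d_{\pi^{-1}F}\widetilde{\eta} - \pi^*(d_F\alpha) = 0.
\]
% (3) Antisymmetry: \sigma commutes with T, \chi is T-invariant, and
%     T^*I(\eta) = -I(\eta); hence
\[
T^*\hat\eta = -\widetilde{\eta} + \pi^*\alpha = -\hat\eta.
\]
```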
\section{The $\Theta$-invariant} \lab{sec-Theta}
Using the closed form $\hat\eta$ defined in the previous section,
see Lemma \ref{lem-heta}, we can define an invariant
for the framed 3-dimensional homology sphere $(M,f)$.
First we notice that $\hat\eta^3$ is a 6-form on the 6-dimensional
space $C_2(M)$ which takes values in
$\pi_1^{-1}E^{\otimes3}\otimes\pi_2^{-1}E^{\otimes3}$.
If we associate
to each vertex $i$ ($i=1,2$) a homomorphism
\begin{equation}
\rho_i:{\mathbb{R}}\to E\otimes E\otimes E
\end{equation}
which is equivariant as a module over $\pi_1(M)$,
then we obtain the {\em closed} real-valued
6-form $\braket{\rho_1\rho_2}{\hat\eta^3}$.
Here $\braket{\cdot}{\cdot}$ denotes the scalar product
over $E$ and its extensions to $E^{\otimes3}$ and to
$\pi_1^{-1}E^{\otimes3}\otimes\pi_2^{-1}E^{\otimes3}$.
\begin{Rem}
The existence of such homomorphisms depends on $E$. In some
cases, the only possible choice will be the trivial one: $\rho=0$.
If the vector
space spanned by these homomorphisms has dimension greater than one,
then one can choose $\rho_1$ and $\rho_2$ to be linearly independent.
\end{Rem}
\begin{Exa}
A particular case, considered in \cite{AS}, occurs when
$E$ is the adjoint representation of a compact Lie group $G$.
Then a natural
choice for $\rho$ is obtained by using the structure constants
$f_{abc}$ relative to a left- and right-invariant inner product
on the Lie algebra of $G$; namely,
\[
\rho(x) = x\,\sum_{abc} f_{abc}\;e_a\otimes e_b\otimes e_c.
\]
The equivariance under the full group $G$ ensures the equivariance
under the action of $\pi_1(M)$. Notice that the antisymmetry of
the structure constants implies that this homomorphism is completely
antisymmetric.
Note that if $E$ denotes a representation of such a Lie group $G$,
the equivariant homomorphisms are dual
to the projections to the trivial representations
in $E\otimes E\otimes E$. Again,
the equivariance under the full group $G$ ensures the equivariance
under the action of $\pi_1(M)$.
\lab{exa-adj}
\end{Exa}
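For $G=SO(3)$ the structure constants are the Levi-Civita symbol $\epsilon_{abc}$, and both the complete antisymmetry and the equivariance claimed above can be checked numerically. The following sketch is an illustration only (the function name is ours); equivariance is tested against a sample rotation, using the identity $\sum_{a,b,c}\epsilon_{abc}R_{a'a}R_{b'b}R_{c'c}=\det(R)\,\epsilon_{a'b'c'}$.

```python
import math

def eps(a, b, c):
    """Levi-Civita symbol on indices {0, 1, 2} (structure constants of so(3))."""
    return (a - b) * (b - c) * (c - a) // 2

idx = range(3)

# complete antisymmetry: swapping any two indices flips the sign
assert all(eps(a, b, c) == -eps(b, a, c) == -eps(a, c, b)
           for a in idx for b in idx for c in idx)

# equivariance under a sample rotation R in SO(3) (about the z-axis):
# sum_{a,b,c} eps_{abc} R_{a'a} R_{b'b} R_{c'c} = eps_{a'b'c'}
t = 0.7
R = [[math.cos(t), -math.sin(t), 0.0],
     [math.sin(t),  math.cos(t), 0.0],
     [0.0,          0.0,         1.0]]
for ap in idx:
    for bp in idx:
        for cp in idx:
            s = sum(eps(a, b, c) * R[ap][a] * R[bp][b] * R[cp][c]
                    for a in idx for b in idx for c in idx)
            assert abs(s - eps(ap, bp, cp)) < 1e-12
```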
\begin{Exa}
If $E_j$ denotes the irreducible representation of spin $j\in{\mathbb{Z}}/2$ of
$SU(2)$, then the Clebsch--Gordan formula,
\[
E_j\otimes E_k = \bigoplus_{l=|j-k|}^{j+k} E_l,
\]
implies that
\[
E_j\otimes E_j\otimes E_j =
\bigoplus_{k=j-\lfloor j\rfloor}^j(2k+1)E_k \oplus
\bigoplus_{k=j+1}^{3j}(3j-k+1)E_k.
\]
So $E_j^{\otimes3}$ contains no trivial representations
if $j$ is a half-integer and one trivial representation if
$j$ is an integer. In the case $j=1$ we recover the choice of
example \ref{exa-adj}. Notice that this projection is
obtained by selecting the representation of spin $j$ in
$E_j\otimes E_j$, then by tensoring by the last copy of $E_j$,
and finally by projecting on the trivial representation. Therefore,
all these projections (and the corresponding homomorphisms) are
completely antisymmetric.
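For instance, for $j=1$ a direct application of the Clebsch--Gordan
formula gives
\[
E_1\otimes E_1\otimes E_1=(E_0\oplus E_1\oplus E_2)\otimes E_1
= E_0\oplus 3E_1\oplus 2E_2\oplus E_3,
\]
with exactly one trivial summand, in agreement with the count above.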
\end{Exa}
\begin{Exa}
With the notations of the previous example, consider
$E=E_{1/2}\oplus E_1$. It turns out that $E^{\otimes3}$
contains three copies of the trivial representation:
the first is obtained by choosing the trivial representation
in $E_1^{\otimes3}$; the second by choosing
the trivial representation in
$E_{1/2}\otimes E_{1/2}\otimes E_1$, and the other two by
cyclic rotations of the second. Notice that the second
projection is obtained by selecting the representation of
spin 1 in $E_{1/2}\otimes E_{1/2}$, then by tensoring by
$E_1$, and finally by projecting on the trivial representation.
Therefore, this projection is symmetric with respect
to the exchange of the spin-$(1/2)$ components.
\lab{exa-121}
\end{Exa}
Integrating the closed form we have obtained
over $C_2(M)$ yields the number
\begin{equation}
A_{(\Theta,\rho_1,\rho_2)} \doteq \int_{C_2(M)}
\braket{\rho_1\rho_2}{\hat\eta^3},
\end{equation}
which is our first potential invariant. We recall, see \cite{BC},
that the definition of $\eta$ relies on the choice of a metric
on $M$ and of a compatible connection;
moreover, the construction of $\hat\eta$ requires the choice of a function
$\chi$ and of a 2-form $\alpha$
as explained after Lemma \ref{lem-heta}. An invariant must
be independent of all these choices. Actually, we have
the following
\begin{Thm}
Given a section $f$ of the orthonormal frame bundle, the
combination
\begin{equation}
I_{(\Theta,\rho_1,\rho_2)}(M,f) =
A_{(\Theta,\rho_1,\rho_2)}(M) -
\frac{\braket{\rho_1}{\rho_2}}4\, {\rm CS}(M,f),
\lab{ITheta}
\end{equation}
is independent of all the choices involved (except for the framing).
Here
\begin{multline}
{\rm CS}(M,f)=-\frac1{8\pi^2}\int_Mf^*\operatorname{Tr}\left({
\theta\,d\theta +\frac23\,\theta^3}\right)=\\
\frac1{4\pi^2}\int_M f^*\left({
\theta^id\theta_i-\frac13\,\epsilon_{ijk}\theta^i\theta^j\theta^k
}\right),
\end{multline}
is the Chern--Simons integral of the same metric
connection used to define $\eta$.
Thus, $I_{(\Theta,\rho_1,\rho_2)}(M,f)$
is an invariant for the framed rational homology
sphere $(M,f)$.
\lab{thm-Theta}
\end{Thm}
\begin{Rem}
In the case discussed in example \ref{exa-adj}, we have
\[
\braket{\rho_1}{\rho_2} = \sum_{abc} f_{abc}f_{abc} =
-c_v\,\dim G,
\]
where $c_v$ is the Casimir of the adjoint representation of $G$.
\end{Rem}
\begin{proof}
As in \cite{BC}, we introduce the unit interval $I$ as
a parameter space, and recall that, as shown there,
letting $\theta$ vary on $I$ corresponds to defining
on $S\times I$ a form---which we still denote by
$\eta$---given by half the Euler form of the tangent bundle
along the fiber in the fibering $S\times I\to\Delta\times I$.
Then all the arguments contained in section \ref{sec-intro} are
still true if we multiply by $I$ each space involved (say, $M$,
$M\times M$, $\Delta$, $C_2(M)$ and $S$),
since $H^n(I)=\delta_{n0}\,{\mathbb{R}}$.
In particular, we have a form---which we keep denoting by
$\hat\eta$---which satisfies the properties of Lemma \ref{lem-heta}
with $C_2(M)$ replaced by $C_2(M)\times I$. (To be precise,
by $\pi$ now we mean the projection
$C_2(M)\times I\to M\times M\times I$.)
If we denote by $\sigma$ the projection $C_2(M)\times I\to I$,
then
\[
A_{(\Theta,\rho_1,\rho_2),\tau} =
\sigma_*\braket{\rho_1\rho_2}{\hat\eta^3}
\]
is a function depending on the parameter $\tau\in I$, in whose
variations we are interested.
To do so, we recall that, given two
spaces $M_1$ and $M_2$ and projections $\pi_i:M_1\times M_2\to M_i$,
Stokes' theorem can be rewritten as
\begin{equation}
d\pi_{i*}\omega = \pi_{i*}d\omega -
(-1)^{\deg\pi_{i*}^\partial\omega}\,\pi_{i*}^\partial\omega,
\lab{dpi*}
\end{equation}
where $\pi_{1*}^\partial$ denotes integration along the boundary of
$M_2$ and vice versa. (Notice that the signs in \eqref{dpi*}
are correct if integration acts from the right.)
Since $\braket{\rho_1\rho_2}{\hat\eta^3}$ is a closed form, we simply
have
\[
dA_{(\Theta,\rho_1,\rho_2),\tau} =
\sigma_*^{\partial}\braket{\rho_1\rho_2}{\hat\eta^3} =
\braket{\rho_1}{\rho_2}\sigma_*^{\partial}\eta^3,
\]
the last identity following from property 1 in Lemma \ref{lem-heta}.
Now we recall that in \cite{BC} (see Lemma 3.15 there)
we proved that
\[
\pi_*^{\partial}\eta^3 = \frac14\, p_1,
\]
where $\pi^\partial$
is the projection $S\times I\to\Delta\times I$, and
$p_1$ is the first Pontrjagin form on
$\Delta\times I=M\times I$.
Denoting by $\sigma_M$ the projection
$M\times I\to I$, we finally get
\[
dA_{(\Theta,\rho_1,\rho_2),\tau} =
\frac{\braket{\rho_1}{\rho_2}}4\,\sigma_{M*} p_1,
\]
from which the theorem follows.
\end{proof}
\section{The higher invariants}\lab{sec-hi}
Using the natural projections $\pi_{ij}:C_n(M)\to C_2(M)$
we can pull back the form $\hat\eta$ defined in section \ref{sec-intro}.
We will write
\[
\hat\eta_{ij} = \pi_{ij}^*\hat\eta,
\]
and by property 3 of Lemma \ref{lem-heta} we have
\[
\hat\eta_{ij} = -\hat\eta_{ji}.
\]
These forms on $C_n(M)$ allow for
writing other invariants of the 3-dimensional
homology sphere $M$ associated to cocycles in an appropriate graph
cohomology (depending on the bundle $E$).
\begin{Def}
We call a decorated graph a graph with oriented and numbered edges and
numbered vertices (by convention we start the
enumeration by 1). We require edges always to connect distinct
vertices. If two vertices are connected by exactly one edge,
we call that edge {\em regular}.
The edge numbering induces a numbering of the $v_i$
half-edges at each vertex $i$,
corresponding to which we attach a homomorphism
\[
\rho_i : {\mathbb{R}}\to E^{\otimes v_i}
\]
which is equivariant as a module over $\pi_1(M)$.
Denoting by $V$ the number of vertices and
by $E$ the number of edges, we grade the collection of
decorated graphs by
\begin{equation}
\begin{split}
\operatorname{ord}\Gamma &= E-V,\\
\deg\Gamma &= 2E -3V.
\end{split}
\lab{ord}
\end{equation}
\end{Def}
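For instance, the graph $\Theta$ underlying the invariant
$A_{(\Theta,\rho_1,\rho_2)}$ (two vertices joined by three edges) has
$V=2$ and $E=3$, so that $\operatorname{ord}\Theta=1$ and
$\deg\Theta=0$; note that none of its edges is regular.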
\begin{Rem}
Compared with the decorated graphs we introduced in \cite{BC},
this definition adds two further decorations: the numbering
of the edges and the equivariant homomorphisms attached to the vertices.
\end{Rem}
\begin{Rem}
A trivalent diagram has degree zero, and its order is
given by $m=V/2=E/3$.
We thank S.~Garoufalidis for pointing out that our choice
of the words ``order'' and ``degree'' is a bit unfortunate, for
people working with finite type invariants
call $m$ the degree (instead of the order)
of a trivalent graph.
However, we prefer to stick to our old notation \cite{BC} since
the term degree is consistent with the cohomology defined
by the coboundary operator $\delta$ (see Proposition
\ref{prop-delta}).
\end{Rem}
Denoting by $v(\Gamma)$ the set of vertices and by $e(\Gamma)$ the
ordered set of oriented edges in $\Gamma$, we can associate to
the 3-dimensional homology sphere $M$ and to the {\em trivalent}
decorated graph $\Gamma$ the number
\begin{equation}
A_\Gamma(M) \doteq \int_{C_n(M)}
\braket{\prod_{i\in v(\Gamma)}\rho_i}
{\prod_{(ij)\in e(\Gamma)}\hat\eta_{ij}},
\lab{defAGamma}
\end{equation}
where $n=2\operatorname{ord}\Gamma$ is the number of vertices and $(ij)$ denotes
the edge connecting the vertex $i$ to the vertex $j$.
Next we give the collection of
decorated graphs the structure of an algebra over
${\mathbb{Q}}$ (the product simply being the disjoint union
of graphs). We will denote this algebra by
$\mathcal{G}_E^0$ and will extend \eqref{defAGamma} by linearity.
In view of the definition of $A_\Gamma(M)$, we introduce
the following equivalence relation on $\mathcal{G}_E^0$:
if two decorated graphs $\Gamma$ and $\Gamma'$
differ only by a permutation of order $p$ in
the vertex numbering and by $l$ edge-orientation reversals,
we set
\begin{equation}
\Gamma=(-1)^{(p+l)}\,\Gamma'.
\lab{eqrel}
\end{equation}
Notice that to equivalent
graphs we associate the same number $A_\Gamma(M)$. We will denote
by $\mathcal{G}_E$ the algebra of graphs modulo the above equivalence
relation.
Then we introduce an operator $\delta$ on $\mathcal{G}_E^0$
that acts on a graph $\Gamma$ by contracting its regular edges, one at a time,
followed by a consistent renumbering of edges and vertices.
To the contraction of the regular edge connecting the vertex $i$ to
the vertex $j$ we associate a sign $\sigma(i,j)$
defined by
\begin{equation}
\sigma(i,j) = \begin{cases}
(-1)^j &\text{if $j>i$,}\\
(-1)^{i+1} &\text{if $j<i$}.
\end{cases}
\lab{sigmaij}
\end{equation}
Assuming that this edge corresponds to the $k$th of the $v_i$ half-edges
at $i$ and to the $l$th of the $v_j$ half-edges at $j$, we attach
to the vertex obtained after contraction the equivariant homomorphism
\[
\widetilde{\rho}_i:{\mathbb{R}}\to E^{\otimes(v_i+v_j-2)}
\]
defined by
\[
\widetilde{\rho}_i = m_{k,v_i+l}(\rho_i\otimes\rho_j),
\]
where $m_{rs}$ denotes the scalar product between the $r$th and
the $s$th terms in the tensor product.
Notice that the homomorphism attached to a vertex after contracting
two different (regular) edges starting from it does not depend on
the order of the contractions. Therefore, the argument we gave
in \cite{BC} is enough to prove the following
\begin{Prop}
The operator $\delta$ descends to $\mathcal{G}_E$ and satisfies
$\delta^2=0$ there. Moreover, if we denote by $\GEi nt$
the (equivalence classes of) decorated graphs of order $n$ and
degree $t$, we have
\[
\delta : \GEi nt \to \GEi n{t+1}.
\]
\lab{prop-delta}
\end{Prop}
We call a cocycle an element of the kernel of $\delta$ in
$\mathcal{G}_E$. Notice that the action of $\delta$ can be
restricted to the algebra of (equivalence classes of) decorated
connected graphs. Now we are in a position to prove Theorem
\ref{thm-1}.
\begin{proof}[Proof of Theorem \ref{thm-1}]
As in the proof of Theorem \ref{thm-Theta} we introduce
the unit interval $I$ as a parameter space, and define
$\hat\eta$ as a form on $C_n(M)\times I$. Denoting by $\sigma$
the projection $C_n(M)\times I\to C_n(M)$, we define
\[
A_{\Gamma,\tau}(M) = \sigma_*
\braket{\prod_{i\in v(\Gamma)}\rho_i}
{\prod_{(ij)\in e(\Gamma)}\hat\eta_{ij}},
\]
and consider its change as $\tau$ varies on $I$.
Since the integrand form is closed, equation \eqref{dpi*}
implies that $dA_{\Gamma,\tau}(M)$ is given by boundary
contributions only.
But on the boundary $\hat\eta$ reduces to $I(\eta)$---see Lemma
\ref{lem-heta}---so we can use essentially the same arguments
as in the trivial coefficient case \cite{BC}. Therefore, we will only
give a brief sketch of the proof here and refer to \cite{BC}
for further details.
First recall that the face
in $\partial C_n(M)$ corresponding to the collapse of
$q$ points has the structure of a fibering over $C_{n-q+1}$,
with the $(3q-4)$-dimensional fiber isomorphic to
$C_q({\mathbb{R}})$ modulo translations and scalings.
If we denote by $e$ the number of edges connecting two collapsing
vertices, we see that the vertical
form-degree is given by $2e$. Moreover, since we are considering
trivalent graphs, we have the relation $2e+e_0=3q$, where
$e_0$ denotes the number of edges connecting a collapsing
vertex with a non collapsing one. Therefore, the push-forward
along the fiber of the form associated to the edges connecting
collapsing vertices yields a form of degree $4-e_0$ if
$e_0\leq4$, and zero otherwise.
By a theorem due to Axelrod and Singer \cite{AS}---which we
have recast in \cite{BC} in a form suitable to our
construction---this form must be the pullback of a
multiple of a characteristic form on $M\times I$, namely, the
constant function or the first Pontrjagin form $p_1$.
The former case corresponds to $e_0=4$, and it can be shown
that the integral does not vanish only when exactly two
vertices collapse.
These boundary terms are then taken care of by the requirement
that $\Gamma$ be a cocycle.
The latter case corresponds to $e_0=0$, that is, to the case
when all points collapse since we assume the diagrams to be connected.
These terms are then taken care of by the correction
$\phi(\Gamma)\,{\rm CS}$.
\end{proof}
\begin{Rem}
It is clear from the above proof that $\phi(\Gamma)$ is linear.
Moreover, if $\Gamma$ is a decorated
graph, we can write
\[
\phi(\Gamma) = \rho(\Gamma)\,\phi_0(\Gamma),
\]
where $\rho(\Gamma)$ is a purely algebraic factor obtained
from the homomorphisms $\rho_i$
by associating a scalar product to each edge in $\Gamma$,
while
$\phi_0(\Gamma)$ is given by the boundary integral involving
the forms $\eta$, and so it is the same as in the trivial coefficient
case.
If $\operatorname{ord}\Gamma$ is even, there exists an orientation reversing
involution under which the integrand form turns out to be odd
(see \cite{AS} or \cite{BC}). Therefore, in this case,
$\phi_0(\Gamma)$ vanishes, and so does $\phi(\Gamma)$.
\end{Rem}
{}From the above remark and from
Theorems \ref{thm-1} and \ref{thm-Theta},
we get the following
\begin{Cor}
If\/ $\Gamma$ is a connected trivalent cocycle of $\mathcal{G}_E$,
and $\rho_1$ and $\rho_2$ are equivariant homomorphisms such that
$\braket{\rho_1}{\rho_2}\not=0$,
then the quantity
\[
J_{\Gamma;\rho_1,\rho_2}(M)
= A_\Gamma(M) + \frac4{\braket{\rho_1}{\rho_2}}
\,\phi(\Gamma)\, A_{(\Theta,\rho_1,\rho_2)}(M)
\]
is an invariant for the rational homology 3-sphere $M$.
Moreover, if\/
$\operatorname{ord}\Gamma$ is even, then $\phi(\Gamma)=0$.
\end{Cor}
\section{Discussion}\lab{sec-disc}
The graph cohomology introduced in the previous section
is in principle more general than those introduced in
\cite{AS} and \cite{BC} and might give rise to more general
invariants.
In the case when $E$ is the adjoint bundle of a Lie group
$G$, we can choose all the equivariant homomorphisms
associated to a trivalent graph to be
determined by the structure constants, as explained in example
\ref{exa-adj}. The cocycles in this case are those studied in
\cite{AS} and come naturally from perturbative Chern--Simons
theory. Antisymmetry of the structure constants implies
that it is enough to give a {\em cyclic order} of the three half-edges
at each vertex. The Jacobi identity then implies that the cocycles
satisfy the so-called IHX relation (see \cite{B-N}).
It is a non-trivial fact (and we thank S.~Garoufalidis for pointing this out)
that these cocycles are in one-to-one correspondence with
the cocycles of the trivial coefficient case.
If $E_{1/2}\oplus E_1$ (where $E_{1/2}$ and $E_1$
denote the representations of $SU(2)$ with spins $1/2$ and $1$)
is an orthogonal and acyclic local system, we can choose each
homomorphism as the dual of the second projection
considered in example \ref{exa-121}.
In this case we can think of the diagram as carrying
spin $1/2$ over two of the three half-edges at each vertex
and spin 1 over the last half-edge.
Since each of these
homomorphisms is {\em symmetric} with respect to the
exchange of the two spin-$1/2$ representations, the diagram
is symmetric under the exchange of the corresponding
half-edges.
It would be interesting to see if this or more general choices of
the bundle $E$ and of the equivariant homomorphisms give rise
to new inequivalent cocycles.
\section*{Acknowledgements}
We again thankfully acknowledge helpful conversations with
Scott Axelrod, Robin Forman and Cliff Taubes.
We are especially thankful to Stavros Garoufalidis
for pointing out the isomorphisms in various graph
cohomologies mentioned in section \ref{sec-disc}.
One of these isomorphisms was also explained earlier
by Dylan Thurston in his senior thesis at Harvard.
\begin{thebibliography}{9}
\bibitem{AS} S.~Axelrod and I.~M.~Singer, ``Chern--Simons Perturbation
Theory,'' in {\em Proceedings of the XXth DGM Conference}, edited by
S.~Catto and A.~Rocha (World Scientific, Singapore, 1992),
pp.\ 3--45; ``Chern--Simons Perturbation
Theory.\ II,'' \jdg{39} (1994), 173--213.
\bibitem{B-N} D.~Bar-Natan, ``On the Vassiliev Knot Invariants,''
Topology {\bf 34} (1995), 423--472.
\bibitem{BGRT} D.~Bar-Natan, S.~Garoufalidis, L.~Rozansky and
D.~P.~Thurston, ``The \AA{}rhus Invariant of Rational Homology
3-Spheres: A Highly Non Trivial Flat Connection on $S^3$,''
q-alg/9706004.
\bibitem{BC} R.~Bott and A.~S.~Cattaneo,
``Integral Invariants of 3-Manifolds,'' dg-ga/9710001, to appear in
J.\ Diff.\ Geom.
\bibitem{FTI} T.~Q.~T.~Le, J.~Murakami and T.~Ohtsuki, ``On a Universal
Quantum Invariant of 3-Manifolds,'' q-alg/9512002,
to appear in Topology.
\end{thebibliography}
\end{document}
\section{Introduction and main results}
\makeatletter{\renewcommand*{\@makefnmark}{}
\footnotetext{AMS subject classification: Primary 47D06, 47D08; Secondary 35A08, 35B25}
\footnotetext{Keywords and phrases: Schr\"odinger perturbation, Gaussian estimates}
\footnotetext{The research of the first author was supported by the NCN grant 2015/18/E/ST1/00239\makeatother}}
Let $d=1,2,\ldots$.
We consider the Gauss-Weierstrass kernel,
\[g(t,x,y)=
(4\pi t)^{-d/2} e^{-\frac{|y-x|^2}{4t}}, \qquad t>0,\ x,y\in\RR^d.\]
It is well known that $g$ is the fundamental solution of the equation $\partial_t u=\Delta u$ and a time-homogeneous probability transition
density, namely the heat kernel of $\Delta$.
Throughout the paper we let
$V: \RR^d\to \mathbb{R}$
be a Borel measurable function.
We call
$G:(0,\infty)\times \RR^d\times \RR^d\to [0,\infty]$
the heat kernel of $\Delta+V$
or the Schr\"odinger perturbation of $g$ by $V$, if the following
Duhamel or perturbation formula holds for $t>0$, $x,y\in \RR^d$,
\[
G(t,x,y)=g(t,x,y)+\int_0^t \int_{\RR^d} G(s,x,z)V(z)g(t-s,z,y)dzds.
\]
One of the main directions in the study of $G(t,x,y)$
is to find its estimates, or bounds.
It is natural to
ask whether there are positive numbers, i.e., {\it constants} $0<c_1\le c_2<\infty$, such that the following two-sided bound holds,
\begin{align}\label{est:sharp_uni}
c_1 \leq \frac{G(t,x,y)}{g(t,x,y)}\leq c_2,\qquad t>0,\ x,y\in \RR^d.
\end{align}
We call \eqref{est:sharp_uni}
{\it sharp Gaussian estimates} (or bounds) {\it global} (or uniform) in time.
One can also consider a weaker property: whether, for a given $T\in (0,\infty)$,
\begin{align}\label{est:sharp_time}
c_1 \leq \frac{G(t,x,y)}{g(t,x,y)}\leq c_2 \,,\qquad 0<t\le T,\ x,y\in \RR^d.
\end{align}
We call \eqref{est:sharp_time} {\it sharp Gaussian estimates local} in time.
We observe that the inequality in \eqref{est:sharp_uni} is stronger than
the
{\it
plain
Gaussian estimates global} in time
\begin{align}\label{est:gaus}
c_1\, (4\pi t)^{-d/2} e^{-\frac{|y-x|^2}{4t\varepsilon_1}} \leq G(t,x,y)\leq c_2\, (4\pi t)^{-d/2} e^{-\frac{|y-x|^2}{4t\varepsilon_2}},\qquad t>0,\ x,y\in \RR^d,
\end{align}
where
$0<\varepsilon_1 \le 1\le \varepsilon_2<\infty$.
Similarly, \eqref{est:sharp_time} is stronger than
the
{\it
plain
Gaussian estimates local} in time
\begin{align}\label{est:gaus_b}
c_1\, (4\pi t)^{-d/2} e^{-\frac{|y-x|^2}{4t\varepsilon_1}} \leq G(t,x,y)\leq c_2\, (4\pi t)^{-d/2} e^{-\frac{|y-x|^2}{4t\varepsilon_2}},\qquad 0<t\leq T,\ x,y\in \RR^d.
\end{align}
We refer the reader to
\cite{MR3914946}
and \cite{MR3200161}
for a brief survey on the literature
concerning \eqref{est:sharp_uni}, \eqref{est:sharp_time},
\eqref{est:gaus}
and \eqref{est:gaus_b}, in particular,
on the results of
\cite{MR1978999},
\cite{MR1994762} and \cite{MR4093916}.
In the present paper our main focus is on
the distinction between local sharp Gaussian estimates \eqref{est:sharp_time}
and local plain Gaussian estimates
\eqref{est:gaus_b}.
In Theorem~\ref{thm:t1}
we combine our findings with those of \cite{MR3914946} to depict when for $V\leq 0$
local (or global) {\it sharp} Gaussian estimates
hold
if and only if local (or global) {\it plain} Gaussian estimates hold.
\begin{thm}\label{thm:t1}
Let $V\leq 0$. Then, \eqref{est:sharp_time} holds if and only if
\eqref{est:gaus_b} holds
according to the {\rm 'local in time'} column of Table~\ref{t:1}.
Similarly,
\eqref{est:sharp_uni} holds if and only if \eqref{est:gaus} holds according to the
{\rm 'global in time'} column.
\begin{table}[h!]
\begin{center}
\begin{tabular}[t]{ m{3cm} |c|l|l| m{3cm}}
\cline{2-4}
&dimension & \makecell{local\\in time} & \makecell{global\\ in time} & \\
\cline{2-4}
\noalign{\vskip\doublerulesep
\vskip-\arrayrulewidth}
\cline{2-4}
& $d\geq 4$ & No & No \\
\cline{2-4}
& $d=3$ & No\textsuperscript{\tiny{1)}} & Yes\textsuperscript{\tiny{2)}} \\
\cline{2-4}
& $d=2$ & Yes\textsuperscript{\tiny{3)}} & Yes\textsuperscript{\tiny{5)}} \\
\cline{2-4}
& $d=1$ & Yes\textsuperscript{\tiny{4)}} & Yes\textsuperscript{\tiny{5)}} \\
\cline{2-4}
\end{tabular}
\caption{Equivalence of {\it sharp} and {\it plain} Gaussian bounds for $V\leq 0$.}\label{t:1}
\end{center}
\end{table}
\end{thm}
At this point we include some comments and references that complete Table~\ref{t:1} and that can also be traced in other places in the paper.
\begin{rem}
Let $V\leq 0$. We list the superscripts of Table~\ref{t:1}.
\begin{enumerate}
\item[{\rm 1)}] we refer the reader to \cite[Theorem~1B]{MR1994762};
\item[{\rm 2)}] \eqref{est:sharp_uni} and \eqref{est:gaus} are equivalent to the potential boundedness of $V$ if $d=3$, see \cite{MR3914946};
\item[{\rm 3)}] \eqref{est:sharp_time} and \eqref{est:gaus_b} are equivalent to the enlarged Kato class condition on $V$ if $d=2$, see \eqref{enKato} and Corollary~\ref{cor:d2_1};
\item[{\rm 4)}] \eqref{est:sharp_time} and \eqref{est:gaus_b} are equivalent to Kato class condition on $V$ (uniform local integrability of $V$) if $d=1$, see \eqref{Kato} and Corollary~\ref{cor:d1_1};
\item[{\rm 5)}] \eqref{est:sharp_uni} as well as \eqref{est:gaus} are impossible for non-trivial $V$ if $d\leq 2$, see \cite[page 3]{MR3914946}.
\end{enumerate}
\end{rem}
In the literature
there exist several intrinsic quantities
that are used to
characterize $V \leq 0$
for which \eqref{est:sharp_time} holds,
and
to formulate
necessary and (separately) sufficient conditions
for \eqref{est:sharp_time} if $V\geq 0$.
Let us start with one that
derives from
Zhang \cite[Lemma~3.1 and Lemma~3.2]{MR1978999} and from Bogdan, Jakubowski and Hansen \cite[(1)]{MR2457489}.
For $t>0$ and $x,y\in\RR^d$
we define
\begin{align}\label{def:S}
S(V,t,x,y)=\int_0^t \int_{\RR^d} \frac{g(s,x,z)g(t-s,z,y)}{g(t,x,y)}|V(z)|\,dzds\,.
\end{align}
Further, we let
\begin{align*}
\|S(V,t)\|_{\infty}= \sup_{x,y\in\RR^d} S(V,t,x,y)\,,\qquad
\|S(V)\|_{T,\infty}=\sup_{0<t\leq T} \|S(V,t)\|_{\infty}\,.
\end{align*}
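As a simple normalization check, note that for $V\equiv 1$ the
Chapman--Kolmogorov identity
$\int_{\RR^d} g(s,x,z)g(t-s,z,y)\,dz=g(t,x,y)$ gives
\[
S(1,t,x,y)=\int_0^t 1\,ds=t\,,
\]
so $\|S(1)\|_{T,\infty}=T$ and, more generally,
$\|S(V)\|_{T,\infty}\leq T\|V\|_{\infty}$ for bounded $V$.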
Other quantities are surveyed in Section~\ref{sec:overview}.
The following lemma is an excerpt from
\cite{MR3914946}
that
exposes the relation between $\|S(V)\|_{T,\infty}$
and \eqref{est:sharp_time},
and
will
suffice for our discussion and purposes.
We write, as usual, $f^+=\max\{0,f\}$, $f^-=\max\{0,-f\}$.
\begin{lem}\label{lem:comb}
We have
\begin{enumerate}
\item[1)] If $V\leq 0$, then
for each $T\in (0,\infty)$,
\eqref{est:sharp_time} is equivalent to
$\|S(V)\|_{T,\infty}<\infty$.
\item[2)] If $V \geq 0$, then \eqref{est:sharp_time} implies
$\|S(V)\|_{T,\infty}<\infty$
for each $T\in (0,\infty)$.
\item[3)] If for some
$h>0$ and $0\le \eta<1$ we have
$\|S(V^+)\|_{h,\infty}\leq \eta$
and if $S(V^-,t,x,y)$ is bounded on bounded subsets of
$(0,\infty)\times\RR^d\times\RR^d$,
then
\begin{align}\label{gen_est}
e^{-S(V^-,t,x,y)} \leq \frac{G(t,x,y)}{g(t,x,y)}\leq \left(\frac{1}{1-\eta}\right)^{1+t/h}, \qquad t>0, \ x,y\in \RR^d \,.
\end{align}
\end{enumerate}
\end{lem}
The relation between the bound of \eqref{def:S} and the upper bound in \eqref{gen_est}, in the framework of integral kernels, can be found in \cite{MR3000465}. For some other variants see \cite{MR4058740}.
Recall that the celebrated sufficient condition
for the local plain Gaussian estimates \eqref{est:gaus_b} is that $V$ belongs to {\it the Kato class}
(\cite{MR644024}, \cite{MR670130},
\cite{MR333833}, \cite{MR0203473}),
which we abbreviate to
$V\in \mathcal{K}_d$. More precisely, $V\in \mathcal{K}_d$ if
\begin{align}\label{Kato}
\lim_{t\to 0^+} \sup_{x\in\RR^d} \int_0^t\int_{\RR^d} g(s,x,z) |V(z)|\,dzds=0\,.
\end{align}
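For instance, if $V(z)=|z|^{-\alpha}$ with $0<\alpha<\min\{2,d\}$, then
by scaling ($z=\sqrt{s}\,w$),
\[
\sup_{x\in\RR^d}\int_{\RR^d} g(s,x,z)|z|^{-\alpha}\,dz
= c\, s^{-\alpha/2}\,,\qquad
c=\sup_{x\in\RR^d}\int_{\RR^d} g(1,x,w)|w|^{-\alpha}\,dw<\infty\,,
\]
so the integral in \eqref{Kato} is at most
$\frac{2c}{2-\alpha}\,t^{1-\alpha/2}\to 0$ as $t\to 0^+$, and
$V\in\mathcal{K}_d$.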
We say that
$V$ belongs to
{\it the enlarged Kato class},
which we denote by $V\in \widehat{\mathcal{K}}_d$,
if
\begin{align}\label{enKato}
\sup_{x\in\RR^d} \int_0^t\int_{\RR^d} g(s,x,z) |V(z)|\,dzds<\infty\,,
\end{align}
holds for some (every) $t>0$
(see \cite[Proposition~5.1]{MR845197}).
The class $\widehat{\mathcal{K}}_d$ is also known as {\it the Dynkin class} in a measure theory context.
We refer the reader to
\cite{MR1132313},
\cite{MR2345907}
and
\cite{MR3713578}
for a wider perspective on the Kato class;
and to
\cite{MR1488344},
\cite{MR1687500},
\cite{MR1783642},
\cite{MR1994762},
\cite{MR2253015},
\cite{MR2253111},
\cite{MR3914946}
for a corresponding class and results for time-dependent $V$.
We will also use the following notation
$$
\Delta^{-1} V (x)= - \int_0^{\infty}\int_{\RR^d} g(s,x,z) V(z)\,dzds\,,
\qquad
\qquad
\|\Delta^{-1}V\|_{\infty}=\sup_{x\in \RR^d} |\Delta^{-1} V (x)|\,.
$$
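Note that for $d\geq 3$, integrating the Gauss-Weierstrass kernel in
time identifies $-\Delta^{-1}$ with the Newtonian potential,
\[
\int_0^{\infty} g(s,x,z)\,ds
= \frac{\Gamma(d/2-1)}{4\pi^{d/2}}\,|x-z|^{2-d}\,,
\]
so, in particular, for $V\leq 0$ the condition
$\|\Delta^{-1}V\|_{\infty}<\infty$ is the uniform boundedness of the
Newtonian potential of $|V|$.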
We give main results concerning the difference between sharp and plain Gaussian estimates.
We distinguish four cases: $d\geq 4$, $d=3$, $d=2$, $d=1$.
\begin{thm}\label{thm:dgeq4}
Let $d\geq 4$. There exists $V \leq 0$ with the following properties
\begin{enumerate}
\item[{\rm(a)}] ${\rm supp} (V)\subseteq B(0,1)$,
\item[{\rm(b)}] $V\in\mathcal{K}_d$,
\item[{\rm(c)}] $\|\Delta^{-1} V\|_{\infty}<\infty$,
\item[{\rm(d)}] $\|S(V,t)\|_{\infty}=\infty$ for every $t>0$.
\end{enumerate}
\end{thm}
Such a strong result is not possible if $d=3$. Indeed, in this dimension the condition
$\|\Delta^{-1} V\|_{\infty}<\infty$ implies (is equivalent to) $\sup_{t>0}\|S(V,t)\|_{\infty}<\infty$, see
\cite[(7) and (8)]{MR3914946}. In particular, if $V\in\mathcal{K}_d$ has compact support,
then $\|\Delta^{-1} V\|_{\infty}<\infty$.
\begin{thm}\label{thm:d3}
Let $d=3$. There exists $V\leq 0$ with the following properties
\begin{enumerate}
\item[{\rm(a)}] $V\in\mathcal{K}_3$,
\item[{\rm(b)}] $\|S(V ,t)\|_{\infty}=\infty$ for every $t>0$.
\end{enumerate}
\end{thm}
Theorems \ref{thm:dgeq4} and \ref{thm:d3} yield that for $d \ge 3$ there is a function $V\leq 0$
such that \eqref{est:gaus_b} holds with $\varepsilon_1< 1<\varepsilon_2$ arbitrarily close to $1$ and \eqref{est:sharp_time}
does not hold. Additionally, for $d \ge 4$ the function $V$ may be chosen in such a way that $\supp V$ is compact and \eqref{est:gaus}
holds, see Corollaries~\ref{cor:d4} and~\ref{cor:d3}.
We note that the latter cannot be done in dimension $3$. In fact, if $d=3$ and $V\leq 0$, the global plain Gaussian estimates \eqref{est:gaus} hold if and only if the global sharp Gaussian estimates \eqref{est:sharp_uni} hold, see \cite[Page 6]{MR3914946}.
From Theorem \ref{thm:d3} we deduce that such a phenomenon does not occur for local in time bounds.
The situation is radically different if $d\le2$. In this case the condition $V \in \mathcal{K}_d$ yields $\|S(V ,t)\|_{\infty}<\infty$. This is a consequence of the following theorem.
\begin{thm}\label{thm:d2d1}
Let $d=2$ or $d=1$. There exists an absolute constant $c>0$ such that for all $T>0$ and $V$ we have
\begin{align}\label{ineq:d2d1}
c^{-1} \sup_{x\in\RR^d} \int_0^T\int_{\RR^d} g(s,x,z) |V(z)|dzds \leq
\|S(V)\|_{T,\infty} \leq c \sup_{x\in\RR^d} \int_0^T \int_{\RR^d} g(s,x,z) |V(z)|dzds\,.
\end{align}
\end{thm}
As a corollary of Theorem \ref{thm:d2d1}, we characterize the classes $\mathcal{K}_d$ and $\widehat{\mathcal{K}}_d$ for $d\le 2$ by using the quantity $\|S(V)\|_{T,\infty}$, see Corollaries~\ref{cor:d2_1} and~\ref{cor:d1_1}. Additionally, we obtain that for $d\le 2$ and $V\le0$, \eqref{est:sharp_time} holds if and only if $V\in \widehat{\mathcal{K}}_d$. For $d=1$ the same property holds for $V\ge0$. See Corollaries~\ref{cor:d2_1} and~\ref{cor:d1_2}.
The rest of the paper is organized as follows. In Section~\ref{sec:prel} we collect other quantities used in the literature to analyse \eqref{est:sharp_time}, and we show that they are comparable. We also discuss analogies with various descriptions of the Kato class.
In Section~\ref{sec:new_test} we introduce an explicit kernel $K(t,x,y)$ and use it to propose another test for \eqref{est:sharp_time} to hold.
In that section we also formulate and prove Theorem~\ref{thm:new_equiv}. In Section~\ref{sec:proofs}
we prove Theorems \ref{thm:dgeq4} -- \ref{thm:d2d1}.
In Section 5 we give corollaries of the main results of the paper
and the proof of Theorem~\ref{thm:t1}.
Throughout the paper $B(x,r)$ denotes a ball of radius $r>0$ in $\RR^d$ centred at $x\in\RR^d$. In short we write $B_r=B(0,r)$.
\subsection*{Acknowledgements}
We thank Krzysztof Bogdan for helpful comments on the paper.
\section{Preliminaries}\label{sec:prel}
\subsection{An overview of tests for sharp bounds}\label{sec:overview}
We have already seen in Lemma~\ref{lem:comb}
how to use a test based on $S(V,t,x,y)$
to analyse \eqref{est:sharp_time}.
In \cite{MR1978999} Zhang introduced yet another object,
for $t>0$ and $x,y\in\RR^d$,
\begin{align*}
N(V,t,x,y)=&\int_0^{t/2}\int_{\RR^d}\frac{e^{-|z-y+(\tau/t)(y-x)|^2/(4\tau)}}{\tau^{d/2}}|V(z)|dzd\tau\\
&\qquad+\int_{t/2}^t\int_{\RR^d} \frac{e^{-|z-y+(\tau/t)(y-x)|^2/(4(t-\tau))}}{(t-\tau)^{d/2}} |V(z)|dzd\tau \,.
\end{align*}
It is actually comparable with $S$ in the following sense,
\begin{align}
S(V,t,x,y)&\geq m_1\, N(V,t/2,x,y)\,,\tag{L} \label{L}\\
S(V,t,x,y)&\leq m_2\, N(V,t,x,y)\,,\tag{U} \label{U}
\end{align}
where constants $m_1$, $m_2$ depend only on $d$, see \cite[(L) and (U) on page 5]{MR3914946}.
The quantity $N$ gives rise to
\begin{align*}
\|N(V,t)\|_{\infty}= \sup_{x,y\in\RR^d} N(V,t,x,y)\,,\qquad
\|N(V)\|_{T,\infty}= \sup_{0<t\leq T} \|N(V,t)\|_{\infty}\,.
\end{align*}
On the other hand, in \cite{MR1994762} Milman and Semenov (for $d\geq 3$) proposed to use for $\lambda> 0$,
$$
e_*(V,\lambda)= \sup_{\alpha\in\RR^d} \|(\lambda-\Delta +2\alpha \cdot \nabla)^{-1} |V| \|_{\infty}\,.
$$
The operator $(\lambda-\Delta +2\alpha \cdot \nabla)^{-1}$
is an integral operator with a kernel equal to
$\int_0^{\infty} e^{-\lambda s} p_{\alpha}(s,x,y) ds$,
where
for $\alpha\in\RR^d$ and
$t>0$, $x,y\in\RR^d$,
the function
$p_{\alpha}(t,x,y)$
is the fundamental solution of the equation $\partial_t u=\Delta u-2\alpha\cdot \nabla u$, i.e.,
$$p_{\alpha}(t,x,y)= g(t,x-2\alpha t,y)\,.$$
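Indeed, by the chain rule,
\[
\partial_t\, p_{\alpha}(t,x,y)
=(\partial_t g)(t,x-2\alpha t,y)-2\alpha\cdot(\nabla_x g)(t,x-2\alpha t,y)
=\Delta_x p_{\alpha}(t,x,y)-2\alpha\cdot\nabla_x p_{\alpha}(t,x,y)\,,
\]
since $\partial_t g=\Delta_x g$ and differentiation in $x$ commutes
with the shift $x\mapsto x-2\alpha t$.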
We will show that $e_*$ is also comparable with $S$ and $N$. To this end we will use
$$
r_*(V,t)= \sup_{\alpha, x\in\RR^d} \int_0^{t}\int_{\RR^d} p_{\alpha}(s,x,z) |V(z)|\,dzds\,.
$$
\begin{lem}\label{lem:N_J}
For all $t>0$ and $V$ we have
\begin{align*}
r_*(V,t/2) \leq (4\pi)^{-d/2}\|N(V,t)\|_{\infty}
\leq 2\, r_*(V,t/2)\,.
\end{align*}
\end{lem}
\begin{proof}
Note that
\begin{align*}
\sup_{x,y\in\RR^d} \int_0^{t/2}\int_{\RR^d}\frac{e^{-|z-y+(\tau/t)(y-x)|^2/(4\tau)}}{\tau^{d/2}}|V(z)|\,dzd\tau
&=\sup_{\alpha, x\in\RR^d} \int_0^{t/2}\int_{\RR^d}\frac{e^{-|z-x+ 2\alpha \tau|^2/(4\tau)}}{\tau^{d/2}}|V(z)|\,dzd\tau\\
&=(4\pi)^{d/2} \sup_{\alpha, x\in\RR^d}
\int_0^{t/2}\int_{\RR^d} p_{\alpha}(\tau,x,z) |V(z)|\,dzd\tau\,.
\end{align*}
The assertion of the lemma follows from \cite[Lemma~3.1]{MR3914946}.
\end{proof}
\begin{lem}\label{lem:R_J}
For all $\lambda>0$, $\alpha\in\RR^d$ and $V$ we have
\begin{align*}
(1-e^{-1})\, \| (\lambda -\Delta +2\alpha\cdot \nabla )^{-1} |V| \|_{\infty}
&\leq
\sup_{x\in\RR^d} \int_0^{1/\lambda}\int_{\RR^d} p_{\alpha}(s,x,z) |V(z)|\,dzds\,,\\
e\, \| (\lambda -\Delta +2\alpha \cdot \nabla )^{-1} |V| \|_{\infty}
&\geq \sup_{x\in\RR^d} \int_0^{1/\lambda}\int_{\RR^d} p_{\alpha}(s,x,z) |V(z)|\,dzds\,.
\end{align*}
\end{lem}
\begin{proof}
For $t>0$, $x\in\RR^d$ we let
$P_t f(x)=\int_{\RR^d} p_{\alpha}(t,x,z)f(z)\,dz$. Note that
$$
\| (\lambda -\Delta +2\alpha\cdot \nabla )^{-1} |V| \|_{\infty}
= \sup_{x\in\RR^d} \int_0^{\infty} e^{-\lambda t}P_t |V|(x)\,dt\,,
$$
and
$$
\sup_{x\in\RR^d}\int_0^{1/\lambda}\int_{\RR^d} p_{\alpha}(t,x,z) |V(z)|\,dzdt = \sup_{x\in\RR^d} \int_0^{1/\lambda} P_t |V| (x)\,dt\,.
$$
Therefore, the desired inequalities follow from \cite[Lemma~3.3]{MR3713578}.
\end{proof}
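For the reader's convenience we sketch the mechanism behind \cite[Lemma~3.3]{MR3713578}; it uses only the semigroup property of $(P_t)$ and the normalization $\int_{\RR^d}p_{\alpha}(t,x,z)\,dz=1$. Write $J=\sup_{x\in\RR^d}\int_0^{1/\lambda}P_t|V|(x)\,dt$. Since $e^{-\lambda t}\geq e^{-1}$ on $[0,1/\lambda]$, we immediately get $J\leq e\,\|(\lambda-\Delta+2\alpha\cdot\nabla)^{-1}|V|\|_{\infty}$; for the converse bound,

```latex
% Split [0,\infty) into intervals of length 1/\lambda and use
% P_{k/\lambda+s}=P_{k/\lambda}P_s together with P_{k/\lambda}(J\mathbf 1)\leq J:
\begin{align*}
\int_0^{\infty}e^{-\lambda t}P_t|V|(x)\,dt
=\sum_{k=0}^{\infty}e^{-k}\int_0^{1/\lambda}e^{-\lambda s}\,
P_{k/\lambda}\big(P_s|V|\big)(x)\,ds
\leq \sum_{k=0}^{\infty}e^{-k}\,J=\frac{J}{1-e^{-1}}\,,
\end{align*}
```

which gives $(1-e^{-1})\,\|(\lambda-\Delta+2\alpha\cdot\nabla)^{-1}|V|\|_{\infty}\leq J$.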
Recall from \cite[Corollary~2.3]{MR3914946} that
for all $T>0$ and $V$ we have
\begin{align}\label{ineq:2T-T}
\|S(V)\|_{2T,\infty}\leq 2 \|S(V)\|_{T,\infty}\,.
\end{align}
Now,
\eqref{L}, \eqref{U}, \eqref{ineq:2T-T}, Lemma~\ref{lem:N_J} and Lemma~\ref{lem:R_J}
provide the following comparability.
\begin{prop}\label{prop:comp}
For all $T>0$ and $V$ we have
\begin{align}\label{ineq:S_N}
\frac{m_1}{2} \|N(V)\|_{T,\infty} &\leq \|S(V)\|_{T,\infty}\leq m_2 \|N(V) \|_{T,\infty}\,,
\end{align}
as well as
\begin{align}\label{ineq:N_r}
r_*(V,T/2) &\leq (4\pi)^{-d/2}\,\|N(V)\|_{T,\infty} \leq 2 \, r_*(V,T/2)\,,
\end{align}
and
\begin{align}\label{ineq:resolvent_semigroup}
(1-e^{-1})\, e_*(V,1/T) &\leq
r_*(V,T)
\leq e \, e_*(V,1/T)\,.
\end{align}
\end{prop}
Thus, from Proposition~\ref{prop:comp}
and
Lemma~\ref{lem:comb}
we conclude that the four tests on $V$ for
the local sharp Gaussian estimates \eqref{est:sharp_time}
to hold,
based on
$S$, $N$, $r_*$ and $e_*$,
are equivalent if $V\leq 0$,
and comparable if $V\geq 0$
(in the latter case the exact magnitudes of the quantities used in those tests matter, see part 3) of Lemma~\ref{lem:comb}).
In this context we also refer the reader to
\cite[Theorem~1B and Theorem~1C]{MR1994762}, where $e_*$ is brought into play.
We end this subsection with one more observation on $S$ and $N$.
Due to
Lemma~\ref{lem:N_J},
\eqref{ineq:2T-T},
\eqref{ineq:S_N}
and
\eqref{L}
the supremum over $0<t\leq T$
in $\|S(V)\|_{T,\infty}$ and $\|N(V)\|_{T,\infty}$ is, in a sense, dispensable.
\begin{cor}\label{cor:reduction}
For all $T>0$ and $V$ we have
$$\|N(V,T)\|_{\infty} \leq \|N(V)\|_{T,\infty}\leq 2 \|N(V,T)\|_{\infty}\,,$$
and
$$\|S(V,T)\|_{\infty} \leq \|S(V)\|_{T,\infty}\leq 4(m_2/m_1) \|S(V,T)\|_{\infty}\,.$$
\end{cor}
\subsection{Kato class analogies}
It is well known that $V\in \mathcal{K}_d$ if and only if
\begin{align*}
\lim_{\lambda \to \infty} \| (\lambda -\Delta)^{-1}|V| \|_{\infty} =0\,.
\end{align*}
Actually, taking $\alpha=0$ in Lemma~\ref{lem:R_J}, for all $\lambda>0$ and $V$ we get
\begin{align*}
(1-e^{-1})\| (\lambda -\Delta)^{-1}|V| \|_{\infty}
\leq
\sup_{x\in\RR^d} \int_0^{1/\lambda}\int_{\RR^d} g(s,x,z) |V(z)|\,dzds
\leq
e
\| (\lambda -\Delta)^{-1}|V| \|_{\infty}\,,
\end{align*}
which is rather a general relation between a semigroup and its resolvent, see \cite[Lemma~3.3]{MR3713578}.
In particular,
$V$ belongs to
the enlarged Kato class
if and only if
$\| (\lambda -\Delta)^{-1}|V| \|_{\infty}<\infty$
for some (every) $\lambda>0$.
In view of our main discourse
on sharp Gaussian estimates,
a counterpart of those inequalities
is given in
\eqref{ineq:resolvent_semigroup},
also as a consequence of Lemma~\ref{lem:R_J}.
The following result leads to an alternative description of the Kato class
(see \cite[Theorem~1.27]{MR1772266}).
\begin{lem}\label{lem:Katodescr}
There are constants $C_1$ and $C_2$,
depending only on the dimension $d$,
such that for all $t>0$ and $V$ we have
\begin{align}
C_1 A(t) &\le \left[\sup_{x\in\RR^d} \int_{|z-x|<\sqrt{4t}} \frac{|V(z)|}{|z-x|^{d-2}}\,dz\right] \le C_2 A(t)\,,&\qquad d\geq 3; \label{est:A3} \\
C_1 A(t) &\le \left[\sup_{x\in\mathbb{R}^2} \int_{|z-x|<\sqrt{4t}} |V(z)|\log\frac{4t}{|z-x|^2} \,dz\right] \le C_2 A(t) \,,&\qquad d=2; \label{est:A2}\\
C_1 A(t) &\le
\left[ \sup_{x\in\mathbb{R}} \, \sqrt{t}\!\!\! \int_{|z-x|<\sqrt{4t}} |V(z)|\,dz\right] \le C_2 A(t) \,, &\qquad d=1; \label{est:A1}
\end{align}
where
\begin{align*}
A(t) = \sup_{x\in\RR^d} \int_0^{t}\int_{\RR^d} g(s,x,z)|V(z)|\,dzds\,.
\end{align*}
\end{lem}
\begin{proof}
First note that the heat kernel $p_{0,d}$ defined in
\cite[page~47]{MR1772266} has a different time scaling than~$g$, i.e., $g(t,x,y)=p_{0,d}(2t,x,y)$ and $\int_0^t\int_{\RR^d} g(s,x,z)|V(z)|dzds=\frac12\int_0^{2t}\int_{\RR^d} p_{0,d}(s,x,z)|V(z)|dzds$.
The inequalities \eqref{est:A3}
are now deduced from
\cite[Theorem~1.28(a)]{MR1772266}.
The upper bound in \eqref{est:A2}
follows from the lower bound in
\cite[Theorem~1.28(b)]{MR1772266}.
To prove the lower bound in
\eqref{est:A2} we note that
\begin{align*}
\int_{|z-x|< \sqrt{4t}} |V(z)|dz
&=\int_{|z-x|< \frac32 \sqrt{t}} |V(z)|dz+
\int_{\frac32 \sqrt{t} \leq |z-x|<2 \sqrt{t}} |V(z)|dz\\
&\leq 5 \sup_{x\in\mathbb{R}^2} \int_{|z-x|<\frac32 \sqrt{t}} |V(z)|dz
\leq \frac{5}{2\log(4/3)} \sup_{x\in\mathbb{R}^2} \int_{|z-x|<\frac32 \sqrt{t}} |V(z)| \log \frac{4t}{|z-x|^2} dz\,,
\end{align*}
and apply the upper bound in
\cite[Theorem~1.28(b)]{MR1772266}.
Finally we look at \eqref{est:A1},
and due to \cite[Theorem~1.28(c)]{MR1772266}
it suffices to show the upper bound in \eqref{est:A1}.
To this end we observe that
\begin{align*}
\int_{|z-x|<\sqrt{4t}} \left(\sqrt{4t}-|z-x|\right)|V(z)|dz
+\int_{|z-x|<\sqrt{4t}}|z-x||V(z)|dz &\\
\geq
\int_{|z-x|<\sqrt{t}} \sqrt{t}\, |V(z)|dz
+\int_{\sqrt{t} \leq |z-x|<\sqrt{4t}}\sqrt{t}|V(z)|dz
& \geq
\sqrt{t} \int_{|z-x|<\sqrt{4t}}|V(z)|dz\,,
\end{align*}
and use the lower bound in \cite[Theorem~1.28(c)]{MR1772266}.
\end{proof}
Therefore, $V$ belongs to the Kato class if and only if the expressions in the square brackets
of Lemma~\ref{lem:Katodescr}
converge to $0$ as $t\to 0^+$, see also
\cite[Theorem~4.5]{MR644024},
\cite[Proposition~A.2.6]{MR670130},
\cite[Theorem~3.6]{MR1329992},
\cite[Proposition~4.3]{MR3914946}.
In Section~\ref{sec:new_test} we establish a counterpart of Lemma~\ref{lem:Katodescr} describing sharp Gaussian estimates \eqref{est:sharp_time}.
At least in high dimensions the latter description of the Kato class may be viewed through the prism of the following property:
for every $d\geq 3$ there exists a constant $c>0$, depending only on $d$, such that for all
$t>0$ and
$x,z\in\RR^d$ satisfying
$|z-x|\leq \sqrt{4t}$ we have
\begin{align*}
c^{-1} \int_0^{\infty} g(s,x,z)\,ds
\leq \int_0^{t} g(s,x,z)\,ds
\leq \int_0^{\infty} g(s,x,z)\,ds\,.
\end{align*}
In the context of sharp Gaussian estimates
an analogue of that observation is proven in
Proposition~\ref{thm:J_K_new}, more precisely in
\eqref{ineq:t_vs_infty}.
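For $d\geq 3$ the full-time integral above is, up to a constant, the Newtonian kernel appearing in \eqref{est:A3}; indeed, substituting $u=|x-z|^2/(4s)$,

```latex
% \int_0^\infty g(s,x,z)\,ds for d\ge 3, via u=|x-z|^2/(4s):
\begin{align*}
\int_0^{\infty} (4\pi s)^{-d/2} e^{-\frac{|x-z|^2}{4s}}\,ds
=\frac{|x-z|^{2-d}}{4\pi^{d/2}}\int_0^{\infty}u^{d/2-2}e^{-u}\,du
=\frac{\Gamma(d/2-1)}{4\pi^{d/2}}\,\frac{1}{|x-z|^{d-2}}\,;
\end{align*}
% in particular, for d=3 this is the classical Green function 1/(4\pi|x-z|).
```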
\section{A new test for sharp bounds}\label{sec:new_test}
Each of the tests based on $S$, $N$, $r_*$ or $e_*$
may have
its own
advantages and disadvantages when applied to particular functions $V$.
The utility of
the condition based on $S$
has already been exposed in
\cite[Section~1.2]{MR3914946}
for functions $V$ that factorize.
We use this paper as an opportunity
to propose another equivalent test based on a function $K(t,x,y)$, which originates in $r_*(V,T)$.
More precisely, we will estimate $r_*(V,T)$ by investigating the kernel
$\int_0^T p_{\alpha}(s,x,z)ds$ on a certain crucial region.
In what follows the notation is chosen to be consistent with \cite{MR3914946}.
For $t>0$, $x,y\in\RR^d$
we let:
\begin{align*}
K(t,x,y)&=
e^{-\frac{|x||y|-\left<x,y\right>}{2}} \dfrac{1}{|x|^{d-2}} \left(1+|x||y|\right)^{d/2-3/2}{\bf 1}_{|x|\leq t|y|}\,,&
{\rm if}\quad d\geq 3;\\
&&\\
K(t,x,y)&=
e^{-\frac{|x||y|-\left<x,y\right>}{2}} \log\left( 1+\dfrac{1}{\sqrt{|x||y|}}\right){\bf 1}_{|x|\leq t|y|}\,,&
{\rm if}\quad d= 2;\\
&&\\
K(t,x,y)&=
e^{-\frac{|x||y|-\left<x,y\right>}{2}}
\sqrt{t}\left(1+t|y|^2\right)^{-1/2}
{\bf 1}_{|x|\leq t|y|}\,,&
{\rm if}\quad d=1.\\
\end{align*}
We further define
\begin{align*}
K(V,t,x,y)=\int_{\RR^d} K(t,z-x,y) |V(z)|\,dz\,,
\qquad\qquad
\|K(V,t)\|_{\infty}=\sup_{x,y\in\RR^d} K(V,t,x,y)\,.
\end{align*}
\begin{thm}\label{thm:new_equiv}
There are constants $0<C_1<C_2<\infty$, depending only on $d$, such that
for all $T>0$ and $V$ we have
$$
C_1 \|K(V,T)\|_{\infty} \leq \|S(V)\|_{T,\infty} \leq C_2 \|K(V,T)\|_{\infty}\,.
$$
\end{thm}
Before giving the proof of Theorem~\ref{thm:new_equiv}
we present some of its
consequences, together with comments and auxiliary results.
\begin{cor}
Let $V\leq 0$.
Then \eqref{est:sharp_time}
holds if and only if $\|K(V,T)\|_{\infty}<\infty$
for some (every) $T>0$.
\end{cor}
\begin{rem}
If $d\geq 3$, using Proposition~\ref{prop:comp}, Theorem~\ref{thm:new_equiv} and letting $T\to \infty$ we recover the result of \cite[Theorem~1.4]{MR3914946} that concerns global sharp Gaussian estimates \eqref{est:sharp_uni}.
\end{rem}
We note that the kernels of $S$ and $N$ are given explicitly, but
they have a rather complex structure involving three parameters
$0<t\leq T$, $x,y\in\RR^d$,
over which the supremum is taken.
Corollary~\ref{cor:reduction} makes it possible to remove one parameter from $S$ and $N$.
A certain reduction is also present in
$e_*$ and $r_*$, where only two parameters $\alpha,x\in\RR^d$ appear.
It is also known, and follows from a simple substitution
(see \cite[8.432, formula~6.]{MR3307944}), that for $\lambda>0$
and $x,z,\alpha\in\RR^d$,
\begin{align}\label{eq:rez-ker}
\int_0^{\infty}e^{-\lambda s} p_{\alpha}(s,x,z)\,ds=
(2\pi)^{-d/2}
e^{-\left<z-x, \alpha\right>}
\left(\frac{\sqrt{\lambda+|\alpha|^2}}{|z-x|}\right)^{d/2-1}
K_{d/2-1}\left(|z-x|\sqrt{\lambda+|\alpha|^2}\right),
\end{align}
where $K_{\nu}$ is the modified Bessel function of the second kind.
Thus,
\begin{align*}
e_*(V,\lambda)=
(2\pi)^{-d/2}
\sup_{\alpha,x\in\RR^d}
\int_{\RR^d} e^{-\left<z-x, \alpha\right>}
\left(\frac{\sqrt{\lambda+|\alpha|^2}}{|z-x|}\right)^{d/2-1}
K_{d/2-1}\left(|z-x|\sqrt{\lambda+|\alpha|^2}\right) |V(z)|\,dz\,.
\end{align*}
It is well known that $K_{d/2-1}$ admits the following estimates:
$K_{d/2-1}(z) \approx z^{1-d/2} e^{-z} (1+z)^{d/2-3/2}$ for $d\geq 3$, and
$K_0(z) \approx \ln(1+z^{-1/2})e^{-z}$
(see \cite[formulas 9.6.6, 9.6.8, 9.6.9, 9.7.2]{MR0167642}, \cite[page 11]{MR3914946});
additionally,
$K_{-1/2}(z) = \sqrt{\pi/(2z)}\, e^{-z}$
(see \cite[formulas 10.2.16, 10.2.17]{MR0167642}).
Hence,
\begin{align*}
e_*(V,\lambda)
&\approx
\sup_{\alpha,x\in\RR^d}
\int_{\RR^d}\frac{e^{-\left<z,\alpha\right>-|z|\sqrt{\lambda+|\alpha|^2}}}{|z|^{d-2}}
\left(1+|z|\sqrt{\lambda+|\alpha|^2}\right)^{d/2-3/2}|V(z+x)|\,dz \,,
& {\rm if}\quad d\geq 3;\\
&&\\
e_*(V,\lambda)&\approx
\sup_{\alpha,x\in\mathbb{R}^2} \int_{\mathbb{R}^2} e^{-\left<z,\alpha\right>-|z|\sqrt{\lambda+|\alpha|^2}} \log\left(1+\left({|z|\sqrt{\lambda+|\alpha|^2}}\right)^{-1/2}\right)
|V(z+x)|\,dz\,,
& {\rm if} \quad d=2;\\
&&\\
e_*(V,\lambda)&=
\sup_{\alpha,x\in\mathbb{R}}
\frac{1}{2}
\int_{\mathbb{R}} e^{-\left<z,\alpha\right>-|z|\sqrt{\lambda+|\alpha|^2}} \left( \sqrt{\lambda+|\alpha|^2}\right)^{-1} |V(z+x)|\,dz\,,
& {\rm if} \quad d=1.\\
\end{align*}
Here $\approx$ means that the ratio of both sides is bounded above and below by positive constants independent of $\lambda$ and $V$.
In fact, the comparability constants above depend only on $d$.
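In particular, the exact equality in the case $d=1$ can be traced through \eqref{eq:rez-ker}: with $\Lambda=\lambda+|\alpha|^2$ and the closed form of $K_{-1/2}$,

```latex
% The d=1 kernel of \eqref{eq:rez-ker} in closed form, using
% K_{-1/2}(w)=\sqrt{\pi/(2w)}\,e^{-w} with w=|z-x|\sqrt{\Lambda}:
\begin{align*}
(2\pi)^{-1/2}\left(\frac{\sqrt{\Lambda}}{|z-x|}\right)^{-1/2}
K_{-1/2}\big(|z-x|\sqrt{\Lambda}\big)
=(2\pi)^{-1/2}\sqrt{\frac{\pi}{2}}\;\frac{e^{-|z-x|\sqrt{\Lambda}}}{\sqrt{\Lambda}}
=\frac{1}{2\sqrt{\Lambda}}\,e^{-|z-x|\sqrt{\Lambda}}\,,
\end{align*}
% which, multiplied by e^{-\langle z-x,\alpha\rangle}, yields the exact
% constant 1/2 in the displayed formula for e_*(V,\lambda).
```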
The relation between the exponents
in the kernel $K(t,x,y)$ and in the explicit estimates of
$e_*$
becomes more visible
upon putting
$\alpha=-y/2$
and noticing that
\begin{align}\label{eq:r_s}
\left<z,\alpha\right>+|z|\sqrt{\lambda+|\alpha|^2}
=\left<z,\alpha\right>+|z||\alpha|+
|z|\frac{\lambda}{\sqrt{\lambda+|\alpha|^2}+|\alpha|}\,.
\end{align}
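The identity \eqref{eq:r_s} reduces, after dividing by $|z|$ and subtracting $\left<z,\alpha\right>$, to an elementary conjugation:

```latex
% Rationalizing \sqrt{\lambda+|\alpha|^2}-|\alpha|:
\begin{align*}
\sqrt{\lambda+|\alpha|^2}-|\alpha|
=\frac{\big(\sqrt{\lambda+|\alpha|^2}-|\alpha|\big)\big(\sqrt{\lambda+|\alpha|^2}+|\alpha|\big)}
{\sqrt{\lambda+|\alpha|^2}+|\alpha|}
=\frac{\lambda}{\sqrt{\lambda+|\alpha|^2}+|\alpha|}\,.
\end{align*}
```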
What is more,
on its support
$K(t,x,y)$
coincides with the above explicit estimates of $e_*$ with $\lambda=0$ if $d\geq 2$,
and a similar comparability holds with $\lambda=1/t$ if $d=1$.
This is not a coincidence, as the next proposition makes clear;
it plays a key role in the proof of Theorem~\ref{thm:new_equiv}
and reveals the origin of the function $K(t,x,y)$.
\begin{prop}\label{thm:J_K_new}
For all $t>0$,
$\alpha,x,z\in\RR^d$ satisfying
$|z-x|\leq 2|\alpha| t$ we have
\begin{align}
\frac12 \int_0^{\infty} p_{\alpha}(s,x,z)\,ds
&\leq \int_0^{t} p_{\alpha}(s,x,z)\,ds
\leq \int_0^{\infty} p_{\alpha}(s,x,z)\,ds\,, \qquad d\geq 2 ;
\label{ineq:t_vs_infty} \\
\frac{e}{e+1}\int_0^{\infty} e^{-s/t} p_{\alpha}(s,x,z)\,ds
&\leq \int_0^{t} p_{\alpha}(s,x,z)\,ds
\leq e \int_0^{\infty} e^{-s/t} p_{\alpha}(s,x,z)\,ds\,,\qquad d=1.
\label{ineq:t_vs_e-infty}
\end{align}
There are constants $0<n_1\leq n_2<\infty$, depending only on $d$, such that
for all $t>0$,
$\alpha,x,z\in\RR^d$ satisfying
$|z-x|\leq 2|\alpha| t$ we have
\begin{align}\label{ineq:J_K_fun}
n_1 K(t,z-x,-2\alpha) \leq
\int_0^{t} p_{\alpha}(s,x,z)\,ds
\leq n_2 K(t,z-x,-2\alpha)\,.
\end{align}
\end{prop}
\begin{proof}
For simplicity we let $\tilde{x}=z-x$ and $y=-2\alpha$. Then
we have
\begin{align*}
\int_0^{t} p_{\alpha}(s,x,z)\,ds
= (4\pi t)^{-d/2} \,t \int_0^1 s^{-d/2} e^{-\frac{|\tilde{x}-ts y|^2}{4ts}} \,ds\,.
\end{align*}
Note that for $|\tilde{x}|\leq |y| t$ and $s\in (0,1)$ we have
$$\frac{|\tilde{x}|^2}{s} + s |ty|^2\leq \frac{|ty|^2}{s} +s |\tilde{x}|^2\,,$$
since the difference of the right- and left-hand sides equals $(|ty|^2-|\tilde{x}|^2)(1/s-s)\geq 0$.
For $d\geq 2$ we get
\begin{align*}
\int_0^1 s^{-d/2} e^{-\frac{|\tilde{x}-ts y|^2}{4ts}} \,ds
&=e^{\frac{\left<\tilde{x},y\right>}{2}} \int_0^1 s^{-d/2+1} e^{-\left(\frac{|\tilde{x}|^2}{s} +s |ty|^2\right)/(4t)}\, \frac{ds}{s}\\
&\geq e^{\frac{\left<\tilde{x},y\right>}{2}} \int_0^1 s^{d/2-1} e^{-\left(\frac{|ty|^2}{s} +s |\tilde{x}|^2\right)/(4t)} \,\frac{ds}{s}
= \int_1^{\infty} u^{-d/2+1} e^{-\frac{|\tilde{x}- tu y|^2}{4t u}} \,\frac{du}{u}\,.
\end{align*}
Therefore, for $|z-x|\leq 2|\alpha| t$,
\begin{align*}
\int_0^{\infty} p_{\alpha}(s,x,z)\,ds
\leq 2 \int_0^{t} p_{\alpha}(s,x,z)\,ds\,.
\end{align*}
This proves \eqref{ineq:t_vs_infty}.
For $d=1$ we have
\begin{align*}
\int_0^1 s^{-1/2} e^{-\frac{|\tilde{x}-ts y|^2}{4ts}} \,ds
&=e^{\frac{\left<\tilde{x},y\right>}{2}} \int_0^1 s^{1/2} e^{-\left(\frac{|\tilde{x}|^2}{s} +s |ty|^2\right)/(4t)}\, \frac{ds}{s}
\geq e^{\frac{\left<\tilde{x},y\right>}{2}} \int_0^1 s^{1/2} e^{-\left(\frac{|ty|^2}{s} +s |\tilde{x}|^2\right)/(4t)} \,\frac{ds}{s}\\
&= \int_1^{\infty} u^{-1/2} e^{-\frac{|\tilde{x}- tu y|^2}{4t u}} \,\frac{du}{u}
\geq e \int_1^{\infty} e^{-u} u^{1/2} e^{-\frac{|\tilde{x}- tu y|^2}{4t u}} \,\frac{du}{u} \,.
\end{align*}
Therefore, for $|z-x|\leq 2|\alpha| t$,
\begin{align*}
\int_0^{\infty} e^{-s/t} p_{\alpha}(s,x,z)\,ds
\leq (1+1/e) \int_0^{t} p_{\alpha}(s,x,z)\,ds\,.
\end{align*}
This ends the proof of \eqref{ineq:t_vs_e-infty}.
Now, note that we can take $\lambda=0$ in \eqref{eq:rez-ker} by letting $\lambda \to 0^+$.
Then \eqref{ineq:J_K_fun}
follows from \eqref{eq:rez-ker} and the
estimates of $K_{\nu}$ mentioned above;
and from \eqref{eq:r_s} for $d=1$.
\end{proof}
\begin{lem}
For all $T>0$ and $V$ we have
\begin{align}
r_*(V,T) &\geq
\frac{n_1}{2} \|K(V,T)\|_{\infty}+ \frac{1}{2}\sup_{x\in\RR^d} \int_0^{T}\int_{\RR^d} g(s,x,z) |V(z)|\,dzds \,, \label{ineq:r_lower} \\
r_*(V,T) & \leq n_2 \|K(V,T)\|_{\infty}+2^{d-2} \sup_{x\in\RR^d} \int_0^{4T}\int_{\RR^d} g(s,x,z) |V(z)|\,dzds\,.
\label{ineq:r_upper}
\end{align}
The constants $0<n_1\leq n_2<\infty$ are taken from \eqref{ineq:J_K_fun}.
\end{lem}
\begin{proof}
Recall that
$p_{\alpha}(t,x,z)= g(t,x-2\alpha t,z)$.
If we put $\alpha=0$, we get
that $r_*(V,T)$ is bounded below by $\sup_{x\in\RR^d} \int_0^{T}\int_{\RR^d} g(s,x,z) |V(z)|\,dzds $,
while by
reducing the domain of integration in the space variable $z$ to $|z-x|\leq 2|\alpha|t$ and using \eqref{ineq:J_K_fun}
we have
$r_*(V,T) \geq n_1 \|K(V,T)\|_{\infty}$.
That proves the lower bound
\eqref{ineq:r_lower}.
Now, let $y=-2\alpha$.
For the upper bound we consider two regions of integration,
\begin{align*}
A_1&=\{z\in\RR^d \colon |z-x|> t|y|\}\,,\\
A_2&=\{z\in\RR^d \colon |z-x|\leq t|y|\} \,.
\end{align*}
Note that if $z\in A_1$ and $s\in (0,t)$, then
\begin{align*}
|z-x-ty|&\leq |z-x-s y|+(t-s)|y| \\
&< |z-x-s y|+|z-x|-|s y|\leq 2|z-x-s y|\,.
\end{align*}
By the monotonicity of the exponential function we get
\begin{align*}
\int_0^t\int_{A_1} g(s,x+sy,z)|V(z)|\,dz ds
&\leq
2^d\int_0^t\int_{\RR^d} g(4s,x+ty,z)|V(z)|\,dz ds\\
&=2^{d-2}\int_0^{4t}\int_{\RR^d} g(u,x+ty,z)|V(z)|\,dz du\,.
\end{align*}
On the set $A_2$ we apply \eqref{ineq:J_K_fun}. This ends the proof of \eqref{ineq:r_upper}.
\end{proof}
We are now ready to justify Theorem~\ref{thm:new_equiv}.
\begin{proof}[Proof of Theorem~\ref{thm:new_equiv}]
We will actually prove that
$$
c_1 \|K(V,T)\|_{\infty} \leq r_*(V,T) \leq c_2 \|K(V,T)\|_{\infty}\,,
$$
for all $T>0$ with constants $0<c_1< c_2<\infty$
that depend only on $d$. The result will then follow from Proposition~\ref{prop:comp} and \eqref{ineq:2T-T}.
The lower bound holds by \eqref{ineq:r_lower}.
We focus on the upper bound
and due to \eqref{ineq:r_upper}
it suffices to show that
$$
\sup_{x\in\RR^d} \int_0^{4T}\int_{\RR^d} g(s,x,z) |V(z)|\,dzds
\leq c\, \|K(V,T)\|_{\infty}\,.
$$
Throughout the proof we
let $y=(4 t^{-1/2},0,\ldots,0)\in\RR^d$. Then
for $d\geq 3$,
since $-\left<x,y\right>\leq |x||y|$, we have
\begin{align*}
K(t,x,y) \geq e^{-4|x| t^{-1/2}} \frac{1}{|x|^{d-2}}
{\bf 1}_{|x|\leq \sqrt{16t}}
\geq e^{-16} \frac{1}{|x|^{d-2}}{\bf 1}_{|x|\leq \sqrt{16t}}\,.
\end{align*}
Therefore, by
\eqref{est:A3}
(cf. \cite[(4.3)]{MR3200161})
there is a constant $c>0$ that depends only on $d$ such that
\begin{align*}
\|K(V,T)\|_{\infty}
\geq e^{-16} \sup_{x\in\RR^d} \int_{|z-x|\leq \sqrt{16T}}
\frac{|V(z)|}{|z-x|^{d-2}}\,dz
\geq
c \sup_{x\in\RR^d} \int_0^{4T}\int_{\RR^d} g(s,x,z) |V(z)|\,dzds\,.
\end{align*}
For $d=2$ we
first note that
$\log(1+r/2)\geq (1/3)\log (r)$
if $r\geq 1$.
Therefore,
\begin{align*}
K(t,x,y)&\geq
e^{-16}
\log\left( 1+\frac1{2}\left(\frac{16 t}{|x|^2}\right)^{1/4}\right){\bf 1}_{|x|\leq \sqrt{16t}}
\geq (e^{-16}/3) \log\left(\frac{16 t}{|x|^2} \right){\bf 1}_{|x|\leq \sqrt{16t}}\,.
\end{align*}
Finally, by
\eqref{est:A2}
there is an absolute constant $c>0$ such that
\begin{align*}
\|K(V,T)\|_{\infty}
&\geq (e^{-16}/3)
\sup_{x\in\mathbb{R}^2} \int_{|z-x|\leq \sqrt{16T}}
\log\left(\frac{16T}{|z-x|^2}\right)|V(z)|\,dz
\geq c \sup_{x\in\mathbb{R}^2} \int_0^{4T}\int_{\mathbb{R}^2} g(s,x,z) |V(z)|\,dzds\,.
\end{align*}
For $d=1$ we have
\begin{align*}
K(t,x,y)
\geq \frac{e^{-16}}{\sqrt{17}} \sqrt{t} \,{\bf 1}_{|x|\leq \sqrt{16t}}\,,
\end{align*}
and by \eqref{est:A1}
there is an absolute constant $c>0$ such that
\begin{align*}
\|K(V,T)\|_{\infty}
&\geq \frac{e^{-16}}{\sqrt{17}} \sup_{x\in\mathbb{R}}
\, \sqrt{T}\!\!\! \int_{|z-x|<\sqrt{16T}} |V(z)|\,dz
\geq c \sup_{x\in\mathbb{R}} \int_0^{4T}\int_{\mathbb{R}} g(s,x,z) |V(z)|\,dzds\,.
\end{align*}
\end{proof}
\section{Proofs of Theorems \ref{thm:dgeq4} -- \ref{thm:d2d1}}\label{sec:proofs}
\subsection{Proof of Theorem~\ref{thm:dgeq4}}
In the proof we construct a function $V$ with the desired properties. The construction is based on another function defined in \cite[Proposition~1.6]{MR3914946}, and uses truncations and dilatations.
\begin{proof}
For $s>0$ we let $\tau_s f(x)=sf(\sqrt{s}x)$. Note that such a dilatation does not change the norm
$$
\| \Delta^{-1}(\tau_s f) \|_{\infty}=\| \Delta^{-1} f \|_{\infty}\,.
$$
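This invariance can be checked from the Newtonian potential representation $\Delta^{-1}f(x)=-c_d\int_{\RR^d}|x-w|^{2-d}f(w)\,dw$, valid here since $d\geq 4$ (with $c_d$ the usual normalizing constant):

```latex
% Substituting w=\sqrt{s}\,z in \Delta^{-1}(\tau_s f) and using
% s^{1-d/2}|x-w/\sqrt{s}|^{2-d}=|\sqrt{s}\,x-w|^{2-d}:
\begin{align*}
\Delta^{-1}(\tau_s f)(x)
=-c_d\,s^{1-d/2}\int_{\RR^d}\big|x-w/\sqrt{s}\big|^{2-d}f(w)\,dw
=-c_d\int_{\RR^d}\big|\sqrt{s}\,x-w\big|^{2-d}f(w)\,dw
=\big(\Delta^{-1}f\big)\big(\sqrt{s}\,x\big)\,,
\end{align*}
% so taking the supremum over x leaves the norm unchanged.
```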
Moreover, ${\rm supp} (\tau_s f) \subseteq B(0,r/\sqrt{s})$ if ${\rm supp} (f)\subseteq B(0,r)$, $r>0$, and for $t>0$,
$$
\|S(\tau_s f,t)\|_{\infty}=\|S(f,st)\|_{\infty}\,.
$$
Now, let $U\colon \RR^d\to \mathbb{R}$ be non-positive and such that
$$
\|U\|_\infty\leq 1\,,\quad \|\Delta^{-1}U\|_{\infty}=C<\infty\,,\quad \sup_{t>0}\|S(U,t)\|_{\infty}=\infty\,.
$$
Such $U$ exists by \cite[Proposition~1.6 and Theorem~1.4]{MR3914946}.
By the definition of the supremum norm and the monotone convergence theorem,
for $n\in\mathbb{N}$ there are $t_n, r_n>0$ such that
$
\|S(U{\bf 1}_{B_{r_n}},t_n)\|_{\infty}> (4m_2/m_1)\, 4^n
$.
For simplicity we define $U_n=U{\bf 1}_{B_{r_n}}$, so we have
$$
\|S(U_n,t_n)\|_{\infty}> (4m_2/m_1)\, 4^n \,.
$$
Let $s_n=\max\{r_n^2, n\,t_n\}$
and define
$$
V_n=\tau_{s_n} (U_n)\,.
$$
Then ${\rm supp} (V_n)\subseteq B(0,1)$, $V_n\in L^{\infty}(\RR^d)$, $\|\Delta^{-1} V_n\|_{\infty}\leq C$ and
by Corollary~\ref{cor:reduction}
\begin{align*}
\|S(V_n,1/n)\|_{\infty}
&=\| S(U_n,s_n/n)\|_{\infty}\\
&\geq \left(\frac{m_1}{4m_2}\right) \|S(U_n)\|_{s_n/n,\infty}\\
&\geq \left(\frac{m_1}{4m_2}\right) \|S(U_n)\|_{t_n,\infty}
\geq \left(\frac{m_1}{4m_2}\right) \|S(U_n,t_n)\|_{\infty}> 4^n\,.
\end{align*}
Finally, let
$$V:=\sum_{n=1}^{\infty} V_n/2^n\,.$$ Obviously, part (a) holds. Further,
again by Corollary~\ref{cor:reduction}, for $t>0$ we get
\begin{align*}
\|S(V,t)\|_{\infty}
&\geq \left(\frac{m_1}{4m_2}\right)
\lim_{n\to\infty}\|S(V,1/n)\|_{\infty}
\geq \left(\frac{m_1}{4m_2}\right)
\lim_{n\to\infty}
2^{-n} \|S(V_n,1/n)\|_{\infty}
=\infty\,.
\end{align*}
This proves part (d).
The statement (c) holds by
$$
\|\Delta^{-1} V\|_{\infty} \leq \sum_{n=1}^{\infty} \|\Delta^{-1}V_n\|_{\infty}/2^n\leq C<\infty\,.
$$
Next,
\begin{align*}
\sup_{x\in\RR^d}&\int_0^t \int_{\RR^d}
g(s,x,z)|V(z)|\,dzds\\
&\leq
\sum_{n=1}^N
\sup_{x\in\RR^d}\int_0^t \int_{\RR^d}
g(s,x,z)\frac{|V_n(z)|}{2^n}\,dzds
+\sum_{n=N+1}^{\infty} \sup_{x\in\RR^d}\int_0^{\infty} \int_{\RR^d} g(s,x,z)\frac{|V_n(z)|}{2^n}\,dzds\\
&\leq \sum_{n=1}^N t \|V_n\|_{\infty}
+\sum_{n=N+1}^{\infty} \|\Delta^{-1}V_n\|_{\infty}/2^n\\
&\leq
t \sum_{n=1}^N \|V_n\|_{\infty}
+\frac{C}{2^N} \,,
\end{align*}
which can be made arbitrarily small by the choice of $N$ and $t$,
and proves part (b).
\end{proof}
\subsection{Proof of Theorem~\ref{thm:d3}}
Similarly to the proof of Theorem~\ref{thm:dgeq4} we construct a function $V$ with the desired features.
We will choose a decreasing function $f\geq 0$ satisfying $\int_0^{1/25}f(r)dr=\infty$.
The function $V$ will be given by a series based on certain functions $V_n$. Each $V_n$ will be supported on a union of properly chosen cylinders $C_{k,r}$ and will have values according to the function $f$.
In particular, the choice will be such that on the support of $V_n$, the function
$K(t,x,y)$ with $|y|=25n$
will be comparable to
$
1/|x|
$
and
such that for a sequence $n_i\in\mathbb{N}$
diverging to infinity
we will have
$$
\|K(V_{n_i},1)\|_{\infty} \geq c \int_{1/(25n_i)}^{1/25}f(r)dr\geq 4^i \,.
$$
In the first lemma we investigate a function $U_r$ that is supported on a cylinder $C_r\subset \mathbb{R}^3$ and takes values related to the size of the cylinder.
To simplify the notation, for
$z=(z_1, z_2,z_3)\in\mathbb{R}^3$ we write $z=(z_1,\mathbf z_2)$,
where $\mathbf z_2=(z_2,z_3)\in \mathbb{R}^2$.
\begin{lem}\label{lem:kato}
For $r>0$ we define
\begin{align*}
C_r = \left[0,\frac14 \right] \times D_r,
\end{align*}
where $D_r$ is a 2-dimensional ball of radius $r$ centred at $0$.
For $r\in(0,e^{-1})$, $z\in\mathbb{R}^3$ put
$$
\varrho(r)=\frac{1}{r^2 \,|\ln r| \, \ln | \ln r|} \qquad \mbox{and}\qquad U_r(z) = \varrho(r) {\bf 1}_{C_r}(z)\,.
$$
Then
\begin{align*}
\lim_{\varepsilon \to 0^+} \sup_{
\substack{x\in\mathbb{R}^3\\ r\in (0,1/5)}} \quad\int_{|z-x| < \varepsilon} \frac{1}{|z-x|} | U_r(z)| dz
=0\,.
\end{align*}
\end{lem}
\begin{proof}
Note that $\varrho(r)$ is decreasing on $(0,1/5)$.
On the other hand, $r^2 \varrho(r)$ and $r^2 |\ln r| \varrho(r)$ are increasing on $(0,1/5)$.
Let $0<\varepsilon<1/5$ and
\begin{align*}
{\rm I}_r(\varepsilon):=\sup_{x\in\mathbb{R}^3}\int_{|z-x| < \varepsilon} \frac{1}{|z-x|} | U_r(z)| dz
=\varrho(r) \sup_{x\in\mathbb{R}^3} \int_{|z|< \varepsilon} \frac{1}{|z|}{\bf 1}_{C_r}(z+x) dz\,.
\end{align*}
If $\varepsilon \le r$, then
$$
{\rm I}_r(\varepsilon)
\le \varrho(r)
\int_{|z|< \varepsilon} \frac{1}{|z|}\,dz
\le 2\pi \varepsilon^2 \varrho(\varepsilon)\,.
$$
If $r \le \varepsilon$,
we use the symmetric rearrangement inequality \cite[Chapter~3]{MR1817225} and the fact that $\varepsilon<1/5$ to get
\begin{align*}
\int_{|z|< \varepsilon} \frac{1}{|z|}{\bf 1}_{C_r} (z+x) dz
&=\int_{-x_1}^{1/4-x_1} dz_1
\int_{\mathbb{R}^2} \frac{{\bf 1}_{|z|<\varepsilon}}{|z|} \, {\bf 1}_{D_r}(\mathbf z_2 + \mathbf x_2) d \mathbf z_2
\leq \int_{-x_1}^{1/4-x_1} dz_1
\int_{\mathbb{R}^2} \frac{{\bf 1}_{|z|<\varepsilon}}{|z|}\, {\bf 1}_{D_r}(\mathbf z_2) d \mathbf z_2\\
&\leq \int_{-1/4}^{1/4} dz_1\int_{\mathbb{R}^2} \frac{{\bf 1}_{|z|<\varepsilon}}{|z|}\, {\bf 1}_{D_r}(\mathbf z_2) d \mathbf z_2
=\int_{|z|<\varepsilon} \frac{1}{|z|}\, {\bf 1}_{C_r\cup (-C_r)}(z)dz\,.
\end{align*}
Now note that
$$B(0,\varepsilon)\cap (C_r\cup (-C_r))\subseteq B(0,\sqrt{2}r)\cup ( [r,\varepsilon]\times D_r)
\cup ( [-\varepsilon,-r]\times D_r).$$
Then
\begin{align*}
{\rm I}_r(\varepsilon)
\leq
\varrho(r) \left(\int_{|z|\leq \sqrt{2}r} \frac1{|z|}\,dz + 2 \int_r^\varepsilon \frac{|D_r|}{z_1}\, dz_1 \right)
&\le \varrho(r) \Big( 4\pi r^2 + 2\pi r^2 |\ln r|\Big)\\
&\le \varrho(\varepsilon) \Big( 4\pi \varepsilon^2 + 2\pi \varepsilon^2 |\ln \varepsilon|\Big).
\end{align*}
Thus
\begin{align*}
\lim_{\varepsilon \to 0^+} \sup_{
\substack{x\in\mathbb{R}^3\\ r\in (0,1/5)}} \quad\int_{|z-x| < \varepsilon} \frac{1}{|z-x|} | U_r(z)| dz=
\lim_{\varepsilon \to 0^+} \sup_{r\in (0,1/5)} {\rm I}_r(\varepsilon)
=0\,.
\end{align*}
\end{proof}
\begin{cor}\label{cor:kato}
For $k\in\mathbb{N}$ and $r>0$ we
define
\begin{align*}
C_{k,r} = \left[k,k+\frac14 \right] \times D_r,
\end{align*}
where $D_r$ is a 2-dimensional ball of radius $r$ centred at $0$.
For $r\in(0,e^{-1})$, $n\in\mathbb{N}$ and $z\in\mathbb{R}^3$ put
$$
f(r)=\frac{1}{r \,|\ln r| \, \ln | \ln r|}
\qquad \mbox{and}\qquad
V_{n}(z)= f\left(\frac{z_1}{25n}\right) \sum_{k=1}^{n} {\bf 1}_{C_{k,\sqrt{k/(25n)}}} (z)\,.
$$
Then
\begin{align*}
\lim_{\varepsilon \to 0^+}
\sup_{x\in\mathbb{R}^3,\, n\in \mathbb{N}}
\quad
\int_{|z-x| < \varepsilon} \frac{1}{|z-x|} | V_{n}(z)| dz=0\,.
\end{align*}
\end{cor}
\begin{proof}
We use
$\varrho(r)$ and $U_r$
as defined in
Lemma~\ref{lem:kato}.
Note that $f(r)$ is decreasing on
$(0,e^{-3})$ and $f(r^2)\leq \varrho(r)$ on $(0,e^{-1})$, since $|\ln r^2|=2|\ln r|$ and $\ln|\ln r^2|\geq \ln|\ln r|$ there.
We record that every two cylinders $C_{k,\sqrt{k/(25n)}}$
that correspond to different values of $k\in\mathbb{N}$
are disjoint. Therefore
if $z\in C_{k,\sqrt{k/(25n)}}$
we have
$$
V_{n}(z)
=f\left(\frac{z_1}{25n}\right)
\leq
f\left(\frac{k}{25n}\right)
\leq \varrho
\left(\sqrt{\frac{k}{25n}}\right)
= U_{\sqrt{k/(25n)}} (z-(k,\mathbf 0)).
$$
What is more, the distance between
every two cylinders $C_{k,\sqrt{k/(25n)}}$
that correspond to different $k$
is at least $3/4$.
Thus,
for any $x\in\mathbb{R}^3$ and $0<\varepsilon<3/8$,
the intersection of $B(x,\varepsilon)$
and ${\rm supp}(V_n)$ is a subset of at most one
cylinder $C_{k,\sqrt{k/(25n)}}$,
and so by Lemma~\ref{lem:kato},
\begin{align*}
\lim_{\varepsilon \to 0^+}
\sup_{x\in\mathbb{R}^3,\, n\in \mathbb{N}}
\,
\int_{|z-x| < \varepsilon} \frac{1}{|z-x|} | V_{n}(z)| dz
&\leq \lim_{\varepsilon \to 0^+}
\sup_{\substack{x\in\mathbb{R}^3,\,n\in \mathbb{N} \\k =1,\ldots,n}}
\,
\int_{|z-x| < \varepsilon} \frac{1}{|z-x|}
U_{\sqrt{k/(25n)}} (z-(k,\mathbf 0))dz\\
& \leq \lim_{\varepsilon \to 0^+}
\sup_{\substack{x\in\mathbb{R}^3\\ r\in (0,1/5)}} \quad\int_{|z-x| < \varepsilon} \frac{1}{|z-x|} | U_r(z)| dz
=0\,.
\end{align*}
\end{proof}
\begin{lem}\label{lem:nosharp}
Let $V_n$ be defined as in Corollary~\ref{cor:kato}.
There are $n_i\in\mathbb{N}$, $i\in\mathbb{N}$, such that
for every $i\in\mathbb{N}$,
$$
\|K(V_{n_i},1)\|_{\infty}\geq 4^i \,.
$$
\end{lem}
\begin{proof}
Let $\theta>0$. Then
\begin{align*}
\theta(|z| - z_1) <1 \qquad \iff \qquad z_1 > \frac{\theta}{2} |\mathbf z_2|^2 - \frac{1}{2\theta}.
\end{align*}
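To see this, write $\theta(|z|-z_1)<1$ as $|z|<z_1+1/\theta$; the right-hand side of the equivalence forces $z_1>-1/(2\theta)$, so $z_1+1/\theta>0$ and squaring is legitimate:

```latex
% Squaring |z|<z_1+1/\theta with |z|^2=z_1^2+|\mathbf z_2|^2:
\begin{align*}
z_1^2+|\mathbf z_2|^2< z_1^2+\frac{2z_1}{\theta}+\frac{1}{\theta^2}
\quad\iff\quad
|\mathbf z_2|^2<\frac{2z_1}{\theta}+\frac{1}{\theta^2}
\quad\iff\quad
z_1>\frac{\theta}{2}|\mathbf z_2|^2-\frac{1}{2\theta}\,.
\end{align*}
```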
For $n\in\mathbb{N}$ we put
\begin{align*}
E_n: = \left\{z \in \mathbb{R}^3 \colon z_1 > \frac{25 n}{2} |\mathbf z_2|^2 - \frac{1}{50n}\right\}.
\end{align*}
Thus,
for
$
z\in E_n
$
we have
$
25n(|z| - z_1)<1
$.
Then, by taking $x=0$ and $y = (25n,\mathbf 0)$ in the first inequality below,
and using ${\rm supp} (V_n)\subset E_{n} \cap B(0,25n)$ in the second one,
\begin{align*}
\|K(V_n,1)\|_{\infty}
&=\sup_{x,y \in \mathbb{R}^3} \int_{|z-x| \le |y|}
\frac{e^{-\frac{|z-x||y|-\left<z-x,y\right>}{2}}}{|z-x|} | V_n(z)| dz\\
&\geq \int_{|z|\le 25n} \frac{e^{-\frac12 \cdot 25n(|z| - z_1)}}{|z|} |V_n(z)| dz
\geq e^{-\frac12} \int_{\mathbb{R}^3}\frac{1}{|z|}|V_n(z)|dz\,.
\end{align*}
Further, by the definition of $V_n$
and $C_{k,\sqrt{k/(25n)}}$,
\begin{align*}
\|K(V_n,1)\|_{\infty}
&\geq e^{-1/2} \sum_{k=1}^{n}\, \int_{\mathbb{R}^3} \frac1{|z|} f\left(\frac{z_1}{25 n}\right) {\bf 1}_{C_{k,\sqrt{k/(25n)}}}(z)dz\\
&\geq e^{-1/2} \sum_{k=1}^{n} \int_k^{k+1/4} \frac1{k+1} f\left(\frac{z_1}{25n}\right)|D_{\sqrt{k/(25n)}}| dz_1\\
&\geq \frac{\pi e^{-1/2}}{2} \sum_{k=1}^{n} \int_k^{k+1/4} f\left(\frac{z_1}{25n}\right)
\frac{dz_1}{25n}\\
&\geq \frac{\pi e^{-1/2}}{8}\int_1^{n}
f\left(\frac{z_1}{25n}\right)
\frac{dz_1}{25n}= \frac{\pi e^{-1/2}}{8} \int_{1/(25n)}^{1/25} f(r)dr.
\end{align*}
This ends the proof since $\int_0^{1/25}f(r)dr=\infty$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:d3}]
For $n\in\mathbb{N}$ let $V_n$ be as in Corollary~\ref{cor:kato}
and $(n_i)_{i\in\mathbb{N}}$ be a sequence of natural numbers taken from Lemma~\ref{lem:nosharp}.
We take
$$
V:=-\sum_{i=1}^{\infty} V_{n_i}/2^i\,.
$$
By Lemma~\ref{lem:nosharp} we have
$$
\|K(V,1)\|_{\infty}\geq \sup_{i\in\mathbb{N}} 2^{-i}\,\|K(V_{n_i},1)\|_{\infty}=\infty\,.
$$
Therefore, by Theorem~\ref{thm:new_equiv},
\eqref{ineq:2T-T}
and Corollary~\ref{cor:reduction}
part b) follows.
Next, we have
\begin{align*}
\sup_{x\in\mathbb{R}^3}
\quad
\int_{|z-x| < \varepsilon} \frac{1}{|z-x|} | V(z)| dz
&\leq
\sum_{i=1}^{\infty} 2^{-i}
\sup_{x\in\mathbb{R}^3}
\quad
\int_{|z-x| < \varepsilon} \frac{1}{|z-x|} | V_{n_i}(z)| dz\\
&\leq \sup_{x\in\mathbb{R}^3,\,i\in\mathbb{N}}
\quad
\int_{|z-x| < \varepsilon} \frac{1}{|z-x|} | V_{n_i}(z)| dz\,,
\end{align*}
which
can be made arbitrarily small by the choice of $\varepsilon$ due to
Corollary~\ref{cor:kato}.
This proves part a).
\end{proof}
\subsection{Proof of Theorem~\ref{thm:d2d1}}
Before we pass to the proof of Theorem~\ref{thm:d2d1} we show the following auxiliary result in $d=2$. For
$z\in\mathbb{R}^2$ we write
as usual
$z=(z_1,z_2)$,
where $z_1,z_2\in \mathbb{R}$.
\begin{lem}\label{lem:D}
Let $d=2$. For $r \geq 2$ we let
$D_r=\{z\in\mathbb{R}^2\colon z_1 \geq 0 \mbox{ and } 2 \leq |z|\leq r\}$.
There exists a constant $c>0$ such that for all Borel measurable $U\colon \mathbb{R}^2\to [0,\infty]$ and $r\geq 2$,
$$
\int_{D_r} K(1,z,(r,0))\, U(z) dz
\leq c \sup_{w\in\mathbb{R}^2} \int_{|z|\leq 2} U(z+w)dz\,.
$$
\end{lem}
\begin{proof}
Note that for $r>0$ and $n\in\mathbb{N} \cup \{0\}$,
$$
r(|z|-z_1)\leq n \quad \iff \quad |z_2|\leq \frac{\sqrt{2 n z_1 r+n^2}}{r}\,.
$$
In the rest of the proof we consider $r\geq 2$ and $0\leq z_1\leq r$.
For $n\in\mathbb{N} \cup \{0\}$ we let
$$
f_n(z_1):=\frac{\sqrt{2nz_1r+n^2}}{r}
\qquad
\mbox{and}
\qquad
F_n:=\{ z\in\mathbb{R}^2\colon f_n(z_1)\leq |z_2|\leq f_{n+1}(z_1),\,0\leq z_1\leq r\}\,.
$$
Obviously, $f_n$ and $F_n$ depend on $r$, which we do not indicate explicitly to lighten the notation.
In particular,
$
n \leq r(|z|-z_1) \leq n+1 \iff z \in F_n
$.
A direct analysis of the derivative shows that
for each $a\geq 0$ and $b>0$
the function
$$
h(t)=\sqrt{2(a+b)(t+1)+(t+1)^2}-\sqrt{2at+t^2}\,,\qquad t\geq 0\,,
$$
is decreasing on $[0,a/b]$ and increasing on $[a/b,\infty)$. This guarantees
for each $\delta\in (0,1)$ that
\begin{align*}
f_{n+1}(z_1+\delta)-f_n(z_1)
&\leq
\max \left\{ f_1(z_1+\delta), \lim_{n\to\infty} (f_{n+1}(z_1+\delta)-f_n(z_1))\right\}\\
&= \max\left\{\frac{\sqrt{2z_1 r+1}}{r},\delta+\frac1r\right\}
\leq \max\left\{\sqrt{2+\frac1{r^2}},\delta+\frac1r\right\}\leq \frac32\,.
\end{align*}
We fix $\delta\in (0,1)$ (in fact any $\delta \leq \sqrt{7}/2$ would do) so that
for all $n\in\mathbb{N} \cup \{0\}$,
\begin{align}\label{eq:diag}
\sqrt{(f_{n+1}(z_1+\delta)-f_n(z_1))^2+\delta^2} \leq 2\,.
\end{align}
For $n,k\in\mathbb{N} \cup \{0\}$ we define rectangles
$$
P_{n,k}:=\Big[ k\delta,\,(k+1)\delta\Big]\times \Big[f_n(k\delta),\,f_{n+1}((k+1)\delta)\Big]\subset \mathbb{R}^2\,.
$$
\begin{figure}[h]
\includegraphics{rys.pdf}
\centering
\caption{Graphs of functions $f_n$ and rectangles $P_{i,k}$ for $r=5$, $i=2$ and $\delta =2/5$. Here $F_2 \subset \bigcup\limits_{k=0}^{11} P_{2,k}$.}
\end{figure}
\noindent
The bottom left vertex of $P_{n,k}$ equals $a_{n,k}=(k\delta, f_n(k\delta))$
and satisfies $|a_{n,k}|=k\delta+\frac{n}{r}$. Furthermore,
if~$k\leq \lfloor r/\delta \rfloor$, then $k\delta\leq r$ and by \eqref{eq:diag} the diagonal of $P_{n,k}$ does not exceed 2. Hence
$P_{n,k}\subseteq B(a_{n,k},2)$, where the latter is
a 2-dimensional ball of radius $2$ centred at $a_{n,k}$.
Next, observe that
$$
D_r\subseteq \bigcup_{n=0}^{\lfloor r^2 \rfloor} F_n
\qquad
\mbox{and}
\qquad
F_n\subseteq \bigcup_{k=0}^{\lfloor \frac{r}{\delta} \rfloor} P_{n,k}\,.
$$
Finally, on $D_r \cap F_n \cap P_{n,k}$ we have
\begin{align*}
K(1,z,(r,0)) &= e^{-\frac{1}{2} r(|z|-z_1)}\log\left(1+\frac{1}{\sqrt{r|z|}}\right){\bf 1}_{|z|\leq r}\\
&\leq
\frac{e^{-\frac{1}{2} r(|z|-z_1)}}{\sqrt{r|z|}}
\leq
\frac{e^{-n/2}}{\sqrt{r\max\{|a_{n,k}|,2\}}}
\leq
\frac{e^{-n/2}}{\sqrt{r(k\delta/2+1)}} {\bf 1}_{B(a_{n,k},2)}(z)\,.
\end{align*}
This implies
\begin{align*}
\int_{D_r \cap F_n \cap P_{n,k}} K(1,z,(r,0))\, U(z) dz
&\leq
\frac{e^{-n/2}}{\sqrt{r(k\delta/2+1)}}
\int_{|z|\leq 2} U(z+a_{n,k})dz \\
&\leq
\frac{e^{-n/2}}{\sqrt{r(k\delta/2+1)}} \sup_{w \in \mathbb{R}^2}
\int_{|z|\leq 2} U(z+w)dz\,.
\end{align*}
It remains to notice that
$\frac{1}{\sqrt{r}}\sum\limits_{k=0}^{\lfloor \frac{r}{\delta} \rfloor}(k\delta/2+1)^{-1/2}\leq 1+ 4/\delta$
and $\sum\limits_{n=0}^{\infty}e^{-n/2}<\infty$.
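The two elementary bounds used in this last step can be verified numerically (a sanity check only; the values of $r$ and the choice $\delta=1/2$ are arbitrary):

```python
import math

delta = 0.5
ratios = []
for r in [2.0, 10.0, 100.0, 10000.0]:
    # (1/sqrt(r)) * sum_{k=0}^{floor(r/delta)} (k delta/2 + 1)^{-1/2}
    s = sum((k * delta / 2 + 1) ** (-0.5) for k in range(int(r / delta) + 1))
    ratios.append(s / math.sqrt(r))
print(max(ratios))  # stays below 1 + 4/delta = 9
geom = sum(math.exp(-n / 2) for n in range(200))  # ~ 1/(1 - e^{-1/2}) ~ 2.54
```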
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:d2d1}]
The lower bound in \eqref{ineq:d2d1}
follows immediately from
\eqref{ineq:2T-T},
\eqref{ineq:S_N} and
\eqref{ineq:N_r}.
We focus on the upper bound.
Due to Theorem~\ref{thm:new_equiv} it suffices to estimate $\|K(V,t)\|_{\infty}$, $t>0$.
First we consider $d=2$.
For $|y|\leq 2 t^{-1/2}$ we have
\begin{align}\label{ineq:D2}
K(t,x,y)\leq \log\left(1+\frac{
\sqrt{t}}{|x|}\right){\bf 1}_{|x|\leq \sqrt{4t}}
\leq \left( 1+ \log \frac{4t}{|x|^2}\right){\bf 1}_{|x|\leq \sqrt{4t}}\,.
\end{align}
Therefore,
by
\eqref{est:A2}
there is an absolute constant $c>0$ such that
$$
\sup_{|y|\leq 2 t^{-1/2}} \sup_{x\in\mathbb{R}^2}K(V,t,x,y)\leq c \sup_{x\in\mathbb{R}^2} \int_0^t\int_{\mathbb{R}^2} g(s,x,z) |V(z)|dzds\,.
$$
We focus on $|y|\geq 2 t^{-1/2}$.
Let
\begin{align*}
A_1&=\{z\in\mathbb{R}^2 \colon \left<z-x,y\right> \leq 0\}\,,\\
A_2&=\{z\in\mathbb{R}^2 \colon \left<z-x,y\right> \geq 0 \mbox{ and } |z-x|\leq \sqrt{4 t}\}\,,\\
A_3&=\{z\in\mathbb{R}^2 \colon \left<z-x,y\right> \geq 0 \mbox{ and } \sqrt{4t} \leq |z-x|\leq t|y|\} \,.
\end{align*}
On~the set $A_1$ we have $|z-x-sy| \ge |z-x|$, hence
by \eqref{ineq:J_K_fun} we get
$$
n_1 K(t,z-x,y)
\leq \int_0^t p_{(-y/2)}(s,x,z)ds
= \int_0^t g(s,x+sy,z)ds
\leq \int_0^t g(s,x,z)ds\,.
$$
Thus
$$
\sup_{|y|\geq 2 t^{-1/2}} \sup_{x\in\mathbb{R}^2}\int_{A_1} K(t,z-x,y)|V(z)|dz\leq (1/n_1)\sup_{x\in\mathbb{R}^2}
\int_0^t\int_{\mathbb{R}^2} g(s,x,z) |V(z)|dzds\,.
$$
On the set $A_2$ we argue as in \eqref{ineq:D2}; therefore
$$
\sup_{|y|\geq 2 t^{-1/2}} \sup_{x\in\mathbb{R}^2}\int_{A_2} K(t,z-x,y)|V(z)|dz
\leq
c \sup_{x\in\mathbb{R}^2} \int_0^t\int_{\mathbb{R}^2} g(s,x,z) |V(z)|dzds\,.
$$
It remains now to consider
$$
\sup_{|y|\geq 2 t^{-1/2}} \sup_{x\in\mathbb{R}^2}\int_{A_3} K(t,z-x,y)|V(z)|dz\,.
$$
Given $|y|\geq 2t^{-1/2}$ we let
$$\mathcal{O}_y
=\left[\begin{array}{rr} y_1 |y|^{-1} & y_2 |y|^{-1} \\ -y_2|y|^{-1} & y_1 |y|^{-1} \end{array} \right].
$$
Note that $\mathcal{O}_y
$ is a rotation matrix in $\mathbb{R}^2$ such that $\mathcal{O}_y \,y=(|y|,0)$.
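As a small check (illustrative only; the sample vector $y=(3,4)$ is arbitrary), one can verify in Python that $\mathcal{O}_y$ is a proper rotation sending $y$ to $(|y|,0)$:

```python
import math

y = (3.0, 4.0)
norm = math.hypot(*y)  # |y| = 5
# O_y = [[y1, y2], [-y2, y1]] / |y|
O = [[y[0] / norm, y[1] / norm], [-y[1] / norm, y[0] / norm]]
Oy = (O[0][0] * y[0] + O[0][1] * y[1], O[1][0] * y[0] + O[1][1] * y[1])
det = O[0][0] * O[1][1] - O[0][1] * O[1][0]
# Oy equals (|y|, 0) and det equals 1 up to rounding,
# so O_y is a proper rotation and preserves |z| in the substitution below
```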
Then, substituting
$z$ by
$t^{1/2}\mathcal{O}_y^{-1} z$, we obtain
\begin{align}\label{eq:D_3}
\int_{A_3} K(t,z-x,y)|V(z)|dz
&= \int_{D_r} K(1,z,(r,0))\, U(z) dz\,,
\end{align}
where
$$
r=t^{1/2}|y|\,,
\quad
D_r=\{z\in\mathbb{R}^2\colon z_1 \geq 0 \mbox{ and } 2 \leq |z|\leq r\}\,,
\quad
U(z)= t|V(t^{1/2}\mathcal{O}_y^{-1} z+x)|\,.
$$
Combining
\eqref{eq:D_3}
and
Lemma~\ref{lem:D} we get for $|y|\geq 2 t^{-1/2}$,
\begin{align*}
\int_{A_3} K(t,z-x,y)|V(z)|dz
&\leq
c \sup_{w\in\mathbb{R}^2}
\int_{|z|\leq 2} t|V(t^{1/2}\mathcal{O}_y^{-1}z+t^{1/2}\mathcal{O}_y^{-1}w+ x)|dz\\
&\leq
c \sup_{\widetilde{w}\in\mathbb{R}^2}
\int_{|z|\leq \sqrt{4t}} |V(z+\widetilde{w})|dz\,.
\end{align*}
Thus by
\eqref{est:A2},
$$
\sup_{|y|\geq 2 t^{-1/2}} \sup_{x\in\mathbb{R}^2}\int_{A_3} K(t,z-x,y)|V(z)|dz
\leq
c \sup_{x\in\mathbb{R}^2} \int_0^t\int_{\mathbb{R}^2} g(s,x,z) |V(z)|dzds\,.
$$
This finally gives the desired estimate and ends the proof for $d=2$.
Now, let $d=1$.
Using \cite[Lemma~4.2]{MR3200161} with
$k(x)=\sqrt{t}\left(1+t|y|^2\right)^{-1/2}
{\bf 1}_{|x|\leq t|y|}$ and $K(x)=\sqrt{t}$
we get for $r>0$,
\begin{align*}
\|K(V,t)\|_{\infty}
&\leq \sup_{x,y\in\mathbb{R}} \int_{\mathbb{R}}
\sqrt{t}\left(1+t|y|^2\right)^{-1/2}
{\bf 1}_{|z-x|\leq t|y|} |V(z)|dz\\
&\leq
\sup_{x,y\in\mathbb{R}}
\left(1+\frac{\sqrt{4t}}{r}\left(\frac{t|y|^2}{1+t|y|^2} \right)^{1/2}\right)
\int_{|z-x|<r} \sqrt{t} |V(z)|dz
\leq \left(1+\frac{\sqrt{4t}}{r}\right) \sup_{x\in\mathbb{R}} \sqrt{t} \!\!\!\int_{|z-x|<r} |V(z)|dz\,.
\end{align*}
Finally, we set $r=\sqrt{4t}$ and use
\eqref{est:A1}, which ends the proof.
\end{proof}
\section{Corollaries and proof of Theorem~\ref{thm:t1}}
We now give corollaries of Theorems \ref{thm:dgeq4}--\ref{thm:d2d1}, treating separately the cases $d \ge 4$, $d=3$, $d=2$ and $d=1$.
We begin with $d\geq 4$ and a consequence of Theorem~\ref{thm:dgeq4}.
\begin{cor}\label{cor:d4}
Let $d\geq 4$. There is a compactly supported $V\leq 0$ such that
\begin{enumerate}[label*=(\roman*)]
\item
\eqref{est:gaus_b} holds with $\varepsilon_1< 1<\varepsilon_2$ arbitrarily close to $1$,
\item \eqref{est:gaus} holds,
\item \eqref{est:sharp_time} does not hold.
\end{enumerate}
By considering $-V$ we can obtain a similar non-negative example.
\end{cor}
\begin{proof}
We take $V\leq 0$ from Theorem~\ref{thm:dgeq4}. We justify all statements by using
parts (a), (b), (c) and (d) of the theorem along with the references indicated below. Namely,
\begin{enumerate}[label*=$\vcenter{\hbox{\tiny$\bullet$}}$]
\item $V$ is compactly supported by (a),
\item \textit{(i)} follows from (b) and \cite[Theorem~1A]{MR1994762},
\item \textit{(ii)} follows from (c) and \cite[p. 556 and Corollary~A]{MR1772429},
\item \textit{(iii)} follows from (d), Corollary~\ref{cor:reduction} and Lemma~\ref{lem:comb}.
\end{enumerate}
For a non-negative example we may need to multiply $-V$ by a small constant to obtain (c$'$) $\|\Delta^{-1} V\|_{\infty}< \varepsilon$ (small) and use for instance \cite[Theorem~1.4]{MR3200161} to get {\it (ii)}.
\end{proof}
A similar argumentation based on Theorem~\ref{thm:d3}, \cite[Theorem~1A and~1B]{MR1994762}, Corollary~\ref{cor:reduction} and Lemma~\ref{lem:comb} gives consequences for $d=3$.
As pointed out after Theorem~\ref{thm:d3} we cannot expect an example of $V\leq 0$ that satisfies \eqref{est:gaus}, but not \eqref{est:sharp_time}.
\begin{cor}\label{cor:d3}
Let $d=3$. There is $V\leq 0$
(of unbounded support)
such that
\begin{enumerate}[label*=(\roman*)]
\item
\eqref{est:gaus_b} holds
with
$\varepsilon_1< 1<\varepsilon_2$
arbitrarily close to $1$,
\item
\eqref{est:sharp_time} fails to hold.
\end{enumerate}
\end{cor}
Here is what results from
Theorem~\ref{thm:d2d1} for $d=2$.
\begin{cor}\label{cor:d2_1}
Let $d=2$. We have
\begin{enumerate}
\item[1)] $V\in \mathcal{K}_2$ if and only if $\lim_{T\to 0^+}\|S(V)\|_{T,\infty}=0$.
\item[2)] $V\in \widehat{\mathcal{K}}_2$ if and only if $\|S(V)\|_{T,\infty}<\infty$ for some (every) $T>0$.
\item[3)] If $V\leq 0$, then \eqref{est:sharp_time} holds if and only if
$V\in \widehat{\mathcal{K}}_2$.
\end{enumerate}
\end{cor}
\begin{proof}
The first two statements follow from
Theorem~\ref{thm:d2d1} and the definitions of
$\mathcal{K}_2$ and $\widehat{\mathcal{K}}_2$.
The last one follows from Lemma~\ref{lem:comb} and {\it 2)}.
\end{proof}
Finally we focus on $d=1$ in view of Theorem~\ref{thm:d2d1}.
\begin{cor}\label{cor:d1_1}
Let $d=1$.
The following conditions are equivalent:
\begin{enumerate}
\item[a)] $V\in \mathcal{K}_1$,
\item[b)] $V\in \widehat{\mathcal{K}}_1$,
\item[c)] $\sup_{x\in\RR} \int_{|z-x|\leq 1} |V(z)|\,dz<\infty$,
\item[d)] $\lim_{T\to 0^+}\|S(V)\|_{T,\infty}=0$,
\item[e)] $\|S(V)\|_{T,\infty}<\infty$ for some (every) $T>0$.
\end{enumerate}
\end{cor}
\begin{proof}
The equivalence of {\it a)}, {\it b)} and {\it c)} is well known and follows for instance from
\eqref{est:A1}.
Part {\it a)} is equivalent to {\it d)}, and part {\it b)} to {\it e)}
by
Theorem~\ref{thm:d2d1}.
\end{proof}
\begin{cor}\label{cor:d1_2}
Let $d=1$. If $V$ is of fixed sign, then \eqref{est:sharp_time} holds if and only if
$V\in \mathcal{K}_1$.
\end{cor}
\begin{proof}
The equivalence follows from Lemma~\ref{lem:comb} and Corollary~\ref{cor:d1_1}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:t1}]
We justify statements in Table~\ref{t:1}.
We refer to the `local in time' and `global in time' columns as the `first' and the `second' column, respectively.
If $d\geq 4$, the lack of the equivalence in both columns is a consequence of Corollary~\ref{cor:d4} (using also that \eqref{est:sharp_uni} implies \eqref{est:sharp_time}).
If $d=3$, the negative answer in the `first' column results from Corollary~\ref{cor:d3}.
Before we move forward, we note that
for $V\leq 0$, by the Duhamel formula,
$$
\int_0^t\int_{\RR^d} G(s,x,z)|V(z)|g(t-s,z,y)dzds\leq g(t,x,y)\,.
$$
Thus, by integrating in the $x$ variable over $\RR^d$, we see that \eqref{est:gaus} implies
\begin{align}\label{nes1}
\sup_{t>0,\,y\in\RR^d} \int_0^t \int_{\RR^d} |V(z)|\,g(t-s,z,y)dzds= \sup_{x\in\RR^d} \int_0^{\infty} \int_{\RR^d}g(s,x,z) |V(z)|dzds <\infty\,,
\end{align}
while \eqref{est:gaus_b} necessitates
\begin{align}\label{nes2}
\sup_{0<t\leq T,\,y\in\RR^d} \int_0^t \int_{\RR^d} |V(z)|\,g(t-s,z,y)dzds= \sup_{x\in\RR^d} \int_0^T \int_{\RR^d}g(s,x,z) |V(z)|dzds <\infty\,.
\end{align}
Therefore, the positive answer in dimension $d=3$ in the `second' column follows from \eqref{nes1} and
\cite[Corollary~1.5 and (8)]{MR3914946} (or see \cite[Page 6]{MR3914946}).
The remaining two positive answers in the `global in time' column (dimensions $d=2$, $d=1$) also follow from \eqref{nes1}, this time complemented with
Theorem~\ref{thm:d2d1} and \cite[Lemma~1.1]{MR3914946}.
The two positive answers in the `local in time' column (dimensions $d=2$, $d=1$) follow from
\eqref{nes2}, Theorem~\ref{thm:d2d1} and the first statement of Lemma~\ref{lem:comb} (see also \cite[Lemma~1.1]{MR3914946}).
\end{proof}
\bibliographystyle{abbrv}
\section{Introduction}
The asymmetric simple exclusion process (ASEP) is a continuous time Markov process of interacting particles on the integer lattice $\mathbb{Z}$ subject to two rules: (1) A particle at $x$ waits an exponential time with parameter one (independently of all other particles) and then it chooses $y$ with probability $p(x,y)$; (2) If $y$ is vacant at that time it moves to $y$, while if $y$ is occupied it remains at $x$ and restarts its clock. The adjective ``simple'' refers to the fact that the allowed jumps are one step to the right, $p(x,x+1)=p$, or one step to the
left $p(x,x-1)=1-p=q$. The asymmetric condition is $p\neq q$ so there is a net drift of particles.
The special cases $p=1$ (particles hop only
to the right) or $q=1$ (particles hop only to the left) are called the TASEP (totally asymmetric simple exclusion process).
The dynamics are uniquely determined once we specify the initial
state, which may be either deterministic
or random. A rigorous construction of this infinite particle process can be found in Liggett \cite{liggett1}.
Since its introduction by Spitzer \cite{spitzer}, the ASEP has remained a popular model among probabilists
and physicists because it is one of the simplest nontrivial processes modeling nonequilibrium phenomena. (For recent reviews see \cite{golinelli, liggett2, seppalainen, spohn1}.)
If initially the particles are located at $\mathbb{Z}^{+}=\{1,2,\ldots\}$, called the \textit{step initial condition},
and if $p<q$, then there will be on average a net flow of particles, or \textit{current}, to the left. More precisely,
we introduce the \textit{total current} $\mathcal{I}$ at position $x\le0$ at time $t$:
\[\mathcal{I}(x,t):=\# \>\> \textrm{of particles}\>\le x\>\>\textrm{at time}\>\> t.\]
With step initial condition, it has been known for some time (see, e.g., Theorem 5.12 in \cite{liggett1}) that if we set
$\gamma:=q-p>0$ and $0\le c\le\gamma$, then the current $\mathcal{I}$ satisfies the strong law
\[ \lim_{t\rightarrow\infty}\frac{\mathcal{I}([-c t],t)}{t}=\frac{1}{4\gamma}\(\gamma-c\)^2\ \ \textrm{a.s.}\]
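The strong law is easy to illustrate with a small Monte Carlo simulation. The sketch below (not from the paper; the horizon $t=150$, the particle number and the number of runs are ad hoc) simulates TASEP with $q=1$, so $\gamma=1$, and compares $\mathcal{I}(0,t)/t$ with $1/4$. With left jumps only, particle $m$ is blocked solely by particle $m-1$, so truncating to finitely many particles is exact:

```python
import random

def tasep_current(t_max, n_particles, rng):
    """TASEP with q=1 (left jumps only) and step initial condition
    x_m(0) = m; returns I(0, t_max), the number of particles at sites <= 0."""
    pos = list(range(1, n_particles + 1))
    occ = set(pos)
    t = 0.0
    while True:
        t += rng.expovariate(n_particles)  # total jump rate = n_particles
        if t >= t_max:
            break
        i = rng.randrange(n_particles)     # each exponential clock has rate 1
        if pos[i] - 1 not in occ:          # exclusion rule
            occ.remove(pos[i])
            occ.add(pos[i] - 1)
            pos[i] -= 1
    return sum(1 for x in pos if x <= 0)

rng = random.Random(1)
t_max, runs = 150.0, 10
avg = sum(tasep_current(t_max, 200, rng) for _ in range(runs)) / runs
print(avg / t_max)  # close to 1/4, up to an O(t^{-2/3}) finite-time correction
```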
The natural next step is to examine the current fluctuations
\begin{equation} \mathcal{I}(x,t)-\frac{1}{4\gamma}(\gamma-c)^2\, t \label{diff}\end{equation}
for large $x$ and $t$.
Physicists conjectured \cite{KPZ}, and Johansson proved for TASEP \cite{johansson}, that
to obtain a nontrivial limiting distribution the correct normalization of (\ref{diff}) is cube root in $t$.
For TASEP Johansson not only proved that the fluctuations are of order $t^{1/3}$ but found
the limiting distribution
function. Precisely, for $0\le v<1$ we have\footnote{The value of $a_2$ given in (\ref{a}) corrects a misprint in Corollary~1.7 of \cite{johansson}.}
\begin{equation} \lim_{t\rightarrow\infty}\mathbb{P}\( \frac{\mathcal{I}([-vt],t)-a_1 t }{ a_2 t^{1/3} }\le s\)=1-F_2(-s),\label{limitLaw}\end{equation}
where
\begin{equation} a_1=\frac{1}{4}\(1-v\)^2,\>\> a_2=2^{-4/3}(1-v^2)^{2/3},\label{a}\end{equation}
and $F_2$ is the limiting distribution of the largest eigenvalue in the Gaussian Unitary Ensemble \cite{TW1}.
The proof of this relied on the fact that TASEP is a determinantal process \cite{johansson,soshnikov,spohn1}. However, universality arguments suggest that (\ref{limitLaw})
should extend to ASEP with step initial condition even though ASEP is not a determinantal process. When the initial state is the Bernoulli product measure, it has been recently proved, using general probabilistic arguments, that the correct normalization remains $t^{1/3}$ for a large class of stochastic models including ASEP \cite{BKS, BS1, BS2, QV}.
In this paper we show that (\ref{limitLaw}) does extend to ASEP.
\par\vspace{5mm}
\noindent\textbf{Theorem.} For ASEP with step initial condition we have, for $0\le v<1$,
\[\lim_{t\rightarrow\infty}\mathbb{P}\( \frac{\mathcal{I}([-vt],t/\gamma)-a_1 t }{ a_2 t^{1/3} }\le s\)=1-F_2(-s),\]
where $\gamma=q-p$ and $a_1$ and $a_2$ are given by (\ref{a}).\footnote{With step initial condition and $x>0$ the total current equals the number of particles to the left of $x$ at time $t$ minus $x$. In what follows we shall require only that $|v|<1$. Therefore the statement of the Theorem holds for all such $v$ if when $v<0$ the value of $a_1$ is decreased by $|v|$.}
\par\vspace{5mm}
This theorem is a corollary, as we show below, of
earlier work by the authors \cite{TW2}.
\par\vspace{5mm}
\section{Proof of the Theorem}
We denote by $x_m(t)$ the position of the $m$th left-most particle (thus $x_m(0)=m\in\mathbb{Z}^+$).
We are interested in the probability of the event
\begin{equation} \{\mathcal{I}(x,t)=m\}=\{x_m(t)\le x, x_{m+1}(t)>x\}. \label{event}\end{equation}
The sample space
consists of the four disjoint events $\{x_m(t)\le x, x_{m+1}(t)>x\}$,
$\{x_m(t)\le x, x_{m+1}(t)\le x\}$,
$\{x_m(t)> x, x_{m+1}(t)>x\}$,
$\{x_m(t)> x, x_{m+1}(t)\le x\}$ and because of the exclusion property we have
\begin{eqnarray*}
\{ x_m(t)\le x, x_{m+1}(t)\le x\}&=&\{x_{m+1}(t)\le x\},\\
\{ x_m(t)>x,x_{m+1}(t)>x\}&=& \{x_m(t)>x\},\\
\{x_m(t)>x,x_{m+1}(t)\le x\}&=& \emptyset.\end{eqnarray*}
These observations and (\ref{event}) give (the intuitively obvious)
\[ \mathbb{P}\(\mathcal{I}(x,t)=m\)=\mathbb{P}\(x_m(t)\le x\)-\mathbb{P}\(x_{m+1}(t)\le x\).\]
Since $\mathbb{P}\(\mathcal{I}(x,t)=0\)=\mathbb{P}\(x_1(t)>x\)$, we have
\[\mathbb{P}\(\mathcal{I}(x,t)\le m\)=1-\mathbb{P}\(x_{m+1}(t)\le x\). \]
Thus, since $x$ and $x_{m+1}(t)$ are integers, the statement of the Theorem is equivalent to the statement that
\[\lim_{t\to\infty}\P(x_{m+1}(t/\gamma)\le -vt)=F_2(s),\]
when $m=[a_1\,t-a_2\,s\,t^{1/3}]$.
In fact, we shall show that
\begin{equation}\lim_{t\to\infty}\P(x_{m}(t/\gamma)\le -vt)=F_2(s),\label{theq}\end{equation}
when
\begin{equation} m=a_1\,t-a_2\,s\,t^{1/3}+o(t^{1/3}).\label{m}\end{equation}
Let
\[ \sigma=\frac{m}{t},\>\>\> c_1=-1+2\sqrt{\sigma},\>\>\> c_2=\sigma^{-1/6}(1-\sqrt{\sigma})^{2/3}.\]
It was proved in \cite{TW2} that when $0\le p<q$,
\begin{equation}\lim_{t\to\infty}\P(x_m(t/\gamma)\le c_1\,t+s\,c_2\, t^{1/3})=F_2(s)\label{TWth}\end{equation}
uniformly for $\sigma$ in a compact subset of $(0,\,1)$.
To obtain (\ref{theq}) from this we determine $\sigma$ so that
\[-vt=c_1\,t+s\,c_2\, t^{1/3}.\]
Thus,
\[v=1-2\sqrt\sigma-s\,\sigma^{-1/6}\,(1-\sqrt\sigma)^{2/3}\,t^{-2/3}.\]
Solving, we get
\[\({1-v\ov2}\)^2=\sigma+s\,\sigma^{1/3}\,(1-\sqrt\sigma)^{2/3}\,t^{-2/3}+O\(t^{-4/3}\),\]
from which we deduce
\[\sigma=\({1-v\ov2}\)^2-s\,\({1-v\ov2}\)^{2/3}\,\({1+v\ov2}\)^{2/3}\,t^{-2/3}+O\(t^{-4/3}\)\]
\[=\({1-v\ov2}\)^2-s\,2^{-4/3}\,(1-v^2)^{2/3}\,t^{-2/3}+O\(t^{-4/3}\).\footnote{Since the condition on $\sigma$ is $0<\sigma<1$, the corresponding condition on $v$ is $|v|<1$, as was stated in footnote 2.}\]
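This algebra is easy to check numerically (a sanity check, not part of the proof; the values $v=0.3$, $s=1.2$ are arbitrary): plugging the truncated expansion for $\sigma$ back into $-vt=c_1t+s\,c_2\,t^{1/3}$ should leave a residual of order $t^{-1/3}$:

```python
import math

def residual(v, s, t):
    # sigma from the expansion in the text, truncated at O(t^{-4/3})
    sigma = ((1 - v) / 2) ** 2 \
        - s * 2 ** (-4 / 3) * (1 - v * v) ** (2 / 3) * t ** (-2 / 3)
    c1 = -1 + 2 * math.sqrt(sigma)
    c2 = sigma ** (-1 / 6) * (1 - math.sqrt(sigma)) ** (2 / 3)
    # residual of  -v t = c1 t + s c2 t^{1/3};  should be O(t^{-1/3})
    return -v * t - (c1 * t + s * c2 * t ** (1 / 3))

v, s = 0.3, 1.2
r1 = abs(residual(v, s, 1e6))
r2 = abs(residual(v, s, 1e8))
# r2/r1 is close to (1e8/1e6)^(-1/3) ~ 0.215, confirming the O(t^{-1/3}) decay
```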
By the uniformity of (\ref{TWth}) in $\sigma$ we get the same asymptotics if we replace the $\sigma$ we just computed by any $\sigma$ satisfying
\[\sigma=\({1-v\ov2}\)^2-s\,2^{-4/3}\,(1-v^2)^{2/3}\,t^{-2/3}+o(t^{-2/3}).\]
Since this is exactly the statement that $m=\sigma t$ satisfies (\ref{m}), we see that the Theorem is established.
\par\vspace{5mm}
\noindent\textbf{Acknowledgements.} This work was supported by the National Science Foundation under grants
DMS--0553379 (first author) and DMS--0552388 (second author).
\section{Introduction}
Real time processes at finite temperature play an essential role in
the physics of the early universe and of heavy ion collisions. A key
quantity in scenarios of baryogenesis~\cite{kuzmin,rs} is the rate for
electroweak baryon number violation (the sphaleron rate). In the broken
phase the sphaleron rate can be computed with semiclassical
methods~\cite{kuzmin,ar,khlebnikov}, but in the symmetric phase~\cite{p}
they are not reliable. Unfortunately, a direct
non-perturbative lattice determination of the hot sphaleron rate is
not available, either.
The most promising approach to this problem~\cite{grigoriev} is to
compute the sphaleron rate in a classical real time simulation since
the relevant thermal transitions are essentially
classical. Considerable work has been done in this
direction~\cite{ambjorn91}--\cite{smit}.
Treating the dynamics of a classical gauge field system, one
is nevertheless faced
with severe difficulties~\cite{nielsen}--\cite{arnold97}.
The high momentum modes with
$k\mathop{\gsi} T$, which do not behave classically, do not decouple from the
the classical correlation functions which cannot be removed by
introducing local counterterms in the classical
theory~\cite{bodeker,arnold97}.
There is
another question related to the classical approach which
has hardly been
considered so far:
under which conditions is the classical
approximation for the low momentum modes
reliable? One systematic way
of investigating the validity of the classical approximation is to
compute the first quantum corrections in the $\hbar$-expansion. So
far, the expressions have been derived only for quantum mechanics and
scalar field theories~\cite{b}. However, these simple cases should
already teach us something in spite of the fact that topological
observables and the associated rate do not exist. In these models
relevant observables might be related for instance to the damping
rate~\cite{aarts,smit}.
The purpose of the present paper is to evaluate the quantum
corrections in the simplest non-trivial case, the
quantum mechanical anharmonic oscillator. This study serves to
estimate the feasibility of similar studies in field theories.
Moreover, we believe that some of the general results might
be carried over to that context.
We find that while at small times the classical approximation
is reliable, it breaks down at large enough times. The reason
is that the functional form of the quantum corrections is qualitatively
different from that of the classical answer, in a way which
cannot be accounted for by modifying the parameters of the
classical result.
The paper is organized as follows.
In Sec.~\ref{Formulation} we discuss the formulation of the problem.
In Sec.~\ref{sec:ho} we briefly discuss the harmonic oscillator
and in Sec.~\ref{aho} the anharmonic oscillator. The ``symmetric''
and ``broken'' cases of the latter are analyzed
in more detail in Secs.~\ref{symmetric}, \ref{broken},
and we conclude in Sec.~\ref{concl}.
\section{The formulation of the problem}
\label{Formulation}
We consider one bosonic degree of freedom $q$ with
conjugate momentum $p$ and the Hamiltonian
\begin{equation}
H=\frac{p^2}{2}+U(q),
\end{equation}
where
\begin{equation}
U(q)=\left\{\begin{array}{l}
+\frac{1}{2}\omega^2q^2+\frac{1}{4}g^2 q^4
\\
-\frac{1}{2}\omega^2q^2+\frac{1}{4}g^2 q^4+\frac{\omega^4}{4 g^2}
\end{array}\right. .
\label{U}
\end{equation}
We refer to the two cases of a positive and of a negative quadratic
term as the symmetric and the broken case, respectively.
Quantum mechanical (Heisenberg) operators are denoted by capital letters,
for example
\begin{equation}
Q(t)=e^{\frac{i}{\hbar} H t} Q(0) e^{-\frac{i}{\hbar} H t}.
\end{equation}
The finite temperature correlator we consider is
\begin{equation}
C(t)=
\left\langle
\frac{1}{2}\Big[
Q(t)Q(0)+Q(0)Q(t)
\Big]
\right\rangle= \frac{1}{Z} \mathop{\rm Re}\mathop{\rm Tr}
\left[e^{-\beta H(P,Q)} Q(t)Q(0)\right], \la{c}
\end{equation}
relevant for the time dependence of
\begin{equation}
\left\langle
\Big[Q(t)-Q(0)\Big]^2
\right\rangle.
\end{equation}
Here $Z= \mathop{\rm Tr}\left[\exp(-\beta H)\right]$ and
$\beta$ is the inverse temperature.
Note that $C(t)$ is an even function of $t$.
In~\cite{b}, the expansion
\begin{eqnarray}
C(t) = C_{\rm cl}(t) + C_\hbar(t) + C_{\hbar^2}(t) +
{\cal O}(\hbar^3)
\end{eqnarray}
was derived for $C(t)$. The classical result
is~\cite{dolan,bochkarev,bodeker}
\begin{eqnarray}
C_{\rm cl}(t)= Z_{\rm cl}^{-1} \int \frac{dp dq}{2\pi\hbar}
e^{-\beta H(p,q)} q q_{\rm c}(t) ,
\la{c0}
\end{eqnarray}
where $Z_{\rm cl} = \int \frac{dp dq}{2\pi\hbar} e^{-\beta H(p,q)}$
and $q_c(t)$ is the solution of the classical equations of motion
with the initial conditions $q_c(0)=q,$ $ \dot{q}_c(0)=p$.
This expression corresponds to the prescription suggested
by Grigoriev and Rubakov~\cite{grigoriev}.
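This prescription is straightforward to implement numerically. The sketch below (illustrative only; the sample size, time step and the values $\beta=2$, $\omega=1$ are arbitrary) samples $(p,q)$ from the Boltzmann weight, evolves $q_{\rm c}(t)$ with a leapfrog integrator and averages $q\,q_{\rm c}(t)$; for the harmonic potential $U=q^2/2$ the result can be compared with the exact classical answer $\cos\omega t/(\beta\omega^2)$, and only the line defining $U'$ changes for an anharmonic potential:

```python
import math, random

def q_classical(q0, p0, t, dt, dU):
    """Leapfrog (velocity Verlet) integration of q'' = -U'(q)."""
    q, p = q0, p0
    n = max(1, int(round(t / dt)))
    p -= 0.5 * dt * dU(q)           # initial half kick
    for _ in range(n):
        q += dt * p                 # drift
        p -= dt * dU(q)             # full kick
    p += 0.5 * dt * dU(q)           # correct the last kick to a half kick
    return q

def classical_correlator(t, beta, dU, n_samples, rng):
    # <q q_c(t)> with (p, q) drawn from exp(-beta H); Gaussian for U = q^2/2
    sd = 1.0 / math.sqrt(beta)
    acc = 0.0
    for _ in range(n_samples):
        q0, p0 = rng.gauss(0.0, sd), rng.gauss(0.0, sd)
        acc += q0 * q_classical(q0, p0, t, 0.05, dU)
    return acc / n_samples

rng = random.Random(2)
beta, t = 2.0, 1.0
est = classical_correlator(t, beta, lambda q: q, 20000, rng)  # U = q^2/2
print(est)  # close to cos(t)/(beta omega^2) = cos(1)/2 ~ 0.270
```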
As for the quantum corrections,
the contribution $C_\hbar(t)$ vanishes. The result to
order $\hbar^2$ is then~\cite{b}
\begin{eqnarray}
C(t) &=& Z^{-1} \int \frac{dp dq}{2\pi\hbar} e^{-\beta H(p,q)}
\biggl\{ \biggl[ 1 - \frac{\hbar^2\beta^2}{24} U''(q) +
\frac{\hbar^2\beta}{24} \left(
\partial_q^2
+ U''(q) \partial_p^2
\right) \biggr] q q_{\rm c}(t) \nonumber \\
& & \hspace*{2cm} - \frac{\hbar^2}{24}
q \int_0^t dt' U'''\big(q_{\rm c}(t')\big)
\{q_{\rm c}(t'),q_{\rm c}(t)\}_3 \biggr\} + {\cal O}(\hbar^3), \la{ch2}
\end{eqnarray}
where $\{,\}$ denotes the Poisson bracket
\begin{eqnarray}
\{f,g\} = \partial_p f \partial_q g - \partial_p g
\partial_q f,
\end{eqnarray}
and
\begin{eqnarray}
\{f,g\}_0= g ,\qquad \{f,g\}_{n+1} = \{f,\{f,g\}_n\}.
\la{poisson_n}
\end{eqnarray}
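The bracket can be checked numerically. The sketch below (illustrative only; the points $t'$, $t$, $(p,q)$ are arbitrary) evaluates $\{q_{\rm c}(t'),q_{\rm c}(t)\}$ by central differences for the harmonic oscillator with $\omega=1$, where the closed form $\sin(\omega(t'-t))/\omega$ is available:

```python
import math

def flow_q(p, q, t, omega=1.0):
    # harmonic-oscillator trajectory with q_c(0) = q, q_c'(0) = p
    return q * math.cos(omega * t) + p / omega * math.sin(omega * t)

def bracket(f, g, p, q, h=1e-5):
    # {f, g} = d_p f d_q g - d_p g d_q f, via central differences
    dfp = (f(p + h, q) - f(p - h, q)) / (2 * h)
    dfq = (f(p, q + h) - f(p, q - h)) / (2 * h)
    dgp = (g(p + h, q) - g(p - h, q)) / (2 * h)
    dgq = (g(p, q + h) - g(p, q - h)) / (2 * h)
    return dfp * dgq - dgp * dfq

t1, t2 = 0.3, 1.1
val = bracket(lambda p, q: flow_q(p, q, t1),
              lambda p, q: flow_q(p, q, t2), p=0.4, q=-0.7)
# val agrees with sin(t1 - t2) for omega = 1; this bracket is constant in
# (p, q), so all higher iterates vanish, consistent with U''' = 0
```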
Similarly, the expression for $Z$ to order $\hbar^2$ is
\begin{equation}
Z= \int \frac{dp dq}{2\pi\hbar} e^{-\beta H(p,q)}
\left[ 1 - \frac{\hbar^2\beta^2}{24} U''(q) \right]. \la{z}
\end{equation}
There are thus three kinds of terms
in the $\hbar^2$-correction to $C(t)$,
denoted by $C_{\hbar^2}^{(i)}(t)$, $i=a,b,c$:
\begin{eqnarray}
C_{\hbar^2}(t) = C_{\hbar^2}^{(a)}(t) + C_{\hbar^2}^{(b)}(t) +
C_{\hbar^2}^{(c)}(t),
\end{eqnarray}
where
\begin{eqnarray}
C_{\hbar^2}^{(a)}(t) & = & Z_{\rm cl}^{-1}
\biggl(\frac{\hbar^2\beta^2}{24}\biggr)
\int \frac{dp dq}{2\pi\hbar}
e^{-\beta H(p,q)} U''(q)
\biggl[ C_{\rm cl}(t) - q q_{\rm c}(t) \biggr], \la{cha} \\
C_{\hbar^2}^{(b)}(t) & = & Z_{\rm cl}^{-1}
\biggl(\frac{\hbar^2\beta}{24}\biggr)
\int \frac{dp dq}{2\pi\hbar}
e^{-\beta H(p,q)}
\biggl[\partial_q^2
+ U''(q) \partial_p^2
\biggr] q q_{\rm c}(t), \la{chb} \\
C_{\hbar^2}^{(c)}(t) & = & Z_{\rm cl}^{-1}
\biggl(\frac{- \hbar^2}{24}\biggr)
\int \frac{dp dq}{2\pi\hbar}
e^{-\beta H(p,q)}
q \int_0^t dt' U'''\big(q_{\rm c}(t')\big)
\{q_{\rm c}(t'),q_{\rm c}(t)\}_3. \la{chc}
\end{eqnarray}
The term $C_{\hbar^2}^{(a)}(t)$ collects the
$\hbar^2$ correction to the partition function,
which multiplies the classical result $C_{\rm cl}(t)$,
together with the corresponding term in the numerator of eq.~\nr{ch2}.
Eqs.~\nr{cha}--\nr{chc} are the corrections we will evaluate below.
One of the key issues of the present
problem is the following: In the
case of static {\it time-independent} correlators, it is
possible (in a weakly coupled theory)
to reproduce
the results of the full quantum theory
from a classical theory with a high
accuracy, provided that the parameters of the
classical theory are modified appropriately. This is called
dimensional reduction~\cite{ginsparg,kajantie}.
The question is then
whether such a resummation might also work
in the time-dependent case. Indeed,
it has been proved that the resummation used in the
time-independent context is sufficient
for making the time-dependent two-point function
in the scalar $\phi^4$ theory finite
to two-loop order in perturbation theory
and even for giving
the corresponding damping rate
the right leading order numerical
value~\cite{aarts}. General arguments
in the same direction were also given in~\cite{mt}. The expansion
in eq.~\nr{ch2} is, in contrast, non-perturbative:
each term involves contributions from all orders in the coupling constant.
Let us therefore
discuss the effects of the resummation in the present
context (see also~\cite{b}).
Of course, unlike in field theory, the problem
of divergences does not occur here.
First, consider dimensional reduction.
Let us take as an example
the ``symmetric case'' anharmonic oscillator,
\begin{equation}
U(q)=\frac{1}{2}\omega^2 q^2+\frac{1}{4}g^2q^4.
\end{equation}
The starting point is then a 1-dimensional Euclidean field theory
defined by
\begin{equation}
{\cal L}=\frac{1}{2}(\partial_\tau q)^2+\frac{1}{2}\omega^2q^2+
\frac{1}{4}g^2q^4,\quad
Z=\int {\cal D}q
\exp(-\frac{1}{\hbar}\int_0^{\beta\hbar}\!d\tau {\cal L}).
\end{equation}
According to dimensional reduction, this can be written as
\begin{equation}
Z={\rm const.}\times \int \! dq_0 \exp(-S_{\rm eff}),
\end{equation}
where $q_0$ is the zero Matsubara mode. The parameters
in $S_{\rm eff}$ are modified by the non-zero modes. The
non-zero mode propagator is
\begin{equation}
\langle q_n q_m\rangle=
\frac{\delta_{n+m,0}}{\omega^2+(2\pi n T/\hbar)^2}.
\end{equation}
To order $\hbar^2$ (which is a good approximation
as long as $\beta \hbar\omega\ll\pi$), one can then easily
calculate how the mass parameter in the effective theory
is modified:
\begin{equation}
\omega_{\rm eff}^2=\omega^2+3g^2 T\sum_{n\neq 0}
\frac{1}{(2\pi nT/\hbar)^2}=\omega^2+\frac{1}{4}g^2\hbar^2 \beta. \la{effw}
\end{equation}
The change in the coupling constant is of order $\hbar^4$
and thus does not contribute in the present
$\hbar^2$-calculation.
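The Matsubara sum in eq.~\nr{effw} converges quickly and can be checked numerically (a sanity check only; the values of $g$ and $T$ are arbitrary, $\hbar=1$):

```python
import math

g, hbar, T = 0.7, 1.0, 2.0
beta = 1.0 / T
# truncated sum 3 g^2 T sum_{n != 0} [hbar / (2 pi n T)]^2, omega^2 dropped
s = 3 * g * g * T * sum(2 * (hbar / (2 * math.pi * n * T)) ** 2
                        for n in range(1, 100001))
exact = 0.25 * g * g * hbar * hbar * beta  # closed form (1/4) g^2 hbar^2 beta
print(s, exact)  # the truncated sum agrees with the closed form to ~1e-7
```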
Consider, on the other hand,
eqs.~\nr{ch2}, \nr{z}. In eq.~\nr{z},
$U''=\omega^2+3 g^2q^2$. The constant $\omega^2$-part
of this expression
does not contribute in eq.~\nr{ch2} since it is cancelled
by a similar part in the numerator,
see eq.~\nr{cha}.
The $q^2$-part, on the other hand, can be exactly
reproduced by calculating the classical partition
function $Z_{\rm cl}$ with $\omega^2$ modified according
to eq.~\nr{effw}:
\begin{equation}
\exp\left(-\beta\frac{1}{2}\omega^2q^2\right)
\left(1-\frac{\hbar^2\beta^2}{24}3g^2q^2
\right)=
\exp\left(-\beta\frac{1}{2}
\omega_{\rm eff}^2q^2\right) + {\cal O}(\hbar^4).
\end{equation}
Similarly, the $-\beta^2 U''(q)$-term in the square brackets
in eq.~\nr{ch2} is accounted for by the change in $\omega^2$
according to eq.~\nr{effw}. Thus the term
$C_{\hbar^2}^{(a)}(t)$ in eq.~\nr{cha} is directly related to
changing the parameters of the classical theory.
However, there remain the terms
$C_{\hbar^2}^{(b)}(t)$ and $C_{\hbar^2}^{(c)}(t)$.
On the other hand, $q_c(t)$ is still a solution to the
original Hamilton equations of motion. Hence the question is
whether the $\hbar^2$-effects can be taken into account by determining
$q_c(t)$ from the equations of motion with the
modified parameter $\omega_{\rm eff}^2$
rather than $\omega^2$.
This issue will be discussed below and we find that, in general,
such a resummation does not take place.
Finally, it should be noted that in the field theory
case one is usually interested in a ``rate'' observable:
a time independent constant determining the time dependence
of some Green's function,
for example the sphaleron rate or the damping rate.
We are not aware of such an observable related to $C(t)$
in the present context. We thus consider the
general large-time functional behaviour of $C(t)$.
\section{Harmonic oscillator}
\la{sec:ho}
In order to show in a simple setting how the $\hbar$-expansion
works and to see what the structure of the perturbative solution
is, let us start by considering briefly the harmonic oscillator.
The classical Hamiltonian is
\begin{equation}
H=\fr12 p^2+\frac{1}{2}\omega^2 q^2.
\end{equation}
In this trivial case, the correlation function in eq.~\nr{c}
can be calculated exactly, with the result
\begin{equation}
C(t)=\frac{\hbar}{2 \omega}
\biggl(\tanh\!\frac{\beta\hbar\omega}{2}\biggr)^{-1}
\cos \!\omega t. \la{ho}
\end{equation}
Expanding in $\hbar$, one gets
\begin{equation}
C_{\rm cl}(t)+
C_{\hbar^2}(t)=\frac{\cos \!\omega t}{\beta\omega^2}
\left[ 1+\frac{1}{12}(\beta\hbar\omega)^2
\right]. \la{hoh2}
\end{equation}
The fact that it is the symmetric
combination of $Q(t)$ and $Q(0)$
which appears in eq.~\nr{c}
removes the term linear in $\hbar$ from the result.
It is seen that
the quantum corrections change the amplitude of $C_{\rm cl}(t)$,
but not the frequency
since $\omega$ is independent of energy.
The classical $\hbar^0$-term
is reliable in the limit $\beta\hbar\omega \ll 1$,
that is, at high temperatures. At low temperatures, in contrast,
the $T=0$ result (with $\tanh =1$ in eq.~\nr{ho}) is reliable.
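The quality of the truncation in eq.~\nr{hoh2} is easy to probe numerically (illustrative only; $\beta=\omega=1$ and the sample values of $\hbar$ are arbitrary): the error of the expanded amplitude relative to the exact prefactor in eq.~\nr{ho} should scale as $(\beta\hbar\omega)^4$:

```python
import math

def exact_amp(beta, hbar, omega):
    # prefactor (hbar / 2 omega) coth(beta hbar omega / 2) of cos(omega t)
    return hbar / (2 * omega) / math.tanh(beta * hbar * omega / 2)

def expanded_amp(beta, hbar, omega):
    # classical amplitude plus the hbar^2 correction, eq. (hoh2)
    return (1 + (beta * hbar * omega) ** 2 / 12) / (beta * omega ** 2)

beta, omega = 1.0, 1.0
e1 = abs(exact_amp(beta, 0.1, omega) - expanded_amp(beta, 0.1, omega))
e2 = abs(exact_amp(beta, 0.2, omega) - expanded_amp(beta, 0.2, omega))
print(e2 / e1)  # close to 2**4 = 16: the truncation error is O(hbar^4)
```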
How is this result reproduced by eqs.~\nr{c0}, \nr{ch2}?
The solution of the classical equations of motion is
\begin{equation}
q_c(t)=q\cos\!\omega t+\frac{p}{\omega}\sin\!\omega t.
\la{hoqct}
\end{equation}
Substituting this into eq.~\nr{c0}, one sees that
the term proportional to $p$ in $q_c(t)$ does
not contribute due to antisymmetry in $p$, and
one gets directly
\begin{equation}
C_{\rm cl}(t)=\frac{1}{\hbar Z_{\rm cl}}
\frac{\cos\!\omega t}{\beta^2\omega^3}
=\frac{\cos\!\omega t}{\beta\omega^2},
\label{CclHO}
\end{equation}
where it was used that $Z_{\rm cl}=(\beta\hbar\omega)^{-1}$.
As for the quantum corrections,
the last term in eq.~\nr{ch2}
is proportional to the third derivative of the potential
and thus does not contribute, $C_{\hbar^2}^{(c)}(t)=0$.
The term $C_{\hbar^2}^{(a)}(t)$ in eq.~\nr{cha} does not contribute
either, since $U''(q)$ is just a constant. There
remains a contribution from $C_{\hbar^2}^{(b)}(t)$ in eq.~\nr{chb},
reproducing eq.~\nr{hoh2}.
\section{Anharmonic oscillator}
\la{aho}
Let us then move to the less trivial case of the anharmonic oscillator.
Here and in the following we use $\omega$, $g$ and
\begin{equation}
V_0=\frac{\omega^4}{4g^2}
\end{equation}
to introduce the dimensionless variables $\hat q,\hat p,\hat t,\hat \beta,\hat E$:
\begin{equation}
\hat q=\frac{g}{\omega}q, \quad
\hat p=\frac{g}{\omega^2}p,\quad \hat t=\omega t,
\quad \hat \beta = \beta V_0,\quad \hat E = \frac{E}{V_0}.
\la{dimless}
\end{equation}
This rescaling serves to show the parameter
dependence of the final non-perturbative result more clearly.
At the same time, it makes the coupling constant equal to
unity so that if one wants to compare with perturbation
theory, one should go back to the original variables.
In terms of the rescaled variables the potential in eq.~(\ref{U})
reads
\begin{equation}
U(q)=\left\{\begin{array}{lll}
& V_0 (\hat q^4+2\hat q^2), & \mbox{\rm the ``symmetric'' case} \\
& V_0 (\hat q^2-1)^2, & \mbox{\rm the ``broken'' case}
\end{array}\right. .
\end{equation}
A dimensionless combination to which $\hbar$ can
be attached is
\begin{equation}
\epsilon = \frac{g^2\hbar}{\omega^3}.
\end{equation}
The quantity naively
governing the semiclassical expansion is hence $\epsilon^2$.
This is multiplied by some dimensionless function $f(\hat \beta,\hat t)$ which
may scale approximately with some power of $\hat \beta$ for given $\hat t$.
For instance, in the case of the harmonic oscillator, $f(\hat \beta,\hat t)$
scales as $\hat \beta^2$ so that the real expansion parameter is
\begin{equation}
(\epsilon \hat \beta)^2 \propto (\beta\hbar\omega)^2.
\end{equation}
One of the issues below is how the function $f(\hat \beta,\hat t)$
behaves in the anharmonic case
as a function of $\hat \beta$.
With the rescaling performed, one can also write $C(t)$
in a dimensionless form. Factoring out the scale $\omega^2/g^2$,
the classical correlation function is
\begin{equation}
C_{\rm cl}(t)=\left(\frac{\omega^2}{g^2}\right)
\left(\frac{\omega^3}{\pi g^2\hbar}\right)\frac{1}{Z_{\rm cl}}
\int_{-\infty}^{\infty}\!d\hat p \int_0^{\infty}\!d\hat q
e^{-\hat \beta \hat E}\,\hat q\,\hat q_c(\hat t) \equiv
\left(\frac{\omega^2}{g^2}\right)\hat Z_{\rm cl}^{-1}(\hat \beta)
\hat C_{\rm cl}(\hat \beta,\hat t), \la{cct}
\end{equation}
where
\begin{equation}
Z_{\rm cl} = \left(\frac{\omega^3}{\pi g^2\hbar}\right)
\hat Z_{\rm cl}(\hat \beta),\quad
\hat Z_{\rm cl}(\hat \beta) = \int_{-\infty}^{\infty}\!d\hat p \int_0^{\infty}\!d\hat q
e^{-\hat \beta \hat E}. \la{cz}
\end{equation}
Here we utilized the symmetry of the integrand in
$\hat q\to -\hat q,\hat p\to -\hat p$.
We can also write the quantum corrections in a dimensionless form,
\begin{equation}
C_{\hbar^2}(t)=\epsilon^2
\left(\frac{\omega^2}{g^2}\right)\hat Z_{\rm cl}^{-1}(\hat \beta)
\Bigl[
\hat C_{\hbar^2}^{(a)}(\hat \beta,\hat t)+
\hat C_{\hbar^2}^{(b)}(\hat \beta,\hat t)+
\hat C_{\hbar^2}^{(c)}(\hat \beta,\hat t)
\Bigr]. \la{cq}
\end{equation}
We will then discuss the ``symmetric'' and ``broken'' cases
separately.
\section{The symmetric case}
\la{symmetric}
\subsection{Numerical results}
The detailed form of the classical solution $q_c(t)$
and of the integrals appearing
in the symmetric case
is discussed in \ref{appA}.
The expressions to be evaluated are in
eqs.~\nr{cct}, (\ref{sfzc})--(\ref{sfchc}). We have done
the evaluation numerically, as well as
analytically in certain regimes of $\hat \beta,\hat t$.
Let us discuss the numerical result first. The curves are
displayed in Figs.~\ref{sccl}--\ref{schc}.
\begin{figure}[tb]
\vspace*{-1.0cm}
\hspace{1cm}
\epsfysize=18cm
\centerline{\epsffile{sccl.eps}}
\vspace*{-6cm}
\caption[a]{
The classical correlator $C_{\rm cl}(t)$ in the symmetric case.
The thin line represents the analytic approximation of
Sec.~\ref{ltl}. The difference between the thin and thick lines
in the regime $1\mathop{\lsi} \omega t\mathop{\lsi} \beta V_0$ is due
to higher order perturbative corrections in $1/(\beta V_0)
\sim g^2/(\beta\omega^4)$.}
\la{sccl}
\end{figure}
\begin{figure}[tb]
\vspace*{-1.0cm}
\hspace{1cm}
\epsfysize=18cm
\centerline{\epsffile{ssum.eps}}
\vspace*{-6cm}
\caption[a]{
The quantum correction
$C_{\hbar^2}^{(a)}(t) + C_{\hbar^2}^{(b)}(t)$ in the symmetric case.
We have divided out the naive expansion parameter
$(\epsilon\beta V_0)^2=(\fr14 \beta\hbar\omega)^2$.}
\la{ssum}
\end{figure}
\begin{figure}[tb]
\vspace*{-1.0cm}
\hspace{1cm}
\epsfysize=18cm
\centerline{\epsffile{schc.eps}}
\vspace*{-6cm}
\caption[a]{
The quantum correction
$C_{\hbar^2}^{(c)}(t)$ in the symmetric case,
compared with the analytic approximation of
Sec.~\ref{ltl}.}
\la{schc}
\end{figure}
The qualitative features of the solution are the following.
Both the classical correlator $C_{\rm cl}(t)$ and the quantum
correction $C_{\hbar^2}^{(a)}(t)+C_{\hbar^2}^{(b)}(t)$ approach
zero at large times. The time scale on which the amplitude
diminishes is roughly proportional to $\hat \beta$ and is
somewhat larger for the quantum corrections. The attenuation
is caused by the destructive interference of the continuum
of classical solutions with different frequencies.
This feature does not fully persist
in the quantum case where the energy levels are
discrete: rather the behaviour is ``almost periodic''~\cite{dolan}
on a larger time scale. Indeed, already
the term $C_{\hbar^2}^{(c)}(t)$ in Fig.~\ref{schc}
behaves in a manner qualitatively different
from $C_{\hbar^2}^{(a)}(t)$, $C_{\hbar^2}^{(b)}(t)$: it has a constant
amplitude at large times.
Let us discuss these features and their implications
in more quantitative terms.
\subsection{The large time limit}
\la{ltl}
We are mainly interested in the
large time behaviour of the correlation function.
Ordinary perturbation theory breaks down
at large times and is therefore of no use here.
This is due to the secular terms in the perturbative series:
at lowest order the solution to the classical equations
of motion is proportional to $\cos(\omega t + \alpha)$ while the next
order contains a term proportional to $g^2 t \sin(\omega t + \alpha)$.
Thus, by dimensional analysis,
straightforward perturbation theory only works for
\begin{eqnarray}
\hat t = \omega t \ll \frac{\omega^4}{g^2} \beta = 4\beta V_0.
\label{tPerturbative}
\end{eqnarray}
The way to avoid the
secular terms is to use the exact
frequency $\Omega(E)$ inside the trigonometric functions appearing
in the perturbative series. The perturbative series
for the classical solution of eq.~\nr{symmqct} is obtained
from eq.~\nr{series}.
In the phase space integration
one then has to compute
(after a change of variables according to eq.~\nr{intvars})
the dimensionless integrals
\begin{eqnarray}
J_{n m} (\beta, t) &=&
(\mbox{$\frac{g^2}{\omega^4}$})^{n + 1}\int_0^\infty dE e^{-\beta E} E^n
\cos(m\Omega(E) t), \nonumber \\
\overline{J}_{n m} (\beta, t) &=& (\mbox{$\frac{g^2}{\omega^4}$})^{n + 1}
\int_0^\infty dE e^{-\beta E}
E^n \sin(m\Omega(E) t).
\label{Jn}
\end{eqnarray}
In general, this is difficult to do.
Fortunately, an exact evaluation
is not necessary if one is interested in the
large time limit $\omega t\gg 1$. Then it is sufficient to keep
only the first two terms of the low energy expansion
of the exact frequency,
\begin{equation}
\Omega(E) = \omega \Bigl[ 1 + \fr14 c_1 \hat E
+ {\cal O} (\hat E^2 ) \Bigr],
\label{OmegaExpansion}
\end{equation}
where $\hat E=E/V_0$ and $c_1 = 3/4$.
In this approximation we find
\begin{eqnarray}
J_{n 1} (\beta, t) &\approx& n!
\frac{\cos(\omega t + (n + 1)\varphi)}
{\Big((\frac{\omega^4}{g^2}\beta )^2 +
(c_1 \omega t)^2\Big)^{(n+1)/2}},\nonumber \\
\overline{J}_{n 1} (\beta, t) &\approx & n!
\frac{\sin(\omega t + (n + 1)\varphi)}
{\Big((\frac{\omega^4}{g^2}\beta )^2 + (c_1 \omega t)^2\Big)^{(n+1)/2}},
\label{JnExpansion}
\end{eqnarray}
where
\begin{eqnarray}
\varphi = \arcsin\left(
\frac{c_1 \omega t}{\sqrt{(\frac{\omega^4}{g^2}\beta)^2 + (c_1\omega t)^2}}
\right).
\end{eqnarray}
{}From these expressions one can see that
in the region $\omega t\gg \frac{\omega^4}{g^2}\beta$
the terms which have been neglected in eq.\ (\ref{JnExpansion})
are suppressed
by at least one power of $1/(\omega t)$.
The reason is that
each power of $E$ in the phase space integrand of
eq.\ (\ref{Jn}) gives one
power of $1/(\omega t)$. In the region
$1\ll\omega t \ll \beta V_0$, the neglected terms
are suppressed by $g^2/(\beta\omega^4)$, corresponding
to higher order perturbative corrections.
Note that the approximation
(\ref{JnExpansion}) is valid also for $\omega t \gg \beta V_0 $
where the perturbative expansion breaks down.
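Since the closed forms in eq.~(\ref{JnExpansion}) are exact once the frequency has been linearised in $E$, they can be checked directly against numerical quadrature. The following Python sketch (with arbitrary illustrative parameter values, not taken from the text) does this for $J_{11}$:

```python
import math

# Check of eq. (JnExpansion) for n = 1, m = 1, with the linearised
# frequency Omega(E) = omega + a*E; all parameter values are illustrative.
omega, g, beta, t, n = 1.0, 0.4, 2.0, 30.0, 1
c1 = 0.75
a = c1 * g**2 / omega**3          # slope: Omega(E) = omega*(1 + c1*E/(4 V0))

# J_{n1} by midpoint quadrature over E
pref = (g**2 / omega**4) ** (n + 1)
Emax, N = 40.0 / beta, 200000
h = Emax / N
J = 0.0
for i in range(N):
    E = (i + 0.5) * h
    J += math.exp(-beta * E) * E**n * math.cos((omega + a * E) * t)
J *= pref * h

# Closed form of eq. (JnExpansion)
D = (omega**4 * beta / g**2) ** 2 + (c1 * omega * t) ** 2
phi = math.asin(c1 * omega * t / math.sqrt(D))
J_closed = math.factorial(n) * math.cos(omega * t + (n + 1) * phi) / D ** ((n + 1) / 2)
print(J, J_closed)
```

The two numbers agree to quadrature accuracy, confirming that the only approximation in eq.~(\ref{JnExpansion}) is the truncation of the low energy expansion of $\Omega(E)$.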
Thus the large time expansion for $C(t)$ can be obtained from
the low energy expansion of the phase space integrand.
Using this expansion, we find
for $C_{\rm cl}(t)$ for $\omega t \gg 1$,
\begin{eqnarray}
C_{\rm cl}(t) \approx
\frac{1}{\hbar Z_{\rm cl}}
\frac{\omega^5}{g^4} J_{11}(\beta, t).
\label{CclAnalytic}
\end{eqnarray}
If $\hat \beta=\frac{\omega^4}{4g^2} \beta \gg 1$, there is an overlap of the
``perturbative region'' of eq.~(\ref{tPerturbative}) and the large time
region:
for moderately large times $1 \ll \omega t \ll \frac{\omega^4}{g^2} \beta$ we
recover the leading order perturbative result.
If $\omega t \gg \frac{\omega^4}{g^2}\beta $,
in contrast, eq.\ (\ref{CclAnalytic})
simplifies to
\begin{eqnarray}
C_{\rm cl}(t) \approx - \frac{16}{9} \frac{1}{\hbar Z_{\rm cl}}
\frac{\omega^5}{g^4} \frac{\cos\omega t}{(\omega t)^2}.
\label{CclAsymptotic}
\end{eqnarray}
That is, for large times, the classical correlation function
oscillates with the ``tree level'' frequency $\omega$ and with an
amplitude which decreases as $1/t^2$. Comparing eq. (\ref{CclAsymptotic})
with the corresponding result for the harmonic oscillator,
eq.\ (\ref{CclHO}), we see that eq.~(\ref{CclAsymptotic})
is non-perturbative since its functional form cannot be obtained
by adding corrections multiplied by positive powers of $g^2$
to the harmonic oscillator result.
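The coefficient $-16/9$ in eq.~(\ref{CclAsymptotic}) is simply $-1/c_1^2$ with $c_1=3/4$. As a small consistency check (with an arbitrary illustrative value of $\frac{\omega^4}{g^2}\beta$, not taken from the text), one can verify numerically that the closed form of $J_{11}$ approaches this limit:

```python
import math

# The n = 1 closed form of eq. (JnExpansion) tends to
# -cos(omega t)/(c1 omega t)^2 = -(16/9) cos(omega t)/(omega t)^2
# once omega t >> (omega^4/g^2)*beta; bV below is an illustrative value.
c1 = 0.75
bV = 1.0                                   # stands for (omega^4/g^2)*beta
for wt in (50.0, 500.0, 5000.0):           # wt = omega t
    D = bV**2 + (c1 * wt) ** 2
    phi = math.asin(c1 * wt / math.sqrt(D))
    J11 = math.cos(wt + 2.0 * phi) / D     # eq. (JnExpansion), n = m = 1
    asym = -(16.0 / 9.0) * math.cos(wt) / wt**2
    print(wt, J11, asym)
```

The agreement improves as $\omega t$ grows, since $\varphi\to\pi/2$ turns $\cos(\omega t+2\varphi)$ into $-\cos\omega t$.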
Let us now
note that if a resummation according to eq.~\nr{effw}
were to take place, then the $\hbar^2$ quantum result should be
obtained by replacing $\omega^2\to\omega_{\rm eff}^2$
in eq.~\nr{CclAsymptotic}, that is
\begin{equation}
C_{\hbar^2}^{\rm resummed}(t) \approx
\biggl[
1+b_1 \frac{g^2\hbar^2\beta}{\omega^2}\biggr] C_{\rm cl}(t)+
\fr29
\frac{1}{\hbar Z_{\rm cl}}
\frac{\hbar^2\beta\omega^3}{g^2}
\frac{\sin\omega t}{\omega t},
\la{provh2}
\end{equation}
where $b_1$ is some number. We show below that the true
$C_{\hbar^2}(t)$ is not of the form in eq.~\nr{provh2}.
We start with $C_{\hbar^2}^{(a)}(t)$.
It was pointed out already in
Sec.~\ref{Formulation} that this term is related to
the replacement $\omega^2\to\omega_{\rm eff}^2$ in the
Hamiltonian appearing in the Boltzmann factor. To be more specific,
the term $\omega^2$ in $U''(q)$ cancels in eq.~\nr{cha}
and in the limit $\omega t\gg1$ we find
\begin{eqnarray}
\label{CaAsymptotic}
C_{\hbar^2}^{(a)} (t) \approx
\frac{1}{8} (\hbar g \beta)^2
\langle q^2 \rangle_{\rm cl}
C_{\rm cl}(t).
\end{eqnarray}
The contribution proportional to $\langle q^3 q_c(t)\rangle_{\rm cl}$,
on the other hand, has one additional power of $E$ in the phase space
integrand compared with the classical case and is
thus suppressed by a factor $1/(\omega t)$. From
eq.~(\ref{CaAsymptotic}) it is obvious that the quantum correction
$C_{\hbar^2}^{(a)} (t)$ shows the
qualitative behaviour indicated in the first
term in eq.~\nr{provh2}: it is small compared with the classical result
if $\beta\hbar\omega \ll 1$ and this holds even for arbitrarily
large times.
Next we consider the quantum corrections containing the derivatives
$\partial^2_q$, $\partial^2_p$ which we have denoted by
$C_{\hbar^2}^{(b)} (t)$. These derivatives acting on
the trigonometric functions in $q q_c(t)$ give extra
factors of $t$.
When expanding the integrand in powers of energy one has
to count $t$ as $E^{-1}$.
For $\omega t\gg 1$ we find
\begin{eqnarray}
\label{CbAsymptotic}
C_{\hbar^2}^{(b)} (t) \approx \frac{1}{48} \frac{1}{\hbar Z_{\rm cl}}
\frac{\hbar^2\beta \omega^3}{g^2}
\left\{ 4 J_{01}(\beta, t)
- 9 \omega t \overline{J}_{11}(\beta, t)
- \frac94 (\omega t)^2 J_{21}(\beta, t) \right\}.
\end{eqnarray}
The individual terms in the curly brackets behave as
$\sin(\omega t)/(\omega t)$ for large times, which is the
expected behaviour in eq.~\nr{provh2}. Such a result would
at the same time indicate that without resummation, the
semiclassical expansion breaks down for
\begin{eqnarray}
\omega t\mathop{\gsi} \frac{\omega^3}{\hbar g^2}\frac{1}{\beta\hbar\omega},
\end{eqnarray}
when the correction term in eq.~\nr{provh2}
is as large as the leading term.
However, we find that this does not occur: in the limit
$\omega t \gg 1$ the individual terms in the curly brackets in
eq.\ (\ref{CbAsymptotic}) cancel at leading order in
$1/(\omega t)$. Therefore the amplitude of $C_{\hbar^2}^{(b)} (t) $
decreases as $1/(\omega t)^2$ for large times. Thus
$C_{\hbar^2}^{(b)} (t)$ is small compared with the classical result
at high temperatures. The corresponding suppression factor, however,
is not in general given by $(\beta\hbar\omega)^2 $.
There are terms proportional to
$1/(\omega t)^2$ having different dependences on
the temperature: expanding eq.\ (\ref{CbAsymptotic}) gives terms
$\propto \beta^2$ while the subleading terms in the low energy expansion
are proportional to $\beta$. We have not calculated these terms analytically.
The numerical result for the sum of $C_{\hbar^2}^{(a)} (t)$ and
$C_{\hbar^2}^{(b)} (t)$ is shown in Fig.\ \ref{ssum}.
Finally we consider the correction $C_{\hbar^2}^{(c)} (t)$.
The result for $\omega t\gg 1$ is
\begin{eqnarray}
C_{\hbar^2}^{(c)} (t) \approx \frac{9}{64} \frac{1}{\hbar Z_{\rm cl}}
\frac{\hbar^2}{\omega^4} (\omega t)^2 \{ J_{21}(\beta, t)
- \frac14 \omega t \overline{J}_{11}(\beta, t)
\},
\end{eqnarray}
which for $\omega t \gg \frac{\omega^4}{g^2}\beta $ becomes
\begin{eqnarray}
\label{CcAsymptotic}
C_{\hbar^2}^{(c)} (t) \approx - \frac{1}{12}
\frac{1}{\hbar Z_{\rm cl}}
\frac{\hbar^2}{\omega} \cos\omega t.
\end{eqnarray}
Thus at large times $ C_{\hbar^2}^{(c)} (t)$ oscillates with the
``tree level'' frequency but with a time-independent amplitude.
This behaviour is qualitatively different from the classical case.
Comparing eqs.\ (\ref{CclAsymptotic}), (\ref{CcAsymptotic})
we see that $ C_{\hbar^2}^{(c)} (t)$ becomes as large as the classical
correlator for $t\sim t_*$ where
\begin{eqnarray}
\omega t_* = \frac{\omega^3}{\hbar g^2 }=\frac{1}{\epsilon},
\la{t*}
\end{eqnarray}
and for $t>t_*$ the semiclassical approximation breaks down.
The correction in eq.~\nr{CcAsymptotic} is clearly not of the
form allowed by eq.~\nr{provh2}. Since a term of a disallowed
functional form appears while the allowed $\sin \omega t/(\omega t)$ term
does not emerge, we conclude that a resummation according to
eq.~\nr{effw} does not take place in the large time limit.
Neither can one understand the result as a resummation
with a correction factor different from that in eq.~\nr{effw}.
Since a resummation cannot be made, the semiclassical
expansion breaks down at the time given by eq.~\nr{t*}.
It can be checked from
Fig.\ \ref{schc} that for $\omega t \mathop{\gsi} 10$ the analytic approximation
for $C_{\hbar^2}^{(c)} (t)$
indeed gives quite an accurate estimate of the exact numerical result.
To conclude, let us point out that the qualitative features found,
together with the ``almost periodic'' behaviour~\cite{dolan}
at time scales $t\mathop{\gsi} t_*$, can be reproduced with the
following approximation. Writing the full quantum result
in eq.~\nr{c} in the energy basis, one gets
\begin{equation}
C(t)=Z^{-1} \mathop{\rm Re}
\sum_{m,n}e^{-\beta E_m}e^{\frac{i}{\hbar} t(E_m-E_n)}
|\langle m|Q|n\rangle|^2. \la{enbasis}
\end{equation}
Approximating the energy levels to first order in $g^2$,
\begin{equation}
E_n =\hbar\omega\left(n+\fr12\right)
+\fr38\frac{g^2\hbar^2}{\omega^2}\left(n^2+n+\fr12\right),
\end{equation}
and the eigenstates to zeroth order, one gets
\begin{equation}
C(t)\simeq
Z^{-1}\frac{\hbar}{2\omega}
\sum_{m=0}^{\infty}
(m+1)
\left(e^{-\beta E_{m+1}}+e^{-\beta E_{m}}\right)
\cos\!\left[\frac{(E_{m+1}-E_m)\,t}{\hbar}\right].
\end{equation}
For small $\epsilon=\hbar g^2/\omega^3$, this solution follows
the classical one in Fig.~\ref{sccl}
up to time scales of order $\omega t\sim 4 \omega t_*=4/\epsilon$,
but then the periodicity sets in, so that
at the time scale $\omega t\sim 8\omega t_*$ the structure around $t=0$ in the
classical solution is repeated. This is the reason for
the breakdown of the classical approximation.
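The sum above is easy to evaluate numerically. The following Python sketch (illustrative parameter values; the level sum is truncated, which is harmless because of the Boltzmann factors) also exhibits the recurrence explicitly: the level spacings are $\hbar\omega[1+\frac34\epsilon(m+1)]$, so at the time $T$ defined by $\frac34\epsilon\,\omega T=2\pi$, i.e.\ $\omega T=8\pi/(3\epsilon)\approx 8.4\,\omega t_*$, every cosine returns to the common phase $\omega T$ and the $t=0$ structure recurs:

```python
import math

# Energy-basis approximation to C(t); hbar, omega, g, beta are illustrative.
hbar, omega, g, beta = 1.0, 1.0, 0.3, 1.0
eps = hbar * g**2 / omega**3               # semiclassical parameter

def E(n):                                  # perturbative levels, first order in g^2
    return hbar*omega*(n + 0.5) + (3.0/8.0)*(g**2*hbar**2/omega**2)*(n*n + n + 0.5)

M = 200                                    # truncation of the level sum
Z = sum(math.exp(-beta * E(m)) for m in range(M))

def C(t):
    s = 0.0
    for m in range(M - 1):
        s += (m + 1) * (math.exp(-beta*E(m+1)) + math.exp(-beta*E(m))) \
             * math.cos((E(m+1) - E(m)) / hbar * t)
    return s * hbar / (2.0 * omega) / Z

# Level spacings are hbar*omega*(1 + (3/4)*eps*(m+1)); at T with
# (3/4)*eps*omega*T = 2*pi each cosine differs from cos(omega*T) by 2*pi*(m+1):
T = 8.0 * math.pi / (3.0 * eps * omega)
print(C(0.0), C(T), math.cos(omega * T) * C(0.0))   # the last two coincide
```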
\section{The broken case}
\la{broken}
\subsection{Preliminaries}
In the broken case, the classical Hamiltonian is
\begin{equation}
H=V_0\Bigl[2\hat p^2+(\hat q^2-1)^2\Bigr]. \la{bH}
\end{equation}
There exists, of course, an enormous
literature on this system.
In the present finite temperature context, it has been
previously studied by Dolan and Kiskis~\cite{dolan}
and by Bochkarev~\cite{bochkarev}.
Quite a lot is known about the qualitative
behaviour of $C(t)$. In general,
the solution can be written as in eq.~\nr{enbasis}.
Since the solution is a sum
of periodic contributions corresponding to the different
energy levels that can be excited, $C(t)$ is
``almost periodic''~\cite{dolan}. In particular,
the lowest frequency appearing is determined by
\begin{equation}
\Delta E = E_1-E_0
\propto \hbar\omega \exp \left(-\frac{2\sqrt{2}}{3}
\frac{\omega^3}{g^2\hbar}\right),
\la{pol2}
\end{equation}
implying that the symmetry is restored
already at $T=0$~\cite{polyakov} in the sense that the
correlator averaged over a long enough time period vanishes.
In contrast, the
classical result $C_{\rm cl}(t)$ has a non-zero limiting value
for $t\to \infty$: the symmetry is only partially
restored, although all the oscillations die out~\cite{dolan}.
As in the symmetric case, the oscillations are damped by
the destructive interference of
the continuum of classical solutions with different frequencies.
The mere fact that the classical result does not show
the expected qualitative behaviour of the full result
indicates that the classical approximation is not generically applicable.
We study this problem in more concrete terms below
by evaluating the $\hbar^2$-corrections.
Note that, in seeming contrast to what was just pointed out,
the system in eq.~\nr{bH} has also been used
to illustrate that the classical
approximation {\it is} applicable
to some real time problems~\cite{rs}.
The reason for the difference is that
the situation we consider is
different from the one in~\cite{rs}: we have a
strict equilibrium situation
at a finite temperature $\beta^{-1}$, which is also what is considered
in~\cite{dolan,bochkarev} and which occurs
in the real time sphaleron rate simulations. The
consideration in~\cite{rs}, in contrast,
concerns a non-equilibrium symmetry-restoring rate obtained
by taking an initial state where the system is prepared in one
of the minima.
In the strict equilibrium case, one cannot define such a rate.
Still, the problem of the general applicability
of the classical approximation
to real-time problems remains.
\subsection{Numerical Results}
The form of the classical solution $q_c(t)$ for the broken case
is discussed in~\ref{appB}.
The numerically evaluated classical correlator $C_{\rm cl}(t)$
is shown in Fig.~\ref{bccl}, and the quantum correction
$C_{\hbar^2}^{(a)}(t)+C_{\hbar^2}^{(b)}(t)$ in Fig.~\ref{bsum}.
The most notable difference with respect to the symmetric case
is that the broken-case results contain a constant part.
For illustration, the energy integrand for $C_{\rm cl}(t)$ is
shown in Fig.~\ref{encl}, where
the emergence of the constant part from the region $\hat E<1$ can be seen.
It is evident from Fig.~\ref{bccl} that
the partial symmetry restoration in the classical result
is the stronger
the higher the temperature is~\cite{dolan}, and
from Fig.~\ref{bsum} that there is a further
symmetry restoring effect from the quantum corrections.
It is seen in Fig.~\ref{bsum} that at high temperatures
($\beta V_0\sim 0.5-2.0$) the quantum corrections are
roughly proportional to the naive expansion parameter
$(\epsilon\beta V_0)^2=(\fr14\beta\hbar\omega)^2$
which has been factored out.
\begin{figure}[tb]
\vspace*{-1.0cm}
\hspace{1cm}
\epsfysize=18cm
\centerline{\epsffile{bccl.eps}}
\vspace*{-6cm}
\caption[a]{
The classical correlator $C_{\rm cl}(t)$ in the broken
case (thick lines), together with the analytic approximation
of Sec.~\ref{bltt} (thin lines).}
\la{bccl}
\end{figure}
\begin{figure}[tb]
\vspace*{-1.0cm}
\hspace{1cm}
\epsfysize=18cm
\centerline{\epsffile{bsum.eps}}
\vspace*{-6cm}
\caption[a]{
The quantum correction
$C_{\hbar^2}^{(a)}(t)+C_{\hbar^2}^{(b)}(t)$ in the broken case.}
\la{bsum}
\end{figure}
\begin{figure}[tb]
\vspace*{-1.0cm}
\hspace{1cm}
\epsfysize=18cm
\centerline{\epsffile{encl.eps}}
\vspace*{-6cm}
\caption[a]{
The energy integrands for $C_{\rm cl}(t)$
and $C_{\hbar^2}^{(c)}(t)$
in the broken symmetry case for $\beta V_0=2$.
In the limit $\omega t\to\infty$ only the
region $E/V_0<1$ contributes in $C_{\rm cl}(t)$.
The energy integrand for
$C_{\hbar^2}^{(c)}(t)$
has been shown on a logarithmic scale. It
involves essentially the second derivative
of the classical integrand, which is why it has very
high peaks ($\sim 10^{10}$ already at $\omega t\sim 15$) around
$E/V_0=1$.}
\la{encl}
\end{figure}
\begin{figure}[tb]
\hspace{1cm}
\epsfysize=18cm
\centerline{\epsffile{bchc.eps}}
\vspace*{-6cm}
\caption[a]{
The quantum correction
$C_{\hbar^2}^{(c)}(t)$ in the broken case,
divided by $(\epsilon\beta V_0)^2=(\fr14 \beta\hbar\omega)^2$.}
\la{bchc}
\end{figure}
Let us then discuss $\hat C_{\hbar^2}^{(c)}(\hat{\beta},\hat t)$.
Its numerical evaluation turns out to be very difficult
for large $\hat t$. The reason is that
the energy integrand is highly peaked
and oscillatory around $\hat E=1$.
To see this, note first that
at $\hat t=0$, the integrand
in eq.~\nr{sfchc} vanishes.
Moreover, the integrand involves
terms $\sim\sin \hat\Omega(\hat E) \hat t$,
in analogy with eq.~\nr{intdndn} below.
Hence a particular energy region will contribute provided
that
\begin{equation}
\hat\Omega(\hat E) \hat t \mathop{\gsi} 1. \la{est}
\end{equation}
Let $y=|\hat E-1|\ll 1$. Since $K(k)\sim \ln(4/k')$ close
to $k=1$ ($k'=\sqrt{1-k^2}$),
one gets from eq.~\nr{bfreq} that
\begin{equation}
\hat\Omega(\hat E)=\left\{
\begin{array}{ll}
\frac{2\pi}{\ln(64/y)}, & \hat E\mathop{\lsi} 1 \\
\frac{\pi}{\ln(64/y)}, & \hat E\mathop{\gsi} 1
\end{array}
\right. . \la{omegaatone}
\end{equation}
Eq.~\nr{est} shows then that the energy-integrand
can be large in the region
\begin{equation}
y \mathop{\gsi} e^{-\hat t}.
\la{large}
\end{equation}
On the other hand, the second partial derivative in eq.~\nr{sfchc}
will involve
\begin{equation}
\partial_{\hat p}^2 \Omega(\hat E) =
( \partial_{\hat p} {\hat E} )^2
\partial_{\hat E}^2 \Omega(\hat E)+\ldots,
\end{equation}
where $\partial_{\hat p}\hat E=4\hat p$.
Hence according to eqs.~\nr{omegaatone}, \nr{large},
\begin{equation}
\frac{\partial^2\hat q_c(\hat t')}{\partial \hat p^2} \sim \frac{1}{y^2}\mathop{\lsi}
e^{2 \hat t}.
\end{equation}
Thus the height of the peaks around $\hat E =1$ grows exponentially
with time, and the peaks move closer to $\hat E =1$. The width
of the peaks is diminishing, but their height is growing faster
so that they give an increasing contribution.
In fact, the contributions of the highest peak from $\hat E < 1$
(where it is positive) and from
$\hat E>1$ (where it is negative) cancel to a large extent,
but the cancellation is not complete and one has
to account for it very precisely in the numerics to get
the remaining contribution correctly.
why we cannot go to large $\hat t$.
In practice, we can reliably calculate
$\hat C_{\hbar^2}^{(c)}(\hat \beta,\hat t)$ only up to $\hat t=15$,
when the highest peaks in the energy integrand are
of height $\sim 10^{10}$ (for $\beta V_0\sim 2$)
at $\delta\hat E \sim 10^{-5}$
around unity.
The energy integrand is shown in Fig.~\ref{encl}
and the result of the integration
in Fig.~\ref{bchc}.
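The logarithmic frequency law in eq.~(\ref{omegaatone}) can be checked with a few lines of Python. The oscillation frequency for $\hat E<1$ is taken here in the reconstructed form $\hat\Omega(\hat E)=\pi\sqrt{(1+\sqrt{\hat E})/2}\,/K(k)$ with $k'^2=(1-\sqrt{\hat E})/(1+\sqrt{\hat E})$ --- an assumption of this sketch, though it is consistent both with eq.~(\ref{omegaatone}) and with the small-energy coefficient $d_1=3/16$ used in the next subsection. $K$ is evaluated via the arithmetic-geometric mean:

```python
import math

def agm(a, b):
    """Arithmetic-geometric mean; converges quadratically."""
    while abs(a - b) > 1e-15 * a:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return a

def K_comp(kp):       # complete elliptic integral K(k) via complementary modulus k'
    return math.pi / (2.0 * agm(1.0, kp))

def Omega_hat(Ehat):  # assumed dn-type frequency in the well, Ehat < 1
    r = math.sqrt(Ehat)
    kp = math.sqrt((1.0 - r) / (1.0 + r))
    return math.pi * math.sqrt((1.0 + r) / 2.0) / K_comp(kp)

for y in (1e-3, 1e-6, 1e-9):               # y = 1 - Ehat
    print(Omega_hat(1.0 - y), 2.0 * math.pi / math.log(64.0 / y))
```

The two columns agree because $k'\simeq\sqrt{y}/2$ here, so $K(k)\sim\ln(4/k')=\frac12\ln(64/y)$ as $y\to0$.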
\subsection{The large time limit}
\la{bltt}
Consider first the classical correlation function.
The form of the solutions in eq.~\nr{bct} can
be read off from eq.~\nr{series}. It is seen that
for $\hat E<1$, $q_c(t)$ contains a constant part in addition to
the cosines.
The $\phi$-integral,
obtained with the change of variables in eq.~\nr{intvars},
then gives
\begin{equation}
\int_{-K(k)}^{K(k)} \!d\phi\, {\rm dn}_k(\phi){\rm dn}_k(w\hat t+\phi)=
\frac{\pi^2}{2 K(k)}\biggl\{
1+8\sum_{n=1}^\infty \frac{q^{2n}}{(1+q^{2n})^2}
\cos\!\left[{n\Omega(E)t}\right]
\biggr\}.
\la{intdndn}
\end{equation}
The cosines in eq.~\nr{intdndn} give contributions
which vanish in the limit $t \to\infty$
as in eq.~\nr{JnExpansion}, see below. Hence one gets
from eqs.~\nr{cct}, \nr{intvars} that
the constant part surviving is
\begin{equation}
C_{\rm cl}(t\to\infty) =\frac{\pi}{4} \frac{1}{\hbar Z_{\rm cl}}
\frac{\omega^5}{g^4}
\int_0^1 d\hat E e^{-\hat \beta\hat E}\frac{w}{K(k)}. \la{tlim}
\end{equation}
One may also try to compute the time dependent
part for $\omega t\gg 1$ in the same
way as in the symmetric case. There we saw that the limiting behaviour
for large times can be obtained from a suitable low energy expansion
of the solution to the equations of motion. In the present case it is
obvious that this expansion cannot be convergent
when $\hat E$ approaches unity.
One may argue, however, that for large times only the solutions with small
energies are relevant and that this expansion still works.
We find that
\begin{eqnarray}
C_{\rm cl}(t) = C_{\rm cl}(t\to\infty) +
\frac{1}{16 \sqrt{2} } \frac{1}{\hbar Z_{\rm cl}}
\frac{\omega^5}{g^4}
\int_0^1 d\hat E e^{-\hat \beta\hat E}\Bigg( \hat E
\cos [\Omega(\hat E) t] +
{\cal O}(\hat E^2)
\Bigg).
\end{eqnarray}
Replacing the upper integration limit
by $\infty$ and keeping
only the first two terms of the low energy expansion of
\begin{eqnarray}
\Omega(\hat E) = \sqrt{2}\omega(1 - d_1 \hat E + {\cal O}(\hat E^2))
\end{eqnarray}
where $d_1 = 3/16$, we obtain for $\omega t \gg 1$,
\begin{eqnarray}
C_{\rm cl}(t) \approx C_{\rm cl}(t\to\infty) +
\frac{1}{16\sqrt{2} } \frac{1}{\hbar Z_{\rm cl}}
\frac{\omega^5}{g^4} \,
\frac{\cos(\sqrt{2}\omega t - 2 \varphi)}{(\beta V_0)^2 +
(d_1 \sqrt{2}\omega t)^2},
\la{bAnCcl}
\end{eqnarray}
where now
\begin{eqnarray}
\varphi = \arcsin\left(\frac{d_1 \sqrt{2}\omega t}{\sqrt{(\beta V_0)^2 +
(d_1 \sqrt{2}\omega t)^2}}\right).
\end{eqnarray}
It can be seen in Fig.~\ref{bccl} that eq.~\nr{bAnCcl} is indeed a good
approximation at large times.
The integrals appearing in the quantum corrections
$C_{\hbar^2}^{(a)}(t)$ and $C_{\hbar^2}^{(b)}(t)$
are qualitatively quite similar to that appearing
in $C_{\rm cl}(t)$.
In particular,
there is a constant part in these corrections
which can be evaluated in the same way
as eq.~\nr{tlim}.
It is seen that the constant part
tends to further restore the symmetry
compared with the classical result, see Fig.~\ref{bsum}.
For the quantum correction $C_{\hbar^2}^{(c)}(t)$,
in contrast, the ``small energy expansion'' does not seem
to be applicable.
We have computed the corresponding expression, but it does not agree
with the numerical result in Fig.~\ref{bchc}. However, this need not be a surprise since,
as discussed,
it is not guaranteed that the small energy expansion
works in the broken case due to the singular nature of the
point $\hat E=1$:
the energy integration extends beyond the radius of convergence of the
small energy expansion.
Moreover, the integrand in $C_{\hbar^2}^{(c)}(t)$ is qualitatively
different from that in $C_{\rm cl}(t)$.
A simple example where the small energy
expansion would not work is given by
\begin{equation}
f(\hat t)=\int_0^\infty d\hat E e^{-\hat \beta\hat E}
\frac{\sin \hat t(1-\hat E)}{\pi (1-\hat E)}.
\end{equation}
In the limit $\hat t\to\infty$ the kernel becomes a delta function, and
$f(\hat t\to\infty)\to \exp(-\hat \beta)$. Yet expanding
the denominator
in $\hat E$ around $\hat E=0$ and integrating term by term
gives a result which oscillates around zero.
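This failure, and the correct limit, can be made concrete numerically. In the following Python sketch (with illustrative values $\hat\beta=1$, $\hat t=200$), the direct quadrature approaches $e^{-\hat\beta}$, while the term-by-term small-$\hat E$ expansion stays of order $1/\hat t$:

```python
import math, cmath

bhat, that = 1.0, 200.0          # illustrative values of beta-hat and t-hat

# Direct quadrature of f(t) = int_0^inf dE exp(-b E) sin(t(1-E))/(pi(1-E))
def integrand(E):
    x = 1.0 - E
    kernel = that / math.pi if abs(x) < 1e-12 else math.sin(that * x) / (math.pi * x)
    return math.exp(-bhat * E) * kernel

Emax, N = 40.0, 200000           # resolve the oscillation, cut the exponential tail
h = Emax / N
f_direct = h * sum(integrand((i + 0.5) * h) for i in range(N))

# Term-by-term expansion of 1/(1-E) = sum_n E^n, each term in closed form:
# int_0^inf dE exp(-b E) E^n sin(t(1-E)) = Im[ exp(i t) n! / (b + i t)^(n+1) ]
f_series = sum(
    (math.factorial(n) * cmath.exp(1j * that) / (bhat + 1j * that) ** (n + 1)).imag
    for n in range(10)
) / math.pi

print(f_direct, math.exp(-bhat), f_series)
```

The first two numbers agree at the per-cent level, while the truncated series remains of order $1/(\pi\hat t)$ and oscillates about zero as $\hat t$ is varied: the expansion misses the $e^{-\hat\beta}$ contribution entirely.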
We could not find any other analytic way of evaluating the
energy integral for $C_{\hbar^2}^{(c)}(t)$,
either. The integrand is very complicated
around $\hat E\sim 1$. Thus we can only mention some
general features of the solution.
First, note that
the numerical result in Fig.~\ref{bchc} shows
that there is a growing
negative contribution at large $\hat t$ in $C_{\hbar^2}^{(c)}(t)$.
This seems to arise from $\hat E$ a bit larger than unity.
To estimate very roughly when
such a contribution can become important, note
that then the peak heights must be
such that the exponential
suppression cannot hide them any more, that is
\begin{equation}
e^{-\hat \beta} e^{ 2 \hat t} \mathop{\gsi} 1.
\end{equation}
Hence one starts to get an effect at $\hat t \mathop{\gsi} \hat \beta$.
As to the functional form of the solution,
it looks roughly like $-\hat t^4$ at large $\hat t$.
It is easy to see that a behaviour linear in $\hat t$
cannot occur, since it follows directly
from the definition in eq.~\nr{c} that $C(t)$
is symmetric in $t$.
For $\beta V_0=0.5,\,1.0$, for which the asymptotic
behaviour sets in
earliest, the leading term of
$C_{\hbar^2}^{(c)}(t)$ can be
fitted at $\omega t\sim 8 \ldots 15$ for instance as
\begin{equation}
C_{\hbar^2}^{(c)}(t)\sim \frac{\omega^2}{g^2}\epsilon^2
\Bigl[-0.01 (\omega t)^4 \Bigr].
\la{fit}
\end{equation}
The conclusions one can draw from the broken case
seem rather similar to those from the symmetric
case. The quantum correction $C_{\hbar^2}^{(c)}(t)$
behaves in a manner qualitatively different from
what was observed for $C_{\rm cl}(t)$. Moreover, the
difference is such that it cannot be accounted for
by a simple resummation of the mass parameter~$\omega^2$.
As the classical result in Fig.~\ref{bccl} is of
order unity and the fit in eq.~\nr{fit} would suggest
the behaviour $\epsilon^2 (\omega t)^4$ for the
quantum correction, one would expect that the
semiclassical expansion breaks down at
\begin{equation}
\omega t \sim \omega t_* = \frac{1}{\sqrt{\epsilon}} =
\biggl(\frac{\omega^3}{g^2\hbar}\biggr)^{1/2}.
\end{equation}
In eq.~\nr{t*} in the symmetric case it was rather
observed that $\omega t_* = 1/\epsilon$. However,
the fit in eq.~\nr{fit} should not be taken very
seriously, as the fitting interval is very small; the main
point is that the time scale for the
breakdown seems to be set by a power of~$1/\epsilon$.
Finally, let us point out that
from the general form of eq.~\nr{enbasis},
one might have expected that at finite temperature
the asymptotic values of $C(t)$
oscillate between positive and negative values.
At zero temperature the time scale would be
$\sim\exp[2\sqrt{2}/(3\epsilon)]$ according to eq.~\nr{pol2}.
Thus the quantum correction $C_{\hbar^2}^{(c)}(t)$
seems to restore some of the qualitative features
missing in $C_{\rm cl}(t)$, in the sense that the
behaviour in Fig.~\ref{bchc} looks like the beginning
of an oscillation with a large time scale. The difference
from the zero temperature case,
however, is that the time scale associated with
the oscillation is not exponential.
\section{Summary and Conclusions}
\la{concl}
We have studied the classical
finite temperature real time two-point correlation
function and its first quantum corrections
for the anharmonic oscillator.
The expansion around the classical limit
is made in powers of $\hbar$, so that each order
contains all orders in the coupling constant $g^2$.
One can identify three different
time scales in the results. In the
symmetric case (Section \ref{symmetric}), these are
\begin{equation}
\omega t \sim 1,\quad \omega t \sim \hat \beta\equiv \frac{\omega^4}{4g^2}
\beta,\quad \omega t \sim \omega t_* = \frac{1}{\epsilon}\equiv
\frac{\omega^3}{g^2\hbar}.
\end{equation}
As long as $\omega t \ll
\hat \beta$, perturbation theory works and the correlation function
oscillates with period $\omega t \sim 1$. In the non-perturbative
region $\omega t \mathop{\gsi} \hat \beta$, the correlation function approaches its
asymptotic form. We have developed a large time expansion which
allows us to address also the time scales $\omega t\gg \hat \beta$. In this
regime the amplitude of the oscillations in the classical result
attenuates due to the destructive interference of solutions to the
equations of motion with different energies. This attenuation
cannot be associated with a damping rate. Finally, the time scale
$t_*$ is associated with the quantum corrections and becomes infinite in the
formal limit $\hbar\to 0$. There is a hierarchy $\omega t_* \gg
\hat \beta$ provided that $\beta\hbar\omega\ll 1$.
The general result of our study is that at the
non-perturbative time scales $\omega t\mathop{\gsi} \hat \beta$, the
form of the quantum corrections differs qualitatively
from that of the classical result. The
semiclassical expansion breaks down at $t \sim t_*$
when the quantum corrections become as large as the
classical result. Moreover, we found that these large corrections
cannot be resummed by modifying the parameters of the classical
theory.
On the other hand,
the first quantum corrections to the classical correlation
function are small for $t \ll t_*$. From
this we would expect that in this region the
classical limit gives a good approximation for the full
quantum mechanical correlation function.
The expansion parameter for the quantum corrections
in this region is not just the naive $(\beta\hbar\omega)^2$:
terms of order $\epsilon (\beta\hbar\omega)$ and $\epsilon^2$ appear as well.
An essential question is then which of the discussed
features might be carried over to field theory.
Unfortunately, we cannot say very much about this.
However, the present study certainly does not encourage
one to believe in the generic applicability of the
classical approximation in the high temperature limit
for time-dependent quantities at arbitrarily large times.
On the other hand, there are also obvious features which
cannot hold in a four-dimensional field theory: for instance, we found that
the time $t_*$ does not depend on the temperature. This
is unlikely to be true in the pure SU(2) theory, say;
dimensionally, the classical time scale not involving
$\hbar$ is $(g^2 T)^{-1}$ in that case and the time scale
proportional to $\hbar^{-1}$ is $(\hbar g^4 T)^{-1}$.
It would be interesting to extend the present
type of analysis to field theory in order to draw
more concrete conclusions. Unfortunately, a straightforward
evaluation of the quantum correction $C_{\hbar^2}^{(c)}(t)$
was numerically quite demanding even in the present case,
in particular for the ``broken'' case where the modes
with $E/V_0\sim 1$ are rather singular. In the field theory
case, the partial derivatives of the classical solution
with respect to the initial conditions would be replaced by
functional derivatives, making things even more complicated.
Still, one might hope that the scalar
field theory analogue of the symmetric
case would allow a non-perturbative investigation of the
quantum corrections to the damping rate.
Finally, let us point out that since the classical
approximation appears not to describe the
large-time behaviour, at least in the present case,
it would perhaps be useful to consider the feasibility of
other approaches. In principle
the problem can be solved non-perturbatively
using Euclidean simulations and spectral function
techniques. The anharmonic oscillator considered
in this paper might be a suitable toy model for developing
techniques for such studies, since it appears
that there is some non-trivial structure even in this case.
\section*{Acknowledgements}
D.B.\ is grateful to M.~Shaposhnikov and
M.L.\ to K.~Kajantie for discussions.
\section{Introduction}
\label{sec:intro}
The Epoch of Reionization (EoR) marks a fundamental phase transition in cosmic history, where neutral hydrogen filling the intergalactic medium (IGM) was ionized by a radiation field thought to originate from the formation of the first generation of stars and galaxies in the universe \citep[for reviews, see][]{Furlanetto2006c, Loeb2013, Mesinger2016b}.
One of the few direct probes of the IGM throughout the entirety of the EoR is neutral hydrogen's $21\,\textrm{cm}$\ line.
A hyperfine transition of neutral hydrogen, $21\,\textrm{cm}$\ emission is a three dimensional, tomographic probe of the IGM's density, ionization and temperature structure.
Low-frequency radio surveys promise to revolutionize our understanding of the IGM by using the $21\,\textrm{cm}$\ line to map out its morphology during EoR, and place constraints on the sources responsible for its heating and eventual reionization.
Over the past decade, experiments like the Donald C. Backer Precision Array for Probing the Epoch of Reionization \citep[PAPER;][]{Parsons2014, Jacobs2015, Ali2015}, the Murchison Widefield Array \citep[MWA;][]{Dillon2014, Ewall-Wice2016b, Beardsley2016}, the Low Frequency Array \citep[LOFAR;][]{Patil2017}, and the Giant Metre Wave Radio Telescope \citep[GMRT;][]{Paciga2013} have placed increasingly competitive limits on the $21\,\textrm{cm}$\ power spectrum.
These experiments face the challenge of separating a weak cosmological signal from foreground emission that is generally $10^5$ times brighter in order to characterize the EoR.
Instrumental systematics further complicate this effort, as they can cause foreground signal to contaminate Fourier modes in the data that would otherwise only be noise limited.
As such, many of the current upper limits on the $21\,\textrm{cm}$\ power spectrum have been limited by instrumental systematics.
Current and future experiments like the Hydrogen Epoch of Reionization Array \citep[HERA;][]{DeBoer2017} and the Square Kilometer Array \citep[SKA;][]{Koopmans2015} are nominally forecasted to provide high significance characterizations of the $21\,\textrm{cm}$\ signal and place constraints on IGM properties and the sources driving reionization \citep{Pober2014, Greig2015a, Greig2015b, Liu2016b, Ewall-Wice2016b, Greig2017b, Kern2017}.
However, these forecasts neglect the impact of systematic contamination, which can significantly hamper an experiment's overall sensitivity and parameter constraining ability.
Precise modeling and separation of instrumental systematics will therefore likely be necessary for second-generation $21\,\textrm{cm}$\ experiments to make robust detections of the cosmological $21\,\textrm{cm}$\ signal.
Systematic contamination comes in a variety of forms, including calibration errors, ionospheric Faraday rotation, primary beam ellipticity, analogue signal chain imperfections (such as impedance mismatches), and others.
In a companion paper, \citet{Kern2019a}, we presented techniques for modeling and removing systematics specifically due to internal instrument coupling, such as signal chain reflections and antenna cross coupling (i.e. crosstalk).
In that paper, we describe the phenomenology of internal instrument systematics in the interferometric visibilities, propose algorithms for removing them from the data, and demonstrate their performance against numerical simulations.
In this work, we investigate data from HERA Phase I for internal instrument systematics and apply our systematic modeling algorithms as a proof-of-concept.
The structure of this paper is as follows.
In \S2 we describe the data and observations used for this analysis.
In \S3 we examine the data for signal chain reflections,
and demonstrate reflection calibration on HERA auto-correlation visibilities.
In \S4 we present a study of cross coupling systematics in the HERA system, and demonstrate cross coupling removal performance on a few select baselines.
In \S5 we perform joint reflection and cross coupling systematic removal for baselines across the entire HERA array and compute power spectra, and lastly in \S6 we summarize our results.
\section{Observations}
\label{sec:hera_data}
Data were taken during HERA Phase I, which observed from 2017 to 2018 while undergoing active construction \citep{DeBoer2017}.
The Phase I instrument was a hybrid HERA-PAPER system, taking the signal chains and correlator from the PAPER experiment \citep{Parsons2010, Ali2015} and attaching them to new HERA antennas.
The HERA antenna is a parabolic dish spanning 14 meters in diameter, with a focal height designed to minimize reflections within the dish \citep{Neben2016, Thyagarajan2016, Ewall-Wice2016c, Patra2018}.
The feed uses the PAPER sleeved-dipole as the active element, which in the Phase I instrument has been optimized for the HERA antenna \citep{DeBoer2015, Fagnoni2016}.
An active balun or front-end module (FEM) is connected to the feed and houses a low-noise amplifier.
After initial amplification, the signals are sent through a 150-meter coaxial cable (first cable in \autoref{fig:sigchain}) to a node unit in the field holding a post-amplifier module (PAM; A box in \autoref{fig:sigchain}), and are then sent through another coaxial cable of about 20 meters in length (second cable in \autoref{fig:sigchain}) to a container holding ROACH2 boards\footnote{\url{https://casper.ssl.berkeley.edu/wiki/ROACH2}} \citep{Parsons2008, Hickish2016} that digitize the signals and then Fourier transform them into the frequency domain (F box in \autoref{fig:sigchain}).
Finally, a Graphics Processing Unit (GPU) correlator cross-multiplies the signals between all antenna pairs to form interferometric visibilities that are integrated for 10.7 seconds before being written to disk (X box in \autoref{fig:sigchain}).
\begin{figure}
\centering
\includegraphics[scale=0.3]{imgs/sigchain.pdf}
\caption{A schematic of HERA signal chains for two antennas, 1 \& 2.
Sky signal ($\vec{\mathcal{S}}$) enters each antenna's dish and feed where it is converted into a voltage, travels down a 150-m coaxial cable to a processing node holding a post-amplification module ($\mathbf{A}$), before being directed through a 20-m cable to an engine that digitizes and Fourier transforms the signal ($\mathbf{F}$), and is then sent to the correlator ($\mathbf{X}$) to produce the visibility $V_{12}$.
A possible cable reflection in antenna 1's signal chain is marked as $\epsilon_{11}$, traversing up and down the cable connecting the feed to the node.
A possible source of feed-to-feed coupling is marked as $\epsilon_{12}$ and $\epsilon_{21}$, where a signal is reflected off of antenna 1's feed and into antenna 2's feed or vice versa.
The dashed line from $\mathbf{F}$ to $\mathbf{X}$ denotes a signal pathway after digitization, where reflections are not a concern.}
\label{fig:sigchain}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.45]{imgs/array_layout}
\caption{The HERA array configuration at the time of observations on Julian Date 2458101, with roughly 46 operational antennas, showing which fall into the Type 1 (blue) and Type 2 (red) signal chain categories.}
\label{fig:array_layout}
\end{figure}
\begin{deluxetable}{lc}
\tabletypesize{\footnotesize}
\tablewidth{0pt}
\tablecaption{
HERA Observation Parameters
\label{tab:hera_obs}
}
\tablehead{Parameter & Value}
\startdata
Observation Date & December 13, 2017 \\[.1cm]
Array Coordinates & 30.7$^\circ$ S, 21.4$^\circ$ E \\[.1cm]
JD Range & 2458101.27 -- 2458101.61 \\[.1cm]
LST Range & 1.5 -- 9.6 hours \\[.1cm]
Integration Time & 10.7 seconds \\[.1cm]
Frequency Range & 100 -- 200 MHz \\[.1cm]
Channel Width & 97.65 kHz \\[.1cm]
Dish Diameter & 14 meter \\[.1cm]
Feed Type & PAPER dipole \\[.1cm]
Instrumental Polarization & North-South (``YY'') \\[.1cm]
Cable Type & 150-m \& 20-m coaxial \\[.1cm]
\enddata
\tablecomments{For the 2017--2018 observation, the HERA correlator used the convention that the X dipole points East-West while the Y dipole points North-South, which is not the standard \citet{Hamaker1996b} definition.}
\end{deluxetable}
The observations presented in this work come from a single night spanning 8 hours of local sidereal time (LST) on Julian Date 2458101.
At that time, the array consisted of 46 operational antennas, each with dual-polarization dipole feeds (\autoref{fig:array_layout}).
Additionally, the signal chains of the array were split into two categories: Type 1 which used newly manufactured FEMs, PAMs, and coaxial cables specifically for HERA Phase I, and Type 2 which re-purposed the PAPER FEMs, PAMs and coaxial cables (colored blue and red in \autoref{fig:array_layout}, respectively).
In this analysis, we only use North-South (``YY'') linear dipole polarization data, although all four auto and cross-feed polarization data products are recorded by the correlator.
Additional observational parameters are tabulated in \autoref{tab:hera_obs}.
The data have been pre-processed with part of the HERA reduction and calibration pipeline.
Specifically, the data are first flagged for radio frequency interference (RFI) using a median filter and watershed algorithm operating on the cross correlation visibilities \citep{Kerrigan2019, Beardsley2019}.
In this work, we also enact two additional steps for RFI flagging.
The first takes stacked auto-correlation visibilities and differences them across time and frequency, normalizes them by their median absolute deviation and flags the residual at the 4 sigma level.
Our second step runs a delay-based, iterative deconvolution on a subset of the auto-correlation visibilities, which attempts to deconvolve the discontinuous windowing function created by flagged data.
This is similar in concept to the image-based CLEAN deconvolution \citep{Hogbom1974}, except applied to the frequency and delay domains rather than the $uv$ and $lm$ domains, and with the missing data coming from RFI rather than incomplete $uv$ sampling.
We then normalize the filtered residual in frequency space by its median absolute deviation, and again enact RFI cuts at the 4 sigma level.
Flags from each of the three independent steps are combined with a logical OR and then broadcasted across time and/or frequency if a 15\% flagged threshold is met for any individual time bin or frequency channel.
An example of the fairly aggressive resultant visibility flagging mask is shown in \autoref{fig:flag_mask}.
In total, roughly 30\% of the data volume is flagged, although this likely includes a significant amount of over-flagging.
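The median-absolute-deviation flagging and flag broadcasting described above can be sketched as follows. This is an illustrative Python sketch with hypothetical function names, not the HERA pipeline code; the Gaussian-equivalent MAD normalization is our own assumption.

```python
import numpy as np

def mad_flag(resid, nsig=4.0):
    """Flag samples of a time/frequency-differenced residual that exceed
    nsig times its median absolute deviation (MAD)."""
    med = np.median(resid)
    # 1.4826 converts the MAD to an equivalent Gaussian sigma
    mad = 1.4826 * np.median(np.abs(resid - med))
    return np.abs(resid - med) > nsig * mad

def broadcast_flags(mask, frac=0.15):
    """Extend flags across an entire time integration or frequency channel
    when more than `frac` of its samples are already flagged."""
    out = mask.copy()
    out[mask.mean(axis=1) > frac, :] = True   # whole time integrations
    out[:, mask.mean(axis=0) > frac] = True   # whole frequency channels
    return out
```

In practice the masks from each independent flagging round would be combined with a logical OR before broadcasting.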
\begin{figure}
\centering
\includegraphics[scale=0.55]{imgs/flag_mask.pdf}
\caption{Aggressive RFI visibility mask as a function of time and frequency after three rounds of flagging.
Flags are broadcasted across time and/or frequency if a 15\% flagged threshold is met per time bin and frequency channel.
For this particular night $\sim30$\% of the total data volume is flagged, which is sub-optimal in that it likely represents significant over-flagging,
but has the benefit of more aggressively catching low-level and repeating RFI.}
\label{fig:flag_mask}
\end{figure}
Next we calibrate the data using a highly simplified antenna-based calibration.
The full HERA calibration pipeline computes complex antenna gains for each time integration over the entire night from a combination of redundant calibration \citep{Dillon2019} and a constrained absolute calibration with the resultant gains smoothed across time and frequency \citep{Kern2019c}.
In this work, we take the gains derived from these steps and 1) average them across the entire night into a single spectrum, 2) average their amplitude across frequency to a single number, and 3) fit for a phase-slope across frequency (i.e. a single antenna delay).
We are left with a single amplitude and delay for each antenna, which we apply to all times of the night.
This has the effect of properly setting the flux scale of the data and calibrating out the antenna cable delay, while ensuring that the gain we apply to the data has little to no spectral structure.
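The three gain-smoothing steps above can be sketched as follows (an illustrative Python sketch with a hypothetical function name, not the actual pipeline implementation):

```python
import numpy as np

def smooth_gain(gains, freqs):
    """Collapse per-time, per-frequency complex gains (Ntimes x Nfreqs)
    into a single amplitude and delay, returning the smooth model gain
    g(nu) = amp * exp(2j pi nu tau) applied to all times."""
    g = gains.mean(axis=0)                                # 1) night average
    amp = np.mean(np.abs(g))                              # 2) single amplitude
    phase = np.unwrap(np.angle(g))                        # 3) phase slope -> delay
    tau = np.polyfit(freqs, phase, 1)[0] / (2 * np.pi)    #    [seconds]
    return amp, tau, amp * np.exp(2j * np.pi * freqs * tau)
```

The phase unwrap assumes the per-channel phase step from the cable delay is below $\pi$, which holds for HERA-like channel widths and delays of a few hundred nanoseconds.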
Because of our highly simplified calibration, the instrumental bandpass is not corrected for and still exists in the data.
Calibration, being multiplicative in frequency space, can be thought of as a convolution in delay space.
The true response of the visibilities in delay space is therefore initially convolved by the bandpass kernel upon measurement by the telescope.
Assuming the bandpass is composed primarily of large-scale modes, its impact will be a slight smoothing-out of the true sky delay response and features created by systematics.
Bandpass calibration performed beforehand may therefore sharpen systematics in delay space and actually make them easier to model and remove.
Antenna-based calibration for HERA in the context of redundant calibration and absolute calibration is explored in \citet{Dillon2019} and \citet{Kern2019c}.
\begin{figure*}
\centering
\includegraphics[scale=0.55]{imgs/autoamp_avg.pdf}
\caption{Auto-correlation visibilities for signal chain Type 1 (left) and Type 2 (right) with absolute time averaging (blue \& red) and with complex time averaging (black), and their associated noise floors (dashed).
Antennas 84 and 121 were used for the two auto-correlation visibilities.
Delays for relevant length scales in the analogue system are marked with arrows.
Resonances in the dish and reflections in the cables tend to be worse for signal chain Type 1.
Additionally, we see evidence for a systematic tail in both signal chain types spanning a wide range of delays that does not integrate down like noise.}
\label{fig:autoamp_avg}
\end{figure*}
\section{Signal Chain Reflections}
\label{sec:ref_cal}
In this section we inspect the data for evidence of signal chain reflections.
To do this, we take the auto-correlation visibility from each antenna and look for peaks in delay space (see \citealt{Kern2019a} for a summary of the algorithm).
The calibrated data are filled with flags due to RFI (\autoref{fig:flag_mask}) and are thus nulled to zero at the flagged channels.
This is not ideal for inspecting the data in delay space, as the Fourier transform of such a discontinuous windowing function
creates strong sidelobes.
To mitigate this we employ the same delay-based, iterative deconvolution algorithm from before to subtract these sidelobes, effectively interpolating across the nulled gaps in the data due to RFI \citep{Parsons2009}.
We allow the deconvolution to place model components out to delays of $|\tau| < 1600$ ns, and iterate until the process reaches 5$\times$ the noise floor of the data.
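A minimal, Hogbom-style 1D CLEAN in delay space might look as follows; this is our own simplified sketch with a hypothetical function name, not the pipeline implementation of \citet{Parsons2009}:

```python
import numpy as np

def delay_clean(vis, flags, gain=0.1, tol=1e-6, maxiter=1000):
    """Simplified 1D CLEAN in delay space.  The flagged (zeroed) channels
    define a sampling function whose delay-space kernel is iteratively
    subtracted at the brightest residual delay, building up a model that
    effectively interpolates across the RFI gaps."""
    w = (~flags).astype(float)                 # sampling function (0 at gaps)
    ker = np.fft.fft(w)
    ker /= ker[0]                              # unit peak at zero delay
    res = np.fft.fft(np.where(flags, 0.0, vis))
    model = np.zeros_like(res)
    for _ in range(maxiter):
        i = np.argmax(np.abs(res))
        if np.abs(res[i]) < tol:               # e.g. a multiple of the noise
            break
        c = gain * res[i]
        model[i] += c
        res -= c * np.roll(ker, i)             # kernel shifted to delay bin i
    return model, res
```

In the text, the stopping threshold is set relative to the noise floor and the model components are restricted to $|\tau| < 1600$ ns; both would be straightforward additions to this sketch.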
We then make a copy of the data, and with the first copy we average the absolute value of the deconvolved visibilities in delay space across a few hours of LST.
With the second copy we average the full complex-valued, deconvolved visibilities across the same time range, which will have a lower noise floor due to the complex average.
\autoref{fig:autoamp_avg} shows these data products for the Type 1 (left) and Type 2 (right) signal chain, with the absolute time-averaged data shown in blue or red, and the complex time-averaged data shown in solid black.
Additionally, the thermal noise floor of each data product is plotted as a dashed line, which is estimated from the data via adjacent time and frequency differencing and then divided by $\sqrt{N_{\rm avg}}$, where $N_{\rm avg}$ is the number of complex averages performed on the data.
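Such a noise estimate can be sketched as follows (our own simplified version; the exact differencing scheme used on the real data may differ):

```python
import numpy as np

def noise_floor(vis, n_avg=1):
    """Estimate the rms thermal noise of a complex visibility waterfall
    (time x frequency) by differencing adjacent time integrations, which
    cancels the slowly varying sky signal, then scale by 1/sqrt(n_avg)
    for n_avg coherent (complex) averages."""
    d = np.diff(vis, axis=0)
    # each difference sums two independent noise realizations
    sigma = np.sqrt(np.mean(np.abs(d) ** 2) / 2.0)
    return sigma / np.sqrt(n_avg)
```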
We find that Type 2 signal chains achieve a better overall impedance match with the analogue system, leading to slightly less structure in the auto-correlations across a wide range of delays.
Nonetheless, we do see evidence for reflections from both the 20-meter and 150-meter cables, with reflection amplitudes of roughly $3\times10^{-3}$ and $1\times10^{-3}$, respectively.
Of major concern is the tail of the auto-correlation response, which starts at low delays and slopes down to the noise floor out to the 150-meter cable delay.
This tail is over an order of magnitude larger than that predicted by simulations of the HERA dish and feed \citep{Ewall-Wice2016c}.
\begin{figure*}
\centering
\includegraphics[scale=0.52]{imgs/refcal_amps.pdf}
\caption{Reflection calibration performed over the full band (120 -- 180 MHz) and applied to the auto-correlation visibilities. {\bfseries Left}: The auto-correlation response before calibration (green) and after calibration (purple), demonstrating suppression of reflection systematics by roughly an order of magnitude in the visibility. {\bfseries Center}: Histogram of derived 20-m reflection amplitudes before and after calibration. In the majority of cases we only see suppression by a factor of a few. {\bfseries Right}: Histogram of derived 150-m reflection amplitudes before and after calibration. In the majority of cases we see suppression by at least an order of magnitude.
Less suppression for the 20-m cable is likely attributable to more significant frequency evolution in the reflection parameters.}
\label{fig:refcal_amps}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.55]{imgs/autoamp_avg_subbands.pdf}
\caption{Auto-correlation visibility after complex time-averaging, transformed over the full band (120--180 MHz; black), just the low side of the band (120--150 MHz; blue) and just the high side of the band (150--180 MHz; gold) for a Type 1 signal chain.
The 150-m cable reflection parameters are fairly consistent between both sides of the band, while the 20-m cable reflection shows significantly more frequency evolution.
The smaller peaks along the systematic tail also show significant frequency evolution.
}
\label{fig:autoamp_avg_subbands}
\end{figure*}
In the case where the noise floor has been integrated down (solid black), we see that delays outside the 150-meter cable delay effectively integrate down with the noise, while delays inside it do not.
This means that the features at low and intermediate delays are coherent on long timescales of at least a few hours.
The abrupt change at $\sim1250$ ns also suggests that the tailed response may in part originate within the 150-m cable.
A possible mechanism for this could be sub-reflections within the cable due to intrinsic cable imperfections or environmental wear and damage along the cable.
Another explanation is the effect of cross coupling (or mutual coupling) between neighboring antennas, which we explore in more detail in cross-correlation visibilities in the following section.
It is not easy to distinguish between these two effects in the auto-correlation visibilities alone.
Direct electromagnetic simulations of mutual coupling in the HERA system provide mixed evidence: predicting it to appear at a similar amplitude and slope in the auto-correlations, but also predicting it to truncate at lower delays of $\sim$600 ns \citep{Fagnoni2019}.
The fact that the auto-correlations show a systematic tail that, for $\tau > 300$ ns or $k_\parallel > 0.2$ $h$ Mpc$^{-1}$, shows only three to four orders of magnitude of dynamic range is concerning, given that fiducial EoR amplitudes are generally assumed to lie at or below five orders of magnitude in dynamic range in the visibility for similar $k$ \citep{Thyagarajan2016}.
Furthermore, the observed systematic tail extends over a wide range of delays that covers essentially all of the $k_\parallel$ modes of interest ($0.2 < k < 0.6$ $h$ Mpc$^{-1}$).
These systematics need to be well-understood and mitigated if the data are to be used for stringent EoR limits.
Next we attempt to model some of these features and calibrate them out.
One needs to proceed carefully when doing this because calibrating out structure that is inherent to the true data will actually \emph{create} systematics.
To be conservative, we only target the two features that we know to correlate with the expected delays of the 20-m and 150-m coaxial cables at $\sim200$ and $\sim1250$ ns.
We use the method described in \citet{Kern2019a} to derive reflection parameters across the full bandwidth excluding the band edges (120 -- 180 MHz) and then apply them to the data in frequency space.
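Schematically, a single reflection adds a delayed copy to an antenna's gain, $g \rightarrow g\,(1 + \epsilon)$ with $\epsilon(\nu) = A\,e^{2\pi i \nu\tau + i\phi}$, which can be divided out of the visibilities once $(A, \tau, \phi)$ are known (see \citealt{Kern2019a} for the fitting procedure). A minimal sketch of the calibration step, assuming that three-parameter form and with function names of our own choosing:

```python
import numpy as np

def reflection_term(freqs, amp, tau, phs):
    """Reflection gain term eps(nu) = A exp(2j pi nu tau + i phi)."""
    return amp * np.exp(2j * np.pi * freqs * tau + 1j * phs)

def apply_reflection_cal(vis, freqs, params_1, params_2):
    """Calibrate a visibility V_12 by dividing out the (1 + eps) reflection
    term of antenna 1 and the conjugated term of antenna 2."""
    eps_1 = reflection_term(freqs, *params_1)
    eps_2 = reflection_term(freqs, *params_2)
    return vis / ((1 + eps_1) * np.conj(1 + eps_2))
```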
\autoref{fig:refcal_amps} shows the result, demonstrating the delay response of an auto-correlation before (green) and after (purple) reflection calibration, and also showing the derived reflection amplitudes of the 20-m and 150-m cable reflections before and after calibration.
We find that in general we can suppress the 150-m cable reflection by a couple orders of magnitude (in the visibility), whereas for the 20-m cable reflection we achieve on average only a factor of a few in suppression.
This may not be enough to remove them below fiducial EoR levels, which is highly concerning for the ultimate performance of the HERA Phase I system.
However, a way to achieve more suppression in the power spectrum is to utilize the highly redundant nature of the array and cross-correlate different baselines of the same orientation when forming power spectra.
Often a limiting factor in reflection modeling is frequency evolution of the reflection parameters \citep{Ewall-Wice2016b}.
In \autoref{fig:autoamp_avg_subbands} we plot the auto-correlation response having taken the Fourier transform of the data over a low-band (120--150 MHz; blue) and a high-band (150--180 MHz; gold), plotted in decibels relative to their peak value.
We observe non-negligible amounts of frequency evolution in the general structure of the systematic tail, with slight evolution for the 150-m cable bump and more significant evolution in the 20-m cable bump.
This is likely at least part of the reason why we achieve less suppression for the 20-m cable reflection,
and suggests that to mitigate reflections to higher dynamic range we will need to perform reflection calibration at the sub-band level.
Because we find the suppression achieved by modeling these reflection across the full band is sufficient for this analysis (\autoref{sec:power_spectra}), we defer sub-band reflection modeling to future studies.
An immediate concern one might have about this technique is the fact that we are applying a calibration with spectral structure at the same or similar delays we hope to use for measuring the EoR power spectrum \citep{Mouri2019}, which may lead to signal loss \citep{Cheng2018}.
In our companion paper we study signal loss in this same scenario with simulated reflection systematics, and we find that although the auto-correlation visibilities may sustain low levels of signal loss, the cross-correlation visibilities show resistance to signal loss across all delays \citep{Kern2019a}.
This is in part due to the subspace that reflection calibration spans relative to the EoR signal: reflection calibration spans a direction-independent, antenna-based space, while EoR is fundamentally a baseline-dependent measurement.
As such, it is hard for reflection calibration to soak up and calibrate out EoR signal from the cross-correlation visibilities.
This is further compounded by the fact that our reflection calibration method only uses the auto-correlation visibilities to derive reflection parameters.
We refer the reader to \citet{Kern2019a} for a more detailed description of the algorithm and our signal loss simulations.
\section{Antenna Cross Couplings}
\label{sec:cross_coupling}
Next we turn our attention to HERA's cross-correlation visibilities in order to probe for antenna cross coupling systematics.
Specifically, we look at the North-South instrumental polarization (also denoted as `YY') for baselines (11, 12), (11, 13) \& (11, 14), which are three East-West baselines with lengths of 15, 29 and 44 meters, respectively (\autoref{fig:array_layout}).
These baselines display some of the strongest cross coupling systematics seen in the data, but are otherwise fairly nominal baselines.
\begin{figure}
\centering
\includegraphics[scale=0.5]{imgs/cross_corr_freq_spectra.pdf}
\caption{Cross correlation visibility amplitudes in frequency space for three East-West oriented baselines increasing in length from 15 meters up to 44 meters at an LST of $\sim6$ hours.
In addition to a broad-scale ripple that decreases in spectral scale with increasing baseline length (most apparent at lower frequencies), we can also see a fast ripple at roughly a 1 MHz scale in all baselines that is likely due to a cross coupling systematic.}
\label{fig:cross_corr_freq_spectra}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[scale=0.5]{imgs/hera_cross_corr.pdf}
\caption{HERA cross correlation visibilities averaged in amplitude across LST for three East-West baselines of increasing length: 15 meters, 29 meters and 44 meters (blue, orange and green, respectively).
The dashed vertical lines represent the geometric delay of the horizon for each baseline, within which foreground emission is nominally bounded.
We see spikes in amplitude at the geometric horizon (``low-delay spikes'') and also at higher delays of $|\tau| > 700$ ns (``high-delay spikes'').
The low-delay spikes are thought to be either a pitchfork-effect as predicted by \citet{Thyagarajan2015a} or antenna cross coupling.
Evidence suggests the high-delay features to be some kind of cross coupling systematic.}
\label{fig:hera_cross_corr}
\end{figure*}
In a similar fashion as before, we perform a delay-space deconvolution to fill in missing data due to RFI flags and suppress the associated sidelobes in the delay domain.
We allow the deconvolution to set model components out to $|\tau| < 1600$ ns, and iterate down to 5$\times$ the noise floor of the visibilities.
\autoref{fig:cross_corr_freq_spectra} shows visibility spectra from the three baselines of interest after deconvolution.
We can clearly see a fast ripple on all baselines with a spectral scale of roughly 1 MHz.
We also see larger scale ripples (particularly at the lower half of the band) that decrease in spectral scale with increasing baseline length.
As we will see below, the former is likely a combination of a cross coupling and reflection systematic, while the latter may also be a form of cross coupling systematic.
Next we window the visibilities from 120 -- 180 MHz with a Blackman-Harris function \citep{Blackman1958} to limit spectral leakage, and then Fourier transform the visibilities to delay space.
At the moment we are only interested in diagnosing systematics, so we do not square the Fourier amplitudes as we would in forming power spectra, meaning the visibilities are in units of Jansky Hz.
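This windowed delay transform can be sketched as follows (illustrative Python with hypothetical function names; the taper uses the standard 4-term Blackman-Harris coefficients, and the Jy Hz normalization assumes multiplication by the channel width):

```python
import numpy as np

def blackman_harris(n):
    """Standard 4-term Blackman-Harris taper."""
    a = (0.35875, 0.48829, 0.14128, 0.01168)
    x = 2 * np.pi * np.arange(n) / (n - 1)
    return a[0] - a[1] * np.cos(x) + a[2] * np.cos(2 * x) - a[3] * np.cos(3 * x)

def delay_transform(vis, freqs):
    """Taper a visibility spectrum and FFT it to delay space.
    Returns delays [s] and the transformed visibility in Jy Hz."""
    dnu = freqs[1] - freqs[0]
    vt = np.fft.fftshift(np.fft.fft(vis * blackman_harris(freqs.size))) * dnu
    delays = np.fft.fftshift(np.fft.fftfreq(freqs.size, d=dnu))
    return delays, vt
```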
\autoref{fig:hera_cross_corr} shows the result for the 15-meter baseline (blue), 29-meter baseline (orange) and 44-meter baseline (green).
Also plotted as dashed vertical lines are the geometric horizons for each baseline.
The nearly-symmetric peaks at each baseline's geometric horizon could be due to the ``pitchfork'' effect predicted to exist for wide-field radio interferometers \citep{Thyagarajan2015a}.
The pitchfork effect is not a systematic in the context of this work: it is a natural phenomenon from diffuse foregrounds, and is explained as the boosting of measured diffuse sky power near the horizon, where sky signal appears in the visibilities at delays near the baseline's geometric horizon.
While HERA has a more compact primary beam compared to other low-frequency $21\,\textrm{cm}$\ experiments (e.g. MWA, PAPER), the pitchfork effect was nonetheless predicted to exist from simulations of the HERA dish and feed \citep{Thyagarajan2016}.
However, these features could also be due to sky emission reflecting off the feed of one antenna and entering the feed of a neighboring antenna (i.e. feed-to-feed reflections), which is a form of antenna cross-coupling that we would also expect to appear at the delay of each baseline's geometric horizon.
While both are expected to produce power at a baseline's geometric horizon, both are also expected to be slowly time-variable, meaning they will occupy similar modes in the delay \& fringe-rate Fourier domains.\footnote{Cross coupling produces slowly time-variable signals in the visibility because it inserts a copy of the auto-correlation, which is itself slowly time variable. The pitchfork mechanism is a mimicking of the auto-correlation at declinations near the horizon, and thus we expect it to have a slow time variability like the auto-correlation.}
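The fringe-rate and delay representation used in this discussion is simply a 2D Fourier transform of the visibility waterfall; a minimal sketch (function name is ours; the integration time and channel width values below follow \autoref{tab:hera_obs}):

```python
import numpy as np

def fringe_rate_delay(waterfall, dt, dnu):
    """2D FFT of a (time x frequency) visibility waterfall into
    (fringe-rate x delay) space.  Slowly varying terms such as cross
    coupling land near f = 0 mHz, while fringing sky power does not."""
    ft = np.fft.fftshift(np.fft.fft2(waterfall))
    frates = np.fft.fftshift(np.fft.fftfreq(waterfall.shape[0], d=dt))
    delays = np.fft.fftshift(np.fft.fftfreq(waterfall.shape[1], d=dnu))
    return frates, delays, ft
```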
\begin{figure*}
\centering
\includegraphics[scale=0.45]{imgs/cross_corr_sim_comparison.pdf}
\caption{Comparison of HERA data with a simulated foreground visibility using the diffuse GSM sky for a 29-meter East-West baseline.
{\bfseries Left:} Averaged HERA cross correlation visibility amplitude in delay space (solid) with an equivalent data product from a simulated foreground visibility with matching LST range (dashed). The geometric baseline horizon is shown at $\sim100$ ns (dashed green). While we see some evidence for a slight pitchfork-like structure in the simulated visibility, it is significantly weaker than the power bumps at equivalent delays in the real data.
{\bfseries Right:} The simulated visibility transformed to fringe-rate and delay space, with the geometric baseline horizon over-plotted (dashed green). We can more clearly see the existence of the pitchfork effect in this plot, which is centered at $f=0$ mHz and extends out to the natural geometric horizon and quickly falls off after.}
\label{fig:cross_corr_sim_comparison}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.5]{imgs/hera_cross_corr_frate_dly.pdf}
\caption{A HERA cross correlation visibility showing foregrounds, cable reflections and cross coupling systematics.
{\bfseries Top:} Real component of the visibility in time and delay space, showing foreground power falling within the geometric horizon (green dashed). Notice that power well within the horizon fringes quickly as a function of time, while power near the geometric horizon shows much slower time variability and spills over beyond the baseline's horizon.
{\bfseries Bottom:} Visibility amplitude in fringe-rate and delay space. Here, we can see the slowly time-variable systematics confined to $f\sim0$ mHz fringe-rate modes, while foreground power is boosted to positive fringe rates. In addition, although not visible in the top plot, the cable reflection is just barely visible above the background noise; it appears at positive fringe rates because it is merely a copy of the intrinsic foreground signal.}
\label{fig:hera_cross_corr_frate_dly}
\end{figure*}
In \autoref{fig:cross_corr_sim_comparison} we compare the data against a simulated diffuse foreground visibility from \citet{Kern2019a}, which uses the Global Sky Model \citep{Oliveira2008} as the foreground model and a simulated direction-dependent primary beam response for HERA \citep{Fagnoni2019}.
While we do see evidence for a slight pitchfork effect in the simulated data at the geometric delay, its amplitude is considerably weaker than what is observed in the data.
There is also some total power missing from the $\tau=0$ mode, which is likely due to our exclusion of point sources in the simulation.
The simulated pitchfork can be seen more clearly when transforming the simulated visibility into fringe-rate and delay space (right of \autoref{fig:cross_corr_sim_comparison}), where indeed we see the pitchfork occupying $f\sim0$ mHz modes as expected.
As noted, this result is at odds with previous work predicting a strong pitchfork effect in HERA data \citep{Thyagarajan2016}, which used a different model for the HERA primary beam.
This comparison needs further study before we can unequivocally state that the excess power at the geometric horizon is feed-to-feed cross coupling in nature: the simulated pitchfork is highly dependent on the adopted primary beam response at the horizon, which is typically the least accurate aspect of the simulated primary beam response and is also hard to characterize empirically.
A more rigorous analysis using a combination of empirical primary beam constraints as well as a suite of primary beam simulations is needed to better understand this effect in HERA data.
We also see evidence in \autoref{fig:hera_cross_corr} for non-negligible amounts of spillover of foreground emission (or supra-horizon emission) beyond the baseline's geometric horizon, which has also been observed by other $21\,\textrm{cm}$\ experiments \citep[e.g.][]{Pober2013a, Beardsley2016}.
Supra-horizon emission can come naturally from intrinsic spectral structure of the foregrounds.
It can also be created by chromaticity of the instrumental gain that pushes out structure inherently contained within the geometric horizon, or from low-level artifacts in the data which have a similar effect \citep{Offringa2019}.
As noted above, the antenna-based gains we apply to the data are simplified to a single flux scaling and a single delay, meaning a large part of the observed supra-horizon emission is likely due to uncalibrated instrumental gain terms, which we do not explore in this work.
For a foreground-avoidance approach to estimating the $21\,\textrm{cm}$\ power spectrum, the presence of supra-horizon emission is highly concerning because it limits our ability to measure the low $k$ modes that in theory probe the EoR at the highest signal-to-noise ratio.
The upside is if supra-horizon emission is slowly time-variable (as are both the pitchfork effect and antenna cross coupling systematics), then regardless of its origin we can mitigate it by filtering it off in Fourier space.
Indeed, this is exactly the principle that cross-coupling subtraction algorithms are founded upon.
Another striking feature in \autoref{fig:hera_cross_corr} is the large amount of excess power above the noise floor at high delay ($|\tau| > 700$ ns).
These features, which we refer to as the ``high-delay'' spikes, exhibit some very peculiar behavior.
First, these features seem to be highly baseline-dependent: the three baselines shown in this section are all tied to antenna 11, yet their structures do not seem to be significantly correlated between the baselines.
Second, their profile as a function of delay does not show isolated, individual peaks as one might expect from one or a few feed-to-feed reflections, but rather shows a wide range of delays corrupted by excess power.
Third, while the structures show up roughly near the delays where we would expect reflections from the 150-m cable to appear, they also show up at delays significantly smaller, enough to necessitate a considerably shorter cable length than 150 meters, which is unlikely.
The high-delay spikes exhibit slow time variability, with their power centered at $f=0$ mHz, as we would expect from a cross coupling systematic.
\autoref{fig:hera_cross_corr_frate_dly} shows the cross-correlation visibility from the 29-meter baseline in time \& delay space (top) as well as in fringe-rate \& delay space (bottom), where recall the latter is merely the Fourier transform of the former across time.
We can clearly see that the high-delay structures are slowly variable, both by their slow movement as a function of time in the top plot, but also by the fact that their power is centered at $f=0$ mHz in the bottom plot.
This is in contrast to the foreground power centered at $\tau=0$ ns, which oscillates rapidly as a function of time and is therefore boosted to positive fringe-rates, with the exception of the power at the baseline's geometric horizon (dashed green), which, like the systematics at high delay, exhibits slow time variability centered at $f=0$ mHz.
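The fringe-rate \& delay space used throughout this section is simply the two-dimensional Fourier dual of the visibility waterfall: frequency transforms to delay, and time to fringe rate. A minimal numpy sketch of this transform is below; the function name is ours, numpy's Blackman taper stands in for the Blackman-Harris window used elsewhere in this work, and the sign convention of `np.fft` need not match the transform convention adopted in the text.

```python
import numpy as np

def frate_delay_transform(vis, dt, dnu):
    """Transform a visibility waterfall V(time, freq) to fringe-rate & delay space.

    vis : complex array of shape (Ntimes, Nfreqs)
    dt  : integration time [s]
    dnu : channel width [Hz]
    Returns (frates [mHz], delays [ns], vfd), where vfd is the 2D transform.
    """
    ntimes, nfreqs = vis.shape
    # Taper both axes to limit Fourier sidelobes (Blackman here, as a
    # stand-in for the Blackman-Harris taper used in the text).
    win_t = np.blackman(ntimes)[:, None]
    win_f = np.blackman(nfreqs)[None, :]
    # FFT across frequency -> delay, and across time -> fringe rate
    vfd = np.fft.fftshift(np.fft.fft2(vis * win_t * win_f))
    delays = np.fft.fftshift(np.fft.fftfreq(nfreqs, d=dnu)) * 1e9  # ns
    frates = np.fft.fftshift(np.fft.fftfreq(ntimes, d=dt)) * 1e3   # mHz
    return frates, delays, vfd
```

Plotting $|{\rm vfd}|$ against the returned axes produces waterfalls like the bottom panel of \autoref{fig:hera_cross_corr_frate_dly}.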
\begin{figure*}[t!]
\centering
\includegraphics[scale=0.5]{imgs/hera_svd_modes.pdf}
\caption{Singular value decomposition of the 29-m East-West baseline visibility from \autoref{fig:hera_cross_corr_frate_dly}.
{\bfseries Left:} The first $\mathbf{T}$ eigenvector across time, showing its raw form (blue) and its low-pass filtered form (orange), after filtering out modes with $f>0.46$ mHz using a Gaussian Process model \citep{Kern2019a}.
{\bfseries Center:} The first sixty singular values, showing that most of the variance in the systematic-prone regions can be described with a handful of modes before a noise plateau is reached.
{\bfseries Right:} The first $\mathbf{D}$ eigenvector across delay, showing it picking up on the slowly variable structure at large delays ($|\tau|\sim1200$ ns) and also some structure near the baseline horizon ($|\tau|\sim200$ ns).}
\label{fig:hera_svd_modes}
\end{figure*}
The cable reflections at $|\tau|\sim1300$ ns cannot be seen by looking at the visibility in time \& delay space, but begin to emerge when we transform to the fringe-rate domain.
As we saw in \autoref{fig:autoamp_avg}, the measured reflection amplitudes are roughly $3\times10^{-3}$ times the peak power in the visibility.
Because the high-delay spikes at $f=0$ mHz also show up at similar delays and are stronger in amplitude, we cannot see the cable reflections in \autoref{fig:hera_cross_corr} or in the top panel of \autoref{fig:hera_cross_corr_frate_dly} buried under the other systematics.
Reflections have the same time-structure as the unreflected signal, so by transforming to fringe-rate space we can isolate them from the slowly time variable systematics, and indeed we can just barely see them above the noise floor of the cross correlation visibilities at roughly $3\times10^{-3}$ times the main foreground power, as expected.
\autoref{fig:hera_cross_corr_frate_dly} also shows evidence for the supra-horizon emission having two distinct components: one that has fast time variability like foregrounds from the main lobe of the primary beam, and another that fringes slowly like a cross coupling systematic or a pitchfork effect; both extend considerably beyond the baseline's geometric horizon.
\begin{figure*}
\centering
\includegraphics[scale=0.5]{imgs/hera_cross_corr_sub.pdf}
\caption{HERA cross correlation visibilities from \autoref{fig:hera_cross_corr} after cross coupling subtraction but before reflection calibration (solid) and after both cross coupling subtraction and reflection calibration (dashed).
The black-dashed line represents the lower delay boundary of the cross coupling model.
Grey shaded regions indicate expected delays for reflection systematics, having inspected the auto-correlations for peaks.
Joint systematic suppression yields cross correlations visibly free of systematics at the level of the per-baseline noise floor.}
\label{fig:hera_cross_corr_sub}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.5]{imgs/hera_cross_corr_frate_dly_sub.pdf}
\caption{Same 29-m visibility in fringe-rate and delay space as shown in \autoref{fig:hera_cross_corr_frate_dly} but now with reflection and cross coupling systematics removed. The blue-dashed region shows where the cross coupling algorithm modeled and removed systematics, and the green-dashed line marks the baseline's geometric horizon.}
\label{fig:hera_cross_corr_frate_dly_sub}
\end{figure*}
Currently, there is no single physical model for the origin of the high-delay spikes that can explain all of their behavior observed in the data.
In \autoref{sec:cross_coupling_models}, we explore some simple physical models for the systematic and show that we can tentatively rule them out; however, further work is needed to more fully understand their origin.
Nonetheless, their temporal behavior is suggestive of some kind of antenna cross coupling that occurs at some point along the signal chain.
At present, what we can say with certainty is that their time dependence is highly inconsistent with an EoR signal (see \citet{Kern2019a} for details), and as such we can suppress them by filtering the data in fringe-rate space before forming power spectra.
With that in mind, \autoref{fig:hera_svd_modes} shows the result of running an SVD-based cross coupling model \citep{Kern2019a} on the 29-meter baseline data, which decomposes the matrix shown in the top panel of \autoref{fig:hera_cross_corr_frate_dly} into orthogonal time eigenmodes ($\mathbf{T}$), orthogonal delay eigenmodes ($\mathbf{D}$) and their singular values ($\mathbf{S}$).
Before taking the SVD we apply a bandstop window on the data matrix that assigns zero weight to all delay modes outside of the range $200 < |\tau| < 2000$ ns, which was chosen to encompass most of the observed cross-coupling systematics and to reject the foregrounds at very low delays.
The left panel plots the first $\mathbf{T}$ eigenmode across time, showing the raw eigenmode (blue) and the eigenmode after low-pass filtering it out to $f_{\rm max} = 0.46$ mHz (orange).
We use the Gaussian Process-based filter explored in \citet{Kern2019a} to low-pass filter these time-modes.
The center panel shows the first 60 singular values, giving us a sense of how much of the information content is isolated in the first few eigenmodes.
We find that most of the structure can be described with only a handful of modes before reaching a plateau.
In forming the systematic model we keep the top 30 modes out of $\sim1000$ and truncate the rest.
Lastly, the right panel shows the first $\mathbf{D}$ eigenmode across delay, showing it picking up the high-delay cross coupling systematic and some of the supra-horizon emission at low delay.
In addition to picking up on the systematic, the SVD will pick up on the noise of the data as well.
However, because we keep only a small fraction of the eigenmodes and additionally smooth them across time, we do not suspect that we are subtracting a significant component of the noise in the process of systematic removal.
We repeat this for the other baselines at hand, low-pass filtering the $\mathbf{T}$ basis vectors from the 15-meter and 44-meter baselines with $f_{\rm max} = 0.14$ and $0.83$ mHz, respectively, again using the Gaussian-Process-based smoothing.
See Table 1 and Appendix B of \citet{Kern2019a} for more details on this process.
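The core of this procedure can be sketched in a few lines of numpy. The helper below is hypothetical (names and defaults are ours) and operates on a waterfall already transformed to delay space; the Gaussian-Process low-pass filtering of the time modes, applied in the real pipeline before reconstruction, is noted only as a comment.

```python
import numpy as np

def svd_systematic_model(vis, delays, tau_min=200.0, tau_max=2000.0, nkeep=30):
    """Rank-truncated model of slowly fringing systematics.

    vis    : complex waterfall in (time, delay) space, shape (Ntimes, Ndelays)
    delays : delay of each column [ns]
    """
    # Band-stop window: unity for tau_min < |tau| < tau_max ns, zero elsewhere,
    # rejecting the strong foregrounds at low delay.
    w = ((np.abs(delays) > tau_min) & (np.abs(delays) < tau_max)).astype(float)
    d = vis * w[None, :]
    # SVD into time modes T, singular values S, and delay modes D
    T, S, D = np.linalg.svd(d, full_matrices=False)
    # Keep only the first nkeep modes; in the full pipeline each kept time
    # mode is additionally low-pass filtered across time (e.g. with a
    # Gaussian Process) before reconstruction, to protect EoR fringe rates.
    return (T[:, :nkeep] * S[:nkeep]) @ D[:nkeep]
```

The systematic-subtracted waterfall is then simply `vis - model`.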
\autoref{fig:hera_cross_corr_sub} shows the baselines in \autoref{fig:hera_cross_corr} after cross-coupling subtraction, with the vertical dashed line showing the minimum delay of the cross coupling model at $\tau=200$ ns.
The top panel shows only cross coupling subtraction, where we see significant suppression of the high-delay spikes and the outer edge of the low-delay spikes.
As expected, after subtracting the strong cross coupling terms at high delay we are left with localized bumps that mark the cable reflections (grey bands), which, recall, were not subtracted with the cross coupling because they occupy fringe-rate modes that were filtered out of the systematic model in the process of smoothing.
The bottom panel shows the data after applying reflection calibration from \autoref{sec:ref_cal} and cross coupling subtraction, showing that the data is now consistent with a scale-independent thermal noise floor at all delays $|\tau| > 500$ ns.
There is, however, still a slight slope in the data at intermediate delays of $200 < |\tau| < 500$ ns, which is part of the supra-horizon emission we observed earlier.
To ensure that this tail is not coming from the cross-coupling component that we attempted to filter out, we can plot the systematic-subtracted data in fringe-rate \& delay space, which is shown in \autoref{fig:hera_cross_corr_frate_dly_sub} with the blue-dashed region showing the region of Fourier space where cross coupling subtraction was performed.
\autoref{fig:hera_cross_corr_frate_dly_sub} confirms that the excess signal between $200 < |\tau| < 500$ ns observed in \autoref{fig:hera_cross_corr_sub} does not come from modes that should have been subtracted in the process of cross coupling removal, and originates from the second supra-horizon component at higher fringe-rates.
As discussed above, this supra-horizon emission can come from uncalibrated bandpass terms or from low-level artifacts in the data, which push foregrounds out in delay that were intrinsically contained within the geometric horizon.
These effects can be somewhat mitigated with better bandpass calibration and data flagging, but are still active areas of research in the literature.
Additionally, the slight overlap of low fringe-rate power inside the dashed region at $|\tau|=200$ ns is produced by a windowing function applied to the data before taking its Fourier transform.
Signal loss is a principal concern when applying any baseline-dependent operation to the data, as we have done with cross coupling subtraction.
In \citet{Kern2019a} we vet our cross coupling modeling algorithms for EoR signal loss against numerical visibility simulations of the HERA Phase I system.
We show that by low-pass filtering the systematic model along time (\autoref{fig:hera_svd_modes}), we can harden our systematic model against EoR signal loss to an almost arbitrary level.
In our case, we chose the fringe-rate bounds above by adopting a signal loss tolerance of 1\% in EoR power, which is below the expected measurement error of the full HERA array.
We refer the reader to our analysis and discussion in that paper for more details on signal loss quantification in the context of cross coupling removal.
\section{Power Spectrum Estimation}
\label{sec:power_spectra}
Now that we have demonstrated that we can suppress reflection and cross coupling systematics for a few baselines down to their individual noise floors, we would like to show that we can do the same for baselines across the entire array, and confirm that these systematics are a non-limiting factor in the power spectrum even after redundant baseline averaging.
We will focus on the same three baseline orientations (15-m, 29-m and 44-m East-West baselines), but now look at all baselines within the array that fall within each baseline group.
\subsection{Delay Spectra}
To estimate the three-dimensional $21\,\textrm{cm}$\ power spectrum, $P_{21}(\mathbf{k})$, we use the delay spectrum estimator \citep{Parsons2012b, Liu2014a, Parsons2014}.
The delay spectrum is a per-baseline, visibility-based power spectrum estimator that relies on the Fourier transform of the visibility across frequency into the delay ($\tau$) domain,
\begin{align}
\label{eq:delay_transform}
\widetilde{V}(\mathbf{u}, \tau) = \int d\nu\ e^{2\pi i\nu\tau} V(\mathbf{u}, \nu),
\end{align}
where $\mathbf{u} = \mathbf{b} / \lambda$ is the baseline vector divided by the observing wavelength.
The ``delay transform'' of the visibility is not a direct measurement of the line-of-sight cosmological $k_{\parallel}$ mode, due to an interferometer's inherent chromaticity \citep{Morales2012}.
Approximating it as such is known as the ``delay approximation,'' which was shown to be a good approximation for short baselines and is one of the motivating factors behind HERA's compact design \citep{Parsons2012b, Dillon2016}.
We refer the reader to \citet{Morales2019} for a broader discussion on various $21\,\textrm{cm}$\ power spectrum estimators.
The delay spectrum approximation of the $21\,\textrm{cm}$\ power spectrum is then the square of the delay-transformed visibility with the appropriate scaling factors,
\begin{align}
\label{eq:delay_spectrum}
\widehat{P}_{21}(k_{\perp}, k_{\parallel}) \approx |\widetilde{V}(\mathbf{u}, \tau)|^2\frac{X^2Y}{\Omega_{pp}B_{p}}\left(\frac{c^2}{2k_{B}\bar{\nu}^2}\right)^2,
\end{align}
where $X$ and $Y$ are redshift-dependent scalings converting sky angles and frequencies to cosmological length scales, $\Omega_{pp}$ is the sky-integral of the squared antenna primary beam response, $\bar{\nu}$ is the delay transform center frequency and $B_{p}$ is the delay transform bandwidth, as defined in Appendix B of \citet{Parsons2014}.
The factors relating the $\mathbf{u}$ and $\tau$ Fourier domains inherent to the telescope to the cosmological Fourier domains of $k_{\perp}$ and $k_{\parallel}$ are
\begin{align}
\label{eq:cosmo_scalings}
k_{\parallel} &= \frac{2\pi}{X}\tau \nonumber \\
k_{\perp} &= \frac{2\pi}{Y}\frac{b}{\lambda},
\end{align}
where $X = c(1+z)^2\nu_{21}^{-1}H(z)^{-1}$, $Y = D(z)$, $\nu_{21}=1.420$ GHz, $H(z)$ is the Hubble parameter, $D(z)$ is the transverse comoving distance, $b$ is the baseline length and $\lambda$ is the observing wavelength \citep{Parsons2012a, Liu2014a}.
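These conversions are straightforward to evaluate numerically. The sketch below assumes a flat $\Lambda$CDM cosmology with illustrative parameters ($H_0 = 67.7$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.31$; not necessarily the exact values used in this work) and a simple trapezoidal integral for $D(z)$; the function names are ours.

```python
import numpy as np

C_KMS = 2.998e5     # speed of light [km/s]
NU21_HZ = 1.4204e9  # 21 cm rest frequency [Hz]

def hubble(z, H0=67.7, Om=0.31):
    """H(z) [km/s/Mpc] for a flat LambdaCDM cosmology (illustrative parameters)."""
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def comoving_distance(z, **kw):
    """Transverse comoving distance D(z) [Mpc] (flat cosmology, trapezoid rule)."""
    zg = np.linspace(0.0, z, 2048)
    return np.trapz(C_KMS / hubble(zg, **kw), zg)

def tau_to_kpara(tau, z, **kw):
    """k_parallel [Mpc^-1] for delay tau [s]:
    k = 2 pi tau / X, with X = c (1+z)^2 / (nu21 H(z))."""
    X = C_KMS * (1.0 + z) ** 2 / (NU21_HZ * hubble(z, **kw))  # [Mpc s]
    return 2.0 * np.pi * tau / X

def bl_to_kperp(blen_m, freq_hz, z, **kw):
    """k_perp [Mpc^-1] for baseline length blen_m [m] observed at freq_hz [Hz]:
    k = 2 pi (b / lambda) / Y, with Y = D(z)."""
    u = blen_m * freq_hz / 2.998e8  # baseline length in wavelengths
    return 2.0 * np.pi * u / comoving_distance(z, **kw)
```

With these parameters, a delay of $1\,\mu$s at $z\approx8.5$ maps to $k_\parallel\approx0.36$ Mpc$^{-1}$, roughly consistent with the $\sim0.5\ h$ Mpc$^{-1}$ scales quoted for the high-delay systematics.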
Cross multiplying a visibility with itself in \autoref{eq:delay_spectrum} to form a delay spectrum will result in an overall bias in power due to the noise present in the data.
To avoid this, we take visibility spectra adjacent to each other in LST separated by 10.7 seconds and apply a phasing term to align their phase centers before cross multiplication \citep{Pober2013b}.
This means the two visibilities to leading order measure the same cosmological mode on the sky but have uncorrelated noise realizations, such that they do not produce a noise bias upon cross correlation.
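Schematically, the estimator of \autoref{eq:delay_spectrum} with this noise-bias-avoiding cross-multiplication looks as follows. This is a sketch with hypothetical names: the two input spectra are assumed to be already phased to a common center, numpy's Blackman taper stands in for the Blackman-Harris window, and the normalization scalar is passed in precomputed.

```python
import numpy as np

def delay_spectrum(vis1, vis2, dnu, scalar):
    """Per-baseline delay spectrum from two time-interleaved visibility
    spectra, cross-multiplied so their uncorrelated noise produces no bias.

    vis1, vis2 : complex spectra over frequency, shape (Nfreqs,)
    dnu        : channel width [Hz]
    scalar     : the X^2 Y / (Omega_pp B_p) (c^2 / 2 k_B nubar^2)^2 factor
    """
    n = vis1.size
    win = np.blackman(n)  # taper standing in for Blackman-Harris
    vt1 = np.fft.fftshift(np.fft.fft(vis1 * win)) * dnu  # approximate the integral
    vt2 = np.fft.fftshift(np.fft.fft(vis2 * win)) * dnu
    delays = np.fft.fftshift(np.fft.fftfreq(n, d=dnu))
    # Expectation of vt1 * conj(vt2) carries no thermal-noise bias because
    # the noise realizations in vis1 and vis2 are uncorrelated.
    return delays, scalar * vt1 * np.conj(vt2)
```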
\begin{figure}
\centering
\includegraphics[scale=0.55]{imgs/hera_Tsys.pdf}
\caption{System temperature curves for all baselines used in the power spectrum analysis (colored points), and their average (black dashed). Delay spectra presented in this section are formed between channels 450 and 650 (144 -- 163 MHz) with an effective system temperature of $\sim270$ K.}
\label{fig:hera_tsys}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.5]{imgs/pspec_waterfall.pdf}
\caption{An averaged power spectrum waterfall of the East-West 15-m group showing the absolute value of the real component of the power spectra, having first incoherently averaged 35 separate baseline-pairs in the group. We plot the data with systematics in (left) and with systematics removed (right).}
\label{fig:pspec_waterfall}
\end{figure*}
Thermal noise in interferometric visibilities is mean-zero, Gaussian distributed, and is statistically uncorrelated on all time and frequency scales; however, it generally is non-stationary, and will have an amplitude dependence as a function of LST and frequency.
A signal chain's \emph{system temperature} is proportional to the total amount of noise power received by the analogue system, and is the sum of the sky noise and receiver noise,
\begin{align}
T_{\rm sys}(\nu, t) = T_{\rm sky}(\nu, t) + T_{\rm rcvr}(\nu, t)\ {\rm [K]}.
\end{align}
In practice, antenna signal chains will have variable system temperatures due to different angular primary beam responses and different receiver properties.
A visibility-based system temperature can therefore be estimated, which is the system temperature as measured by a particular baseline.
This can be estimated by taking differences of adjacent pixels in time and frequency and relating its RMS to a system temperature via the radiometer equation,
\begin{align}
\sigma_{\rm rms}^{ij} = \frac{2k_b\nu^2}{c^2\Omega_p} \frac{T_{\rm sys}^{ij}}{\sqrt{\Delta\nu\Delta t}},
\end{align}
where $\sigma_{\rm rms}^{ij}$ is the RMS of the visibility between antennas $i$ and $j$ in Jansky, $k_b$ is the Boltzmann constant, $\nu$ is the average observing frequency, $\Omega_p$ is the angular integral of the peak-normalized primary beam response in steradians, $\Delta\nu$ is the correlator channel width in Hz and $\Delta t$ is the correlator integration time in seconds \citep{Thompson2017}.
Another estimate of the noise comes directly from the auto-correlation visibility, which itself is a measurement of the total power received by a particular antenna.
For a cross-correlation visibility between antenna $i$ and $j$, we can estimate the baseline's system temperature as
\begin{align}
\sqrt{V_{ii}V_{jj}} = \frac{2k_b\nu^2}{c^2\Omega_p}T_{\rm sys}^{ij},
\end{align}
where $V_{ii}$ is the auto-correlation visibility of antenna $i$.
While both methods give comparable results, we opt to use the auto-correlations, which in practice generally lead to more stable and cleaner noise models.
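As a sketch, the auto-correlation-based estimate amounts to a direct evaluation of the preceding equation (the helper and its names are hypothetical; the autos are assumed to be in Jy, with $1\,{\rm Jy}=10^{-26}\,{\rm W\,m^{-2}\,Hz^{-1}}$):

```python
import numpy as np

def tsys_from_autos(V_ii, V_jj, freq_hz, omega_p):
    """Baseline system temperature [K] from auto-correlations in Jy:
    T_sys^{ij} = sqrt(V_ii V_jj) * c^2 Omega_p / (2 k_B nu^2)."""
    kb = 1.380649e-23  # Boltzmann constant [J/K]
    c = 2.998e8        # speed of light [m/s]
    jy = 1e-26         # 1 Jy in W m^-2 Hz^-1
    return np.sqrt(V_ii * V_jj) * jy * c ** 2 * omega_p / (2.0 * kb * freq_hz ** 2)
```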
\autoref{fig:hera_tsys} shows system temperature estimates for each baseline participating in the analysis (blue) and the averaged system temperature, which is each baseline's system temperature averaged in quadrature.
Again, because we have not corrected for the bandpass structure of the gains, the large-scale fluctuations in \autoref{fig:hera_tsys} are not unexpected, and would be smoothed-out after solving for and applying the appropriate instrumental gains.
The presence of such structure in the noise curves does not change the fundamental results of this section.
Power spectra presented in this section are formed between channels 450 -- 650 (144 -- 163 MHz) with an effective system temperature of $\sim270$ K.
\begin{figure*}
\centering
\includegraphics[scale=0.5]{imgs/hera_avg_pspec.pdf}
\caption{Delay spectra for three unique baseline lengths oriented along the East-West axis without systematic removal (blue) and with systematic removal (orange). The power spectra are formed directly from the visibilities for each baseline in the array, are incoherently averaged within each redundant group, and then their absolute value is averaged across the remaining bins in LST.
We see suppression of high-delay systematics down to the integrated noise floor, as well as some suppression of supra-horizon power at low delay.}
\label{fig:hera_avg_pspec}
\end{figure*}
With an understanding of the noise properties of our data, we can compute a theoretical estimate of the noise power spectrum, $P_{\rm N}$, which is equivalent to the root-mean square (RMS) of the power spectrum if the only component in the data were noise.
This is one way to measure the uncertainty on the estimated power spectra, but also represents the theoretical amplitude of the power spectra in the limit that they are noise dominated (as opposed to signal or systematic dominated).
This is given in \citet{Cheng2018} as
\begin{align}
\label{eqn:PN}
P_{\rm N} = \frac{X^2Y\Omega_{\rm eff}T_{\rm sys}^2}{t_{\rm int}N_{\rm coherent}\sqrt{2N_{\rm incoherent}}},
\end{align}
where the $X$ and $Y$ scalars are the same as before, $T_{\rm sys}$ is the system temperature in milli-Kelvin, $t_{\rm int}$ is the correlator integration time in seconds, $N_{\rm coherent}$ is the number of sample averages done at the visibility level (i.e. before visibility squaring), and $N_{\rm incoherent}$ is the number of sample averages done at the power spectrum level (i.e. after visibility squaring).
$\Omega_{\rm eff}$ is the effective beam area given by $\Omega_{\rm eff} = \Omega_{p}^2 / \Omega_{pp}$, where $\Omega_{p}$ is the integral of the beam across the sky in steradians, and $\Omega_{pp}$ is the integral of the squared-beam across the sky in steradians \citep{Pober2013b, Parsons2014}.
We calculate $P_N$ for each redundant group using the baseline-averaged system temperature.
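\autoref{eqn:PN} is a direct product of scalars and is simple to evaluate; the helper below is a sketch with hypothetical argument names, where $T_{\rm sys}$ is in mK and $X$, $Y$ carry the cosmological units:

```python
import numpy as np

def noise_pspec(Tsys_mK, X, Y, omega_eff, t_int, n_coherent, n_incoherent):
    """Thermal-noise power spectrum:
    P_N = X^2 Y Omega_eff Tsys^2 / (t_int N_coherent sqrt(2 N_incoherent))."""
    return (X ** 2 * Y * omega_eff * Tsys_mK ** 2
            / (t_int * n_coherent * np.sqrt(2.0 * n_incoherent)))
```

Note the asymmetry between the two averaging counts: coherent averages reduce $P_N$ linearly, while incoherent averages only reduce it as a square root.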
The data are natively sampled at a 10.7 second cadence.
Before forming power spectra, we coherently average each visibility across LST for 3.6 minutes (20 samples), applying a fringe-stop in each averaging window to limit sky signal attenuation.
We select a wide spectral window between channels 400 to 700 (139 -- 168 MHz), and apply a Blackman-Harris windowing function before transforming to Fourier space.
Because the cosmological signal undergoes non-negligible evolution within such a bandwidth we would not normally use such a wide bandwidth for setting upper limits, however, we do this to achieve better resolution in delay space for diagnostic purposes.
We then cross-multiply the visibilities and apply the necessary normalization factors as per \autoref{eq:delay_spectrum}.
For simplicity, we only form power spectra by cross multiplying baselines with themselves (at adjacent times), and do not cross correlate different baselines within redundant groups.
Then we average the power spectra within each redundant group (i.e. an incoherent average).
For the 15-m, 29-m and 44-m groups this involves averaging 35, 28, and 20 independent baselines, respectively.
What we are left with is a single complex-valued power spectrum waterfall for each redundant group as a function of LST and delay, consisting of 60 leftover time bins and 200 delay bins.
In \autoref{fig:pspec_waterfall} we show this for the 15-m group with and without systematic removal (right and left panels, respectively).
In our final step, we take the real component of each power spectrum waterfall and average its absolute value over the remaining time bins.
This is done to make a higher signal-to-noise measurement of the noise floor at the level of the power spectrum waterfall: we could have gained more sensitivity by not taking the absolute value before averaging, but our point here is to make a visually clearer comparison with the known noise level rather than gain increased sensitivity.
\autoref{fig:hera_avg_pspec} shows the power spectra of the data without systematic removal (blue), with systematic removal (orange) and also shows the theoretical noise level given our visibility noise estimates and taking into account the various forms of averaging before and after squaring the visibilities (black dashed).
In this case, the systematic removal includes both cross coupling subtraction and reflection calibration.
We find that we can suppress the observed systematics by roughly two orders of magnitude in power, enabling us to achieve six orders of magnitude in dynamic range with respect to the peak foreground power for $|k_\parallel| > 0.2\ h$ Mpc$^{-1}$.
The power spectra show generally good agreement with our prediction of the thermal noise floor for delays considerably outside of the foreground wedge.
Although the geometric horizon for these short baselines is on the order of 50 -- 150 ns, the Blackman-Harris windowing function pushes this out by about +100 ns, such that their effective horizon is on the order of 150 -- 250 ns.
However, we can still see some amount of positive power near the transition region, particularly for the 15-meter group.
This could be due to uncalibrated bandpass terms in the data, low-level artifacts in the data missed by RFI flagging, or residual reflection and cross coupling systematics.
More complete gain calibration and deeper integrations will allow us to investigate this at higher SNR levels.
\citet{Gosh2019} also propose methods for subtracting systematics observed in the HERA Phase I instrument using a Gaussian Process based model.
With their model, they find good subtraction of the systematic down to similar dynamic ranges ($10^6$ in power), at the cost of possible signal loss at the $\sim10\%$ level.
Systematics of a similar nature were also observed in the HERA-19 Commissioning array \citep{Kohn2019}.
However, a direct comparison with this work is difficult because the array was re-configured en route to the Phase I configuration.
As a final note, we would like to clarify how we came to the noise level plotted in \autoref{fig:hera_avg_pspec}.
Noise in the interferometric visibility is a complex Gaussian random variable, meaning that when we form power spectra by squaring the visibilities we are left with a noise component that is drawn from a complex normal-product distribution.
A real-valued normal-product distribution can be shown to be described by a modified Bessel function of the second kind of order 0 \citep{Wells1962, Cui2016}.
A complex-valued normal-product random variable is simply the sum of two real-valued normal-product random variables, which means its probability density function (PDF) is a convolution of the Bessel function with itself, and this turns out to be a double-sided exponential distribution.
Therefore, after squaring the visibilities, noise in the power spectrum is drawn exponentially.
However, most power spectrum pipelines will average the data after squaring the visibilities (i.e. incoherent averaging), which will re-Gaussianize the data due to the Central Limit Theorem.
Indeed, to create \autoref{fig:pspec_waterfall} we perform a few dozen incoherent averages across redundant baselines after squaring the visibilities, meaning it is fair to assume the noise in our power spectrum is Gaussian-distributed.
However, in order to collapse our data along the LST axis to form \autoref{fig:hera_avg_pspec} we took the absolute value of the real-component of the power spectrum before averaging.
The absolute value operation transforms the noise from a Normally-distributed, mean-zero random variable into a random variable drawn from a half-Normal distribution, which is no longer mean-zero and has an expectation value of $\sigma\sqrt{\frac{2}{\pi}}$.
Recall from \autoref{eqn:PN} that $P_N$ tells us the expected RMS of the real (or imaginary) component of the complex power spectrum due to thermal noise.
Therefore, the act of taking the absolute value of the real-component of the power spectra and averaging across LST means we need to multiply our final $P_N$ estimate by a factor of $\sqrt{2/\pi}$, which is what is actually plotted in \autoref{fig:hera_avg_pspec} as the black-dashed line.
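The $\sqrt{2/\pi}$ factor is easy to verify with a quick Monte-Carlo draw (an illustrative check, not part of the analysis pipeline):

```python
import numpy as np

def halfnormal_mean_factor(sigma=1.0, n=1_000_000, seed=0):
    """Monte-Carlo estimate of E[|x|] / sigma for x ~ N(0, sigma^2);
    analytically this equals sqrt(2/pi) ~= 0.798, the factor by which
    P_N is rescaled after the absolute-value-then-average step."""
    rng = np.random.default_rng(seed)
    return np.abs(rng.normal(0.0, sigma, n)).mean() / sigma
```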
\section{Summary}
In this work we investigate data from HERA Phase 1 for signal chain reflection and antenna cross coupling systematics.
We find cable reflections on the order of $\sim10^{-3}$ in amplitude, and a systematic tail in the auto-correlation visibilities straddling the EoR window at roughly the $10^{-3}-10^{-4}$ level, which is considerably larger than that expected from simulations of the HERA dish and feed.
If not mitigated, the systematic tail observed in the auto-correlations may prevent HERA Phase I from setting competitive upper limits on the EoR, let alone detecting it.
We show that reflection calibration can help to suppress some of these features by about an order of magnitude in the visibility at specific $k_\parallel$ modes.
The presence of the systematic tail in the auto-correlation may be indicative of highly complex cable sub-reflections that will be hard to calibrate out down to EoR levels, even with the methods demonstrated here.
We also inspect the data for antenna cross-coupling systematics and find that they contaminate the data at high delays near the edge of the targeted EoR modes at $k_\parallel\sim0.5\ h$ Mpc$^{-1}$.
We also find evidence for excess emission at each baseline's geometric horizon that is likely due to either 1) a pitchfork effect \citep{Thyagarajan2016} or 2) feed-to-feed mutual coupling.
These features produce non-negligible spillover into the EoR window and thus need to be controlled for foreground-avoidance power spectrum approaches.
We investigate three East-West baselines of increasing length (15-m, 29-m \& 44-m) that exhibit particularly strong systematics, and find that we can model and remove both of the contaminating components in the EoR window down to the integrated noise floor of each baseline.
We then form power spectra from three redundant groups for baselines across the entire array.
We show that by combining reflection calibration and cross coupling subtraction on specific baseline orientations, we can suppress all visible systematics for $k_\parallel > 0.2\ h$ Mpc$^{-1}$ down to the integrated noise floor of the array for a single nightly observation, with the exception of a weak supra-horizon tail at low $k$ that merits further investigation through improved bandpass calibration and RFI flagging.
Instrumental bandpass calibration for HERA Phase I is explored in \citet{Kern2019c} and \citet{Dillon2019}.
This work shows that the immediate systematics seen in the HERA Phase I system can be modeled and dealt with down to a dynamic range of $10^{-6}$ in the power spectrum, even with an extremely simple approach to direction-independent, antenna-based calibration.
While this is reassuring, fiducial EoR levels are expected to appear at dynamic ranges of $\sim10^{-10}$ in the power spectrum for low-$k$ modes \citep{Thyagarajan2016}.
Assuming that the systematics studied here can continue to be subtracted to lower noise levels and barring the appearance of other systematics, this work suggests that fully integrated HERA Phase I may have the potential to set competitive upper limits on the $21\,\textrm{cm}$\ power spectrum.
This material is based upon work supported by the National Science Foundation under Grant Nos. 1636646 and 1836019 and institutional support from the HERA collaboration partners.
This research is funded in part by the Gordon and Betty Moore Foundation.
HERA is hosted by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation.
A. Lanman and J. C. P. would like to acknowledge NASA Grant 80NSSC18K0389.
A. Liu acknowledges support from a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant and a Discovery Launch Supplement, as well as the Canadian Institute for Advanced Research (CIFAR) Azrieli Global Scholars program.
Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013.
G. B. acknowledges funding from the INAF PRIN-SKA 2017 project 1.05.01.88.04 (FORECaST), support from the Ministero degli Affari Esteri della Cooperazione Internazionale - Direzione Generale per la Promozione del Sistema Paese Progetto di Grande Rilevanza ZA18GR02 and the National Research Foundation of South Africa (Grant Number 113121) as part of the ISARP RADIOSKY2020 Joint Research Scheme, from the Royal Society and the Newton Fund under grant NA150184 and from the National Research Foundation of South Africa (grant No. 103424).
\section{Crystal growth and structure}
Recent systematic, synthetic explorations of the Mn$-$Bi$-$Te system have revealed new compounds that are ordered (Bi$_2$Te$_3$)$_n$(MnBi$_2$Te$_4$) ($n = 1, 2$) modular stackings of quintuple (Bi$_2$Te$_3$) and septuple (MnBi$_2$Te$_4$) blocks (Fig.~\ref{fig:Mn147struct}a) \cite{26,27}. In this section, we report a new crystal-growth protocol for $n = 1$, assisted by our preceding thermochemical studies~\cite{27}, and the structure elucidation by single-crystal X-ray diffraction (SCXRD). In Ref.~\cite{27} we developed robust synthetic protocols for phase-pure powders of the $n = 1$ and $n = 2$ members based on differential scanning calorimetry. For all members of the (Bi$_2$Te$_3$)$_n$(MnBi$_2$Te$_4$) ($n = 0, 1, 2$) series we found a ubiquitous deviation from the idealized compositions~\cite{20, 27}. Henceforward, the title compound is denoted as Mn147, keeping in mind its non-stoichiometry, and the MnBi$_2$Te$_4$ compound is denoted as Mn124.
As shown first in~\cite{27}, Mn147 is thermodynamically stable only in a high-temperature interval well above room temperature. Whereas Mn124 melts at 600(5)~$^\circ$C~\cite{20}, the melting point of Mn147 is 590(5)~$^\circ$C~\cite{27} and thus offers only a very narrow window above the crystallization point of Bi$_2$Te$_3$ (586~$^\circ$C) in which Mn147 can be grown from a melt.
Crystal growth is thus very challenging, and our experiments show that polycrystals grown outside the determined temperature window exhibit stacking variants of both Mn124 and Mn147. Mn147 is not thermodynamically stable at room temperature, but can be obtained as a metastable product by quenching from 585~$^\circ$C~\cite{27}. Based on these findings, we have established an optimized crystal-growth technique for Mn147: mm-sized platelets (Fig.~\ref{fig:Mn147struct}b) can be obtained by stepwise slow cooling of a heterogeneous MnTe$_{(s)}$/Bi$_2$Te$_3$$_{(l)}$ melt and long-term annealing at a precisely controlled temperature of 585~$^\circ$C, followed by rapid water-quenching (see Appendix \ref{app:crystal} for further details). These high-quality crystals enable the following studies of the crystal structure, the magnetic order, the transport and the surface electronic structure (sections II and III). The compositions of all crystals used for physical property measurements were verified by energy-dispersive X-ray spectroscopy (EDX).
\begin{figure}
\center
\includegraphics[width = 8.5cm]{figure1.pdf}
\caption{\label{fig:Mn147struct} \textbf{a}, Crystal structure of Mn$_{1-x}$Bi$_{4+2x/3}$Te$_7$ (GeBi$_4$Te$_7$ structure type) with alternating (Bi$_2$Te$_3$) and (Mn$_{1-x}\Box_{x/3}$Bi$_{2+2x/3}$Te$_4$) blocks. Mn atoms are shown in green, Bi -- in blue, Te -- in orange. \textbf{b}, As-grown crystals with the experimental composition (EDX) Mn$_{0.8(1)}$Bi$_{4.3(1)}$Te$_7$.}
\end{figure}
Our current structure elucidation by SCXRD confirms cationic non-stoichiometry in Mn147 in full accordance with our previous data on Mn147 powders~\cite{27}. The disorder manifests itself in Mn$^{2+}$/Bi$^{3+}$ antisite defects and related cationic vacancies ($\Box$) in the septuple (Mn$_{1-x}$$\Box_{x/3}$Bi$_{2+ 2x/3}$Te$_4$) blocks, predominantly in the \textit{1a} site in the middle of the septuple block (Tables S1, S2, Supplementary Note 1 \footnote{See Supplemental Material at [URL will be inserted by
publisher] for: Supplementary Notes 1-7, Figures S1-S13 and Tables S1-S4.}). Interestingly, the (Bi$_2$Te$_3$) block appears unaffected by cationic intermixing (Table S3). These mixed occupancies of the cationic positions and Mn vacancies result in a non-stoichiometric composition Mn$_{0.75(3)}$Bi$_{4.17(3)}$Te$_7$ as refined from a SCXRD experiment. This stoichiometry slightly deviates from the one previously determined for polycrystalline powders, Mn$_{0.85(3)}$Bi$_{4.10(2)}$Te$_7$~\cite{27}, thus indicating that a homogeneity range $0.15 \leq x \leq 0.25$ may exist for the Mn$_{1-x}$$\Box$$_{x/3}$Bi$_{4+{2x/3}}$Te$_7$ phase.
The cationic disorder, however, neither alters the trigonal lattice symmetry of Mn147 (sp. gr. $P\bar{3}m$1; the GeBi$_4$Te$_7$ structure type~\cite{28}) nor inhibits long-range magnetic order. A similar intrinsic phenomenon has been reported for isostructural~\cite{28}, as well as structurally~\cite{20} and compositionally related~\cite{29,30}, compounds. In contrast to some of them, we find no indications of massive stacking faults in our crystals by X-ray or electron diffraction methods~\cite{27}.
\section{Magnetic properties}
In this section, we analyze the bulk and surface magnetic properties of Mn147 crystals, based on which we discuss the topological electronic properties in section III. Electrical resistivity ($\rho_{xx}$) measurements as a function of temperature ($T$) reveal metallic behaviour (see Fig.~\ref{fig:Mn147}a). Focusing on the most salient features of the data, a clear upturn anomaly is visible at 13~K, which is reminiscent of the typical signature of magnetic ordering in itinerant materials~\cite{31}. The upturn indicates enhanced fluctuations causing electron scattering, which is strongly reduced in the ordered phase, where a steep decrease of $\rho_{xx}$ occurs. Upon lowering the temperature, a jump-like drop at about 5~K reveals a further reduction of scattering, possibly related to a rearrangement of the magnetic structure.
\begin{figure*}
\center
\includegraphics[width =\textwidth]{figure2.pdf}
\caption{\label{fig:Mn147} Magnetic and transport properties of Mn147. \textbf{a}, In-plane electrical resistivity as a function of temperature. \textbf{b} and \textbf{c}, Normalized magnetization as a function of temperature for fields applied parallel and perpendicular to the $ab$ planes. Open and filled symbols correspond to ZFC and FC protocols, respectively. \textbf{d} and \textbf{e}, Hall resistivity and magnetization as a function of the field applied perpendicular to the $ab$ planes. \textbf{f}, Magnetization as a function of the field applied parallel to the $ab$ planes. The hysteretic behavior for temperatures $T < 5$~K indicates ferromagnetic (intra-plane) interactions.}
\end{figure*}
\begin{figure*}
\center
\includegraphics[width = \textwidth]{figure3.pdf}
\caption{\label{fig:Mn147_2} Spectroscopy of magnetic properties in Mn147. \textbf{a}, Typical ESR spectra measured at $T =$ 4~K and $T =$ 30~K. \textbf{b}, Frequency dependence of the resonance field of the ESR signal measured at temperatures of 4~K, 15~K, 30~K. \textbf{c}, The anisotropy gap $\Delta$ as a function of temperature extracted from \textbf{b} by fitting $\nu = \Delta + g \mu_B \mu_0 H_0/ h$. \textbf{d}, XMCD and \textbf{e}, XMLD data for Mn147(0001) obtained at the Mn $L_{2,3}$ absorption edge with circularly polarized (RCP and LCP) and linearly polarized (LV and LH) light, respectively. Measurements were performed in normal (NI) and grazing (GI) light incidence geometries, as sketched in the inset of \textbf{d}. XMCD signals are shown for an external field ($\mu_0 H =$ 6~T) along the light incidence direction and for remanent conditions ($\mu_0 H = 0$) at $T =$ 2~K. XMLD data without external field are reported for different temperatures.}
\end{figure*}
Indeed, measurements of the magnetization ($M$) in an external field ($H$) on a Mn$_{0.82(7)}$Bi$_{4.2(1)}$Te$_{7.00(5)}$ crystal (as determined by EDX) as a function of temperature show an antiferromagnetic phase transition at $T_N =$ 13 K (Fig.~\ref{fig:Mn147}b). For $H \bot ab$ and small fields, such as $H =$ 200 Oe, a ferromagnetic-like increase occurs upon further cooling. This is followed by a splitting of field-cooled (FC) and zero field-cooled (ZFC) curves at around 7~K, as well as a kink and a peak at about 5~K, respectively, the latter coinciding with the jump in the resistivity. Both features are rapidly suppressed by applying an external magnetic field.
In the magnetically ordered phase, interesting metamagnetic behavior occurs for $H \bot ab$, as evidenced by the magnetization curves (Fig.~\ref{fig:Mn147}e). For example, at $T =$ 10~K a spin-flip-like feature is observed, and for $T <$ 7~K a dominant hysteresis typical of ferromagnets becomes apparent. This complex magnetic phenomenology is also reproduced by the Hall resistivity $\rho_{xy}$ (Fig.~\ref{fig:Mn147}d). For $T < T_N$, the system exhibits an anomalous Hall effect tracking the observed metamagnetic behavior. For $T \leq$ 7~K the data reveal an additional metamagnetic transition in the low-field region $\mu_0 H \leq$ 300~Oe. This observation is confirmed by a close inspection of the magnetization data. At 2.5~K a large hysteresis associated with global ferromagnetism is present. Above $T_N$ the anomalous contribution disappears, and only a standard component persists with a negative sign consistent with $n$-type conduction (inset in Fig.~\ref{fig:Mn147}d).
The magnetic anisotropy of the compound is examined via additional measurements for $H \parallel ab$ (Fig.~\ref{fig:Mn147}c and f). The magnetic moment values in the ordered state are much lower in this case. At lower fields both an antiferromagnetic transition and a ZFC--FC splitting are observed, but the suppression of these features occurs at higher fields than for $H \bot ab$. For the $H \parallel ab$ direction the magnetization increases almost linearly with the applied magnetic field as expected for an antiferromagnet. On top of that, a spin reorientation at lower temperatures is indicated by an increase of $M/H$ below ca.~10~K and a ferromagnetic net magnetization is clearly present.
Apparently, the field necessary to observe such a feature can be sample-dependent, as is evident from a comparison of our results with recent reports \cite{Wueaax9989, PhysRevB.100.155144,arXiv:1910.11653}. This finding may be associated with slight differences in the Mn content due to Mn/Bi intermixing and Mn vacancies, which, however, have no influence on the lattice symmetry. Notably, all reports are in line regarding the behavior of the magnetization as a function of magnetic field in both directions. The results show that the system is not yet fully saturated at $\mu_0 H =$ 5~T, as indicated by a small slope at higher fields (inset in Fig.~\ref{fig:Mn147}e). High-field experiments are necessary to gain a better insight into the details of the magnetic phase diagram.
The Curie--Weiss fitting of the high-temperature magnetization data in both directions yields positive values of the Curie--Weiss temperature, $\theta_{CW}^{ab}=$ 13.7(5)~K and $\theta_{CW}^{c}=$ 14.7(5)~K, thus confirming the predominantly ferromagnetic character of the largest (intra-plane) exchange interactions (see Supplementary Note~2, Fig.~S1 \cite{Note1}). In addition, the estimated effective magnetic moments, given the homogeneity range ($0.15 \leq x \leq 0.25$), fall into the ranges $5.2 \mu_B \leq \mu_{eff}^{ab} \leq 5.6 \mu_B$ and $5.1 \mu_B \leq \mu_{eff}^{c} \leq 5.5 \mu_B$, where $\mu_B$ is the Bohr magneton, which suggests the high-spin manganese(II) configuration $S = \frac{5}{2}$. The microscopic nature of the different magnetic states as a function of field and temperature is a matter of debate and requires further elucidation.
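For orientation, the spin-only effective moment expected for high-spin Mn(II) can be evaluated directly; a minimal sketch (the $g$-factor of 2 is taken from the ESR result):

```python
import math

g = 2.0   # g-factor close to 2, as found by ESR
S = 2.5   # high-spin Mn(II), S = 5/2

# Spin-only effective moment in units of the Bohr magneton:
# mu_eff = g * sqrt(S * (S + 1)) ~ 5.92 mu_B per Mn, the reference value
# against which the measured ranges of 5.1--5.6 mu_B are compared.
mu_eff = g * math.sqrt(S * (S + 1))
```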
The observed high-frequency electron spin resonance (ESR) signal of Mn147 (Fig.~\ref{fig:Mn147_2}a--c) is almost isotropic above $T \sim$ 30 K and follows the typical Mn(II)-ion paramagnetic resonance condition $h\nu = g \mu_B \mu_0 H_0 \mid m_s^z - (m_s^z \pm 1)\mid$ with a $g$-factor very close to 2. Here, $h$ is the Planck constant, and $m_s^z$ is the projection of the spin on the quantization (magnetic field) axis. Importantly, below $T \sim 30$\,K an energy gap $\Delta$ develops in the ESR response, and the resonance condition is modified to $\nu = \Delta + g \mu_B \mu_0 H_0/h$. The measured linear dependence of $\nu$ vs.\ $\mu_0 H_0$ for $\mu_0 H \parallel ab$ (Fig.~\ref{fig:Mn147_2}b) is typical for the wave vector $q = 0$ spin wave excitation (ferromagnetic resonance --- FMR) in an easy-axis-type ferromagnetically ordered material, where $\Delta$ represents the magnetic anisotropy gap. Considering the smallness of $\Delta$ as compared to the applied magnetic fields, such linearity is incompatible with the resonance response of an ordered collinear antiferromagnet in this field regime~\cite{32,33}. The opening of the excitation gap $\Delta$ at $T \leq$ 30 K and its gradual increase (Fig.~\ref{fig:Mn147_2}c) evidence significant ferromagnetic spin correlations on the time scale of ESR ($10^{-11}$~s) unrelated to the 3D antiferromagnetic ordering which sets in at $T_N =$ 13~K. Therefore, given the pronounced low-dimensionality of the system, it is likely that the Mn-containing blocks are inherently ferromagnetic and give rise to a typical FMR signal in strong fields. At the same time, the application of a magnetic field suppresses the expected much weaker inter-layer antiferromagnetic coupling responsible for the 3D long-range order at $T_N$ in zero and small fields, whereas a paramagnetic state with strong intra-plane ferromagnetic correlations --- denoted in the following as correlated paramagnet (CPM) --- persists up to temperatures of the order of 30 K.
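Extracting $\Delta$ and $g$ from frequency--field data amounts to a straight-line fit of the modified resonance condition $\nu = \Delta + g \mu_B \mu_0 H_0/h$. A self-contained sketch on synthetic data (the gap value chosen here is hypothetical, not a measured one):

```python
MU_B = 9.2740100783e-24    # Bohr magneton (J/T)
H_PLANCK = 6.62607015e-34  # Planck constant (J s)

g_true = 2.0       # assumed g-factor
delta_true = 50e9  # assumed anisotropy gap in Hz (hypothetical value)

# Synthetic resonance branch: nu = delta + g * mu_B * B0 / h
fields = [1.0, 2.0, 4.0, 8.0, 12.0]   # mu_0 H_0 in tesla
freqs = [delta_true + g_true * MU_B * b / H_PLANCK for b in fields]

# Least-squares line through (B0, nu): the slope gives g * mu_B / h and
# the intercept gives the anisotropy gap delta.
n = len(fields)
mx = sum(fields) / n
my = sum(freqs) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(fields, freqs))
         / sum((x - mx) ** 2 for x in fields))
intercept = my - slope * mx

g_fit = slope * H_PLANCK / MU_B
delta_fit = intercept
```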
To complement the magnetic characterization, we carried out X-ray magnetic circular (XMCD) and linear dichroism (XMLD) experiments at the Mn $L_{2,3}$ absorption edge in total electron yield (TEY) mode with a typical probing depth of a few nm~\cite{34}. The XMCD data collected at $T =$ 2~K provide evidence for a substantial remanent net magnetization of the Mn ions along the surface normal (Fig.~\ref{fig:Mn147_2}d), in sharp contrast to our previous observations for Mn124 with antiferromagnetic order~\cite{20}. This confirms that the spontaneous ferromagnetic polarization observed in the bulk magnetization data below $T \sim 5$~K extends up to the surface layers. A sizable XMLD signal in grazing light incidence and its absence in normal incidence further confirm the remanent out-of-plane magnetization (Fig.~\ref{fig:Mn147_2}e), in agreement with the ESR results. The XMLD signal gradually diminishes with increasing temperature, confirming its magnetic origin and indicating the transition into the paramagnetic regime, in line with our bulk magnetization results.
Using density-functional theory (DFT) calculations, we considered various possible magnetic structures for the ordered MnBi$_4$Te$_7$ model (see Supplementary Note~3 \cite{Note1}). The Mn atoms are found in the high-spin Mn(II) configuration, in agreement with the high-temperature magnetization measurements and similar to Mn124~\cite{20}. Our calculations show that the magnetic moments within the Mn layers prefer intra-plane ferromagnetic order with an out-of-plane spin configuration. The first-neighbor coupling is estimated as $-0.09$~meV$/\mu_B^2$, which is very close to the value reported for Mn124~\cite{19}. Furthermore, we find that antiferromagnetic ordering between the Mn layers (AFM1 state) results in a smaller total energy than the ferromagnetic ordering (FM state). The energy difference is, however, only about 0.5~meV/Mn atom, which is an order of magnitude smaller than in Mn124~\cite{18}. Moreover, this value is very close to the magnetic anisotropy energy, which yields about 0.5~meV/Mn atom in favor of the easy-axis configuration. These estimates corroborate a scenario of competing magnetic states differing slightly in energy and, hence, the more complex magnetic response of Mn147 as compared to Mn124.
\section{Electronic structure and topological surface state}
Having established the crystal structure and the magnetic properties of Mn147, we will now discuss its electronic structure based on DFT calculations and angle-resolved photoemission (ARPES) experiments. The structural resemblance between Mn147 and Bi$_2$Te$_3$ raises the question to what extent Mn147 inherits properties from the (Bi$_2$Te$_3$) building blocks. We begin our theoretical analysis of the topology of the electronic structure with an auxiliary calculation without spin polarization (see Supplementary Note~4~\cite{Note1}), which shows that, in the absence of magnetism, the system would be both a strong topological insulator and a topological crystalline insulator, just like Bi$_2$Te$_3$~\cite{35}.
The influence of magnetism on the topological properties is first examined for the band-inversion phenomena. Fig.~\ref{fig:Mn147_3}a,b show the band structure of Mn147 for the FM and AFM1 (layer-wise AFM) order, respectively (see Fig.~S3 \cite{Note1}). The symbol size is proportional to the overlap between the corresponding Bloch states and the indicated orbitals. For reference, Fig.~S3f shows the well-known case of Bi$_2$Te$_3$, whose nontrivial topology originates in the inversion between Bi- and Te-$p_z$ orbitals of opposite parity~\cite{13,14,Note1}. The same orbitals contribute to the band inversion in Mn147 with the difference that the inverted bands are spin-polarized in the presence of a ferromagnetic component. Namely, the occupied Bi states form two bands (B1 and B3 in Fig.~\ref{fig:Mn147_3}a) of predominantly opposite spin. This effect is appreciable because the Bi atoms that are more involved in the band inversion are the ones closest to the Mn atomic layers.
Naturally, in the AFM1 configuration the spin polarization in each band is compensated, i.e. each band is spin-degenerate. In this phase, the system realizes a ${Z}_2$ antiferromagnetic topological insulator (AFMTI), protected by a combination of time-reversal symmetry and translation along the $c$ axis. This case is analogous to the recently established AFMTI state in Mn124~\cite{19}. The ${Z}_2$ topology is analyzed in Fig.~\ref{fig:Mn147_3}c based on a Wannier charge center (WCC) in the $k_z=0$ plane. An arbitrary horizontal line crosses the WCC an odd number of times, and thus the $k_z=0$ plane behaves as a quantum spin Hall insulator \cite{PhysRevB.81.245209}. This ensures the existence of gapless surface states on side surfaces parallel to the $c$ axis.
A natural question is to what extent the observed structural tendencies toward Mn/Bi intermixing and Mn vacancies affect the electronic structure.
To address this point we performed supercell calculations for the compositions Mn$_{0.75}$Bi$_{4.25}$Te$_7$ and Mn$_{0.50}\Box_{0.25}$Bi$_{4.25}$Te$_7$ (lifting the restriction of electroneutrality used in the SCXRD refinements).
Our calculations show that the fundamental gap remains open, suggesting that, in the present amounts, Mn/Bi intermixing and Mn vacancies do not affect the non-trivial topology of the material, but rather only shift the chemical potential (see Supplementary Note~7~\cite{Note1}).
The existence of a topologically non-trivial surface state is confirmed by the calculated (0001) surface spectral density in Fig.~\ref{fig:Mn147_3}d for a quintuple-layer (QL) termination. As expected for the AFM1 state with an intra-plane ferromagnetic configuration the surface spectral density shows a gap-like feature at the $\bar{\Gamma}$-point \cite{19}. We find a similar situation in our surface calculations for the case of a septuple-layer (SL) termination (Fig. S8) and also for a FM magnetic configuration (Fig.~S5).
\begin{figure}
\center
\includegraphics[width = 8.7cm]{figure4.pdf}
\caption{\label{fig:Mn147_3} Band inversion phenomena in Mn147 (GGA+$U$+SOC). \textbf{a}, Band structure in the ferromagnetic configuration. The symbol size in each $k$-point and band is proportional to the overlap between the corresponding Bloch state and the Te and Bi $p$-orbitals, depicted in different colors. Filled (empty) dots correspond to spin down (up). The black arrow indicates the energy of the Weyl node of lowest energy in the conduction band. \textbf{b}, Band structure for the antiferromagnetic AFM1 configuration.
\textbf{c}, Wannier center evolution in the $k_z=0$ plane. $k_x$ is the crystal momentum along the primitive lattice vector $\overline{b}_1$ and $\overline{b}_2$ is the second primitive vector in the $k_z =0$ plane. \textbf{d}, Mn147(0001) surface spectral density along the $\bar{\Gamma}-\bar{M}$ direction for a quintuple layer termination.}
\end{figure}
\begin{figure*}
\center
\includegraphics[width = \textwidth]{figure5.png}
\caption{\label{fig:Mn147_4} Electronic structure of the Mn147(0001) surface as measured by ARPES. \textbf{a}, Overview data set of the valence band structure obtained at $T =$ 8~K showing characteristic surface states SS1 and SS2 and a feature related to Mn $3d$-states (cf. Fig.~S9--S12 \cite{Note1}). \textbf{b,c,f}, High-resolution data sets of the electronic structure near $E_F$ obtained at different photon energies and a temperature of $T =$ 8~K, showing a topological surface state (TSS) in the gap between conduction and valence-band derived states (BCB and BVB). \textbf{d}, Photon-energy dependence of the ARPES intensity at $E_F$ ($T =$ 8~K). \textbf{e}, Same as in \textbf{f}, but for $T =$ 80~K. \textbf{g}, Same as in \textbf{f}, but for a septuple layer (SL) termination. All other data sets in Fig.~4 are assigned to a quintuple layer (QL) termination. The ARPES data sets in Fig.~5 were measured along the $\bar{\Gamma} - \bar{M}$ high-symmetry direction.}
\end{figure*}
To experimentally support the non-trivial topology of the electronic structure predicted by our calculations we conducted ARPES measurements on the natural cleaving (0001) surface of Mn147 (Fig.~\ref{fig:Mn147_4}). The overview band structures (Fig.~\ref{fig:Mn147_4}a, Fig.~S9d,e in Supplementary Note~6 \cite{Note1}) bear clear resemblance to previous ARPES experiments for the topological insulator Bi$_2$Te$_3$~\cite{13,36}. Most importantly, we likewise find a state with a V-shaped dispersion in the bulk gap between the conduction and valence band states (BCB and BVB), near the Fermi level $E_F$ (Fig.~\ref{fig:Mn147_4}b,g). Systematic photon-energy-dependent measurements confirm the surface character of this state (Fig.~\ref{fig:Mn147_4}d). By comparison to our density-functional calculations in Fig.~\ref{fig:Mn147_3}d we identify it as a topologically non-trivial surface state (TSS).
The observation of conduction band states below $E_F$ is in line with our transport measurements, although the prominent feature BCB shows a markedly 2D character and possibly arises from band bending, as commonly found for Bi$_2$Te$_3$~\cite{37} and Bi$_2$Se$_3$~\cite{38}. At higher binding energies in the valence band, we observe additional surface states SS1 and SS2 (Fig.~\ref{fig:Mn147_4}a, Fig.~S9d,e) that are similar to those previously detected for Bi$_2$Te$_3$~\cite{36}. These states are highly surface-localized well within a single (Bi$_2$Te$_3$) QL~\cite{36}, suggesting that our results in Fig.~\ref{fig:Mn147_4}a--f represent a surface terminated by a Bi$_2$Te$_3$ QL. This is supported by our calculations in Fig.~S7a \cite{Note1} for Bi$_2$Te$_3$-terminated Mn147, where similar surface states are found. Measurements on a single (0001) surface also revealed areas with a different well-defined band structure (Fig.~\ref{fig:Mn147_4}g, Fig.~S10), which we tentatively attribute to the second possible surface termination by a (MnBi$_2$Te$_4$) SL. The reduced data quality for this SL-termination may arise from the higher defect density in the SL than in the QL evidenced by our XRD results in section 1. Nevertheless, we observe qualitatively similar features in ARPES as for the QL-termination. Both terminations accommodate a dispersionless feature at a binding energy near 3.8~eV, which can be attributed to the Mn $3d$-states (Fig.~\ref{fig:Mn147_4}a, Fig.~S10), as confirmed by resonant photoemission measurements at the Mn $L$-edge (Fig.~S11).
Unlike for Bi$_2$Te$_3$, our measurements for Mn147 in the AFM1 state suggest the presence of a finite separation between the TSS and the BVB. This gap-like feature shows a subtle photon-energy dependence arising mainly from changes in the spectral appearance of the BVB maximum, as exemplified by the three data sets in Fig.~\ref{fig:Mn147_4}b,c,f. The latter consists of at least two different features within a narrow energy range that exhibit complex $h\nu$-dependent intensity variations and possibly arise from a coexistence of surface- and bulk-derived states. Measurements at $T =$ 80~K (Fig.~\ref{fig:Mn147_4}e) do not show strong changes in the spectra for $h\nu =$~52 eV. However, towards higher temperatures we observe an increased spectral-weight filling of the gap-like feature, suggesting its gradual suppression with increasing temperature (Fig. S12).
A comprehensive picture of the detailed spectral-weight behaviour of the TSS near the Dirac point is yet to emerge. Gap-like features, even in the paramagnetic regime, were also found in different magnetically doped TIs~\cite{39,40}, in Mn124 \cite{19,21} and, very recently, also in Mn147~\cite{Wueaax9989, arxiv:1910.13943, arXiv:1910.11653, arXiv:1905.02154}. We expect our detailed discussion of the $h\nu$-dependence to be a useful ingredient; for instance, it indicates a gap-like feature considerably smaller than the one observed in \cite{Wueaax9989}.
Additionally, other reports have found a vanishing gap in Mn147~\cite{arXiv:1910.11323, arXiv:1910.11014,PhysRevX.9.041039} and argued for a surface magnetic structure possibly different from that in the bulk. At low temperatures, our XMCD measurements provide evidence for a finite net magnetization at the surface, which motivates future ARPES measurements in this temperature regime. These developments show that additional spectroscopic experiments, e.g. including spin resolution, scanning tunneling microscopy and transport experiments on thin flakes, will likely help to further elucidate this essential point.
\section{Magnetic topological phases}
Motivated by the experimental observations of a TSS due to band inversion and of the competition between different magnetic phases, we outline, based on our calculations, a topological phase diagram as a function of temperature and magnetic field. Fig.~\ref{fig:Mn147_5} sketches the phases theoretically explored in the ordered MnBi$_4$Te$_7$ model. Below the N\'{e}el temperature, our calculations predict that the antiferromagnetic phase (AFM1) hosts a $Z_2$ topological insulating phase protected by a combination of the time-reversal and translation symmetries along the polar axis. In the lowest-temperature regime the experiments reveal a phase with net magnetization at zero magnetic field (see Fig.~\ref{fig:Mn147} e and f). To mimic this regime we consider a collinear FM state which, interestingly, also features a non-trivial topology (see Supplementary Note~5 \cite{Note1}). Namely, this phase realizes a topological crystalline insulator (TCI) tunable by the magnetization orientation. Specifically, the crystal structure presents three mirror planes that contain the polar axis and are related by $2\pi /3$ rotations. When the magnetization points perpendicular to one of these planes, it preserves the corresponding reflection symmetry. The calculated mirror Chern number in such a magnetic configuration equals $-1$. For other magnetization orientations, in particular out-of-plane, the topological protection at the (0001) surface is lifted and the corresponding surface state is gapped (see Fig.~S5 \cite{Note1}). In addition, as shown in Fig.~\ref{fig:Mn147_3}a, doped samples can be of interest, since a ferromagnetic component splits the double degeneracy of the bulk bands and opens up the possibility of Weyl physics. Indeed, Weyl nodes are revealed only 24~meV above the gap, very close to the bottom of the conduction bands (see the arrow in Fig.~\ref{fig:Mn147_3}a and Fig.~S6).
\begin{figure}
\center
\includegraphics[width = 0.99\linewidth]{figure6.png}
\caption{\label{fig:Mn147_5} Schematic topological phase diagram of MnBi$_4$Te$_7$. The scheme follows the experimentally observed trends: a correlated paramagnetic state above $T_N$, followed by an antiferromagnetic phase that at lower temperatures evolves into a magnetic state with a strong ferromagnetic component. The text in red highlights possible non-trivial topology (see text for details). }
\end{figure}
\section{Conclusion}
We have presented a comprehensive study of structural, magnetic and electronic properties of the Bi$_2$Te$_3$-derivative Mn$_{0.75(3)}$Bi$_{4.17(3)}$Te$_7$, i.e. the ($n = 1$)-member Mn147 of the modular (Bi$_2$Te$_3$)$_n$(MnBi$_2$Te$_4$) series. Our results indicate that Mn147 realizes an intrinsic magnetic topological insulator, similar to the recently established first antiferromagnetic topological insulator MnBi$_2$Te$_4$ for $n = 0$ \cite{19}. Unlike MnBi$_2$Te$_4$, Mn147 develops a strong out-of-plane ferromagnetic component at low temperatures. In this regime Mn147 realizes the first instance of a compound that features both an \textit{intrinsic} net magnetization and a topologically non-trivial surface state originating from a band inversion. In the thin-film limit these properties could facilitate the realization of the quantum anomalous Hall effect in an intrinsic material, as recently reported for Mn124 where, however, it requires large external fields due to the robust antiferromagnetism~\cite{41,42}.
Moreover, our calculations show how the complex magnetic phase diagram of Mn147, which we observe experimentally, may facilitate tunability between different topological regimes, including antiferromagnetic topological and topological crystalline insulator states.
\begin{acknowledgments}
We thank E.V.~Chulkov and M.M.~Otrokov (DIPC, San Sebastian, Spain, and Tomsk State University, Tomsk, Russia) for the initial impetus for this and other works on manganese-bismuth tellurides as prospective topological materials. This work was supported by the German Research Foundation (DFG) in the framework of the Special Priority Program (SPP 1666, IS 250/1-2) ``Topological Insulators'', by the ERA-Chemistry Program (RU 766/15-1), by CRC ``Tocotronics'' (SFB 1170), by CRC ``Correlated Magnetism -- From Frustration to Topology'' (SFB-1143, project id 247310070) and by W\"urzburg--Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter -- \textit{ct.qmat} (EXC 2147, project-id 39085490). Part of this work was carried out with the support of the Diamond Light Source, beamline I05 (proposal SI22468-1). We acknowledge experimental support by K.~Kissner, M.~\"Unzelmann, S.~Schatz, Chul Hee Min (University of W\"urzburg), F.~Diekmann, S.~Rohlf and M.~Kall\"ane (University Kiel) as well as the beamline staff at the Maestro endstation (ALS, Berkeley), at the APE beamline (Elettra, Trieste) and at the beamline P04 of PETRA III (DESY Hamburg).
J.I.F. thanks the Alexander von Humboldt Foundation for financial support through the Georg Forster Research Fellowship Program.
K.~Mehlawat acknowledges the Hallwachs--R\"ontgen Postdoc Program of \textit{ct.qmat} for financial support.
J.I.F., R.R. and M.Ri. thank U. Nitzsche for technical assistance. J.I.F. and F.C. thank the IFW Excellence Program.
\end{acknowledgments}
\section{Appendix}
In this Appendix we provide further information on the different methods used in this work.
\subsection{Crystal Growth}
\label{app:crystal}
First indications for the existence of Mn$_{1-x}$$\Box_{x/3}$Bi$_{4+2x/3}$Te$_7$ ($0.15 \leq x \leq 0.25$) and its composition were obtained in our previously published DSC experiments~\cite{20}. Attempts to synthesize a phase-pure powder of Mn$_{0.85}$Bi$_{4.10}$Te$_7$ following the synthetic route described in Ref.~\cite{20}, namely by long-term annealing of a stoichiometric mixture of Bi$_2$Te$_3$ and $\alpha$-MnTe at subsolidus temperature, led to considerable amounts (up to 15~wt.-\%) of MnBi$_2$Te$_4$ admixtures. A phase-pure ingot of Mn$_{0.85}$Bi$_{4.10}$Te$_7$ was synthesized by annealing at 590~$^\circ$C for 3~days, subsequent slow cooling to 585~$^\circ$C and, finally, annealing for 1~day followed by rapid quenching in water. Notably, powders with the idealized MnBi$_4$Te$_7$ composition prepared by this route contained impurities, suggesting that this composition lies outside the homogeneity range. High-quality single crystals of Mn$_{1-x}$$\Box_{x/3}$Bi$_{4+2x/3}$Te$_7$ were grown by slow cooling ($-1$~K/h) of a melt from 650~$^\circ$C down to 585~$^\circ$C (right above the solidification point of Bi$_2$Te$_3$), followed by annealing for 10~days and rapid quenching. Platelet-like, strongly intergrown crystals were mechanically extracted from the obtained ingots. Their compositions were controlled by EDX analysis.
\subsection{X-ray Diffraction and Energy-Dispersive X-ray Spectroscopy}
Single-crystal X-ray diffraction data were collected on a four-circle Kappa APEX II CCD diffractometer (Bruker) with a graphite(002)-monochromator and a CCD-detector at $T =$ 296(1)~K. Mo-K$_\alpha$ radiation ($\lambda =$ 71.073~pm) was used. A numerical absorption correction based on an optimized crystal description was applied~\cite{45}, and the initial structure solution was performed in JANA2006~\cite{46}. The structure was refined in SHELXL against $F_o^2$~\cite{47}. Further details on the crystal structure investigations of Mn$_{0.75(3)}$Bi$_{4.17(3)}$Te$_7$ can be obtained from the Fachinformationszentrum Karlsruhe, 76344 Eggenstein-Leopoldshafen, Germany (fax, (+49)7247-808-666; E-mail, [email protected]), on quoting the depository number CSD-1891486.
Powder X-ray diffraction data were measured using an X-Pert Pro diffractometer (PANalytical) with Bragg-Brentano geometry or a Huber G670 diffractometer with an integrated imaging plate detector and read-out system. Both machines operate with a curved Ge(111) monochromator and Cu-K$_{\alpha 1}$ radiation ($\lambda =$ 154.06~pm). Variable divergence slits were used on the X-Pert Pro equipment to keep the illuminated sample area constant. The graphics of the structures were developed with the Diamond software~\cite{48}.
Energy dispersive X-ray spectra (EDX) were collected on a scanning electron microscope Hitachi SU8020 using an Oxford Silicon Drift X-MaxN detector at an acceleration voltage of 20~kV and 100~s accumulation time. The EDX analysis was performed using the $P$/$B$-$ZAF$ standardless method (where $Z$ = atomic no. correction factor, $A$ = absorption correction factor, $F$ = fluorescence factor, and $P$/$B$ = peak to background model). Experimentally determined compositions (EDX) fall into a range from Mn$_{0.6(1)}$Bi$_{4.4(1)}$Te$_7$ to Mn$_{0.8(1)}$Bi$_{4.3(1)}$Te$_7$.
\subsection{Angle-Resolved Photoelectron Spectroscopy}
ARPES measurements on the (0001) surface of cleaved crystals in a temperature range between 8~K and 80~K were carried out at the high-resolution branch of beamline I05 at the Diamond Light Source, UK, using p-polarized photons with energies between $h \nu =$ 20 and 90~eV and an energy resolution $< 10$~meV [Fig. 5 and S10]. The spot size of the photon beam was ca. 30 $\mu$m. Supplementary ARPES experiments were performed at the Microscopic and Electronic Structure Observatory (MAESTRO) at beamline~7 of the Advanced Light Source (ALS) [Fig. S9] and at the LE branch of the APE beamline at the Elettra synchrotron [Fig. S11]. All measurements were performed in ultrahigh vacuum at pressures below $10^{-10}$~mbar. Supplementary core-level photoemission data were acquired at the ASPHERE III endstation at beamline P04 of PETRA III (DESY, Hamburg) [Fig. S11].
\subsection{Electron Spin Resonance Measurements}
ESR experiments were performed on a single crystal with a home-made ESR setup in the microwave frequency range $\nu =$ 75--300~GHz, in the temperature range $T =$ 4--35~K and in magnetic fields up to $\mu_0 H_0 =$ 16~T.
\subsection{X-ray Magnetic Circular and Linear Dichroism}
XMCD and XMLD measurements on the (0001) surface of cleaved crystals were carried out in total electron yield (TEY) mode at the BOREAS beamline of the ALBA synchrotron \cite{xmcd}.
\subsection{Density-Functional Calculations}
Fully-relativistic Density Functional Theory (DFT) calculations were performed using the PBE implementation~\cite{49} of the Generalized Gradient Approximation (GGA) and treating the spin-orbit coupling in the 4-spinor formalism, as implemented in FPLO-18~\cite{50}. For results presented in the main text, the experimental crystal structure based on a fully ordered MnBi$_4$Te$_7$ was used in our calculations. Namely, the cationic intermixing and cation deficiency were neglected and the stoichiometric limit MnBi$_4$Te$_7$ was considered.
Effects of these sorts of defects were studied by supercell calculations (see Supplementary Note 7 \cite{Note1}).
For the ordered model calculations, a linear tetrahedron method with a mesh of $16 \times 16 \times 2$ subdivisions (or $16 \times 16 \times 1$ in the AFM1 state) in the full Brillouin zone was used. GGA+$U$ calculations were also performed using the atomic-limit implementation of the double-counting correction, fixing $J = 1$~eV. The value of $U$ affects the resulting bulk gap and determines at which energies the spectral weight associated with Mn-$d$ states is placed. We find that the position of the Mn $3d$ states measured with core-level spectroscopy (see Fig.~S11 \cite{Note1}) is best described by a moderate value of $U \sim 2$~eV (see Fig.~S3c \cite{Note1}). This value renders a bulk gap of $\sim 75$~meV. In MnBi$_2$Te$_4$, higher values of $U$ have been used aiming to reproduce the experimental estimate of the gap~\cite{19}. The difficulty in finding a single value of $U$ that correctly accounts for all experimental results suggests that a quantitative comparison in these materials may necessitate the use of exchange and correlation functionals beyond GGA+$U$. The statements on the total energy calculations are, however, robust, as shown in Supplementary Note~3 \cite{Note1}. For the surface spectral calculations, as well as for the search of Weyl nodes, an accurate tight-binding model was built by constructing Wannier functions with the projection method implemented in the PYFPLO interface of FPLO~\cite{50}. The Bi-$6p$, Te-$5p$ and Mn-$3d$ orbitals were considered in this construction. The mirror Chern numbers were computed based on this Hamiltonian as implemented in Ref.~\cite{PhysRevMaterials.3.074202}.
\vspace{2mm}
\subsection{Magnetization and Transport}
The dc magnetization measurements were performed in a Superconducting Quantum Interference Device (SQUID) Vibrating Sample Magnetometer (VSM) from Quantum Design for 1.8~K up to room temperature and in magnetic fields up to 7~T. Zero-field-cooled (ZFC) and field-cooled (FC) magnetization curves were recorded upon warming.
The transport properties were performed in the standard 4-wires configuration using a home-made probe inserted in a He-bath cryostat by Oxford Instruments, endowed with a 15/17~T magnet.
\section{Introduction}
\label{sec:intro}
Precision astrophysical and cosmological measurements have now established that a significant fraction of the matter content in the universe is composed of non-baryonic Dark Matter (DM)~\cite{Hooper}. The data favour cold (non-relativistic) dark matter (CDM) and give the present density as ($68\%$ C.L.)~\cite{Lahav:2014vza}
\begin{equation}
\Omega_{DM} = 0.1187\pm 0.0017\,h^{-2},\label{eq:dm_abun}
\end{equation}
where $\Omega_{DM}$ is the dark matter density as a fraction of the total mass-energy budget and $h = 0.678\pm 0.008$ is defined by the present value of the Hubble constant $H_0 = 100\, h$ km/s/Mpc. The most popular theoretical CDM candidates are WIMPs (Weakly Interacting Massive Particles) with mass $m_{\chi} \sim \mathcal{O}(10 - 1000)$ GeV. One viable WIMP candidate is the neutralino, the lightest supersymmetric particle in supersymmetric extensions of the Standard Model (SM) in which $R$-parity is conserved.
The origin of the DM can be explained by the thermal relic scenario~\cite{KandT}: at early times, frequent interactions keep the DM particles in equilibrium with the background cosmic bath. As the universe expands and cools, the Boltzmann suppressed interaction rate drops below the expansion rate and the DM particles fall out of equilibrium. At this point, known as particle freeze-out, both annihilation and creation processes cease and the number density redshifts with expansion; the surviving `relic' particles constitute the dark matter density we observe today.
Due to the Boltzmann suppression factor in the equilibrium number density, the present dark matter abundance depends sensitively on the timing of freeze-out: the longer a species remains in thermal contact with the background bath, the lower its density at freeze-out. In the standard cosmological model of cold DM with a non-zero cosmological constant (denoted the $\Lambda$CDM model), particle freeze-out occurs during the radiation dominated era when the expansion rate $H \sim T^2/M_{\mathrm{Pl}}$ (where $M_{\mathrm{Pl}} = 1.22\times 10^{19}$ GeV is the Planck mass). In this scenario, a DM candidate with a weak scale interaction cross section, $\sigma \sim G_{\mathrm{F}}^2\,m_\chi^2$, freezes out with an abundance that matches the presently observed value~\eqref{eq:dm_abun}; this is known as the `WIMP miracle' and strongly motivates thermal WIMP dark matter models.
Despite the observational success of $\Lambda$CDM, current datasets leave the physics of the universe prior to Big Bang Nucleosynthesis (BBN) ($t \sim 200$ s) relatively unconstrained. If the universe experiences a non-standard expansion law at early times, and in particular during the era of DM decoupling, particle freeze-out may be accelerated (or delayed) and the relic abundance enhanced (or suppressed)~\cite{DBarrow1982501,PhysRevD.81.123522,Pallis:2009ed,Salati2003121,Arbey200846,Gelmini:2013awa,Iminniyaz:2013cla,Meehan:2014zsa} (see also~\cite{Gelmini:2009yh}).
An interesting class of alternative cosmological models that address this pre-BBN era is provided by the braneworld scenario
in which the observable universe is a 3(+1) dimensional surface (the `brane') embedded in a five dimensional bulk spacetime. Standard Model particles are confined to the surface of the brane whilst gravity propagates in the higher dimensional bulk~\cite{Langlois:2002bb,Maartens:2010ar}. This class of models is motivated by (super)string theory and M-theory which require additional spacetime dimensions for internal consistency.
In the widely studied Randall-Sundrum type II (RSII) model~\cite{Randall:1999vf}, General Relativity (GR) is recovered on the surface of a 3(+1) Minkowski brane located at the ultraviolet boundary of a five dimensional anti-de Sitter bulk. The warped geometry of the bulk spacetime ensures the fifth dimension is only accessible in the ultraviolet regime and that $\Lambda$CDM is reproduced in the low energy limit. Relic DM abundances in a RSII braneworld model have been investigated for both the case of symmetric DM \cite{PhysRevD.70.083531,PhysRevD.71.063535,PhysRevD.73.063518,PhysRevD.79.115023,
AbouElDahab:2006wb,Meehan:2014zsa}, in which the DM particles are Majorana fermions, that is the particles $\chi $ and antiparticles $\bar{\chi}$ are identical, $\chi = \bar{\chi}$, and the case of asymmetric DM \cite{Meehan:2014zsa} in which the particles and antiparticles are distinct,
$\chi \ne \bar{\chi}$. In both cases the enhanced early time expansion rate boosts the final relic abundance.
In this article we consider an extension of the RSII model which incorporates a Gauss-Bonnet (GB) higher order curvature term in the bulk action integral, thus modifying the braneworld dynamics at high energies.\footnote{The inclusion of a GB term affects early universe inflation and modifies both scalar and tensor primordial perturbations and the consistency relation between them~\cite{PhysRevD.70.083525,Tsujikawa2004a,Tsujikawa2004b,Calcagni2013}. Although it produces an enhanced ratio $r$ of the tensor to scalar perturbations~\cite{Neupane:2014}, it is still compatible with the recent Planck~\cite{Ade:2013zuv} and BICEP2~\cite{BICEP2} measurements for the case of single scalar field $m^{2}\phi^{2}$ inflation. For a similar study in the regular Randall-Sundrum model see~\cite{Okada:2014nia}.} The relic density of DM in the Gauss-Bonnet braneworld scenario has been studied by~\cite{PhysRevD.79.103528} for the case of symmetric DM. There, the GB braneworld effect was treated approximately through a simple multiplicative modification of the Hubble expansion, which can be interpreted as a multiplicatively modified annihilation cross section in the Boltzmann rate equation and allows an approximate analytic expression for the asymptotic relic abundance. They found that the expansion rate was reduced in the GB model, delaying particle freeze-out and leading to a suppressed relic abundance. This is in direct contrast to the behaviour observed in the RSII braneworld model. This finding, however, is based upon a highly contrived situation in which the Gauss-Bonnet expansion era evolves directly into a standard General Relativity expansion era, rather than passing through a Randall-Sundrum expansion era as is the general case. This collapse of the RS era requires equating the mass scale $m_{\alpha}$ of the GB modification and the mass scale $m_{\sigma}$ of the brane tension.
However, if the GB contribution is to be considered as the lowest order correction from string theory
to the RS action, we would expect $m_{\alpha} > m_{\sigma}$. It is therefore important to investigate the effect upon the relic abundance of choosing more realistic values for the ratio $\mathcal{R}_{m} \equiv m_{\alpha}/m_{\sigma}$ of these two mass scales.
In the present paper we revisit the calculation of the relic abundance of DM in the GB scenario and study the effects of breaking the assumption $\mathcal{R}_{m}=1$ made by \cite{PhysRevD.79.103528}, replacing it by more realistic values. We also extend the investigation to consider both symmetric and asymmetric DM species and discuss the implications for DM detection experiments and DM particle models.
In the next section we introduce the action integral for the braneworld bulk which includes the Gauss-Bonnet higher curvature term and discuss the modified Friedmann equation in this model. Then, in section~\ref{sec:sdm}, we calculate the DM relic abundance in the Gauss-Bonnet braneworld scenario before deriving constraints on the GB model parameters using the observed relic density. This is repeated for the case of asymmetric DM in section~\ref{sec:adm} and, finally, in section~\ref{sec:con} we summarize our results.
\section{Gauss-Bonnet Braneworlds}
The Randall-Sundrum braneworld model derived from the five dimensional Einstein-Hilbert action can be considered as a low energy effective model of some higher order field theory such as string theory or M-theory. Since our interest in the model lies in the high energy regime where additional quantum corrections in the bulk action may contribute to the braneworld dynamics, we include the leading order correction from heterotic string theory, known as the Gauss-Bonnet term $\mathcal{L}_{GB}$~\cite{Gross198741}, which is given by
\begin{equation}
\mathcal{L}_{GB} = R^2 - 4R_{ab}R^{ab} + R^{abcd}R_{abcd}.
\end{equation}
Inclusion of higher order curvature terms generally leads to fourth order equations of motion. However,
in five dimensions, the GB combination of invariants constructed from the Riemann tensor $R_{abcd}$ is of particular significance since it is the unique combination that leads to second order gravitational field equations in the bulk metric which are symmetric, divergenceless and ghost free~\cite{Clifton:2011jh}.
Inclusion of the Gauss-Bonnet term modifies the Randall-Sundrum action so that the action integral for the GB braneworld model, taken over the five dimensional bulk spacetime $\mathcal{M}$, is
\begin{equation}
S_\mathcal{M} = \frac{1}{2\kappa_5^2}\int_\mathcal{M}{d^5x\,\sqrt{-g}\left[R - 2\Lambda_5 + \alpha\mathcal{L}_{GB}\right]},\label{eq:geoact}
\end{equation}
where $g$ is the determinant of the bulk metric $g_{ab}$, $R$ is the five dimensional Ricci scalar and $\Lambda_5(<0)$ is the bulk cosmological constant. We have parameterized the GB contribution through the coupling $\alpha$ which, if this contribution is to be considered as the lowest order correction
from string theory to the Randall-Sundrum action, must
satisfy~\cite{PhysRevD.70.083525,Tsujikawa2004a} $\alpha |R^{2}| \ll |R|$. Consequently,
$\alpha \ll \ell^{2}$ where $\ell $ is the bulk curvature scale $|R| \propto \ell^{-2}$.
Introducing the associated energy scale $\mu \equiv \ell^{-1}$ then we require
\begin{equation}
\beta \equiv 4 \alpha \mu ^{2} \ll 1.
\end{equation}
The matter fields, which are localized on the brane surface $\partial\mathcal{M}$, are included via
\begin{equation}
S_m = - \int_{\partial\mathcal{M}}{d^4x\,\sqrt{-h}\,[\mathcal{L}_{m} + \sigma]},\label{eq:matact}
\end{equation}
where $h$ is the determinant of the induced metric $h_{\mu \nu}$ on the brane surface,
$\mathcal{L}_m$ is the matter field Lagrangian and $\sigma(>0)$ is the brane tension.
Varying the total action $S_{tot} = S_\mathcal{M} + S_{m}$ (+ boundary terms) with
respect to the metric field and solving the resulting field equations yields the modified Friedmann equation for the GB braneworld
scenario~\cite{0264-9381-19-18-304,PhysRevD.67.024030}
\begin{equation}
\kappa_5^2\left(\rho + \sigma\right) = 2\mu\sqrt{1 + \frac{H^2}{\mu^2}}\left(3 - \beta + 2\beta\frac{H^2}{\mu^2}\right),\label{eq:gb_hub}
\end{equation}
where $\rho$ is the energy density of matter fields on the brane and
$\beta = 1 - \sqrt{1+ 4\alpha\Lambda_5/3}$.
The modified Friedmann equation~\eqref{eq:gb_hub} clearly predicts non-standard behaviour for the
expansion of the universe. However, in the low energy limit, equation~\eqref{eq:gb_hub} reduces to the standard expansion law for a flat universe
\begin{equation}
H^{2}= \frac{8 \pi}{3 M_{\mathrm{Pl}}^{2}}\rho + \frac{\Lambda_{4}}{3}, \label{eq:st_hub1}
\end{equation}
provided we identify~\cite{Neupane:2001}\footnote{For comparison with~\cite{PhysRevD.70.083531,Meehan:2014zsa}, we note that $\mu$ and $\beta$ are related to the five dimensional Planck mass $M_5$ via
\begin{equation}
M_5^3 = \frac{\mu}{1 + \beta}\frac{M_{\mathrm{Pl}}^2}{8\pi}.\nonumber
\end{equation}}
\begin{equation}
\kappa_4^2 \equiv \frac{8 \pi}{M_{\mathrm{Pl}}^{2}} = \frac{\mu}{1 + \beta}\kappa_5^2.\label{eq:tune1}
\end{equation}
Additionally, requiring that the four dimensional cosmological constant $\Lambda_{4}$ vanish gives
\begin{equation}
\kappa_5^2 \sigma = 2\mu\left(3 - \beta\right),\label{eq:tune2}
\end{equation}
which is equivalent to the familiar Randall-Sundrum tuning in the limit $\alpha\rightarrow 0$.
As shown in~\cite{PhysRevD.67.103510}, it is possible to solve equation~\eqref{eq:gb_hub} to get an explicit expression for the Hubble factor $H$;
\begin{equation}
H^2 = \frac{\mu^2}{\beta}\left[\left(1 - \beta\right)\cosh{\left(\frac{2\chi}{3}\right)} - 1\right],~\label{eq:gb_hub1}
\end{equation}
where $\chi$ is related to the energy density $\rho$ via
\begin{equation}
\rho + m_\sigma^4 = m_\alpha^4\sinh{\chi},\label{eq:rhomchi}
\end{equation}
and the two mass scales $m_\alpha$ and $m_\sigma$, which correspond to the GB correction and the brane tension respectively, are given by
\begin{equation}
m_\alpha^4 = \sqrt{\frac{8\mu^2(1 - \beta)^3}{\beta\kappa_5^4}},\quad m_\sigma^4 = \sigma.
\end{equation}
Substituting in the constraints~\eqref{eq:tune1} and~\eqref{eq:tune2}, $m_\alpha$ and $m_\sigma$ can be written in terms of the two remaining free parameters $\mu$ and $\beta$ as
\begin{equation}
m_\alpha^4 = 2\,\frac{\mu^2}{\kappa_4^2}\sqrt{\frac{2(1 - \beta)^3}{\beta(1 + \beta)^2}},\quad m_\sigma^4 = 2\,\frac{\mu^2}{\kappa_4^2}\left(\frac{3-\beta}{1 + \beta}\right).
\end{equation}
Since the Gauss-Bonnet term is a high energy correction to the regular Randall-Sundrum action,
we expect $\beta \ll 1$. This motivates us to introduce the quantity
\begin{equation}
\mathcal{R}_{m} \equiv \frac{m_\alpha}{m_\sigma} = \left[\frac{2(1 - \beta)^3}{\beta(3 - \beta)^2}\right]^{1/8},\label{eq:rmdef}
\end{equation}
which measures the ratio of the two mass scales and depends only on $\beta$. The two mass scales
are equal for $\beta = 0.1509$ but, as we expect $\beta \ll 1$, the general situation will be
$\mathcal{R}_m > 1$.
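As a quick numerical illustration (our own sketch, not part of the original analysis), the ratio $\mathcal{R}_m(\beta)$ of equation~\eqref{eq:rmdef} can be evaluated directly to confirm that the two mass scales coincide at $\beta \simeq 0.1509$ and that $\mathcal{R}_m \gg 1$ for the physically expected $\beta \ll 1$:

```python
import math

def mass_ratio(beta):
    """Ratio R_m = m_alpha / m_sigma as a function of beta, eq. (eq:rmdef)."""
    return (2.0 * (1.0 - beta) ** 3 / (beta * (3.0 - beta) ** 2)) ** (1.0 / 8.0)

# The two mass scales coincide near beta = 0.1509 ...
r_equal = mass_ratio(0.1509)
# ... while for beta << 1, as expected if the GB term is a small
# correction, m_alpha lies well above the brane-tension scale m_sigma.
r_small_beta = mass_ratio(1e-15)
```

For $\beta = 10^{-15}$, the value used in figure~\ref{fig:GB_hub_betam15_mum44}, this gives $\mathcal{R}_m \approx 60$, so the Randall-Sundrum era spans many decades in energy density.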
Before choosing specific values of $\beta$, we first discuss the evolution of the modified expansion rate in the generalized Gauss-Bonnet scenario. By expanding~\eqref{eq:gb_hub} in the high, intermediate, and low energy limits, we see that the Hubble factor evolves through three distinct expansion regimes, characterized by the mass scales $m_\alpha$ and $m_\sigma$~\cite{PhysRevD.70.083525,PhysRevD.79.103528}:
\begin{enumerate}
\item
The GB regime: $\rho \gg m_\alpha^4$
\begin{equation}
H^2 \simeq \left(\frac{1 + \beta}{4\beta}\mu\kappa_4^2\rho\right)^{2/3},\label{eq:gb_reg}
\end{equation}
\item
The RS regime: $m_\alpha^4\gg \rho \gg m_\sigma^4$
\begin{equation}
H^2 \simeq \frac{\kappa_4^2}{6m_\sigma^4}\rho^2,\label{eq:rs_hub}
\end{equation}
\item
The standard regime: $m_\sigma^4 \gg \rho$
\begin{equation}
H^2 \simeq \frac{\kappa_4^2}{3}\rho.\label{eq:st_hub}
\end{equation}
\end{enumerate}
At early times, during the Gauss-Bonnet regime, the expansion rate of the universe $H \sim \rho^{1/3}$ falls more slowly than the standard expansion law $H\sim \rho^{1/2}$. Later, the universe evolves into a Randall-Sundrum type era with an enhanced expansion $H \sim \rho$, before finally reducing to the standard expansion law in the low energy limit (see figure~\ref{fig:GB_hub_betam15_mum44}).
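The three limiting regimes can be checked against the exact expansion rate, equations~\eqref{eq:gb_hub1} and~\eqref{eq:rhomchi}. The following numerical sketch is our own illustration (the parameter values $\mu^2 = 10^{-44}$~GeV$^2$ and $\beta = 10^{-15}$ match figure~\ref{fig:GB_hub_betam15_mum44}); it uses the identity $\cosh x - 1 = 2\sinh^2(x/2)$ to avoid catastrophic cancellation in the low-energy limit:

```python
import math

M_PL = 1.22e19                       # Planck mass [GeV]
KAPPA4_SQ = 8.0 * math.pi / M_PL**2  # kappa_4^2 [GeV^-2]

def gb_hubble_sq(rho, mu_sq=1e-44, beta=1e-15):
    """Exact H^2 [GeV^2] from eqs. (eq:gb_hub1)-(eq:rhomchi)."""
    prefac = 2.0 * mu_sq / KAPPA4_SQ
    m_sigma4 = prefac * (3.0 - beta) / (1.0 + beta)          # brane tension scale
    m_alpha4 = prefac * math.sqrt(2.0 * (1.0 - beta)**3 / (beta * (1.0 + beta)**2))
    chi = math.asinh((rho + m_sigma4) / m_alpha4)
    # (1 - beta) cosh(2 chi / 3) - 1 = 2 (1 - beta) sinh^2(chi / 3) - beta,
    # which vanishes identically at rho = 0 by the tuning (eq:tune2)
    return (mu_sq / beta) * (2.0 * (1.0 - beta) * math.sinh(chi / 3.0)**2 - beta)
```

Evaluating this for $\rho \ll m_\sigma^4$ reproduces $H^2 = \kappa_4^2\rho/3$, while for $\rho \gg m_\alpha^4$ it matches the Gauss-Bonnet limit~\eqref{eq:gb_reg}, confirming the crossover behaviour described above.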
\begin{figure}[tbp]
\centering
\includegraphics[scale=0.8,trim=0 0 0 0,clip=true]{GB_hub_betam15_mum44}
\caption{\label{fig:GB_hub_betam15_mum44}Modified expansion rate in the Gauss-Bonnet scenario (solid blue curve) for $\mu^2 = 10^{-44}$ GeV$^{2}$ and $\beta = 10^{-15}$. We have assumed that the energy density is radiation dominated for the period shown, taking $\rho = \rho_r = \pi^2\,g_*(T)T^4/30$. The various expansion regimes through which the Hubble parameter evolves are indicated, together with the standard expansion rate (dashed black curve) for reference.}
\end{figure}
The duration of the Randall-Sundrum regime is determined by the magnitude of $\mathcal{R}_m \equiv m_\alpha/m_\sigma$: when $\mathcal{R}_m$ is small, the RS era is short and the expansion rate passes quickly from the Gauss-Bonnet era to the standard era; when $\mathcal{R}_m$ is large, the duration of the Randall-Sundrum era is extended. Using the expression for $\mathcal{R}_m$ (equation~\eqref{eq:rmdef}) we see that these two cases correspond to $\beta\lesssim 0.1509$ and $\beta \rightarrow 0$, respectively.
The investigation by~\cite{PhysRevD.79.103528} chose to collapse the Randall-Sundrum era by equating $m_\alpha = m_\sigma$, setting $\beta = 0.1509$. In this case, the early time expansion rate is always slower than (or equal to) the standard expansion rate. The slower expansion rate delays dark matter particle freeze-out and suppresses the relic abundance. This is obviously a contrived scenario considering the Gauss-Bonnet term is a high energy correction to the Randall-Sundrum action and we expect $m_\alpha > m_\sigma$, corresponding to $\beta \ll 1$. In the next section we will show that the unnatural choice of $\beta = 0.1509$ and the conclusions drawn in~\cite{PhysRevD.79.103528} misrepresent the typical behaviour of the relic density in the Gauss-Bonnet braneworld model and that, in fact, the dark matter abundance tends to be enhanced rather than suppressed when realistic values of $\beta$ are used.
It is convenient for the derivation of approximate solutions for the dark matter relic density (see next section) to express the modified expansion rates in the early universe (equations~\eqref{eq:gb_reg} and~\eqref{eq:rs_hub}) in terms of the standard expansion rate $H_{GR}$.
Since the energy density of the universe during the era of dark matter decoupling is dominated by radiation with $\rho_r = \pi^2 g_*(T) T^4/30$, where $g_*(T)$ is the effective number of relativistic degrees of freedom, equations~\eqref{eq:gb_reg} and~\eqref{eq:rs_hub} can be written as
\begin{align}
H_{GB} &= H_{GR}\left(\frac{x}{x_t^{GB}}\right)^{2/3},\\
H_{RS} &= H_{GR}\left(\frac{x_t^{RS}}{x}\right)^2,
\end{align}
where $x = m_\chi/T$ is a dimensionless variable and $x_t^{GB}$ and $x_{t}^{RS}$ are given by
\begin{align}
\left(x_t^{GB}\right)^4 &\simeq 0.195\,g_*(T_t)m_\chi^4\left(\frac{\beta}{1 + \beta}\right)^2\frac{\kappa_4^2}{\mu^2},\label{eq:xtgb}\\
\left(x_t^{RS}\right)^4 &\simeq 0.082\,g_*(T_t)m_\chi^4\left(\frac{1 + \beta}{3 - \beta}\right)\frac{\kappa_4^2}{\mu^2}.\label{eq:xtrs}
\end{align}
The quantity $x_t^{RS}$ effectively denotes the transition point between the Randall-Sundrum expansion era and the standard expansion era. In order to preserve the successful predictions of BBN, the standard expansion law $H_{GR}$ must be restored prior to $T = 1$ MeV. Thus we require $x_t^{RS}\lesssim 10^3\,m_\chi$, which, using~\eqref{eq:xtrs}, gives the conservative bound
\begin{equation}
\mu \gtrsim 1\times 10^{-25}\,\mbox{GeV}.\label{eq:murange}
\end{equation}
Furthermore, if we assume that particle freeze-out occurs at $x_f\gtrsim 10$, we can derive an upper limit on the relevant range of $\mu$. Again, using equation~\eqref{eq:xtrs}, we find $\mu \lesssim 5\times 10^{-17}$ GeV and $\mu \lesssim 5\times 10^{-19}$ GeV for $m_\chi = 100$ GeV and $m_\chi = 10$ GeV respectively. For larger values of $\mu$ the standard expansion rate is restored prior to particle freeze-out and particle decoupling is unaffected.
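These bounds can be reproduced numerically by inverting equation~\eqref{eq:xtrs} for $\mu$. The short sketch below is our own check; the values $g_*(1~\mathrm{MeV}) \simeq 10.75$ and $g_*(T_f) \simeq 86.25$ are standard choices for the effective number of relativistic degrees of freedom and are not taken from the text:

```python
import math

M_PL = 1.22e19                       # Planck mass [GeV]
KAPPA4_SQ = 8.0 * math.pi / M_PL**2  # kappa_4^2 [GeV^-2]

def mu_at_transition(x_t_rs, m_chi, g_star, beta=1e-15):
    """Value of mu [GeV] for which the RS -> standard transition
    of eq. (eq:xtrs) occurs at a given x_t^RS = m_chi / T_t."""
    mu_sq = (0.082 * g_star * m_chi**4 * (1.0 + beta) / (3.0 - beta)
             * KAPPA4_SQ / x_t_rs**4)
    return math.sqrt(mu_sq)

m_chi = 100.0  # GeV, illustrative WIMP mass
# BBN: standard expansion restored by T_t = 1 MeV, i.e. x_t^RS = 1e3 * m_chi
mu_bbn = mu_at_transition(1e3 * m_chi, m_chi, g_star=10.75)
# Freeze-out at x_f ~ 10 is unaffected if the transition happens earlier
mu_max = mu_at_transition(10.0, m_chi, g_star=86.25)
```

This gives $\mu_{\rm BBN} \approx 2\times 10^{-25}$~GeV, consistent with the conservative bound~\eqref{eq:murange}, and $\mu_{\rm max} \approx 6\times 10^{-17}$~GeV for $m_\chi = 100$~GeV, the same order as the limit quoted above.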
\section{Symmetric Dark Matter}
\label{sec:sdm}
We begin this section by reviewing the relic abundance calculation for a symmetric dark matter species $\chi (= \bar{\chi})$ initially in equilibrium with the background cosmic bath. The dark matter number density $n_\chi$ evolves according to the relativistic Boltzmann equation
\begin{equation}
\frac{dn_\chi}{dt} = -3Hn_\chi - \langle\sigma v\rangle\left[n_\chi^2 - \left(n_\chi^{\mathrm{eq}}\right)^2\right],\label{eq:boltsym}
\end{equation}
where $\langle\sigma v\rangle$ is the thermally averaged annihilation cross section and $n_\chi^{\mathrm{eq}}$ is the equilibrium number density. Here, we assume that annihilations are dominated by $s$-wave processes for which the annihilation cross section is a constant, i.e. $\langle\sigma v\rangle = \sigma_0$.\footnote{It is straightforward to extend our analysis to higher partial wave expansions of the annihilation cross section, i.e. $\langle\sigma v\rangle = \sigma_n x^{-n}$.}
It is convenient to rewrite the Boltzmann equation~\eqref{eq:boltsym} in terms of $x = m_\chi/T$ and the comoving number density $Y=n_\chi/s$, where $s$ is the entropy density given by $s = 2\pi^2 g_*(T) T^3/45$.\footnote{Here $g_*(T)$ actually refers to the number of entropic degrees of freedom $g_{* s}$. Since the number of relativistic and entropic degrees of freedom only differ when a particle crosses a mass threshold, we take $g_{*\rho}=g_{* s} \equiv g_{*}$~\cite{Steigman:2012nb}.}
We then have
\begin{equation}
\frac{dY}{dx}=-\frac{s\langle\sigma v\rangle}{xH} \zeta(x)\left(Y^2 - Y_{\mathrm{eq}}^2\right),~\label{eq:bolt}
\end{equation}
where $Y_{\mathrm{eq}} \simeq 0.145(g_\chi/g_*)\,x^{3/2}e^{-x}$, $g_\chi = 2$ is the number of internal degrees of freedom of the dark matter species $\chi$ and
\begin{equation}
\zeta (x) = 1 - \frac{1}{3}\frac{d\log{g_*}}{d\log{x}}~\label{eq:df}
\end{equation}
is a temperature dependent factor related to the change in the number of degrees of freedom.
The present dark matter density, $\Omega_{DM}h^2$, is obtained from the asymptotic
solution ($x\rightarrow\infty$) of equation~\eqref{eq:bolt}
\begin{equation}
\Omega_{DM}h^2 = 2.75\times 10^8\,m_\chi Y_\infty,\label{eq:omgdm}
\end{equation}
where $Y_\infty=Y(x \rightarrow \infty)$ is the present comoving density.
In general, the Boltzmann equation cannot be solved analytically and equation~\eqref{eq:bolt} must be integrated numerically. However, an approximate solution can be found by exploiting the exponential decay of $Y_{\mathrm{eq}}$: as outlined in~\cite{PhysRevD.33.1585,KandT}, the creation term ($\propto Y_{\mathrm{eq}}^2$) in
equation~\eqref{eq:bolt} can be neglected following particle decoupling (i.e. for $x > x_f$) and the resulting expression can be integrated directly once the expansion rate and annihilation cross section have been specified. Taking $H = H_{GR}$ and $\langle\sigma v \rangle =$ constant, we get the well-known approximate solution for the asymptotic comoving density in the standard cosmological scenario\footnote{The annihilation cross sections $\langle\sigma v\rangle$ in equations~\eqref{eq:yinfst},~\eqref{eq:ysym} and~\eqref{eq:yinfrs} have units of GeV$^{-2}$ to match the units of $\lambda$.}
\begin{equation}
Y_\infty^{GR} \simeq \frac{x_f^{GR}}{\lambda_{GR}\langle\sigma v\rangle},\label{eq:yinfst}
\end{equation}
where $\lambda_{GR} \simeq 0.264 \sqrt{g_*}M_{\mathrm{Pl}}\,m_\chi$ and $x_f^{GR}$ is the freeze-out point in the standard scenario that can be estimated using~\cite{PhysRevD.33.1585,KandT}
\begin{equation}
x_f^{GR} \simeq \log{\left[\left(2 + c\right)\lambda_{GR}\langle\sigma v\rangle ac\right]} - \frac{1}{2}\log{\left\{\log{\left[\left(2 + c\right)\lambda_{GR}\langle\sigma v\rangle ac\right]}\right\}},
\end{equation}
with $a \simeq 0.145\,g_\chi/g_*$ and $c \approx 0.6$ a numerical constant (see~\cite{KandT} for more details).\footnote{In deriving equation~\eqref{eq:yinfst} the number of relativistic degrees of freedom has been fixed at $g_*(T) = g_*(T_f)$, but, note that the full temperature dependence is restored in the numerical integration.}
In the Gauss-Bonnet braneworld scenario, the universe first passes through a Gauss-Bonnet and then a Randall-Sundrum type expansion era before relaxing to the standard expansion law (see previous section). We therefore need analogues of~\eqref{eq:yinfst} for the cases in which dark matter decoupling occurs during each of these non-standard regimes.
Taking $H = H_{GB}$ (equation~\eqref{eq:gb_hub}), we find that if decoupling occurs during a Gauss-Bonnet type expansion regime~\cite{PhysRevD.79.103528},
\begin{equation}
Y_{\infty}^{GB} \simeq \frac{5}{3}\frac{(x_f^{GB})^{5/3}}{\lambda_{GB}\langle\sigma v\rangle},\label{eq:ysym}
\end{equation}
where
\begin{align}
\lambda_{GB} &= \lambda_{GR}\left(x_t^{GB}\right)^{2/3}\nonumber\\
&\simeq \left[\left(\frac{\beta}{1 + \beta}\right)g_*^2\frac{m_\chi^5}{\mu\kappa_4^2}\right]^{1/3},\label{eq:lamgb}
\end{align}
and the freeze-out point is
\begin{equation}
x_f^{GB} \simeq \log{\left[\left(2 + c\right)\lambda_{GB}\langle\sigma v\rangle ac\right]} - \frac{7}{6}\log{\left\{\log{\left[\left(2 + c\right)\lambda_{GB}\langle\sigma v\rangle ac\right]}\right\}}.
\end{equation}
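The GB-era expressions above are straightforward to evaluate. The following sketch is illustrative only: the values of $\beta$ and $\mu$ are chosen (hypothetically) so that freeze-out falls in the Gauss-Bonnet era, and $\kappa_4^2 = 8\pi/M_{\mathrm{Pl}}^2$ is assumed.

```python
import numpy as np

# Illustrative (assumed) parameters:
m_chi = 100.0                        # GeV
g_star = 90.0
g_chi = 2.0
M_Pl = 1.22e19                       # GeV
kappa4_sq = 8.0 * np.pi / M_Pl**2    # 4D gravitational coupling, GeV^-2
sigma_v = 2.0e-26 / 1.17e-17         # cm^3 s^-1 -> GeV^-2
beta = 0.1
mu = 1.0e-19                         # GeV, small enough for GB-era freeze-out

a = 0.145 * g_chi / g_star
c = 0.6

# lambda_GB = [ (beta/(1+beta)) g_*^2 m_chi^5 / (mu kappa_4^2) ]^(1/3)
lam_GB = ((beta / (1.0 + beta)) * g_star**2 * m_chi**5
          / (mu * kappa4_sq))**(1.0 / 3.0)

# GB-era freeze-out point (note the 7/6 coefficient)
A = (2.0 + c) * lam_GB * sigma_v * a * c
x_f_GB = np.log(A) - (7.0 / 6.0) * np.log(np.log(A))

# asymptotic comoving density and relic abundance for GB-era decoupling
Y_inf_GB = (5.0 / 3.0) * x_f_GB**(5.0 / 3.0) / (lam_GB * sigma_v)
Omega_GB = 2.75e8 * m_chi * Y_inf_GB
```

For this (suppressing) parameter choice the resulting $\Omega_{DM}h^2$ comes out below the standard-scenario value obtained with the same cross section.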
Similarly, if decoupling occurs during the Randall-Sundrum era~\cite{PhysRevD.70.083531}
\begin{equation}
Y_\infty^{RS} \simeq \frac{0.54\,x_t^{RS}}{\lambda_{GR}\langle\sigma v\rangle},\label{eq:yinfrs}
\end{equation}
which we note is independent of the freeze-out point $x_f$ (provided $x_t^{RS} \gg x_f$).
Comparing equations~\eqref{eq:ysym} and~\eqref{eq:yinfrs} with~\eqref{eq:yinfst}, we see that the asymptotic comoving density can be either suppressed or enhanced depending on the relative magnitude of $\mu$ and $\beta$ and the timing of particle decoupling. More specifically, if decoupling occurs during the Gauss-Bonnet era, the comoving density may be either enhanced or suppressed, otherwise, if decoupling occurs during the Randall-Sundrum era, the comoving density is always enhanced.
To determine which parameter combinations lead to suppression, and which lead to enhancement (with respect to the standard cosmology result), we can equate equations~\eqref{eq:ysym} and~\eqref{eq:yinfrs} with~\eqref{eq:yinfst}. Rearranging for $\mu^2$, we find that the relic abundance is enhanced for the interval\footnote{To derive~\eqref{eq:enhint} and~\eqref{eq:supint} we have assumed that the freeze-out point is roughly constant. In doing so we have neglected a logarithmic dependence on the annihilation cross section $\langle\sigma v\rangle$.}
\begin{equation}
5\times 10^{-43}\,m_\chi^4\left(\frac{\beta}{1 + \beta}\right)^2\lesssim \mu^2 \lesssim 1\times 10^{-41}\,m_\chi^4,\label{eq:enhint}
\end{equation}
and suppressed for
\begin{equation}
\mu^2 \lesssim 5\times 10^{-43}\,m_\chi^4\left(\frac{\beta}{1 + \beta}\right)^2.\label{eq:supint}
\end{equation}
For $\mu^2 \gtrsim 10^{-41} m_\chi^4$, the standard expansion rate is restored prior to particle decoupling and the predicted value of $\Omega_{DM}h^2$ reduces to the canonical result.
In figure~\ref{fig:musq_omega_varbeta} we plot the predicted relic abundance $\Omega_{DM}h^2$ in the general Gauss-Bonnet scenario as a function of $\mu^2$ for varying $\beta$.
\begin{figure}[tbp]
\centering
\includegraphics[scale=0.57,trim=0 0 0 0,clip=true]{musq_omega_varbeta100}
\hfill
\includegraphics[scale=0.57,trim=0 0 0 0,clip=true]{musq_omega_varbeta10}
\caption{\label{fig:musq_omega_varbeta} Relic abundance $\Omega_{DM}h^2$ for a symmetric WIMP with
$\langle\sigma v\rangle = 2\times 10^{-26}$ cm$^3$s$^{-1}$ as a function of $\mu^2$ for $\beta = 0.1509$ (blue curve), $\beta = 10^{-5}$ (red curve), $\beta = 10^{-10}$ (yellow curve) and $\beta = 10^{-15}$ (purple curve). The left and right panels correspond to WIMP masses $m_\chi = 100$ GeV and 10 GeV respectively.}
\end{figure}
Immediately we see that $\Omega_{DM}h^2$ (much like the expansion rate $H$) can be split up into three distinct regions: for small $\mu^2$ (and large $\beta$), the relic density increases with increasing $\mu^2$ (and decreasing $\beta$), reaching a maximum that is approximately given by\footnote{The parameter dependence of the maximum can be derived by equating~\eqref{eq:ysym} with~\eqref{eq:yinfrs}. Note, however, that the numerical constants are only approximate because we have not taken into account the variation in $x_f$.}
\begin{equation}
\Omega^{\mathrm{max}}_{DM}h^2 \sim \frac{9\times 10^{-11}}{\beta^{1/5}\langle\sigma v\rangle};\qquad \mu^2_{\mathrm{max}} \sim 3\times 10^{-43}\,(m_\chi \beta^{1/5})^{4}\,\,\mbox{GeV}^2.\label{eq:max}
\end{equation}
In this region, decoupling occurs during the Gauss-Bonnet expansion era and the relic density can be estimated using~\eqref{eq:ysym}. Next, for $\mu^2 \gtrsim \mu^2_{\mathrm{max}}$, the relic density decreases with increasing $\mu^2$ and is relatively independent of $\beta$. Here, decoupling occurs during the Randall-Sundrum era and each curve approaches the Randall-Sundrum result~\cite{PhysRevD.70.083531}. Finally, when $\mu^2 \gtrsim 10^{-41} m_\chi^4$, each curve reduces to the standard cosmology result. Hence, for the purpose of estimating the relic density, three approximate regimes can be
identified:
\begin{align}
\mu^2 &\lesssim 3\times 10^{-43}m_\chi^4\beta^{4/5} &:\quad \mbox{GB regime}\label{eq:gb_freeze}\\
3\times 10^{-43}m_\chi^4\beta^{4/5} \lesssim \mu^2 &\lesssim 10^{-41}m_\chi^4 &: \quad \,\mbox{RS regime}\label{eq:rs_freeze}\\
\mu^2 &\gtrsim 10^{-41}m_\chi^4 &: \quad \mbox{GR regime}\label{eq:st_freeze}
\end{align}
within which equations~\eqref{eq:ysym},~\eqref{eq:yinfrs} and~\eqref{eq:yinfst} for $Y_{\infty}$ would be appropriately used.
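The regime boundaries above reduce to two simple threshold comparisons. A minimal helper (illustrative; $\mu^2$ in GeV$^2$, $m_\chi$ in GeV, and the approximate coefficients quoted above):

```python
def freeze_out_regime(mu_sq, beta, m_chi):
    """Classify the dark matter decoupling era using the approximate
    thresholds: GB for mu^2 < 3e-43 m^4 beta^(4/5), GR for
    mu^2 > 1e-41 m^4, and RS in between."""
    gb_bound = 3e-43 * m_chi**4 * beta**0.8
    gr_bound = 1e-41 * m_chi**4
    if mu_sq <= gb_bound:
        return "GB"
    elif mu_sq <= gr_bound:
        return "RS"
    return "GR"
```

For example, for $m_\chi = 100$ GeV and $\beta = 0.1509$, $\mu^2 = 10^{-38}$ GeV$^2$ falls in the GB regime while $\mu^2 = 10^{-30}$ GeV$^2$ recovers the GR regime.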
As expected, figure~\ref{fig:musq_omega_varbeta} shows that the dark matter relic abundance may be either enhanced or suppressed by up to several orders of magnitude, depending on the values of $\mu^2$ and $\beta$. We must stress, however, that as the value of $\beta$ is reduced, the predicted relic density tends towards the Randall-Sundrum result, and is therefore enhanced. Also, since $\mu^2$ is bounded from below by BBN constraints ($\mu^2 \gtrsim 10^{-50}$ GeV$^2$), suppression is only possible if $\beta\gtrsim 1.4\times 10^{-4}/m_\chi^2$, corresponding to the condition $\mathcal{R}_m \lesssim 3.3\,m_\chi^{1/4}$. Furthermore, it is only for the particular case considered in~\cite{PhysRevD.79.103528}, that is $\beta = 0.1509$ ($\mathcal{R}_m = 1$) (blue curve), that $\Omega_{DM}h^2$ is exclusively suppressed. For more reasonable values of $\beta$ (and $\mathcal{R}_m$) the relic density is typically enhanced.
We can invert these results to find the annihilation cross section required to produce the observed relic density $\Omega_{DM}h^2 = 0.1187$. In figure~\ref{fig:musq_sigma_cont_varbeta} we plot this cross section as a function of $\mu^2$ for varying $\beta$. The cross section, which is inversely proportional to $\Omega_{DM}h^2$, exhibits similar behaviour to the relic density curves presented in figure~\ref{fig:musq_omega_varbeta}, in that the three regimes (Gauss-Bonnet, Randall-Sundrum and standard) are immediately apparent.
\begin{figure}[tbp]
\centering
\includegraphics[scale=0.48,trim=0 0 0 0,clip=true]{musq_sigma_cont_varbeta100}
\hfill
\includegraphics[scale=0.48,trim=0 0 0 0,clip=true]{musq_sigma_cont_varbeta10}
\caption{\label{fig:musq_sigma_cont_varbeta} Required annihilation cross section $\langle\sigma v\rangle$ for a symmetric WIMP as a function of $\mu^2$ for $\beta = 0.1509$ (blue curve), $\beta = 10^{-5}$ (red curve), $\beta = 10^{-10}$ (yellow curve) and $\beta = 10^{-15}$ (purple curve). Also shown is the corresponding result for a pure Randall-Sundrum scenario (dot-dashed black curve). The left and right panels correspond to WIMP masses $m_\chi = 100$ GeV and 10 GeV respectively.}
\end{figure}
The required cross section in each regime can be estimated by rearranging the approximate expressions~\eqref{eq:ysym},~\eqref{eq:yinfrs} and~\eqref{eq:yinfst} and substituting in the observed relic density $\Omega_{DM}h^2$. Thus, if decoupling occurs deep in the Gauss-Bonnet era, the required annihilation cross section is given by
\begin{equation}
\langle\sigma v\rangle \simeq 2.0\times 10^{-22}\left(\frac{1 + \beta}{\beta}\,\frac{\mu}{m_\chi^2}\right)^{1/3}\frac{\left(x_f^{GB}\right)^{5/3}}{\Omega_{DM}h^2}\quad\mbox{cm}^3\mbox{s}^{-1}.\label{eq:sigmagb}
\end{equation}
Similarly, if decoupling occurs during the Randall-Sundrum era,
\begin{equation}
\langle\sigma v\rangle \simeq 9.4\times 10^{-38}\left[\left(\frac{1 + \beta}{3 - \beta}\right)\frac{1}{\mu^2}\right]^{1/4}\frac{m_\chi}{\Omega_{DM}h^2}\quad\mbox{cm}^3\mbox{s}^{-1},\label{eq:sigmars}
\end{equation}
which is relatively independent of $\beta$. For $\mu^2\gtrsim 10^{-41}m_\chi^4$, the transition point $x_t^{RS}$ precedes the freeze-out point and we recover the canonical result $\langle\sigma v\rangle \simeq \langle\sigma v\rangle^{GR} \simeq 2.03\times 10^{-26}$ cm$^3$s$^{-1}$ and $\langle\sigma v\rangle \simeq \langle\sigma v\rangle^{GR} \simeq 2.21\times 10^{-26}$ cm$^3$s$^{-1}$ for $m_\chi = 100$ GeV and $m_\chi =10$ GeV respectively.\footnote{Note that the approximate expressions~\eqref{eq:sigmagb} and~\eqref{eq:sigmars} are more accurate than the corresponding expressions involving the relic density since there is much less variation in the freeze-out point $x_f$ once $\Omega_{DM}h^2$ has been specified.}
The results in figure~\ref{fig:musq_sigma_cont_varbeta} should be compared with the latest constraints derived from the Fermi-LAT gamma ray data~\cite{Fermi}. For example, the bounds for the $\chi\bar{\chi}\rightarrow b\bar{b}$ and $\chi\bar{\chi}\rightarrow \mu^+\mu^-$ annihilation channels for a dark matter particle with mass $m_\chi = 100$ GeV are $\langle\sigma v\rangle_{\mathrm{Fermi}} = 1.31\times 10^{-25}$ cm$^3$s$^{-1}$ and $\langle\sigma v\rangle_{\mathrm{Fermi}} = 1.38\times 10^{-24}$ cm$^3$s$^{-1}$ respectively. For the $m_\chi = 10$ GeV case the bounds are more stringent with $\langle\sigma v\rangle_{\mathrm{Fermi}} = 2.90\times 10^{-26}$ cm$^3$s$^{-1}$ and $\langle\sigma v\rangle_{\mathrm{Fermi}} = 2.01\times 10^{-25}$ cm$^3$s$^{-1}$ for the respective channels. The Fermi-LAT constraints therefore exclude a portion of the Gauss-Bonnet parameter space. For the small values of $\beta$ that correspond to realistic values $\mathcal{R}_m > 1$, larger values of $\mu^2$ are favoured. We must keep in mind however, that these constraints only apply if the dark matter particle annihilates primarily through one of the channels mentioned.
\section{Asymmetric Dark Matter}
\label{sec:adm}
Asymmetric dark matter models treat the dark matter particle $\chi$ and antiparticle $\bar{\chi}$ as distinct and with unequal number densities, similar to the asymmetry that exists in the baryonic sector. In fact, these models typically assume~\cite{Kumar,Graesser:2011wi} either a primordial asymmetry in one sector that is transferred to the other sector, or that both asymmetries are generated by the same physical process such as the decay of a heavy particle. Connecting the two asymmetries also explains the proximity of the dark and baryonic densities, $\Omega_{DM}/\Omega_b \sim 5$, suggesting the dark matter mass is in the range $m_\chi \sim 5 - 15$ GeV~\cite{PhysRevD.79.115016}.
When the particle $\chi$ and antiparticle $\bar{\chi}$ are distinct, the Boltzmann equation~\eqref{eq:boltsym} is generalized to the coupled system
\begin{subequations}
\begin{eqnarray}
\frac{dn_\chi}{dt} &=& - 3Hn_\chi -\langle\sigma v\rangle\left(n_\chi n_{\bar{\chi}} - n_\chi^{\mathrm{eq}}n_{\bar{\chi}}^{\mathrm{eq}}\right),\label{eq:nchi} \\
\frac{dn_{\bar{\chi}}}{dt} &=& - 3Hn_{\bar{\chi}} -\langle\sigma v\rangle\left(n_\chi n_{\bar{\chi}} - n_\chi^{\mathrm{eq}}n_{\bar{\chi}}^{\mathrm{eq}}\right)\label{eq:nchibar},
\end{eqnarray}
\label{eq:nchiasym}
\end{subequations}
where $n_\chi^{\mathrm{eq}}$ and $n_{\bar{\chi}}^{\mathrm{eq}}$ are the equilibrium number densities of the $\chi$ and $\bar{\chi}$ components respectively.
We assume that self annihilations are forbidden, and that only interactions of the type $\chi\bar{\chi}\rightarrow X\bar{X}$ (where the $X$'s are Standard Model particles) can change the dark matter particle number. We can then write
\begin{equation}
Y_\chi - Y_{\bar{\chi}} = C\label{eq:Cdef},
\end{equation}
where $C$ is a strictly positive constant that characterizes the asymmetry between the particles and antiparticles. Here, we are not concerned with the mechanism that generates the asymmetry, only that one has been created well before particle freeze-out.
Rewriting the Boltzmann equations in terms of the comoving density $Y$, and using equation~\eqref{eq:Cdef}, the system~\eqref{eq:nchiasym} becomes
\begin{align}
\frac{dY_{\chi}}{dx}&=-\frac{s\langle\sigma v\rangle}{xH}\zeta(x)\left(Y_\chi^2 - CY_{\chi} - P\right),\nonumber\\
\frac{dY_{\bar{\chi}}}{dx}&=-\frac{s\langle\sigma v\rangle}{xH}\zeta(x)\left(Y_{\bar{\chi}}^2 + CY_{\bar{\chi}} - P\right),\label{eq:dYasym}
\end{align}
where, since the dark matter particles and antiparticles are non-relativistic at decoupling,
\begin{equation}
P \equiv Y_\chi^{\mathrm{eq}}Y_{\bar{\chi}}^{\mathrm{eq}} = \left(\frac{0.145\,g_\chi}{g_{*}}\right)^2x^3e^{-2x}.\label{eq:Pdef}
\end{equation}
Solving the system~\eqref{eq:dYasym} in the asymptotic limit, the total dark matter density, $\Omega_{DM}h^2$, is the sum of the $\chi$ and $\bar{\chi}$ components,
\begin{equation}
\Omega_{DM}h^2 = 2.75\times 10^8\,m_\chi\left(Y_\chi^\infty + Y_{\bar{\chi}}^\infty\right).\label{eq:dmtot}
\end{equation}
Following similar arguments to those for the symmetric case, we can find an approximate
solution to the system~\eqref{eq:dYasym} for the asymptotic density of the $\bar{\chi}$
component (see~\cite{Iminniyaz:2011yp} for details)
\begin{equation}
Y_{\bar{\chi}}^\infty \simeq \frac{C}{\exp{\left(C/Y^\infty_{(sym)}\right)} - 1},\label{eq:ychibarapp}
\end{equation}
where we use $Y^\infty_{(sym)}$ to denote the corresponding asymptotic solution for symmetric dark matter. As we saw in the previous section, $Y^\infty_{(sym)}$ depends on the timing of freeze-out and
is given by, respectively, equations~\eqref{eq:yinfst},~\eqref{eq:ysym} and~\eqref{eq:yinfrs} for the
three regimes~\eqref{eq:gb_freeze}-\eqref{eq:st_freeze}. From equations~\eqref{eq:ychibarapp}
and~\eqref{eq:Cdef}, we readily obtain
\begin{equation}
Y_{\chi}^\infty \simeq \frac{C}{1 - \exp{\left(-C/Y^\infty_{(sym)}\right)}}.\label{eq:ychiapp}
\end{equation}
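Equations~\eqref{eq:ychibarapp} and~\eqref{eq:ychiapp} are straightforward to evaluate; using \texttt{expm1} keeps the weakly asymmetric limit $C/Y^\infty_{(sym)} \ll 1$ numerically stable. A small sketch (the function name is ours):

```python
import math

def Y_asym(C, Y_sym):
    """Asymptotic comoving densities of the majority (chi) and minority
    (chibar) components, given the asymmetry C and the corresponding
    symmetric solution Y_sym."""
    r = C / Y_sym
    Y_chibar = C / math.expm1(r)        # C / (e^r - 1), minority component
    Y_chi = C / (-math.expm1(-r))       # C / (1 - e^-r), majority component
    return Y_chi, Y_chibar
```

By construction $Y_\chi^\infty - Y_{\bar{\chi}}^\infty = C$ identically, and for $C/Y^\infty_{(sym)} \ll 1$ both components approach $Y^\infty_{(sym)}$.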
As discussed in~\cite{Iminniyaz:2011yp,Meehan:2014zsa}, the contribution from the minority and majority components to the total dark matter density depends sensitively on the ratio $C/Y^{\infty}_{(sym)}$. When $C/Y_{(sym)}^\infty\gg 1$, the density of the $\bar{\chi}$ component is exponentially suppressed, $Y_{\bar{\chi}}^\infty \simeq C\,\exp{\left(-C/Y^\infty_{(sym)}\right)}$, and the density of the $\chi$ component approaches the asymmetry $C$, $Y_\chi\simeq C + C\,\exp{\left(-C/Y^{\infty}_{(sym)}\right)}$. Conversely, when $C/Y_{(sym)}^\infty \ll 1$, the factor $C$ drops out of the expressions~\eqref{eq:ychibarapp} and \eqref{eq:ychiapp} and each component behaves like symmetric dark matter, i.e. $Y_\chi^\infty \simeq Y_{\bar{\chi}}^\infty \simeq Y^\infty_{(sym)}$. We designate each of these regimes as being \textit{strongly} and \textit{weakly} asymmetric respectively, with the relic density in each case behaving like
\begin{equation}
\Omega_{DM}h^2 \simeq
\begin{cases}
\,2\times 2.75\times 10^8\,m_\chi\,Y_{(sym)}^\infty, & \quad C/Y_{(sym)}^\infty\ll 1,\\
\,2.75\times 10^8\,m_\chi\,C, & \quad C/Y_{(sym)}^\infty\gg 1.\label{eq:omglim}
\end{cases}
\end{equation}
To determine which parameter values correspond to each regime, we use the results derived in the previous section for symmetric dark matter. There we saw that the relic density was enhanced for the interval~\eqref{eq:enhint},
\begin{equation}
5\times 10^{-43}\,m_\chi^4\left(\frac{\beta}{1 + \beta}\right)^2\lesssim \mu^2 \lesssim 1\times 10^{-41}\,m_\chi^4,
\end{equation}
and suppressed for the interval~\eqref{eq:supint}
\begin{equation}
\mu^2 \lesssim 5\times 10^{-43}\,m_\chi^4\left(\frac{\beta}{1 + \beta}\right)^2.
\end{equation}
Therefore, for a fixed value of the asymmetry $C$, these two cases would drive the dark matter species towards the weakly or strongly asymmetric regimes respectively.
Again, we can invert the expressions for the asymptotic comoving densities~\eqref{eq:ychibarapp} and~\eqref{eq:ychiapp} and, using~\eqref{eq:dmtot}, find the annihilation cross section required to produce the observed relic density. Then, depending on the timing of freeze-out (see equations~\eqref{eq:gb_freeze}-\eqref{eq:st_freeze}), the cross section and asymmetry are related via
\begin{equation}
\langle\sigma v\rangle \simeq \frac{a}{C}\coth^{-1}\left(\frac{\omega}{C}\right) \times
\begin{cases}
10\,\left(x_f^{GB}\right)^{5/3}/(3\lambda_{GB}) &; \quad (\mbox{GB}\,\mbox{regime})\\
1.1\,x_t^{RS}/\lambda_{GR} &; \quad (\mbox{RS}\,\mbox{regime})\\
2\,\left(x_f^{GR}\right)/\lambda_{GR} &; \quad (\mbox{GR}\,\mbox{regime})\label{eq:sigma_asym}\\
\end{cases}
\end{equation}
where $\omega = \Omega_{DM}h^2/(2.75\times 10^8 m_\chi)$ and $a=1.167\times 10^{-17} \mbox{cm}^3\mbox{s}^{-1}$.
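As an illustration of how~\eqref{eq:sigma_asym} is used, the sketch below evaluates the GR-regime branch, writing $\coth^{-1}(y) = \tfrac{1}{2}\ln[(y+1)/(y-1)]$ (valid for $C < \omega$). The parameter values and the fixed freeze-out point are assumptions for illustration only:

```python
import math

# Illustrative (assumed) parameters:
m_chi = 10.0                  # GeV
g_star = 80.0
Omega_h2 = 0.1187
a_conv = 1.167e-17            # GeV^-2 -> cm^3 s^-1 conversion factor
lam_GR = 0.264 * math.sqrt(g_star) * 1.22e19 * m_chi
x_f = 22.0                    # representative freeze-out point, held fixed
omega = Omega_h2 / (2.75e8 * m_chi)

def sigma_required(C):
    """Required <sigma v> (cm^3 s^-1) in the GR regime for asymmetry C < omega."""
    arccoth = 0.5 * math.log((omega / C + 1.0) / (omega / C - 1.0))
    return (a_conv / C) * arccoth * 2.0 * x_f / lam_GR

# C -> 0 (symmetric) limit: (1/C) arccoth(omega/C) -> 1/omega
sigma_sym_limit = a_conv * 2.0 * x_f / (omega * lam_GR)
```

In the $C \to 0$ limit the required cross section reduces to $2a\,x_f/(\omega\lambda_{GR})$, roughly twice the symmetric-only value, since by~\eqref{eq:omglim} both $\chi$ and $\bar{\chi}$ then contribute to $\Omega_{DM}h^2$; as $C \to \omega$ the required cross section grows without bound.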
The numerical results for the required annihilation cross section are plotted in figures~\ref{fig:sig_C_cont_varbeta100} and~\ref{fig:sig_C_cont_varbeta10} (solid curves) for $m_\chi = 100$ GeV and $m_\chi = 10$ GeV respectively. The different curves within each panel correspond to different values of $\mu^2$ and we have reduced the magnitude of $\beta$ in the successive panels. In each figure we plot the standard cosmology result (black) for reference.
\begin{figure}[tbp]
\centering
\includegraphics[scale = 0.85, trim = 0 0 0 0, clip = true]{sigma_C_cont_varbeta100_fermi}
\caption{\label{fig:sig_C_cont_varbeta100} Iso-abundance contours in the $(\langle\sigma v\rangle, C)$ plane corresponding to the observed dark matter abundance $\Omega_{DM}h^2 = 0.1187$ for a $100$ GeV WIMP. The contours shown are for $\mu^2 = 10^{-38}$ GeV$^2$ (solid blue curve),
$\mu^2 = 10^{-44}$ GeV$^2$ (solid red curve) and $\mu^2 = 10^{-50}$ GeV$^2$ (solid yellow curve). Also shown is the standard cosmology result (solid black curve). The panels correspond to $\beta = 0.1509$ (top left), $\beta = 10^{-5}$ (top right), $\beta = 10^{-10}$ (bottom left) and $\beta = 10^{-15}$ (bottom right). Note that, for $\beta = 10^{-15}$, the contours for $\mu^2 = 10^{-44}$ GeV$^2$ and $\mu^2 = 10^{-50}$ GeV$^2$ (almost) coincide. In each panel we have superimposed the constraints derived from the Fermi-LAT gamma ray data~\cite{Fermi} with the regions below the dark purple and magenta (dot-dashed) curves excluded for the $\mu^+\mu^-$ and $b\bar{b}$ annihilation channels respectively. We have also indicated the region (below the dot-dashed blue curve) for which the asymmetric detection signal in the Gauss-Bonnet scenario exceeds the symmetric signal in the standard scenario.}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[scale = 0.85, trim = 0 0 0 0, clip = true]{sigma_C_cont_varbeta10_fermi}
\caption{\label{fig:sig_C_cont_varbeta10} Same as figure~\ref{fig:sig_C_cont_varbeta100} but for $m_\chi = 10$ GeV. In each panel the contour for $\mu^2 = 10^{-38}$ GeV$^2$ (almost) overlaps the standard cosmology result.}
\end{figure}
Initially the curves are vertical and the relic density is determined solely by the annihilation cross section. In this region the ratio $C/Y^\infty_{(sym)}$ is small and each component behaves like symmetric dark matter. As both the annihilation cross section and the asymmetry increase we transition into a regime which is strongly asymmetric where the curves are horizontal. Here the density of the minority component is exponentially suppressed and the relic abundance is fixed by the asymmetry $C$. This general behaviour is exhibited regardless of the values of $\mu^2$ or $\beta$, however, the magnitude of the annihilation cross section which separates the weakly and strongly asymmetric regions depends significantly on the combination of $\mu^2$ and $\beta$ (see~\eqref{eq:sigma_asym}).
Since the vertical section of each curve corresponds to the weakly asymmetric regime, the position of the vertical asymptotes can be deduced simply from figure~\ref{fig:musq_sigma_cont_varbeta} (with allowance for the additional factor of $\sim 2$ due to the $\chi$ and $\bar{\chi}$ contributions). When the annihilation cross section is enhanced in figure~\ref{fig:musq_sigma_cont_varbeta}, the curves in figures~\ref{fig:sig_C_cont_varbeta100} and~\ref{fig:sig_C_cont_varbeta10} will be shifted to the right of the standard cosmology result. Similarly, when the symmetric cross section is suppressed, the asymmetric curves will be shifted towards the left. Thus the symmetric cross section determines the vertical asymptote of the required asymmetric cross section.
Consequently, just like the symmetric case, the required annihilation cross section is reduced for all values of $\mu^2$ when $\beta = 0.1509$ (panel 1), getting smaller with decreasing $\mu^2$. Then, as the magnitude of $\beta$ is decreased (in successive panels), the curves are shifted towards larger cross sections. There is a limit however, to how much each curve is shifted for a fixed value of $\mu^2$. For example, in figure~\ref{fig:sig_C_cont_varbeta100}, the $\mu^2 = 10^{-38}$ GeV$^2$ case (solid blue) is shifted to higher cross sections when $\beta$ is reduced from $\beta = 0.1509$ to $\beta = 10^{-5}$ (i.e. going from panel 1 to panel 2). But, as the value of $\beta$ is reduced further in the successive panels, the curve does not move. A similar thing happens for the $\mu^2 = 10^{-44}$ GeV$^2$ case (solid red) once $\beta \lesssim 10^{-10}$ (panels 3 and 4). We understand this by noting that once the value of $\beta$ has dropped below the threshold given in~\eqref{eq:rs_freeze}, the behaviour of each curve is given by the Randall-Sundrum result (see~\eqref{eq:sigma_asym}), and is therefore independent of $\beta$.
The increased annihilation cross section of the asymmetric dark matter species in the Gauss-Bonnet braneworld scenario gives rise to an interesting prospect, first pointed out in~\cite{Gelmini:2013awa}: if the cross section is large enough, it is possible that the annihilation rate and in turn the indirect detection signal of asymmetric dark matter could be enhanced with respect to the symmetric signal in the standard scenario, despite the suppressed abundance of the minority dark matter component. This behaviour, which is contrary to the usual expectation, is possible in both the quintessence and scalar-tensor scenarios~\cite{Gelmini:2013awa}, as well as the Randall-Sundrum braneworld model~\cite{Meehan:2014zsa}. Since we have shown that the required annihilation cross section in the GB braneworld model is increased by up to several orders of magnitude, we would expect similar behaviour here also.
Using the formalism developed in~\cite{Meehan:2014zsa}, we indicate in figures~\ref{fig:sig_C_cont_varbeta100} and~\ref{fig:sig_C_cont_varbeta10} the regions in the $(\langle\sigma v\rangle,C)$ plane that produce an amplified asymmetric dark matter detection signal (dot-dashed blue curve). To compare our results with experiment, we also show the region excluded by the latest Fermi-LAT data~\cite{Fermi} (dot-dashed purple and magenta curves). Combining the two, the allowed region of parameter space that produces an amplified detection signal is given by
\begin{equation}
\langle\sigma v\rangle^{GR} < \langle\sigma v\rangle\gamma < \langle\sigma v\rangle_{\mathrm{Fermi}},
\end{equation}
where $\langle\sigma v\rangle^{GR}$ is the required annihilation cross section for symmetric dark matter in the standard cosmological scenario (see section~\ref{sec:sdm}) and $\gamma$ is a damping factor that arises from the asymmetry between the particles $\chi$ and antiparticles $\bar{\chi}$, given by (see~\cite{Meehan:2014zsa})
\begin{equation}
\gamma\equiv \frac{2Y_\chi Y_{\bar{\chi}}}{\left(Y_\chi + Y_{\bar{\chi}}\right)^2} = \frac{\omega^2 - C^2}{2\omega^2}.
\end{equation}
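The second equality above follows immediately upon substituting $Y_\chi + Y_{\bar{\chi}} = \omega$ and $Y_\chi - Y_{\bar{\chi}} = C$. A small check (function name ours):

```python
def gamma_damping(omega, C):
    """Damping factor gamma = 2 Y_chi Y_chibar / (Y_chi + Y_chibar)^2
    expressed via the total density omega and the asymmetry C."""
    return (omega**2 - C**2) / (2.0 * omega**2)
```

Note that $\gamma = 1/2$ for a symmetric species ($C = 0$) and $\gamma \to 0$ in the maximally asymmetric limit $C \to \omega$, where the minority component vanishes.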
Figures~\ref{fig:sig_C_cont_varbeta100} and~\ref{fig:sig_C_cont_varbeta10} show that it is possible to produce an amplified asymmetric detection signal in the Gauss-Bonnet braneworld model, however, the allowed region decreases as the dark matter particle mass drops from $m_\chi = 100$ GeV to $m_\chi = 10$ GeV due to the more stringent Fermi-LAT constraints.
\section{Conclusions}
\label{sec:con}
Relic abundance calculations provide an important test of non-standard cosmological scenarios in the early pre-BBN universe (see~\cite{Gelmini:2009yh} for further discussion). In this article we have revisited the relic abundance investigation in the Gauss-Bonnet braneworld scenario in which a Gauss-Bonnet curvature invariant is added to the Randall-Sundrum braneworld action. A previous investigation~\cite{PhysRevD.79.103528} found that the dark matter density is suppressed in the GB braneworld model; however, this conclusion is based on a highly contrived assumption that collapses the Randall-Sundrum expansion era, leading to a slower early time expansion law. We find that when this assumption is relaxed, the early time expansion rate can be either faster or slower than the standard expansion law, depending on the model parameters. In turn, the dark matter relic abundance is correspondingly either enhanced or suppressed by up to several orders of magnitude with respect to the standard cosmology result. Importantly, when realistic parameter values are chosen, the early time expansion rate is typically faster than the standard expansion law during the era of dark matter decoupling and the resulting relic abundance is enhanced. Moreover, in the limit $\beta\lll 1$ (corresponding to $\mathcal{R}_m \gg 1$) the usual Randall-Sundrum type behaviour is recovered~\cite{PhysRevD.70.083531,PhysRevD.79.115023}.
We have also investigated the GB braneworld effect on asymmetric dark matter species and found that the enhanced annihilation cross section required to provide the observed relic density is capable of producing an amplified annihilation signal with respect to the symmetric signal in the standard cosmological scenario. This effect, which is contrary to the usual expectation, has also been demonstrated in quintessence, scalar-tensor~\cite{Gelmini:2013awa} and Randall-Sundrum braneworld models~\cite{Meehan:2014zsa}.
The implications of the latest Fermi-LAT constraints on the dark matter annihilation cross section have been considered for both the symmetric and asymmetric models. For small $\beta$, corresponding to realistic values for the mass ratio $\mathcal{R}_m$, larger values of $\mu^{2}$ are favoured, suggesting that the Gauss-Bonnet braneworld expansion rate has reduced to the standard expansion law before dark matter decoupling.
The present investigation is timely because the weak scale cross section relevant to generic relic abundance calculations should be accessible to the next generation of direct and indirect detection experiments~\cite{Bauer:2013ihz}. Therefore, additional constraints and/or an unexpected signal from these experiments could point to new physics in the era prior to BBN.
Our investigation also has implications for dark matter particle models and scans of supersymmetric parameter space. If the early time expansion rate is in fact slower than the standard scenario, particles which are typically overproduced in the standard cosmology and thus ruled out by relic density constraints, may be rescued in the GB scenario.
\section{I. Introduction}
When a quantum system undergoes a unitary evolution and comes back to its initial state it acquires a
phase. The acquired phase can be of two types; the dynamic phase which
depends on the evolution Hamiltonian and the geometric phase, which depends only on the evolution
path of the quantum system in the projective Hilbert space \cite{panch,berry,aharonov}. For a two level quantum system
(spin-$\frac{1}{2}$), the projective Hilbert space is a
sphere and the geometric phase depends on the solid angle subtended at the center of the sphere by the evolution path of the state vector. The concept of geometric phase first arose in the adiabatic
context \cite{berry} but later Aharonov {\it et al.} \cite{aharonov} gave a non-adiabatic generalization of the theory of
geometric phase. In the adiabatic approach the state vector is parallel
transported adiabatically to ensure that the system always remains in one of the eigenstates (assuming that the system initially is prepared in one of the
eigenstates) of the instantaneous Hamiltonian during the evolution. In the non-adiabatic approach the system is changed abruptly and comes back to its initial state through different intermediate states.
From the total phase acquired by the quantum system, the dynamical phase is eliminated by various methods in order to experimentally measure the geometric
phase. In magnetic resonance experiments this is achieved
by a spin echo \cite{ernstbook}. The theory of geometric phase\ of a pure quantum system or pure state geometric phase is well understood and has been
demonstrated experimentally by various experimental systems such as Nuclear Magnetic Resonance (NMR) \cite{suter}, single \cite{kwiat} and
two photon interferometry \cite{brendel}.\\\\
Recently it has been proposed that fault tolerant quantum computation can be performed using geometric
phase as it depends on the path and not on the speed of the evolution \cite{fault1,fault2}. To perform computation using geometric phase, it is necessary
to understand the relation between geometric
phase and decoherence. As decoherence or relaxation leads a system from a pure state to a mixed state, an understanding of the mixed state
geometric phase is needed. In 1986 Uhlmann mathematically introduced the concept of mixed state geometric phase \cite{uhlmann}.
In that work, Uhlmann considered a large system in a pure state with a subsystem in a mixed state, and identified the unitary evolution under which the subsystem is transported in a maximally parallel manner \cite{uhlmann}. Recently Sj\"{o}qvist {\it et al.} have provided a new description for mixed state geometric phase
in terms of quantum interferometry \cite{sjoqvist}. In a quantum interferometer a quantum system undergoes a series of unitary evolutions,
after which the probability of finding the system in one of its eigenstates becomes an oscillatory function of some
control parameter. The oscillation pattern of the probability resembles the well known optical interference pattern.
According to Sj\"{o}qvist {\it et al.} the shift of the interference pattern is a function of the
geometric phase\ acquired by the quantum system during the unitary evolutions, as well as the purity of the initial internal state (such as, polarization of a photon) of
the quantum systems involved in the
interferometric operation \cite{sjoqvist}. The geometric phase can be directly measured from the shift of the interference pattern.
Mixed state geometric phase has been experimentally demonstrated by Du {\it et al.} \cite{du}
using NMR and by Ericsson {\it et al.} \cite{eric} using single photon interferometry. Du {\it et al.} have experimentally
demonstrated the mixed state geometric phase by measuring the relative phase change
of an auxiliary spin. In the present work we measure the shift of the interference pattern in Sj\"{o}qvist's interferometry model and show that the
shift is the same as the theoretically predicted geometric phase\ as a function of the mixed state purity. We also demonstrate the effect of the mixed state geometric phase on the
interference visibility.
\section{II. Theory}
\subsection{Quantum Interference}
Let us consider Sj\"{o}qvist's interferometry model, shown in Fig.\ref{M-Z} \cite{hosoya}. Photons entering the interferometer along a
horizontal path are split into two perpendicular paths by a beam splitter ($BS_1$).
On the horizontal path the photons are
globally phase shifted, whereas on the other path the internal states of the photons, say the
polarization, undergo a unitary evolution {\it U}. The photons are reflected by two mirrors ($M_1$,$M_2$) and the two
paths meet again at another beam splitter ($BS_2$). A detector detects the photons coming only along the
horizontal path. The detected intensity shows an interference pattern as a function of the phase shift.\\
A photon can take one of two possible paths, and in each path it can have one of two possible polarizations, so
the Hilbert space of the combined ``path-internal state'' system is four dimensional ($2 \otimes 2$). In NMR, the above
interferometry
model can be simulated using two coupled spin-$\frac{1}{2}$ nuclei, which have a Hilbert space of identical
dimension. One qubit represents the path qubit and the other, termed the spin qubit, represents the
internal state.\\\\
The equivalent quantum circuit of the Sj\"{o}qvist's interferometry model is shown in Fig.\ref{circuit} \cite{hosoya}.
The eigenstates $|0\rangle$ and $|1\rangle$ of the path
qubit represent the two paths, horizontal and vertical respectively. The path qubit is prepared in the pure state
$|0\rangle \langle 0|$ at the beginning of the interferometry operation. The beam splitter is represented by a Hadamard gate
given by,
\begin{eqnarray}
U_H
=
\frac{1}{\sqrt{2}}
\left[
\begin{array}{cr}
1& 1\\
1&-1
\end{array}
\right].
\label{had}
\end{eqnarray}
As the phase shifter and the unitary operator {\it `U'} are path specific, they are represented by two
controlled operations, together given by \cite{sjoqvist},
\begin{eqnarray}
U_C
=
\left[
\begin{array}{lr}
0&0\\
0&1
\end{array}
\right]
\otimes
U
+
\left[
\begin{array}{lr}
e^{i \chi}&0\\
0&0
\end{array}
\right]
\otimes
\mathds{1}.
\label{com}
\end{eqnarray}
Depending upon the state of the first qubit, the operator $U_C$ either applies {\it U} on
the second qubit or phase shifts the first qubit. The mirrors in Fig.\ref{M-Z} represent NOT gates, given by,
\begin{eqnarray}
U_M
=
\left[
\begin{array}{lr}
0&1\\
1&0
\end{array}
\right].
\label{not}
\end{eqnarray}
The input state of the combined ``path-spin'' system can be written as \cite{hosoya},
\begin{eqnarray}
\rho_{in} = (|0\rangle \langle 0|)^P \otimes \rho_0^S ,
\end{eqnarray}
where P stands for path and $\rho_0^S$ is the density matrix corresponding to the initial state of the spin
qubit which can be either pure or mixed. The initial density matrix $\rho_{in}$ is transformed into the final density
matrix $\rho_{out}$ as,
\begin{eqnarray}
\rho_{out} = U_H U_M U_C U_H \hspace*{0.1cm} \rho_{in} \hspace*{0.1 cm} {U_H}^{\dagger} {U_C}^{\dagger} {U_M}^{\dagger}
{U_H}^{\dagger}.
\end{eqnarray}
Substituting the matrix forms of the operators given by Eq.(\ref{had}-\ref{not}) we obtain \cite{sjoqvist},
\small
\begin{eqnarray}
\rho_{out} =
\frac{1}{4}
\left[
\left(
\begin{array}{lr}
1&1\\
1&1
\end{array}
\right)
\otimes
U\rho_0^S U^{\dagger}
+
\left(
\begin{array}{rr}
1&-1\\
-1&1
\end{array}
\right)
\otimes
\rho_0^S
+
e^{i \chi}
\left(
\begin{array}{rr}
1&1\\
-1&-1
\end{array}
\right)
\otimes
\rho_0^S U^{\dagger}
+
e^{-i \chi}
\left(
\begin{array}{lr}
1&-1\\
1&-1
\end{array}
\right)
\otimes
U\rho_0^S
\right].
\end{eqnarray}
\large
The detected signal in the horizontal path ($|0\rangle$ eigenstate of path qubit) is given by the trace of the reduced
density matrix of the spin qubit corresponding to the $|0\rangle$ state of the path qubit. The output intensity is given
by \cite{hosoya},
\begin{eqnarray}
I &=& \frac{1}{4} Tr_S \left(U\rho_0^S U^{\dagger} + \rho_0^S + e^{-i \chi}U\rho_0^S + e^{i \chi}\rho_0^S U^{\dagger} \right)
\nonumber\\
&=& \frac{1}{2}\left(1 + |Tr_S \left(U\rho_0^S \right)|\cos\left[\chi - \arg Tr_S \left(U\rho_0^S \right)\right]\right) \nonumber
\\
&=& \frac{1}{2}\left(1 + \nu \cos\left[\chi - \phi \right]\right),
\label{intpat}
\end{eqnarray}
where the amplitude of oscillation $\nu = |Tr_S \left(U\rho_0^S \right)|$ is called the visibility of the interference, and the
shift $\phi = \arg Tr_S \left(U\rho_0^S \right)$ depends on the unitary operator {\it U} acting on the spin qubit density
matrix $\rho_0^S$.\\\\
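As a check of Eq.\ref{intpat}, the circuit of Fig.\ref{circuit} can be simulated numerically. The Python sketch below (with an illustrative choice of {\it U} and $\rho_0^S$, not the experimental ones) propagates $\rho_{in}$ through $U_H$, $U_C$, $U_M$ and $U_H$, and compares the detected intensity with $\frac{1}{2}(1 + \nu\cos[\chi-\phi])$:

```python
import numpy as np

# Two-qubit simulation of the interferometer (path qubit (x) spin qubit).
H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard (beam splitter)
X = np.array([[0, 1], [1, 0]])                     # NOT gate (mirror)
I2 = np.eye(2)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])  # path projectors |0><0|, |1><1|

def intensity(chi, U, rho_S):
    """Detected intensity on the horizontal (|0>) path after the full circuit."""
    U_C = np.kron(P1, U) + np.exp(1j * chi) * np.kron(P0, I2)  # controlled U / phase
    circuit = np.kron(H2, I2) @ np.kron(X, I2) @ U_C @ np.kron(H2, I2)
    rho_in = np.kron(P0, rho_S)
    rho_out = circuit @ rho_in @ circuit.conj().T
    return float(np.real(np.trace(np.kron(P0, I2) @ rho_out)))

# Illustrative U (a z-rotation) and a pure spin state |0><0|
U = np.diag([np.exp(1j * 0.7), np.exp(-1j * 0.7)])
rho_S = np.diag([1.0, 0.0])
nu = abs(np.trace(U @ rho_S))
phi = np.angle(np.trace(U @ rho_S))
for chi in np.linspace(-np.pi, np.pi, 9):
    assert abs(intensity(chi, U, rho_S)
               - 0.5 * (1 + nu * np.cos(chi - phi))) < 1e-12
```

The same check goes through unchanged for a mixed $\rho_0^S$, since the intensity is linear in the input density matrix.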
A mixed state can be thought of as a mixture of several pure states, incoherently weighted by their respective
probabilities. Therefore, for a mixed input spin state the interference pattern given by Eq.\ref{intpat} takes the
form \cite{sjoqvist},
\begin{eqnarray}
I = \sum_k w_k I_k = \frac{1}{2} \left( 1 + \sum_k w_k \nu_k \cos\left[ \chi - \phi_k \right] \right),
\label{intmix}
\end{eqnarray}
where the index $k$ labels the individual pure states with probabilities $w_k$. The above equation can be written in the form of Eq.\ref{intpat} as
\begin{eqnarray}
I \propto 1 + \tilde{\nu}\ \cos(\chi - \tilde{\phi}),
\end{eqnarray}
by defining the mixed state phase shift $\tilde{\phi}$ and visibility $\tilde{\nu}$ as \cite{sjoqvist},
\begin{eqnarray}
\tilde{\phi}& = & \arg \left(\sum_k w_k \nu_k e^{i \phi_k} \right),
\label{mixphi}
\end{eqnarray}
\begin{eqnarray}
\tilde{\nu}& =& \left| \sum_k w_k \nu_k e^{i \phi_k} \right|.
\label{mixnu}
\end{eqnarray}
\subsection{Geometric phase and parallel transport condition}
The parallel transport condition for any state vector $|\psi (t)\rangle$ is given by,
\begin{eqnarray}
\langle \psi (t)|\dot{\psi}(t)\rangle = 0,
\label{parcon}
\end{eqnarray}
which means that the phase does not change when $|\psi (t)\rangle$ evolves to $|\psi (t+\delta t)\rangle$ for infinitesimal
$\delta t$. When a mixed
state given by the density matrix $\rho_m (t)$ evolves under a unitary operator {\it A(t)}, the condition given in
Eq.\ref{parcon} leads to \cite{sjoqvist},
\begin{eqnarray}
Tr \left[ \rho_m (t) \dot{A}(t) A^{\dagger}(t)\right] = 0.
\end{eqnarray}
This condition, although necessary, is not sufficient to determine the unitary operator {\it A(t)}, as it fixes {\it
A(t)} only up to N phase factors, where N is the dimension of the Hilbert space. These N phase factors can be determined from the
conditions \cite{wagh},
\begin{eqnarray}
\langle k(t)\left| \dot{A}(t)\ A^{\dagger}(t)\right|k(t)\rangle = 0, \hspace*{2cm} k = 1,2,\ldots,N,
\label{strcon}
\end{eqnarray}
where $|k(t)\rangle$ are the orthonormal eigenstates of $\rho_m (t)$. The unitary operator {\it A(t)}, obtained by
solving the above conditions, parallel transports the mixed state density matrix $\rho_m$ so that the dynamical phase
becomes identically zero.
The geometric phase\ $\gamma_g$, acquired by a mixed state when the state evolves under {\it A(t)} along a curve $\Gamma$, is given by \cite{sjoqvist},
\begin{eqnarray}
\gamma_g[\Gamma] = \arg Tr[\rho_m A(t)] = \arg\left( \sum_k w_k \nu_k e^{i \beta_k}\right),
\label{mgp}
\end{eqnarray}
where $\beta_k$ is the geometric phase\ associated with the $k^{th}$ pure state. The
expression for the geometric phase\ given by Eq.\ref{mgp} is similar to the
expression for the interferometric phase shift given by Eq.\ref{mixphi} and therefore the interferometric phase shift
directly gives the geometric phase\ of the spin qubit.\\\\
In the present work we consider the mixed state of a spin-$\frac{1}{2}$ particle, whose density operator can in
general be written as,
\begin{eqnarray}
\rho_m = \frac{1}{2} \left( \mathds{1} + \vec{r}\cdot\vec{\sigma} \right),
\end{eqnarray}
where the length $r$ of the Bloch vector $\vec{r}$ is equal to one for pure states, less than one for mixed states, and
remains unchanged during unitary evolution of the state. The components of $\vec{\sigma}$ are the Pauli matrices,
$\vec{\sigma} = [\sigma_x,\sigma_y,\sigma_z]$. $\rho_m$ represents a mixture of its two eigenvectors, with eigenvalues
$\frac{1}{2}\left( 1 \pm r\right)$.\\
Let us consider that the Bloch vector $\vec{r}$ for a mixed state (r$<$1) traces out a cyclic curve in the Bloch sphere which
subtends a geodesically closed solid angle of $\Omega$. During the process the two eigenstates of the density operator with eigenvalues
$\frac{1}{2}\left( 1 \pm r\right)$ acquire geometric phase\ $\mp \frac{\Omega}{2}$ respectively\cite{anandan}. The quantity $\sum_k w_k
\nu_k e^{i \phi_k}$ (Eq.\ref{mixphi} and \ref{mixnu}), using the fact that $\nu_k = 1$ for cyclic evolution\cite{sjoqvist}, becomes,
\begin{eqnarray}
\sum_k w_k \nu_k e^{i \phi_k}& =& \frac{1}{2}\left(1-r \right)e^{i \frac{\Omega}{2}} + \frac{1}{2}\left(1+r \right)e^{-i
\frac{\Omega}{2}}\nonumber \\
&=& \cos\left(\frac{\Omega}{2}\right) - i\, r \sin\left(\frac{\Omega}{2}\right).
\label{quan}
\end{eqnarray}
Using the expression given by Eq.\ref{quan}
the shift of interference pattern (Eq.\ref{mixphi}) and the visibility (Eq.\ref{mixnu}) for mixed state can be respectively written as,
\begin{eqnarray}
\tilde{\phi} = - \arctan\left( r \tan \left(\frac{\Omega}{2}\right)\right),
\label{shiftgeo}
\end{eqnarray}
and
\begin{eqnarray}
\tilde{\nu} = \sqrt{\cos^{2}\left(\frac{\Omega}{2}\right) + r^{2} \sin^{2}\left(\frac{\Omega}{2}\right)}.
\label{visgeo}
\end{eqnarray}
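Equations \ref{shiftgeo} and \ref{visgeo} follow directly from Eq.\ref{quan}. A few lines of Python (a numerical sketch, with arbitrarily chosen values of $r$ and $\Omega$) confirm that the argument and modulus of $\sum_k w_k \nu_k e^{i \phi_k}$ reproduce them:

```python
import numpy as np

def shift_and_visibility(r, Omega):
    """Eq. (quan): eigenstates with weights (1 -+ r)/2 acquire phases +-Omega/2."""
    z = 0.5 * (1 - r) * np.exp(1j * Omega / 2) \
      + 0.5 * (1 + r) * np.exp(-1j * Omega / 2)
    return np.angle(z), abs(z)

# Arbitrary test values (0 <= r <= 1; |Omega/2| < 90 deg so arctan applies directly)
for r in (0.0, 0.5, 0.87, 1.0):
    for Omega in np.radians([60.0, 90.0, 120.0]):
        phi, nu = shift_and_visibility(r, Omega)
        assert abs(phi + np.arctan(r * np.tan(Omega / 2))) < 1e-12       # Eq. (shiftgeo)
        assert abs(nu - np.sqrt(np.cos(Omega / 2)**2
                                + r**2 * np.sin(Omega / 2)**2)) < 1e-12  # Eq. (visgeo)
```

Note the limiting cases: $r=1$ gives the pure state shift $-\Omega/2$ with unit visibility, while $r=0$ gives zero shift.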
In the present paper we experimentally measure this phase shift and visibility for a two-qubit system,
by NMR. The phase shift directly gives the geometric phase of the spin qubit.
\section{III. Experimental Procedure}
Experiments were performed on Carbon-13 enriched $^{13}CHCl_3$ dissolved in $CDCl_3$. The $^{13}C$ and $^{1}H$ nuclei form a two
qubit system with a J-coupling of 209 Hz. $^{1}H$ and $^{13}C$ respectively are used as the path and spin qubits.
The spin-lattice ($T_1$) relaxation times of $^{13}C$ and $^{1}H$ at room temperature were measured
as 21s and 16s respectively, and the spin-spin ($T_2$) relaxation times were measured to be 0.29s and 3.4s respectively. All the experiments were
performed using a
Bruker DRX 500 MHz (11.2 Tesla) NMR spectrometer where the resonance frequencies for $^{13}C$ and $^{1}H$ are 125.76 MHz and 500.13 MHz
respectively. The pulse
programme is given in Fig.\ref{pp}. The pulse programme contains several parts which are described below:
\subsection{Creation of Pseudo-pure state (PPS)}
The ``path-spin" system is first prepared in a pseudo-pure state
using the method of spatial averaging \cite{cory98}. The pulse sequence is as follows,
\begin{eqnarray}
\left(\frac{\pi}{3}\right)_x^H - G_z - \left(\frac{\pi}{4}\right)_x^H - \frac{1}{4J_{CH}} - \left(\pi \right)_y^{H,C} - \frac{1}{4J_{CH}}
-\left(\frac{\pi}{4}\right)_{\bar{y}}^H - \left(\pi\right)_{\bar{y}}^{H,C} - G_z
,
\end{eqnarray}
where the superscript H or C identifies the spin (proton or carbon respectively) on which the r.f. pulse is applied and the subscript {\it x} or {\it y}
determines the phase of the pulse. $J_{CH}$ is
the J-coupling and $G_z$ indicates a {\it z}-gradient which destroys all coherences ({\it x} and {\it y} magnetizations) and retains only longitudinal
magnetization ({\it z} magnetization component). At the end of this sequence the system is prepared in the $|00\rangle$ pseudo-pure state \cite{cory98}.
\subsection{Creation of Mixed state}
After preparing the $|00\rangle$ PPS, an $\alpha$-degree pulse is applied on the carbon spin, followed by a {\it z}-gradient.
In the Bloch representation this creates a mixed state vector whose
length is $r = \cos\alpha$ [$r < 1$ for $0^{\circ} < \alpha \leq 90^{\circ}$], where the value of $r$ determines the purity of the state.
This pulse
sequence can be written as,
\begin{eqnarray}
\left( \alpha \right)_x^C - Gz.
\end{eqnarray}
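The relation between the pulse angle $\alpha$ and the purity of the resulting state can be sketched numerically (assuming, as above, that the gradient removes the transverse magnetization, leaving $r = \cos\alpha$ along $z$):

```python
import numpy as np

def mixed_state(alpha_deg):
    """Density matrix after an alpha pulse on |0><0| followed by a z-gradient.

    The gradient destroys the transverse (x, y) components of the Bloch
    vector, so only the length r = cos(alpha) along z survives.
    """
    r = np.cos(np.radians(alpha_deg))
    rho = 0.5 * np.array([[1 + r, 0.0], [0.0, 1 - r]])  # (1 + r*sigma_z)/2
    return r, rho

for alpha in (0.0, 30.0, 60.0, 90.0):
    r, rho = mixed_state(alpha)
    purity = np.trace(rho @ rho).real
    assert abs(purity - 0.5 * (1 + r**2)) < 1e-12  # Tr(rho^2) = (1 + r^2)/2
```

Thus $\alpha = 0^{\circ}$ leaves a pure state ($r=1$), while $\alpha = 90^{\circ}$ yields the maximally mixed state ($r=0$, $Tr\,\rho^2 = 1/2$).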
\subsection{The Interferometer}
The Hadamard gate (Beam Splitter) is implemented by the sequence $\left(\frac{\pi}{2}\right)_y^H -
\left(\pi\right)_x^H$ \cite{jonesgate}. The important operation of the
interferometry part is the controlled operation $U_C$. $U_C$ contains a controlled phase shift gate applied on the path
qubit and a
controlled {\it U} operation acting on the spin qubit.\\
\small {\bf Controlled Phase shift:} \large\\
\hspace*{0.5cm}
The controlled phase shift gate in the present context is different from the conventional two-qubit gate: here the path qubit acts as both the control and the target. The path qubit is phase
shifted by $\chi$ when it is in the $|0\rangle$ (horizontal path) state. The action of the phase shift gate $U_{\chi}$ can
be written as,
be written as,
\begin{eqnarray}
U_\chi|00\rangle = e^{i \chi}|00\rangle ;\:\nonumber
U_\chi|01\rangle = e^{i \chi}|01\rangle ;\:\nonumber
U_\chi|10\rangle = |10\rangle ;\:\nonumber
U_\chi|11\rangle = |11\rangle.
\end{eqnarray}
The pulse sequence for $U_\chi$ is \cite{jonesgate},
\begin{eqnarray}
\left(\frac{\pi}{2}\right)_x^H - \left( \chi \right)_{\bar{y}}^H - \left(\frac{\pi}{2}\right)_{\bar{x}}^H
\end{eqnarray}
\small {\bf Controlled {\it U}:} \large\\
\hspace*{0.5cm}
In the present case {\it U} is a geometric phase shift operator. The controlled geometric phase shift operator is
implemented by evolving the spin qubit in a cyclic path (the `Slice Circuit') in the Bloch sphere as shown in Fig.\ref{cycle}, when the path qubit is in
state $|1\rangle$ (vertical path). This is achieved by pulsing only on the $|10\rangle - |11\rangle$
subsystem. Two transition selective $\pi$
pulses are applied on the $|10\rangle - |11\rangle$ transition of $^{13}C$ with phases differing by $(\pi + \phi)$ \cite{ranathesis}. They
cause the Bloch vector to flip along one path, $\Gamma_1$, and come back to its initial orientation
along a different path, $\Gamma_2$. The loop ($\Gamma_1,\Gamma_2$) subtends a geodesically closed solid angle of
$\Omega = 2\phi$ \cite{suter}.
A spin echo sequence has been incorporated to eliminate the dynamical phase.
The pulse sequence is given by,
\begin{eqnarray}
\tau - \left(\pi \right)_x^H - \left(\pi\right)_{\theta}^{|10\rangle - |11\rangle} -
\left(\pi\right)_{\theta + \pi + \phi}^{|10\rangle - |11\rangle} - \left(\pi \right)_{\bar{x}}^H,
\end{eqnarray}
where $\tau$ is the total time of the two transition selective pulses. The second $\left(\pi \right)^H_{\bar{x}}$ pulse
in the above sequence restores the sign of the $^{1}H$ magnetization.\\\\
The mirror (NOT gate) is implemented by a $\left(\pi\right)_x^H$ pulse. It converts the state $|0\rangle$ (horizontal path) to state $|1\rangle$
(vertical path) and vice-versa. The sequence $\left(\frac{\pi}{2}\right)_y^H - \left(\pi\right)_x^H$ for the Hadamard gate
is repeated after the mirror, in order to implement the second beam splitter ($BS_2$).
\subsection{Measurement}
At the end of the interferometric operations, both the
qubits were detected after applying a {\it z}-gradient and a reading $\pi/2$ pulse on the detection qubit. The diagonal
part of the density matrix was then tomographed using the line intensities normalized to the respective equilibrium
spectra \cite{rana}.
\section{IV. Results}
The intensity of the signal detected only in the horizontal path is proportional to the total population of the
$|00\rangle$ and
$|01\rangle$ levels as these two energy levels correspond to the $|0\rangle$ state (horizontal path) of the path qubit. For
each $\chi$ the final density matrix (diagonal part only) was tomographed and the sum of the $|00\rangle$ and
$|01\rangle$ populations was plotted as the intensity.
Data were collected at 37 equidistant values of $\chi$ ranging from -360$^\circ$ to 360$^\circ$ to obtain the full
interference pattern.
Fig.\ref{pattern} shows the $^1H$ and $^{13}C$ spectra for different $\chi$ values for pure initial state
of the spin qubit. The three low-intensity lines in the carbon spectra arise from natural-abundance carbon coupled to deuterium in the solvent
CDCl$_3$. The intensities (sum of $|00\rangle$ and $|01\rangle$ populations) calculated from the normalized
spectral line intensities are plotted
as a function of phase shift $\chi$. Fig.\ref{pattern}(a) shows the pattern when {\it U} (the geometric phase shift
operator) was not applied.
As expected, no shift of the pattern from $\chi$ = 0 was observed, whereas Fig.\ref{pattern}(b) shows the pattern
corresponding to $\Omega$ = 180$^\circ$, for which a shift of -90$^\circ$ was observed, as expected from Eq.\ref{shiftgeo} for a
pure state (r = 1). In each plot the solid line represents the expected theoretical curve.
\subsection{The `shift - geometric phase' relationship (Eq.\ref{shiftgeo})}
For mixed input state of the spin qubit, the pattern shifts from $\chi$ = 0 for non-zero geometric phase. The amount of
shift is a
function of both the purity of the input state as well as the geometric phase\ of the spin qubit. Fig.\ref{shift} shows the dependence
of interferometric shift on the geometric phase\ and the purity of mixed state for $\Omega$ = 60$^\circ$ (\ref{shift}.a),
$\Omega$ = 90$^\circ$ (\ref{shift}.b) and
$\Omega$ = 120$^\circ$ (\ref{shift}.c). For a particular value of $\Omega$ and r, experiments have been performed for ten
equidistant points of $\chi$ in the range [-90$^\circ$,0$^\circ$]. The data were fitted with the function
$\mathscr{F}(\nu,\phi) = \nu \cos(\chi - \phi)$ to extract the shift. The shift is zero for r = 0 and
equals $-\frac{\Omega}{2}$ for r = 1 in all
the three cases. The solid line in each plot represents the theoretical
curve. Spectra corresponding to $\alpha$ = 0$^\circ$,30$^\circ$,50$^\circ$,70$^\circ$ and 90$^\circ$ have been shown for
$\chi$ = 30$^\circ$ (\ref{shift}.a), $\chi$ = 40$^\circ$ (\ref{shift}.b) and $\chi$ = 60$^\circ$ (\ref{shift}.c).
\subsection{The `visibility - geometric phase' relationship (Eq.\ref{visgeo})}
The visibility
of interference or the amplitude of oscillation is given by the difference between the maximum and the minimum intensities
in the interference pattern. The visibility was measured for different purity `r' of the spin qubit state.
Fig.\ref{visib} shows the visibility as a function of `r' for $\Omega$ = 120$^\circ$ (\ref{visib}.a), $\Omega$ = 180$^\circ$
(\ref{visib}.b) and $\Omega$ = 360$^\circ$ (\ref{visib}.c). $\Omega$ = 360$^\circ$ makes the visibility independent of r,
while for $\Omega$ = 180$^\circ$
the visibility changes linearly with r. The experimental data match the expected behavior (solid line) given by
Eq.\ref{visgeo}. Spectra corresponding to
$\alpha$ = 0$^\circ$,30$^\circ$,60$^\circ$ and 90$^\circ$ (for $\Omega$ = 180$^\circ$, $\alpha$ = 89$^\circ$ was applied
instead of 90$^\circ$ as the shift according to Eq.\ref{shiftgeo} becomes undefined for $\Omega$ = 180$^\circ$ and r = 0) have
been shown adjacent to each plot. While recording the spectra
the value of $\chi$ was chosen according to the shift of the pattern given by Eq.\ref{shiftgeo}. All the experiments for
$\Omega$ = 360$^\circ$ were performed at $\chi$ = 0$^\circ$, and those for $\Omega$ = 180$^\circ$ at $\chi$ = -90$^\circ$. For
$\Omega$ = 120$^\circ$, $\chi$ was chosen equal to the shift predicted by Eq.\ref{shiftgeo}.
\section{V. Conclusion}
The study of the mixed state geometric phase has become important ever since the geometric phase\ was proposed as a possible means of
performing fault-tolerant quantum computing. The pure state geometric phase\ is well understood and has been studied by various
experimental methods. Here we have reported the first
experimental measurement of mixed state geometric phase\ directly from the shift of a quantum interference pattern. We have
experimentally measured the visibility and the shift of the interference pattern as a function of the purity of the input
mixed state which agree with the theoretically expected results. This study shows that NMR interferometry is one of the
possible experimental methods to measure geometric phase\ of a pure as well as a mixed state. Future directions include studies of
non-cyclic geometric phase\ \cite{sjo-pla} and applications of geometric phase\ in fault tolerant quantum computations.
\section*{Acknowledgments}
We gratefully acknowledge Prof. K. V. Ramanathan for discussions. The use of DRX-500 high resolution liquid state
spectrometer of the NMR Research Centre, Indian Institute of
Science, Bangalore, funded by Department of Science and Technology (DST), New Delhi, is gratefully acknowledged. AK
acknowledges ``DAE-BRNS" for
``Senior Scientist scheme", and DST for a research grant.
\newpage
\section{Introduction}
\label{intro}
Phytoplankton produces organic matter in the surface ocean from sunlight through
photosynthesis, thus providing the source of energy for almost all marine living creatures, including
zooplankton, fish and larger animals~\cite{ML2005}. The efficiency of this
process depends on several factors, such as the rate of essential biochemical reactions,
the effectiveness of predation by higher organisms, nutrient abundance, and light availability.
All these factors are
affected by the transport and mixing processes taking place in the fluid environment
where phytoplankton organisms live.
Understanding the relations between laminar and turbulent flows,
and the distributions of planktonic species is a complex problem that has been
previously addressed from different perspectives~\cite{abra,denman1995biological,martin2002,martin2003phytoplankton,lo,GR2008,GF2020}.
{Particular interest has been put on the characterization of the statistical features of plankton density fields, in terms of variance fluctuation spectra~\cite{DP1976,Smith_etal_1988,LK2004,franks}. Within this framework, a major question is whether the scaling of planktonic spectra is different from that of an inert quantity that is passively transported by a turbulent flow. The issue is relevant both at a fundamental level, to assess the relative importance of fluid and reactive (biological, in the present case) dynamics, and to quantify the patchiness (meaning, heterogeneity) of plankton spatial distributions. Indeed, were the spectra of planktonic fields different from those of a passive (non-reactive) scalar in a given wavenumber (or frequency) range, this would indicate the dominance of biological activity in the corresponding interval of spatial (or temporal) scales. Moreover, the slope of such spectra gives information on the scale-by-scale intensity of plankton fluctuations and, hence, could allow to quantify the typical size of structures where the biological species are possibly more concentrated.}
{Relying on dimensional arguments, some single-species models~\cite{DP1976,DP1976b} of plankton dynamics in three-dimensional (3D) turbulence predict that, at sufficiently small scales, the population fluctuation spectrum should scale as that of the velocity field ({i.e. according to the Obhukov-Corrsin scaling} as is the case for a non-reactive scalar). Due to the biological dynamics, however, the planktonic spectrum should flatten at larger scales, which would correspond to reduced patchiness in this range of scales. An extension of this picture was obtained by considering two plankton populations interacting according to Lotka-Volterra dynamics~\cite{powell}. Such a model reproduces the results of the single-species model
when the interactions are neglected. In the presence of interactions, instead, it suggests that the reactive contribution to the spectrum of the density fluctuations of each species should be steeper than the spectrum of a non-reactive scalar (which would correspond to increased patchiness). Nevertheless, the full planktonic spectra turn out to be the sum of such a contribution and a non-reactive one, with different weights, making it difficult to draw general conclusions. In two-dimensional (2D) turbulent flows, forced at large scale, the same dimensional reasoning predicts flat spectra of reactive species (flatter than those of velocity fluctuations).
Both for a single population and two interacting ones, the spectral slope is the same as the one found for the concentration of an inert substance.}
{Several observational studies have compared
plankton density fluctuation spectra, obtained through measurements of fluorescence (a proxy for phytoplankton concentration), with theoretical predictions (see~\cite{franks} for a critical review in 3D turbulent flows). The results vary considerably across oceanic regions and do not appear conclusive~\cite{LK2004}. In some cases quite flat spectra, possibly suggesting the importance of biological activity, have been found (see, e.g., \cite{DEROT2015210}). Recent detailed numerical simulations of multicomponent reactions, of different orders, in fully developed homogeneous isotropic 3D turbulence have instead shown that reactive spectra are indistinguishable from those of non-reactive passive scalar fields~\cite{wu}.}
In this work, we aim at comparing the 2D and 3D advection-reaction-diffusion dynamics of a predator-prey system, corresponding to the phytoplankton-zooplankton (PZ) model, in the turbulent flow generated by a cylindrical obstacle.
Our motivation is twofold. On one side we are interested in testing the robustness of results already found in the 2D case~\cite{jaccod2021predator}, about the minimal flow ingredients needed to sustain a persistent bloom, when adding an extra dimension. On the other, based on the remarkably different cascade processes of 2D and 3D turbulence~\cite{alexakis2018cascades}, we intend to evaluate the impact of the ensuing multiscale turbulent dynamics, in two and three spatial dimensions, on the statistical features of the plankton population densities.
It is also worth noting that the majority of early studies addressing the conditions for plankton blooms under flow, which have been instructive to elucidate the basic mechanisms controlling the interplay between fluid and reactive dynamics, adopted the paradigm of chaotic advection, relying on 2D kinematic (i.e. synthetic) flows~\cite{toroczkai1998advection,neu,NLHP2002,fer,hl}. While it might appear, to some extent, reasonable that the overall phenomenology remains unchanged in the presence of 2D dynamic turbulent velocity fields, as it has been indeed recently shown~\cite{jaccod2021predator}, this is not straightforward when considering the extension to 3D authentic turbulence.
The model investigated here clearly cannot be considered fully representative of real plankton dynamics in the ocean. Our obstacle represents an idealized island (or an obstruction of another kind), and we account for neither vertical boundaries nor topographic effects. Moreover, by considering PZ population dynamics, we neglect possible nutrient heterogeneities. While this choice allows us to limit the complexity of the problem, it precludes the possibility of describing oligotrophic environments, where nutrients are a growth-limiting factor.
{Nevertheless, in our view, beyond the interest in general aspects related to the effect of turbulence on reaction dynamics, the present approach can be seen as a preliminary step towards the investigation of more realistic configurations relevant to plankton dynamics. In particular, it may prove useful for subsequent developments aimed at improving the still incomplete understanding of the role of vertical transport (of different species) on different scales, which is considered an essential factor for primary production~\cite{levy2001impact,levy2003mesoscale,Levy_etal_2018}.}
This article is organized as follows. In Sec.~\ref{sec:math}, we introduce the model system and its governing equations. Section~\ref{sec:num3d} illustrates the numerical setup, as well as the flow configuration and the main parameters used. The results and their analysis are presented in Sec.~\ref{sec:results}. Section~\ref{sec:conclu3d} summarizes the main findings and discusses their implications.
\section{{Model dynamics}}
\label{sec:math}
We consider two plankton species interacting as in a predator-prey system, the phytoplankton (prey) and the zooplankton (predator).
Their spatiotemporal evolution in a fluid flow can be conveniently described using coupled advection-reaction-diffusion equations.
Following~\cite{jaccod2021predator}, we adopt for the reaction kinetics the PZ model~\cite{tru}.
By introducing a characteristic (obstacle) length $l_0$,
a typical fluid velocity $u_0$, a typical time $l_0/u_0$, and the {phytoplankton} carrying capacity $K$,
the nondimensional evolution equations for the population densities of phytoplankton $P(\boldsymbol{x},t)$ and of zooplankton $Z(\boldsymbol{x},t)$ [with $\boldsymbol{x} = (x,y,z)$] are:
\begin{subequations}
\begin{align}
& \partial_t P + \bm{u} \cdot \bm{\nabla} P - \frac{1}{Re Sc}\bm{\nabla}^2 P = \left(\beta P\left(1-P\right) - \delta Z \frac{P^2}{P^2 + \chi^2}\right),\label{apza}\\
& \partial_t Z + \bm{u} \cdot \bm{\nabla} Z - \frac{1}{Re Sc}\bm{\nabla}^2 Z = \gamma Z \left( \delta \frac{P^2}{P^2 + \chi^2} - \lambda \right), \label{apzb}
\end{align}
\end{subequations}
where $Re = u_0 d/\nu$ is the Reynolds number based on the obstacle diameter $d = 2l_0$, $\nu$ the viscosity coefficient, $Sc = \nu/D$ the Schmidt number, and $D$ the diffusion coefficient.
The remaining biological parameters are $\beta = rl_0/u_0$, where $r$ is the maximum specific growth rate of $P$, $\delta = R_m l_0/u_0$, where $R_m$ is the maximum specific predation rate of $Z$, $\chi = \kappa/K$, where $\kappa$ indicates how quickly that maximum is attained,
while $\lambda = \mu l_0/(u_0 \gamma)$, with $\mu$ the rate of zooplankton removal {(due to death or sinking)} and $\gamma$ the ratio of biomass consumed to biomass of new herbivores produced.
{The space-independent version of the above model (representative of a well-mixed situation) can display excitability, and for this reason it was originally introduced to describe algal blooms~\cite{tru}.
It should be noted that, for this to occur, it is necessary that two different timescales exist: to escape the predation control, and thus initiate the outbreak, the phytoplankton growth rate must be larger than the predation rate by the zooplankton.
We also recall that the space-independent reactive dynamics are characterized by three fixed points~\cite{tru}: $(P_1,Z_1)=(0,0)$, which represents the extinction of both species; $(P_2,Z_2)=(1,0)$, which gives the equilibrium value of $P$ in the absence of $Z$; and $(P_3,Z_3)=(P_{eq}, Z_{eq})$ where $P_{eq}= \chi\sqrt{\lambda/(\delta-\lambda)}$ and $Z_{eq}= \beta(1-P_{eq})(P_{eq}^2+\chi^2)/(P_{eq}\delta)$, which is a stable equilibrium point for the parameter values here adopted.}
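As a consistency check on the fixed points quoted above, one can verify numerically that each of them annihilates the reaction terms of Eqs.~(\ref{apza}) and (\ref{apzb}). The Python sketch below uses illustrative parameter values (hypothetical, not necessarily those of the simulations; the only requirement for a positive equilibrium is $\delta > \lambda$):

```python
import numpy as np

# Illustrative nondimensional parameters (placeholders for this sketch)
beta, delta, gamma, lam, chi = 0.43, 0.05, 1.0, 0.034, 0.053

def reaction(P, Z):
    """Reaction terms of the PZ model, Eqs. (apza)-(apzb)."""
    g = P**2 / (P**2 + chi**2)                 # Holling type-III grazing response
    dP = beta * P * (1 - P) - delta * Z * g    # logistic growth minus predation
    dZ = gamma * Z * (delta * g - lam)         # growth from grazing minus removal
    return dP, dZ

P_eq = chi * np.sqrt(lam / (delta - lam))
Z_eq = beta * (1 - P_eq) * (P_eq**2 + chi**2) / (P_eq * delta)

# All three fixed points zero both reaction terms
for P, Z in [(0.0, 0.0), (1.0, 0.0), (P_eq, Z_eq)]:
    dP, dZ = reaction(P, Z)
    assert abs(dP) < 1e-12 and abs(dZ) < 1e-12
```

The check at $(P_{eq},Z_{eq})$ works because $g(P_{eq}) = \lambda/\delta$, which kills the $Z$ equation, and $Z_{eq}$ is defined precisely to balance growth and grazing in the $P$ equation.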
{For the velocity field $\bm{u}(\bm{x},t)$ appearing in Eqs.~(\ref{apza}) and (\ref{apzb}), we consider an incompressible 3D flow
in the presence of a circular cylinder of diameter $d$ and height
$L \gg d$, which is the solution of the Navier-Stokes equation with the appropriate boundary conditions (see Sec.~\ref{sec:num3d}).}
{In nondimensional form, the Navier-Stokes equation, supplemented by the incompressibility condition, reads:
\begin{subequations}
\begin{eqnarray}
\partial_t \bm{u} + (\bm{u} \cdot \bm{\nabla}) \bm{u} & = & -\bm{\nabla} p + \frac{1}{Re} \bm{\nabla}^2 \bm{u}, \label{ansa}\\
\bm{\nabla} \cdot \bm{u} & = & 0,
\label{ansb}
\end{eqnarray}
\end{subequations}
where
$\bm{u}=(u_x,u_y,u_z)$
and $p$ is pressure. For the analogous 2D dynamics, the cylinder reduces to a circle and $u_z=0$.}
\section{{Numerical methods}}
\label{sec:num3d}
{The 3D geometrical setup corresponds to a cubic domain of side $L$, in which a planar uniform flow [in plane $(x,y)$], homogeneous in the normal direction $z$, impacts the cylindrical obstacle, with axis in the $z$ direction and height $L=16d$, thus generating a turbulent wake behind it. The 2D case is obtained by considering only the dynamics in a square domain of side $L$ [in plane $(x,y)$].
In the following we will also refer to the streamwise ($x$), cross-stream ($y$), and spanwise ($z$) directions as the longitudinal, transversal, and vertical ones, respectively.}
In the present study, the Reynolds number is fixed. Based on the investigation of its role performed in~\cite{jaccod2021predator}, and considering the substantial increase in the computational cost of 3D direct numerical simulations (DNS) with respect to their 2D counterparts, we select an intermediate value, $Re=2000$. The Schmidt number is also fixed, due to numerical constraints, at $Sc = 1$. Consequently, the smallest relevant scale, i.e. the Batchelor scale, $\ell_B = \ell_{\nu} Sc^{-1/2}$, coincides with the viscous dissipation cutoff $\ell_{\nu}$. In the 2D case, where the turbulent dynamics are governed by a direct enstrophy cascade (as observed in~\cite{jaccod2021predator}), this viscous scale can be estimated as $\ell^{2D}_{\nu} = (\nu^3/\langle\eta_{\nu}\rangle)^{1/6}$, where $\langle \eta_\nu \rangle$ is the mean enstrophy flux~\cite{boff}. In the 3D case, assuming the flow is characterized by a direct energy cascade, the Kolmogorov scale is $\ell^{3D}_{\nu} = (\nu^3/\langle\epsilon_{\nu}\rangle)^{1/4}$, where $\langle \epsilon_\nu \rangle$ is the mean kinetic energy dissipation rate~\cite{kolmogorov1941local,Frisch}.
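As an illustration, the dissipation scales above can be evaluated as follows (a minimal sketch; the mean dissipation rate used below is an assumed placeholder, not the value measured in the simulations):

```python
def kolmogorov_scale(nu, eps):
    """3D viscous cutoff: (nu^3 / <eps>)^(1/4)."""
    return (nu**3 / eps) ** 0.25

def enstrophy_scale(nu, eta):
    """2D viscous cutoff for a direct enstrophy cascade: (nu^3 / <eta>)^(1/6)."""
    return (nu**3 / eta) ** (1.0 / 6.0)

def batchelor_scale(ell_nu, Sc):
    """Scalar dissipation scale: ell_nu * Sc^(-1/2)."""
    return ell_nu * Sc ** -0.5

# Illustrative nondimensional values: nu follows from Re = u0 * d / nu
u0, d, Re, Sc = 1.0, 2.0, 2000.0, 1.0
nu = u0 * d / Re          # 1e-3
eps = 1.0e-3              # assumed mean kinetic energy dissipation rate
ell_3d = kolmogorov_scale(nu, eps)
ell_B = batchelor_scale(ell_3d, Sc)
```

With $Sc=1$ the Batchelor scale coincides with the Kolmogorov scale, as stated in the text.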
{The dimensional and non-dimensional values we adopted for the biological model~\cite{hl} are reported in Table~\ref{tab1}.}
\begin{table}[htb]
\begin{center}
$
\begin{array}{|c|c|c|}
\hline
\text{Parameter} & \text{Value} & \text{Dimensionless value}\\
\hline
K & 108 \,\mu \mathrm{g~N~l}^{-1} & 1\\
r\,(\beta) & 0.3 \, \mathrm{day}^{-1} & 4.285\\
R_m\,(\delta) & 0.7 \, \mathrm{day}^{-1} & 10\\
\kappa\,(\chi) & 5.7 \, \mu \mathrm{g~N~l}^{-1} & 0.053\\
\mu\,(\lambda) & 0.0024 \, \mathrm{day}^{-1} & 3.428\\
\gamma & 0.01 &0.01\\
\hline
\end{array}
$
\end{center}
\caption{Parameters used in the biological model.
The symbols adopted for the nondimensional quantities appear in parentheses in the first column.
The values are consistent with typical oceanic ones, {those of $K$ and $\kappa$ are expressed in units of mass of nitrogen equivalent per liter}.}\label{tab1}
\end{table}
All the dynamical equations are solved through the open-source code Basilisk
(\texttt{http://basilisk.fr}),
using an adaptive grid with maximum resolution $N=2^9$ for both the 2D and 3D cases. To allow a fair comparison between the two cases and to cope with the numerical constraints imposed by the 3D configuration, we first performed the 3D simulations and estimated the value of $\Delta x/\ell^{3D}_{\nu}$ (with $\Delta x$ the smallest mesh size), which turned out to be around $1.5$. Then, we performed several 2D simulations, varying the minimum and maximum resolutions, to ensure that the mesh size is approximately the same, $\Delta x/\ell^{2D}_{\nu} \approx 1.5$. This implies that the fluid (and scalar) dynamics are moderately under-resolved; consequently, the present 2D results may differ slightly from those at the same Reynolds number in~\cite{jaccod2021predator}.
The adopted boundary conditions are such that inflow/outflow conditions are imposed
on the left/right side of the domain, while free-slip conditions hold at the boundaries in the $y$-direction. {For the 3D case, at the top/bottom side periodic boundary conditions are imposed, to mimic a cylinder of ideally infinite height.}
On the obstacle we have a no-slip condition for the velocity while a no-flux condition is imposed for the two scalars, which are furthermore kept
at the equilibrium values ($P_{eq},Z_{eq}$) at all sides of the domain.
{For the initial conditions, we fix the longitudinal advecting velocity to the uniform inflow value $u_0=1$ (in nondimensional units), while the transversal and vertical ones are zero. The scalar fields are initially set to their equilibrium values, then at a later time $t^*>0$, once the flow is in statistically stationary conditions, we let a localized patch of $P$ density
enter the system from the left side. Its spatial distribution is of the form:
\begin{equation}
P(\bm{x}, t^*) = P_{eq} + P_a \, e^{-\left[(x-x_0)^2 + (y-y_0)^2\right]/w^2},
\label{pini}
\end{equation}
where $P_a = 0.5$ is the amplitude of the excitation, $(x_0,y_0) = (-2,0.5)$ its location and $w=0.9$ ($\simeq l_0$) its width. In the 3D case, the perturbation is introduced along the entire spanwise direction ($0 \leq z \leq L$).}
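A minimal sketch of the perturbation setup of Eq.~(\ref{pini}) on a uniform grid (the actual simulations use Basilisk's adaptive grid; the $P_{eq}$ value used here is the approximate equilibrium computed from the Table~\ref{tab1} parameters):

```python
import numpy as np

# Parameters of the initial phytoplankton perturbation (from the text);
# P_eq is approximate, computed from the Table 1 parameter values
P_eq, P_a, x0, y0, w = 0.038, 0.5, -2.0, 0.5, 0.9

# A coarse uniform grid, for illustration only
x = np.linspace(-8.0, 8.0, 257)
y = np.linspace(-8.0, 8.0, 257)
X, Y = np.meshgrid(x, y, indexing="ij")

# Gaussian patch superposed on the equilibrium value
P0 = P_eq + P_a * np.exp(-((X - x0)**2 + (Y - y0)**2) / w**2)
```

The field peaks at $P_{eq}+P_a$ at $(x_0,y_0)$ and decays to $P_{eq}$ far from the patch.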
Instantaneous visualizations of the flow and of the phytoplankton field, in 3D, are provided in Figs.~\ref{fig:vis}a and \ref{fig:vis}b.
Here, vortices have been tracked through the $\lambda_2$ criterion~\cite{jeong_hussain_1995}.
Figures~\ref{fig:vis}c and \ref{fig:vis}d respectively show the $z$ component of vorticity in the 3D case, and vorticity in the 2D case.
The 3D wake is much better mixed than the 2D one, which is characterized by coherent vortical structures with a typical size of the order of the cylinder diameter. Since the vortex-stretching term is nonzero in the 3D Navier-Stokes equation, turbulent eddies are stretched, progressively reducing their size until they are finally dissipated. Moreover, the 2D vortices appear to form much closer to the obstacle and progressively spread, covering a larger portion of the domain than in the 3D case. A further characterization of the wake in the two cases will be given in the next section.
The phytoplankton population
distribution (Fig.~\ref{fig:3dpl}) follows the spatial organization of the flow: complex vortical structures appear immediately downstream of the obstacle and continuously leave the domain through the right side.
\begin{figure}[h!]
\captionsetup[subfigure]{labelformat=empty,justification=centering}
\captionsetup[figure]{justification=justified, singlelinecheck=off}
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d1a}
\caption{(a)}
\label{fig:3dvort}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d1b}
\caption{(b)}
\label{fig:3dpl}
\end{subfigure}\\
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d1d}
\caption{(c)}
\label{fig:3dvort3}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d1c}
\caption{(d)}
\label{fig:3dvort2}
\end{subfigure}
\caption{{Snapshots from the 3D simulation, at time $t=350$: (a) vorticity field (visualized through the $\lambda_2$ criterion), and (b) phytoplankton density field. The cylinder surface is shown in red, while the bottom wall displays the
adaptive grid.
(c) Spanwise vorticity, $\omega_z$, at $z=0$ and $t=350$ for the 3D case. (d) Vorticity, $\omega$, for the 2D case at $t=350$. To ease the comparison, in both (c) and (d), the fields have been normalized such that they take values in
$[-1,1]$.}}
\label{fig:vis}
\end{figure}
\section{Results}
\label{sec:results}
\subsection{{Persistent blooms and global features of the dynamics}}
\label{sec:3dfullmodel}
The differences between the 3D and 2D turbulent dynamics suggested by Fig.~\ref{fig:vis} can be further appreciated in Fig.~\ref{fig:ke}.
Here, we report the temporal behavior of the kinetic energy, which shows a higher mean value and larger fluctuations in the 2D case than in the 3D one.
We also computed the integrated forces on the cylindrical solid body, in terms of the lift and drag coefficients:
\begin{align}
C_L &= \frac{F_y}{\rho U^2 d/2},\\
C_D &= \frac{F_x}{\rho U^2 d/2},
\end{align}
where $U$ is the free-stream velocity, $F_y$ and $F_x$ are the forces per unit length in the cross-stream and streamwise directions, respectively, and $\rho$ is the fluid density.
In the inset of Fig.~\ref{fig:ke}, we show the drag coefficient $C_D$ versus time, which displays a qualitative behavior similar to that of kinetic energy. Both its mean value and its oscillation amplitudes are larger in the 2D case than in the 3D one. The same occurs for the lift coefficient $C_L$ (not shown).
As already observed from the visualizations of Fig.~\ref{fig:vis}, coherent structures, where vorticity is particularly intense, are created in the 2D flow. Furthermore, the roll-up of vortices happens closer to the obstacle and the near wake increases in width, hence the shedding of these structures induces larger forces on the cylinder~\cite{chua1990numerical}. In contrast, in the 3D case the roll-up occurs further downstream, so that the forces on the obstacle fluctuate less, and the width of the near wake remains almost constant. By inferring the shedding frequency $n$ from the temporal behavior of the lift coefficient, we calculated the Strouhal number $St = nd/U$, which turned out to be around $0.2$ in both cases, in good agreement with experiments in a homogeneous non-rotating tank (where $St \approx 0.21$ \cite{z}).
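The extraction of the Strouhal number from the lift signal can be sketched as follows (illustrative Python; the time series here is a synthetic sine at an assumed shedding frequency, with $d=U=1$, not the simulation data):

```python
import numpy as np

def strouhal_from_lift(cl, dt, d=1.0, U=1.0):
    """Estimate the shedding frequency n as the dominant peak of the
    lift-coefficient spectrum, and return St = n*d/U."""
    spec = np.abs(np.fft.rfft(cl - np.mean(cl)))
    freqs = np.fft.rfftfreq(len(cl), dt)
    n = freqs[np.argmax(spec[1:]) + 1]  # skip the zero-frequency bin
    return n * d / U

# Synthetic lift signal with an assumed shedding frequency n = 0.2
dt = 0.05
t = np.arange(0.0, 200.0, dt)
cl = 1.2 * np.sin(2 * np.pi * 0.2 * t)
St = strouhal_from_lift(cl, dt)
```

On the synthetic signal the routine recovers $St=0.2$ exactly, since the record contains an integer number of shedding periods.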
The reactive dynamics in the absence of flow have been investigated in the 3D case, in analogy to what was done for the 2D case in~\cite{jaccod2021predator}. Without advection, after a sudden increase in the $P$ concentration, the effect of grazing by $Z$ makes the system come back to the equilibrium point (results not shown for the sake of brevity).
When the advection term is switched on, the transient character of the flow, combined with the excitable character of the biological dynamics, gives rise to a permanent excitation of the predator-prey system, as also observed in kinematic and dynamic 2D settings~\cite{lo,jaccod2021predator}.
As can be seen in Fig.~\ref{fig:pop}, in both the 3D and 2D cases, after a transient, the spatially averaged phytoplankton population density $\langle P \rangle$ reaches values that are considerably larger than at equilibrium. For completeness, the inset of Fig.~\ref{fig:pop} also shows the analogous behavior of the zooplankton population density $Z$.
Concerning the comparison between the 3D case and the 2D one,
it can be noted that the 2D time-averaged value $\overline{\langle P \rangle}$ is larger than the corresponding 3D one (we will discuss this point in Sec.~\ref{sub:spa}).
We further remark that, while the temporal behavior in the 2D case is to a good extent controlled by the vortex shedding, whose period is slightly larger than $1/n$, the 3D case appears more irregular and shows oscillation amplitudes that are markedly smaller than in the 2D case. This suggests, in accordance with the behavior of kinetic energy (Fig.~\ref{fig:ke}), that at the same Reynolds number the three-dimensionality of the flow leads to more chaotic fluid motion, causing the reactive scalars to oscillate more irregularly in time.
Despite these quantitative differences, from a qualitative point of view, the global response of the two scalars to the combined effect of fluid transport and biological interactions seems to be the same in both the 2D and the 3D cases.
We then argue that also in the 3D case the mechanism controlling the sustained bloom (excitation of the planktonic species) is the same as the one at play in 2D systems, found in~{\cite{neu,NLHP2002,fer}} employing kinematic flows, and discussed in~\cite{jaccod2021predator} using dynamic velocity fields.
The main ingredients are the straining action exerted by the flow, which can counteract reaction-diffusion spreading {so as to localize the fast-growing phytoplankton in filamentary structures along which the slowly growing zooplankton gets diluted}, and open boundaries, needed to avoid the homogenization of the scalar densities, which would end the excitation (see~\cite{NLHP2002} for more details). Moreover, the transport of biological material towards the obstacle, where strain is primarily located in the present case, is also relevant. The persistence of the excitation should then result from the characteristic timescale of strain being intermediate between the typical times of phytoplankton and zooplankton reproduction~{\cite{fer}}.
While a 2D turbulent flow, forced at large scale, is characterized by a single timescale, determined by the strain, in the 3D case a whole range of timescales exists, associated with eddies of different sizes.
{However, considering that large scales should still provide the largest contribution to the localization of phytoplankton,
here we choose the timescale associated with the large-scale strain, which we dimensionally estimate as $\tau_s \sim l_0/u_0=1$.}
Using the values in Table~\ref{tab1}, we further obtain $\tau_P=\beta^{-1} \simeq 0.23$ and $\tau_Z=(\gamma \delta)^{-1} \simeq 10$ for the characteristic growth times of phytoplankton and zooplankton, respectively; thus, the relation {$\tau_P < \tau_s < \tau_Z$}
is satisfied.
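This timescale ordering can be checked directly from the parameter values of Table~\ref{tab1} (a trivial sketch):

```python
# Dimensionless parameters from Table 1
beta, delta, gamma = 4.285, 10.0, 0.01
l0, u0 = 1.0, 1.0

tau_P = 1.0 / beta              # phytoplankton growth time, ~0.23
tau_Z = 1.0 / (gamma * delta)   # zooplankton growth time, ~10
tau_s = l0 / u0                 # large-scale strain time, ~1
```

The strain time indeed falls between the two biological timescales, $\tau_P < \tau_s < \tau_Z$.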
\begin{figure}[ht]
\captionsetup[subfigure]{labelformat=empty,justification=centering}
\captionsetup[figure]{justification=justified, singlelinecheck=off}
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d2a}
\caption{(a)}
\label{fig:ke}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d2b}
\caption{(b)}
\label{fig:pop}
\end{subfigure}%
\caption{(a) Kinetic energy $K_E$ and drag coefficient $C_D$ (in the inset) versus time, for the 3D and 2D cases. (b) Phytoplankton population density $P$,
averaged in space and normalized by the corresponding equilibrium value, as a function of time for the 3D and 2D cases. In the inset we show the corresponding zooplankton population densities $Z$ vs time.
In panel (b) the perturbation is introduced at $t^* = 200$.
}
\end{figure}
\subsection{{Spectral properties}}
{To characterize plankton patchiness, we measured the spectra of population density variance and compared them with the theoretical predictions from simple models of biological dynamics in turbulent flows, mentioned in Sec.~\ref{intro} and recalled in slightly more detail below.}
{The results from such idealized models vary depending on both the spatial dimensionality of the flow and the details of the biological dynamics~\cite{DP1976,powell,franks}. If a single species is considered, in a 3D turbulent flow three regimes may be expected. In the low-wavenumber (i.e. large-scale) regime, dominated by biological growth, one should have $E^{lw}_S(k) \sim k^{-1}$, where $k$ is the wavenumber modulus. In an intermediate range of scales, dominated by turbulent motions, the plankton population density should behave like a passive (inert) scalar, and its spectrum should be $E^{iw}_S(k) \sim k^{-5/3}$.
The critical wavenumber separating these two regimes corresponds to the length scale for which the eddy turnover time and the phytoplankton growth time are comparable, $k_c = (b^3/ \epsilon_{\nu})^{1/2}$, where $b$ is a quantifier of the phytoplankton growth rate and $\epsilon_{\nu}$ is the kinetic energy dissipation rate.
At the highest wavenumbers (i.e. at the smallest scales), where a viscous-convective subrange exists only if $Sc>1$, one would have the third regime, characterized by $E^{hw}_S(k) \sim k^{-1}$. In a 2D turbulent flow, instead, the plankton variance spectrum should scale as $E_S(k) \sim k^{-1}$ at all wavenumbers~\cite{powell}. Extending these predictions to more species is not an easy task. Considering two species interacting according to a predator-prey model, namely the basic Lotka-Volterra system, it has been shown that, in the 3D case, the population density spectra of both phytoplankton and zooplankton should be given by the sum of a passive contribution $\sim k^{-5/3}$ and a reactive one $\sim k^{-3}$~\cite{powell}. In the 2D case, the same model predicts that the spectrum is not modified by biological interactions, and is then given by $E_S(k) \sim k^{-1}$ (for both populations), as for a single species.}
{One-dimensional spectra of velocity fluctuations, for each component $u_x,u_y,u_z$ (with $u_z=0$ in the 2D case), from our simulations, are reported in Figs.~\ref{fig:spu2d} and~\ref{fig:spu3d}. The one-dimensional spectra of the reactive scalar ($P$ and $Z$) fluctuations, in the cross-stream direction $y$, are presented in Figs.~\ref{fig:spp2d} and~\ref{fig:spp3d}.
All spectra are computed in the subdomain $1.5d \leq x \leq 10d$ (with $d$ the obstacle diameter), taking the Fourier transform along the $y$ direction, at fixed $x$, and subsequently
averaging along $x$. For the 3D case, the above procedure is repeated at each $z$, with further averaging in such spanwise direction. Finally, in all cases, we perform a temporal average, in the time interval $250 \leq t \leq 400$, in which the flow is statistically stationary. }
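The spectrum-computation procedure just described can be sketched as follows (Python, on a synthetic single-mode field used only to validate the routine; the averaging over $x$, $z$, and $t$ of the actual analysis reduces here to an average over rows):

```python
import numpy as np

def cross_stream_spectrum(S, Ly):
    """1D fluctuation spectrum along y of a field S(x, y), averaged over x.
    S has shape (Nx, Ny); Ly is the domain extent in y."""
    fluct = S - S.mean(axis=1, keepdims=True)   # remove the mean at each x
    F = np.fft.rfft(fluct, axis=1)
    E = np.mean(np.abs(F) ** 2, axis=0)         # average over x
    k = 2 * np.pi * np.fft.rfftfreq(S.shape[1], d=Ly / S.shape[1])
    return k, E

# Synthetic check: a single mode k = 4 in y, uniform in x
Ny, Ly = 128, 2 * np.pi
y = np.linspace(0, Ly, Ny, endpoint=False)
S = np.tile(np.sin(4 * y), (32, 1))
k, E = cross_stream_spectrum(S, Ly)
```

The spectrum peaks at the injected wavenumber $k=4$, as expected.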
{Before discussing the spectra of the reactive scalar fields, let us illustrate the spectral properties of our turbulent flows. Kinetic energy spectra for the 2D case (Fig.~\ref{fig:spu2d}) are in agreement with previous results~\cite{jaccod2021predator}, and scale as $E(k) \sim k^{-3}$, or a steeper power law. We remark that, in such a 2D case with large-scale forcing, the precise slope of $E(k)$ is not expected to play a major role in the interplay between fluid and biological dynamics. In fact, if the spectrum is steep enough, the flow is smooth and possesses a single time scale, determined by the strain.}
{In the 3D case, the scaling of energy spectra (Fig.~\ref{fig:spu3d}) is compatible with $E(k)\sim k^{-5/3}$ over approximately one decade, pointing to the existence of a direct energy cascade~\cite{kolmogorov1941local,Frisch}. At higher wavenumbers, corresponding to unresolved scales, the spectrum rapidly falls off. We note that, at low wavenumbers, the energetic content of spanwise velocity ($u_z$) fluctuations is significantly smaller than that of the other velocity components, indicating that coherent structures arising from the cylinder three-dimensionality contain much less energy than purely 2D structures.}
{In the 2D case, the scalar fluctuation spectra (Fig.~\ref{fig:spp2d}) display a wavenumber range of power-law dependence close to $E_S(k) \sim k^{-1}$ (with $S=P,Z$), in overall agreement with the theoretical prediction, followed by a rapid decay.
Some more comments about the extent of the scaling range are provided in Appendix~\ref{app:schmidt}, with the aim of easing the comparison of the present results with those obtained in a simulation at the same Reynolds number but at higher Schmidt number, $Sc=100$~\cite{jaccod2021predator}.}
\begin{figure}[h!]
\captionsetup[subfigure]{labelformat=empty,justification=centering}
\captionsetup[figure]{justification=justified, singlelinecheck=off}
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d3a}
\caption{(a)}
\label{fig:spu2d}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d3b}
\caption{(b)}
\label{fig:spu3d}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d3c}
\caption{(c)}
\label{fig:spp2d}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d3d_new}
\caption{(d)}
\label{fig:spp3d}
\end{subfigure}%
\caption{Spatial spectra of velocity component fluctuations $E(k)$ for (a) the 2D case and (b) the 3D one.
Spatial spectra of phytoplankton and zooplankton fluctuations $E_S(k)$ (with $S=P,Z$) are shown in (c) for the 2D case and in (d) for the 3D one. All these spectra are normalized by the value corresponding to $k_d$, the wavenumber
associated with the obstacle diameter $d$. The spectra are computed along the $y$-direction and then averaged for $1.5d \leq x \leq 10d$ and $250 \leq t \leq 400$. For the 3D case, the spectra are first computed on each plane
$z=\mathrm{const}$ and then averaged along $z$. In the inset of (d),
spectra of $P$, for the 2D and 3D cases, compensated with the prediction $\sim k^{-1}$, are shown.}
\label{fig:spectra}
\end{figure}
{In the 3D case, for both planktonic species (Fig.~\ref{fig:spp3d}), we find a spectrum close to $ E_S(k)\sim k^{-5/3}$, which is the typical behavior expected for a passive non-reactive tracer advected by a 3D turbulent flow~\cite{batchelor1959small}. Note that this scaling is valid in the inertial subrange. The Batchelor regime, corresponding to scales smaller than the viscous cutoff, is here absent because $Sc=1$, i.e. $\ell_B = \ell^{3D}_{\nu}$, and scalar fluctuations are thus dissipated at the Kolmogorov scale $\ell^{3D}_{\nu}$.
At the smallest wavenumbers ($k<0.6 \, k_d$) the spectrum of the $P$ field flattens somewhat. To evaluate whether this behavior could be related to the $k^{-1}$ large-scale regime, one needs to estimate the critical wavenumber $k_c$. For this purpose, we measured the effective phytoplankton growth rate $b_{eff} = \partial_t \langle P \rangle/\langle P\rangle$ (in the early growth phase, $200 \leq t \leq 220$) and the energy dissipation rate (in the statistically steady state), to obtain $k_c=(b_{eff}^3/ \epsilon_{\nu})^{1/2} \approx 0.45$, which is slightly smaller than the diameter wavenumber $k_d=0.5$.
A clear $k^{-1}$ scaling in the range $k<k_c$ is not detectable in our spectra, so we cannot draw firm conclusions about the existence of the predicted large-scale, biologically dominated regime~\cite{DP1976}.
Similarly, our data do not support the $k^{-3}$ prediction for the reactive contribution to the spectra of interacting species~\cite{powell}, except perhaps over narrow subranges. A possible reason for this could be the peculiar form of the PZ predator-prey model we adopted, with respect to the basic formulation of the Lotka-Volterra model.}
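The estimate of $k_c$ from the early growth of $\langle P\rangle$ can be sketched as follows (the growth curve and dissipation rate below are assumed placeholders, chosen only to illustrate the procedure, not the measured values):

```python
import numpy as np

def critical_wavenumber(t, P_mean, eps):
    """Estimate the effective growth rate b_eff as the slope of
    log <P>(t) in the early growth phase, then k_c = (b_eff^3 / eps)^(1/2)."""
    b_eff = np.polyfit(t, np.log(P_mean), 1)[0]
    return (b_eff**3 / eps) ** 0.5, b_eff

# Synthetic exponential growth phase with an assumed rate b = 0.5
t = np.linspace(200.0, 220.0, 100)
P_mean = 0.04 * np.exp(0.5 * (t - 200.0))
eps = 0.6                                  # assumed dissipation rate
k_c, b_eff = critical_wavenumber(t, P_mean, eps)
```

On exact exponential data the fit recovers the imposed growth rate, and $k_c$ follows from the dimensional estimate.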
{It is interesting to compare the phytoplankton variance spectra in the 2D and 3D cases. These spectra are shown, after compensation with the $k^{-1}$ prediction, in the inset of Fig.~\ref{fig:spp3d}. As can be seen, below $k_d =0.5$ the compensated 2D spectrum attains a constant value, while this is not the case for the 3D spectrum, in line with the above discussion about the large-scale regime. Furthermore, the 2D spectrum is always larger than the 3D one, pointing to more energetic fluctuations at all scales, and thus larger variability (considering that the integral of the spectrum is the variance of the population density field), in the configuration of lower spatial dimensionality.}
\subsection{Spatial distributions}
\label{sub:spa}
\begin{figure}[t]
\captionsetup[subfigure]{labelformat=empty,justification=centering}
\captionsetup[figure]{justification=justified, singlelinecheck=off}
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d4a}
\caption{(a)}
\label{fig:phyto2d}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d4b}
\caption{(b)}
\label{fig:phyto3d}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d4c}
\caption{(c)}
\label{fig:zoo2d}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d4d}
\caption{(d)}
\label{fig:zoo3d}
\end{subfigure}%
\caption{
Instantaneous density fields of: phytoplankton $P$, panels (a) and (b) for the 2D and 3D cases, respectively;
zooplankton $Z$, panels (c) and (d) for the 2D and 3D cases, respectively. All these fields correspond to time $t = 350$,
in the statistically steady state.}
\label{fig:fields}
\end{figure}
{In this section, we complement the previous discussion about patchiness, by investigating the distributions of the planktonic
species in real space. Some visualizations, at a given time in the statistically steady state, are presented in Figs.~\ref{fig:phyto2d}
and~\ref{fig:zoo2d} for the 2D case, and in Figs.~\ref{fig:phyto3d} and~\ref{fig:zoo3d} for the 3D case. While both the 2D fields ($P$ and $Z$) are characterized by rather well-defined coherent structures resembling those of the vorticity field (Fig.~\ref{fig:3dvort2}), the 3D distributions do not possess this feature. In the latter case, the $P$ and $Z$
fields reflect the more mixed nature of the 3D wake (Figs.~\ref{fig:3dvort} and~\ref{fig:3dvort3}), which is characterized by more
convoluted vortical structures and a complex, more fragmented, spatial distribution.
Plankton patches then appear to have smaller sizes, essentially independent of the distance from the obstacle, in contrast with the 2D case, where they take the form of filaments and of almost circular structures colocated with vortices, which grow in size and do not travel along straight lines.}
{Nevertheless, independent of the space dimensionality, the relative abundance of the two species is locally determined
by the predator-prey biological interactions, with the prey ($P$) mostly localized where the predator ($Z$) is absent.
This evidence suggests that also in the 3D case, once the favorable conditions for phytoplankton growth are met,
the response of the scalars is mainly determined by their reactive nature and not particularly sensitive to the details
of the underlying flow.}
{We can further remark that, while the spatial distributions look different from a qualitative point of view,
when
passing from the 2D case to the 3D one, the range of values
taken by each population density field does not change.
At the same time, the spatially averaged population density is larger in the 2D case, particularly for phytoplankton
(see Fig.~\ref{fig:pop}). To understand the origin of such difference in the global average, it is instructive to examine
transects in the $P$ field in the two cases.
Note that we verified that the 3D population density is, to a good extent, homogeneous in the spanwise ($z$) direction, which allows comparing it to the 2D one.
This is illustrated in Fig.~\ref{fig:prof1}, which shows the $P$ population density, averaged over the spatial coordinates $x$ and $y$,
and over time, versus $z$. Indeed, such vertical profile only weakly fluctuates around a mean value ($2.2 P_{eq}$),
which is very close to the mean value in time of the global spatial average of $P$ (Fig.~\ref{fig:pop}).}
{In Fig.~\ref{fig:prof2}, the profiles of $P$ are shown as a function of the cross-stream coordinate $y$,
after averaging over time and $x\in[1.5d,10d]$ (and on $z$, for the 3D case). In the 3D case, phytoplankton appears
to be predominantly concentrated in a narrow interval centered on the obstacle ($y\in [-2d,2d]$), where it is almost
uniformly distributed, while further away in the cross-stream direction its density rapidly decays to the equilibrium value $P_{eq}$.
Conversely, in the 2D case, the phytoplankton profile reaches values considerably above $P_{eq}$ over a larger region, consistent with the larger value of the global average $\overline{\langle P\rangle}$ in this case.
Moreover, the profile now has a more complex shape, displaying several peaks of high population density, indicative of the (average)
cross-stream location of filamentary structures.}
\begin{figure}[h!]
\captionsetup[subfigure]{labelformat=empty,justification=centering}
\captionsetup[figure]{justification=justified, singlelinecheck=off}
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d5a}
\caption{(a)}
\label{fig:prof1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\includegraphics[width=0.9\textwidth]{fig3d5b}
\caption{(b)}
\label{fig:prof2}
\end{subfigure}%
\caption{
(a) Profile of phytoplankton population density in the 3D case, averaged on $x$ and $y$,
as a function of the spanwise coordinate $z$.
(b) Profiles of phytoplankton population density, in both the 2D and the 3D case, averaged on $x$,
as a function of the cross-stream coordinate $y$.
For the 3D case, the profile is averaged also on $z$.
All the profiles shown in (a) and (b) are further averaged over time $t$ in the interval $250\leq t \leq 400$. }
\end{figure}
{Based on the above observations, we performed a more extensive analysis by computing the effective portion of the domain occupied by phytoplankton. Specifically, in analogy to what is done in~\cite{berti2005mixing} for quantifying the efficiency of a reaction,
we introduce the fraction of occupied domain as:}
\begin{equation}
\phi = \frac{1}{\mathcal{V}}\int_{\mathcal{V}} \theta\Biggl(\frac{P(\boldsymbol{x},t)}{P_{eq}} - \xi_c\Biggr)\ d\boldsymbol{x}
\label{eq:frac}
\end{equation}
where $\mathcal{V}$ is the domain area (in 2D) or volume (in 3D), $d\boldsymbol{x}$ represents the surface/volume element,
$\theta(\cdot)$ is the Heaviside step function, and $\xi_c$ a given threshold.
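A minimal sketch of the computation of Eq.~(\ref{eq:frac}) on a discretized field, where the Heaviside function reduces to a boolean mask (the field below is synthetic, for illustration only):

```python
import numpy as np

def occupied_fraction(P, P_eq, xi_c):
    """Fraction of grid cells where P/P_eq exceeds the threshold xi_c;
    on a uniform grid the integral reduces to a mean over cells."""
    return np.mean(P / P_eq > xi_c)

# Synthetic field: half the domain excited, half at equilibrium
P_eq = 0.04
P = np.concatenate([np.full(500, 5.0 * P_eq), np.full(500, P_eq)])
phi = occupied_fraction(P, P_eq, xi_c=2.0)
```

On this field, exactly half of the cells exceed the threshold, so $\phi = 0.5$.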
{In Fig.~\ref{fig:frac}, we report the time-averaged value $\overline{\phi}$ of~(\ref{eq:frac}),
computed in the statistically stationary state,
for the two configurations, when varying the threshold $\xi_c$ in a wide range. In both the 2D and the 3D case,
$\overline{\phi}$ monotonically decreases with increasing $\xi_c$. However, the 2D-case values are always larger than the 3D ones, which
confirms that phytoplankton occupies a larger region in the lower dimensional case. Interestingly, the difference between the
2D and 3D cases increases with growing $\xi_c$, implying that regions of particularly high population density (extreme events)
represent a significantly larger fraction of the total extent of the domain, in the 2D case.
This feature can be more clearly appreciated in the inset of Fig.~\ref{fig:frac}, which reports the relative difference (in percentage),
$[\overline{\phi}^{2D}/\overline{\phi}^{3D} - 1]\times 100$; e.g., for $\xi_c >20$ the increase of the 2D case with respect to the 3D one
reaches $55 \% $.}
{The evidence reported in this section allows us to rationalize the main changes, in terms of biological productivity, that occur when increasing the space dimensionality. Under advection by the 2D turbulent flow, phytoplankton gets confined in filamentary structures, generated in straining regions and winding around vortices, where it can grow thanks to its high growth rate and the initially low local population density of zooplankton. The more chaotic, and mixing, nature of the 3D flow, instead, hinders the formation of such structures and tends to homogenize the distributions of the reactive scalars. As a consequence, grazing becomes more efficient everywhere, so that regions of high phytoplankton density become smaller and, hence, the average biomass produced is lower.}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{fig3d6}
\caption{Time-averaged fraction of the area (in the 2D case), or volume (in the 3D case), occupied by phytoplankton,
as a function of several threshold values $\xi_c$. The inset shows the difference in percentage
$[\overline{\phi}^{2D}/\overline{\phi}^{3D} - 1]\times 100$ vs $\xi_c$.}
\label{fig:frac}
\end{figure}
\section{Conclusions}
\label{sec:conclu3d}
We have investigated predator-prey plankton dynamics behind a cylindrical obstacle in both 2D and 3D turbulent flows
at moderate Reynolds number ($Re=2000$). Our main goal was to compare these two geometrical configurations,
in order to test, at increased space dimensionality, the robustness of the relevant findings
from the 2D case~\cite{jaccod2021predator}
about the conditions for blooming and patchiness.
{The choice of a moderate value of the Reynolds number, and of a smooth obstacle (i.e. of negligible roughness),
was motivated by the large computational cost of 3D simulations, and previous results indicating the overall weak role
of both $Re$ and the roughness (see~\cite{jaccod2021predator} for an extended discussion).}
{Notwithstanding the important differences between the carrying velocity fields, the qualitative behavior of the reactive scalars
(the phytoplankton and the zooplankton population densities) appears to be similar in the 2D and the 3D cases.
This result substantiates the general picture drawn in the 2D case, namely that the combined effect of flow transport and biological excitability is crucial to give rise to sustained plankton blooms.
Also in the 3D setup, the persistent excitation indeed appears to depend on the characteristic timescale of large-scale strain being intermediate between the timescales of phytoplankton growth
and of zooplankton reproduction, as originally discussed using theoretical arguments and 2D kinematic flows~\cite{neu,NLHP2002,fer}, and verified in 2D dynamic simulations
in turbulent flows~\cite{jaccod2021predator}.}
{We then investigated patchiness by analyzing the spectra of population density fluctuations.
In the 3D case, both species display a $k^{-5/3}$ scaling, with no clear hint of deviations of biological origin,
which would manifest as a $k^{-1}$ or a $k^{-3}$ behavior according to theoretical predictions for a single species~\cite{DP1976}
and for two interacting ones~\cite{powell}, respectively.
This suggests that, from the point of view of their statistical properties, the reactive scalars behave as inert ones.}
{This is a further similarity with the 2D case. In the latter, our numerical results agree with a $k^{-1}$ spectrum (see also~\cite{jaccod2021predator}),
which is the expectation for a non-reactive scalar in 2D turbulence
forced at large scales. Nevertheless, the reactive contribution to the spectrum should also scale as $k^{-1}$ in such a case~\cite{powell},
which does not allow one to fully ascertain whether plankton patchiness is controlled by fluid dynamics or by biology.
Our results, obtained in flows of different dimensionality, thus rather robustly indicate the prevalence of turbulent transport.}
{The main difference between the 2D and the 3D cases emerges in the spatial distribution of the populations, which is a consequence
of the different structure of the corresponding wakes. The 3D flow, in fact, lacks the large coherent vortices present in the 2D one,
which are replaced by more complex, thinner vortical structures. Such a difference in the spatial organization of the reactive scalars has an impact
on global properties,
reducing the average phytoplankton density $\overline{\langle P \rangle}$ in the 3D case; this can be understood
from the smaller fraction of the total volume (area, in 2D) occupied by phytoplankton. In essence, by destroying the well-localized structures,
such as vortices and the filaments in between them, where phytoplankton rapidly reproduces, the 3D fluid dynamics hampers biological growth.}
\section*{Acknowledgements}
\noindent This work was granted access to the HPC resources [MESU] of the HPCaVe centre at UPMC-Sorbonne University.
\section*{Data Availability Statement}
\noindent The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
\section{Introduction}
Qubit state readout is a mandatory step in quantum information processing. For superconducting circuits, dispersive readout is the standard scheme \cite{PRApplied_Walter_2017, Sunada2022}. It relies on the transverse interaction between an anharmonic mode (whose first two levels are used as a qubit) and another mode (usually harmonic and used as a meter) in the dispersive approximation \cite{Blais_PRA_2004,PRA_Koch_2007}. With the resulting dispersive interaction, treated in perturbation theory, the qubit state shifts the meter frequency. It is thus inferred by distinguishing the two pointer states in phase space from the output field acquired when applying a coherent pulse close to the meter frequency for a time shorter than the relaxation time $T_1$ of the qubit. Using dispersive readout, single-shot readout with high fidelity is nowadays routinely achieved, notably thanks to quantum-limited Josephson parametric amplifiers (JPA) \cite{Aumentado2020}. However, the transverse interaction contains intrinsic limitations. The qubit states are slightly dressed by the meter states, which leads to Purcell decay \cite{Houck2008} and prevents an ideal quantum non-demolition (QND) readout \cite{Pereira_PRL_2022, Pereira_Arxiv_2022}. In addition, effects unwanted for the readout, such as the relaxation and excitation rates of the qubit, can increase with the readout photon number $\overline{n}$ \cite{Johnson2011,Minev2019,Lescanne2019}. To overcome these limitations, a non-perturbative cross-Kerr coupling between the qubit and the meter has been proposed \cite{PRA_Diniz_2013,Wang2019} and demonstrated, thanks to the properties of a transmon molecule \cite{Dumur2015,Roy2017,Roy2018,Dassonneville2020}, achieving high-fidelity and QND single-shot readout of a transmon qubit \cite{Dassonneville2020}. This result was obtained with a polariton meter in its linear regime, whose signal was amplified by an external JPA.
As an alternative to a JPA, superconducting qubit readout can also be performed using a Josephson bifurcation amplifier (JBA) \cite{Lupascu2006,Vijay2009, mallet_single-shot_2009}. A JBA presents a nonlinear relationship between its input and output amplitudes, leading to two stable states of small and large output amplitude for input signals below and above the bifurcation threshold, respectively.
The information on the qubit state is then encoded into those two output states. In addition, the bifurcation presents hysteresis, leading to a latching readout. The JBA dynamics is controlled by the detuning between the nonlinear resonator and the pump, the resonator losses $\kappa$, and its anharmonicity $U$.
A same-chip implementation allows a direct coupling between the qubit and the JBA, with an \textit{in-situ} amplifying bifurcation, greatly increasing the quantum detection efficiency \cite{Siddiqi2006,Lupascu2006,Boulant2007,mallet_single-shot_2009,Dewes2012,Eddins2019,Rosenthal2021}.
Up to now, bifurcation readout has been realized in the weak-anharmonicity limit $U \ll \kappa$, in which the bistability regime is reached for a photon number $\overline{n}$ in the nonlinear resonator larger than a critical number $N_\mathrm{crit}=\kappa/(3\sqrt{3}U) \gg 1$ \cite{Ong2011,Ong2013}, thus involving large photon numbers $\overline{n}$.
However, the large photon number needed to reach the bistability exposes the qubit to the excess backaction of the nonlinear cavity \cite{Laflamme2012,Boissonneault2012,Ong2011,Ong2013}.
In this paper, we demonstrate a qubit state latching readout using an \textit{in-situ} polariton bifurcation. Contrary to our previous work \cite{Dassonneville2020} where we considered the linear regime with $\overline{n} \ll N_\mathrm{crit}$,
here we investigate the nonlinear regime of the polaritons at large occupation $\overline{n} \gtrsim N_\mathrm{crit}$.
The polaritons originate from a strong hybridisation between a cavity harmonic mode and the anharmonic ancilla mode of the transmon molecule.
This dynamics is similar to that of two coupled nonlinear Kerr resonators exhibiting bistability \cite{Sarchi2008,Eichler2014,Winkel2020, Fischer2021}.
By adjusting the ancilla frequency through an external magnetic flux, we control the hybridisation, and consequently the anharmonicity and dissipation of each polariton. Instead of the usual weak-anharmonicity regime ($U \ll \kappa$, $N_\mathrm{crit} \gg 1$) of the JBA, we consider the mesoscopic regime ($U \sim \kappa$, $N_\mathrm{crit} \sim 1$), at the boundary with the quantum regime ($U \gtrsim \kappa$, $N_\mathrm{crit} \lesssim 1$) \cite{Muppalla2018, Andersen2020}. The bistable states then appear at photon numbers close to unity, $N_\mathrm{crit} \sim 1$, thus reducing the excess backaction on the qubit. In \cref{sec:hybridisation}, we discuss the ancilla-cavity hybridisation, the resulting polariton modes and their tunability.
The polaritons' response to a strong drive and its dependence on the qubit state are detailed in \cref{sec:transmission}. The hysteretic bistability behaviour of the nonlinear upper-polariton meter is analyzed in \cref{sec:bistability}. Finally, in \cref{sec:readout}, we take advantage of this bistability to perform a latching readout of the qubit state with a high single-shot fidelity and without any external quantum-limited amplifier.
\section{Tuning the ancilla-cavity hybridisation}
\label{sec:hybridisation}
\subsection{Transmon molecule in a cavity}
\begin{figure}
\includegraphics[width=8.6cm]{fig1_tunability_v2.pdf}
\caption{a) Scheme of the setup. We consider a cavity mode $\hat{c}$ of frequency $\omega_c$ strongly coupled to an anharmonic ancilla mode of frequency $\omega_a$ and self-Kerr nonlinearity $U_a$. The ancilla is further coupled to the qubit via a non-perturbative cross-Kerr coupling of rate $g_{zz}$. To perform readout, we send a coherent signal on the input of the cavity mode $\langle \hat{c}_\mathrm{in}\rangle$ and measure the transmitted cavity output field $\langle \hat{c}_\mathrm{out}\rangle$. b) Representation of the system in terms of cavity-ancilla polariton modes. Lower and upper polariton modes have distinct frequencies $\omega_l$ and $\omega_u$, respectively, as well as different self-Kerr non-linearities $U_l$ and $U_u$ inherited from the ancilla. Both polaritons are independently coupled to the qubit via cross-Kerr terms $\chi_l$ and $\chi_u$, which allows us to use these polariton modes as direct meters of the qubit states. The readout can be extracted from the same output field $c_\mathrm{out}$ due to the polaritons leakage rate $\kappa_l$ and $\kappa_u$.
c)-e) Measurements of the non-perturbative qubit-polariton cross-Kerr $\chi_{j}$ (in c), self-Kerr couplings $U_{jj}$ and inter-polariton cross-Kerr coupling $U_{ul}$ (in d), and polariton decay rates $\kappa_j$ (in e) as function of the hybridisation angle $\theta$ for lower polariton $j=l$ (orange) and upper polariton $j=u$ (purple). Colored solid lines correspond to the predictions of the polariton model in Eqs. \eqref{eq:H_polaritons} and \eqref{eq:polariton_decay} assuming initial rates (black lines) $g_\mathrm{zz}$ in c), $U_a$ in d) and $\kappa_c$ and $\kappa_a$ in e). Green diamonds in d) indicate $U_{ul}$.
f) Normalized self-Kerr polariton non-linearity $U_{jj}/\kappa_j$ versus normalized qubit-polariton cross-Kerr coupling $2\chi_j/\kappa_j$ for lower (orange) and upper (purple) polaritons. The enlarged point circled in black indicates the working point in this present work.
\label{fig:principe} }
\end{figure}
We use the same sample as in Ref.~\cite{Dassonneville2020}. It consists of a transmon molecule embedded in a $3$D cavity (see \cref{fig:principe}a).
It has three modes of interest: the harmonic TE$_{101}$ mode $\hat{c}$ of the rectangular $3$D copper cavity, with frequency $\omega_\mathrm{c}/2\pi = \SI{7.169}{GHz}$, and the two modes of the transmon molecule. The transmon molecule is made by coupling two nominally identical transmons both inductively and capacitively,
and can thus be conveniently decomposed into two orthogonal modes: a symmetric mode encoding a transmon qubit and an antisymmetric mode encoding an anharmonic oscillator, hereafter called the ancilla (see Refs.~\cite{Dumur2015, Dassonneville2020}).
The ancilla is approximated as a weakly nonlinear mode $\hat{a}$ with frequency $\omega_{a}$ tunable by magnetic flux and self-Kerr rate $U_a$.
Approximating the multilevel transmon as a qubit $\hat{\sigma}_z$, the total system Hamiltonian including cavity, ancilla, and qubit reads,
\begin{align}
\frac{\hat{H}}{\hbar} ={}& \frac{\omega_q}{2}\hat{\sigma}_z +\omega_a \hat{a}^\dag \hat{a} +\omega_c \hat{c}^\dag \hat{c} \notag\\ &- \frac{U_a}{2} \hat{a}^{\dag^2} \hat{a}^{^2} -g_{zz}\hat{\sigma}_z\hat{a}^\dag \hat{a}+g_{ac}(\hat{c}^\dag\hat{a}+\hat{a}^\dag\hat{c}).\label{FullHamqac}
\end{align}
The qubit $\hat{\sigma}_{\rm z}$, with frequency $\omega_q/2\pi =\SI{6.283}{GHz}$ and coherence times $T_2,~T_1 \simeq \SI{3.3}{\mu s}$, is coupled to the ancilla mode $\hat{a}$ via a \textit{non-perturbative} cross-Kerr coupling with rate $g_{\rm zz}$. The non-perturbative nature of this coupling makes it possible to maximize the speed, the single-shot
fidelity, and the QND character of the readout, while minimizing the effect of unwanted decay
channels such as the Purcell effect \cite{Dassonneville2020}.
\subsection{Ancilla-cavity hybridisation leading to polaritons}\label{polariton_transf}
To use this coupling for reading out the state of the qubit, we {strongly hybridize} the ancilla and the cavity by {setting} their detuning $\Delta_{ac} = \omega_a - \omega_c$ {to values comparable to or smaller than} the transverse coupling $g_{ac}/2\pi = \SI{295}{MHz}$.
At this operation point, $|\Delta_{ac}|\lesssim g_{ac}$, this hybridisation leads to two new normal modes called upper and lower polariton modes, $\hat{c}_u$ and $\hat{c}_l$, which are a linear combination of ancilla and cavity fields (see \cref{fig:principe}.b). They are given by a rotation $\hat{c}_u =\cos(\theta)\hat{a}+\sin(\theta)\hat{c}$, and $\hat{c}_l =\cos(\theta)\hat{c}-\sin(\theta)\hat{a}$, where the cavity-ancilla hybridization angle reads $\tan(2\theta)= 2g_{ac}/ \Delta_{ac}$. At resonance ($\Delta_{ac}=0$), $\theta = \pi/4$ while {at large detuning} ($|\Delta_{ac}| \gg g_{ac}$), the angle vanishes $\theta \xrightarrow{} 0$.
In terms of these polariton modes and using rotating wave approximation, the total Hamiltonian takes the form
\begin{align}
\frac{\hat{H}_{\rm p}}{\hbar} &= \frac{\omega_q}{2}\hat{\sigma}_z - \sum_{j=u,l} \chi_{j}\hat{c}^\dagger_j \hat{c}_j\hat{\sigma}_z \notag \\
& + \sum_{j=u,l} (\omega_j \hat{c}_j^\dag \hat{c}_j - \frac{U_{jj}}{2} \hat{c}_j^{\dag^2} \hat{c}_j^{^2}) -U_{ul}\hat{c}^\dagger_l \hat{c}_l \hat{c}^\dagger_u \hat{c}_u ,
\label{eq:H_polaritons}
\end{align}
where $\omega_u= \sin^2(\theta) \omega_c + \cos^2(\theta)\omega_a + \sin(2 \theta) g_{ac} $ and $\omega_l = \cos^2(\theta)\omega_c + \sin^2(\theta)\omega_a- \sin(2 \theta)g_{ac}$ are the frequencies of the upper and lower polariton modes, respectively. Each polariton mode is in some proportion cavity-like and therefore can be probed in transmission and used for readout. Similarly, each polariton is also ancilla-like and thus inherits nonlinearities from the ancilla, notably the \textit{non-perturbative} cross-Kerr coupling to the qubit. The corresponding interaction strengths read $\chi_{u}= g_{zz}\cos^2(\theta)$ and $\chi_{l}= g_{zz}\sin^2(\theta)$, for the upper and lower polariton, respectively. Each polariton also inherits an anharmonicity from the ancilla given by $U_{ll} = \sin^4(\theta) U_a$ and $U_{uu} = \cos^4(\theta) U_a$. They also acquire a cross-anharmonicity or cross-Kerr interaction $U_{ul} = \sin^2(2\theta) U_a/2$, a coupling similar to the dispersive interaction that still occurs even beyond the dispersive regime \cite{Ansari2019}. Finally, the polaritons have effective decay rates given by a combination of the bare ancilla $\kappa_{a}$ and cavity $\kappa_{c}$ decay rates as (cf.~App.~\ref{Append:bistability})
\begin{align}
\kappa_u &=\kappa_c\sin^2({\theta})+\kappa_a\cos^2({\theta}), \notag \\
\kappa_l&=\kappa_c\cos^2({\theta})+\kappa_a\sin^2({\theta}).
\label{eq:polariton_decay}
\end{align}
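As an illustration, the angle dependence of the rates inherited by the polaritons, Eqs.~\eqref{eq:H_polaritons} and \eqref{eq:polariton_decay}, can be evaluated numerically. The short Python sketch below is illustrative only (it is not the analysis code of this work) and uses the bare rates quoted later in the text:

```python
import numpy as np

# Bare rates quoted in the text, in units of 2*pi*MHz (illustrative values)
g_zz, U_a = 34.5, 13.5        # qubit-ancilla cross-Kerr and ancilla self-Kerr
kappa_c, kappa_a = 12.7, 5.6  # bare cavity and ancilla decay rates

def polariton_params(theta):
    """Polariton rates inherited from the ancilla at hybridisation angle theta."""
    s2, c2 = np.sin(theta)**2, np.cos(theta)**2
    chi_u, chi_l = g_zz * c2, g_zz * s2       # qubit-polariton cross-Kerr
    U_uu, U_ll = U_a * c2**2, U_a * s2**2     # polariton self-Kerr
    U_ul = U_a * np.sin(2 * theta)**2 / 2     # inter-polariton cross-Kerr
    kappa_u = kappa_c * s2 + kappa_a * c2     # polariton decay rates
    kappa_l = kappa_c * c2 + kappa_a * s2
    return chi_u, chi_l, U_uu, U_ll, U_ul, kappa_u, kappa_l
```

At resonance ($\theta=\pi/4$) both polaritons share the ancilla nonlinearity equally, $U_{uu}=U_{ll}=U_a/4$, and both decay rates reduce to the average $(\kappa_c+\kappa_a)/2$.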
\subsection{Tuning hybridisation}
The ancilla can be tuned to discrete frequencies, independently from the qubit and cavity, by applying an external magnetic flux $\Phi$ with a single external coil, since the sample is built with two superconducting loops of different sizes (see Ref.~\cite{Dassonneville2020}). This allows us to tune the hybridisation conditions and thus the different parameters in Eqs.~\ref{eq:H_polaritons} and \ref{eq:polariton_decay}. We extracted these parameters (see Figs.~\ref{fig:principe}.c-f) as a function of the hybridization angle by measuring at different flux points. The non-perturbative cross-Kerr couplings $\chi_{l}$ and $\chi_{u}$ are well fitted by a bare qubit-ancilla cross-Kerr coupling $g_{zz}/2\pi = \SI{34.5}{MHz}$ (\cref{fig:principe}.c). The polariton self-Kerr and cross-Kerr couplings $U_{ll}$, $U_{uu}$, and $U_{lu}$ are well fitted by the polariton model with a bare ancilla anharmonicity $U_a/2\pi = \SI{13.5}{MHz}$ (\cref{fig:principe}.d). The polariton decay rates are only qualitatively fitted by the polariton model (\cref{fig:principe}.e), with bare cavity and ancilla decay rates $\kappa_c/2\pi = \SI{12.7}{MHz}$ and $\kappa_a/2\pi = \SI{5.6}{MHz}$. Discrepancies may be explained by the fact that the bare ancilla decay rate can vary with its frequency, due to the presence of other loss channels such as fluctuating two-level systems, or by a residual parasitic transverse coupling of the ancilla and cavity to the qubit (see Ref.~\cite{Dassonneville2020}).
Thanks to the hybridisation tunability, we can set different regimes (\cref{fig:principe}.f) for the polariton meters used to read out the qubit. In Ref.~\cite{Dassonneville2020}, we focused on the linear response, with a small self-Kerr $U/\kappa = 0.017$ and a moderate drive $\overline{n} \approx 2\ll N_\mathrm{crit}$. This linear regime is obtained at the zero-flux point ($\Phi/\Phi_0 = 0$), where the lower polariton is mostly cavity-like. In this work, we focus {on} a different regime, where the anharmonicity is comparable to the dissipation ($U\sim \kappa$). It is obtained at $\Phi/\Phi_0 = 5$, when the ancilla is close to resonance with the cavity ($|\Delta_{ac}|\lesssim g_{ac}$). The hybridisation is then close to maximum: the upper polariton inherits from the ancilla an anharmonicity $U_{uu}/2\pi=\SI{6.5}{MHz}$ and has a decay rate $\kappa_u/2\pi=\SI{7.6}{MHz}$. At this working point, the cross-Kerr coupling to the qubit is the dominant parameter, with $\chi_u/2\pi=\SI{24}{MHz}$. All other parameters for these two flux points are summarized in \cref{table}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c c c c c c}
\hline
$\Phi/\Phi_0 $ & ${\omega}_l/2\pi$ & $\chi_l/2\pi$ & $\kappa_l/2\pi$ & $U_\mathrm{ll}/2\pi $ & $\bar{\theta}$ \\
0 & \SI{7.0335}{GHz} & \SI{4.5}{MHz} & \SI{11.8}{MHz} & \SI{0.2}{MHz} & \SI{0.384}{\radian} \\
\hline
\end{tabular}
\\
\begin{tabular}{c c c c c c}
\hline
$\Phi/\Phi_0 $ & ${\omega}_u/2\pi$ & $\chi_u/2\pi$ & $\kappa_u/2\pi$ & $U_\mathrm{uu}/2\pi $ & $\bar{\theta}$ \\
5 & \SI{7.575}{GHz} & \SI{24}{MHz} & \SI{7.6}{MHz} & \SI{6.5}{MHz} & \SI{0.602}{\radian} \\
\hline
\end{tabular}
\caption{Effective polariton parameters at two flux points: $\Phi/\Phi_0=0$, corresponding to the working point of Ref.~\cite{Dassonneville2020}, and $\Phi/\Phi_0=5$, corresponding to the present working point.}
\label{table}
\end{center}
\end{table}
\section{Qubit dependent polaritons response}
\label{sec:transmission}
\begin{figure*}[t]
\includegraphics[width=0.9\textwidth]{Figure_1_v7.pdf}
\caption{ a) Measured mean distance $D_\mathrm{eg}^\mathrm{exp}$ between the two pointer states of the qubit as a function of drive frequency $\omega_d$ and power $P_{\rm in}$. Horizontal dashed lines are guides to the eye indicating the different regimes of the system. From bottom to top: {linear}, {non-linear with self-Kerr}, higher-order {non-linearities}, and {strongly driven} bare-cavity {regime}.
b) Cross-sections of $D_{eg}$ versus $\omega_d$, normalized by their experimental maximum value, at fixed drive powers indicated by the arrows. In green: experimental data; in red: theoretical model in the ancilla-cavity basis (\cref{eq:polaritons_motion}); in dashed purple: theoretical model in the polariton basis (\cref{nonlinear_polaritons}). c) Computed $D_\mathrm{eg}^\mathrm{th}$ using the model of \cref{eq:polaritons_motion}.
d) {Decomposition of the upper polariton into} cavity {$p_{c}^\eta$} (green) and ancilla {$p_{a}^\eta$} (brown) components as a function of power $P_{\rm in}$ {when the qubit is in $\eta=g$ (solid lines) or $\eta=e$ (dashed lines)}. The proportions $p_{c}^\eta$ and $p_{a}^\eta$ are computed using model (\ref{eq:polaritons_motion}) and following the drive frequency-power line $\omega_d(P_{\rm in})$ {indicated by the solid and dashed lines in panel c), corresponding to the qubit states $\eta=g$ and $\eta=e$, respectively.}
\label{fig:distance_map} }
\end{figure*}
{At time scales} much shorter than the qubit's lifetime $t\ll T_1$, the qubit state remains static in our setup and its main effect is to induce a qubit-dependent shift on the ancilla frequency as
\begin{align}
\omega_a\rightarrow {}&\bar{\omega}_a^{(\eta)}=\omega_a-g_{zz}\langle \hat{\sigma}_z\rangle_\eta,\label{shift}
\end{align}
where $\langle\hat{\sigma}_z\rangle_e=+1$ when the qubit is prepared in the excited state $\eta=e$ and $\langle\hat{\sigma}_z\rangle_g=-1$ when the qubit is in the ground state $\eta=g$. In terms of polaritons (\ref{eq:H_polaritons}), this translates to a similar qubit-dependent shift on each polariton mode $j=u,l$ as
\begin{align}
\omega_j\rightarrow {}&\bar{\omega}_j^{(\eta)}=\omega_j-\chi_j\langle \hat{\sigma}_z\rangle_{\eta},
\end{align}
as well as to a change in the hybridization angle as $\theta\rightarrow \arctan\left[2g_{ac}/(\bar{\omega}_a^{(\eta)}-\omega_c)\right]/2$.
To observe this qubit-dependent shift on the polaritons experimentally, we drive the cavity with a coherent field $\langle \hat{c}_{\rm in }\rangle=i(\Omega_c/\sqrt{\kappa_c}) e^{-i\omega_d t}$ of frequency $\omega_d$ and amplitude $\Omega_c$. We then perform homodyne detection at the transmission output of the cavity $\langle \hat{c}_{\rm out} \rangle_\eta=\sqrt{\kappa_c}\langle \hat{c} \rangle_\eta$, which contains information about the qubit state $\eta=g,e$ and about the polaritons as $\langle \hat{c}_{\rm out} \rangle_\eta =\sqrt{\kappa_c}(\sin(\theta) \langle \hat{c}_u \rangle_\eta + \cos(\theta) \langle \hat{c}_l \rangle_\eta)$. The experimental results are shown in Fig.~\ref{fig:distance_map}a, where we display the average pointer distance,
\begin{align}
D_{eg} = \vert \langle \hat{c}_{\rm out}\rangle_{e} - \langle \hat{c}_{\rm out}\rangle_g \vert,
\end{align}
as a function of drive frequency $\omega_d$ and power $P_\mathrm{in}$, which relates to the amplitude as $\Omega_c= \sqrt{\kappa_c P_{\rm in}/\hbar\omega_d}$. For this, we integrate over a \SI{500}{ns} square readout pulse after preparing the qubit in its excited state ($\langle \hat{c}_{\rm out}\rangle_{e}$), by applying a \SI{30}{ns} square $\pi$ pulse, or in its ground state ($\langle \hat{c}_{\rm out}\rangle_{g}$). The input-power axis $P_\mathrm{in}$ is calibrated from the room-temperature power and the attenuation of the input line. The colorbar of $D_{eg}$ is calibrated from the gain of the output line. For both the input- and output-line calibrations, we assumed a flat frequency response, which holds in our frequency window up to $\pm 1$~dB.
As a first approximation, we treat both polaritons as independent by neglecting their mutual coupling $U_{ul}$. It is then possible to find a simple semi-classical model that properly describes our measurements {up to moderate powers}. When reaching a quasi steady state $1/\kappa_c\ll t \ll T_1$, the polariton amplitudes are solutions of the non-linear equations (cf.~App.~\ref{Append:bistability}),
\begin{align}
\langle \hat{c}_j\rangle_\eta = \frac{-i\Omega_j}{\kappa_j/2-i(\omega_d-\bar{\omega}_j^{(\eta)}+U_{jj} |\langle \hat{c}_j\rangle_\eta|^2)},\label{nonlinear_polaritons}
\end{align}
with effective polariton driving strengths $\Omega_u=\sin(\theta)\Omega_c$ and $\Omega_l=\cos(\theta)\Omega_c$.
For very weak driving, we can further neglect the term $U_{jj}$ in \cref{nonlinear_polaritons} and obtain standard Lorentzian lineshapes. In this {linear} regime ($P_\mathrm{in} \lesssim \SI{-112}{dBm}$), we observe four peaks in $D_{eg}$ at the qubit-dependent polariton frequencies $\bar{\omega}_u^{(e)}=\omega_u + \chi_{u} = 2\pi\cdot\SI{7.599}{GHz}$, $\bar{\omega}_l^{(e)}=\omega_l + \chi_{l} = 2\pi\cdot\SI{6.963}{GHz}$, $\bar{\omega}_u^{(g)}=\omega_u - \chi_{u} = 2\pi\cdot\SI{7.552}{GHz}$, and $\bar{\omega}_l^{(g)}=\omega_l - \chi_{l} = 2\pi\cdot\SI{6.942}{GHz}$, {which can be resolved thanks to the large cross-Kerr shifts $2\chi_j\gtrsim \kappa_j$}. {The agreement between the measurement and model (\ref{nonlinear_polaritons}) can be observed in Fig.~\ref{fig:distance_map}b), where we plot cross-sections of $D_{eg}$ at given input powers}. As the driving power increases, we need to keep the nonlinear term in the denominator of Eq.~(\ref{nonlinear_polaritons}) and solve it self-consistently as a Duffing-oscillator equation (cf.~\cref{Append:bistability}). As a result, the polariton frequencies are down-shifted by their self-Kerr rate, $\bar{\omega}_j^{(\eta)}\rightarrow\bar{\omega}_j^{(\eta)}-U_{jj} |\langle \hat{c}_j\rangle_\eta|^2$, as shown in Figs.~\ref{fig:distance_map}a) and b).
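This self-consistent solution can be sketched numerically: taking the modulus squared of \cref{nonlinear_polaritons} yields a cubic equation for the occupation $n_j=|\langle\hat{c}_j\rangle_\eta|^2$, whose real positive roots can be found directly. The minimal Python illustration below uses arbitrary parameters and is not the analysis code of this work:

```python
import numpy as np

def duffing_occupations(delta, kappa, U, Omega):
    """Real, positive roots n = |<c_j>|^2 of the steady-state cubic
        n * [ (kappa/2)**2 + (delta + U*n)**2 ] = Omega**2,
    with delta = omega_d - omega_j the drive detuning. One root outside
    the bistability zone, three (two stable, one unstable) inside."""
    coeffs = [U**2, 2 * delta * U, delta**2 + (kappa / 2)**2, -Omega**2]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-6 and r.real > 0)
```

For red detuning and strong enough drive the cubic admits three positive solutions, signalling bistability; for weak drive a single solution survives.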
Above a critical driving strength $\Omega_\mathrm{crit}$, the polaritons enter a bistability region, as hinted at by the sharp wave-like shape {of the spectroscopic cross-sections above $P_{\rm in}\gtrsim -108$ dBm in} \cref{fig:distance_map}b. Contrary to the case of a single nonlinear resonator, \cref{nonlinear_polaritons} here presents two bistability zones, one close to each polariton frequency, similarly to two coupled nonlinear resonators \cite{Eichler2014}. The bistability region of the upper polariton is studied in the following section, \cref{sec:bistability}.
{For the upper polariton,} the effective model in \cref{nonlinear_polaritons} works well {up to $P_{\rm in}\lesssim -89$ dBm}, as indicated by the middle horizontal line in Fig.~\ref{fig:distance_map}a, but needs to be refined at higher power. Indeed, {at} a given flux point, {describing} a polariton {by} a fixed proportion of ancilla-like and cavity-like components for each qubit state {is no longer accurate at high power.} As the driving power increases, the ancilla frequency is effectively modified as $\bar{\omega}_a^{(\eta)}\rightarrow \bar{\omega}_a^{(\eta)} - U_a |\langle \hat{a}\rangle_\eta|^2$, which further modifies the hybridisation condition as $\theta \rightarrow\arctan(2g_{ac}/ (\bar{\omega}_a^{(\eta)}-U_a |\langle \hat{a}\rangle_\eta|^2-\omega_c) )/2$.
To achieve {a more complete} description of the circuit, we consider the dynamics in the cavity-ancilla basis. As shown in \cref{Append:bistability}, we can still neglect quantum fluctuations, obtaining the following non-linear system of equations for the quasi steady-state amplitudes:
\begin{align}
\label{eq:polaritons_motion}
&[\kappa_c/2-i(\omega_d-\omega_c)] \langle \hat{c} \rangle_\eta + i g_{ac} \langle \hat{a} \rangle_\eta + i\Omega_c = 0, \\
&[\kappa_a/2-i(\omega_d-\bar{\omega}_a^{(\eta)}+U_a |\langle \hat{a} \rangle_\eta|^2)]\langle \hat{a} \rangle_\eta + i g_{ac} \langle \hat{c} \rangle_\eta = 0.\nonumber
\end{align}
We map these equations onto a standard Duffing-oscillator polynomial equation of order 3 (see~App.~\ref{Append:bistability}), which, depending on the driving frequency $\omega_d/2\pi$ and amplitude $\Omega_c$, has either a single stable solution or two stable solutions and one unstable solution. In Fig.~\ref{fig:distance_map}c) we show the theoretical prediction, which agrees well with the cavity transmission measurements in Figs.~\ref{fig:distance_map}a) and {b) up to large powers}. {Since the model (\ref{eq:polaritons_motion}) includes the coupling between polaritons and agrees well with model (\ref{nonlinear_polaritons}), we conclude that the polariton modes are nearly uncoupled and can therefore be used as independent meters for the qubit, as discussed in the next section.}
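For concreteness, the coupled equations \eqref{eq:polaritons_motion} can also be solved by a damped fixed-point iteration, sketched below in Python with arbitrary units; the seed \texttt{a0} and the mixing parameter are ad hoc choices used to select a branch, and none of the values are device parameters:

```python
import numpy as np

def cavity_ancilla_steady_state(wd, wc, wa_bar, g, kc, ka, Ua, Omega,
                                a0=0.0, n_iter=4000, mix=0.05):
    """Quasi steady state of the coupled cavity-ancilla equations:
    eliminate <c> from the first equation, re-insert it into the second,
    and iterate with under-relaxation until the amplitudes converge."""
    a = complex(a0)
    for _ in range(n_iter):
        c = -1j * (g * a + Omega) / (kc / 2 - 1j * (wd - wc))
        a_new = -1j * g * c / (ka / 2 - 1j * (wd - wa_bar + Ua * abs(a)**2))
        a = (1 - mix) * a + mix * a_new  # damped update for stability
    return c, a
```

In the linear limit ($U_a=0$) the iteration converges to the analytic solution $\langle\hat{c}\rangle = -i\Omega_c/[\kappa_c/2 - i(\omega_d-\omega_c) + g_{ac}^2/(\kappa_a/2 - i(\omega_d-\bar{\omega}_a^{(\eta)}))]$, which provides a simple consistency check.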
Using the numerical simulation {(\ref{eq:polaritons_motion}), we also extract more information about the decomposition of the polariton modes in terms of cavity and ancilla components. In particular, we} compute for both qubit states $\eta$ the cavity and ancilla population proportions, {$p_{c}^\eta = \frac{|\langle \hat{c} \rangle_\eta|^2}{|\langle \hat{c} \rangle_\eta|^2+|\langle \hat{a} \rangle_\eta|^2}$} and {$p_{a}^\eta = \frac{|\langle \hat{a} \rangle_\eta|^2}{|\langle \hat{c} \rangle_\eta|^2+|\langle \hat{a} \rangle_\eta|^2}$}, along the upper polariton branch $\omega_d(P_\mathrm{in})$ {as indicated by the solid and dashed lines in \cref{fig:distance_map}c).}
The ancilla-cavity hybridization depends on the qubit state, as the latter shifts the ancilla frequency by $2g_{zz}$. In our case, the upper polariton is therefore always more ancilla-like when the qubit is in its ground state than in its excited state (\cref{fig:distance_map}.d). At low power, the upper polariton is more ancilla-like than cavity-like, while the opposite holds for the lower polariton.
Effective ancilla-cavity resonance can be achieved for $|\expval{a}_\eta|^2 = (\bar{\omega}_a^{(\eta)} - \omega_c )/U_a$, where both polaritons become 50-50 ancilla-like and cavity-like.
At even higher input power, branches shifting upward in frequency with input power appear in the measurement (\cref{fig:distance_map}.a). These features are not captured by our numerical model. We {believe} they are due to neglected higher-order nonlinear terms of the anharmonic ancilla mode, such as the sixth-order term, whose sign is opposite to that of the fourth-order self-Kerr term. The dynamics becomes increasingly complex and is beyond the scope of this paper. It is however worth noting that, for a large enough driving power ($P_\mathrm{in} \geq \SI{-75}{dBm}$ in our case), the system reaches a regime where the polariton physics disappears and the qubit state can be read out close to the bare cavity frequency $\omega_\mathrm{c}/2\pi = \SI{7.169}{GHz}$. This behaviour appears similar to the quantum-to-classical transition physics of Ref.~\cite{Reed2010}, resulting in a destructive readout of the qubit state. At this high power, the ancilla is effectively far detuned from the cavity ($\theta \rightarrow 0$) and the upper polariton becomes more and more cavity-like, until only the bare cavity is recovered (\cref{fig:distance_map}.d).
\begin{figure}
\includegraphics[width=8.6cm]{Figure_2_v2.pdf}
\caption{Bistability region for the upper polariton mode. (a) Schematic of the hysteretic behaviour. Left: Pulse sequence $\Omega_c(t)$ for a ramp up and ramp down applied to cavity. This induces a bifurcation up (and down) in $\langle \hat{c}_{\rm out}\rangle^{\rm up}_\eta$ (and $\langle \hat{c}_{\rm out}\rangle^{\rm down}_\eta$), at different points $B^\mathrm{up}_\eta$ (and $B^\mathrm{down}_\eta$). Right: Hysteretic signal $D^\mathrm{ud}_{\eta}$ is reconstructed from the output signal during the ramp up and ramp down. (b)-(c) Measurement of the bistability hysteretic signal $D^\mathrm{ud}_\eta$ as a function of the input power $P_\mathrm{in}$ and frequency $\omega_d/2\pi$ when the qubit is prepared in $\eta=g$ (b) or in $\eta=e$ (c). The colormap corresponds to the experimental data and the contour lines correspond to the theory in \cref{eq:polaritons_motion}.
}
\label{fig:bistability}
\end{figure}
\section{Qubit dependent upper polariton bistability regions}
\label{sec:bistability}
We now focus on frequencies around the upper polariton mode and study its bistability region in detail. As discussed in the previous section, above a certain driving strength $\Omega_\mathrm{crit}$ the nonlinear polariton can have two stable quasi steady states. When crossing a bistability zone (where two stable solutions coexist), the system presents a hysteretic behavior. For a fixed drive frequency and a ramp up in amplitude, $\dot{\Omega}_c(t)>0$, the polariton bifurcates from a stable low-amplitude output to a stable high-amplitude output when crossing the point $B^{\rm up}_{\eta} (\omega_d, P_{\rm in})$ (see \cref{fig:bistability}a). Reciprocally, for a ramp down in amplitude, $\dot{\Omega}_c(t)<0$, the polariton bifurcates from a high- to a low-amplitude output when crossing a different point $B^{\rm down}_\eta(\omega_d, P_{\rm in}) < B^{\rm up}_\eta(\omega_d, P_{\rm in})$. To characterize the bistability zone of the upper polariton, we measure the distance
\begin{align}
D^{\rm ud}_\eta=\vert \langle \hat{c}_{\rm out}\rangle_{\eta}^{\rm up}- \langle \hat{c}_{\rm out}\rangle_{\eta}^{\rm down}\vert,
\end{align}
which compares the average output amplitude $\langle \hat{c}_{\rm out}\rangle^{\rm up}_\eta$ measured after a \SI{500}{ns} ramp up with the average amplitude $\langle \hat{c}_{\rm out}\rangle^{\rm down}_\eta$ obtained after a \SI{500}{ns} ramp down to a new quasi-steady state. The bistability region is identified by a non-zero hysteretic signal $D^{\rm ud}_\eta \neq 0$, since outside of this region we have $D^{\rm ud}_\eta = 0$. Remarkably, the bistability region depends on the qubit-dependent shift of the upper polariton, which can be controlled experimentally by initializing the qubit in its excited state $\eta=e$ (with a $\pi$-pulse) or in its ground state $\eta=g$ (without one).
In Figs.~\ref{fig:bistability}b) and c) we display the measured $D^{\rm ud}_\eta$ and the bistability regions for the qubit prepared in its ground state $\eta=g$ or its excited state $\eta=e$, respectively. Due to the cross-Kerr coupling between the qubit and the ancilla, the state of the qubit shifts the frequency of the ancilla and thus the ancilla-cavity detuning. In consequence, the qubit state shifts the parameters in \cref{eq:polaritons_motion} and thus moves the bistability zone. From the polaritons' point of view, the qubit state shifts the upper polariton frequency by $2\chi_{u}$, and thus the bistability zone is also shifted by approximately $2\chi_{u}$. For both qubit states, the bistability zone and the hysteretic amplitude $D^{\rm ud}_\eta$ are captured by \cref{eq:polaritons_motion}.
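The hysteresis protocol described above can be illustrated on a minimal classical model: a single driven Kerr mode obeying $\dot{\alpha} = (i\delta - \kappa/2)\alpha - iU|\alpha|^2\alpha - i\varepsilon(t)$. The parameter values below are illustrative, not the experimental ones; sweeping the drive amplitude up and then down reproduces a nonzero hysteretic signal $D^{\rm ud}$ only inside the bistable window.

```python
import numpy as np

# Illustrative Kerr-mode parameters (units of kappa); bistability requires
# delta > sqrt(3)*kappa/2 with delta and U of the same sign.
kappa, delta, U = 1.0, 3.0, 1.0

def sweep(eps_values, dt=0.02, settle=200):
    """Adiabatically sweep the drive amplitude; return |alpha| at each eps."""
    alpha, out = 0.0 + 0.0j, []
    for eps in eps_values:
        for _ in range(settle):  # let the mode settle at this drive strength
            alpha += dt * ((1j * delta - kappa / 2) * alpha
                           - 1j * U * abs(alpha) ** 2 * alpha - 1j * eps)
        out.append(abs(alpha))
    return np.array(out)

eps_up = np.linspace(0.2, 3.0, 120)      # ramp up ...
amp_up = sweep(eps_up)
amp_down = sweep(eps_up[::-1])[::-1]     # ... then ramp down
D_ud = np.abs(amp_up - amp_down)         # hysteretic signal, cf. D^ud

i = np.argmin(np.abs(eps_up - 1.5))      # probe inside the bistable window
print(f"D_ud inside window: {D_ud[i]:.2f}")   # nonzero: two coexisting branches
print(f"D_ud outside window: {D_ud[1]:.2f}")  # ~0: single stable branch
```

For these parameters the two fold points sit near $\varepsilon \approx 0.86$ and $\varepsilon \approx 2.06$, so the probe at $\varepsilon = 1.5$ sees the low branch on the way up and the latched high branch on the way down.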
\section{Qubit state latching readout}
\label{sec:readout}
\begin{figure}
\includegraphics[width=8.6cm]{Figure_3_v2.pdf}
\caption{Readout fidelity as a function of power and frequency. Superimposed are the computed bistability zones for the upper polariton as shown in \cref{fig:bistability}. The point of maximum fidelity is indicated by a black star. }
\label{fig:fidelity}
\end{figure}
We measure the single-shot readout fidelity around the bistability zones of the upper polariton shown in \cref{fig:bistability}. The readout fidelity as a function of the frequency and power of the signal is shown in \cref{fig:fidelity}.
We see that for amplitudes below $B^{\rm up}_{e}$ (region I), the polariton does not bifurcate up for either qubit state. For amplitudes above $B^{\rm up}_{g}$ (region III), the polariton bifurcates up for both qubit states.
Therefore, in regions I and III the readout fidelity is close to zero. This is not the case in region II, where we observe a very high fidelity. Here, at the beginning of the pulse, the input power is ramped up and crosses $B^{\rm up}_{e}$ but not $B^{\rm up}_{g}$. The upper polariton thus bifurcates up only if the qubit is in its excited state $\ket{e}$. During the pulse, the upper polariton will not bifurcate down as long as the input power does not fall below $B^{\rm down}_{g}$ or $B^{\rm down}_{e}$, even if the qubit relaxes to its ground state $\ket{g}$. The high-amplitude output state is thus latched as long as the input power is maintained above $B^{\rm down}_\eta$, even if the qubit has relaxed.
Using heralding to mitigate state-preparation errors, we measure a maximum readout fidelity of $\mathcal{F}_\mathrm{RO} = 1 - [P(e \vert g) + P(g \vert e)]/2 =\SI{98.6}{\%}$ for a \SI{500}{ns} square pulse. Here, $P(\alpha \vert \beta)$ is the error probability of reading state $\alpha$ when the system was prepared in state $\beta$. This fidelity is obtained at \SI{7.508}{GHz} drive frequency and \SI{-89}{dBm} input power (see \cref{fig:fidelity}). At this working point, the photon number of the readout mode is around $\overline{n}_g \sim 0$ and $\overline{n}_e \sim 9$ when the qubit is in $\ket{g}$ and $\ket{e}$, respectively. The error probabilities $P(\alpha \vert \beta)$ are obtained by counting statistics with thresholding from the readout histograms over $10^5$ repetitions where the qubit is prepared either in its ground or excited state.
We extract $P(e \vert g) = \SI{0.9}{\%}$ and $P(g \vert e) = \SI{1.9}{\%}$.
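The thresholding procedure can be sketched on synthetic data; the Gaussian separation and the wrong-branch rates below are illustrative assumptions chosen to mimic the quoted error probabilities, not the measured histograms.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**5  # repetitions per prepared state, as in the experiment

# Synthetic histograms of the integrated output quadrature: a low-amplitude
# Gaussian if the polariton did not latch (qubit in |g>), a high-amplitude
# one if it did (qubit in |e>). A small fraction of wrong-branch events
# mimics spurious/missed bifurcations (rates below are assumptions).
def histogram(mean, wrong_mean, p_wrong):
    wrong = rng.random(N) < p_wrong
    return np.where(wrong, rng.normal(wrong_mean, 1.0, N),
                           rng.normal(mean, 1.0, N))

sig_g = histogram(0.0, 10.0, 0.009)   # prepared |g>, rare spurious latching
sig_e = histogram(10.0, 0.0, 0.019)   # prepared |e>, rare missed latching

threshold = 5.0                        # midpoint between the two Gaussians
P_e_given_g = np.mean(sig_g > threshold)   # P(e|g)
P_g_given_e = np.mean(sig_e <= threshold)  # P(g|e)
F = 1 - (P_e_given_g + P_g_given_e) / 2

print(f"P(e|g) = {P_e_given_g:.3%}, P(g|e) = {P_g_given_e:.3%}, F = {F:.2%}")
```

With a $10\sigma$ separation the Gaussian-overlap contribution is negligible, so the extracted fidelity is dominated by the injected wrong-branch rates, as in the error budget discussed next.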
During the \SI{500}{ns}-long readout, we would expect the qubit to have relaxed in around $1-e^{-t_\mathrm{int}/2T_1} \simeq \SI{8}{\%}$ of the shots. Thanks to the latching bifurcation mechanism, this error is reduced to $1-e^{-t_\mathrm{b}/2T_1}$, as the characteristic bifurcation time $t_\mathrm{b}$ is smaller than the readout time.
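As a numerical consistency check on these figures ($T_1$ is not stated in this section; the value below is back-inferred from the $\simeq\SI{8}{\%}$ figure, and $t_\mathrm{b}$ is a hypothetical bifurcation time, so both are assumptions):

```python
import math

t_int = 500e-9   # readout integration time
T1 = 3.0e-6      # qubit lifetime, back-inferred from the ~8% figure (assumption)

err_no_latch = 1 - math.exp(-t_int / (2 * T1))  # relaxation error without latching
print(f"without latching: {err_no_latch:.1%}")  # ~8%

t_b = 50e-9      # hypothetical bifurcation time t_b << t_int (assumption)
err_latch = 1 - math.exp(-t_b / (2 * T1))
print(f"with latching:    {err_latch:.1%}")     # reduced by the latching
```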
Within the \SI{1.4}{\%} readout infidelity, we attribute \SI{0.1}{\%} to the overlap between the two Gaussians corresponding to each qubit state, and we expect around \SI{0.3}{\%} due to false qubit preparation.
The remaining \SI{1}{\%} error is attributed to qubit transitions before bifurcation and/or wrong bifurcation events. We believe the wrong-bifurcation errors to be small thanks to the large shift $2\chi_u/2\pi = \SI{48}{MHz}$, which allows us to work with input powers far enough from the $B_{\eta}^\mathrm{down/up}$ points.
The state of the qubit can thus be read out in a single-shot manner without any external quantum-limited amplifier. This is accomplished thanks to the qubit-state-dependent bistability and enhanced by a latching mechanism. Here, this regime is achieved at low photon number (about 9), in strong contrast with the very high photon numbers used in Refs.~\cite{Reed2010,Gusenkova2021}.
Contrary to the case of an \textit{in-situ} JBA with transverse interaction to the qubit \cite{mallet_single-shot_2009, Schmitt2014, krantz_single-shot_2016}, here a large readout shift $\chi_u > \kappa_u, \text{ } U_{uu}$ is possible without suffering from Purcell decay, thanks to the non-perturbative nature of the cross-Kerr coupling (see \cref{Append:comparison} for a comparison).
Using a two-step pulse \cite{Schmitt2014} or shelving techniques \cite{Schmitt2014, Elder2020, Chen_arXiv2022}, the errors due to wrong bifurcation events and relaxation before bifurcation may be further reduced.
\section{Conclusions and Outlook}
We investigated the nonlinear response of the polariton modes to a strong drive and its dependence on the qubit state $\ket{g}$ or $\ket{e}$. In particular, we measured a hysteretic bifurcation due to a bistability of the upper polariton. Thanks to this effect, a latching-like readout with \SI{98.6}{\%} fidelity has been performed at large power and without any external quantum-limited amplification.
Due to the non-perturbative cross-Kerr coupling between the qubit and the polaritons, the qubit does not suffer from Purcell decay. An \textit{in-situ} bifurcation amplifier of the qubit state with a large readout shift $\chi>\kappa$ without deteriorating the qubit lifetime is thus possible and has been demonstrated. Such a readout with an \textit{in-situ} amplification could reach quantum detection efficiency close to one by optimizing the techniques presented here. To this end, imperfections can be systematically studied and parameters recalibrated using recent QND detector tomography protocols \cite{Pereira_PRL_2022, Pereira_Arxiv_2022,Rudinger2022}.
The polaritons could also be used as an \textit{in-situ} JPA or Josephson Parametric Dimer \cite{Eichler2014,Winkel2020} by degenerate pumping below the critical bistability points of each polariton, or by degenerate pumping in between the frequencies of the two polaritons.
\acknowledgments
The authors thank D. Vion and E. Dumur for fruitful discussions. R. D. acknowledges funding from CFM pour la recherche. V.M. and O. B. acknowledge support from ANR REQUIEM (ANR-17-CE24-0012-01) and ANR OCTAVES (ANR-21-CE47-0007-01). Work in Madrid is funded by the Spanish project PGC2018-094792-B-I00 (MCIU/AEI/FEDER, UE), CSIC Interdisciplinary Thematic Platform (PTI+) on Quantum Technologies (PTI-QTEP+), and Proyecto Sinergico CAM 2020 Y2020/TCS-6545 (NanoQuCo-CM). T.R. further acknowledges support from the Juan de la Cierva fellowship IJC2019-040260-I.
\section{Introduction}
\subsection{Main results}
The moon is a complex dynamical system.\ Indeed, it is attracted not only by the earth but also, with a comparable force, by the sun.\ George William Hill (1838--1914) made a good first approximation to this complexity in 1877 and 1878 (see \cite{hill_det} and \cite{hill}).\ In particular, in his equation the true trajectory of the moon is close to a periodic orbit centered at the earth, which is known as the ``variational orbit''.\ In other words, the true orbit is almost periodic.\ The variational orbit was found by Hill in \cite[p.\ 259]{hill} numerically.\ It is a planar symmetric direct periodic orbit of Hill's system.\ Based on these works by Hill, Hénon \cite{henon}, \cite{henon_0} numerically explored and studied the stability of the families of planar direct and retrograde periodic orbits, which are referred to as family $g$ and family $f$, respectively.\ These two families are the fundamental families of symmetric planar periodic orbits in the Hill lunar system.\
\textbf{On the geometry of planar periodic orbits in the spatial problem.}\ The Hamiltonian describing the motion of the moon
\begin{align*}
T^* \big( \mathbb{R}^3 \setminus \{ (0,0,0) \} \big) \to \mathbb{R},\quad (q,p) \mapsto \underbrace{\frac{1}{2}|p|^2 - \frac{1}{|q|} + p_1q_2 - p_2q_1 - q_1^2 + \frac{1}{2}q_2^2 + \frac{1}{2}q_3^2}_{\textstyle \frac{1}{2}\big( (p_1 + q_2)^2 + (p_2 - q_1)^2 + p_3^2 \big) - \frac{1}{|q|} - \frac{3}{2}q_1^2 + \frac{1}{2}q_3^2}
\end{align*}
is invariant under the symplectic involution
$$ \sigma \colon T^* \mathbb{R}^3 \to T^* \mathbb{R}^3,\quad (q_1,q_2,q_3,p_1,p_2,p_3) \mapsto (q_1,q_2,-q_3,p_1,p_2,-p_3) $$
which arises from the reflection at the ecliptic $\{q_3=0\}$.\ Moreover, the planar problem can be viewed as the restriction of this system to the fixed-point set $ \text{Fix}(\sigma) = \{ (q_1,q_2,0,p_1,p_2,0) \}$.\ For a planar periodic orbit $q$ with initial point $q_0 = \big(q(0),p(0)\big) \in \text{Fix}(\sigma)$ and first return time $T_q$, we consider the time-$T_q$ map of the linearized Hamiltonian flow, which is a linear symplectomorphism called the monodromy.\ Its restriction to the 5-dimensional energy hypersurface $\Sigma$ induces on the quotient by the line bundle ker$\omega|_{\Sigma} = \langle X_H | _{\Sigma} \rangle \subset T\Sigma$ the reduced monodromy, which is the linear symplectic map
$$ \overline{d \varphi _H^{T_q} | _{\Sigma} (q_0) } \colon T_{q_0} \Sigma / \text{ker} \omega_{q_0} \to T_{q_0} \Sigma / \text{ker} \omega_{q_0}.$$
In view of the decomposition $ T_{q_0} \Sigma = T_{q_0}\text{Fix}(\sigma | _{\Sigma}) \oplus E_{-1} \big( d \sigma ( q_0 ) \big)$ into the 3-dimensional planar and the 2-dimensional spatial component, the induced 4-dimensional symplectic vector space splits into two 2-dimensional symplectic vector spaces
$$ T_{q_0} \Sigma / \text{ker} \omega_{q_0} = \big( T_{q_0} \text{Fix}(\sigma|_{\Sigma}) / \text{ker} \omega_{q_0} \big) \oplus \Big( E_{-1} \big( d \sigma ( q_0 ) \big) \Big). $$
Hence the reduced monodromy splits into
$$\overline{d \varphi _H^{T_q} | _{\Sigma} (q_0) } = \begin{pmatrix}
\overline{A}_p & 0 \\
0 & A_s
\end{pmatrix},$$
where $\overline{A}_p$ is the reduced monodromy of $q$ viewed in the planar problem and $A_s$ is a $2 \times 2$ symplectic matrix which arises by linearization only along the spatial components.\ With respect to this symplectic splitting, we can study the linearization in the planar and spatial directions independently of each other.\ In particular, we obtain the following two important properties.\
\begin{itemize}[noitemsep]
\item[i)] The Floquet multipliers, which are the eigenvalues of the reduced monodromy, are determined by the Floquet multipliers of $\overline{A}_p$ and $A_s$, which are real or lie on the unit circle.\ Consequently, it is not possible that the Floquet multipliers are given by four different complex numbers of the form $\lambda, 1/\lambda, \overline{\lambda}$ and $1/\overline{\lambda}$.\
\item[ii)] The transversal Conley--Zehnder index of $q$ splits additively
$$ \mu_{CZ} = \mu_{CZ}^p + \mu_{CZ}^s, $$
where $\mu_{CZ}^p$ and $\mu_{CZ}^s$ are the Conley--Zehnder indices of the path of symplectic matrices generated by the planar and spatial part of the linearized Hamiltonian flow, respectively.\
\end{itemize}
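The splitting above can be made concrete numerically.\ The following sketch (with arbitrary illustrative rotation angles, i.e.\ elliptic blocks) checks that the block-diagonal reduced monodromy is symplectic and that its Floquet multipliers are the union of those of $\overline{A}_p$ and $A_s$, as in property i).\

```python
import numpy as np

def rotation(theta):
    """A 2x2 symplectic matrix conjugate to an elliptic monodromy block."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Illustrative elliptic planar and spatial blocks (angles are arbitrary here)
A_p = rotation(0.45)          # reduced planar monodromy \bar{A}_p
A_s = rotation(1.30)          # spatial block A_s

# Reduced monodromy on the 4-dim quotient, in coordinates (q_p, p_p, q_s, p_s)
M = np.block([[A_p, np.zeros((2, 2))],
              [np.zeros((2, 2)), A_s]])

# Standard symplectic form on R^2 (+) R^2 in these block coordinates
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
J = np.block([[J2, np.zeros((2, 2))], [np.zeros((2, 2)), J2]])

assert np.allclose(M.T @ J @ M, J)   # M is symplectic

# Floquet multipliers of M = union of the multipliers of the two blocks
mults = np.sort_complex(np.linalg.eigvals(M))
blocks = np.sort_complex(np.concatenate([np.linalg.eigvals(A_p),
                                         np.linalg.eigvals(A_s)]))
assert np.allclose(mults, blocks)
print(np.round(mults, 3))
```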
Furthermore, if the planar orbit $q$ is planar and spatial elliptic, i.e.\ the Floquet multipliers are on the unit circle, then $\overline{A}_p$ and $A_s$ are each conjugate to a rotation in $\mathbb{R}^2$.\ In particular, $\mu_{CZ}^p$ and $\mu_{CZ}^s$ measure the number of complete rotations of neighbouring orbits during $T_q$, respectively.\ In this paper, for elliptic cases, we will define the \textbf{synodic month} of $q$ as $T_q$ expressed in days, and the \textbf{anomalistic} and \textbf{draconitic period} as the mean time (in days) for a complete rotation of planar and spatial neighbouring orbits during $T_q$, respectively.\ These periods of $q$ are explicitly determined in terms of their Floquet multipliers (rotation angles) and Conley--Zehnder indices.\
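For an elliptic direction in which the neighbouring orbits rotate by a total angle $2\pi k + \theta$ during $T_q$, with $\theta \in (0,2\pi)$ (so that the corresponding index is $2k+1$), the mean time per complete rotation is $T_q/(k + \theta/2\pi)$.\ The sketch below illustrates this with the moon's synodic month as $T_q$; the fractional rotation angles are back-inferred from the quoted anomalistic and draconitic months, so it is a consistency check with assumed inputs rather than an independent computation.\

```python
# Mean period per complete rotation of a neighbouring orbit: if the total
# rotation during T_q is 2*pi*k + theta with theta in (0, 2*pi) (so that
# the corresponding Conley-Zehnder index is 2k + 1), the mean period is
# T_q / (k + theta/(2*pi)).
def mean_period(T_q, k, theta_frac):
    """theta_frac = theta / (2*pi), the fractional extra rotation."""
    return T_q / (k + theta_frac)

T_s = 29.528396   # synodic month of the variational orbit (days)

# For family g: mu_CZ^p = mu_CZ^s = 3, hence k = 1 in both directions.
# The fractional angles below are back-inferred from the quoted months
# (illustrative consistency check only).
theta_p = T_s / 27.553954 - 1    # planar rotation fraction
theta_d = T_s / 27.212712 - 1    # spatial rotation fraction

T_a = mean_period(T_s, 1, theta_p)
T_d = mean_period(T_s, 1, theta_d)
print(f"anomalistic = {T_a:.6f} d, draconitic = {T_d:.6f} d")

# For the retrograde family f, mu_CZ^p = mu_CZ^s = 1 gives k = 0, so both
# mean periods exceed the synodic month, as described below for family f:
assert mean_period(T_s, 0, 0.9) > T_s
```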
\textbf{For very low energies.}\ To determine the Conley--Zehnder indices for the families $g$ and $f$, we go down to very low energies.\ In particular, after regularization, for very small energies the Hill lunar problem approaches the regularized Kepler problem, whose flow is just the geodesic flow.\ For all sufficiently small energies up to some $\varepsilon_0>0$, we are able to prove analytically the following two theorems.\
\begin{theorem}[Planar problem] \label{theorem_a}
From the circular direct and retrograde periodic orbit in the Kepler problem one family of periodic orbits bifurcates in each case, which are referred to as direct periodic orbits (family $g$) and retrograde periodic orbits (family $f$), respectively.\ These two orbits exist for all sufficiently low energies $\varepsilon \in (0,\varepsilon_0]$, and
\begin{align} \label{conley_zehnder_index_planar}
\mu_{CZ}^p = \begin{cases}
3 & \text{ for family $g$}\\
1 & \text{ for family $f$.}
\end{cases}
\end{align}
\end{theorem}
\begin{theorem}[Spatial problem] \label{theorem_b}
The planar families $g$ and $f$, and two families of spatial collision periodic orbits bifurcate from the Kepler problem.\ These four orbits exist for all sufficiently low energies $\varepsilon \in (0,\varepsilon_0]$, and
\begin{align} \label{conley_zehnder_index_spatial}
\mu_{CZ} = \begin{cases}
6 & \text{ for family $g$ (planar)}\\
4 & \text{ for the one family of collision orbits bouncing back (spatial)}\\
4 & \text{ for the other family of collision orbits bouncing back (spatial)}\\
2 & \text{ for family $f$ (planar).}
\end{cases}
\end{align}
Moreover, in view of $\mu_{CZ} = \mu_{CZ}^p + \mu_{CZ}^s$, by (\ref{conley_zehnder_index_planar}) and (\ref{conley_zehnder_index_spatial}),
\begin{center}
\begin{tabular}{c|c|c}
& family $g$ (planar) & family $f$ (planar)\\
\hline $\mu_{CZ}$ / $\mu_{CZ}^p$ / $\mu_{CZ}^s$ & 6 / 3 / 3 & 2 / 1 / 1
\end{tabular}
\end{center}
\end{theorem}
\noindent
Geometrically, during $T_q$ the planar and spatial neighbouring orbits of the planar direct periodic orbit make a complete rotation and additionally rotate by an angle, hence the anomalistic and draconitic periods are shorter than the synodic one.\ The planar and spatial neighbouring orbits of the planar retrograde periodic orbits only rotate by their respective angles during $T_q$, thus in this case the anomalistic and draconitic periods are longer than the synodic one.\ Note that the families $g$ and $f$ are in $ \text{Fix}(\sigma) = \{ (q_1,q_2,0,p_1,p_2,0) \} $ and that the two families of spatial collision periodic orbits are in $ \text{Fix}(-\sigma) = \{ (0,0,q_3,0,0,p_3) \}$.\ Note that $-\sigma$ corresponds to a rotation around the $q_3$-axis by $\pi$.\
\begin{figure}[H]
\centering
\definecolor{grgr}{RGB}{33,189,63}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm]
\clip(-5,-2.6) rectangle (5,3.2);
\draw [->,line width=1pt] (-2,-2)-- (2,2);
\draw [->,line width=1pt] (-4,0)-- (4,0);
\draw [->,line width=1pt] (0,-2.8)-- (0,3);
\draw [decoration={markings, mark=at position 0.5 with {\arrow{<}}},postaction={decorate},,color=magenta,line width=1pt] (-0.15,-3) -- (-0.15,-0.2);
\draw [decoration={markings, mark=at position 0.75 with {\arrow{<}}},postaction={decorate},,color=magenta,line width=1pt] (0.15,-0.2) -- (0.15,-3);
\draw [decoration={markings, mark=at position 0.8 with {\arrow{>}}},postaction={decorate},line width=1pt,rotate=45,color=red] (0,0) ellipse (1.6cm and 1cm);
\draw [decoration={markings, mark=at position 0.3 with {\arrow{<}}},postaction={decorate},line width=1pt,rotate=45,color=blue] (0,0) ellipse (1.2cm and 0.75cm);
\draw [decoration={markings, mark=at position 0.3 with {\arrow{>}}},postaction={decorate},,color=grgr,line width=1pt] (-0.15,2.6) -- (-0.15,0.2);
\draw [decoration={markings, mark=at position 0.7 with {\arrow{>}}},postaction={decorate},,color=grgr,line width=1pt] (0.15,0.2) -- (0.15,2.6);
\draw (4,0.4) node[anchor=north west] {$q_1$};
\draw (2,2.3) node[anchor=north west] {$q_2$};
\draw (0.1,3.2) node[anchor=north west] {$q_3$};
\draw[color=red] (1.5,1) node[anchor=north west] {$\text{direct}$};
\draw[color=blue] (-3.1,1) node[anchor=north west] {$\text{retrograde}$};
\begin{scriptsize}
\draw [fill=black] (0,0) circle (2pt);
\end{scriptsize}
\end{tikzpicture}
\caption{Periodic orbits for very low energies}
\label{figure_spatial_collision_orbits}
\end{figure}
\textbf{Symmetries.}\ All the periodic orbits we consider are symmetric with respect to an anti-symplectic involution.\ In particular, the spatial problem is endowed with linear symmetries, which are classified in this paper.\ By a symmetry we mean an anti-symplectic or symplectic involution which leaves the Hamiltonian invariant.\ The only linear symplectic symmetries are $ \pm\sigma$ and $\pm \text{id}$, and the anti-symplectic ones are:\
\begin{table}[H] \centering
\begin{tabular}{c|c}
notation & underlying geometry\\
\hline $\rho_1(q,p)=(q_1,-q_2,q_3,-p_1,p_2,-p_3)$ & reflection at the $q_1q_3$-plane \\
$\rho_2(q,p)=(-q_1,q_2,q_3,p_1,-p_2,-p_3)$ & reflection at the $q_2q_3$-plane \\
$\overline{\rho_1}(q,p)=(q_1,-q_2,-q_3,-p_1,p_2,p_3)$ & rotation around the $q_1$-axis by $\pi$ \\
$\overline{\rho_2}(q,p)=(-q_1,q_2,-q_3,p_1,-p_2,p_3)$ & rotation around the $q_2$-axis by $\pi$
\end{tabular}
\caption{The linear anti-symplectic symmetries}
\label{linear_anti_s_sym}
\end{table}
\noindent
In view of
$$ \rho_i \circ \sigma = \sigma \circ \rho_i = \overline{\rho_i},\quad \text{for }i \in \{1,2\},$$
and
$$ \rho_i \circ \rho_j = \overline{\rho_i} \circ \overline{\rho_j} = - \sigma,\quad \rho_i \circ \overline{\rho_j} = \overline{\rho_j} \circ \rho_i = -\text{id},\quad \text{for }i,j \in \{1,2\},\quad i\neq j, $$
these eight symmetries form the group $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2$.\ If we restrict the system to the planar case, then two linear anti-symplectic symmetries are given by the reflection at the $q_1$- and $q_2$-axis, and two symplectic ones by $\pm \text{id}$.\ These four form a Klein four-group $\mathbb{Z}_2 \times \mathbb{Z}_2$.\
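Since all eight symmetries are linear, the group structure can be verified mechanically.\ The sketch below represents them as diagonal $6\times 6$ matrices acting on $(q_1,q_2,q_3,p_1,p_2,p_3)$, read off from Table \ref{linear_anti_s_sym}, and checks the (anti-)symplectic property, the quoted relations, and the $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2$ structure.\

```python
import numpy as np

# The linear symmetries as diagonal 6x6 matrices on (q1,q2,q3,p1,p2,p3)
sigma = np.diag([1, 1, -1, 1, 1, -1])
rho1  = np.diag([1, -1, 1, -1, 1, -1])
rho2  = np.diag([-1, 1, 1, 1, -1, -1])
rho1b = sigma @ rho1           # \bar{rho}_1: rotation about the q1-axis by pi
rho2b = sigma @ rho2           # \bar{rho}_2: rotation about the q2-axis by pi
I = np.eye(6)

J = np.block([[np.zeros((3, 3)), np.eye(3)],
              [-np.eye(3), np.zeros((3, 3))]])   # standard symplectic form

def is_antisymplectic(M):
    return np.allclose(M.T @ J @ M, -J)

# sigma is symplectic; the four rho's are anti-symplectic
assert np.allclose(sigma.T @ J @ sigma, J)
assert all(is_antisymplectic(M) for M in (rho1, rho2, rho1b, rho2b))

# Group relations quoted in the text
assert np.allclose(rho1 @ sigma, rho1b) and np.allclose(sigma @ rho1, rho1b)
assert np.allclose(rho1 @ rho2, -sigma) and np.allclose(rho1b @ rho2b, -sigma)
assert np.allclose(rho1 @ rho2b, -I) and np.allclose(rho2b @ rho1, -I)

# All eight elements square to the identity and commute: (Z_2)^3
group = [I, -I, sigma, -sigma, rho1, rho2, rho1b, rho2b]
assert all(np.allclose(M @ M, I) for M in group)
assert all(np.allclose(A @ B, B @ A) for A in group for B in group)
print("symmetry group (Z_2)^3 verified")
```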
\textbf{Numerical results for higher energies.}\ We follow and study numerically the families $g$ and $f$, as well as other families which bifurcate from them and have been found numerically, in the following way.\ If one of the indices jumps, then a new family of planar or spatial periodic orbits bifurcates, respectively.\ These bifurcations occur when the eigenvalue 1 is crossed.\ In particular, if the rotation angle is a $\tilde{k}$-th root of unity, then the $\tilde{k}$-th cover moves through the eigenvalue 1.\ To determine the index of such new families, the Floquet multipliers alone do not provide enough information.\ We will show that the monodromy and reduced monodromy of symmetric periodic orbits satisfy special symmetries, which allows for a general construction to specify the Floquet multipliers, in particular the rotation angles, and thereby the index jump.\ In addition, by using the invariance of the local Floer homology before and after bifurcation, and the stability, we will determine the Conley--Zehnder indices of the families in Table \ref{families_in_this_paper}.\ Note that the Euler characteristic of the local Floer homology groups stays invariant as well.\
\begin{table}[H] \centering
\begin{tabular}{c|c|c}
from the articles by & family & remark\\
\hline Hénon \cite{henon} (1969) & $g$, $f$ & planar\\
& $g'$ & planar (from $\mu_{CZ}^p$ jump from $g$) \\
Hénon \cite{henon_1} (1970), \cite{henon_0} (2003) & $g_3$ & planar (from the 3rd cover of $f$)\\
Batkhin--Batkhina \cite{batkhin} (2009) & $g_{2v}$ & spatial (from $\mu_{CZ}^s$ jump from $g$)\\
& $g_{1v}^{YOZ}$ & spatial (from the 2nd cover of $g$) \\
Michalodimitrakis \cite{michalodimitrakis} (1980) & $g_{1v}$ & spatial (from the 2nd cover of $g$ \& $g'$)\\
Kalantonis \cite{kalantonis} (2020) & $f_g^{(2,3)}$, $f_g^{(2cut,3)}$ & spatial (from the 3rd cover of $g$) \\
& $f_{g'}^{(2,3)}$, $f_{g'}^{(2cut,3)}$ & spatial (from the 3rd cover of $g'$) \\
& $f_g^{(1,4)}$, $f_g^{(1cut,4)}$ & spatial (from the 4th cover of $g$) \\
& $f_{g'}^{(1,4)}$, $f_{g'}^{(1cut,4)}$ & spatial (from the 4th cover of $g'$)
\end{tabular}
\caption{The families of planar and spatial periodic orbits in this paper}
\label{families_in_this_paper}
\end{table}
\noindent
These data and invariants provide an organized structure for practical work in the context of space mission design, which in particular shows how these families are connected to each other.\ In order to give such overviews we illustrate the respective scenarios in the form of bifurcation graphs (see for instance Figure \ref{overview_conclusion}).\ Furthermore, note that in this paper we use the traditional Jacobi integral $\Gamma=-2c$, where $c$ is the energy value given by the Hamiltonian.\ Starting with the indices for very low energies, our results are:\
\begin{table}[H] \centering
\begin{tabular}{c|c|c|c|c|c}
energy values $\Gamma$ & planar & spatial & $\mu_{CZ}^p$ & $\mu_{CZ}^s$ & $\mu_{CZ}$\\
\hline $(+\infty,4.49999)$ & elliptic & elliptic & 3 & 3 & 6\\
$(4.49999,1.3829)$ & pos. hyperbolic & elliptic & 2 & 3 & 5\\
$(1.3829,- \infty)$ & pos. hyperbolic & pos. hyperbolic & 2 & 4 & 6
\end{tabular}
\caption{The family $g$}
\label{overview_g}
\end{table}
\noindent
Therefore, for the moon's variational orbit found by Hill, which is at $\Gamma = 6.5088$, the indices do not change and the orbit is planar as well as spatial elliptic.\ Moreover, our geometric approach and numerical computation give for the periods
$$ T_s = 29.528396,\quad T_a = 27.553954,\quad T_d = 27.212712, $$
which is a remarkably good approximation to the experimentally measured data.\ Note that the orbits of the family $g$ are doubly-symmetric with respect to the reflection at the $q_1$- and $q_2$-axis.\
The orbits from the family $f$ are doubly-symmetric as well, and they are planar and spatial elliptic for all times, thus their indices do not change.\ Furthermore, in the limit case, in which the Jacobi integral goes to $- \infty$, we analytically show that these orbits converge to a degenerate planar retrograde periodic orbit, where all three months coincide with the period of the earth around the sun, namely 365.25 days.\
In Table \ref{overview_g}, at the planar transition from elliptic to positive hyperbolic, $\mu_{CZ}^p$ jumps from 3 to 2 and there bifurcates the family $g'$, whose data are collected in Table \ref{overview_g'}.\ The orbits of the family $g'$ are simply-symmetric with respect to the reflection at the $q_1$-axis, and by using the reflection at the $q_2$-axis one obtains its symmetric family, hence the family $g'$ appears twice.\
\begin{table}[H] \centering
\begin{tabular}{c|c|c|c|c|c}
energy values $\Gamma$ & planar & spatial & $\mu_{CZ}^p$ & $\mu_{CZ}^s$ & $\mu_{CZ}$\\
\hline $(4.49999,4.2851)$ & elliptic & elliptic & 3 & 3 & 6\\
$(4.2851,4.2806)$ & elliptic & neg. hyperbolic & 3 & 3 & 6\\
$(4.2806,4.2714)$ & elliptic & elliptic & 3 & 3 & 6\\
$(4.2714,3.3901)$ & neg. hyperbolic & elliptic & 3 & 3 & 6\\
$(3.3901,0.4771)$ & neg. hyperbolic & pos. hyperbolic & 3 & 4 & 7\\
$(0.4771,-0.2195)$ & neg. hyperbolic & elliptic & 3 & 5 & 8\\
$(-0.2195,-4.6921)$ & neg. hyperbolic & neg. hyperbolic & 3 & 5 & 8\\
$(-4.6921,-4.7047)$ & elliptic & neg. hyperbolic & 3 & 5 & 8\\
$(-4.7047,- \infty)$ & pos. hyperbolic & neg. hyperbolic & 4 & 5 & 9
\end{tabular}
\caption{The family $g'$}
\label{overview_g'}
\end{table}
\noindent
It is easy to check that the Euler characteristics before and after the bifurcation of $g'$ coincide, namely they are in the planar problem
$$ (-1)^3 = -1,\quad \text{resp.}\quad (-1)^2 + 2\cdot(-1)^3 = -1, $$
and in the spatial problem (obtained by a shift of $\mu_{CZ}^s = 3$)
$$ (-1)^6 = 1,\quad \text{resp.}\quad (-1)^5 + 2\cdot(-1)^6 = 1. $$
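This bookkeeping can be stated as a one-line check: the Euler characteristic of the local Floer homology is the signed count $\sum (-1)^{\mu_{CZ}}$ over the orbits present before, respectively after, the bifurcation.\ As a sketch:\

```python
def euler_char(indices):
    """Euler characteristic of local Floer homology: sum of (-1)^mu_CZ."""
    return sum((-1) ** mu for mu in indices)

# Planar problem at the bifurcation of g': before, one orbit of index 3;
# after, the family g with index 2 plus g' appearing twice with index 3.
assert euler_char([3]) == euler_char([2, 3, 3]) == -1

# Spatial problem: the same check shifted by mu_CZ^s = 3.
assert euler_char([6]) == euler_char([5, 6, 6]) == 1
print("Euler characteristics match before and after the bifurcation")
```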
At the spatial transition of the family $g$ from elliptic to positive hyperbolic, where $\mu_{CZ}^s$ jumps from 3 to 4, the family $g_{2v}$ bifurcates.\ Its orbits are doubly-symmetric with respect to $\rho_1$ and $\rho_2$, and by using $\sigma$ one obtains its symmetric family which is doubly-symmetric with respect to $\overline{\rho_1}$ and $\overline{\rho_2}$.\ They have $\mu_{CZ} = 5$.\
The other families of spatial orbits from Table \ref{families_in_this_paper} bifurcate from the respective iteration of the underlying planar periodic orbits.\ Moreover, we want to emphasize the following special bifurcation result, which is illustrated in Figure \ref{overview_conclusion}.\ We interpret this figure as a graph and call such a graph a ``\textbf{bifurcation graph}".\ It is constructed as follows:\
We draw from bottom to top in the direction of increasing energy.\ Each vertex corresponds to a degenerate periodic orbit and each edge to a family of periodic orbits with its (constant) Conley--Zehnder index.\ We distinguish two kinds of edges.\ The first kind corresponds to the underlying family of planar periodic orbits where the index jumps and the bifurcations happen.\ We draw these edges in black, vertically, shortly before and after the bifurcations.\ The second kind corresponds to a new family branching out from the index jump.\ We draw such edges coloured, and every new family gets its own colour.\ If there is a symmetric family, then these edges are drawn in the same colour, but dashed.\ The cross stands for collision and the term ``b-d" for a periodic orbit of birth-death type.\ In general, a periodic orbit of birth-death type is a degenerate orbit from which two families bifurcate with an index difference of 1 and in the same energy direction.\ Its local Floer homology and its Euler characteristic are therefore zero.\
\begin{figure}[H]
\centering
\definecolor{grgr}{RGB}{33,189,63}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-9,-3) rectangle (7,11.5);
\draw [->, line width=1pt] (-3-4.5,-2.5) -- (-3-4.5,11);
\draw (-3-4.5,11) node[anchor=south] {$\Gamma$};
\draw (-4-4.5,11) node[anchor=south] {$-\infty$};
\draw (-4-4.5,-2.5) node[anchor=north] {$+\infty$};
\draw[fill] (-3-4.5,-2) circle (1.5pt);
\draw (-3-4.5,-2) node[anchor=east] {4.347};
\draw[fill] (-3-4.5,0.5) circle (1.5pt);
\draw (-3-4.5,0.5) node[anchor=east] {3.876};
\draw[fill] (-3-4.5,4) circle (1.5pt);
\draw (-3-4.5,4) node[anchor=east] {3.274};
\draw[fill] (-3-4.5,10) circle (1.5pt);
\draw (-3-4.5,10) node[anchor=east] {0.755};
\draw[fill] (-3-4.5,4-0.5) circle (1.5pt);
\draw (-3-4.5,4-0.5) node[anchor=east] {3.280};
\draw[fill] (-3-4.5,4-1.1) circle (1.5pt);
\draw (-3-4.5,4-1.1) node[anchor=east] {3.362};
\draw[fill] (-3-4.5,4+1.1) circle (1.5pt);
\draw (-3-4.5,4+1.1) node[anchor=east] {3.136};
\draw[fill] (-3-4.5,4+1.6) circle (1.5pt);
\draw (-3-4.5,4+1.6) node[anchor=east] {3.101};
\draw [line width=1pt] (-2,-2.5) -- (-2,-1.5);
\draw (-2,-2.5) node[anchor=north] {$14$};
\draw (-2,-1.5) node[anchor=south] {$16$};
\draw [line width=1pt] (0,0) -- (0,1);
\draw (0,0) node[anchor=north] {$13$};
\draw (0,1) node[anchor=south] {$15$};
\draw [line width=1pt] (0,9.5) -- (0,10.5);
\draw (0,9.5) node[anchor=north] {$16$};
\draw (0,10.5) node[anchor=south] {$14$};
\draw [line width=1pt,color=red] (-2,-2) .. controls (0,0.6) and (1,-2.2) .. (2.5,4);
\draw [dashed,line width=1pt,color=red] (-2,-2) .. controls (-3.1,1) and (-3,3) .. (-2.5,4);
\draw [color=red] (1.8,-0.65) node[anchor=west] {$15$};
\draw [color=red] (-1.8,-0.65) node[anchor=east] {$15$};
\draw [color=red] (1.05,-0.95) node[anchor=west] {$15$};
\draw [color=red] (-1.05,-0.95) node[anchor=east] {$15$};
\draw [color=magenta] (2.4,-1.1) node[anchor=west] {$14$};
\draw [color=magenta] (-2.4,-1.1) node[anchor=east] {$14$};
\draw [color=magenta] (0.8,-1.45) node[anchor=west] {$14$};
\draw [color=magenta] (-0.8,-1.45) node[anchor=east] {$14$};
\draw [color=grgr] (0.7,2) node[anchor=west] {$13$};
\draw [color=grgr] (-0.7,2) node[anchor=east] {$13$};
\draw [color=grgr] (3.45,4.5) node[anchor=west] {$14$};
\draw [color=grgr] (-3.45,4.5) node[anchor=east] {$14$};
\draw [color=grgr] (4.7,4.8) node[anchor=west] {$15$};
\draw [color=grgr] (-4.7,4.8) node[anchor=east] {$15$};
\draw [color=grgr] (5.6,3.3) node[anchor=west] {$14$};
\draw [color=grgr] (-5.6,3.3) node[anchor=east] {$14$};
\draw [line width=1pt,color=blue] (0,0.5) .. controls (1,2.5) and (1.5,3.5) .. (2.5,4);
\draw [dashed,line width=1pt,color=blue] (0,0.5) .. controls (-1,2.5) and (-1.5,3.5) .. (-2.5,4);
\draw [color=blue] (0.5,2) node[anchor=south] {$14$};
\draw [color=blue] (-0.5,2) node[anchor=south] {$14$};
\draw [line width=1pt,color=blue] (2.5,4) .. controls (1.5,5) and (1.2,7.5) .. (0,10);
\draw [dashed,line width=1pt,color=blue] (-2.5,4) .. controls (-1.5,5) and (-1.2,7.5) .. (0,10);
\draw [color=blue] (1.3,4.5) node[anchor=west] {$15$};
\draw [color=blue] (-1.3,4.5) node[anchor=east] {$15$};
\draw [line width=1pt] (2,-2.5) -- (2,-1.5);
\draw (2,-2.5) node[anchor=north] {$14$};
\draw (2,-1.5) node[anchor=south] {$16$};
\draw [dashed,line width=1pt,color=red] (2,-2) .. controls (0,0.6) and (-1,-2.2) .. (-2.5,4);
\draw [line width=1pt,color=red] (2,-2) .. controls (3.1,1) and (3,3) .. (2.5,4);
\draw [line width=1pt,color=grgr] (0,0.5) .. controls (1.5,2.25) and (2,2.5) .. (3.5,3.5);
\draw [dashed,line width=1pt,color=grgr] (0,0.5) .. controls (-1.5,2.25) and (-2,2.5) .. (-3.5,3.5);
\draw [line width=1pt,color=grgr] (3.5,3.5) .. controls (3.9,3.9) and (4.3,4.6) .. (4.5,5.1);
\draw [dashed,line width=1pt,color=grgr] (-3.5,3.5) .. controls (-3.9,3.9) and (-4.3,4.6) .. (-4.5,5.1);
\draw [line width=1pt,color=grgr] (4.5,5.1) .. controls (4.9,4.6) and (5.3,3.9) .. (5.5,2.9);
\draw [dashed,line width=1pt,color=grgr] (-4.5,5.1) .. controls (-4.9,4.6) and (-5.3,3.9) .. (-5.5,2.9);
\draw [line width=1pt,color=grgr] (5.5,2.9) .. controls (5.9,3.9) and (6.1,4.6) .. (6.5,5.6);
\draw [dashed,line width=1pt,color=grgr] (-5.5,2.9) .. controls (-5.9,3.9) and (-6.1,4.6) .. (-6.5,5.6);
\draw [line width=1pt,color=magenta] (2,-2) .. controls (3.5,1) and (3.5,2) .. (3.5,3.5);
\draw [line width=1pt,color=magenta] (-2,-2) .. controls (3,-0.2) and (3,1.5) .. (3.5,3.5);
\draw [dashed,line width=1pt,color=magenta] (-2,-2) .. controls (-3.5,1) and (-3.5,2) .. (-3.5,3.5);
\draw [dashed,line width=1pt,color=magenta] (2,-2) .. controls (-3,-0.2) and (-3,1.5) .. (-3.5,3.5);
\draw[fill] (5.5,2.9) circle (2pt);
\draw (5.5,2.9) node[anchor=north] {b-d};
\draw[fill] (-5.5,2.9) circle (2pt);
\draw (-5.5,2.9) node[anchor=north] {b-d};
\draw[fill] (4.5,5.1) circle (2pt);
\draw (4.5,5.1) node[anchor=south] {b-d};
\draw[fill] (-4.5,5.1) circle (2pt);
\draw (-4.5,5.1) node[anchor=south] {b-d};
\draw[fill] (3.5,3.5) circle (2pt);
\draw[fill] (-3.5,3.5) circle (2pt);
\node[solid, cross out, draw=black, thick] at (6.5,5.6) {};
\node[solid, cross out, draw=black, thick] at (-6.5,5.6) {};
\draw[fill] (2,-2) circle (2pt);
\draw (2.1,-2) node[anchor=west] {$g'^3$};
\draw[fill] (-2.5,4) circle (2pt);
\draw[fill] (2.5,4) circle (2pt);
\draw[fill] (0,10) circle (2pt);
\draw (0.1,10) node[anchor=west] {$f^5$};
\draw[fill] (-2,-2) circle (2pt);
\draw (-2.2,-2) node[anchor=east] {$g'^3$};
\draw[fill] (0,0.5) circle (2pt);
\draw (0.1,0.5) node[anchor=west] {$g^3$};
\end{tikzpicture}
\caption{The bifurcation graph between the 3rd cover of $g$, the 3rd cover of $g'$ and the 5th cover of $f$ with the families \textcolor{blue}{$f_g^{(2,3)}$}, \textcolor{grgr}{$f_g^{(2cut,3)}$}, \textcolor{red}{$f_{g'}^{(2,3)}$} and \textcolor{magenta}{$f_{g'}^{(2cut,3)}$}}
\label{overview_conclusion}
\end{figure}
The solid (not dashed) branches correspond to the families \textcolor{blue}{$f_g^{(2,3)}$}, \textcolor{grgr}{$f_g^{(2cut,3)}$}, \textcolor{red}{$f_{g'}^{(2,3)}$} and \textcolor{magenta}{$f_{g'}^{(2cut,3)}$} in their respective colours.\ In view of the symmetry $\sigma$, each of these families gives rise to a second bifurcation branch (dashed).\
At $\Gamma = 3.876$ the index of the 3rd cover of the family $g$ jumps from 13 to 15.\ At this transition the two families \textcolor{blue}{$f_g^{(2,3)}$} and \textcolor{grgr}{$f_g^{(2cut,3)}$} bifurcate, with indices \textcolor{blue}{14} and \textcolor{grgr}{13}, respectively.\ The orbits of the first family are doubly-symmetric with respect to $\overline{\rho_1}$ and $\overline{\rho_2}$, hence they are invariant under $-\sigma$.\ They end at the value $\Gamma = 0.755$ at the 5th cover of $f$; in between there is an index jump from \textcolor{blue}{14} to \textcolor{blue}{15}.\ In the second family, the orbits are doubly-symmetric with respect to $\rho_1$ and $\rho_2$, thus they are invariant under $-\sigma$ as well.\ At the value $\Gamma = 3.280$ the index jumps from \textcolor{grgr}{13} to \textcolor{grgr}{14}, and the orbits eventually undergo collision.\ Until then, this family consists of branches that bifurcate from a periodic orbit of birth-death type.\ The dashed branches are obtained by applying $\sigma$.\ Note that the orbits of the symmetric family (dashed) of \textcolor{blue}{$f_g^{(2,3)}$} are doubly-symmetric with respect to $\rho_1$ and $\rho_2$, and the orbits of the symmetric family (dashed) of \textcolor{grgr}{$f_g^{(2cut,3)}$} are doubly-symmetric with respect to $\overline{\rho_1}$ and $\overline{\rho_2}$.\
Consider the family $g'$: at $\Gamma = 4.347$ the index of its 3rd cover jumps from 14 to 16.\ At this value of $\Gamma$ the two families \textcolor{red}{$f_{g'}^{(2,3)}$} and \textcolor{magenta}{$f_{g'}^{(2cut,3)}$} bifurcate, with indices \textcolor{red}{15} and \textcolor{magenta}{14}, respectively.\ According to Kalantonis \cite[p.\ 11]{kalantonis}, the two families \textcolor{red}{$f_{g'}^{(2,3)}$} and \textcolor{magenta}{$f_{g'}^{(2cut,3)}$} terminate at the 3rd cover of the respective planar orbit of $g'$ which is symmetric to the planar orbit of $g'$ from which they have bifurcated.\ These are the two solid branches, respectively.\
We have a deeper insight:\
\begin{itemize}[noitemsep]
\item The orbits of the family \textcolor{red}{$f_{g'}^{(2,3)}$} are simply-symmetric with respect to $\overline{\rho_1}$, and its two solid branches are symmetric to each other with respect to $\overline{\rho_2}$.\ Recall that the orbits of \textcolor{blue}{$f_g^{(2,3)}$} are doubly-symmetric with respect to $\overline{\rho_1}$ and $\overline{\rho_2}$.\ Therefore, very close to the value $\Gamma = 3.274$, by comparing the initial data and by using these symmetries and especially the indices, we conclude that the family \textcolor{red}{$f_{g'}^{(2,3)}$} ends at the first index jump of the family \textcolor{blue}{$f_g^{(2,3)}$}.\ This explains why they come together at the value $\Gamma = 3.274$.\
\item The same happens for the family \textcolor{magenta}{$f_{g'}^{(2cut,3)}$}: its orbits are simply-symmetric with respect to $\rho_1$, and its two solid branches are symmetric to each other with respect to $\rho_2$.\ Recall that the orbits of \textcolor{grgr}{$f_g^{(2cut,3)}$} are doubly-symmetric with respect to $\rho_1$ and $\rho_2$.\ Hence, very close to the value $\Gamma = 3.280$, by comparing the initial data and by using these symmetries and especially the indices, we obtain that \textcolor{magenta}{$f_{g'}^{(2cut,3)}$} ends at the first index jump of the family \textcolor{grgr}{$f_g^{(2cut,3)}$} (at the value $\Gamma = 3.280$).\
\item Each family yields a second bifurcation branch (dashed) via $\sigma$.\ Furthermore, note that the symmetry properties of the orbits of the spatial families \textcolor{red}{$f_{g'}^{(2,3)}$} compared to \textcolor{blue}{$f_g^{(2,3)}$}, and of \textcolor{magenta}{$f_{g'}^{(2cut,3)}$} compared to \textcolor{grgr}{$f_g^{(2cut,3)}$}, are similar to the symmetry properties of the orbits of the planar family $g'$ compared to $g$.\
\end{itemize}
\noindent
In particular, the bifurcation graph organizes the local bifurcations and thereby helps to check the Euler characteristic of the local Floer homology groups.\ For instance, at the value $\Gamma = 3.876$ at $g^3$ the Euler characteristics before and after the bifurcation are
$$ (-1)^{13} = -1,\quad\text{resp.}\quad 2\cdot(-1)^{13} + 2\cdot(-1)^{14} + (-1)^{15} = -1. $$
Note that at the value $\Gamma = 0.755$ at $f^5$ the Euler characteristics before and after the bifurcation are
$$ 2\cdot(-1)^{15} + (-1)^{16} = -1,\quad\text{resp.}\quad (-1)^{14} = 1. $$
Therefore, at this transition there are still undiscovered families branching out from $f^5$.\
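These Euler characteristic checks amount to simple bookkeeping, which can be sketched in a few lines of Python (a hypothetical helper, not one of the codes from the Appendix): each orbit of Conley--Zehnder index $\mu$ contributes $(-1)^{\mu}$, and the totals must agree across a bifurcation once all branching families are accounted for.

```python
# Hypothetical helper (not from the paper's appendix): each orbit of
# Conley-Zehnder index mu contributes (-1)^mu to the Euler characteristic
# of the local homology; the total must match across a bifurcation.

def euler_characteristic(indices):
    """Sum of the contributions (-1)^mu over a list of CZ indices."""
    return sum((-1) ** mu for mu in indices)

# At Gamma = 3.876 at g^3: index 13 before the bifurcation; afterwards two
# branches of index 13, two of index 14 and the central orbit of index 15.
# Both sides give -1.
assert euler_characteristic([13]) == euler_characteristic([13, 13, 14, 14, 15]) == -1

# At Gamma = 0.755 at f^5 the totals disagree (-1 vs. 1), which signals
# undiscovered families branching out from f^5.
print(euler_characteristic([15, 15, 16]), euler_characteristic([14]))  # -1 1
```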
\subsection{Organization of the paper}
Section \ref{sec:periodic_orbits} is written for readers who are not familiar with the language of Hamiltonian flows and with the reduced monodromy in terms of symplectic geometry.\ Furthermore, in the case of $\text{Sp}(1) = \text{SL}(2,\mathbb{R})$, we discuss the stability, the Floquet multipliers and the transversal Conley--Zehnder index of periodic orbits.\ This section is based on the books of Hofer--Zehnder \cite{hofer} and Frauenfelder--van Koert \cite{frauenfelder}, and on the articles by Hofer--Wysocki--Zehnder \cite[Appendix]{hofer_w_z_1}, \cite[Section 3]{hofer_w_z}.\
In order to discuss the symplectic decomposition in a more general and more conceptual way, we introduce in Section \ref{sec:hamiltonian} the concept of Hamiltonian manifold, which generalizes energy hypersurfaces, based on \cite{frauenfelder} and fruitful discussions with Urs Frauenfelder.\ The general statement on the symplectic splitting is formulated in the Symplectic Splitting Theorem \ref{theorem_splitting}.\
In Section \ref{sec:4} we introduce symmetries of Hamiltonian systems and show that the monodromy and the reduced monodromy of symmetric periodic orbits satisfy special symmetries.\ The content of this section provides a theoretical framework for specifying the Floquet multipliers, in particular the rotation angles, and thereby the index jumps.\ This section is based on \cite{frauenfelder_moreno} and helpful discussions with Urs Frauenfelder.\
Section \ref{sec:spatial_Hills_lunar} discusses the spatial Hill lunar problem.\ We first give a short astronomical overview of the lunar problem and a description of Hill's concept.\ Then we derive its Hamiltonian and equations of motion as a limit case of the circular restricted three body problem, following \cite{frauenfelder}, where the planar case is studied.\ In addition, we determine the group of linear symmetries and discuss planar as well as spatial symmetric periodic orbits.\ By applying the theory developed in Section \ref{sec:4}, we show in Subsections \ref{sec:6.5.1} and \ref{sec:6.5.2} how to calculate the reduced monodromy, and, for planar orbits, how the three periods are explicitly defined.\
The goal of Section \ref{sec:6} is to prove Theorems \ref{theorem_a} and \ref{theorem_b}.\ Frauenfelder and van Koert showed in \cite[Chapter 8]{frauenfelder} the bifurcation scenario for the planar problem.\ We use and extend their technique to the spatial problem and prove the existence of two additional (spatial collision) periodic orbits.\ Moreover, by an explicit calculation of Morse--Bott indices, we determine their Conley--Zehnder indices, and the anomalistic and draconitic periods for the families $g$ and $f$ for very small energies.\
In Section \ref{sec:local_rabinowitz} we briefly sketch the local equivariant Rabinowitz--Floer homology and its Euler characteristic associated to good and bad orbits.\
The data for our numerical results for all the families from Table \ref{families_in_this_paper} are presented in Section \ref{sec:8}.\ They consist of all relevant data such as the initial conditions, the Floquet multipliers, the indices and the periods.\ We also plot these periodic orbits in configuration space and give overviews in the form of bifurcation graphs such as the one in Figure \ref{overview_conclusion}.\
For our numerical approximation of the linearized flow and thereby for the computation of relevant data, we have written several Python codes which are collected in the Appendix.\
\textbf{Outlook.}\ In this paper we interpret symmetric periodic orbits as periodic orbits.\ Since the fixed point set of an anti-symplectic involution is a Lagrangian submanifold, they can also be interpreted as Lagrangian intersection points.\ Instead of the Conley--Zehnder index, we can therefore assign to them the Lagrangian Maslov index $\mu_L$ and hence consider the local Lagrangian Floer homology.\ The difference between $\mu_{CZ}$ and $\mu_L$ is given by the Hörmander index.\ By using the formula from Theorem 2.3 in \cite{frauenfelder_van}, these indices will be computed for all families in this paper as well.\ Furthermore, the local equivariant Rabinowitz--Floer homology in Section \ref{sec:8} will be worked out in much more detail.\ Moreover, our general construction can be applied to every Hamiltonian system with the relevant symmetries.\
\textbf{Acknowledgement.}\ The author would like to thank Urs Frauenfelder, who supervised the author's Master's thesis at the Universität Augsburg, on which this part of his PhD research is based, for valuable discussions and support.\ He is deeply grateful to his supervisor Felix Schlenk for carefully reading the text and for many improvements and helpful inputs.\ This work is supported by the SNF under grant No.\ 200021-181980/2.\ Moreover, he is grateful to Vassilis S.\ Kalantonis and Alexander Batkhin for providing the initial data for the orbits they have found.\ Furthermore, he is thankful to his family for their support during the difficult time following the sudden death of his father Ibrahim Aydin on 8 December 2019.\ This paper is in memory of him.\ May God be merciful to him.\
\section{Periodic orbits of Hamiltonian systems} \label{sec:periodic_orbits}
\subsection{Periodic orbits, monodromy and reduced monodromy}
\label{sec:2.1}
\noindent
Let $(M,\omega)$ be a $2n$-dimensional symplectic manifold, i.e.\ $\omega \in \Omega^2(M)$ is a 2-form on $M$, called the symplectic form, which is closed, i.e.\ $d \omega =0$, and non-degenerate.\
\begin{example}
The archetypical example is the cotangent bundle $T^*Q$ of an $n$-dimensional smooth manifold $Q$.\ In physics, $T^*Q$ corresponds to phase space and $Q$ to configuration space.\ In canonical coordinates $(q,p)=(q_1,...,q_n,p_1,...,p_n) \in T^*Q$, where $q$ is a point in configuration space and $p \in T^*_qQ$ its momentum in the fiber, the cotangent bundle is endowed with the canonical symplectic form $\omega = \sum\limits_{i=1}^{n}dq_i \wedge dp_i$.\
\end{example}
\begin{remark}
The first, analytical, condition implies that all symplectic manifolds locally look like the Euclidean space $\mathbb{R}^{2n}$ equipped with the standard symplectic form (Darboux's theorem, see for instance \cite[pp. 10--11]{hofer} for details).\\
For the second algebraic condition:\ Since $\omega \in \Omega^2(M)$, we get for each point $p \in M$ an alternating multilinear map
\begin{align} \label{omega_1}
\omega_p \colon T_p M \times T_p M \to \mathbb{R}.
\end{align}
Non-degeneracy means that for all $p \in M$ and for all $0 \neq v \in T_pM$ there exists $0 \neq w \in T_pM$ such that $\omega_p(v,w) \neq 0$.\ Equivalently, if $\omega_p(v,w)=0$ for all $v \in T_pM$, then $w=0$.\ Moreover, this condition implies that the top exterior power $ \omega^{\wedge n} = \omega \wedge ... \wedge \omega \neq 0 $ is a volume form, hence symplectic manifolds are necessarily even-dimensional and orientable.\ Furthermore, (\ref{omega_1}) defines a map
\begin{align} \label{omega_2}
T_p M \to T_p^* M,\quad v \mapsto \omega_p(v,\cdot).
\end{align}
Being non-degenerate is equivalent to (\ref{omega_2}) being an isomorphism.\
\end{remark}
Let $H \in C^{\infty}(M,\mathbb{R})$ be an autonomous Hamiltonian function and $X_H$ the Hamiltonian vector field, which is uniquely defined by
\begin{align} \label{ham_vector_field}
dH (\cdot) = \omega(X_H,\cdot).
\end{align}
\begin{definition}
A \textbf{periodic orbit} $x \in C^{\infty}(\mathbb{R},M)$ of $H$ is a solution to the first order ODE
\begin{align} \label{periodic_orbit_hamiltonian}
\dot{x}(t) = X_H\big(x(t)\big),\quad t \in \mathbb{R}
\end{align}
such that there exists a period $T > 0$ with $x(t+T) = x(t)$ for all $t \in \mathbb{R}$.\
\begin{figure}[H]\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-6,-1.7) rectangle (6,1.9);
\draw [rotate around={-163.3773418816643:(0.040509448327610796,0.04313429971099568)}] (0.040509448327610796,0.04313429971099568) [decoration={markings, mark=at position 0.685 with {\arrow{>}}}, postaction={decorate},line width=1pt] ellipse (2.428036135165941cm and 1.374491119408365cm);
\draw (2.44,1.0400000000000003) node[anchor=north west] {$x_0 = x(0)=x(T)$};
\draw (-3.54,1.2400000000000002) node[anchor=north west] {$X_H\big(x(t)\big)$};
\draw [->,line width=1pt] (-1.4500880042828483,0.8455453250780841) -- (-2.5,0.09999999999999998);
\begin{scriptsize}
\draw [fill=black] (2.3712772886663096,0.7231285200436788) circle (2pt);
\end{scriptsize}
\end{tikzpicture}
\caption{Periodic orbit}
\label{figure3}
\end{figure}
\end{definition}
\noindent
A special case of periodic orbits is given by the critical points of $H$.\ For example, the Lagrange points of the circular restricted three body problem (see Section \ref{subsec:discussion}) are such trivial periodic orbits.\ For a non-trivial periodic orbit $x$ we define the first return time by
$$T_x := \min \{ T > 0 \mid x(T) = x(0) \},$$
hence every period $T$ satisfies $T = n \cdot T_x$ for some $n \in \mathbb{N}$.\ Since $H$ is autonomous, there are no self-intersections, i.e.\ for $\tau \in (0,T_x)$ it holds that $x(0) \neq x(\tau).$\
The solutions of (\ref{periodic_orbit_hamiltonian}) generate the family of Hamiltonian flows $\varphi_H^t \colon M \to M$ of $X_H$ via
$$\varphi_H^0 = \text{id}_M,\quad \frac{d}{dt}\varphi_H^t(z) = X_H \big(\varphi_H^t(z)\big),$$
which are symplectomorphisms, meaning that the symplectic form $\omega$ is preserved under $\varphi_H^t$, i.e.\ $(\varphi_H^t)^* \omega = \omega$.\ Since the pullback commutes with the wedge product, the flow is volume preserving as well, meaning that $ (\varphi_H^t)^* \omega^{\wedge n} = \omega^{\wedge n}. $
\begin{remark}
Take $Q=\mathbb{R}^n$ as configuration space and as phase space its trivial cotangent bundle $T^*\mathbb{R}^n = \mathbb{R}^n \times \mathbb{R}^n$ with the canonical symplectic form.\ We have
$$dH = \sum\limits_{i=1}^{n} \bigg(\frac{\partial H}{\partial q_i} dq_i + \frac{\partial H}{\partial p_i} dp_i\bigg),\quad X_H = \sum\limits_{i=1}^n \bigg( \frac{\partial H}{\partial p_i} \frac{\partial }{\partial q_i} - \frac{\partial H}{\partial q_i} \frac{\partial }{\partial p_i} \bigg),$$
since the equation (\ref{ham_vector_field}) is satisfied by $X_H$,
\begin{align*}
\sum\limits_{i=1}^n dq_i \wedge dp_i (X_H,\cdot) &= \sum\limits_{i=1}^n \Big( dq_i(X_H)dp_i(\cdot) - dq_i(\cdot)dp_i(X_H) \Big)\\
&= \sum\limits_{i=1}^n \Bigg( \frac{\partial H}{\partial p_i}dp_i(\cdot) - dq_i(\cdot)\bigg(-\frac{\partial H}{\partial q_i}\bigg) \Bigg)\\
&= dH(\cdot).
\end{align*}
Therefore (\ref{periodic_orbit_hamiltonian}) is equivalent to the Hamiltonian equation of motion
$$ \frac{dq_i}{dt} = \frac{\partial H}{\partial p_i},\quad \frac{d p_i}{dt} = - \frac{\partial H}{\partial q_i}. $$
\end{remark}
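As a quick numerical illustration of these equations of motion (a minimal sketch, not one of the Python codes from the Appendix), one can integrate the harmonic oscillator $H(q,p) = \tfrac{1}{2}(q^2+p^2)$ and check that the energy is preserved along the flow:

```python
# Minimal sketch (not from the Appendix): integrate Hamilton's equations
# dq/dt = dH/dp, dp/dt = -dH/dq for H(q, p) = (q^2 + p^2)/2 with the
# semi-implicit Euler scheme, which is symplectic for separable systems,
# and check that the energy stays (approximately) constant.
import math

def hamiltonian(q, p):
    return 0.5 * (q * q + p * p)

def flow_step(q, p, dt):
    p_new = p - dt * q          # dp/dt = -dH/dq = -q
    q_new = q + dt * p_new      # dq/dt =  dH/dp =  p
    return q_new, p_new

q, p = 1.0, 0.0
e0 = hamiltonian(q, p)
dt = 2 * math.pi / 10000
for _ in range(10000):          # one full period T = 2*pi
    q, p = flow_step(q, p, dt)
print(abs(hamiltonian(q, p) - e0) < 1e-3)  # True
```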
Now we consider the linearized flow along a non-trivial periodic orbit $x$ with $x(0)=x_0 \in M$.\ Since $(\varphi_H^t)^*\omega=\omega,$ we obtain the linear symplectomorphism
\begin{align} \label{symplectic_autom_1}
d\varphi_H^T(x_0) \colon (T_{x_0} M, \omega_{x_0}) \to (T_{x_0}M,\omega_{x_0})
\end{align}
which is called the \textbf{monodromy}.\ We compute
\begin{align} \label{ker_invariant}
X_H(x_0) = \frac{d}{dt}\bigg|_{t=0}\varphi_H^t(x_0) = \frac{d}{dt}\bigg|_{t=0}\varphi_H^{t+T}(x_0) = \frac{d}{dt}\bigg|_{t=0} \varphi_H^T\big(\varphi_H^t(x_0)\big) = d \varphi_H^T(x_0)X_H(x_0).
\end{align}
Since the periodic orbit $x$ is not trivial, $X_H(x_0)$ does not vanish.\ Hence $X_H(x_0)$ is an eigenvector of the monodromy (\ref{symplectic_autom_1}) with eigenvalue 1.\
The time independence of the Hamiltonian $H$ implies that the energy is constant along its flow.\ Thus we can assign the energy value $c = H(x)$ to the periodic orbit.\ For a regular value $c$ of $H$ we know that $dH(x) \neq 0$ which by (\ref{ham_vector_field}) is equivalent to $X_H(x) \neq 0$, for all $x \in H^{-1}(c)$.\ Therefore the level set
\begin{align} \label{energy_hypersurface}
\Sigma := \Sigma_c := H^{-1}(c) \subset M
\end{align}
is a codimension one energy hypersurface that is invariant under the Hamiltonian flow $\varphi_H^t$.\ Its tangent space at a point $x \in \Sigma \subset M$ is given by
$$T_x\Sigma = \{ \xi \in T_xM: dH(x)\xi=0 \} = \text{ker}\big( dH(x) \big)$$
and we obtain the following decomposition of the $2n$-dimensional symplectic vector space
\begin{align} \label{orientable}
T_xM = T_x \Sigma \oplus \langle \nabla H(x) \rangle.
\end{align}
Recall that a symplectic manifold is orientable via the volume form $\omega^{\wedge n}$.\ Since the line bundle $ \langle \nabla H(x) \rangle $ is also orientable, by (\ref{orientable}) the energy hypersurface $\Sigma$ is orientable.\
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-5,-1.6) rectangle (5.5,3.1);
\draw [line width=1pt] (-4,-1) .. controls (-1,-0.5) and (1,-0.5) .. (4,-1);
\draw [line width=1pt] (-2.5,1.5) .. controls (0.5,2) and (2.5,2) .. (5.5,1.5);
\draw [line width=1pt] (-4,-1) .. controls (-3.8,-0.3) and (-3.6,0.5) .. (-2.5,1.5);
\draw [line width=1pt] (4,-1) .. controls (4.2,-0.3) and (4.4,0.5) .. (5.5,1.5);
\draw (4,-1.1) node[anchor=north west] {$\Sigma$};
\draw[fill] (0.5,0.6) circle (1.5pt);
\draw (0,0.6) node[anchor=north west] {$x$};
\draw [line width=1pt] (-1.7,-0.1) -- (2.4,-0.1);
\draw [line width=1pt] (-1.2,1.3) -- (2.9,1.3);
\draw [line width=1pt] (-1.7,-0.1) -- (-1.2,1.3);
\draw [line width=1pt] (2.4,-0.1) -- (2.9,1.3);
\draw (-2.65,0.3) node[anchor=north west] {$T_x \Sigma$};
\draw [->,line width=1pt] (0.5,0.6) -- (0.5,3);
\draw (0.6,3.2) node[anchor=north west] {$\nabla H(x)$};
\draw [->, line width=1pt] (0.5,0.6) -- (2.1,0.6);
\draw (2.05,0.95) node[anchor=north west] {$\xi$};
\end{tikzpicture}
\caption{The tangent space of the energy hypersurface $\Sigma$ and the gradient of $H$}
\label{figure4}
\end{figure}
\noindent
By Definition (\ref{ham_vector_field}) of $X_H$ and by the anti-symmetry of $\omega$, we have
$$dH(x)X_H(x) = \omega \big(X_H(x), X_H(x)\big) = 0,\quad x \in \Sigma.$$
Hence the Hamiltonian vector field $X_H$ is tangent to the level sets (\ref{energy_hypersurface}) of $H$.\ In other words, $X_H$ defines a non-vanishing vector field on $\Sigma$, i.e.\
$$X_H(x) \in T_x \Sigma \setminus \{0\}.$$
For $x \in \Sigma$ we consider the subspace
$$\text{ker}\omega_x = \{ v \in T_x\Sigma: \omega_x(v,w) = 0, \forall w \in T_x\Sigma \} \subset T_x\Sigma$$
and compute for $\xi \in T_x \Sigma$ that
$$\omega_x \big(X_H(x),\xi\big) = dH(x)\xi = 0,$$
meaning that
$$X_H(x) \in \text{ker}\omega_x.$$
By the non-degeneracy of $\omega$,
\begin{align} \label{line_bundle}
\text{ker} \omega_x = \langle X_{H} (x) \rangle \subset T_x\Sigma.
\end{align}
Thus $\text{ker}\omega \vert _{\Sigma} \subset T\Sigma$ is a one-dimensional distribution, i.e.\ ($\text{ker}\omega \vert _{\Sigma},\pi_{\text{ker}\omega \vert _{\Sigma}},\Sigma$) is a line subbundle of the tangent bundle ($T\Sigma,\pi,\Sigma$).\ This line bundle is spanned by $X_H \vert _{\Sigma}$, which is a non-vanishing section of $\text{ker}\omega \vert _{\Sigma}$, i.e.\ a smooth map $X_{H} \vert _{\Sigma}: \Sigma \to \text{ker}\omega \vert _{\Sigma}$ such that $\pi_{\text{ker}\omega \vert _{\Sigma}} \circ X_{H} \vert _{ \Sigma} = \text{id}_\Sigma$.\ The pair $(\Sigma,\omega)$ has the further good property that the restriction of the symplectic form $\omega$ to $\Sigma$ is closed, i.e.\ $d\omega \vert _{\Sigma} = 0.$\ These two properties motivate the general formulation in terms of a Hamiltonian manifold in Section \ref{sec:involutive}.\
Furthermore, we have a foliation on $\Sigma$ (see Figure \ref{foliation}), where a leaf $L \subset \Sigma$ of the foliation is a one-dimensional submanifold such that $T_xL = \text{ker}\omega_x$ for all $x \in L$.\ Indeed, the leaf through $x$ corresponds to the trajectory of the Hamiltonian flow $\varphi_H^t$, i.e.\ $L_x = \{ \varphi_H^t(x): t \in \mathbb{R} \}$.\ Compact leaves are periodic orbits and diffeomorphic to $S^1$, while non-compact leaves are non-periodic orbits and diffeomorphic to $\mathbb{R}$.\ Describing the leaves instead of the flow is somewhat easier, since one does not need to care about time.\ Therefore, to understand the foliation on $\Sigma$ means to understand the dynamics of $X_{H} | _{\Sigma}$ up to time-parametrization.\
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-5,-1.5) rectangle (5.5,2);
\draw [line width=1pt] (-4,-1) .. controls (-1,-0.5) and (1,-0.5) .. (4,-1);
\draw [line width=1pt] (-2.5,1.5) .. controls (0.5,2) and (2.5,2) .. (5.5,1.5);
\draw [line width=1pt] (-4,-1) .. controls (-3.8,-0.3) and (-3.6,0.5) .. (-2.5,1.5);
\draw [line width=1pt] (4,-1) .. controls (4.2,-0.3) and (4.4,0.5) .. (5.5,1.5);
\draw (4,-1.1) node[anchor=north west] {$\Sigma$};
\draw[fill] (0.5,0.6) circle (1.5pt);
\draw (0,0.6) node[anchor=north west] {$x$};
\draw [line width=1pt] (-1.7,-0.1) -- (2.4,-0.1);
\draw [line width=1pt] (-1.2,1.3) -- (2.9,1.3);
\draw [line width=1pt] (-1.7,-0.1) -- (-1.2,1.3);
\draw [line width=1pt] (2.4,-0.1) -- (2.9,1.3);
\draw (-2.65,0.3) node[anchor=north west] {$T_x \Sigma$};
\draw [line width=1pt] (-0.6,0.6) -- (1.8,0.6);
\draw (0.7,1.25) node[anchor=north west] {$\text{ker}\omega_x$};
\end{tikzpicture}
\caption{The characteristic foliation on $\Sigma$}
\label{foliation}
\end{figure}
\noindent
Moreover, the symplectic form $\omega$ induces a symplectic form on the quotient space
$$T_x\Sigma / ( \text{ker}\omega_x \vert _{T_x\Sigma} )$$
which is in particular orientable.\ We obtain the quotient bundle $T\Sigma / ( \text{ker}\omega \vert _{ \Sigma} )$ over $\Sigma$, which is a symplectic vector bundle of rank $2n-2$.\ Recall that the tangent bundle $T \Sigma$ is orientable.\ The line bundle $\text{ker}\omega|_{\Sigma}$ is also orientable, since it has the non-vanishing section $X_H | _{\Sigma}$.\
Back to the monodromy (\ref{symplectic_autom_1}).\ We restrict it to the energy hypersurface $\Sigma$, so let $x$ be a periodic orbit on $\Sigma$ with $x(0) = x_0 \in \Sigma$.\ We obtain the linear diffeomorphism
$$d \varphi_H^T | _{\Sigma}(x_0) \colon (T_{x_0} \Sigma, \omega_{x_0} \vert _{T_{x_0}\Sigma}) \to (T_{x_0} \Sigma, \omega_{x_0} \vert _{T_{x_0}\Sigma}),$$
which leaves the one-dimensional distribution $\text{ker}\omega \vert_\Sigma \subset T\Sigma$ invariant by (\ref{ker_invariant}).\ This induces a symplectic bundle map, which is characterized by the commutative diagram
$$
\begin{tikzcd}
T\Sigma \arrow[r, "d\varphi_{H}^T | _{\Sigma}"] \arrow[swap,d, "\bar{\pi}"] & T\Sigma \arrow[d, "\bar{\pi}"]\\
T\Sigma / (\text{ker}\omega\vert_{ \Sigma}) \arrow[r] & T\Sigma / (\text{ker}\omega\vert_{ \Sigma})
\end{tikzcd}
$$
\begin{definition}
The induced map
\begin{align} \label{symplectic_autom_2}
A:= \overline{d\varphi _H ^{T_x} | _{\Sigma} (x_0)} \colon T_{x_0}\Sigma / ( \text{ker}\omega_{x_0} \vert _{T_{x_0}\Sigma} ) \to T_{x_0}\Sigma / ( \text{ker}\omega_{x_0} \vert _{T_{x_0}\Sigma} )
\end{align}
is called the \textbf{reduced monodromy}; it is a symplectomorphism of the $(2n-2)$-dimensional symplectic vector space $T_{x_0}\Sigma / ( \text{ker}\omega_{x_0} \vert _{T_{x_0}\Sigma} )$.\ Moreover, the \textbf{Floquet multipliers} are defined as the eigenvalues of (\ref{symplectic_autom_2}), and we call the periodic orbit $x$ \textbf{non-degenerate} if 1 is not an eigenvalue of (\ref{symplectic_autom_2}), which is equivalent to $\text{ker} (A - \text{id} ) = \{0\}.$\
\end{definition}
\noindent
Note that if $\lambda$ is a Floquet multiplier, then so are $1/\lambda$, $\overline{\lambda}$ and $ 1/ \overline{\lambda}$ (see Figure \ref{figure_7} and \cite[pp.\ 124--125]{frauenfelder} for details), and that the eigenvalues of the monodromy and of the reduced monodromy differ by the double eigenvalue 1 coming from (\ref{ker_invariant}).\
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-3.5,-3.1) rectangle (3.5,3);
\draw [line width=1pt] (0.0,-0.0) circle (2.0cm);
\draw [->,line width=1pt] (-3.0,0.0) -- (3.0,0.0);
\draw [->,line width=1pt] (0.0,-3.0) -- (0.0,3.0);
\draw (3.0,3.1000000000000005) node[anchor=north west] {$\lambda$};
\draw (3.02,-2.5600000000000005) node[anchor=north west] {$\overline{\lambda}$};
\draw (0.04,-0.04000000000000001) node[anchor=north west] {$0$};
\draw (2.02,-0.020000000000000004) node[anchor=north west] {$1$};
\draw (0.76,1.0000000000000002) node[anchor=north west] {$\frac{1}{\overline{\lambda}}$};
\draw (0.76,-0.4600000000000001) node[anchor=north west] {$\frac{1}{\lambda}$};
\begin{scriptsize}
\draw [fill=black] (0.0,-0.0) circle (2pt);
\draw [fill=black] (2.83,2.83) circle (2pt);
\draw [fill=black] (2.83,-2.83) circle (2pt);
\draw [fill=black] (0.71,0.71) circle (2pt);
\draw [fill=black] (0.71,-0.71) circle (2pt);
\end{scriptsize}
\end{tikzpicture}
\caption{Complex eigenvalues not on the unit circle}
\label{figure_7}
\end{figure}
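This quadruple symmetry can be illustrated numerically (a hedged sketch with a hypothetical example matrix, not taken from the Appendix): the $4\times 4$ block matrix $M = \mathrm{diag}\big(rR(\theta), \tfrac{1}{r}R(\theta)\big)$, with $R(\theta)$ the rotation by $\theta$, is symplectic since $(rR(\theta))^T \tfrac{1}{r}R(\theta) = I$, and its eigenvalues $re^{\pm \text{i}\theta}, \tfrac{1}{r}e^{\pm \text{i}\theta}$ form exactly one quadruple off the unit circle.

```python
# Hedged sketch (hypothetical example, not from the Appendix): the 4x4
# block matrix M = diag(r*R(theta), (1/r)*R(theta)) is symplectic, and its
# eigenvalues r*e^{+-i*theta}, (1/r)*e^{+-i*theta} form one quadruple
# {lam, 1/lam, conj(lam), 1/conj(lam)} off the unit circle.
import cmath, math

def eigvals_2x2(a, b, c, d):
    """Roots of lam^2 - (a + d)*lam + (a*d - b*c), by the quadratic formula."""
    tr, det = a + d, a * d - b * c
    s = cmath.sqrt(complex(tr * tr - 4 * det))
    return [(tr + s) / 2, (tr - s) / 2]

r, theta = 1.5, 0.7
A = [r * math.cos(theta), -r * math.sin(theta),
     r * math.sin(theta),  r * math.cos(theta)]   # block r*R(theta)
B = [x / (r * r) for x in A]                      # block (1/r)*R(theta)
lams = eigvals_2x2(*A) + eigvals_2x2(*B)

# The set of eigenvalues is closed under lam -> 1/lam and lam -> conj(lam).
for lam in lams:
    assert any(abs(1 / lam - mu) < 1e-9 for mu in lams)
    assert any(abs(lam.conjugate() - mu) < 1e-9 for mu in lams)
print(len(lams))  # 4
```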
\subsection{The case of $\text{Sp}(1)=\text{SL}(2,\mathbb{R})$}
We consider the case that the reduced monodromy (\ref{symplectic_autom_2}) is a symplectomorphism in
$$\text{Sp}(1)=\text{SL}(2,\mathbb{R}) = \{ \Psi \colon \mathbb{R}^2 \to \mathbb{R}^2 \text{ linear} \mid \det \Psi = 1\},$$
where the linear symplectomorphisms are exactly the orientation- and area-preserving linear transformations of $\mathbb{R}^2$.\
\subsubsection{Stability and Floquet multipliers}
\label{sec:stability_floquet}
The Floquet multipliers are the zeros of the polynomial $\lambda^2 - \lambda\, \text{tr}(A) + 1$.\ If $|\text{tr}(A)|<2$, then the eigenvalues lie on the unit circle, so they are of the form $e^{\pm \text{i} \theta}$.\ If $|\text{tr}(A)|>2$, then they are real and of the form $\lambda, 1 / \lambda$.\ In particular, they are given respectively by
\begin{align*}
\frac{1}{2} \text{tr}(A) \pm \text{i} \frac{1}{2} \sqrt{4 - \big(\text{tr}(A)\big)^2}, \quad \frac{1}{2} \text{tr}(A) \pm \frac{1}{2} \sqrt{\big(\text{tr}(A)\big)^2 - 4}.
\end{align*}
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-5,-2.4) rectangle (5,3);
\draw [line width=1pt] (0.0,0.0) circle (2.0cm);
\draw [->,line width=1pt] (-3.0,0.0) -- (5,0.0);
\draw [->,line width=1pt] (0.0,-3.0) -- (0.0,3.0);
\draw (1.48,2.0000000000000004) node[anchor=north west] {$e^{\text{i} \theta}$};
\draw (1.54,-1.3400000000000003) node[anchor=north west] {$e^{- \text{i} \theta}$};
\draw (0.04,-0.04000000000000001) node[anchor=north west] {$0$};
\draw (2.02,-0.020000000000000004) node[anchor=north west] {$1$};
\draw (-2.72,-0.020000000000000004) node[anchor=north west] {$-1$};
\draw (0.7199999999999999,-0.06000000000000001) node[anchor=north west] {$\frac{1}{\lambda}$};
\draw (3.7200000000000015,0.0) node[anchor=north west] {$\lambda$};
\begin{scriptsize}
\draw [fill=black] (0.0,-0.0) circle (2pt);
\draw [fill=black] (1.41,1.41) circle (2pt);
\draw [fill=black] (1.41,-1.41) circle (2pt);
\draw [fill=black] (1,0) circle (2pt);
\draw [fill=black] (4,0) circle (2pt);
\end{scriptsize}
\end{tikzpicture}
\caption{Eigenvalues of a $2 \times 2$ symplectic matrix}
\label{1_figure_eigenvalues}
\end{figure}
\noindent
Furthermore, $x$ is called \textbf{elliptic} if $|\text{tr}(A)|<2$, \textbf{positive hyperbolic} if $\text{tr}(A)>2$ and \textbf{negative hyperbolic} if $\text{tr}(A)< - 2$.\ Geometrically, in the elliptic case the reduced monodromy is conjugate to a rotation, i.e.\
$$ A = \overline{d\varphi _H ^{T_x} | _{\Sigma} (x_0)} \sim \begin{pmatrix}
\cos \theta & - \sin \theta\\
\sin \theta & \cos \theta
\end{pmatrix}. $$
Hence elliptic periodic orbits have the property that orbits starting sufficiently close to $x$, i.e.\ neighbouring orbits of the same energy, remain near $x$ for a long time, while in the hyperbolic cases they may fly away.\ In the elliptic case, the Conley--Zehnder index is a helpful tool to measure the number of complete rotations of the neighbouring orbits during the period $T_x$.\
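This trichotomy is easy to implement; the following sketch (hypothetical helper functions, not from the Appendix) computes the Floquet multipliers from $\text{tr}(A)$ and classifies the orbit accordingly.

```python
# Hedged sketch (not from the Appendix): the Floquet multipliers are the
# roots of lam^2 - tr(A)*lam + 1, and the size and sign of tr(A) decide
# the elliptic / positive hyperbolic / negative hyperbolic trichotomy.
import cmath

def floquet_multipliers(trace):
    """Roots of lam^2 - trace*lam + 1 for A in SL(2, R)."""
    s = cmath.sqrt(complex(trace * trace - 4))
    return (trace + s) / 2, (trace - s) / 2

def classify(trace):
    if abs(trace) < 2:
        return "elliptic"             # multipliers e^{+-i*theta} on the unit circle
    if trace > 2:
        return "positive hyperbolic"  # real multipliers lam, 1/lam > 0
    if trace < -2:
        return "negative hyperbolic"  # real multipliers lam, 1/lam < 0
    return "degenerate (|tr(A)| = 2)" # multiplier +1 or -1, excluded above

lam, mu = floquet_multipliers(1.0)
print(classify(1.0), round(abs(lam), 12))  # elliptic 1.0
print(classify(3.0), classify(-3.0))
```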
\subsubsection{The Conley--Zehnder index}
\label{sec:conley_zehnder_index}
In 1984, Charles Conley and Eduard Zehnder \cite{conley_zehnder} defined an index theory which generalizes the usual Morse index for closed geodesics on a Riemannian manifold.\ For details on Morse theory we refer the curious reader to the books of Milnor \cite{milnor} and Banyaga--Hurtubise \cite{banyaga}, and for Morse--Bott theory, which is a generalization, to the articles of Bott \cite{bott}, Banyaga--Hurtubise \cite{banyaga} and Frauenfelder \cite[Appendix]{frauenfelder_3}.\
One can roughly describe the Conley--Zehnder index as a mean winding number for the linearized Hamiltonian flow along the orbit $x$, or as the number of times that an eigenvalue crosses 1.\ It roughly measures how often neighbouring orbits of the same energy wind around the orbit $x$.\ Since we treat the reduced monodromy (\ref{symplectic_autom_2}) in Sp(1), we consider the transversal Conley--Zehnder index with standard (counter-clockwise) normalization for non-degenerate paths, as defined by Hofer--Wysocki--Zehnder \cite[Appendix]{hofer_w_z_1}, \cite[Section 3]{hofer_w_z}, where the details can be found.\ Explicitly, it is given in the following way.\
Let $b_1(0), b_2(0)$ and $X_H|_{\Sigma}(x_0)$ be a basis for the 3-dimensional vector space $T_{x_0}\Sigma$ such that $\omega_{x_0}\big(b_1(0),b_2(0)\big) = 1$.\ Note that $\omega_{x_0} \big( b_i(0), X_H|_{\Sigma}(x_0) \big) = 0$, for $i=1,2$.\ The first two basis vectors induce a symplectic basis for the 2-dimensional symplectic vector space $T_{x_0}\Sigma / ( \text{ker}\omega_{x_0} \vert _{T_{x_0}\Sigma} )$ and we denote by $P:= \langle b_1(0), b_2(0) \rangle_{\mathbb{R}}$ the corresponding 2-plane.\
We choose a smooth disc map $\overline{x} \in C^{\infty} ( \mathbb{D}, \Sigma )$, where $\mathbb{D} = \{ z \in \mathbb{C} \mid |z| \leq 1 \}$, such that on the boundary it satisfies $\overline{x} (e^{2\pi \text{i}t/T_x}) = x(t)$.\ Furthermore, we fix a symplectic trivialization for the pullback bundle $\tau \colon \mathbb{D} \times \mathbb{R}^2 \to \overline{x}^*T\Sigma / \text{ker}\omega|_{\Sigma} $.\ For details about such trivializations we refer to \cite[Section 2.6]{mcduff_salamon}.\ With respect to these choices, the linearized flow along $x$ generates a path $\Phi_x \colon [0,T_x] \to \text{Sp}(1)$ of symplectic matrices in $\mathbb{R}^2$ defined by
$$
\begin{tikzcd}[column sep=4.5em]
\mathbb{R}^2 \arrow[r,dashed, "\Phi_x(t)"] \arrow[swap,d, "\big(\tau(1)\big)^{-1}"] & \mathbb{R}^2 \\
T_{x_0}\Sigma / ( \text{ker}\omega_{x_0} \vert _{T_{x_0}\Sigma}) \arrow[r,"\overline{d\varphi _H ^t | _{\Sigma} (x_0)}"] & T_{x(t)}\Sigma / ( \text{ker}\omega_{x(t)} \vert _{T_{x(t)}\Sigma}) \arrow[swap,u, "\tau(e^{2 \pi \text{i}t/T_x})"]
\end{tikzcd}
$$
This path starts at $\Phi_x(0)=\text{id}$ and has a well-defined Conley--Zehnder index; the transversal Conley--Zehnder index of $x$, which we denote by $\mu_{CZ}$, is by definition the Conley--Zehnder index of this path.\
If $x$ is elliptic, then the linearized flow rotates the 2-plane $P$ by some angle.\ Consider the rotation function $\theta(t)$, which gives the rotation angle of the 2-plane $P$ at each time $t \in [0,T_x]$.\ Note that $\theta(0)=0$ and that $\theta(t)$ is continuous in $t$.\ Then
\begin{align} \label{index_1}
\mu_{CZ} = 2 \lfloor \theta(T_x)/(2\pi) \rfloor + 1
\end{align}
and the number of complete rotations of the neighbouring orbits during $T_x$ is given by
\begin{align} \label{index_2}
\text{rot}(x) := \lfloor \theta(T_x)/(2\pi) \rfloor = \frac{1}{2}\left(\mu_{CZ} - 1\right) \in \mathbb{Z}.
\end{align}
Therefore the index is odd and jumps by 2 with every complete rotation.\
If $x$ is hyperbolic, then the 2-plane $P$ is rotated by $m\pi$ for some integer $m$, and
\begin{align*}
\mu_{CZ} = m \in \begin{cases}
2 \mathbb{Z} + 1 & \text{if $x$ is negative hyperbolic}\\
2 \mathbb{Z} & \text{if $x$ is positive hyperbolic.}
\end{cases}
\end{align*}
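Formulas (\ref{index_1}) and (\ref{index_2}) are easy to evaluate in code; the following is a minimal numerical sketch (the function names are ours, chosen for illustration):

```python
import math

def cz_index_elliptic(theta_T):
    """Transversal Conley-Zehnder index of an elliptic orbit:
    mu_CZ = 2*floor(theta(T_x)/(2*pi)) + 1."""
    return 2 * math.floor(theta_T / (2 * math.pi)) + 1

def rotation_number(theta_T):
    """Number of complete rotations of the neighbouring orbits during T_x:
    rot(x) = floor(theta(T_x)/(2*pi)) = (mu_CZ - 1)/2."""
    return math.floor(theta_T / (2 * math.pi))
```

For example, $\theta(T_x) = 4\pi + 0.3$ gives $\mu_{CZ} = 5$ and $\text{rot}(x) = 2 = \frac{1}{2}(\mu_{CZ}-1)$, consistent with the index being odd and jumping by 2 with each complete rotation.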
By the implicit function theorem, a non-degenerate periodic orbit always comes in a smooth family of periodic orbits, parametrized by energy, and hence forms a smooth orbit cylinder (see Figure \ref{orbitcylinder} and \cite[p.\ 202]{meyer} for details).\ Moreover, all periodic orbits on the orbit cylinder have the same index, since they are connected by a path of non-degenerate orbits.\
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-4,-1.9) rectangle (3.5,1.8);
\draw [dashed,line width=1pt] (0.4,-1.5) arc (0:180:1.1 and 0.15);
\draw [line width=1pt] (-1.8,-1.5) arc (180:360:1.1 and 0.15);
\draw [dashed,line width=1pt] (0,0) arc (0:180:0.7 and 0.15);
\draw [decoration={markings, mark=at position 0.5 with {\arrow{>}}}, postaction={decorate},line width=1pt] (-1.4,0) arc (180:360:0.7 and 0.15);
\draw [line width=1pt] (0.5,1.5) arc (0:360:0.9 and 0.15);
\draw[rounded corners=40pt,line width=1pt](0.4,-1.5)--(-0.4,0)--(0.5,1.5);
\draw[rounded corners=40pt,line width=1pt](-1.8,-1.5)--(-1.23,0)--(-1.3,1.5);
\draw[rounded corners=40pt,line width=1pt](-3.5,-0.7)--(-1,-0.55)--(1.5,-0.7);
\draw[rounded corners=20pt,line width=1pt] (1.5,-0.7)--(1.8,0)--(2.3,0.7);
\draw[rounded corners=20pt,line width=1pt] (-3.5,-0.7)--(-3.1,0)--(-2.5,0.7);
\draw[dash pattern=on 2pt off 2pt,opacity=0] (-2.5,0.7) .. controls (-0.1,0.85) ..
coordinate[pos=0.21] (A)
coordinate[pos=0.57] (B)
coordinate[pos=0.58] (G)
coordinate[pos=0.1] (C)
coordinate[pos=0.4] (D)
coordinate[pos=0.7] (E)
coordinate[pos=0.19] (F) (2.3,0.7);
\draw [line width=1pt] (-2.5,0.7) .. controls (C) .. (F);
\draw [dashed,line width=1pt] (A) .. controls (D) .. (B);
\draw [line width=1pt] (G) .. controls (E) .. (2.3,0.7);
\draw (1.82,-0.4) node[anchor=north west] {$\Sigma$};
\end{tikzpicture}
\caption{Orbit cylinder}
\label{orbitcylinder}
\end{figure}
Let $x^n$ be the $n$-fold iteration of $x$ with first return time $nT_x$, $n \geq 1$.\ We call $x^n$ the $n$-th cover of $x$.\ Assume that $x^n$ is non-degenerate for all $n \geq 1$; then the index iteration (we refer to \cite[p.\ 249]{hofer_w_z_1} for details) is given by
\begin{table}[H]
\centering
\begin{tabular}{c|c}
$x$ & Conley--Zehnder index of $x^n$\\
\hline pos. / neg. hyperbolic & $n \mu_{CZ}$\\
elliptic & $2 \lfloor n \theta(T_x)/(2\pi) \rfloor + 1$
\end{tabular}
\vspace{0.1cm}
\caption{Index iteration}
\label{table_index_iteration}
\end{table}
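Table \ref{table_index_iteration} translates into a short computation; the sketch below (naming is ours) works under the table's assumption that all covers $x^n$ are non-degenerate:

```python
import math

def cz_index_of_cover(n, orbit_type, mu_cz=None, theta_T=None):
    """Conley-Zehnder index of the n-th cover x^n.

    Hyperbolic orbits (positive or negative) iterate linearly,
    elliptic orbits iterate via the rotation angle theta(T_x)."""
    if orbit_type == "hyperbolic":
        return n * mu_cz
    if orbit_type == "elliptic":
        return 2 * math.floor(n * theta_T / (2 * math.pi)) + 1
    raise ValueError("unknown orbit type: " + orbit_type)
```

For an elliptic orbit the index of the covers jumps by 2 each time $n\,\theta(T_x)$ passes a multiple of $2\pi$, in accordance with (\ref{index_1}).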
\begin{remark} \label{remark_2_7}
Suppose that $x$ is elliptic.\ If $x$ becomes positive hyperbolic, i.e.\ the reduced monodromy moves through the eigenvalue 1, then $\mu_{CZ}$ jumps by $\pm 1$.\ If the rotation angle moves through 0 to positive hyperbolic, then $\mu_{CZ}$ jumps by $-1$, and if it moves through $2 \pi$ to positive hyperbolic, then $\mu_{CZ}$ jumps by $+1$.\ In the reverse direction, the change of index is exactly opposite; see Figure \ref{scenario_1}.\
For the cases from elliptic to negative hyperbolic or from negative hyperbolic to elliptic, the Conley--Zehnder index of $x$ does not change, but the reduced monodromy of its double cover crosses the eigenvalue 1, hence its Conley--Zehnder index jumps by $\pm 1$.\ Note that this gives rise to a bad orbit (see Example \ref{example_7_2} in Subsection \ref{sec:7.1}) and furthermore, the double cover of a negative hyperbolic periodic orbit is positive hyperbolic.\
In the elliptic case, if the eigenvalue of the reduced monodromy is a $\tilde{k}$-th root of unity for some $\tilde{k} \geq 3$, then the $\tilde{k}$-th cover is still elliptic and its reduced monodromy goes through the eigenvalue 1, thus its Conley--Zehnder index jumps by $\pm 2$.\
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-3,-2.4) rectangle (14,3.7);
\draw [decoration={markings, mark=at position 0.07 with {\arrow{<}}}, postaction={decorate},line width=1pt] [decoration={markings, mark=at position 0.94 with {\arrow{>}}}, postaction={decorate},line width=1pt] [line width=1pt] (0.0,0.0) circle (2.0cm);
\draw [decoration={markings, mark=at position 0.8 with {\arrow{>}}}, postaction={decorate},line width=1pt] [line width=1pt] [->,line width=1pt] (-2.7,0.0) -- (4.5,0.0);
\draw [->,line width=1pt] (0.0,-3.0) -- (0.0,3.0);
\draw (0.04,-0.04) node[anchor=north west] {$0$};
\draw (2.02,-0.02) node[anchor=north west] {$1$};
\draw (-2.72,-0.02) node[anchor=north west] {$-1$};
\draw (2.2,1) node[anchor=north west] {$\mu_{CZ} - 1$};
\draw (2.2,-0.5) node[anchor=north west] {$\mu_{CZ} + 1$};
\draw (-2.7,3.7) node[anchor=north west] {from elliptic to positive hyperbolic};
\begin{scriptsize}
\draw [fill=black] (0.0,0.0) circle (2pt);
\draw [fill=black] (2,0.0) circle (2pt);
\draw [fill=black] (-2,0.0) circle (2pt);
\draw [fill=black] (1.41,1.41) circle (2pt);
\draw [fill=black] (1.41,-1.41) circle (2pt);
\draw [fill=black] (3.5,0) circle (2pt);
\end{scriptsize}
\draw [decoration={markings, mark=at position 0.07 with {\arrow{>}}}, postaction={decorate},line width=1pt] [decoration={markings, mark=at position 0.94 with {\arrow{<}}}, postaction={decorate},line width=1pt] [line width=1pt] (9,0.0) circle (2.0cm);
\draw [decoration={markings, mark=at position 0.8 with {\arrow{<}}}, postaction={decorate},line width=1pt] [line width=1pt] [->,line width=1pt] (-2.7+9,0.0) -- (4.5+9,0.0);
\draw [->,line width=1pt] (9,-3.0) -- (9,3.0);
\draw (0.04+9,-0.04) node[anchor=north west] {$0$};
\draw (2.02+9,-0.02) node[anchor=north west] {$1$};
\draw (-2.72+9,-0.02) node[anchor=north west] {$-1$};
\draw (2.2+9,1) node[anchor=north west] {$\mu_{CZ} + 1$};
\draw (2.2+9,-0.5) node[anchor=north west] {$\mu_{CZ} - 1$};
\draw (-2.7+9,3.7) node[anchor=north west] {from positive hyperbolic to elliptic};
\begin{scriptsize}
\draw [fill=black] (9,0.0) circle (2pt);
\draw [fill=black] (2+9,0) circle (2pt);
\draw [fill=black] (-2+9,0) circle (2pt);
\draw [fill=black] (1.41+9,1.41) circle (2pt);
\draw [fill=black] (1.41+9,-1.41) circle (2pt);
\draw [fill=black] (3.5+9,0) circle (2pt);
\end{scriptsize}
\end{tikzpicture}
\caption{The index jump by $\pm 1$ in the elliptic and positive hyperbolic transitions}
\label{scenario_1}
\end{figure}
\noindent
The index jump gives rise to the bifurcation of new families of periodic orbits (see the discussion in Subsection \ref{sec8.2}).\
\end{remark}
\section{Hamiltonian manifolds}
\label{sec:hamiltonian}
\begin{center}
\textit{``A Hamiltonian manifold is the odd-dimensional analog of a symplectic manifold."}
\end{center}
\begin{flushright}
- U. Frauenfelder and O. van Koert in \cite[p.\ 20]{frauenfelder}
\end{flushright}
\subsection{Involutive Hamiltonian manifolds and Symplectic Splitting}
\label{sec:involutive}
The archetypical examples of Hamiltonian manifolds are energy hypersurfaces, see (\ref{energy_hypersurface}) above.
\begin{definition}
A \textbf{Hamiltonian manifold} is a pair $(\Sigma,\omega)$ where $\Sigma$ is a $2n-1$ dimensional manifold and $\omega \in \Omega^2(\Sigma)$ a closed $2$-form such that $\text{ker}\omega \subset T\Sigma$ is a one-dimensional distribution.\ The $2$-form $\omega$ is called a \textbf{Hamiltonian structure} on $\Sigma$.
\end{definition}
\begin{definition}
A \textbf{Hamiltonian vector field} $X$ on a Hamiltonian manifold is a non-vanishing section of the line bundle $\text{ker}\omega$.
\end{definition}
\begin{remark}
Since the Lie derivative vanishes by Cartan's identity, $\mathcal{L}_X \omega = d \iota_X \omega + \iota_X d \omega = 0$, the Hamiltonian structure is preserved under the flow of $X$ for all times $t \in \mathbb{R}$, i.e.\ $(\varphi_X^t)^* \omega = \omega.$
\end{remark}
\begin{remark}
As seen before in the case of energy hypersurfaces (\ref{energy_hypersurface}), a Hamiltonian manifold comes with a foliation of $\Sigma$ (see Figure \ref{foliation}), whose leaves $L \subset \Sigma$ are the one-dimensional submanifolds such that $T_xL = \text{ker}\omega_x$ for all $x \in L$.\ Compact leaves are diffeomorphic to $S^1$ and non-compact ones are diffeomorphic to $\mathbb{R}$.
\end{remark}
\begin{definition}
A \textbf{symplectic involution} $\sigma$ on a Hamiltonian manifold $(\Sigma,\omega)$ is a diffeomorphism $\sigma: \Sigma \to \Sigma$ such that $\sigma^2=\text{id}_{\Sigma}$ and $\sigma^* \omega = \omega$.
\end{definition}\
\noindent
We next study the properties of the linear isomorphism
\begin{align} \label{linear_isomorphism}
d \sigma (x) : T_x \Sigma \to T_{\sigma(x)}\Sigma,\quad x \in \Sigma.
\end{align}
\begin{lemma1} \label{lemma_3_6}
The one-dimensional distribution is invariant under $d\sigma(x)$, i.e.\ if $\xi \in \text{ker}\omega_x$, then
$$d\sigma(x)\xi \in \text{ker}\omega_{\sigma(x)}.$$
\end{lemma1}
\begin{proof}
Let $\xi \in \text{ker}\omega_x$, i.e.
$$\omega_x(\xi,\eta)=0,\quad \text{for all }\eta \in T_x \Sigma.$$
By (\ref{linear_isomorphism}) we know that for all $\eta' \in T_{\sigma(x)}\Sigma$ there exists a unique element $\eta \in T_x\Sigma$ such that $d\sigma(x)\eta=\eta'$. Using $\sigma^*\omega = \omega$ we compute for all $\eta' \in T_{\sigma(x)}\Sigma$ that
$$\omega_{\sigma(x)}\big(d\sigma(x)\xi,\eta' \big) = \omega_{\sigma(x)}\big(d\sigma(x)\xi, d\sigma(x)\eta \big) = \omega_x(\xi,\eta) = 0.$$
\end{proof}\
\noindent
Denote by $F$ the fixed point set of $\sigma$, i.e.\ $F=\text{Fix}(\sigma)=\{ x \in \Sigma \mid \sigma (x) = x \}$.
\begin{remark}
Let $x \in F$.\ By choosing a Riemannian metric on $\Sigma$ which is $\sigma$-invariant, we can parametrize $F$, locally around $x$, by the restriction of the exponential map to $E_1 \big( d\sigma(x) \big)$.\ Thus $F$ is a submanifold of $\Sigma$ and
\begin{align*}
T_x F = E_1 \big( d\sigma(x) \big) = \text{ker}\big( d \sigma(x) - \text{id} \big).
\end{align*}
\begin{lemma1}
For $x \in F$, the eigenvalues of $d\sigma(x)$ are $\pm 1$.
\end{lemma1}
\begin{proof}
Let $x \in F$, let $\lambda$ be an eigenvalue of $d\sigma(x)$ and let $\xi \in T_x\Sigma$ be an eigenvector for $\lambda$.\ The calculation
$$\xi = d\sigma^2(x)\xi = d\sigma(x)\big(d\sigma(x)\xi\big) = d\sigma(x)(\lambda\xi) = \lambda d \sigma(x)\xi = \lambda^2 \xi$$
implies that $\lambda=\pm1$.
\end{proof}\
\noindent
For $x \in F=\text{Fix}(\sigma)$ we obtain the two eigenspaces
$$ T_x F = E_1\big(d\sigma(x)\big) = \{ \xi \in T_x\Sigma: d\sigma(x)\xi=\xi \} = \text{ker}\big(d\sigma(x) - \text{id}\big) \subset T_x\Sigma$$
and
$$E_{-1}\big(d\sigma(x)\big) = \{ \eta \in T_x\Sigma: d\sigma(x)\eta=-\eta \} = \text{ker}\big(d\sigma(x) + \text{id}\big) \subset T_x\Sigma.$$
\begin{lemma1} \label{lemma_1}
The tangent space splits into the direct sum of the eigenspaces, i.e.
$$T_x\Sigma = E_1\big(d\sigma(x)\big) \oplus E_{-1}\big(d\sigma(x)\big),\quad x \in F.$$
\end{lemma1}
\begin{proof}
Obviously,
$$E_1\big(d\sigma(x)\big) \cap E_{-1}\big(d\sigma(x)\big) = \{0\}.$$
Moreover, with the projection maps
$$
\begin{tikzcd}
& T_x\Sigma \arrow[dl,"\pi_1"'] \arrow{rd}{\pi_2} & \\
E_1\big(d\sigma(x)\big) & & E_{-1}\big(d\sigma(x)\big)
\end{tikzcd}$$
$$\pi_1 = \frac{1}{2} \big( \text{id} + d\sigma(x) \big) \text{ and } \pi_2 = \frac{1}{2} \big( \text{id} - d\sigma(x) \big)$$
we can write any $\xi \in T_x\Sigma$ as
$$\xi = \pi_1(\xi) + \pi_2(\xi) = \frac{1}{2}\big(\xi + d\sigma(x)\xi\big) + \frac{1}{2}\big(\xi - d\sigma(x)\xi\big).$$
\end{proof}
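The projections $\pi_1, \pi_2$ from the proof can be checked on a concrete linear involution; in the small numerical sketch below, the reflection $S$ is our illustrative stand-in for $d\sigma(x)$:

```python
import numpy as np

# An illustrative linear involution on R^3: reflection in the last coordinate.
S = np.diag([1.0, 1.0, -1.0])
assert np.allclose(S @ S, np.eye(3))            # sigma^2 = id

P1 = 0.5 * (np.eye(3) + S)                      # projection onto E_1
P2 = 0.5 * (np.eye(3) - S)                      # projection onto E_{-1}

# P1 + P2 = id realizes the splitting, and the images lie in the eigenspaces:
assert np.allclose(P1 + P2, np.eye(3))
assert np.allclose(S @ P1, P1)                  # im(P1) is fixed by S
assert np.allclose(S @ P2, -P2)                 # im(P2) is anti-fixed by S
assert np.allclose(P1 @ P2, np.zeros((3, 3)))   # the eigenspaces intersect trivially
```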
\noindent
\begin{remark} \label{remark_linear_symplectic}
We recall from linear symplectic geometry:\ Let $(V,\omega)$ be a symplectic vector space and $W \subset V$ a linear subspace.\ The symplectic complement $W^{\omega}$ of $W$ is defined as the subspace
$$W^{\omega} := \{ v \in V \mid \omega(v,w)=0, \forall w \in W \}.$$
Furthermore, $W$ is called symplectic if $\omega | _{W}$ is non-degenerate.\ Equivalently,
$$W \cap W^{\omega} = \{ 0 \},$$
i.e.\ $W$ intersects its symplectic complement trivially.\ In addition, for any subspace $W$ it holds that
$$\text{dim}V = \text{dim}W + \text{dim}W^{\omega},\quad (W^{\omega})^{\omega} = W.$$
The non-degeneracy of $\omega$ implies that for a symplectic subspace $W \subset V$ we obtain
$$V = W \oplus W^{\omega}$$
and the symplectic complement $W^{\omega} \subset V$ is also symplectic.\ Finally, if $V_1$ and $V_2$ are complementary subspaces of $V$, i.e.\ $V = V_1 \oplus V_2$, that are symplectically orthogonal, then $V_1$ and $V_2$ are symplectic.\
\end{remark}
\begin{definition}
Let $S \subset T_x\Sigma$ be a linear subspace. The \textbf{symplectic complement} of $S$ is defined as the linear subspace $\{ \xi \in T_x\Sigma \mid \omega_x(\xi,\eta) = 0, \forall \eta \in S \}$.\ We call two subspaces $S_1, S_2 \subset T_x\Sigma$ \textbf{symplectically orthogonal} if
$$\omega_x(\xi,\eta) = 0,\quad \text{for all } \xi \in S_1, \text{ }\eta \in S_2.$$
\end{definition}
\begin{lemma1} \label{lemma_2}
The eigenspaces $E_1\big(d\sigma(x)\big)$ and $E_{-1}\big(d\sigma(x)\big)$ are symplectically orthogonal.
\end{lemma1}
\begin{proof}
Since $\sigma^* \omega = \omega$ we obtain for all $\xi \in E_1 \big( d\sigma(x) \big)$ that
$$ \omega_x(\xi,\eta) = \omega_{\sigma(x)}\big( d\sigma(x)\xi, d\sigma(x)\eta \big) = \omega_x(\xi,-\eta) = - \omega_x(\xi,\eta)$$
for all $\eta \in E_{-1}\big(d\sigma(x)\big)$.
\end{proof}
\begin{lemma1} \label{lemma_flow_invariant}
Let $X$ be a non-vanishing section of the line bundle ker$\omega$ such that $X$ is invariant under the symplectic involution $\sigma$.\ In other words, $\sigma$ and the flow of $X$ commute.\ Then the linearized flow map
$$ d \varphi_X^t(x) \colon T_x \Sigma \to T_{\varphi_X^t(x)} \Sigma,\quad x \in F $$
leaves the two eigenspaces invariant.
\end{lemma1}
\begin{proof}
For $\xi \in E_{\pm 1} \big( d \sigma(x) \big)$
$$ d \varphi_X^t(x) \xi = \pm d \varphi_X^t(x) \big( d \sigma(x) \xi \big) = \pm d \sigma \big( \varphi_X^t(x) \big) \big( d \varphi_X^t(x) \xi \big) $$
and hence
$$ d \sigma \big( \varphi_X^t(x) \big) \big( d \varphi_X^t(x) \xi \big) = \pm d \sigma^2 \big( \varphi_X^t(x) \big) \big( d \varphi_X^t(x) \xi \big) = \pm d \varphi_X^t(x) \xi, $$
which means that $d \varphi_X^t(x) \xi \in E_{\pm 1} \Big( d \sigma \big( \varphi_X^t(x) \big) \Big)$.
\end{proof}
Moreover, we obtain an induced $2n-2$ dimensional symplectic vector space
$$T_x\Sigma / \text{ker}\omega_x$$
and a quotient bundle $T\Sigma / \text{ker}\omega$ over $\Sigma$, which is a symplectic vector bundle of rank $2n-2$ and hence orientable.\ Therefore, if the Hamiltonian manifold $(\Sigma,\omega)$ is orientable, then the line bundle $\text{ker}\omega$ is also orientable.\ This motivates the next definition and lemma.
\end{remark}
\begin{definition} \label{definition_inv_ham}
An \textbf{involutive Hamiltonian manifold} is a triple $(\Sigma,\omega,\sigma)$, where $(\Sigma,\omega)$ is an orientable Hamiltonian manifold and $\sigma$ a symplectic involution which preserves the orientation of the line bundle $\text{ker}\omega$.
\end{definition}
\begin{corollary1} \label{corollary_1}
Let $(\Sigma,\omega,\sigma)$ be an involutive Hamiltonian manifold and $x \in F = \text{Fix}(\sigma)$.\ Then the line bundle $\text{ker}\omega | _{F}$ is contained in the eigenspace to the eigenvalue $1$, i.e.
$$ \text{ker}\omega_x \subset E_1 \big( d\sigma(x) \big) = T_x F.$$
\end{corollary1}
\begin{proof}
Since $x$ is a fixed point of $\sigma$, the linear map (\ref{linear_isomorphism})
$$d\sigma(x): T_x\Sigma \to T_x\Sigma$$
is an automorphism.\ Together with Lemma \ref{lemma_3_6} we obtain for $\xi \in \text{ker}\omega_x$ that $d\sigma(x)\xi = \pm \xi$, and in fact $d\sigma(x)\xi = +\xi$, since $\sigma$ preserves the orientation of the line bundle $\text{ker}\omega$.
\end{proof}
\begin{theorem1}[\textbf{Symplectic Splitting}] \label{theorem_splitting} \textit{
Let $(\Sigma,\omega,\sigma)$ be an involutive Hamiltonian manifold and $x \in F = \textnormal{Fix}(\sigma)$.\ Then:\
\begin{itemize}
\item[1)] The induced $2n-2$ dimensional symplectic vector space at $x$ splits into two symplectic vector spaces,
\begin{align*}
T_x\Sigma / \textnormal{ker} \omega_x &= \Big( E_1\big( d\sigma(x) \big) / \textnormal{ker} \omega_x \Big) \oplus \Big( E_{-1}\big( d\sigma(x) \big) \Big)\\
&= \big( T_x\textnormal{Fix}(\sigma) / \textnormal{ker} \omega_x \big) \oplus \Big( E_{-1}\big( d\sigma(x) \big) \Big).
\end{align*}
\item[2)] Let $X$ be a non-vanishing section of the line bundle \textnormal{ker}$\omega$ such that $X$ is invariant under the symplectic involution $\sigma$.\ Then the induced linearized flow map
$$ \overline{d\varphi _X ^t (x)} \colon T_{x}\Sigma / \textnormal{ker}\omega_{x} \to T_{\varphi _X ^t (x)}\Sigma / \textnormal{ker}\omega_{\varphi _X ^t (x)} $$
is a $(2n-2) \times (2n-2)$ symplectic matrix that splits into two symplectic maps.\ More precisely,
\begin{align} \label{decomposition}
\overline{d\varphi _X ^t (x)} = \begin{pmatrix}
A_1 & 0\\
0 & A_2
\end{pmatrix},
\end{align}
where
$$ A_1 \colon T_{x} \textnormal{Fix}(\sigma) / \textnormal{ker}\omega_{x} \to T_{\varphi _X ^t (x)} \textnormal{Fix}(\sigma) / \textnormal{ker}\omega_{\varphi _X ^t (x)} $$
$$ A_2 \colon E_{-1}\big( d\sigma(x) \big) \to E_{-1}\big( d\sigma(\varphi _X ^t (x)) \big) $$
are symplectic matrices.\
\end{itemize}}
\end{theorem1}
\begin{proof}
Lemma \ref{lemma_1} gives the splitting
$$T_x\Sigma = E_1\big(d\sigma(x)\big) \oplus E_{-1}\big(d\sigma(x)\big),$$
where the two eigenspaces are symplectically orthogonal by Lemma \ref{lemma_2}.\ By Corollary \ref{corollary_1} we know that $ \text{ker}\omega_x \subset E_1 \big( d\sigma(x) \big) = T_x\text{Fix}(\sigma)$, which implies the first statement.\ The matrix form of the induced linearized flow consists of four block matrices,
$$\overline{d\varphi _X ^t (x)} = \begin{pmatrix}
A_1 & B_1\\
B_2 & A_2
\end{pmatrix},$$
where
$$ A_1 \colon T_{x} \text{Fix}(\sigma) / \text{ker}\omega_{x} \to T_{\varphi _X ^t (x)} \text{Fix}(\sigma) / \text{ker}\omega_{\varphi _X ^t (x)} $$
$$ A_2 \colon E_{-1}\big( d\sigma(x) \big) \to E_{-1}\big( d\sigma(\varphi _X ^t (x)) \big) $$
and where
$$ B_1 \colon E_{-1}\big( d\sigma(x) \big) \to T_{\varphi _X ^t (x)} \text{Fix}(\sigma) / \text{ker}\omega_{\varphi _X ^t (x)} $$
$$ B_2 \colon T_{x} \text{Fix}(\sigma) / \text{ker}\omega_{x} \to E_{-1}\big( d\sigma(\varphi _X ^t (x)) \big) $$
are zero maps since, by Lemma \ref{lemma_flow_invariant}, the linearized flow leaves the two eigenspaces invariant.\ It now follows that $A_1$ and $A_2$ are symplectic.
\end{proof}
\begin{remark}
The dimensions of $\big( T_x\text{Fix}(\sigma) / \text{ker} \omega_x \big)$ and $\Big( E_{-1}\big( d\sigma(x) \big) \Big)$ are even, since they are symplectic vector spaces.\ Moreover, they are determined by the fixed point set $F= \text{Fix}(\sigma)$, which is odd-dimensional.\ In particular,
\begin{center}
\begin{tabular}{c|c}
& dimension\\
\hline $T_x\text{Fix}(\sigma) / \text{ker} \omega_x$ & $\text{dim}F - 1$\\
$E_{-1}\big( d\sigma(x) \big)$ & $2n-1-\text{dim}F$
\end{tabular}
\end{center}
\end{remark}
\begin{remark}
An important property arises in the case where $A_1$ and $A_2$ are both symplectic matrices in $\text{Sp}(1)=\text{SL}(2,\mathbb{R})$.\ If the Floquet multipliers of $\overline{d\varphi _X ^t (x)}$ were neither real nor on the unit circle, then they would be four distinct complex numbers of the form $\lambda, 1/\lambda, \overline{\lambda}$ and $ 1/ \overline{\lambda}$ (see Figure \ref{figure_7}).\ The decomposition (\ref{decomposition}) shows that such eigenvalues are not possible.\ This phenomenon occurs for the reduced monodromy of planar periodic orbits in the spatial Hill lunar problem as well as in the spatial CR3BP.\
\end{remark}
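The exclusion of such a complex quadruple can be seen numerically: a matrix in $\text{SL}(2,\mathbb{R})$ has determinant 1, so its two eigenvalues are either both real or a complex-conjugate pair on the unit circle, and a block-diagonal matrix as in (\ref{decomposition}) inherits this. A sketch with illustrative blocks of our choosing:

```python
import numpy as np

t = 0.7
A1 = np.array([[np.cos(t), -np.sin(t)],
               [np.sin(t),  np.cos(t)]])    # elliptic block in SL(2,R)
A2 = np.array([[2.0, 0.0],
               [0.0, 0.5]])                 # hyperbolic block in SL(2,R)

# block-diagonal matrix as in the splitting of the reduced monodromy
M = np.block([[A1, np.zeros((2, 2))],
              [np.zeros((2, 2)), A2]])

# every eigenvalue is real or lies on the unit circle; no quadruple
# lambda, 1/lambda, conj(lambda), 1/conj(lambda) off both sets can occur
for ev in np.linalg.eigvals(M):
    assert abs(ev.imag) < 1e-10 or abs(abs(ev) - 1.0) < 1e-10
```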
\subsection{Every contact manifold is a Hamiltonian manifold}
\label{subsec:contact}
Let $(\Sigma,\lambda)$ be a contact manifold, i.e. $\Sigma$ is a $2n-1$ dimensional manifold and $\lambda$ a $1$-form on $\Sigma$, called contact form on $\Sigma$, such that $\lambda \wedge (d\lambda)^{\wedge(n-1)}$ is a volume form on $\Sigma$.\ The Reeb vector field $R$ on $\Sigma$ is defined by the conditions $\lambda(R)=1$ and $\iota_R d \lambda = 0$.\ Then $\omega:=d\lambda$ is a Hamiltonian structure on $\Sigma$, i.e. the tuple $(\Sigma, \omega= d \lambda)$ is a Hamiltonian manifold.\ Namely, for $x \in \Sigma$ the Reeb vector field is a non-vanishing section of the line bundle ker$d \lambda$ = ker$\omega$ $\subset T\Sigma$ and
\begin{align} \label{ker_reeb}
\text{ker}\omega_x = \langle R(x) \rangle.
\end{align}
The hyperplane distribution is defined as $ \xi := \text{ker}\lambda \subset T\Sigma $; it is also called the contact structure on $\Sigma$.\ This leads to the decomposition
\begin{align}\label{decomposition_1}
T \Sigma = \xi \oplus \langle R \rangle.
\end{align}
The restriction of $\omega = d \lambda$ to $\xi$ makes $\xi$ a symplectic vector bundle of rank $2n-2$ over $\Sigma$, i.e.\ for $x \in \Sigma$ the space $( \xi_x , \omega |_{\xi_x} = d\lambda | _{\xi_x} )$ is a symplectic vector space.\ The contact structure $\xi = \text{ker}\lambda$ is determined by the contact form $\lambda$, but the converse is not true:\ For every positive smooth function $f$, the 1-form $f\lambda$ is also a contact form that gives the same contact structure, i.e.
$$ \xi = \text{ker} \lambda = \text{ker} f \lambda, $$
but in general, the Reeb vector fields of $\lambda$ and $f \lambda$ are not parallel.\ Therefore the Hamiltonian structure $\omega = d \lambda$ cannot be recovered from the contact structure $\xi = \text{ker} \lambda$ alone.\ Since the Hamiltonian structure $\omega = d \lambda$ determines the dynamics up to time reparametrization, our main focus is on $\omega=d\lambda$.
\begin{remark}
If contact manifolds arise as energy hypersurfaces (\ref{energy_hypersurface}), then by (\ref{line_bundle}) and (\ref{ker_reeb}), i.e.\ by
$$ \langle X_{H} (x) \rangle = \text{ker}\omega_x = \langle R(x) \rangle, $$
the restriction of the Hamiltonian vector field $X_H |_{\Sigma}$ and the Reeb vector field are parallel, i.e.\ up to time reparametrization their flows coincide.\ In view of (\ref{decomposition_1}), for a periodic orbit $x$ with first return time $T_x$ the reduced monodromy (\ref{symplectic_autom_2}) corresponds to the transverse linearized Reeb flow
$$ d \varphi_R^{T_x} (x_0)|_{\xi_{x_0}} \colon \xi_{x_0} \to \xi_{x_0}.$$
\end{remark}
\begin{definition}
A \textbf{contact form} on a Hamiltonian manifold $(\Sigma,\omega)$ is a $1$-form $\lambda$ on $\Sigma$ such that $d\lambda=\omega$ and $\lambda \wedge \omega^{\wedge(n-1)} > 0.$
\end{definition}
\begin{remark}
Every Hamiltonian manifold $(\Sigma,\omega)$ with a contact form $\lambda$ is orientable via the volume form $\lambda \wedge \omega^{\wedge(n-1)} >0$.\ Not every Hamiltonian manifold has a contact form.\ The next lemma gives an obstruction.
\end{remark}
\begin{lemma1}
Let $(\Sigma,\omega)$ be a Hamiltonian manifold which is simply connected. If there exists a closed leaf $L \subset \Sigma$ with filling disk $D \subset \Sigma$ (i.e.\ $\partial D= L$) such that $\int \limits_D \omega \leq 0$, then $(\Sigma,\omega)$ has no contact form.
\end{lemma1}
\begin{proof}
Assume that $(\Sigma,\omega)$ has a contact form $\lambda$ and that there is a closed leaf $L \subset \Sigma$, which is a periodic orbit of the Reeb vector field $R$, with filling disk $D\subset \Sigma$.\ Then there exist $T>0$ and a smooth map $x : [0,T] \to \Sigma$, injective on $[0,T)$, such that $\dot{x}(t) = R\big(x(t)\big)$, $x(T)=x(0)$ and $\text{im}(x)=L$ (see Figure \ref{figure7}).\
\begin{figure}[H]
\centering
\begin{tikzpicture}
[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm]
\clip(-4.2,-2.8) rectangle (4.2,2.8);
\draw [line width=1pt,fill=gray,opacity=0.45] (0,0) circle (2cm);
\draw [line width=1pt] (0,0) circle (2cm);
\draw (1.45,-1.2) node[anchor=north west] {$L= \partial D$};
\draw [->,line width=1pt] (1.4142,1.4142) -- (0.41,2.41);
\draw (1.15,2.3) node[anchor=north west] {$\dot{x}(t) = R \big( x(t) \big)$};
\draw [->,line width=1pt] (0,2) -- (-1.41,2);
\draw [->,line width=1pt] (-1.41,1.41) -- (-2.41,0.41);
\draw [->,line width=1pt] (-2,0) -- (-2,-1.41);
\draw [->,line width=1pt] (-1.41,-1.41) -- (-0.41,-2.41);
\draw [->,line width=1pt] (0,-2) -- (1.41,-2);
\draw [->,line width=1pt] (1.41,-1.41) -- (2.41,-0.41);
\draw [->,line width=1pt] (2,0) -- (2,1.41);
\draw [->,line width=1pt] (-1.41,1.41) -- (-2.41,0.41);
\begin{scriptsize}
\draw [fill=black] (1.4142,1.4142) circle (2pt);
\end{scriptsize}
\end{tikzpicture}
\caption{Periodic orbit of Reeb vector field with filling disk}
\label{figure7}
\end{figure}
\noindent
By Stokes' theorem and the definition of $R$ we obtain
$$ \int \limits_D \omega = \int \limits_D d \lambda = \int \limits_{\partial D = L} \lambda = \int \limits_0^T \lambda \big( x(t) \big) \underbrace{\dot{x}(t)}_{\mathlarger{R\big(x(t)\big)}} dt = \int \limits_0^T 1 dt = T > 0.$$
\end{proof}
In the following let $(\Sigma, \omega = d \lambda, \sigma)$ be an involutive Hamiltonian manifold, i.e.\ $\lambda$ is a contact form on $(\Sigma,\omega)$ and $\sigma$ is a symplectic involution which preserves the orientation of the line bundle ker$\omega$ = ker$d\lambda$.\
\begin{remark} \label{remark_orientation}
For $x \in \Sigma$, as in (\ref{ker_reeb}) and (\ref{decomposition_1}) consider the decomposition
$$ T_x \Sigma = \text{ker}\omega_x + \xi_x. $$
Since $\sigma$ preserves the orientation of the line bundle, the orientation of $T \Sigma$ is $\sigma$-invariant.\
\end{remark}
\begin{remark}
For $x \in \text{Fix}(\sigma)$ we obtain by Corollary \ref{corollary_1} that
$$\langle R(x) \rangle \subset E_1\big( d\sigma(x) \big) = T_x\text{Fix}(\sigma).$$
Moreover, by the first part of the Symplectic Splitting Theorem \ref{theorem_splitting} the $2n-2$ dimensional symplectic vector space $\xi_x$ splits into two symplectic spaces, namely
\begin{align} \label{splitting_contact}
\xi_x = \big( T_x\text{Fix}(\sigma) / \langle R(x) \rangle \big) \oplus \Big( E_{-1}\big( d\sigma(x) \big) \Big) = T_x \Sigma / \langle R(x) \rangle.
\end{align}
\end{remark}
\begin{lemma1} \label{lemma:contact}
The restriction of $\lambda$ to the fixed point set $F = \text{Fix}(\sigma)$ is preserved by $\sigma$ and is a contact form on $F$.
\end{lemma1}
\begin{proof}
Since $\sigma^* \omega = \omega$ and the pullback commutes with the exterior derivative, we have
$$d \lambda = \omega = \sigma^* \omega = \sigma^* d \lambda = d \sigma^* \lambda.$$
Hence there is a closed 1-form $\lambda'$ on $\Sigma$ such that
$$ \sigma^* \lambda = \lambda + \lambda'. $$
Therefore for all $\eta \in T_x F$, using $d\sigma(x)\eta = \eta$, we obtain
$$ \lambda_x(\eta) = \lambda_{x} \big( d \sigma(x)\eta \big) = (\sigma^* \lambda)_x (\eta) = \lambda_x(\eta) + \lambda'_x(\eta), $$
i.e.\ $\lambda'$ vanishes on $T_x F$.\ Thus $\lambda_F := \lambda | _{TF}$ is preserved by $\sigma$.\ We next show that $\lambda_F$ is a contact form on $F$.\ For $x \in F$ consider the symplectic splitting
$$ \text{ker}\lambda_x = \xi_x = \big( T_x F / \langle R(x) \rangle \big) \oplus \Big( E_{-1}\big( d\sigma(x) \big) \Big) = T_x \Sigma / \langle R(x) \rangle $$
from (\ref{splitting_contact}).\ Since $E_1':= T_x F / \langle R(x) \rangle$ is symplectic, for a basis $v_1,...,v_{2k}$ of $E_1'$ we have
$$ \big( (\omega_F)_x \big)^{\wedge k} (v_1,...,v_{2k}) \neq 0. $$
Then on the basis $R,v_1,...,v_{2k}$ of $T_x F$ we obtain
$$ \lambda_F (x) \wedge \big( \omega_F (x) \big)^{\wedge k} (R,v_1,...,v_{2k}) = \lambda_x (R) \cdot (\omega_x)^{\wedge k} (v_1,...,v_{2k}) \neq 0 .$$
\end{proof}
\begin{remark}
As in the spatial Hill lunar problem, the planar CR3BP arises in the spatial CR3BP as the fixed point set of a symplectic involution.\ In \cite{albers} it was proved that the regularized planar CR3BP is of contact type for energies below and also slightly above the first critical value.\ In \cite{cho} it was shown that the regularized spatial CR3BP also has the contact property.\ Now Lemma \ref{lemma:contact} together with the result from \cite{cho} implies the result from \cite{albers}.\
\end{remark}
In the previous lemma it was not necessary that $\lambda$ itself is preserved by $\sigma$.\ In the next lemma and corollary we show that the average $\frac{1}{2} (\lambda + \sigma^* \lambda)$ is always a contact form which is preserved by $\sigma$.\
\begin{lemma1}
$\sigma^* \lambda$ is a contact form on $(\Sigma,\omega)$.
\end{lemma1}
\begin{proof}
It holds that
$$ d \sigma^*\lambda = \sigma^* d \lambda = \sigma^* \omega = \omega. $$
By using $ \lambda \wedge \omega^{\wedge (n-1)} > 0$ and Remark \ref{remark_orientation} we have that
\begin{align} \label{contact_volume}
\sigma^* \lambda \wedge \omega^{\wedge (n-1)} = \sigma^* \lambda \wedge (\sigma^*\omega)^{\wedge (n-1)} = \sigma^*( \lambda \wedge \omega^{\wedge (n-1)} ) > 0.
\end{align}
\end{proof}
\begin{corollary1}
If $\sigma^* \lambda \neq \lambda$, then $\frac{1}{2} (\lambda + \sigma^* \lambda)$ is a contact form on ($\Sigma,\omega$) which is preserved by $\sigma$.
\end{corollary1}
\begin{proof}
We define the average as $ \tilde{\lambda} := \frac{1}{2}( \lambda + \sigma^* \lambda ) $.\ It is easy to see that $d \tilde{\lambda} = \omega$.\ By taking the sum of $ \lambda \wedge \omega^{\wedge (n-1)} > 0$ and (\ref{contact_volume})
we obtain that $ \tilde{\lambda} \wedge \omega^{\wedge (n-1)} >0$, hence $\tilde{\lambda}$ is a contact form on ($\Sigma,\omega$).\ Since $\sigma$ is an involution, it is obvious that $\sigma^* \tilde{\lambda} = \tilde{\lambda}$.\
\end{proof}
\section{On monodromy with respect to symmetries}
\label{sec:4}
\subsection{Symmetries of Hamiltonian systems}
Let $(M,\omega)$ be a $2n$ dimensional symplectic manifold and $H \in C^{\infty}(M,\mathbb{R})$ be an autonomous Hamiltonian function.\ A \textbf{symmetry} is a symplectic or anti-symplectic involution on $(M,\omega)$ which leaves $H$ invariant, meaning that $\sigma \colon M \to M$ is a diffeomorphism satisfying $\sigma^2 = \text{id}_M$, $\sigma^* \omega = \pm \omega$ and $H \circ \sigma = H$.\ If it is symplectic, then we call it a symplectic symmetry and otherwise an anti-symplectic symmetry.\
Let $\sigma$ be a symplectic symmetry.\ Then in view of definition (\ref{ham_vector_field}) of the Hamiltonian vector field $X_H$ we obtain
$$ \omega (X_H,\cdot) = \omega (X_{H \circ \sigma},\cdot) = d (H \circ \sigma) (\cdot) = \sigma^* \big( dH(\cdot) \big) = \sigma^* \big( \omega (X_H,\cdot) \big) = \underbrace{\sigma^* \omega}_{\omega} (\sigma^*X_H,\cdot). $$
Since $\omega$ is non-degenerate, $X_H$ is therefore invariant under $\sigma$, i.e.\
$$ \sigma ^* X_H = X_H. $$
In other words, its flow and $\sigma$ commute, i.e.\
\begin{align} \label{flow_sigma}
\varphi_H^t \circ \sigma = \sigma \circ \varphi_H^t.
\end{align}
Let $\rho$ be an anti-symplectic symmetry; then we have
$$ \rho ^* X_H = - X_H,$$
which means that $X_H$ is anti-invariant under $\rho$.\ Equivalently,
\begin{align} \label{flow_rho}
\varphi_H^t \circ \rho = \rho \circ \varphi_H^{-t}.
\end{align}
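Both relations (\ref{flow_sigma}) and (\ref{flow_rho}) can be checked numerically in a minimal example.\ The following Python sketch uses the planar harmonic oscillator $H(q,p) = \frac{1}{2}(q^2+p^2)$, whose flow is a rotation of the $(q,p)$-plane, together with the symplectic symmetry $\sigma(q,p)=(-q,-p)$ and the anti-symplectic symmetry $\rho(q,p)=(q,-p)$; these particular choices are illustrative and not taken from the text.

```python
import math

def flow(t, q, p):
    """Hamiltonian flow of H = (q^2 + p^2)/2, i.e. q' = p, p' = -q."""
    return (q * math.cos(t) + p * math.sin(t),
            -q * math.sin(t) + p * math.cos(t))

def sigma(q, p):   # symplectic symmetry (determinant +1), leaves H invariant
    return (-q, -p)

def rho(q, p):     # anti-symplectic symmetry (determinant -1), leaves H invariant
    return (q, -p)

def close(u, v, eps=1e-12):
    return all(abs(a - b) < eps for a, b in zip(u, v))

q0, p0, t = 0.7, -1.3, 0.9
# (flow_sigma): the flow commutes with the symplectic symmetry
assert close(flow(t, *sigma(q0, p0)), sigma(*flow(t, q0, p0)))
# (flow_rho): the anti-symplectic symmetry reverses the time direction
assert close(flow(t, *rho(q0, p0)), rho(*flow(-t, q0, p0)))
```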
A periodic orbit $x$ with first return time $T_x$ is called \textbf{symmetric with respect to $\rho$} if
$$ x(t) = \rho \big( x(-t) \big),\quad t \in [0,T_x]. $$
Note that $x(0) = x_0$ and $x(T_x/2)$ both lie in $\text{Fix}(\rho)$.\
\begin{remark}
Consider the standard symplectic vector space $(\mathbb{R}^{2n},\omega_0)$ with
$$ \omega_0 (v,w) : = \langle Jv,w \rangle = v^T J^T w = \langle v,J^Tw \rangle,\quad \text{ for all }v,w \in \mathbb{R}^{2n}, $$
where
$$ J = \begin{pmatrix}
0 & I_n\\
-I_n & 0
\end{pmatrix} $$
with respect to the splitting $\mathbb{R}^{2n} = \mathbb{R}^n \times \mathbb{R}^n$.\ Note that $J^2 = -I_{2n}$ and $J^T=J^{-1}=-J$.\ A linear isomorphism $\Psi$ of $(\mathbb{R}^{2n},\omega_0)$ is called symplectic if $ \omega_0 (\Psi v, \Psi w) = \omega_0 (v,w), \text{ for all }v,w \in \mathbb{R}^{2n},$ which is equivalent to $\Psi^T J \Psi = J$.\ The set of symplectic matrices in $\mathbb{R}^{2n}$ is denoted by
$$ \text{Sp}(n) = \{ \Psi \colon (\mathbb{R}^{2n},\omega_0) \to (\mathbb{R}^{2n},\omega_0) \text{ linear isomorphism } | \text{ } \Psi^T J \Psi = J \}. $$
It is easy to show that if $\Psi,\Phi \in \text{Sp}(n)$, then $\Psi \Phi, \Psi^{-1}$, $\Psi^T \in \text{Sp}(n)$ and also $J \in \text{Sp}(n)$.\ In particular, $\text{Sp}(n)$ is a group under matrix multiplication.\ Moreover, a $2n \times 2n$ matrix which is written as
\begin{align} \label{matrix_block}
\begin{pmatrix}
A & B\\
C & D
\end{pmatrix}
\end{align}
with respect to the splitting $\mathbb{R}^{2n} = \mathbb{R}^n \times \mathbb{R}^n$, is symplectic if and only if
\begin{align} \label{matrix_symplectic}
A^T C , B^T D \text{ are symmetric and } A^T D - C^T B = I_n.
\end{align}
Its inverse is given by
\begin{align*}
\begin{pmatrix}
D^T & -B^T\\
-C^T & A^T
\end{pmatrix}.
\end{align*}
The set of anti-symplectic matrices in $\mathbb{R}^{2n}$ we denote by
$$ \text{Sp}^-(n) = \{ \Psi \colon (\mathbb{R}^{2n},\omega_0) \to (\mathbb{R}^{2n},\omega_0) \text{ linear isomorphism } | \text{ } \Psi^T J \Psi = - J \}, $$
which is not a group, since for $\Psi,\Phi \in \text{Sp}^-(n)$ the product $\Psi \Phi$ is symplectic.\ Nevertheless, $\Psi^{-1},\Psi^T \in \text{Sp}^-(n)$ and $-J \in \text{Sp}^-(n)$.\ A $2n \times 2n$ matrix given in the block form (\ref{matrix_block}) is anti-symplectic if and only if
\begin{align} \label{matrix_anti_symplectic}
A^T C , B^T D \text{ are symmetric and } A^T D - C^T B = -I_n.
\end{align}
The inverse matrix is given by
$$\begin{pmatrix}
-D^T & B^T\\
C^T & -A^T
\end{pmatrix}.$$
\end{remark}
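The block conditions (\ref{matrix_symplectic}) and the inverse formula can be verified on a concrete example.\ The following Python sketch (plain lists, $n=2$) builds a symplectic matrix as the product of two symplectic shears; the sample matrices are an illustrative choice, not taken from the text.

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))] for i in range(len(X))]

def mat_T(X):
    return [list(r) for r in zip(*X)]

def mat_sub(X, Y):
    return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def is_zero(X, eps=1e-12):
    return all(abs(v) < eps for r in X for v in r)

I2 = [[1, 0], [0, 1]]
I4 = [[float(i == j) for j in range(4)] for i in range(4)]
J4 = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]

def blocks(P):   # split a 4x4 matrix into its 2x2 blocks A, B, C, D
    A = [r[:2] for r in P[:2]]; B = [r[2:] for r in P[:2]]
    C = [r[:2] for r in P[2:]]; D = [r[2:] for r in P[2:]]
    return A, B, C, D

# a sample symplectic matrix: product of two symplectic shears
# (upper shear with symmetric B0, lower shear with symmetric C0)
S1 = [[1, 0, 2, 1], [0, 1, 1, 3], [0, 0, 1, 0], [0, 0, 0, 1]]
S2 = [[1, 0, 0, 0], [0, 1, 0, 0], [1, -1, 1, 0], [-1, 2, 0, 1]]
Psi = mat_mul(S1, S2)

# Psi^T J Psi = J
assert is_zero(mat_sub(mat_mul(mat_T(Psi), mat_mul(J4, Psi)), J4))

A, B, C, D = blocks(Psi)
# A^T C and B^T D are symmetric, A^T D - C^T B = I
assert is_zero(mat_sub(mat_mul(mat_T(A), C), mat_T(mat_mul(mat_T(A), C))))
assert is_zero(mat_sub(mat_mul(mat_T(B), D), mat_T(mat_mul(mat_T(B), D))))
assert is_zero(mat_sub(mat_sub(mat_mul(mat_T(A), D), mat_mul(mat_T(C), B)), I2))

# the stated inverse: Psi^{-1} = [[D^T, -B^T], [-C^T, A^T]]
neg = lambda X: [[-v for v in r] for r in X]
Inv = [dt + nbt for dt, nbt in zip(mat_T(D), neg(mat_T(B)))] + \
      [nct + at for nct, at in zip(neg(mat_T(C)), mat_T(A))]
assert is_zero(mat_sub(mat_mul(Psi, Inv), I4))
```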
\subsection{Monodromy with respect to symplectic symmetries}
\label{sec:2.2}
Let $\sigma$ be a symplectic symmetry and $x$ be a periodic orbit with $x_0 \in \text{Fix}(\sigma)$ and first return time $T_x$.\ In particular, the symplectic vector space $T_{x_0}M$ splits into two symplectic subspaces
$$ T_{x_0} M = E_1\big( d \sigma (x_0) \big) \oplus E_{-1}\big( d \sigma (x_0) \big) = T_{x_0}\text{Fix}(\sigma) \oplus E_{-1}\big( d \sigma (x_0) \big), $$
since they are symplectically orthogonal.\ Moreover, because the Hamiltonian flow and $\sigma$ commute (see (\ref{flow_sigma})), the monodromy
$$ d \varphi_H^{T_x} (x_0) \colon T_{x_0} M \to T_{x_0}M $$
leaves the two eigenspaces invariant, i.e.\ if $\xi \in E_{\pm 1} \big( d \sigma(x_0) \big)$ then $d \varphi_H^{T_x}(x_0) \xi \in E_{\pm 1} \big( d \sigma(x_0) \big)$.\ Therefore the monodromy is of the form
$$ d\varphi _H ^{T_x} (x_0) = \begin{pmatrix}
A_1 & 0\\
0 & A_2
\end{pmatrix}, $$
where
$$ A_1 \colon T_{x_0} \text{Fix}(\sigma) \to T_{x_0} \text{Fix}(\sigma) ,\quad A_2 \colon E_{-1}\big( d\sigma(x_0) \big) \to E_{-1}\big( d\sigma(x_0) \big) $$
are symplectic matrices.\ Recall from (\ref{ker_invariant}) that $X_H(x_0)$ is an eigenvector of the monodromy to the eigenvalue 1, hence
$$ X_H(x_0) \in T_{x_0}\text{Fix}(\sigma) .$$
On energy level sets $\Sigma$, which are orientable by (\ref{orientable}), $\sigma|_{\Sigma}$ preserves the orientation of the line bundle ker$\omega | _{\Sigma} = \langle X_H | _{\Sigma} \rangle$.\ Note that the triple $(\Sigma, \omega | _{\Sigma}, \sigma | _{\Sigma})$ is an involutive Hamiltonian manifold and
$$ \text{dim}\big( T_{x_0}\text{Fix}(\sigma) \cap T_{x_0} \Sigma \big) = \text{dim}\big(\text{Fix}(\sigma)\big) - 1,\quad E_{-1}\big( d\sigma | _{\Sigma}(x_0) \big) = E_{-1}\big( d\sigma(x_0) \big). $$
Therefore by the Symplectic Splitting Theorem \ref{theorem_splitting}, the $2n-2$ dimensional symplectic vector space $T_{x_0}\Sigma / (\text{ker}\omega_{x_0} | _{T_{x_0}\Sigma})$ splits into two symplectic vector spaces, namely
$$ T_{x_0}\Sigma / (\text{ker}\omega_{x_0} | _{T_{x_0}\Sigma}) = \big( T_{x_0}\text{Fix}(\sigma | _{\Sigma}) / (\text{ker}\omega_{x_0} | _{T_{x_0}\Sigma}) \big) \oplus \Big( E_{-1}\big( d\sigma(x_0) \big) \Big) $$
and the reduced monodromy is of the form
$$ \overline{d\varphi _H ^{T_x} | _{\Sigma} (x_0)} = \begin{pmatrix}
\overline{A}_1 & 0\\
0 & A_2
\end{pmatrix}, $$
where $\overline{A}_1$ is a linear symplectic map on $T_{x_0} \text{Fix}(\sigma|_{\Sigma}) / (\text{ker}\omega_{x_0} | _{T_{x_0}\Sigma})$.\
\begin{remark}
Note that a periodic orbit starting at the fixed point set of $\sigma$ stays there forever and the Floquet multipliers are given by those of $\overline{A}_1$ and $A_2$.\
\end{remark}
\subsection{Monodromy with respect to anti-symplectic symmetries}
\label{sec:4.3}
\subsubsection{Monodromy}
\label{sec:4.3.1}
Let $\rho$ be an anti-symplectic symmetry and $x$ be a periodic orbit with first return time $T_x$ which is symmetric with respect to $\rho$.\ Then the differential
$$d \rho (x_0) \colon T_{x_0}M \to T_{x_0}M $$
is a linear anti-symplectic involution on the symplectic vector space $(T_{x_0}M,\omega_{x_0})$.\
\begin{lemma1}
The decomposition
\begin{align} \label{lagrangian_splitting}
T_{x_0}M = T_{x_0}\text{Fix}(\rho) \oplus E_{-1}\big( d \rho(x_0) \big)
\end{align}
is a Lagrangian splitting, meaning that the two eigenspaces are Lagrangian subspaces, i.e.\ each has dimension $\frac{1}{2} \text{dim} (M) = n$ and $\omega_{x_0}$ vanishes on each of them.\
\end{lemma1}
\begin{proof}
Their dimensions are $n$ in view of the isomorphism
$$ E_1 \big( d\rho(x_0) \big) = T_{x_0}\text{Fix}(\rho) \to E_{-1} \big( d\rho(x_0) \big),\quad \xi + d \rho (x_0) \xi \mapsto \xi - d \rho (x_0) \xi $$
and the decomposition
$$T_{x_0}M \to T_{x_0}\text{Fix}(\rho) \oplus E_{-1}\big( d \rho(x_0) \big),\quad \xi \mapsto \frac{1}{2} \big( \xi + d \rho (x_0) \xi \big) + \frac{1}{2} \big( \xi - d \rho (x_0) \xi \big). $$
Moreover, for all $\xi, \eta \in T_{x_0}\text{Fix}(\rho)$ we have
$$ \omega_{x_0} (\xi,\eta) = - (\rho^* \omega)_{x_0} (\xi,\eta) = - \omega_{\rho(x_0)} \big( d\rho(x_0) \xi, d\rho(x_0) \eta \big) = - \omega_{x_0} (\xi,\eta), $$
hence $\omega_{x_0} (\xi,\eta)=0$.\ The same holds for all $\xi, \eta \in E_{-1}\big( d \rho(x_0) \big)$.\
\end{proof}
The existence of a symplectic (or canonical) basis for a symplectic vector space is given by a skew-symmetric version of the Gram--Schmidt process (see for instance \cite[pp.\ 3--4]{hofer}).\ Similarly, in view of the Lagrangian splitting (\ref{lagrangian_splitting}), there exist bases
$$(v_1,...,v_n) \text{ of } T_{x_0}\text{Fix}(\rho),\quad (w_1,...,w_n) \text{ of } E_{-1}\big( d \rho(x_0) \big)$$
such that
$$ \omega_{x_0}(v_i,w_j) = \delta_{ij},\quad i,j=1,...,n $$
and
\begin{align} \label{basis_symplectic}
v_1,...,v_n,w_1,...,w_n
\end{align}
is a symplectic basis of $T_{x_0}M$ (see for instance \cite[pp.\ 532--533]{albers_frauenfelder}).\ We refer to this kind of basis as a \textbf{Lagrangian basis}.\ With respect to this basis the differential $d \rho (x_0)$ is represented by the standard anti-symplectic involution
$$ \rho_0 = \begin{pmatrix}
I_n & 0\\
0 & -I_n
\end{pmatrix}.$$
\begin{Proposition}\label{prop_block}
The monodromy written in block form (\ref{matrix_block}) has the form
\begin{align} \label{matrix_block_2}
d \varphi_H^{T_x} (x_0) = \begin{pmatrix}
A & B\\
C & A^T
\end{pmatrix},
\end{align}
where
\begin{align} \label{matrix_block_3}
B, C, CA, AB \text{ are symmetric and } A^2 - BC = I_n.
\end{align}
\end{Proposition}
\begin{proof}
In view of (\ref{flow_rho}) we get
\begin{align} \label{equation_monodromy}
d \rho (x_0) \circ d \varphi_H^{T_x} (x_0) \circ d \rho (x_0) = \big( d \varphi_H^{T_x} (x_0) \big)^{-1},
\end{align}
which means that the monodromy is conjugated to its inverse by the linear anti-symplectic involution.\ With respect to the choice of a Lagrangian basis (\ref{basis_symplectic}) and the monodromy written in block form (\ref{matrix_block}), the equation (\ref{equation_monodromy}) becomes
$$ \begin{pmatrix}
I_n & 0\\
0 & -I_n
\end{pmatrix} \begin{pmatrix}
A & B\\
C & D
\end{pmatrix} \begin{pmatrix}
I_n & 0\\
0 & -I_n
\end{pmatrix} = \begin{pmatrix}
D^T & -B^T\\
-C^T & A^T
\end{pmatrix}, $$
which is equivalent to
$$ \begin{pmatrix}
A & -B\\
-C & D
\end{pmatrix} = \begin{pmatrix}
D^T & -B^T\\
-C^T & A^T
\end{pmatrix} $$
and the statement follows by (\ref{matrix_symplectic}).\
\end{proof}
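For $n=1$ the content of Proposition \ref{prop_block} can be checked in a few lines of Python: the relation $\rho_0 M \rho_0 = M^{-1}$ holds precisely when the two diagonal entries of a determinant-one matrix agree.\ The sample matrices below are illustrative choices.

```python
def mul2(X, Y):
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

rho0 = ((1, 0), (0, -1))   # the standard anti-symplectic involution

# an element of Sp^{rho_0}(1): equal diagonal entries, a^2 - bc = 1
a, b, c = 2, 3, 1          # 4 - 3 = 1
M = ((a, b), (c, a))
Minv = ((a, -b), (-c, a))  # inverse of a determinant-one matrix
assert mul2(M, Minv) == ((1, 0), (0, 1))
# rho0 M rho0 = M^{-1}, cf. (equation_monodromy)
assert mul2(rho0, mul2(M, rho0)) == Minv

# a determinant-one matrix with unequal diagonal entries fails the relation
N = ((2, 1), (1, 1))
Ninv = ((1, -1), (-1, 2))
assert mul2(N, Ninv) == ((1, 0), (0, 1))
assert mul2(rho0, mul2(N, rho0)) != Ninv
```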
\begin{lemma1}
All linear anti-symplectic involutions are symplectically conjugated to each other.\
\end{lemma1}
\begin{proof}
Let $\rho_1, \rho_2 \in \text{Sp}^{-}(n)$ and let $\{v_1,...,v_n,w_1,...,w_n\}$ and $\{\tilde{v}_1,...,\tilde{v}_n,\tilde{w}_1,...,\tilde{w}_n\}$ be corresponding Lagrangian bases, respectively.\ Then the linear map defined by
$$ v_i \mapsto \tilde{v}_i,\quad w_i \mapsto \tilde{w}_i,\quad i=1,...,n $$
is symplectic and conjugates $\rho_1$ into $\rho_2$.\
\end{proof}
\begin{corollary1} \label{corollary_involution}
Every $\rho \in \text{Sp}^{-}(n)$ is symplectically conjugated to the standard anti-symplectic involution $\rho_0$.\
\end{corollary1}
We denote the set of symplectic matrices of the form (\ref{matrix_block_2}) satisfying (\ref{matrix_block_3}) by
$$ \text{Sp}^{\rho_0}(n) = \left\{ \begin{pmatrix}
A & B\\
C & A^T
\end{pmatrix} \colon B, C, CA, AB \text{ are symmetric and } A^2 - BC = I_n \right\}. $$
The next lemma generalizes Proposition \ref{prop_block}.\
\begin{lemma1}
Every symplectic matrix $\Psi \in \text{Sp}(n)$ is symplectically conjugated to a symplectic matrix from $\text{Sp}^{\rho_0}(n)$.\
\end{lemma1}
\begin{proof}
By a theorem of Wonenburger \cite{wonenburger}, every $\Psi \in \text{Sp}(n)$ is the product of two linear anti-symplectic involutions, i.e.\
$$ \Psi = \rho_1 \rho_2, $$
where $\rho_1,\rho_2 \in \text{Sp}^-(n)$.\ Hence
$$ \Psi^{-1} = \rho_2 \rho_1 = \rho_1 \Psi \rho_1, $$
which is a general form of the equation (\ref{equation_monodromy}).\ By Corollary \ref{corollary_involution}, there exists $\Psi_1 \in \text{Sp}(n)$ such that
$$ \Psi_1^{-1} \rho_1 \Psi_1 = \rho_0. $$
This implies
$$ \Psi_1^{-1} \Psi \Psi_1 = \Psi_1^{-1} (\rho_1 \rho_2 ) \Psi_1 = \Psi_1^{-1} (\Psi_1 \rho_0 \Psi_1^{-1} ) \rho_2 \Psi_1 = \rho_0 \Psi_1^{-1} \rho_2 \Psi_1. $$
One readily checks that $\Psi_1^{-1} \rho_2 \Psi_1$ belongs to $\text{Sp}^-(n)$, thus $\Psi$ is symplectically conjugated to the product of two linear anti-symplectic involutions as well.\ In addition,
$$ (\Psi_1^{-1} \Psi \Psi_1)^{-1} = \Psi_1^{-1} \rho_2 \Psi_1 \rho_0 = \rho_0 ( \Psi_1^{-1} \Psi \Psi_1 ) \rho_0, $$
i.e.\ the symplectic conjugacy $\Psi_1^{-1} \Psi \Psi_1$ is conjugated to its inverse by the standard anti-symplectic involution.\ Therefore by the same steps as in the proof of Proposition \ref{prop_block}, the symplectic conjugacy $\Psi_1^{-1} \Psi \Psi_1$ is an element from $\text{Sp}^{\rho_0}(n)$.\
\end{proof}
\begin{remark} \label{remark4.8}
Since $X_H$ is anti-invariant under $\rho$, we have
$$ X_H(x_0) \in E_{-1} \big( d \rho(x_0) \big). $$
On energy level sets $\Sigma$, the subspace $ T_{x_0} \text{Fix} (\rho | _{\Sigma}) $ is $n-1$ dimensional, and on the quotient by the line bundle ker$\omega | _{\Sigma} = \langle X_H | _{\Sigma} \rangle$ we obtain the splitting
$$ T_{x_0}\Sigma / (\text{ker}\omega_{x_0} | _{T_{x_0}\Sigma}) = \big( T_{x_0} \text{Fix}(\rho | _{\Sigma}) \big) \oplus \Big( E_{-1} \big( d\rho|_{\Sigma} (x_0) \big) / ( \text{ker}\omega_{x_0} | _{T_{x_0}\Sigma} ) \Big). $$
By using the Lagrangian basis $v_1,...,v_n,w_1,...,w_n$ (see (\ref{basis_symplectic})), $n-1$ basis vectors of the first vector space are determined by $v_1,...,v_n$ and the energy condition.\ We denote them by $\tilde{v}_1,...,\tilde{v}_{n-1}$.\ By the Steinitz exchange lemma on $w_1,...,w_n$ and $X_H|_{\Sigma}(x_0)$ we choose $\tilde{w}_1,...,\tilde{w}_{n-1}$ such that
$$ \omega_{x_0}(\tilde{v}_i,\tilde{w}_j) = \delta_{ij},\quad i,j=1,...,n-1 $$
and
$$ T_{x_0} \Sigma = \langle \tilde{v}_1,...,\tilde{v}_{n-1} \rangle_{\mathbb{R}} \oplus \langle \tilde{w}_1,...,\tilde{w}_{n-1}, X_H|_{\Sigma}(x_0) \rangle_{\mathbb{R}}. $$
With respect to this basis, the reduced monodromy is an element from $\text{Sp}^{\rho_0}(n-1)$.\
\end{remark}
\subsubsection{The signatures of a symmetric periodic orbit}
If the monodromy is a symplectic matrix from $\text{Sp}^{\rho_0}(n)$, then the following lemma shows that its spectrum is determined by the spectrum of $A$.\
\begin{lemma1}
The characteristic polynomial of the monodromy equals
$$ \lambda^n \det \big( -2A - (- \lambda - \frac{1}{\lambda})I_n \big), $$
i.e.\ $\lambda^n \chi_{-2A} (- \lambda - \frac{1}{\lambda})$.
\end{lemma1}
\begin{proof}
The decomposition
$$ \begin{pmatrix}
A - \lambda I_n & B\\
C & A^T - \lambda I_n
\end{pmatrix} \begin{pmatrix}
A - \lambda I_n & 0\\
-C & I_n
\end{pmatrix} = \begin{pmatrix}
\lambda^2 I_n - 2 \lambda A + I_n & B\\
0 & A^T - \lambda I_n
\end{pmatrix} $$
implies, after taking determinants and cancelling the common factor $\det(A - \lambda I_n) = \det(A^T - \lambda I_n)$ (first for $\lambda$ outside the spectrum of $A$, then for all $\lambda$ by continuity), that the characteristic polynomial of the monodromy is given by
$$ \det ( \lambda^2 I_n - 2 \lambda A + I_n ), $$
which is equivalent to $\lambda^n \det \big( -2A - (- \lambda - \frac{1}{\lambda})I_n \big)$.
\end{proof}
\begin{remark}
In the case $n=1$, i.e.\
$$ d \varphi_H^{T_x} (x_0) = \begin{pmatrix}
a & b\\
c & a
\end{pmatrix},\quad a^2 - bc = 1, $$
where $a,b,c \in \mathbb{R}$, we have for its characteristic polynomial,
$$ \lambda^2 - 2a \lambda + 1. $$
Note that $a$ is half of the trace.\ For the case $n=2$ we obtain
$$ \lambda^4 - 2 \text{tr}A \lambda^3 + (2 + 4 \det A)\lambda^2 - 2 \text{tr}A \lambda + 1. $$
\end{remark}
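The $n=2$ formula can be verified on a concrete element of $\text{Sp}^{\rho_0}(2)$.\ The following Python sketch uses diagonal blocks (an illustrative choice for which the conditions (\ref{matrix_block_3}) hold automatically) and computes the characteristic polynomial exactly via the Faddeev--LeVerrier recursion.

```python
from fractions import Fraction as F

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))] for i in range(len(X))]

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

def char_poly(M):
    """Faddeev-LeVerrier: coefficients [1, c_1, ..., c_n] of det(lambda*I - M)."""
    n = len(M)
    I = [[F(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs, N = [F(1)], I
    for k in range(1, n + 1):
        MN = mat_mul(M, N)
        c = -trace(MN) / k
        coeffs.append(c)
        N = [[MN[i][j] + (c if i == j else 0) for j in range(n)] for i in range(n)]
    return coeffs

# blocks A = diag(2,3), B = diag(1,2), C = diag(3,4), D = A^T = A;
# diagonal blocks commute, and 2^2 - 1*3 = 1, 3^2 - 2*4 = 1, so A^2 - BC = I
M = [[F(2), 0, F(1), 0],
     [0, F(3), 0, F(2)],
     [F(3), 0, F(2), 0],
     [0, F(4), 0, F(3)]]

trA, detA = F(5), F(6)
# predicted: lambda^4 - 2 tr(A) lambda^3 + (2 + 4 det(A)) lambda^2 - 2 tr(A) lambda + 1
assert char_poly(M) == [1, -2 * trA, 2 + 4 * detA, -2 * trA, 1]
```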
\begin{definition}
Let $\lambda$ be an eigenvalue of $A$ of multiplicity $1$, let $v$ be an eigenvector of $A$ and $\tilde{v}$ an eigenvector of $A^T$, both for the eigenvalue $\lambda$.\ The \textbf{signature with respect to $C$} of a symmetric periodic orbit is defined as the signature of $v^TCv$ and the \textbf{signature with respect to $B$} as the signature of $\tilde{v}^T B \tilde{v}$.\ We denote them by
$$\text{sign}_C (\lambda) = \text{sign} (v^TCv), \quad \text{sign}_B (\lambda) = \text{sign}(\tilde{v}^T B \tilde{v}),$$
respectively.\
\end{definition}
\begin{remark}
Neither of the signatures depends on the choice of eigenvector, since for a constant $k \in \mathbb{R}^*$,
$$ \text{sign}\big( (kv)^T C (kv) \big) = \text{sign}\big( k^2 \, v^T C v \big) = \text{sign} (v^T C v) = \text{sign}_C(\lambda). $$
\end{remark}
\begin{remark} \label{remark4.13}
If a Lagrangian basis (\ref{basis_symplectic}) is given, then $R \in \text{GL}(n,\mathbb{R})$ acts on $\text{Sp}^{\rho_0}(n)$ by conjugation
$$ \begin{pmatrix}
R & 0\\
0 & (R^{-1})^T
\end{pmatrix} \begin{pmatrix}
A & B\\
C & A^T
\end{pmatrix} \begin{pmatrix}
R^{-1} & 0\\
0 & R^T
\end{pmatrix} = \begin{pmatrix}
RAR^{-1} & RBR^T\\
(R^{-1})^TCR^{-1} & (R^{-1})^TA^TR^T
\end{pmatrix}, $$
where $(R^{-1})^TA^TR^T = (RAR^{-1})^T$.\ Moreover, this basis change is symplectic, since
$$ \begin{pmatrix}
R^T & 0\\
0 & R^{-1}
\end{pmatrix} \begin{pmatrix}
0 & I_n\\
-I_n & 0
\end{pmatrix} \begin{pmatrix}
R & 0\\
0 & (R^{-1})^T
\end{pmatrix} = \begin{pmatrix}
0 & I_n\\
-I_n & 0
\end{pmatrix}. $$
\end{remark}
\begin{Proposition} \label{prop_signature}
Both signatures of a symmetric periodic orbit are invariant under the Lagrangian basis change, i.e.\ they are independent of the choice of the Lagrangian basis.\
\end{Proposition}
\begin{proof}
For the invariance of $\text{sign}_C(\lambda)$ we consider the identity
$$ AR^{-1}R v = Av = \lambda v, $$
thus
$$ RAR^{-1}Rv = \lambda Rv, $$
meaning that $Rv$ is an eigenvector of $RAR^{-1}$ to the eigenvalue $\lambda$.\ In view of the transformations
$$ C \mapsto (R^{-1})^TCR^{-1} ,\quad A \mapsto RAR^{-1},\quad v \mapsto Rv, $$
we have
$$ \text{sign} \big( (Rv)^T (R^{-1})^T C R^{-1} (Rv) \big) = \text{sign} (v^T C v) = \text{sign}_C(\lambda). $$
For $\text{sign}_B(\lambda)$ we consider
$$ A^T R^T (R^{-1})^T \tilde{v} = A^T \tilde{v} = \lambda \tilde{v} $$
and therefore
$$ (R^{-1})^T A^T R^T (R^{-1})^T \tilde{v} = \lambda (R^{-1})^T \tilde{v} ,$$
which means that $(R^{-1})^T \tilde{v}$ is an eigenvector of $(R^{-1})^TA^TR^T$ to the eigenvalue $\lambda$.\ In view of
$$ B \mapsto RBR^T,\quad A^T \mapsto (R^T)^{-1}A^TR^T,\quad \tilde{v} \mapsto (R^{-1})^T \tilde{v},$$
we obtain
$$ \text{sign} \Big( \big( (R^{-1})^T \tilde{v} \big)^T RBR^T (R^{-1})^T \tilde{v} \Big) = \text{sign} (\tilde{v}^T B \tilde{v}) = \text{sign}_B(\lambda).$$
\end{proof}
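The invariance of $\text{sign}_C(\lambda)$ can also be checked numerically; the following Python sketch uses a diagonal $A$ with simple eigenvalue $2$ and an illustrative basis change $R \in \text{GL}(2,\mathbb{R})$ (all data are sample choices, not from the text).\ In fact, the quadratic form itself, not only its sign, is preserved.

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))] for i in range(len(X))]

def mat_T(X):
    return [list(r) for r in zip(*X)]

def mat_vec(X, v):
    return [sum(X[i][j] * v[j] for j in range(len(v))) for i in range(len(X))]

quad = lambda Q, u: sum(u[i] * Q[i][j] * u[j] for i in range(len(u)) for j in range(len(u)))

# illustrative data: v = e_1 is an eigenvector of A for the simple eigenvalue 2
A = [[2, 0], [0, 3]]
C = [[3, 0], [0, 4]]
v = [1, 0]

# a basis change R in GL(2,R) together with its exact inverse
R = [[1, 0], [2, 1]]
Rinv = [[1, 0], [-2, 1]]

Aprime = mat_mul(R, mat_mul(A, Rinv))            # A -> R A R^{-1}
Cprime = mat_mul(mat_T(Rinv), mat_mul(C, Rinv))  # C -> (R^{-1})^T C R^{-1}
Rv = mat_vec(R, v)                               # v -> R v

# Rv is an eigenvector of R A R^{-1} for the same eigenvalue 2
assert mat_vec(Aprime, Rv) == [2 * x for x in Rv]
# the quadratic form is unchanged, hence sign_C(2) is invariant
assert quad(Cprime, Rv) == quad(C, v)
```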
\begin{remark} \label{remark4.15}
In the case $n=1$, scaling on the Lagrangian subspaces yields
$$ \begin{pmatrix}
k & 0\\
0 & \frac{1}{k}
\end{pmatrix} \begin{pmatrix}
a & b\\
c & a
\end{pmatrix} \begin{pmatrix}
\frac{1}{k} & 0\\
0 & k
\end{pmatrix} = \begin{pmatrix}
a & k^2 b\\
\frac{1}{k^2}c & a
\end{pmatrix},\quad k \in \mathbb{R}^*, $$
hence the trace, and thus $a$, is invariant under this conjugation as well.\
\end{remark}
\subsection{Monodromy when the symplectic \& anti-symplectic symmetries commute}
\label{sec:4.4}
Let $\sigma$ be a symplectic and $\rho$ be an anti-symplectic symmetry such that
$$ \sigma \circ \rho = \rho \circ \sigma. $$
Moreover, let $x$ be a periodic orbit with $x_0 \in \text{Fix}(\sigma)$ and first return time $T_x$, which is symmetric with respect to $\rho$, hence $x_0 \in \text{Fix}(\rho)$.\ Consider the symplectic decomposition
$$ T_{x_0} M = T_{x_0}\text{Fix}(\sigma) \oplus E_{-1} \big( d \sigma(x_0)\big) $$
and let
$$ \text{dim} \big( T_{x_0}\text{Fix}(\sigma) \big) = 2k,\quad \text{dim} \Big( E_{-1} \big( d \sigma(x_0)\big) \Big) = 2 \tilde{k},\quad 2k + 2\tilde{k} = 2n. $$
\begin{lemma1}
The monodromy is of the form
$$ d\varphi _H ^{T_x} (x_0) = \begin{pmatrix}
A_1 & 0\\
0 & A_2
\end{pmatrix}, $$
where
$$ A_1 \colon T_{x_0} \text{Fix}(\sigma) \to T_{x_0} \text{Fix}(\sigma),\quad A_2 \colon E_{-1}\big( d\sigma(x_0) \big) \to E_{-1}\big( d\sigma(x_0) \big) $$
and
$$ A_1 \in \text{Sp}^{\rho_0}(k),\quad A_2 \in \text{Sp}^{\rho_0}(\tilde{k}) .$$
\end{lemma1}
\begin{proof}
Since $\sigma$ and $\rho$ commute, the linear anti-symplectic involution
$$ d \rho (x_0) \colon T_{x_0} M \to T_{x_0} M $$
leaves the symplectic decomposition
$$ T_{x_0} M = T_{x_0}\text{Fix}(\sigma) \oplus E_{-1} \big( d \sigma(x_0)\big) $$
invariant, meaning that for $\xi \in E_{\pm 1} \big( d \sigma(x_0) \big)$ we have $d \rho(x_0) \xi \in E_{\pm 1} \big( d \sigma(x_0) \big)$.\ If we denote
$$ E_1^{d\sigma}:= T_{x_0}\text{Fix}(\sigma),\quad E_{-1}^{d\sigma}:= E_{-1}\big( d\sigma(x_0)\big), $$
this invariance implies that the restrictions
$$ d\rho|_{E_1^{d\sigma}} (x_0) \colon E_1^{d\sigma} \to E_1^{d\sigma},\quad d\rho|_{E_{-1}^{d\sigma}} (x_0) \colon E_{-1}^{d\sigma} \to E_{-1}^{d\sigma} $$
are linear anti-symplectic involutions.\ Hence the symplectic decomposition splits into two Lagrangian splittings,
\begin{align*}
T_{x_0}M &= E_1^{d\sigma} \oplus E_{-1}^{d\sigma}\\
&= \Big(E_1 \big( d\rho|_{E_1^{d\sigma}} (x_0) \big) \oplus E_{-1} \big( d\rho|_{E_1^{d\sigma}} (x_0)\big)\Big) \oplus \Big(E_1 \big( d\rho|_{E_{-1}^{d\sigma}} (x_0) \big) \oplus E_{-1} \big( d\rho|_{E_{-1}^{d\sigma}} (x_0)\big)\Big),
\end{align*}
with Lagrangian bases
$$ v_1,...,v_k,w_1,...,w_k,\quad \tilde{v}_1,...,\tilde{v}_{\tilde{k}},\tilde{w}_1,...,\tilde{w}_{\tilde{k}}, $$
respectively.\ In view of Section \ref{sec:4.3.1}, this proves the lemma.\
\end{proof}
\begin{remark}
The two Lagrangian bases from the proof give a Lagrangian basis
$$ v_1,...,v_k,\tilde{v}_1,...,\tilde{v}_{\tilde{k}},w_1,...,w_k,\tilde{w}_1,...,\tilde{w}_{\tilde{k}} $$
on $T_{x_0}M$ with respect to the Lagrangian splitting and decomposition
\begin{align*}
T_{x_0}M &= T_{x_0}\text{Fix}(\rho) \oplus E_{-1}\big(d\rho(x_0)\big)\\
&= \Big(E_1 \big( d\rho|_{E_1^{d\sigma}} (x_0) \big) \oplus E_{1} \big( d\rho|_{E_{-1}^{d\sigma}} (x_0)\big)\Big) \oplus \Big(E_{-1} \big( d\rho|_{E_1^{d\sigma}} (x_0) \big) \oplus E_{-1} \big( d\rho|_{E_{-1}^{d\sigma}} (x_0)\big)\Big).
\end{align*}
\end{remark}
\begin{remark}
In view of Remark \ref{remark4.8}, the reduced monodromy is of the form
$$ \overline{d\varphi _H ^{T_x} | _{\Sigma} (x_0)} = \begin{pmatrix}
\overline{A}_1 & 0\\
0 & A_2
\end{pmatrix}, $$
where
$$\overline{A}_1 \colon T_{x_0} \text{Fix}(\sigma|_{\Sigma}) / (\text{ker}\omega_{x_0} | _{T_{x_0}\Sigma}) \to T_{x_0} \text{Fix}(\sigma|_{\Sigma}) / (\text{ker}\omega_{x_0} | _{T_{x_0}\Sigma}),\quad \overline{A}_1 \in \text{Sp}^{\rho_0}(k-1). $$
\end{remark}
\subsection{GIT quotient in dimension two}
\label{sec:4.5}
Let $G$ be a Lie group, i.e.\ $G$ is a group and a smooth manifold such that
$$ G \to G,\quad g \mapsto g^{-1},\quad \quad G \times G \to G,\quad (g,h) \mapsto gh $$
are smooth.\ Let $G$ act on a manifold $M$, meaning that there is a group homomorphism $ \psi \colon G \to \text{Diff}(M)$ such that
$$ G \times M \to M,\quad (g,m) \mapsto \psi(g)(m) =: g_* m $$
is smooth.\ For any $m \in M$ the orbit through $m$ is the set $Gm = \{g_* m \mid g \in G\}$.\ If $m$ and $n$ lie in the same orbit, then $Gm=Gn$.\ Moreover, $M$ can be written as the disjoint union of orbits and the space of orbits is the quotient space $M/G$ which is in general not a Hausdorff space.\ To ensure the Hausdorff property we consider the orbit closure relation on $M$ which is defined by
$$ m \sim n \quad : \Leftrightarrow \quad \overline{Gm} \cap \overline{Gn} \neq \emptyset, $$
meaning that $m$ is related to $n$ if the closures of the orbits through $m$ and $n$ intersect.\ It is clear that this relation is reflexive and symmetric, but it is not necessarily transitive.\ If it is an equivalence relation, then the \textbf{GIT quotient} (geometric invariant theory quotient) is defined as
$$ M /\!/ G := M / \sim. $$
\begin{example}
Let $\mathbb{R}_{>0}$ act on $\mathbb{R}$ by multiplication; then there are exactly three orbits, $ \mathbb{R}^-, \{0\}$ and $\mathbb{R}^+$.\ The open sets in the orbit space $\mathbb{R} / \mathbb{R}_{>0}$ are $ \emptyset, \{ \mathbb{R}^- \}, \{\mathbb{R}^+\}, \{ \mathbb{R}^-, \mathbb{R}^+ \}$ and $\{ \mathbb{R}^-, \{ 0\}, \mathbb{R}^+\},$
hence it is not Hausdorff.\ By the closure of the orbits $ \overline{\{0\}} = \{0\}, \overline{\mathbb{R}^-} = (-\infty,0]$ and $\overline{\mathbb{R}^+} = [0,\infty)$ we see that the GIT quotient is
$$ \mathbb{R} /\!/ \mathbb{R}_{>0} = \{\text{pt}.\} .$$
\end{example}
\begin{example}
Let $\text{GL}(n,\mathbb{R})$ act on $\text{Mat}(n,\mathbb{R})$ by conjugation.\ The GIT quotient ignores the Jordan block structure, therefore two matrices $A,B$ are equivalent if and only if their characteristic polynomials are the same (see \cite[Appendix A]{frauenfelder_moreno} for details).\ For a matrix $A$ let $\chi_A(\lambda) = \lambda^n + a_{n-1}\lambda^{n-1} + ... + a_0$ be its characteristic polynomial.\ Then a homeomorphism is given by
$$ \text{Mat}(n,\mathbb{R}) /\!/ \text{GL}(n,\mathbb{R}) \to \mathbb{R}^n,\quad [A] \mapsto (a_{n-1},...,a_0). $$
\end{example}
\begin{example}
For this paper the relevant GIT quotient is
$$ \text{Sp}^{\rho_0} (1) /\!/ \text{GL}(1,\mathbb{R}), $$
which is well studied in \cite[pp.\ 25--28]{zhou}.\ In particular, this space is important for the study of periodic orbits whose reduced monodromy is an element from $\text{Sp}^{\rho_0}(1)$.\ Let $A \in \text{Sp}^{\rho_0} (1)$, and recall from Subsection \ref{sec:stability_floquet} that the eigenvalues in the elliptic and hyperbolic case are resp.\ given by
$$ a \pm \text{i} \sqrt{1-a^2},\quad a \pm \sqrt{a^2 - 1}. $$ Each of the positive and negative hyperbolic cases consists of two subcases, namely
\begin{center}
\begin{tabular}{cccc}\centering
pos. hyperb. I & pos. hyperb. II & neg. hyperb. I & neg. hyperb. II\\
$\begin{pmatrix}
a > 1 & b < 0\\
c < 0 & a > 1
\end{pmatrix}$ & $\begin{pmatrix}
a > 1 & b > 0\\
c > 0 & a > 1
\end{pmatrix}$ & $\begin{pmatrix}
a < -1 & b > 0\\
c > 0 & a < -1
\end{pmatrix}$ & $\begin{pmatrix}
a < -1 & b < 0\\
c < 0 & a < -1
\end{pmatrix}$
\end{tabular}
\end{center}
Furthermore, recall from Remark \ref{remark4.15} that the action is given by
$$ k_* A := \begin{pmatrix}
a & k^2 b\\
\frac{1}{k^2}c & a
\end{pmatrix},\quad k \in \mathbb{R}^*. $$
Note that $A_1, A_2 \in \text{Sp}^{\rho_0} (1)$ are equivalent in the GIT quotient if and only if the closures of their orbits intersect, i.e.\ $ \overline{\{ k_* A_1 \mid k \in \mathbb{R}^* \}} \cap \overline{\{ k_* A_2 \mid k \in \mathbb{R}^* \}} \neq \emptyset $.\
If 1 is not an eigenvalue and $b\neq0,c\neq0$, then one can always choose $k$ such that $k^2 b = \pm \frac{1}{k^2}c$.\
If $A$ is elliptic, then $A$ is equivalent to a rotation of $\mathbb{R}^2$.\
If $A$ is hyperbolic, then $A$ is equivalent to
$$ \begin{pmatrix}
a & \pm \sqrt{a^2 - 1}\\
\pm \sqrt{a^2 - 1} & a
\end{pmatrix},\quad a>1 \text{ or } a<-1 .$$
Obviously, the identity matrix lies in the closure of the orbits of
$$ \begin{pmatrix}
1 & \pm b\\
0 & 1
\end{pmatrix},\quad \begin{pmatrix}
1 & 0\\
\pm b & 1
\end{pmatrix},\quad b > 0, $$
hence these three matrices are equivalent in the GIT quotient and identified to the single point $\{ +1 \}$.\ The same holds for the matrices obtained by replacing the diagonal entries $1$ by $-1$, which are identified to the single point $\{ -1 \}$.\
Topologically, the GIT quotient $\text{Sp}^{\rho_0} (1) /\!/ \text{GL}(1,\mathbb{R})$ is isomorphic to a circle with four spikes (see Figure \ref{circle_four_spikes}).\ Geometrically, the unit circle $\{ z \in \ \mathbb{C} \mid |z| = 1 \} \setminus \{ \pm 1 \}$ corresponds to equivalence classes of elliptic matrices, and each spike minus $\{ \pm 1 \}$ represents a hyperbolic subcase.\
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-8,-4) rectangle (8,4.5);
\draw [line width=1.5pt,color=blue] (0,0) circle (1.5cm);
\draw [->,line width=1pt] (-4,0) -- (4.5,0);
\draw [->,line width=1pt] (0,-4) -- (0,4.5);
\draw [fill=blue,color=blue] (1.5,0) circle (2pt);
\draw [fill=blue,color=blue] (-1.5,0) circle (2pt);
\draw (-1.7,3.2) node[anchor=north west,color=blue] {$\begin{pmatrix}
\cos \theta & - \sin \theta\\
\sin \theta & \cos \theta
\end{pmatrix}$};
\draw (-1.7,-1.6) node[anchor=north west,color=blue] {$\begin{pmatrix}
\cos \theta & \sin \theta\\
- \sin \theta & \cos \theta
\end{pmatrix}$};
\draw (3,3.5) node[anchor=north west,color=blue] {$\begin{pmatrix}
\cosh x & \sinh x\\
\sinh x & \cosh x
\end{pmatrix}$};
\draw (3.4,1.8) node[anchor=north west,color=blue] {$\text{pos. hyperb. II}$};
\draw (-7.1,3.5) node[anchor=north west,color=blue] {$\begin{pmatrix}
- \cosh x & \sinh x\\
\sinh x & - \cosh x
\end{pmatrix}$};
\draw (3.4,-1.2) node[anchor=north west,color=blue] {$\text{pos. hyperb. I}$};
\draw (-7.1,-1.9) node[anchor=north west,color=blue] {$\begin{pmatrix}
- \cosh x & - \sinh x\\
- \sinh x & - \cosh x
\end{pmatrix}$};
\draw (-6.5,1.8) node[anchor=north west,color=blue] {$\text{neg. hyperb. I}$};
\draw (3,-1.9) node[anchor=north west,color=blue] {$\begin{pmatrix}
\cosh x & - \sinh x\\
- \sinh x & \cosh x
\end{pmatrix}$};
\draw (-6.5,-1.2) node[anchor=north west,color=blue] {$\text{neg. hyperb. II}$};
\draw [line width=1.5pt,color=blue] (3,2.8284271247) .. controls (1,0) .. (3,-2.8284271247);
\draw [line width=1.5pt,color=blue] (-3,2.8284271247) .. controls (-1,0) .. (-3,-2.8284271247);
\end{tikzpicture}
\caption{Topology of $\text{Sp}^{\rho_0} (1) /\!/ \text{GL}(1,\mathbb{R})$}
\label{circle_four_spikes}
\end{figure}
\noindent
Note that the eigenvalues of the hyperbolic matrices are $e^{\pm x}$ for the pos.\ hyperb.\ and $-e^{\pm x}$ for the neg.\ hyperb.\ cases, which equal the Floquet multipliers $\lambda$ and $1/\lambda$.\ Furthermore, in the elliptic case, if $b<0$, then the rotation is by $\theta \in (0,\pi)$ and if $b>0$, then it is by $-\theta$, so the rotation angle equals $2\pi - \theta \in (\pi,2\pi)$.\
\end{example}
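The normalization of an elliptic matrix to a rotation can be carried out numerically.\ The following Python sketch picks an illustrative elliptic $A \in \text{Sp}^{\rho_0}(1)$ with $b>0$, solves $k^4 = -c/b$, and confirms that $k_* A$ is the rotation by $-\theta$ with $\cos \theta = a$, as stated above.

```python
import math

# an elliptic element of Sp^{rho_0}(1): a^2 - bc = 1 with a^2 < 1, so bc = a^2 - 1 < 0
a, b = 0.5, 2.0
c = (a * a - 1.0) / b          # = -0.375

# choose k with k^2 b = -c / k^2, i.e. k^4 = -c/b
k = (-c / b) ** 0.25
A_scaled = ((a, k * k * b), (c / (k * k), a))

theta = math.acos(a)
# since b > 0, the result is the rotation by -theta
rotation = ((math.cos(theta), math.sin(theta)),
            (-math.sin(theta), math.cos(theta)))

for i in range(2):
    for j in range(2):
        assert abs(A_scaled[i][j] - rotation[i][j]) < 1e-12
```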
\begin{example}
Topologically, the GIT quotient $\text{Sp}(1) /\!/ \text{Sp}(1)$, where $\text{Sp}(1) = \text{SL}(2,\mathbb{R})$ acts on itself by conjugation, is isomorphic to a circle with two spikes (see Figure \ref{circle_two_spikes} and \cite[Section 10.5]{frauenfelder} for details).\ Geometrically, the unit circle $\{ z \in \ \mathbb{C} \mid |z| = 1 \} \setminus \{ \pm 1 \}$ corresponds to equivalence classes of elliptic matrices, the spike $\{ r \in \mathbb{R} \mid r > 1 \}$ represents the positive hyperbolic case, the spike $\{ r \in \mathbb{R} \mid r < -1 \}$ represents the negative hyperbolic case.\ Moreover, since
$$ \begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}, \begin{pmatrix}
1 & 1\\
0 & 1
\end{pmatrix}, \begin{pmatrix}
1 & -1\\
0 & 1
\end{pmatrix},\quad\quad \begin{pmatrix}
-1 & 0\\
0 & -1
\end{pmatrix}, \begin{pmatrix}
-1 & 1\\
0 & -1
\end{pmatrix}, \begin{pmatrix}
-1 & -1\\
0 & -1
\end{pmatrix}, $$
are resp.\ equivalent, the single point $\{ +1 \}$ corresponds to the first three Jordan forms and the single point $\{ -1 \}$ corresponds to the second three Jordan forms.\
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-8,-2) rectangle (8,2.5);
\draw [line width=1.5pt,color=blue] (0,0) circle (1.5cm);
\draw [line width=1.5pt,color=blue] (-3,0) -- (-1.5,0);
\draw [line width=1pt] (-1.5,0) -- (1.5,0);
\draw [line width=1.5pt,color=blue] (1.5,0) -- (3,0);
\draw [->,line width=1pt] (0,-2) -- (0,2.5);
\draw [fill=blue,color=blue] (1.5,0) circle (2pt);
\draw [fill=blue,color=blue] (-1.5,0) circle (2pt);
\end{tikzpicture}
\caption{Topology of $\text{Sp}(1) /\!/ \text{Sp}(1)$}
\label{circle_two_spikes}
\end{figure}
\end{example}
\section{Planar and spatial Hill lunar problem} \label{sec:spatial_Hills_lunar}
\begin{center}
\textit{``The pioneer in the search for periodic orbits in the restricted three-body problem was Hill}\\
\textit{with his discovery of the retrograde and direct periodic orbit in Hill's lunar problem...}\\
\textit{His motivation was to describe the motion of the moon."}
\end{center}
\begin{flushright}
- Urs Frauenfelder and Otto van Koert \cite[p.\ 94]{frauenfelder}
\end{flushright}
\subsection{Short astronomical lunar overview and Hill's concept}
In astronomy, one distinguishes two kinds of periodic orbits, retrograde and direct ones.\ The sun rotates about its own axis.\ The planets circle around the sun in the same direction, and all of them rotate about their own axes in this direction, except Venus and Uranus, which rotate about their own axes in the opposite direction.\ Usually, moons, the companions of a planet, move around the planet in the same direction in which the planet circles around the sun.\ Such a periodic orbit of the moon is called a direct periodic orbit; in the other case it is called a retrograde one, see Figure \ref{figure1}.\ For instance, Neptune's moon ``Triton" is retrograde.\ Our moon is direct.
\begin{figure}[H]
\centering
\definecolor{qqqqff}{rgb}{0.0,0.0,1.0}
\definecolor{uuuuuu}{rgb}{0.26666666666666666,0.26666666666666666,0.26666666666666666}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-1.5,-1.4) rectangle (10.0,1.4);
\draw [fill=black,fill opacity=0.3, line width=1pt] (-0.4,0.0) circle (1.0cm);
\draw [fill=black,fill opacity=0.3,line width=1pt] (4.0,0.0) circle (0.3cm);
\draw [shift={(0.0-0.4,0.0)},line width=1pt] [decoration={markings, mark=at position 0.8 with {\arrow{>}}}, postaction={decorate}] plot[domain=-0.6186443509247468:0.7687330396835076,variable=\t]({1.0*1.4484474446799924*cos(\t r)+-0.0*1.4484474446799924*sin(\t r)},{0.0*1.4484474446799924*cos(\t r)+1.0*1.4484474446799924*sin(\t r)});
\draw [shift={(0,0.0)},line width=1pt] [decoration={markings, mark=at position 0.18 with {\arrow{>}}}, postaction={decorate}] plot[domain=0.07500383243652106:0.6998928697192437,variable=\t]({1.0*3.9761263820253814*cos(\t r)+-0.0*3.9761263820253814*sin(\t r)},{0.0*3.9761263820253814*cos(\t r)+1.0*3.9761263820253814*sin(\t r)});
\draw [shift={(3.95,0.0)},line width=1pt] [decoration={markings, mark=at position 0.65 with {\arrow{>}}}, postaction={decorate}] plot[domain=-0.588002603547574:0.8615476803894101,variable=\t]({1.0*0.5812585692566169*cos(\t r)+-0.0*0.5812585692566169*sin(\t r)},{0.0*0.5812585692566169*cos(\t r)+1.0*0.5812585692566169*sin(\t r)});
\draw (-0.3923106060605868-0.4,-1.05) node[anchor=north west] {$sun$};
\draw (3.37018939393941,-0.2249368686868733) node[anchor=north west] {$planet$};
\draw [dashed,line width=1pt] (4.0,0.0) [decoration={markings, mark=at position 0.224 with {\arrow{>}}}, postaction={decorate}] circle (0.9105960150386894cm);
\draw (4.746325757575772,1.0984722222222156) node[anchor=north west] {$moon$};
\begin{scriptsize}
\draw [fill=black] (4.673598484848501,0.6127398989898932) circle (2pt);
\end{scriptsize}
\draw [fill=black,fill opacity=0.3,line width=1pt] (4.0+4,0.0) circle (0.3cm);
\draw [shift={(4.0,-0.0)},line width=1pt] [decoration={markings, mark=at position 0.18 with {\arrow{>}}}, postaction={decorate}] plot[domain=0.07500383243652106:0.6998928697192437,variable=\t]({1.0*3.9761263820253814*cos(\t r)+-0.0*3.9761263820253814*sin(\t r)},{0.0*3.9761263820253814*cos(\t r)+1.0*3.9761263820253814*sin(\t r)});
\draw [shift={(3.95+4,0.0)},line width=1pt] [decoration={markings, mark=at position 0.65 with {\arrow{>}}}, postaction={decorate}] plot[domain=-0.588002603547574:0.8615476803894101,variable=\t]({1.0*0.5812585692566169*cos(\t r)+-0.0*0.5812585692566169*sin(\t r)},{0.0*0.5812585692566169*cos(\t r)+1.0*0.5812585692566169*sin(\t r)});
\draw (3.37018939393941+4,-0.2249368686868733) node[anchor=north west] {$planet$};
\draw [dashed,line width=1pt] (4.0+4,0.0) [decoration={markings, mark=at position 0.04 with {\arrow{<}}}, postaction={decorate}] circle (0.9105960150386894cm);
\draw (4.746325757575772+4,1.0984722222222156) node[anchor=north west] {$moon$};
\begin{scriptsize}
\draw [fill=black] (4.673598484848501+4,0.6127398989898932) circle (2pt);
\end{scriptsize}
\end{tikzpicture}
\caption{Direct and retrograde periodic orbit}
\label{figure1}
\end{figure}
\noindent
The synodic month, or lunation, is the average period between two successive full moons, i.e.,\ the average period of the Moon's orbit with respect to the line joining the sun and the earth.\ Of course one can look at new moons instead.\ If the moon is closer to the earth, then its motion is faster, i.e.\ the speed of the moon varies.\ The anomalistic month is the time the moon takes from the closest point (``perigee") back to the closest point, or equivalently to return to the same speed.\ Considering the farthest point (``apogee") gives the same period.\ The orbit of the moon is inclined to the ecliptic by about $5^{\circ}$, and the draconitic month is the period from one intersection point with the ecliptic, called node, back to itself.\ The term ``draconitic", or ``the dragon of the nodes", has the following background.\ When the moon is near one of its nodes, a solar eclipse can occur at new moon and a lunar eclipse at full moon.\ In the past, during such eclipses people said that a dragon was eating the sun or the moon.\ The mean values of these three periods, named in the abstract, were already known to the Babylonians around 600 BCE.\ For a description of Babylonian astronomy we refer to Neugebauer \cite{neugebauer} and to the more recent work by Brack-Bernsen \cite{brack_bernsen}, and for the history of lunar theory to Linton \cite{linton}.\ These periods are also mentioned in Ptolemy's (c.90--c.160) Almagest (see \cite{toomer}), which was one of the most influential works for scientists, in particular for those mentioned in the following quote.\
\begin{center}
\textit{``Over the
centuries, through the work of men such as Ptolemy, Ibn ash-Sh\={a}\d{t}ir,}\\
\textit{Copernicus, Tycho Brahe, Kepler, and Newton, models of the heavens}\\
\textit{came to reproduce the results of observations with greater and greater accuracy.}"
\end{center}
\begin{flushright}
- Christopher M. Linton \cite[p.\ xi]{linton}
\end{flushright}
According to planetological data (see \cite[pp.\ 52--53, 62]{schultz}) we have Table \ref{planetological_data}.\
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c}
& mass & mean distance to sun & mean distance to earth\\
\hline sun & $1.989 \times 10^{30}$ kg & & \\
earth & $5.98 \times 10^{24}$ kg & $149.6 \times 10^6$ km & \\
moon & $7.35 \times 10^{22}$ kg & & $384000$ km
\end{tabular}
\caption{Planetological data}
\label{planetological_data}
\end{table}
\noindent
We see that the mass of the earth is about 0.0003\% of the mass of the sun, and the mass of the moon is about 1.229\% of the mass of the earth.\ Moreover, the mean distance of the moon to the earth is about 0.25668\% of the mean distance between earth and sun.\ Given these extreme proportions, Hill's idea was to study a limit case in which the earth is at the origin, the moon is very close to it, but the huge sun is infinitely far away.\ One zooms into a region around the earth (see Figure \ref{figure:hill}); this is a simplified model of the circular restricted three body problem.\
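These percentages follow directly from Table \ref{planetological_data}; the following minimal Python sketch (variable names are ours) recomputes them.

```python
# Recomputing the percentages quoted above from the values in Table 1.
m_sun, m_earth, m_moon = 1.989e30, 5.98e24, 7.35e22      # masses in kg
d_earth_sun, d_moon_earth = 149.6e6, 3.84e5              # distances in km

earth_to_sun_mass = 100 * m_earth / m_sun                # ~ 0.0003 %
moon_to_earth_mass = 100 * m_moon / m_earth              # ~ 1.229 %
moon_to_sun_distance = 100 * d_moon_earth / d_earth_sun  # ~ 0.25668 %
```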
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-2.5,-1) rectangle (2.5,1.1);
\draw (-2.52,-0.05999999999999944) node[anchor=north west] {$s$};
\draw (0.06,-0.019999999999999435) node[anchor=north west] {$e$};
\draw (0.22,0.6600000000000007) node[anchor=north west] {$m$};
\draw [dashed,color=black,line width=1pt] (0.0,-0.0) circle (1.0cm);
\begin{scriptsize}
\draw [fill=black] (0.0,-0.0) circle (2.0pt);
\draw [fill=black] (-2.0,0.0) circle (4.5pt);
\draw [fill=black] (0.28,0.24) circle (0.5pt);
\end{scriptsize}
\end{tikzpicture}
\caption{Hill lunar problem}
\label{figure:hill}
\end{figure}
Hill's clever concept was that in his equation the true trajectory of the moon must be close to a periodic orbit centered at the earth, called ``variational orbit" or ``Hill's intermediate orbit", i.e.\ the true orbit is almost periodic.\ For the ``motion of the lunar perigee" (i.e.\ the analysis of the anomalistic period) he arrived, by transforming his equations into an infinite set of homogeneous linear equations, at an infinite determinant corresponding to this system written in power series.\ He was the first to attack such a problem.\ In 1877 John Couch Adams (1819--1892) used exactly the same method in \cite{adams} to study the motion of the lunar node (i.e.\ the analysis of the draconitic period).\ Since the determinant is infinite, it was not obvious that it converges.\ In 1881, according to \cite[p.\ 116]{wilson}, Poincaré, who was greatly influenced by Hill's approach, proved the relevant theorem, namely that an infinite determinant converges if and only if the non-diagonal elements have finite sum and the product of the elements on the diagonal is finite.\ We refer to \cite{gutzwiller_2} and \cite{wilson} for more details and a lot of history.\
Nowadays nobody makes use of their enormous computations (see \cite{adams}, \cite{hill_det} and \cite{hill}).\ In contrast to this computational approach of Hill and Adams, our geometrical way leads to a much less computational approximation of their periods in terms of the Floquet multipliers of the linearized spatial Hill equation and their Conley--Zehnder indices.\
\subsection{Derivation of the Hamiltonians}
\label{subsec:discussion}
Since our periodic orbits lie in the plane, we begin by introducing the planar circular restricted three body problem (PCR3BP from now on).\ In the restricted three body problem we consider two masses, which we call sun and earth, and a massless moon which does not influence the two masses and is attracted by them according to Newton's law of gravitation.\ We denote them respectively by $s$, $e$ and $m$.\ This assumption is a good approximation of the actual system in view of the ratios of their masses (see Table \ref{planetological_data} in the previous subsection).\ The goal is to understand the dynamics of the massless body, which moves in the same plane as the sun and the earth.\ Moreover, we normalize the total mass to unity, i.e.\ the mass of the earth is $\mu \in [0,1]$ and that of the sun is $1-\mu$.\ If $\mu$ is bigger than $1-\mu$, then we can just interchange their roles.\ Furthermore, the two masses move on circles around their common center of mass, with coordinates $ s(t)= - \mu (\cos t, \sin t)$ and $e(t) = (1 - \mu)(\cos t, \sin t)$ (see Figure \ref{figure8}).\
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-4.2,-2.3) rectangle (4.2,2.8);
\draw(0.0,0.0) [decoration={markings, mark=at position 0.63 with {\arrow{>}}}, postaction={decorate},line width=1pt] circle (1.0cm);
\draw(0.0,0.0) [decoration={markings, mark=at position 0.065 with {\arrow{>}}}, postaction={decorate},line width=1pt] circle (2.0cm);
\draw [->,line width=1pt] (0.0,-3.0) -- (0.0,2.8);
\draw (-1.826666666666667,0.6999999999999997) node[anchor=north west] {$s(t)$};
\draw (1.9533333333333345,0.01999999999999944) node[anchor=north west] {$e(t)$};
\draw (3.533333333333354,-0.040000000000000584) node[anchor=north west] {$q_1$};
\draw (0.19333333333333377,2.9400000000000006) node[anchor=north west] {$q_2$};
\draw (2.5533333333333344,1.6799999999999998) node[anchor=north west] {$m$};
\draw [->,line width=1pt] (-3.0,0.0) -- (3.6,0.0);
\begin{scriptsize}
\draw [fill=black] (0.0,0.0) circle (2pt);
\draw [fill=black] (-1.0,1.2246467991473532E-16) circle (2pt);
\draw [fill=black] (2.0,0.0) circle (2pt);
\draw [fill=black] (2.54,1.28) circle (1pt);
\end{scriptsize}
\end{tikzpicture}
\caption{PCR3BP}
\label{figure8}
\end{figure}
\noindent
We exclude collisions of the moon with one of the masses, such that the configuration space is $\mathbb{R}^2 \setminus\{s(t),e(t)\}$ and the phase space the trivial cotangent bundle $T^*\big(\mathbb{R}^2\setminus\{s(t),e(t)\}\big) = \big(\mathbb{R}^2\setminus\{s(t),e(t)\}\big) \times \mathbb{R}^2$ with the canonical symplectic form $\omega = dq_1 \wedge dp_1 + dq_2 \wedge dp_2$.\ Let $q=(q_1,q_2)$ denote the position of $m$ and $p=(p_1,p_2)$ its momentum in the fiber.\
Because the Hamiltonian of the moon in the inertial frame, given by the kinetic energy and Newton's potential, depends on time, the energy is not preserved by the Hamiltonian flow.\ Hence we consider the angular momentum
\begin{align} \label{angular_momentum_original}
L \colon T^* \mathbb{R}^2 \to \mathbb{R},\quad (q,p) \mapsto p_1q_2 - p_2q_1,
\end{align}
which generates a uniform counterclockwise rotation of the coordinate system such that the sun and the earth are fixed at $s = (- \mu,0)$ and $e = (1 - \mu,0)$ on the $q_1$-axis in the rotating frame.\
In this new coordinate system, the Hamiltonian describing the motion of the moon is now autonomous,
$$H \colon T^* \big(\mathbb{R}^2\setminus\{s,e\}\big) \to \mathbb{R},\quad (q,p) \mapsto \frac{1}{2}|p|^2 - \frac{1-\mu}{|q-s|} - \frac{\mu}{|q-e|} + p_1q_2 - p_2q_1.$$
This integral of motion was discovered by Jacobi, so it is called the Jacobi integral.\ The traditional Jacobi constant is $-2H$.\ To understand the physics, we complete the squares and obtain
\begin{align} \label{hamiltonian_1}
H(q,p) = \frac{1}{2}\big( (p_1 + q_2)^2 + (p_2 - q_1)^2 \big) - \frac{1-\mu}{|q-s|} - \frac{\mu}{|q-e|} - \frac{1}{2}(q_1^2 + q_2^2).
\end{align}
The last three terms only depend on the position $q$, so one defines the effective potential by
\begin{align} \label{effective_1}
U \colon \mathbb{R}^2\setminus\{s,e\} \to \mathbb{R},\quad q \mapsto - \frac{1-\mu}{|q-s|} - \frac{\mu}{|q-e|} - \frac{1}{2}(q_1^2 + q_2^2).
\end{align}
Note that $U$ consists of the Newtonian potential for the gravitational attraction exerted by the sun and the earth and of the term $-\frac{1}{2}(q_1^2 + q_2^2)$, which appears in rotating coordinates and corresponds to the centrifugal force.\ The kinetic part contains a twist corresponding to an extra velocity-dependent force, namely the Coriolis force.\ The dynamics of the moon is therefore very complicated.\
The Lagrange points are the critical points of the Hamiltonian $H$.\ The set of critical points of $H$ and of the effective potential $U$ are in bijection under the restriction of the footpoint projection
$$\pi\vert_{\text{crit}(H)} \colon \text{crit}(H) \to \text{crit}(U),\quad (q,p) \mapsto q,$$
and its inverse given by $(q_1,q_2) \mapsto (q_1,q_2,-q_2,q_1)$.\
For masses $\mu \in (0,1)$, i.e.\ if neither the sun nor the earth has zero mass, there are five Lagrange points (see \cite[pp.\ 63--68]{frauenfelder} for details):\ $L_1$, $L_2$ and $L_3$ are saddle points and $L_4$ and $L_5$ are global maxima of $U$.\
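As an illustration of the collinear critical points, the following minimal sketch locates $L_1$ between the earth and the sun by a bisection on $\partial U/\partial q_1$ along the $q_1$-axis; the mass ratio $\mu = 0.01$ is an arbitrary illustrative value, not a physical one.

```python
# Locating the collinear Lagrange point L1 for an illustrative mass mu = 0.01:
# on the q1-axis strictly between the sun s = (-mu, 0) and the earth
# e = (1 - mu, 0), critical points of U solve dU/dq1 = 0.
mu = 0.01

def dU_dq1(q1):
    # derivative of U(q1, 0) for -mu < q1 < 1 - mu
    return (1 - mu) / (q1 + mu) ** 2 - mu / (1 - mu - q1) ** 2 - q1

# bisection: dU/dq1 is strictly decreasing here and changes sign
a, b = 0.5, 0.98
for _ in range(80):
    c = 0.5 * (a + b)
    if dU_dq1(a) * dU_dq1(c) <= 0:
        b = c
    else:
        a = c
L1 = 0.5 * (a + b)   # close to the rough Hill-radius estimate 1 - mu - (mu/3)**(1/3)
```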
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-4.5,-2.2) rectangle (4.5,2.2);
\draw (-1.4066666666666668,0.01999999999999887) node[anchor=north west] {$s$};
\draw (1.8833333333333345,0.01999999999999887) node[anchor=north west] {$e$};
\draw (-0.22666666666666638,0.6069999999999991) node[anchor=north west] {$0$};
\draw [line width=1pt] (-3.0,0.0)-- (4.0,0.0);
\draw (1.1333333333333342,0.01999999999999887) node[anchor=north west] {$\textcolor{red}{L_1}$};
\draw (-2.446666666666667,0.01999999999999887) node[anchor=north west] {$\textcolor{red}{L_3}$};
\draw (1.1133333333333342,2.32) node[anchor=north west] {$\textcolor{red}{L_4}$};
\draw (1.013333333333334,-1.7200000000000017) node[anchor=north west] {$\textcolor{red}{L_5}$};
\draw (2.3333333333333344,0.01999999999999887) node[anchor=north west] {$\textcolor{red}{L_2}$};
\begin{scriptsize}
\draw [fill=black] (0.0,0.0) circle (2pt);
\draw [fill=black] (-1.0,0.0) circle (2pt);
\draw [fill=black] (2.0,0.0) circle (2pt);
\draw [fill=red,color=red] (-2.0,0.0) circle (2pt);
\draw [fill=red,color=red] (1.7,0.0) circle (2pt);
\draw [fill=red,color=red] (2.3,0.0) circle (2pt);
\draw [fill=red,color=red] (1.0,1.73) circle (2pt);
\draw [fill=red,color=red] (1.0,-1.73) circle (2pt);
\end{scriptsize}
\end{tikzpicture}
\caption{The five Lagrange points}
\label{figure9}
\end{figure}
\noindent
One limit case of the PCR3BP is the rotating Kepler problem, a two body problem, for which $\mu = 0$, i.e.\ the earth is massless and can be neglected.\ Hence the moon is just attracted by the sun with mass 1.\ For a detailed discussion we refer to \cite[pp.\ 71--72]{frauenfelder}.\ Another limit case is the Hill lunar problem, whose idea and motivation we described in the introduction.\ In contrast to the Kepler problem, the motion here is chaotic, although the Hamiltonian (\ref{hamiltonian_1}) simplifies.\
Recall from the introduction that the orbit of the moon is inclined to the ecliptic, thus from now on the moon moves spatially in three dimensional Euclidean space $\mathbb{R}^3$, i.e.\ its position is given by $q=(q_1,q_2,q_3)$ and its momentum by $p=(p_1,p_2,p_3)$.\ The fixed sun and earth in rotating coordinates are at the positions
$$s = (-\mu, 0,0),\quad e =(1-\mu,0,0).$$
We assume that the sun has a much bigger mass than the earth and the earth a much bigger mass than the moon.\ In addition, the moon moves very close to the earth.\ The goal is to shift the earth into the origin, blow up the coordinates around it and let its mass $\mu$ tend to zero.\ Hence we zoom into a region around the earth (see Figure \ref{figure:hill}).\
Based on the arguments for the planar case in \cite[pp.\ 77--78]{frauenfelder}, we treat the spatial problem in an analogous way.\ The symplectic change of coordinates and momenta
$$ T^* \mathbb{R}^3 \to T^*\mathbb{R}^3, \quad (q,p) \mapsto (q_1 - 1 + \mu, q_2,q_3, p_1, p_2 - 1 + \mu,p_3) $$
puts the earth to the origin and the sun to $(-1,0,0)$.\ By adding the constant $\frac{(1-\mu)^2}{2}$, which does not change the Hamiltonian vector field, we obtain the new Hamiltonian $\widetilde{H}$ on $T^*\big( \mathbb{R}^3 \setminus \{ (-1,0,0),(0,0,0) \} \big)$,
$$\widetilde{H}(q,p) = \frac{1}{2}|p|^2 - \frac{\mu}{|q|} - (1 - \mu) \Bigg( \frac{1}{\sqrt{(q_1 + 1)^2 + q_2^2 + q_3^2}} + q_1 \Bigg) + p_1q_2 - p_2q_1. $$
Now consider the conformally symplectic scaling (the blow up)
$$ \phi_{\mu} \colon T^*\mathbb{R}^3 \to T^*\mathbb{R}^3,\quad (q,p) \mapsto (\mu^{\frac{1}{3}}q,\mu^{\frac{1}{3}}p) $$
by the constant conformal factor $\mu^{\frac{2}{3}}$, i.e.\ $\phi^*_{\mu} \omega = \mu^{\frac{2}{3}}\omega$.\ We define the family of Hamiltonians
$$H^{\mu} \colon T^* \big( \mathbb{R}^3 \setminus \{ (-\mu^{-\frac{1}{3}},0,0),(0,0,0) \} \big) \to \mathbb{R},\quad (q,p) \mapsto \mu^{-\frac{2}{3}} \big( ( \widetilde{H} \circ \phi_\mu )(q,p) + 1 - \mu \big). $$
Then
$$ \phi_{\mu}^* X_{\widetilde{H}} = X_{H^{\mu}} $$
and, in explicit form,
$$ H^{\mu}(q,p) = \frac{1}{2}|p|^2 - \frac{1}{|q|} + p_1 q_2 - p_2 q_1 - \frac{1 - \mu}{\mu^{\frac{2}{3}}} \bigg( \frac{1}{\sqrt{ 1 + 2 \mu^{\frac{1}{3}} q_1 + \mu^{\frac{2}{3}}|q|^2 } } + \mu^{\frac{1}{3}} q_1 - 1 \bigg). $$
We simplify $H^{\mu}$ by using the second order Taylor expansion of the function
$$\frac{1}{\sqrt{1+x}} = 1 - \frac{x}{2} + \frac{3x^2}{8} + \mathcal{O}(x^3)$$
for $|x|<1$.\ By setting $x = 2\mu^{\frac{1}{3}}q_1 + \mu^{\frac{2}{3}}|q|^2$, we obtain
$$ H^{\mu}(q,p) = \frac{1}{2}|p|^2 - \frac{1}{|q|} + p_1 q_2 - p_2 q_1 - \bigg( - \frac{1}{2}|q|^2 + \frac{3}{2}q_1^2 + \mathcal{O}(\mu)\bigg). $$
For $\mu \to 0$, $H^{\mu}$ converges uniformly in the $C^{\infty}$-topology on each compact subset to the following Hamiltonian $H$, which we call \textbf{Hamiltonian of the spatial Hill lunar problem}.\ It is given by
\begin{align} \label{hamiltonian_hill}
H \colon T^* \big( \mathbb{R}^3 \setminus \{ (0,0,0) \} \big) \to \mathbb{R},\quad (q,p) \mapsto \frac{1}{2}|p|^2 - \frac{1}{|q|} + p_1q_2 - p_2q_1 - q_1^2 + \frac{1}{2}q_2^2 + \frac{1}{2}q_3^2.
\end{align}
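This limit can be checked numerically.\ The following sketch (sample point and values of $\mu$ chosen arbitrarily) evaluates $H^{\mu}$ and the limit Hamiltonian at a fixed $(q,p)$ and verifies that the error decays as $\mu \to 0$.

```python
import math

def H_mu(mu, q, p):
    # the rescaled Hamiltonian H^mu before passing to the limit mu -> 0
    q1, q2, q3 = q
    p1, p2, p3 = p
    m13, m23 = mu ** (1/3), mu ** (2/3)
    qq = q1*q1 + q2*q2 + q3*q3
    core = 0.5*(p1*p1 + p2*p2 + p3*p3) - 1/math.sqrt(qq) + p1*q2 - p2*q1
    sun = (1 - mu)/m23 * (1/math.sqrt(1 + 2*m13*q1 + m23*qq) + m13*q1 - 1)
    return core - sun

def H_hill(q, p):
    # the limit: Hamiltonian of the spatial Hill lunar problem
    q1, q2, q3 = q
    p1, p2, p3 = p
    r = math.sqrt(q1*q1 + q2*q2 + q3*q3)
    return (0.5*(p1*p1 + p2*p2 + p3*p3) - 1/r + p1*q2 - p2*q1
            - q1*q1 + 0.5*q2*q2 + 0.5*q3*q3)

q, p = (0.3, 0.2, 0.1), (0.1, 0.2, 0.3)
errors = [abs(H_mu(m, q, p) - H_hill(q, p)) for m in (1e-6, 1e-9, 1e-12)]
```

The error behaves like $\mathcal{O}(\mu^{\frac{1}{3}})$, consistent with the remainder of the Taylor expansion above.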
Completing the squares gives us
\begin{align} \label{hamiltonian_2}
H \colon T^* \big( \mathbb{R}^3 \setminus \{ (0,0,0) \} \big) \to \mathbb{R},\quad (q,p) \mapsto \frac{1}{2}\big( (p_1 + q_2)^2 + (p_2 - q_1)^2 + p_3^2 \big) +V(q),
\end{align}
where the effective potential is defined as
\begin{align*}
V \colon \mathbb{R}^3 \setminus \{ (0,0,0) \} \to \mathbb{R},\quad q \mapsto - \frac{1}{|q|} - \frac{3}{2}q_1^2 + \frac{1}{2}q_3^2.
\end{align*}
In the planar case there are no $q_3$ and $p_3$ terms.\ Compared to the PCR3BP, (\ref{hamiltonian_1}) and (\ref{effective_1}), we see that the velocity-dependent Coriolis force again appears in the kinetic part.\ This time the effective potential includes just the Newtonian potential of the earth.\ In addition, the term $-\frac{3}{2}q_1^2$ appears, which combines the gravitational attraction of the infinitely distant sun with the centrifugal force; together they form a kind of tidal force which is repulsive in the $q_1$-direction.\ In the spatial case, since the sun lies in the ecliptic plane, there is a strong attraction back to the ecliptic; hence the term $\frac{1}{2}q_3^2$ is a kind of leftover of the gravitational force of the sun.\
Moreover, in contrast to the PCR3BP, the planar Hill lunar problem has only two critical points (see \cite[pp.\ 80--81]{frauenfelder} for details).\ They are saddle points at $(\pm 3 ^{-\frac{1}{3}},0)$, which are the limits of $L_1$ and $L_2$ (from Figure \ref{figure9}) under blowing up the coordinates around the earth.\ In view of the symmetries (see (\ref{involution_1}) in Section \ref{sec:involutions_shooting}), they have the same critical value $- \frac{1}{2} 3^{4/3}$, while the traditional one is $3^{4/3}$.\
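Both claims are easy to verify directly; a minimal numerical sketch checks that $\nabla V$ vanishes at $(\pm 3^{-1/3},0,0)$ and that the critical value of $H$ there equals $-\frac{1}{2}3^{4/3}$.

```python
# Checking the critical points of the Hill lunar problem numerically.
c = 3 ** (-1/3)                      # the critical points sit at (±c, 0, 0)

def grad_V(q1, q2, q3):
    # gradient of V(q) = -1/|q| - (3/2) q1^2 + (1/2) q3^2
    r3 = (q1*q1 + q2*q2 + q3*q3) ** 1.5
    return (q1/r3 - 3*q1, q2/r3, q3/r3 + q3)

def critical_value():
    # at a critical point of H the kinetic part vanishes
    # (p1 = -q2, p2 = q1, p3 = 0), so H reduces to V there
    return -1/c - 1.5 * c * c

gplus = grad_V(c, 0.0, 0.0)
gminus = grad_V(-c, 0.0, 0.0)
```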
\subsection{Hill equation and linearization}
For finding periodic orbits, we use Hamilton's equations of motion
$$ \frac{dq_i}{dt} = \frac{\partial H}{\partial p_i},\quad \frac{d p_i}{dt} = - \frac{\partial H}{\partial q_i} $$
and compute for the Hamiltonian (\ref{hamiltonian_2}) the \textbf{spatial Hill equation in $(q,p)$-coordinates}:
\begin{align} \label{equation_of_motion_q_p}
\begin{cases}
\dot{q}_1 = p_1 + q_2,\quad\quad\quad & \mathlarger{\dot{p}_1 = p_2 - q_1 - \frac{\partial V}{\partial q_1}},\\[0.8em]
\dot{q}_2 = p_2 - q_1, & \mathlarger{\dot{p}_2 = - p_1 - q_2 - \frac{\partial V}{\partial q_2}},\\
\dot{q}_3 = p_3, & \mathlarger{\dot{p}_3 = - \frac{\partial V}{\partial q_3}}.
\end{cases}
\end{align}
For the \textbf{spatial Hill equation in $(q,\dot{q})$-coordinates} we transform the first order ODE to a second order ODE
$$ \ddot{q}_1 = \dot{p}_1 + \dot{q}_2,\quad \ddot{q}_2 = \dot{p}_2 - \dot{q}_1,\quad \ddot{q}_3 = \dot{p}_3 $$
and obtain
\begin{align} \label{equation_of_motion_0}
\begin{pmatrix}
\ddot{q}_1\\
\ddot{q}_2\\
\ddot{q}_3
\end{pmatrix} + 2 \begin{pmatrix}
0 & -1 & 0\\
1 & 0 & 0\\
0 & 0 & 0
\end{pmatrix} \cdot \begin{pmatrix}
\dot{q}_1\\
\dot{q}_2\\
\dot{q}_3
\end{pmatrix} + \nabla V (q) = 0,
\end{align}
which reads
\begin{equation*}
\left\{ \begin{array}{l}
\mathlarger{\ddot{q}_1 = 2 \dot{q}_2 + 3q_1 - \frac{q_1}{|q|^3}} \\[0.7em]
\mathlarger{\ddot{q}_2 = -2\dot{q}_1 - \frac{q_2}{|q|^3}} \\[0.7em]
\mathlarger{\ddot{q}_3 = -q_3\bigg(\frac{1}{|q|^3} + 1 \bigg)}.
\end{array} \right.
\end{equation*}
Note that the first two equations describe the planar case, and the submatrix
$$ \begin{pmatrix}
0 & -1\\
1 & 0
\end{pmatrix}, $$
which is a rotation by $\frac{\pi}{2}$, arises from the Coriolis force and is inherited from the planar problem.\
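A quick way to test the system (\ref{equation_of_motion_q_p}) is to integrate it numerically and check that $H$ stays constant along solutions.\ The following sketch uses a hand-rolled RK4 integrator; the initial condition (near a small, slightly tilted circular Kepler orbit) and the step size are arbitrary illustrative choices.

```python
import math

def rhs(y):
    # spatial Hill equation in (q, p)-coordinates as a first-order system
    q1, q2, q3, p1, p2, p3 = y
    r3 = (q1*q1 + q2*q2 + q3*q3) ** 1.5
    dV = (q1/r3 - 3*q1, q2/r3, q3/r3 + q3)   # gradient of V
    return (p1 + q2, p2 - q1, p3,
            p2 - q1 - dV[0], -p1 - q2 - dV[1], -dV[2])

def H(y):
    q1, q2, q3, p1, p2, p3 = y
    r = math.sqrt(q1*q1 + q2*q2 + q3*q3)
    return (0.5*((p1 + q2)**2 + (p2 - q1)**2 + p3*p3)
            - 1/r - 1.5*q1*q1 + 0.5*q3*q3)

def rk4_step(y, h):
    k1 = rhs(y)
    k2 = rhs(tuple(a + 0.5*h*b for a, b in zip(y, k1)))
    k3 = rhs(tuple(a + 0.5*h*b for a, b in zip(y, k2)))
    k4 = rhs(tuple(a + h*b for a, b in zip(y, k3)))
    return tuple(a + h*(b + 2*c + 2*d + e)/6
                 for a, b, c, d, e in zip(y, k1, k2, k3, k4))

# initial condition near a small circular Kepler orbit, slightly tilted
y = (0.2, 0.0, 0.02, 0.0, 0.2 ** -0.5, 0.0)
E0 = H(y)
for _ in range(2000):
    y = rk4_step(y, 1e-3)
```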
To linearize the Hill equation along a periodic orbit, we expand a nearby solution $(q + \Delta q, \dot{q} + \Delta \dot{q})$ around $(q,\dot{q})$.\ Hence, in view of (\ref{equation_of_motion_0}), the \textbf{linearized equation in $(q,\dot{q})$-coordinates} is given by
$$ \begin{pmatrix}
\Delta \ddot{q}_1\\
\Delta \ddot{q}_2\\
\Delta \ddot{q}_3
\end{pmatrix} + 2 \begin{pmatrix}
0 & -1 & 0\\
1 & 0 & 0\\
0 & 0 & 0
\end{pmatrix} \cdot \begin{pmatrix}
\Delta \dot{q}_1\\
\Delta \dot{q}_2\\
\Delta \dot{q}_3
\end{pmatrix} + \text{H}_V (q) \cdot \begin{pmatrix}
\Delta q_1\\
\Delta q_2\\
\Delta q_3
\end{pmatrix} = 0, $$
which is
\begin{equation*}
\left\{ \begin{array}{l}
\mathlarger{\Delta \ddot{q}_1 - 2 \Delta \dot{q}_2 = \bigg(\frac{2q_1^2 - q_2^2 - q_3^2}{|q|^5} + 3\bigg) \Delta q_1 + \frac{3q_1q_2}{|q|^5} \Delta q_2 + \frac{3q_1q_3}{|q|^5} \Delta q_3}\\[1em]
\mathlarger{\Delta \ddot{q}_2 + 2 \Delta \dot{q}_1 = \frac{3q_1q_2}{|q|^5} \Delta q_1 + \frac{2q_2^2 - q_1^2 - q_3^2}{|q|^5} \Delta q_2 + \frac{3q_2q_3}{|q|^5} \Delta q_3} \\[1em]
\mathlarger{\Delta \ddot{q}_3 = \frac{3q_1q_3}{|q|^5} \Delta q_1 + \frac{3q_2q_3}{|q|^5} \Delta q_2 + \bigg(\frac{2q_3^2 - q_1^2 - q_2^2}{|q|^5} - 1\bigg) \Delta q_3 }.
\end{array} \right.
\end{equation*}
For a planar periodic orbit considered in the spatial system, restricting the Hessian to $\{q_3=0\}$ turns the linearized equations into
\begin{equation*}
\left\{ \begin{array}{l}
\mathlarger{\Delta \ddot{q}_1 - 2 \Delta \dot{q}_2 = \bigg(\frac{2q_1^2 - q_2^2}{|q|^5} + 3\bigg) \Delta q_1 + \frac{3q_1q_2}{|q|^5} \Delta q_2} \\[1em]
\mathlarger{\Delta \ddot{q}_2 + 2 \Delta \dot{q}_1 = \frac{3q_1q_2}{|q|^5} \Delta q_1 + \frac{2q_2^2 - q_1^2}{|q|^5} \Delta q_2} \\[1em]
\mathlarger{\Delta \ddot{q}_3 = - \bigg(\frac{1}{|q|^3} + 1\bigg) \Delta q_3, }
\end{array} \right.
\end{equation*}\\
which are similar to the equations in Gutzwiller \cite[p.\ 70]{gutzwiller} and, for the planar case, in Wintner \cite[p.\ 403]{wintner}.\ The \textbf{linearized equation in $(q,p)$-coordinates} with respect to (\ref{equation_of_motion_q_p}) is
\begin{align*}
\begin{cases}
\Delta \dot{q}_1 = \Delta p_1 + \Delta q_2,\quad\quad\quad & \Delta \dot{p}_1 = \Delta p_2 - \Delta q_1 - \left(\text{H}_V (q) \cdot \Delta q \right)_1,\\
\Delta \dot{q}_2 = \Delta p_2 - \Delta q_1, & \Delta \dot{p}_2 = - \Delta p_1 - \Delta q_2 - \left(\text{H}_V (q) \cdot \Delta q \right)_2,\\
\Delta \dot{q}_3 = \Delta p_3, & \Delta \dot{p}_3 = - \left(\text{H}_V (q) \cdot \Delta q \right)_3,
\end{cases}
\end{align*}
which for planar periodic orbits becomes
\begin{align} \label{linearized_q_p_fix}
\begin{cases}
\Delta \dot{q}_1 = \Delta p_1 + \Delta q_2,\quad\quad\quad & \Delta \dot{p}_1 = \Delta p_2 - \Delta q_1 + \mathlarger{\bigg(\frac{2q_1^2 - q_2^2}{|q|^5} + 3\bigg) \Delta q_1 + \frac{3q_1q_2}{|q|^5} \Delta q_2},\\[1em]
\Delta \dot{q}_2 = \Delta p_2 - \Delta q_1, & \Delta \dot{p}_2 = - \Delta p_1 - \Delta q_2 + \mathlarger{\frac{3q_1q_2}{|q|^5} \Delta q_1 + \frac{2q_2^2 - q_1^2}{|q|^5} \Delta q_2}, \\[1em]
\Delta \dot{q}_3 = \Delta p_3, & \Delta \dot{p}_3 = \mathlarger{ - \bigg(\frac{1}{|q|^3} + 1\bigg) \Delta q_3 }.
\end{cases}
\end{align}
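To guard against sign errors in these coefficient matrices, one can compare the analytic matrix $-\text{H}_V(q)$ (the right-hand side of the linearized equations) with a finite-difference Hessian of $V$.\ A minimal sketch, with the sample point and step size chosen arbitrarily:

```python
def V(q):
    # effective potential of the spatial Hill lunar problem
    q1, q2, q3 = q
    r = (q1*q1 + q2*q2 + q3*q3) ** 0.5
    return -1/r - 1.5*q1*q1 + 0.5*q3*q3

def minus_hessian_analytic(q):
    # coefficient matrix of (Δq1, Δq2, Δq3) above, i.e. -H_V(q)
    q1, q2, q3 = q
    r5 = (q1*q1 + q2*q2 + q3*q3) ** 2.5
    return [[(2*q1*q1 - q2*q2 - q3*q3)/r5 + 3, 3*q1*q2/r5, 3*q1*q3/r5],
            [3*q1*q2/r5, (2*q2*q2 - q1*q1 - q3*q3)/r5, 3*q2*q3/r5],
            [3*q1*q3/r5, 3*q2*q3/r5, (2*q3*q3 - q1*q1 - q2*q2)/r5 - 1]]

def minus_hessian_numeric(q, h=1e-4):
    # -H_V(q) by second-order central differences
    def pt(di, dj, i, j):
        return tuple(x + h*(di if k == i else 0) + h*(dj if k == j else 0)
                     for k, x in enumerate(q))
    def d2(i, j):
        if i == j:
            return (V(pt(1, 0, i, j)) - 2*V(q) + V(pt(-1, 0, i, j))) / h**2
        return (V(pt(1, 1, i, j)) - V(pt(1, -1, i, j))
                - V(pt(-1, 1, i, j)) + V(pt(-1, -1, i, j))) / (4*h**2)
    return [[-d2(i, j) for j in range(3)] for i in range(3)]

q = (0.4, 0.3, 0.1)
A = minus_hessian_analytic(q)
B = minus_hessian_numeric(q)
```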
\subsection{The group of linear symmetries}
\label{sec:involutions_shooting}
\textbf{Planar case:}\ The planar part of the Hamiltonian (\ref{hamiltonian_hill})
\begin{align} \label{hamiltonian_linear_planar}
H \colon T^* \big( \mathbb{R}^2 \setminus \{ (0,0) \} \big) \to \mathbb{R},\quad (q,p) \mapsto \frac{1}{2}|p|^2 - \frac{1}{|q|} + p_1q_2 - p_2q_1 - q_1^2 + \frac{1}{2}q_2^2
\end{align}
is invariant under the double-symmetry given by the two commuting linear anti-symplectic involutions
\begin{align} \label{involution_1}
&\rho_1 \colon T^* \mathbb{R}^2 \to T^*\mathbb{R}^2,\quad (q,p) \mapsto (q_1,-q_2,-p_1,p_2), \nonumber \\
&\rho_2 \colon T^* \mathbb{R}^2 \to T^*\mathbb{R}^2,\quad (q,p) \mapsto (-q_1,q_2,p_1,-p_2).
\end{align}
Their product $\rho_1 \circ \rho_2 = \rho_2 \circ \rho_1 = -\text{id}$ is symplectic.\ Geometrically, in the configuration space, the Hamiltonian $H$ in (\ref{hamiltonian_linear_planar}) is invariant under the reflections about the $q_1$- and $q_2$-axes, i.e.\ one cannot tell whether the moon is moving towards the sun or away from it.\ The symplectic involutions $\pm\text{id}$ correspond to rotations by $0$ and $\pi$, respectively.\
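These properties are easy to verify numerically.\ The sketch below checks, at a random point, the invariance $H \circ \rho_i = H$, the involution property, $\rho_1 \circ \rho_2 = -\mathrm{id}$, and the anti-symplectic relation $R_i^T J R_i = -J$ for the matrices $R_i$ of $\rho_i$ in the coordinates $(q_1,q_2,p_1,p_2)$.

```python
import math, random

def H(z):
    # planar Hill Hamiltonian in coordinates z = (q1, q2, p1, p2)
    q1, q2, p1, p2 = z
    return (0.5*(p1*p1 + p2*p2) - 1/math.hypot(q1, q2)
            + p1*q2 - p2*q1 - q1*q1 + 0.5*q2*q2)

def rho1(z):
    q1, q2, p1, p2 = z
    return (q1, -q2, -p1, p2)

def rho2(z):
    q1, q2, p1, p2 = z
    return (-q1, q2, p1, -p2)

# matrices of rho1, rho2 and the standard symplectic form in (q1, q2, p1, p2)
R1 = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]
R2 = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]
J = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

neg_J = [[-x for x in row] for row in J]
# R1, R2 are symmetric, so R^T = R in the anti-symplectic check below
anti1 = matmul(R1, matmul(J, R1)) == neg_J
anti2 = matmul(R2, matmul(J, R2)) == neg_J

random.seed(1)
z = tuple(random.uniform(0.5, 1.5) for _ in range(4))
```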
\begin{remark}
Algebraically, $\rho_1, \rho_2$ and $\pm$ id form a Klein four-group, i.e.
$$ \text{Sym}^{\text{P}}_{\text{L}}(H) := \langle \rho_1, \rho_2 \mid \rho_1^2 = \rho_2^2 = (\rho_1 \circ \rho_2)^2 = \text{id} \rangle \cong \mathbb{Z}_2 \times \mathbb{Z}_2.$$
\end{remark}
\noindent
Now we show that these four linear symmetries are the only ones in the planar system.\
\begin{Proposition} \label{prop_planar}
The set
\begin{align} \label{set_1}
\{ \rho \colon T^* \mathbb{R}^2 \to T^* \mathbb{R}^2 \text{ linear} \mid \rho^2 = \text{ id, } H \circ \rho = H \text{ and } \rho ^* \omega = \pm \omega \}
\end{align}
is $\text{Sym}^{\text{P}}_{\text{L}}(H)$.
\end{Proposition}
\begin{proof}
Let $\rho$ be an element from the set (\ref{set_1}).\ We prove the proposition in three steps where the first one is obvious.\\
\textbf{Step 1.}\ \textit{The Hamiltonian (\ref{hamiltonian_linear_planar}) is the sum of
$$ H_2(q,p) = \frac{1}{2}|p|^2 + p_1q_2 - p_2q_1 - q_1^2 + \frac{1}{2}q_2^2 \quad \text{ and } \quad H_{-1}(q,p) = - \frac{1}{|q|}, $$
where $H_2$ is homogeneous of degree 2 and $H_{-1}$ is homogeneous of degree $-1$.\ Hence $H_2 \circ \rho = H_2$ and $H_{-1} \circ \rho = H_{-1}.$}\\
\textbf{Step 2.}\ \textit{The matrix form of $\rho$ with respect to the splitting $\mathbb{R}^4 = \mathbb{R}^2 \times \mathbb{R}^2$ and to the coordinates $(q_1,q_2,p_1,p_2)$ is
\begin{align*}
\begin{cases}
\begin{pmatrix}
A & 0\\
C & A
\end{pmatrix},\quad A \in O(2),\quad A=A^T,\quad C=-C^T,\quad AC=-CA, & \text{ if $\rho$ is symplectic}\\
\\
\begin{pmatrix}
A & 0\\
C & -A
\end{pmatrix},\quad A \in O(2),\quad A=A^T,\quad C=C^T,\quad AC=CA, & \text{ if $\rho$ is anti-symplectic.}
\end{cases}
\end{align*}}
To see that, we write $\rho$ in matrix form
$$ \begin{pmatrix}
A & B\\
C & D
\end{pmatrix}, $$
with respect to the coordinates $(q_1,q_2,p_1,p_2)$, where $A,B,C,D \in \text{Mat}(2,\mathbb{R}).$\ The $\rho$-invariance of $H_{-1}$ yields
$$ |Aq + Bp| = |q|,\quad \forall q,p. $$
For fixed $p$, letting $q \to 0$ in this identity yields $Bp=0$ for all $p$.\ Hence
$$B = 0,\quad A \in O(2).$$
Next, the $\rho$-invariance of $H_2$ yields for $q=0$
$$ |Dp| = |p|,\quad \forall p, $$
whence also $D \in O(2).$\ Since $\rho$ is an involution, we obtain
$$ \rho \circ \rho = \begin{pmatrix}
A^2 & 0\\
CA + DC & D^2
\end{pmatrix} = \begin{pmatrix}
I_2 & 0\\
0 & I_2
\end{pmatrix}, $$
and with $AA^T = DD^T = I_2$, we obtain
\begin{align} \label{relation_1}
A = A^T,\quad D = D^T,\quad CA + DC = 0.
\end{align}
If $\rho$ is symplectic, then (\ref{relation_1}) and the linear symplectic relations (\ref{matrix_symplectic}) imply
$$AC=A^TC=C^TA,\quad A^TD=AD=I_2.$$
With $ A^2=I_2 $ we have
$$D=A,$$
and therefore
$$ 0 = CA + DC = CA + AC = CA + C^TA = (C+C^T)A.$$
Since det$(A)=\pm1$, the matrix $C$ is skew-symmetric and this proves the first assertion of the second step.\\
If $\rho$ is anti-symplectic, then by (\ref{relation_1}) and the linear anti-symplectic conditions (\ref{matrix_anti_symplectic}) we obtain
$$AC=A^TC=C^TA,\quad A^T D = AD = -I_2,$$
hence
$$D=-A,$$
and
$$ 0 = CA + DC = CA - AC = CA - C^TA = (C-C^T)A.$$
Therefore the matrix $C$ is symmetric and the second assertion follows.\\
\textbf{Step 3.}\ \textit{In both cases, the matrix $C$ is the zero matrix and
\begin{align*}
\begin{cases}
\rho \in \{ \pm \text{id} \}, &\text{ if $\rho$ is symplectic}\\
\rho \in \{\rho_1, \rho_2\}, & \text{ if $\rho$ is anti-symplectic.}
\end{cases}
\end{align*}}
To prove this, we first note that in both cases, $A$ is of the form
$$ A = \begin{pmatrix}
a & b\\
b & c
\end{pmatrix}. $$
Since $A^2 = I_2$ and $A\in O(2)$, we have
\begin{align} \label{relation_2}
\begin{pmatrix}
a^2 + b^2 & b(a+c)\\
b(a+c) & b^2 + c^2
\end{pmatrix} = \begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix},\quad a^2 = c^2,\quad \det(A)=ac-b^2 = \pm 1.
\end{align}
If $\rho$ is symplectic, then $\rho(q,p)$ is of the form
$$ \begin{pmatrix}
a & b & 0 & 0\\
b & c & 0 & 0\\
0 & d & a & b\\
-d & 0 & b & c
\end{pmatrix} \cdot \begin{pmatrix}
q_1\\
q_2\\
p_1\\
p_2
\end{pmatrix} = \begin{pmatrix}
aq_1 + bq_2\\
bq_1 + cq_2\\
dq_2 + ap_1 + bp_2\\
-dq_1 + bp_1 + cp_2
\end{pmatrix}. $$
Since $AC=-CA$ we have
\begin{align} \label{relation_3}
AC = \begin{pmatrix}
-bd & ad\\
-cd & bd
\end{pmatrix} = \begin{pmatrix}
-bd & -cd\\
ad & bd
\end{pmatrix} = -CA,\quad ad=-cd.
\end{align}
In the following we show that $AC=0$.\ Since $\det(A)=\pm1$, the matrix $A$ is invertible, so $AC=0$ forces $C$ to be the zero matrix.\ In view of the $\rho$-invariance of $H_2$ we compare
$$ H_2(q,p) = \frac{1}{2}|p|^2 + p_1q_2 - p_2q_1 - q_1^2 + \frac{1}{2}q_2^2 $$
with
\begin{align*}
H_2\big(\rho(q,p)\big) = &\frac{1}{2}|p|^2 + p_1q_2\big(ad + \det(A)\big) - p_2q_1\big( cd + \det(A) \big) - q_1^2 \left( - \frac{1}{2}d^2 - ad + a^2 - \frac{1}{2}b^2 \right)\\
& + \frac{1}{2}q_2^2 \left(d^2 + 2cd - 2b^2 + c^2\right) - bdp_1q_1 + bdp_2q_2 + q_1q_2\left(2bd-2ab+bc\right).
\end{align*}
Therefore $bd=0$.\ Now looking at the coefficients of $p_1q_2$ and $p_2q_1$, we find $ad=cd$ and by the second equation in (\ref{relation_3}), $ad=cd=0$.\ Hence $AC=0$ and $C=0$ as well.\ Furthermore
$$\det(A) = 1,$$
i.e.\ $A \in SO(2)$, meaning that $A$ is an involutive rotation, hence $A = \pm I_2$ and therefore
$$ \rho \in \{ \pm \text{id} \}. $$
Analogously, in the anti-symplectic case,
$$ \begin{pmatrix}
a & b & 0 & 0\\
b & c & 0 & 0\\
c_1 & c_2 & -a & -b\\
c_2 & c_3 & -b & -c
\end{pmatrix} \cdot \begin{pmatrix}
q_1\\
q_2\\
p_1\\
p_2
\end{pmatrix} = \begin{pmatrix}
aq_1 + bq_2\\
bq_1 + cq_2\\
c_1q_1 + c_2q_2 - ap_1 - bp_2\\
c_2q_1 + c_3q_2 - bp_1 - cp_2
\end{pmatrix}, $$
where $AC=CA$, yields
\begin{align} \label{relation_4}
\begin{pmatrix}
ac_1 + bc_2 & ac_2 + bc_3\\
bc_1 + cc_2 & bc_2 + cc_3
\end{pmatrix} = \begin{pmatrix}
ac_1 + bc_2 & bc_1 + cc_2\\
ac_2 + bc_3 & bc_2 + cc_3
\end{pmatrix},\quad ac_2 + bc_3 = bc_1 + cc_2.
\end{align}
Now $H_2\big( \rho(q,p) \big)$ is
\begin{align*}
& \frac{1}{2}|p|^2 + p_1q_2\big( -ac_2 - bc_3 - \det(A) \big) - p_2q_1 \big( bc_1 + cc_2 - \det(A) \big)\\
& -q_1^2 \left( - \frac{1}{2}c_1^2 - \frac{1}{2}c_2^2 - bc_1 + ac_2 + a^2 - \frac{1}{2}b^2 \right) + \frac{1}{2}q_2^2 \left(c_2^2 + c_3^2 + 2cc_2 - 2bc_3 - 2b^2 + c^2\right)\\
& - p_1q_1 \left(ac_1 + bc_2\right) - p_2q_2\left(bc_2 + cc_3\right) + q_1q_2 \left(c_1c_2 + c_2c_3 + cc_1 - ac_3 - 2ab + bc\right).
\end{align*}
Hence $ac_1+bc_2 = bc_2+cc_3=0$ and the coefficients of $p_1q_2$ and $p_2q_1$ imply $-ac_2-bc_3=bc_1+cc_2$.\ The second equation in (\ref{relation_4}) yields $ac_2+bc_3 = bc_1 + cc_2 = 0 $, thus $AC=0$ and $C=0$.\ Moreover by the other coefficients,
$$ \det(A) = -1,\quad a^2 - \frac{1}{2}b^2 = 1,\quad -2b^2 + c^2 =1, $$
and together with (\ref{relation_2}),
$$ b=0,\quad a^2 = c^2 = 1,\quad ac=-1 ,$$
which correspond to $\rho_1$ and $\rho_2$.
\end{proof}
\noindent
\textbf{Spatial case:}\ The following eight linear symplectic or anti-symplectic involutions leave invariant the spatial Hamiltonian from (\ref{hamiltonian_hill})
\begin{align} \label{hamiltonian_linear_spatial}
H \colon T^* \big( \mathbb{R}^3 \setminus \{ (0,0,0) \} \big) \to \mathbb{R},\quad (q,p) \mapsto \frac{1}{2}|p|^2 - \frac{1}{|q|} + p_1q_2 - p_2q_1 - q_1^2 + \frac{1}{2}q_2^2 + \frac{1}{2}q_3^2.
\end{align}
The reflection across the ecliptic $\{q_3=0\}$ induces the symplectic involution
\begin{align} \label{symplectic_involution}
\sigma \colon T^* \mathbb{R}^3 \to T^*\mathbb{R}^3,\quad (q,p) \mapsto (q_1,q_2,-q_3,p_1,p_2,-p_3).
\end{align}
Its fixed point set
$$\text{Fix}(\sigma) = \{ (q_1,q_2,0,p_1,p_2,0) \}$$
is the planar component; hence the planar problem can be interpreted as $\text{Fix}(\sigma)$, and the $q_1q_2$-plane is invariant under $\sigma$.\
The two commuting anti-symplectic involutions from the planar case are now of the form
\begin{align} \label{involution_2}
&\rho_1 \colon T^* \mathbb{R}^3 \to T^*\mathbb{R}^3,\quad (q,p) \mapsto (q_1,-q_2,q_3,-p_1,p_2,-p_3), \nonumber \\
&\rho_2 \colon T^* \mathbb{R}^3 \to T^*\mathbb{R}^3,\quad (q,p) \mapsto (-q_1,q_2,q_3,p_1,-p_2,-p_3),
\end{align}
where $\rho_1$ corresponds to the reflection across the $q_1q_3$-plane and $\rho_2$ to the reflection across the $q_2q_3$-plane.\ Their product gives the symplectic involution,
\begin{align} \label{involution_3}
\rho_1 \circ \rho_2 = \rho_2 \circ \rho_1 = - \sigma \colon T^* \mathbb{R}^3 \to T^*\mathbb{R}^3,\quad (q,p) \mapsto (-q_1,-q_2,q_3,-p_1,-p_2,p_3),
\end{align}
which corresponds to a rotation around the $q_3$-axis by $\pi$.\ Its fixed point set is
\begin{align} \label{fixed_point_2}
\text{Fix}(-\sigma) = \{ ( 0,0,q_3,0,0,p_3 ) \},
\end{align}
thus the $q_3$-axis is invariant under $-\sigma$.\
Moreover, each of the anti-symplectic involutions $\rho_1$ and $\rho_2$ commutes with $\sigma$ and they yield the two anti-symplectic involutions
\begin{align}
\overline{\rho_1}:= \rho_1 \circ \sigma = \sigma \circ \rho_1 \colon T^* \mathbb{R}^3 \to T^*\mathbb{R}^3,\quad &(q,p) \mapsto (q_1,-q_2,-q_3,-p_1,p_2,p_3)\\
\overline{\rho_2}:= \rho_2 \circ \sigma = \sigma \circ \rho_2 \colon T^* \mathbb{R}^3 \to T^*\mathbb{R}^3,\quad &(q,p) \mapsto (-q_1,q_2,-q_3,p_1,-p_2,p_3).
\end{align}
Note that $\overline{\rho_1}$ corresponds to a rotation by $\pi$ around the $q_1$-axis and $\overline{\rho_2}$ to a rotation by $\pi$ around the $q_2$-axis.\
The two symplectic involutions $-\sigma$ and $\sigma$ commute as well and give $-$id.\ Together with id, we now have eight linear symplectic or anti-symplectic involutions which leave the Hamiltonian (\ref{hamiltonian_linear_spatial}) invariant.\ Four of them are anti-symplectic and four are symplectic.\ They form a group which we denote by $\text{Sym}^{\text{S}}_{\text{L}}(H)$.\ Its group structure is given by Table \ref{group_structure}.
\begin{table}[H]
\centering
\begin{tabular}{c|cccccccc}
$\circ$ & id & $-$id & $-\sigma$ & $\sigma$ & $\rho_1$ & $\rho_2$ & $\overline{\rho_1}$ & $\overline{\rho_2}$\\
\hline id & id & $-$id & $-\sigma$ & $\sigma$ & $\rho_1$ & $\rho_2$ & $\overline{\rho_1}$ & $\overline{\rho_2}$ \\
$-$id & $-$id & id & $\sigma$ & $-\sigma$ & $\overline{\rho_2}$ & $\overline{\rho_1}$ & $\rho_2$ & $\rho_1$ \\
$-\sigma$ & $-\sigma$ & $\sigma$ & id & $-$id & $\rho_2$ & $\rho_1$ & $\overline{\rho_2}$ & $\overline{\rho_1}$\\
$\sigma$ & $\sigma$ & $-\sigma$ & $-$id & id & $\overline{\rho_1}$ & $\overline{\rho_2}$ & $\rho_1$ & $\rho_2$\\
$\rho_1$ & $\rho_1$ & $\overline{\rho_2}$ & $\rho_2$ & $\overline{\rho_1}$ & id & $-\sigma$ & $\sigma$ & $-$id\\
$\rho_2$ & $\rho_2$ & $\overline{\rho_1}$ & $\rho_1$ & $\overline{\rho_2}$ & $-\sigma$ & id & $-$id & $\sigma$\\
$\overline{\rho_1}$ & $\overline{\rho_1}$ & $\rho_2$ & $\overline{\rho_2}$ & $\rho_1$ & $\sigma$ & $-$id & id & $-\sigma$\\
$\overline{\rho_2}$ & $\overline{\rho_2}$ & $\rho_1$ & $\overline{\rho_1}$ & $\rho_2$ & $-$id & $\sigma$ & $-\sigma$ & id
\end{tabular}
\caption{Group structure of $\text{Sym}^{\text{S}}_{\text{L}}(H)$}
\label{group_structure}
\end{table}
\noindent
It is generated by $\{ \rho_1,\rho_2,\sigma \}$ and
$$ \text{Sym}^{\text{S}}_{\text{L}}(H) \cong \mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2 . $$
By considering the four symplectic involutions we see that, as in the planar case, a Klein four-group arises, namely as a subgroup of $\text{Sym}^{\text{S}}_{\text{L}}(H)$.\ It is generated by $\{\pm \sigma\}$ and we denote it by $\omega\text{-Sym}^{\text{S}}_{\text{L}}(H) \cong \mathbb{Z}_2 \times \mathbb{Z}_2$, hence
$$ \text{Sym}^{\text{P}}_{\text{L}}(H) \cong \omega\text{-Sym}^{\text{S}}_{\text{L}}(H) \subset \text{Sym}^{\text{S}}_{\text{L}}(H). $$
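The invariance and the closure recorded in Table \ref{group_structure} are also easy to verify numerically.\ The following sketch (an illustration, not part of the proofs; all names are ad hoc) encodes the eight involutions as diagonal sign matrices acting on $(q_1,q_2,q_3,p_1,p_2,p_3)$ and checks that each squares to the identity, preserves the Hamiltonian (\ref{hamiltonian_linear_spatial}), and that the set is closed under composition.

```python
import numpy as np

def H(z):
    # Spatial Hamiltonian (q, p) -> value, as in the displayed formula.
    q, p = z[:3], z[3:]
    return (0.5 * p @ p - 1.0 / np.linalg.norm(q)
            + p[0] * q[1] - p[1] * q[0]
            - q[0]**2 + 0.5 * q[1]**2 + 0.5 * q[2]**2)

# The eight involutions as sign patterns on (q1, q2, q3, p1, p2, p3).
signs = {
    "id": (1, 1, 1, 1, 1, 1),         "-id": (-1, -1, -1, -1, -1, -1),
    "sigma": (1, 1, -1, 1, 1, -1),    "-sigma": (-1, -1, 1, -1, -1, 1),
    "rho1": (1, -1, 1, -1, 1, -1),    "rho2": (-1, 1, 1, 1, -1, -1),
    "rho1bar": (1, -1, -1, -1, 1, 1), "rho2bar": (-1, 1, -1, 1, -1, 1),
}
mats = {name: np.diag(s) for name, s in signs.items()}

z = np.random.default_rng(0).normal(size=6)
for R in mats.values():
    assert np.allclose(R @ R, np.eye(6))   # involution
    assert np.isclose(H(R @ z), H(z))      # H-invariance

# Closure under composition: the products reproduce exactly the same set.
group = set(signs.values())
products = {tuple(np.diag(M1 @ M2)) for M1 in mats.values()
            for M2 in mats.values()}
assert products == group and len(group) == 8
```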
The following proposition shows that these eight linear symmetries are the only ones in the spatial problem.\
\begin{Proposition} \label{prop_spatial}
The set
\begin{align} \label{set_2}
\{ \rho \colon T^* \mathbb{R}^3 \to T^* \mathbb{R}^3 \text{ linear} \mid \rho^2 = \text{ id, } H \circ \rho = H \text{ and } \rho ^* \omega = \pm \omega \}
\end{align}
is $\text{Sym}^{\text{S}}_{\text{L}}(H)$.
\end{Proposition}
\begin{proof}
Let $\rho$ be an element from the set (\ref{set_2}).\ As in the planar case, we prove the proposition in three analogous steps where only the calculation for the third assertion involves more coefficients.\\
\textbf{Step 1.}\ \textit{The Hamiltonian (\ref{hamiltonian_linear_spatial}) is the sum of
$$ H_2(q,p) = \frac{1}{2}|p|^2 + p_1q_2 - p_2q_1 - q_1^2 + \frac{1}{2}q_2^2 + \frac{1}{2}q_3^2 \quad \text{ and } \quad H_{-1}(q,p) = - \frac{1}{|q|}, $$
where $H_2$ is homogeneous of degree 2 and $H_{-1}$ is homogeneous of degree $-1$.\ Hence $H_2 \circ \rho = H_2$ and $H_{-1} \circ \rho = H_{-1}.$}\\
\textbf{Step 2.}\ \textit{The matrix form of $\rho$ with respect to the splitting $\mathbb{R}^6 = \mathbb{R}^3 \times \mathbb{R}^3$ and to the coordinates $(q_1,q_2,q_3,p_1,p_2,p_3)$ is
\begin{align*}
\begin{cases}
\begin{pmatrix}
A & 0\\
C & A
\end{pmatrix},\quad A \in O(3),\quad A=A^T,\quad C=-C^T,\quad AC=-CA, & \text{ if $\rho$ is symplectic}\\
\\
\begin{pmatrix}
A & 0\\
C & -A
\end{pmatrix},\quad A \in O(3),\quad A=A^T,\quad C=C^T,\quad AC=CA, & \text{ if $\rho$ is anti-symplectic.}
\end{cases}
\end{align*}}
\textbf{Step 3.}\ \textit{In both cases, the matrix $C$ is the zero matrix and
\begin{align*}
\begin{cases}
\rho \in \{ \pm\sigma, \pm \text{id} \}, &\text{ if $\rho$ is symplectic}\\
\rho \in \{\rho_1, \rho_2, \overline{\rho_1}, \overline{\rho_2}\}, & \text{ if $\rho$ is anti-symplectic.}
\end{cases}
\end{align*}}
In both cases, $A$ is of the form
$$ A = \begin{pmatrix}
a & d & e\\
d & b & f\\
e & f & c
\end{pmatrix}. $$
Since $A^2 = I_3$ and $A \in O(3)$, we have
\begin{align} \label{relation_5}
A^2 = \begin{pmatrix}
a^2 + d^2 + e^2 & ad+bd+ef & ae+df+ce\\
ad+bd+ef & d^2 + b^2 + f^2 & de+bf+cf\\
ae+df+ce & de+bf+cf & e^2 + f^2 + c^2
\end{pmatrix} = \begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}
\end{align}
and
$$ \det(A)= abc + 2def - af^2 - be^2 - cd^2 = \pm 1. $$
If $\rho$ is symplectic, then $\rho(q,p)$ is of the form
$$ \begin{pmatrix}
a & d & e & 0 & 0 & 0\\
d & b & f & 0 & 0 & 0\\
e & f & c & 0 & 0 & 0\\
0 & c_1 & c_2 & a & d & e\\
-c_1 & 0 & c_3 & d & b & f\\
-c_2 & -c_3 & 0 & e & f & c
\end{pmatrix} \cdot \begin{pmatrix}
q_1\\
q_2\\
q_3\\
p_1\\
p_2\\
p_3
\end{pmatrix} = \begin{pmatrix}
aq_1 + dq_2 + eq_3\\
dq_1 + bq_2 + fq_3\\
eq_1 + fq_2 + cq_3\\
c_1q_2 + c_2q_3 + ap_1 + dp_2 + ep_3\\
-c_1q_1 + c_3q_3 + dp_1 + bp_2 + fp_3\\
-c_2q_1 - c_3q_2 + ep_1 + fp_2 + cp_3
\end{pmatrix}. $$
The equation $AC=-CA$ yields
\begin{align} \label{relation_6}
\begin{pmatrix}
-dc_1 - ec_2 & ac_1 - ec_3 & ac_2 + dc_3\\
-bc_1 - fc_2 & dc_1 - fc_3 & dc_2 + bc_3\\
-fc_1 - cc_2 & ec_1 - cc_3 & ec_2 + fc_3
\end{pmatrix} = \begin{pmatrix}
-dc_1 - ec_2 & -bc_1 - fc_2 & -fc_1 - cc_2\\
ac_1 - ec_3 & dc_1 - fc_3 & ec_1 - cc_3\\
ac_2 + dc_3 & dc_2 + bc_3 & ec_2 + fc_3
\end{pmatrix},
\end{align}
and therefore
\begin{align} \label{relation_7}
ac_1 - ec_3 = -bc_1 - fc_2,\quad ac_2 + dc_3 = -fc_1 - cc_2,\quad dc_2 + bc_3 = ec_1 - cc_3.
\end{align}
In view of the $\rho$-invariance of $H_2$ we compare
$$ H_2(q,p) = \frac{1}{2}|p|^2 + p_1q_2 - p_2q_1 - q_1^2 + \frac{1}{2}q_2^2 + \frac{1}{2}q_3^2 $$
with $H_2\big( \rho(q,p) \big)$, which is
\begin{align*}
&\frac{1}{2}|p|^2 + p_1q_2\left(ac_1 - ec_3 + ab - d^2\right) - p_2q_1\left(bc_1 + fc_2 + ab - d^2\right)\\
& - q_1^2 \left(\frac{1}{2}\left(-c_1^2 -c_2^2 - d^2 - e^2 \right) - ac_1 + a^2 \right)\\
& + \frac{1}{2}q_2^2 \left(c_1^2 + c_3^2 + 2bc_1 - 2d^2 + b^2 + f^2\right)\\
& + \frac{1}{2}q_3^2 \left(c_2^2 + c_3^2 + 2fc_2 - 2ec_3 - 2e^2 + f^2 + c^2\right)\\
& + p_1q_1 \left( -dc_1 - ec_2 \right) + p_2q_2 \left( dc_1 - fc_3 \right) + p_3q_3 \left( ec_2 + fc_3 \right)\\
& + p_1q_3 \left(ac_2 + dc_3 + af - de\right) + p_3q_1 \left( -fc_1 - cc_2 + de - af \right)\\
& + p_2q_3 \left( dc_2 + bc_3 + df - be \right) + p_3q_2 \left( ec_1 - cc_3 + be - df \right)\\
& + q_1q_2 \left( c_2c_3 + 2dc_1 - 2ad + bd + ef \right)\\
& + q_1q_3 \left( -c_1c_3 + dc_2 + ec_1 - ac_3 - 2ae + df + ce \right)\\
& + q_2q_3 \left( c_1c_2 + fc_1 + bc_2 - dc_3 - 2de + bf + cf \right).
\end{align*}
By the coefficients of $p_i q_i$, for $i=1,2,3$, we immediately see that the diagonal entries of $AC$ in (\ref{relation_6}) are all zero.\ To see that the other entries of $AC$ vanish as well, we equate the coefficients of $p_1q_2$ with $p_2q_1$, of $p_1q_3$ with $p_3q_1$, and of $p_2q_3$ with $p_3q_2$, and use (\ref{relation_7}); this implies
$$ ac_1 - ec_3 = -bc_1 - fc_2=0,\quad ac_2 + dc_3 = -fc_1 - cc_2=0,\quad dc_2 + bc_3 = ec_1 - cc_3=0. $$
Hence $AC=0$ and thus $C=0$.\ By the coefficients of $p_1q_2$ and $p_2q_1$,
$$ ab-d^2 = 1, $$
which means that $a \neq 0$ and $b \neq 0$.\ In view of $A^2 = I_3$ in (\ref{relation_5}) the two equations $ad+bd+ef=0$ and $ae+df+ce=0$ together with the coefficients of $q_1q_2$ and $q_1q_3$ imply $ad=ae=0$.\ Since $a \neq 0$ we obtain
$$ d=e=0 .$$
Furthermore, by the coefficient of $p_1q_3$ we have $af=de=0$, hence $f=0$.\ Together with the coefficients in the second through fourth lines, we obtain
$$ ab=1,\quad a^2=b^2=c^2=1 ,$$
which correspond to $\pm\sigma$ and $\pm$id.\
If $\rho$ is anti-symplectic, then $\rho(q,p)$ is of the form
$$ \begin{pmatrix}
a & d & e & 0 & 0 & 0\\
d & b & f & 0 & 0 & 0\\
e & f & c & 0 & 0 & 0\\
c_1 & c_2 & c_3 & -a & -d & -e\\
c_2 & c_4 & c_5 & -d & -b & -f\\
c_3 & c_5 & c_6 & -e & -f & -c
\end{pmatrix} \cdot \begin{pmatrix}
q_1\\
q_2\\
q_3\\
p_1\\
p_2\\
p_3
\end{pmatrix} = \begin{pmatrix}
aq_1 + dq_2 + eq_3\\
dq_1 + bq_2 + fq_3\\
eq_1 + fq_2 + cq_3\\
c_1q_1 + c_2q_2 + c_3q_3 - ap_1 - dp_2 - ep_3\\
c_2q_1 + c_4q_2 + c_5q_3 - dp_1 - bp_2 - fp_3\\
c_3q_1 + c_5q_2 + c_6q_3 - ep_1 - fp_2 - cp_3
\end{pmatrix}. $$
The equation $AC=CA$ yields
\begin{align*} \footnotesize
\begin{pmatrix}
ac_1 + dc_2 + ec_3 & ac_2 + dc_4 + ec_5 & ac_3 + dc_5 + ec_6\\
dc_1 + bc_2 + fc_3 & dc_2 + bc_4 + fc_5 & dc_3 + bc_5 + fc_6\\
ec_1 + fc_2 + cc_3 & ec_2 + fc_4 + cc_5 & ec_3 + fc_5 + cc_6
\end{pmatrix} = \begin{pmatrix}
ac_1 + dc_2 + ec_3 & dc_1 + bc_2 + fc_3 & ec_1 + fc_2 + cc_3\\
ac_2 + dc_4 + ec_5 & dc_2 + bc_4 + fc_5 & ec_2 + fc_4 + cc_5\\
ac_3 + dc_5 + ec_6 & dc_3 + bc_5 + fc_6 & ec_3 + fc_5 + cc_6
\end{pmatrix}
\end{align*}
and thus \footnotesize{
\begin{align*}
ac_2 + dc_4 + ec_5 = dc_1 + bc_2 + fc_3,\quad ac_3 + dc_5 + ec_6 = ec_1 + fc_2 + cc_3,\quad dc_3 + bc_5 + fc_6 = ec_2 + fc_4 + cc_5.
\end{align*} } \normalsize
Now $H_2\big( \rho(q,p) \big)$ is
\begin{align*}
&\frac{1}{2}|p|^2 + p_1q_2\left(-ac_2 - dc_4 - ec_5 + d^2 - ab\right) - p_2q_1\left(dc_1 + bc_2 + fc_3 + d^2 - ab\right)\\
& - q_1^2 \left(\frac{1}{2}\left(-c_1^2 -c_2^2 - d^2 - e^2 \right) - dc_1 + ac_2 + a^2 \right)\\
& + \frac{1}{2}q_2^2 \left(c_2^2 + c_4^2 + c_5^2 + 2bc_2 - 2dc_4 - 2d^2 + b^2 + f^2\right)\\
& + \frac{1}{2}q_3^2 \left(c_3^2 + c_5^2 + c_6^2 + 2fc_3 - 2ec_5 - 2e^2 + f^2 + c^2\right)\\
& + p_1q_1 \left( -ac_1 - dc_2 - ec_3 \right) + p_2q_2 \left( -dc_2 - bc_4 - fc_5 \right) + p_3q_3 \left( -ec_3 - fc_5 - cc_6 \right)\\
& + p_1q_3 \left(-ac_3 - dc_5 - ec_6 + de - af\right) + p_3q_1 \left( -ec_1 - fc_2 - cc_3 + af - de \right)\\
& + p_2q_3 \left( -dc_3 - bc_5 - fc_6 + be - df \right) + p_3q_2 \left( -ec_2 - fc_4 - cc_5 + df - be \right)\\
& + q_1q_2 \left( c_1c_2 + c_2c_4 + c_3c_5 + bc_1 - ac_4 - 2ad + bd + ef \right)\\
& + q_1q_3 \left( c_1c_3 + c_2c_5 + c_3c_6 + fc_1 + dc_3 - ec_2 - ac_5 - 2ae + df + ce \right)\\
& + q_2q_3 \left( c_2c_3 + c_4c_5 + c_5c_6 + fc_2 + bc_3 - ec_4 - dc_5 - 2de + bf + cf \right).
\end{align*}
In a similar way to the symplectic case we find that $C=0$.\ In view of $A^2 = I_3$ in (\ref{relation_5}), the three equations $ad+bd+ef=0$, $ae+df+ce=0$ and $de + bf + cf=0$ together with the coefficients of $q_1q_2$, $q_1q_3$ and $q_2q_3$ imply $ad=ae=de=0$.\ Since the coefficients of $p_1q_3$ and $p_3q_1$ yield $de=af$, we have
$$ ad=ae=af=0.$$
Suppose that $a = 0$; then in view of the coefficient of $q_1^2$ we see that $d^2 + e^2 = -2$, which is a contradiction.\ Hence
$$a \neq 0,\quad d=e=f=0.$$
By the first four lines we obtain
$$ ab = -1,\quad a^2 = b^2 = c^2 = 1,$$
which correspond to $\rho_1,\rho_2,\overline{\rho_1}$ and $\overline{\rho_2}$.\
\end{proof}
\subsection{Symmetric periodic orbits via shooting}
\label{sec:symmetries}
\textbf{Planar periodic orbits:}\ The fixed point sets of (\ref{involution_1})
\begin{align*}
\text{Fix}(\rho_1) = \{ (q_1,0,0,p_2) \},\quad \text{Fix}(\rho_2) = \{ (0,q_2,p_1,0) \}
\end{align*}
are Lagrangian submanifolds.\ A planar orbit intersects Fix$(\rho_1)$ if it hits the $q_1$-axis perpendicularly, and it intersects Fix$(\rho_2)$ if it hits the $q_2$-axis perpendicularly.\ If an orbit starts perpendicularly at Fix$(\rho_1)$ and hits Fix$(\rho_2)$ perpendicularly in time $t$, then by using the double symmetry $\rho_1,\rho_2$, we obtain a periodic orbit with period $T_q = 4t$.\ We refer to this kind of planar periodic orbit as \textbf{doubly-symmetric}.\ If it starts perpendicularly at Fix($\rho_1$) or Fix($\rho_2$) and returns perpendicularly only to the same fixed point set, then we call it \textbf{simply-symmetric}.\ Note that a family of simply-symmetric periodic orbits is mapped to another such family by the other symmetry.\
Finding orbits in this way is known as a shooting method.\ In particular, George David Birkhoff (1884--1944) gave an analytical proof for the existence of a planar retrograde periodic orbit which is doubly-symmetric (see Figure \ref{1_birkhoff}), for energy level sets below the unique critical value $- \frac{1}{2} 3^{4/3}$, by the so-called ``Birkhoff shooting method''; see for instance \cite[pp.\ 140--144]{frauenfelder}.
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-4.5,-2.35) rectangle (4.5,3);
\draw [shift={(0.0,-0.0)},line width=1pt] [decoration={markings, mark=at position 0.82 with {\arrow{<}}}, postaction={decorate}] plot[domain=1.5707963267948966:3.141592653589793,variable=\t]({1.0*2.0*cos(\t r)+-0.0*2.0*sin(\t r)},{0.0*2.0*cos(\t r)+1.0*2.0*sin(\t r)});
\draw [shift={(0.0,-0.0)},dashed,line width=1pt] plot[domain=-3.141592653589793:1.5707963267948966,variable=\t]({1.0*2.0*cos(\t r)+-0.0*2.0*sin(\t r)},{0.0*2.0*cos(\t r)+1.0*2.0*sin(\t r)});
\draw (-2.15,0.42) node[anchor=north west] {$\boxdot$};
\draw [->,line width=1pt] (-3.0,0.0) -- (4.0,0.0);
\draw (3.38,0.6) node[anchor=north west] {$q_1$};
\draw [->,line width=1pt] (0.0,-2.56) -- (0.0,3);
\draw (0.16,3.0400000000000005) node[anchor=north west] {$q_2$};
\draw (-0.44,2.128) node[anchor=north west] {$\boxdot$};
\draw (0,0) node[anchor=north west] {$e$};
\begin{scriptsize}
\draw [fill=black] (-2,0) circle (2pt);
\draw [fill=black] (0,0) circle (2pt);
\end{scriptsize}
\end{tikzpicture}
\caption{Birkhoff's shooting method for the retrograde periodic orbit}
\label{1_birkhoff}
\end{figure}
\noindent
Note that in the PCR3BP the masses of the sun and the earth are comparable, so its Hamiltonian (\ref{hamiltonian_1}) is only invariant under $\rho_1$.\ In this case Birkhoff used a double shooting argument (see \cite[pp.\ 144--146]{frauenfelder}).\
Our moon, however, is close to a direct periodic orbit, and for such orbits Birkhoff's analytical proof does not work.\ In 1915 Birkhoff \cite{birkhoff} conjectured that the retrograde periodic orbit spans a disk-like global surface of section.\ By applying Brouwer's translation theorem to its Poincaré return map, one can find a fixed point which should correspond to a direct periodic orbit.\ This conjecture is still open, and modern methods of symplectic geometry are being developed to answer it.\ We refer to \cite{frauenfelder} for a profound discussion.\
Nevertheless, Birkhoff's shooting method can be applied to find and study periodic orbits numerically.\ For direct periodic orbits, we start shooting upwards directly to the right of the origin.\
Note that in this paper we work with symplectic $(q,p)$-coordinates, hence the initial conditions for Hill's equation in $(q,p)$-coordinates (\ref{equation_of_motion_q_p}) are determined as follows:\
Since we start perpendicularly at Fix($\rho_1$), we need to prescribe the position and momentum at time zero, i.e.\ $q_1(0)$ and $p_2(0)$.\ The velocity $\dot{q}_2(0)$ is determined in the following way.\ Consider the traditional Jacobi integral $\Gamma := -2c$ in $(q,\dot{q})$-coordinates, where $c$ is the energy from the Hamiltonian (\ref{hamiltonian_2}), meaning that
\begin{align} \label{energy_gamma}
\Gamma = \frac{2}{|q|} + 3q_1^2 - \dot{q}_1^2 - \dot{q}_2^2.
\end{align}
If the starting point $q_1(0)$ and the energy $\Gamma$ are given, then the starting velocity $\dot{q}_2(0)$ is given by the energy condition (\ref{energy_gamma}), i.e.\
\begin{align*}
\big( \dot{q}_2(0) \big)^2 = \frac{2}{|q_1(0)|} + 3\big( q_1(0) \big)^2 - \Gamma
\end{align*}
and therefore, in view of the transformation (\ref{equation_of_motion_q_p}), the momentum is given by
$$p_2(0) = \dot{q}_2(0) + q_1(0).$$
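For a numerical implementation of this shooting start, the initial data can be assembled as in the following sketch.\ It assumes the standard planar Hill equations in velocity form, $\ddot{q}_1 = 2\dot{q}_2 + 3q_1 - q_1/|q|^3$ and $\ddot{q}_2 = -2\dot{q}_1 - q_2/|q|^3$, which conserve (\ref{energy_gamma}); the sample values and tolerances are illustrative, not data from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hill_planar(t, y):
    # y = (q1, q2, q1dot, q2dot); planar Hill equations in velocity form.
    q1, q2, v1, v2 = y
    r3 = (q1**2 + q2**2) ** 1.5
    return [v1, v2, 2*v2 + 3*q1 - q1/r3, -2*v1 - q2/r3]

def shooting_start(q1_0, Gamma):
    # Perpendicular start on Fix(rho1): q2(0) = 0, q1dot(0) = 0.
    v2_sq = 2.0/abs(q1_0) + 3.0*q1_0**2 - Gamma
    assert v2_sq > 0, "energy condition not satisfiable"
    v2 = np.sqrt(v2_sq)       # shooting upwards
    p2 = v2 + q1_0            # momentum p2(0) = q2dot(0) + q1(0)
    return [q1_0, 0.0, 0.0, v2], p2

def jacobi(y):
    q1, q2, v1, v2 = y
    return 2.0/np.hypot(q1, q2) + 3.0*q1**2 - v1**2 - v2**2

Gamma = 4.0
y0, p2_0 = shooting_start(q1_0=0.3, Gamma=Gamma)
sol = solve_ivp(hill_planar, (0.0, 1.0), y0, rtol=1e-10, atol=1e-12)
# The Jacobi integral (energy_gamma) is preserved along the numerical flow.
drift = max(abs(jacobi(sol.y[:, k]) - Gamma) for k in range(sol.y.shape[1]))
print(f"p2(0) = {p2_0:.6f}, max energy drift = {drift:.2e}")
```

In an actual shooting run one would vary $q_1(0)$ at fixed $\Gamma$ until the trajectory hits the $q_2$-axis perpendicularly.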
\textbf{Spatial periodic orbits:}\ Different combinations of the fixed point sets of the four linear anti-symplectic symmetries give various doubly- and simply-symmetric spatial periodic orbits.\ For their initial conditions we consider the Jacobi integral $\Gamma$ in $(q,\dot{q})$-coordinates in the spatial case, which is
\begin{align} \label{energy_gamma_spatial}
\Gamma = \frac{2}{|q|} + 3q_1^2 - q_3^2 - \dot{q}_1^2 - \dot{q}_2^2 - \dot{q}_3^2.
\end{align}
The initial conditions for position and velocity are determined as follows:\
\begin{table}[H] \centering \footnotesize
\setlength{\extrarowheight}{.5em}
\begin{tabular}{c|c|c}
starting perpendicularly & given $\Gamma$ and & by energy condition (\ref{energy_gamma_spatial})\\
\hline $\text{Fix}(\overline{\rho_1}) = \{ (q_1,0,0,0,p_2,p_3) \}$ & $q_1(0)$, $\dot{q}_2(0)$ & $\big( \dot{q}_3(0) \big)^2 = \frac{2}{|q_1(0)|} + 3\big( q_1(0) \big)^2 - \big( \dot{q}_2(0) \big)^2 - \Gamma$ \\
$\text{Fix}(\rho_1) = \{ (q_1,0,q_3,0,p_2,0) \}$ & $q_1(0)$, $q_3(0)$ & $\big( \dot{q}_2(0) \big)^2 = 2 / \sqrt{ \big( q_1(0) \big)^2 + \big( q_3(0) \big)^2 } + 3\big( q_1(0) \big)^2 - \big( q_3(0) \big)^2 - \Gamma$ \\
$\text{Fix}(\overline{\rho_2}) = \{ (0,q_2,0,p_1,0,p_3) \}$ & $q_2(0)$, $\dot{q}_1(0)$ & $\big( \dot{q}_3(0) \big)^2 = \frac{2}{|q_2(0)|} - \big( \dot{q}_1(0) \big)^2 - \Gamma$ \\
$\text{Fix}(\rho_2) = \{ (0,q_2,q_3,p_1,0,0) \}$ & $q_2(0)$, $q_3(0)$ & $\big( \dot{q}_1(0) \big)^2 = 2 / \sqrt{ \big( q_2(0) \big)^2 + \big( q_3(0) \big)^2 } - \big( q_3(0) \big)^2 - \Gamma$
\end{tabular}
\caption{Initial conditions for symmetric spatial periodic orbits}
\label{initial_conditions_spatial}
\end{table}
\noindent
In view of the transformations (\ref{equation_of_motion_q_p}), the $p$-coordinates are obtained by
$$ p_1(0) = \dot{q}_1(0) - q_2(0),\quad p_2(0) = \dot{q}_2(0) + q_1(0),\quad p_3(0) = \dot{q}_3(0). $$
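The rows of Table \ref{initial_conditions_spatial} translate directly into code.\ The following sketch (illustrative values; function names are ad hoc) implements the first row, a perpendicular start on $\text{Fix}(\overline{\rho_1})$, together with the conversion to momenta.

```python
import numpy as np

def start_on_fix_rho1bar(q1_0, q2dot_0, Gamma):
    """First row of the table: perpendicular start on Fix(rho1bar),
    i.e. q(0) = (q1_0, 0, 0) and q1dot(0) = 0."""
    q3dot_sq = 2.0/abs(q1_0) + 3.0*q1_0**2 - q2dot_0**2 - Gamma
    assert q3dot_sq > 0, "energy condition not satisfiable"
    q = np.array([q1_0, 0.0, 0.0])
    qdot = np.array([0.0, q2dot_0, np.sqrt(q3dot_sq)])
    # p1 = q1dot - q2,  p2 = q2dot + q1,  p3 = q3dot
    p = np.array([qdot[0] - q[1], qdot[1] + q[0], qdot[2]])
    return q, qdot, p

def jacobi_spatial(q, qdot):
    return 2.0/np.linalg.norm(q) + 3.0*q[0]**2 - q[2]**2 - qdot @ qdot

q, qdot, p = start_on_fix_rho1bar(q1_0=0.25, q2dot_0=1.0, Gamma=3.0)
# Sanity checks: energy condition holds and (q, p) lies in Fix(rho1bar).
assert np.isclose(jacobi_spatial(q, qdot), 3.0)
assert q[1] == q[2] == p[0] == 0.0
```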
\subsection{On monodromy \& reduced monodromy for symmetric periodic orbits}
\label{sec:6.5}
In this subsection we apply the discussion from Section \ref{sec:4} to planar and spatial symmetric periodic orbits.\
\subsubsection{For planar ones \& the synodic, anomalistic and draconitic periods}
\label{sec:6.5.1}
Recall from (\ref{symplectic_involution}) and (\ref{involution_2}) the symplectic and anti-symplectic symmetries
$$ \sigma(q,p) = (q_1,q_2,-q_3,p_1,p_2,-p_3),\quad \rho_1 (q,p) = (q_1,-q_2,q_3,-p_1,p_2,-p_3), $$
which commute.\ In the following we focus on planar periodic orbits $q:=(q,p) \in \text{Fix}(\sigma)$ which are symmetric with respect to $\rho_1$.\ Note that the discussion of symmetric planar periodic orbits with respect to $\rho_2$ from (\ref{involution_2}) works analogously.\
Let $T_q$ be the first return time.\ We denote $q_0:=\big( q(0),p(0) \big)$ and after choosing a Lagrangian basis with respect to the symplectic decomposition into planar and spatial components,
$$ T_{q_0} T^*\mathbb{R}^3 = T_{q_0} \text{Fix}(\sigma) \oplus E_{-1}\big(d \sigma (q_0) \big), $$
the monodromy is of the form
$$ d\varphi _H ^{T_q} (q_0) = \begin{pmatrix}
A_p & 0\\
0 & A_s
\end{pmatrix}, $$
where
$$ A_p \colon T_{q_0} \text{Fix}(\sigma) \to T_{q_0} \text{Fix}(\sigma),\quad A_s \colon E_{-1}\big( d\sigma(q_0) \big) \to E_{-1}\big( d\sigma(q_0) \big) $$
and
$$ A_p \in \text{Sp}^{\rho_0}(2),\quad A_s \in \text{Sp}^{\rho_0}(1) .$$
For the planar monodromy $A_p$ we can ignore the spatial components and for $A_s$ the planar ones, such that in view of
$$ \text{Fix}(\rho_1) = \{ (q_1,0,q_3,0,p_2,0) \},\quad E_{-1} \big( d\rho_1 (q_0) \big) = \{ (0,q_2,0,p_1,0,p_3) \} ,$$
Lagrangian basis vectors for $ T_{q_0} \text{Fix}(\sigma)$ and $E_{-1}\big(d \sigma (q_0) \big)$ are given by
$$ \left\{ \begin{pmatrix}
1\\0\\0\\0
\end{pmatrix}, \begin{pmatrix}
0\\0\\0\\1
\end{pmatrix}, \begin{pmatrix}
0\\0\\-1\\0
\end{pmatrix}, \begin{pmatrix}
0\\1\\0\\0
\end{pmatrix} \right\} \text{ and } \left\{ \begin{pmatrix}
1\\0
\end{pmatrix}, \begin{pmatrix}
0\\-1
\end{pmatrix} \right\}. $$
For computing the matrices $A_p$ and $A_s$ we consider the linearized equations in $(q,p)$-coordinates for planar periodic orbits (\ref{linearized_q_p_fix}).\ In particular, for computing $A_s$, which is of the form
$$ A_s = \begin{pmatrix}
\tilde{a} & \tilde{b}\\
\tilde{c} & \tilde{a}
\end{pmatrix},\quad \tilde{a}^2 - \tilde{b}\tilde{c} = 1, $$
we only need to consider the third line of (\ref{linearized_q_p_fix}).\ This implies the system of linear equations of the form
$$ d\varphi_H^{T_q}\big( (1,0) \big) = \begin{pmatrix}
\tilde{a}\\
\tilde{c}
\end{pmatrix} = \tilde{a} \begin{pmatrix}
1\\
0
\end{pmatrix} - \tilde{c} \begin{pmatrix}
0\\
-1
\end{pmatrix},\quad d\varphi_H^{T_q}\big((0,-1)\big) = \begin{pmatrix}
\tilde{b}\\
\tilde{a}
\end{pmatrix} = \tilde{b} \begin{pmatrix}
1\\
0
\end{pmatrix} - \tilde{a} \begin{pmatrix}
0\\
-1
\end{pmatrix}. $$
For the planar monodromy $A_p$ we consider the first two lines of (\ref{linearized_q_p_fix}) and obtain four vectors by the linearized flow to each of the four Lagrangian basis vectors.\ Since these four vectors are linear combinations of the four Lagrangian basis vectors, the coefficients of these linear combinations give the matrix coefficients of $A_p \in \text{Sp}^{\rho_0}(2)$.\
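Extracting the matrix coefficients of $A_p$ from the images of the Lagrangian basis vectors amounts to solving a linear system.\ A minimal sketch, with a made-up stand-in for the linearized flow in place of integrating (\ref{linearized_q_p_fix}):

```python
import numpy as np

# Columns of B: the four Lagrangian basis vectors for T_{q0} Fix(sigma).
B = np.column_stack([(1., 0., 0., 0.), (0., 0., 0., 1.),
                     (0., 0., -1., 0.), (0., 1., 0., 0.)])

# Stand-in for the planar part of the linearized flow d(phi_H^{T_q});
# in practice its action comes from integrating the variational equations.
S = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.3, 1.0, 0.0],
              [0.2, 0.0, 0.5, 1.0]])

W = S @ B                    # images of the Lagrangian basis vectors
A_p = np.linalg.solve(B, W)  # coefficients w.r.t. the Lagrangian basis

# Each image is recovered as the corresponding linear combination.
assert np.allclose(B @ A_p, W)
```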
For the planar reduced monodromy $\overline{A}_p \in \text{Sp}^{\rho_0}(1)$, restricting to the energy hypersurface $\Sigma$ means linearizing the planar Jacobi integral $\Gamma$ from (\ref{energy_gamma}) in $(q,p)$-coordinates, which is equivalent to
$$ \Delta p_2(0) = \frac{1}{p_2(0) - q_1(0)} \bigg( \frac{- q_1(0)}{|q|^3} + 2q_1(0) + p_2(0) \bigg)\Delta q_1(0). $$
Therefore $\Delta q_1(0)$ and $\Delta p_2(0)$ determine each other such that a first basis vector of
$$ T_{q_0} \text{Fix}(\sigma|_{\Sigma}) / (\text{ker}\omega_{q_0} | _{T_{q_0}\Sigma}) $$
is given by
$$ \tilde{v}:= (\Delta q_1(0),0,0,\Delta p_2(0)). $$
The Hamiltonian vector field $X_H \vert _{\Sigma} (q_0) \in E_{-1}\big(d \rho_1 (q_0) \big) = \langle (0,0,-1,0),(0,1,0,0) \rangle_{\mathbb{R}} $ is of the form
$$ \big( 0, \Delta q_2(0), \Delta p_1(0), 0 \big) = \big( 0, \dot{q}_2(0), \dot{p}_1(0), 0 \big), $$
which is determined by the initial conditions and Hill's equation in $(q,p)$-coordinates (\ref{equation_of_motion_q_p}).\ For simplicity we choose $\Delta q_1(0)=1$ such that the second basis vector is given by
$$\tilde{w} := (0,0,-1,0). $$
Note that
$$T_{q_0} \text{Fix}(\sigma|_{\Sigma}) = \langle \tilde{v},\tilde{w},X_H \vert _{\Sigma}(q_0) \rangle_{\mathbb{R}}$$
and
$$\omega_{q_0} (\tilde{v},\tilde{w})=1,\quad \omega_{q_0} \big( \tilde{v} , X_H \vert _{\Sigma}(q_0) \big) = 0,\quad \omega_{q_0}\big( \tilde{w} , X_H \vert _{\Sigma}(q_0) \big) = 0.$$
By writing the two vectors
$$ d \varphi_H^{T_q} (\tilde{v}),\quad d \varphi_H^{T_q} (\tilde{w}) $$
as a linear combination of $\tilde{v},\tilde{w}$ and $X_H \vert _{\Sigma} (q_0)$, the coefficients of $\tilde{v}$ and $\tilde{w}$ give the planar reduced monodromy $\overline{A}_p \in \text{Sp}^{\rho_0}(1)$, i.e.\ the reduced monodromy is of the form
$$ \overline{d\varphi _H ^{T_q} | _{\Sigma} (q_0)} = \begin{pmatrix}
\overline{A}_p & 0\\
0 & A_s
\end{pmatrix} = \begin{pmatrix}
a & b & 0 & 0\\
c & a & 0 & 0\\
0 & 0 & \tilde{a} & \tilde{b}\\
0 & 0 & \tilde{c} & \tilde{a}
\end{pmatrix},$$
where $a^2 - bc = 1$ as well.\ Recall from Proposition \ref{prop_signature} that the signatures of $b,c,\tilde{b}$ and $\tilde{c}$ are invariant under the choice of the Lagrangian basis.\ Moreover, for a symmetric planar periodic orbit we separate the elliptic case and each of the four hyperbolic subcases into planar and spatial parts.\
In both elliptic cases, i.e.\ the Floquet multipliers are of the form $e^{\pm \text{i}\theta}$ resp. $e^{\pm \text{i}\vartheta}$, we denote by
$$ \varphi_p := \begin{cases}
\theta, & \text{if } b < 0,\\
2\pi - \theta, & \text{if } b >0,
\end{cases}\quad\quad \varphi_s := \begin{cases}
\vartheta, & \text{if } \tilde{b} < 0,\\
2\pi - \vartheta, & \text{if } \tilde{b} >0,
\end{cases} $$
the \textbf{mean planar angle of rotation} and the \textbf{mean spatial angle of rotation}, respectively.\
In particular, we have
$$ \overline{d \varphi _H^{T_q} | _{\Sigma} (q_0) } = \begin{pmatrix}
\overline{A}_p & 0 \\
0 & A_s
\end{pmatrix} \sim \begin{pmatrix}
\cos \varphi_p & - \sin \varphi_p & 0 & 0\\
\sin \varphi_p & \cos \varphi_p & 0 & 0\\
0 & 0 & \cos \varphi_s & - \sin \varphi_s\\
0 & 0 & \sin \varphi_s & \cos \varphi_s
\end{pmatrix}. $$
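As a sanity check, one can verify numerically that such a block rotation is symplectic with respect to the standard form on each conjugate pair and that its Floquet multipliers lie on the unit circle; the sample angles below are arbitrary.

```python
import numpy as np

def rot(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

phi_p, phi_s = 0.7, 2.1   # sample mean angles of rotation (arbitrary)
Z = np.zeros((2, 2))
M = np.block([[rot(phi_p), Z], [Z, rot(phi_s)]])

# Standard symplectic form on each 2x2 conjugate pair of basis vectors.
J2 = np.array([[0., 1.], [-1., 0.]])
J = np.block([[J2, Z], [Z, J2]])

assert np.allclose(M.T @ J @ M, J)   # M is symplectic
# The Floquet multipliers e^{+-i phi_p}, e^{+-i phi_s} lie on the unit circle.
assert np.allclose(np.abs(np.linalg.eigvals(M)), 1.0)
```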
With respect to the symplectic splitting the transversal Conley--Zehnder index $\mu_{CZ}$ of $q$ is additive, meaning that
\begin{align*}
\mu_{CZ} = \mu_{CZ}^p + \mu_{CZ}^s,
\end{align*}
where the planar and spatial indices are the Conley--Zehnder indices of any path of symplectic matrices generated by the planar and spatial part of the linearized flow, respectively.\ Furthermore, recall from Section \ref{sec:conley_zehnder_index} that the Conley--Zehnder index measures the number of complete rotations of neighbouring orbits during $T_q$.\
We denote by $\theta(t)$ and $\vartheta(t)$ the respective rotation functions for $t \in [0,T_q]$.\ Note that $\theta(T_q) \equiv \varphi_p \text{ mod } 2 \pi$ and $\vartheta(T_q) \equiv \varphi_s \text{ mod } 2 \pi$.\ Moreover, let $\text{rot}^p(q)$ resp.\ $\text{rot}^s(q)$ be the number of complete rotations of neighbouring orbits during $T_q$.\ As in (\ref{index_1}) and (\ref{index_2}),
$$ \text{rot}^p(q) := \lfloor \theta(T_q)/(2\pi) \rfloor \in \mathbb{Z}, \quad \text{rot}^s(q) := \lfloor \vartheta(T_q)/(2\pi) \rfloor \in \mathbb{Z} $$
and
$$ \mu_{CZ}^p = 2 \lfloor \theta(T_q)/(2\pi) \rfloor + 1,\quad \mu_{CZ}^s = 2 \lfloor \vartheta(T_q)/(2\pi) \rfloor + 1. $$
The \textbf{synodic period} $T_s$ is related to $T_q$ in the following way.\ Recall that $T_s$ is the number of days from full moon to full moon.\ The lunarity $m$ of a moon is defined as the average number of synodic months during a complete rotation of the planet around the sun, which is 365.25 days for the Earth.\ In the Hill lunar problem, since the earth and sun are fixed, we have $m= 2 \pi / T_q$, hence
\begin{align}\label{synodic_period}
T_s = \frac{365.25}{m} = \frac{365.25 \cdot T_q}{2 \pi}.
\end{align}
The \textbf{anomalistic period} $T_a$ is defined as the mean time (in days) for a complete rotation of the planar neighbouring orbits during $T_q$.\ The mean angular velocity $v_a$ of the anomaly is given by $ \theta(T_q) / T_s$ and we obtain $T_a$ by $v_a \cdot T_a = 2 \pi$, i.e.\
\begin{align} \label{anomalistic_period}
T_a = \frac{2 \pi \cdot T_s}{( \mu_{CZ}^p - 1 )\cdot \pi + \varphi_p}.
\end{align}
In a similar way the \textbf{draconitic period} $T_d$ is defined as the mean time (in days) for a complete rotation of the spatial neighbouring orbits during $T_q$, and
\begin{align} \label{draconitic_period}
T_d = \frac{2 \pi \cdot T_s}{( \mu_{CZ}^s - 1 )\cdot \pi + \varphi_s}.
\end{align}
Note that $T_a$ and $T_d$ only exist in elliptic cases.\ In addition, we note that the bounded component of the regularized Hill lunar problem has a contact structure for energies below the critical value, where the Conley--Zehnder indices of closed characteristics are nonnegative (see \cite{lee} for details).\
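The formulas (\ref{synodic_period})--(\ref{draconitic_period}) combine into a small routine, using $\theta(T_q) = (\mu_{CZ}^p - 1)\pi + \varphi_p$ and the analogous identity for $\vartheta(T_q)$.\ The input values below are illustrative, not data from the text.

```python
import numpy as np

def periods(T_q, mu_p, phi_p, mu_s, phi_s):
    """Synodic, anomalistic and draconitic periods (in days) from the first
    return time T_q, the Conley-Zehnder indices and the mean angles."""
    T_syn = 365.25 * T_q / (2.0 * np.pi)     # synodic period
    theta = (mu_p - 1) * np.pi + phi_p       # total planar rotation
    vartheta = (mu_s - 1) * np.pi + phi_s    # total spatial rotation
    return T_syn, 2*np.pi*T_syn/theta, 2*np.pi*T_syn/vartheta

# Example with lunarity m = 13, i.e. T_q = 2*pi/13 (all values illustrative).
T_syn, T_anom, T_drac = periods(T_q=2*np.pi/13, mu_p=3, phi_p=np.pi/2,
                                mu_s=3, phi_s=np.pi/3)
assert np.isclose(T_syn, 365.25 / 13)
assert np.isclose(T_anom, 0.8 * T_syn)        # theta = 5*pi/2
assert np.isclose(T_drac, 6.0 / 7.0 * T_syn)  # vartheta = 7*pi/3
```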
\subsubsection{For spatial periodic orbits}
\label{sec:6.5.2}
We recall the four linear anti-symplectic symmetries
\begin{align*}
&\overline{\rho_1}(q,p) = (q_1,-q_2,-q_3,-p_1,p_2,p_3),\quad &\rho_1(q,p) = (q_1,-q_2,q_3,-p_1,p_2,-p_3),\\
&\overline{\rho_2}(q,p) = (-q_1,q_2,-q_3,p_1,-p_2,p_3),\quad &\rho_2(q,p) = (-q_1,q_2,q_3,p_1,-p_2,-p_3),
\end{align*}
and consider spatial periodic orbits $q:=(q,p)$ which are symmetric with respect to $\overline{\rho_1}$.\ Note that for any other linear anti-symplectic symmetry the following procedure is analogous.\ Let $q_0 := \big( q(0),p(0) \big)$ and $T_q$ be its first return time.\ In view of
$$ q_0 \in \text{Fix}(\overline{\rho_1}) = \{ (q_1,0,0,0,p_2,p_3) \},\quad E_{-1} \big( d\overline{\rho_1} (q_0) \big) = \{ (0,q_2,q_3,p_1,0,0) \} ,$$
a Lagrangian basis is given by
$$ \left\{ \begin{pmatrix}
1\\0\\0\\0\\0\\0
\end{pmatrix}, \begin{pmatrix}
0\\0\\0\\0\\1\\0
\end{pmatrix}, \begin{pmatrix}
0\\0\\0\\0\\0\\1
\end{pmatrix}, \begin{pmatrix}
0\\0\\0\\-1\\0\\0
\end{pmatrix}, \begin{pmatrix}
0\\1\\0\\0\\0\\0
\end{pmatrix}, \begin{pmatrix}
0\\0\\1\\0\\0\\0
\end{pmatrix} \right\} .$$
By using this basis the monodromy becomes an element of $\text{Sp}^{\rho_0}(3)$.\ By fixing the energy, the linearization of the spatial Jacobi integral $\Gamma$ from (\ref{energy_gamma_spatial}) in $(q,p)$-coordinates is equivalent to
$$ \Delta p_3(0) = \frac{1}{p_3(0)} \Bigg( \bigg( \frac{- q_1(0)}{|q|^3} + 2q_1(0) + p_2(0) \bigg)\Delta q_1(0) + \big(q_1(0) - p_2(0)\big) \Delta p_2(0) \Bigg) , $$
which means that any two of $\Delta p_3(0), \Delta q_1(0)$ and $\Delta p_2(0)$ determine the third.\ Hence two basis vectors of the five-dimensional vector space $T_{q_0} \Sigma$ are of the form
$$ \big( \Delta q_1(0), 0, 0, 0, \Delta p_2(0), \Delta p_3(0) \big). $$
For simplicity we choose
$$ \tilde{v}_1 = \big( 1, 0, 0, 0, 0, \Delta p_3(0) \big),\quad \tilde{v}_2 = \big( 0, 0, 0, 0, 1, \Delta p_3(0) \big). $$
The Hamiltonian vector field $X_H |_{\Sigma}(q_0) \in E_{-1} \big( d\overline{\rho_1} (q_0) \big)$, which is spanned by the last three Lagrangian basis vectors, is of the form
$$ \big( 0,\Delta q_2(0), \Delta q_3(0), \Delta p_1(0),0,0 \big) = \big( 0, \dot{q}_2(0), \dot{q}_3(0), \dot{p}_1(0), 0, 0 \big),$$
determined by the initial conditions and Hill's equation in $(q,p)$-coordinates (\ref{equation_of_motion_q_p}).\ By choosing
$$ \tilde{w}_1 = (0,0,0,-1,0,0),\quad \tilde{w}_2 = (0,1,0,0,0,0) $$
we obtain
$$ T_{q_0}\Sigma = \langle \tilde{v}_1,\tilde{v}_2,\tilde{w}_1,\tilde{w}_2,X_H |_{\Sigma}(q_0)\rangle_{\mathbb{R}},\quad \omega_{q_0}(\tilde{v}_i,\tilde{w}_j) = \delta_{ij},\quad i,j=1,2. $$
Moreover, the reduced monodromy is a symplectic matrix in $\text{Sp}^{\rho_0}(2)$, meaning that it is of the form
$$ \overline{d \varphi_H^{T_q} (q_0)} = \begin{pmatrix}
A & B\\
C & A^T
\end{pmatrix},\quad B, C, CA, AB \text{ are symmetric and } A^2 - BC = I_2.$$
\section{Proof of Theorem \ref{theorem_a} and \ref{theorem_b}}
\label{sec:6}
\subsection{Regularized energy hypersurface}
\textbf{Planar case:}\ In order to deal with collision orbits, for a given energy $c<0$, we regularize the Hamiltonian of the planar Hill lunar problem (\ref{hamiltonian_hill}) by
\begin{align*}
K_c (-p,q) &= \frac{1}{2} \bigg( - \frac{|q|}{2c} \Big( H\big( - \frac{q}{2c}, \sqrt{-2c}p \big) - c \Big) + 1 \bigg)^2 - \frac{1}{2}\\
&= \frac{1}{2} \bigg( \frac{1}{2} \big( 1 + |p|^2 \big) + \frac{p_1 q_2 - p_2 q_1}{(-2c)^{\frac{3}{2}}} + \frac{- q_1^2 + \frac{1}{2}q_2^2}{(-2c)^3} \bigg)^2 |q|^2 - \frac{1}{2} \nonumber
\end{align*}
which satisfies $\Sigma_c:=H^{-1}(c) = K_c^{-1}(0)$.\ Note that the transformation consists of the switch map
\begin{align} \label{switch_map}
(-p,q) \mapsto (q,p),
\end{align}
which is a linear symplectomorphism of $\mathbb{R}^4$, and the map $ (q,p) \mapsto ( - \frac{q}{2c} , \sqrt{-2c}\, p )$, which is conformally symplectic with conformal factor $\frac{1}{\sqrt{-2c}}$.\ A conformal factor does not change the dynamics; it only gives rise to a reparametrization of the Hamiltonian flow.\ Moreover, the original angular momentum (\ref{angular_momentum_original}) is invariant under the switch map (\ref{switch_map}), i.e.\
\begin{align} \label{angular_momentum_invariant}
L(q,p) = p_1 q_2 - p_2 q_1 = L(-p,q).
\end{align}
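Note that the conformal factor can be verified directly: writing the second map as $\psi(q,p) = \big( - \frac{q}{2c}, \sqrt{-2c}\, p \big)$, we compute
$$ \psi^* ( dp \wedge dq ) = d \big( \sqrt{-2c}\, p \big) \wedge d \Big( - \frac{q}{2c} \Big) = \frac{\sqrt{-2c}}{-2c}\, dp \wedge dq = \frac{1}{\sqrt{-2c}}\, dp \wedge dq. $$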
Now the roles of the positions $q$ and momenta $p$ are switched and $-p$ corresponds to the base coordinate and $q$ to the fiber coordinate.\ By thinking of the two-sphere as $S^2 = \mathbb{R}^2 \cup \{ \infty \}$ via stereographic projection from the north pole $N$, we obtain the inclusion
\begin{align} \label{inclusion}
\iota \colon T^* \mathbb{R}^2 \hookrightarrow T^* S^2,
\end{align}
where by adding $N$ the closure of the regularized energy hypersurface $\overline{\iota(\Sigma_c)}$ is a subset of $T^* S^2$.\
Furthermore, for every $c<0$, $K_c$ smoothly extends to a Hamiltonian on $T^* S^2$, and by abuse of notation we denote this canonical smooth extension by the same letter.\ To study the limit case $c \to - \infty$, we replace the energy parameter by $c = \frac{-1}{2r^{2/3}}$, for a homotopy variable $r \in (0,\infty)$.\ Hence we obtain
\begin{align} \label{regularized_hamiltonian}
K_r(-p,q) := \frac{1}{2} \Big( \frac{1}{2} \big( 1 + |p|^2 \big) + (p_1 q_2 - p_2 q_1)r + (- q_1^2 + \frac{1}{2}q_2^2)r^2 \Big)^2 |q|^2 - \frac{1}{2}.
\end{align}
The Hamiltonian (\ref{regularized_hamiltonian}) smoothly extends to $r=0$, where it becomes just the regularized Kepler Hamiltonian
\begin{align*}
K_0 (-p,q) = \frac{1}{2} \Big( \frac{1}{2}(1 + |p|^2) \Big)^2|q|^2 - \frac{1}{2},
\end{align*}
which is the kinetic energy of the ``momentum'' $q$ with respect to the round metric on $S^2$, i.e.\ $ g_{ij} = \frac{4 \delta_{ij}}{( 1 + |p|^2 )^2}$.\ In general, the regularization of collision orbits for the Kepler problem in $\mathbb{R}^n$ goes back to Moser \cite{moser}: the Kepler flow is the geodesic flow on $S^n$ in the chart given by stereographic projection from $N$.\ To regularize the flow means to add the fiber over $N$.\ Going through $N$ (the point at infinity) corresponds precisely to the collisions, where $p$ explodes.\ In the original picture, a regularized collision orbit moves into the mass at the origin and, as if falling onto a trampoline, bounces back at collision.\ Hence these bounce orbits become periodic, in accordance with the periodic geodesic flow on $S^2$.\
Note that if $K(q,p)$ is the Hamiltonian of the Kepler problem, then on the energy hypersurface $K_0^{-1}(0) = K^{-1}(0)$ the Hamiltonian vector fields are related by
\begin{align} \label{two_hamilt_2}
X_{K_0}|_{K^{-1}(0)}(p,q) = |q| X_K |_{K^{-1}(0)}(-q,p),
\end{align}
see for instance \cite[pp.\ 47--48]{frauenfelder}.\
Now by adding $N$, the closure of the image of the regularized energy hypersurface $\Sigma := K_0^{-1}(0)$ under the embedding (\ref{inclusion}), i.e.\
$$ \overline{\iota(\Sigma)} \subset T^*S^2 $$
corresponds to the space of great circles on $S^2$ parametrized by arc length, i.e.\ the space of oriented simple closed geodesics on $S^2$.\ Topologically, it is the set of pairs $(-p,q)$ formed by base points $-p$ and unit co-vectors $q$ at $-p$ (see Figure \ref{figure_geodesic}), meaning that it is diffeomorphic to the unit cotangent bundle of $S^2$,
$$ \overline{\iota(\Sigma)} \cong S^*S^2 = \{ (-p,q) : |q|_{-p} = 1 \}, $$
where $|q|_{-p}$ is the length of $q$ with respect to the round co-metric on $S^2$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm]
\clip(-2.2,-2.05) rectangle (2.2,2.05);
\draw [line width=1pt] (0,0) circle (2cm);
\draw [dashed,line width=1pt] (2,0) arc (0:180:2 and 0.5);
\draw [decoration={markings, mark=at position 0.5 with {\arrow{>}}}, postaction={decorate},line width=1pt] (-2,0) arc (180:360:2 and 0.5);
\draw [rotate=60,color=blue,dashed,line width=1pt] (2,0) arc (0:180:2 and 0.5);
\draw [rotate=60,color=blue,decoration={markings, mark=at position 0.8 with {\arrow{>}}}, postaction={decorate},line width=1pt] (-2,0) arc (180:360:2 and 0.5);
\draw [color=blue](-0.5,-1.4) node[anchor=north west] {$-p$};
\draw [color=blue](0.2,-0.8) node[anchor=north west] {$q$};
\draw [->,color=blue,line width=1pt] (-0.46,-1.46) -- (0.3,-0.85);
\begin{scriptsize}
\draw [fill=blue] (-0.46,-1.46) circle (2pt);
\end{scriptsize}
\end{tikzpicture}
\caption{The space of parametrized great circles on $S^2$}
\label{figure_geodesic}
\end{figure}
\noindent
By identifying it with the unit tangent bundle, denoted by $SS^2$, it is diffeomorphic to
$$ \overline{\iota(\Sigma)} \cong S^*S^2 \cong SS^2 \cong SO(3) \cong S^3 / \mathbb{Z}_2 \cong \mathbb{R}P^3,$$
see \cite[p.\ 195]{bott_tu}.\\
\\
\textbf{Spatial case:}\ By the same regularization of the Hamiltonian for the spatial Hill lunar problem (\ref{hamiltonian_hill}), the closure of the regularized energy hypersurface is the unit cotangent bundle $S^*S^3$ which is the space of parametrized great circles (parametrized simple closed geodesics) on $S^3$.\ To see that the tangent bundle of $S^3$ is trivial, we identify $\mathbb{R}^4$ with the set of quaternions $\mathbb{H}$ via the canonical bijection
$$ \mathbb{R}^4 \to \mathbb{H},\quad (a,b,c,d) \mapsto a + bi + cj + dk ,$$
with the identities
$$ i^2 = j^2 = k^2 = ijk = -1,\quad ij=k, jk=i, ki=j,\quad ji=-k, kj=-i, ik=-j. $$
Consider $S^3$ as the unit quaternions, i.e.\
$$ S^3 = \{ a + bi + cj + dk \mid \sqrt{ a^2 + b^2 + c^2 + d^2 } = 1 \}. $$
For $q = a + bi + cj + dk \in S^3$ we have
\begin{align*}
(a,b,c,d) \mapsto (-b,a,-d,c), \quad \quad \quad & q \mapsto iq,\\
(a,b,c,d) \mapsto (-c,d,a,-b), \quad \quad \quad & q \mapsto jq,\\
(a,b,c,d) \mapsto (-d,-c,b,a), \quad \quad \quad & q \mapsto kq.
\end{align*}
These three resulting vectors are orthogonal to each other and to $(a,b,c,d)$, hence they are three linearly independent, non-vanishing tangent vector fields on $S^3$ and form an orthonormal basis of $T_qS^3$.\ Therefore $TS^3 \cong S^3 \times \mathbb{R}^3 $ and $SS^3 \cong S^3 \times S^2$.\
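The orthogonality can also be seen quaternionically: with respect to the real inner product $\langle u,v \rangle = \text{Re}( u \bar{v} )$ on $\mathbb{H} \cong \mathbb{R}^4$, for $q \in S^3$ we have
$$ \langle uq, vq \rangle = \text{Re}\big( uq\, \overline{vq} \big) = \text{Re}\big( u q \bar{q} \bar{v} \big) = |q|^2 \,\text{Re}( u \bar{v} ) = \langle u, v \rangle, $$
so right multiplication by $q$ is a linear isometry sending the orthonormal frame $(1,i,j,k)$ to $(q,iq,jq,kq)$.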
\subsection{From the Rabinowitz action functional to Morse--Bott}
\subsubsection{Rabinowitz action functional}
\textbf{Planar case:}\ We variationally interpret parametrized periodic orbits of $K_r$ (\ref{regularized_hamiltonian}) as critical points of the Rabinowitz action functional given by
\begin{align} \label{rabinowitz_action_functional}
\mathscr{A}_r := \mathscr{A}^{K_r} \colon \mathcal{L} \times \mathbb{R}_{>0} \to \mathbb{R},\quad (\gamma,\tau ) \mapsto \int_{S^1} \lambda\big( \dot{\gamma}(t) \big) dt - \tau \int_{S^1} K_r\big(\gamma(t)\big) dt,
\end{align}
where $\mathcal{L} = C^{\infty}(S^1 = \mathbb{R} / \mathbb{Z},T^* S^2)$ is the free loop space of $T^* S^2$, $\lambda$ is the canonical Liouville one-form and $\tau$ can be regarded as the period of $\gamma$.\ This functional can be thought of as the Lagrange multiplier functional of the area functional for the constraint given by the mean value of $K_r$.\
For the critical points of $\mathscr{A}_r$ let $(\gamma, \tau) \in \mathcal{L} \times \mathbb{R}_{>0}$ and let $( \hat{\gamma}_1, \hat{\tau}_1 ), (\hat{\gamma}_2, \hat{\tau}_2) \in T_{(\gamma,\tau)} (\mathcal{L} \times \mathbb{R}_{>0})$ be two tangent vectors.\ For the gradient of $\mathscr{A}_r$ we denote by $g$ the metric on $\mathcal{L} \times \mathbb{R}_{>0}$ defined as the product of the $L^2$-metric on $\mathcal{L}$ and the standard metric on $\mathbb{R}_{>0}$.\ For the $L^2$-metric we choose a smooth family $\{J_t\}_{t \in S^1}$ of $\omega$-compatible almost complex structures on $T^* S^2$, meaning that each $J_t$ is an automorphism of $T(T^* S^2)$ with $J_t^2 = - \text{id}$ such that $\omega ( \cdot , J_t \cdot) $ defines a Riemannian metric.\ Then the metric $g$ is given by
$$ g \big( (\hat{\gamma}_1, \hat{\tau}_1 ), (\hat{\gamma}_2, \hat{\tau}_2 ) \big) = \langle \hat{\gamma}_1 , \hat{\gamma}_2 \rangle_{L^2} + \hat{\tau}_1 \hat{\tau}_2 = \int_{S^1} \omega \big( \hat{\gamma}_1 , J_t(\gamma) \hat{\gamma}_2 \big) dt + \hat{\tau}_1 \hat{\tau}_2$$
and with respect to this metric the gradient of $\mathscr{A}_r$ reads
\begin{align} \label{gradient_rabinowitz}
\nabla_g \mathscr{A}_r (\gamma,\tau) = \begin{pmatrix}
- J_t (\gamma) \left( \dot{\gamma}(t) - \tau X_{K_r} \big(\gamma(t)\big) \right)\\
- \int_{S^1} K_r\big(\gamma (t) \big) dt
\end{pmatrix} .
\end{align}
Hence critical points of $\mathscr{A}_r$ consist of pairs $(\gamma,\tau)$ which are solutions of
$$ \begin{cases}
\dot{\gamma}(t) = \tau X_{K_r}\big( \gamma(t) \big)\\
0 = \int_{S^1} K_r\big( \gamma(t) \big) dt
\end{cases} \Leftrightarrow\quad \begin{cases}
\dot{\gamma}(t) = \tau X_{K_r}\big( \gamma(t) \big)\\
0 = K_r\big(\gamma(t)\big),
\end{cases} $$
where the equivalence follows from preservation of energy.\ In other words, the critical points of $\mathscr{A}_r$ are parametrized periodic orbits of $X_{K_r}$ of period $\tau$, i.e.\ of the form $\gamma(t) = \varphi_{K_r}^{\tau t}\big( \gamma(0) \big)$, $t \in \mathbb{R}$, on the fixed energy level set $K_r^{-1}(0)$.\
The circle $S^1$ acts on $\mathcal{L}$ by rotating the loop $\gamma$.\ This $S^1$-action extends to an action on $\mathcal{L} \times \mathbb{R}_{>0}$, where $S^1$ acts trivially on $\mathbb{R}_{>0}$, that is
\begin{align} \label{circle_action}
s_* \big( \gamma(t), \tau \big) = \big( \gamma (t + s), \tau \big),\quad t,s \in S^1.
\end{align}
Note that for every $r$ the Rabinowitz action functional $\mathscr{A}_r$ is invariant under (\ref{circle_action}).\ By the Morse Lemma (see for instance \cite[pp.\ 6--8]{milnor}) non-degenerate critical points are isolated.\ Since the critical points of $\mathscr{A}_r$ come in $S^1$-families, $\mathscr{A}_r$ is never a Morse function, i.e.\ the kernel of its Hessian at a critical point is never just the zero vector space.\ Further, the geodesic flow on $S^2$ is invariant under rotation, thus closed geodesics are not isolated.\
If $\gamma : S^1 \to S^* S^2$ is a parametrized periodic orbit of $X_{K_0}$ corresponding to a simple closed geodesic on $S^2$, then $ (\gamma,2\pi) \in \text{crit}\mathscr{A}_0$.\ We denote the space of all these pairs by
\begin{align} \label{c_critical_set}
C:= \left\{ (\gamma, 2\pi) \mid \dot{\gamma}(t) = 2\pi X_{K_0}\big( \gamma(t) \big), 0 = K_0\big( \gamma(t) \big) \right\} \cong \mathbb{R}P^3 \subset \text{crit}\mathscr{A}_0,
\end{align}
on which the circle $S^1$ also acts by time shift.\
We next study the derivative of $\mathscr{A}_r$ with respect to $r$ at $r=0$ at $( \gamma, 2 \pi ) = (-p,q,2\pi) \in C$.\ For this we compute
$$ \frac{\partial K_r}{\partial r} \bigg|_{r=0} (-p,q) = \frac{1}{2} \big(1 + |p|^2\big) (p_1q_2 - p_2q_1)|q|^2 = \sqrt{\big(2 K_0(-p,q) + 1\big)} |q| (p_1 q_2 - p_2 q_1). $$
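Here the second equality uses the identity
$$ 2 K_0(-p,q) + 1 = \Big( \frac{1}{2} \big( 1 + |p|^2 \big) \Big)^2 |q|^2, $$
which is immediate from the definition of $K_0$.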
Recall from (\ref{angular_momentum_invariant}) that $p_1 q_2 - p_2 q_1 = L(q,p) = L(-p,q)$.\ For $( \gamma, 2 \pi ) \in C \subset \text{crit}(\mathscr{A}_0)$ we have that $K_0(\gamma)=0$ and since $L$ is constant along periodic orbits of the Kepler problem we obtain
\begin{align} \label{partial_derivative_functional}
\mathring{\mathscr{A}}_0(\gamma,2\pi) := \frac{\partial \mathscr{A}_r}{\partial r} \bigg|_{r=0} (\gamma,2\pi) = - 2 \pi \int_{S^1} \frac{\partial K_r}{\partial r} \bigg|_{r=0} \big(\gamma(t)\big) dt &= - 2\pi \int_{S^1} |q|L\big( \gamma(t) \big) dt \nonumber\\
&= - 2\pi L \big( \gamma(0) \big) \int_{S^1} |q| dt \nonumber\\
&= - 2\pi L (-p,q).
\end{align}
Note that by (\ref{two_hamilt_2}) the last integral equals the ratio between the period of a Kepler ellipse of energy $0$ before and after regularization; since both periods are $2\pi$, the integral equals $1$, independently of the orbit.\ In the \textbf{spatial case}, the analogous procedure yields a function of the same form as (\ref{partial_derivative_functional}), which we denote by $-L \colon SS^3 \to \mathbb{R},\ (-p,q) \mapsto q_1 p_2 - q_2p_1$.\
\subsubsection{Critical points of $-L$ on $SS^2$ and $SS^3$ and their Morse--Bott indices}
\label{sec:explicit_computation}
Recall that if $x$ is a critical point of a Morse--Bott function $f$, then the Morse--Bott index $\text{ind}_f(x)$ of $f$ at $x$ is the number of negative eigenvalues of the Hessian of $f$ at $x$.\\
\\
In view of (\ref{partial_derivative_functional}) we wish to compute the critical points of
\begin{align} \label{function_critical_points}
-L \colon SS^2 \to \mathbb{R},\quad (-p,q) \mapsto q_1p_2 - q_2p_1
\end{align}
and of
\begin{align} \label{function_critical_points_spatial}
-L \colon SS^3 \to \mathbb{R},\quad (-p,q) \mapsto q_1p_2 - q_2p_1.
\end{align}
\begin{lemma1}[Planar case]
\textit{The critical points of (\ref{function_critical_points}) are exactly two circles over the equator moving in opposite directions (see Figure \ref{figure_geodesic}), and (\ref{function_critical_points}) is Morse--Bott along them.\ Furthermore, one is a maximum and the other a minimum, and
\begin{center}
\begin{tabular}{c|c|c}
\textnormal{crit}($-L$) & Morse--Bott index & corresponding periodic orbit in the original picture\\
\hline maximum & 2 & direct\\
minimum & 0 & retrograde
\end{tabular}
\end{center}}
\end{lemma1}
\begin{proof}
In view of the switch (\ref{switch_map}), let $S^2 = \{ (-p_1,-p_2,-p_3) \mid p_1^2 + p_2^2 + p_3^2 = 1 \} \subset \mathbb{R}^3$ be the unit sphere.\ Then view $T^*S^2$ as a subset of $\mathbb{R}^6$ with coordinates $(-p_1,-p_2,-p_3,q_1,q_2,q_3)$, and the unit tangent bundle $SS^2 \subset TS^2$ as the closure of the regularized energy hypersurface
$$ \mathbb{R}P^3 \cong S^*S^2 \cong SS^2 = \{ (-p,q) \in \mathbb{R}^3 \times \mathbb{R}^3 \mid \| p \|^2 = \| q \|^2 = 1, \langle - p , q \rangle = 0 \} \subset \mathbb{R}^6. $$
To find the critical points of (\ref{function_critical_points}) we parametrize $S^2$ by
$$ \Phi \colon [0,2\pi) \times (-\frac{\pi}{2},\frac{\pi}{2}) \to \mathbb{R}^3,\quad (\varphi,\theta) \mapsto \begin{pmatrix}
\cos \theta \cos \varphi\\
\cos \theta \sin \varphi\\
\sin \theta
\end{pmatrix}, $$
hence the north and south poles are not parametrized.\ This does not matter, since the same calculation in a chart including the north and south poles shows that there are no critical points of (\ref{function_critical_points}) on the two circles over the poles.\ Now, the tangent plane $T_{-p}S^2$ at a point $-p \in S^2$ is spanned by the two orthogonal vectors
$$ \frac{\partial \Phi}{\partial \varphi} = \begin{pmatrix}
- \cos \theta \sin \varphi\\
\cos \theta \cos \varphi\\
0
\end{pmatrix},\quad \frac{\partial \Phi}{\partial \theta} = \begin{pmatrix}
- \sin \theta \cos \varphi\\
- \sin \theta \sin \varphi\\
\cos \theta
\end{pmatrix} $$
with $|| \frac{\partial \Phi}{\partial \varphi} || = \cos \theta$ and $|| \frac{\partial \Phi}{\partial \theta} || = 1$, whence we can take the orthonormal basis $\left\{ (\cos \theta)^{-1} \frac{\partial \Phi}{\partial \varphi}, \frac{\partial \Phi}{\partial \theta} \right\}$.\ Any unit tangent vector $q$ at $-p$ is given by
$$ q = \cos \alpha (\cos \theta)^{-1} \frac{\partial \Phi}{\partial \varphi} + \sin \alpha \frac{\partial \Phi}{\partial \theta} = \cos \alpha \begin{pmatrix}
- \sin \varphi\\
\cos \varphi\\
0
\end{pmatrix} + \sin \alpha \begin{pmatrix}
- \sin \theta \cos \varphi\\
- \sin \theta \sin \varphi\\
\cos \theta
\end{pmatrix} , $$
for $\alpha \in [0,2\pi),$ and we can parametrize $SS^2$ away from the circles at the two poles by $(\varphi,\theta,\alpha)$.\ With this we readily find that (\ref{function_critical_points}) becomes
$$ -L(\varphi,\theta,\alpha) = \cos \theta \cos \alpha. $$
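Explicitly, with $p_1 = - \cos \theta \cos \varphi$, $p_2 = - \cos \theta \sin \varphi$ and the components of $q$ from above, the mixed terms cancel:
\begin{align*}
q_1 p_2 - q_2 p_1 &= \cos \theta \big( \cos \alpha \sin^2 \varphi + \sin \alpha \sin \theta \cos \varphi \sin \varphi + \cos \alpha \cos^2 \varphi - \sin \alpha \sin \theta \sin \varphi \cos \varphi \big)\\
&= \cos \theta \cos \alpha.
\end{align*}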
Hence the gradient $\nabla (-L) = ( 0 , - \sin \theta \cos \alpha , - \cos \theta \sin \alpha )^{\mathrm{T}} $ vanishes exactly for $\theta = 0$ and $ \alpha \in \{ 0, \pi \}$, i.e.\ at the circles over the equator.\ For each $\varphi$, the two critical points
\begin{align*}
C_1 := &( \cos \varphi, \sin \varphi, 0, -\sin \varphi, \cos \varphi, 0 ),\quad -L( \varphi,0,0 ) = 1\\
C_2 := &( \cos \varphi, \sin \varphi, 0, \sin \varphi, -\cos \varphi, 0 ),\quad -L( \varphi,0,\pi ) = -1\nonumber
\end{align*}
give the initial conditions at $\varphi_0 := \varphi \in [0,2\pi)$.\ The corresponding trajectories in the loop space $C^{\infty}(S^1,SS^2)$ are, respectively,
\begin{align*}
\gamma_1(t) &= \big( \cos(\varphi_0 + t),\sin (\varphi_0 + t),0,- \sin (\varphi_0 + t), \cos (\varphi_0 + t),0 \big)\\
\gamma_2(t) &= \big( \cos(\varphi_0 - t),\sin (\varphi_0 - t),0, \sin (\varphi_0 - t), - \cos (\varphi_0 - t),0 \big),\nonumber
\end{align*}
for $t \in [0,2\pi)$.\ The Hessian of $-L$ at these two circles $\gamma_1(t)$ and $\gamma_2(t)$ is, respectively:
$$ \text{H}_{-L}(\varphi_0 + t,0,0) = \begin{pmatrix}
0 & 0 & 0\\
0 & -1 & 0\\
0 & 0 & -1
\end{pmatrix},\quad \text{H}_{-L}(\varphi_0 - t,0,\pi) = \begin{pmatrix}
0 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{pmatrix}. $$
Therefore $-L$ is Morse--Bott along these two circles.\ In other words, the Hessian of $-L$ is non-degenerate in the directions normal to the two critical circles inside $SS^2 \cong \mathbb{R}P^3$.\ The maximum is attained along $\gamma_1(t)$ and the minimum along $\gamma_2(t)$, and their Morse--Bott indices are
\begin{align*}
\text{ind}_{-L}\big( \gamma_1(t) \big) = 2,\quad \text{ind}_{-L}\big( \gamma_2(t) \big) = 0.
\end{align*}
Note that $\gamma_1(t)$ and $\gamma_2(t)$ are the two circles over the equator moving in opposite directions of Figure \ref{figure_geodesic}.\ To see them in the original picture, we consider the stereographic projection
$$ \sigma_N \colon S^2 \setminus \{N\} \to \mathbb{R}^2,\quad (x_1,x_2,x_3) \mapsto \bigg( \frac{x_1}{1-x_3}, \frac{x_2}{1 - x_3} \bigg), $$
and its cotangent lift
$$ T^* \big(S^2 \setminus \{N\} \big) \to T^*\mathbb{R}^2,\quad (x,y) \mapsto \big( \sigma_N(x), ( d \sigma_N(x)^{\mathrm{T}} )^{-1} (y) \big), $$
where $ (d \sigma_N(x)^{\mathrm{T}} )^{-1} (y) = \big(y_1(1-x_3) + x_1y_3, y_2(1-x_3) + x_2y_3 \big) $.\ Together with the switch (\ref{switch_map}) we obtain that the maximum $\gamma_1$ corresponds to
$$ \big( q_1(t),q_2(t),p_1(t),p_2(t) \big) = \big( -\sin(\varphi_0 + t), \cos (\varphi_0 + t) , - \cos (\varphi_0 + t), - \sin (\varphi_0 + t) \big), $$
which rotates in a direct motion, and the minimum $\gamma_2$ to
$$ \big( q_1(t),q_2(t),p_1(t),p_2(t) \big) = \big( \sin(\varphi_0 - t), - \cos (\varphi_0 - t) , - \cos (\varphi_0 - t), - \sin (\varphi_0 - t) \big), $$
which rotates in a retrograde motion.\ Their angular momentum is $-1$ and $1$, respectively.\
\end{proof}
\begin{lemma1}[Spatial case]
\textit{The critical points of (\ref{function_critical_points_spatial}) are exactly four circles, and (\ref{function_critical_points_spatial}) is Morse--Bott along them.\ There is one maximum, two saddle points and one minimum.\ The maximum and minimum are the two critical points inherited from the planar case, and the two saddle points are two circles moving through the north pole.\ Moreover,
\begin{center}
\begin{tabular}{c|c|c}
\textnormal{crit}($-L$) & Morse--Bott index & corresponding periodic orbit in the original picture\\
\hline maximum & 4 & direct (planar)\\
saddle point & 2 & collision orbit bouncing back (spatial)\\
saddle point & 2 & collision orbit bouncing back (spatial)\\
minimum & 0 & retrograde (planar)
\end{tabular}
\end{center}}
\end{lemma1}
\begin{proof}
By using a parametrization of $S^3$ where the north and south poles are not parametrized, one obtains two critical points inherited from the planar problem, namely
\begin{align*}
C_1 := &( \cos \varphi, \sin \varphi, 0, 0, -\sin \varphi, \cos \varphi, 0, 0 ),\quad\quad -L( C_1 ) = 1, &\text{ind}_{-L}(C_1) &= 4,\\
C_2 := &( \cos \varphi, \sin \varphi, 0, 0, \sin \varphi, -\cos \varphi, 0, 0 ),\quad\quad -L( C_2 ) = -1, &\text{ind}_{-L}(C_2) &= 0.\nonumber
\end{align*}
Therefore we parametrize $S^3$ by
$$ \Phi \colon [0,2\pi) \times (-\frac{\pi}{2},\frac{\pi}{2}) \times (-\frac{\pi}{2},\frac{\pi}{2}) \to \mathbb{R}^4,\quad (\varphi,\theta_1,\theta_2) \mapsto \begin{pmatrix}
\sin \theta_2\\
\sin \theta_1 \cos \theta_2\\
\cos \theta_1 \cos \theta_2 \cos \varphi\\
\cos \theta_1 \cos \theta_2 \sin \varphi
\end{pmatrix}. $$
At a point $-p \in S^3$ we choose $ \left\{ (\cos \theta_1)^{-1} (\cos \theta_2)^{-1} \frac{\partial \Phi}{\partial \varphi}, (\cos \theta_2)^{-1} \frac{\partial \Phi}{\partial \theta_1}, \frac{\partial \Phi}{\partial \theta_2} \right\} $ as an orthonormal basis of the tangent space $T_{-p}S^3$.\ Then every unit tangent vector $q$ at $-p$ is written as
$$ q = \cos \alpha_1 \cos \alpha_2 \begin{pmatrix}
0\\
0\\
- \sin \varphi\\
\cos \varphi
\end{pmatrix} + \cos \alpha_1 \sin \alpha_2 \begin{pmatrix}
0\\
\cos \theta_1 \\
- \sin \theta_1 \cos \varphi\\
- \sin \theta_1 \sin \varphi
\end{pmatrix} + \sin \alpha_1 \begin{pmatrix}
\cos \theta_2\\
-\sin \theta_1 \sin \theta_2\\
- \cos \theta_1 \sin \theta_2 \cos \varphi\\
- \cos \theta_1 \sin \theta_2 \sin \varphi
\end{pmatrix}, $$
for $\alpha_1 \in (-\frac{\pi}{2},\frac{\pi}{2})$, $\alpha_2 \in [0,2\pi)$.\ In this way we parametrize $SS^3$ by $(\varphi,\theta_1,\theta_2,\alpha_1,\alpha_2)$, away from the circles on $S^3$ lying in the plane of the first two coordinates, where the parametrization of $S^3$ degenerates.\ Then (\ref{function_critical_points_spatial}) becomes
$$ - L (\varphi,\theta_1,\theta_2,\alpha_1,\alpha_2) = \cos \theta_1 \sin \theta_2 \cos \alpha_1 \sin \alpha_2 - \sin \theta_1 \sin \alpha_1. $$
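Indeed, from the parametrization one reads off $q_1 = \sin \alpha_1 \cos \theta_2$, $q_2 = \cos \alpha_1 \sin \alpha_2 \cos \theta_1 - \sin \alpha_1 \sin \theta_1 \sin \theta_2$, $p_1 = - \sin \theta_2$ and $p_2 = - \sin \theta_1 \cos \theta_2$, so that
$$ q_1 p_2 - q_2 p_1 = \cos \theta_1 \sin \theta_2 \cos \alpha_1 \sin \alpha_2 - \sin \theta_1 \sin \alpha_1 \big( \cos^2 \theta_2 + \sin^2 \theta_2 \big) = \cos \theta_1 \sin \theta_2 \cos \alpha_1 \sin \alpha_2 - \sin \theta_1 \sin \alpha_1 . $$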
For the critical points, the third component of the gradient
$$ \nabla(-L) = \begin{pmatrix}
0\\
- \sin \theta_1 \sin \theta_2 \cos \alpha_1 \sin \alpha_2 - \cos \theta_1 \sin \alpha_1\\
\cos \theta_1 \cos \theta_2 \cos \alpha_1 \sin \alpha_2\\
- \cos \theta_1 \sin \theta_2 \sin \alpha_1 \sin \alpha_2 - \sin \theta_1 \cos \alpha_1\\
\cos \theta_1 \sin \theta_2 \cos \alpha_1 \cos \alpha_2
\end{pmatrix} $$
implies that $\sin\alpha_2=0$, hence $\alpha_2 \in \{0,\pi\}$.\ Moreover, by the fifth component we obtain that $\sin \theta_2=0$, i.e.\ $\theta_2=0$.\ The two cases for $\alpha_2$ together with the second and fourth components give the further solutions $\alpha_1=0$ and $\theta_1=0$.\ Therefore, in addition, for every $\varphi$ there are two critical points
\begin{align*}
C_3 := &(0,0, \cos \varphi, \sin \varphi,0,0, -\sin \varphi, \cos \varphi ),\quad\quad -L( \varphi,0,0,0,0 ) = 0,\\
C_4 := &(0,0, \cos \varphi, \sin \varphi, 0, 0, \sin \varphi, -\cos \varphi ),\quad\quad -L( \varphi,0,0,0,\pi ) = 0,\nonumber
\end{align*}
giving the initial conditions at $\varphi_0 := \varphi \in [0,2\pi)$ for the corresponding trajectories in the loop space $C^{\infty}(S^1,SS^3)$, which are respectively
\begin{align*}
\gamma_3(t) &= \big(0,0, \cos(\varphi_0 + t),\sin (\varphi_0 + t),0,0,- \sin (\varphi_0 + t), \cos (\varphi_0 + t) \big)\\
\gamma_4(t) &= \big(0,0, \cos(\varphi_0 - t),\sin (\varphi_0 - t),0,0, \sin (\varphi_0 - t), - \cos (\varphi_0 - t) \big).\nonumber
\end{align*}
The Hessian at $(\varphi_0+t,0,0,0,0)$ and $(\varphi_0-t,0,0,0,\pi)$ is given respectively by
$$ \text{H}_{-L} = \begin{pmatrix}
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & -1 & 0\\
0 & 0 & 0 & 0 & 1\\
0 & -1 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0
\end{pmatrix},\quad \text{H}_{-L} = \begin{pmatrix}
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & -1 & 0\\
0 & 0 & 0 & 0 & -1\\
0 & -1 & 0 & 0 & 0\\
0 & 0 & -1 & 0 & 0
\end{pmatrix}.$$
Hence $-L$ is Morse--Bott along $\gamma_3$ and $\gamma_4$, and in each case, besides the zero eigenvalue, the eigenvalues $1$ and $-1$ each appear with multiplicity two; thus $\gamma_3(t)$ and $\gamma_4(t)$ are saddle points with indices
\begin{align*}
\text{ind}_{-L}\big( \gamma_3(t) \big) = \text{ind}_{-L}\big( \gamma_4(t) \big) = 2.
\end{align*}
Moreover, these two circles move through the north pole at $\varphi_0 \pm t = \frac{\pi}{2}$ in opposite directions, i.e.\ in the original picture, these two great circles correspond to collision orbits that move into the mass at the origin along the spatial $q_3$-axis and bounce back.\ More precisely, analogous to the planar case, by using the stereographic projection from the north pole
$$ \sigma_N \colon S^3 \setminus \{N\} \to \mathbb{R}^3,\quad (x_1,x_2,x_3,x_4) \mapsto \bigg( \frac{x_1}{1-x_4}, \frac{x_2}{1 - x_4}, \frac{x_3}{1 - x_4} \bigg), $$
its cotangent lift
$$ T^* \big(S^3 \setminus \{N\} \big) \to T^*\mathbb{R}^3,\quad (x,y) \mapsto \big( \sigma_N(x), y_1(1-x_4) + x_1y_4, y_2(1-x_4) + x_2y_4, y_3(1-x_4) + x_3y_4 \big) $$
and the switch (\ref{switch_map}), $\gamma_3$ and $\gamma_4$ become
\begin{align*}
\bigg( 0,0,1 - \sin (\varphi_0 + t),0,0,\frac{\cos (\varphi_0 + t)}{\sin (\varphi_0 + t)-1} \bigg), \quad \bigg( 0,0,\sin (\varphi_0 - t)-1,0,0,\frac{\cos (\varphi_0 - t)}{\sin (\varphi_0 - t)-1} \bigg),
\end{align*}
respectively.\ The first one collides with the origin from above and the second one from below, see Figure \ref{figure_spatial_collision_orbits}.\ Their angular momentum is zero.\
\end{proof}
\subsection{Morse case and bifurcation of family $g$ and $f$ from the geodesic flow}
\label{sec:7.3}
In each case, the $S^1$-action obtained by rotating the loop corresponds to the zero eigenvalue of the Hessian.\ By taking the quotient by this action we obtain the space of oriented unparametrized great circles on $S^2$ (planar case), and the same is true for all $n \geq 2$, see for instance \cite{besse}, \cite{klingenberg}.\ Denote by $\widetilde{S^n_+}$ the space of oriented unparametrized great circles on $S^n$.\ Since an oriented great circle on $S^n$ corresponds to an oriented 2-plane through the origin of $\mathbb{R}^{n+1}$, $\widetilde{S^n_+}$ is diffeomorphic to the oriented Grassmannian $G^+(2,n+1)$ of oriented 2-planes through the origin of $\mathbb{R}^{n+1}$.\ For instance, $\widetilde{S^2_+}$ is clearly $S^2$ by associating to an oriented 2-plane its unit normal vector.\ The space $\widetilde{S^3_+}$, which is important for the spatial problem, is diffeomorphic to $S^2 \times S^2$, see \cite[p.\ 55]{besse}.\ We summarize:
\begin{align} \label{unparametrized_space}
\widetilde{S^n_+} \cong G^+(2,n+1) = \begin{cases}
S^2, & n=2\\
S^2 \times S^2, & n=3
\end{cases}
\end{align}
\textbf{Planar case:}\ By the invariance of the functionals $\mathscr{A}_r$ under the circle action (\ref{circle_action}), they induce action functionals
$$ \overline{\mathscr{A}_r} : ( \mathcal{L} \times \mathbb{R}_{>0} )/S^1 \to \mathbb{R}. $$
Note that the quotient $( \mathcal{L} \times \mathbb{R}_{>0} )/S^1$ is an orbifold.\ Recall the definition (\ref{c_critical_set}) of the critical component $C$.\ Since $(\gamma,2\pi) \in C$ corresponds to a simple closed geodesic on $S^2$, the $S^1$-action on $C \subset \mathcal{L} \times \mathbb{R}_{>0}$ is free, hence the quotient space
$$C/S^1 \subset \text{crit}(\overline{\mathscr{A}_0})$$
is a submanifold of $(\mathcal{L} \times \mathbb{R}_{>0})/S^1$.\ The quotient $C/S^1$ is diffeomorphic to $\mathbb{R}P^3/S^1 \cong S^2$, which is the space of oriented unparametrized great circles on $S^2$.\ Hence in view of (\ref{partial_derivative_functional}) the restriction
$$\mathring{\overline{\mathscr{A}_0}} | _{S^2} = \overline{ - 2 \pi L(-p,q) } | _{S^2} \colon S^2 \to \mathbb{R}$$
to the quotient space $C / S^1 \cong S^2$ is a Morse function on $S^2$ having one maximum and one minimum, with Morse indices 2 and 0, respectively.\ In particular, $\mathring{\overline{\mathscr{A}_0}} | _{S^2}$ coincides, up to a diffeomorphism of $S^2$, with the standard height function on $S^2$.\ Hence two families of periodic orbits bifurcate like the critical points on $S^2$.\ The family bifurcating from the direct circular orbit (corresponding to the maximum) is referred to as \textbf{direct periodic orbits}, and the family bifurcating from the circular retrograde orbit (corresponding to the minimum) as \textbf{retrograde periodic orbits}.\ We call them \textbf{family $g$ and $f$}, respectively.\ This bifurcation is generated by a small perturbation of $\mathring{\overline{\mathscr{A}_0}} | _{S^2}$ on $C/S^1 \cong S^2$, as follows from the implicit function theorem.\ More precisely, there exists $\varepsilon > 0$, an open neighborhood $U$ of $S^2$ in $( \mathcal{L} \times \mathbb{R}_{>0} )/S^1$ and a smooth function
\begin{align} \label{small_perturbation}
f : \text{crit} ( \mathring{\overline{\mathscr{A}_0}} | _{S^2} ) \times [0,\varepsilon) \to U
\end{align}
with the following two properties:
\begin{itemize}
\item[i)] If $\iota : \text{crit} ( \mathring{\overline{\mathscr{A}_0}} | _{S^2} ) \to U $ is the inclusion, then $ f(\cdot,0) = \iota $.
\item[ii)] For every $\tilde{r} \in (0, \varepsilon)$ the restriction $\mathring{\overline{\mathscr{A}_{\tilde{r}}}} | _U$ is Morse and $ \text{crit}( \mathring{\overline{\mathscr{A}_{\tilde{r}}}} | _U ) = \text{im}( f(\cdot,\tilde{r}) ).$
\end{itemize}
\begin{figure}[H]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm,scale=1.3]
\clip(-6.2,-5.9) rectangle (6.5,2.75);
\draw [dashed,color=blue,line width=1pt] (-0.5,0) arc (0:180:0.7 and 0.15);
\draw [decoration={markings, mark=at position 0.5 with {\arrow{<}}}, postaction={decorate},color=blue,line width=1pt] (-1.9,0) arc (180:360:0.7 and 0.15);
\draw [color=blue,line width=1pt] (-0.2,1.75) arc (0:360:0.9 and 0.15);
\draw[rounded corners=20pt,color=blue,line width=1pt](-1.9,0)--(-2.4,0.6)--(-2,1.75);
\draw[rounded corners=20pt,color=blue,line width=1pt](-0.5,0)--(0.1,0.6)--(-0.2,1.75);
\draw [dashed,color=red,line width=1pt] (2.1,0) arc (0:180:0.8 and 0.15);
\draw [decoration={markings, mark=at position 0.5 with {\arrow{>}}}, postaction={decorate},color=red,line width=1pt] (0.5,0) arc (180:360:0.8 and 0.15);
\draw [color=red,line width=1pt] (2.8,1.75) arc (0:360:0.8 and 0.15);
\draw[rounded corners=20pt,color=red,line width=1pt](0.5,0)--(1,0.6)--(1.2,1.75);
\draw[rounded corners=20pt,color=red,line width=1pt](2.1,0)--(1.9,0.6)--(2.8,1.75);
\draw [line width=1pt] (-3.2,-0.8) .. controls (-0.3,-0.65) .. (2.6,-0.8);
\draw [line width=1pt] (2.6,-0.8) .. controls (2.75,0) .. (3.1,0.8);
\draw [line width=1pt] (-3.2,-0.8) .. controls (-3,0) .. (-2.6,0.8);
\draw [dash pattern=on 2pt off 2pt,opacity=0] (-2.6,0.8) .. controls (0.25,0.95) ..
coordinate[pos=0.038] (A)
coordinate[pos=0.02] (B)
coordinate[pos=0.046] (C)
coordinate[pos=0.432] (D)
coordinate[pos=0.2] (E)
coordinate[pos=0.445] (F)
coordinate[pos=0.67] (G)
coordinate[pos=0.72] (H)
coordinate[pos=0.53] (I)
coordinate[pos=0.87] (J)
coordinate[pos=0.876] (K)
coordinate[pos=0.65] (L)
coordinate[pos=0.94] (M) (3.1,0.8);
\draw [line width=1pt] (-2.6,0.8) .. controls (B) .. (A);
\draw [dashed,line width=1pt] (C) .. controls (E) .. (D);
\draw [line width=1pt] (F) .. controls (I) .. (G);
\draw [dashed,line width=1pt] (H) .. controls (L) .. (J);
\draw [line width=1pt] (K) .. controls (M) .. (3.1,0.8);
\draw [->,line width=1pt] (3.7,0) -- (3.7,2.5);
\draw (3.7611111111111205,2.55) node[anchor=north west] {$r$};
\draw (3.7611111111111205,1.95) node[anchor=north west] {$\varepsilon$};
\draw (3.7611111111111205,0.2) node[anchor=north west] {$0$};
\draw (2.2,-0.8) node[anchor=north west] {$S^*S^2 \cong \mathbb{R}P^3$};
\draw [->,line width=1pt] (0,-1) -- (0,-1.9);
\draw [line width=1pt] (0,-3.9) circle (1.5cm);
\draw [dashed,line width=1pt] (1.5,-3.9) arc (0:180:1.5 and 0.45);
\draw [line width=1pt] (-1.5,-3.9) arc (180:360:1.5 and 0.45);
\draw (1.4,-2.4) node[anchor=north west] {$\mathbb{R}P^3 / S^1 \cong S^2$};
\draw[color=red] (1.5,2.85) node[anchor=north west] {$\text{direct}$};
\draw[color=red] (1.3,2.45) node[anchor=north west] {$\text{family }g$};
\draw[color=blue] (-1.85,2.85) node[anchor=north west] {$\text{retrograde}$};
\draw[color=blue] (-1.75,2.45) node[anchor=north west] {$\text{family }f$};
\begin{scriptsize}
\draw [fill=black] (3.7,0) circle (2pt);
\draw [fill=black] (3.7,1.75) circle (2pt);
\draw [color=red,fill=red] (0,-2.4) circle (2pt);
\draw [color=blue,fill=blue] (0,-5.4) circle (2pt);
\draw [color=red,line width=1pt] (0,-2.4) to[out=-130,in=-100] (0.2,-2);
\draw [color=blue,line width=1pt] (0,-5.4) to[out=-90,in=-110] (-0.1,-5.8);
\end{scriptsize}
\end{tikzpicture}
\caption{Periodic orbits bifurcate like critical points of the height function on $S^2$}
\label{figure_planar_bifurcation}
\end{figure}
\textbf{Spatial case:}\ Recall from (\ref{unparametrized_space}) that the space of oriented unparametrized great circles on $S^3$ is given by $S^2 \times S^2$.\ Thus the restriction
$$\mathring{\overline{\mathscr{A}_0}} | _{S^2 \times S^2} = \overline{ - 2 \pi L (-p,q) } | _{S^2 \times S^2} \colon S^2 \times S^2 \to \mathbb{R}$$
is a Morse function on $S^2 \times S^2$ having one maximum, two saddle points and one minimum with Morse indices 4, 2, 2 and 0, respectively.\ The planar direct orbit, the two spatial collision orbits and the planar retrograde periodic orbit now bifurcate like these critical points on $S^2 \times S^2$, respectively.\ We illustrate the bifurcation scenario in the spatial problem in Figure \ref{figure_spatial_bifurcation}.\
\begin{figure}[H]
\centering
\definecolor{grgr}{RGB}{33,189,63}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm,scale=1.3]
\clip(-3.5,-6.5) rectangle (8.2,2.9);
\draw [dashed,color=blue,line width=1pt] (-0.5,0) arc (0:180:0.7 and 0.15);
\draw [decoration={markings, mark=at position 0.5 with {\arrow{<}}}, postaction={decorate},color=blue,line width=1pt] (-1.9,0) arc (180:360:0.7 and 0.15);
\draw [color=blue,line width=1pt] (-0.2,1.75) arc (0:360:0.9 and 0.15);
\draw[rounded corners=20pt,color=blue,line width=1pt](-1.9,0)--(-2.4,0.6)--(-2,1.75);
\draw[rounded corners=20pt,color=blue,line width=1pt](-0.5,0)--(0.1,0.6)--(-0.2,1.75);
\draw [dashed,color=red,line width=1pt] (2.1,0) arc (0:180:0.8 and 0.15);
\draw [decoration={markings, mark=at position 0.5 with {\arrow{>}}}, postaction={decorate},color=red,line width=1pt] (0.5,0) arc (180:360:0.8 and 0.15);
\draw [color=red,line width=1pt] (2.8,1.75) arc (0:360:0.8 and 0.15);
\draw[rounded corners=20pt,color=red,line width=1pt](0.5,0)--(1,0.6)--(1.2,1.75);
\draw[rounded corners=20pt,color=red,line width=1pt](2.1,0)--(1.9,0.6)--(2.8,1.75);
\draw [dashed,color=magenta,line width=1pt] (4.6,0) arc (0:180:0.7 and 0.15);
\draw [decoration={markings, mark=at position 0.5 with {\arrow{<}}}, postaction={decorate},color=magenta,line width=1pt] (3.2,0) arc (180:360:0.7 and 0.15);
\draw [color=magenta,line width=1pt] (4.5,1.75) arc (0:360:0.6 and 0.15);
\draw[rounded corners=20pt,color=magenta,line width=1pt](3.2,0)--(3.5,0.6)--(3.3,1.75);
\draw[rounded corners=20pt,color=magenta,line width=1pt](4.6,0)--(5.2,0.9)--(4.5,1.75);
\draw [dashed,color=grgr,line width=1pt] (6.9,0) arc (0:180:0.8 and 0.15);
\draw [decoration={markings, mark=at position 0.5 with {\arrow{>}}}, postaction={decorate},color=grgr,line width=1pt] (5.3,0) arc (180:360:0.8 and 0.15);
\draw [color=grgr,line width=1pt] (6.9,1.75) arc (0:360:0.8 and 0.15);
\draw[rounded corners=20pt,color=grgr,line width=1pt](5.3,0)--(5.6,0.6)--(5.3,1.75);
\draw[rounded corners=20pt,color=grgr,line width=1pt](6.9,0)--(6.6,0.6)--(6.9,1.75);
\draw [line width=1pt] (-3.2,-0.8) .. controls (1.9,-0.6) .. (7,-0.8);
\draw [line width=1pt] (7,-0.8) .. controls (7.15,0) .. (7.5,0.8);
\draw [line width=1pt] (-3.2,-0.8) .. controls (-3,0) .. (-2.6,0.8);
\draw [dash pattern=on 2pt off 2pt,opacity=0] (-2.6,0.8) .. controls (2.45,1) ..
coordinate[pos=0.021] (A)
coordinate[pos=0.01] (B)
coordinate[pos=0.025] (C)
coordinate[pos=0.205] (D)
coordinate[pos=0.1] (E)
coordinate[pos=0.21] (F)
coordinate[pos=0.318] (G)
coordinate[pos=0.25] (H)
coordinate[pos=0.326] (I)
coordinate[pos=0.461] (J)
coordinate[pos=0.4] (K)
coordinate[pos=0.469] (L)
coordinate[pos=0.622] (M)
coordinate[pos=0.55] (N)
coordinate[pos=0.629] (O)
coordinate[pos=0.8] (P)
coordinate[pos=0.71] (Q)
coordinate[pos=0.804] (R)
coordinate[pos=0.844] (S)
coordinate[pos=0.825] (T)
coordinate[pos=0.849] (U)
coordinate[pos=0.942] (V)
coordinate[pos=0.9] (W)
coordinate[pos=0.946] (X)
coordinate[pos=0.97] (Y) (7.5,0.8);
\draw [line width=1pt] (-2.6,0.8) .. controls (B) .. (A);
\draw [dashed,line width=1pt] (C) .. controls (E) .. (D);
\draw [line width=1pt] (F) .. controls (H) .. (G);
\draw [dashed,line width=1pt] (I) .. controls (K) .. (J);
\draw [line width=1pt] (L) .. controls (N) .. (M);
\draw [dashed,line width=1pt] (O) .. controls (Q) .. (P);
\draw [line width=1pt] (R) .. controls (T) .. (S);
\draw [dashed,line width=1pt] (U) .. controls (W) .. (V);
\draw [line width=1pt] (X) .. controls (Y) .. (7.5,0.8);
\draw [->,line width=1pt] (7.8,0) -- (7.8,2.5);
\draw (7.8611111111111205,2.55) node[anchor=north west] {$r$};
\draw (7.8611111111111205,1.95) node[anchor=north west] {$\varepsilon$};
\draw (7.8611111111111205,0.2) node[anchor=north west] {$0$};
\draw (5.1,-0.85) node[anchor=north west] {$S^*S^3 \cong S^3 \times S^2$};
\draw (5.02,-5.3) node[anchor=north west] {$(S^3 \times S^2)/S^1 \cong S^2 \times S^2$};
\draw [->,line width=1pt] (1.85,-1.5) -- (1.85,-2.4);
\draw [line width=1pt] (0,-4.5) circle (1.5cm);
\draw [dashed,line width=1pt] (1.5,-4.5) arc (0:180:1.5 and 0.45);
\draw [line width=1pt] (-1.5,-4.5) arc (180:360:1.5 and 0.45);
\draw [line width=1pt] (3.8,-4.5) circle (1.5cm);
\draw [dashed,line width=1pt] (5.3,-4.5) arc (0:180:1.5 and 0.45);
\draw [line width=1pt] (2.3,-4.5) arc (180:360:1.5 and 0.45);
\draw (1.67,-4.34) node[anchor=north west] {$\times$};
\draw[color=red] (1.35,2.5) node[anchor=north west] {$\text{family } g$};
\draw[color=red] (1.1,2.9) node[anchor=north west] {$\text{planar direct}$};
\draw[color=blue] (-1.75,2.5) node[anchor=north west] {$\text{family }f$};
\draw[color=blue] (-2.25,2.9) node[anchor=north west] {$\text{planar retrograde}$};
\draw[color=magenta] (3.25,2.5) node[anchor=north west] {$\text{collision}$};
\draw[color=magenta] (3.37,2.9) node[anchor=north west] {$\text{spatial}$};
\draw[color=grgr] (5.45,2.5) node[anchor=north west] {$\text{collision}$};
\draw[color=grgr] (5.57,2.9) node[anchor=north west] {$\text{spatial}$};
\begin{scriptsize}
\draw [fill=black] (7.8,0) circle (2pt);
\draw [fill=black] (7.8,1.75) circle (2pt);
\draw [color=magenta,fill=red] (0,-3) circle (2pt);
\draw [color=grgr,fill=blue] (0,-6) circle (2pt);
\draw [color=grgr,fill=red] (3.8,-3) circle (2pt);
\draw [color=magenta,fill=blue] (3.8,-6) circle (2pt);
\draw [color=red,line width=1pt] (0,-3) to[out=-130,in=-100] (0.2,-2.6);
\draw [color=blue,line width=1pt] (0,-6) to[out=-90,in=-110] (-0.1,-6.4);
\draw [color=red,line width=1pt] (3.8,-3) to[out=-60,in=-100] (3.6,-2.6);
\draw [color=blue,line width=1pt] (3.8,-6) to[out=-70,in=-80] (3.9,-6.4);
\draw [color=grgr,line width=1pt] (3.8,-3) to[out=-130,in=-100] (4,-2.6);
\draw [color=magenta,line width=1pt] (3.8,-6) to[out=-90,in=-110] (3.7,-6.4);
\draw [color=magenta,line width=1pt] (0,-3) to[out=-60,in=-100] (-0.2,-2.6);
\draw [color=grgr,line width=1pt] (0,-6) to[out=-70,in=-80] (0.1,-6.4);
\end{scriptsize}
\end{tikzpicture}
\caption{Bifurcation picture in the spatial case}
\label{figure_spatial_bifurcation}
\end{figure}
\subsection{Conley--Zehnder indices and the months $T_a$ and $T_d$}
For the planar case, in our transversally non-degenerate setting the transversal Conley--Zehnder index equals the Morse index (see \cite{weber_1}, \cite{weber_2}).\ By the Morse index theorem (see \cite[§15]{milnor}), the Morse--Bott index of every simple closed geodesic on the Morse--Bott component $C \cong \mathbb{R}P^3$ of $\mathcal{A}_0$ is 1, since such a geodesic passes through exactly one conjugate (antipodal) point.\ After the small perturbation on the quotient space $\mathbb{R}P^3 / S^1 \cong S^2$, the orbits in the families $g$ and $f$ bifurcating from the geodesic flow additionally acquire their respective Morse indices 2 and 0.\ This proves Theorem \ref{theorem_a}.\
For the spatial case, again by the Morse index theorem, we start with Morse--Bott index 2 for every simple closed geodesic on $SS^3 \cong S^3 \times S^2$.\ After the bifurcation scenario, the orbits additionally acquire their respective Morse indices, which implies Theorem \ref{theorem_b}.\
Now we have
\begin{table}[H]\centering
\begin{tabular}{c|c|c|c}
& $\mu_{CZ}$ & $\mu_{CZ}^p$ & $\mu_{CZ}^s$\\
\hline family $g$ of direct periodic orbits (planar)& 6 & 3 & 3\\
\hline family $f$ of retrograde periodic orbits (planar) & 2 & 1 & 1
\end{tabular}
\caption{Conley--Zehnder indices for very low energies}
\label{table_indices}
\end{table}
\noindent
For all sufficiently small energies, up to the (not explicitly determined) $\varepsilon_0 > 0$ provided by the implicit function theorem (\ref{small_perturbation}), we obtain for the anomalistic period $T_a$ (\ref{anomalistic_period}) and the draconitic period $T_d$ (\ref{draconitic_period}) that
\begin{align} \label{periods_low_energies}
T_a = \begin{cases}
\mathlarger{\frac{2 \pi \cdot T_s}{2 \pi + \varphi_p}} & \text{ for family } g\\[1em]
\mathlarger{\frac{2 \pi \cdot T_s}{ \varphi_p}} & \text{ for family } f
\end{cases} \text{ and }\quad T_d = \begin{cases}
\mathlarger{\frac{2 \pi \cdot T_s}{2 \pi + \varphi_s}} & \text{ for family } g\\[1em]
\mathlarger{\frac{2 \pi \cdot T_s}{ \varphi_s}} & \text{ for family }f.
\end{cases}
\end{align}
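To make the formulas (\ref{periods_low_energies}) concrete, here is a minimal numerical sketch in the style of the appendix programs (the helper names are ours, not taken from the appendix):

```python
import math

def anomalistic_period(T_s, phi_p, family):
    # T_a = 2*pi*T_s/(2*pi + phi_p) for family g, and 2*pi*T_s/phi_p for family f
    denom = 2 * math.pi + phi_p if family == "g" else phi_p
    return 2 * math.pi * T_s / denom

def draconitic_period(T_s, phi_s, family):
    # same formula with the spatial rotation angle phi_s in place of phi_p
    denom = 2 * math.pi + phi_s if family == "g" else phi_s
    return 2 * math.pi * T_s / denom
```

For family $g$, letting $\varphi_p \to 0$ gives $T_a \to T_s$, while letting $\varphi_s \to 2\pi$ gives $T_d \to T_s/2$; both limits reappear at the index jumps in Section \ref{sec:other_direct}.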
\subsection{General Hamiltonians}
\begin{Proposition}
The bifurcation picture of this section holds for all Hamiltonians of the form
\begin{align} \label{hamiltonian_general}
T^*\big( \Omega \setminus \{ (0,0,0) \} \big) \to \mathbb{R},\quad (q,p) \mapsto \frac{1}{2}|p|^2 - \frac{\mu}{|q|} + p_1 q_2 - p_2q_1 + V(q),
\end{align}
where $\Omega \subset \mathbb{R}^3$ is an open subset containing the origin, $V \colon \Omega \to \mathbb{R}$ is a smooth function such that $(0,0,0) \in \text{crit}(V)$ and $\mu > 0$.\
\end{Proposition}
\begin{remark}
Hamiltonians of this form consist of the rotating Kepler problem plus velocity-independent forces given by $V(q)$; an example is the Hill lunar problem (\ref{hamiltonian_hill}).\
\end{remark}
\begin{proof}[Proof of Proposition 6.3]
For the proof one needs to verify the same steps as Frauenfelder and van Koert did in \cite[pp.\ 138--140]{frauenfelder} for the analogous statement in the planar case.\ For given $c < 0$, the regularization of (\ref{hamiltonian_general}) is given by
\begin{align*}
K_c (-p,q) &= \frac{1}{2} \bigg( - \frac{|q|}{2c} \Big( H\big( - \frac{q}{2c}, \sqrt{-2c}p \big) - c - V(0) \Big) + \mu \bigg)^2 - \frac{\mu^2}{2}\\
&= \frac{1}{2} \bigg( \frac{1}{2} \big( 1 + |p|^2 \big) + \frac{p_1 q_2 - p_2 q_1}{(-2c)^{\frac{3}{2}}} - \frac{V(-\frac{q}{2c}) - V(0)}{2c} \bigg)^2 |q|^2 - \frac{\mu^2}{2}.
\end{align*}
As in the spatial Hill lunar problem, we change the energy parameter to $c = \frac{-1}{2r^{2/3}}$ and obtain
\begin{align*}
K_r(-p,q) := \frac{1}{2} \Big( \frac{1}{2} \big( 1 + |p|^2 \big) + (p_1 q_2 - p_2 q_1)r + \big( V(qr^{\frac{2}{3}}) - V(0) \big)r^{\frac{2}{3}} \Big)^2 |q|^2 - \frac{\mu^2}{2}.
\end{align*}
The Hamiltonian
\begin{align*}
K_0 (-p,q) = \frac{1}{2} \Big( \frac{1}{2}(1 + |p|^2) \Big)^2|q|^2 - \frac{\mu^2}{2}
\end{align*}
is independent of the choice of $V$, and its flow on $K_0^{-1}(0)$ is the geodesic flow on $S^3$ up to reparametrization.\ The same calculation as in \cite[pp.\ 138--140]{frauenfelder} shows that $K_r$ is twice continuously differentiable in $r \in [0,\infty)$ and
$$ \frac{\partial K_r}{\partial r} \bigg|_{r=0} (-p,q) = \sqrt{2 K_0(-p,q) + \mu^2}\, |q| (p_1 q_2 - p_2 q_1) = \sqrt{2 K_0(-p,q) + \mu^2}\, |q| L(-p,q), $$
which likewise does not depend on $V$.\ Hence, exactly as in the spatial Hill lunar problem, the geodesic flow bifurcates at $r=0$ into the four known periodic orbits.\
\end{proof}
\begin{remark}
The further assumption that the Hamiltonian (\ref{hamiltonian_general}) is invariant under the symplectic involution $\sigma$ from (\ref{symplectic_involution}) implies the splitting of the Conley--Zehnder indices as in Table \ref{table_indices} and the two periods $T_a$ and $T_d$ in (\ref{periods_low_energies}).\
\end{remark}
\section{Local equivariant Rabinowitz-Floer homology}
\label{sec:local_rabinowitz}
\subsection{Local Floer homology and its Euler characteristic}
\label{sec:7.1}
As in (\ref{rabinowitz_action_functional}), we consider for the Hamiltonian of the spatial Hill lunar problem (\ref{hamiltonian_hill}) the Rabinowitz action functional
\begin{align*}
\mathscr{A}^H \colon \mathcal{L} \times \mathbb{R}_{>0} \to \mathbb{R},\quad (\gamma,\tau ) \mapsto \int_{S^1} \lambda\big( \dot{\gamma}(t) \big)\, dt - \tau \int_{S^1} H\big(\gamma(t)\big)\, dt,
\end{align*}
where $\mathcal{L} = C^{\infty}(S^1,T^* \mathbb{R}^3)$ is the free loop space of $M:=T^* \mathbb{R}^3$.\ The Floer homology for this functional was introduced by Cieliebak--Frauenfelder \cite{cieliebak_frauenfelder}.\ In view of (\ref{gradient_rabinowitz}),
$$ \nabla_g \mathscr{A}^H (\gamma,\tau) = \begin{pmatrix}
- J_t (\gamma) \left( \partial_t \gamma - \tau X_H \big(\gamma(t) \big) \right)\\
- \int_{S^1} H \big( \gamma(t) \big) dt
\end{pmatrix} $$
and hence, the critical points of $\mathscr{A}^H$ are parametrized periodic orbits of $X_H$ of period $\tau$ on the energy hypersurface $H^{-1}(0)$.\ Therefore gradient flow lines are maps $(\gamma,\tau) \in C^{\infty}(\mathbb{R},\mathcal{L} \times \mathbb{R}_{>0})$ satisfying
$$ \partial_s \big( \gamma(s), \tau(s) \big) = \nabla_g \mathscr{A}^H \big( \gamma(s), \tau(s) \big), $$
i.e.\ they are solutions $\big(\gamma,\tau\big) \in C^{\infty} (\mathbb{R} \times S^1,M) \times C^{\infty}(\mathbb{R},\mathbb{R}_{>0})$ of the PDE
$$ \begin{cases}
\partial_s \gamma + J_t (\gamma) \big( \partial_t \gamma - \tau X_H( \gamma ) \big) = 0\\
\partial_s \tau + \int_{S^1} H(\gamma) dt = 0.
\end{cases}$$
We now work locally near a family of non-degenerate periodic orbits.\ Since we consider unparametrized periodic orbits, we need to use the local $S^1$-equivariant Rabinowitz-Floer homology, which is the local $S^1$-equivariant Floer homology of the Rabinowitz functional.\ Since a complete construction goes beyond the scope of this article, we defer the details to a later work and only indicate the relevant literature.\ In the non-equivariant and non-Rabinowitz case, without Lagrange multipliers, the local Floer homology is described in \cite{ginzburg}, and the equivariant Floer homology and its local version are described in \cite{ginzburg_gurel}.\ In the same way, one can work out the local version of the $S^1$-equivariant Rabinowitz-Floer homology constructed by Frauenfelder--Schlenk \cite{frauenfelder_schlenk}.\ Note that for every $\tilde{k}$-th cover of a periodic orbit there is a local $S^1$-equivariant Rabinowitz-Floer homology associated to this $\tilde{k}$-th cover.\ Given a family $\tilde{\gamma}$ of non-degenerate unparametrized periodic orbits, we denote by $RFH^{S^1}_*(\tilde{\gamma})$ its local $S^1$-equivariant Rabinowitz-Floer homology and by $\chi (\tilde{\gamma})$ its Euler characteristic
$$ \chi (\tilde{\gamma}) = \sum_{m \in \mathbb{Z}} (-1)^m \dim RFH^{S^1}_m (\tilde{\gamma}). $$
Since the local homology is an invariant, the Euler characteristic is invariant as well.\ The following two examples, which are analogous to the two examples in \cite[p.\ 540]{ginzburg_gurel}, are crucial for our study.\
\begin{example}
Let $\tilde{\gamma}$ be a family of simple closed non-degenerate periodic orbits.\ Then, $RFH^{S^1}_*(\tilde{\gamma})$ has rank one when $*$ equals the Conley--Zehnder index of $\tilde{\gamma}$ and zero otherwise.\
\end{example}
\begin{example} \label{example_7_2}
Assume that an iterated planar periodic orbit $x^{\tilde{k}}$ is non-degenerate for all $\tilde{k} \geq 1$.\ Recall that its Conley--Zehnder index is the sum of a planar and a spatial index, which we denote by $\mu_{CZ}^p(x^{\tilde{k}})$ and $\mu_{CZ}^s(x^{\tilde{k}})$.\ Furthermore, each index is given by the index iteration in Table \ref{table_index_iteration}.\ Let $\mu_{CZ}^p(x)$ and $\mu_{CZ}^s(x)$ be the indices of the underlying simple closed periodic orbit $x$.\ If
$$ \mu_{CZ}^p(x^{\tilde{k}}) \equiv \mu_{CZ}^p(x) \text{ mod } 2 ,\quad \mu_{CZ}^s(x^{\tilde{k}}) \equiv \mu_{CZ}^s(x) \text{ mod } 2, $$
then $x^{\tilde{k}}$ is called a \textbf{good orbit}.\ Otherwise, $x^{\tilde{k}}$ is called a \textbf{bad orbit}.\ All simple closed periodic orbits are good.\ Furthermore, bad orbits occur as $\tilde{k}$-th covers of negative hyperbolic periodic orbits, where $\tilde{k}$ is even.\ If $\tilde{\gamma}$ is a family of good orbits, then
$$ RFH^{S^1}_* (\tilde{\gamma} \, ; \, \mathbb{Q}) = \begin{cases}
\mathbb{Q}, & * = \mu_{CZ}\\
0, & \text{otherwise}.
\end{cases} $$
If $\tilde{\gamma}$ is a family of bad orbits, then
$$ RFH^{S^1}_* (\tilde{\gamma} \, ; \, \mathbb{Q}) = 0 $$
in all degrees.\ Hence bad orbits contribute nothing to the local homology and the Euler characteristic.\ Note that for planar families we have two local $S^1$-equivariant Rabinowitz-Floer homologies with their respective Euler characteristics, namely one viewed in the planar problem and one in the spatial problem.\ We denote them by
$$ pRFH^{S^1}_* (\tilde{\gamma} \, ; \, \mathbb{Q}),\quad \chi_p(\tilde{\gamma}),\qquad \quad sRFH^{S^1}_* (\tilde{\gamma} \, ; \, \mathbb{Q}),\quad \chi_s(\tilde{\gamma}).$$
They differ only by the index shift given by $\mu_{CZ}^s$.\
\end{example}
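The parity criterion and the resulting contribution to the Euler characteristic can be sketched in two small helpers (hypothetical names, assuming the index conventions of this example):

```python
def is_good(mu_p_iter, mu_s_iter, mu_p, mu_s):
    # an iterate is good iff both its planar and spatial index parities
    # agree with those of the underlying simple closed orbit
    return (mu_p_iter - mu_p) % 2 == 0 and (mu_s_iter - mu_s) % 2 == 0

def chi_contribution(mu_cz, good):
    # a family of good orbits contributes (-1)^{mu_CZ}; a bad family contributes 0
    return (-1) ** mu_cz if good else 0
```

A simple closed orbit trivially satisfies the parity criterion, while a family of bad orbits drops out of $\chi$ entirely.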
\begin{remark} \label{remark_8.1}
We denote by $\mathfrak{J}$ the space of $\omega$-compatible almost complex structures and by $\mathfrak{M}$ the space of all Riemannian metrics (inner products), i.e.\ positive definite symmetric bilinear forms.\ There is the natural map
$$ \mathfrak{J} \to \mathfrak{M},\quad J \mapsto g_J := \omega(\cdot, J \cdot). $$
Starting with a Riemannian metric $g$ there is a construction (see the proof of Proposition 2.50 in \cite[pp.\ 63--65]{mcduff_salamon}) of a map
$$ \mathfrak{M} \to \mathfrak{J},\quad g \mapsto J_g $$
such that $J_g$ is an $\omega$-compatible almost complex structure, which depends on $g$, and
$$ J_{g_J} = J,\quad \sigma^* J_g = J_{\sigma^*g}, $$
for every symplectomorphism $\sigma$.\ Furthermore, $ \rho^* J_g = - J_{\rho^*g}$ for every anti-symplectic linear map $\rho$.\ Now let $\rho$ be an anti-symplectic involution.\ Since $g$ is not necessarily invariant under $\rho$, we consider its average $\frac{1}{2} (\rho^* g + g)$, which satisfies $ \rho^* \big( \frac{1}{2} (\rho^* g + g) \big) = \frac{1}{2} (\rho^* g + g)$.\ From this $\rho$-invariant Riemannian metric we obtain an $\omega$-compatible almost complex structure $ J_{\frac{1}{2} (\rho^* g + g)} $ which is anti-invariant under $\rho$.\ This gives rise to a symmetry of the gradient flow lines with respect to $\rho$, meaning that we can reflect them by $\rho$.\
\end{remark}
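The averaging trick is easy to check numerically.\ In the sketch below (an illustration with matrices, where a reflection plays the role of $\rho$ and the pullback $\rho^* g$ is realized as $R^{\top} g R$ for the matrix $R$ of $\rho$), the averaged metric is indeed $\rho$-invariant:

```python
import numpy as np

rng = np.random.default_rng(0)
# a random inner product on R^4: positive definite symmetric matrix
B = rng.standard_normal((4, 4))
g = B @ B.T + 4 * np.eye(4)
# a linear involution R (here a reflection, so R @ R is the identity)
R = np.diag([1.0, -1.0, 1.0, -1.0])
pullback = lambda M, h: M.T @ h @ M            # matrix form of the pullback
g_avg = 0.5 * (pullback(R, g) + g)             # the average (rho^* g + g)/2
assert np.allclose(pullback(R, g_avg), g_avg)  # averaged metric is invariant
```

The same computation goes through for any involution, since $\rho^*$ applied to the average simply swaps the two summands.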
\subsection{The index of planar and spatial families bifurcating from $g$ and $f$}
\label{sec8.2}
Let us consider the two families $g$ and $f$ for very low energies from Section \ref{sec:7.3}.\ In view of their Conley--Zehnder indices from Table \ref{table_indices} we obtain
$$ pRFH^{S^1}_* (g \, ; \, \mathbb{Q}) = \begin{cases}
\mathbb{Q}, & *=3\\
0, &\text{otherwise},
\end{cases}\quad sRFH^{S^1}_* (g \, ; \, \mathbb{Q}) = \begin{cases}
\mathbb{Q}, & *=6\\
0, &\text{otherwise},
\end{cases} $$
and
$$ pRFH^{S^1}_* (f \, ; \, \mathbb{Q}) = \begin{cases}
\mathbb{Q}, & *=1\\
0, &\text{otherwise},
\end{cases}\quad sRFH^{S^1}_* (f \, ; \, \mathbb{Q}) = \begin{cases}
\mathbb{Q}, & *=2\\
0, &\text{otherwise}.
\end{cases} $$
Therefore the resp.\ Euler characteristics are
$$ \chi_p(g) = -1,\quad \chi_s(g) = 1,\quad \chi_p(f) = -1, \quad \chi_s(f) = 1. $$
By Remark \ref{remark_2_7}, if either $\overline{A}_p$ or $A_s$, or one of its $\tilde{k}$-th iterations (in the case that the eigenvalue $e^{\text{i} \varphi}$ is a $\tilde{k}$-th root of unity), moves through the eigenvalue 1, then the respective index jumps by $\pm 1$ or $\pm 2$.\ Moreover, the crossing of the eigenvalue 1 generates bifurcations of new families of planar or spatial periodic orbits, respectively.\ For details on the existence and properties of such bifurcations we refer to the book of Abraham--Marsden \cite[pp.\ 597--604]{abraham_marsden} and to the articles \cite{deng} and \cite{kim} for a Floer-theoretical approach.\ Since the local Floer homology and its Euler characteristic are not changed under such transitions, they, together with the signatures, help to classify the Conley--Zehnder indices of new bifurcation families and to search for bridges between two families.\ By a \textbf{bridge} we mean a family of periodic orbits with constant Conley--Zehnder index connecting two families of periodic orbits; in other words, a bridge is an orbit cylinder between two families.\
\section{Application to symmetric periodic orbits in the spatial Hill lunar problem}
\label{sec:8}
\subsection{Our moon -- the companion of the Earth}
\label{sec:our moon}
Hill \cite[p.\ 259]{hill} found that for the energy $\Gamma = 6.5088$ and initial position $q_1(0) = 0.176097$ one obtains a planar direct periodic orbit (the variational orbit), to which our moon is close.\ Our numerical data (see the next subsection) verify that for this energy value the Conley--Zehnder indices $\mu_{CZ}^p=3$ and $\mu_{CZ}^s=3$ do not change, so we may use (\ref{periods_low_energies}) to calculate $T_a$ and $T_d$.\
For the given initial data, our first Python program (see Appendix (\ref{python1})) computes $\dot{q}_2(0)=2.222972$, $T_q=0.507959$ and $m=12.369448$; in view of (\ref{synodic_period}), the synodic month corresponds to
$$ T_s = 29.528396 .$$
Our second program (see Appendix (\ref{python2})) computes the planar reduced monodromy $\overline{A}_p \in \text{Sp}^{\rho_1}(1)$ as well as $A_s \in \text{Sp}^{\rho_1}(1)$, together with their relevant data.\ They are
$$ \overline{A}_p = \begin{pmatrix}
0.900415 & -0.047423\\
3.987879 & 0.900272
\end{pmatrix},\quad A_s = \begin{pmatrix}
0.860448 & -0.035353\\
7.344293 & 0.860422
\end{pmatrix}. $$
Note that $\det(\overline{A}_p) = 0.999739$, $\text{tr}(\overline{A}_p) = 1.800688$, $\det(A_s) = 1.000000$ and $\text{tr}(A_s) = 1.720871$.\ Therefore this orbit is planar and spatial elliptic, and the Floquet multipliers lie on the unit circle, so they are of the form $e^{\pm \text{i} \theta}, e^{\pm \text{i} \vartheta}$.\ In particular, the orbit of our moon has to be planar and spatial elliptic, since otherwise our moon would fly away.\ Moreover, $ \text{sign}_b(\theta) < 0$ and $\text{sign}_{\tilde{b}}(\vartheta) < 0 $, hence each rotation is by
$$\varphi_p = \theta = 0.450236,\quad \varphi_s = \vartheta = 0.534603,$$
respectively.\ For the anomalistic and draconitic periods we compute
$$T_a = 27.553954, \quad T_d = 27.212712,$$
so these computed values approximate the physically measured data very well.\
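These values can be reproduced from the traces alone: since the multipliers lie on the unit circle, the rotation angles satisfy $\cos \varphi = \text{tr}/2$, and (\ref{periods_low_energies}) for family $g$ then yields the two months.\ A short sketch (recomputing from the rounded printed traces, so the last digits may differ slightly):

```python
import math

T_s = 29.528396                      # synodic month from the first program
tr_Ap, tr_As = 1.800688, 1.720871    # traces of the reduced monodromies above
phi_p = math.acos(tr_Ap / 2)         # planar rotation angle
phi_s = math.acos(tr_As / 2)         # spatial rotation angle
T_a = 2 * math.pi * T_s / (2 * math.pi + phi_p)   # anomalistic month, family g
T_d = 2 * math.pi * T_s / (2 * math.pi + phi_s)   # draconitic month, family g
```

Compare with $T_a = 27.553954$ and $T_d = 27.212712$ above; the agreement is up to the rounding of the matrix entries.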
\subsection{Planar direct periodic orbits}
\label{sec:other_direct}
\subsubsection{The family $g$}
Our first plot in Figure \ref{plot_1} is similar to the well-known pictures from Hill \cite[p.\ 261]{hill}, Hénon \cite[p.\ 228]{henon} and Gutzwiller \cite[p.\ 69]{gutzwiller}, \cite[p.\ 621]{gutzwiller_2}.\ It extends to $\Gamma = 2.55788$, the last value for which Hill found a periodic orbit.\ Hénon \cite{henon}, however, found further orbits for higher energies.\ These orbits are all doubly symmetric with respect to $\rho_1$ and $\rho_2$.\
\begin{figure}[H]
\centering
\includegraphics[scale=0.65]{1_2_3_direct.png}
\caption{The family $g$}
\label{plot_1}
\end{figure}
\noindent
We read the following Table \ref{table_3}, and all tables of this type in this section, in the direction of decreasing $\Gamma$.\ Recall that this corresponds to increasing energy.\
\begin{table}[H]\scriptsize \centering
\begin{tabular}{ccccccccccc}
$\Gamma$ & $q_1(0)$ & $\dot{q}_2(0)$ & $T_s$ & tr($\overline{A}_p$) & $\text{sign}_{c/b} (\varphi_p / \lambda_p)$ & $T_a$ & tr($A_s$) & $\text{sign}_{\tilde{c}/\tilde{b}} (\varphi_s / \lambda_s)$ & $T_d$ & $\mu_{CZ}^p /\mu_{CZ}^s / \mu_{CZ}$\\
\hline8 & 0.13772 & 2.56 & 19.78 & 1.89 & $(+/-)$ $\varphi_p = 0.31$ & 18.82 & 1.87 & $(+/-)$ $\varphi_s = 0.35$ & 18.73 & 3 / 3 / 6\\
6.5088 & 0.176097 & 2.22 & 29.52 & 1.80 & $(+/-)$ $\varphi_p = 0.44$ & 27.55 & 1.72 & $(+/-)$ $\varphi_s = 0.53$ & 27.21 & 3 / 3 / 6\\
5 & 0.247 & 1.81 & 53.64 & 1.65 & $(+/-)$ $\varphi_p = 0.59$ & 49.01 & 1.07 & $(+/-)$ $\varphi_s = 1.01$ & 46.27 & 3 / 3 / 6\\
4.49999 & 0.283500 & 1.67 & 71.25 & 1.99 & $(+/-)$ $\varphi_p = 0.03$ & 70.91 & 0.43 & $(+/-)$ $\varphi_s = 1.34$ & 58.66 & 3 / 3 / 6\\
4.278924 & 0.301158 & 1.62 & 82.44 & 2.71 & $(-/-)$ $\lambda_p = 2.26$ & & $\approx 0$ & $(+/-)$ $\varphi_s = 1.57$ & 65.95 & 2 / 3 / 5\\
3.876616 & 0.327645 & 1.59 & 109.5 & 8.64 & $(-/-)$ $\lambda_p = 8.53$ & & $-$1.0 & $(+/-)$ $\varphi_s = 2.09$ & 82.15 & 2 / 3 / 5\\
3.75 & 0.33178 & 1.61 & 119.3 & 14.1 & $(-/-)$ $\lambda_p = 14.04$ & & $-$1.3 & $(+/-)$ $\varphi_s = 2.27$ & 87.63 & 2 / 3 / 5\\
3.5 & 0.331730 & 1.69 & 139.9 & 37.5 & $(-/-)$ $\lambda_p = 37.48$ & & $-$1.7 & $(+/-)$ $\varphi_s = 2.62$ & 98.05 & 2 / 3 / 5\\
3.057471 & 0.310843 & 1.91 & 171.2 & 154.3 & $(-/-)$ $\lambda_p = 154.3$ & & $-$2 & $(+/-)$ $\varphi_s = 3.14$ & 114.1 & 2 / 3 / 5\\
3 & 0.306900 & 1.94 & 175.1 & 178.9 & $(-/-)$ $\lambda_p = 178.9$ & & $-$1.9 & $(-/+)$ $\varphi_s = 3.20$ & 116.0 & 2 / 3 / 5\\
2.55788 & 0.271795 & 2.24 & 204.8 & 453.9 & $(-/-)$ $\lambda_p = 453.9$ & & $-$1.7 & $(-/+)$ $\varphi_s = 3.64$ & 129.7 & 2 / 3 / 5\\
2.073537 & 0.228450 & 2.61 & 238.9 & 932.6 & $(-/-)$ $\lambda_p = 932.6$ & & $-$0.9 & $(-/+)$ $\varphi_s = 4.18$ & 143.3 & 2 / 3 / 5\\
2 & 0.221683 & 2.67 & 244.5 & 1034 & $(-/-)$ $\lambda_p = 1034$ & & $-$0.8 & $(-/+)$ $\varphi_s = 4.29$ & 145.2 & 2 / 3 / 5\\
1.746370 & 0.198221 & 2.90 & 264.9 & 1356 & $(-/-)$ $\lambda_p = 1356$ & & $\approx 0$ & $(-/+)$ $\varphi_s = 4.71$ & 151.4 & 2 / 3 / 5\\
1.5 & 0.175446 & 3.16 & 287.1 & 1746 & $(-/-)$ $\lambda_p = 1746$ & & 1.22 & $(-/+)$ $\varphi_s = 5.37$ & 154.8 & 2 / 3 / 5\\
1.383094 & 0.164715 & 3.35 & 298.6 & 1916 & $(-/-)$ $\lambda_p = 1916$ & & 2.00 & $(+/+)$ $\lambda_s = 1.01$ & & 2 / 4 / 6\\
1 & 0.130319 & 3.79 & 341.7 & 2601 & $(-/-)$ $\lambda_p = 2601$ & & 5.86 & $(+/+)$ $\lambda_s = 5.69$ & & 2 / 4 / 6\\
0.5 & 0.089019 & 4.68 & 412.1 & 4913 & $(-/-)$ $\lambda_p = 4913$ & & 13.4 & $(+/+)$ $\lambda_s = 13.3$ & & 2 / 4 / 6\\
$-$0.5 & 0.033920 & 7.71 & 550.5 & 9830 & $(-/-)$ $\lambda_p = 9830$ & & 26.5 & $(+/+)$ $\lambda_s = 26.4$ & & 2 / 4 / 6\\
$-$1.5 & 0.013560 & 12.2 & 620.5 & 27860 & $(-/-)$ $\lambda_p = 27860$ & & 16.2 & $(+/+)$ $\lambda_s = 16.2$ & & 2 / 4 / 6\\
$-$2.5 & 0.006313 & 17.8 & 653.8 & 63560 & $(-/-)$ $\lambda_p = 63560$ & & 6.07 & $(+/+)$ $\lambda_s = 5.90$ & & 2 / 4 / 6\\
$-$3 & 0.004523 & 21.1 & 665.7 & 90300 & $(-/-)$ $\lambda_p = 90300$ & & 3.69 & $(+/+)$ $\lambda_s = 3.39$ & & 2 / 4 / 6
\end{tabular}
\caption{The family $g$}
\label{table_3}
\end{table}
\noindent
Table \ref{table_3} shows that the maximum of $q_1(0)$ is reached at about $\Gamma=3.75$, after which the orbits come closer and closer to the Earth.\ According to \cite[pp.\ 230--234]{henon}, based on numerical results rather than analytical arguments, the distance $q_1(0)$ converges to $0$ as $\Gamma \to -\infty$.\ Hence in the limit there is a collision; in that case the period is $4 \pi$, so the synodic month $T_s$ takes 730.5 days.\ In addition, by \cite[p.\ 319]{henon_2}, tr$(A_s)$ converges to 2 from above.\ Moreover, the speed of the orbit increases as it comes closer to the Earth.\
Slightly above $\Gamma = 4.49999$ the planar index $\mu_{CZ}^p$ jumps from 3 to 2, since the rotation angle $\varphi_p$ goes to zero at that point.\ From then on the orbits are planar positive hyperbolic of type I.\ Slightly before this transition the anomalistic period $T_a$ almost equals the synodic period $T_s$.\ Furthermore, at this transition a new family of planar periodic orbits bifurcates (see the family $g'$ in the next subsection).\ This bifurcation arises below the critical value $3^{4/3}$ and above the energy value for our moon.\
Furthermore, these orbits are all spatial elliptic until shortly before the value $\Gamma = 1.383094$, where they become spatial positive hyperbolic of type II and $\varphi_s$ goes through $2 \pi$.\ Thus $\mu_{CZ}^s$ jumps from 3 to 4, and shortly before this change the draconitic period $T_d$ is almost half of the synodic period $T_s$.\ At this transition a new family of spatial periodic orbits bifurcates from the planar one, which we discuss in Subsection \ref{sec:9.4.0}.\
We note that all initial data are from \cite{henon}, \cite{henon_2} and \cite{hill}, except the ones for the energy values 4.278924, 3.876616, 2.073537 and 1.746370, where $e^{\text{i} \varphi_s}$ is a 3rd resp.\ 4th root of unity; these are from \cite{kalantonis}.\
\subsubsection{The family $g'$}
Starting from the energy value $\Gamma=4.49999$ (see Table \ref{table_3}), where the planar index $\mu_{CZ}^p$ of $g$ jumps from 3 to 2, Hénon \cite{henon} found a new family of planar direct periodic orbits bifurcating from $g$.\ At this bifurcation the double symmetry breaks, meaning that the orbits are only symmetric with respect to $\rho_1$ (see Figure \ref{figure_retrograde_plot_1}).\ By applying $\rho_2$, i.e.\ the reflection about the $q_2$-axis, these orbits appear twice.\ The data of the family $g'$ are collected in Table \ref{table_6}.\ The family $g'$ and its symmetric partner start out both planar and spatial elliptic, with planar index 3.\ Let us verify that this is in accordance with the Euler characteristics before and after the bifurcation.\ The local planar Floer homology before this transition is
$$ pRFH^{S^1}_* (g \, ; \, \mathbb{Q}) = \begin{cases}
\mathbb{Q}, & *=3\\
0, &\text{otherwise},
\end{cases} $$
and by the index shift by $\mu_{CZ}^s = 3$, it is in the spatial problem
$$ sRFH^{S^1}_* (g \, ; \, \mathbb{Q}) = \begin{cases}
\mathbb{Q}, & *=6\\
0, &\text{otherwise}.
\end{cases} $$
Therefore the Euler characteristics are in the planar problem
$$ \chi_p(g) = (-1)^3 = -1,\quad \text{resp.}\quad \chi_p(g) = (-1)^2 + 2 \cdot (-1)^3 = -1 ,$$
and in the spatial problem
$$ \chi_s(g) = (-1)^6 = 1,\quad \text{resp.}\quad \chi_s(g) = (-1)^5 + 2 \cdot (-1)^6 = 1 .$$
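The bookkeeping above is elementary arithmetic; as a sanity check (our own tally, with $g$ of planar index 2 and the two branches of $g'$ of planar index 3 after the bifurcation):

```python
# Euler characteristics before and after the bifurcation of g' from g
chi_p_before = (-1) ** 3                  # g alone, planar index 3
chi_p_after = (-1) ** 2 + 2 * (-1) ** 3   # g (now index 2) plus g' and its mirror (index 3)
chi_s_before = (-1) ** 6                  # shifted by mu_CZ^s = 3
chi_s_after = (-1) ** 5 + 2 * (-1) ** 6
assert chi_p_before == chi_p_after == -1
assert chi_s_before == chi_s_after == 1
```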
\begin{figure}[H]
\centering
\includegraphics[scale=0.65]{4_5_6_7_direct.png}
\caption{The family $g'$}
\label{figure_retrograde_plot_1}
\end{figure}
\noindent
\begin{table}[H]\scriptsize \centering
\begin{tabular}{ccccccccccc}
$\Gamma$ & $q_1(0)$ & $\dot{q}_2(0)$ & $T_s$ & tr($\overline{A}_p$) & $\text{sign}_{c/b} (\varphi_p / \lambda_p$) & $T_a$ & tr($A_s$) & $\text{sign}_{\tilde{c}/\tilde{b}} (\varphi_s / \lambda_s)$ & $T_d$ & $\mu_{CZ}^p / \mu_{CZ}^s / \mu_{CZ}$ \\
\hline4.49999 & 0.283500 & 1.67 & 71.25 & 1.99 & $(+/-)$ $\varphi_p = 0.03$ & 70.92 & 0.43 & $(+/-)$ $\varphi_s = 1.34$ & 58.66 & 3 / 3 / 6 \\
4.45 & 0.383360 & 1.09 & 76.34 & 1.77 & $(+/-)$ $\varphi_p = 0.47$ & 70.96 & 0.11 & $(+/-)$ $\varphi_s = 1.51$ & 61.52 & 3 / 3 / 6 \\
4.435711 & 0.399433 & 1.02 & 78.07 & 1.69 & $(+/-)$ $\varphi_p = 0.55$ & 71.72 & $\approx 0$ & $(+/-)$ $\varphi_s = 1.57$ & 62.45 & 3 / 3 / 6\\
4.4 & 0.436840 & 0.86 & 83.11 & 1.45 & $(+/-)$ $\varphi_p = 0.75$ & 74.18 & $-$0.3 & $(+/-)$ $\varphi_s = 1.73$ & 65.11 & 3 / 3 / 6\\
4.35 & 0.489180 & 0.67 & 93.16 & 0.95 & $(+/-)$ $\varphi_p = 1.07$ & 79.57 & $-$0.9 & $(+/-)$ $\varphi_s = 2.07$ & 70.02 & 3 / 3 / 6\\
4.347942 & 0.491443 & 0.66 & 93.70 & 0.93 & $(+/-)$ $\varphi_p = 1.08$ & 79.88 & $-$1.0 & $(+/-)$ $\varphi_s = 2.09$ & 70.27 & 3 / 3 / 6\\
4.3 & 0.550290 & 0.49 & 112.1 & 0.04 & $(+/-)$ $\varphi_p = 1.54$ & 90.01 & $-$1.8 & $(+/-)$ $\varphi_s = 2.73$ & 78.18 & 3 / 3 / 6\\
4.285183 & 0.570854 & 0.44 & 122.2 & $-$0.5 & $(+/-)$ $\varphi_p = 1.86$ & 94.28 & $-$1.9 & $(+/-)$ $\varphi_s = 3.13$ & 81.54 & 3 / 3 / 6\\
4.282893 & 0.573907 & 0.43 & 124.0 & $-$0.7 & $(+/-)$ $\varphi_p = 1.94$ & 94.73 & $-$2.0 & $(+/+)$ $\lambda_s = -1.06$ & & 3 / 3 / 6\\
4.280603 & 0.576960 & 0.42 & 125.9 & $-$0.9 & $(+/-)$ $\varphi_p = 2.04$ & 95.05 & $-$1.9 & $(-/+)$ $\varphi_s = 3.14$ & 83.95 & 3 / 3 / 6\\
4.27143 & 0.587690 & 0.41 & 133.8 & $-$1.9 & $(+/-)$ $\varphi_p = 3.11$ & 89.47 & $-$1.8 & $(-/+)$ $\varphi_s = 3.47$ & 86.15 & 3 / 3 / 6\\
4.25 & 0.600900 & 0.40 & 150.0 & $-$8.1 & $(+/+)$ $\lambda_p = -8.0$ & & $-$1.2 & $(-/+)$ $\varphi_s = 4.04$ & 91.25 & 3 / 3 / 6\\
4.242877 & 0.602371 & 0.40 & 154.0 & $-$11.2 & $(+/+)$ $\lambda_p = -$11.1 & & $-$0.9 & $(-/+)$ $\varphi_s = 4.18$ & 92.43 & 3 / 3 / 6\\
4.200105 & 0.600418 & 0.46 & 169.0 & $-$35.6 & $(+/+)$ $\lambda_p = -$35.6 & & $\approx 0$ & $(-/+)$ $\varphi_s = 4.71$ & 96.58 & 3 / 3 / 6\\
4.2 & 0.600400 & 0.46 & 169.0 & $-$35.6 & $(+/+)$ $\lambda_p = -$35.6 & & $\approx 0$ & $(-/+)$ $\varphi_s = 4.71$ & 96.58 & 3 / 3 / 6\\
4.15 & 0.59171 & 0.52 & 177.8 & $-$67.2 & $(+/+)$ $\lambda_p = -$67.2 & & 0.6 & $(-/+)$ $\varphi_s = 5.07$ & 98.86 & 3 / 3 / 6\\
4 & 0.562913 & 0.70 & 190.1 & $-150$ & $(+/+)$ $\lambda_p = -150$ & & 1.3 & $(-/+)$ $\varphi_s = 5.43$ & 102.0 & 3 / 3 / 6\\
3.5 & 0.480802 & 1.16 & 207.9 & $-305$ & $(+/+)$ $\lambda_p = -304$ & & 1.9 & $(-/+)$ $\varphi_s = 6.02$ & 106.1 & 3 / 3 / 6\\
3.390159 & 0.464697 & 1.24 & 211.0 & $-$313 & $(+/+)$ $\lambda_p = -$313 & & 2.0 & $(+/+)$ $\lambda_s = 1.00$ & & 3 / 4 / 7\\
2 & 0.283653 & 2.30 & 262.1 & $-421$ & $(+/+)$ $\lambda_p = -421$ & & 2.6 & $(+/+)$ $\lambda_s = 2.16$ & & 3 / 4 / 7\\
1.5 & 0.224220 & 2.75 & 293.2 & $-439$ & $(+/+)$ $\lambda_p = -439$ & & 2.7 & $(+/+)$ $\lambda_s = 2.36$ & & 3 / 4 / 7\\
1 & 0.167780 & 3.31 & 337.7 & $-465$ & $(+/+)$ $\lambda_p = -465$ & & 2.7 & $(+/+)$ $\lambda_s = 2.35$ & & 3 / 4 / 7\\
0.5 & 0.116370 & 4.08 & 401.6 & $-511$ & $(+/+)$ $\lambda_p = -511$ & & 2.1 & $(+/+)$ $\lambda_s = 1.32$ & & 3 / 4 / 7\\
0.477157 & 0.114196 & 4.13 & 405.1 & $-$514 & $(+/+)$ $\lambda_p = -$514 & & 2.0 & $(+/+)$ $\lambda_s = 1.03$ & & 3 / 4 / 7\\
0.063099 & 0.078843 & 5.03 & 474.4 & $-$596 & $(+/+)$ $\lambda_p = -$596 & & $\approx 0$ & $(+/-)$ $\varphi_s = 1.57$ & 210.8 & 3 / 5 / 8\\
0 & 0.074220 & 5.19 & 485.8 & $-614$ & $(+/+)$ $\lambda_p = -614$ & & $-$0.4 & $(+/-)$ $\varphi_s = 1.78$ & 212.6 & 3 / 5 / 8\\
$-$0.081977 & 0.068564 & 5.40 & 500.2 & $-$639 & $(+/+)$ $\lambda_p = -$639 & & $-$0.9 & $(+/-)$ $\varphi_s = 2.09$ & 214.4 & 3 / 5 / 8\\
$-$0.219528 & 0.059949 & 5.79 & 524.6 & $-$686 & $(+/+)$ $\lambda_p = -$686 & & $-$1.9 & $(+/-)$ $\varphi_s = 3.13$ & 209.9 & 3 / 5 / 8\\
$-$1 & 0.029281 & 8.32 & 643.0 & $-880$ & $(+/+)$ $\lambda_p = -880$ & & $-$6.6 & $(-/-)$ $\lambda_s = -6.5$ & & 3 / 5 / 8\\
$-$2 & 0.014641 & 11.7 & 743.8 & $-732$ & $(+/+)$ $\lambda_p = -732$ & & $-$11.8 & $(-/-)$ $\lambda_s = -11.7$ & & 3 / 5 / 8\\
$-$3 & 0.008613 & 15.3 & 796.6 & $-499$ & $(+/+)$ $\lambda_p = -499$ & & $-$14.9 & $(-/-)$ $\lambda_s = -14.8$ & & 3 / 5 / 8\\
$-$4.69219 & 0.004302 & 21.6 & 835.1 & $-2$ & $(+/+)$ $\lambda_p = -1.0$ & & $-$18.3 & $(-/-)$ $\lambda_s = -18.2$ & & 3 / 5 / 8\\
$-$4.69849 & 0.004292 & 21.6 & 836.2 & 0.07 & $(-/+)$ $\varphi_p = 4.75$ & 476.2 & $-$18.9 & $(-/-)$ $\lambda_s = -18.8$ & & 3 / 5 / 8\\
$-$4.70479 & 0.004283 & 21.7 & 836.3 & $2$ & $(-/-)$ $\lambda_p = 1.0$ & & $-$19.5 & $(-/-)$ $\lambda_s = -19.4$ & & 4 / 5 / 9\\
$-$5 & 0.003870 & 22.8 & 840.4 & 96.8 & $(-/-)$ $\lambda_p = 96.8$ & & $-$20.3 & $(-/-)$ $\lambda_s = -20.2$ & & 4 / 5 / 9\\
$-$6 & 0.002837 & 26.6 & 850.9 & 432 & $(-/-)$ $\lambda_p = 432$ & & $-$22.9 & $(-/-)$ $\lambda_s = -22.8$ & & 4 / 5 / 9\\
$-$10 & 0.001162 & 41.6 & 870.8 & 1989 & $(-/-)$ $\lambda_p = 1989$ & & $-$59.5 & $(-/-)$ $\lambda_s = -59.5$ & & 4 / 5 / 9\\
$-$20 & 0.000363 & 74.2 & 876.6 & 7190 & $(-/-)$ $\lambda_p = 7190$ & & $-$3727 & $(-/-)$ $\lambda_s = -3727$ & & 4 / 5 / 9
\end{tabular}
\caption{The family $g'$}
\label{table_6}
\end{table}
\noindent
Moreover, we see the same behaviour of the distances $q_1(0)$ as for the family $g$, namely there is a collision in the limit.\ Note that we have found the initial conditions for the $\Gamma$ value $-4.69849$ ourselves.\ Since at this $\Gamma$ we have $\varphi_p=4.75$, we see that $\varphi_p$ goes through $2 \pi$ and hence $\mu_{CZ}^p$ jumps from 3 to 4.\ The initial data for the energy values 4.435711, 4.347942, 4.242877, 4.200105, 0.063099 and $-0.081977$, where $\varphi_s$ is a 3rd resp.\ 4th root of unity, are from \cite{kalantonis}; the others are from \cite{henon} and \cite{henon_2}.\
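These bifurcation values are detected by $\varphi_s$ becoming a $k$-th root of unity, i.e.\ $\varphi_s = 2\pi m/k$.\ As a small sanity check, the following sketch (the helper and the tolerance are ours) recovers the order $k$ from the angles tabulated above:

```python
import math

def root_of_unity_order(phi, kmax=8, tol=0.01):
    """Smallest k with phi close to 2*pi*m/k for some 0 < m < k (our helper)."""
    for k in range(2, kmax + 1):
        for m in range(1, k):
            if abs(phi - 2 * math.pi * m / k) < tol:
                return k
    return None

# Spatial rotation angles of g' at the bifurcation energies (Table 6).
assert root_of_unity_order(1.57) == 4  # Gamma = 4.435711 and 0.063099
assert root_of_unity_order(2.09) == 3  # Gamma = 4.347942 and -0.081977
assert root_of_unity_order(4.18) == 3  # Gamma = 4.242877
assert root_of_unity_order(4.71) == 4  # Gamma = 4.200105
```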
\subsection{Planar retrograde periodic orbits}
\label{sec:retrograde}
\subsubsection{The family $f$}
Some of its orbits are plotted in Figure \ref{fig_fam_f}, and its data are given in Table \ref{table_9}.\ We observe that the retrograde periodic orbits are all planar and spatial elliptic and lie at larger and larger distances from the earth.\ Therefore the index of the simple closed orbits does not change.\
\begin{figure}[H]
\centering
\includegraphics[scale=0.65]{8_9_retrograde.png}
\caption{The family $f$}
\label{fig_fam_f}
\end{figure}
\begin{table}[H]\scriptsize \centering
\begin{tabular}{ccccccccccc}
$\Gamma$ & $q_1(0)$ & $\dot{q}_2(0)$ & $T_s$ & tr($\overline{A}_p$) & $\text{sign}_{c/b} (\varphi_p)$ & $T_a$ & tr($A_s$) & $\text{sign}_{\tilde{c}/\tilde{b}} (\varphi_s)$ & $T_d$ & $\mu_{CZ}^p / \mu_{CZ}^s / \mu_{CZ}$\\
\hline6 & $-$0.147790 & 2.75 & 19.72 & 1.88 & $(-/+)$ $\varphi_p = 5.93$ & 20.87 & 1.89 & $(-/+)$ $\varphi_s = 5.97$ & 20.80 & 1 / 1 / 2\\
4 & $-$0.204210 & 2.43 & 31.19 & 1.70 & $(-/+)$ $\varphi_p = 5.73$ & 34.19 & 1.75 & $(-/+)$ $\varphi_s = 5.78$ & 33.89 & 1 / 1 / 2\\
3 & $-$0.250710 & 2.27 & 41.49 & 1.48 & $(-/+)$ $\varphi_p = 5.55$ & 47.40 & 1.59 & $(-/+)$ $\varphi_s = 5.63$ & 46.24 & 1 / 1 / 2\\
2 & $-$0.321630 & 2.12 & 58.28 & 1.05 & $(-/+)$ $\varphi_p = 5.27$ & 69.48 & 1.30 & $(-/+)$ $\varphi_s = 5.42$ & 67.52 & 1 / 1 / 2\\
1.359293 & $-$0.389537 & 2.05 & 75.25 & 0.57 & $(-/+)$ $\varphi_p = 5.00$ & 94.5 & 0.99 & $(-/+)$ $\varphi_s = 5.23$ & 90.35 & 1 / 1 / 2\\
1 & $-$0.439910 & 2.03 & 88.25 & 0.20 & $(-/+)$ $\varphi_p = 4.81$ & 115.1 & 0.78 & $(-/+)$ $\varphi_s = 5.11$ & 108.4 & 1 / 1 / 2\\
0.755141 & $-$0.481217 & 2.02 & 99.10 & $-$0.08 & $(-/+)$ $\varphi_p = 4.66$ & 133.4 & 0.61 & $(-/+)$ $\varphi_s = 5.02$ & 123.8 & 1 / 1 / 2\\
0.015388 & $-$0.655072 & 2.07 & 145.5 & $-$0.99 & $(-/+)$ $\varphi_p = 4.18$ & 218.3 & 0.18 & $(-/+)$ $\varphi_s = 4.80$ & 190.3 & 1 / 1 / 2\\
0 & $-$0.659660 & 2.08 & 146.7 & $-$1.01 & $(-/+)$ $\varphi_p = 4.18$ & 220.6 & 0.18 & $(-/+)$ $\varphi_s = 4.80$ & 192.0 & 1 / 1 / 2\\
$-$0.2154 & $-$0.72779 & 2.13 & 165.0 & $-$1.24 & $(-/+)$ $\varphi_p = 4.03$ & 256.8 & 0.14 & $(-/+)$ $\varphi_s = 4.78$ & 216.7 & 1 / 1 / 2\\
$-$0.2538 & $-$0.74179 & 2.14 & 168.3 & $-$1.23 & $(-/+)$ $\varphi_p = 4.04$ & 261.4 & 0.14 & $(-/+)$ $\varphi_s = 4.78$ & 221.1 & 1 / 1 / 2\\
$-$0.2681 & $-$0.74679 & 2.14 & 169.6 & $-$1.24 & $(-/+)$ $\varphi_p = 4.04$ & 263.7 & 0.14 & $(-/+)$ $\varphi_s = 4.78$ & 222.8 & 1 / 1 / 2\\
$-$0.2847 & $-$0.75279 & 2.15 & 171.1 & $-$1.24 & $(-/+)$ $\varphi_p = 4.03$ & 266.2 & 0.14 & $(-/+)$ $\varphi_s = 4.78$ & 224.7 & 1 / 1 / 2\\
$-$0.3152 & $-$0.76259 & 2.16 & 174.0 & $-$1.30 & $(-/+)$ $\varphi_p = 4.00$ & 273.1 & 0.15 & $(-/+)$ $\varphi_s = 4.78$ & 228.3 & 1 / 1 / 2\\
$-$0.5269 & $-$0.84189 & 2.24 & 196.9 & $-$1.37 & $(-/+)$ $\varphi_p = 3.95$ & 308.1 & 0.22 & $(-/+)$ $\varphi_s = 4.82$ & 252.4 & 1 / 1 / 2\\
$-$1 & $-$1.034000 & 2.47 & 237.1 & $-$1.29 & $(-/+)$ $\varphi_p = 4.01$ & 372.4 & 0.62 & $(-/+)$ $\varphi_s = 5.02$ & 296.2 & 1 / 1 / 2\\
$-$1.411618 & $-$1.199879 & 2.71 & 267.3 & $-$0.99 & $(-/+)$ $\varphi_p = 4.18$ & 401.1 & 1.02 & $(-/+)$ $\varphi_s = 5.25$ & 319.7 & 1 / 1 / 2\\
$-$2 & $-$1.416810 & 3.07 & 296.5 & $-$0.46 & $(-/+)$ $\varphi_p = 4.47$ & 416.8 & 1.44 & $(-/+)$ $\varphi_s = 5.52$ & 337.6 & 1 / 1 / 2\\
$-$3 & $-$1.731950 & 3.62 & 323.2 & 0.28 & $(-/+)$ $\varphi_p = 4.85$ & 418.3 & 1.76 & $(-/+)$ $\varphi_s = 5.79$ & 350.3 & 1 / 1 / 2\\
$-$10 & $-$3.162278 & 6.37 & 357.4 & 1.64 & $(-/+)$ $\varphi_p = 5.67$ & 395.8 & 1.99 & $(-/+)$ $\varphi_s = 6.18$ & 362.8 & 1 / 1 / 2\\
$-$15 & $-$3.872983 & 7.77 & 360.9 & 1.80 & $(-/+)$ $\varphi_p = 5.83$ & 388.8 & 1.99 & $(-/+)$ $\varphi_s = 6.23$ & 363.9 & 1 / 1 / 2\\
$-$50 & $-$7.071067 & 14.1 & 364.5 & 1.96 & $(-/+)$ $\varphi_p = 6.09$ & 375.4 & 1.99 & $(-/+)$ $\varphi_s = 6.27$ & 365.0 & 1 / 1 / 2\\
$-$100 & $-$10 & 20.0 & 364.9 & 1.98 & $(-/+)$ $\varphi_p = 6.17$ & 371.4 & 1.99 & $(-/+)$ $\varphi_s = 6.28$ & 365.1 & 1 / 1 / 2\\
$-$300 & $-$17.320508 & 34.6 & 365.2 & 1.99 & $(-/+)$ $\varphi_p = 6.23$ & 368.0 & 1.99 & $(-/+)$ $\varphi_s = 6.28$ & 365.2 & 1 / 1 / 2\\
$-$500 & $-$22.360679 & 44.7 & 365.2 & 1.99 & $(-/+)$ $\varphi_p = 6.25$ & 367.1 & 1.99 & $(-/+)$ $\varphi_s = 6.28$ & 365.2 & 1 / 1 / 2\\
$-$1000 & $-$31.622776 & 63.2 & 365.2 & 1.99 & $(-/+)$ $\varphi_p = 6.26$ & 366.3 & 1.99 & $(-/+)$ $\varphi_s = 6.28$ & 365.2 & 1 / 1 / 2
\end{tabular}
\caption{The family $f$}
\label{table_9}
\end{table}
\noindent
Moreover, the planar rotation angle $\varphi_p$ as well as the spatial one $\varphi_s$ first decrease and then increase.\ We have found the orbits for the $\Gamma$ values from $-0.2154$ to $-0.5269$ ourselves.\ They show that $\varphi_s$ never becomes a 4th root of unity; hence the smallest root of unity it attains is a 5th one.\ Both rotation angles approach $2\pi$.\ In other words, all three periods approach 365.25 days, which is the period of the earth around the sun.\ The initial data for the $\Gamma$ values 1.359293 and 0.755141, where $\varphi_s$ is a 6th resp.\ 5th root of unity, are from the data provided on request by the author of \cite{kalantonis}.\ All the others are from \cite{henon_0} and \cite{henon}.\
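The tabulated periods appear consistent with reading $T_a$ and $T_d$ as the times a neighbouring orbit needs for one full planar resp.\ spatial rotation, i.e.\ $T_a = 2\pi T_s/\varphi_p$ and $T_d = 2\pi T_s/\varphi_s$.\ The following sketch checks this reading (which is ours, as is the tolerance) against a few rows of Table \ref{table_9}:

```python
import math

def full_rotation_period(T_s, phi):
    # Assumed relation: a neighbouring orbit rotating by phi per synodic
    # period T_s needs 2*pi*T_s/phi to complete one full rotation.
    return 2 * math.pi * T_s / phi

# (T_s, phi_p, T_a, phi_s, T_d) for Gamma = 6, 4, 2, 1 from Table 9.
rows = [
    (19.72, 5.93, 20.87, 5.97, 20.80),
    (31.19, 5.73, 34.19, 5.78, 33.89),
    (58.28, 5.27, 69.48, 5.42, 67.52),
    (88.25, 4.81, 115.1, 5.11, 108.4),
]
for T_s, phi_p, T_a, phi_s, T_d in rows:
    assert abs(full_rotation_period(T_s, phi_p) - T_a) / T_a < 0.005
    assert abs(full_rotation_period(T_s, phi_s) - T_d) / T_d < 0.005
```

In particular, as $\varphi_p$ and $\varphi_s$ approach $2\pi$, both derived periods approach $T_s$, consistent with the limit behaviour described above.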
\textbf{The limit case.}\ Like Hénon \cite[p.\ 227]{henon} for $\Gamma \to -\infty$ in the planar problem, we can neglect the gravitational force of the earth as a first approximation, since $q_1(0)$ increases for higher energies.\ Then the spatial Hill equation in $(q,\dot{q})$-coordinates (see (\ref{equation_of_motion_0})) reduces to
\begin{equation} \label{limit_hill}
\left\{ \begin{array}{l}
\mathlarger{\ddot{q}_1 = 2 \dot{q}_2 + 3q_1} \\
\mathlarger{\ddot{q}_2 = -2\dot{q}_1} \\
\mathlarger{\ddot{q}_3 = -q_3.}
\end{array} \right.
\end{equation}
A planar solution of (\ref{limit_hill}) is given by
\begin{align} \label{limit_solution}
q_1(t) = c \cos t, \quad q_2(t)= - 2c \sin t,
\end{align}
where $c \in \mathbb{R}$ is a constant.\ In $q$-variables this limit solution is an ellipse traversed in retrograde motion, centered at the earth at the origin, with semi-minor axis $|c|$ along the $q_1$-axis and semi-major axis $2|c|$ along the $q_2$-axis.\ The energy condition (\ref{energy_gamma}) implies
$$\Gamma = 3q_1^2(t) - \dot{q}_1^2(t) - \dot{q}_2^2(t) = -c^2,$$
hence the relation between the initial condition for the position and the energy is
\begin{align} \label{energy_limit}
q_1(0) = c = - \sqrt{- \Gamma}.
\end{align}
Note that for Table \ref{table_9}, from $\Gamma = -15$ on, we use the formula (\ref{energy_limit}) for $q_1(0)$.\
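The limit solution (\ref{limit_solution}) and the energy relation (\ref{energy_limit}) can also be verified numerically.\ The following sketch integrates the planar part of (\ref{limit_hill}) with a plain RK4 scheme (step count and tolerances are ours) and checks that the orbit closes up after $2\pi$ and that $\Gamma = -c^2$:

```python
import math

def limit_hill_rhs(state):
    # Planar part of the limit system: q1'' = 2*q2' + 3*q1, q2'' = -2*q1'.
    q1, q2, v1, v2 = state
    return (v1, v2, 2 * v2 + 3 * q1, -2 * v1)

def rk4_step(f, y, h):
    k1 = f(y)
    k2 = f(tuple(a + h / 2 * b for a, b in zip(y, k1)))
    k3 = f(tuple(a + h / 2 * b for a, b in zip(y, k2)))
    k4 = f(tuple(a + h * b for a, b in zip(y, k3)))
    return tuple(a + h / 6 * (b + 2 * c + 2 * d + e)
                 for a, b, c, d, e in zip(y, k1, k2, k3, k4))

def gamma(state):
    # Energy condition: Gamma = 3*q1^2 - q1'^2 - q2'^2.
    q1, q2, v1, v2 = state
    return 3 * q1 ** 2 - v1 ** 2 - v2 ** 2

c = -1.0
y0 = (c, 0.0, 0.0, -2 * c)  # q1 = c*cos(t), q2 = -2c*sin(t) at t = 0
y, n = y0, 2000
for _ in range(n):
    y = rk4_step(limit_hill_rhs, y, 2 * math.pi / n)

assert max(abs(a - b) for a, b in zip(y, y0)) < 1e-6  # closes up after 2*pi
assert abs(gamma(y0) + c ** 2) < 1e-12                # Gamma = -c^2
# q1(0) = -sqrt(-Gamma): matches e.g. the row Gamma = -15 of Table 9.
assert abs(-math.sqrt(15) - (-3.872983)) < 1e-6
```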
\begin{Proposition} \label{proposition_retrograde}
\textit{In the limit case for $\Gamma \to -\infty$, the planar retrograde periodic orbits converge to a planar retrograde periodic orbit with synodic period $T_s$ of 365.25 days, which is planar and spatial degenerate.\ In particular, \textnormal{tr}$(\overline{A}_p)$ and \textnormal{tr}$(A_s)$ converge to 2 from below, and $\varphi_p$ and $\varphi_s$ converge to $2 \pi$.\ In other words, each of $T_a$ and $T_d$ converges to 365.25 days, i.e.\ to one synodic period $T_s$.\ }
\end{Proposition}
\begin{proof}
To obtain the Hamiltonian in the limit, for a constant $\gamma < 0$ we zoom out by the coordinate transformation
$$ \phi_\gamma \colon T^* \mathbb{R}^3 \to T^* \mathbb{R}^3,\quad (q,p) \mapsto \big( \sqrt{-2 \gamma}\, q , \sqrt{-2 \gamma}\, p \big), $$
which is conformally symplectic, i.e.\ $\phi_\gamma ^* \omega = -2 \gamma \omega$.\ We introduce the family of Hamiltonians
$$ H_\gamma \colon T^* \big( \mathbb{R}^3 \setminus \{(0,0,0)\} \big) \to \mathbb{R},\quad (q,p) \mapsto - \frac{1}{2 \gamma} (H \circ \phi_\gamma) (q,p), $$
and compute
$$H_\gamma (q,p) = \frac{1}{2} \big( (p_1 + q_2)^2 + (p_2 - q_1)^2 + p_3^2 \big) - \frac{3}{2}q_1^2 + \frac{1}{2}q_3^2 - \frac{1}{2 |\gamma| \sqrt{-2 \gamma}\, |q|}. $$
For $\gamma \to -\infty$, $H_\gamma$ converges uniformly in the $C^{\infty}$-topology on each compact subset to the Hamiltonian
$$ \widetilde{H} \colon T^* \mathbb{R}^3 \to \mathbb{R},\quad (q,p) \mapsto \frac{1}{2} \big( (p_1 + q_2)^2 + (p_2 - q_1)^2 + p_3^2 \big) - \frac{3}{2}q_1^2 + \frac{1}{2}q_3^2. $$
Note that this limit Hamiltonian does not contain the gravitational force of the earth, in contrast to the spatial Hill lunar problem (\ref{hamiltonian_2}).\ We restrict to the planar case and consider the energy hypersurface $\Sigma := \widetilde{H}^{-1}(\frac{1}{2})$.\ Hence by the relation (\ref{energy_limit}) and the limit solution (\ref{limit_solution}) we obtain
$$ c = -1,\quad q_1(t) = - \cos t,\quad q_2(t) = 2 \sin t, $$
with the first return time $T_q = 2 \pi$.\ Thus the synodic period corresponds to 365.25 days.\ Since the gravitational force disappears, the linearized equation in $(q,\dot{q})$-coordinates reduces to
\begin{equation*}
\left\{ \begin{array}{l}
\mathlarger{\Delta \ddot{q}_1 = 2 \Delta \dot{q}_2 + 3 \Delta q_1 } \\
\mathlarger{ \Delta \ddot{q}_2 = -2 \Delta \dot{q}_1 } \\
\mathlarger{ \Delta \ddot{q}_3 = - \Delta q_3.}
\end{array} \right.
\end{equation*}
By a short calculation, planar linearized solutions in $(q,p)$-coordinates are given by
\begin{equation*}
\left\{ \begin{array}{l}
\mathlarger{ \Delta q_1(t) = c_1 \cos t + c_2 \sin t + 2c_3 } \\
\mathlarger{ \Delta q_2(t) = -2c_1 \sin t + 2 c_2 \cos t - 3 c_3 t + c_4 } \\
\mathlarger{ \Delta p_1(t) = c_1 \sin t - c_2 \cos t + 3 c_3 t - c_4 } \\
\mathlarger{ \Delta p_2(t) = - c_1 \cos t - c_2 \sin t - c_3, }
\end{array} \right.
\end{equation*}
where $c_1$, $c_2$, $c_3$ and $c_4$ are constants.\ Note that these solutions are not periodic along the flow if $c_3 \neq 0$.\ The basis vectors of the tangent space at $q_0$ are of the form
$$ \big( \Delta q_1(0), \Delta q_2(0), \Delta p_1(0), \Delta p_2(0) \big) = ( c_1 + 2c_3, 2c_2 + c_4, - c_2 - c_4, - c_1 - c_3 ). $$
The energy condition (see Subsection \ref{sec:6.5.1}) implies
$$ \Delta q_1(0) = - \Delta p_2(0),\quad c_3= 0.$$
Therefore the linearized solutions on the energy hypersurface are $2 \pi$ periodic, which means that
$$\overline{A}_p = \begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix},$$
where the eigenvalue 1 has algebraic and geometric multiplicity 2.\
A real fundamental system for the spatial equation $\Delta \ddot{q}_3 + \Delta q_3 = 0$ is given by $\{ \cos t, \sin t \}$, hence the $2 \pi$-periodic solutions in $(q,p)$-coordinates are of the form
\begin{equation*}
\left\{ \begin{array}{l}
\mathlarger{ \Delta q_3(t) = c_1 \cos t + c_2 \sin t } \\
\mathlarger{ \Delta p_3(t) = -c_1 \sin t + c_2 \cos t,}
\end{array} \right.
\end{equation*}
where $c_1$ and $c_2$ are constants.\ In particular, we have
$$A_s = \begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}, $$
where the eigenvalue $1$ has algebraic and geometric multiplicity $2$ as well.\ Therefore the limit orbit is planar and spatial degenerate, and the planar and spatial neighbouring orbits make exactly one complete rotation during $T_q$, which corresponds to 365.25 days.
\end{proof}
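Since the limit system is linear, it coincides with its own linearization, so the planar monodromy over $2\pi$ can be computed directly from the flow.\ The following sketch (coordinates ordered as $(\Delta q_1, \Delta q_2, \Delta \dot{q}_1, \Delta \dot{q}_2)$; discretization and tolerance are ours) reproduces the secular shift $-6\pi c_3$ with $c_3 = 2\Delta q_1(0) + \Delta \dot{q}_2(0)$, so all Floquet multipliers equal 1:

```python
import math

def rhs(state):
    # The planar limit system is linear, hence equal to its linearization:
    # dq1'' = 2*dq2' + 3*dq1, dq2'' = -2*dq1'.
    q1, q2, v1, v2 = state
    return (v1, v2, 2 * v2 + 3 * q1, -2 * v1)

def rk4_step(f, y, h):
    k1 = f(y)
    k2 = f(tuple(a + h / 2 * b for a, b in zip(y, k1)))
    k3 = f(tuple(a + h / 2 * b for a, b in zip(y, k2)))
    k4 = f(tuple(a + h * b for a, b in zip(y, k3)))
    return tuple(a + h / 6 * (b + 2 * c + 2 * d + e)
                 for a, b, c, d, e in zip(y, k1, k2, k3, k4))

def flow_2pi(y, n=4000):
    for _ in range(n):
        y = rk4_step(rhs, y, 2 * math.pi / n)
    return y

# Images of the standard basis vectors = columns of the monodromy matrix.
columns = [flow_2pi(e) for e in
           [(1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0),
            (0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 0.0, 1.0)]]

# Predicted: identity, except that dq2 picks up the shift -6*pi*c3.
expected = [(1, -12 * math.pi, 0, 0),  # e1: c3 = 2*1 + 0 = 2
            (0, 1, 0, 0),              # e2: c3 = 0
            (0, 0, 1, 0),              # e3: c3 = 0
            (0, -6 * math.pi, 0, 1)]   # e4: c3 = 2*0 + 1 = 1
for col, exp in zip(columns, expected):
    assert max(abs(a - b) for a, b in zip(col, exp)) < 1e-4
```

Restricting to $c_3 = 0$, as enforced by the energy condition, the monodromy becomes the identity, in agreement with $\overline{A}_p = \mathrm{id}$ in the proof.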
\subsubsection{Family $g_3$:\ Planar bifurcation from a 3rd cover of $f$ to a 3rd cover of $f$}
At the $\Gamma$ values 0.015388 and $-1.411618$ for the family $f$, where $\varphi_p$ is a 3rd root of unity (see Table \ref{table_9}), the planar index of each 3rd cover jumps from 5 to 3 resp.\ from 3 to 5.\ Before and after each transition of each 3rd cover of these two retrograde orbits, new families of planar retrograde periodic orbits bifurcate.\ These families were discovered by Hénon \cite{henon_0}, \cite{henon_1}, who named them family $g_3$.\
The orbits in these families do not lose the symmetry from $f$, i.e.\ they are also doubly-symmetric with respect to $\rho_1$ and $\rho_2$ (see Figure \ref{family_g_3}).\ Their data are collected in Table \ref{table_10}.\
\begin{figure}[H]
\centering
\includegraphics[scale=0.42]{10_g3.png}
\caption{The family $g_3$}
\label{family_g_3}
\end{figure}
\noindent
Recall that the planar index of the 3rd cover of the $f$-orbit at 0.015388 jumps from 5 to 3 and that of the $f$-orbit at $-1.411618$ from 3 to 5.\ At each transition the new family is planar positive hyperbolic, hence at each transition the new family has $\mu_{CZ}^p = 4$; thereby the resp.\ local Floer homology as well as its Euler characteristic are zero.\
Furthermore, the family starting at 0.015388 and ending at $-1.411618$ has constant Conley--Zehnder index, thus it forms a planar bridge between these 3rd covers.\ Note that this bridge is planar positive hyperbolic and spatial elliptic.\
Moreover, if the energy decreases, then the family $g_3$ ends at a degenerate periodic orbit of birth-death type.\ In other words, the family $g_3$ starts from one branch of a degenerate planar periodic orbit of birth-death type.\ Recall from the introduction that a periodic orbit of birth-death type is a degenerate orbit from which two families bifurcate, with an index difference of 1 and in the same energy direction.\ Its local Floer homology, and therefore its Euler characteristic, is zero.\
\begin{table}[H]\scriptsize \centering
\begin{tabular}{ccccccccccc}
$\Gamma$ & $q_1(0)$ & $\dot{q}_2(0)$ & $T_s$ & tr($\overline{A}_p$) & $\text{sign}_{c/b} (\varphi_p / \lambda_p)$ & $T_a$ & tr($A_s$) & $\text{sign}_{\tilde{c}/\tilde{b}} (\varphi_s / \lambda_s)$ & $T_d$ & $\mu_{CZ}^p / \mu_{CZ}^s / \mu_{CZ}$\\
\hline & & & & & & & & & & birth-death\\
3.806201 & $-$0.77896 & 0.76 & 386.1 & 2.01 & $(-/-)$ $\lambda_p = 1.01$ & & 2.10 & $(+/+)$ $\lambda_s = 1.37$ & & 4 / 6 / 10\\
3.2 & $-$0.77176 & 1.08 & 279.2 & 5921 & $(-/-)$ $\lambda_p = 5921$ & & $-$1.7 & $(-/+)$ $\varphi_s = 3.62$ & 108.36 & 4 / 5 / 9\\
3 & $-$0.76939 & 1.17 & 271.6 & 3665 & $(-/-)$ $\lambda_p = 3665$ & & $-$1.9 & $(-/+)$ $\varphi_s = 3.45$ & 106.5 & 4 / 5 / 9\\
2.5 & $-$0.75961 & 1.36 & 262.7 & 1649 & $(-/-)$ $\lambda_p = 1649$ & & $-$1.9 & $(-/+)$ $\varphi_s = 3.20$ & 104.6 & 4 / 5 / 9\\
2 & $-$0.74453 & 1.53 & 263.5 & 367 & $(-/-)$ $\lambda_p = 367$ & & $-1.9$ & $(+/-)$ $\varphi_s = 3.03$ & 106.1 & 4 / 5 / 9\\
1 & $-$0.69838 & 1.82 & 298.0 & 26.4 & $(-/-)$ $\lambda_p = 26.3$ & & $-$1.7 & $(+/-)$ $\varphi_s = 2.62$ & 123.2 & 4 / 5 / 9\\
0.5 & $-$0.66969 & 1.95 & 346.9 & 5.40 & $(-/-)$ $\lambda_p = 5.21$ & & $-$1.2 & $(+/-)$ $\varphi_s = 2.26$ & 147.0 & 4 / 5 / 9\\
0.1 & $-$0.65482 & 2.05 & 417.3 & 2.07 & $(-/-)$ $\lambda_p = 1.31$ & & $-$0.6 & $(+/-)$ $\varphi_s = 1.90$ & 181.2 & 4 / 5 / 9\\
0.015388 & $-$0.655072 & 2.07 & 436.5 & 2.00 & $\lambda_p^3 = 1.00$ & & $-$0.5 & $(+/-)$ $\varphi_s = 1.84$ & 190.3 & 5$\shortto$3 / 5 / 10$\shortto$8\\
$-$0.1 & $-$0.65866 & 2.10 & 465.3 & 2.12 & $(+/+)$ $\lambda_p = 1.41$ & & $-$0.4 & $(+/-)$ $\varphi_s = 1.81$ & 203.3 & 4 / 5 / 9\\
$-$0.5 & $-$0.7161 & 2.19 & 575.5 & 3.50 & $(+/+)$ $\lambda_p = 3.19$ & & $-$0.8 & $(+/-)$ $\varphi_s = 2.03$ & 247.6 & 4 / 5 / 9\\
$-$1 & $-$0.92602 & 2.39 & 709.6 & 2.92 & $(+/+)$ $\lambda_p = 2.53$ & & $-$1.7 & $(+/-)$ $\varphi_s = 2.58$ & 294.2 & 4 / 5 / 9\\
$-$1.3 & $-$1.1221 & 2.61 & 779.7 & 2.1 & $(+/+)$ $\lambda_p = 1.32$ & & $-$1.9 & $(+/-)$ $\varphi_s = 3.01$ & 314.3 & 4 / 5 / 9\\
$-$1.411618 & $-$1.199879 & 2.71 & 801.9 & 2.00 & $\lambda_p^3 = 1.00$ & & $-$1.9 & $(-/+)$ $\varphi_s = 3.19$ & 319.7 & 3$\shortto$5 / 5 / 8$\shortto$10\\
$-$1.6 & $-$1.32953 & 2.89 & 834.0 & 2.24 & $(-/-)$ $\lambda_p = 1.63$ & & $-$1.8 & $(-/+)$ $\varphi_s = 3.48$ & 326.5 & 4 / 5 / 9\\
$-$2 & $-$1.58227 & 3.28 & 882.4 & 4.63 & $(-/-)$ $\lambda_p = 4.41$ & & $-$1.3 & $(-/+)$ $\varphi_s = 4.00$ & 334.6 & 4 / 5 / 9\\
$-$3 & $-$2.0889 & 4.11 & 941.8 & 27.1 & $(-/-)$ $\lambda_p = 27.0$ & & 0.25 & $(-/+)$ $\varphi_s = 4.84$ & 339.9 & 4 / 5 / 9\\
$-$4 & $-$2.49041 & 4.83 & 967.9 & 84.7 & $(-/-)$ $\lambda_p = 84.7$ & & 1.37 & $(-/+)$ $\varphi_s = 5.46$ & 337.2 & 4 / 5 / 9\\
$-$5 & $-$2.49041 & 5.45 & 982.2 & 188 & $(-/-)$ $\lambda_p = 188$ & & 2.25 & $(-/-)$ $\lambda_s = 1.64$ & & 4 / 6 / 10\\
$-$9 & $-$3.90443 & 7.43 & 1005 & 1223 & $(-/-)$ $\lambda_p = 1223$ & & 5.40 & $(-/-)$ $\lambda_s = 5.21$ & & 4 / 6 / 10
\end{tabular}
\caption{The family $g_3$}
\label{table_10}
\end{table}
\subsection{Spatial periodic orbits bifurcating from planar ones}
\subsubsection{From the spatial index jump of $g$}
\label{sec:9.4.0}
At the $\Gamma$ value 1.383094 for the family $g$ (see Table \ref{table_3}), where $\mu_{CZ}^s$ jumps from 3 to 4, a new family of spatial periodic orbits bifurcates from $g$; it was found by Batkhin--Batkhina \cite{batkhin} (family $g_{2v}$).\ Some of its orbits are plotted in Figure \ref{figure_spatial_g}.\ They are doubly-symmetric with respect to
$$ \rho_1(q,p) = (q_1,-q_2,q_3,-p_1,p_2,-p_3),\quad \rho_2(q,p) = (-q_1,q_2,q_3,p_1,-p_2,-p_3), $$
hence they start perpendicularly and hit perpendicularly
$$ \text{Fix}(\rho_1) = \{ (q_1,0,q_3,0,p_2,0) \},\quad \text{Fix}(\rho_2) = \{ (0,q_2,q_3,p_1,0,0) \}. $$
Recall in view of the symmetry $\sigma(q,p)=(q_1,q_2,-q_3,p_1,p_2,-p_3)$ that $\rho_1 \circ \rho_2 = \rho_2 \circ \rho_1 = -\sigma$, and note that the orbits are invariant under $-\sigma$; by using $\sigma$, however, these spatial orbits give rise to yet another family of spatial orbits.\ Since $\sigma \circ \rho_i = \rho_i \circ \sigma = \overline{\rho_i}$, for $i \in \{1,2\}$, the orbits of the symmetric family are doubly-symmetric with respect to $\overline{\rho_1}$ and $\overline{\rho_2}$ (see the last row in Figure \ref{figure_spatial_g}).\ The local spatial Floer homology before this transition is
$$ sRFH^{S^1}_* (g \, ; \, \mathbb{Q}) = \begin{cases}
\mathbb{Q}, & *=5\\
0, &\text{otherwise},
\end{cases} $$
and after the bifurcation the index of $g$ is 6.\ In view of the data given in Table \ref{table_10_1} (its initial data were provided upon personal request by the first author of \cite{batkhin}), after this transition the index of the family $g_{2v}$ and of its symmetric family equals 5.\ We check that the Euler characteristics are
$$ \chi_s(g) = (-1)^5 = -1,\quad \text{resp.}\quad \chi_s(g) = 2 \cdot (-1)^5 + (-1)^6 = -1 .$$
\begin{figure}[H]
\centering
\includegraphics[scale=0.59]{10_g2v.png}
\caption{From the spatial index jump of $g$ (family $g_{2v}$)}
\label{figure_spatial_g}
\end{figure}
\begin{table}[H]\scriptsize \centering
\begin{tabular}{ccccccc}
$\Gamma$ & $q_1(0)$ & $q_3(0)$ & $\dot{q}_2(0)$ & $T_q / 2$ & $\text{sign}_{C/B}$ and Floquet multipliers & $\mu_{CZ}$\\
\hline 1.383094 & $-$0.164715 & 0 & $-$3.2924 & 2.56855 & $(-/-)$ $\lambda = 1916.4$ \& $\varphi_s =0$ & $2+(3 \shortto 4) = 5 \shortto 6$\\
1.38273 & $-$0.164661 & 0.002986 & $-$3.2928 & 2.56857 & $(-/-)$ $\lambda = 1881.2$ \& $(-/+)$ $\varphi = 6.180$ & 5\\
1.37736 & $-$0.16386 & 0.011810 & $-$3.2980 & 2.5678 & $(-/-)$ $\lambda = 1906.7$ \& $(-/+)$ $\varphi = 5.993$ & 5\\
1.36493 & $-$0.162007 & 0.020887 & $-$3.3101 & 2.566 & $(-/-)$ $\lambda = 2013.8$ \& $(-/+)$ $\varphi = 5.791$ & 5\\
1.34451 & $-$0.158974 & 0.030092 & $-$3.3304 & 2.563 & $(-/-)$ $\lambda = 1853.9$ \& $(-/+)$ $\varphi = 5.531$ & 5\\
1.30865 & $-$0.153669 & 0.040951 & $-$3.3669 & 2.55756 & $(-/-)$ $\lambda = 1816.1$ \& $(-/+)$ $\varphi = 5.241$ & 5
\end{tabular}
\caption{From the spatial index jump of $g$ (family $g_{2v}$)}
\label{table_10_1}
\end{table}
\subsubsection{The 2nd cover of $g$ and the 2nd cover of $g'$}
\label{sec:9.4.1}
For the double covers of the orbits of the families $g$ and $g'$, at the $\Gamma$ values 3.057471 resp.\ 4.285183 the spatial index jumps by $+2$ (index jump from 9 to 11) resp.\ by $+1$ (index jump from 10 to 11) (see Table \ref{table_3} resp.\ \ref{table_6}).\ Note that in view of Example \ref{example_7_2}, in the latter case there exist bad orbits, namely the double covers of the $g'$-orbits after the index jump.\ More precisely, the underlying simple closed orbits of $g'$ are planar elliptic, and the spatial behaviour changes from elliptic to negative hyperbolic with the resp.\ indices $ \mu_{CZ}^p(g') = \mu_{CZ}^s(g') = 3$.\ The indices of the double cover before and after this transition are
$$ \mu_{CZ}^p(g'^2) = \mu_{CZ}^s(g'^2) = 5,\quad \text{resp.}\quad \mu_{CZ}^p(g'^2) = 5,\quad \mu_{CZ}^s(g'^2) = 6. $$
Recall that the bad orbits contribute neither to the local Floer homology groups nor to the Euler characteristics.\
At these transitions new families of spatial symmetric periodic orbits bifurcate.\ In particular, two families bifurcate from the double cover of $g$ and one family from the double cover of $g'$.\ All these families bifurcate after the index jump.\ Using the same notation as in Figure \ref{overview_conclusion} of the introduction, the bifurcation graph for this scenario is shown in Figure \ref{overview_double_cover}.\ Note that we draw the families of bad orbits as dashed black edges; accordingly, these dashed black edges contribute neither to the local Floer homology groups nor to the Euler characteristics.\
\begin{figure}[H]
\centering
\definecolor{grgr}{RGB}{33,189,63}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-7.5,-1) rectangle (6,8);
\draw [->, line width=1pt] (-3-3,-0.5) -- (-3-3,7.5);
\draw (-3-3,7.5) node[anchor=south] {$\Gamma$};
\draw (-4-3,7.5) node[anchor=south] {$-\infty$};
\draw (-4-3,-0.5) node[anchor=north] {$+\infty$};
\draw[fill] (-3-3,0) circle (1.5pt);
\draw (-3-3,0) node[anchor=east] {4.285};
\draw[fill] (-3-3,3) circle (1.5pt);
\draw (-3-3,3) node[anchor=east] {3.057};
\draw[fill] (-3-3,4) circle (1.5pt);
\draw (-3-3,4) node[anchor=east] {3.013};
\draw[fill] (-3-3,5) circle (1.5pt);
\draw (-3-3,5) node[anchor=east] {2.952};
\draw[fill] (-3-3,6.5) circle (1.5pt);
\draw (-3-3,6.5) node[anchor=east] {2.484};
\draw [line width=1pt,color=blue] (0,3) .. controls (0.5,4.5) and (1.5,5.5) .. (2,6.5);
\draw [dashed,line width=1pt,color=blue] (0,3) .. controls (-0.5,4.5) and (-1.5,5.5) .. (-2,6.5);
\draw [color=blue] (0.4,3.15) node[anchor=south] {$9$};
\draw [color=blue] (-0.4,3.15) node[anchor=south] {$9$};
\draw [color=blue] (0.75,3.7) node[anchor=south] {$10$};
\draw [color=blue] (-0.75,3.7) node[anchor=south] {$10$};
\draw [color=blue] (1.3,4.7) node[anchor=south] {$9$};
\draw [color=blue] (-1.3,4.7) node[anchor=south] {$9$};
\draw [line width=1pt,color=blue] (2,6.5) .. controls (2.5,4) and (2.5,3) .. (4,0);
\draw [dashed,line width=1pt,color=blue] (-2,6.5) .. controls (-2.5,4) and (-2.5,3) .. (-4,0);
\draw [color=blue] (3.4,0.3) node[anchor=south] {$10$};
\draw [color=blue] (-3.4,0.3) node[anchor=south] {$10$};
\draw [line width=1pt,color=grgr] (0,3) .. controls (0.2,3.1) and (0.3,3.2) .. (1.2,3.5);
\draw [dashed,line width=1pt,color=grgr] (0,3) .. controls (-0.2,3.1) and (-0.3,3.2) .. (-1.2,3.5);
\draw [color=grgr] (1.2,3.5) node[anchor=north] {$10$};
\draw [color=grgr] (-1.2,3.5) node[anchor=north] {$10$};
\draw [line width=1pt] (0,2.5) -- (0,3.5);
\draw (0,2.5) node[anchor=north] {$9$};
\draw (0,3.5) node[anchor=south] {$11$};
\draw[fill] (0,3) circle (2pt);
\draw (0.3,3) node[anchor=west] {$g^2$};
\draw[fill] (0.33,3.78) circle (2pt);
\draw[fill] (-0.33,3.78) circle (2pt);
\draw[fill] (0.72,4.5) circle (2pt);
\draw[fill] (-0.72,4.5) circle (2pt);
\draw [line width=1pt] (4,-0.5) -- (4,0);
\draw [dashed,line width=1pt] (4,0) -- (4,0.5);
\draw (4,-0.5) node[anchor=north] {$10$};
\draw (4,0.5) node[anchor=south] {$11$};
\draw [line width=1pt] (-4,-0.5) -- (-4,0);
\draw [dashed,line width=1pt] (-4,0) -- (-4,0.5);
\draw (-4,-0.5) node[anchor=north] {$10$};
\draw (-4,0.5) node[anchor=south] {$11$};
\draw[fill] (4,0) circle (2pt);
\draw (4.1,0) node[anchor=west] {$g'^2$};
\draw[fill] (-4,0) circle (2pt);
\draw (-4.1,0) node[anchor=east] {$g'^2$};
\draw[fill] (-2,6.5) circle (2pt);
\draw (-2,6.5) node[anchor=south] {b-d};
\draw[fill] (2,6.5) circle (2pt);
\draw (2,6.5) node[anchor=south] {b-d};
\end{tikzpicture}
\caption{The bifurcation graph between the 2nd cover of $g'$ and the 2nd cover of $g$ with the families \textcolor{blue}{$g1v$} and \textcolor{grgr}{$g_{1v}^{YOZ}$}}
\label{overview_double_cover}
\end{figure}
\noindent
The family \textcolor{blue}{$g1v$} bifurcates from the double cover of $g$.\ It was found by Michalodimitrakis \cite{michalodimitrakis}, from where the initial conditions for our data given in Table \ref{table_11} are taken.\ Some of its orbits are plotted in Figure \ref{figure_double_cover_g}.\ If the energy increases, then the orbits end at a degenerate periodic orbit of birth-death type.\ The other branch of this birth-death periodic orbit is the one bifurcating from the double cover of $g'$.\ All the orbits of the family \textcolor{blue}{$g1v$} are doubly-symmetric with respect to $\overline{\rho_1}$ and $\rho_1$, therefore they start perpendicularly and hit perpendicularly
$$ \text{Fix}(\overline{\rho_1}) = \{ (q_1,0,0,0,p_2,p_3) \},\quad \text{Fix}(\rho_1) = \{ (q_1,0,q_3,0,p_2,0) \}. $$
Moreover, they are invariant under $\sigma$, but the symmetry $-\sigma$ yields the symmetric family (dashed in the bifurcation graph), whose orbits are also symmetric with respect to $\rho_2$ and $\overline{\rho_2}$ (see the last row in Figure \ref{figure_double_cover_g}).\
The other family \textcolor{grgr}{$g_{1v}^{YOZ}$} bifurcating from the double cover of $g$ was discovered by Batkhin--Batkhina \cite{batkhin}.\ Its orbits are doubly-symmetric with respect to $\overline{\rho_2}$ and $\rho_2$.\ Note that we have not studied them further; but since at the value $\Gamma = 3.057$ the index of the double cover of $g$ jumps from 9 to 11, and the family \textcolor{blue}{$g1v$} and its symmetric family start with index 9, the family \textcolor{grgr}{$g_{1v}^{YOZ}$} and its symmetric family must start with index 10.\
The Euler characteristics are
$$ \chi_s(g^2) = (-1)^9 = -1,\quad \text{resp.}\quad \chi_s(g^2) = 2\cdot(-1)^9 + 2\cdot(-1)^{10} + (-1)^{11} = -1 $$
and
$$ \chi_s(g'^2) = (-1)^{10} = 1,\quad \text{resp.}\quad \chi_s(g'^2) = (-1)^{10} + 0 \cdot(-1)^{11} = 1.$$
\begin{figure}[H]
\centering
\includegraphics[scale=0.59]{11_g2v_2.png}
\caption{From the 2nd cover of $g$ to the 2nd cover of $g'$ (family \textcolor{blue}{$g1v$})}
\label{figure_double_cover_g}
\end{figure}
\begin{table}[H]\scriptsize \centering
\begin{tabular}{ccccccc}
$\Gamma$ & $q_1(0)$ & $\dot{q}_2(0)$ & $\dot{q}_3(0)$ & $T_q / 4$ & $\text{sign}_{C/B}$ and Floquet multipliers & $\mu_{CZ}$\\
\hline 3.057471 & 0.310843 & 1.9148 & 0 & 1.4727 & $(-/-)$ $\lambda_p^2 = 24713.62$ \& $\varphi_s^2 =0$ & $4+(5 \shortto 7) = 9 \shortto 11$\\
3.046842 & 0.311905 & 1.88871 & 0.299 & 1.4733 & $(-/-)$ $\lambda = 22211$ \& $(-/+)$ $\varphi = 5.847$ & 9\\
3.013637 & 0.31526 & 1.807899 & 0.600 & 1.4751 & $(-/-)$ $\lambda_1 = 20289$ \& $(-/-)$ $\lambda_2 = 1.250$ & 10\\
2.952973 & 0.321529 & 1.663566 & 0.899 & 1.4783 & $(-/-)$ $\lambda = 15610$ \& $(-/+)$ $\varphi = 6.117$ & 9\\
2.852855 & 0.332366 & 1.43388 & 1.199 & 1.4833 & $(-/-)$ $\lambda = 10175$ \& $(-/+)$ $\varphi = 6.013$ & 9\\
2.676808 & 0.353928 & 1.048739 & 1.500 & 1.4895 & $(-/-)$ $\lambda = 3938.3$ \& $(-/+)$ $\varphi = 5.772$ & 9\\
2.48446 & 0.396 & 0.561861 & 1.649 & 1.4675 & $(-/-)$ $\lambda = 300.99$ \& $(-/+)$ $\varphi = 5.661$ & 9\\
& & & & & & birth-death\\
2.687271 & 0.44745 & 0.364873 & 1.500 & 1.3488 & $-8.925 \pm 3.401\text{i}$ \& $-0.097 \pm 0.037\text{i}$ & 10\\
3.202673 & 0.49409 & 0.370874 & 1.199 & 1.2151 & $-3.433 \pm 6.225\text{i}$ \& $-0.067 \pm 0.123\text{i}$ & 10\\
3.654803 & 0.52756 & 0.401491 & 0.899 & 1.1343 & $-4.565 \pm 2.778\text{i}$ \& $-0.159 \pm 0.097\text{i}$ & 10\\
3.998524 & 0.551501 & 0.424737 & 0.600 & 1.0857 & $-2.900 \pm 0.096\text{i}$ \& $-0.344 \pm 0.011\text{i}$ & 10\\
4.212563 & 0.565996 & 0.438274 & 0.300 & 1.0596 & $(-/+)$ $\varphi_1 = 3.507$ \& $(-/+)$ $\varphi_2 = 4.757$ & 10\\
4.285183 & 0.570854 & 0.4426 & 0 & 1.0513 & $(-/+)$ $\varphi_p^2 = 3.72$ \& $\varphi_s^2 =0$ & $5+(5 \shortto 6) = 10 \shortto 11$
\end{tabular}
\caption{From the 2nd cover of $g$ to the 2nd cover of $g'$ (family \textcolor{blue}{$g1v$})}
\label{table_11}
\end{table}
\subsubsection{The 3rd cover of $g$, the 5th cover of $f$ and the 3rd cover of $g'$}
In this subsection we collect the underlying data for the bifurcation graph in Figure \ref{overview_conclusion}, discussed in the introduction, with the families \textcolor{blue}{$f_g^{(2,3)}$}, \textcolor{grgr}{$f_g^{(2cut,3)}$}, \textcolor{red}{$f_{g'}^{(2,3)}$} and \textcolor{magenta}{$f_{g'}^{(2cut,3)}$}.\ All the initial data were provided on personal request by the author of \cite{kalantonis}.\
The orbits of the family \textcolor{blue}{$f_g^{(2,3)}$} are doubly-symmetric with respect to $\overline{\rho_1}$ and $\overline{\rho_2}$.\ Some of its orbits are plotted in Figure \ref{figure_spatial_g_f}.\ Its symmetric family is obtained by using $\sigma$ (see the last row in Figure \ref{figure_spatial_g_f}).\ The data for the orbits are given in Table \ref{table_12}.\
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{12_1.png}
\caption{From the 3rd cover of $g$ to the 5th cover of $f$ (family \textcolor{blue}{$f_g^{(2,3)}$})}
\label{figure_spatial_g_f}
\end{figure}
\begin{table}[H]\tiny \centering
\begin{tabular}{ccccccc}
$\Gamma$ & $q_1(0)$ & $\dot{q}_2(0)$ & $\dot{q}_3(0)$ & $T_q / 2$ & $\text{sign}_{C/B}$ and Floquet multipliers & $\mu_{CZ}$\\
\hline 3.876616 & 0.327645 & 1.596748 & 0 & 2.82652 & $(-/-)$ $\lambda_p^3 = 619.96$ \& $\varphi_s^3 =0$ & $6+(7 \shortto 9) = 13 \shortto 15$\\
3.876404 & 0.327652 & 1.596390 & 0.034 & 2.82665 & $(-/-)$ $\lambda_1 = 620.39$ \& $(-/-)$ $\lambda_2 = 1.003$ & 14\\
3.874882 & 0.327708 & 1.593827 & 0.100 & 2.82688 & $(-/-)$ $\lambda_1 = 615.97$ \& $(-/-)$ $\lambda_2 = 1.010$ & 14\\
3.866582 & 0.328011 & 1.579855 & 0.239 & 2.82811 & $(-/-)$ $\lambda_1 = 592.21$ \& $(-/-)$ $\lambda_2 = 1.008$ & 14\\
3.692429 & 0.334427 & 1.292631 & 0.975 & 2.85507 & $(-/-)$ $\lambda_1 = 238.02$ \& $(-/-)$ $\lambda_2 = 1.023$ & 14\\
3.274148 & 0.350335 & 0.647245 & 1.544 & 2.92956 & $(-/-)$ $\lambda_1 = 1.228$ \& $(-/-)$ $\lambda_2 = 1.053$ & 14\\
3.274118 & 0.350344 & 0.647179 & 1.544 & 2.92982 & $(-/-)$ $\lambda_1 = 1.231$ \& $\lambda_2 = 1$ & $14 \shortto 15$\\
3.272832 & 0.350387 & 0.645310 & 1.545 & 2.92982 & $(-/-)$ $\lambda = 1.229$ \& $(+/-)$ $\varphi = 0.334$ & 15\\
3.239877 & 0.351673 & 0.597055 & 1.569 & 2.93637 & $(-/-)$ $\lambda = 1.270$ \& $(+/-)$ $\varphi = 1.750$ & 15\\
3.171140 & 0.354373 & 0.497587 & 1.612 & 2.95039 & $(-/-)$ $\lambda = 1.402$ \& $(-/+)$ $\varphi = 3.154$ & 15\\
3.088817 & 0.357637 & 0.380540 & 1.656 & 2.96784 & $(-/-)$ $\lambda = 1.852$ \& $(-/+)$ $\varphi = 4.538$ & 15\\
2.815738 & 0.368722 & 0.008233 & 1.736 & 3.03160 & $(-/-)$ $\lambda = 17.24$ \& $(-/+)$ $\varphi = 5.876$ & 15\\
2.809462 & 0.368982 & $-$0.000038 & 1.737 & 3.03319 & $(-/-)$ $\lambda = 17.64$ \& $(-/+)$ $\varphi = 5.879$ & 15\\
2.659074 & 0.375277 & $-$0.194503 & 1.747 & 3.07285 & $(-/-)$ $\lambda = 25.89$ \& $(-/+)$ $\varphi = 5.922$ & 15\\
1.783658 & 0.415479 & $-$1.185880 & 1.463 & 3.39795 & $(-/-)$ $\lambda = 20.95$ \& $(-/+)$ $\varphi = 6.051$ & 15\\
1.067081 & 0.456943 & $-$1.812982 & 0.805 & 3.89795 & $(-/-)$ $\lambda = 2.986$ \& $(-/+)$ $\varphi = 6.213$ & 15\\
0.939632 & 0.466113 & $-$1.904426 & 0.613 & 4.03095 & $(-/-)$ $\lambda = 1.035$ \& $(-/+)$ $\varphi = 5.563$ & 15\\
0.813156 & 0.476186 & $-$1.988018 & 0.338 & 4.18381 & $(-/-)$ $\lambda = 1.015$ \& $(-/+)$ $\varphi = 4.764$ & 15\\
0.755192 & 0.481212 & $-$2.023751 & 0.010 & 4.26211 & $(-/-)$ $\lambda = 1.008$ \& $(-/+)$ $\varphi = 4.503$ & 15\\
0.755141 & 0.481217 & $-$2.02378 & 0 & 4.26215 & $\varphi_s^5 = 0$ \& $(-/+)$ $\varphi_p^5 = 4.503$ & $7+(9 \shortto 7) = 16 \shortto 14$
\end{tabular}
\caption{From the 3rd cover of $g$ to the 5th cover of $f$ (family \textcolor{blue}{$f_g^{(2,3)}$})}
\label{table_12}
\end{table}
The orbits of the family \textcolor{red}{$f_{g'}^{(2,3)}$} are simply-symmetric with respect to $\overline{\rho_1}$.\ Some of its orbits are plotted in Figure \ref{figure_spatial_g'_g_1}.\ The 3rd row in Figure \ref{figure_spatial_g'_g_1} shows the symmetric orbit obtained by using $\overline{\rho_2}$, which bifurcates from the planar orbit symmetric to $g'$.\ The symmetry $\sigma$ yields the symmetric family bifurcating from the same planar orbit of $g'$ (see the last row in Figure \ref{figure_spatial_g'_g_1}).\ The data for the orbits are given in Table \ref{table_14}.\
\begin{figure}[H]
\centering
\includegraphics[scale=0.48]{12_2.png}
\caption{From the 3rd cover of $g'$ to the index jump $(14 \shortto 15)$ in Table \ref{table_12} (family \textcolor{red}{$f_{g'}^{(2,3)}$})}
\label{figure_spatial_g'_g_1}
\end{figure}
\begin{table}[H]\tiny \centering
\begin{tabular}{ccccccc}
$\Gamma$ & $q_1(0)$ & $\dot{q}_2(0)$ & $\dot{q}_3(0)$ & $T_q / 2$ & $\text{sign}_{C/B}$ and Floquet multipliers & $\mu_{CZ}$\\
\hline 4.347942 & 0.491443 & 0.668022 & 0 & 2.41792 & $(-/+)$ $\varphi_p^3 = 3.260$ \& $\varphi_s^3 =0$ & $7+(7 \shortto 9) = 14 \shortto 16$\\
4.347872 & 0.491437 & 0.668012 & 0.010 & 2.41797 & $(-/+)$ $\varphi = 3.261$ \& $(-/-)$ $\lambda = 1.014$ & 15\\
4.332819 & 0.490248 & 0.666260 & 0.154 & 2.42320 & $(-/+)$ $\varphi = 3.238$ \& $(-/-)$ $\lambda = 1.006$ & 15\\
4.308139 & 0.488286 & 0.663388 & 0.251 & 2.43186 & $(-/-)$ $\lambda_1 = -1.008$ \& $(-/-)$ $\lambda_2 = 1.012$ & 15\\
3.916686 & 0.454659 & 0.618385 & 0.848 & 2.58670 & $(-/-)$ $\lambda_1 = -1.591$ \& $(-/-)$ $\lambda_2 = 1.390$ & 15\\
3.720467 & 0.435295 & 0.597320 & 1.042 & 2.67843 & $(+/-)$ $\varphi = 3.126$ \& $(-/-)$ $\lambda = 1.705$ & 15\\
3.419999 & 0.397616 & 0.577871 & 1.322 & 2.84056 & $(+/-)$ $\varphi = 1.622$ \& $(-/-)$ $\lambda = 2.049$ & 15\\
3.274124 & 0.350644 & 0.646271 & 1.542 & 2.92956 & $(+/-)$ $\varphi = 0.030$ \& $(-/-)$ $\lambda = 1.231$ & 15\\
3.274118 & 0.350344 & 0.647179 & 1.544 & 2.92982 & $\lambda_1 = 1$ \& $(-/-)$ $\lambda_2 = 1.231$ & $14 \shortto 15$\\
3.274123 & 0.350042 & 0.648097 & 1.545 & 2.92956 & $(+/-)$ $\varphi = 0.029$ \& $(-/-)$ $\lambda = 1.230$ & 15\\
3.690961 & 0.238777 & 1.376348 & 1.720 & 2.69315 & $(+/-)$ $\varphi = 2.849$ \& $(-/-)$ $\lambda = 1.755$ & 15\\
3.917774 & 0.199644 & 1.850406 & 1.672 & 2.58622 & $(-/-)$ $\lambda_1 = -1.592$ \& $(-/-)$ $\lambda_2 = 1.389$ & 15\\
4.308371 & 0.137956 & 3.113046 & 0.744 & 2.43178 & $(+/-)$ $\varphi = 3.145$ \& $(-/-)$ $\lambda = 1.016$ & 15\\
4.347515 & 0.132083 & 3.292440 & 0.080 & 2.41809 & $(+/-)$ $\varphi = 3.265$ \& $(-/-)$ $\lambda = 1.053$ & 15\\
4.347942 & 0.132020 & 3.294475 & 0 & 2.41792 & $(-/+)$ $\varphi_p^3 = 3.260$ \& $\varphi_s^3 =0$ & $7+(7 \shortto 9) = 14 \shortto 16$
\end{tabular}
\caption{From the 3rd cover of $g'$ to the index jump $(14 \shortto 15)$ in Table \ref{table_12} (family \textcolor{red}{$f_{g'}^{(2,3)}$})}
\label{table_14}
\end{table}
The orbits of the family \textcolor{grgr}{$f_g^{(2cut,3)}$} are doubly-symmetric with respect to $\rho_1$ and $\rho_2$.\ Some of these orbits are plotted in Figure \ref{figure_spatial_g'_g_2}.\ Its symmetric family is obtained by using $\sigma$ (see the last row in Figure \ref{figure_spatial_g'_g_2}).\ The data for the orbits are given in Table \ref{table_13}.\
\begin{figure}[H]
\centering
\includegraphics[scale=0.59]{12_3.png}
\caption{From the 3rd cover of $g$ to collision (family \textcolor{grgr}{$f_g^{(2cut,3)}$})}
\label{figure_spatial_g'_g_2}
\end{figure}
\begin{table}[H]\tiny \centering
\begin{tabular}{ccccccc}
$\Gamma$ & $q_1(0)$ & $q_3(0)$ & $\dot{q}_2(0)$ & $T_q / 2$ & $\text{sign}_{C/B}$ and Floquet multipliers & $\mu_{CZ}$\\
\hline 3.876616 & $-$0.327645 & 0 & $-$1.596 & 2.82652 & $(-/-)$ $\lambda_p^3 = 619.96$ \& $\varphi_s^3 =0$ & $6+(7 \shortto 9) = 13 \shortto 15$\\
3.833903 & $-$0.320106 & 0.072500 & $-$1.600 & 2.83301 & $(-/-)$ $\lambda = 505.63$ \& $(-/+)$ $\varphi = 6.278$ & 13\\
3.283429 & $-$0.212345 & 0.250499 & $-$1.696 & 2.92604 & $(-/-)$ $\lambda = 1.725$ \& $(-/+)$ $\varphi = 6.048$ & 13\\
3.280180 & $-$0.211571 & 0.250950 & $-$1.698 & 2.92662 & $\lambda = 1$ \& $(-/+)$ $\varphi = 6.081$ & $13 \shortto 14$\\
3.279799 & $-$0.211477 & 0.250999 & $-$1.698 & 2.92669 & $(+/-)$ $\varphi_1 = 0.186$ \& $(-/+)$ $\varphi_2 = 6.080$ & 14\\
3.189269 & $-$0.186389 & 0.258719 & $-$1.766 & 2.94124 & $(+/-)$ $\varphi_1 = 3.129$ \& $(-/+)$ $\varphi_2 = 5.968$ & 14\\
3.136701 & $-$0.150862 & 0.240737 & $-$1.978 & 2.92889 & $(-/+)$ $\varphi_1 = 6.277$ \& $(-/+)$ $\varphi_2 = 4.689$ & 14\\
3.136701 & $-$0.150811 & 0.24068 & $-$1.978 & 2.92882 & $\lambda = 1$ \& $(-/+)$ $\varphi = 4.691$ & birth-death\\
3.231169 & $-$0.097655 & 0.166387 & $-$2.671 & 2.77297 & $(+/+)$ $\lambda = 3.712$ \& $(-/+)$ $\varphi = 4.346$ & 15\\
3.362152 & $-$0.042376 & 0.077137 & $-$4.400 & 2.43258 & $\lambda = 1$ \& $(-/+)$ $\varphi = 3.498$ & birth-death\\
3.362152 & $-$0.042344 & 0.077087 & $-$4.401 & 2.43234 & $(-/+)$ $\varphi_1 = 6.249$ \& $(-/+)$ $\varphi_2 = 3.497$ & 14\\
3.329430 & $-$0.024918 & 0.050537 & $-$5.671 & 2.29164 & $(-/+)$ $\varphi_1 = 5.662$ \& $(-/+)$ $\varphi_2 = 3.143$ & 14\\
3.101438 & $-$0.004205 & 0.014387 & $-$11.41 & 2.06718 & $(-/+)$ $\varphi_1 = 6.012$ \& $(-/+)$ $\varphi_2 = 2.426$ & 14
\end{tabular}
\caption{From the 3rd cover of $g$ to collision (family \textcolor{grgr}{$f_g^{(2cut,3)}$})}
\label{table_13}
\end{table}
The orbits of the family \textcolor{magenta}{$f_{g'}^{(2cut,3)}$} are simply-symmetric with respect to $\rho_1$.\ Some of these orbits are plotted in Figure \ref{figure_spatial_g'_g_3}.\ The 3rd row in Figure \ref{figure_spatial_g'_g_3} shows the symmetric orbits obtained by using $\rho_2$, which bifurcate from the planar orbit symmetric to $g'$.\ The symmetry $\sigma$ yields the symmetric family bifurcating from the same planar orbit of $g'$ (see the last row in Figure \ref{figure_spatial_g'_g_3}).\ The data for the orbits are given in Table \ref{table_15}.\
\begin{figure}[H]
\centering
\includegraphics[scale=0.54]{12_4.png}
\caption{From the 3rd cover of $g'$ to the index jump $(13 \shortto 14)$ in Table \ref{table_13} (family \textcolor{magenta}{$f_{g'}^{(2cut,3)}$})}
\label{figure_spatial_g'_g_3}
\end{figure}
\begin{table}[H]\tiny \centering
\begin{tabular}{ccccccc}
$\Gamma$ & $q_1(0)$ & $q_3(0)$ & $\dot{q}_2(0)$ & $T_q / 2$ & $\text{sign}_{C/B}$ and Floquet multipliers & $\mu_{CZ}$\\
\hline 4.347942 & $-$0.132020 & 0 & $-$3.294 & 2.41792 & $(-/+)$ $\varphi_p^3 = 3.260$ \& $\varphi_s^3 =0$ & $7+(7 \shortto 9) = 14 \shortto 16$\\
4.347931 & $-$0.132019 & 0.000400 & $-$3.294 & 2.41795 & $(-/+)$ $\varphi_1 = 3.261$ \& $(-/+)$ $\varphi_2 = 6.275$ & 14\\
4.307008 & $-$0.131939 & 0.024999 & $-$3.261 & 2.43226 & $(-/-)$ $\lambda = -1.020$ \& $(-/+)$ $\varphi = 5.858$ & 14\\
3.724373 & $-$0.138067 & 0.120000 & $-$2.692 & 2.67594 & $(+/-)$ $\varphi_1 = 3.051$ \& $(-/+)$ $\varphi_2 = 5.741$ & 14\\
3.381250 & $-$0.165321 & 0.192269 & $-$2.133 & 2.86377 & $(+/-)$ $\varphi_1 = 1.013$ \& $(-/+)$ $\varphi_2 = 5.311$ & 14\\
3.381167 & $-$0.165336 & 0.192296 & $-$2.133 & 2.86382 & $0.551 \pm 0.816 \text{i}$ \& $0.541 \pm 0.858 \text{i}$ & 14\\
3.329148 & $-$0.177357 & 0.211699 & $-$1.990 & 2.89575 & $0.973 \pm 0.919 \text{i}$ \& $0.543 \pm 0.511 \text{i}$ & 14\\
3.280274 & $-$0.209888 & 0.249451 & $-$1.710 & 2.92656 & $1.039 \pm 0.161 \text{i}$ \& $0.939 \pm 0.145 \text{i}$ & 14\\
3.280237 & $-$0.210257 & 0.249782 & $-$1.707 & 2.92659 & $(+/-)$ $\varphi_1 = 0.131$ \& $(-/+)$ $\varphi_2 = 6.127$ & 14\\
3.280180 & $-$0.211571 & 0.250950 & $-$1.698 & 2.92662 & $\lambda = 1$ \& $(-/+)$ $\varphi = 6.081$ & $13 \shortto 14$\\
3.280236 & $-$0.212874 & 0.252084 & $-$1.689 & 2.92659 & $(+/-)$ $\varphi_1 = 0.122$ \& $(-/+)$ $\varphi_2 = 6.121$ & 14\\
3.280241 & $-$0.212932 & 0.252134 & $-$1.688 & 2.92658 & $1.002 \pm 0.146 \text{i}$ \& $0.976 \pm 0.142 \text{i}$ & 14\\
3.357252 & $-$0.266603 & 0.281661 & $-$1.390 & 2.87838 & $0.773 \pm 0.967 \text{i}$ \& $0.504 \pm 0.630 \text{i}$ & 14\\
3.380289 & $-$0.275442 & 0.283788 & $-$1.350 & 2.86435 & $0.571 \pm 0.871 \text{i}$ \& $0.527 \pm 0.801 \text{i}$ & 14\\
3.381154 & $-$0.275758 & 0.283852 & $-$1.349 & 2.86383 & $(+/-)$ $\varphi_1 = 1.006$ \& $(-/+)$ $\varphi_2 = 5.304$ & 14\\
3.454933 & $-$0.300084 & 0.286253 & $-$1.247 & 2.82013 & $(+/-)$ $\varphi_1 = 1.663$ \& $(-/+)$ $\varphi_2 = 5.497$ & 14\\
3.727521 & $-$0.370162 & 0.266224 & $-$0.999 & 2.67439 & $(+/+)$ $\lambda = -1.025$ \& $(-/+)$ $\varphi = 5.744$ & 14\\
4.347932 & $-$0.491441 & 0.001213 & $-$0.668 & 2.41795 & $(-/+)$ $\varphi_1 = 3.261$ \& $(-/+)$ $\varphi_2 = 6.279$ & 14\\
4.347942 & $-$0.491443 & 0 & $-$0.668 & 2.41792 & $(-/+)$ $\varphi_p^3 = 3.260$ \& $\varphi_s^3 =0$ & $7+(7 \shortto 9) = 14 \shortto 16$
\end{tabular}
\caption{From the 3rd cover of $g'$ to the index jump $(13 \shortto 14)$ in Table \ref{table_13} (family \textcolor{magenta}{$f_{g'}^{(2cut,3)}$})}
\label{table_15}
\end{table}
\subsubsection{The 4th cover of $g$, the 6th cover of $f$ and the 4th cover of $g'$}
At the value $\Gamma = 4.435711$ the spatial index of the 4th cover of the orbit of the family $g'$ jumps by +2 and its index jumps from 18 to 20 (see Table \ref{table_6}).\ Moreover, at the value $\Gamma = 4.278924$ the spatial index of the 4th cover of $g$ jumps from 17 to 19 (see Table \ref{table_3}).\ The bifurcation graph for this case is illustrated in Figure \ref{overview_fourth_cover}.\
\begin{figure}[H]
\centering
\definecolor{grgr}{RGB}{33,189,63}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-7.5,-1) rectangle (6,10.5);
\draw [->, line width=1pt] (-3-3,-0.5) -- (-3-3,10);
\draw (-3-3,10) node[anchor=south] {$\Gamma$};
\draw (-4-3,10) node[anchor=south] {$-\infty$};
\draw (-4-3,-0.5) node[anchor=north] {$+\infty$};
\draw[fill] (-3-3,0) circle (1.5pt);
\draw (-3-3,0) node[anchor=east] {4.435};
\draw[fill] (-3-3,2) circle (1.5pt);
\draw (-3-3,2) node[anchor=east] {4.278};
\draw[fill] (-3-3,3.5) circle (1.5pt);
\draw (-3-3,3.5) node[anchor=east] {4.066};
\draw[fill] (-3-3,5) circle (1.5pt);
\draw (-3-3,5) node[anchor=east] {3.633};
\draw[fill] (-3-3,8) circle (1.5pt);
\draw (-3-3,8) node[anchor=east] {1.567};
\draw[fill] (-3-3,9) circle (1.5pt);
\draw (-3-3,9) node[anchor=east] {1.359};
\draw [line width=1pt,color=blue] (0,2) .. controls (0.3,2.7) and (1,2.8) .. (2,3.5);
\draw [dashed,line width=1pt,color=blue] (0,2) .. controls (-0.3,2.7) and (-1,2.8) .. (-2,3.5);
\draw [color=blue] (0.75,2.8) node[anchor=south] {$17$};
\draw [color=blue] (-0.75,2.8) node[anchor=south] {$17$};
\draw [line width=1pt,color=red] (0,2) .. controls (0.2,2.2) and (0.3,2.4) .. (1.2,2.7);
\draw [dashed,line width=1pt,color=red] (0,2) .. controls (-0.2,2.2) and (-0.3,2.4) .. (-1.2,2.7);
\draw [color=red] (1.2,2.65) node[anchor=north] {$18$};
\draw [color=red] (-1.2,2.65) node[anchor=north] {$18$};
\draw [line width=1pt,color=blue] (2,3.5) .. controls (2,2.5) and (2.5,1) .. (3,0);
\draw [dashed,line width=1pt,color=blue] (-2,3.5) .. controls (-2,2.5) and (-2.5,1) .. (-3,0);
\draw [color=blue] (2.4,0.25) node[anchor=south] {$18$};
\draw [color=blue] (-2.4,0.25) node[anchor=south] {$18$};
\draw [line width=1pt,color=grgr] (3,0) .. controls (5.5,5) and (2.6,5.5) .. (0,9);
\draw [dashed,line width=1pt,color=grgr] (-3,0) .. controls (-5.5,5) and (-2.6,5.5) .. (0,9);
\draw [color=grgr] (3.6,0.3) node[anchor=south] {$19$};
\draw [color=grgr] (-3.6,0.3) node[anchor=south] {$19$};
\draw [color=grgr] (3.2,6.2) node[anchor=north] {$20$};
\draw [color=grgr] (-3.2,6.2) node[anchor=north] {$20$};
\draw [color=grgr] (0.75,8.8) node[anchor=north] {$19$};
\draw [color=grgr] (-0.75,8.8) node[anchor=north] {$19$};
\draw [line width=1pt] (3,-0.5) -- (3,0.5);
\draw (3,-0.5) node[anchor=north] {$18$};
\draw (3,0.5) node[anchor=south] {$20$};
\draw[fill] (3,0) circle (2pt);
\draw (3.1,0) node[anchor=west] {$g'^4$};
\draw [line width=1pt] (-3,-0.5) -- (-3,0.5);
\draw (-3,-0.5) node[anchor=north] {$18$};
\draw (-3,0.5) node[anchor=south] {$20$};
\draw[fill] (-3,0) circle (2pt);
\draw (-3.1,0) node[anchor=east] {$g'^4$};
\draw [line width=1pt] (0,1.5) -- (0,2.5);
\draw (0,1.5) node[anchor=north] {$17$};
\draw (0,2.5) node[anchor=south] {$19$};
\draw[fill] (0,2) circle (2pt);
\draw (0.1,2) node[anchor=west] {$g^4$};
\draw [line width=1pt] (0,8.5) -- (0,9.5);
\draw (0,8.5) node[anchor=north] {$20$};
\draw (0,9.5) node[anchor=south] {$18$};
\draw[fill] (0,9) circle (2pt);
\draw (0.1,9) node[anchor=west] {$f^6$};
\draw[fill] (-2,3.5) circle (2pt);
\draw (-2,3.5) node[anchor=south] {b-d};
\draw[fill] (2,3.5) circle (2pt);
\draw (2,3.5) node[anchor=south] {b-d};
\draw[fill] (0.82,8) circle (2pt);
\draw[fill] (-0.82,8) circle (2pt);
\draw[fill] (3.45,5) circle (2pt);
\draw[fill] (-3.45,5) circle (2pt);
\end{tikzpicture}
\caption{The bifurcation graph between the 4th cover of $g'$, the 4th cover of $g$ and the 6th cover of $f$ with the families \textcolor{blue}{$f_{g'}^{(1,4)}$}, \textcolor{grgr}{$f_{g'}^{(1cut,4)}$} and \textcolor{red}{$f_{g}^{(1,4)}$}}
\label{overview_fourth_cover}
\end{figure}
\noindent
The three families \textcolor{blue}{$f_{g'}^{(1,4)}$}, \textcolor{grgr}{$f_{g'}^{(1cut,4)}$} and \textcolor{red}{$f_{g}^{(1,4)}$} were found by Kalantonis \cite{kalantonis}, who provided the initial data on personal request.\ All orbits are doubly-symmetric with respect to $\overline{\rho_1}$ and $\rho_1$, hence they are invariant under $\sigma$.\ Each symmetric family (dashed) is obtained by using the symmetry $-\sigma$.\ Note that the orbits of each symmetric family are doubly-symmetric with respect to $\overline{\rho_2}$ and $\rho_2$.\
The family \textcolor{blue}{$f_{g'}^{(1,4)}$} consists of two branches bifurcating respectively from the 4th cover of $g'$ at the value $\Gamma = 4.435$ and from the 4th cover of $g$ at the value $\Gamma = 4.278$.\ The two branches meet at the value $\Gamma = 4.066$ at a degenerate orbit of birth-death type.\ Some orbits of the family \textcolor{blue}{$f_{g'}^{(1,4)}$} are plotted in Figure \ref{figure_spatial_g'_f_1}, where the last row shows a symmetric orbit.\ The data for the orbits are collected in Table \ref{table_17}.\
The family \textcolor{grgr}{$f_{g'}^{(1cut,4)}$}, bifurcating from the 4th cover of $g'$ at the value $\Gamma = 4.435$, ends at the planar 6th cover of the retrograde orbit at the value $\Gamma = 1.359$.\ Note that in between there are two index jumps.\ In view of Table \ref{table_9}, the index of $f^6$ at the value $\Gamma = 1.359$ jumps from 20 to 18.\ At this transition, the Euler characteristics show that there are still undiscovered families branching out from $f^6$.\ Some of the orbits of the family \textcolor{grgr}{$f_{g'}^{(1cut,4)}$} are plotted in Figure \ref{figure_spatial_g'_f}, where the last row shows a symmetric orbit, and the data are collected in Table \ref{table_16}.\
We have not studied the family \textcolor{red}{$f_{g}^{(1,4)}$} further, but since at the value $\Gamma = 4.278$ the index of the 4th cover of $g$ jumps from 17 to 19 and the family \textcolor{blue}{$f_{g'}^{(1,4)}$} and its symmetric family start with index 17, the family \textcolor{red}{$f_{g}^{(1,4)}$} and its symmetric family have to start with index 18.\
\begin{figure}[H]
\centering
\includegraphics[scale=0.56]{13_2.png}
\caption{From the 4th cover of $g'$ to the 4th cover of $g$ (family \textcolor{blue}{$f_{g'}^{(1,4)}$})}
\label{figure_spatial_g'_f_1}
\end{figure}
\begin{table}[H]\scriptsize \centering
\begin{tabular}{ccccccc}
$\Gamma$ & $q_1(0)$ & $\dot{q}_2(0)$ & $\dot{q}_3(0)$ & $T_q / 4$ & $\text{sign}_{C/B}$ and Floquet multipliers & $\mu_{CZ}$\\
\hline 4.435711 & $-$0.188043 & $-$2.511218 & 0 & 1.34304 & $(+/-)$ $\varphi_p^4 = 2.224$ \& $\varphi_s^4 =0$ & $9+(9 \shortto 11) = 18 \shortto 20$\\
4.435711 & $-$0.188043 & $-$2.511218 & $-$0.0004 & 1.34305 & $(+/-)$ $\varphi_1 = 2.232$ \& $(-/+)$ $\varphi_2 = 6.261$ & 18\\
4.384342 & $-$0.198011 & $-$2.339590 & $-$0.6000 & 1.35458 & $(+/-)$ $\varphi_1 = 2.079$ \& $(-/+)$ $\varphi_2 = 6.210$ & 18\\
4.251346 & $-$0.226833 & $-$1.933856 & $-$0.9900 & 1.38612 & $(+/-)$ $\varphi_1 = 1.616$ \& $(-/+)$ $\varphi_2 = 6.196$ & 18\\
4.068814 & $-$0.294711 & $-$1.340590 & $-$1.0866 & 1.43386 & $(+/-)$ $\varphi_1 = 0.143$ \& $(-/+)$ $\varphi_2 = 6.195$ & 18\\
4.068801 & $-$0.294728 & $-$1.340490 & $-$1.0866 & 1.43386 & $1.000 \pm 0.087\text{i}$ \& $0.986 \pm 0.141 \text{i}$ & 18\\
4.067084 & $-$0.297534 & $-$1.324490 & $-$1.0798 & 1.43439 & $1.094 \pm 0.018\text{i}$ \& $0.913 \pm 0.016 \text{i}$ & 18\\
4.066595 & $-$0.298720 & $-$1.317990 & $-$1.0766 & 1.43456 & $(-/-)$ $\lambda_1 = 1.119$ \& $(+/+)$ $\lambda_2 = 1.064$ & 18\\
4.066199 & $-$0.300790 & $-$1.307190 & $-$1.0703 & 1.43477 & $(-/-)$ $\lambda_1 = 1.321$ \& $(+/+)$ $\lambda_2 = 1.001$ & 18\\
4.066199 & $-$0.300810 & $-$1.307090 & $-$1.0702 & 1.43477 & $(-/-)$ $\lambda_1 = 1.322$ \& $\lambda_2 = 1$ & birth-death\\
4.066199 & $-$0.300851 & $-$1.306890 & $-$1.0701 & 1.43478 & $(-/-)$ $\lambda = 1.325$ \& $\varphi = 6.276$ & 17\\
4.083662 & $-$0.304840 & $-$1.312317 & $-$1.0167 & 1.43352 & $(-/-)$ $\lambda = 2.779$ \& $\varphi = 6.275$ & 17\\
4.211437 & $-$0.302530 & $-$1.514041 & $-$0.6178 & 1.42338 & $(-/-)$ $\lambda = 15.30$ \& $\varphi = 6.275$ & 17\\
4.278924 & $-$0.301158 & $-$1.623019 & $-$0.0003 & 1.41824 & $(-/-)$ $\lambda = 26.32$ \& $\varphi = 6.276$ & 17\\
4.278924 & $-$0.301158 & $-$1.623018 & 0 & 1.41824 & $\lambda_p^4 = 26.26$ \& $\varphi_s^4 = 0$ & $8+(9 \shortto 11) = 17 \shortto 19$
\end{tabular}
\caption{From the 4th cover of $g'$ to the 4th cover of $g$ (family \textcolor{blue}{$f_{g'}^{(1,4)}$})}
\label{table_17}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[scale=0.59]{13_1.png}
\caption{From the 4th cover of $g'$ to the 6th cover of $f$ (family \textcolor{grgr}{$f_{g'}^{(1cut,4)}$})}
\label{figure_spatial_g'_f}
\end{figure}
\begin{table}[H]\scriptsize \centering
\begin{tabular}{ccccccc}
$\Gamma$ & $q_1(0)$ & $\dot{q}_2(0)$ & $\dot{q}_3(0)$ & $T_q / 4$ & $\text{sign}_{C/B}$ and Floquet multipliers & $\mu_{CZ}$\\
\hline 4.435711 & 0.399433 & 1.024708 & 0 & 1.34304 & $(+/-)$ $\varphi_p^4 = 2.224$ \& $\varphi_s^4 =0$ & $9+(9 \shortto 11) = 18 \shortto 20$\\
4.435711 & 0.399433 & 1.024704 & 0.001 & 1.34305 & $(+/-)$ $\varphi = 2.225$ \& $(-/-)$ $\lambda = 1.009$ & 19\\
4.358078 & 0.388745 & 1.036921 & 0.405 & 1.36065 & $(+/-)$ $\varphi = 1.998$ \& $(-/-)$ $\lambda = 1.045$ & 19\\
4.064544 & 0.313078 & 1.240223 & 1.039 & 1.43467 & $(+/-)$ $\varphi = 0.298$ \& $(-/-)$ $\lambda = 1.035$ & 19\\
3.888069 & 0.309540 & 1.005463 & 1.359 & 1.45012 & $(+/-)$ $\varphi = 3.139$ \& $(-/-)$ $\lambda = 1.006$ & 19\\
3.721469 & 0.313247 & 0.753339 & 1.546 & 1.46539 & $(-/+)$ $\varphi = 4.711$ \& $(-/-)$ $\lambda = 1.007$ & 19\\
3.634638 & 0.315238 & 0.624474 & 1.617 & 1.47381 & $(-/+)$ $\varphi = 6.123$ \& $(-/-)$ $\lambda = 1.010$ & 19\\
3.633298 & 0.315269 & 0.622499 & 1.618 & 1.47394 & $(-/-)$ $\lambda_1 = 1.088$ \& $(-/-)$ $\lambda_2 = 1.010$ & 20\\
3.194897 & 0.325968 & $-$0.000610 & 1.805 & 1.52212 & $(-/-)$ $\lambda_1 = 11.37$ \& $(-/-)$ $\lambda_2 = 1.009$ & 20\\
3.039053 & 0.330063 & $-$0.210854 & 1.817 & 1.54192 & $(-/-)$ $\lambda_1 = 13.01$ \& $(-/-)$ $\lambda_2 = 1.009$ & 20\\
2.115894 & 0.358524 & $-$1.326885 & 1.444 & 1.70332 & $(-/-)$ $\lambda_1 = 7.647$ \& $(-/-)$ $\lambda_2 = 1.006$ & 20\\
1.567391 & 0.380122 & $-$1.874481 & 0.783 & 1.86052 & $(-/-)$ $\lambda_1 = 1.026$ \& $(-/-)$ $\lambda_2 = 1.006$ & 20\\
1.567110 & 0.380135 & $-$1.874738 & 0.782 & 1.86062 & $(-/-)$ $\lambda = 1.007$ \& $(-/+)$ $\varphi = 6.240$ & 19\\
1.507647 & 0.382751 & $-$1.928321 & 0.662 & 1.88232 & $(-/-)$ $\lambda = 1.007$ \& $(-/+)$ $\varphi = 5.551$ & 19\\
1.411644 & 0.387099 & $-$2.012299 & 0.393 & 1.91992 & $(-/-)$ $\lambda = 1.007$ \& $(-/+)$ $\varphi = 5.085$ & 19\\
1.359329 & 0.389535 & $-$2.056719 & 0.010 & 1.94188 & $(-/-)$ $\lambda = 1.014$ \& $(-/+)$ $\varphi = 4.888$ & 19\\
1.359293 & 0.389537 & $-$2.05674 & 0 & 1.94179 & $\varphi_s^6 = 0$ \& $(-/+)$ $\varphi_p^6 = 4.886$ & $9+(11 \shortto 9)= 20 \shortto 18$
\end{tabular}
\caption{From the 4th cover of $g'$ to the 6th cover of $f$ (family \textcolor{grgr}{$f_{g'}^{(1cut,4)}$})}
\label{table_16}
\end{table}
\begin{appendix}
\setcounter{secnumdepth}{0}
\section{Appendix - Python Codes}
\label{sec:python_codes}
In this appendix we collect our four Python codes, which we use to compute the data given in Section \ref{sec:8}.\ Our numerical method is the fourth-order Runge-Kutta method, which can be found for instance in \cite[pp.\ 51--53]{sewell}.\
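As an illustration of the integration scheme, a minimal generic fourth-order Runge--Kutta step can be sketched as follows; the right-hand side $f$ and the step size are placeholders, not the actual equations of motion used in our codes:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Sanity check on y' = y, y(0) = 1, whose exact solution is e^t.
y_end = integrate(lambda t, y: y, 0.0, [1.0], 1.0, 100)
assert abs(y_end[0] - np.e) < 1e-8
```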
The first and second codes are used for planar symmetric periodic orbits.\ From given initial data, the first code computes the starting velocity $\dot{q}_2(0)$, the first return time $T_q$ and the synodic period $T_s$.\ It also plots planar symmetric periodic orbits in $q_1q_2$-coordinates starting at $\text{Fix}(\rho_1) = \{ (q_1,0,0,p_2) \}$.\ The second code follows the description of Subsection \ref{sec:6.5.1}: it linearizes along a planar symmetric periodic orbit and calculates the planar reduced monodromies $\overline{A}_p \in \text{Sp}^{\rho_1}(1)$ and $A_s \in \text{Sp}^{\rho_1}(1)$.\ Moreover, it gives the determinant, the trace, the elliptic or hyperbolic behaviour and the Floquet multipliers.\ In elliptic cases, it computes $T_a$ and $T_d$ as well.\ Note that this code is written for the case $\mu_{CZ}^p=\mu_{CZ}^s=3$ and for the variational orbit of our moon from Section \ref{sec:our moon}.\
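The classification step of the second code can be sketched as follows: for a reduced monodromy $A$ with $\det A = 1$, the trace alone decides between elliptic and hyperbolic behaviour, and the Floquet multipliers are the eigenvalues of $A$.\ The matrices below are hypothetical examples for illustration, not output of our codes, and the returned angle lies in $(0,\pi)$; the signatures $\text{sign}_{C/B}$ fix the branch in the actual code:

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify a 2x2 symplectic matrix (det = 1) by its trace."""
    assert abs(np.linalg.det(A) - 1.0) < tol
    tr = np.trace(A)
    eigvals = np.linalg.eigvals(A)
    if abs(tr) < 2 - tol:
        # Elliptic: multipliers exp(+-i*phi) on the unit circle.
        phi = np.arccos(tr / 2.0)
        return "elliptic", phi
    elif abs(tr) > 2 + tol:
        # Hyperbolic: real multipliers lambda and 1/lambda.
        lam = max(abs(eigvals))
        kind = "positive hyperbolic" if tr > 2 else "negative hyperbolic"
        return kind, lam
    return "degenerate", 1.0

# Elliptic example: a rotation by phi = pi/3.
phi0 = np.pi / 3
R0 = np.array([[np.cos(phi0), -np.sin(phi0)],
               [np.sin(phi0),  np.cos(phi0)]])
kind, val = classify(R0)
assert kind == "elliptic" and abs(val - phi0) < 1e-9

# Positive hyperbolic example: diag(2, 1/2).
kind, val = classify(np.diag([2.0, 0.5]))
assert kind == "positive hyperbolic" and abs(val - 2.0) < 1e-9
```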
The third code plots spatial symmetric orbits in $q_1q_2q_3$-coordinates starting at $\text{Fix}(\overline{\rho_1}) = \{ (q_1,0,0,0,p_2,p_3) \}$.\ The fourth code, as described in Subsection \ref{sec:6.5.2}, linearizes along a spatial symmetric periodic orbit starting at $\text{Fix}(\overline{\rho_1})$ and computes the spatial reduced monodromy, which is a matrix in $\text{Sp}^{\rho_1}(2)$, its Floquet multipliers and signatures $\text{sign}_B(\lambda)$ and $\text{sign}_C(\lambda)$.\ Both codes are written for the orbits of Subsection \ref{sec:9.4.1}.\
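In the spatial case the Floquet multipliers of the $4 \times 4$ reduced monodromy come in reciprocal pairs $(\lambda, 1/\lambda)$.\ The following sketch (with a hypothetical block-diagonal example, not data from our tables) shows how such pairs can be extracted from the eigenvalues:

```python
import numpy as np

def floquet_pairs(M, tol=1e-9):
    """Group the eigenvalues of a 4x4 symplectic matrix into
    reciprocal pairs (lambda, 1/lambda)."""
    eigvals = list(np.linalg.eigvals(M))
    pairs = []
    while eigvals:
        lam = eigvals.pop(0)
        # Find the partner mu with lam * mu closest to 1.
        j = min(range(len(eigvals)),
                key=lambda k: abs(eigvals[k] * lam - 1))
        pairs.append((lam, eigvals.pop(j)))
    return pairs

# Hypothetical example: one elliptic block (phi = 1.0) and one
# hyperbolic block (lambda = 3).
phi = 1.0
R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
H = np.diag([3.0, 1.0 / 3.0])
M = np.block([[R, np.zeros((2, 2))], [np.zeros((2, 2)), H]])
for lam, mu in floquet_pairs(M):
    assert abs(lam * mu - 1) < 1e-9  # multipliers pair up reciprocally
```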
For any other symmetric periodic orbit one only needs to adjust the respective code with its initial data.\
\begin{align} \label{python1}
\text{ \textbf{1st Python code:}}
\end{align}
\tiny{
\lstinputlisting[language=Python,breaklines=true]{0_python_code_1.py}}
\normalsize{
\begin{gather} \label{python2}
\text{ \textbf{2nd Python code:}}
\end{gather}}
\tiny{
\lstinputlisting[language=Python,breaklines=true]{0_python_code_2.py}}
\normalsize{
\begin{gather*} \label{python3}
\text{ \textbf{3rd Python code:}}
\end{gather*}}
\tiny{
\lstinputlisting[language=Python,breaklines=true]{0_python_code_3.py}}
\normalsize{
\begin{gather*} \label{python4}
\text{ \textbf{4th Python code:}}
\end{gather*}}
\tiny{
\lstinputlisting[language=Python,breaklines=true]{0_python_code_4_reduced.py}}
\end{appendix}
\addcontentsline{toc}{section}{References}
\section{Introduction}
\input{Sections/Introduction}
\section{Background and Related Work}
\input{Sections/Related_Work}
\input{Sections/Approach}
\section{Experiments}
\input{Sections/Experiments}
\section{Discussion}
\input{Sections/Discussion}
\bibliographystyle{SageH}
\section{Riemannian manifold learning}
\label{sec:riemannian_manifolds}
In this section, we describe how learning complex robot motion skills from demonstrations can be treated from a Riemannian manifold perspective.
Unlike previous works~\cite{Havoutis:MotionPlanningManifold13,Li:TaskManifoldConstrainedManip18}, where skill manifolds are built from locally smooth manifold learning~\cite{Dollar07:LSML}, we leverage a Riemannian formulation.
We develop a model that has enough capacity to learn and synthesize the relevant patterns of a motion while being flexible enough to adapt to new conditions (e.g., dynamic obstacles).
We next describe how we use VAEs to access a low-dimensional manifold learned from the demonstrations, from which we obtain an ambient-space Riemannian metric.
As mentioned previously, this metric is exploited to reconstruct robot motions in both task space $\mathbb{R}^3 \times \mathcal{S}^3$ and joint space $\R^\eta$.
In addition, we discuss the design of the corresponding VAE for each ambient space as well as the formulation of the corresponding Riemannian metric.
In Section~\ref{sec:geodesic_motion}, we explain how we exploit these learned metrics to generate robot motion trajectories using geodesics.
\subsection{Task space $\mathbb{R}^3 \times \mathcal{S}^3$}
To begin, we focus on learning motion skills characterized by full-pose end-effector trajectories, where each pose is represented in $\mathbb{R}^3 \times \mathcal{S}^3$.
Before exploiting the VAE to compute the Riemannian metric, we must ensure its capability to properly learn and reconstruct full-pose end-effector states, i.e.\ position $\x~\in~\mathbb{R}^3$ and orientation $\q~\in~\mathcal{S}^3$, while accounting for specific properties of the data, such as quaternion antipodality.
To do so, we propose a VAE architecture that models the joint density of the robot end-effector state.
Our model retains the usual Gaussian prior $p(\z) = \mathcal{N}(\z | \bm{0}, \I_d)$, but modifies the generative distribution $p_{\bm{\phi}, \bm{\psi}}(\x, \q | \z)$.
Specifically, we assume that position and orientation are conditionally independent,
\begin{align}
p_{\bm{\phi}, \bm{\psi}}(\x, \q | \z) = p_{\bm{\phi}}(\x | \z) p_{\bm{\psi}}(\q | \z) ,
\end{align}
where the latent variable $\z$ captures the correlation between position and quaternion data.
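As a concrete illustration of this factorized generative model, the decoder can be sketched as a shared trunk with two heads: a Gaussian head for the position $\x$ and a unit-norm head giving the mean direction for the orientation $\q$.\ The layer sizes and random weights below are illustrative placeholders, not those of our actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoHeadDecoder:
    """Sketch of a decoder with conditionally independent heads:
    a Gaussian head for positions x in R^3 and a unit-vector head
    giving the mean direction of a distribution on S^3."""

    def __init__(self, d_latent=2, d_hidden=16):
        self.W1 = rng.standard_normal((d_hidden, d_latent)) * 0.1
        self.W_mu = rng.standard_normal((3, d_hidden)) * 0.1
        self.W_logvar = rng.standard_normal((3, d_hidden)) * 0.1
        self.W_q = rng.standard_normal((4, d_hidden)) * 0.1

    def __call__(self, z):
        h = np.tanh(self.W1 @ z)                   # shared trunk
        mu_x = self.W_mu @ h                       # position mean
        sigma_x = np.exp(0.5 * self.W_logvar @ h)  # position std (> 0)
        q = self.W_q @ h
        mu_q = q / np.linalg.norm(q)               # unit mean direction
        return mu_x, sigma_x, mu_q

dec = TwoHeadDecoder()
mu_x, sigma_x, mu_q = dec(np.array([0.3, -0.7]))
assert np.all(sigma_x > 0)
assert abs(np.linalg.norm(mu_q) - 1) < 1e-12
```

The shared trunk is what lets the latent variable $\z$ correlate position and orientation even though the two likelihoods factorize.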
Next, we describe how each conditional distribution is parameterized and learned.
\subsubsection{Position encoding in $\mathbb{R}^3$:}
To model the conditional distribution of end-effector positions $\x$, we opt for simplicity and choose this to be Gaussian,
\begin{align}
p_{\bm{\phi}}(\x | \z) &= \mathcal{N}(\x | \mu_{\bm{\phi}}(\z), \I_3 \sigma^2_{\bm{\phi}}(\z)) ,
\end{align}
where $\mu_{\bm{\phi}}$ and $\sigma_{\bm{\phi}}$ are neural networks parametrized by $\bm{\phi}$.
One could argue that $p_{\bm{\phi}}(\x | \z)$ should have zero probability mass outside the workspace of the robot, but we disregard
such concerns as $\sigma_{\bm{\phi}}^2$ tends to take small values due to limited data noise.
This implies that only a negligible fraction of the probability mass falls outside the robot workspace.
\subsubsection{Quaternion encoding in $\mathcal{S}^3$:}
On a robot motion trajectory, each position is paired with an orientation, and together they define the full pose of the end-effector.
There are several representations for the end-effector orientation, for example, Euler angles, rotation matrices, and unit quaternions.
Euler angles and rotation matrices are widely used for their simplicity and intuitiveness. However, the former suffer from gimbal lock~\citep{Hemingway2018:gimballock}, which makes them an inadequate orientation parametrization, while the latter are a redundant representation requiring a high number of parameters.
Unit quaternions, on the other hand, are a convenient way to represent orientations since they are compact, not redundant, and prevent gimbal lock.
Also, they provide strong stability guarantees in closed-loop orientation control~\citep{Camarillo08:quaternions}, and they have recently been exploited for learning complicated robotic tasks~\citep{Rozo2020:SkillsSeq} and for data-efficient robot control tuning~\citep{Jaquier2019:GaBO} using Riemannian-manifold formulations.
We choose to represent orientations $\q$ as unit quaternions, such that $\q \in \mathcal{S}^3$, with the additional antipodal identification that $\q$ and $-\q$ correspond to the same orientation.
Formally, a unit quaternion $\q$ lying on the surface of a $3$-sphere $\mathcal{S}^3$ can be represented using a $4$-dimensional unit vector $\q~=~[q_w, q_x, q_y, q_z]$, where the scalar $q_w$ and vector $(q_x, q_y, q_z)$ represent the real and imaginary parts of the quaternion, respectively.
To cope with antipodality, one could model $\q$ as a point in a projective space, but for simplicity we let $\q$ live on the unit sphere $\mathcal{S}^3$.
We then choose a generative distribution $p_{\bm{\psi}}(\q | \z)$ such that $p_{\bm{\psi}}(\q | \z)~=~p_{\bm{\psi}}(-\q | \z)$.
In other words, the quaternions $\q$ and $-\q$ are considered to be antipodal: they lie on diametrically opposite points on the $3$-sphere while representing the same orientation.
To formulate a suitable distribution $p_{\bm{\psi}}(\q | \z)$ over $\mathcal{S}^3$, we leverage the von Mises-Fisher (vMF) distribution, which can be seen as an isotropic Gaussian constrained to the unit sphere~\citep{Sra18:DirectionalStats}.
This distribution is described by a mean direction $\bm{\mu}$ with $\left \| \bm{\mu} \right \| = 1$, and a concentration parameter $\kappa \ge 0$.
The vMF density function is defined as,
\begin{align}
\mathrm{vMF}(\q | \bm{\mu}, \kappa) = C_{D}(\kappa) \exp\left({\kappa\bm{\mu}^{\trsp} \q}\right) ,
\qquad \| \bm{\mu} \| = 1,
\label{eq:vmf_density}
\end{align}
where $C_{D}$ is the normalization constant
\begin{align}
C_{D}({\kappa}) = \frac{\kappa^{\frac{D}{2}-1}}{(2\pi)^{\frac{D}{2}} \mathit{I}_{\frac{D}{2}-1}(\kappa)} ,
\end{align}
with $\mathit{I}_{\frac{D}{2}-1}(\kappa)$ being the modified Bessel function of the first kind.
Like the Gaussian, from which it is constructed, the von Mises-Fisher distribution is unimodal.
To build a distribution that is antipodally symmetric, i.e.\ $p_{\bm{\psi}}(\q | \z) = p_{\bm{\psi}}(-\q | \z)$, we define a mixture of antipodal vMF distributions~\citep{hauberg:tpami:grassmann},
\begin{align}
p_{\bm{\psi}}(\q | \z) &= \frac{1}{2} \mathrm{vMF}(\q | \bm{\mu}_{\bm{\psi}}(\z), \kappa_{\bm{\psi}}(\z)) \nonumber\\
&+ \frac{1}{2} \mathrm{vMF}(\q | -\bm{\mu}_{\bm{\psi}}(\z), \kappa_{\bm{\psi}}(\z)) ,
\end{align}
where $\bm{\mu}_{\bm{\psi}}$ and $\kappa_{\bm{\psi}}$ are parametrized as neural networks.
This mixture model is conceptually similar to a Bingham distribution~\citep{Sra18:DirectionalStats}, but is easier to implement numerically.
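A minimal numerical sketch of the vMF density in Eq.~\eqref{eq:vmf_density} and of this antipodal mixture is given below; the function names are illustrative, and the exponentially scaled Bessel function is used for numerical stability:

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel I_v

def vmf_log_density(q, mu, kappa, D):
    """log vMF(q | mu, kappa) for unit vectors on the sphere in R^D."""
    v = D / 2.0 - 1.0
    # log I_v(kappa) = log ive(v, kappa) + kappa (avoids overflow for large kappa)
    log_c = v * np.log(kappa) - (D / 2.0) * np.log(2.0 * np.pi) \
            - (np.log(ive(v, kappa)) + kappa)
    return log_c + kappa * float(np.dot(mu, q))

def antipodal_vmf_log_density(q, mu, kappa, D=4):
    """Antipodally symmetric mixture 0.5 vMF(q | mu) + 0.5 vMF(q | -mu)."""
    return np.logaddexp(vmf_log_density(q, mu, kappa, D),
                        vmf_log_density(q, -mu, kappa, D)) + np.log(0.5)
```

By construction, swapping $\q$ for $-\q$ leaves the mixture density unchanged.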
\subsubsection{Variational inference:}
To train the VAE, we maximize an adapted version of the evidence lower bound (ELBO) in Eq.~\eqref{eq:elbo}, defined as
\begin{align}
\label{eq:final_elbo}
\Loss_{ELBO} &= \beta_1 \Loss_\x + \beta_2 \Loss_\q - \mathrm{KL}\left(q_{\bm{\xi}}(\z|\x)||\Prob(\z)\right) ,\\
\Loss_\x &= \mathbb{E}_{q_{\bm{\xi}}(\z|\x)}\left[\log p_{{\bm{\phi}}}(\x|\z) \right] , \\
\Loss_\q &= \mathbb{E}_{q_{\bm{\xi}}(\z|\x)}\left[\log p_{{\bm{\psi}}}(\q|\z) \right] ,
\end{align}
where $\x \in \R^3$ and $\q \in \Sph^3$ represent the position and orientation of the end-effector, respectively.
The scaling factors $\beta_1>0$ and $\beta_2>0$ balance the log-likelihood of position and orientation components.
Due to quaternion antipodality, the raw demonstration data may contain either of the two antipodal quaternions for the same orientation.
We avoid any data pre-processing step by considering two vMF distributions that encode the same orientation on both sides of the hypersphere.
In practice, we double the training data by including $\q_n$ and $-\q_n$ for all observations $\q_n$.
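This augmentation step is a one-liner; the helper below is a hypothetical sketch assuming quaternions are stored row-wise:

```python
import numpy as np

def augment_antipodal(Q):
    """Double a quaternion dataset by appending the antipodes -q_n."""
    Q = np.asarray(Q, dtype=float)
    return np.vstack([Q, -Q])
```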
Note that as the Riemannian manifold is learned using task space data, the model is kinematics agnostic, which means that the generated motion may be used across different robots as long as the trajectory is reachable.
\subsubsection{Induced Riemannian metric:}
Our generative process is parametrized by a set of neural networks. Specifically, $\mu_{\bm{\phi}}$ and $\sigma_{\bm{\phi}}$ are position mean and variance neural networks parameterized by $\bm{\phi}$, while $\mu_{\bm{\psi}}$ and $\kappa_{\bm{\psi}}$ are neural networks parameterized by $\bm{\psi}$ that represent the mean and concentration of the quaternion distribution.
Following Section~\ref{sec:vae_manifold}, the Jacobians of these functions govern the induced Riemannian metric as,
\begin{align}
\label{eq:pos_quat_metric}
\Metric(\z) &= \Metric_\mu^\x(\z) + \Metric_\sigma^\x(\z) + \Metric_\mu^\q(\z) + \Metric_\kappa^\q(\z) ,
\end{align}
with
\begin{align}
\Metric_\mu^\x(\z) &= \Jac_{\mu_{\bm{\phi}}}(\z)^{\trsp} \Jac_{\mu_{\bm{\phi}}}(\z) , \\ \Metric_\sigma^\x(\z) &= \Jac_{\sigma_{\bm{\phi}}}(\z)^{\trsp} \Jac_{\sigma_{\bm{\phi}}}(\z) ,\\
\Metric_\mu^\q(\z) &= \Jac_{\mu_{\bm{\psi}}}(\z)^{\trsp} \Jac_{\mu_{\bm{\psi}}}(\z) , \\ \Metric_\kappa^\q(\z) &= \Jac_{\kappa_{\bm{\psi}}}(\z)^{\trsp} \Jac_{\kappa_{\bm{\psi}}}(\z) ,
\label{eq:OurMetric}
\end{align}
where $\Jac_{\mu_{\bm{\phi}}}$, $\Jac_{\sigma_{\bm{\phi}}}$, $\Jac_{\mu_{\bm{\psi}}}$, $\Jac_{\kappa_{\bm{\psi}}}$ are the Jacobians of the functions representing the position mean and variance, as well as the quaternion mean and concentration, respectively.
In practice, we want this Riemannian metric $\Metric(\z)$ to take large values in regions with little or no data, so that geodesics avoid passing through them.
We achieve this by using radial basis function (RBF) networks as our variance representation, whose kernels reliably extrapolate over the whole space~\citep{Arvanitidis:LatentSO}.
Since the concentration parameter of the von Mises-Fisher distribution behaves as the reciprocal of the Gaussian variance, the RBF network estimates the inverse standard deviation for positions.
In summary, the data uncertainty is encoded by the RBF networks representing $\sigma^{-1}_{\bm{\phi}}(\z)$ and $\kappa_{\bm{\psi}}(\z)$, which affect the Riemannian metric through their corresponding Jacobians as in Eq.~\eqref{eq:pos_quat_metric}.
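To make the metric construction concrete, the sketch below assembles $\Metric(\z)$ as a sum of $\Jac^{\trsp}\Jac$ terms using finite-difference Jacobians; in practice the decoder Jacobians would be obtained by automatic differentiation, and the helper names here are our own:

```python
import numpy as np

def num_jacobian(f, z, eps=1e-6):
    """Forward-difference Jacobian of f: R^d -> R^D at z."""
    z = np.asarray(z, dtype=float)
    f0 = np.asarray(f(z), dtype=float)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (np.asarray(f(z + dz), dtype=float) - f0) / eps
    return J

def pullback_metric(decoder_heads, z):
    """M(z) = sum_h J_h(z)^T J_h(z) over the decoder heads, e.g. the
    position mean/variance and quaternion mean/concentration networks."""
    d = np.asarray(z).size
    M = np.zeros((d, d))
    for f in decoder_heads:
        J = num_jacobian(f, z)
        M += J.T @ J
    return M
```

For linear decoder heads $f(\z) = \bm{A}\z$ the result equals $\bm{A}^{\trsp}\bm{A}$, which offers a simple sanity check.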
\subsection{Joint space $\mathbb{R}^{\eta}$}
The joint space $\mathbb{R}^{\eta}$, also known as \emph{configuration space}, is another space to represent robot motion trajectories\footnote{For simplicity, we did not model the robot joint space as a high-dimensional torus. However, as shown for the task space, our approach can easily encode data lying on Riemannian manifolds.}.
In this space, each trajectory point is represented as the vector $\bm{\theta} = \left [\theta_1, \theta_2, \ldots, \theta_\eta \right ]~\in~\R^\eta$, where $\eta$ is the number of degrees of freedom of the robot.
Learning motion skills in this space is known to be challenging as it is less intuitive to provide joint-level demonstrations.
However, being able to learn and generate joint space movements is relevant as some tasks may demand specific robot postures.
Moreover, joint space skills can be extended to provide whole-body obstacle avoidance.
In this context, we formulate a Riemannian robot motion learning approach to generate collision-free joint space movements.
Previously, we computed a Riemannian metric in the latent space using the VAE decoder trained on task space demonstrations.
Similarly, a new VAE architecture can be designed to compute a Riemannian metric from joint space demonstrations.
We use this metric to compute geodesics and generate robot movements that resemble the demonstrations in joint space.
This joint space approach also allows us to endow the robot with whole-body obstacle avoidance capabilities.
By using ambient metrics, we can again reshape the learned metric to make the robot move away from obstacles in an online fashion.
The ambient metric exclusively uses task space information about the obstacles and the robot body, in contrast to classical motion planning, which often works in the configuration space.
Note that the data manifold learned using joint space demonstrations is kinematics-dependent, meaning that the generated motion cannot be directly transferred to other robots with different kinematics.
\subsubsection{Variational inference:}
To train the joint space VAE, we maximize a modified version of the evidence lower bound (ELBO) Eq.~\eqref{eq:elbo}, defined as
\begin{equation}
\Loss_{ELBO} = \Loss_{\bm{\theta}}-\mathrm{KL}\left(q_{\bm{\xi}}(\bm{z}_i|\bm{\theta}_i)||p(\bm{z})\right) ,
\label{equ:final_ELBO_joint}
\end{equation}
\noindent
with,
\begin{align*}
\Loss_{\bm{\theta}} &= \mathbb{E}_{q_{\bm{\xi}}(\bm{z}_i|{\bm{\theta}}_i)}\left [ \log p_{\mathcal{X}}(f_{\textrm{FK}}(\bm{\theta}_i)|\bm{z}_i)\right ] , \\
&= \mathbb{E}_{q_{\bm{\xi}}(\bm{z}_i|{\bm{\theta}}_i)}\left[ \log p_\Theta(\bm{\theta}_i|\bm{z}_i) - \log\mathcal{V}\right] ,
\end{align*}
where $p_\Theta(\bm{\theta}|\bm{z}_i)$ and $p_{\mathcal{X}}(f_{\textrm{FK}}(\bm{\theta})|\bm{z}_i)$ are the estimated conditional densities in the joint space $\Theta$ and task space $\ambient$, respectively.
Also, $\mathcal{V}$ is the volume measure defined as
\begin{align}
\mathcal{V} = \sqrt{\det\left(\Jac_{f_{\textrm{FK}}}^{\trsp}(\mu_{\bm{\alpha}}(\bm{z}_i)) \, \Jac_{f_{\textrm{FK}}}(\mu_{\bm{\alpha}}(\bm{z}_i))\right)} ,
\label{eq:volume_joint}
\end{align}
where $\Jac_{f_{\textrm{FK}}}$ is the Jacobian of the forward kinematics $f_{\textrm{FK}}$ evaluated at the joint configuration estimated by the VAE decoder $\mu_{\bm{\alpha}}$.
Furthermore, the generative distribution $p_\Theta(\bm{\theta}|\bm{z}_i)~=~\mathcal{N}(\bm{\theta} | \mu_{\bm{\alpha}}(\bm{z}_i), \I_{\eta}\sigma^2_{\bm{\alpha}}(\bm{z}_i))$ is parameterized by the VAE decoder mean $\mu_{\bm{\alpha}}(\bm{z}_i)$ and variance $\sigma_{\bm{\alpha}}(\bm{z}_i)$ networks.
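As an illustration of the volume measure $\mathcal{V}$, consider a planar 2-DOF arm as a stand-in for $f_{\textrm{FK}}$, for which $\mathcal{V}$ reduces to $|l_1 l_2 \sin\theta_2|$; this toy model is hypothetical and not the kinematics used in our experiments:

```python
import numpy as np

def fk_jacobian(theta, l1=1.0, l2=1.0):
    """Analytic Jacobian of a planar 2-link forward kinematics."""
    t1, t2 = theta
    s1, c1 = np.sin(t1), np.cos(t1)
    s12, c12 = np.sin(t1 + t2), np.cos(t1 + t2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def volume_measure(theta):
    """V = sqrt(det(J_FK^T J_FK)) at a given joint configuration."""
    J = fk_jacobian(theta)
    return np.sqrt(np.linalg.det(J.T @ J))
```

For a square Jacobian this equals $|\det \Jac_{f_{\textrm{FK}}}|$, which vanishes at kinematic singularities ($\theta_2 = 0$ for this arm).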
Note that the new ELBO formulation in Eq.~\eqref{equ:final_ELBO_joint} leverages the change-of-variables theorem~\citep{MathForML2020:Deisenroth} to transform probability densities from joint to task space.
As a result, the VAE is still trained using task space information, even though the demonstration trajectories are given in joint space.
This is motivated by the fact that most robot skills still depend on task space variables (e.g.\ the pose of manipulated objects), even when the skill is also required to imitate particular robot postures.
As we are interested in whole-body obstacle avoidance, we can leverage the forward kinematics model to access the Cartesian position of different points on the robot (e.g., its joint locations).
Therefore, we use a set of $M$ forward kinematic functions $f_{\textrm{FK}}^{1:M}$, where each $f_{\textrm{FK}}^{m}$ takes as input the first $n$ elements $\bm{\mu_{\alpha}}^{1:n}$ (with $n \in \{1, \ldots, \eta\}$) of the joint-value vector $\mu_{\alpha}(\bm{z})$, and $M$ is the number of considered points on the robot.
Note that for certain points on the robot structure, the forward kinematics only needs a subset of the joint values.
For simplicity, we consider $M$ to be equal to the number of robot joints plus the end-effector (i.e. $M=\eta+1$).
Then, the full forward kinematic function $f_{\textrm{FK}}$ is defined as,
\begin{align*}
f_{\textrm{FK}}(\bm{\mu_{\alpha}}) =&
\begin{bmatrix}
f_{\textrm{FK}}^1(\bm{\mu_{\alpha}}^{1}),
\hdots,
f_{\textrm{FK}}^M(\bm{\mu_{\alpha}}^{1:\eta}),
f_{\textrm{FK}}^{ee}(\bm{\mu_{\alpha}}^{1:\eta})
\end{bmatrix}^\trsp,\\
=& \begin{bmatrix}
\bm{p}_{1},
\hdots,
\bm{p}_{M},
\bm{p}_{\textrm{ee}},
\bm{q}_{\textrm{ee}}
\end{bmatrix}^\trsp,
\end{align*}
where, given the joint-value vector $\bm{\mu_{\alpha}}$ as input, each function computes the position $\bm{p}_{m}$ of the corresponding $m$-th point on the robot, except the last function $f_{\textrm{FK}}^{ee}$, which provides both the position $\bm{p}_{\textrm{ee}}$ and the orientation $\bm{q}_{\textrm{ee}}$ of the end-effector.
Furthermore, the volume measure $\mathcal{V}$ in Eq.~\eqref{eq:volume_joint} uses the Jacobian of the full forward kinematics function, which is defined as,
\begin{align*}
\Jac_{f_{\textrm{FK}}(\bm{\mu_{\alpha}})} =&
\begin{bmatrix}
\Jac_{\bm{p_{1}}},
\hdots,
\Jac_{\bm{p}_M},
\Jac_{\bm{p}_{\textrm{ee}}},
\Jac_{\bm{q}_{\textrm{ee}}}
\end{bmatrix}^\trsp,
\end{align*}
where $\Jac_{\bm{p}_m}$ and $\Jac_{\bm{q}_{\textrm{ee}}}$ denote the linear and angular components of the corresponding Jacobians.
\subsubsection{Induced Riemannian metric:}
With the new integrated forward kinematic layer, we can calculate a pullback metric that directly uses task space information.
This adds an additional step in the formulation of the Riemannian metric, which now requires the Jacobian of the forward kinematics $\Jac_{f_{\textrm{FK}}}$ as well as the Jacobians of the VAE decoder $\Jac_{\mu_{\bm{\alpha}}}$ and $\Jac_{\sigma_{\bm{\alpha}}}$, computed from the mean and variance decoder networks.
Using these Jacobians, the metric can be defined as,
\begin{equation}
\Metric^{\bm{\theta}}(\z) = \Metric^{\bm{\theta}}_{\mu_{\bm{\alpha}}}(\z) + \Metric^{\bm{\theta}}_{\sigma_{\bm{\alpha}}}(\z)
\label{eq:Joint_space_VAE_Metric}
\end{equation}
with,
\begin{align*}
\Metric^{\bm{\theta}}_{\mu_{\bm{\alpha}}}(\z) &= \left (\Jac_{f_{\textrm{FK}}}(\mu_{\bm{\alpha}}(\z)) \Jac_{\mu_{\bm{\alpha}}}(\z) \right ) ^ \trsp \left (\Jac_{f_{\textrm{FK}}}(\mu_{\bm{\alpha}}(\z)) \Jac_{\mu_{\bm{\alpha}}}(\z) \right ) ,\\
\Metric^{\bm{\theta}}_{\sigma_{\bm{\alpha}}}(\z) &= \left (\Jac_{f_{\textrm{FK}}}(\mu_{\bm{\alpha}}(\z)) \Jac_{\sigma_{\bm{\alpha}}}(\z) \right ) ^ \trsp \left (\Jac_{f_{\textrm{FK}}}(\mu_{\bm{\alpha}}(\z)) \Jac_{\sigma_{\bm{\alpha}}}(\z) \right ) .
\end{align*}
Similarly to our Riemannian metric in task space, this new metric $\Metric^{\bm{\theta}}(\z)$ takes large values in regions with little or no data, so that geodesics avoid passing through them.
Therefore, geodesic curves generated via Eq.~\eqref{eq:Joint_space_VAE_Metric} allow us to reproduce joint space robot skills.
\section{Geodesic motion skills}
\label{sec:geodesic_motion}
As explained previously, geodesics follow the trend of the data, and they are here exploited to reconstruct motion skills that resemble the human demonstrations.
In this section, we describe geodesic computation in both settings, namely, where the VAE is trained on task space or joint space trajectories.
Moreover, we explain how new geodesic paths, that avoid obstacles on the fly, can be obtained by metric reshaping.
In particular, we exploit ambient space metrics defined as a function of the obstacles configuration to locally deform the original learned Riemannian metric.
Last but not least, our approach can encode multiple-solution skills, from which new hybrid trajectories (not previously shown to the robot) can be synthesized.
We elaborate on each of these features in the sequel.
\begin{figure}[t]
\centering
\begin{subfigure}{\linewidth}
\includegraphics[width=1\linewidth]{Images/Task_space/Toy_Example/Variance_Final.pdf}
\end{subfigure}
\begin{subfigure}{\linewidth}
\includegraphics[width=1\linewidth]{Images/Task_space/Toy_Example/Magnification_Factor_Final.pdf}
\end{subfigure}
\caption{\emph{Top}: the variance measure, \emph{bottom}: the magnification factor of the Riemannian manifold learned from trajectories based on the $\mathsf{J}$ and $\mathsf{C}$ English alphabet characters defined in $\R^2 \times \mathcal{S}^2$. The semi-transparent white points depict the encoded training set. The resulting manifold is composed of two similar clusters due to the antipodal encoding of the quaternions, where each cluster represents one side of the hypersphere. The yellow and red curves show the geodesics computed with the Riemannian and Euclidean metrics, respectively.}
\label{fig:Toy_example}
\end{figure}
\subsection{Generating motion:}
Geodesic curves generally follow the trend of the demonstrations data, due to the role of uncertainty in the metric.
Specifically, the Riemannian metrics Eq.~\eqref{eq:VAE_metric} and Eq.~\eqref{eq:Joint_space_VAE_Metric} tell us that geodesics are penalized for crossing through regions where the VAE predictive uncertainty grows.
This implies that if a set of demonstrations follows a circular motion pattern, geodesics starting from arbitrary points on the learned manifold will also generate a circular motion (see Fig. \ref{fig:Teaser}).
This behavior stems from the definition of the metric $\Metric$, which takes low values where data uncertainty is low (and vice versa).
Since the geodesics minimize the energy of the curve between two points on $\Manifold$, which is a function of $\Metric$, they tend to stay on the learned manifold and avoid outside regions.
This property suggests that geodesics form a natural motion-generation mechanism.
Note that when using a Euclidean metric (i.e., an identity matrix), geodesics correspond to straight lines.
Such geodesics certainly neglect the data manifold geometry.
Note that geodesics do not typically admit a closed-form expression on these learned manifolds, and numerical approximations are required.
This can be done by direct minimization of curve length~\citep{Shao:TheRiemannianGeometry, kalatzis:icml:2020}, $\textrm{A}^*$ search~\citep{Chen2019:FastApproximateGeodesics}, integration of the associated ODE~\citep{arvanitidis:aistats:2019}, or various heuristics~\citep{chen:MetricsforDeep}.
In this paper, we compute geodesics on $\Manifold$ by approximating them by cubic splines $\curve \approx {\omega_\lambda}(\z_{c})$,
with $\z_c~=~\{\z_{c_0}, \ldots, \z_{c_K} \}$, where $\z_{c_k} \in \latent$ is a vector defining a control point of the spline over the latent space $\latent$.
Given the $K+1$ control points $\z_{c_0}, \ldots, \z_{c_K}$, $K$ cubic polynomials $\omega_{\lambda_i}$ with coefficients $\lambda_{i,0}$, $\lambda_{i,1}$, $\lambda_{i,2}$, $\lambda_{i,3}$ have to be estimated to minimize the curve's Riemannian length,
\begin{equation}
\label{eq:cost_geodesic}
\Loss_{{\omega_\lambda}(\z_{c})} = \int_0^1 \sqrt{\left \langle \dot{\omega}_\lambda(t; \z_{c}), \Metric(\omega_\lambda(t; \z_{c}))\dot{\omega}_\lambda(t; \z_{c}) \right \rangle} \mathrm{d}t .
\end{equation}
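In a discretized form, this length can be approximated by summing segment contributions with the metric evaluated at segment midpoints; the sketch below is illustrative (the actual implementation optimizes the spline coefficients $\lambda_{i,\cdot}$ rather than raw points):

```python
import numpy as np

def curve_length(points, metric_fn):
    """Discrete Riemannian length: sum_k sqrt(dz_k^T M(z_mid) dz_k)."""
    total = 0.0
    for a, b in zip(points[:-1], points[1:]):
        dz = b - a
        M = metric_fn(0.5 * (a + b))  # metric at the segment midpoint
        total += np.sqrt(dz @ M @ dz)
    return total
```

Under the identity (Euclidean) metric this recovers the ordinary polyline length, which offers a simple sanity check.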
The resulting geodesic $\curve$ computed in $\latent$ is used to generate the robot motion by decoding it through the VAE mean networks $\mu_{\bm{\phi}}$ and $\mu_{\bm{\psi}}$, or $\mu_{\bm{\alpha}}$, depending on the ambient space setting.
The obtained trajectory is then executed on the robot arm to reproduce the required skill.
In the task space setting, the decoded geodesics can be deployed on the robot using a Cartesian impedance controller or inverse kinematics.
In the joint space setting, the decoded geodesics can be employed directly on the robot as a joint trajectory reference to be tracked by joint position or impedance controllers.
\subsection{Geodesics in task space $\R^3 \times \Sph^3$:}
In this section, we investigate the geodesic motion generation in task space.
To illustrate the motion generation mechanism, we consider a simple experiment where the demonstration data at each time point is confined to $\R^2 \times \Sph^2$, i.e. only two-dimensional positions and orientations are considered.
We artificially create position data that follows a $\mathsf{J}$ shape and orientation data that follows a $\mathsf{C}$ shape projected on the sphere (see Fig.~\ref{fig:Toy_example_data}).
We fit our VAE model to this dataset, and visualize the corresponding latent space in Fig.~\ref{fig:Toy_example}, where the top panel shows the latent mean embeddings of the training data with a background color corresponding to the predictive uncertainty.
We see low uncertainty near the data, and high otherwise.
The bottom panel of Fig.~\ref{fig:Toy_example} shows the same embedding but with a background color proportional to $\log\sqrt{\det\Metric}$.
This quantity, known as the magnification factor~\citep{Bishop1997:magfactor}, generally takes large values in regions where distances are large, implying that geodesics will try to avoid such regions.
In Fig.~\ref{fig:Toy_example}, we notice that the magnification factor is generally low, except on the `boundary' of the data manifold, i.e. in regions where the predictive variance grows.
Consequently, we observe that Riemannian geodesics (yellow curves in the figure) stay within the `boundary' and hence resemble the training data patterns.
In contrast, Euclidean geodesics (red curves in the figure) fail to stay in the data manifold.
Our proposal is to use these geodesics on the learned manifolds as our robot motion generation mechanism.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{Images/Task_space/Toy_Example/Euclidean_Reconstruction_Final.pdf}
\caption{As an illustration, we consider synthetic data that belong to $\R^2 \times \Sph^2$. The left panel depicts the $\mathsf{J}$-shaped position data in $\R^2$ and the right panel shows the $\mathsf{C}$-shaped orientation data on $\mathcal{S}^2$. The yellow and red curves show the geodesics computed based on Riemannian and Euclidean metrics, respectively.}
\label{fig:Toy_example_data}
\end{figure}
Note that both panels in Fig.~\ref{fig:Toy_example} depict two distinct horseshoe-like clusters, which is a result of the antipodality of the data in $\Sph^2$.
More precisely, the bimodal distribution of the antipodal quaternion data is encapsulated by these two clusters in the latent space.
In practical settings, as long as the geodesic curve does not cross between clusters (i.e., both start and goal points belong to the same cluster), the quaternion sign remains unchanged.
We experimentally examine this in Section~\ref{sec:experiments}.
\begin{figure*}
\centering
\begin{subfigure}{.33\linewidth}
\centering
\includegraphics[width = \linewidth]{Images/Joint_Space/Toy_Example/demonstrations_Final.pdf}
\end{subfigure}%
\begin{subfigure}{.33\linewidth}
\centering
\includegraphics[width = \linewidth]{Images/Joint_Space/Toy_Example/Variance_Measure_Final.pdf}
\end{subfigure}
\begin{subfigure}{.33\linewidth}
\centering
\includegraphics[width = \linewidth]{Images/Joint_Space/Toy_Example/Magnification_Factor_Final.pdf}
\end{subfigure}
\caption{Illustration of joint space motion generation via geodesics. \emph{Left}: $\mathsf{S}$-shaped joint space demonstrations. \emph{Middle}: the resulting variance measure. \emph{Right}: the magnification factor of the learned Riemannian manifold. The semi-transparent white points depict the encoded training set. The resulting manifold is composed of two similar clusters due to the two different inverse-kinematics solutions for the task. The blue and red curves show the geodesics computed with the Riemannian and Euclidean metrics, respectively.}
\label{fig:Toy_Example_Joint_Metric}
\end{figure*}
\subsection{Geodesics in joint space $\R^\eta$:}
In this section, we investigate the geodesic computation for joint space movements.
We use a toy example where a $2$-DOF robot arm follows an $\mathsf{S}$-shaped trajectory in task space using two different joint configurations (i.e., two different inverse-kinematics solutions), as shown in Fig.~\ref{fig:Toy_Example_Joint_Metric}--left.
We observe two sets of demonstrations in joint space that reproduce the same end-effector movements when applied to the robot.
We can also see in the middle and right panels of Fig.~\ref{fig:Toy_Example_Joint_Metric} the geodesics computed using the Riemannian and Euclidean metrics, depicted as blue and red curves, respectively.
The background of the middle panel illustrates the predictive uncertainty over the latent space $\latent$, where we again see low uncertainty near the data, and high otherwise.
For completeness, the background in the right panel illustrates the magnification factor.
The latent mean embedding of the training data are depicted as semi-transparent white points.
Similar to the previous section, the geodesics generated using the Riemannian metric stay within the `boundary' near the training data.
Furthermore, it is easy to note that the learned manifold comprises two clusters, but unlike the previous task space example, these clusters arise from the two different joint space solutions provided in the training data.
This indicates that the clusters in the learned manifold encapsulate the provided solutions in the demonstrations.
When the number of clusters grows, the high-energy regions separating them may become narrow, so a geodesic is more likely to travel across clusters in search of a minimal-energy path.
However, unnecessarily frequent switching among these clusters may lead to jerky geodesics, thus negatively impacting their quality, particularly for robots with many degrees of freedom (e.g.\ $\textrm{DOF} \ge 7$).
Later in Section~\ref{sec:experiments}, we experimentally show that this issue can be alleviated by increasing the latent space dimensionality.
\subsection{Obstacle avoidance using ambient space metrics}
Often human demonstrations do not include any notion of obstacles in the environment.
Therefore, obstacle avoidance is usually treated as a separate problem when generating robot motions in unstructured environments.
A possible solution to integrate both problems is to provide obstacle-aware demonstrations, where the robot is explicitly taught how to avoid known obstacles.
The main drawback is that the robot is still unable to avoid unseen obstacles on the fly.
Our Riemannian approach provides a natural solution to this problem.
The learned latent space metrics in Eq.~\eqref{eq:VAE_metric} and Eq.~\eqref{eq:Joint_space_VAE_Metric} measure the length of a geodesic curve under the Euclidean metric of the ambient space $\ambient$.
We can easily modify this to account for unseen and dynamic obstacles.
Intuitively, we can increase the length of curves that intersect obstacles, such that geodesics are less likely to go near them. Next, we explain how obstacle avoidance can be achieved for both task space and joint space settings.
\subsubsection{Obstacle avoidance in task space $\R^3 \times \Sph^3$:}
Here we explain how we can reshape the learned metric to avoid obstacles in the task space setting, where only the robot end-effector is considered.
Formally, we propose to define the ambient metric of the end-effector position to be
\begin{align}
\Metric_\ambient^\x(\x) &= \left( 1 + \zeta \exp\left( \frac{-\| \bm{x} - \bm{o} \|^2}{2r^2} \right)\right)\I_3,
\quad \bm{x} \in \mathbb{R}^3,
\label{eq:ambient_metric_R3S3}
\end{align}
where $\zeta > 0$ scales the cost, $\bm{o} \in \R^3$ and $r>0$ represent the position and radius of the obstacle, respectively.
For the orientation component, we assume a flat ambient metric $\Metric_\ambient^\q(\q) = \I_4$.
Under this new ambient metric, geodesics will generally avoid the obstacle, though we emphasize this is only a \emph{soft} constraint.
This approach is similar in spirit to CHOMP~\citep{Ratliff2009:chomp} except our formulation works along a low-dimensional learned manifold, whose solution is then decoded to the task space of the robot.
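The ambient metric of Eq.~\eqref{eq:ambient_metric_R3S3} is straightforward to implement; the default values of $\zeta$ and $r$ below are illustrative only:

```python
import numpy as np

def ambient_metric(x, o, r=0.1, zeta=100.0):
    """Identity metric inflated by a Gaussian bump centered at obstacle o."""
    d2 = float(np.sum((x - o) ** 2))
    return (1.0 + zeta * np.exp(-d2 / (2.0 * r ** 2))) * np.eye(3)
```

Far from the obstacle the metric reduces to the identity, so geodesics are unaffected there.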
Under this ambient metric, the associated (reshaped) Riemannian metric of the latent space $\latent$ becomes,
\begin{equation}
\label{eq:obstacle_metric}
\Metric(\z) = \Metric_\mu^\x(\z) + \Metric_\sigma^\x(\z) + \Metric_\mu^\q(\z) + \Metric_\kappa^\q(\z) ,
\end{equation}
\begin{align}
\text{with}\quad \Metric_\mu^\x(\z) &= \Jac_{\mu_{\bm{\phi}}}(\z)^{\trsp} \Metric_{\ambient}^\x(\mu_{\bm{\phi}}(\z)) \Jac_{\mu_{\bm{\phi}}}(\z) , \nonumber \\
\Metric_\sigma^\x(\z) &= \Jac_{\sigma_{\bm{\phi}}}(\z)^{\trsp} \Metric_{\ambient}^\x(\mu_{\bm{\phi}}(\z)) \Jac_{\sigma_{\bm{\phi}}}(\z) , \nonumber \\
\Metric_\mu^\q(\z) &= \Jac_{\mu_{\bm{\psi}}}(\z)^{\trsp} \Metric_\ambient^\q(\mu_{\bm{\psi}}(\z)) \Jac_{\mu_{\bm{\psi}}}(\z) , \nonumber \\
\Metric_\kappa^\q(\z) &= \Jac_{\kappa_{\bm{\psi}}}(\z)^{\trsp} \Metric_\ambient^\q(\mu_{\bm{\psi}}(\z)) \Jac_{\kappa_{\bm{\psi}}}(\z) , \nonumber
\end{align}
where $\Metric_{\ambient}^\x$ and $\Metric_{\ambient}^\q$ represent the position and orientation components of the obstacle-avoidance metric $\Metric_\ambient$, respectively.
We emphasize that as the obstacle changes position, the VAE does not need to be re-trained, since the change only affects the ambient metric.
As stated before, obstacles can be avoided only by the end-effector under this task space setting.
Next we explain how we can extend this approach so that the robot can move away from obstacles using its whole body.
\subsubsection{Obstacle avoidance in joint space $\R^\eta$:}
Avoiding obstacles at the robot link level while performing motion skills requires considering the whole robot kinematic structure.
Classical motion planning methods model the obstacle geometry in the configuration space, and later compute an obstacle-free path via sampling methods~\citep{Elbanhawi14:MotionPlanning}.
In contrast, we take advantage of the forward kinematics layer (see Fig.~\ref{fig:architecture}-bottom), which provides us with task space poses of any point on the robot body, to compute an obstacle-avoidance ambient metric.
Similar to the task space formulation presented previously, this ambient metric is then exploited to reshape the learned metric and generate modified geodesic curves that produce collision-free robot movements.
Specifically, we need to define a collection of points on the robot body $\bm{p}_1, \ldots, \bm{p}_M $ with $\bm{p}_m \in \mathbb{R}^3$.
These points are then used to compute the ambient space metric for obstacle-avoidance purposes.
Naturally, a larger collection of points provides more robust obstacle avoidance at the cost of higher computational complexity.
Given the set of points of interest, we compute an associated ambient metric following Eq.~\eqref{eq:ambient_metric_R3S3} with $\bm{x} = \bm{p}_m$.
Similar to the task space setting, since the orientation of obstacles is not considered, the corresponding ambient space metric is an identity matrix.
Finally, we form the whole ambient metric as $\Metric_{\ambient}~=~\operatorname{blockdiag}(\begin{bmatrix}
\Metric_\ambient^{\bm{p}_1}, \Metric_\ambient^{\bm{p}_2}, \cdots, \Metric_\ambient^{\bm{p}_M}
\end{bmatrix})$, which is then used to reshape the learned metric of Eq.~\eqref{eq:Joint_space_VAE_Metric} as,
\begin{equation}
\Metric(\z) = \Metric^{\bm{\theta}}_\mu(\z) + \Metric^{\bm{\theta}}_\sigma(\z) ,
\label{eq:Joint_space_VAE_Metric_with_ambient}
\end{equation}
with,
\begin{align*}
\Metric^{\bm{\theta}}_\mu(\z) &= \left (\Jac_{f_{\textrm{FK}}}(\mu_{\bm{\alpha}}(\z)) \Jac_{\mu_{\bm{\alpha}}}(\z) \right ) ^ \trsp \Metric_{\ambient} \left (\Jac_{f_{\textrm{FK}}}(\mu_{\bm{\alpha}}(\z)) \Jac_{\mu_{\bm{\alpha}}}(\z) \right ) ,\\
\Metric^{\bm{\theta}}_\sigma(\z) &= \left (\Jac_{f_{\textrm{FK}}}(\mu_{\bm{\alpha}}(\z)) \Jac_{\sigma_{\bm{\alpha}}}(\z) \right ) ^ \trsp \Metric_{\ambient} \left (\Jac_{f_{\textrm{FK}}}(\mu_{\bm{\alpha}}(\z)) \Jac_{\sigma_{\bm{\alpha}}}(\z) \right ) .
\end{align*}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Images/Graph_Final.pdf}
\caption{Concept drawing. \emph{Left:} The latent space is discretized to form a grid graph consisting of linearly spaced nodes with edge weights matching Riemannian distances. \emph{Right:} To efficiently handle obstacles, the graph is decoded, such that obstacles can easily be mapped to latent space.}
\label{fig:graph}
\end{figure}
\subsubsection{Generating geodesics on discrete manifolds:}
Having formulated robot motion generation as the computation of geodesic curves, we need a fast and robust algorithm for computing geodesics to make our method practical.
As we work with low-dimensional latent spaces, we here propose to simply discretize them on a regular grid and use a graph-based algorithm for computing shortest paths.
Specifically, we create a uniform grid over the latent space $\latent$, and assign a weight to each edge in the graph corresponding to the Riemannian distance between neighboring nodes (see Fig.~\ref{fig:graph}).
Geodesics are then found using Dijkstra's algorithm~\citep{cormen2009introduction}.
This algorithm selects a set of graph nodes,
\begin{align}
G_\curve = \left \{\g_0, \g_1, \ldots, \g_{S-1}, \g_S \right \}, \quad \g_s \in \R^D, \nonumber
\end{align}
where $\g_0$ and $\g_S$ represent the start and the target of the geodesic in the graph, respectively.
To select these points, the shortest path on the graph is calculated by minimizing the accumulated weight (cost) of the edges connecting consecutive nodes, computed as in Eq.~\eqref{eq:length_chain_rule}.
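A minimal sketch of this shortest-path search on a 2-D latent grid (our own simplified implementation with 4-connectivity; the `edge_cost` callable stands in for the Riemannian edge lengths):

```python
import heapq

def dijkstra_grid(shape, edge_cost, start, goal):
    """Shortest path between two nodes of a regular grid graph.

    shape:     (H, W) size of the latent-space grid.
    edge_cost: callable (u, v) -> positive edge length between
               the neighboring nodes u and v.
    """
    H, W = shape
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist[u]:              # stale queue entry
            continue
        i, j = u
        for v in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= v[0] < H and 0 <= v[1] < W:
                nd = d + edge_cost(u, v)
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path = [goal]                    # backtrack from goal to start
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Uniform metric: the graph geodesic is simply a Manhattan shortest path.
path = dijkstra_grid((5, 5), lambda u, v: 1.0, (0, 0), (4, 4))
assert len(path) == 9 and path[0] == (0, 0) and path[-1] == (4, 4)
```

With the learned metric, edges crossing high-energy boundaries become expensive and are avoided, so the discrete geodesic stays inside the low-energy regions.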
To ensure a smooth trajectory, we fit a cubic spline $\omega_\lambda$ to the resulting set $G_\curve$ by minimizing the mean square error.
The spline computed in $\latent$ is finally used to generate the robot motion through the mean decoder networks: $\mu_{\bm{\theta}}$ and $\mu_{\bm{\psi}}$ or $\mu_{\bm{\alpha}}$.
The resulting trajectory can be executed on the robot arm to reproduce the required skill.
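As a simplified stand-in for the cubic-spline smoothing (we fit a single cubic polynomial per latent dimension by least squares, rather than an actual spline; names and data are ours), the post-processing step could look as follows:

```python
import numpy as np

def smooth_path(nodes, num=50, degree=3):
    """Least-squares cubic fit per latent dimension.

    nodes: (S, d) graph nodes returned by the shortest-path search.
    Returns (num, d) samples of the smoothed latent curve.
    """
    t = np.linspace(0.0, 1.0, len(nodes))
    t_fine = np.linspace(0.0, 1.0, num)
    cols = [np.polyval(np.polyfit(t, nodes[:, k], degree), t_fine)
            for k in range(nodes.shape[1])]
    return np.stack(cols, axis=1)

# A short 2-D graph path densified into a smooth curve.
path = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, -0.1], [3.0, 0.0]])
sm = smooth_path(path)
assert sm.shape == (50, 2)
```

The smoothed latent curve would then be passed through the mean decoders to obtain the robot trajectory, as described above.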
One issue with this approach is that dynamic obstacles imply that geodesic distances between nodes may change dynamically.
To avoid recomputing all edge weights every time an obstacle moves, we proceed as follows.
Since the learned manifold does not change, we can keep a decoded version of the latent graph in memory (Fig.~\ref{fig:graph}).
This way we avoid querying the decoders at run-time.
We can then find points on the decoded graph that are near obstacles and rescale their weights to take the ambient metric into account.
Once the obstacle moves, we can reset the weights of points that are now sufficiently far from the obstacle.
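A sketch of this caching scheme (class and variable names are hypothetical): the latent grid is decoded once, and only the weights of nodes near the current obstacle are inflated and later reset. For brevity we attach the penalty to nodes rather than edges; edge weights would be scaled by the penalties of their endpoints.

```python
import numpy as np

class ObstacleAwareGraph:
    """Keep a decoded copy of the latent grid in memory so obstacle
    updates never query the decoder at run-time (a sketch)."""

    def __init__(self, decoded_nodes, base_weights):
        self.decoded = decoded_nodes        # (N, 3) ambient positions
        self.base = base_weights.copy()     # (N,) obstacle-free weights
        self.weights = base_weights.copy()

    def update_obstacle(self, center, radius, penalty=1e3):
        # Inflate the weights of nodes inside the obstacle sphere and
        # reset the nodes the obstacle has moved away from.
        d = np.linalg.norm(self.decoded - center, axis=1)
        self.weights = np.where(d < radius, self.base * penalty, self.base)

# Toy usage: 4 decoded nodes on a line, obstacle over the middle two.
g = ObstacleAwareGraph(
    np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]]), np.ones(4))
g.update_obstacle(center=np.array([1.5, 0, 0]), radius=1.0)
assert list(g.weights) == [1.0, 1000.0, 1000.0, 1.0]
g.update_obstacle(center=np.array([10.0, 0, 0]), radius=1.0)  # moved away
assert list(g.weights) == [1.0, 1.0, 1.0, 1.0]
```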
\subsubsection*{Geodesics generalization:}
During our experiments, we observed that when the geodesic curves are forced to leave the learned manifold, thus crossing high-energy boundaries, the parts of the geodesic that lie outside the manifold may cause inaccurate and undesirable motions.
This is a potential issue when the given start or target points are placed outside the data manifold, or when obstacles fully block it, forcing the geodesics to leave the manifold to comply with the desired task specifications.
We believe this is a consequence of using VAEs to learn the skill manifold where the data lying outside the training data support may be arbitrarily misrepresented in the latent space $\latent$.
We hypothesize that this problem may be addressed by learning a bijective mapping between old demonstrations and new conditions, and then use this function to transform the learned manifold (e.g., by expanding or rotating) to fit another region of the space.
\subsubsection*{Obstacle avoidance:}
Regarding our obstacle avoidance approach, the ambient metric used to reshape the learned metric enforces a ``soft constraint'' rather than a ``hard constraint''.
This means that, in certain situations, the geodesic might still cross the obstacle instead of avoiding it.
In practice, however, we were unable to reproduce this scenario: the generated geodesics tended to avoid the obstacle by abandoning the manifold rather than crossing it, a behavior that can be easily detected and prevented if necessary.
To address this potential issue, the graph-based geodesic can be further exploited by removing the nodes near the obstacle from the graph instead of re-weighting the edges.
While this strategy may reduce the computational overhead and work in practice, it is not theoretically grounded.
Finally, our obstacle avoidance formulation only considered simple obstacles, but the strategy can be extended to multiple dynamic obstacles.
Instead of working with single balls, one can imagine extending the approach to complex obstacle shapes represented as point clouds.
This may increase the implementation demands in order to remain real-time, but such an extension seems reasonable.
\subsubsection*{Latent space topology:}
As explained in Section~\ref{sec:experiments}, we observed that the latent space dimensionality plays a critical role when learning joint space motions.
As reported, a $3$-dimensional latent space was necessary to calculate smooth geodesic curves for joint space tasks.
This resulted in increased computational complexity when calculating the ambient metric and geodesics in the latent space.
This implies that the issue may be exacerbated when working with high-DOF robots, which require higher-dimensional latent spaces, therefore raising the need to design more efficient geodesic computation methods.
Note that this phenomenon may be related to the latent space topology, which is here assumed to be Euclidean in accordance with the VAE Gaussian prior.
We plan to theoretically and experimentally analyze the effects of the latent space topology when learning Riemannian manifolds for robot motion generation.
\subsection{Setup description}
We consider a set of demonstrations involving a $7$-DOF Franka Emika Panda robot arm endowed with a two-finger gripper.
The demonstrations were recorded using kinesthetic teaching at a frequency of $10$~Hz.
We calculate geodesics on $100 \times 100$ and $50 \times 50 \times 50$ graphs under task space and joint space settings, respectively.
Our Python implementation runs at $100$~Hz on ordinary PC hardware, so the approach readily runs in real time.
Additionally, obstacles are simulated along with a digital twin of the robot in a simulated environment to provide real-time obstacle information (we did not integrate obstacle localization systems in our setups).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Images/Architecture_Final.pdf}
\caption{\textit{Top}: The VAE architecture under the task space setting. The blue, orange, green, and gray blocks correspond to ambient spaces, functions with trainable parameters, latent variables, and functions with fixed parameters, respectively. The arrows indicate the direction in which the data flows during the query. \textit{Bottom}: The architecture of the VAE under the joint space setting.}
\label{fig:architecture}
\end{figure}
\subsection{Architecture}
In this section, we describe the VAE architecture under both $\R^3 \times \Sph^3$ and $\R^7$ settings. The VAE architectures are implemented using PyTorch~\citep{NEURIPS2019_9015}.
\subsubsection{VAE architecture in $\R^3 \times \Sph^3$:}
This particular VAE network is designed to reconstruct end-effector poses in $\R^3~\times~\Sph^3$.
The overall architecture is depicted in Figure~\ref{fig:architecture}-\emph{top} with the different components that are required to correctly reproduce end-effector poses.
In this architecture, both the decoder and encoder networks have two hidden layers with $200$ and $100$ neuron units (depicted as beige boxes) which output the mean and variance vectors (depicted as orange boxes).
Furthermore, as previously explained, the variance and concentration parameters for position and quaternion data are estimated using RBF decoder networks with $500$ kernels calculated by $k$-means over the training dataset~\citep{Arvanitidis:LatentSO} with a predefined bandwidth.
In this particular setup, the VAE uses a $2$-dimensional latent space (depicted by the green box) to encode the $7$-dimensional input vector (depicted as blue boxes on the left).
Moreover, the VAE reconstructs the encoded input back to the ambient space (depicted as blue boxes on the right) using the decoder networks.
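The RBF variance decoder mentioned above can be sketched as follows, following the construction of Arvanitidis et al., in which the RBFs model the precision so that the predicted variance grows away from the data support (toy centers replace the $k$-means ones, and all names and values are our own assumptions):

```python
import numpy as np

def rbf_features(z, centers, bandwidth):
    """Gaussian RBF activations of latent points z for K kernels."""
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def rbf_variance(z, centers, W, bandwidth=1.0, zeta=1e-2):
    """RBFs model the *precision*; far from every kernel the precision
    decays to the small floor zeta, so the variance explodes."""
    precision = rbf_features(z, centers, bandwidth) @ np.abs(W) + zeta
    return 1.0 / precision

rng = np.random.default_rng(1)
centers = rng.standard_normal((5, 2))     # stand-ins for k-means centers
W = rng.random((5, 3)) + 0.1              # positive weights, 3-D output
near = rbf_variance(centers[:1], centers, W)            # on the support
far = rbf_variance(np.full((1, 2), 100.0), centers, W)  # far off-support
assert np.all(far > near)
```

This is precisely the mechanism that produces the high-energy boundaries around the data in the learned metric: off-support variance (and hence its latent-space Jacobian) grows, penalizing geodesics that leave the manifold.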
\begin{SCfigure*}[][b]
\centering
\begin{subfigure}{.78\linewidth}
\includegraphics[width=1\linewidth]{Images/Task_space/Grasp/Metric_Final.pdf}
\end{subfigure}
\begin{subfigure}{.7\linewidth}
\includegraphics[width=1\linewidth]{Images/Task_space/Grasp/Robot_Final.pdf}
\end{subfigure}
\caption{\emph{Left}: The yellow curves in the right cluster show geodesics starting from the same points and ending up at random targets, and the blue curves in the left cluster connect random points on the manifold. The background depicts the magnification factor derived from the learned manifold, and the semi-transparent white points show the encoded training dataset. \emph{Right}: The decoded geodesic employed on the robot.}
\label{fig:Grasping_geodesic_task_mf}
\end{SCfigure*}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Images/Task_space/Grasp/Quaternion_Trajectory_Flip_Final.pdf}
\caption{ The evolution of the individual quaternion components of the green geodesic as it passes through high-energy regions.}
\label{fig:quaternion_flip}
\end{figure}
In this setting, the VAE uses a single neural network as the decoder mean for both position and orientation, while the variance and concentration networks are implemented separately.
As a result, the learned metric is defined as,
\begin{equation}
\label{eq:obstacle_metric_exp}
\Metric(\z) = \Metric_{\mu}^{\x,\q}(\z) + \Metric_\sigma^\x(\z) + \Metric_\kappa^\q(\z) ,
\end{equation}
\begin{align}
\text{with} \quad \Metric_{\mu}^{\x,\q}(\z) &= \Jac_{\mu_{\bm{\phi}}}(\z)^{\trsp} \Metric_{\ambient\Q}(\z)\Jac_{\mu_{\bm{\phi}}}(\z) , \nonumber\\
\Metric_{\ambient\Q}(\z) &=
\begin{bmatrix}
\Metric_\ambient(\mu_{\bm{\phi}}(\z)) & \bm{0} \\
\bm{0} & \Metric_\Q(\mu_{\bm{\psi}}(\z))
\end{bmatrix} , \nonumber\\
\Metric_\sigma^\x(\z) &= \Jac_{\sigma_{\bm{\phi}}}(\z)^{\trsp} \Metric_\ambient(\mu^\x_{\bm{\phi}}(\z)) \Jac_{\sigma_{\bm{\phi}}}(\z) , \nonumber\\
\Metric_\kappa^\q(\z) &= \Jac_{\kappa_{\bm{\psi}}}(\z)^{\trsp} \Metric_\Q(\mu_{\bm{\psi}}(\z)) \Jac_{\kappa_{\bm{\psi}}}(\z) , \nonumber
\end{align}
where $\Jac_{\mu_{\bm{\phi}}} \in \R^{(D_\ambient + D_\Q) \times d}$ is the Jacobian of the joint decoder mean network (position and quaternion), and $\Jac_{\sigma_{\bm{\phi}}}~\in~\R^{D_\ambient \times d}$ and $\Jac_{\kappa_{\bm{\psi}}} \in \R^{D_\Q \times d}$ are the Jacobians of the decoder variance and concentration RBF networks, respectively.
Since the position and quaternion share the same decoder mean network, the output vector is split into two parts, accordingly.
The quaternion part of the decoder mean is projected onto $\Sph^3$ to then define the corresponding von Mises-Fisher distribution as in Eq.~\eqref{eq:vmf_density}.
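The projection onto $\Sph^3$ amounts to normalizing the raw 4-D decoder output so that it is a valid unit quaternion serving as the vMF mean direction (a minimal sketch; the small `eps` guard is our own addition):

```python
import numpy as np

def project_to_s3(q_raw, eps=1e-12):
    """Normalize the raw 4-D decoder output onto the unit sphere S^3,
    i.e., make it a valid quaternion / vMF mean direction."""
    return q_raw / (np.linalg.norm(q_raw, axis=-1, keepdims=True) + eps)

q = project_to_s3(np.array([[0.0, 2.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]]))
assert np.allclose(np.linalg.norm(q, axis=-1), 1.0)
```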
The yellow arrow in Figure~\ref{fig:architecture}-\emph{top} shows the flow of the data from ambient space back to the latent space in order to compute the pullback metric.
The ELBO parameters $\beta_1$ and $\beta_2$ in Eq.~\eqref{eq:final_elbo} are found experimentally to guarantee good reconstruction of both position and quaternion data.
It is worth pointing out that we manually provided antipodal quaternions during training, which leads to better latent space structures and reconstruction accuracy.
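Providing antipodal quaternions can be sketched as a simple dataset augmentation, where every quaternion is duplicated with its antipode $-\bm{q}$ (both encode the same rotation), so the VAE sees both hemispheres of $\Sph^3$. This is our reading of the procedure; the function name is ours.

```python
import numpy as np

def antipodal_augment(quats):
    """Augment a quaternion dataset with the antipode of every sample.

    q and -q represent the same rotation, so this doubles the dataset
    without changing the demonstrated orientations."""
    return np.concatenate([quats, -quats], axis=0)

quats = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
aug = antipodal_augment(quats)
assert aug.shape == (4, 4)
```

This augmentation is consistent with the two mirrored clusters observed later in the latent space, each representing one hemisphere of the hyper-sphere.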
The same architecture is used for all the experiments.
\subsubsection{VAE architecture in $\R^\eta$:}
Here, we describe our VAE network that reconstructs joint space movements in $\R^7$.
The overall architecture is depicted in Figure~\ref{fig:architecture}--\emph{bottom} with different components.
The input vector (depicted as blue box on the left) is a joint-value vector representing a single configuration of the robot arm on a trajectory.
This vector is fed to the encoder network with two hidden layers of $200$ and $100$ neuron units (depicted as beige boxes), which outputs the mean and variance vectors of the latent variable (depicted as orange boxes).
Moreover, the variance RBF decoder network uses $500$ kernels calculated by $k$-means over the training dataset with a predefined bandwidth.
Under the joint space setting, the VAE uses a $3$-dimensional latent space $\latent$ to encode the input vectors $\bm{\theta}$.
As usual, the decoder network reconstructs the encoded inputs back to the joint space.
However, in order to access the task space information, necessary for whole-body obstacle avoidance, our architecture is integrated with a forward kinematics layer (depicted as gray box).
Note that this layer is predefined based on the robot arm kinematic model and does not change during training.
We leverage this layer to compute task space information regarding multiple points on the robot (and not just the end-effector) given the input configuration vector $\bm{\theta}$.
To implement this component we used the Python Kinematic and Dynamic Library PyKDL~\citep{PyKDL}.
It is worth noting that as this VAE architecture uses the forward kinematics layer during training, singular kinematic configurations need special attention.
The main problem arises in the formulation of our ELBO in Eq.~\eqref{equ:final_ELBO_joint}, which uses the volume measure computed as a function of the determinant of the Jacobian of the forward kinematics function.
We can detect singularities when $\det(\Jac_{f_{\textrm{FK}}}(\mu_{\bm{\alpha}}(\bm{z}_i)))=0$, which may occur due to the random initialization of the VAE.
To circumvent this issue and guarantee that the learning process is not disrupted, we simply add a small regularization term to the kinematic Jacobian.
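One plausible form of this regularization (the paper only states that a small term is added to the kinematic Jacobian) is a Tikhonov term on $\Jac^\trsp\Jac$ inside the volume measure:

```python
import numpy as np

def regularized_volume(J_fk, eps=1e-6):
    """sqrt(det(J^T J + eps I)): the volume measure entering the ELBO,
    with a small Tikhonov term so kinematic singularities
    (det(J^T J) -> 0) cannot break the learning process."""
    JtJ = J_fk.T @ J_fk
    return np.sqrt(np.linalg.det(JtJ + eps * np.eye(JtJ.shape[0])))

# A rank-deficient Jacobian (a singular configuration): the bare
# determinant vanishes, but the regularized volume stays positive.
J_sing = np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 0.0]])
assert np.isclose(np.linalg.det(J_sing.T @ J_sing), 0.0)
assert regularized_volume(J_sing) > 0.0
```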
Finally, we evaluate the obstacle avoidance capabilities in different scenarios where the obstacles partially or entirely obstruct the solutions in joint space.
In these experiments, the ambient metric of Eq.~\eqref{eq:Joint_space_VAE_Metric_with_ambient} is formulated by considering all the joints of the robot, in addition to the end-effector.
This ensures that the robot will avoid the obstacles as long as they obstruct the solution for one or more joints or the end-effector.
In other words, we do not consider points on the robot links lying between joints, since it was not necessary in our experiments.
However, extra points on the robot links can be easily added to guarantee a more robust obstacle avoidance using the whole robot body.
\begin{SCfigure*}[][b]
\centering
\begin{subfigure}{.8\linewidth}
\includegraphics[width=1\linewidth]{Images/Task_space/Pouring/Metric_Obatcle_Avoidance_Final.pdf}
\end{subfigure}
\begin{subfigure}{.6\linewidth}
\includegraphics[width=1\linewidth]{Images/Task_space/Pouring/Robot_Obstacle_Avoidance_Final.pdf}
\end{subfigure}
\caption{\textit{Left}: Red and yellow curves depict geodesics at two different time frames performing dynamic obstacle avoidance with obstacles depicted as red and yellow circles.
These geodesics also take advantage of the different group of demonstrations to generate a hybrid solution.
The background depicts the magnification factor derived from the learned manifold, with encoded demonstration sets depicted as dot clusters in red, green, and blue. \textit{Right}: The decoded geodesics performed on the robot at different time steps. }
\label{fig:obstacle_avoidance_pouring_task_mf}
\end{SCfigure*}
\subsection{Experiments in $\R^3 \times \Sph^3$}
The first set of the experiments focuses on tasks where only the robot end-effector motion is relevant for the task, therefore the demonstrations are recorded in $\R^3 \times \Sph^3$.
\subsubsection{Reach-to-grasp:}
The first set of experiments is based on a dataset collected while an operator performs kinesthetic demonstrations of a grasping skill.
The grasping motion includes a $90$\textdegree~rotation when approaching the object for performing a side grasp (see Fig.~\ref{fig:Grasping_geodesic_task_mf}).
The demonstration trajectories start from the same end-effector pose, and they reach the same target position in task space.
To reproduce this grasping skill, we computed a geodesic in $\latent$ which is decoded to generate a continuous trajectory in $\ambient$, which closely reproduces the rotation pattern observed during demonstrations.
Figure~\ref{fig:Grasping_geodesic_task_mf}--\emph{left} depicts the magnification factor related to the learned manifold.
The semi-transparent white points correspond to the latent representation of the training set, and the yellow and blue curves are geodesics between points assigned from the start and endpoints of the demonstrations.
The top panel in Fig.~\ref{fig:Grasping_geodesic_task_mf} shows geodesics in two different scenarios: the yellow geodesics in the right cluster start from the same pose and end at different targets, while the blue geodesics in the left cluster connect random start and end points.
The results show that the method can successfully generate geodesics that respect the geometry of the manifold learned from demonstration.
As expected, the magnification factor (Fig.~\ref{fig:Grasping_geodesic_task_mf}--\emph{left}) shows that the learned manifold is composed of two similar clusters, similarly to the illustrative example in Fig.~\ref{fig:Toy_example}.
We observed that this behavior emerged due to the antipodal encoding of the quaternions, where each cluster represents one side of the hyper-sphere.
To confirm this, we generated a new geodesic, depicted in green in the top panel of Fig.~\ref{fig:Grasping_geodesic_task_mf}, which is designed to cross the cluster boundaries (its start and end points belong to different clusters).
Figure~\ref{fig:quaternion_flip} depicts the evolution of the quaternion components corresponding to the decoded green geodesic.
As this geodesic curve crosses the clusters, the sign of the end-effector quaternion flips (highlighted by the black rectangle).
It is worth emphasizing that by staying on the manifold and avoiding these boundaries, no post-processing of raw quaternion data is required during training or reconstruction.
This can be easily guaranteed by monitoring the energy along the curves, which indicates when geodesics approach a high-energy region.
For instance, the average energies of the blue and yellow geodesics are $7.50$ and $10.51$, respectively, while that of the green geodesic is $2.49 \times 10^9$.
As a result, we can simply identify and avoid these geodesics.
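The energy monitoring can be sketched as a discrete sum over consecutive curve nodes (a simplification of the continuous curve energy; threshold values are task-specific, and the numbers above suggest several orders of magnitude of separation):

```python
import numpy as np

def curve_energy(nodes, metric):
    """Discrete curve energy: sum of dz^T M(z_mid) dz over consecutive
    latent nodes, with the metric evaluated at the edge midpoints."""
    energy = 0.0
    for a, b in zip(nodes[:-1], nodes[1:]):
        dz = b - a
        energy += dz @ metric(0.5 * (a + b)) @ dz
    return energy

# With a flat (identity) metric the energy is just the sum of squared
# Euclidean steps; a curve crossing a high-energy boundary under the
# learned metric would instead accumulate the huge boundary values.
flat = lambda z: np.eye(2)
zs = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
assert np.isclose(curve_energy(zs, flat), 2.0)
```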
Figure~\ref{fig:Grasping_geodesic_task_mf}--\emph{right} shows the reconstructed geodesic executed by the robot, where the overlapping images display the time evolution of the skill.
It is easy to observe that the desired motion is successfully generated by our model.
Note how the end-effector orientation evolves on the decoded geodesic in the ambient space, showing that the $90$\textdegree~rotation is properly encoded and reproduced using our approach.
\subsubsection{Pouring:}
To evaluate our model on a more complicated scenario, we collected a dataset of pouring task demonstrations.
The task involves grasping $3$ bottles from $3$ different positions and then pouring into cups placed at $3$ different locations.
The demonstrated trajectories cross each other, therefore providing a multiple-solution setting.
As a result, with $3$ sets of demonstrations, all $9$ permutations of grasping any bottle from the table and then pouring into any cup are feasible, even though only a small subset of them is demonstrated.
\begin{SCfigure*}[][!b]
\centering
\begin{subfigure}{.8\linewidth}
\includegraphics[width=1\linewidth]{Images/Task_space/Pouring/Metric_Pouring_Multiple_Final.pdf}
\end{subfigure}
\begin{subfigure}{.6\linewidth}
\includegraphics[width=1\linewidth]{Images/Task_space/Pouring/Robot_Pouring_Multiple_Final.pdf}
\end{subfigure}
\caption{\textit{Left}: The geodesic shown as the yellow curve combines the blue, green, and white demonstration groups to form a hybrid solution. This experiment uses three geodesics to pour all the cups using a single bottle. \textit{Right}: The decoded geodesics performed on the robot depicted by superimposing images from different time frames. The transparent robot arms depict the trace of the motion trajectory, which begins with pouring the right cup, the middle, and lastly the left cup. }
\label{fig:Multiple_solution_task}
\end{SCfigure*}
\begin{SCfigure*}[][t]
\centering
\begin{subfigure}{.8\linewidth}
\includegraphics[width=1\linewidth]{Images/2D_3D_Latent_Space/2D_Metric_Final.pdf}
\end{subfigure}
\begin{subfigure}{.7\linewidth}
\includegraphics[width=1\linewidth]{Images/2D_3D_Latent_Space/3D_Metric_Final.pdf}
\end{subfigure}
\caption{\textit{Left}: The geodesic calculated in the $2$--dimensional latent space, depicted as the yellow curve, reveals several unnecessary transitions between different solutions when connecting two points of the same demonstration. \textit{Right}: The geodesic computed in the $3$--dimensional latent space, shown as the yellow curve, does not switch between clusters. }
\label{fig:geodesic_switch}
\end{SCfigure*}
The first feature we want to test in this setting is the obstacle avoidance capabilities via metric reshaping.
To do so, we compute the ambient space metric in Eq.~\eqref{eq:ambient_metric_R3S3} based on a spherical obstacle that partially blocks the low-energy regions that the geodesics exploit to find a solution.
This way the geodesics are forced to either use the low-energy regions that the individual demonstrations provide or improvise and find a hybrid novel path based on a subset of demonstrations.
Figure~\ref{fig:obstacle_avoidance_pouring_task_mf}--\emph{left} shows the geodesic performing obstacle avoidance around a moving obstacle while following the geometry of the manifold.
Two time instances of the obstacle are depicted as red and yellow circles in the latent space for illustration purposes.
The red and yellow curves represent geodesics avoiding the red and yellow obstacles, respectively.
These curves correspond to one time frame of the adapted geodesic, showing how our method can deal with dynamic obstacles. Fig.~\ref{fig:obstacle_avoidance_pouring_task_mf}--\emph{right} shows the decoded geodesics performed on the robot, where transparent robot arms show the temporal evolution of the skill.
In order to correctly perform obstacle avoidance, the parameter $\bm{x}$ in the ambient space metric of Eq.~\eqref{eq:ambient_metric_R3S3} represents the position of the bottle when grasped, and the obstacle radius $r$ is modified to account for the bottle. To do so, we approximate the bottle geometry by a sphere and add its radius $r_{\textrm{bot}}$ to the obstacle radius $r_{\textrm{obs}}$, i.e., $r = r_{\textrm{obs}} + r_{\textrm{bot}}$.
This prevents the bottle from colliding with the yellow and red spheres that represent the obstacle.
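The radius inflation can be sketched as follows; the exact ambient metric of Eq.~\eqref{eq:ambient_metric_R3S3} is not reproduced here, only a scalar penalty weight that could scale it (the binary penalty and its magnitude are our own simplification):

```python
import numpy as np

def inflated_ambient_scale(x, obs_center, r_obs, r_bot, penalty=1e3):
    """Scalar ambient-metric weight near a spherical obstacle, with the
    carried bottle approximated by a sphere of radius r_bot."""
    r = r_obs + r_bot                       # effective obstacle radius
    d = np.linalg.norm(x - obs_center)
    return penalty if d < r else 1.0

# A point that clears the obstacle itself but not the obstacle-plus-
# bottle margin is still penalized.
c = np.zeros(3)
assert inflated_ambient_scale(np.array([0.12, 0, 0]), c, 0.1, 0.05) == 1e3
assert inflated_ambient_scale(np.array([0.20, 0, 0]), c, 0.1, 0.05) == 1.0
```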
Figure~\ref{fig:Multiple_solution_task}--\emph{left} shows the ability to leverage multiple-solution tasks to generate novel movements emerging as combinations of the observed demonstrations.
Specifically, we generate a combination of three geodesics that can be used to pour all the cups with a single bottle.
The geodesic (depicted as yellow curve) starts from an initial point (depicted in green) in the white-demonstrations group at the top, grasps the bottle, pours the first cup using the green demonstration group, then goes to the second cup using the blue-demonstrations group, and finally pours the third cup using the white-demonstration group again. Figure~\ref{fig:Multiple_solution_task}--\emph{right} uses the same visualization technique to show the decoded geodesics employed on the robot by superimposing images from different time frames.
The task begins by pouring the right-side cup, then the center, and lastly the left-side cup.
As observed, our approach generates the geodesics that properly switch among the demonstrated solutions to reproduce novel movements in ambient space and successfully pour all the cups in the given order.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Images/2D_3D_Latent_Space/Illustration_Final.pdf}
\caption{Concept drawing. \textit{Left}: The hollow tube represents the metric, with low-energy regions inside surrounded by high-energy boundaries. The geodesic, depicted as a blue curve, travels through the low-energy regions. \textit{Right}: The hollow tube is partially blocked by a solid high-energy region corresponding to the ambient space metric representing the obstacle. The geodesic, depicted as a blue curve, successfully travels through the low-energy regions while avoiding the obstacle. }
\label{fig:3D_metric_concept_drawing}
\end{figure}
\begin{figure*}
\centering
\begin{subfigure}{.33\linewidth}
\centering
\includegraphics[width = \linewidth]{Images/Joint_Space/Grasp_Separated/Metric_1_Final.pdf}
\end{subfigure}%
\begin{subfigure}{.33\linewidth}
\centering
\includegraphics[width = \linewidth]{Images/Joint_Space/Grasp_Separated/Metric_2_Final.pdf}
\end{subfigure}
\begin{subfigure}{.33\linewidth}
\centering
\includegraphics[width = \linewidth]{Images/Joint_Space/Grasp_Separated/Metric_3_Final.pdf}
\end{subfigure}
\caption{\emph{Left}: The magnification factor in the $3$--dimensional latent space contains hollow tubes representing the learned Riemannian metric. These hollow tubes enclose low-energy regions surrounded by high-energy boundaries. Several geodesics are calculated and visualized in blue. The manifold's low-energy areas are rendered transparent. \emph{Middle}: The same magnification factor when an obstacle is introduced such that all viable solutions in the ambient space are blocked. Because the obstacle introduces a high-energy zone (in red) that passes through all of these hollow tubes, none of the geodesics is feasible, and therefore no geodesic is shown in this plot. \emph{Right}: The same magnification factor when an obstacle only partially obstructs the solutions in the ambient space. Several feasible geodesics (blue curves) remain on the left side of the panel, outside the obstacle region. }
\label{fig:Joint_Space_90_Grasp_Separated}
\end{figure*}
\begin{SCfigure*}[][h]
\centering
\begin{subfigure}{.66\linewidth}
\includegraphics[width=1\linewidth]{Images/Joint_Space/Grasp_Separated/Experiment_3_Final.pdf}
\end{subfigure}
\begin{subfigure}{0.7\linewidth}
\includegraphics[width=1\linewidth]{Images/Joint_Space/Grasp_Separated/Experiment_4_Final.pdf}
\end{subfigure}
\caption{\textit{Left}: The decoded geodesic employed on the robot arm when no obstacle is present in the environment. \textit{Right}: The decoded geodesic employed on the robot arm with an obstacle partially obstructing the solutions. The obstacle is depicted as the green sphere. }
\label{fig:Joint_Space_90_Dgrasp_Separated_robot}
\end{SCfigure*}
\subsection{Experiments in the joint space $\mathbb{R}^\eta$}
In this section, we focus on tasks where joint-level motion patterns are relevant, and therefore the human teacher provides kinesthetic demonstrations in $\R^\eta$, where $\eta$ is the number of DOF of the robot.
When learning Riemannian metrics in this setting, we initially designed a $2$-dimensional latent space for our VAE, which proved to be insufficient to encode the demonstrated motion patterns.
Specifically, we analyze the capacity of the latent space to encode the skill manifold experimentally.
To do so, we investigated the switching behavior of geodesics in $\latent$.
Overlapping low-energy regions in $\latent$, each representing a different demonstration set, may lead to unnecessary and frequent switches between these solutions when computing a geodesic. Such a switch is unavoidable whenever the total energy of the geodesic that switches between two demonstration sets is lower than the total energy of the geodesic that does not.
However, switching behaviors may be avoided by having high-energy regions among demonstration clusters in $\latent$.
Furthermore, specifically under the joint space setting, the frequent switching in geodesics may lead to jittery motion in task space when applied to the robot.
To illustrate this issue, we designed a simple experiment in which a robotic arm follows a circular pattern with its end-effector.
The start and target configuration of the geodesic are selected from the same demonstrated trajectory, and the objective is to evaluate if the geodesic stays on the low-energy regions corresponding to the same demonstrated trajectory.
Fig.~\ref{fig:geodesic_switch}--\emph{left} shows a geodesic curve computed in the $2$-dimensional latent space depicted as the yellow curve.
This geodesic exhibits unnecessary switches among different solutions (i.e., the circular white demonstrations).
Therefore, when the decoded geodesic is deployed on the robot, it results in jerky movements and undesirable back-and-forth motions.
To solve this issue, we evaluated the same experiment using a $3$-dimensional latent space. Figure~\ref{fig:geodesic_switch}--\emph{right} shows the magnification factor of the metric learned using the same training data but in a $3$--dimensional latent space.
This magnification factor shows several torus-like clusters, representing separate demonstration groups instead of collapsing them into a plane, as in the $2$-dimensional case.
Moreover, the resulting geodesic (depicted as the yellow curve) does not switch among solutions, which provides stable and smooth robot end-effector movements when decoded.
To provide further details on the learned manifold in the $3$-dimensional latent space, we create an illustration shown in Fig.~\ref{fig:3D_metric_concept_drawing}, where the learned metric is shown as yellow hollow tubes.
Their inner part encodes low-energy regions which are surrounded by high-energy boundaries.
Figure~\ref{fig:3D_metric_concept_drawing} displays a horizontal cut to show the inner part of the learned metric.
To illustrate the obstacles, the right hollow tube is partially blocked by a red sphere representing the ambient space metric in the latent space $\latent$, which is a solid high-energy region.
Additionally, the right-side plot depicts a geodesic curve traveling successfully along both tubes, showcasing a collision-free trajectory.
\subsubsection{Reach-to-grasp:}
Similar to the task space setting, we used the reach-to-grasp task to evaluate our approach under the joint space setting.
In this case, the demonstrations are quite similar at the end-effector level but differ in joint space, as we exploited the robot's kinematic redundancy to provide different joint trajectories.
Figure~\ref{fig:Joint_Space_90_Grasp_Separated}--\emph{left} shows the magnification factor in a $3$-dimensional latent space, where it can be seen that the learned metric corresponds to several connected and separated hollow tubes.
As mentioned previously, these hollow tubes have low-energy inner regions surrounded by high-energy boundaries, analogous to the learned metric in a $2$-dimensional latent space.
As shown in the figure, the generated geodesics stay inside the tubes and avoid crossing the boundaries.
Figure~\ref{fig:Joint_Space_90_Dgrasp_Separated_robot}--\emph{left} shows the robot executing the decoded geodesic by applying the joint values directly on the robot using a joint position controller.
We can observe that the decoded geodesic is able to generate the demonstrated grasp motion with $90$\textdegree~rotation during the approaching part.
Note that the start and end points of the geodesics are extracted from the demonstrations.
\begin{SCfigure*}[][h]
\centering
\begin{subfigure}{.66\linewidth}
\includegraphics[width=1\linewidth]{Images/Joint_Space/Grasp_Combined/Metric_Final.pdf}
\end{subfigure}
\begin{subfigure}{.7\linewidth}
\includegraphics[width=1\linewidth]{Images/Joint_Space/Grasp_Combined/Experiment_1_Final.pdf}
\end{subfigure}
\caption{\textit{Left}: The geodesic depicted as a blue curve avoids the obstacle by traveling through different demonstrations. \textit{Right}: The decoded geodesic employed on the robot arm. Images from different time steps are superimposed to show the trace of the motion.
The green sphere represents the obstacle in the ambient space. }
\label{fig:Joint_Space_90_Grasp_Connected}
\end{SCfigure*}
\begin{SCfigure*}[][h]
\centering
\begin{subfigure}{.66\linewidth}
\includegraphics[width=1\linewidth]{Images/Joint_Space/Pouring/Metric_Final.pdf}
\end{subfigure}
\begin{subfigure}{.7\linewidth}
\includegraphics[width=1\linewidth]{Images/Joint_Space/Pouring/Robot_Final.pdf}
\end{subfigure}
\caption{\emph{Left}: The magnification factor in $3$--dimensional latent space representing the learned Riemannian metric. This metric contains low-energy regions surrounded by high energy boundaries depicted as red hollow-tubes. The geodesic curve is depicted in black. The outline color of each tube (green, red, or blue) indicates which demonstration group it belongs to. \emph{Right}: The decoded geodesic employed on the robot. }
\label{fig:Joint_Space_Pouring_Separated}
\end{SCfigure*}
Figure~\ref{fig:Joint_Space_90_Grasp_Separated}--\emph{middle} displays the magnification factor where an obstacle is placed in such a way that all possible solutions in joint space are blocked (e.g. obstacle placed on the common target of all the demonstrations).
Note that the obstacle introduces a high-energy zone (depicted in red) that passes through all of the hollow tubes representing the learned manifold; therefore, none of the geodesics is feasible.
Additionally, Fig.~\ref{fig:Joint_Space_90_Grasp_Separated}--\emph{right} illustrates the same learned metric but reshaped using a different ambient space metric.
We can now see that the obstacle only partially obstructs the possible solutions, and therefore a few geodesics (depicted as blue curves) are still able to generate obstacle-free movements. Figure~\ref{fig:Joint_Space_90_Dgrasp_Separated_robot}--\emph{right} shows the robot executing the decoded geodesic using a joint position controller while avoiding the obstacle.
We designed another experiment to showcase how the multiple-solution capabilities can be leveraged to generate collision-free movements.
If the different demonstrated joint space trajectories sufficiently overlap, the learned manifold will be characterized by several overlapping low-energy regions (i.e. hollow tubes), which geodesic curves can travel through.
As a result, if the obstacles partially block the learned manifold, the geodesic may still travel among solutions to generate new movements out of combinations of the provided demonstrations.
To show this behavior, we used a different demonstration dataset that contains several overlapping solutions starting from the same joint configuration.
Figure~\ref{fig:Joint_Space_90_Grasp_Connected}--\emph{left} shows the magnification factor of the learned Riemannian metric in the $3$--dimensional latent space.
The learned manifold and the ambient metric can be distinguished visually based on their energy values.
The ambient metric, which represents the obstacle regions (depicted in red), encodes high-energy zones.
The learned manifold (depicted in yellow) is characterized by several entangled hollow tubes.
The geodesic, shown as the blue curve, begins from the common start configuration on the left and successfully navigates around the obstacle to reach the target on the right side.
Figure~\ref{fig:Joint_Space_90_Grasp_Connected}--\emph{right} displays the decoded geodesic deployed directly on the robot arm using a joint position controller.
Note how the robot arm avoids the obstacle by carefully maneuvering around it while successfully performing the grasp skill.
\subsubsection{Pouring:}
To evaluate the method in a more complex scenario, we also performed the pouring task under the joint space setting.
Similar to the task space experiment, the demonstrations also overlap in joint space.
For this specific experiment, we mainly focus on evaluating the multiple-solution trajectories.
Figure~\ref{fig:Joint_Space_Pouring_Separated}--\emph{left} shows the magnification factor in $3$-dimensional latent space showing entangled hollow tubes, which represent the learned Riemannian metric.
Each hollow tube encodes low-energy regions surrounded by high-energy boundaries, the latter outlined by green, red, and blue solid lines.
Each color represents one group of demonstrations.
Since the magnification factor learned with the original dataset of $5$ demonstrations per group (corresponding to the bottle initial position) is very hard to visualize, we opted for simplicity and used a subset of $2$ demonstrations per bottle.
As shown, the geodesic (depicted as the black curve) starts from a point in the second demonstration group (green border) and switches to the first group (red border) to reach the target located in the third group (blue boundary).
Figure~\ref{fig:Joint_Space_Pouring_Separated}--\emph{right} shows that the decoded geodesic employed on the robot successfully performs the pouring task.
It is also evident that the geodesic uses a multiple-solution strategy to reproduce a new trajectory that was not explicitly demonstrated to the robot in the training phase.
\subsection{Learning from Demonstration (LfD)}
LfD is a robot programming technique that leverages human demonstrations, recorded via kinesthetic teaching or teleoperation, to learn a model of a task~\citep{Ravichandar20:LfD}.
End-effector positions and orientations, joint configurations, linear or angular velocities, and accelerations are all examples of data that can be used in LfD.
The methods for dealing with motion dynamics are outside the scope of this study; instead, we focus on techniques for learning and synthesizing robot skills built on joint and task space trajectories.
We identify three key lines of research and their particular features (e.g. obstacle avoidance) for robot motion learning.
The first group focuses on motion dynamics learning~\citep{Ijspeert13:dmp, MixtureOfAttractors2018:Manschitz}, where demonstrations are considered as solutions to specific dynamical systems.
These techniques are well behaved when confronted with changes in the environment due to their autonomous-systems formulation.
The second set of approaches builds on probabilistic methods that exploit data variability and model uncertainty~\citep{Huang19:KMP, calinon2016:tutorial, paraschos2018:ProMP}.
Their probabilistic formulation allows robots to execute the skill using a large diversity of trajectories sampled from the skill model.
The last category includes approaches that use neural networks for increasing generalization in robot motion learning~\citep{Yunus2019:CNMPs, GoalConditionVAE2019:Osa}.
Furthermore, there exist methods that combine dynamical systems and neural networks~\citep{Bahl20:NDPs, EuclidFlow2020:Rana}, or dynamical systems and probabilistic models~\citep{Ugur2020:CompliantDMP, Khansari2011:StabNonDynSys}.
All of the aforementioned methods belong to the category of movement primitives (MPs), which is an alternative approach to classic motion planning~\citep{Elbanhawi14:MotionPlanning} for robot motion generation.
In contrast, we propose a reactive motion generation technique that lies on a middle ground between movement primitives and motion planning.
Similar to motion primitives, we exploit human demonstrations to learn a model of a skill with the assumption that these demonstrations belong to a smooth surface characterized as a Riemannian manifold.
Additionally, our method, like motion planners, derives a time-independent reference trajectory generated from geodesic curves, which can be locally deformed to avoid unseen obstacles.
Specifically, our method leverages a neural network (VAE) to learn a Riemannian metric that incorporates the network uncertainty.
This metric allows us to generate motions that resemble the demonstrations via geodesics.
The geodesics are then decoded and used as reference trajectories, so that the robot reproduces motions that resemble the demonstrations.
Complex robot movements may involve sophisticated full-pose end-effector trajectories, making it imperative to have a learning framework capable of encoding full-pose motion patterns.
The main challenge is then how to properly encode and reproduce orientation movements.
Although most LfD approaches have overlooked this problem~\citep{paraschos2018:ProMP, Yunus2019:CNMPs, calinon2016:tutorial, Huang19:KMP}, recent works have addressed it using probabilistic models~\citep{OrientationProMP2021:Rozo, Zeestraten17:riemannian}.
In our previous work~\citep{GeodesicMotionSkill2021:BeikMohammadi}, we proposed a VAE architecture capable of encoding full-pose trajectories, which is here exploited for learning a variety of real robotic tasks.
Obstacle avoidance is another feature that a reactive motion generation should offer.
While several approaches rely solely on obstacles information given before learning~\citep{ObstacleAvoidanceRL1992:Prescott, VisionReactivePolicies2020:Aljalbout}, these are ineffective in dynamic environments.
Other techniques exploit via-points~\citep{paraschos2018:ProMP, Yunus2019:CNMPs, Huang19:KMP} to tackle this problem in an indirect fashion, albeit without requiring retraining of the skill model.
A different perspective on the obstacle avoidance problem is to view obstacles as costs in an optimization framework that seeks to generate optimal and obstacle-free trajectories~\citep{Urainetal2021:ComposableEnergy, RMP}.
Meanwhile, our method tackles the obstacle avoidance problem from a metric reshaping viewpoint.
We design obstacle-aware ambient space metrics to reshape the learned Riemannian metric.
The combination of these two metrics yields a new metric that is exploited to generate trajectories that simultaneously follow the demonstrations while also avoiding obstacles.
Note that the ambient space metric always represents a notion of distance to the obstacles in task space.
It is worth mentioning that the choice of demonstration ambient space defines how this metric can be designed to avoid obstacles.
For example, when using joint space demonstrations, the ambient space metric can incorporate information about the distance from any point of the robot to the obstacles, resulting in a multiple-limb obstacle avoidance capability.
On the contrary, when using task space end-effector demonstrations, this metric only provides obstacle avoidance at the level of the robot end-effector.
We believe our obstacle avoidance technique is conceptually comparable to CHOMP~\citep{Ratliff2009:chomp}, since it also uses a simplified geometric description of the robot to construct a uniform grid in task space to determine whether a trajectory may cause a collision with an obstacle.
Human demonstrations can naturally show various solutions to a single motion skill~\citep{Rozo11:multipleSol, Yunus2019:CNMPs}, which is typically addressed using hierarchical techniques~\citep{Konidaris12:SkillTrees, Ewertonetal2015:multcollab}.
Our approach enables the encoding of multiple-solution tasks on the learned Riemannian manifold, which is then exploited to replicate the demonstrated skill and generate novel hybrid solutions based on a synergy of a (sub)set of the demonstrations.
These hybrid solutions naturally emerge from our geodesic motion generator on the learned Riemannian manifold.
Note that the aforementioned robot motion learning techniques only provide trajectories that strictly follow the demonstrated patterns without the ability to generate hybrid solutions.
\subsection{Variational auto-encoders (VAEs)}
VAEs are generative models~\citep{kingma:autoencoding} that learn and reconstruct data by encapsulating their density into a lower-dimensional latent space $\latent$.
Specifically, VAEs encode the training data density $\Prob(\x)$ in an ambient space $\ambient$ through a low-dimensional latent variable $\z$.
For simplicity, we consider the generative process of a Gaussian VAE defined as,
\begin{align}
\Prob(\z) &= \mathcal{N}\left(\z | \bm{0}, \I_d\right), & \z \in \latent ; \\
\Prob_{\bm{\phi}}(\x|\z) &= \mathcal{N}\left(\x|\mu_{\bm{\phi}}(\z), \I_D \sigma_{\bm{\phi}}^2(\z)\right), & \x \in \ambient ,
\label{eq:vae_gen}
\end{align}
where $\mu_{\bm{\phi}} : \latent \rightarrow \ambient$ and $\sigma_{\bm{\phi}} : \latent \rightarrow \R^D_+$ are deep neural networks with parameters $\bm{\phi}$ estimating the mean and the variance of the likelihood $\Prob_{\bm{\phi}}(\x|\z)$. Since exact inference for this generative process is in general intractable, a variational approximation of the evidence (marginal likelihood) is used,
\begin{align}
\begin{split}
\mathcal{L}_{\mathrm{ELBO}}
&= \mathbb{E}_{q_{\bm{\xi }}(\z|\x)}\left[\log(\Prob_{\bm{\phi}}(\x|\z))\right] \\
&- \mathrm{KL}\left(q_{\bm{\xi }}(\z|\x)||\Prob(\z)\right) ,
\label{eq:elbo}
\end{split}
\end{align}
where $q_{\bm{\xi}}(\z|\x) = \mathcal{N}(\z | \mu_{\bm{\xi}}(\x), \I_d\sigma^2_{\bm{\xi}}(\x))$ approximates the posterior distribution $p(\z | \x)$ by two deep neural networks $\mu_{\bm{\xi}} : \ambient \rightarrow \latent$ and $\sigma_{\bm{\xi}}: \ambient \rightarrow \R^d_+$.
The approximate posterior $q_{\bm{\xi}}(\z | \x)$ is called the \emph{inference} or \emph{encoder} distribution, while the generative distribution $\Prob_{\bm{\phi}}(\x|\z)$ is known as the \emph{generator} or \emph{decoder}.
In the next subsection, we use a VAE to learn a skill-specific Riemannian manifold from human demonstrations.
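Both distributions in the KL term of Eq.~\eqref{eq:elbo} are Gaussian, so the divergence admits a closed form. The sketch below (the encoder outputs $m$ and $s$ are illustrative values, not outputs of a trained network) cross-checks the closed form against a Monte Carlo estimate built from reparameterised samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative encoder outputs for a single input x (not from a trained model).
m = np.array([0.5, -1.0])   # mean of q(z|x)
s = np.array([0.8, 1.2])    # standard deviation of q(z|x)

# Closed-form KL(q(z|x) || p(z)) between N(m, diag(s^2)) and N(0, I):
# the regulariser subtracted in the ELBO.
kl_closed = 0.5 * np.sum(s**2 + m**2 - 1.0 - np.log(s**2))

# Monte Carlo estimate of the same quantity from reparameterised samples.
z = m + s * rng.standard_normal((200_000, 2))
log_q = -0.5 * np.sum(((z - m) / s)**2 + np.log(2 * np.pi * s**2), axis=1)
log_p = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=1)
kl_mc = float(np.mean(log_q - log_p))
```

The agreement of the two estimates is what permits optimising the ELBO with the analytic KL term plus a sampled reconstruction term.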
\begin{SCfigure*}
\includegraphics[width=0.7\textwidth]{Images/Noisy_Manifold_Final.pdf}
\caption{In a Gaussian VAE, samples are generated by a random projection of the manifold jointly spanned by $\mu$ and $\sigma$.}
\label{fig:Noisy_manifold}
\end{SCfigure*}
\subsection{Riemannian Manifolds}
In differential geometry, a Riemannian manifold is a curved $d$-dimensional continuous and differentiable surface equipped with a Riemannian metric~\citep{Lee18Riemann}.
This metric is a family of smoothly varying positive-definite inner products acting on the tangent spaces of the manifold, which locally resembles the Euclidean space $\R^d$.
In this paper, we use the mapping function $f$ to represent a manifold $\Manifold$ immersed in the ambient space $\ambient$ defined as,
\begin{align}
\Manifold = f(\latent) \quad \mathrm{with} \quad f: \latent \rightarrow \ambient,
\label{eq:mapping}
\end{align}
where $\latent$ and $\ambient$ are open subsets of Euclidean spaces with $\dim{\latent} < \dim{\ambient}$.
An important operation on Riemannian manifolds is the computation of the length of a smooth curve $\curve: [0, 1] \rightarrow \latent$, defined as,
\begin{align}
\Length_{\curve} &= \int_0^1 \| \partial_t f(\curve(t)) \| \mathrm{d}t .
\label{eq:length}
\end{align}
This length can be reformulated using the chain-rule as,
\begin{align}
\Length_{\curve} &= \int_0^1 \sqrt{\dot{\curve}(t)^{\trsp} \Metric(\curve(t)) \dot{\curve}(t)} \mathrm{d}t,
\label{eq:length_chain_rule}
\end{align}
where $\Metric$ and $\dot{\curve}(t) = \partial_t \curve(t)$ are the Riemannian metric and the curve derivative, respectively.
Note that the Riemannian metric corresponds to,
\begin{equation}
\Metric(\z) = \Jac_f(\z)^{\trsp} \Jac_f(\z) .
\label{eq:RiemMetric}
\end{equation}
Here, $\Jac_f(\z)$ is the Jacobian of the mapping function $f$.
This metric can be used to measure local distances in $\latent$.
The shortest path on the manifold, also known as the geodesic, can be computed given the curve length in Eq.~\eqref{eq:length_chain_rule}.
Geodesics on Riemannian manifolds can be seen as a generalization of straight lines in Euclidean space.
However, geodesics might not be unique, e.g. great circles on the sphere manifold.
Later, we demonstrate that computing geodesics on a learned Riemannian manifold can be leveraged to recover demonstrated motion patterns. It should be noted that geodesics have recently been utilized as solutions of trajectory optimizers for quadrotor control~\citep{ScannellTrajectory2021}.
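As a concrete numerical check of Eqs.~\eqref{eq:length} and \eqref{eq:RiemMetric}, the sketch below approximates the pullback metric by finite differences; the polar-coordinate mapping is an illustrative stand-in for $f$, for which the metric $\operatorname{diag}(1, r^2)$ and arc lengths are known in closed form:

```python
import numpy as np

def pullback_metric(f, z, h=1e-6):
    """G(z) = J_f(z)^T J_f(z) via central finite differences."""
    z = np.asarray(z, dtype=float)
    cols = []
    for j in range(z.size):
        e = np.zeros_like(z); e[j] = h
        cols.append((f(z + e) - f(z - e)) / (2 * h))
    J = np.stack(cols, axis=1)
    return J.T @ J

def curve_length(f, curve, n=2000):
    """Polygonal approximation of the curve length in the ambient space."""
    pts = np.array([f(curve(t)) for t in np.linspace(0.0, 1.0, n + 1)])
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

# Illustrative mapping: polar coordinates z = (r, theta) into the plane.
f = lambda z: np.array([z[0] * np.cos(z[1]), z[0] * np.sin(z[1])])

G = pullback_metric(f, [2.0, 0.3])   # expect diag(1, r^2) = diag(1, 4)
L = curve_length(f, lambda t: np.array([2.0, t * np.pi / 2]))  # quarter circle, r = 2
```

The recovered metric and the length $L \approx \pi$ match the closed-form values, which is the sanity check one would run before trusting a learned, network-defined $f$.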
\subsection{Learning Riemannian Manifolds with VAEs}
\label{sec:vae_manifold}
In this subsection, we examine the link between VAEs and Riemannian geometry.
To begin, we first define the VAE generative process of Eq.~\eqref{eq:vae_gen} as a stochastic function,
\begin{equation}
f_{\bm{\phi}}(\z) = \mu_{\bm{\phi}}(\z) + \operatorname{diag}(\epsilon)\sigma_{\bm{\phi}}(\z), \quad \epsilon \sim \mathcal{N}(\bm{0}, \I_D) ,
\label{eq:StochasticF}
\end{equation}
where $\mu_{\bm{\phi}}(\z)$ and $\sigma_{\bm{\phi}}(\z)$ are the decoder mean and variance neural networks, respectively.
Also, $\operatorname{diag}(\cdot)$ denotes a diagonal matrix built from its vector argument, and $\I_D$ is the $D \times D$ identity matrix.
The above formulation is referred to as the reparameterization trick~\citep{kingma:autoencoding}, which can be interpreted as samples generated out of a random projection of a manifold jointly spanned by $\mu_{\bm{\phi}}$ and $\sigma_{\bm{\phi}}$, as depicted in Fig.~\ref{fig:Noisy_manifold}.
Riemannian manifolds may arise from mapping functions between two spaces as in Eq.~\eqref{eq:mapping}.
As a result, Eq.~\eqref{eq:StochasticF} may be seen as a stochastic version of the mapping function of Eq.~\eqref{eq:mapping}, which in turn defines a Riemannian manifold~\citep{Hauberg:OnlyBS}.
We can now write the stochastic form of the Riemannian metric of Eq.~\eqref{eq:RiemMetric}.
To do so, we first recast the stochastic function Eq.~\eqref{eq:StochasticF} as follows~\citep{eklund:arxiv:2019},
\begin{align}
f_{\bm{\phi}}(\z) &= \begin{pmatrix} \I_D, & \operatorname{diag}(\epsilon) \end{pmatrix} \begin{pmatrix} \mu_{\bm{\phi}}(\z) \\ \sigma_{\bm{\phi}}(\z) \end{pmatrix}
= \bm{P}\;g(\z) ,
\end{align}
where $\bm{P}$ is a random matrix, and $g(\z)$ is the concatenation of $\mu_{\bm{\phi}}(\z)$ and $\sigma_{\bm{\phi}}(\z)$. Therefore, the VAE can be seen as a random projection of a deterministic manifold spanned by $g$.
Given that this stochastic mapping function is defined by a combination of mean $\mu_{\bm{\phi}}(\z)$ and variance $\sigma_{\bm{\phi}}(\z)$, the metric is likewise based on a mixture of both as follows,
\begin{equation}
\bar{\Metric}(\z) = \Jac_{\mu_{\bm{\phi}}}(\z)^{\trsp} \Jac_{\mu_{\bm{\phi}}}(\z) + \Jac_{\sigma_{\bm{\phi}}}(\z)^{\trsp} \Jac_{\sigma_{\bm{\phi}}}(\z) ,
\label{eq:VAE_metric}
\end{equation}
where $\Jac_{\mu_{\bm{\phi}}}(\z)$ and $\Jac_{\sigma_{\bm{\phi}}}(\z)$ are the Jacobians of $\mu_{\bm{\phi}}(\z)$ and $\sigma_{\bm{\phi}}(\z)$, respectively, evaluated at $\z \in \latent$, with $\latent$ being the VAE low-dimensional latent space.
Notably, the decoder variance network $\sigma_{\bm{\phi}}(\z)$ approximates the data uncertainty, which plays a critical role in the metric Eq.~\eqref{eq:VAE_metric} by associating low values to regions with a high number of data points and vice-versa.
Indeed, omitting this element results in a nearly flat manifold geometry~\citep{Hauberg:OnlyBS}.
For example, \citet{Shao:TheRiemannianGeometry} suggest a similar technique that learns Riemannian manifolds using generative models that do not model data uncertainty, resulting in flat manifolds and often straight lines as geodesics.
As explained earlier, the Riemannian metric is required to compute the geodesics, which conform to the geometry of the training data~\citep{Arvanitidis:LatentSO}.
In summary, we exploit the link between VAEs and Riemannian metrics for robot motion generation.
Specifically, we learn a Riemannian metric that describes the motion patterns observed during the demonstrations.
These demonstrations may take place in two different ambient spaces, task space or joint space, which determine the VAE architecture needed to learn the manifold of interest.
The geodesic curves generated on the learned manifold produce robot movements that mimic the given demonstrations in the ambient space.
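To make Eq.~\eqref{eq:VAE_metric} concrete, the sketch below uses hand-picked stand-ins for the decoder networks (an assumption for illustration: $\sigma_{\bm{\phi}}$ is flat near $\z=0$, mimicking a data-dense region, and grows away from it). The variance term then dominates the metric precisely where the model is uncertain:

```python
import numpy as np

# Toy decoder networks for a 1-D latent space and 2-D ambient space.
# These closed forms are illustrative: sigma is small near the "data
# region" z ~ 0 and grows away from it, mimicking model uncertainty.
mu    = lambda z: np.array([z, np.sin(z)])
sigma = lambda z: np.array([0.05 + z**2, 0.05 + z**2])

def jac(f, z, h=1e-6):
    """Central finite-difference Jacobian for a scalar latent z."""
    return (f(z + h) - f(z - h)) / (2 * h)

def vae_metric(z):
    Jm, Js = jac(mu, z), jac(sigma, z)
    return float(Jm @ Jm + Js @ Js)   # scalar metric since d = 1

# Small metric on the data region, large metric off it: geodesics are
# pulled back towards the region covered by the demonstrations.
g_on, g_off = vae_metric(0.0), vae_metric(2.0)
```

Dropping the $\Jac_{\sigma_{\bm{\phi}}}$ term here would leave only the nearly flat $\mu$-induced metric, illustrating why uncertainty modelling is essential for data-conforming geodesics.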
\subsection{Ambient space metric}
According to the preceding section, we leverage a VAE to learn a Riemannian metric.
However, there are situations where this metric needs to be changed.
For example, if our metric encapsulates the main patterns of the robot motion demonstrations, we may be interested in endowing the robot with obstacle avoidance capabilities.
This means that we now need to reshape the previously-learned metric so that the new geodesics lead to obstacle-free robot motions.
A na\"ive approach would entail retraining the VAE model with new data, which is time consuming and data intensive, and can only be executed offline.
We propose to reshape the learned metric by considering problem-specific ambient metrics.
Note that although the definition of curve length relies on the Euclidean metric of $\ambient$, this is not a strict requirement.
Indeed, \citet{arvanitidis:arxiv:2020} argued that there is value in giving the ambient space a manually-defined \emph{Riemannian} metric and including it in the definition of curve length.
The resulting metric can then be used to compute the curve length as,
\begin{align}
\Length_{\curve} &= \int_0^1 \sqrt{\dot{\curve}(t)^{\trsp} \Jac_{f_{\bm{\phi}}}(c(t))^{\trsp} \Metric_{\ambient}(f_{\bm{\phi}}(c(t))) \Jac_{f_{\bm{\phi}}}(c(t)) \dot{\curve}(t)} \mathrm{d}t,
\end{align}
where $\Metric_\ambient$ is the ambient metric, which can now vary smoothly across $\ambient$.
The reshaped Riemannian metric in $\latent$ is then computed as follows,
\begin{align}
\bar{\Metric}(\z) &= \Jac_{\mu_{\bm{\phi}}}(\z)^{\trsp} \Metric_\ambient(\mu_{\bm{\phi}}(\z)) \Jac_{\mu_{\bm{\phi}}}(\z) \nonumber\\
&+ \Jac_{\sigma_{\bm{\phi}}}(\z)^{\trsp} \Metric_\ambient(\mu_{\bm{\phi}}(\z)) \Jac_{\sigma_{\bm{\phi}}}(\z) .
\label{eq:ambient_VAE_metric}
\end{align}
Given this metric, we can compute geodesics that are repelled from certain regions of the ambient space $\ambient$ by increasing the value of the ambient metric $\Metric_\ambient$.
We demonstrate how this metric reshaping method is leveraged to generate obstacle-free robot motions in Section~\ref{sec:experiments}.
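A minimal sketch of the reshaping in Eq.~\eqref{eq:ambient_VAE_metric}: the identity decoder, the Gaussian-bump ambient metric, and all parameter values below are illustrative assumptions. Inflating $\Metric_\ambient$ near an obstacle makes a latent straight line through it more expensive than a detour, which is why the resulting geodesics bend around obstacles:

```python
import numpy as np

# Obstacle-aware ambient metric: identity inflated by a Gaussian bump
# centred at the obstacle (bump shape and parameters are assumptions).
x_obs, amp, rad = np.array([0.5, 0.0]), 50.0, 0.2
M_amb = lambda x: (1.0 + amp * np.exp(-np.sum((x - x_obs)**2) / (2 * rad**2))) * np.eye(2)

mu = lambda z: z   # stand-in decoder (identity map), so J_mu = I and J_sigma = 0

def curve_energy(zs):
    """Discrete energy of a latent polygon under the reshaped metric."""
    E = 0.0
    for z0, z1 in zip(zs[:-1], zs[1:]):
        dz = z1 - z0
        E += dz @ M_amb(mu((z0 + z1) / 2)) @ dz
    return float(E)

t = np.linspace(0.0, 1.0, 200)
straight = np.c_[t, np.zeros_like(t)]          # passes through the obstacle
detour   = np.c_[t, 0.8 * np.sin(np.pi * t)]   # bends around it
# The detour is longer in Euclidean terms but cheaper under the
# obstacle-aware metric, so an energy-minimising geodesic avoids the bump.
```

With a trained VAE one would replace the identity decoder by $\mu_{\bm{\phi}}$ and $\sigma_{\bm{\phi}}$ and minimise this discrete energy over the interior points of the latent curve.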
\section{Introduction}
This work considers (stochastic) iterative solutions for linear operator equations of the form
\begin{align}\label{eqn:inv}
{\mathbf{A}}{\boldsymbol{\mathsf{x}}}={\boldsymbol{\mathsf{y}}}
\end{align}
where ${\mathbf{A}}:\CX\rightarrow\CY$ is a bounded linear operator between Banach spaces $\CX$ and $\CY$ (equipped with the norms $\|\cdot\|_{\CX}$ and $\|\cdot\|_{\CY}$, respectively), and ${\boldsymbol{\mathsf{y}}}\in\range{({\mathbf{A}})}$ is the exact data.
In practice, we only have access to noisy data ${\boldsymbol{\mathsf{y}}}^\delta={\boldsymbol{\mathsf{y}}}+{\boldsymbol{\xi}}$, where ${\boldsymbol{\xi}}$ denotes the measurement noise, with a noise level $\delta\geq0$ such that $\yN{{\boldsymbol{\mathsf{y}}}^\delta-{\boldsymbol{\mathsf{y}}}}\leq\delta$.
Linear inverse problems arise naturally in many applications in science and engineering, and also form the basis for studying nonlinear inverse problems. Hence, design and analysis of stable reconstruction methods for linear inverse problems have received much attention.
Iterative regularisation is a powerful algorithmic paradigm that has been successfully employed
for many inverse problems \cite[Chapters 6 and 7]{EnglHankeNeubauer:1996} \cite{KaltenbacherNeubauerScherzer:2008}.
Classical iterative methods for inverse problems include (accelerated) Landweber method, conjugate
gradient method, Levenberg-Marquardt method, and Gauss-Newton method, to name a few.
The computational bottleneck of many iterative methods lies in utilising all the data at each iteration, which can be of a prohibitively large size.
For example, this occurs when computing the derivative of an objective. One promising strategy to
overcome this challenge is stochastic gradient descent (SGD), due to Robbins and Monro \cite{RM51}. SGD
decomposes the original problem into (finitely many) sub-problems, and then at each iteration uses
only a single datum, or a mini-batch of data, typically selected uniformly at random. This greatly
reduces the computational complexity per-iteration, and enjoys excellent scalability with respect to
data size. In the standard, and best
studied setting, $\CX$ and $\CY$ are finite dimensional Euclidean spaces and the corresponding data fitting objective is the
(rescaled) least squares $\Psi({\boldsymbol{\mathsf{x}}})=\frac{1}{2N}\yN{{\mathbf{A}}{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}}^2 =
\frac{1}{N}\sum_{i=1}^N \frac{1}{2}\yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^2$. In this setting SGD takes the form
\begin{align*}
{\boldsymbol{\mathsf{x}}}_{k+1} = {\boldsymbol{\mathsf{x}}}_k - \mu_{k+1}{\mathbf{A}}_{i_{k+1}}^\ast({\mathbf{A}}_{i_{k+1}}{\boldsymbol{\mathsf{x}}}_k-{\boldsymbol{\mathsf{y}}}_{i_{k+1}}), \quad k=0,1,\ldots,
\end{align*}
where $\mu_k>0$ is the step-size, ${i_{k+1}}$ a randomly selected index, ${\mathbf{A}}_i$ the $i$-th row of the matrix ${\mathbf{A}}$, and ${\boldsymbol{\mathsf{y}}}_i$ the $i$-th entry of ${\boldsymbol{\mathsf{y}}}$. In the seminal work \cite{RM51}, Robbins and Monro presented SGD as a Markov chain,
laying the groundwork for the field of stochastic approximation \cite{KushnerYin:2003}. SGD has
since had a major impact on statistical inference and machine learning, especially for the training of neural networks. SGD has been extensively studied in the Euclidean setting; see \cite{BCN18} for an overview of the convergence theory from the viewpoint
of optimisation.
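As a sketch of this update in its simplest Euclidean form (toy random data; the Kaczmarz-type step-size $\mu_{k+1}=1/\|{\mathbf{A}}_{i_{k+1}}\|^2$ is one common choice, not the only one), SGD started from ${\boldsymbol{\mathsf{x}}}_0=\bm{0}$ recovers the solution of a consistent system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Consistent toy system A x = y (sizes are illustrative).
N, D = 200, 10
A = rng.standard_normal((N, D))
x_true = rng.standard_normal(D)
y = A @ x_true

x = np.zeros(D)                               # x_0 = 0
for k in range(20_000):
    i = rng.integers(N)                       # uniformly sampled row index
    step = 1.0 / (A[i] @ A[i])                # Kaczmarz-type step-size
    x -= step * (A[i] @ x - y[i]) * A[i]      # SGD update for the i-th row

# x now coincides (numerically) with the unique solution of the system.
```

With exact data the iterates converge to the solution; with noisy data, as discussed below, the iteration must instead be stopped early to act as a regularisation method.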
SGD has also been a popular method for image reconstruction, especially in medical imaging.
For example, the (randomised) Kaczmarz method is a reweighted version of SGD that has been extensively
used in computed tomography \cite{HM93, NZZ15}. Other applications of SGD and its variants
include optical tomography \cite{ChenLiLiu:2018}, phonon transmission coefficient recovery
\cite{GambaLiNair:2022}, positron emission tomography \cite{Z+21}, as well as general sparse recovery \cite{SchopferLorenz:2019,SchopferLorenzTondji:2022}. For linear inverse problems in Euclidean spaces, Jin and Lu \cite{JL19} gave a first proof of convergence
of SGD iterates towards the minimum norm solution, and analysed the regularising behaviour in the presence of noise; see \cite{JahnJin:2020,JinZhouZou:2020,JinZhouZou:2021siuq,LuMathe:2022,RabeloLeitao:2022} for further convergence results, \textit{a posteriori} stopping rules (discrepancy principle), nonlinear problems, and general step-size schedules, etc.
Iterative methods in Euclidean and Hilbert spaces are effective for reconstructing smooth solutions but fail to capture special features of the solutions, such as sparsity and piecewise constancy. In practice, many imaging inverse problems are more adequately described in non-Hilbert settings, including
sequence spaces $\ell^p(\bbR)$ and Lebesgue spaces $\CL^p(\Omega)$, with $p\in[1,\infty]
\setminus\{2\}$, which requires changing either the solution space, the data space, or both. For example, inverse problems with impulse noise are better modelled by setting the data
space $\CY$ to a Lebesgue space $\CL^p(\Omega)$ with $p\approx 1$ \cite{ClasonJinKunisch:2010}, whereas the recovery of sparse solutions
is modelled by doing the same to the solution space $\CX$ \cite{CandesRombertTao:2006}. Thus, it is of great importance to develop
and analyse algorithms for inverse problems in Banach spaces, and this has received much attention \cite{SG+09,TBHK_12}. For the Landweber method for linear inverse problems in Banach spaces, Sch\"opfer et al \cite{SLS_06} were the first to prove strong convergence of the iterates under a suitable
step-size schedule for a smooth and uniformly convex Banach space $\CX$ and an arbitrary Banach space
$\CY$. This has since been extended and refined in various aspects, e.g. regarding acceleration \cite{SS09, W18, GHC19,ZhongWangJin:2019}, nonlinear forward models \cite{dHQS12, M18},
and Gauss-Newton methods \cite{KH10}.
In this work, we investigate SGD for inverse problems in Banach spaces, which has thus far lagged behind due to outstanding challenges in extending the analysis of standard Hilbert
space approaches to the Banach space setting. The main challenges in analysing SGD-like gradient-based
methods in Banach spaces are two-fold:
\begin{enumerate}
\item The use of duality maps results in non-linear update rules, which greatly complicates the convergence analysis. For example, the (expected)
difference between successive updates can no longer be identified as the (sub-)gradient of the objective.
\item Due to geometric characteristics of Banach spaces, it is more common to use the Bregman distance
for the convergence analysis, which results in the loss of useful algebraic tools, e.g. triangle
inequality and bias-variance decomposition, that are typically needed for the analysis.
\end{enumerate}
In this work,
we develop an SGD approach for the numerical solution of linear inverse problems in Banach spaces,
using the sub-gradient approach based on duality maps, and present a novel convergence analysis.
We first consider the case of exact data, and show that SGD iterates converge to a minimising
solution (first almost surely and then in expectation) under standard assumptions on summability of step-sizes, and geometric properties of the space $\CX$, cf. Theorems
\ref{thm:as_lin_convergence} and \ref{thm:L1_lin_convergence}. This solution is identified
as the minimum norm solution if the initial guess ${\boldsymbol{\mathsf{x}}}_0$ satisfies the range condition $\dmapX{p}({\boldsymbol{\mathsf{x}}}_0)\in\overline{\rm{range}({\mathbf{A}}^\ast)}$.
Further, we give a convergence rate in Theorem \ref{thm:lin-main} when the forward operator ${\mathbf{A}}$ satisfies a conditional stability
estimate. In case of noisy observations, we show the regularising property
of SGD, for properly chosen stopping indices, cf. Theorem \ref{thm:regularisation_property}. The analysis rests on a descent property in Lemma \ref{lem:descent_property} and Robbins-Siegmund theorem for almost super-martingales. In addition, we perform extensive numerical experiments on a model inverse problem (linear integral equation) and computed tomography (with parallel beam geometry) to illustrate distinct features of the proposed Banach space SGD, and examine the influence of various factors, such as the choice of the spaces $\CX$ and $\CY$, mini-batch size and noise characteristics (Gaussian or impulse).
When finalising the paper, we became aware of the independent and simultaneous work \cite{JinLuZhang:2022} on a stochastic mirror descent method for linear inverse problems between a Banach space $\mathcal{X}$ and a Hilbert space $\mathcal{Y}$. The method is a randomised version of the well-known Landweber-Kaczmarz method. The authors prove convergence results under \textit{a priori} stopping rules, and also establish an order-optimal convergence rate when the exact solution ${\boldsymbol{\mathsf{x}}}^\dag$ satisfies a benchmark source condition, by interpreting the method as a randomised block gradient method applied to the dual problem. Thus, the current work differs significantly from \cite{JinLuZhang:2022} in terms of problem setting, main results and analysis techniques.
The rest of the paper is organised as follows. In Section \ref{sec:prelims}, we recall background
materials on the geometry of Banach spaces, e.g. duality maps and Bregman distance. In Section \ref{sec:linear}, we
present the convergence of SGD for exact data and in Section \ref{sec:regularisation}, we discuss the regularising property of SGD for noisy observations. Finally, in Section \ref{sec:experiments}, we
provide some experimental results on a model inverse problem and computed tomography. In the {A}ppendix we collect several useful inequalities and auxiliary estimates.
Throughout, let $\CX$ and $\CY$ be two real Banach spaces, with their norms denoted by $\xN{\cdot}$ and $\yN{\cdot}$, respectively.
$\CX^\ast$ and $\CY^\ast$ are their respective dual spaces, with their norms denoted by $\|\cdot\|_{\CX^\ast}$ and $\|\cdot\|_{\CY^\ast}$, respectively.
For ${\boldsymbol{\mathsf{x}}}\in\CX$ and ${\boldsymbol{\mathsf{x}}}^\ast\in\CX^\ast$, we denote the corresponding duality pairing by
$\DP{{\boldsymbol{\mathsf{x}}}^\ast}{{\boldsymbol{\mathsf{x}}}}=\DP{{\boldsymbol{\mathsf{x}}}^\ast}{{\boldsymbol{\mathsf{x}}}}_{\CX^\ast\times\CX}={\boldsymbol{\mathsf{x}}}^\ast({\boldsymbol{\mathsf{x}}})$.
For a continuous linear operator ${\mathbf{A}}:\CX\rightarrow \CY$, we use $\|{\mathbf{A}}\|_{\CX\to \CY}$ to denote the operator norm (often with the subscript omitted).
The adjoint of ${\mathbf{A}}$ is denoted as ${\mathbf{A}}^\ast:\CY^\ast\rightarrow\CX^\ast$ and it is a continuous linear operator, with $\norm{{\mathbf{A}}}_{\CX\to\CY}=\norm{{\mathbf{A}}^\ast}_{\CY^\ast\to\CX^\ast}$.
The conjugate exponent of $p\in (1,\infty)$ is denoted by $p^\ast$, such that $1/p+1/p^\ast=1$ holds.
The following Cauchy-Schwarz type inequality holds for any ${\boldsymbol{\mathsf{x}}}\in\CX$ and ${\boldsymbol{\mathsf{x}}}^\ast\in\CX^\ast$:
\begin{equation}\label{eqn:Cauchy_Schwarz}
| \DP{{\boldsymbol{\mathsf{x}}}^\ast}{{\boldsymbol{\mathsf{x}}}}|\le \xsN{{\boldsymbol{\mathsf{x}}}^\ast}\xN{{\boldsymbol{\mathsf{x}}}}.
\end{equation}
For reals $a,b$ we write $a\wedge b=\min\{a,b\}$ and $a\vee b=\max\{a,b\}$. By $(\CF_k)_{k\in\bbN}$,
we denote the natural filtration, i.e. an increasing sequence of sub-$\sigma$-algebras $\CF_k
\subset\CF_{k+1}\subset \CF$, $k\in\bbN$, of the underlying $\sigma$-algebra $\CF$, where $\CF_k$ is generated by the
random indices $i_j$, for $j\leq k$.
In the context of SGD, $k\in\bbN$ is the iteration number and
$\CF_k$ denotes the iteration history, that is, information available at time $k$, and for a given
initialisation ${\boldsymbol{\mathsf{x}}}_0$, we can identify $\CF_k=\sigma({\boldsymbol{\mathsf{x}}}_1,\ldots,{\boldsymbol{\mathsf{x}}}_k)$.
For a filtration $(\CF_k)_{k\in\bbN}$ we denote by $\bbE_k[\cdot]=\bbE[\cdot\mid{\boldsymbol{\mathsf{x}}}_1,\ldots,{\boldsymbol{\mathsf{x}}}_k]$ the conditional expectation with respect to $\CF_k$.
{A sequence of random variables $(x_k)_{k\in\bbN}$ (adapted to the filtration $(\mathcal{F}_k)_{k\in\mathbb{N}}$) is called a super-martingale if $\bbE_k[x_{k+1}]\leq x_k.$}
Throughout, the abbreviation a.s. stands for almost surely.
\section{Preliminaries on Banach spaces}\label{sec:prelims}
In this section we recall relevant concepts from Banach space theory and the geometry of Banach spaces.
\subsection{Duality map}
In a Hilbert space $\CH$, for every ${\boldsymbol{\mathsf{x}}}\in\CH$, there exists a unique ${\boldsymbol{\mathsf{x}}}^\ast\in\CH^\ast$ such that $\DP{{\boldsymbol{\mathsf{x}}}^\ast}{{\boldsymbol{\mathsf{x}}}}=\|{\boldsymbol{\mathsf{x}}}\|_{\CH}\|{\boldsymbol{\mathsf{x}}}^\ast\|_{\CH^\ast}$ and $\|{\boldsymbol{\mathsf{x}}}^\ast\|_{\CH^\ast}=\|{\boldsymbol{\mathsf{x}}}\|_{\CH}$, by the Riesz representation theorem.
For Banach spaces, however, such an ${\boldsymbol{\mathsf{x}}}^\ast$ is not necessarily unique, motivating the notion of duality maps.
\begin{definition}[Duality map]\label{defn:duality_map}
For any $p>1$, a \emph{duality map} $\dmapX{p}:\CX\rightarrow 2^{\CX^\ast}$ is the sub-differential of the {\rm(}convex{\rm)} functional $\frac{1}{p}\|{\boldsymbol{\mathsf{x}}}\|_{\CX}^p$
\begin{align}\label{eqn:dmap_defined}
\dmapX{p} ({\boldsymbol{\mathsf{x}}}) = \left\{{\boldsymbol{\mathsf{x}}}^\ast\in\CX^\ast : \DP{{\boldsymbol{\mathsf{x}}}^\ast}{{\boldsymbol{\mathsf{x}}}}=\xN{{\boldsymbol{\mathsf{x}}}}\xsN{{\boldsymbol{\mathsf{x}}}^\ast}, \text{ and } \xN{{\boldsymbol{\mathsf{x}}}}^{p-1}=\xsN{{\boldsymbol{\mathsf{x}}}^\ast} \right\},
\end{align}
with gauge function $t\mapsto t^{p-1}$.
A single-valued selection of $\dmapX{p}$ is denoted by $\svaldmapX{p}$.
\end{definition}
In practice, the choice of the power parameter $p$ depends on geometric properties of the space $\CX$. {For single-valued duality maps, we use $\dmapX{p}$ and $\svaldmapX{p}$ interchangeably.}
Next we recall standard notions of smoothness and convexity of Banach spaces.
For an overview of Banach space geometry, we refer the interested reader to the monographs \cite{C_09, Cioranescu:1990,TBHK_12}.
\begin{definition}\label{defn:smoothness_and_convexity}
Let $\CX$ be a Banach space.
{$\CX$ is said to be reflexive if the canonical map ${\boldsymbol{\mathsf{x}}}\mapsto\widehat{\boldsymbol{\mathsf{x}}}$ between $\CX$ and the bidual $\CX^{\ast\ast}$, defined by $\widehat{\boldsymbol{\mathsf{x}}}({\boldsymbol{\mathsf{x}}}^\ast) = {\boldsymbol{\mathsf{x}}}^\ast({\boldsymbol{\mathsf{x}}})$, is surjective. $\CX$ is smooth if for every $0\neq{\boldsymbol{\mathsf{x}}}\in\CX$ there is a unique ${\boldsymbol{\mathsf{x}}}^\ast\in\CX^\ast$ such that $\DP{{\boldsymbol{\mathsf{x}}}^\ast}{{\boldsymbol{\mathsf{x}}}}=\xN{{\boldsymbol{\mathsf{x}}}}$ and $\xsN{{\boldsymbol{\mathsf{x}}}^\ast}=1$.}
The function $\delta_\CX:(0,2]\rightarrow\bbR$ defined as
\begin{align*}
\delta_\CX(\tau) = \inf\Big\{1-\tfrac{1}{2}\xN{{\boldsymbol{\mathsf{z}}}+{\boldsymbol{\mathsf{w}}}} : \xN{{\boldsymbol{\mathsf{z}}}}=\xN{{\boldsymbol{\mathsf{w}}}}=1, \xN{{\boldsymbol{\mathsf{z}}}-{\boldsymbol{\mathsf{w}}}}\geq\tau \Big\}
\end{align*}
is the \emph{modulus of convexity} of $\CX$.
$\CX$ is said to be \emph{uniformly convex} if $\delta_\CX(\tau)>0$ for all $\tau\in(0,2]$, and \emph{$p$-convex}, for $p>1$, if $\delta_\CX(\tau)\geq K_p\tau^p$ for some $K_p>0$ and all $\tau\in(0,2]$. The function $\rho_\CX:{[0,\infty)\to[0,\infty)}$ defined as
\begin{align*}
\rho_\CX(\tau) = \sup\Big\{\frac{\xN{{\boldsymbol{\mathsf{z}}}+\tau{\boldsymbol{\mathsf{w}}}}+\xN{{\boldsymbol{\mathsf{z}}}-\tau{\boldsymbol{\mathsf{w}}}}}{2}-1 : \xN{{\boldsymbol{\mathsf{z}}}}=\xN{{\boldsymbol{\mathsf{w}}}}=1\Big\}
\end{align*}
is the \emph{modulus of smoothness} of $\CX$, and is a convex and continuous function such that $\frac{\rho_\CX(\tau)}{\tau}$ is a non-decreasing function with $\rho_\CX(\tau)\leq\tau$.
$\CX$ is said to be \emph{uniformly smooth} if $\lim_{\tau\searrow0}\frac{\rho_\CX(\tau)}{\tau}=0$, and \emph{$p$-smooth}, for $p>1$, if $\rho_\CX(\tau)\leq K_p\tau^p$ for some $K_p>0$ and all $\tau\in(0,\infty)$.
\end{definition}
The following relationships between Banach spaces and duality maps will be used extensively.
\begin{theorem}[{\cite[Theorems 2.52 and 2.53, and Lemma 5.16]{TBHK_12}}]\label{rem:dmap_properties}
\begin{enumerate}
\item[{\rm(i)}] For every ${\boldsymbol{\mathsf{x}}}\in\CX$, the set $\dmapX{p}({\boldsymbol{\mathsf{x}}})$ is non-empty, convex, and weakly-$\star$ closed in $\CX^\ast$.
\item[{\rm(ii)}]\label{rem:Xp_Xstarpstar} $\CX$ is $p$-smooth if and only if $\CX^\ast$ is $p^\ast$-convex.
$\CX$ is $p$-convex if and only if $\CX^\ast$ is $p^\ast$-smooth.
\item[{\rm(iii)}]\label{property:dual_inverse}
$\CX$ is smooth if and only if $\dmapX{p}$ is single valued.
If $\CX$ is {convex of power type and smooth,} then $\dmapX{p}$ is invertible and $\big(\dmapX{p}\big)^{-1}=\dmapXs{p}$.
If $\CX$ is uniformly smooth and uniformly convex, then $\dmapX{p}$ and $\dmapXs{p}$ are both {uniformly} continuous.
\item[{\rm(iv)}] {Let $\CX$ be a uniformly smooth Banach space with duality map $\dmapX{p}$ with $p\geq 2$. Then, for all ${\boldsymbol{\mathsf{x}}},\widetilde{\boldsymbol{\mathsf{x}}}\in \CX$, there holds
\begin{align*}
\xsN{\dmapX{p}({\boldsymbol{\mathsf{x}}})-\dmapX{p}(\widetilde {\boldsymbol{\mathsf{x}}})}^{p^\ast}\leq C \max\{1, \xN{{\boldsymbol{\mathsf{x}}}}, \xN{\widetilde{\boldsymbol{\mathsf{x}}}}\}^{p}\, {\overline{\rho}_\CX}(\xN{{\boldsymbol{\mathsf{x}}}-\widetilde{\boldsymbol{\mathsf{x}}}})^{p^\ast},
\end{align*}
where $\overline{\rho}_\CX(\tau)=\rho_\CX(\tau)/\tau$ is a modulus of smoothness function such that $\overline{\rho}_\CX(\tau)\leq1$.}
\end{enumerate}
\end{theorem}
Next we list some common Banach spaces, the corresponding duality maps and convexity and smoothness properties.
\begin{example}\label{eg:spaces}
\begin{enumerate}
\item[{\rm(i)}] A Hilbert space $\CX$ is $2$-smooth and $2$-convex, and $\dmapX{2}$ is the identity.
\item[{\rm(ii)}] If $\CX$ is smooth, then $\dmapX{p}$ is the Gateaux derivative of the functional ${\boldsymbol{\mathsf{x}}}\mapsto\frac{1}{p}\xN{{\boldsymbol{\mathsf{x}}}}^p$.
\item[{\rm(iii)}] If $\CX=\ell^r(\bbR)$ with $1<r<\infty$, then $\dmapX{p}$ is single-valued, and the duality map is given by
$\dmapX{p}({\boldsymbol{\mathsf{x}}}) = \|{\boldsymbol{\mathsf{x}}}\|_r^{p-r} |{\boldsymbol{\mathsf{x}}}|^{r-1}\sign({\boldsymbol{\mathsf{x}}}).$
Moreover, $\dmapX{p}=\nabla({\frac{1}{p}\xN{\cdot}^p})$ since $\CX$ is smooth.
\item[{\rm(iv)}] Lebesgue spaces $\CL^p(\Omega)$, Sobolev spaces $W^{s,p}(\Omega)$, with $s>0$, {\rm(}for an open bounded domain $\Omega${\rm)}, and sequence spaces $\ell^p(\bbR)$ are $p\wedge 2$-smooth and $p\vee2$-convex, for $1<p<\infty$.
For $p\in\{1,\infty\}$, they are neither smooth nor strictly convex.
\end{enumerate}
\end{example}
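The componentwise formula in (iii) can be verified numerically. The following minimal NumPy sketch (the function name \texttt{duality\_map} and the test vector are our own illustrative choices) checks the two defining properties in \eqref{eqn:dmap_defined}:

```python
import numpy as np

def duality_map(x, r, p):
    """Single-valued duality map J_p on X = ell^r (1 < r < inf), gauge t -> t^(p-1):
    J_p(x) = ||x||_r^(p-r) * |x|^(r-1) * sign(x), applied componentwise."""
    nrm = np.linalg.norm(x, ord=r)
    if nrm == 0.0:
        return np.zeros_like(x)
    return nrm ** (p - r) * np.abs(x) ** (r - 1) * np.sign(x)

x = np.array([1.0, -2.0, 0.5])
r, p = 1.5, 2.0
xs = duality_map(x, r, p)
rs = r / (r - 1.0)  # conjugate exponent of r; the dual of ell^r is ell^{r*}

# defining properties: <x*, x> = ||x||_r ||x*||_{r*} and ||x*||_{r*} = ||x||_r^(p-1)
assert np.isclose(np.dot(xs, x), np.linalg.norm(x, r) * np.linalg.norm(xs, rs))
assert np.isclose(np.linalg.norm(xs, rs), np.linalg.norm(x, r) ** (p - 1))
```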
\subsection{Bregman distance}
Due to the geometry of Banach spaces, it is often more convenient to use the
Bregman distance than the standard Banach space norm $\xN{\cdot}$ in the convergence analysis.
\begin{definition}[Bregman distance]\label{defn:Bregman_distance}
For a smooth Banach space $\CX$, the functional
\begin{align*} \bregman{{\boldsymbol{\mathsf{z}}}}{{\boldsymbol{\mathsf{w}}}}
&= \frac{1}{p^\ast}\xN{{\boldsymbol{\mathsf{z}}}}^p+\frac{1}{p}\xN{{\boldsymbol{\mathsf{w}}}}^p-\DP{\dmapX{p}({\boldsymbol{\mathsf{z}}})}{{\boldsymbol{\mathsf{w}}}},
\end{align*}
is called the \emph{Bregman distance}, where $1/p+1/p^\ast=1$.
\end{definition}
Note that the dependence of the Bregman distance $\bregman{{\boldsymbol{\mathsf{z}}}}{{\boldsymbol{\mathsf{w}}}}
$ on the space $\CX$ is omitted from the notation, as it is usually clear from the context. The Bregman distance does not satisfy the triangle inequality and is generally non-symmetric; thus it is not a metric. The next theorem lists
useful properties of the Bregman distance, which show the relationship between the
geometry of the underlying Banach space and duality maps.
\begin{theorem}[{\cite[Theorem 2.60, Lemmas 2.62 and 2.63]{TBHK_12}}]\label{thm:bregman_properties}
The following properties hold.
\begin{enumerate}
\item[{\rm(i)}] If $\CX$ is smooth and reflexive, then
$ \bregman{{\boldsymbol{\mathsf{z}}}}{{\boldsymbol{\mathsf{w}}}}={\bregmanS{{\boldsymbol{\mathsf{w}}}}{{\boldsymbol{\mathsf{z}}}}}.$
\item[{\rm(ii)}] The Bregman distance satisfies the three-point identity
\begin{align}\label{eqn:3_point_id} \bregman{{\boldsymbol{\mathsf{z}}}}{{\boldsymbol{\mathsf{w}}}}=\bregman{{\boldsymbol{\mathsf{z}}}}{{\boldsymbol{\mathsf{v}}}}+\bregman{{\boldsymbol{\mathsf{v}}}}{{\boldsymbol{\mathsf{w}}}}+\DP{\dmapX{p}({\boldsymbol{\mathsf{v}}})-\dmapX{p}({\boldsymbol{\mathsf{z}}})}{{\boldsymbol{\mathsf{w}}}-{\boldsymbol{\mathsf{v}}}}.\end{align}
\item[{\rm(iii)}]\label{rem:bregman_pconv} If $\CX$ is $p$-convex, then it is reflexive, $p\geq 2$ and there exists $C_p>0$ such that
\begin{align}\label{eqn:norm_leq_bregman} \bregman{{\boldsymbol{\mathsf{z}}}}{{\boldsymbol{\mathsf{w}}}}\geq p^{-1}C_p\xN{{\boldsymbol{\mathsf{w}}}-{\boldsymbol{\mathsf{z}}}}^p.\end{align}
\item[{\rm(iv)}]\label{rem:bregman_psmooth} If $\CX^\ast$ is $p^\ast$-smooth, then it is reflexive, $p^\ast\le 2$ and there exists $G_{p^\ast}>0$ such that
\begin{align}\label{eqn:norm_geq_bregman} {\bregmanSdef{{\boldsymbol{\mathsf{z}}}^\ast}{{\boldsymbol{\mathsf{w}}}^\ast}\leq (p^\ast)^{-1}G_{p^\ast}\xsN{{\boldsymbol{\mathsf{w}}}^\ast-{\boldsymbol{\mathsf{z}}}^\ast}^{p^\ast}}.\end{align}
\item[{\rm(v)}] $\bregman{{\boldsymbol{\mathsf{z}}}}{{\boldsymbol{\mathsf{w}}}}\geq0$, and if $\CX$ is uniformly convex, $\bregman{{\boldsymbol{\mathsf{z}}}}{{\boldsymbol{\mathsf{w}}}}=0$ if and only if {${\boldsymbol{\mathsf{z}}}={\boldsymbol{\mathsf{w}}}$}.
\item[{\rm(vi)}] $\bregman{{\boldsymbol{\mathsf{z}}}}{{\boldsymbol{\mathsf{w}}}}$ is continuous in the second argument. If $\CX$ is smooth and uniformly convex, then $\dmapX{p}$ is continuous on bounded subsets and $\bregman{{\boldsymbol{\mathsf{z}}}}{{\boldsymbol{\mathsf{w}}}}$ is continuous in its first argument.
\end{enumerate}
\end{theorem}
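For $\CX=\ell^r(\bbR)$ the Bregman distance is explicitly computable via the duality map formula in Example \ref{eg:spaces}(iii). A small numerical sketch (a toy check of our own, with $r=p=3$) illustrates Definition \ref{defn:Bregman_distance}, the non-negativity in (v), and the three-point identity \eqref{eqn:3_point_id}:

```python
import numpy as np

def jp(x, r, p):
    # duality map on ell^r with gauge t -> t^(p-1), cf. Example (iii)
    nrm = np.linalg.norm(x, r)
    return np.zeros_like(x) if nrm == 0 else nrm**(p - r) * np.abs(x)**(r - 1) * np.sign(x)

def bregman(z, w, r, p):
    # D_p(z, w) = ||z||^p / p* + ||w||^p / p - <J_p(z), w>
    ps = p / (p - 1.0)
    return (np.linalg.norm(z, r)**p / ps + np.linalg.norm(w, r)**p / p
            - np.dot(jp(z, r, p), w))

rng = np.random.default_rng(0)
z, w, v = rng.standard_normal((3, 5))
r, p = 3.0, 3.0

d = bregman(z, w, r, p)
assert d >= 0.0 and np.isclose(bregman(z, z, r, p), 0.0)

# three-point identity: D(z,w) = D(z,v) + D(v,w) + <J_p(v) - J_p(z), w - v>
rhs = (bregman(z, v, r, p) + bregman(v, w, r, p)
       + np.dot(jp(v, r, p) - jp(z, r, p), w - v))
assert np.isclose(d, rhs)
```

Note that the three-point identity holds for an arbitrary norm, since it follows from the definition of $\bregman{\cdot}{\cdot}$ by direct expansion.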
\section{Convergence analysis for exact data}\label{sec:linear}
Now we develop an SGD type approach for problem \eqref{eqn:inv} and analyse its convergence.
{Throughout, we make the following assumption on the Banach spaces $\CX$ and $\CY$, unless indicated otherwise.}
{\begin{assumption}\label{ass:space-basic}
The Banach space $\CX$
is $p$-convex and smooth, and $\CY$ is arbitrary.
\end{assumption}}
To recover the solution ${\boldsymbol{\mathsf{x}}}^\dag$, we consider the least-squares type problem
$\argmin_{{\boldsymbol{\mathsf{x}}}\in\CX} \tfrac{1}{p}\yN{{\mathbf{A}}{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}}^p,$ for some $p>1$.
By $\CX_{\min}$ we denote the (non-empty) set of minimisers over $\CX$.
Among the elements of $\CX_{\min}$, regularisation theory focuses on the so-called minimum norm solution.
\begin{definition}\label{defn:minimum_norm}
An element $\xbs^\dagger\in\CX$ is called a \emph{minimum norm solution} {\rm(}MNS\,{\rm)} of \eqref{eqn:inv} if
\[ {\mathbf{A}}\xbs^\dagger={\boldsymbol{\mathsf{y}}}\quad\text{ and }\quad \xN{\xbs^\dagger}=\inf \{\xN{{\boldsymbol{\mathsf{x}}}}:{\boldsymbol{\mathsf{x}}}\in\CX,\, {\mathbf{A}}{\boldsymbol{\mathsf{x}}}={\boldsymbol{\mathsf{y}}} \}.\]
\end{definition}
The MNS $\xbs^\dagger$ is not unique for general Banach spaces. The following
lemma states sufficient geometric assumptions on $\CX$ for uniqueness.
\begin{lemma}[{\cite[Lemma 3.3]{TBHK_12}}]
\label{lem:minnorm}
Let Assumption \ref{ass:space-basic} hold.
Then there exists a unique MNS $\xbs^\dagger$.
Furthermore, $\dmapX{p}(\xbs^\dagger)\in\overline{\range({\mathbf{A}}^\ast)}$, for $1<p<\infty$.
If some $\widehat{{\boldsymbol{\mathsf{x}}}}\in\CX$ satisfies $\dmapX{p}(\widehat{{\boldsymbol{\mathsf{x}}}})\in\overline{\range({\mathbf{A}}^\ast)}$ and $\widehat{{\boldsymbol{\mathsf{x}}}}-\xbs^\dagger\in\Null({\mathbf{A}})$, then $\widehat{{\boldsymbol{\mathsf{x}}}}=\xbs^\dagger$.
\end{lemma}
By Lemma \ref{lem:minnorm}, the MNS $\xbs^\dagger$ is unique modulo
the null space of ${\mathbf{A}}$, under certain smoothness and convexity assumptions on $\CX$.
These conditions exclude Lebesgue and sequence spaces $\CL^1(\Omega)$ and $\ell^1(\bbR)$, cf. Example \ref{eg:spaces}(iv).
The standard Landweber method \cite{Landweber:1951,SLS_06} constructs an approximation to the MNS $\xbs^\dagger$ by running the iterations
\begin{align}\label{eqn:Landweber}
{\boldsymbol{\mathsf{x}}}_{k+1} = \dmapXs{p}\LRR{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k) - \mu_{k+1}{\mathbf{A}}^\ast\svaldmapY{p}({\mathbf{A}}{\boldsymbol{\mathsf{x}}}_k-{\boldsymbol{\mathsf{y}}})}, \quad k=0,1,\ldots,
\end{align}
where $\mu_{k+1}>0$ is the step-size.
Asplund's theorem \cite[Theorem 2.28]{TBHK_12} allows characterising the duality map as the
sub-differential, $\dmapX{p}=\partial({\frac{1}{p}\xN{\cdot}^p})$ for $p>1$.
This identifies the descent direction ${\mathbf{A}}^\ast\svaldmapY{p}({\mathbf{A}}{\boldsymbol{\mathsf{x}}}_k-{\boldsymbol{\mathsf{y}}})$ as a sub-gradient: ${\mathbf{A}}^\ast\svaldmapY{p}({\mathbf{A}}{\boldsymbol{\mathsf{x}}}_k-{\boldsymbol{\mathsf{y}}})=\partial(\frac{1}{p}\yN{{\mathbf{A}}\cdot-{\boldsymbol{\mathsf{y}}}}^p)({\boldsymbol{\mathsf{x}}}_k)$.
{Note that $\dmapX{p}$ is single valued by Assumption \ref{ass:space-basic} and Theorem \ref{rem:dmap_properties}, though $\dmapY{p}$ need not be.}
For well selected step-sizes, Landweber iterations \eqref{eqn:Landweber} converge to an MNS of \eqref{eqn:inv} \cite[Theorem 3.3]{SLS_06}.
The evaluation of the sub-gradient ${\mathbf{A}}^\ast \svaldmapY{p}({\mathbf{A}}{\boldsymbol{\mathsf{x}}}_k-{\boldsymbol{\mathsf{y}}})$ represents the main
per-iteration cost of the iteration \eqref{eqn:Landweber}. {In this work, we consider the following Kaczmarz type setting:
\begin{align}\label{eqn:Kaczmarz_model}
{\mathbf{A}}=\begin{pmatrix}{\mathbf{A}}_1\\\vdots\\{\mathbf{A}}_N\end{pmatrix}\quad \text{and} \quad{\mathbf{A}}{\boldsymbol{\mathsf{x}}} = \begin{pmatrix}{\mathbf{A}}_1{\boldsymbol{\mathsf{x}}}\\\vdots\\{\mathbf{A}}_N{\boldsymbol{\mathsf{x}}}\end{pmatrix}=\begin{pmatrix}{\boldsymbol{\mathsf{y}}}_1\\\vdots\\{\boldsymbol{\mathsf{y}}}_N\end{pmatrix},
\end{align}
where ${\mathbf{A}}_i:\CX\rightarrow\CY_i$, ${\boldsymbol{\mathsf{y}}}_i\in\CY_i$, for $i\in[N]=\{1,\ldots,N\}$. Problem \eqref{eqn:Kaczmarz_model} is defined on the direct product $(\otimes_{i=1}^N\CY_i, \ell^r)$, equipped with the $\ell^r$ norm, for $r\geq 1$
\begin{align}\label{eqn:Norm-Y}
\|{\boldsymbol{\mathsf{y}}}\|_{\mathcal{Y}}:=\|({\boldsymbol{\mathsf{y}}}_1,\ldots,{\boldsymbol{\mathsf{y}}}_N)\|_{\mathcal{Y}}= \|(\|{\boldsymbol{\mathsf{y}}}_1\|_{\CY_1},\ldots,\|{\boldsymbol{\mathsf{y}}}_N\|_{\CY_N})\|_{\ell^r}=\Big(\sum_{i=1}^N \|{\boldsymbol{\mathsf{y}}}_i\|_{\mathcal{Y}_i}^r\Big)^{1/r}.
\end{align}
Below we identify $\CY_i=\CY$ for notational brevity, and use $\yN{\cdot}$ to denote the norms of both the direct product space and the component spaces, though all the relevant proofs and concepts extend easily to the general case. Then the objective $\Psi({\boldsymbol{\mathsf{x}}})$ is given by
\begin{equation*}
\Psi({\boldsymbol{\mathsf{x}}})=\frac{1}{N}\sum_{i=1}^N \Psi_i({\boldsymbol{\mathsf{x}}}), \quad \mbox{with } \Psi_i({\boldsymbol{\mathsf{x}}})=\frac{1}{p}\yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^p.
\end{equation*}
Note that for many common imaging problems we use $\mathcal{Y}=\ell^p(\mathbb{R})$, which then naturally gives
$\Psi({\boldsymbol{\mathsf{x}}})=\frac{1}{pN}\yN{{\mathbf{A}}{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}}^p$. To reduce the computational cost per-iteration, we exploit the finite-sum structure of the objective $\Psi({\boldsymbol{\mathsf{x}}})$ and adopt SGD iterations of the form
\begin{align}\label{eqn:sgd}
{\boldsymbol{\mathsf{x}}}_{k+1} = \dmapXs{p}\LRR{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k) - \mu_{k+1}{\boldsymbol{\mathsf{g}}}_{k+1}},
\end{align}
where ${\boldsymbol{\mathsf{g}}}_{k+1}=g({\boldsymbol{\mathsf{x}}}_k,{\boldsymbol{\mathsf{y}}},i_{k+1})$ is the stochastic update direction given by
\begin{align}\label{eqn:Kaczmarz_linear_gradients} g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)={\mathbf{A}}_i^\ast \svaldmapY{p}({\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i)= \partial\Big(\tfrac{1}{p}\yN{{\mathbf{A}}_i\cdot-{\boldsymbol{\mathsf{y}}}_i}^p\Big)({\boldsymbol{\mathsf{x}}}),
\end{align}
and the random index $i_{k+1}$ {is sampled uniformly over the index set $[N]$,}
independent of ${\boldsymbol{\mathsf{x}}}_k$. Clearly, $g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)$ is an unbiased estimator of the sub-gradient $\partial\Psi({\boldsymbol{\mathsf{x}}})$, i.e. $\bbE[g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)]=\partial\Psi({\boldsymbol{\mathsf{x}}})$, and the per-iteration cost is reduced by a factor of $N$.}
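To make the scheme concrete, the following sketch runs iteration \eqref{eqn:sgd} on a small synthetic consistent system, with $\CX=\ell^r(\bbR)$, $r=1.3$ (so that $\CX$ is $2$-convex), $p=2$ and Euclidean spaces $\CY_i$ (so that $\svaldmapY{2}$ is the identity); the problem sizes, step-size schedule and tolerance are illustrative choices of ours, not those used in Section \ref{sec:experiments}:

```python
import numpy as np

def jr(x, r, p):
    # single-valued duality map on ell^r with gauge t -> t^(p-1):
    # J_p(x) = ||x||_r^(p-r) |x|^(r-1) sign(x), cf. Example (iii)
    nrm = np.linalg.norm(x, r)
    return np.zeros_like(x) if nrm == 0 else nrm**(p - r) * np.abs(x)**(r - 1) * np.sign(x)

rng = np.random.default_rng(1)
N, m, n = 5, 2, 4                  # N row blocks A_i of size m x n (toy sizes)
A = rng.standard_normal((N, m, n))
x_true = rng.standard_normal(n)
y = A @ x_true                     # exact data, y_i = A_i x_true

r, p = 1.3, 2.0                    # X = ell^r (2-convex for r <= 2), p = 2
rs = r / (r - 1.0)                 # conjugate exponent: X* = ell^{r*}
x = np.zeros(n)                    # x_0 = 0, so J_p(x_0) = 0 lies in closure(range(A*))

for k in range(50000):
    i = rng.integers(N)                         # uniform random index, independent of x_k
    g = A[i].T @ (A[i] @ x - y[i])              # g = A_i* J_2^Y (A_i x - y_i), J_2^Y = id
    xi = jr(x, r, p) - 0.1 / (k + 1)**0.51 * g  # dual-space (mirror) step
    x = jr(xi, rs, p / (p - 1.0))               # back to X via (J_p^X)^{-1} = J_{p*}^{X*}

res = np.linalg.norm((A @ x - y).ravel())
assert res < 0.2 * np.linalg.norm(y.ravel())
```

The step-size schedule $\mu_k\sim k^{-0.51}$ satisfies $\sum_k\mu_k=\infty$ and $\sum_k\mu_k^{p^\ast}<\infty$ for $p^\ast=2$, as required by the convergence theory below.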
\begin{remark}\label{rmk:product}
In the model \eqref{eqn:Kaczmarz_model}, if $\CY$ admits a complemented sum $\CY=\sum_{i=1}^N\CY_i$, we can take the {\rm(}internal\,{\rm)} direct sum $(\oplus_{i=1}^N \CY_i, \ell^r)$, so that ${\boldsymbol{\mathsf{y}}}={\boldsymbol{\mathsf{y}}}_1+\ldots+{\boldsymbol{\mathsf{y}}}_N$ and the corresponding norm
$\|{\boldsymbol{\mathsf{y}}}\|=\|(\|{\rm Proj}_{\CY_1}({\boldsymbol{\mathsf{y}}}) \|_{\CY_1},\ldots, \|{\rm Proj}_{\CY_N}({\boldsymbol{\mathsf{y}}}) \|_{\CY_N})\|_r$.
With this identification the spaces $(\otimes_{i=1}^N\CY_i, \ell^r)$ and $(\oplus_{i=1}^N \CY_i, \ell^r)$ are isometrically isomorphic \cite{Unser:2022} and the norms are equivalent for all $r\ge1$.
\end{remark}
{We now collect some useful properties of the objective $\Psi$ and the Bregman distance. Throughout, $L_{\max}=\max_{i\in[N]}\|{\mathbf{A}}_i\|.$ Note that the constant $C_N$ in Lemma \ref{lem:Kaczmarz-basic} below equals $1/N$ if $\CY=\CL^p(\Omega)$.
\begin{lemma}\label{lem:Kaczmarz-basic}
For all $i\in[N]$, ${\boldsymbol{\mathsf{x}}}\in\CX$, and any $\widehat{\boldsymbol{\mathsf{x}}}\in\mathcal{X}_{\min}$ (such that ${\mathbf{A}}\widehat{\boldsymbol{\mathsf{x}}}={\boldsymbol{\mathsf{y}}}$), we have
\begin{align}\label{eqn:id-subgrad}
\DP{\partial\Psi_i({\boldsymbol{\mathsf{x}}})}{{\boldsymbol{\mathsf{x}}}-\widehat{\boldsymbol{\mathsf{x}}}} = p\Psi_i({\boldsymbol{\mathsf{x}}})\quad \text{and}\quad \DP{\partial\Psi({\boldsymbol{\mathsf{x}}})}{{\boldsymbol{\mathsf{x}}}-\widehat{\boldsymbol{\mathsf{x}}}} = p\Psi({\boldsymbol{\mathsf{x}}}).
\end{align}
Moreover, $\Psi_i({\boldsymbol{\mathsf{x}}})\leq \frac{\|{\mathbf{A}}_i\|^p}{C_p}\bregman{{\boldsymbol{\mathsf{x}}}}{\widehat{\boldsymbol{\mathsf{x}}}}$, $\Psi({\boldsymbol{\mathsf{x}}})\leq \frac{L_{\max}^p}{C_p}\bregman{{\boldsymbol{\mathsf{x}}}}{\widehat{\boldsymbol{\mathsf{x}}}}$, and for some $C_N>0$ we have
$\Psi({\boldsymbol{\mathsf{x}}})\geq \frac{C_N}{p}\yN{{\mathbf{A}}{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}}^p.$
\end{lemma}
\begin{proof}
It follows from the identity ${\mathbf{A}}\widehat{\boldsymbol{\mathsf{x}}}={\boldsymbol{\mathsf{y}}}$ that
\begin{align*}
\DP{\partial\Psi_i({\boldsymbol{\mathsf{x}}})}{{\boldsymbol{\mathsf{x}}}-\widehat{\boldsymbol{\mathsf{x}}}} = \DP{{\mathbf{A}}_i^\ast \svaldmapY{p}({\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i)}{{\boldsymbol{\mathsf{x}}}-\widehat{\boldsymbol{\mathsf{x}}}}=\DP{ \svaldmapY{p}({\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i)}{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i} = p \Psi_i({\boldsymbol{\mathsf{x}}}).
\end{align*}
Since $\partial\Psi({\boldsymbol{\mathsf{x}}})=\frac{1}{N}\sum_{i=1}^N \partial\Psi_i({\boldsymbol{\mathsf{x}}})$, the second identity in \eqref{eqn:id-subgrad} follows from the linearity of the duality pairing.
By the $p$-convexity of the space $\CX$ and Theorem \ref{thm:bregman_properties}(iii), we get
\begin{align*}
\Psi_i({\boldsymbol{\mathsf{x}}})=\frac{1}{p}\yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^p =\frac{1}{p}\yN{{\mathbf{A}}_i({\boldsymbol{\mathsf{x}}}-\widehat{\boldsymbol{\mathsf{x}}})}^p\leq \frac{\|{\mathbf{A}}_i\|^p}{p}\xN{{\boldsymbol{\mathsf{x}}}-\widehat{\boldsymbol{\mathsf{x}}}}^p\leq \frac{\|{\mathbf{A}}_i\|^p}{C_p}\bregman{{\boldsymbol{\mathsf{x}}}}{\widehat{\boldsymbol{\mathsf{x}}}}.
\end{align*}
The second claim follows since $\Psi({\boldsymbol{\mathsf{x}}})=\frac{1}{N}\sum_{i=1}^N\Psi_i({\boldsymbol{\mathsf{x}}})$.
Lastly, by the equivalence of the $\ell^r$ norms on $\bbR^N$, cf. \eqref{eqn:Norm-Y}, there exists $C_N>0$ such that
\begin{align*}
\Psi({\boldsymbol{\mathsf{x}}})=\frac{1}{N}\sum_{i=1}^N\frac{1}{p}\yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^p\geq \frac{C_N}{p}\yN{{\mathbf{A}}{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}}^p.
\end{align*}
\end{proof}}
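The identity \eqref{eqn:id-subgrad} is easy to sanity-check numerically in the Euclidean setting $p=2$, $\CY_i=\bbR^m$; the sketch below (a toy setup of our own) verifies it for a random $\widehat{\boldsymbol{\mathsf{x}}}$ with ${\mathbf{A}}\widehat{\boldsymbol{\mathsf{x}}}={\boldsymbol{\mathsf{y}}}$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, n, p = 6, 3, 5, 2.0
A = rng.standard_normal((N, m, n))
xhat = rng.standard_normal(n)
y = A @ xhat                      # A xhat = y, so xhat is an exact solution
x = rng.standard_normal(n)

def jy(v):                        # J_p^Y on Euclidean Y_i, gauge t -> t^(p-1)
    nv = np.linalg.norm(v)
    return np.zeros_like(v) if nv == 0 else nv**(p - 2) * v

# Psi_i(x) = ||A_i x - y_i||^p / p and its (sub)gradient A_i* J_p^Y(A_i x - y_i)
res = A @ x - y
Psi = np.array([np.linalg.norm(res_i)**p / p for res_i in res])
grads = np.array([A[i].T @ jy(res[i]) for i in range(N)])

# identity: <dPsi_i(x), x - xhat> = p Psi_i(x), and its averaged version
assert np.allclose(grads @ (x - xhat), p * Psi)
assert np.isclose(np.mean(grads @ (x - xhat)), p * np.mean(Psi))
```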
We now focus on the convergence study of the iterations \eqref{eqn:sgd}, without and with noise in the data, and discuss convergence rates under conditional stability.
\subsection{Convergence for the Kaczmarz model}
Below the notation $\bbE[\cdot]$ denotes taking the expectation with respect to the sampling of the random indices $i_k$, and $\bbE_k[\cdot]$ denotes taking the conditional expectation with respect to $\mathcal{F}_k$. The remaining variables, e.g. ${\boldsymbol{\mathsf{x}}}$ and ${\boldsymbol{\mathsf{y}}}$, are measurable with respect to the underlying probability space. To study the convergence of SGD \eqref{eqn:sgd}, we first establish
a descent property in terms of the Bregman distance.
\begin{lemma}\label{lem:descent_property}
Let Assumption \ref{ass:space-basic} hold. For any $\widehat{\boldsymbol{\mathsf{x}}}\in\CX$, the iterates in \eqref{eqn:sgd} satisfy
\begin{align}\label{eqn:descent_property}
\bregman{{\boldsymbol{\mathsf{x}}}_{k+1}}{\widehat{\boldsymbol{\mathsf{x}}}}\leq\bregman{{\boldsymbol{\mathsf{x}}}_k}{\widehat{\boldsymbol{\mathsf{x}}}}-\mu_{k+1}\DP{{\boldsymbol{\mathsf{g}}}_{k+1}}{{\boldsymbol{\mathsf{x}}}_k-\widehat{\boldsymbol{\mathsf{x}}}} + \frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast}\xsN{{\boldsymbol{\mathsf{g}}}_{k+1}}^{p^\ast}.
\end{align}
\end{lemma}
\begin{proof}
Let $\Delta_k:=\bregman{{\boldsymbol{\mathsf{x}}}_k}{\widehat{\boldsymbol{\mathsf{x}}}}$.
By Definition \ref{defn:Bregman_distance} and expression \eqref{eqn:sgd}, we have
\begin{align*}
\Delta_{k+1}&=\frac{1}{p}\xN{\widehat{\boldsymbol{\mathsf{x}}}}^p+\frac{1}{p^\ast}\xN{{\boldsymbol{\mathsf{x}}}_{k+1}}^{p}-\DP{\dmapX{p}({\boldsymbol{\mathsf{x}}}_{k+1})}{\widehat{\boldsymbol{\mathsf{x}}}}\\
&{=\frac{1}{p}\xN{\widehat{\boldsymbol{\mathsf{x}}}}^p+\frac{1}{p^\ast}\xN{\dmapXs{p}\LRR{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k) - \mu_{k+1}{\boldsymbol{\mathsf{g}}}_{k+1}}}^{p}-\DP{\dmapX{p}({\boldsymbol{\mathsf{x}}}_{k+1})}{\widehat{\boldsymbol{\mathsf{x}}}}}.
\end{align*}
{Using Definition \ref{defn:duality_map}, the identity $p(p^\ast-1)=p^\ast$ and Theorem \ref{rem:dmap_properties}(iii), we deduce
\begin{align*}
\Delta_{k+1}&=\frac{1}{p}\xN{\widehat{\boldsymbol{\mathsf{x}}}}^p+\frac{1}{p^\ast}\xsN{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)-\mu_{k+1}{\boldsymbol{\mathsf{g}}}_{k+1}}^{p(p^\ast-1)}-\DP{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)-\mu_{k+1}{\boldsymbol{\mathsf{g}}}_{k+1}}{\widehat{\boldsymbol{\mathsf{x}}}}\\
&=\frac{1}{p}\xN{\widehat{\boldsymbol{\mathsf{x}}}}^p+\frac{1}{p^\ast}\xsN{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)-\mu_{k+1}{\boldsymbol{\mathsf{g}}}_{k+1}}^{p^\ast}-\DP{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)-\mu_{k+1}{\boldsymbol{\mathsf{g}}}_{k+1}}{\widehat{\boldsymbol{\mathsf{x}}}}.
\end{align*}}
Since $\CX$ is $p$-convex, $\CX^\ast$ is $p^\ast$-smooth, cf. Theorem \ref{rem:dmap_properties}(i).
By \cite[Corollary 5.8]{C_09}, this implies
\[ \frac{1}{p^\ast}\xsN{{\boldsymbol{\mathsf{x}}}^\ast -\tilde{\boldsymbol{\mathsf{x}}}^\ast}^{p^\ast}\leq\frac{1}{p^\ast}\xsN{{\boldsymbol{\mathsf{x}}}^\ast}^{p^\ast}
+\frac{G_{p^\ast}}{p^\ast}\xsN{\tilde{\boldsymbol{\mathsf{x}}}^\ast}^{p^\ast}-\DP{\dmapXs{p}({\boldsymbol{\mathsf{x}}}^\ast)}{\tilde{\boldsymbol{\mathsf{x}}}^\ast},\quad \forall {\boldsymbol{\mathsf{x}}}^\ast,\tilde {\boldsymbol{\mathsf{x}}}^\ast\in\CX^\ast.
\]
Using the identities $p^\ast(p-1)=p$ and $(\dmapX{p})^{-1}=\dmapXs{{p}}$, cf. Theorem \ref{rem:dmap_properties}(iii), we get
\begin{align*}\frac{1}{p^\ast}\xsN{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)-\mu_{k+1}{\boldsymbol{\mathsf{g}}}_{k+1}}^{p^\ast}&\leq \frac{1}{p^\ast}\xsN{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)}^{p^\ast} + \frac{G_{p^\ast}}{p^\ast}\xsN{\mu_{k+1}{\boldsymbol{\mathsf{g}}}_{k+1}}^{p^\ast} - \DP{\mu_{k+1}{\boldsymbol{\mathsf{g}}}_{k+1}}{{\boldsymbol{\mathsf{x}}}_k}\\
&= \frac{1}{p^\ast}\xN{{\boldsymbol{\mathsf{x}}}_k}^{p} + \frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast}\xsN{{\boldsymbol{\mathsf{g}}}_{k+1}}^{p^\ast} - \mu_{k+1}\DP{{\boldsymbol{\mathsf{g}}}_{k+1}}{{\boldsymbol{\mathsf{x}}}_k}.
\end{align*}
Combining the preceding estimates gives the desired assertion through
\begin{align*}
\Delta_{k+1} &\leq\frac{1}{p}\xN{\widehat{\boldsymbol{\mathsf{x}}}}^p+\frac{1}{p^\ast}\xN{{\boldsymbol{\mathsf{x}}}_k}^{p}-\DP{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)}{\widehat{\boldsymbol{\mathsf{x}}}}+\frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast}\xsN{{\boldsymbol{\mathsf{g}}}_{k+1}}^{p^\ast} - \mu_{k+1}\DP{{\boldsymbol{\mathsf{g}}}_{k+1}}{{\boldsymbol{\mathsf{x}}}_k-\widehat{\boldsymbol{\mathsf{x}}}}\\
&=\Delta_k-\mu_{k+1}\DP{{\boldsymbol{\mathsf{g}}}_{k+1}}{{\boldsymbol{\mathsf{x}}}_k-\widehat{\boldsymbol{\mathsf{x}}}} + \frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast}\xsN{{\boldsymbol{\mathsf{g}}}_{k+1}}^{p^\ast}.
\end{align*}
\end{proof}
Lemma \ref{lem:descent_property} allows us to show that the sequence of Bregman distances $(\bregman{{\boldsymbol{\mathsf{x}}}_k}{\widehat{\boldsymbol{\mathsf{x}}}})_{k\in\bbN}$ forms an almost super-martingale {(in the Robbins-Siegmund sense defined below)} for $\widehat{\boldsymbol{\mathsf{x}}}\in\CX_{\min}$ and well chosen step-sizes $(\mu_k)_{k\in\bbN}$.
We then show the almost sure convergence of the iterates using the Robbins-Siegmund theorem.
\begin{theorem}[{Robbins-Siegmund theorem on the convergence of almost super-martingales, \cite[Lemma 11]{P87}}] \label{thm:sub_martingale_convergence}
Consider a filtration $(\CF_k)_{k\in\bbN}$ and four non-negative $(\CF_k)_{k\in\bbN}$-adapted processes $(\alpha_k)_{k\in\bbN}$, $(\beta_k)_{k\in\bbN}$, $(\gamma_k)_{k\in\bbN}$, and $(\delta_k)_{k\in\bbN}$.
{Let the sequence $(\alpha_k)_{k\in\bbN}$ be an \emph{almost super-martingale}, i.e. for all $k$ we have
$\bbE_k[\alpha_{k+1}]\le(1+\beta_k)\alpha_k+\gamma_k-\delta_k.$}
Then the sequence $(\alpha_k)_{k\in\bbN}$ converges a.s. to a random variable $\alpha_\infty$, and $\sum_{k=1}^\infty\delta_k <\infty$ a.s. {on the set $\{\sum_{k=1}^\infty \beta_k<\infty, \,\sum_{k=1}^\infty \gamma_k<\infty \}$.}
\end{theorem}
Under certain conditions on ${\boldsymbol{\mathsf{x}}}_0$, the limit is the MNS $\xbs^\dagger$.
\begin{theorem}\label{thm:as_lin_convergence}
Let $(\mu_k)_{k\in\bbN}$ satisfy $\sum_{k=1}^\infty\mu_{k}=\infty$ and $\sum_{k=1}^\infty \mu_{k}^{p^\ast} <\infty$, let
Assumption \ref{ass:space-basic} hold, and let ${\boldsymbol{\mathsf{x}}}^\dag$ be the MNS.
{Then the sequence $({\boldsymbol{\mathsf{x}}}_k)_{k\in\bbN}$ converges a.s. to a solution of \eqref{eqn:inv}:
\begin{align*}
\bbP\Big(\lim_{k\rightarrow\infty} \inf_{\widetilde {\boldsymbol{\mathsf{x}}}\in \CX_{\min}}\xN{{\boldsymbol{\mathsf{x}}}_k-\widetilde {\boldsymbol{\mathsf{x}}}}=0\Big)=1.
\end{align*}}
Moreover, if $\dmapX{p}({\boldsymbol{\mathsf{x}}}_0)\in\overline{\range({\mathbf{A}}^\ast)}$,
we have $\lim_{k\rightarrow\infty}\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger}=0$ a.s.
\end{theorem}
\begin{proof}
{By Lemma \ref{lem:Kaczmarz-basic}, we have
$\DP{\partial\Psi({\boldsymbol{\mathsf{x}}}_k)}{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger} = p\Psi({\boldsymbol{\mathsf{x}}}_k).$}
Moreover,
\begin{align*}
\xsN{g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)}=\xsN{{\mathbf{A}}_i^\ast \svaldmapY{p}({\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i)}\leq\|{\mathbf{A}}_i\| \ysN{\svaldmapY{p}({\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i)}\leq L_{\max} \yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^{p-1},
\end{align*}
with $L_{\max}=\max_{i\in[N]}\|{\mathbf{A}}_i\|$. Thus, since $p^\ast(p-1)=p$, we have
\begin{align*}
\bbE\big[\xsN{g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)}^{p^\ast}\big] \leq pL_{\max}^{p^\ast}\frac{1}{N}\sum_{i=1}^N\frac{1}{p} \yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^p= pL_{\max}^{p^\ast}\Psi({\boldsymbol{\mathsf{x}}}).
\end{align*}
Upon taking the conditional expectation $\mathbb{E}_k[\cdot]$ of the descent property \eqref{eqn:descent_property} (with $\widehat{\boldsymbol{\mathsf{x}}}=\xbs^\dagger$), and using the measurability of ${\boldsymbol{\mathsf{x}}}_k$ with respect to $\CF_k$, we deduce
\begin{align*}
\bbE_k[\Delta_{k+1}] &\leq \Delta_{k}-p\mu_{k+1}\Psi({\boldsymbol{\mathsf{x}}}_k)+pL_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast}\Psi({\boldsymbol{\mathsf{x}}}_k).
\end{align*}
{Using Lemma \ref{lem:Kaczmarz-basic} again we have $\Psi({\boldsymbol{\mathsf{x}}}_k)\leq \frac{L_{\max}^p}{C_p}\Delta_k$,}
which yields
\begin{align*}
\bbE_k[\Delta_{k+1}] &\leq \LRR{1+L_{\max}^{p^\ast+p}\frac{p}{C_p}\frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast}}\Delta_{k}-p\mu_{k+1}\Psi({\boldsymbol{\mathsf{x}}}_k).
\end{align*}
Since $\sum_{k=1}^\infty\mu_{k}^{p^\ast}<\infty$ by assumption, we can apply Theorem \ref{thm:sub_martingale_convergence}
and deduce that the sequence $(\Delta_k)_{k\in\bbN}$ converges a.s. to a random variable $\Delta_\infty$ and
$\sum_{k=0}^\infty\mu_{k+1}\Psi({\boldsymbol{\mathsf{x}}}_k)<\infty$ a.s.
{Let $\Omega$ be the measurable set on which $(\Delta_k)_{k\in\bbN}$ converges and $\sum_{k=0}^\infty\mu_{k+1}\Psi({\boldsymbol{\mathsf{x}}}_k)<\infty$; by the above, $\bbP(\Omega)=1$.}
Next we show $\liminf_{k} \Psi({\boldsymbol{\mathsf{x}}}_k) =0$ a.s. {Consider an outcome $\omega$ for which this is not the case, i.e. for which} $\liminf_{k} \Psi({\boldsymbol{\mathsf{x}}}_k) >0$.
Then there exist $\epsilon>0$ and $k_\epsilon\in\bbN$ such that for all $k\geq k_\epsilon$, $\Psi({\boldsymbol{\mathsf{x}}}_k)\geq \epsilon$, giving
$\sum_{k\geq k_\epsilon} \mu_{k+1} \Psi({\boldsymbol{\mathsf{x}}}_k) \geq \epsilon \sum_{k\geq k_\epsilon} \mu_{k+1}$.
For any $\omega\in\Omega$ this leads to a contradiction: the right hand side diverges (since $\sum_{k=1}^\infty\mu_{k}=\infty$ by assumption), whereas the left hand side is the remainder of a convergent series. Hence $\omega\notin\Omega$, and since $\bbP(\Omega^c)=0$, we conclude $\liminf_k \Psi({\boldsymbol{\mathsf{x}}}_k)=0$ a.s.
On every event on which $\liminf_k \Psi({\boldsymbol{\mathsf{x}}}_k)=0$ holds, we can then find a subsequence $({\boldsymbol{\mathsf{x}}}_{n_k})_{k\in\bbN}$ such that
$\lim_{k\rightarrow\infty} \Psi({\boldsymbol{\mathsf{x}}}_{n_k})=0$.
Define also $\widehat\Psi({\boldsymbol{\mathsf{x}}})=\sum_{i=1}^N \widehat\Psi_i({\boldsymbol{\mathsf{x}}})$, with $\widehat\Psi_i({\boldsymbol{\mathsf{x}}})=\yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}$.
We have $\liminf_k \widehat\Psi({\boldsymbol{\mathsf{x}}}_k)=0$ and $\lim_{j\rightarrow\infty} \widehat\Psi({\boldsymbol{\mathsf{x}}}_{n_j})=0$ (along the same subsequence), since, by the equivalence of the $\ell^1$ and $\ell^p$ norms on $\bbR^N$ (a consequence of H\"older's inequality),
\begin{align*}
\Big(\sum_{i=1}^N \yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^p\Big)^{1/p}\leq\sum_{i=1}^N \yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}\leq N\Big(\frac{1}{N}\sum_{i=1}^N \yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^p\Big)^{1/p}.
\end{align*}
Moreover, $\widehat\Psi({\boldsymbol{\mathsf{x}}})^p \leq pN^p \Psi({\boldsymbol{\mathsf{x}}})$.
{The following argument is understood pointwise on the a.s. set $\Omega$ where $(\Delta_k)_{k\in\bbN}$ converges, $\sum_{k=0}^\infty\mu_{k+1}\Psi({\boldsymbol{\mathsf{x}}}_k)<\infty$, and $\liminf_k \widehat\Psi({\boldsymbol{\mathsf{x}}}_k)=0$}.
Since $(\Delta_k)_{k\in\bbN}$ converges, it is bounded.
By the coercivity of the Bregman distance (see Lemma \ref{lem:bregman_bound_xk_bound}), so are $({\boldsymbol{\mathsf{x}}}_k)_{k\in\bbN}$ and $(\dmapX{p}({\boldsymbol{\mathsf{x}}}_k))_{k\in\bbN}$.
By passing to a further subsequence of $({\boldsymbol{\mathsf{x}}}_{n_k})_{k\in\bbN}$, denoted the same, we may assume that
$(\xN{{\boldsymbol{\mathsf{x}}}_{n_k}})_{k\in\bbN}$ is convergent, $(\dmapX{p}({\boldsymbol{\mathsf{x}}}_{n_k}))_{k\in\bbN}$ is weakly convergent, and
\begin{align} \label{eqn:monotonic_subseq}
\lim_{k\rightarrow\infty} \widehat\Psi({\boldsymbol{\mathsf{x}}}_{n_k})=0\quad \text{and}\quad \widehat\Psi({\boldsymbol{\mathsf{x}}}_{n_k}) \leq \widehat\Psi({\boldsymbol{\mathsf{x}}}_{n}) \text{ for all } n <n_k.
\end{align}
The latter can be obtained by setting $n_1=1$ and then recursively defining $n_{k+1}=\min\{n> n_k: \widehat\Psi({\boldsymbol{\mathsf{x}}}_n)\leq \widehat\Psi({\boldsymbol{\mathsf{x}}}_{n_k})/2 \}$, for $k\in\bbN$. Any further subsequence satisfies the same property.
Using {Theorem \ref{thm:bregman_properties}(ii),}
we have for $k>\ell$
\begin{align*}
\bregman{{\boldsymbol{\mathsf{x}}}_{n_\ell}}{{\boldsymbol{\mathsf{x}}}_{n_k}} &\!=\!\frac{1}{p^\ast} \!\Big(\!\xN{{\boldsymbol{\mathsf{x}}}_{n_\ell}}^p\!-\!\xN{{\boldsymbol{\mathsf{x}}}_{n_k}}^p\!\Big) \!+ \!\DP{\!\dmapX{p}({\boldsymbol{\mathsf{x}}}_{n_k}\!)\!-\!\dmapX{p}({\boldsymbol{\mathsf{x}}}_{n_\ell}\!)}{\xbs^\dagger\!}\!+\!\DP{\!\dmapX{p}({\boldsymbol{\mathsf{x}}}_{n_k}\!)\!-\!\dmapX{p}({\boldsymbol{\mathsf{x}}}_{n_\ell}\!)}{{\boldsymbol{\mathsf{x}}}_{n_k}\!\!-\!\xbs^\dagger\!}.
\end{align*}
Since the first two terms involve Cauchy sequences, it suffices to treat the last term, denoted by ${\rm I}_{k,\ell}$.
Using a telescoping sum and the iterate update rule, we have
\begin{align*}
{\rm I}_{k,\ell}&=\sum_{n=n_\ell}^{n_k-1} \DP{\dmapX{p}({\boldsymbol{\mathsf{x}}}_{n+1})-\dmapX{p}({\boldsymbol{\mathsf{x}}}_{n})}{{\boldsymbol{\mathsf{x}}}_{n_k}-\xbs^\dagger}=-\sum_{n=n_\ell}^{n_k-1} \mu_{n+1}\DP{{\mathbf{A}}_{i_{n+1}}^\ast\svaldmapY{p}({\mathbf{A}}_{i_{n+1}}{\boldsymbol{\mathsf{x}}}_n-{\boldsymbol{\mathsf{y}}}_{i_{n+1}})}{{\boldsymbol{\mathsf{x}}}_{n_k}-\xbs^\dagger}\\
&=-\sum_{n=n_\ell}^{n_k-1} \mu_{n+1}\DP{\svaldmapY{p}({\mathbf{A}}_{i_{n+1}}{\boldsymbol{\mathsf{x}}}_n-{\boldsymbol{\mathsf{y}}}_{i_{n+1}})}{{\mathbf{A}}_{i_{n+1}}{\boldsymbol{\mathsf{x}}}_{n_k}-{\boldsymbol{\mathsf{y}}}_{i_{n+1}}}.
\end{align*}
By the Cauchy-Schwarz inequality and properties of the duality map, we get
\begin{align*}
|{\rm I}_{k,\ell}|&\leq\sum_{n=n_\ell}^{n_k-1}\!\! \mu_{n+1}\!\yN{{\mathbf{A}}_{i_{n+1}}\!{\boldsymbol{\mathsf{x}}}_n\!-\!{\boldsymbol{\mathsf{y}}}_{i_{n+1}}}^{p-1}\yN{{\mathbf{A}}_{i_{n+1}}{\boldsymbol{\mathsf{x}}}_{n_k}\!-\!{\boldsymbol{\mathsf{y}}}_{i_{n+1}}\!}
\leq\!\sum_{n=n_\ell}^{n_k-1}\!\mu_{n+1}\widehat\Psi_{i_{n+1}}({\boldsymbol{\mathsf{x}}}_{n})^{p-1}\widehat\Psi_{i_{n+1}}({\boldsymbol{\mathsf{x}}}_{n_k}).
\end{align*}
Since $\widehat\Psi_i({\boldsymbol{\mathsf{x}}})\leq \widehat\Psi({\boldsymbol{\mathsf{x}}})$, for all $i\in[N]$, we use \eqref{eqn:monotonic_subseq} and get
\begin{align*}
|{\rm I}_{k,\ell}|&\leq\sum_{n=n_\ell}^{n_k-1} \mu_{n+1}\widehat\Psi({\boldsymbol{\mathsf{x}}}_{n})^{p-1}\widehat\Psi({\boldsymbol{\mathsf{x}}}_{n_k})\leq\sum_{n=n_\ell}^{n_k-1} \mu_{n+1}\widehat\Psi({\boldsymbol{\mathsf{x}}}_{n})^{p}.
\end{align*}
Since $\widehat\Psi({\boldsymbol{\mathsf{x}}})^{p}\le pN^p \Psi({\boldsymbol{\mathsf{x}}})$ and $\sum_{n=0}^\infty\mu_{n+1}\Psi({\boldsymbol{\mathsf{x}}}_n)<\infty$ on $\Omega$, the right hand side of the inequality converges to $0$ as $\ell\to\infty$.
Therefore, by \cite[Theorem 2.12(e)]{SLS_06}, it follows that $({\boldsymbol{\mathsf{x}}}_{n_k})_{k\in\bbN}$ is a Cauchy sequence, and thus converges strongly to an $\widehat{\boldsymbol{\mathsf{x}}}$ such that $\Psi(\widehat{\boldsymbol{\mathsf{x}}})=0$.
{The above argument showing the a.s. convergence of
$(\Delta_k)_{k\in\bbN}$ can be applied pointwise with any solution in place of $\xbs^\dagger$.
Namely, on the event where $({\boldsymbol{\mathsf{x}}}_{n_k})_{k\in\bbN}$ converges strongly to an $\widehat {\boldsymbol{\mathsf{x}}}\in\mathcal{X}_{\min}$ (i.e. ${\mathbf{A}}\widehat{\boldsymbol{\mathsf{x}}}={\boldsymbol{\mathsf{y}}}$), define $\widehat\Delta_k:=\bregman{{\boldsymbol{\mathsf{x}}}_k}{\widehat{\boldsymbol{\mathsf{x}}}}$.
By repeating the argument using Lemma \ref{lem:Kaczmarz-basic}, we deduce
\begin{align*}
\widehat\Delta_{k+1} \leq \LRR{1+L_{\max}^{p^\ast+p}\frac{p}{C_p}\frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast}}\widehat\Delta_k - p\mu_{k+1}\Psi_{i_{k+1}}({\boldsymbol{\mathsf{x}}}_k).
\end{align*}
Since $\sum_{k=1}^\infty\mu_{k}^{p^\ast}<\infty$, it follows that the (deterministic) sequence $(\widehat\Delta_k)_{k\in\bbN}$ converges to a $\widehat\Delta_\infty\geq0$.
The continuity of the Bregman distance in the first argument (Theorem \ref{thm:bregman_properties}(vi)) gives
$\lim_{j\rightarrow\infty}\bregman{{\boldsymbol{\mathsf{x}}}_{n_j}}{\widehat{\boldsymbol{\mathsf{x}}}}=\bregman{\widehat{\boldsymbol{\mathsf{x}}}}{\widehat{\boldsymbol{\mathsf{x}}}}=0$,
and thus $\widehat\Delta_\infty=0$.
Moreover, by the $p$-convexity of $\CX$ (Theorem \ref{thm:bregman_properties}(iii)), we have
$0\leq \xN{{\boldsymbol{\mathsf{x}}}_k-\widehat{\boldsymbol{\mathsf{x}}}}^p \leq \frac{p}{C_p} \widehat\Delta_k.$
From the squeeze theorem it follows that $\lim_{k\rightarrow\infty}\xN{{\boldsymbol{\mathsf{x}}}_k-\widehat{\boldsymbol{\mathsf{x}}}}=0$.
Thus, on an almost sure set $\Omega$, the sequence $({\boldsymbol{\mathsf{x}}}_k)_{k\in\bbN}$ converges strongly to some minimising solution, that is
\begin{align*}
\bbP\Big(\lim_{k\rightarrow\infty} \inf_{\widetilde {\boldsymbol{\mathsf{x}}}\in \CX_{\min}}\xN{{\boldsymbol{\mathsf{x}}}_k-\widetilde {\boldsymbol{\mathsf{x}}}}=0\Big)=1.
\end{align*}
}
Next assume $\dmapX{p}({\boldsymbol{\mathsf{x}}}_0)\in\overline{\mathrm{range}({\mathbf{A}}^\ast)}$.
From \eqref{eqn:sgd}, it follows that $\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)\in\overline{\range({\mathbf{A}}^\ast)}$ holds for all $k\geq1$.
By the continuity of $\dmapX{p}$, we have $\dmapX{p}(\widehat{\boldsymbol{\mathsf{x}}})\in\overline{\range({\mathbf{A}}^\ast)}$.
Thus, from ${\mathbf{A}}(\widehat{\boldsymbol{\mathsf{x}}}-\xbs^\dagger)=0$ and Lemma \ref{lem:minnorm} it follows $\widehat{\boldsymbol{\mathsf{x}}}=\xbs^\dagger$.
\end{proof}
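To make the roles of the step-size schedule and of the initialisation condition concrete, the following minimal sketch runs the iteration in the Euclidean special case $p=p^\ast=2$, where both duality maps reduce to the identity and the update becomes plain SGD with Kaczmarz-type directions. The matrix, random seed and schedule constants are illustrative choices, not taken from the analysis above.

```python
import numpy as np

rng = np.random.default_rng(0)

# A consistent, underdetermined system: 20 equations, 50 unknowns.
N, d = 20, 50
A = rng.standard_normal((N, d))
x_true = rng.standard_normal(d)
y = A @ x_true

# Reference minimum-norm solution via the pseudoinverse.
x_mns = np.linalg.pinv(A) @ y

# SGD with Kaczmarz-type directions g = A_i^T (A_i x - y_i); in the
# Hilbert case p = p* = 2 both duality maps are the identity.
x = np.zeros(d)               # J_p(x_0) = 0 lies in range(A^T)
mu0, beta = 0.02, 0.51        # polynomial schedule with 1/p* < beta <= 1
for k in range(1, 200_001):
    i = rng.integers(N)
    x -= mu0 * k**(-beta) * (A[i] @ x - y[i]) * A[i]

# The iterates approach the minimum-norm solution, not x_true.
print(np.linalg.norm(x - x_mns), np.linalg.norm(x - x_true))
```

Starting from ${\boldsymbol{\mathsf{x}}}_0=0$ gives $\dmapX{p}({\boldsymbol{\mathsf{x}}}_0)=0\in\overline{\range({\mathbf{A}}^\ast)}$, so the iterates single out the MNS rather than the (non-unique) generator of the data.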
The assumptions and conclusions of Theorem \ref{thm:as_lin_convergence} can be broken down into two parts.
The step-size conditions $\sum_{k=1}^\infty\mu_{k}=\infty$ and $\sum_{k=1}^\infty \mu_{k}^{p^\ast} <\infty$ are required to show the a.s. convergence of $(\bregman{{\boldsymbol{\mathsf{x}}}_k}{\widehat{\boldsymbol{\mathsf{x}}}})_{k\in\bbN}$ to $0$, for some non-deterministic $\widehat{\boldsymbol{\mathsf{x}}}\in \mathcal{X}_{\min}$.
The remaining assumption $\dmapX{p}({\boldsymbol{\mathsf{x}}}_0)\in\overline{\mathrm{range}({\mathbf{A}}^\ast)}$ is needed to identify this limit as the MNS $\xbs^\dagger$, as for the Landweber method \cite[Remark 3.12]{SLS_06}.
If
$\dmapX{p}({\boldsymbol{\mathsf{x}}}_0)\not\in\overline{\mathrm{range}({\mathbf{A}}^\ast)}$, one can typically still establish
convergence to an MNS relative to ${\boldsymbol{\mathsf{x}}}_0$, i.e. a solution
which minimises $\xN{{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{x}}}_0}$, analogously to the Euclidean case \cite{JL19}.
\begin{remark}
The stepsize conditions
$\sum_{k=1}^\infty \mu_{k} =\infty$ and $\sum_{k=1}^\infty \mu_{k}^{p^\ast}<\infty$ are satisfied by a polynomially decaying step-size schedule $(\mu_{k})_{k\in\bbN}=(\mu_0k^{-\beta})_{k\in\bbN}$, with {$\frac{1}{p^\ast}<\beta\le1$}.
\end{remark}
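As a quick numerical illustration of the remark (with the illustrative values $p^\ast=2$, $\mu_0=1$ and $\beta=0.6$, not tied to any particular problem), one can check that the partial sums of $\mu_k$ keep growing while those of $\mu_k^{p^\ast}$ form a Cauchy sequence:

```python
import numpy as np

# Polynomial schedule mu_k = mu0 * k^(-beta); with p* = 2 the remark
# requires 1/p* < beta <= 1, e.g. beta = 0.6.
mu0, beta, p_star = 1.0, 0.6, 2.0
k = np.arange(1, 1_000_001, dtype=float)
mu = mu0 * k**(-beta)

S1 = np.cumsum(mu)            # partial sums of mu_k       -> diverge
S2 = np.cumsum(mu**p_star)    # partial sums of mu_k^{p*}  -> converge

# The first sum keeps growing (like k^{1-beta}) ...
print(S1[9_999], S1[-1])
# ... while the tail increments of the second vanish.
print(S2[-1] - S2[9_999])
```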
Theorem \ref{thm:as_lin_convergence} states sufficient conditions ensuring the a.s. convergence of $(\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger})_{k\in\bbN}$ to $0$. To strengthen this to convergence
in expectation, we require an additional step-size assumption ensuring that $(\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger})_{k\in\bbN}$
is a uniformly integrable super-martingale and, for the last assertion, that the space $\CX$ is uniformly smooth. Note that removing the assumptions of Theorem
\ref{thm:as_lin_convergence} from Theorem \ref{thm:L1_lin_convergence}
would still result in convergence in expectation to
some non-negative random variable, but not necessarily to $0$. Recall that a family $(X_t)_t$ of random variables is uniformly integrable provided $\lim_{k\rightarrow\infty}\sup_{t}\bbE[\|X_t\| \Vone_{\|X_t\|\geq k}]=0$, where $\Vone$ denotes the indicator function.
\begin{theorem}\label{thm:L1_lin_convergence}
Let the conditions of Theorem \ref{thm:as_lin_convergence} hold {with $\dmapX{p}({\boldsymbol{\mathsf{x}}}_0)\in\overline{\range({\mathbf{A}}^\ast)}$} and let
$\mu_{k}^{p^\ast-1}\leq \frac{p^\ast}{G_{p^\ast}L_{\max}^{p^\ast}}$ for all $k\in\bbN$.
Then there holds
$\lim_{k\rightarrow\infty}\bbE[\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger}]=0.$
{Moreover, for $1\leq r\leq p$, we have $\lim_{k\rightarrow\infty} \bbE[\xN{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger}^r]=0$ and if $\CX$ is additionally uniformly smooth, then $\lim_{k\rightarrow\infty} \bbE[\xsN{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)-\dmapX{p}(\xbs^\dagger)}^{p^\ast}]=0$.}
\end{theorem}
\begin{proof}
{The step-size conditions allow applying Lemma \ref{lem:iterate_boundedness}, which yields $\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger}\leq\bregman{{\boldsymbol{\mathsf{x}}}_0}{\xbs^\dagger}$ for all $k$.}
{It follows that $(\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger})_{k\in\bbN}$ is bounded, and is thus uniformly integrable, and by Theorem \ref{thm:as_lin_convergence}, it converges a.s. to $0$.}
Then, by Vitali's convergence theorem \cite[Theorem 4.5.4]{B07}, we deduce that
$(\Delta_k)_{k\in\bbN}$
converges to $0$ in expectation as well.
Using now the $p$-convexity of $\CX$ and the monotonicity of expectation, we have
\begin{align*}
0\leq\frac{C_p}{p}\lim_{k\rightarrow\infty}\bbE[\xN{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger}^p] \leq \lim_{k\rightarrow\infty}\bbE[\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger}]=0.
\end{align*}
By the continuity of the power function and the Lyapunov inequality for $1\leq r {\leq p}$, we have
\begin{align*}
0\leq\lim_{k\rightarrow\infty}\bbE[\xN{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger}^r]\leq\lim_{k\rightarrow\infty}(\bbE[\xN{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger}^p])^{r/p}=0.
\end{align*}
To prove the last claim, we use the uniform smoothness of $\CX$ and Theorem \ref{rem:dmap_properties}(iv) to deduce
\begin{align*}
\xsN{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)-\dmapX{p}(\xbs^\dagger)}^{p^\ast}\leq C \max\{1, \xN{{\boldsymbol{\mathsf{x}}}_k}, \xN{\xbs^\dagger}\}^{p}\, {\overline{\rho}_\CX}(\xN{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger})^{p^\ast},
\end{align*}
where $\overline{\rho}_\CX(\tau)=\rho_\CX(\tau)/\tau$ is a modulus of smoothness function satisfying $\overline{\rho}_\CX(\tau)\leq1$ and $\lim_{\tau\rightarrow0}\overline{\rho}_\CX(\tau)=0$, cf. Definition \ref{defn:smoothness_and_convexity}.
By Lemmas \ref{lem:iterate_boundedness} and \ref{lem:bregman_bound_xk_bound},
the sequence $(\xN{{\boldsymbol{\mathsf{x}}}_k}^p)_{k\in\bbN}$ is uniformly bounded, so the sequence $(\xsN{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)-\dmapX{p}(\xbs^\dagger)}^{p^\ast})_{k\in\bbN}$ is bounded and thus uniformly integrable.
Since $\lim_{k\rightarrow\infty}\bbE[\xN{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger}]=0$, it follows that $\xN{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger}$ converges to $0$ in probability, and thus by the continuous mapping theorem ${\overline{\rho}_\CX}(\xN{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger})^{p^\ast}$ also converges to $0$ in probability.
Applying Vitali's theorem to the uniformly integrable sequence $(\xsN{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)-\dmapX{p}(\xbs^\dagger)}^{p^\ast})_{k\in\bbN}$ yields that it converges to $0$ in expectation, and the claim follows.
\end{proof}
\begin{remark}
Note that the condition $\dmapX{p}({\boldsymbol{\mathsf{x}}}_0)\in\overline{\range({\mathbf{A}}^\ast)}$ on ${\boldsymbol{\mathsf{x}}}_0$ is crucial for ensuring that all the limits are the same.
Landweber iterations converge for uniformly convex and smooth $\CX$, and any Banach
space $\CY$ \cite[Theorem 3.3]{SLS_06}. In our analysis, we have assumed that $\CX$ is $p$-convex for two reasons. First, $p$-convexity is used in the proof of Lemma \ref{lem:descent_property}.
If $\CX$ were only uniformly convex (and $\CX^\ast$ only uniformly smooth),
then we may use the modulus of smoothness function $\rho_{\mathcal{X}}$, cf. Definition \ref{defn:smoothness_and_convexity}
and \cite[Theorem 2.41]{TBHK_12}, to establish a suitable analogue of the descent
property \eqref{eqn:descent_property}. Second, $p$-convexity is used in the proof of Theorem \ref{thm:as_lin_convergence}, allowing a more direct application of the Robbins-Siegmund theorem by relating the objective
values to Bregman distances. Meanwhile, the Landweber method in \cite{SLS_06} requires
step-sizes that depend on the modulus of smoothness, the current iterate and the objective
value, which is more restrictive than the step-size choice in this work.
\end{remark}
\subsection{Convergence analysis for the generalised Kaczmarz model}\label{sec:further_kaczmarz}
Sch\"opfer et al \cite{SLS_06} studied general powers of
the Banach space norm and sub-gradients of the form $\partial(\frac{1}{q}\yN{{\mathbf{A}}\cdot-{\boldsymbol{\mathsf{y}}}}^q)({\boldsymbol{\mathsf{x}}})$.
Now we take an analogous perspective for the objective
\begin{equation*}
\Psi({\boldsymbol{\mathsf{x}}})=\frac{1}{N}\sum_{i=1}^N \Psi_i({\boldsymbol{\mathsf{x}}}),\quad\mbox{with }\Psi_i({\boldsymbol{\mathsf{x}}}) := \frac{1}{q}\yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^q,
\end{equation*}
with $1<q\leq 2$. {This model is herein called the generalised Kaczmarz model. (Note that this is different from the randomised extended Kaczmarz method \cite{Zouzias:2013}.)} We shall show the convergence of SGD with stochastic directions
\begin{align}\label{eqn:q_kaczmarz}
g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)={\mathbf{A}}_i^\ast \svaldmapY{q}({\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i)= \partial(\tfrac{1}{q}\yN{{\mathbf{A}}_i\cdot-{\boldsymbol{\mathsf{y}}}_i}^q)({\boldsymbol{\mathsf{x}}}).
\end{align}
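As a concrete sketch of these directions, consider the finite-dimensional Euclidean setting with $p=2$ (identity duality maps on the $\CX$ side) and row blocks ${\mathbf{A}}_i$, where the single-valued duality map acts as $j_q(r)=\|r\|^{q-2}r$; the system, the step-size schedule and the exponent $q=1.5$ below are illustrative assumptions:

```python
import numpy as np

def kaczmarz_q_direction(A_i, x, y_i, q):
    """Direction A_i^* j_q(A_i x - y_i) for a Euclidean block, where
    j_q(r) = ||r||^(q-2) r is the duality map with gauge t^(q-1)."""
    r = A_i @ x - y_i
    nrm = np.linalg.norm(r)
    if nrm == 0.0:                 # the sub-gradient is 0 at an exact fit
        return np.zeros_like(x)
    return nrm**(q - 2.0) * (A_i.T @ r)

rng = np.random.default_rng(1)
m, d, q = 40, 15, 1.5              # consistent overdetermined system
A = rng.standard_normal((m, d))
x_true = rng.standard_normal(d)
y = A @ x_true

x = np.zeros(d)
for k in range(1, 50_001):         # decaying steps, as in the theory
    i = rng.integers(m)
    x -= 0.1 * k**(-0.6) * kaczmarz_q_direction(A[i:i+1], x, y[i:i+1], q)

print(np.linalg.norm(A @ x - y))   # the residual shrinks towards 0
```

For $q<2$ the direction is damped by $\|r\|^{q-2}$, so steps become sign-like for large residuals, which is the practical appeal of this model.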
The descent property \eqref{eqn:descent_property} is unaffected, and a direct computation again yields
\begin{equation}\label{eqn:descent_property-2}
\bregman{{\boldsymbol{\mathsf{x}}}_{k+1}}{\xbs^\dagger}\leq\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger}-\mu_{k+1}\DP{{\boldsymbol{\mathsf{g}}}_{k+1}}{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger} + \frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast}\xsN{{\boldsymbol{\mathsf{g}}}_{k+1}}^{p^\ast}.
\end{equation}
However, the Robbins-Siegmund theorem cannot be applied directly.
Instead, we pursue a different proof strategy by first establishing the uniform boundedness of iterates.
\begin{lemma}\label{lem:bounded_q_kaczmarz}
Let Assumption \ref{ass:space-basic} hold.
Consider SGD with descent directions \eqref{eqn:q_kaczmarz} for $1<q\leq 2$, and assume that $\mu_{k}^{p^\ast-1}<\frac{p^\ast}{G_{p^\ast}L_{\max}^{p^\ast}}$ holds for all $k\in\bbN$ and $\sum_{k=1}^\infty\mu_{k}^{p^\ast}=:\Gamma<\infty$.
Then $(\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger})_{k\in\bbN}$ and $({\boldsymbol{\mathsf{x}}}_k)_{k\in\bbN}$ are uniformly bounded.
\end{lemma}
\begin{proof}
Let $\overline{\Psi}_i({\boldsymbol{\mathsf{x}}})=\yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^q$, and $\Delta_k=\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger}$. Then we have
$\DP{{\boldsymbol{\mathsf{g}}}_{k+1}}{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger}=\overline{\Psi}_{i_{k+1}}({\boldsymbol{\mathsf{x}}}_k)$ and
\begin{align*}
\xsN{{\boldsymbol{\mathsf{g}}}_{k+1}}^{p^\ast}&=\xsN{{\mathbf{A}}_{i_{k+1}}^\ast \svaldmapY{q}({\mathbf{A}}_{i_{k+1}}{\boldsymbol{\mathsf{x}}}_k-{\boldsymbol{\mathsf{y}}}_{i_{k+1}})}^{p^\ast} \leq L_{\max}^{p^\ast}\yN{{\mathbf{A}}_{i_{k+1}}{\boldsymbol{\mathsf{x}}}_k-{\boldsymbol{\mathsf{y}}}_{i_{k+1}}}^{p^\ast (q-1)} \\&\leq L_{\max}^{p^\ast}\overline{\Psi}_{i_{k+1}}({\boldsymbol{\mathsf{x}}}_k)^{p^\ast\frac{q-1}{q}}=L_{\max}^{p^\ast}\overline{\Psi}_{i_{k+1}}({\boldsymbol{\mathsf{x}}}_k)^{\frac{p^\ast}{q^\ast}},
\end{align*}
where $ q^\ast\geq2$ is the conjugate exponent of $q$.
Plugging this into \eqref{eqn:descent_property-2} gives
\begin{align}\label{eqn:descent_for_q_kaczmarz}
\Delta_{k+1}\leq\Delta_k-\mu_{k+1}\overline{\Psi}_{i_{k+1}}({\boldsymbol{\mathsf{x}}}_k) + L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast}\overline{\Psi}_{i_{k+1}}({\boldsymbol{\mathsf{x}}}_k)^{\frac{p^\ast}{q^\ast}}.
\end{align}
Since $1<p^\ast\leq 2$ by Theorem \ref{thm:bregman_properties}(iii), and $q^\ast\geq2$, we have $\frac{p^\ast}{q^\ast}\leq1$.
Now we define two sets of indices
\[ \CI = \{j\leq k: \overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j) \geq 1\} \text{ and } \CJ = \{j\leq k: \overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j) <1\},\]
so that $\CI\cap\CJ=\emptyset$ and $\CI\cup\CJ=[k]$. Note that $\CI$ and $\CJ$ depend on the current iterate index $k$. Applying \eqref{eqn:descent_for_q_kaczmarz} recursively gives
\begin{align*}
\Delta_{k+1} &\leq \Delta_0 -\sum_{j=0}^k\mu_{j+1}\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j) + L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\sum_{j=0}^k\mu_{j+1}^{p^\ast}\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j)^{\frac{p^\ast}{q^\ast}}\\
&=\Delta_0 \underbrace{-\sum_{j\in\CI}\mu_{j+1}\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j) + L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\sum_{j\in\CI}\mu_{j+1}^{p^\ast}\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j)^{\frac{p^\ast}{q^\ast}}}_{(\star)}\underbrace{-\sum_{j\in\CJ}\mu_{j+1}\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j)}_{(\star\star)}\\
&\qquad\qquad\qquad\qquad\qquad\quad+\underbrace{L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\sum_{j\in\CJ}\mu_{j+1}^{p^\ast}\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j)^{\frac{p^\ast}{q^\ast}}}_{(\star\star\star)}.
\end{align*}
Next we analyse these three terms separately. First, for $j\in\CI$, we have $\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j)\geq 1$ and, since $\frac{p^\ast}{q^\ast}\leq1$, also $\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j)^{\frac{p^\ast}{q^\ast}}\leq\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j)$, giving
\begin{align*}
(\star)&\!\leq\!-\!\sum_{j\in\CI}\mu_{j+1}\!\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j)\!+\! L_{\max}^{p^\ast}\!\frac{G_{p^\ast}}{p^\ast}\!\sum_{j\in\CI}\mu_{j+1}^{p^\ast}\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j)\\
&=\!-\!\sum_{j\in\CI}\!\Big(\!1\!-\!L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_{j+1}^{p^\ast-1}\!\Big)\!\mu_{j+1}\!\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j).
\end{align*}
Since $\mu_{j+1}^{p^\ast-1}<\frac{p^\ast}{G_{p^\ast}L_{\max}^{p^\ast}}$ holds by assumption,
the term $(\star)$ is non-positive. Moreover, $(\star\star)$ is trivially non-positive.
Since $\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j) <1$ for $j\in\CJ$, the last term $(\star\star\star)$ can be bounded as
\begin{align*}
L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\sum_{j\in\CJ}\mu_{j+1}^{p^\ast}\overline{\Psi}_{i_{j+1}}({\boldsymbol{\mathsf{x}}}_j)^{\frac{p^\ast}{q^\ast}}\leq L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\sum_{j\in\CJ}\mu_{j+1}^{p^\ast}\leq L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\sum_{j=1}^\infty\mu_{j}^{p^\ast}=L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\Gamma.
\end{align*}
By combining the last three bounds on $(\star)$, $(\star\star)$ and $(\star\star\star)$, we get
\[\Delta_{k+1}\leq \Delta_0+L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\Gamma, \text{ for all } k\geq 0.\]
Thus, $(\Delta_k)_{k\in\bbN}$ is uniformly bounded and by Lemma \ref{lem:bregman_bound_xk_bound}, so is $({\boldsymbol{\mathsf{x}}}_k)_{k\in\bbN}$.
\end{proof}
The proof of Lemma \ref{lem:bounded_q_kaczmarz} exposes the challenge in extending the convergence
results to general stochastic directions. Namely, in the proof of Theorem
\ref{thm:as_lin_convergence}, we showed the convergence by taking the conditional expectation of \eqref{eqn:descent_property}, recasting the resulting expression
as an almost super-martingale, and then relating objective values to Bregman distances via $\Psi({\boldsymbol{\mathsf{x}}}_k)\leq C\Delta_k$, for some $C>0$.
Here, using $\frac{q}{q^\ast}=q-1$ and $\frac{p^\ast}{p}=p^\ast-1$, we instead have
\begin{equation*}
\Psi({\boldsymbol{\mathsf{x}}}_k)^{\frac{p^\ast}{q^\ast}}\leq C\Delta_k^{(p^\ast-1)(q-1)}, \quad \mbox{with } C=q^{-\frac{p^\ast}{q^\ast}}L_{\max}^{p^\ast(q-1)}\Big(\frac{p}{C_p}\Big)^{(p^\ast-1)(q-1)},
\end{equation*}
which gives
\begin{align*}
\bbE_k[\Delta_{k+1}]\leq \Delta_k + CL_{\max}^{p^\ast}q^\frac{p^\ast}{q^\ast}\frac{G_{p^\ast}}{p^{\ast}}\mu_{k+1}^{p^\ast}\Delta_k^{(p^\ast-1)(q-1)} -q\mu_{k+1}\Psi({\boldsymbol{\mathsf{x}}}_k).
\end{align*}
Here $0<(p^\ast-1)(q-1)<1$, provided that $p^\ast\neq2$ or $q\neq2$.
Therefore, the Robbins-Siegmund theorem cannot be applied directly. Nonetheless, we still have the following analogue of Theorem \ref{thm:L1_lin_convergence}.
\begin{theorem}\label{thm:L1_q_Kaczmarz_convergence}
Consider iterations \eqref{eqn:sgd} with descent directions \eqref{eqn:q_kaczmarz} for $1<q\leq 2$ and let Assumption \ref{ass:space-basic} hold and $\xbs^\dagger$ be the MNS.
Let the step-sizes $(\mu_k)_{k\in\bbN}$ satisfy $\sum_{k=1}^\infty\mu_{k}=\infty$, $\sum_{k=1}^\infty\mu_{k}^{p^\ast}<\infty$, and $\mu_{k}^{p^\ast-1}<\frac{p^\ast}{G_{p^\ast}L_{\max}^{p^\ast}}$ for all $k\in\bbN$.
{Then the sequence $({\boldsymbol{\mathsf{x}}}_k)_{k\in\bbN}$ converges a.s. to a solution of \eqref{eqn:inv}:
\begin{align*}
\bbP\Big(\lim_{k\rightarrow\infty} \inf_{\widetilde {\boldsymbol{\mathsf{x}}}\in \CX_{\min}}\xN{{\boldsymbol{\mathsf{x}}}_k-\widetilde {\boldsymbol{\mathsf{x}}}}=0\Big)=1.
\end{align*}}
Moreover, if $\dmapX{p}({\boldsymbol{\mathsf{x}}}_0)\in\overline{\mathrm{range}({\mathbf{A}}^\ast)}$, we have
$$\lim_{k\rightarrow\infty}\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger}=0\ \mbox{ a.s.}\quad\mbox{and}\quad \lim_{k\rightarrow\infty}\bbE[\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger}]=0.$$
\end{theorem}
\begin{proof}
To establish the a.s. convergence of iterates, we first take the conditional
expectation of the descent property \eqref{eqn:descent_property-2} and obtain
\begin{equation}\label{eqn:descent-proper-22}
\bbE_k[\Delta_{k+1}]\leq\Delta_k-\mu_{k+1}\DP{\bbE_k[{\boldsymbol{\mathsf{g}}}_{k+1}]}{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger} + \frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast}\bbE_k\big[\xsN{{\boldsymbol{\mathsf{g}}}_{k+1}}^{p^\ast}\big].
\end{equation}
We now have
$\DP{\bbE_k[{\boldsymbol{\mathsf{g}}}_{k+1}]}{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger}=\DP{\partial\Psi({\boldsymbol{\mathsf{x}}}_k)}{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger}=q\Psi({\boldsymbol{\mathsf{x}}}_k)$,
and
\begin{equation*}
\xsN{g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)}\leq L_{\max} \yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^{q-1}.
\end{equation*}
Then taking the conditional expectation of $\xsN{g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)}^{p^\ast}$ yields
\begin{align*}
\bbE\big[\xsN{g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)}^{p^\ast}\big] &\leq L_{\max}^{p^\ast} \bbE\Big[\yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^{p^\ast(q-1)}\Big]=L_{\max}^{p^\ast} \bbE\Big[(\yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^{q})^{\frac{p^\ast}{q^\ast}}\Big].
\end{align*}
We have $0<\frac{p^\ast}{q^\ast}\leq 1$, with the equality achieved only if $p^\ast=q^\ast=2$.
In the case $p^\ast=q^\ast=2$, it trivially follows that $\bbE[\xsN{g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)}^{p^\ast}] \leq qL_{\max}^{p^\ast}\Psi({\boldsymbol{\mathsf{x}}})$.
If $0<\frac{p^\ast}{q^\ast}<1$, by Jensen's inequality, we have
\begin{align*}
\bbE\big[\xsN{g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)}^{p^\ast}\big] &\leq L_{\max}^{p^\ast} \bbE\Big[(\yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^{q})^{\frac{p^\ast}{q^\ast}}\Big]\leq L_{\max}^{p^\ast} (\bbE[\yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i}^{q}])^{\frac{p^\ast}{q^\ast}}\leq L_{\max}^{p^\ast} q^{\frac{p^\ast}{q^\ast}} \Psi({\boldsymbol{\mathsf{x}}})^{\frac{p^\ast}{q^\ast}}.
\end{align*}
Plugging this estimate into the conditional descent property \eqref{eqn:descent-proper-22} yields
\begin{align*}
\bbE_k[\Delta_{k+1}]\leq \Delta_k -q\mu_{k+1}\Psi({\boldsymbol{\mathsf{x}}}_k) + L_{\max}^{p^\ast}q^\frac{p^\ast}{q^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast} \Psi({\boldsymbol{\mathsf{x}}}_k)^{\frac{p^\ast}{q^\ast}}.
\end{align*}
Since the sequence $({\boldsymbol{\mathsf{x}}}_k)_{k\in\bbN}$ is uniformly bounded by Lemma \ref{lem:bounded_q_kaczmarz}, so is $(\Psi({\boldsymbol{\mathsf{x}}}_k))_{k\in\bbN}$, and we thus have
\[
\sum_{k=0}^\infty\mu_{k+1}^{p^\ast} \Psi({\boldsymbol{\mathsf{x}}}_k)^{\frac{p^\ast}{q^\ast}} \leq C \sum_{k=0}^\infty\mu_{k+1}^{p^\ast}<\infty.
\]
Thus, we can apply the Robbins-Siegmund theorem for almost super-martingales, and deduce
that $(\Delta_k)_{k\in\bbN}$ converges a.s. to a non-negative random variable $\Delta_\infty$.
Moreover, $\sum_{k=0}^\infty \mu_{k+1}\Psi({\boldsymbol{\mathsf{x}}}_k)<\infty$ holds a.s. By repeating the argument for Theorem \ref{thm:as_lin_convergence}, there exists a subsequence
$({\boldsymbol{\mathsf{x}}}_{k_j})_{j\in\bbN}$ that a.s. converges to some $\widehat{\boldsymbol{\mathsf{x}}}\in\CX_{\min}$,
and hence $\Delta_\infty=0$, as desired. Moreover, by Lemma \ref{lem:bounded_q_kaczmarz},
the sequence $(\Delta_k)_{k\in\bbN}$ is bounded, and thus uniformly integrable.
Since it converges to $0$ a.s., from Vitali's theorem it follows that $\lim_{k\rightarrow
\infty}\bbE[\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger}]=0$.
\end{proof}
The results in Theorem \ref{thm:L1_q_Kaczmarz_convergence} are similar to those of Theorem \ref{thm:L1_lin_convergence}, but the greater generality of the former is compensated for by an additional step-size assumption ensuring the boundedness of the iterates $({\boldsymbol{\mathsf{x}}}_k)_{k\in\bbN}$.
\subsection{Convergence rates for conditionally stable operators}
Theorem \ref{thm:L1_lin_convergence} states the conditions needed for the convergence of Bregman distances in expectation.
However, it does not provide convergence rates. In order to obtain convergence rates, one needs additional conditions on the MNS $\xbs^\dagger$, which are collectively known as source conditions. One approach is via conditional stability:
for a locally conditionally stable operator, we can extract convergence in expectation and quantify the convergence speed.
Conditional stability is known for many inverse problems for PDEs, and has been used extensively to investigate regularised solutions \cite{ChengYamamoto:2000,EggerHofmann:2018}.
It is useful for analysing ill-posed problems that are locally well-posed, and in the case of a (possibly) non-linear forward operator $F$ it takes the form
\begin{align}\label{eqn:cond_stab_measure}
\xN{{\boldsymbol{\mathsf{x}}}_1-{\boldsymbol{\mathsf{x}}}_2}\leq \Phi(\yN{F({\boldsymbol{\mathsf{x}}}_1)-F({\boldsymbol{\mathsf{x}}}_2)}),\quad \forall {\boldsymbol{\mathsf{x}}}_1,{\boldsymbol{\mathsf{x}}}_2\in\CM\subset\CX,
\end{align}
where $\Phi:[0,\infty)\rightarrow[0,\infty)$ with $\Phi(0)=0$ is a continuous, non-decreasing function, and $\CM$ is typically a ball in the ambient norm \cite{H94}.
In Banach space settings, the conditional stability needs to be adjusted by replacing the left hand side of \eqref{eqn:cond_stab_measure} with a non-negative error measure \cite{CHL14}. Since the most relevant error measure for Banach space analysis is the Bregman distance $\bregman{{\boldsymbol{\mathsf{x}}}_1}{{\boldsymbol{\mathsf{x}}}_2}$, a H\"{o}lder type stability estimate then reads: for some $\alpha\ge1$ and $C_\alpha>0$
\begin{equation}
\bregman{{\boldsymbol{\mathsf{x}}}}{\xbs^\dagger}^\alpha\le C_\alpha^{-1} \yN{{\mathbf{A}}{\boldsymbol{\mathsf{x}}}-{\mathbf{A}}\xbs^\dagger}^p.\label{eqn:cond-stab}
\end{equation}
Now we give a convergence rate under conditional stability bound \eqref{eqn:cond-stab}. The constant $C_N$ appears in Lemma \ref{lem:Kaczmarz-basic} and denotes the norm equivalence constant.
\begin{theorem}\label{thm:lin-main}
Let the forward operator ${\mathbf{A}}$ satisfy the conditional stability bound
\eqref{eqn:cond-stab} for some $\alpha\ge1$ and $C_\alpha>0$.
Let $\dmapX{p}({\boldsymbol{\mathsf{x}}}_0)\in\overline{\range({\mathbf{A}}^\ast)}$, and, with $C_k=C_NC_\alpha(1 - L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_{k}^{p^\ast-1})$, let the step-sizes satisfy $C_k>0$ for all $k\in\bbN$ and $\sum_{k=1}^\infty \mu_{k} C_k=\infty$.
Then there holds
\begin{equation*}
\lim_{k\rightarrow\infty}\bbE[\bregman{{\boldsymbol{\mathsf{x}}}_{k}}{\xbs^\dagger}]=0.
\end{equation*}
Moreover,
\[
\bbE[\bregman{{\boldsymbol{\mathsf{x}}}_{k}}{\xbs^\dagger}]\leq\left\{\begin{aligned}
\frac{\bregman{{\boldsymbol{\mathsf{x}}}_{0}}{\xbs^\dagger}}{\Big(1+(\alpha-1)\bregman{{\boldsymbol{\mathsf{x}}}_{0}}{\xbs^\dagger}^{\alpha-1}\sum_{j=1}^k \mu_{j}C_j\Big)^{\frac{1}{\alpha-1}}}, & \quad \text{ if } \alpha>1,\\
{\exp\Big(-\sum_{j=1}^k \mu_{j}C_j\Big)\bregman{{\boldsymbol{\mathsf{x}}}_{0}}{\xbs^\dagger},} &\quad\text{ if } \alpha=1.
\end{aligned}\right.
\]
\end{theorem}
\begin{proof}
Let $\Delta_k:=\bregman{{\boldsymbol{\mathsf{x}}}_k}{\xbs^\dagger}$. The proof of Theorem \ref{thm:as_lin_convergence} and
the conditional stability bound \eqref{eqn:cond-stab} imply
\begin{align}\label{eqn:decreasing}
\bbE_k[\Delta_{k+1}] &\leq \Delta_{k}-p\mu_{k+1}\LRR{1-L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast-1}}\Psi({\boldsymbol{\mathsf{x}}}_k)\\
&\leq\Delta_k-p\mu_{k+1}\frac{C_NC_\alpha}{p}\Big(1 - L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast-1}\Big) \Delta_k^{\alpha},\nonumber
\end{align}
{since by Lemma \ref{lem:Kaczmarz-basic}, there exists a $C_N>0$ such that
$\Psi({\boldsymbol{\mathsf{x}}})\geq \frac{C_N}{p}\yN{{\mathbf{A}}{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}}^p$.}
Taking the full expectation and using Jensen's inequality lead to
\[ \bbE[\Delta_{k+1}] \leq \bbE[\Delta_k] -\mu_{k+1}C_{k+1} \bbE[\Delta_k]^\alpha.\]
Since $C_{k+1}>0$ by assumption, $(\bbE[\Delta_k])_{k\in\bbN}$ is a monotonically decreasing sequence.
Since the function $x\mapsto x^\alpha$ is monotonically increasing on $[0,\infty)$ for $\alpha\geq1$, for any $\epsilon>0$, $x\ge\epsilon$ implies $x^\alpha\geq\epsilon^\alpha$. We claim that for every $\epsilon>0$, there exists a $k_\epsilon\in\bbN$ such that $\bbE[\Delta_k]\le\epsilon$ for all $k\ge k_\epsilon$.
Assuming the contrary, $\bbE[\Delta_k]\ge\epsilon$ for all $k$, gives
\begin{align*}\bbE[\Delta_{k+1}]\leq \bbE[\Delta_k]-\mu_{k+1}C_{k+1}\bbE[\Delta_k]^{\alpha}\le \bbE[\Delta_k]-\mu_{k+1}C_{k+1}\epsilon^\alpha\le\Delta_0-\epsilon^\alpha\sum_{j=1}^{k+1} \mu_{j}C_j\rightarrow -\infty, \end{align*}
since $\sum_{j=1}^\infty \mu_{j}C_j=\infty$ by assumption, which is a contradiction.
Therefore, $\lim_{k\rightarrow\infty}\bbE[\Delta_k]=0$.
For $\alpha>1$, by Polyak's inequality (cf. Lemma \ref{lem:polyak_series}), we have
\begin{align*}
\bbE[\Delta_{k+1}]\leq \frac{\Delta_0}{\Big(1+(\alpha-1)\Delta_0^{\alpha-1} \sum_{j=1}^{k+1} \mu_{j} C_j\Big)^{\frac{1}{\alpha-1}}}.
\end{align*}
Meanwhile, for $\alpha=1$, {using the inequality $1-x\leq e^{-x}$ for $x\geq0$}, a direct computation yields
\begin{align*}
\bbE[\Delta_{k+1}] \leq (1-\mu_{k+1}C_{k+1})\bbE[\Delta_{k}]\leq \prod_{j=1}^{k+1} (1-\mu_jC_j) \Delta_0 \leq{ \exp\Big(-\sum_{j=1}^{k+1} \mu_jC_j\Big)\Delta_0,}
\end{align*}
completing the proof of the theorem.
\end{proof}
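For illustration, the two decay regimes in Theorem \ref{thm:lin-main} can be checked numerically on the scalar recursion $e_{k+1}=e_k-\mu_{k+1}C_{k+1}e_k^\alpha$, which drives the proof. The following Python sketch (with arbitrary illustrative constants, not tied to a concrete operator) verifies that the recursion stays below the stated bounds:

```python
import numpy as np

def run_recursion(alpha, mu, c, e0):
    """Iterate e_{k+1} = e_k - mu_{k+1} * c * e_k^alpha, the recursion in the proof."""
    e = [e0]
    for m in mu:
        e.append(e[-1] - m * c * e[-1] ** alpha)
    return np.array(e)

def theorem_bound(alpha, mu, c, e0):
    """Bounds of the theorem: Polyak-type decay for alpha > 1, exponential for alpha = 1."""
    s = np.cumsum(mu * c)                    # partial sums of mu_j * C_j
    if alpha > 1:
        tail = e0 / (1 + (alpha - 1) * e0 ** (alpha - 1) * s) ** (1 / (alpha - 1))
    else:
        tail = np.exp(-s) * e0
    return np.concatenate(([e0], tail))

mu = 0.1 * np.ones(200)                      # constant step-sizes (illustrative)
c, e0 = 0.5, 0.8                             # c plays the role of C_N * C_alpha * (1 - ...)
for alpha in (1.0, 2.0):
    e = run_recursion(alpha, mu, c, e0)
    assert np.all(e <= theorem_bound(alpha, mu, c, e0) + 1e-12)
```

With these constants the $\alpha=1$ run decays geometrically, while the $\alpha=2$ run follows the slower Polyak-type envelope, matching the discussion in the remark below.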
\begin{remark}
We have the following comments on Theorem \ref{thm:lin-main}.
\begin{itemize}
\item[(i)] {The estimates for $\alpha>1$ and $\alpha=1$ in Theorem \ref{thm:lin-main} are consistent in the sense that
\begin{equation*}
\lim_{\alpha\searrow 1}
\frac{\bregman{{\boldsymbol{\mathsf{x}}}_{0}}{\xbs^\dagger}}{\Big(1+(\alpha-1)\bregman{{\boldsymbol{\mathsf{x}}}_{0}}{\xbs^\dagger}^{\alpha-1}\sum_{j=1}^k \mu_{j}C_j\Big)^{\frac{1}{\alpha-1}}} = \exp\Big(-\sum_{j=1}^k \mu_{j}C_j\Big)\bregman{{\boldsymbol{\mathsf{x}}}_{0}}{\xbs^\dagger}.
\end{equation*}}
\item[(ii)] {While it might seem counter-intuitive, $\alpha=1$ gives a better convergence rate than $\alpha>1$, due to the following observation:
\begin{align*}
\bregman{{\boldsymbol{\mathsf{x}}}}{\xbs^\dagger}^\alpha\ge\bregman{{\boldsymbol{\mathsf{x}}}}{\xbs^\dagger}^{\tilde\alpha} \text{ if and only if } \alpha\log\bregman{{\boldsymbol{\mathsf{x}}}}{\xbs^\dagger} \ge \tilde\alpha\log\bregman{{\boldsymbol{\mathsf{x}}}}{\xbs^\dagger}.
\end{align*}
Hence, whenever $\bregman{{\boldsymbol{\mathsf{x}}}}{\xbs^\dagger}<1$, we have $\bregman{{\boldsymbol{\mathsf{x}}}}{\xbs^\dagger}\ge\bregman{{\boldsymbol{\mathsf{x}}}}{\xbs^\dagger}^{\alpha}$ for $\alpha>1$.
Plugging this into the conditional stability bound \eqref{eqn:cond-stab} yields
\begin{align*}
\bregman{{\boldsymbol{\mathsf{x}}}}{\xbs^\dagger}^\alpha\le\bregman{{\boldsymbol{\mathsf{x}}}}{\xbs^\dagger}\le C_1^{-1} \yN{{\mathbf{A}}{\boldsymbol{\mathsf{x}}}-{\mathbf{A}}\xbs^\dagger}^p=C_1^{-1}pN\Psi({\boldsymbol{\mathsf{x}}}).
\end{align*}
Meanwhile, the proof of Theorem \ref{thm:lin-main} uses the conditional stability bound to establish a relationship between the objective value and the Bregman distance to the MNS ${\boldsymbol{\mathsf{x}}}^\dag$, cf. \eqref{eqn:decreasing}.
Putting these together gives that $\alpha=1$ provides a greater decrease of the expected Bregman distance, once we are close enough to the solution.}
\end{itemize}
\end{remark}
The conditional stability estimate \eqref{eqn:cond-stab} for a linear operator ${\mathbf{A}}$ {implies} its injectivity, and then the objective $\Psi({\boldsymbol{\mathsf{x}}})$ is strongly convex.
Indeed, under condition \eqref{eqn:cond-stab} there can be only one solution: if ${\mathbf{A}}\tilde{\boldsymbol{\mathsf{x}}}={\mathbf{A}}{\boldsymbol{\mathsf{x}}}$, then \eqref{eqn:cond-stab} gives $\bregman{\tilde{\boldsymbol{\mathsf{x}}}}{{\boldsymbol{\mathsf{x}}}}=0$.
The step-size condition $\sum_{k=1}^\infty \mu_{k} C_k=\infty$ is weaker than that in Theorem \ref{thm:L1_lin_convergence}.
Namely, it follows from the step-size conditions in Theorem \ref{thm:as_lin_convergence}, since
$$\sum_{k=1}^\infty \mu_{k} C_k=C_NC_\alpha \Big(\sum_{k=1}^\infty\mu_k- L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\sum_{k=1}^\infty\mu_k^{p^\ast}\Big)=\infty$$
holds if $\sum_{k=1}^\infty \mu_{k}=\infty$ and $\sum_{k=1}^\infty\mu_k^{p^\ast}<\infty$.
Further, if there exists a $C>0$ such that $1 - L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_{k}^{p^\ast-1}>C$ holds for all $k\in\bbN$ (e.g. if $\mu_k$ is a constant satisfying this condition), then the requirement $\sum_{k=1}^\infty \mu_kC_k=\infty$ is weaker than the conditions in Theorem \ref{thm:as_lin_convergence}: the condition $\sum_{k=1}^\infty\mu_k^{p^\ast}<\infty$ is no longer needed for convergence, and $\sum_{k=1}^\infty\mu_k=\infty$ suffices. {Moreover,
we can choose constant step-sizes. Indeed, setting $\mu_k=\mu_0$ with $1-L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_0^{p^\ast-1}=\frac{1}{2}$,
we obtain an exponential convergence rate for $\alpha=1$: since then $C_k=\frac{C_NC_\alpha}{2}$, we have
\begin{align*}
\bbE[\Delta_{k+1}] &\leq (1-\mu_0C_{k+1})\bbE[\Delta_{k}]\leq \bigg(1-2^{-1-1/(p^\ast-1)}L_{\max}^{-p^\ast/(p^\ast-1)}\Big(\frac{p^\ast}{G_{p^\ast}}C_NC_\alpha\Big)^{1/(p^\ast-1)}\bigg)^k\bbE[\Delta_{0}]\\
&\leq \bigg(1-2^{-p}L_{\max}^{-p}\Big(\frac{p^\ast}{G_{p^\ast}}\Big)^{p^\ast/p}C_NC_\alpha\bigg)^k\Delta_{0}.
\end{align*}
Note that this convergence rate is largely comparable with that in the Hilbert case: the conditional stability bound implies the strict convexity of the quadratic objective $\Psi({\boldsymbol{\mathsf{x}}})$, and SGD is then known to converge exponentially fast (see e.g. \cite[Theorem 3.1]{Gower:2019}), with the rate determined by a variant of the condition number.}
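For comparison, the exponential rate in the Hilbert setting can be observed directly. The following sketch runs a randomised Kaczmarz iteration (an SGD variant with exact projection steps, used here purely as an illustration on a random consistent system, not the scheme analysed above):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))        # full column rank (injective) a.s.
x_true = rng.standard_normal(10)
y = A @ x_true                            # consistent data

def kaczmarz(A, y, iters, seed=1):
    """Randomised Kaczmarz: project onto a uniformly sampled row's hyperplane."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.integers(A.shape[0])
        a = A[i]
        x = x + (y[i] - a @ x) / (a @ a) * a
    return x

x = kaczmarz(A, y, 5000)
# the error decays exponentially (in expectation), down to machine precision here
assert np.linalg.norm(x - x_true) < 1e-8 * np.linalg.norm(x_true)
```

The contraction factor per step is governed by a variant of the condition number of ${\mathbf{A}}$, in line with the cited Hilbert space results.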
\begin{remark}
The conditional stability bound \eqref{eqn:cond-stab} is stated globally. However,
such conditions are often valid only locally. A local definition could have been
employed in \eqref{eqn:cond-stab}, with minor modifications of the argument. Indeed, as in the proof of Theorem
\ref{thm:L1_lin_convergence}, we can appeal to Lemma \ref{lem:iterate_boundedness},
which shows that the Bregman distances of the iterates are non-increasing. Thus, it
suffices to assume that the initial point ${\boldsymbol{\mathsf{x}}}_0$ is sufficiently close to the MNS ${\boldsymbol{\mathsf{x}}}^\dag$.
\end{remark}
\begin{remark}
Conditional stability is intimately tied with classical source conditions.
For example, as shown in \cite{SG+09}, assuming $\alpha=1$ in \eqref{eqn:cond-stab} allows one to show a variational inequality
\[ \DP{\dmapX{p}(\xbs^\dagger)}{{\boldsymbol{\mathsf{x}}}-\xbs^\dagger}\leq \xN{\xbs^\dagger}^{p-1}C_\alpha^{-1}(pC_p^{-1})^{1/p}\yN{{\mathbf{A}}({\boldsymbol{\mathsf{x}}}-\xbs^\dagger)}.\]
Then the Hahn--Banach theorem and \cite[Lemma 8.21]{SG+09} give the canonical range type condition $\dmapX{p}(\xbs^\dagger)={\mathbf{A}}^\ast {\boldsymbol{\mathsf{w}}}$, for some ${\boldsymbol{\mathsf{w}}}\in\CX$ with $\xN{{\boldsymbol{\mathsf{w}}}}\leq1$.
Connections between source conditions and conditional stability estimates have been studied, e.g. for linear operators in Hilbert spaces \cite{T+13} and in $\CL^p$ spaces \cite{CY21}.
Moreover, variational source conditions often imply conditional stability estimates \cite{WH17}, and in the case of bijective continuous operators they follow trivially from a standard source condition {\rm(}albeit only in a possibly small neighbourhood around the solution{\rm)}.
See the monograph \cite{W19} for the connections between source conditions and conditional stability estimates, and \cite{I17} for inverse problems for differential equations.
\end{remark}
\section{Regularising property}\label{sec:regularisation}
In practice, we often do not have access to the exact data ${\boldsymbol{\mathsf{y}}}$ but only to noisy
observations ${\boldsymbol{\mathsf{y}}}^\delta$, such that $\yN{{\boldsymbol{\mathsf{y}}}^\delta-{\boldsymbol{\mathsf{y}}}}{\leq}\delta$. The convergence study
in the presence of observational noise requires a different approach, since the sequence of objective
values $(\yN{{\mathbf{A}}{\boldsymbol{\mathsf{x}}}_k^\delta-{\boldsymbol{\mathsf{y}}}^\delta}^p)_{k\in\bbN}$ generally will not converge to $0$. In this section
we show that SGD has a regularising effect, in the sense that the expected error
$\bbE[\bregman{{\boldsymbol{\mathsf{x}}}_{k(\delta)}^{\delta}}{\xbs^\dagger}]$ converges to $0$ as the noise level $\delta$
decays to $0$, for properly selected stopping indices $k(\delta)$.
Let $({\boldsymbol{\mathsf{x}}}_k)_{k\in\bbN}$ and $({\boldsymbol{\mathsf{x}}}_k^\delta)_{k\in\bbN}$ be the
noiseless and noisy iterates, defined respectively by
\begin{align}
{\boldsymbol{\mathsf{x}}}_{k+1} & = \dmapXs{p}\LRR{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k) - \mu_{k+1}{\boldsymbol{\mathsf{g}}}_{k+1}},\quad \mbox{with }
{\boldsymbol{\mathsf{g}}}_{k+1} = g({\boldsymbol{\mathsf{x}}}_{k},{\boldsymbol{\mathsf{y}}},i_{k+1}),\label{eqn:sgd_cleaniterates}\\
{\boldsymbol{\mathsf{x}}}_{k+1}^\delta &= \dmapXs{p}\LRR{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k^\delta) - \mu_{k+1}{\boldsymbol{\mathsf{g}}}^\delta_{k+1}},
\quad \mbox{with }{\boldsymbol{\mathsf{g}}}_{k+1}^\delta = g({\boldsymbol{\mathsf{x}}}_{k}^\delta,{\boldsymbol{\mathsf{y}}}^\delta, i_{k+1}).\label{eqn:sgd_noisyiterates}
\end{align}
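For concreteness, in the sequence-space model $\CX=\ell^r$ the duality map $\dmapX{p}$ has the closed form $(\dmapX{p}({\boldsymbol{\mathsf{x}}}))_i=\|{\boldsymbol{\mathsf{x}}}\|_r^{p-r}|x_i|^{r-1}\,\mathrm{sign}(x_i)$, and $\dmapXs{p}$ is the duality map $J_{p^\ast}$ of $\ell^{r^\ast}$. The update \eqref{eqn:sgd_noisyiterates} can then be sketched in Python as follows (the choice $\CY=\ell^2$ and all parameters are illustrative):

```python
import numpy as np

def duality_map(x, r, p):
    """Duality map J_p of ell^r: (J_p x)_i = ||x||_r^{p-r} |x_i|^{r-1} sign(x_i)."""
    nx = np.linalg.norm(x, r)
    if nx == 0:
        return np.zeros_like(x)
    return nx ** (p - r) * np.abs(x) ** (r - 1) * np.sign(x)

def sgd_step(x, A_i, y_i, mu, r=1.5, p=2.0):
    """One (noisy) SGD update for X = ell^r and Y = ell^2 (illustrative choices)."""
    rs, ps = r / (r - 1), p / (p - 1)                 # dual exponents r*, p*
    g = A_i.T @ duality_map(A_i @ x - y_i, 2.0, p)    # g = A_i^* j_p(A_i x - y_i)
    # the duality map J_{p*} of ell^{r*} inverts J_p of ell^r
    return duality_map(duality_map(x, r, p) - mu * g, rs, ps)
```

One can verify numerically that $\langle J_p(x),x\rangle=\|x\|_r^p$ and $\|J_p(x)\|_{r^\ast}=\|x\|_r^{p-1}$, as required by the definition of the duality map, and that the composition of the two maps is the identity.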
The key step in proving the regularising property is to show the stability
of SGD iterates with respect to noise. The noise enters the iterations
through the update directions ${\boldsymbol{\mathsf{g}}}^\delta_{k+1}$, and thus the stability of the
iterates requires that of the update directions. This, however, requires suitable assumptions
on the observation space $\CY$, since in general the single-valued duality maps $\svaldmapY{p}$ are
continuous only at $0$. If $\CY$ is uniformly smooth, the corresponding duality maps
are also continuous. This assumption is also needed for deterministic iterates, cf.
\cite[Proposition 6.17]{TBHK_12} or \cite[Lemma 9]{M18}. Thus we make the following assumption.
\begin{assumption}\label{ass:smooth-Y}
The Banach space $\CX$ is $p$-convex and uniformly smooth, and $\CY$ is uniformly smooth.
\end{assumption}
We then have the following stability result on the iterates with respect to noise, whose elementary but lengthy proof is deferred to the appendix.
\begin{lemma}\label{lem:coupled_noise_convergence}
Let Assumption \ref{ass:smooth-Y} hold.
Consider the iterations \eqref{eqn:sgd_cleaniterates} and \eqref{eqn:sgd_noisyiterates} with
the same initialisation ${\boldsymbol{\mathsf{x}}}_0^\delta={\boldsymbol{\mathsf{x}}}_0$, and following the same path {\rm(}i.e.
using same random indices $i_{k}${\rm)}. Then, for any fixed $k\in\bbN$, we have
\begin{align*}
\lim_{\delta\searrow0}\bbE[\bregman{{\boldsymbol{\mathsf{x}}}^{\delta}_k}{{\boldsymbol{\mathsf{x}}}_k}]=\lim_{\delta\searrow0}\bbE[\xN{{\boldsymbol{\mathsf{x}}}^{\delta}_k -{\boldsymbol{\mathsf{x}}}_k}]=\lim_{\delta\searrow0}\bbE[\xsN{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k^\delta)-\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)}]=0.
\end{align*}
\end{lemma}
Now we show the regularising property of SGD for suitable stopping indices $k(\delta)$.
\begin{theorem}\label{thm:regularisation_property}
Let Assumption \ref{ass:smooth-Y} hold, and the step-sizes $(\mu_k)_{k\in\bbN}$ satisfy $\sum_{k=1}^\infty\mu_{k}=\infty$, $\sum_{k=1}^\infty \mu_{k}^{p^\ast} <\infty$ and $1 - L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_{k}^{p^\ast-1}>C>0$. If $\lim_{\delta\searrow0}k(\delta)=\infty$ and $\lim_{\delta\searrow0} \delta^p\sum_{\ell=1}^{k(\delta)}\mu_\ell=0$, then
\begin{align*}
\lim_{\delta\searrow0} \bbE[\bregman{{\boldsymbol{\mathsf{x}}}_{k(\delta)}^{\delta}}{\xbs^\dagger}]=0.
\end{align*}
\end{theorem}
\begin{proof}
Let $\Delta_k = \bregman{{\boldsymbol{\mathsf{x}}}_{k}}{\xbs^\dagger}$ and $\Delta_k^\delta = \bregman{{\boldsymbol{\mathsf{x}}}_{k}^\delta}{\xbs^\dagger}$. Take any $\delta>0$ and $k\in\bbN$.
By the three point identity \eqref{eqn:3_point_id}, we have
\begin{align}\label{eqn:noisy_3point}
\Delta_k^\delta &= \bregman{{\boldsymbol{\mathsf{x}}}_k^\delta}{{\boldsymbol{\mathsf{x}}}_k} + \Delta_k + \DP{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)-\dmapX{p}({\boldsymbol{\mathsf{x}}}_k^\delta)}{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger}\nonumber \\
&\leq \bregman{{\boldsymbol{\mathsf{x}}}_k^\delta}{{\boldsymbol{\mathsf{x}}}_k} + \Delta_k + \xsN{\dmapX{p}({\boldsymbol{\mathsf{x}}}_k)-\dmapX{p}({\boldsymbol{\mathsf{x}}}_k^\delta)}\xN{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger}.
\end{align}
{Consider a sequence $(\delta_j)_{j\in\bbN}$ decaying to zero.
Taking any $\epsilon>0$, it suffices to find a $j_\epsilon\in\bbN$ such that for all $j\geq j_\epsilon$ we have $\bbE[\Delta_{k(\delta_j)}^{\delta_j}]\leq 4\epsilon$.}
By Theorem \ref{thm:L1_lin_convergence},
there exists a $k_\epsilon\in\bbN$ such that for all $k\geq k_\epsilon$ we have
\begin{align}\label{eqn:noiseless_kbound}
\bbE[\Delta_k]<\epsilon\quad \text{and}\quad \bbE[\xN{{\boldsymbol{\mathsf{x}}}_k-\xbs^\dagger}]<\epsilon^{1/2}.
\end{align}
Moreover, for any fixed $k_\epsilon$, by Lemma \ref{lem:coupled_noise_convergence},
there exists $j_1\in\bbN$ such that for all $j\geq j_1$ we have
\begin{align}\label{eqn:noisy_kepsilon_bound}
\bbE[\bregman{{\boldsymbol{\mathsf{x}}}_{k_\epsilon}^{\delta_j}}{{\boldsymbol{\mathsf{x}}}_{k_\epsilon}}] <\epsilon \quad \text{and}\quad \bbE[\xsN{\dmapX{p}({\boldsymbol{\mathsf{x}}}_{k_\epsilon})-\dmapX{p}({\boldsymbol{\mathsf{x}}}_{k_\epsilon}^{\delta_j})}]<\epsilon^{1/2}.
\end{align}
Thus, plugging the estimates \eqref{eqn:noiseless_kbound} and \eqref{eqn:noisy_kepsilon_bound} into \eqref{eqn:noisy_3point}, we have $\bbE[\Delta_{k_\epsilon}^{\delta_j}] < 3\epsilon$, for all $j\geq j_1$.
Note, however, that the same does not necessarily hold for all $k\geq k_\epsilon$, and thus for a monotonically increasing sequence of stopping indices $k(\delta_j)$, since $\bbE[\Delta_{k(\delta_j)}^{\delta_j}]$ are not necessarily monotone.
Instead, taking the expectation of the descent property \eqref{eqn:descent_property} with respect to $\CF_k$ yields
\begin{align*}
\bbE_k[\Delta_{k+1}^\delta] \leq\Delta_k^\delta -\mu_{k+1}\DP{\bbE_k[{\boldsymbol{\mathsf{g}}}_{k+1}^\delta]}{{\boldsymbol{\mathsf{x}}}_k^\delta-\xbs^\dagger} +pL_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast} \Psi({\boldsymbol{\mathsf{x}}}_k^\delta).
\end{align*}
Then we decompose the middle term into
\begin{align*}
\DP{\bbE_k[{\boldsymbol{\mathsf{g}}}_{k+1}^\delta]}{\xbs^\dagger-{\boldsymbol{\mathsf{x}}}_k^\delta}
&=\frac{1}{N}\sum_{i=1}^N\DP{\svaldmapY{p}({\mathbf{A}}_{i}{\boldsymbol{\mathsf{x}}}_k^\delta-{\boldsymbol{\mathsf{y}}}_i^\delta)}{-({\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}_k^\delta-{\boldsymbol{\mathsf{y}}}_i^\delta)+{\boldsymbol{\mathsf{y}}}_i-{\boldsymbol{\mathsf{y}}}_i^\delta} \\
&=-p\Psi({\boldsymbol{\mathsf{x}}}_k^\delta) +\frac{1}{N}\sum_{i=1}^N\DP{\svaldmapY{p}({\mathbf{A}}_{i}{\boldsymbol{\mathsf{x}}}_k^\delta-{\boldsymbol{\mathsf{y}}}_i^\delta)}{{\boldsymbol{\mathsf{y}}}_i-{\boldsymbol{\mathsf{y}}}_i^\delta}\\
&\leq -p\Psi({\boldsymbol{\mathsf{x}}}_k^\delta) +\frac{1}{N}\sum_{i=1}^N\yN{{\mathbf{A}}_{i}{\boldsymbol{\mathsf{x}}}_k^\delta-{\boldsymbol{\mathsf{y}}}_i^\delta}^{p-1}\yN{{\boldsymbol{\mathsf{y}}}_i-{\boldsymbol{\mathsf{y}}}_i^\delta}\\
&\leq -p\Psi({\boldsymbol{\mathsf{x}}}_k^\delta) +\delta\frac{1}{N}\sum_{i=1}^N\yN{{\mathbf{A}}_{i}{\boldsymbol{\mathsf{x}}}_k^\delta-{\boldsymbol{\mathsf{y}}}_i^\delta}^{p-1},
\end{align*}
where we have used \eqref{eqn:dmap_defined} and the Cauchy-Schwarz inequality.
Taking the full expectation gives
\begin{align*}
\bbE[\Delta^\delta_{k+1}]&\leq \bbE[\Delta^\delta_{k}] \!- \!p\mu_{k+1} \bbE[\Psi({\boldsymbol{\mathsf{x}}}_k^\delta)]\!+\!pL_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_{k+1}^{p^\ast}\bbE[\Psi({\boldsymbol{\mathsf{x}}}_k^\delta)] \!+\! \delta\mu_{k+1} \frac{1}{N}\sum_{i=1}^N\bbE[\yN{{\mathbf{A}}_{i}{\boldsymbol{\mathsf{x}}}_k^\delta-{\boldsymbol{\mathsf{y}}}_i^\delta}^{p-1}] \\
&= \bbE[\Delta^\delta_{k}] - p\mu_{k+1}C_{k+1}\bbE[\Psi({\boldsymbol{\mathsf{x}}}_k^\delta)]+ \delta\mu_{k+1}\frac{1}{N}\sum_{i=1}^N\bbE[\yN{{\mathbf{A}}_{i}{\boldsymbol{\mathsf{x}}}_k^\delta-{\boldsymbol{\mathsf{y}}}_i^\delta}^{p-1}] ,
\end{align*}
where $C_k\!=\!1-L_{\max}^{p^\ast}\frac{G_{p^\ast}}{p^\ast}\mu_k^{p^\ast-1}>C>0$. Now using the Lyapunov inequality
\begin{align*}
\frac{1}{N}\sum_{i=1}^N\bbE[\yN{{\mathbf{A}}_{i}{\boldsymbol{\mathsf{x}}}_k^\delta-{\boldsymbol{\mathsf{y}}}_i^\delta}^{p-1}]\leq\frac{1}{N}\sum_{i=1}^N\Big(\bbE[\yN{{\mathbf{A}}_{i}{\boldsymbol{\mathsf{x}}}_k^\delta-{\boldsymbol{\mathsf{y}}}_i^\delta}^{p}]\Big)^{(p-1)/p}=p^{1/p^\ast}\frac{1}{N}\sum_{i=1}^N\Big(\bbE[\Psi_i({\boldsymbol{\mathsf{x}}}_k^\delta)]\Big)^{1/p^\ast},
\end{align*}
we deduce
\begin{align}\label{eqn:Delta-new}
\bbE[\Delta^\delta_{k+1}]&\leq \bbE[\Delta^\delta_{k}] - p\mu_{k+1}C_{k+1}\bbE[\Psi({\boldsymbol{\mathsf{x}}}_k^\delta)]+ \delta\mu_{k+1}p^{1/p^\ast}\frac{1}{N}\sum_{i=1}^N\Big(\bbE[\Psi_i({\boldsymbol{\mathsf{x}}}_k^\delta)]\Big)^{1/p^\ast}.
\end{align}
Next we remove the exponent in the last term. Using Young's inequality $ab\leq \frac{a^p}{p}\omega^{-p}+\frac{b^{p^\ast}}{p^\ast}\omega^{p^\ast}$, with $a=\delta$ and $b=\bbE[\Psi_i({\boldsymbol{\mathsf{x}}}_k^\delta)]^{1/p^\ast}$, we have
\begin{align*}
\frac{1}{N}\sum_{i=1}^N\delta\Big(\bbE[\Psi_i({\boldsymbol{\mathsf{x}}}_k^\delta)]\Big)^{1/p^\ast} \leq \delta^p\frac{\omega^{-p}}{p} + \bbE\Big[\frac{1}{N}\sum_{i=1}^N\Psi_i({\boldsymbol{\mathsf{x}}}_k^\delta)\Big]\frac{\omega^{p^\ast}}{p^\ast}\leq\delta^p\frac{\omega^{-p}}{p} + \bbE[\Psi({\boldsymbol{\mathsf{x}}}_k^\delta)]\frac{\omega^{p^\ast}}{p^\ast}.
\end{align*}
Plugging this back in \eqref{eqn:Delta-new} gives
\begin{align*}
\bbE[\Delta^\delta_{k+1}] &\leq \bbE[\Delta^\delta_{k}] - p\mu_{k+1}C_{k+1}\bbE[\Psi({\boldsymbol{\mathsf{x}}}_k^\delta)]+ p^{1/p^\ast}(p^\ast)^{-1}\omega^{p^\ast}\mu_{k+1}\bbE[\Psi({\boldsymbol{\mathsf{x}}}_k^\delta)]+ p^{-1/p}\delta^p\omega^{-p}\mu_{k+1}.
\end{align*}
Taking $\omega>0$ small enough so that $\omega^{p^\ast}\leq p^\ast p^{1/p}C_k$ (which can be done uniformly in $k$, thanks to the positive lower bound on $C_k$), replacing $k+1$ with $k(\delta)$ and applying the estimate inductively, we have
\begin{align*}
\bbE[\Delta^\delta_{k(\delta)}]&\leq \bbE[\Delta^\delta_{k(\delta)-1}] +p^{-1/p}\omega^{-p}\delta^p\mu_{k(\delta)}
\leq\bbE[\Delta^\delta_{k_\epsilon}] +p^{-1/p}\omega^{-p}\delta^p\sum_{\ell=1}^{k(\delta)}\mu_{\ell}.
\end{align*}
Since $\lim_{\delta\searrow0}\delta^p\sum_{\ell=1}^{k(\delta)}\mu_{\ell}=0$ and $\lim_{\delta\searrow0}k(\delta)=\infty$, there exists $j_2\in\bbN$ such that for all $j\geq j_2$ we have $k(\delta_j)\geq k_\epsilon$ and $p^{-1/p}\omega^{-p}\delta_j^p\sum_{\ell=1}^{k(\delta_j)}\mu_{\ell}<\epsilon$.
{Taking ${j_\epsilon}=j_1\vee j_2$ shows $\bbE[\Delta_{k(\delta_j)}^{\delta_j}] < 4\epsilon$ for all $j\geq j_\epsilon$, and hence the desired claim follows.}
\end{proof}
\begin{remark}
In the constant step-size regime, such as in the case of conditionally stable operators, the correspondence between the noise level and the step-size regime takes a more standard form.
Namely, the condition in Theorem \ref{thm:regularisation_property} reduces to
$\lim_{\delta\searrow0} \delta^p k(\delta) = 0$. In other words, we have $k(\delta)=\CO(\delta^{-p})$, mirroring the traditional conditions in Euclidean spaces. {Note that the condition on $k(\delta)$ is fairly broad, and does not directly give useful concrete stopping rules. Generally, the issue of a posteriori stopping rules for stochastic iterative methods is completely open, even in the Hilbert setting \cite{JahnJin:2020}. For polynomially decaying step-sizes $\mu_k=c_0k^{-\beta}$, the conditions $\frac{1}{p^*}<\beta\leq1$ and $c_0<(\frac{p^\ast}{L_{\max}^{p^\ast}G_{p^\ast}})^{\frac{1}{p^\ast-1}}$ give a valid step-size choice, and the stopping index $k(\delta)$ should satisfy $\lim_{\delta\searrow0}k(\delta)=\infty$ and $\lim_{\delta\searrow0} k(\delta)\delta^\frac{p}{1-\beta}=0$.} \end{remark}
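The admissible growth of the stopping index can be sanity-checked numerically: with $\mu_\ell=c_0\ell^{-\beta}$ one has $\sum_{\ell\le k}\mu_\ell=\CO(k^{1-\beta})$, so $\delta^p\sum_{\ell\le k(\delta)}\mu_\ell$ vanishes precisely when $k(\delta)\delta^{p/(1-\beta)}\to0$. A small Python check with arbitrary illustrative parameter values:

```python
import numpy as np

p, beta, c0 = 2.0, 0.5, 1.0           # illustrative values
q = p / (1 - beta)                     # critical exponent: need k(delta) = o(delta^{-q})

def weighted_sum(delta, exponent):
    """Computes delta^p * sum_{l <= k(delta)} mu_l  for  k(delta) = delta^{-exponent}."""
    k = int(delta ** (-exponent))
    mu = c0 * np.arange(1, k + 1, dtype=float) ** (-beta)
    return delta ** p * mu.sum()

deltas = (0.2, 0.1, 0.05)
sub = [weighted_sum(d, 0.75 * q) for d in deltas]   # sub-critical k(delta): admissible
sup = [weighted_sum(d, 1.25 * q) for d in deltas]   # super-critical: not admissible
assert sub[0] > sub[1] > sub[2]        # decays towards zero as delta -> 0
assert sup[0] < sup[1] < sup[2]        # grows instead
```
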
\begin{remark}
{It is of much interest to derive a convergence rate for noisy data under a conditional stability condition as in Theorem \ref{thm:lin-main}, as a natural extension of the regularising property. However, this is still unavailable. Within the current analysis strategy, deriving the rate would require quantitative versions of stability estimates in Lemma \ref{lem:coupled_noise_convergence} in terms of $\delta$ and $k$. Generally the convergence rate analysis for iterative regularisation methods in Banach space remains a very challenging task, and much more work is still needed.}
\end{remark}
\section{Numerical experiments}\label{sec:experiments}
We present numerical results on two sets of experiments to illustrate distinct features of the SGD \eqref{eqn:sgd}. The first set of experiments deals with an integral operator and the reconstruction of a sparse signal in the presence of either Gaussian or impulse noise. On this model example, we investigate the impact of the number of batches and the choice of the spaces $\CX$ and $\CY$ on the performance of the algorithm.
{To simplify the study we investigate spaces $\CX$ and $\CY$ that are smooth and convex of power type, and thus the corresponding duality maps are singletons.}
{To facilitate a direct comparison of SGD with the Landweber method, we count the computational complexity in epochs, where one epoch consists of $N_b$ SGD iterations, with $N_b$ being the size of the partition defined below.
Note moreover that our implementation of the Landweber method does not use the step-sizes described in \cite[Method 3.1]{SLS_06}, since the latter requires knowledge of quantities that are inconvenient to compute in practice.}
The second set of experiments is about tomographic reconstruction, with respect to different types of noise. {All the shown reconstructions are obtained with a single stochastic run, as is often done in practice, and the stopping index is determined in a trial and error manner so that the corresponding reconstruction yields small errors.}
\subsection{Model linear inverse problem}
First we consider the following model inverse problem studied in \cite{JinStals:2012}. Let $\kappa:\overline{\Omega}\times\overline{\Omega}\rightarrow\bbR^+$, with $\Omega=(0,1)$, be a continuous function, and define an integral operator $\CT_\kappa:\CL^{r_{\CX}}(\Omega)\rightarrow\CL^{r_{\CY}}(\Omega)$, for $1<r_{\CX},r_{\CY}<\infty$, by
\begin{align}
(\CT_\kappa x)(t) = \int_\Omega \kappa(t,s) x(s) ds.
\end{align}
This is a compact linear operator between $\CL^{r_{\CX}}(\Omega)$ and $\CL^{r_{\CY}}(\Omega)$, with the adjoint $\CT_\kappa^\ast\!:\!\CL^{r^\ast_{\CY}}(\Omega)\!\rightarrow\!\CL^{r^\ast_{\CX}}(\Omega)$ given by
$(\CT_\kappa^\ast y)(s)\! =\! \int_\Omega \kappa(t,s) y(t) dt$.
To approximate the integrals, we subdivide the interval $\overline\Omega$ into $N\!=\!1000$ subintervals $[\frac{k}{N}, \frac{k+1}{N}]$, for $k\!=\!0,\!\ldots\!,N\!-\!1$, and then apply midpoint quadrature, giving a finite-dimensional model ${\mathbf{A}}{\boldsymbol{\mathsf{x}}}\!=\!{\boldsymbol{\mathsf{y}}}$, with ${\mathbf{A}} \!=\! \frac{1}{N}\!\left(\kappa\!\left(\frac{j-1}{N}, \frac{2k-1}{2N}\right) \!\right)_{j,k=1}^N$ and ${\boldsymbol{\mathsf{x}}}\!=\!\left(x\left(\frac{2j-1}{2N}\right)\right)_{j=1}^N$.
For SGD we use $N_b\in[N]$ mini-batches. To obtain equisized batches, we assume that $N_b$ divides $N$.
The mini-batch matrices ${\mathbf{A}}_j$ are then constructed by taking every $N_b$-th row of ${\mathbf{A}}$, shifted by $j$, resulting in well-balanced mini-batches, in the sense that the norm $\|{\mathbf{A}}_j\|$ is (nearly) independent of $j$.
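The interleaved mini-batch construction can be sketched as follows (NumPy; a random matrix stands in for the discretised integral operator, purely for illustration):

```python
import numpy as np

def make_batches(A, N_b):
    """Interleaved mini-batches: batch j takes every N_b-th row of A, shifted by j."""
    assert A.shape[0] % N_b == 0, "N_b must divide N for equisized batches"
    return [A[j::N_b, :] for j in range(N_b)]

# illustrative random matrix (not the integral operator itself)
A = np.random.default_rng(0).standard_normal((1000, 50))
batches = make_batches(A, 100)
norms = [np.linalg.norm(B, 2) for B in batches]
# the interleaved split is well balanced: the spectral norms are nearly equal
assert max(norms) / min(norms) < 2.0
```
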
The kernel function $\kappa(t,s)$ and the exact signal $x^\dag$ are defined respectively by
\begin{align*}
\kappa(t,s) = \begin{cases}40t(1-s), &\text{ if } t\leq s,\\
40s(1-t), &\text{ otherwise},\end{cases}
\quad\mbox{and} \quad x^\dag(s) = \begin{cases}1, &\text{ if } s\in[\frac{9}{40},\frac{11}{40}]\cup[\frac{29}{40},\frac{31}{40}],\\
2, &\text{ if } s\in[\frac{19}{40},\frac{21}{40}],\\
0, &\text{ otherwise}.
\end{cases}
\end{align*}
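The discretised operator and the exact signal can be assembled directly from the formulas above (a sketch using midpoint nodes $s_k=\frac{2k-1}{2N}$ for the quadrature, matching the sampling of ${\boldsymbol{\mathsf{x}}}$; the $t$-grid is a modelling choice):

```python
import numpy as np

def kappa(t, s):
    """The symmetric Green's-function type kernel defined above."""
    return np.where(t <= s, 40 * t * (1 - s), 40 * s * (1 - t))

def build_problem(N=1000):
    s = (2 * np.arange(1, N + 1) - 1) / (2 * N)   # midpoint quadrature nodes for s
    t = (np.arange(1, N + 1) - 1) / N             # grid points for t
    A = kappa(t[:, None], s[None, :]) / N
    x_dag = np.zeros(N)
    x_dag[(s >= 9 / 40) & (s <= 11 / 40)] = 1.0
    x_dag[(s >= 29 / 40) & (s <= 31 / 40)] = 1.0
    x_dag[(s >= 19 / 40) & (s <= 21 / 40)] = 2.0
    return A, x_dag, s

A, x_dag, s = build_problem()
y = A @ x_dag                                      # exact data
```
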
This is a sparse signal and we expect sparsity promoting norms to perform well.
To illustrate this, we compare the following four settings: (a) $\CX=\CY=\CL^2(\Omega)$; (b) $\CX=\CL^2(\Omega)$ and $\CY=\CL^{1.1}(\Omega)$; (c) $\CX=\CL^{1.5}(\Omega)$ and $\CY=\CL^{2}(\Omega)$; (d) $\CX=\CL^{1.1}(\Omega)$ and $\CY=\CL^{2}(\Omega)$.
Setting (a) is the standard Hilbert space setting, suitable for recovering smooth solutions from measurement data with i.i.d. Gaussian noise, whereas settings (b)-(d) use Banach spaces.
Settings (c) and (d) both aim at sparse solutions, and we expect the latter to yield sparser solutions, since spaces $\CL^{r}(\Omega)$ progressively enforce sparser solutions as the exponent $r$ gets closer to $1$.
In the experiments, we employ the step-size schedule
$\mu_k = \frac{L_{\max}}{1+0.05 (k/N_b)^{1/p^\ast+0.01}},$
with $L_{\max} = \max_{j\in[N_b]} \|{\mathbf{A}}_j\|$.
{This satisfies the summability conditions $\sum_{k=1}^\infty \mu_k=\infty$ and $\sum_{k=1}^\infty \mu_k^{p^*}<\infty$ required by Theorem \ref{thm:as_lin_convergence}}.
The operator norm $\|{\mathbf{A}}_j\|=\|{\mathbf{A}}_j\|_{\CL^{r_{\CX}}\rightarrow \CL^{r_{\CY}}}=\max_{{\boldsymbol{\mathsf{x}}}\neq 0} \frac{\|{\mathbf{A}}_j{\boldsymbol{\mathsf{x}}}\|_{\CL^{r_{\CY}}}}{\|{\boldsymbol{\mathsf{x}}}\|_{\CL^{r_{\CX}}}}$ is estimated using Boyd's power method \cite{B74}.
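A sketch of Boyd's power method for estimating the operator norm (the iteration below is the standard fixed-point scheme; it always returns a lower bound on the norm, and for $r_{\CX}=r_{\CY}=2$ it reduces to the classical power method on ${\mathbf{A}}^\ast{\mathbf{A}}$):

```python
import numpy as np

def sigma(v, r):
    """The map v -> |v|^{r-1} sign(v) appearing in Boyd's iteration."""
    return np.abs(v) ** (r - 1) * np.sign(v)

def boyd_norm(A, rx, ry, iters=1000, seed=0):
    """Estimate ||A||_{l^rx -> l^ry} by Boyd's power method (a sketch)."""
    rxs = rx / (rx - 1)                              # dual exponent of rx
    x = np.random.default_rng(seed).random(A.shape[1]) + 0.1
    x /= np.linalg.norm(x, rx)
    for _ in range(iters):
        # fixed point of the maximisation: J(x) proportional to A^T sigma(A x)
        w = A.T @ sigma(A @ x, ry)
        x = sigma(w, rxs)
        x /= np.linalg.norm(x, rx)
    return np.linalg.norm(A @ x, ry)
```

For general exponents the method is a local ascent scheme, so in practice one may restart it from several random initial vectors and keep the largest estimate.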
All the reconstruction algorithms are initialised with a zero vector.
In Fig. \ref{fig:sparse_solution_comparison}, we compare the reconstructions with settings (a)-(d) for exact data.
We observe from Fig. \ref{fig:sparse_solution_comparison}(a) that settings (a) and (b), with $\CX=\CL^2(\Omega)$, result in smooth solutions that fail to capture the sparsity structure of the true signal ${\boldsymbol{\mathsf{x}}}^\dag$. In contrast, the choice $\CX=\CL^{1.5}(\Omega)$ recovers a sparser solution, and the choice $\CX=\CL^{1.1}(\Omega)$ gives a truly sparse reconstruction, but with peaks that overshoot the magnitude of ${\boldsymbol{\mathsf{x}}}^\dag$.
This might be related to the fact ${\boldsymbol{\mathsf{x}}}^\dag$ exhibits a cluster structure in addition to sparsity, which is not accounted for in the choice of the space $\CX=\CL^{1.1}(\Omega)$ \cite{ZouHastie:2005,JinLorenzSchiffler:2009}. Fig. \ref{fig:sparse_solution_comparison}(b) indicates that early stopping would result in lower peaks and significantly reduce the overshooting, but a more explicit form of regularisation \cite{ZouHastie:2005,JinLorenzSchiffler:2009} might allow faster convergence.
\begin{figure}[h!]
\centering
\setlength{\tabcolsep}{0pt}
\begin{tabular}{cc}
\includegraphics[width=.488\textwidth]{figures/SolutionComparison.pdf} &
\includegraphics[width=.488\textwidth]{figures/EarlyStopping.pdf}\\
{\scriptsize{(a) Changing $\CX$ and $\CY$ for $N_b=100$ }}& {\scriptsize{(b) Progression of iterates for $\CX=\CL^{1.1}(\Omega)$}}
\end{tabular}
\caption{Comparison of reconstructed solutions after $500$ epochs.}
\label{fig:sparse_solution_comparison}
\end{figure}
In Fig. \ref{fig:objective_function}, we investigate the convergence of the objective value with respect to the number of batches $N_b$ and the choice of the solution space $\CX$.
As expected, having a larger number of batches results in a faster initial convergence, but also in increased variance, as shown by the oscillations.
Moreover, the variance is lower in the case of a smoother space $\CX$ (promoting smoother solutions), where the variance existing in early epochs is dramatically reduced later on.
This observation can be explained by the gradient expression $g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)={\mathbf{A}}_i^\ast \svaldmapY{p}({\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i)$, which tends to zero as the SGD iterates converge to the true solution $\xbs^\dagger$, and so does its variance; the larger the exponent $p$, the faster the convergence.
\begin{figure}[h!]
\centering
\setlength{\tabcolsep}{0pt}
\begin{tabular}{cc}
\includegraphics[width=.488\textwidth]{figures/NumBatchesComparisonX11Updated.pdf}
& \includegraphics[width=.488\textwidth]{figures/NumBatchesComparisonX15Updated.pdf} \\
{\scriptsize{(a) $\CX=\CL^{1.1}(\Omega)$ and $\CY=\CL^2(\Omega)$ }}& {\scriptsize{(b) $\CX=\CL^{1.5}(\Omega)$ and $\CY=\CL^2(\Omega)$}}
\end{tabular}
\caption{The variation of $\frac{1}{p}\sum_{i=1}^{N_b}\yN{{\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}_k-{\boldsymbol{\mathsf{y}}}_i}^p$ with respect to the number of batches $N_b$.}
\label{fig:objective_function}
\end{figure}
Next we examine the performance of the algorithm when the observational data ${\boldsymbol{\mathsf{y}}}^\delta$ contains (random-valued) impulse noise, cf. Fig. \ref{fig:noisy_reconstruction}, {which is generated by
\begin{align*}
y^\delta_i = \left\{\begin{aligned}
y_i^\dag, & \quad\mbox{with probability } 1-p,\\
(1-\xi)y_i^\dag, & \quad\mbox{with probability } p/2,\\
1.4\xi +(1-\xi)y_i^\dag, & \quad\mbox{with probability } p/2,
\end{aligned}\right.
\end{align*}
where $p\in (0,1)$ denotes the percentage of corruption (which is set to $0.05$ in the experiment) and $\xi\sim{\rm Uni}(0.1, 0.4)$ follows a uniform distribution over the interval $(0.1,0.4)$.
It is known that $\CL^r(\Omega)$ fitting with $r$ close to $1$ is well suited to impulsive noise.}
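The random-valued impulse noise model above can be implemented directly (a sketch; the parameter names are illustrative, and `frac` denotes the corruption percentage $p$ of the model to avoid a clash with the exponent $p$):

```python
import numpy as np

def impulse_noise(y, frac=0.05, lo=0.1, hi=0.4, seed=0):
    """Random-valued impulse noise per the displayed model, with xi ~ Uni(lo, hi)."""
    rng = np.random.default_rng(seed)
    u = rng.random(y.shape)                       # decides which entries are corrupted
    xi = rng.uniform(lo, hi, y.shape)
    y_noisy = y.copy()
    down = u < frac / 2                           # entries scaled down by (1 - xi)
    up = (u >= frac / 2) & (u < frac)             # entries pushed towards 1.4 * xi
    y_noisy[down] = (1 - xi[down]) * y[down]
    y_noisy[up] = 1.4 * xi[up] + (1 - xi[up]) * y[up]
    return y_noisy
```
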
This allows investigating the role of not only the space $\CX$ but also $\CY$.
The results in Fig. \ref{fig:noisy_reconstruction}(b) show that the choice $\CY=\CL^{r_{\CY}}(\Omega)$, with $r_{\CY}$ close to $1$, performs significantly better.
Indeed, the Hilbert setting $\CX=\CY=\CL^2(\Omega)$ produces overly smooth, non-sparse solutions with pronounced artefacts. In sharp contrast, setting $\CX=\CY=\CL^{1.1}(\Omega)$ yields solutions that can correctly identify the sparsity structure of the true solution, and have no artefacts.
As before, the reconstruction in this setting overestimates the signal magnitude on its support, which is exacerbated as the exponent $r_{\CY}$ gets closer to $1$.
\begin{figure}[h!]
\centering
\setlength{\tabcolsep}{0pt}
\begin{tabular}{cc}
\includegraphics[width=.488\textwidth]{figures/NoisyData.pdf}
& \includegraphics[width=.505\textwidth]{figures/NoisyReconstruction.pdf} \\
{\scriptsize{(a) Data with impulse noise }}& {\scriptsize{(b) Reconstructions with respect to $\CX$ and $\CY$}}
\end{tabular}
\caption{The reconstruction performance in case of impulse noise. The algorithms utilised $N_b=100$ batches and were run for $250$ epochs.}
\label{fig:noisy_reconstruction}
\end{figure}
Lastly, we investigate the convergence behaviour of the method for the generalised model \eqref{eqn:q_kaczmarz} in Section \ref{sec:further_kaczmarz}, where the stochastic directions $g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)$ are defined as $g({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{y}}},i)={\mathbf{A}}_i^\ast \svaldmapY{q}({\mathbf{A}}_i{\boldsymbol{\mathsf{x}}}-{\boldsymbol{\mathsf{y}}}_i)$, with $q=r_{\CY}$ different from the convexity parameter $p$ of the space $\CX$. The results in Fig. \ref{fig:kaczmarz_q} show that this can indeed be beneficial for the performance of the method: the reconstructions are more accurate not only in terms of the solution support, but also in terms of the magnitudes of the non-zero entries. However, the precise mechanism behind this excellent performance remains largely elusive.
\begin{figure}[h!]
\centering
\setlength{\tabcolsep}{0pt}
\begin{tabular}{cc}
\includegraphics[width=.488\textwidth]{figures/pYDependence.pdf}
& \includegraphics[width=.488\textwidth]{figures/XpYDependence.pdf} \\
{\scriptsize{(a) Standard vs generalised Kaczmarz }}& {\scriptsize{(b) Changing $\CX$ in generalised Kaczmarz}}
\end{tabular}
\caption{The dependence of the reconstructions in the case of impulse noise on the choice of $q$ parameter in the generalised model \eqref{eqn:q_kaczmarz}. The results are obtained using $N_b=100$ batches, after $250$ epochs.}
\label{fig:kaczmarz_q}
\end{figure}
\subsection{Computed Tomography}
Now we numerically investigate the behaviour of SGD on computed tomography (CT), with respect to the model spaces $\CX$ and $\CY$ and data noise. In CT reconstruction, we aim to determine the density of cross sections of an object by measuring the
attenuation of X-rays as they propagate through the object \cite{Natterer:2001}. Mathematically, the forward map is given by the Radon transform.
In the experiments, the discrete forward operator ${\mathbf{A}}$ is defined by a $2D$ parallel beam geometry, with $180$ projection angles at $1^\circ$ angular separation, $256$ detector elements, and a pixel size of $0.1$. The sought-for signal ${\boldsymbol{\mathsf{x}}}^\dag$ is a (sparse) phantom, cf. Fig. \ref{fig:CT_data}(a).
After applying the forward operator ${\mathbf{A}}$, either Gaussian (with mean zero and variance $0.01$) or salt-and-pepper noise is added.
In the latter setting we consider low (with $5\%$ of values changed to either salt or pepper values) and high ($10\%$ of values changed) noise regimes.
The resulting sinograms (i.e. measurement data) are shown in Fig. \ref{fig:CT_data}(b)-(d).
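The salt-and-pepper corruption can be sketched as follows (a hypothetical helper; the exact salt/pepper values and sampling scheme used in our experiments are assumptions here):

```python
import random

def salt_and_pepper(y, frac, lo=0.0, hi=1.0, seed=0):
    """Corrupt a fraction `frac` of the entries of `y`, alternating between
    the salt value `hi` and the pepper value `lo` (illustrative choices)."""
    rng = random.Random(seed)
    out = list(y)
    # pick distinct entries to corrupt
    idx = rng.sample(range(len(out)), int(frac * len(out)))
    for j, k in enumerate(idx):
        out[k] = hi if j % 2 == 0 else lo
    return out
```

With `frac=0.05` and `frac=0.10` this reproduces the low and high noise regimes described above.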
Note that standard quality metrics in image assessment, such as the peak signal-to-noise ratio or the mean squared error, are computed from the $\ell^2$-distance between images and thus carry an implicit bias towards Hilbert spaces and smooth signals, whereas a metric that emphasises sparsity is more pertinent to sparsity-promoting spaces.
To provide a balanced comparison, we report the following two metrics based on normalised $\ell^1$- and $\ell^2$-norms:
$\delta_1({\boldsymbol{\mathsf{x}}})=\|\xbs^\dagger-{\boldsymbol{\mathsf{x}}}\|_{\ell^1}/\|\xbs^\dagger\|_{\ell^1}$ and $\delta_2({\boldsymbol{\mathsf{x}}})=\|\xbs^\dagger-{\boldsymbol{\mathsf{x}}}\|_{\ell^2}/\|\xbs^\dagger\|_{\ell^2}.$
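Both metrics are instances of a normalised $\ell^p$ error, which can be sketched as (illustrative helper, not from our implementation):

```python
def delta(x_true, x, p):
    """Normalised l^p reconstruction error: ||x_true - x||_p / ||x_true||_p."""
    num = sum(abs(a - b) ** p for a, b in zip(x_true, x)) ** (1.0 / p)
    den = sum(abs(a) ** p for a in x_true) ** (1.0 / p)
    return num / den
```

Here `delta(x_true, x, 1)` gives $\delta_1$ and `delta(x_true, x, 2)` gives $\delta_2$.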
\begin{figure}[h!]
\centering
\setlength{\tabcolsep}{0pt}
\begin{tabular}{cc}
\includegraphics[height=.3\textwidth]{figures/PHANTOM.pdf}
& \includegraphics[height=.3\textwidth]{figures/SINO_GAUSSIAN_NOISY.pdf}\\
{\scriptsize{(a) Original phantom }}& {\scriptsize{(b) Gaussian noise measurement }}\\
\includegraphics[height=.3\textwidth]{figures/SINO_TRUESNP_LOW.pdf} & \includegraphics[height=.3\textwidth]{figures/SINO_TRUESNP_HIGH.pdf} \\
{\scriptsize{(c) Low noise salt-and-pepper measurement }} & {\scriptsize{(d) High noise salt-and-pepper measurement}}
\end{tabular}
\caption{The plot in {\rm(}a{\rm)} shows the phantom to be recovered and {\rm(}b{\rm)}-{\rm(}d{\rm)} show noisy measurements used in the recovery: in {\rm(}b{\rm)}, random Gaussian noise was added, and {\rm(}c{\rm)}-{\rm(}d{\rm)} are sinogram data degraded by salt-and-pepper noise in the low {\rm(}$5\%${\rm)} and high {\rm(}$10\%${\rm)} noise regimes.}
\label{fig:CT_data}
\end{figure}
First, we show the performance on Gaussian noise, where we compare the Hilbert setting ($\CX=\CY=\CL^2$) with two Banach settings ($\CX=\CL^{1.1}$, $\CY=\CL^2$, and $\CX=\CY=\CL^{1.1}$).
In the reconstruction, we employ step-sizes $\mu_k = \frac{L_{\max}/2}{1+0.05 (k/N_b)^{1/p^\ast+0.01}}$, with $L_{\max} = \max_{j\in[N_b]} \|{\mathbf{A}}_j\|$.
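The decaying schedule can be sketched as follows, with $p^\ast=p/(p-1)$ the conjugate exponent of $\CX=\CL^p$ and $k/N_b$ counting elapsed epochs (helper name is illustrative):

```python
def step_size(k, N_b, p, L_max):
    """mu_k = (L_max/2) / (1 + 0.05*(k/N_b)**(1/p_conj + 0.01)),
    where p_conj = p/(p-1) is the conjugate exponent of X = L^p."""
    p_conj = p / (p - 1.0)
    return (L_max / 2.0) / (1.0 + 0.05 * (k / N_b) ** (1.0 / p_conj + 0.01))
```

For $p=2$ the decay exponent is $0.51$; for $p=1.1$ it is roughly $0.10$, i.e. a much slower decay.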
Fig. \ref{fig:CT_gaussian_60} shows exemplary reconstructions.
In all three settings much of the noise is retained in the reconstruction, and whereas the Hilbert setting is better at recovering the magnitude of non-zero entries, the Banach settings are better at recovering the support.
Moreover, we observe that the Banach setting with a sparse signal space $\CX=\CL^{1.1}$, and a smooth observation space $\CY=\CL^2$, has the best performance in terms of $\delta_1$ and $\delta_2$ metrics.
The Hilbert model performs better than the fully sparse model $\CX=\CY=\CL^{1.1}$ in terms of the smooth metric $\delta_2$, but worse in the sparsity promoting metric ($\delta_1$).
We also consider the Banach setting for the generalised model \eqref{eqn:q_kaczmarz}, with $\CX=\CY=\CL^{1.1}$ and $p_{\CY}=1.1$, where we study the effects of early stopping.
Fig. \ref{fig:CT_gaussian_epochs} shows that this setting recovers the support more accurately (and actually does so very early on) and recovers the magnitudes better, but that a form of regularisation (through e.g. early stopping) can be beneficial, since in the later epochs SGD iterates again tend to overshoot on the support.
A similar behaviour can be observed for the other studied Banach space settings, but not for the Hilbert space setting, which does not recover the support.
\begin{figure}[h!]
\centering
\small
\setlength{\tabcolsep}{0pt}
\begin{tabular}{ccc}
\includegraphics[height=.27\textwidth]{figures/HGD_GAUSSIAN_60.pdf}
& \includegraphics[height=.27\textwidth]{figures/BGD_X11Y2_GAUSSIAN_60.pdf} & \includegraphics[height=.27\textwidth]{figures/BGD_X11Y11_GAUSSIAN_60.pdf} \\
{\scriptsize{(a) $\CX=\CY=\CL^2$ }}& {\scriptsize{(b)$\CX=\CL^{1.1}$, $\CY=\CL^{2}$ }}& {\scriptsize{(c) $\CX=\CY=\CL^{1.1}$ }}\\
{\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $2.643/0.528$ }}& {\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $0.711/0.341$ }}& {\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $2.195/0.620$}}
\end{tabular}
\caption{The reconstruction of the phantom from the observed sinograms degraded by Gaussian noise, cf. Fig. \ref{fig:CT_data}{\rm(}b{\rm)}.
The algorithms use $N_b=60$ batches and were run for $200$ epochs.}
\label{fig:CT_gaussian_60}
\end{figure}
\begin{figure}[h!]
\centering
\small
\setlength{\tabcolsep}{0pt}
\begin{tabular}{ccc}
\includegraphics[height=.27\textwidth]{figures/X11Y11pY_GAUSSIAN_EPOCH_5_60.pdf}
& \includegraphics[height=.27\textwidth]{figures/X11Y11pY_GAUSSIAN_EPOCH_50_60.pdf} & \includegraphics[height=.27\textwidth]{figures/X11Y11pY_GAUSSIAN_EPOCH_200_60.pdf} \\
{\scriptsize{(a) $5$ epochs }}& {\scriptsize{(b) $50$ epochs}}& {\scriptsize{(c) $200$ epochs}}\\
{\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $0.702/0.627$ }}& {\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $0.263/0.235$ }}& {\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $0.604/0.308$}}
\end{tabular}
\caption{The evolution of the quality of reconstruction from sinograms degraded by Gaussian noise with respect to the number of epochs. The algorithm uses $\CX=\CY=\CL^{1.1}$ and $p_{\CY}=1.1$, with $N_b=60$ batches.}
\label{fig:CT_gaussian_epochs}
\end{figure}
We next investigate the performance for low and high salt-and-pepper noise.
We compare the Hilbert setting with two Banach settings: the standard SGD with $\CX=\CY=\CL^{1.1}$ and the generalised model \eqref{eqn:q_kaczmarz} with $\CX=\CY=\CL^{1.1}$ and $p_{\CY}=1.1$.
For the reconstruction, we employ step-sizes $\mu_k = \frac{0.5}{1+0.05 (k/N_b)^{1/p^\ast+0.01}}$.
The results in Fig. \ref{fig:CT_sp_60} show the reconstructions after $200$ epochs with $N_b=60$ batches. In the low noise regime, the Hilbert setting can reconstruct the general shape of the phantom, but retains a lot of the noise and exhibits streaking artefacts in the background. The reconstruction in the high noise regime is of much poorer quality.
The standard Banach SGD shows good behaviour in the low-noise setting, reconstructing well both the sparsity structure and the magnitudes, but its performance degrades in the high noise setting.
In sharp contrast, the model \eqref{eqn:q_kaczmarz} shows nearly perfect reconstruction performance: the phantom is well recovered, with intensities on the correct scale, in both the low and high noise regimes.
As before, we observe that the Banach methods tend to slightly overestimate the overall intensities, though the recovered values are comparable to the true solution.
Overall, the Hilbert setting shows the qualitatively worst performance, in both the $\ell^1$- and $\ell^2$-norm sense, and the model \eqref{eqn:q_kaczmarz} shows the best.
\begin{figure}[h!]
\centering
\small
\setlength{\tabcolsep}{-2pt}
\begin{tabular}{ccc}
\includegraphics[height=.27\textwidth]{figures/HGD_TRUESNP_LOW.pdf}
& \includegraphics[height=.27\textwidth]{figures/X11Y11_TRUESNP_LOW.pdf} & \includegraphics[height=.27\textwidth]{figures/X11Y11pY_TRUESNP_LOW.pdf} \\
{\scriptsize{(a) $\CX=\CY=\CL^2$ in low noise}}& {\scriptsize{(b)$\CX=\CY=\CL^{1.1}$ in low noise}}& {\scriptsize{(c) $\CX=\CY=\CL^{1.1}$, $p_{\CY}=1.1$ in low noise}}\\
{\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $18.67/3.71$ }}&{\scriptsize{ $\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $1.80/0.544$}} &{\scriptsize{ $\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $2.43/3.68\cdot \text{e-}3$ }}\\
\includegraphics[height=.27\textwidth]{figures/HGD_TRUESNP_HIGH.pdf}
& \includegraphics[height=.27\textwidth]{figures/X11Y11_TRUESNP_HIGH.pdf} & \includegraphics[height=.27\textwidth]{figures/X11Y11pY_TRUESNP_HIGH.pdf} \\
{\scriptsize{(a) $\CX=\CY=\CL^2$ in high noise}}& {\scriptsize{(b) $\CX=\CY=\CL^{1.1}$ in high noise}}& {\scriptsize{(c) $\CX=\CY=\CL^{1.1}$, $p_{\CY}=1.1$ in high noise }}\\
{\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $26.61/5.19$ }}& {\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $5.37/1.54$ }}& {\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $3.72/6.03\cdot\text{e-}3$}}
\end{tabular}
\caption{The reconstruction of the phantom from the observed sinograms, degraded with low {\rm(}top{\rm)} and high {\rm(}bottom{\rm)} salt-and-pepper noise, respectively,
obtained using the Hilbert space model {\rm(}$\CX=\CY=\CL^2${\rm)} {\rm(}left{\rm)}, the Banach model {\rm(}$\CX=\CY=\CL^{1.1}${\rm)} {\rm(}middle{\rm)} and the Banach model {\rm(}$\CX=\CY=\CL^{1.1}${\rm)} with the generalised Kaczmarz scheme {\rm(}$p_{\CY}=1.1${\rm)} {\rm(}right{\rm)}. The algorithms use $N_b=60$ batches and were run for $200$ epochs.}
\label{fig:CT_sp_60}
\end{figure}
Lastly, we investigate a more challenging setting in which the noise affects not only the sinograms but also the original phantom; the ground-truth image is then only approximately sparse.
The phantom is degraded with Gaussian noise (zero mean and variance $0.01$) after which
we apply the forward operator to the resulting noisy phantom. We then add either Gaussian (zero mean and variance $0.01$) or salt-and-pepper noise (affecting $3\%$ of measurements); see Fig. \ref{fig:CT_prepost_data} for
representative images. The reconstruction algorithms use SGD with a decaying step-size schedule, $\mu_k = \frac{0.2}{1+0.05 (k/N_b)^{1/p^\ast+0.01}}$.
\begin{figure}[h!]
\centering
\small
\setlength{\tabcolsep}{0pt}
\begin{tabular}{ccc}
\includegraphics[height=.22\textwidth]{figures/GAUSSIAN_PHANTOM.pdf}
& \includegraphics[height=.22\textwidth]{figures/GAUSSGAUSS_SINO.pdf} & \includegraphics[height=.22\textwidth]{figures/GAUSSTRUESNP_SINO.pdf} \\
{\scriptsize{(a) Noisy Phantom }}& {\scriptsize{(b) Gaussian measurement noise }}& {\scriptsize{(c) Salt-and-pepper measurement noise}}
\end{tabular}
\caption{The phantoms and sinograms for the forward model with both pre- and post-measurement noise. The phantom on the left is degraded by Gaussian noise. After applying the forward operator, either Gaussian {\rm(}middle{\rm)} or salt-and-pepper noise {\rm(}right{\rm)} is added to the sinogram.}
\label{fig:CT_prepost_data}
\end{figure}
The reconstructions for data with Gaussian noise in both phantom and sinogram are shown in Fig. \ref{fig:CT_prepost_gaussgauss}. As before, the reconstructions in the Hilbert setting are comparable to, but slightly worse than, those in the Banach settings. The Banach methods are better at recovering the sparsity structure of the solution and achieve better reconstruction quality metrics, though they do not completely remove the noise.
In the second setting, with Gaussian noise affecting the phantom and salt-and-pepper noise affecting the sinogram, the difference in reconstruction quality between the Hilbert and Banach space settings is significantly more pronounced, cf. Fig. \ref{fig:CT_prepost_gausssnp}. In both settings, the choice of the spaces $\CX$ and $\CY$ can have a significant impact on the reconstruction quality, especially on the amount of noise retained in the background. Moreover, further improvements can be achieved by adding an explicit penalty to the objective function.
\begin{figure}[h!]
\centering
\small
\setlength{\tabcolsep}{0pt}
\begin{tabular}{ccc}
\includegraphics[height=.27\textwidth]{figures/HGD_GAUSSGAUSS.pdf}
& \includegraphics[height=.27\textwidth]{figures/X11Y11pY_GAUSSGAUSS.pdf} & \includegraphics[height=.27\textwidth]{figures/X11Y19pY_GAUSSGAUSS.pdf} \\
{\scriptsize{(a) $\CX=\CY=\CL^2$ }}& {\scriptsize{(b) $\CX=\CY=\CL^{1.1}$, $p_{\CY}=1.1$ }}& {\scriptsize{(c) $\CX=\CL^{1.1}$, $\CY=\CL^{1.9}$, $p_{\CY}=1.9$ }}\\
{\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $5.65/1.12$ }}& {\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $3.16/0.632$}} & {\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $2.99/0.561$}}
\end{tabular}
\caption{The reconstruction of the phantom from the observed sinograms with pre- and post-measurement Gaussian noise.
The algorithms use $N_b=60$ batches and were run for $200$ epochs.}
\label{fig:CT_prepost_gaussgauss}
\end{figure}
\begin{figure}[h!]
\centering
\small
\setlength{\tabcolsep}{0pt}
\begin{tabular}{ccc}
\includegraphics[height=.27\textwidth]{figures/HGD_GAUSSTRUESNP.pdf}
& \includegraphics[height=.27\textwidth]{figures/X13Y13_GAUSSTRUESNP.pdf} & \includegraphics[height=.27\textwidth]{figures/X11Y11_GAUSSTRUESNP.pdf} \\
{\scriptsize{(a) $\CX=\CY=\CL^2$ }}& {\scriptsize{(b) $\CX=\CY=\CL^{1.3}$, $p_{\CY}=1.3$ }}& {\scriptsize{(c) $\CX=\CY=\CL^{1.1}$, $p_{\CY}=1.1$}} \\
{\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $17.48/3.52$ }}& {\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $3.46/0.86$ }}& {\scriptsize{$\delta_1({\boldsymbol{\mathsf{x}}})/\delta_2({\boldsymbol{\mathsf{x}}})$: $3.14/0.62$}}
\end{tabular}
\caption{The reconstructed phantom from the sinograms with a Gaussian pre-measurement and a salt-and-pepper {\rm(}post-{\rm)}measurement noise.
The algorithms use $N_b=60$ batches and were run for $400$ epochs.}
\label{fig:CT_prepost_gausssnp}
\end{figure}
\section*{Acknowledgements}
{We are very grateful to three anonymous referees for their constructive comments which have led to a significant improvement of the quality of the paper.}
\section{Introduction}
Plasma wakefield acceleration (PWFA) is one of the most promising novel acceleration technologies, able to generate accelerating gradients at the multi-\SI{}{\giga\volt/\meter} level \cite{Blumenfeld}. There are, however, challenges that still need to be addressed before this technology can be applied to a future linear collider. Transverse instabilities caused by transverse wakefields, i.e. fields excited when a driving particle traverses the accelerating cavity with a transverse misalignment, are considered one of the main challenges, as such wakefields are known to constrain the drive beam to main beam efficiency in CLIC \cite{CLIC_CDR}. Transverse wakefields in PWFA can be several orders of magnitude larger than in metallic cavities due to the significantly smaller dimensions of a plasma ion bubble, so a good understanding of possible mitigation methods is necessary for a global parameter optimization for a PWFA-LC (plasma wakefield acceleration linear collider). One such mitigation method is BNS damping \cite{BNS}, a well-known technique in RF accelerators, where a correlated energy spread is induced along the beam to disrupt the coherent buildup of transverse oscillations.
Several conceptual parameter sets for a PWFA-LC have been proposed to identify the main challenges and base parameters, one example being the Snowmass parameter set \cite{Snowmass_Erik}. However, in contrast to CLIC, the effect of transverse wakefields on efficiency has so far not been taken into account in PWFA-LC parameter studies, even though Lebedev et al. have studied the relationship between efficiency and instability and derived an analytical expression \cite{Lebedev_2017}. In this paper, we conduct a parameter study of the efficiency of a \SI{1.5}{\tera\electronvolt} plasma wakefield accelerator, using the Snowmass parameter set as a basis but taking into account transverse wakefields and the damping effect of energy spread by means of a parameter scan.
\section{Transverse wake function}
Plasma acceleration is very computationally expensive to simulate, hence it is very challenging to consider the effect of transverse instabilities on efficiency using traditional PIC codes. Several studies have proposed simplified models for transverse beam motion in PWFA using coupled differential equations for the beam and plasma channel centroids \cite{Huang, Mehrling1, Mehrling2}. In this paper, we assume that the transverse forces can be expressed using the wake function formalism \cite{wakefield_Wilson,wakefield_Chao}, which is used to describe the well-known beam breakup (BBU) instability in RF accelerators and thus allows for easier comparison with RF accelerators.
In CLIC \cite{RAST_Daniel}, single beam transverse wakefield for small distances between a driving particle located at $\xi'$ and a witness particle located at $\xi$ is modelled using \begin{equation}
W_\perp(\xi'-\xi)=\frac{2}{\pi\varepsilon_0}\frac{\xi'-\xi}{a^4}\Theta(\xi'-\xi),
\label{eq:transWakeFunction}
\end{equation}
where $\varepsilon_0$ is the permittivity in vacuum, $a$ is the accelerating structure iris radius and $\Theta(\xi)$ is the Heaviside step function. The structure iris is however not well-defined for a plasma, but an effective structure iris \cite{Lebedev_2017, G.Stupakov} can be defined by $a=r_\mathrm{b}(\xi')+\alpha k_\mathrm{p}^{-1}$. Here $r_\mathrm{b}(\xi')$ is the plasma bubble radius at the location of the driving particle, $\alpha$ a numerical coefficient on the order of one, and the plasma skin depth $k_\mathrm{p}^{-1}$ accounts for the penetration depth of the electromagnetic fields. Equation \eqref{eq:transWakeFunction} along with the modification $a=r_\mathrm{b}(\xi')+\alpha k_\mathrm{p}^{-1}$ has been proposed for the PWFA blowout regime in \cite{G.Stupakov}. In this paper, we adopt this wake function, and use the value $\alpha=0.75$, which is the same value used in \cite{G.Stupakov}.
For a beam slice with charge $q$ located at $\xi$, the transverse wake force per unit charge is given by a convolution integral
\begin{equation}
\frac{F_\perp(\xi,s)}{q} = -e\int\limits_{\xi_\sub{H}}^\xi\! W_\perp(\xi'-\xi) \lambda(\xi') X(\xi',s) \,\mathrm{d}\xi',
\label{eq:wakeForce}
\end{equation}
where $e$ is the elementary charge, $\xi_\sub{H}$ is the longitudinal position of the beam head, $\lambda(\xi)$ is the longitudinal number density of the main beam and $X(\xi,s)$ is the mean transverse offset of the beam slice located at $\xi$.
Equation \eqref{eq:wakeForce} gives the transverse force along the main beam after a propagation length $s$, as is illustrated in figure \ref{fig:2019-10-17_wake_convolution_Stupakov-model_s=0m} for a main beam with constant transverse offset propagating along the $\xi$-axis.
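A direct discretisation of equations \eqref{eq:transWakeFunction} and \eqref{eq:wakeForce} can be sketched as follows (Python; the slice data structure and function names are illustrative):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]
E_CHG = 1.602176634e-19   # elementary charge [C]

def wake_function(dxi, a):
    """W_perp(dxi) = (2/(pi*eps0)) * dxi / a**4 for dxi > 0
    (driving particle ahead of the witness), else 0."""
    return (2.0 / (math.pi * EPS0)) * dxi / a ** 4 if dxi > 0 else 0.0

def wake_force_per_charge(xi, slices, r_b, k_p, alpha=0.75):
    """F_perp(xi)/q = -e * sum over slices at xi_p of
    W_perp(xi_p - xi) * lambda(xi_p) * X(xi_p) * d_xi,
    with the effective iris a = r_b(xi_p) + alpha/k_p.
    `slices` holds (xi_p, lam, X, d_xi) tuples; r_b is the bubble radius."""
    total = 0.0
    for xi_p, lam, X, d_xi in slices:
        a = r_b(xi_p) + alpha / k_p
        total += wake_function(xi_p - xi, a) * lam * X * d_xi
    return -E_CHG * total
```

The step function in `wake_function` ensures that only slices ahead of the witness slice contribute, as in the convolution integral.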
\begin{figure}[h]
\includegraphics[width=15pc]{Figures/2019-10-17_wake_convolution_Stupakov-model_s=0m.eps}\hspace{2pc}%
\begin{minipage}[b]{16pc}\caption{The initial transverse force per unit witness charge $F_\perp(\xi,s=0)$ for a main beam with constant transverse offset calculated with equation \eqref{eq:wakeForce} and calculated directly from the output fields of a QuickPIC simulation measured on axis. The longitudinal particle distribution $N(\xi)$ of the main beam is also included in the figure. The beam propagates towards higher values of $\xi$.}
\label{fig:2019-10-17_wake_convolution_Stupakov-model_s=0m}
\end{minipage}
\end{figure}
The evolution of the transverse force at three different beam slices is benchmarked against QuickPIC \cite{QuickPIC_Huang} simulation results: $\lambda(\xi)$ and $X(\xi)$ in equation \eqref{eq:wakeForce} are extracted from QuickPIC simulations, and the transverse force predicted by the convolution integral is then compared against the corresponding fields extracted from the QuickPIC results. Figures \ref{fig:transWakeEvolution_0sigmaZ_from_center_slice_Stupakov}--\ref{fig:transWakeEvolution_-2sigmaZ_from_center_slice_Stupakov} show the evolution of the transverse wake on beam slices located $0$--$2\,\sigma_z$ behind the main beam center. Inside the ion bubble, the transverse fields acting on the main beam consist of the background ion focusing and intra-beam wakefields, which are similar to dipole fields. To avoid noise, the dipole fields extracted from QuickPIC are measured on axis. Except for a small disagreement at negative amplitudes, the model shows good agreement with the simulations.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-10-10_transWakeEvolution_0sigmaZ_from_center_slice_Stupakov.eps}
\caption{Beam center.}
\label{fig:transWakeEvolution_0sigmaZ_from_center_slice_Stupakov}
\end{subfigure}\hspace{0.35pc}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-10-10_transWakeEvolution_-1sigmaZ_from_center_slice_Stupakov.eps}
\caption{One $\sigma_z$ behind the beam center.}
\end{subfigure}\hspace{0.35pc}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-10-10_transWakeEvolution_-2sigmaZ_from_center_slice_Stupakov.eps}
\caption{$2\sigma_z$ behind the beam center.}
\label{fig:transWakeEvolution_-2sigmaZ_from_center_slice_Stupakov}
\end{subfigure}
\caption{Evolution of the transverse wake force per unit charge at main beam slices at various positions along the beam. The fields extracted from QuickPIC are measured on axis.}
\end{figure}
\section{Simplified quasi-static model}
A simplified quasi-static \cite{QuickPIC_Huang} model was developed in order to exploit the wake function formalism to provide an efficient way of studying transverse instabilities in the main beam in the blow-out regime of a \SI{}{\tera\electronvolt}-scale plasma collider. Similarly to QuickPIC \cite{QuickPIC_Huang}, the simplified model also utilizes the quasi-static approximation, where it is assumed that the ultra-relativistic beam evolves over a much longer time scale compared to the plasma. Mathematically, this is described by the coordinate transformation $(x,y,z,t)\to (x,y,\xi=z-ct,s=ct)$. The time derivative can then be written as
\begin{equation}
\frac{\partial}{\partial t} = \frac{\partial\xi}{\partial t}\frac{\partial}{\partial\xi} + \frac{\partial s}{\partial t}\frac{\partial}{\partial s} = -c\frac{\partial}{\partial\xi} + c\frac{\partial}{\partial s}.
\end{equation}
For an ultra-relativistic beam particle, $\partial_s \gg \partial_\xi$ so that $\partial_t\approx c\partial_s$.
The main beam is sliced longitudinally into slices of equal thickness. Assuming that the main beam placed inside the plasma ion bubble does not penetrate the bubble boundary during propagation, we make the ansatz that the transverse oscillation of a beam slice located at $\xi$ can be described by
\begin{equation}
\frac{\partial^2}{\partial s^2}X(\xi,s) + \frac{1}{\beta(\xi,s)^2}X(\xi,s) = \frac{e^2}{\mathcalboondox{E}(\xi,s)}\mathcal{W}_\perp(\xi,s),
\label{eq:transverseOscillation}
\end{equation}
where $\beta(\xi,s)=\sqrt{2\gamma(\xi,s)}/k_\mathrm{p}$ is the beta function, and $\mathcalboondox{E}(\xi,s)=\gamma(\xi,s)m_\mathrm{e}c^2$ is the electron energy of an electron located at $\xi$, that has been accelerated by the longitudinal field $E_z(\xi)$ for a distance $s$. The second term of equation \eqref{eq:transverseOscillation} represents the betatron oscillation caused by the focusing forces of the ion background, while the driving term is attributed to the transverse wakefields. All the preceding slices contribute to the driving term through the convolution integral
\begin{equation}
\mathcal{W}_\perp(\xi,s) = \int\limits_{\xi_\sub{H}}^\xi\! W_\perp(\xi'-\xi) \lambda(\xi') X(\xi',s) \,\mathrm{d}\xi'.
\label{eq:transWake}
\end{equation}
The interaction with the plasma and the drive beam is represented by the $1/a^4$-dependence of the wake function and by the interaction with the total longitudinal wakefield $E_\parallel(\xi)$. However, $r_\mathrm{b}(\xi)$ and $E_\parallel(\xi)$ are not described by this model and were calculated numerically with QuickPIC in this study. Assuming that $r_\mathrm{b}(\xi)$ and $E_\parallel(\xi)$ do not change significantly during propagation, these quantities only needed to be calculated once in QuickPIC.
These equations are then solved numerically with the quasi-static approximation where the main beam is evolved in $s$, alternating between propagation with frozen transverse forces and interaction with the plasma ion bubble through equation \eqref{eq:transWake} and \eqref{eq:transWakeFunction}, where the transverse forces are updated.
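One step of this alternating scheme can be sketched as follows: the right-hand side of equation \eqref{eq:transverseOscillation} is frozen over a step $\mathrm{d}s$, and each slice is advanced exactly as a driven harmonic oscillator about the displaced equilibrium $X_\mathrm{eq}=\beta^2 f$ (a minimal sketch that neglects the slow change of $\beta$ and energy within the step):

```python
import math

def advance_slices(X, Xp, beta, force, ds):
    """Advance each slice of X'' + X/beta**2 = f over ds with f frozen.
    X, Xp: per-slice offsets and slopes; beta: beta functions;
    force: per-slice driving terms f = (e^2/E) * W_perp."""
    new_X, new_Xp = [], []
    for x, xp, b, f in zip(X, Xp, beta, force):
        x_eq = b * b * f                      # displaced equilibrium
        c, s = math.cos(ds / b), math.sin(ds / b)
        new_X.append(x_eq + (x - x_eq) * c + b * xp * s)
        new_Xp.append(-(x - x_eq) * s / b + xp * c)
    return new_X, new_Xp
```

After each such step, the transverse forces are recomputed from the updated offsets via the convolution integral, and the procedure repeats.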
This model was benchmarked against QuickPIC by comparing the mean transverse offset of beam slices located $0$--$2\,\sigma_z$ behind the beam center. The results are shown in figures \ref{fig:2019-03-20_meanX_0sigmaZ_slice_Stupakov}--\ref{fig:2019-03-20_meanX_-2sigmaZ_slice_Stupakov}. The simplified model agrees very well with the simulation results as long as the main assumptions are valid. The initial offset $X_0$ was chosen to be $X_0=\SI{3.65}{\micro\meter}$, which is on the order of one $\sigma_x$.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-03-20_meanX_0sigmaZ_slice_Stupakov.eps}
\caption{Beam center.}
\label{fig:2019-03-20_meanX_0sigmaZ_slice_Stupakov}
\end{subfigure}\hspace{0.35pc}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-03-20_meanX_-1sigmaZ_slice_Stupakov.eps}
\caption{One $\sigma_z$ behind the beam center.}
\end{subfigure}\hspace{0.35pc}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-03-20_meanX_-2sigmaZ_slice_Stupakov.eps}
\caption{$2\sigma_z$ behind the beam center.}
\label{fig:2019-03-20_meanX_-2sigmaZ_slice_Stupakov}
\end{subfigure}
\caption{Comparison of the mean transverse position of main beam slices located at various positions. $X_0=\SI{3.65}{\micro\meter}$.}
\end{figure}
\section{Evaluation of the Snowmass parameter set}
Figure \ref{fig:2019-06-28_meanX_0sigmaZ_slice_Stupakov} compares results from a QuickPIC simulation against the simplified model using the Snowmass parameter set. The QuickPIC results show that the Snowmass parameter set produced a highly unstable main beam, eventually causing the tail of the main beam to come into contact with the bubble boundary. It can be seen that the simplified model and QuickPIC were in good agreement until the beam tail penetrated the plasma at $s\approx\SI{140}{\centi\meter}$, as depicted in figure \ref{fig:2019-06-28_QEB+QEP}, after which the transverse motion of the beam could not be described by equation \eqref{eq:transverseOscillation}. Such unstable cases are however irrelevant for this study, as this paper aims to find a set of parameters for a stable main beam, not to model highly unstable oscillations.
Nonetheless, because of the unstable beam, the Snowmass parameter set has to be modified in order to achieve stable propagation with high efficiency and low energy spread. This is done in section \ref{sec:parameterStudy}, where we conduct a parameter study of a \SI{1.5}{\tera\electronvolt} plasma wakefield accelerator using the Snowmass parameter set as a basis.
\begin{figure}[ht]
\begin{minipage}[t]{18pc}
\centering
\includegraphics[width=0.75\columnwidth]{Figures/2019-06-28_meanX_0sigmaZ_slice_Stupakov.eps}
\caption{Comparison of the mean transverse position of the main beam slice located at the center of the beam. The Snowmass parameters were used in this simulation, and resulted in a highly unstable main beam after the beam tail came into contact with the bubble boundary, as seen in figure \ref{fig:2019-06-28_QEB+QEP}. This transverse motion can thus not be described with equation \eqref{eq:transverseOscillation}.}
\label{fig:2019-06-28_meanX_0sigmaZ_slice_Stupakov}
\end{minipage}\hspace{1pc}%
\begin{minipage}[t]{20pc}
\centering
\includegraphics[width=0.7\columnwidth]{Figures/2019-06-28_QEB+QEP.eps}
\caption{Electron number density $n_\mathrm{e}$ per unit initial plasma density $n_0$ and the total longitudinal electric field $E_\parallel(\xi)$ for $s\approx\SI{140}{\centi\meter}$ obtained from QuickPIC simulation with Snowmass parameters. The plasma electron density has been increased by a factor 10 in order to highlight the bubble boundary.}
\label{fig:2019-06-28_QEB+QEP}
\end{minipage}\hspace{1pc}
\end{figure}
\section{Parameter study for a \SI{1.5}{\tera\electronvolt} plasma wakefield accelerator}
\label{sec:parameterStudy}
\subsection{Energy spread, instability and efficiency}
For an initially monochromatic beam with $N$ electrons divided into $n$ slices, the variance of energy is given by
\begin{equation}
\sigma_\mathcalboondox{E}^2 = \frac{1}{N}\sum_{i=1}^n N_i \left( \mathcalboondox{E}_i-\mean{\mathcalboondox{E}} \right)^2 = \frac{1}{N}\sum_{i=1}^n N_i\left( -eE_{\parallel i} s + e\mean{E_\parallel}s \right)^2,
\end{equation}
where $N_i$ and $E_{\parallel i}$ are the number of electrons and the total longitudinal field acting on beam slice $i$ respectively. The relative rms energy spread is then given by
\begin{equation}
\frac{\sigma_\mathcalboondox{E}}{\mean{\mathcalboondox{E}}} = \frac{es}{\mean{\mathcalboondox{E}_0}-e\mean{E_\parallel}s}\sqrt{ \frac{1}{N} \sum_{i=1}^n N_i \left( \mean{E_\parallel}-E_{\parallel i} \right)^2},
\label{eq:sigmaE_E_(s)}
\end{equation}
where $\mean{\mathcalboondox{E}_0}$ is the mean initial energy. In the limit $s\to\infty$, this reduces to
\begin{equation}
\frac{\sigma_\mathcalboondox{E}}{\mean{\mathcalboondox{E}}} = -\frac{1}{\mean{E_\parallel}}\sqrt{ \frac{1}{N} \sum_{i=1}^n N_i \left( \mean{E_\parallel}-E_{\parallel i} \right)^2}.
\label{eq:sigmaE_E_Limit}
\end{equation}
Thus, by using equation \eqref{eq:sigmaE_E_Limit}, the final energy spread can be extrapolated from the initial longitudinal field $E_\parallel(\xi)$, again assuming that $E_\parallel(\xi)$ and the longitudinal particle number distribution do not change significantly during propagation. By extracting $E_\parallel(\xi)$ from QuickPIC simulation results for various combinations of the main beam particle number $N_\mathrm{MB}$, the rms main beam length $\sigma_z$ and the beam separation distance $\Delta\xi$, we obtained a series of contour plots for ${\SI{2e9}{}\leq N_\mathrm{MB} \leq\SI{e10}{}}$ that provide an overview of the effect of $N_\mathrm{MB}$, $\sigma_z$ and $\Delta\xi$ on the energy spread. Three examples of such contour plots are shown in figures \ref{fig:2019-07-19_sigmaE_E_inf_contourPlot_N8e+09}--\ref{2019-07-19_sigmaE_E_inf_contourPlot_N1e+10}. Such an overview reveals the region of minimum energy spread in the $\sigma_z$-$\Delta\xi$ plane for various charges, which is crucial for the study of accelerator parameters. The contour plots are however limited by the chosen resolution of the simulations: the distance between actual data points is \SI{1}{\micro\meter} in the $\sigma_z$-direction and \SI{10}{\micro\meter} in the $\Delta\xi$-direction. This applies to all contour plots in this paper. Furthermore, due to the simulation resolution, $\sigma_z$ was chosen to be $\geq\SI{2}{\micro\meter}$.
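Equation \eqref{eq:sigmaE_E_Limit} amounts to a charge-weighted rms of the slice fields divided by their mean; a minimal numerical sketch (function name is illustrative):

```python
import math

def asymptotic_energy_spread(N_i, E_par):
    """sigma_E/<E> in the s -> infinity limit:
    -(1/<E_par>) * sqrt((1/N) * sum_i N_i*(<E_par> - E_par_i)**2),
    with <E_par> the charge-weighted mean field.  For an accelerated
    electron beam E_par_i < 0, so the result is non-negative."""
    N = sum(N_i)
    mean = sum(n * e for n, e in zip(N_i, E_par)) / N
    var = sum(n * (mean - e) ** 2 for n, e in zip(N_i, E_par)) / N
    return -math.sqrt(var) / mean
```

A flat field profile gives zero spread; any slice-to-slice variation of $E_\parallel$ maps directly into a finite asymptotic spread.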
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-07-19_sigmaE_E_inf_contourPlot_N8e+09.eps}
\caption{$N_\mathrm{MB}=\SI{8e9}{}$.}
\label{fig:2019-07-19_sigmaE_E_inf_contourPlot_N8e+09}
\end{subfigure}\hspace{0.35pc}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-07-15_sigmaE_E_inf_contourPlot_N9e+09.eps}
\caption{$N_\mathrm{MB}=\SI{9e9}{}$.}
\end{subfigure}\hspace{0.35pc}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-07-19_sigmaE_E_inf_contourPlot_N1e+10.eps}
\caption{$N_\mathrm{MB}=\SI{e10}{}$.}
\label{2019-07-19_sigmaE_E_inf_contourPlot_N1e+10}
\end{subfigure}
\caption{Relative rms energy spread vs. beam separation distance $\Delta\xi$ and rms beam length $\sigma_z$ for main beams with various particle numbers $N_\mathrm{MB}$. The remaining parameters are taken from the Snowmass parameter set.}
\end{figure}
The normalized amplitude \cite{normalizedAmplitude}, defined as
\begin{equation}
\Lambda(s) =\sum\limits_{i=1}^n(X_{\sub{N}i}(s)^2 + X_{\sub{N}i}'(s)^2) = \sum\limits_{i=1}^n\left[ \left( \frac{X_i(s)}{\sigma_x(s)} \right)^2 + \left( \frac{X_i'(s)}{\sigma_{x'}(s)} \right)^2 \right],
\end{equation}
where
\begin{equation}
\sigma_x(s) = \sqrt{\frac{\beta(s)\varepsilon_{\mathrm{N}x}}{\gamma(s)}}, \quad \sigma_{x'}(s) = \sqrt{\frac{\varepsilon_{\mathrm{N}x}}{\gamma(s)\beta(s)}}
\end{equation}
and $\varepsilon_{\mathrm{N}x}$ is the normalized emittance, remains constant in the absence of transverse wakefields. The normalized amplification factor $\Lambda_\mathrm{final}/\Lambda_\mathrm{initial}$ can thus be used to quantify the amplification of the transverse jitter of the incoming beam.
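As a sketch, $\Lambda(s)$ for a set of beam slices can be evaluated directly from the definition above (all input values here are illustrative):

```python
import math

def normalized_amplitude(X, Xp, beta, gamma, eps_n):
    """Normalized amplitude Lambda(s), summed over the n beam slices.
    X, Xp : slice offsets [m] and angles [rad]
    beta  : betatron beta function [m], gamma : Lorentz factor
    eps_n : normalized emittance [m rad]"""
    sigma_x = math.sqrt(beta * eps_n / gamma)
    sigma_xp = math.sqrt(eps_n / (gamma * beta))
    return sum((x / sigma_x) ** 2 + (xp / sigma_xp) ** 2
               for x, xp in zip(X, Xp))

# A single slice offset by one sigma_x contributes exactly 1:
sx = math.sqrt(1.0 * 1e-6 / 1e4)
print(normalized_amplitude([sx], [0.0], beta=1.0, gamma=1e4, eps_n=1e-6))
```

The amplification factor of the text is then simply the ratio of this quantity evaluated at the end and at the start of the accelerator.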
For a main beam with charge $Q_\mathrm{MB}$ accelerated in the wake excited by a drive beam with charge $Q_\mathrm{DB}$, the drive beam to main beam efficiency is defined as
\begin{equation}
\eta = \frac{\Delta\mathcalboondox{E}_\mathrm{MB}}{\mathcalboondox{E}_\mathrm{DB}}\frac{Q_\mathrm{MB}}{Q_\mathrm{DB}},
\end{equation}
where $\Delta\mathcalboondox{E}_\mathrm{MB}$ is the energy gain of the main beam and $\mathcalboondox{E}_\mathrm{DB}$ is the initial drive beam energy. This definition considers all the energy of the drive beam as spent, regardless of how much energy has actually been extracted. Assuming the drive beam's energy is fully depleted in a plasma of length $L_\mathrm{d}$, the efficiency can also be written as
\begin{equation}
\eta=\frac{E_\mathrm{A}L_\mathrm{d}}{E_\mathrm{D}L_\mathrm{d}}\frac{Q_\mathrm{MB}}{Q_\mathrm{DB}}=T\frac{Q_\mathrm{MB}}{Q_\mathrm{DB}},
\label{eq:efficiency}
\end{equation}
where $E_\mathrm{D}$ is the peak decelerating field of the drive beam, $E_\mathrm{A}$ is the mean accelerating field of the main beam, and $T=E_\mathrm{A}/E_\mathrm{D}$ is the transformer ratio.
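Equation \eqref{eq:efficiency} is straightforward to evaluate; a short sketch (the field and charge values below are illustrative, not the Snowmass ones):

```python
def efficiency(E_A, E_D, Q_MB, Q_DB):
    """Drive-to-main-beam efficiency eta = T * Q_MB / Q_DB, assuming the
    drive beam is fully depleted over the plasma length.
    E_A : mean accelerating field of the main beam
    E_D : peak decelerating field of the drive beam"""
    T = E_A / E_D                     # transformer ratio
    return T * (Q_MB / Q_DB)

# T = 1 with a main beam carrying half the drive-beam charge:
print(efficiency(E_A=1.0e9, E_D=1.0e9, Q_MB=0.8e-9, Q_DB=1.6e-9))  # → 0.5
```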
\subsection{Results}
Using the developed framework, the relation between energy spread, instability and efficiency can now be studied. A main beam parameter scan over ${\SI{2e9}{}\leq N_\mathrm{MB} \leq\SI{e10}{}}$, $\sigma_z$ and $\Delta\xi$, using the Snowmass $T=1$ parameter set as a basis, was performed to obtain values for $\sigma_\mathcalboondox{E}/\mean{\mathcalboondox{E}}$, $\Lambda/\Lambda_0$ and $\eta$. $\sigma_\mathcalboondox{E}/\mean{\mathcalboondox{E}}$ and $\eta$ were calculated from a single QuickPIC time step using equations \eqref{eq:sigmaE_E_Limit} and \eqref{eq:efficiency}, respectively, whereas $\Lambda/\Lambda_0$ for a $\SI{1.5}{\tera\electronvolt}$ accelerator was calculated using the simplified quasi-static model. We initially assumed that all parameter combinations in the scan generate main beams sufficiently stable to satisfy the basic assumptions of the simplified quasi-static model; large values of $\Lambda/\Lambda_0$ should therefore be disregarded in the final results.
The results for $N_\mathrm{MB}=\SI{5e9}{}$ are shown in figures \ref{fig:contourPlot_efficiency}-\ref{fig:scatterPlot_sigmaEE_normAmpFac_efficiency}, as this particle number yielded the most favorable results. Figures \ref{fig:contourPlot_sigmaEE}-\ref{fig:contourPlot_efficiency} can be used to identify regions in the $\sigma_z$-$\Delta\xi$ plane with desirable values of $\sigma_\mathcalboondox{E}/\mean{\mathcalboondox{E}}$, $\Lambda/\Lambda_0$ and $\eta$. An optimal parameter set requires low values of $\sigma_\mathcalboondox{E}/\mean{\mathcalboondox{E}}$ and $\Lambda/\Lambda_0$ together with a high value of $\eta$. These requirements can be conflicting, so in order to arrive at a reasonable compromise, the data are combined into the overview shown in figure \ref{fig:scatterPlot_sigmaEE_normAmpFac_efficiency}.
\begin{figure}[ht]
\begin{minipage}[t]{16pc}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-07-03_sigmaE_E_inf_contourPlot_N5e+09.eps}
\caption{Contour plot of the relative rms energy spread in the $\sigma_z$-$\Delta\xi$ plane for a main beam with $\SI{5e9}{}$ electrons.}
\label{fig:contourPlot_sigmaEE}
\end{minipage}\hspace{1pc}%
\begin{minipage}[t]{16pc}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-10-18_N5e+09_normAmpFac_contourPlot_Stupakov.eps}
\caption{Contour plot of the normalized amplification factor in the $\sigma_z$-$\Delta\xi$ plane for a main beam with $\SI{5e9}{}$ electrons.}
\label{fig:contourPlot_normAmp}
\end{minipage}\hspace{1pc}
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{16pc}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-10-18_N5e+09_efficiency_contourPlot.eps}
\caption{Contour plot of the efficiency in the $\sigma_z$-$\Delta\xi$ plane for a main beam with $\SI{5e9}{}$ electrons.}
\label{fig:contourPlot_efficiency}
\end{minipage}\hspace{1pc}%
\begin{minipage}[t]{16pc}
\centering
\includegraphics[width=\columnwidth]{Figures/2019-10-18_N5e+09_sigmaEE_normAmpFac_efficiency_scatterPlot_Stupakov.eps}
\caption{Relation between relative rms energy spread, normalized amplification factor and efficiency for a main beam with $\SI{5e9}{}$ electrons. A potential candidate for a new parameter set is marked with a red circle.}
\label{fig:scatterPlot_sigmaEE_normAmpFac_efficiency}
\end{minipage}\hspace{1pc}
\end{figure}
Figure \ref{fig:scatterPlot_sigmaEE_normAmpFac_efficiency} shows several data points of interest, for instance the point $\sigma_\mathcalboondox{E}/\mean{\mathcalboondox{E}}=1.1\%$, $\log(\Lambda/\Lambda_0)=0.8$, $\eta=37.5\%$ marked with a red circle, which corresponds to $\sigma_z=\SI{5}{\micro\meter}$, $\Delta\xi=\SI{200}{\micro\meter}$. A corresponding plot for the initial electron number density and the longitudinal field obtained from QuickPIC simulation is shown in figure \ref{fig:QEB+QEP_optimal}.
These main beam parameters provide improvements over the Snowmass parameter set both in terms of energy spread and stability, but result in a lower efficiency. These parameters and results are summarized in table \ref{tab:parameterComparison}, where the energy spread for the Snowmass parameter set has been re-calculated using the definition \eqref{eq:sigmaE_E_Limit}.
\begin{table}
\caption{Comparison of the Snowmass $T=1$ parameter set and the new parameter set.}
\label{tab:parameterComparison}
\begin{center}
\begin{tabular}{lll}
\br
& Snowmass & New parameters\\
\mr
$N_\mathrm{MB}$ [\SI{e9}{}] & 10 & 5\\
$\sigma_z$ $[\SI{}{\micro\meter}]$ & 20 & 5\\
$\Delta\xi$ $[\SI{}{\micro\meter}]$ & 187 & 200\\
$\sigma_\mathcalboondox{E}/\mean{\mathcalboondox{E}}$ $[\%]$ & 12 & 1.1\\
$\Lambda/\Lambda_0$ & \SI{6.7e2}{} & 6\\
$\eta$ $[\%]$ & 50 & 37.5\\
\br
\end{tabular}
\end{center}
\end{table}
\begin{figure}[h]
\includegraphics[width=15pc]{Figures/2019-07-05_QEB+QEP.eps}\hspace{2pc}%
\begin{minipage}[b]{14pc}\caption{Initial electron number density $n_\mathrm{e}$ per unit initial plasma density $n_0$ and the total longitudinal electric field $E_\parallel(\xi)$ obtained from a QuickPIC simulation with the new parameters.}
\label{fig:QEB+QEP_optimal}
\end{minipage}
\end{figure}
A multi-\SI{}{\tera\electronvolt} PWFA accelerator can be envisioned as the main linac of a linear collider. A core performance metric for a linear collider is luminosity per power, which scales as $\mathcal{L}/P_\mathrm{AC} \propto \eta/\sqrt{\sigma_z}$
when beamstrahlung is taken into account, assuming that the horizontal beam size can be made sufficiently small and that the vertical beam size is kept constant \cite{CLIC_CDR}. Assuming this can be achieved for the new parameter set, its luminosity per power is in fact 1.5 times higher than the corresponding value for the Snowmass parameters, even though the new parameter set offers a lower efficiency.
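The quoted factor of 1.5 follows directly from this scaling and the values in table \ref{tab:parameterComparison}:

```python
import math

def lumi_per_power(eta, sigma_z):
    """Figure of merit L/P_AC proportional to eta / sqrt(sigma_z)
    (beamstrahlung-dominated regime, fixed vertical beam size)."""
    return eta / math.sqrt(sigma_z)

snowmass = lumi_per_power(eta=0.50, sigma_z=20e-6)   # Snowmass T=1 set
new_set = lumi_per_power(eta=0.375, sigma_z=5e-6)    # new parameter set
print(new_set / snowmass)                            # ≈ 1.5
```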
\section{Conclusion}
Even though several conceptual parameter sets for a PWFA-LC have been proposed, no PWFA-LC studies have so far considered the constraint on efficiency imposed by transverse instabilities.
In this paper, we described the transverse instabilities in PWFA using the wakefield formalism and benchmarked the results against QuickPIC simulation results. Using the wakefield formalism, a simplified quasi-static model was developed, and was combined with QuickPIC simulations in order to model the evolution of the transverse oscillations of the main beam over a $\SI{1.5}{\tera\electronvolt}$ PWFA accelerator.
We demonstrated that the Snowmass parameter set is unable to provide stable propagation for a main beam consisting of electrons, and we performed a parameter scan over the main beam charge, rms beam length and beam separation distance using the Snowmass parameter set as a basis. The scan provided a new set of parameters that improves on the Snowmass parameter set in terms of energy spread, stability and luminosity per power. This parameter study for the main electron beam is, however, not exhaustive, and did not consider the effects of beam-induced ion motion, which has been shown to mitigate hosing \cite{IonMotion}. Furthermore, tolerance studies remain to be performed and will be included in future work.
\section*{References}
\section{Introduction} \label{sec:intro}
OJ~287 $(z=0.306)$ is one of the most luminous and rapidly variable BL~Lacertae
objects (BLLs) at radio to optical frequencies \citep{1985PASP...97.1158S,1989A&AS...80..103S}.
It is also one of the most extensively studied extra-galactic active galactic nuclei
(AGNs) over the entire electromagnetic spectrum from radio to $\gamma$-rays \citep[and
references therein]{1973ApJ...179..721V,2013A&A...559A..20H,2016ApJ...819L..37V,
2017MNRAS.465.4423G,2018MNRAS.478.3199B,2018ApJ...863..175G,2018MNRAS.473.1145K,
2018MNRAS.479.1672K,2018MNRAS.480..407K}.
Apart from the typical stochastic variability of blazars and favorable observational
properties like high radio and optical brightness, the most prominent feature
responsible for making the source famous is the presence of recurrent optical
outbursts every $\sim 12$~yr \citep{1988ApJ...325..628S,1996A&A...305L..17S}
and their double-peaked structure \citep{1996A&A...315L..13S}.
Two interpretations have been suggested in the literature for the regular optical outbursts.
One class of models attributes the quasi-periodic outbursts to the interaction dynamics of the accretion
disk and SMBHs \citep{1988ApJ...325..628S,1996ApJ...460..207L} in a binary SMBH
system, while the other class attributes them to Doppler-boosted jet emission
resulting from the geometrical alignment of a single precessing relativistic jet \citep[and references
therein]{2018MNRAS.478.3199B} or of double relativistic jets \citep[and references therein]
{2018arXiv181111514Q}. The very first model by \citet{1988ApJ...325..628S} attributed
the periodicity to increased accretion flow due to tidal disturbances induced
by the secondary SMBH in the accretion disk of the primary SMBH. The model was modified
after the observation of the sharp rise during the 1994 and 1996 outbursts by \citet{1996ApJ...460..207L},
who attributed the periodicity and double-peaked structure to the impact of the secondary SMBH
on the primary accretion disk. The disk-impact binary SMBH model has been fairly
successful in predicting the timing of the double-peaked $\sim$ 12-yr quasi-periodic
outbursts \citep{2016ApJ...819L..37V, 2013A&A...559A..20H}. It attributes the flare
emission to thermal bremsstrahlung of the hot gas torn off during the impact and constrains
the SMBH masses to $\sim 1.8\times10^{10}~M_{\odot}$ and $\sim 1.5\times10^8~M_{\odot}$
for the primary and secondary SMBHs, respectively \citep{2012MNRAS.427...77V,2016ApJ...819L..37V}.
The geometrical class of models, on the other hand, argues for a total system mass in the range
of a few times $10^7$--$10^9~M_{\odot}$ \citep[and references therein]{2007Ap&SS.310...59S,2018MNRAS.478.3199B,2018arXiv181111514Q}.
From the shape of the broadband energy spectra, it is known that OJ 287 is a low-peaked
BLL with the peak of the low-energy hump, attributed to synchrotron emission from
the jet, at near-infrared (NIR) energies. The high-energy hump in the X-ray to
$\gamma$-ray band normally peaks at $\sim 100$ MeV \citep{2010ApJ...716...30A,2013MNRAS.433.2380K}.
The synchrotron self-Compton (SSC) mechanism successfully describes its
typical X-ray emission, while the $\gamma$-ray emission is shown to be due to inverse
Comptonization of a $\sim$250 K {($\sim 0.022$ eV)} torus photon field \citep[EC-IR;][]{2013MNRAS.433.2380K},
contrary to the generally believed SSC origin of the high-energy hump in BLLs. However,
during its latest multi-wavelength activity from December 2015 - 2017, OJ~287 exhibited
a hardened MeV-GeV emission, showing a clear shift in the peak of the high-energy hump
to GeV energies \citep{2018MNRAS.473.1145K,2018MNRAS.479.1672K}.
At the same time,
a spectral break between the NIR and optical emission was observed for the first time,
as reported by \citet{2018MNRAS.473.1145K}. The occurrence of the NIR-optical
spectral break was traced back to May 2013 (MJD 56439) and persisted
until March 2016. They further showed that the observed MeV-GeV spectral change
can be naturally reproduced
by external Comptonization, but this time by IC of broad-line-region photons
\citep{2018MNRAS.473.1145K}, which have been detected during the previous cycles
of $\sim$12-yr optical outbursts \citep{2010A&A...516A..60N}. The NIR-optical spectral break is most naturally explained by the standard disk emission of a $\sim 10^{10}~M_\odot$ SMBH. Interestingly, its first appearance in May
2013 \citep[MJD 56439;][]{2018MNRAS.473.1145K,2019BHCB} is very close to
the impact time predicted by the disk-impact binary SMBH model \citep{1996ApJ...460..207L}
in the BH frame. This spectral and temporal coincidence currently tilts
the central-engine debate in favor of the disk-impact binary SMBH model.
A survey of the literature shows that OJ 287 has exhibited its most dramatic
spectral variations in the X-ray band. The reported spectral shapes cover all
possible energy-spectral profiles, from a powerlaw -- the typical X-ray
spectrum of OJ 287 \citep[]{2009PASJ...61.1011S,2010ApJ...716...30A,2013MNRAS.433.2380K}
to flat
ones \citep[e.g.][]{2017MNRAS.468..426S,2018MNRAS.479.1672K}, extremely soft spectra
\citep[e.g.][]{2001PASJ...53...79I,2018MNRAS.479.1672K} as well as a mixture of
these \citep[e.g.][]{2001PASJ...53...79I,2012MNRAS.427...77V,2018MNRAS.473.1145K}.
As already mentioned, the typical powerlaw X-ray energy spectra are successfully described by SSC emission \citep{2009PASJ...61.1011S,2013MNRAS.433.2380K}, while the flat and mixed (typical+soft) spectra have been argued,
in one interpretation, to result from a mixture of synchrotron and SSC emission \citep{2001PASJ...53...79I,2017MNRAS.468..426S}. The other possibility, argued but not
yet studied, is an additional spectral component such as Bethe-Heitler emission. The
extremely soft X-ray spectra observed during the 2016-2017 activity were shown to be
a new, additional high-frequency-peaked BLL (HBL) emission component
by \citet{2018MNRAS.479.1672K}, thanks to the coordinated MW follow-ups. In light
of this, the extremely soft X-ray spectra observed earlier \citep{2001PASJ...53...79I}
could also be the HBL component. Interestingly, within the
limit of available records, strongly soft X-ray spectra seem to be a common feature
of the source, present for a few years around the $\sim 12$-yr quasi-periodic
optical outbursts.
In this work, we perform a spectral study of the 2015 and 2018 \xmm{}~observations
of OJ 287, supplemented with multiple \swift{}~XRT/UVOT observations, to explore the soft
X-ray excess in the 2015 \xmm{}~data. In the next section, we present
details of the observations and data reduction. \S3 presents the systematic
spectral analysis of the data and the results. In \S4, we present our discussion and
conclusions. We used the cosmological parameters $H_{0} = 67.04~{\rm km~s^{-1}~Mpc^{-1}}$,
$\Omega_m = 0.3183$ and $\Omega_{\Lambda} = 0.6817$\footnote{http://www.kempner.net/cosmic.php}
to calculate the distance.
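For reference, the luminosity distance implied by these parameters can be sketched with a simple flat-$\Lambda$CDM integration (radiation neglected; different cosmology calculators may differ at the percent level):

```python
import math

H0, Om, OL = 67.04, 0.3183, 0.6817     # km/s/Mpc and density parameters
c = 299792.458                          # speed of light [km/s]
z = 0.306                               # redshift of OJ 287

def inv_E(zp):
    """1/E(z) for flat LambdaCDM, radiation neglected."""
    return 1.0 / math.sqrt(Om * (1.0 + zp) ** 3 + OL)

# Composite Simpson rule for the comoving-distance integral.
n = 1000                                # number of intervals (even)
h = z / n
s = inv_E(0.0) + inv_E(z)
for i in range(1, n):
    s += (4 if i % 2 else 2) * inv_E(i * h)
D_C = (c / H0) * s * h / 3.0            # comoving distance [Mpc]
D_L = (1.0 + z) * D_C                   # luminosity distance [Mpc]
print(round(D_L))                       # ≈ 1.65e3 Mpc
```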
\section{Observation and data reduction}
OJ~287 has been observed multiple times by the \xmm{}~observatory, mainly around
the {$\sim$ 12-yr} quasi-periodic optical outbursts. Some of the previous
observations have been studied
in detail by various authors \citep{2018MNRAS.473.3638G, 2016MNRAS.462.1508G}. The
latest observation ($\sim$28 ks) and the longest exposure ($\sim 129$~ks) of this
object were performed in May 2018 (MJD=58149-58150) and May 2015 (MJD=57149-57150),
respectively. The European Photon Imaging Camera (EPIC)-pn \citep{turner2001}
was operated in the prime large window mode with the thin filter during both observations.
We also used \swift{}~XRT/UVOT observations from MJD=57140.4 to MJD=57173.6;
this period includes the May 2015 \xmm{}~observation.
We followed the standard reduction procedure using the \xmm{} Science Analysis System
({\tt SAS v15.0}) \citep{2004ASPC..314..759G} with the latest calibration files. First, we
reprocessed the EPIC-pn data using {\tt epproc} and obtained event files. We then removed
the intervals affected by flaring particle background, identified by examining light curves
above $10\kev$, to obtain clean event files. We used single and double events (PATTERN $\le4$)
for the EPIC-pn, and omitted events at the CCD edges and bad pixels (FLAG=0). We extracted the
source spectrum using a circular region of 50 arcsec, centered at the
source, and a background spectrum from a circular region of the same size
away from the source and free from any sources. The resulting net exposures
were $\sim$ 53 ks and $\sim$ 19 ks for the 2015 and 2018 observations,
respectively, with net 2-10 keV count rates of $0.316\pm0.003$ and $0.299\pm0.004$
counts s$^{-1}$. We examined pileup carefully using {\tt epatplot} and found no
significant pileup that might affect our analysis. Finally, we generated the
response matrix and ancillary response files at the source position using the tools
{\tt rmfgen} and {\tt arfgen}, respectively. We grouped the data using the SAS task
{\tt specgroup} with an oversampling of $3$ and a minimum of $20$ counts per bin.
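The minimum-counts criterion can be illustrated with a toy version of the grouping (the real {\tt specgroup} additionally enforces the oversampling constraint, which is not modelled here):

```python
def group_min_counts(counts, min_counts=20):
    """Group spectral channels left to right until each group holds at
    least `min_counts` counts; any undersized remainder is folded into
    the last group. Returns (first_channel, last_channel, counts) tuples."""
    groups, acc, start = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            groups.append((start, i, acc))
            start, acc = i + 1, 0
    if acc > 0 and groups:            # fold leftover channels into last group
        s, _, a = groups[-1]
        groups[-1] = (s, len(counts) - 1, a + acc)
    return groups

print(group_min_counts([5, 7, 9, 30, 2, 3]))   # → [(0, 2, 21), (3, 5, 35)]
```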
\begin{figure}
\includegraphics[scale=0.33,angle=-90.0]{fig1.eps}
\caption{The ratio (data/model) plot of the absorbed powerlaw model fitted in the $2-10$ \kev~band
and then extended to $0.3-2$ \kev. A clear soft excess is present at low energies
in the May 2015 data, while the 2018 data are consistent with a powerlaw model.}
\label{fig1}
\end{figure}
For the reduction of the \swift{}~XRT and UVOT data, we followed the steps described
in \citet{2018MNRAS.474.5351P}. We selected a background annular region from 10 arcsec
to 20 arcsec centered at the source coordinates. We also omitted data points from
bad patches of the CCD in the case of the UVOT observations.
\section{Data analysis}\label{dataAn}
\subsection{Spectral analysis}\label{subsec:spec}
\subsubsection{X-ray emission}
We used {\tt XSPEC v12.10.1} \citep{1996ASPC..101...17A} to analyze the X-ray spectra
of OJ~287 and used $\chi^2$ statistics for the model fitting. Unless stated
otherwise, the errors on the best-fit parameters are quoted at the 90\%~confidence level, corresponding to $\Delta\chi^{2}=2.706$.
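The quoted $\Delta\chi^{2}=2.706$ is the 90\% point of a $\chi^{2}$ distribution with one degree of freedom (one interesting parameter), which can be checked numerically using only the error function:

```python
import math

def chi2_1_cdf(x):
    """CDF of a chi-squared variable with 1 d.o.f.: since chi2_1 is the
    square of a standard normal, P(chi2_1 <= x) = erf(sqrt(x/2))."""
    return math.erf(math.sqrt(x / 2.0))

# Invert the CDF at 0.90 by bisection on [0, 10].
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if chi2_1_cdf(mid) < 0.90:
        lo = mid
    else:
        hi = mid
delta_chi2 = 0.5 * (lo + hi)
print(round(delta_chi2, 3))   # → 2.706
```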
\begin{figure*}
\includegraphics[scale=0.268,angle=-90.0]{fig2a.eps}
\includegraphics[scale=0.268,angle=-90.0]{fig2b.eps}
\includegraphics[scale=0.268,angle=-90.0]{fig2c.eps}
\includegraphics[scale=0.268,angle=-90.0]{fig2d.eps}
\includegraphics[scale=0.268,angle=-90.0]{fig2e.eps}
\includegraphics[scale=0.268,angle=-90.0]{fig2f.eps}
\includegraphics[scale=0.268,angle=-90.0]{fig2g.eps}
\includegraphics[scale=0.268,angle=-90.0]{fig2h.eps}
\includegraphics[scale=0.268,angle=-90.0]{fig2i.eps}
\includegraphics[scale=0.268,angle=-90.0]{fig2j.eps}
\caption{ The best--fit model, data in top panel and residuals (in $\sigma$) in bottom panel are shown for May 2015 (first row) and May 2018 (second row) for models {\tt tbabs$\times$(logpar)} (a,f), {\tt tbabs$\times$(bknpower)} (b,g), {\tt tbabs$\times$(bremss+powerlaw)} (c,h), {\tt tbabs$\times$(relxill+nthcomp)} (d, i) and {\tt tbabs$\times$(optxagnf)} (e, j), respectively
(see Section \ref{dataAn}). The solid line through the data points represents the net composite model, while the dashed lines show its individual components. For all models, the residuals are within 2-3$\sigma$.}
\label{fig2}
\end{figure*}
We considered only the EPIC-pn data due to their higher signal-to-noise ratio compared
to the EPIC-MOS. We began by fitting the 2-10 keV band with an absorbed powerlaw
({\tt tbabs$\times$powerlaw}) model, fixing the absorption column to the Galactic value of $N_{H}= 3.04\times10^{20}~\rm
cm^{-2}$ \citep{dickey1990}. This resulted in $\chi^{2}/\nu$ of 157.8/116 and 93.2/97
for the May 2015 and 2018 data, respectively, where $\nu$ stands for the degrees of
freedom. The best-fit power-law photon index $\Gamma$ was $1.91\pm0.03$
and $2.06\pm0.06$ for the two data sets; the two observations thus represent
different spectral states of the source. We then extrapolated the best-fit model down
to 0.3 keV for both observations, as shown in Fig.~\ref{fig1}. Surprisingly,
the May 2015 data revealed a strong soft X-ray excess, observed rarely in BLLs
but commonly seen in radio-quiet AGN such as narrow-line Seyfert
type 1 galaxies (NLS1) \citep{1999ApJS..125..317L,crum2006,2004MNRAS.349L...7G}.
To investigate this, we systematically fitted the whole 0.3--10 keV range
with plausible phenomenological models, starting with log-parabola
({\tt logpar}) and broken power-law ({\tt bknpower}) emission, then additionally
thermal bremsstrahlung ({\tt bremss}), as per claims in the literature \citep{2016ApJ...819L..37V},
and finally two AGN models: cool Comptonization
\citep[{\tt optxagnf};][]{2012MNRAS.420.1848D} and blurred reflection
\citep[{\tt relxill};][]{2013MNRAS.430.1694D,2004MNRAS.349.1435M}.
A simple {\tt powerlaw} model fit to the 2015 data over 0.3--10 keV band resulted
in a poor fit ($\chi^{2}/\nu=893.7/160$) due to the presence of the strong soft excess.
Since this AGN is a blazar, the X-ray emission may be described phenomenologically
by {\tt logpar} and {\tt bknpower}, independently. Fitting {\tt tbabs$\times$logpar}
over 0.3--10 keV band resulted in $\chi^{2}/\nu=202.8/159$ while {\tt tbabs$\times$bknpower}
fit resulted in $\chi^{2}/\nu=234.6/158$. The best-fit parameters for both models
are listed in Table~\ref{table1} and the corresponding data, model, and residuals
(in $\sigma$) are shown in Fig.~\ref{fig2}~(a) and \ref{fig2}~(b). The {\tt logpar}
fit describes the data fairly well, however, it is not consistent with the
broadband emission of the source (see \S\ref{discussion} and Fig. \ref{fig5}).
Another claim is thermal bremsstrahlung radiation \citep{2016ApJ...819L..37V}
from a $3\times10^5$ K gas \citep{2012MNRAS.427...77V} around the expected $\sim$12-yr
quasi-periodic optical outbursts. This temperature corresponds to $\sim$ 25 eV and is
therefore irrelevant for the observed soft X-ray excess. Nonetheless, we additionally
explored a redshifted {\tt zbremss} model along with the models considered above.
This model has three parameters: plasma temperature, normalization, and
source redshift. We allowed the plasma temperature and its normalization to vary.
The fit resulted in $\chi^{2}/\nu=211.8/159$. The best-fitting parameters are listed
in Table~\ref{table1}, and the corresponding plots are shown in the top and bottom panels of Fig.
\ref{fig2}~(c). During model fitting, we found a statistically acceptable fit
with a 25 keV plasma temperature. However, this temperature is far too
high, dominates the high-energy end of the X-ray spectrum, and is contrary to
the general behavior of blazars.
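Both temperature arguments are simple $k_{\rm B}T$ conversions:

```python
# k_B T conversions behind the two temperature arguments above.
K_B_EV = 8.617333262e-5            # Boltzmann constant [eV/K]

kT_impact = K_B_EV * 3e5           # 3e5 K gas from the disk impact
print(kT_impact)                   # ≈ 25.9 eV: far below the soft X-ray band

T_fit = 25e3 / K_B_EV              # 25 keV best-fit bremss temperature
print(T_fit)                       # ≈ 2.9e8 K: implausibly hot for this gas
```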
Since accretion disk emission has been claimed for the NIR-optical spectral
break in a systematic analysis by \citet{2018MNRAS.479.1672K}, we invoked
the disk-based models used to explain the soft X-ray excess normally seen in
Seyfert type 1 AGN \citep{crum2006,2004MNRAS.349L...7G,2012MNRAS.420.1848D}.
Though the origin is still unclear, the two
competing models -- blurred reflection and cool Comptonization -- have been most
widely accepted. Thus, to a simple absorbed {\tt powerlaw} model we added the reflection
model {\tt relxill} which is a combination of {\tt xillver} \citep{ 2011ApJ...731..131G,
2013ApJ...768..146G} and {\tt relline} \citep{2010MNRAS.409.1534D, 2013MNRAS.430.1694D}.
This model calculates the reflected emission at each angle at each radius of the
accretion disc \citep{2014ApJ...782...76G}. The details of parameters of {\tt relxill}
and its different application forms are described briefly on the webpage
document\footnote{\url{http://www.sternwarte.uni-erlangen.de/~dauser/research/relxill/}}.
The applied form of {\tt relxill} assumes that the X-ray source illuminates the
accretion disc in a lamppost geometry \citep{2004MNRAS.349.1435M}. The illumination
is described as a broken emissivity law which has the form $\epsilon \propto
r^{-q_{in}}$ between $r_{in}$ and $r_{br}$; $\epsilon \propto r^{-q_{out}}$ between
$r_{br}$ and $r_{out}$; where $r$ is the radius of the accretion disk,
$q_{in}$, and $q_{out}$ are inner and outer emissivity indices; $r_{in}$, $r_{br}$, and
$r_{out}$ are the inner, break and outer radii of the accretion disk. The other parameters
are spin ($a$), inclination angle ($i$), iron abundance $A_{Fe}$ relative to solar
abundance, illuminating power--law index ($\Gamma$), high energy cutoff ($E_{cut}$),
ionization parameter ($\xi=L/nr^{2}$ with $L$ being the source X-ray luminosity
and $n$ is the hydrogen number density of the disk material) and reflected fraction
denoted by $R$. We fixed the iron abundance to 1, the inclination
to 3$^\circ$, the high-energy cutoff to 300\kev~and the outer radius to $400r_{g}$. We tied
the {\tt relxill} photon index $\Gamma$ to the powerlaw photon index $\Gamma$, and
the $R$ parameter was fixed to $-1$ under the lamppost scenario. We allowed the rest
of the parameters to vary, and the fit with the {\tt tbabs$\times$(relxill+powerlaw)} model resulted
in $\chi^{2}/\nu=184.6/154$. To be more realistic, we replaced the phenomenological
powerlaw model by {\tt nthcomp} \citep{1996MNRAS.283..193Z, 1999MNRAS.309..561Z}
which can correctly predict the low-energy rollover, where Galactic absorption can
modify the spectrum. We fixed the seed photon temperature at 2 eV and the electron temperature
associated with the X-ray corona at 100 \kev. The fit resulted in $\chi^{2}/\nu= 184.8/154$,
with the results listed in Table~\ref{table1} and the corresponding plots shown in the top and bottom
panels of Fig.~\ref{fig2}~(d), respectively.
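For illustration, the broken emissivity law described above can be sketched as a piecewise power law, matched at $r_{br}$ so that the profile is continuous (the normalization choice here is ours):

```python
def emissivity(r, q_in, q_out, r_br):
    """Broken emissivity profile: eps ∝ r^-q_in inside r_br and
    eps ∝ r^-q_out outside, matched at the break radius."""
    if r <= r_br:
        return r ** (-q_in)
    return r_br ** (q_out - q_in) * r ** (-q_out)   # continuous at r_br

# With q_in = q_out the profile reduces to a single power law:
print(emissivity(2.0, 3, 3, 6.0))   # → 0.125
```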
\begin{table}
\caption{Best-fitting parameters of the models used to fit the 0.3-10~\kev~band for the two \xmm{}~observations. The flux f$_{E}$ is measured in units of $10^{-12}$\funit. ``t'' stands for a parameter tied to another parameter.}
\label{table1}
\begin{tabular}{lccccc}
\hline
\hline
\small
Model component &2015 May & 2018 May \\
\hline
$N_{H(\rm Galaxy)}$ ($10^{20}\rm cm^{-2}$) &3.04~(f)&3.04~(f)\\
\hline
& \multicolumn{2}{c}{Logpar}\\
slope ($\alpha$) &$2.26\pm0.01$ &$2.09\pm0.01$ \\
curv. term ($\beta$) &$-0.33\pm0.02$ &$-0.06\pm0.04$ \\
Norm.(LP) ($10^{-3}$)&$1.07\pm0.01$ &$1.04\pm0.01$ \\
Stat. ($\chi^{2}/\nu$) &202.8/159&168.9/138 \\
\hline
& \multicolumn{2}{c}{Bknpower}\\
Photon index ($\Gamma_1$) &$2.38\pm0.03$&$2.11\pm0.02$ \\
Photon index ($\Gamma_2$) &$1.96\pm0.03$ &$2.02\pm0.05$ \\
$E_{break}$ (keV)&$1.3\pm0.1$&$1.67_{-0.4}^{+0.8}$ \\
Norm.(BPL)($10^{-3}$)&$1.06\pm0.02$ &$1.04\pm0.01$ \\
Stat. ($\chi^{2}/\nu$)&234.6/158&167.3/137 \\
\hline
& \multicolumn{2}{c}{PL+bremss}\\
Photon index $\Gamma$ &$1.96\pm0.02$&$2.03_{-0.05}^{+0.04}$ \\
Norm. (nth) ($10^{-3}$)&$0.96\pm0.02$ &$1.00_{-0.06}^{+0.04}$ \\
plasma temp. ($\frac{\rm kT_{brem}}{\rm keV}$) &$0.31\pm0.02$ &$0.42_{-0.28}^{+0.21}$\\
Norm. (brem) ($10^{-3}$)&$4.6_{-0.4}^{+0.5}$&$0.61_{-0.38}^{+0.15}$ \\
Stat. ($\chi^{2}/\nu$)&211.8/159&168.9/137\\
f$_{E}$~($0.3-2~\kev$) &$3.1$&$2.8$\\
f$_{E}$~($2-10~\kev$) &$2.64$&2.5\\
\hline
& \multicolumn{2}{c}{Nth.+Relxill}\\
Photon index ($\Gamma$) &$2.21_{-0.12}^{+0.06}$ &$2.03_{-0.02}^{+0.04}$\\
Norm.(nth) ($10^{-3}$)&$0.86_{-0.05}^{+0.07}$ &$0.89_{-0.11}^{+0.12}$ \\
Index1 ($q_{in}$) &$7.1_{-2.2}^{+0.30}$ &$3$ (f) \\
Index2 ($q_{out}$) &$4.2_{-0.9}^{+0.1}$ &$3$ (f) \\
Photon index ($\Gamma$)&$2.21_{-0.12}^{+0.06}$ (t) &$2.03_{-0.02}^{+0.04}$ (t) \\
log($\frac{\rm Ionization~par.}{\rm erg~cm~s^{-1}}$) &$2.3_{-0.5}^{+0.4}$ &$3.3_{-2.2}^{+0.3}$ \\
Inner radius ($r_{g}$)&$1.6_{-0.1}^{+0.3}$ &$11.7_{-9.6}^{+7.0}$\\
Break radius ($r_{g}$) &$4.30_{-0.01}^{+1.6}$ &$11.7_{-9.6}^{+7.0}$ (t) \\
Spin ($a$) &$0.99_{-0.07}^{+0.01}$ &$0.998_{-0.954}^{+0.038}$ \\
Norm.(refl) ($10^{-5}$)&$2.6_{-1.6}^{+0.5}$ &$0.14\pm0.01$ \\
Stat. ($\chi^{2}/\nu$)&184.7/154&162.3/135\\
f$_{E}$~($0.3-2~\kev$)&$3.1$&$2.8$\\
f$_{E}$~($2-10~\kev$)&2.66&2.5 \\
\hline
& \multicolumn{2}{c}{optxagnf}\\
Acc. rate ($\frac{\rm L}{\rm L_{edd}}$) &$0.044_{-0.003}^{+0.016}$ &$0.0084_{-0.0011}^{+0.0038}$ \\
Spin ($a$) &$0.996^{+0.002}_{-0.095}$ &$0.99_{-0.25}^{+0.008}$ \\
Coronal radius ($r_{g}$) &$6.1_{-1.3}^{+6.3}$ &$6.9_{-0.3}^{+3.2}$ \\
Plasma temp. ($\frac{\rm kT_e}{\rm keV})$ &$0.46_{-0.05}^{+0.17}$ &$0.37_{-0.18}^{+0.86}$ \\
Optical depth ($\tau$) &$9.0_{-1.6}^{+0.2}$ &$12.1_{-9.0}^{+1.2}$ \\
Frac. power ($f_{pl}$) &$0.11_{-0.05}^{+0.01}$ &$0.67_{-0.03}^{+0.29}$ \\
Photon index ($\Gamma$)&$1.88_{-0.04}^{+0.05}$ &$2.01_{-0.03}^{+0.07}$ \\
Stat. ($\chi^{2}/\nu$)&188.9/155 &168.1/134 \\
f$_{E}$~$(0.3-2~\kev)$&$3.1$&$2.8$\\
f$_{E}$~($2-10~\kev$) &2.66 &2.5\\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\includegraphics[scale=0.422,angle=-90.0]{fig3a.eps}
\includegraphics[scale=0.422,angle=-90.0]{fig3b.eps}
\includegraphics[scale=0.422,angle=-90.0]{fig3c.eps}
\caption{Best-fit model, data and residuals for (a) {\tt logpar+bremss}, which
shows strong positive residuals in the UV/optical bands; (b) {\tt relxill+nthcomp};
and (c) {\tt optxagnf}. No significant residuals (within 2-3$\sigma$) are present in
the UV/optical bands for the best-fitting reflection (b) and Comptonization (c)
models. The solid line through the data points represents the net composite
model, and the dashed lines show its individual components.}
\label{fig3}
\end{figure*}
The other widely argued scenario for the AGN soft X-ray excess attributes it to a
different plasma embedded in the interior region of the accretion disk. As argued
by \citet{2012MNRAS.420.1848D},
the gravitational potential energy is released at each point of the accretion disk
as blackbody emission down to R$_{corona}$. Below this radius, the gravitational
potential energy is no longer completely thermalized; the energy is instead distributed
between two plasmas: an optically thick ($\tau>1$) cool plasma (kT$_e$$\sim$0.2 keV) producing the soft X-ray excess,
and an optically thin ($\tau<1$) hot plasma (kT$_e$$\sim$ 100 keV) emitting the power-law
continuum above 2 keV. Thus, to the {\tt powerlaw}, we added the
{\tt optxagnf} model which incorporates the above mention scenario. The important
parameters of the model are -- accretion rate relative to the Eddington rate L/L$_{Edd}$,
mass of the BH M$_{BH}$ and its spin $a$, source luminosity distance D$_L$, cool
plasma temperature kT$_e~\sim 0.2 $keV, optical depth $\tau$ of cool plasma, photon
index of power-law continuum $\Gamma$, power fraction of power-law continuum f$_{pl}$
and R$_{corona}$. We fixed the M$_{BH}$ at $2\times10^{10}~M_{\odot}$
\citep{2016ApJ...819L..37V,2018MNRAS.473.1145K}, D$_L$ at 1677 Mpc, f$_{pl}=0$
as the power--law component accounts for the hot Comptonizing component
and we fixed the normalization to unity to get proper flux and luminosity for the source. We tied the power-law photon index to the
photon index of the {\tt optxagnf} model. Rest of the parameters were allowed to vary.
The fit resulted in $\chi^{2}/\nu= 189.5/156$. Further, since {\tt optxagnf} can
describe hard X-ray power-law continuum, we varied the parameter f$_{pl}$ after
removing analytical {\tt powerlaw} model. This resulted in $\chi^{2}/\nu= 189.2/156$
with best-fit model parameters listed in Table~\ref{table1} and the plot in the top and
bottom panels of Fig.~\ref{fig2}~(e), respectively. Additionally, as per other
the claims of geometrical models, we also tested M$_{BH}$ of $\sim10^{8}~M_{\odot}$.
However, it resulted in a super Eddington accretion rate of about 1.3 in Eddington units, contrary to the
expectation of BLLs
\setlength\tabcolsep{1.0pt}
\begin{table}
\fontsize{7.6}{5.5}\selectfont
\centering
\caption{Best-fitting parameters of models used to fit the 0.3-10~\kev~and the UV/Optical bands jointly for the 2015~observation. Model 1: Nth.+Relxill+diskbb; Model 2: Optxagnf. `t' stands for a tied parameter in the fit.}
\label{table2}
\begin{tabular}{lcccccc}
\hline
Model component &\multicolumn{1}{c}{Model 1} & \multicolumn{1}{c}{~~~~~~~~~~~~~~~~~~~~Model 2} \\
\hline
Reddening &$0.13\pm0.03$ &Reddening & $0.16_{-0.04}^{+0.03}$\\
Photon index ($\Gamma$) &$2.22\pm0.08$ &Photon index ($\Gamma$) & $1.87_{-0.03}^{+0.06}$\\
$\frac{kT_{bb}(nth)}{eV}$&$2.03_{-0.03}^{+0.09}$ &--&-- \\
Norm.(nth) ($10^{-3}$)&$0.86_{-0.03}^{+0.05}$ &Acc. rate ($\frac{\rm L}{\rm L_{edd}}$) & $0.040_{-0.004}^{+0.003}$ \\
Index1 ($q1$) &$7.2_{-1.3}^{+1.7}$ &Coronal radius ($r_{g}$) &$5.7_{-0.9}^{+1.9}$ \\
Index2 ($q2$) &$4.4_{-1.1}^{+1.7}$ &Plasma temp. ($\frac{\rm kT_e}{\rm keV})$ & $0.47_{-0.12}^{+0.03}$ \\
Photon index ($\Gamma$)&$2.22\pm0.08$ (t) &Optical depth ($\tau$) & $9.0_{-0.3}^{+2.0}$ \\
log($\frac{\rm Ionization~par.}{\rm erg~cm~s^{-1}}$) &$2.3_{-0.4}^{+0.1}$ &Frac. power ($f_{pl}$) & $0.12_{-0.02}^{+0.05}$ \\
Inner radius ($r_{g}$)&$1.6\pm0.3$ &--& --\\
Spin ($a$) &$0.99_{-0.04}^{+0.01}$ &Spin ($a$) &$0.996_{-0.006}^{+0.002}$ \\
Break radius ($r_{g}$) &$4.2_{-1.9}^{+1.7}$ &--&-- \\
Norm.(refl) ($10^{-5}$)&$2.7_{-1.3}^{+1.1}$ &&-- \\
$\frac{kT_{in}(disk)}{eV}$&$2.03_{-0.03}^{+0.09}$ (t) &--&-- \\
Norm.(disk) ($10^{11}$)&$3.20\pm0.01$ &--&-- \\
Stat. ($\chi^{2}/\nu$)&194/156 &Stat. ($\chi^{2}/\nu$)&200.9/160\\
\hline
\end{tabular}
\end{table}
Application of these models to the May 2018 0.3-10~\kev~data did not improve the fit
statistics with respect to a simple {\tt powerlaw}, as can be seen from the results
in Table \ref{table1}; nor does that observation show any strong
soft X-ray excess (see Fig.~\ref{fig1}).
\subsubsection{X-ray to UV/Optical Emission}
Statistically, the log-parabolic emission ({\tt logpar}), blurred reflection ({\tt relxill}) and cool Comptonization
({\tt optxagnf}) models describe the soft X-ray excess equally well. To distinguish between
these models, we used the \swift{}~UVOT data (MJD=57149-57150) from snapshots observed
simultaneously with the 2015 \xmm{}~observation. The extrapolation of the best-fitting log-parabolic model of the X-rays showed strong residuals in the UV/optical band. Having corrected for reddening, both due to our Galaxy and intrinsic to the source, we added the {\tt zbremss} model to describe the thermal emission claimed in earlier studies. We found that {\tt zbremss} with {\tt logpar} results in the worst statistic (see Fig. \ref{fig3}(a)).
We then extrapolated the best-fit
blurred reflection model to the UVOT bands and found positive residuals in the low-energy
bands. Since the reflection model does not include the disk component required for the
optical-UV spectral break, we added a {\tt diskbb} model to the best-fitting
blurred reflection model. We applied the reddening correction, both due to our Galaxy and
intrinsic to the source. We fitted the UV/Optical and X-ray bands jointly and the
fit resulted in $\chi^{2}/\nu= 189.4/157$. Similarly, we also fitted the UV/Optical/X-ray
bands using the disk Comptonization model {\tt optxagnf}, which includes the intrinsic disk emission. Having applied the reddening correction, we modeled the full band and the fit
resulted in $\chi^{2}/\nu= 201.3/160$. The best-fit models, data and residuals are
shown in Fig.~\ref{fig3}~(b) and ~\ref{fig3}~(c), with parameters in Table~\ref{table2}.
\begin{figure}[h]
\includegraphics[scale=0.38,angle=0.0]{fig4a_new.eps}
\includegraphics[scale=0.43,angle=0.0]{fig4b_new.eps}
\includegraphics[scale=0.37,angle=0.0]{fig4c.eps}
\caption{{(a)}: Two upper panels show the light curves for UVW1 and
1.5-10 keV bands while the two lower panels show the UVW1 light curve shifted
by $\sim$3-day and 16-day with respect to the X-ray. Here, the UVW1 count rates are
divided by a factor of 170 to match the level with X-ray count rates. {(b)}:
Probability distribution of time-delay for UVW1 band with respect to the 1.5-10 keV
from {\tt Javelin} (green) and {\tt ZDCF} (red) codes (see \S\ref{sec:time}). {(c)}:
Lag results from simple {\tt DCF} method considering different time ranges (red,
blue) and by removing the linear trends from the full light curves (top two panels of
4 (a)).}
\label{fig4}
\end{figure}
\subsection{Timing Analysis}\label{sec:time}
We examined the timing behavior of the 2015 May observation of OJ~287 to look for
lags between the X-ray and UV emission. We first checked the \xmm{} UVW1 band vis-a-vis
the X-rays and did not find any lag between them, suggesting that the UV may not be related
to the X-rays within the $\sim$day-long observation. In fact, only the optical data
of the 2018 observation show a hint of marginal variability, while the rest are
statistically consistent with no variability. We then used \swift{}-XRT and simultaneous
UV observations taken in the UVW1 band, obtained on a cadence of about half a day during
MJD=57140.4 to MJD=57187.0, as shown in the upper two panels of Fig. \ref{fig4} (a). The observed
light curves are highly variable, and the UV band seems to lag behind the 1.5-10 keV X-rays
around MJD=57170 and afterward. This could be due to reprocessing, as on short time
scales OJ 287 normally shows simultaneous variability \citep{2018MNRAS.473.1145K,
2013MNRAS.433.2380K}, but has shown a lag when an additional competing emission component
is present \citep{2018MNRAS.479.1672K}. In such cases, the light curves available to date
cannot be used for the analysis due to jet-dominated emission.
We used the {\tt JAVELIN} code \citep{2011ApJ...735...80Z}
to estimate the lag, following the procedures described in \citet{2018MNRAS.474.5351P}
and \citet{2017MNRAS.466.1777P}. We found time-lags of $\sim$3 days and $\sim$16
days for UVW1 with respect to the X-rays (see Fig.~\ref{fig4} (b)). We cross-checked the
lag results with the z-transformed discrete correlation function ({\tt zDCF};
\citealt{2013arXiv1302.1508A}), applying the bootstrap technique \citep{1998PASP..110..660P}.
In the bootstrap method, we extracted 10000 realizations of the two light curves from
the observed light curve pair through a Monte Carlo approach, by randomizing the fluxes
and randomly selecting a subset after excluding 20\%~of the data points. We then performed
cross-correlation on the extracted pairs using the {\tt zDCF} method, as was done
between the originally observed light curves. This approach is a model-independent
way of accounting for the effects of flux uncertainties and irregular
sampling on the cross-correlation result. The resulting lags are shown in
Fig. \ref{fig4} (b) and, as can be seen clearly, the time-lag range agrees with the
one found by {\tt JAVELIN}.
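The flux-randomization / random-subset-selection bootstrap described above can be sketched as follows. This is an illustrative outline only: the light curves below are hypothetical, and each realization pair would in practice be passed to the {\tt zDCF} code rather than cross-correlated here.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_realization(t, flux, err, drop_frac=0.2):
    """One bootstrap realization: randomly keep a subset of points
    (excluding drop_frac of them), then randomize the kept fluxes
    within their measurement errors."""
    n_keep = int(len(t) * (1.0 - drop_frac))
    keep = np.sort(rng.choice(len(t), size=n_keep, replace=False))
    flux_new = rng.normal(flux[keep], err[keep])
    return t[keep], flux_new

# hypothetical X-ray and UVW1 light curves, ~0.5 d cadence over ~46 d
t = np.arange(0.0, 46.5, 0.5)
xray = 1.0 + 0.3 * np.sin(2.0 * np.pi * t / 20.0)
uvw1 = 1.0 + 0.3 * np.sin(2.0 * np.pi * (t - 3.0) / 20.0)  # UV lagging by 3 d
xerr = np.full_like(xray, 0.05)
uerr = np.full_like(uvw1, 0.05)

# the analysis above uses 10000 realizations; 100 here for illustration
pairs = [(bootstrap_realization(t, xray, xerr),
          bootstrap_realization(t, uvw1, uerr)) for _ in range(100)]
# each pair would then be fed to the zDCF cross-correlation code
```

Randomizing fluxes within their errors and resampling the time grid together account for measurement uncertainty and irregular sampling without assuming a model for the variability.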
Within the limitations of the data reported here, both the $\sim$3-day
and $\sim$16-day lag values are supported by the light curves, as shown by plotting the
UV light curve shifted with respect to the X-ray in the bottom panels
of Fig. \ref{fig4}~(a). Further cross-checking with the simple Discrete Correlation
Function ({\tt DCF}; \citealt{1988ApJ...333..646E}), we noticed some discrepancies. The 3-day lag feature
was missing when the full light curves were used (top two panels of Fig. \ref{fig4}~(a))
but was recovered when the linear trends were removed from the light curves \citep[e.g.,][]{2014MNRAS.444.1469M,2018MNRAS.480.2881M}. The 3-day
lag, however, remains if we consider data after MJD 57157, even without removing the linear
trend from the light curves. These outcomes are shown in Fig.~\ref{fig4}~(c).
In short, while both lag values are supported by the data, the $\sim 16$-day
lag is consistently present in all the methods, while the $\sim$3-day lag is
recovered only after eliminating the linear trend when performing the {\tt DCF} analysis.
Unfortunately, gaps before and after the light curves used, and also
the sampling of the available data, do not allow any further analysis (e.g. a significance estimate).
Regardless, there is a clear indication of a lag.
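The linear-trend removal applied before the {\tt DCF} run amounts to subtracting a least-squares straight line from each light curve. A minimal sketch, with a hypothetical light curve in place of the observed ones:

```python
import numpy as np

def detrend_linear(t, flux):
    """Remove the best-fit linear trend from a light curve."""
    slope, intercept = np.polyfit(t, flux, 1)
    return flux - (slope * t + intercept)

# hypothetical light curve: a linear trend on top of a periodic signal
t = np.linspace(0.0, 46.0, 93)
flux = 2.0 + 0.05 * t + 0.3 * np.sin(2.0 * np.pi * t / 10.0)
flux_det = detrend_linear(t, flux)

# the residual slope after detrending is consistent with zero
residual_slope = np.polyfit(t, flux_det, 1)[0]
```

Removing the shared long-term trend prevents it from dominating the correlation and masking shorter lags, which is why the 3-day feature only appears after detrending.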
\section{Discussion} \label{discussion}
We performed a spectral and temporal study of OJ~287 based on the 2015 and 2018 long
\xmm{} observations. Except for a marginal hint of
variability in the 2018 \xmm~optical data, the rest are statistically consistent with
no variability within each observation ($\rm \sqrt{variance~ of~ rate} \lesssim$ mean error
in the rate). Spectrally, however, the two observations represent very different
X-ray spectral states of the source. The 2018 X-ray spectrum shows the most generic
spectral state of the source, characterized by a powerlaw spectrum \citep{2001PASJ...53...79I,
2009PASJ...61.1011S,2013MNRAS.433.2380K,2017MNRAS.468..426S}, while the 2015 X-ray
spectrum shows a strong soft X-ray excess with respect to a powerlaw spectrum below 2.0 keV (Fig.
\ref{fig1}, \S\ref{subsec:spec}). To the best of our knowledge, such a soft X-ray excess,
the focus of our study here, has been reported only once before in OJ 287 \citep{2001PASJ...53...79I}. We systematically investigated the emission mechanisms behind the origin of
this excess using models motivated by blazar and normal AGN studies.
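The variability criterion quoted above can be made concrete: a light curve is called variable only if the square root of the sample variance of the rates exceeds the mean rate error. A minimal sketch with hypothetical count rates:

```python
import numpy as np

def is_variable(rate, err):
    """Variability check: sqrt(sample variance of the rate) must exceed
    the mean measurement error for the light curve to be called variable."""
    return np.sqrt(np.var(rate, ddof=1)) > np.mean(err)

# hypothetical count rates with scatter well below the errors
rate = np.array([10.0, 10.1, 9.9, 10.05, 9.95])
err = np.full_like(rate, 0.5)
quiet = is_variable(rate, err)   # consistent with no variability

# a strongly varying hypothetical light curve for contrast
rate2 = np.array([8.0, 12.0, 9.0, 11.0, 10.0])
loud = is_variable(rate2, err)   # intrinsic scatter dominates the errors
```

When the observed scatter is at the level expected from measurement noise alone, as in the first case, no intrinsic variability can be claimed.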
\subsection{Blazar based models}
Blazars are known for variability in all domains of observation. Spectral
changes at the low-energy end of the X-ray emission, as reported in this work,
can physically have multiple origins. In addition to the possibility of an altogether
new emission component \citep[e.g.][]{2018MNRAS.473.1145K,2018MNRAS.479.1672K}, in
the general scheme of the blazar emission scenario, an appropriate overlap of the synchrotron
and SSC components can mimic a variety of phenomenological spectral shapes.
Literature records on OJ 287 show only one instance of a similar spectral state, in the
1994 ASCA observation. A spectral study by \citet{1997PASJ...49..631I} reported a power-law
photon spectral index of $\Gamma \sim 1.67$. However, a careful re-analysis
by \citet{2001PASJ...53...79I} found that a broken-powerlaw spectrum with a break
at 2 keV describes the data statistically better. Further, the
spectral index below 2 keV was consistent with the optical-UV spectrum and hence,
they attributed the soft excess to a ``synchrotron soft tail''.
In the current case, we followed a
flexible approach and systematically investigated both possible
phenomenological spectral shapes -- the logparabola and broken-powerlaw models. This allows
us to capture additional contributions \citep[e.g.][]{2018MNRAS.479.1672K,
2001PASJ...53...79I}. Of the two, we found that a logparabola model provides a
statistically acceptable description of the 2015 EPIC-pn data (ref. Table \ref{table1}).
A logparabola spectrum within blazar emission scenarios can simply arise from an
appropriate combination of the high-energy end of a simple powerlaw synchrotron spectrum,
or its steeply declining part, with the rising part of the SSC emission (ref.
Fig. \ref{fig5}, bottom panel). A look at the NIR and optical SEDs around
the 2015 observation, as shown in the top plot of Fig. \ref{fig5}, clearly shows that
the soft X-ray excess lies above the simple powerlaw extrapolation of the NIR data
but below the optical-UV data points. Noting that in the most generic spectral state
of OJ 287 the optical-UV data simply lie on a power-law (log-parabola) extension
of the NIR data, the SEDs around the 2015 observation suggest two possibilities in the
present context:
{\it CASE-A}: the synchrotron spectrum associated with the NIR data points extends
to X-rays with a powerlaw or steeply declining tail (Fig. \ref{fig5},
grey band). {\it CASE-B}: the optical-UV is synchrotron emission with a smoothly declining
tail causing the soft X-ray excess (Fig. \ref{fig5}, bottom plot). Below we
systematically look into these two possibilities.
\subparagraph*{ CASE-A:}
In this case, the optical-UV data remain unexplained, suggesting an additional
broadband emission component. Attributing the NIR-optical break to accretion
disk emission, as suggested in \citet{2018MNRAS.473.1145K}, the combined emission still
fails to reproduce the UV emission \citep[see Fig. 6 in][also
\citet{2019BHCB}]{2018MNRAS.473.1145K}. Thus, though this interpretation could
provide a viable explanation for the soft X-ray excess, the UV data remain unexplained.
\subparagraph*{ CASE-B:}
As shown in the bottom plot of Fig. \ref{fig5}, this scenario successfully reproduces the
optical/UV to soft X-ray emission by using the logparabola model ({\tt logpar})
combined with a powerlaw representing the X-ray emission above the soft X-ray band.
The combined logparabola plus powerlaw provides an acceptable
fit to the data ($\chi^2/\nu \simeq 1.2$).
Furthermore, the resulting {\it powerlaw}
index of $\sim$1.6 for the X-ray emission is also consistent with the general X-ray
spectra of the source. However, this fails to explain the two NIR data points
unless the synchrotron peak of its broadband SED, which normally peaks in the NIR
\citep[KJ bands; e.g.][]{2009PASJ...61.1011S,2013MNRAS.433.2380K}, has shifted to
optical energies, making the
NIR data part of the spectrum below the peak of the synchrotron emission. But a NIR-optical
SED comparison with the 2009 SED does not support such a shift \citep[Fig. 2, ][]
{2019BHCB}. Further, even smoothing of the low-energy
end to match one of the NIR data points, on the basis that the low-energy hump peaks around the NIR
bands \citep[e.g.][]{2001PASJ...53...79I,2013MNRAS.433.2380K,2018MNRAS.473.1145K},
leaves the other NIR data points unexplained. It should further be noted that this is
not one odd observational data set, as the NIR-optical SED trend has been like this
since May 2013
(MJD 56439), as reported by \citet{2018MNRAS.473.1145K}. Thus, though
the {\tt logpar} description is phenomenologically fine for the X-rays, it is not consistent
with the broadband emission characteristics of the source during this period,
thereby suggesting some other emission component for the soft X-ray excess.
\begin{figure}
\includegraphics{fig5a.eps}
\includegraphics[scale=0.42]{fig5b.eps}
\caption{{\it Top:} NIR to X-ray SEDs of OJ 287 around the 2015 \xmm{} observation. The solid
curves within the shaded regions are the best-fit logparabola and powerlaw models
to the X-ray and NIR/optical-UV data respectively, while the shaded areas represent their
$1\sigma$ ranges bounded by the errors in the spectral indices only. {\it Bottom:} Best-fit
logparabola+powerlaw model to the optical to X-ray emission. The dotted curves are
the individual model components (optical to soft X-ray: logparabola) while the red
curve is the sum of the two components.}
\label{fig5}
\end{figure}
Another proposal in the literature is dominant thermal bremsstrahlung emission
for the $\sim$ 12-yr quasi-periodic optical outbursts, from a thermal gas of
temperature $\sim 3\times10^5$ K \citep{2016ApJ...819L..37V, 2012MNRAS.427...77V}.
However, this temperature corresponds to $\sim$ 25 eV, too low to produce the
observed excess in 0.1 -- 2 keV (Fig. \ref{fig3}a). Considering this scenario
but keeping the temperature free during the fit, we found a
statistically acceptable fit with a 25 keV plasma. However, this is unphysical,
as such a component dominates the high-energy end of the X-ray emission and contradicts
previous studies and the general X-ray spectral profile of the source, which is
a powerlaw. Fitting a combination of logparabola (for the synchrotron and its high-energy
tail), thermal bremsstrahlung, and powerlaw (for the SSC) to the optical to X-ray data
resulted in a very low plasma temperature ($\sim0.1$ eV), making the bremsstrahlung
ineffective, with the resulting scenario similar to a powerlaw plus logparabola which,
as argued above, is in tension with the NIR-optical break.
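The temperature mismatch invoked above follows from the standard Boltzmann-constant conversion; a quick sanity check (not part of the published analysis) shows that a $3\times10^{5}$ K gas has $kT\approx26$ eV, roughly a factor of a thousand below the $\sim$25 keV plasma preferred by the free-temperature fit.

```python
# Boltzmann constant in eV per kelvin (standard value)
K_B_EV = 8.617333e-5

T_GAS = 3.0e5                  # K, proposed bremsstrahlung gas temperature
kT_ev = K_B_EV * T_GAS         # thermal energy of the gas in eV (~26 eV)
kT_fit_ev = 25.0e3             # ~25 keV plasma preferred by the free fit

ratio = kT_fit_ev / kT_ev      # mismatch between the two temperatures
```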
\subsection{Radio-quiet AGN/disk based models}
The claim that the NIR-optical break is accretion disk emission of a $\sim 10^{10}~\rm
M_{\odot}$ SMBH, and its presence from May 2013 (MJD 56439) until May 2016, makes a
disk-based origin of the soft X-ray excess, as in radio-quiet AGNs, a potential
candidate. We, therefore, investigated
this possibility with two of the AGN disk-dominant models: cool Comptonization
({\tt optxagnf}) and blurred reflection, argued for the soft X-ray excess often
observed in Seyfert galaxies, e.g. Mrk~509 \citep{2011A&A...534A..39M}, 1H~0707--495
\citep{2009Natur.459..540F}, II~Zw~177 \citep{2016MNRAS.457..875P}, and ESO~113--G010
\citep{2013ApJ...764L...9C}. We found that both cool Comptonization and blurred
reflection plus disk describe the data well (like the logparabola model) and
are equally acceptable statistically (see Table \ref{table1} and \S\ref{dataAn}).
In the cool Comptonization scenario, the best fit suggests that the observed
soft excess is due to inverse Compton scattering of seed photons from the disk
($f_{PL}\sim 0.1$, see Table \ref{table1}) in the cool Comptonizing plasma
(kT$_{e}\sim$0.4 keV and $\tau\sim$10 in this work). The derived accretion rate for
the observed soft X-ray excess was found to be $\sim$10\%~of the Eddington rate. Such a high
accretion rate with a prominent soft X-ray excess has been seen in a number of radio-loud
narrow-line Seyfert type 1 (RLNLS1) AGNs, e.g., 1H~0323$+$342 \citep{2018MNRAS.479.2464G}.
The temperature and optical depth of the cool plasma embedded in the inner region of the
accretion disk are inferred to be $kT_{e}\sim0.5$ keV and $\tau\sim 10$, respectively,
for the soft X-ray excess in the 2015 May observation. This type of cool plasma has
been found in RLNLS1 galaxies, e.g., PMN~J0948$+$0022 \citep{2014MNRAS.438.3521D}. The
flux of the soft excess in the 0.3-2 keV band was found to be $\sim 3\times 10^{-12}\rm~erg~s^{-1}~cm^{-2}$,
which is comparable to that claimed for the RLNLS1 PMN~J0948$+$0022. Thus, the BL Lac object OJ~287 behaves like a radio-loud narrow-line Seyfert 1 galaxy in this particular observation.
Additionally, since the SMBH mass is one of the parameters in the cool Comptonization
scenario, we also checked it by first fitting an SMBH mass of $\sim 2\times 10^{10}~M_{\odot}$,
as suggested by the NIR-optical break and also by the disk-impact binary SMBH model.
This resulted in an accretion rate of $\sim 0.04$ in Eddington units. A fit with an SMBH
mass of $\sim 1\times 10^{8}~M_{\odot}$, as argued by jet-precession based models,
on the other hand, resulted in a super-Eddington accretion rate of $\sim$ 1.3, contrary
to the expectation for BL Lac objects. Thus, this model too supports a very massive SMBH,
as claimed in the binary SMBH model and also from the NIR-optical spectral break.
It should, however, be noted that the central engine mass is not a true discriminator
between the two classes of models suggested for the $\sim 12$-yr QPO, as in the geometrical class
of models the central engine mass is not connected directly with the model parameters
and is inferred based on other observations, unlike the case of the disk-impact
binary SMBH model.
In the case of X-ray reflection under the lamppost geometry, the blurred
reflection is very intense and strong (ref. Table \ref{table1}) close to the inner
edge of the accretion disk. The emissivity pattern is not uniform and it changes
from the inner radius to a break radius $R_{br}\sim 4r_g$ (inner emissivity index $\sim7$
and outer emissivity index $\sim4$). Thus, the strong soft excess is
likely due to the strong light bending in the vicinity of the central SMBH.
The best-known proxy for blurred reflection is the broad iron-K$\alpha$
emission line near 6 keV \citep{1995Natur.375..659T}. However, the Fe-K$\alpha$ emission
line is absent in the 2015 observation and, in fact, has never been detected in OJ 287
or any BLL, to the best of our knowledge. The fit suggests intense smearing of the
blurred reflection, too strong for the Fe-K$\alpha$ emission line to be seen in the data
(see the blue dashed line for blurred reflection in Fig. 3(b)). In this scenario,
a likely possibility is that the disk is illuminated by the base of the jet
\citep{2018MNRAS.473.3584P,2014MNRAS.444.1469M,2018MNRAS.480.2881M}. We found
clear indications of the UVW1 band emission lagging the hard X-ray emission
($\sim 3$ and $\sim 16$ days, see Fig. \ref{fig4}). Such lags favor the X-ray
reprocessing scenario at the accretion disk and have been reported in many AGNs, where
the UV is found to lag behind the X-ray emission, as expected in the reprocessing scenario
\citep[\eg][]{2017MNRAS.464.3194B, 2018MNRAS.474.5351P, 2014MNRAS.444.1469M,2018MNRAS.480.2881M}. Additional support
for this comes from the general variability trend of OJ 287, where multi-wavelength
variations are normally simultaneous on short timescales \citep{2018MNRAS.473.1145K,
2013MNRAS.433.2380K}, with a lag reported only when an additional emission component
was competing with its general emission \citep{2018MNRAS.479.1672K}.
The best-fit reflection+disk model in the optical/UV/X-ray bands suggests an inner disk temperature of $\sim$
2 eV (ref. Table \ref{table2}). We used the theoretical
temperature profile ($T(r) \sim6.3\times10^5(\frac{\dot M_{E}}{M_{8}})^{0.25}
(\frac{r}{R_s})^{-0.75}$~K, where $\dot M_{E}$, $M_{8}$, $R_S$ and $r$ are the accretion
rate in Eddington units, the black
hole mass in $10^8~M_{\odot}$, the Schwarzschild radius and the disk radius from the centre,
respectively) with the best-fit parameters to infer the temperature at the inner
edge of the disk. This gave a temperature of about {$\sim$ 2.8 eV}, similar to
the one inferred from the X-ray/UV/optical modeling, further supporting the
disk-impact binary SMBH scenario. Further, the normalization of the
multi-color blackbody model ({\tt diskbb}) is a function of the inner
radius of the accretion disk and the luminosity distance, along with the
inclination of the source. Using an inner radius $R_{in}=1.6~r_g$, a black hole mass
$M_{BH}=2\times10^{10}~M_{\odot}$, an inclination $i=3$
degrees and a luminosity distance of 1652.08 Mpc, we derived a normalization
value of $8.2\times10^{10}$. This is consistent with the
best-fit value listed in Table~\ref{table2}. Thus, both the observed inner disk temperature and the normalization
are in agreement, supporting a binary black hole
system with a very massive SMBH at the centre.
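The quoted normalization can be reproduced from the standard {\tt diskbb} convention, $N=(R_{in}/{\rm km})^2\,(D/10~{\rm kpc})^{-2}\cos i$, with the input values given in the text; the physical constants below are standard assumed values, not from the paper.

```python
import math

# standard constants (SI), assumed values
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m s^-1
M_SUN = 1.989e30       # kg

m_bh = 2.0e10 * M_SUN                    # SMBH mass quoted in the text
r_g_km = G * m_bh / C**2 / 1.0e3         # gravitational radius in km
r_in_km = 1.6 * r_g_km                   # inner disk radius of 1.6 r_g
d_10kpc = 1652.08 * 100.0                # 1652.08 Mpc in units of 10 kpc
cos_i = math.cos(math.radians(3.0))      # inclination of 3 degrees

# diskbb normalization: N = (R_in[km] / (D / 10 kpc))^2 * cos(i)
norm = (r_in_km / d_10kpc) ** 2 * cos_i  # ~8.2e10, as quoted in the text
```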
Both the AGN disk-based models
suggest a maximally rotating SMBH, contrary to the tightly constrained spin value
of $\sim 0.30$ claimed by \citet{2016ApJ...819L..37V}. We tested blurred reflection
by fixing the spin parameter at 0.30, and the fit statistic was only marginally
disfavored ($\chi^{2}_\nu\sim$1.5). This marginal change for a large change
in the value of the spin suggests that the current data are not sufficient to
constrain the spin and/or that a detailed comparative study is required based
on the theoretical premise of the model.
\section{Summary and Conclusion}\label{Conclusion}
We performed a spectral analysis of the two hitherto unstudied \xmm{} observations
of OJ 287, performed in 2015 and 2018 respectively. Temporally, both data sets are statistically
consistent with being non-variable, but they are spectrally very different. We found that while
the 2018 data represent the typical (most generic) X-ray spectral state of the source,
characterized by a powerlaw spectrum, the 2015 data show a very strong soft X-ray excess.
The excess lies above the simple power-law extrapolation of the NIR data points but
below the best-fit power-law extrapolation of the optical-UV data points. We systematically
explored the physical processes behind the spectral shape vis-a-vis consistency with
the known/established observational properties of OJ 287, as listed below.
\begin{itemize}
\item For the X-ray spectrum alone, a simple log-parabola model describes the 2015 spectral
state statistically well and can be generated with an appropriate overlap of the synchrotron
tail extended to X-ray energies and the SSC spectrum. However, this interpretation is in conflict with the quasi-simultaneous NIR to optical spectrum of the source.
\item Adding thermal bremsstrahlung emission from a plasma of
temperature 25 keV to the logparabola also provides an acceptable statistical fit to
the X-ray data, but is inconsistent with the optical spectrum as well as the general X-ray
spectral properties of the source.
\item Accretion disk-based models, reflection and cool Comptonization (Table \ref{table1}),
with an intrinsic powerlaw component describe the 2015 optical to X-ray spectrum
statistically well and are consistent with the general spectral characteristics of
OJ 287. Timing analysis indicates a lag of the UV emission with respect
to the X-rays (\S\ref{sec:time}), favoring the reflection model. Additionally, these models also favor
a heavy SMBH of mass $\sim 10^{10}~M_\odot$ for OJ 287, as has been argued by
\citet{1988ApJ...325..628S} and \citet{1996ApJ...460..207L} in interpreting the
$\sim$12-yr optical QPO in a binary SMBH framework.
Further, the appearance of the soft excess during 2015 and its absence in 2018 is
consistent with the presence of the accretion-disk signature (NIR-optical break) between
May 2013 and November 2016. Based on these considerations, the soft X-ray excess and
UV emission appear to be primarily a result of {\it reflection phenomena}.
\end{itemize}
\section{Acknowledgement}
The authors are grateful to the anonymous referee for thoughtful
suggestions and comments which improved the manuscript.
MP thanks UGC, India, for financial support through the DSKPDF fellowship
(grant no.~BSR/2017-2018/PH/0111). MP is also grateful for the support of Prof. M.
Sami at the Centre for Theoretical Physics, Jamia Millia Islamia, New Delhi.
PK acknowledges funding from FAPESP (grant no. 2015/13933-0). This research
has made use of archival data of the \xmm{} observatory, an ESA science mission with
instruments and contributions directly funded by ESA Member States and NASA.
This research has also made use of the XRT Data Analysis Software (XRTDAS)
developed under the responsibility of the ASI Science Data Center (ASDC), Italy.
\vspace{5mm}
\facilities{Swift (XRT and UVOT), XMM Newton}
\software{HEASOFT (\url{https://heasarc.gsfc.nasa.gov/docs/software/heasoft/}),
Gnuplot (version: 5.0; \url{http://www.gnuplot.info/})}
\bibliographystyle{aasjournal}
\section{Introduction}
Mean field games with major and minor players were introduced with the specific intent to extend the realm of applications of the original mean field game paradigm to realistic models for which subgroups of players do not grow in size and as a result, their influence on the remaining population of players, does not disappear in the asymptotic regime of large games. While this generalization captures new potential applications, it \emph{raises the technological bar} in terms of the sophistication of the tools to be used in order to come up with solutions, bringing these models up to par with mean field games with common noise. See for example the monograph \cite{BensoussanChauYam} or the last chapter of \cite{CarmonaDelarue_book_II} for details.
As far as we know, the earliest instance of such a generalization appeared in \cite{Huang}, which proposed a linear-quadratic infinite-horizon model with a major player. Soon after, the finite-horizon counterpart of the model was considered in \cite{NguyenHuang1}, and a first generalization to nonlinear cases was proposed in \cite{NourianCaines}. We believe these are the first models of what is now called `\emph{mean field games with major and minor players}'. Still, the state of the major player does not enter the dynamics of the minor players; it only appears in their cost functionals. Later on, \cite{NguyenHuang2} discussed a new approach to linear quadratic games in which the major player's state enters the dynamics of the minor players. The authors solve the limiting control problem for the major player using a trick they call ``anticipative variational calculation''.
The asymmetry between major and minor players was emphasized in \cite{BensoussanChauYam}
where the authors insist on the fact that the statistical distribution of the state of a generic minor player should be derived endogenously. Like \cite{NourianCainesMalhame},
the paper \cite{BensoussanChauYam} characterizes the limiting problem by a set of stochastic partial differential equations. While working with the open loop formulation of the problem, the more recent account \cite{CarmonaZhu} also insists on the endogenous nature of the statistical distribution of the state of a generic minor player. In fact, it goes one step further by reformulating the mean field game with major and minor players as the search for a Nash equilibrium in a two-player game
over the time evolutions of states, some of which are of McKean-Vlasov type. Note that, despite the fact that they offer a formal discussion of the general case, both papers \cite{BensoussanChauYam} and \cite{CarmonaZhu} can only provide solutions in the linear quadratic case. For the sake of completeness, we also mention the recent technical report
\cite{JaimungalNourian}, where a major player is added to a particular case of the extended (in the sense that the interaction is through the controls) mean field game model of optimal execution introduced in Chapter 1 and solved in Chapter 4 of \cite{CarmonaDelarue_book_I}.
Because of the absence of idiosyncratic noise, the initial conditions of the minor player states are assumed to be independent identically distributed random variables. The authors formulate a fixed point equilibrium problem when the rate of trading of the major player is restricted to be a linear function of the average rate of trading of the minor players, and they solve this fixed point problem
with deterministic controls in the infinite-horizon stationary case.
In this paper, we present an alternative formulation for the Mean Field Games with major and minor players. In this approach, the search for Nash equilibria is naturally framed as the search for fixed points for the best response function for both types of players.
As a \emph{fringe benefit} we are able to formulate and tackle the open and closed loop versions of the problem in one go.
Beyond the fact that \cite{BensoussanChauYam} seems to be dealing only with the closed loop formulation of the problem,
the main difference is the fact that instead of looking for a global Nash equilibrium of the whole system, including major and minor players, the authors choose a Stackelberg game strategy in which the major player goes first and chooses its own control to minimize its expected cost, assuming that the response of the minor players to the choice of its control will be to put themselves in the (hopefully unique) mean field game equilibrium in the random environment induced by the control of the major player.
As a result, the finite-player game which is actually solved in \cite{BensoussanChauYam} is merely an $N$-player game including only the minor players. In particular, the associated propagation of chaos is just a randomized version of the usual propagation of chaos associated with standard mean field games. Here we follow the same line of attack as in \cite{CarmonaZhu}, making sure that the approximate equilibria obtained for finite player games are in fact $(N+1)$-player game equilibria including the major player as well as the $N$ minor players.
\vskip 4pt
The paper is structured as follows. Our formulation of mean field games with major and minor players is presented in Section \ref{se:alternative}
below. There, we emphasize that it relies on a fixed point argument in spaces of controls, and we explain how this approach can be used to tackle all sorts of versions of the game, whether the search is for open loop or closed loop (or even Markovian) equilibria. Next, Section \ref{se:lq} implements this approach in the case of linear quadratic models. We recover the open loop solution of \cite{CarmonaZhu}, and provide a solution for closed loop models.
Section \ref{se:application} concludes with the solution of a generalization including a major player to the mean field game formulation proposed in \cite{NourianCainesMalhame} of a flocking model originally credited to Cucker and Smale \cite{CuckerSmale}. There, the dynamics of a large population of agents are governed by forces depicting the mean reversion of individual velocity to the mean velocity of the population.
While early models of flocking do not involve any form of central coordination, several authors have recently proposed generalizations of the flocking model by introducing leaders into the population. Such leaders have a pivotal impact on the rest of the population. In this spirit, we extend the mean field game formulation of \cite{NourianCainesMalhame} to include a major player which, in equilibrium, should act as a free-will leader. We solve this model in the linear quadratic case, and we provide numerical simulations of the solution.
\section{Alternative Formulations for Mean Field Games with Major and Minor Players}
\label{se:alternative}
The goal of this section is to formulate the search for Nash equilibria for mean field games with major and minor players as a fixed point problem on a space of admissible controls. Since our discussion remains at the formal level, we do not introduce these mean field game models as limits of finite player games. We shall do just that only in the case of the linear quadratic models which we solve explicitly in Section \ref{se:lq} below. For pedagogical reasons, we treat separately the open and closed loop problems.
The rationale for this decision comes from the fact that, while solutions to the open and closed loop versions of the standard games often coincide in the mean field limit, this does not seem to be the case for games with major and minor players. Indeed, the characteristics of the state of the major player do not disappear in the limit when the number of minor players tends to infinity.
We shall illustrate this fact in our discussion of the linear quadratic models below.
\vskip 4pt
The general set up of a mean field game with major and minor players is as follows. The dynamics of the state of the system are given by stochastic differential equations of the form:
\begin{equation}
\label{fo:mmmfg_dyn}
\begin{cases}
dX^0_t&=b_0(t,X^0_t,\mu_t,\alpha^0_t)dt +\sigma_0(t,X^0_t,\mu_t,\alpha^0_t) dW^0_t\\
dX_t&=b(t,X_t,\mu_t,X^0_t,\alpha_t,\alpha^0_t)dt +\sigma(t,X_t,\mu_t,X^0_t,\alpha_t,\alpha^0_t) dW_t,
\end{cases}
\end{equation}
where ${\boldsymbol W}^0=(W^0_t)_{0\le t\le T}$ and ${\boldsymbol W}=(W_t)_{0\le t\le T}$ are independent Wiener processes in $\mathbb R^{d_0}$ and $\mathbb R^d$ respectively, the quantities $X^0_t$, $\alpha^0_t$ with a superscript $0$ representing the state and the control of the major player while the quantities $X_t$, $\alpha_t$ without a superscript represent the state and the control of the representative minor player.
The controls $\alpha^0_t$ and $\alpha_t$ take values in closed convex subsets $A_0$ and $A$ of Euclidean spaces $\mathbb R^{k_0}$ and $\mathbb R^k$.
Here $\boldsymbol{\mu}=(\mu_t)_{0\le t\le T}$ is a measure valued process which in equilibrium, is expected to be given by the conditional distributions of the state of the representative minor player given the filtration $\mathbb F^0=(\mathcal F^0_t)_{0\le t\le T}$ generated by the Wiener process ${\boldsymbol W}^0$ driving the dynamics of the state of the major player. Indeed, $\mu_t$ should be understood as a proxy for the empirical measure $\overline\mu^N_t$ of the states of $N$ minor players in the limit $N
\to\infty$. This limit is expected to be $\mu_t=\mathbb P_{X_t|W^0_{[0,t]}}=\mathcal L(X_t|W^0_{[0,t]})$,
the conditional distribution of the state of the representative minor player given the past $W^0_{[0,t]}$ of the noise common to all the minor players,
namely the noise term driving the equation for the state of the major player. For later reference, we shall denote by $\mathbb F=(\mathcal F_t)_{0\le t\le T}$ the filtration generated by both Wiener processes.
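In other words (a purely heuristic statement of the conditional law of large numbers, with $X^1_t,\dots,X^N_t$ denoting the states of the $N$ minor players in the finite game):

```latex
\[
\overline\mu^N_t=\frac1N\sum_{i=1}^N\delta_{X^{i}_t}
\;\xrightarrow[N\to\infty]{}\;
\mu_t=\mathcal L\bigl(X_t\,\big|\,\mathcal F^0_t\bigr),
\]
```

the intuition being that, conditionally on the common noise, the states of the minor players become asymptotically independent and identically distributed.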
The costs the players try to minimize are of the form:
\begin{equation}
\label{fo:mmmfg_costs}
\begin{cases}
J^0(\boldsymbol{\alpha}^0,\boldsymbol{\alpha})&=\mathbb E\bigl[\int_0^Tf_0(t,X^0_t,\mu_t,\alpha^0_t)dt +g^0(X^0_T,\mu_T)\bigr]\\
J(\boldsymbol{\alpha}^0,\boldsymbol{\alpha})&=\mathbb E\bigl[\int_0^Tf(t,X_t,\mu_t,X^0_t,\alpha_t,\alpha^0_t)dt +g(X_T,\mu_T)\bigr],
\end{cases}
\end{equation}
for some running and terminal cost functions $f_0$, $f$, $g^0$ and $g$.
The crucial feature of mean field games with major and minor players is that the dynamics of the state and the costs of the major player depend upon the statistical distribution of the states of the minor players, while the state and the costs of the representative minor player depend not only upon its own state and the statistical distribution of the states of all the minor players, but also upon the state and the control of the major player. This is what makes the analysis of these games more difficult than that of the standard mean field game models.
\vskip 4pt
We first treat the case of open loop equilibria, for which we take advantage of the fact that the filtrations are assumed to be generated by the Wiener processes to write the controls as functions of the paths of these Wiener processes.
\subsubsection*{\textbf{Open Loop Version of the MFG Problem}}
\vskip -6pt
Here, we assume that the controls used by the major player and the representative minor player are of the form:
\begin{equation}
\label{fo:mmmfg_controls}
\alpha^0_t=\phi^0(t,W^0_{[0,T]}),\quad\text{and}\quad \alpha_t=\phi(t,W^0_{[0,T]},W_{[0,T]}),
\end{equation}
for deterministic progressively measurable functions
$\phi^0:[0,T]\times\mathcal C([0,T];\mathbb R^{d_0})\mapsto A_0$ and
$\phi:[0,T]\times\mathcal C([0,T];\mathbb R^{d_0})\times\mathcal C([0,T];\mathbb R^{d})\mapsto A$.
Progressive measurability of the function $\phi$ means that for each $t\in[0,T]$, $w^0\in\mathcal C([0,T];\mathbb R^{d_0})$ and $w\in\mathcal C([0,T];\mathbb R^{d})$, the value of
$\phi(t,w^0,w)$ depends only upon the restrictions $w^0_{[0,t]}$ and $w_{[0,t]}$ of $w^0$ and $w$ to the interval $[0,t]$. Similarly for $\phi^0$. Our choice for the admissibility of the controls is consistent with our earlier discussion since we assume that the filtrations $\mathbb F^0$ and $\mathbb F$ are generated by the Wiener processes ${\boldsymbol W}^0$ and $({\boldsymbol W}^0,{\boldsymbol W})$ respectively.
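In other words, denoting by $w_{\cdot\wedge t}$ the path $w$ stopped at time $t$, the non-anticipativity encoded in progressive measurability can be summarized by the identities:

```latex
\[
\phi^0(t,w^0)=\phi^0\bigl(t,w^0_{\cdot\wedge t}\bigr),
\qquad
\phi(t,w^0,w)=\phi\bigl(t,w^0_{\cdot\wedge t},w_{\cdot\wedge t}\bigr),
\qquad 0\le t\le T.
\]
```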
\vskip 2pt
We understand a Nash equilibrium as a fixed point of the best response map. In the present context, the latter comprises two specific components: the best response of the major player to the behavior of all the minor players, and the best response of a representative minor player to the behavior of the major player and all the other minor players. So we need two separate steps to identify the best response map before we can define a Nash equilibrium as a fixed point of this map.
\vskip 6pt\noindent
\emph{The Major Player Best Response. }
We assume that the representative minor player uses the open loop control given by the progressively measurable function $\phi: (t,w^0,w)\mapsto \phi(t,w^0,w)$, so the problem of the major player is to minimize its expected cost:
\begin{equation}
\label{fo:mmmfg_major_cost}
J^{\phi,0}(\boldsymbol{\alpha}^0)=\mathbb E\Bigl[\int_0^Tf_0(t,X^0_t,\mu_t,\alpha^0_t)dt +g^0(X^0_T,\mu_T)\Bigr]
\end{equation}
under the dynamical constraints:
\begin{equation*}
\begin{cases}
dX^0_t&=b_0(t,X^0_t,\mu_t,\alpha^0_t)dt +\sigma_0(t,X^0_t,\mu_t,\alpha^0_t) dW^0_t\\
dX_t&=b(t,X_t,\mu_t,X^0_t,\phi(t,W^0_{[0,T]}, W_{[0,T]}),\alpha^0_t)dt +\sigma(t,X_t,\mu_t,X^0_t,\phi(t,W^0_{[0,T]}, W_{[0,T]}),\alpha^0_t) dW_t,
\end{cases}
\end{equation*}
where $\mu_t=\mathcal L(X_t|W^0_{[0,t]})$ denotes the conditional distribution of $X_t$ given $W^0_{[0,t]}$. Since we are considering the open loop version of the problem, we search for minima in the class of controls $\boldsymbol{\alpha}^0$ of the form $\alpha^0_t=\phi^0(t,W^0_{[0,T]})$ for a progressively measurable function $\phi^0$. So we frame the major player problem as the search for:
\begin{equation}
\label{fo:OL_MPP}
\phi^{0,*}(\phi)=\text{arg} \inf_{\boldsymbol{\alpha}^0\leftrightarrow \phi^0} J^{\phi,0}(\boldsymbol{\alpha}^0)
\end{equation}
where $\boldsymbol{\alpha}^0\leftrightarrow \phi^0$ means that the infimum is over the set of controls $\boldsymbol{\alpha}^0$ given by progressively measurable functions $\phi^0$.
For the sake of the present discussion, we assume implicitly that the argument of the minimization is not empty and reduces to a singleton.
The important feature of this formulation is that the optimization of the major player appears naturally as an optimal control problem of the McKean-Vlasov type! In fact, it is of the \emph{conditional} McKean-Vlasov type since the distribution appearing in the controlled dynamics is the conditional distribution of the state of the representative minor player.
\vskip 6pt\noindent
\emph{The Representative Minor Player Best Response. }
To formulate the optimization problem of the representative minor player, we
first describe the state of a system comprising a major player and a field of minor players different from the representative minor player we are focusing on.
So we assume that the major player uses a strategy $\boldsymbol{\alpha}^0$ given by a progressively measurable function $\phi^0$ as in $\alpha^0_t=\phi^0(t,W^0_{[0,T]})$, and that the representative of the field of minor players uses a strategy $\boldsymbol{\alpha}$ given by a progressively measurable function $\phi$ in the form $\alpha_t=\phi(t,W^0_{[0,T]},W_{[0,T]})$.
So the dynamics of the state of the system are given by:
\begin{equation*}
\begin{cases}
&dX^0_t=b_0(t,X^0_t,\mu_t,\phi^0(t,W^0_{[0,T]}))dt +\sigma_0(t,X^0_t,\mu_t,\phi^0(t,W^0_{[0,T]})) dW^0_t\\
&dX_t=b(t,X_t,\mu_t,X^0_t,\phi(t,W^0_{[0,T]},W_{[0,T]}),\phi^0(t,W^0_{[0,T]}))dt\\
&\hskip 125pt
+\sigma(t,X_t,\mu_t,X^0_t,\phi(t,W^0_{[0,T]},W_{[0,T]}),\phi^0(t,W^0_{[0,T]})) dW_t,
\end{cases}
\end{equation*}
where as before, $\mu_t=\mathcal L(X_t|W^0_{[0,t]})$ is the conditional distribution of $X_t$ given $W^0_{[0,t]}$. Notice that in the present situation, given the feedback functions $\phi^0$ and $\phi$, this stochastic differential equation in $\mathbb R^{d_0}\times\mathbb R^d$ giving the dynamics of the state of the system is of (conditional) McKean-Vlasov type since $\mu_t$ is the (conditional) distribution of (part of) the state.
\vskip 2pt
As explained earlier, we frame the problem of the representative minor player as the search for the best response to the major player and the field of the (other) minor players. So naturally, we formulate this best response as the result of the optimization problem of a virtual (extra) minor player which chooses a strategy $\overline{\boldsymbol{\alpha}}$ given by a progressively measurable function $\overline\phi$ in the form $\overline\alpha_t=\overline\phi(t,W^0_{[0,T]},W_{[0,T]})$ in order to minimize its expected cost:
$$
J^{\phi^0,\phi}(\bar\boldsymbol{\alpha})=\mathbb E\bigl[\int_0^T f(t,\overline X_t,\mu_t,X^0_t,\bar\alpha_t,\phi^0(t,W^0_{[0,T]}))dt +g(\overline X_T,\mu_T)\bigr],
$$
where the dynamics of the virtual state $\overline X_t$ are given by:
\begin{equation*}
\begin{split}
&d\overline X_t=b(t,\overline X_t,\mu_t,X^0_t,\bar\phi(t,W^0_{[0,T]},W_{[0,T]}),\phi^0(t,W^0_{[0,T]}))dt\\
&\hskip 125pt
+\sigma(t,\overline X_t,\mu_t,X^0_t,\bar\phi(t,W^0_{[0,T]},W_{[0,T]}),\phi^0(t,W^0_{[0,T]})) d\overline W_t,
\end{split}
\end{equation*}
for a Wiener process $\overline{{\boldsymbol W}}=(\overline W_t)_{0\le t\le T}$ independent of the other Wiener processes.
Notice that this optimization problem is not of McKean-Vlasov type. It is merely a classical optimal control problem, though with random coefficients.
As stated above, we search for minima in the class of open loop controls $\overline\boldsymbol{\alpha}$ of the form $\overline\alpha_t=\overline\phi(t,W^0_{[0,T]},W_{[0,T]})$. We denote by:
\begin{equation}
\label{fo:OL_mPP}
\overline\phi^{*}(\phi^0,\phi)=\text{arg} \inf_{\overline{\boldsymbol{\alpha}}\leftrightarrow \overline\phi} J^{\phi^0,\phi}(\bar\boldsymbol{\alpha})
\end{equation}
the result of the optimization.
Again, we assume that the optimal control exists, is given by a progressively measurable function, and is unique for the sake of convenience.
\vskip 4pt
We now formulate the existence of a Nash equilibrium for the mean field game with major and minor players as a fixed point of the best response map
identified above through its components \eqref{fo:OL_MPP} and \eqref{fo:OL_mPP}.
So by definition, a couple $(\hat\boldsymbol{\alpha}^{0}, \hat\boldsymbol{\alpha})$ of controls given by progressively measurable functions $(\hat\phi^{0}, \hat\phi)$
as above is a Nash equilibrium for the mean field game with major and minor players if it satisfies the fixed point equation:
\begin{equation}
\label{fo:mmmfg_fixed_point}
(\hat\phi^0,\hat\phi)=\big(\phi^{0,*}(\hat\phi),\bar\phi^{*}(\hat\phi^0,\hat\phi)\big).
\end{equation}
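In practice, equilibria satisfying \eqref{fo:mmmfg_fixed_point} are often approximated by iterating the best response maps. The following toy sketch uses hypothetical scalar best response maps (linear contractions chosen purely for illustration, not derived from any specific game) to show the structure of such a computation:

```python
# Toy illustration of the fixed point characterization of a Nash
# equilibrium: the pair of best response maps is iterated until it
# stabilizes.  Both maps below are HYPOTHETICAL linear scalar
# contractions, not derived from the dynamic game of the text.

def major_best_response(a):
    # stand-in for phi^{0,*}(phi): best response of the major player
    # to the minor players' control a
    return 0.5 * a + 1.0

def minor_best_response(a0, a):
    # stand-in for bar-phi^{*}(phi^0, phi): best response of a
    # representative minor player to the pair (a0, a)
    return 0.3 * a0 + 0.2 * a + 2.0

def best_response_iteration(n_iter=200):
    # Picard (best response) iteration; it converges here because the
    # joint map is a strict contraction (spectral radius 1/2)
    a0, a = 0.0, 0.0
    for _ in range(n_iter):
        a0, a = major_best_response(a), minor_best_response(a0, a)
    return a0, a
```

At the fixed point, $a^0=\phi^{0,*}(a)$ and $a=\bar\phi^{*}(a^0,a)$ hold simultaneously, which is the analogue of \eqref{fo:mmmfg_fixed_point}; of course, none of this addresses the genuinely difficult part of the problem, namely solving the two optimization problems defining the best responses.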
\subsubsection*{\textbf{Closed Loop Version of the MFG Problem}}
\vskip -6pt
The way we rewrote the open loop version of the problem may have been rather pompous, but it makes it easy to introduce the closed loop and Markovian versions of the problem.
In this subsection, we assume that the controls used by the major player and the representative minor player are of the form:
$$
\alpha^0_t=\phi^0(t,X^0_{[0,T]},\mu_t),\quad\text{and}\quad \alpha_t=\phi(t,X_{[0,T]},\mu_t,X^0_{[0,T]}),
$$
for deterministic progressively measurable functions
$\phi^0:[0,T]\times\mathcal C([0,T];\mathbb R^{d_0})\times\mathcal P_2(\mathbb R^d)\mapsto A_0$ and
$\phi:[0,T]\times\mathcal C([0,T];\mathbb R^{d})\times\mathcal P_2(\mathbb R^d)\times\mathcal C([0,T];\mathbb R^{d_0})\mapsto A$.
The state $X^0_t$ of the major player and the state $X_t$ of the representative minor player evolve according to the same dynamic equations
\eqref{fo:mmmfg_dyn} as before, and the costs are also given by the same formula \eqref{fo:mmmfg_costs}, with $\mu_t=\mathcal L(X_t|W^0_{[0,t]})$. We follow the same strategy as above to define the closed loop Nash equilibria of the game.
\vskip 6pt\noindent
\emph{The Major Player Best Response. }
We assume that the representative minor player uses the progressively measurable feedback function $\phi: (t,x,\mu,x^0)\mapsto \phi(t,x,\mu,x^0)$, so the problem of the major player is to minimize its expected cost \eqref{fo:mmmfg_major_cost}
under the dynamical constraints:
\begin{equation*}
\begin{cases}
dX^0_t&=b_0(t,X^0_t,\mu_t,\alpha^0_t)dt +\sigma_0(t,X^0_t,\mu_t,\alpha^0_t) dW^0_t\\
dX_t&=b(t,X_t,\mu_t,X^0_t,\phi(t,X_{[0,T]},\mu_t,X_{[0,T]}^0),\alpha^0_t)dt +\sigma(t,X_t,\mu_t,X^0_t,\phi(t,X_{[0,T]},\mu_t,X_{[0,T]}^0),\alpha^0_t) dW_t,
\end{cases}
\end{equation*}
where, as before, $\mu_t=\mathcal L(X_t|W^0_{[0,t]})$ denotes the conditional distribution of $X_t$ given $W^0_{[0,t]}$. As explained earlier, we search for minima in the class of feedback controls $\boldsymbol{\alpha}^0$ of the form $\alpha^0_t=\phi^0(t,X^0_{[0,T]},\mu_t)$, so we frame the major player problem as:
\begin{equation}
\label{fo:CL_MPP}
\phi^{0,*}(\phi)=\text{arg} \inf_{\boldsymbol{\alpha}^0\leftrightarrow \phi^0} J^{\phi,0}(\boldsymbol{\alpha}^0)
\end{equation}
which is an optimal control of the conditional McKean-Vlasov type!
\vskip 6pt\noindent
\emph{The Representative Minor Player Best Response. }
To formulate the optimization problem of the representative minor player, we
first describe a system to which it needs to respond optimally. So we
assume that the major player uses a strategy $\boldsymbol{\alpha}^0$ in feedback form given by a feedback function
$\phi^0$ so that $\alpha^0_t=\phi^0(t,X^0_{[0,T]},\mu_t)$, and that the representative of the field of minor players uses a strategy $\boldsymbol{\alpha}$ given by a
progressively measurable feedback function $\phi$ in the form $\alpha_t=\phi(t,X_{[0,T]},X_{[0,T]}^0,\mu_t)$.
So the dynamics of the state of this system are given by:
\begin{equation*}
\begin{cases}
&dX^0_t=b_0(t,X^0_t,\mu_t,\phi^0(t,X^0_{[0,T]},\mu_t))dt +\sigma_0(t,X^0_t,\mu_t,\phi^0(t,X^0_{[0,T]},\mu_t)) dW^0_t\\
&dX_t=b(t,X_t,\mu_t,X^0_t,\phi(t,X_{[0,T]},X_{[0,T]}^0,\mu_t),\phi^0(t,X^0_{[0,T]},\mu_t))dt \\
&\hskip 125pt
+\sigma(t,X_t,\mu_t,X^0_t,\phi(t,X_{[0,T]},X_{[0,T]}^0,\mu_t),\phi^0(t,X^0_{[0,T]},\mu_t)) dW_t,
\end{cases}
\end{equation*}
where as before, $\mu_t=\mathcal L(X_t|W^0_{[0,t]})$ is the conditional distribution of $X_t$ given $W^0_{[0,t]}$. Again, given the feedback functions $\phi^0$ and $\phi$, this stochastic differential equation in $\mathbb R^{d_0}\times\mathbb R^d$ is of (conditional) McKean-Vlasov type.
\vskip 2pt
As expected, we formulate this best response of the representative minor player as the result of the optimization problem of a virtual (extra) minor player which chooses a strategy $\overline{\boldsymbol{\alpha}}$ given by a feedback function $\overline\phi$ in the form $\overline\alpha_t=\overline\phi(t,\overline X_{[0,T]},X^0_{[0,T]},\mu_t)$ in order to minimize its expected cost:
$$
J^{\phi^0,\phi}(\bar\boldsymbol{\alpha})=\mathbb E\Bigl[\int_0^T f(t,\overline X_t,\mu_t,X^0_t,\bar\alpha_t,\phi^0(t,X^0_{[0,T]},\mu_t))dt +g(\overline X_T,\mu_T)\Bigr],
$$
where the dynamics of the virtual state $\overline X_t$ are given by:
\begin{equation*}
\begin{split}
&d\overline X_t=b(t,\overline X_t,\mu_t,X^0_t,\bar\phi(t,\overline X_{[0,T]},X_{[0,T]}^0,\mu_t),\phi^0(t,X^0_{[0,T]},\mu_t))dt \\
&\hskip 125pt
+\sigma(t,\overline X_t,\mu_t,X^0_t,\bar\phi(t,\overline X_{[0,T]},X_{[0,T]}^0,\mu_t),\phi^0(t,X^0_{[0,T]},\mu_t)) d\overline W_t,
\end{split}
\end{equation*}
for a Wiener process $\overline{{\boldsymbol W}}=(\overline W_t)_{0\le t\le T}$ independent of the other Wiener processes.
We search for minima in the class of feedback controls $\overline\boldsymbol{\alpha}$ of the form $\overline\alpha_t=\overline\phi(t,\overline X_{[0,T]},X^0_{[0,T]},\mu_t)$, and we denote the solution by:
\begin{equation}
\label{fo:CL_mPP}
\overline\phi^{*}(\phi^0,\phi)=\text{arg} \inf_{\overline{\boldsymbol{\alpha}}\leftrightarrow \overline\phi} J^{\phi^0,\phi}(\bar\boldsymbol{\alpha}).
\end{equation}
Since the best response map is given by its components \eqref{fo:CL_MPP} and \eqref{fo:CL_mPP}, we define a Nash equilibrium for the closed loop mean field game with major and minor players as a solution of the same fixed point equation \eqref{fo:mmmfg_fixed_point}, except for the fact that the functions $(\hat\phi^{0}, \hat\phi)$ are now progressively measurable
feedback functions of the type considered here.
\subsubsection*{\textbf{Markovian Version of the MFG Problem}}
\vskip -6pt
Here, we assume that the controls used by the major player and the representative minor player are of the form:
$$
\alpha^0_t=\phi^0(t,X^0_t,\mu_t),\quad\text{and}\quad \alpha_t=\phi(t,X_t,\mu_t,X^0_t),
$$
for deterministic feedback functions $\phi^0:[0,T]\times\mathbb R^{d_0}\times\mathcal P_2(\mathbb R^d)\mapsto A_0$ and $\phi:[0,T]\times\mathbb R^{d}\times\mathcal P_2(\mathbb R^d)\times\mathbb R^{d_0}\mapsto A$.
The state $X^0_t$ of the major player and the state $X_t$ of the representative minor player evolve according to the same dynamic equations
\eqref{fo:mmmfg_dyn} as before and the costs are also given by the same formula \eqref{fo:mmmfg_costs}, with $\mu_t=\mathcal L(X_t|W^0_{[0,t]})$.
\vskip 6pt\noindent
\emph{The Major Player Best Response. }
We assume that the representative minor player uses the feedback function $\phi: (t,x,\mu,x^0)\mapsto \phi(t,x,\mu,x^0)$, so the problem of the major player is to minimize its expected cost \eqref{fo:mmmfg_major_cost}
under the dynamical constraints:
\begin{equation*}
\begin{cases}
dX^0_t&=b_0(t,X^0_t,\mu_t,\alpha^0_t)dt +\sigma_0(t,X^0_t,\mu_t,\alpha^0_t) dW^0_t\\
dX_t&=b(t,X_t,\mu_t,X^0_t,\phi(t,X_t,\mu_t,X_t^0),\alpha^0_t)dt +\sigma(t,X_t,\mu_t,X^0_t,\phi(t,X_t,\mu_t,X_t^0),\alpha^0_t) dW_t,
\end{cases}
\end{equation*}
where as before $\mu_t=\mathcal L(X_t|W^0_{[0,t]})$ denotes the conditional distribution of $X_t$ given $W^0_{[0,t]}$. We search for minima in the class of feedback controls $\boldsymbol{\alpha}^0$ of the form $\alpha^0_t=\phi^0(t,X^0_t,\mu_t)$, so we frame the major player problem as:
\begin{equation}
\label{fo:M_MPP}
\phi^{0,*}(\phi)=\text{arg} \inf_{\boldsymbol{\alpha}^0\leftrightarrow \phi^0} J^{\phi,0}(\boldsymbol{\alpha}^0)
\end{equation}
As before, the optimization problem of the major player is of the conditional McKean-Vlasov type.
\vskip 6pt\noindent
\emph{The Representative Minor Player Best Response. }
To formulate the optimization problem of the representative minor player, we
first describe a system to which it needs to respond optimally. So we
assume that the major player uses a strategy $\boldsymbol{\alpha}^0$ in feedback form given by a feedback function
$\phi^0$ so that $\alpha^0_t=\phi^0(t,X^0_t,\mu_t)$, and that the representative of the field of minor players uses a strategy $\boldsymbol{\alpha}$ given by a feedback function $\phi$ in the form $\alpha_t=\phi(t,X_t,X_t^0,\mu_t)$.
So the dynamics of the state of this system are given by:
\begin{equation*}
\begin{cases}
&dX^0_t=b_0(t,X^0_t,\mu_t,\phi^0(t,X^0_t,\mu_t))dt +\sigma_0(t,X^0_t,\mu_t,\phi^0(t,X^0_t,\mu_t)) dW^0_t\\
&dX_t=b(t,X_t,\mu_t,X^0_t,\phi(t,X_t,X_t^0,\mu_t),\phi^0(t,X^0_t,\mu_t))dt +\sigma(t,X_t,\mu_t,X^0_t,\phi(t,X_t,X_t^0,\mu_t),\phi^0(t,X^0_t,\mu_t)) dW_t,
\end{cases}
\end{equation*}
where as before, $\mu_t=\mathcal L(X_t|W^0_{[0,t]})$ is the conditional distribution of $X_t$ given $W^0_{[0,t]}$. Again, given the feedback functions $\phi^0$ and $\phi$, this stochastic differential equation in $\mathbb R^{d_0}\times\mathbb R^d$ is of (conditional) McKean-Vlasov type.
\vskip 2pt
As before, we frame the problem of the representative minor player as the search for the best response to the behavior of the major player and the field of the (other) minor players. So we solve the optimization problem of a virtual (extra) minor player which chooses a strategy $\overline{\boldsymbol{\alpha}}$ given by a feedback function $\overline\phi$ in the form $\overline\alpha_t=\overline\phi(t,\overline X_t,X^0_t,\mu_t)$ in order to minimize its expected cost:
$$
J^{\phi^0,\phi}(\bar\boldsymbol{\alpha})=\mathbb E\Bigl[\int_0^T f(t,\overline X_t,\mu_t,X^0_t,\bar\alpha_t,\phi^0(t,X^0_t,\mu_t))dt +g(\overline X_T,\mu_T)\Bigr],
$$
where the dynamics of the virtual state $\overline X_t$ are given by:
\begin{equation*}
\begin{split}
&d\overline X_t=b(t,\overline X_t,\mu_t,X^0_t,\bar\phi(t,\overline X_t,X_t^0,\mu_t),\phi^0(t,X^0_t,\mu_t))dt\\
&\hskip 125pt
+\sigma(t,\overline X_t,\mu_t,X^0_t,\bar\phi(t,\overline X_t,X_t^0,\mu_t),\phi^0(t,X^0_t,\mu_t)) d\overline W_t,
\end{split}
\end{equation*}
for a Wiener process $\overline{{\boldsymbol W}}=(\overline W_t)_{0\le t\le T}$ independent of the other Wiener processes.
We search for minima in the class of feedback controls $\overline\boldsymbol{\alpha}$ of the form $\overline\alpha_t=\overline\phi(t,\overline X_t,X^0_t,\mu_t)$, and we denote the solution by:
\begin{equation}
\label{fo:M_mPP}
\overline\phi^{*}(\phi^0,\phi)=\text{arg} \inf_{\overline{\boldsymbol{\alpha}}\leftrightarrow \overline\phi} J^{\phi^0,\phi}(\bar\boldsymbol{\alpha}).
\end{equation}
Finally, we define a Nash equilibrium for the Markovian mean field game with major and minor players as a solution of the same fixed point equation \eqref{fo:mmmfg_fixed_point}, except for the fact that the functions $(\hat\phi^{0}, \hat\phi)$ are now
feedback functions of the type considered here.
\section{Linear Quadratic Models}
\label{se:lq}
In this section, we consider the mean field game with major and minor players issued from the finite player game in which the dynamics of the states of the players are given by the following linear stochastic differential equations:
\begin{equation}
\label{fo:finite_players}
\begin{dcases}
dX^{N,0}_t=({L}_0 X^{N,0}_t+B_0 \alpha^{N,0}_t+F_0 \bar{X}^N_t)dt+D_0 dW^0_t,
\\
dX^{N,i}_t=(L X^{N,i}_t+B\alpha^{N,i}_t+F\bar{X}^N_t+GX^{N,0}_t)dt+DdW^i_t,\qquad 1\le i\le N,
\end{dcases}
\end{equation}
for $t \in [0,T]$, and we choose $A_{0}=\mathbb R^{k_{0}}$
and $A=\mathbb R^k$.
The coefficients are deterministic constant matrices independent of time.
The real matrices $L_0$, $B_0$, $F_0$ and $D_0$ are of dimensions $d_0\times d_0$, $d_0\times k_0$, $d_0\times d$ and $d_0\times m_0$ respectively. Similarly, the real matrices $L$, $B$, $F$, $G$ and $D$ are of dimensions $d\times d$, $d\times k$, $d\times d$, $d\times d_0$, and $d\times m$ respectively, the Wiener processes $W^0$ and $W^i$ being of dimensions $m_0$ and $m$.
The cost functionals for the major and minor players are given by:
\begin{equation*}
\begin{split}
&J^{N,0}\bigl(\boldsymbol{\alpha}^{N,0},\cdots,\boldsymbol{\alpha}^{N,N}\bigr)
\\
&\hspace{15pt}=\mathbb E\bigg[\int^T_0
\Big[\bigl(X^{N,0}_t-\Psi_0(\bar{X}^N_t)\bigr)^\dagger Q_0
\bigl(X^{N,0}_t-\Psi_0(\bar{X}^N_t)\bigr)+(\alpha^{N,0}_{t})^\dagger R_0 \alpha^{N,0}_t\Big]\,dt\bigg],
\\
&J^{N,i}\bigl(\boldsymbol{\alpha}^{N,0},\cdots,\boldsymbol{\alpha}^{N,N}\bigr)
\\
&\hspace{-5pt}=\mathbb{E}\bigg[\int^T_0 \Big[\bigl(X^{N,i}_t-\Psi(X^{N,0}_t,\bar{X}^N_t)\bigr)^\dagger Q
\bigl(X^{N,i}_t-\Psi(X^{N,0}_t,\bar{X}^N_t)\bigr)+(\alpha^{N,i}_t)^\dagger R \alpha^{N,i}_t\Big]dt\bigg],
\end{split}
\end{equation*}
in which $Q_0$, $Q$, $R_0$ and $R$ are symmetric matrices of dimensions $d_0\times d_0$, $d\times d$, $k_0\times k_0$ and $k\times k$ respectively, $Q_0$ and $Q$ being non-negative definite and $R_0$ and $R$ positive definite, and where the functions $\Psi_0$ and $\Psi$ are defined by:
$$
\Psi_0(X)=H_0 X+\eta_0,\qquad \Psi(X,Y)=H X+H_1 Y+\eta,
$$
for some fixed $d_0\times d$, $d\times d_0$ and $d\times d$ matrices $H_0$, $H$ and $H_1$, and some fixed $\eta_0\in\mathbb R^{d_0}$ and $\eta\in\mathbb R^d$. Here, $\bar{X}^N_t$ stands for the empirical mean $(X^{N,1}_t+\cdots + X^{N,N}_t)/N$.
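Averaging the $N$ minor player equations in \eqref{fo:finite_players} shows why, conditionally on the common noise, a noiseless dynamic for the limiting mean is plausible:

```latex
\[
d\bar X^N_t
=\Bigl[(L+F)\bar X^N_t+B\,\bar\alpha^N_t+G X^{N,0}_t\Bigr]dt
+\frac{D}{N}\sum_{i=1}^N dW^i_t,
\qquad
\bar\alpha^N_t=\frac1N\sum_{i=1}^N\alpha^{N,i}_t,
\]
```

the martingale term having quadratic variation of order $1/N$, hence vanishing in the limit $N\to\infty$.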
\vskip 4pt
We chose to study this specific linear quadratic model to match existing literature on the subject. Several variants are possible which can be treated using the same procedure. See for example the application discussed in Section \ref{se:application} below.
\subsubsection*{\textbf{Open-Loop Equilibrium}}
\vskip -6pt
In the mean field limit, the dynamics \eqref{fo:finite_players} of the major player state $X_t^0$ and the state $X_t$ of the representative minor player are given by:
\begin{equation}
\label{fo:mmmfg_alternative_ol_state}
\begin{dcases}
dX_t^0\;\;&= (L_0 X_t^0 + B_0 \alpha_t^0 + F_0 \bar X_t) dt + D_0 dW_t^0\\
dX_t\;\; &= (L X_t + B \alpha_t + F \bar X_t + G X_t^0) dt + D dW_t
\end{dcases}
\end{equation}
where $\bar X_t = \mathbb{E}[X_t | \mathcal{F}_t^0]$ is the conditional expectation of $X_t$ with respect to the filtration generated by the history of the Wiener process ${\boldsymbol W}^0$ up to time $t$. Accordingly, the cost functionals for the major and minor players are given by:
\begin{align*}
&J^0(\boldsymbol{\alpha}^0,\boldsymbol{\alpha}) = \mathbb E\left [ \int_{0}^T [(X_t^0 - H_0 \bar X_t - \eta_0)^{\dagger} Q_0 (X_t^0 - H_0 \bar X_t - \eta_0) + \alpha_t^{0\dagger}R_0 \alpha_t^0] dt\right]\\
&J(\boldsymbol{\alpha}^0,\boldsymbol{\alpha})= \mathbb E\left [ \int_{0}^T [(X_t - H X_t^0 - H_1 \bar X_t -\eta)^{\dagger} Q (X_t - H X_t^0 - H_1 \bar X_t -\eta) + \alpha_t^{\dagger}R \alpha_t] dt\right]
\end{align*}
in which $Q$, $Q_0$, $R$, $R_0$ are symmetric matrices, and $R$, $R_0$ are assumed to be positive definite.
Taking conditional expectations in the equation for the state of the representative minor player we get:
\begin{equation}
\label{fo:mmmfg_cond_exp}
d\bar X_t= [(L + F) \bar X_t + B \overline\alpha_t + G X_t^0]\;dt,
\end{equation}
with $\overline\alpha_t=\mathbb E[\alpha_t|\mathcal F_t^0]$. The idea is now to express the optimization problem of the major player in terms of the dynamics of the couple $(\bar X_t,X^0_t)$. In order to do so, we introduce the following notation:
$$
\begin{array}{c}
\mathbb{X}_t = \left[\begin{array}{c}\bar X_t \\ X_t^0 \end{array}\right],\;\;\mathbb{L}_0 = \left[\begin{array}{cc}L+F&G\\ F_0 &L_0 \end{array}\right],\;\;\mathbb{B}_0 = \left[\begin{array}{c}0 \\ B_0 \end{array}\right],\;\;\mathbb{B} = \left[\begin{array}{c}B \\ 0 \end{array}\right], \mathbb{D}_0 = \left[\begin{array}{c}0 \\ D_0 \end{array}\right]\\\\
\mathbb{F}_0 = \left[\begin{array}{cc}H_0^\dagger Q_0 H_0 & - H_0^\dagger Q_0 \\ -Q_0H_0 & Q_0 \end{array}\right],\;\;f_0= \left[\begin{array}{c}H_0^\dagger Q_0 \eta_0 \\ -Q_0\eta_0 \end{array}\right].
\end{array}
$$
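The identification of $\mathbb F_0$ and $f_0$ follows from expanding the quadratic part of the major player cost. Writing $M=[\,-H_0\;\;I\,]$, so that $X^0_t-H_0\bar X_t=M\mathbb X_t$, we have:

```latex
\[
\bigl(M\mathbb X_t-\eta_0\bigr)^\dagger Q_0\bigl(M\mathbb X_t-\eta_0\bigr)
=\mathbb X_t^\dagger M^\dagger Q_0M\,\mathbb X_t
-2\,\mathbb X_t^\dagger M^\dagger Q_0\eta_0
+\eta_0^\dagger Q_0\eta_0,
\]
```

so that $\mathbb F_0=M^\dagger Q_0M$ and $f_0=-M^\dagger Q_0\eta_0$, in agreement with the explicit expressions above.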
Notice that since the matrix $Q_0$ is symmetric non-negative definite, $\mathbb F_0$ is also symmetric non-negative definite.
This will play a crucial role when we face the solution of certain matrix Riccati equations. The optimization problem of the major player becomes:
$$
\inf_{\boldsymbol{\alpha}^0\in\AA_0}\mathbb{E}\left[\int_{0}^T [ \mathbb{X}_t^\dagger \mathbb{F}_0\mathbb{X}_t +2 \mathbb{X}_t^\dagger f_0 + \eta_0^\dagger Q_0 \eta_0 + \alpha_t^{0\dagger}R_0 \alpha_t^0] dt \right],
$$
where the controlled dynamics are given by:
\begin{equation}
\label{fo:mmmfg_semi_state}
d\mathbb{X}_t = (\mathbb{L}_0 \mathbb{X}_t +\mathbb{B}_0 \alpha_t^0 + \mathbb{B} \overline\alpha_t) dt + \mathbb{D}_0 dW_t^0.
\end{equation}
The reduced Hamiltonian is given by:
\[
H^{(r),\overline\alpha}(t, x, y, \alpha^0) = y^\dagger (\mathbb{L}_0 x +\mathbb{B}_0 \alpha^0 + \mathbb{B} \overline\alpha_t) + x^\dagger \mathbb{F}_0 x +2 x^\dagger f_0 + \eta_0^\dagger Q_0 \eta_0 +\alpha^{0\dagger}R_0 \alpha^0.
\]
Here we added the superscript $\overline\alpha$ to the Hamiltonian in order to emphasize that the optimization of the major player is performed assuming that the representative minor player is using the strategy $\boldsymbol{\alpha} \in \AA$. Obviously, $H^{(r),\overline\alpha}$ is a random function, the randomness coming from the realization of the control of the representative minor player. However, we see that almost surely $\mathbb R^{d_0+d}\times A_0\ni (x, \alpha^0)\mapsto H^{(r),\overline\alpha}(t, x, y, \alpha^0)$ is jointly convex, so we can use the sufficient condition of the stochastic maximum principle. Therefore the minimizer of the reduced Hamiltonian and the optimal control are given by:
$$
\hat\alpha^{0} = - \frac12 R_0^{-1} \mathbb{B}_0^{\dagger} y,
\qquad\text{and}\qquad
\hat\alpha^0_t = - \frac12 R_0^{-1} \mathbb{B}_0^{\dagger} \mathbb Y_t,
$$
respectively, where $(\mathbb X_t, \mathbb Y_t)_{0\le t\le T}$ solves the forward-backward stochastic differential equation:
\begin{equation}
\label{fo:mmmfg_lq_fbsde_major}
\begin{dcases}
d\mathbb X_t &=\;\; (\mathbb L_0 \mathbb{X}_t -\frac12 \mathbb B_0 R_0^{-1}\mathbb B_{0}^{\dagger}\mathbb Y_t + \mathbb B \overline\alpha_t) dt + \mathbb D_0 dW_t^0\\
d\mathbb Y_t &=\;\; -(\mathbb L_{0}^{\dagger} \mathbb Y_t +2\mathbb F_{0}\mathbb X_t + 2f_0) dt + \mathbb Z_t dW_t^0,\;\;\;\mathbb Y_T =0.
\end{dcases}
\end{equation}
\vskip 4pt
We now consider the representative minor player's problem. We fix an admissible strategy $\boldsymbol{\alpha}^0\in\AA_0$ for the major player, an admissible strategy $\boldsymbol{\alpha}\in\AA$ for the representative of the \emph{other} minor players, and its $\mathbb F^0$-optional projection $\overline\boldsymbol{\alpha}$ defined by $\overline\alpha_t=\mathbb E[\alpha_t|\mathcal F^0_t]$. This prescription leads to the time evolution of the state of the system given by
\eqref{fo:mmmfg_alternative_ol_state}, equation \eqref{fo:mmmfg_cond_exp} after taking conditional expectations, and finally the dynamic equation
\eqref{fo:mmmfg_semi_state}. Given this background state evolution,
the representative minor player needs to solve:
$$
\inf_{\tilde\boldsymbol{\alpha}\in\AA}\mathbb E \left[\int_0^T [(\tilde X_t - [H_1, H] \mathbb X_t - \eta)^{\dagger} Q(\tilde X_t - [H_1, H] \mathbb X_t - \eta)+\tilde\alpha_t^\dagger R \tilde\alpha_t ] dt \right],
$$
where the dynamics of the controlled state $\tilde X_t$ are given by:
$$
d\tilde X_t = (L \tilde X_t + B \tilde\alpha_t + [F,G]\mathbb X_t) dt + D dW_t.
$$
Note that the process $\mathbb{X}_t$ is merely part of the random coefficients of the optimization problem. We introduce the reduced Hamiltonian:
\begin{equation*}
\begin{split}
&H^{(r),\alpha^0,\alpha}(t,\tilde x,\tilde y,\tilde\alpha) =
\tilde y^{\dagger}(L \tilde x + B \tilde\alpha + [F,G] \mathbb{X}_t)\\
&\hskip 40pt
+ (\tilde x - [H_1, H] \mathbb{X}_t - \eta)^{\dagger} Q(\tilde x - [H_1,H] \mathbb{X}_t - \eta)+\tilde\alpha^\dagger R \tilde\alpha.
\end{split}
\end{equation*}
Once again we use the superscript $(\alpha^0,\alpha)$ to emphasize the fact that the optimization is performed under the environment created by the major player using strategy $\boldsymbol{\alpha}^0$ and the population of the other minor players using $\boldsymbol{\alpha}$, leading to the use of its $\mathbb F^0$-optional projection $\overline\boldsymbol{\alpha}$. $H^{(r),\alpha^0,\alpha}$ depends on the random realization of the environment and is almost surely jointly convex in $(\tilde x,\tilde\alpha)$. Applying the stochastic maximum principle, the optimal control exists and is given by $\tilde\alpha_t = -\frac12 R^{-1}B^\dagger \tilde Y_t$, where $(\tilde{{\boldsymbol X}},\tilde{{\boldsymbol Y}})$ solves the following FBSDE:
\begin{equation}
\label{fo:mmmfg_lq_fbsde_minor}
\begin{dcases}
d\tilde X_t &=\;\; (L \tilde X_t -\frac12 B R^{-1}B^{\dagger}\tilde Y_t + [F,G]\tilde{\mathbb X}_t) dt + D dW_t\\
d\tilde Y_t &=\;\; -\bigl(L^{\dagger} \tilde Y_t +2 Q\bigl(\tilde X_t - [H_1, H] \tilde{\mathbb X}_t - \eta\bigr)\bigr) dt + Z_t dW_t + Z_t^0 dW_t^0,
\end{dcases}
\end{equation}
with terminal condition $\tilde Y_T =0$. Recall that in this FBSDE, the process $(\tilde{\mathbb X}_t)_{0\le t\le T}$ only acts as a random coefficient.
It is determined \emph{off line} by solving the standard stochastic differential equation:
\begin{equation}
\label{fo:mmmfg_tilde}
d\tilde{\mathbb X}_t = (\mathbb{L}_0 \tilde{\mathbb X}_t +\mathbb{B}_0 \alpha_t^0 + \mathbb{B} \overline\alpha_t) dt + \mathbb{D}_0 dW_t^0.
\end{equation}
Notice that equation \eqref{fo:mmmfg_tilde} is exactly the same equation as \eqref{fo:mmmfg_semi_state}. Still, we use a different notation for the solution. Indeed, at this stage of the proof (i.e. before considering the fixed point step), the coefficient processes $(\alpha_t^0)_{0\le t\le T}$ and $(\overline\alpha_t)_{0\le t\le T}$ are (likely to be) different, preventing us from identifying the solutions of \eqref{fo:mmmfg_tilde} and \eqref{fo:mmmfg_semi_state}.
\vskip 4pt
Now that we have characterized the solutions of both optimization problems, we can identify the fixed point constraint.
The fixed point condition \eqref{fo:mmmfg_fixed_point} characterizing Nash equilibria in the current set-up says that:
\begin{equation*}
\alpha_t^0 = - \frac12 R_0^{-1} \mathbb B_{0}^{\dagger} \mathbb Y_t,
\end{equation*}
where $(\mathbb Y_t)_{0\le t\le T}$ is the backward component of the solution of \eqref{fo:mmmfg_lq_fbsde_major} with
$\overline\alpha_t =\;\mathbb E[\alpha_t|\mathcal{F}_t^0]$, and:
$$
\alpha_t=\tilde\alpha_t = -\frac12 R^{-1}B^\dagger \tilde Y_t,
$$
where $(\tilde{Y}_t)_{0\le t\le T}$ is the backward component of the solution of \eqref{fo:mmmfg_lq_fbsde_minor} in which the random coefficient
$(\tilde{\mathbb X}_t)_{0\le t\le T}$ solves \eqref{fo:mmmfg_tilde} with the processes $(\alpha^0_t)_{0\le t\le T}$ and $(\overline\alpha_t)_{0\le t\le T}$ just defined.
So in equilibrium, equations \eqref{fo:mmmfg_tilde} and \eqref{fo:mmmfg_semi_state} have the same coefficients and we can identify their solutions
$(\mathbb X_t)_{0\le t\le T}$ and $(\tilde{\mathbb X}_t)_{0\le t\le T}$.
\vskip 2pt
The optimal controls for the major and representative minor players are functions of the solution of the following FBSDE which we obtain by putting together the FBSDEs \eqref{fo:mmmfg_lq_fbsde_major} and \eqref{fo:mmmfg_lq_fbsde_minor} characterizing the major and representative minor players' optimization problem:
\begin{equation}
\label{fo:mmmfg_lq_fbsde_Nash}
\begin{dcases}
&d\mathbb X_t = (\mathbb{L}_0 \mathbb X_t -\frac12 \mathbb B_0 R_0^{-1}\mathbb B_{0}^{\dagger}\mathbb Y_t - \frac12\mathbb B R^{-1}B^\dagger \mathbb E[\tilde Y_t|\mathcal F_t^0]) dt + \mathbb D_0 dW_t^0\\
&d\tilde X_t = (L \tilde X_t - \frac12 B R^{-1}B^{\dagger}\tilde Y_t + [F,G]\mathbb X_t) dt + D dW_t\\
&d\mathbb Y_t = -(\mathbb L_{0}^{\dagger} \mathbb Y_t +2\mathbb F_{0}\mathbb X_t + 2f_0) dt + \mathbb Z_t dW_t^0,\;\;\;\mathbb Y_T =0\\
&d\tilde Y_t = -(L^{\dagger} \tilde Y_t + 2 Q\tilde X_t -2 Q [H_1, H] \mathbb X_t - 2 Q\eta) dt + Z_t dW_t + Z_t^0 dW_t^0,\;\tilde Y_T =0.
\end{dcases}
\end{equation}
We summarize the above discussion in the form of a verification theorem for open-loop Nash equilibrium.
\begin{theorem}
If the system \eqref{fo:mmmfg_lq_fbsde_Nash} admits a solution, then the linear quadratic mean field game problem with major and minor players admits an open-loop Nash equilibrium. The equilibrium strategy $(\boldsymbol{\alpha}^0,\boldsymbol{\alpha})$ is given by $\hat\alpha_t^0 = -(1/2)R_0^{-1} \mathbb{B}_{0}^{\dagger} \mathbb{Y}_t$ for the major player and $\hat\alpha_t = - (1/2) R^{-1}B^{\dagger} \tilde Y_t$ for the representative minor player.
\end{theorem}
\vskip 4pt
The way the system \eqref{fo:mmmfg_lq_fbsde_Nash} is stated is a natural conclusion of the search for an equilibrium as formulated by the fixed point
step following the two optimization problems. However, a few simple remarks can simplify its solution.
First, we notice that one could solve for $(\mathbb X_t,\mathbb Y_t)_{0\le t\le T}$ from the FBSDE formed by the first and the third equations if we knew
$\overline Y_t =\mathbb E[\tilde Y_t|\mathcal F_t^0]$. By taking conditional expectations with respect to $\mathcal F^0_t$ in the second equation,
and by comparing the result with the equation satisfied by the first component $\overline X_t$ of $\mathbb X_t$, we identify $\mathbb E[\tilde X_t|\mathcal F_t^0]$ with $\overline X_t$ because both processes solve the same equation with the same initial condition. Next, by taking conditional expectations with respect to $\mathcal F^0_t$ in the fourth equation, we see that $(\overline Y_t)_{0\le t\le T}$ should satisfy:
$$
d\overline Y_t = -(L^{\dagger} \overline Y_t +2Q\overline X_t - 2Q [H_1, H] \mathbb X_t - 2Q\eta) dt + \overline Z_t^0 dW_t^0,\;\overline Y_T =0.
$$
Consequently, the solution of \eqref{fo:mmmfg_lq_fbsde_Nash} also satisfies:
\begin{equation}
\label{fo:mmmfg_lq_fbsde_final}
\begin{dcases}
&d\mathbb X_t = (\mathbb{L}_0 \mathbb X_t -\frac12 \mathbb B_0 R_0^{-1}\mathbb B_{0}^{\dagger}\mathbb Y_t - \frac12\mathbb B R^{-1}B^\dagger \overline Y_t) dt + \mathbb D_0 dW_t^0\\
&d\mathbb Y_t = -(\mathbb L_{0}^{\dagger} \mathbb Y_t +2\mathbb F_{0}\mathbb X_t + 2f_0) dt + \mathbb Z_t dW_t^0,\;\;\;\mathbb Y_T =0\\
&d\overline Y_t = -\bigl(L^{\dagger} \overline Y_t + 2 \bigl([Q,0] - Q [H_1, H] \bigr)\mathbb X_t - 2 Q\eta\bigr) dt + \overline Z_t^0 dW_t^0,\;\overline Y_T =0.
\end{dcases}
\end{equation}
Our final remark is that the solution of system \eqref{fo:mmmfg_lq_fbsde_final} is not only necessary, but also sufficient. Indeed, once it is solved,
one can solve for $(\tilde X_t,\tilde Y_t)_{0\le t\le T}$ by solving the affine FBSDE with random coefficients formed by the second and fourth equations of \eqref{fo:mmmfg_lq_fbsde_Nash} and check that $\mathbb E[\tilde Y_t|\mathcal F^0_t]$ is indeed the solution of the third equation of \eqref{fo:mmmfg_lq_fbsde_final}.
\vskip 4pt
Identifying $\mathbb Y_t$ with $[\overline P_t^\dagger,\overline P_t^{0\dagger}]^\dagger$, we recognize the FBSDE used in \cite{CarmonaZhu}.
\subsubsection*{\textbf{A Closed Loop Equilibrium}}
In this section we implement the closed loop alternative formulation of the equilibrium problem. Since we expect that the optimal controls will be in feedback form, we search directly for Markovian controls.
In other words, we assume that the controls used by major player and minor players are respectively of the form:
$$
\alpha_t^0 = \phi^0(t, X_t^0, \bar X_t),
\qquad\text{and}\qquad
\alpha_t = \phi(t, X_t, X_t^0, \bar X_t),
$$
for some $\mathbb R^{k_0}$-valued and $\mathbb R^k$-valued deterministic functions $\phi^0$ and $\phi$
defined on $[0,T]\times\mathbb R^{d_0}\times \mathbb R^d$ and $[0,T]\times\mathbb R^d\times\mathbb R^{d_0}\times \mathbb R^d$ respectively. For the sake of simplicity, we assume that $A_0= \mathbb R^{k_0}$ and $A=\mathbb R^k$.
So the major player can only observe its own state and the mean of the minor players' states, while the representative minor player can observe its own state, the state of the major player, as well as the mean of the other minor players' states.
This version of the equilibrium problem is more difficult than its open loop analog. For that reason, we are not trying to
construct the best response map for all the possible choices of control processes $\boldsymbol{\alpha}^0$ and $\boldsymbol{\alpha}$.
Instead, we construct it for a restricted class of feedback functions $\phi^0$ and $\phi$ in which we can still find a fixed point, hence a Nash equilibrium.
To be more specific, we construct the best responses to controls $\boldsymbol{\alpha}^0$ and $\boldsymbol{\alpha}$ of the form:
\begin{align}
\alpha_t^0 &=\; \phi^0(t, X_t^0, \bar X_t) = \phi^0_0(t) + \phi^0_1(t) X_t^0 + \phi^0_2(t) \bar X_t\label{fo:alpha0}\\
\alpha_t &=\;\;\phi(t, X_t, X_t^0, \bar X_t)= \phi_0(t) + \phi_1(t) X_t + \phi_2(t) X_t^0 + \phi_3(t) \bar X_t\label{fo:alpha}
\end{align}
where the functions $[0,T]\ni t\rightarrow \phi^0_i(t)$ for $i=0,1,2$ and $[0,T]\ni t \rightarrow \phi_i(t)$ for $i = 0,1,2,3$ are matrix-valued deterministic continuous functions with the appropriate dimensions, in other words, $\phi_0^0(t)\in\mathbb R^{k_0}$, $\phi_1^0(t)\in\mathbb R^{k_0\times d_0}$, $\phi_2^0(t)\in\mathbb R^{k_0\times d}$, $\phi_0(t)\in\mathbb R^{k}$, $\phi_1(t)\in\mathbb R^{k\times d}$, $\phi_2(t)\in\mathbb R^{k\times d_0}$, and $\phi_3(t)\in\mathbb R^{k\times d}$.
\vskip 4pt
We first consider the major player's optimization problem. We assume that the representative minor player uses strategy $\alpha_t=\phi(t, X_t, X_t^0, \bar X_t)$ as specified in \eqref{fo:alpha}. Next we look for the control $\boldsymbol{\alpha}^0$
which could be used by the major player to minimize its expected cost. The dynamics of the system is then given by:
\begin{equation}
\label{fo:mmmfg_alternative_cl_state}
\begin{cases}
&\hskip -10pt
dX_t^0 = (L_0 X_t^0 + B_0 \alpha_t^0 + F_0 \bar X_t) dt + D_0 dW_t^0\\
&\hskip -10pt
dX_t = \Bigl[ B\phi_0(t) + (L+B\phi_1(t) ) X_t + (B\phi_2(t)+G) X^0_t + (B\phi_3(t)+F) \overline X_t \Bigr]dt + D dW_t,
\end{cases}
\end{equation}
where as before $\overline X_t = \mathbb{E}[X_t | \mathcal{F}_t^0]$ is the conditional expectation of $X_t$ with respect to the filtration generated by the history of the Wiener process ${\boldsymbol W}^0$ up to time $t$. In their current form, the dynamics of the couple $(X^0_t,X_t)$ are of a McKean-Vlasov type since the mean of $X_t$ appears in the coefficients of the equation giving $dX^0_t$. However, in
order to find a minimalist version of dynamical equations for a state over which the optimization problem of the major player can be formulated, we take conditional expectations in the equation for the state of the representative minor player. We get:
\begin{equation}
\label{fo:mmmfg_cl_cond_exp}
d\overline X_t= \Bigl[ B\phi_0(t) + (L+B[\phi_1(t) +\phi_3(t)] +F) \overline X_t + (B\phi_2(t)+G) X^0_t \Bigr]dt.
\end{equation}
As in the case of the open loop version of the equilibrium problem, we express the optimization problem of the major player over the dynamics of the couple $(\overline X_t,X^0_t)$. In order to do so, we use the same notation $\mathbb X_t$, $\mathbb F_0$, $f_0$, $\mathbb B_0$, $\mathbb B$, $\mathbb D$ and $\mathbb D_0$ as in the case of our analysis of the open loop problem, and we introduce the following new notation:
$$
\mathbb L^{(cl)}_0 (t)=
\left[\begin{array}{cc}L+B[\phi_1(t)+\phi_3(t)]+F&B\phi_2(t)+G\\ F_0 &L_0 \end{array}\right],
\quad
\mathbb C^{(cl)}_0(t) = \left[\begin{array}{c}B\phi_0(t) \\0 \end{array}\right],
$$
and the optimization problem of the major player can be formulated exactly as in the open loop case as the minimization:
$$
\inf_{\boldsymbol{\alpha}^0\in\AA_0}\mathbb{E}\left[\int_{0}^T [ \mathbb{X}_t^\dagger \mathbb{F}_0\mathbb{X}_t +2 \mathbb{X}_t^\dagger f_0 + \eta_0^\dagger Q_0 \eta_0 + \alpha_t^{0\dagger}R_0 \alpha_t^0] dt \right],
$$
where the controlled dynamics are given by:
\begin{equation}
\label{fo:mmmfg_cl_semi_state}
d\mathbb X_t =\bigl[\mathbb L^{(cl)}_0(t) \mathbb X_t +\mathbb B_0 \alpha_t^0 + \mathbb C^{(cl)}_0(t)\bigr] dt + \mathbb{D}_0 dW_t^0.
\end{equation}
The reduced Hamiltonian (minus the term $ \eta_0^\dagger Q_0 \eta_0$ which is irrelevant) is given by:
$$
H^{(r),\phi}(t, x, y, \alpha^0) = y^\dagger [\mathbb L^{(cl)}_0(t) x +\mathbb B_0 \alpha^0 + \mathbb C^{(cl)}_0(t)] + x^\dagger \mathbb{F}_0 x +2 x^\dagger f_0 + \alpha^{0\dagger}R_0 \alpha^0.
$$
Applying the stochastic maximum principle, we find that the optimal control is given as before by $\hat\alpha_t^{0} = - (1/2)R_0^{-1}\mathbb B_0^\dagger \mathbb{Y}_t$, where $(\mathbb X_t, \mathbb Y_t, \mathbb Z_t)_{0\le t\le T}$ solves the linear FBSDE:
\begin{equation}
\label{fo:fbsde_xx_yy}
\begin{cases}
d\mathbb X_t &=\;[\mathbb L_0^{(cl)}(t) \mathbb X_t -\frac12 \mathbb B_0 R_0^{-1}\mathbb B_{0}^{\dagger}\mathbb Y_t + \mathbb C^{(cl)}_0(t)] dt + \mathbb{D}_0 dW_t^0\\
d\mathbb Y_t &=\; -[\mathbb L_0^{(cl)}(t)^{\dagger} \mathbb Y_t +2\mathbb F_{0}\mathbb{X}_t + 2 f_0] dt + \mathbb{Z}_t dW_t^0,\;\;\;\mathbb{Y}_T =0.
\end{cases}
\end{equation}
This FBSDE being affine, we expect the decoupling field to be affine as well, so we search for a solution of the form $\mathbb Y_t = K_t\mathbb X_t + k_t$ for two deterministic functions $t\mapsto K_t\in\mathbb R^{(d+d_0)\times (d+d_0)}$ and $t\mapsto k_t\in\mathbb R^{(d+d_0)}$. We compute $d\mathbb Y_t$ applying It\^o's formula to this ansatz, and using the expression for $d\mathbb X_t$ given by the forward equation. Identifying term by term the result with the right hand side of the backward component of the above FBSDE we obtain the following system of ordinary differential equations:
\begin{equation}
\label{fo:first_system}
\begin{cases}
&\hskip -10pt
0=\dot{K}_t - \frac12 K_t \mathbb B_0 R_0^{-1} \mathbb B_0^\dagger K_t + K_t \mathbb L_0^{(cl)}(t) + \mathbb L_0^{(cl)}(t)^\dagger K_t+ 2\mathbb F_0,\quad K_T = 0\\
&\hskip -10pt
0=\dot{k}_t +\bigl( \mathbb L_0^{(cl)}(t)^\dagger -\frac12 K_t \mathbb B_0 R_0^{-1} \mathbb B_0^\dagger\bigr) k_t + K_t\mathbb C_0^{(cl)}(t) + 2 f_0 ,\;\;\;k_T =0.
\end{cases}
\end{equation}
For any choice of continuous feedback coefficients $t\mapsto (\phi_0(t),\phi_1(t),\phi_2(t),\phi_3(t))$, the first equation is a standard matrix Riccati differential equation. Since the coefficients are continuous, $R_0$ is symmetric and positive definite, and $\mathbb F_0$ is positive semi-definite, the equation admits a unique global solution over $[0,T]$ for any $T>0$. Injecting the solution $t\mapsto K_t$ into the second equation yields a linear ordinary differential equation with continuous coefficients, for which global unique solvability also holds. Therefore the FBSDE \eqref{fo:fbsde_xx_yy} is uniquely solvable, and the optimal control exists and is given by:
\begin{equation}
\label{fo:mmmfg_cl_major_opt}
\alpha_t^{0 *} = - \frac12 R_0^{-1}\mathbb B_0^\dagger K_t \mathbb X_t - \frac12 R_0^{-1}\mathbb B_0^\dagger k_t,
\end{equation}
which is an affine function of $X_t^0$ and $\bar X_t$.
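In practice, system \eqref{fo:first_system} can be integrated backward from the terminal condition $K_T=0$, $k_T=0$. The following Python sketch is our own illustration (not part of the paper's numerics); it assumes constant block coefficients, passed in under hypothetical names, and applies a plain backward Euler scheme:

```python
import numpy as np

def solve_major_riccati(LL, BB0, R0, FF0, f0, CC0, T=1.0, n=2000):
    """Backward Euler integration of the terminal-value system (fo:first_system):
        0 = K' - (1/2) K G K + K LL + LL^T K + 2 FF0,    K(T) = 0,
        0 = k' + (LL^T - (1/2) K G) k + K CC0 + 2 f0,    k(T) = 0,
    with G = BB0 R0^{-1} BB0^T and constant coefficients LL, CC0, FF0, f0."""
    G = BB0 @ np.linalg.solve(R0, BB0.T)
    d = LL.shape[0]
    dt = T / n
    K = np.zeros((d, d))   # K(T) = 0
    k = np.zeros(d)        # k(T) = 0
    for _ in range(n):
        # step backward in time: X(t - dt) = X(t) - dt * X'(t)
        K = K + dt * (-0.5 * K @ G @ K + K @ LL + LL.T @ K + 2.0 * FF0)
        k = k + dt * ((LL.T - 0.5 * K @ G) @ k + K @ CC0 + 2.0 * f0)
    return K, k
```

In the scalar case with $\mathbb L_0^{(cl)}=0$ and $G=1$, the scheme reproduces the explicit solution $K_t=\sqrt{2}\tanh\bigl((T-t)/\sqrt{2}\bigr)$ of the corresponding Riccati equation, which gives a convenient sanity check.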
\vskip 6pt
We now turn to the representative minor player's optimization problem. We assume that the major player uses the feedback strategy $\alpha^0_t=\phi^0(t, X_t^0, \bar X_t)$ and the representative of the other minor players uses the feedback strategy $\alpha_t=\phi(t, X_t, X_t^0, \bar X_t)$ of the forms \eqref{fo:alpha0} and \eqref{fo:alpha} respectively. These choices lead to the dynamics of the state $\mathbb X_t = [\overline X_t^\dagger, X_t^{0\dagger}]^\dagger$ given by:
$$
d\mathbb X_t=[\mathbb L^{(cl)}(t) \mathbb X_t + \mathbb C^{(cl)}(t) ]dt + \mathbb{D}_0 dW_t^0
$$
with:
\begin{equation*}
\mathbb L^{(cl)}(t) =\left[\begin{array}{cc}L+F+B(\phi_1(t) + \phi_3(t)) & G + B\phi_2(t) \\ F_0 + B_0\phi^0_2(t) & L_0 + B_0\phi^0_1(t) \end{array}\right],
\quad
\mathbb C^{(cl)}(t) =\left[\begin{array}{c} B\phi_0(t) \\ B_0\phi_0^0(t) \end{array}\right].
\end{equation*}
We wrote $\mathbb L^{(cl)}(t)$ and $\mathbb C^{(cl)}(t)$ instead of $\mathbb L^{(cl),\phi^0,\phi}(t)$ and $\mathbb C^{(cl),\phi^0,\phi}(t)$ in order to simplify the notation. In this environment, we search for the best response of a representative minor player trying to minimize as earlier,
$$
\inf_{\tilde\boldsymbol{\alpha}\in\AA}\mathbb E \left[\int_0^T [(\tilde X_t - [H_1, H] \mathbb X_t - \eta)^{\dagger} Q(\tilde X_t - [H_1, H] \mathbb X_t - \eta)+\tilde\alpha_t^\dagger R \tilde\alpha_t ] dt \right],
$$
where the dynamics of the controlled state $\tilde X_t$ are given as before by:
$$
d\tilde X_t = (L \tilde X_t + B \tilde\alpha_t + [F,G]\mathbb X_t) dt + D dW_t.
$$
Again the process $\mathbb{X}_t$ is merely part of the random coefficients of the optimization problem. We introduce the reduced Hamiltonian:
\begin{equation*}
\begin{split}
&H^{(r),\phi^0,\phi}(t,\tilde x,\tilde y,\tilde\alpha) =
\tilde y^{\dagger}(L \tilde x + B \tilde\alpha + [F,G] \mathbb{X}_t)\\
&\hskip 40pt
+ (\tilde x - [H_1, H] \mathbb{X}_t - \eta)^{\dagger} Q(\tilde x - [H_1,H] \mathbb{X}_t - \eta)+\tilde\alpha^\dagger R \tilde\alpha.
\end{split}
\end{equation*}
and we find that the optimal control is given by $\tilde\alpha_t^*=-\frac12 R^{-1}B^\dagger \tilde Y_t$, where $(\tilde X_t, \mathbb X_t,\tilde Y_t, \tilde Z_t,\tilde Z^0_t)_{0\le t\le T}$ solves the linear FBSDE:
\begin{equation*}
\begin{cases}
&\hskip -10pt
d\tilde X_t = (L\tilde X_t -\frac12 B R^{-1}B^{\dagger}\tilde Y_t + [F,G]\mathbb X_t) dt + D dW_t\\
&\hskip -10pt
d\mathbb X_t=[\mathbb L^{(cl)}(t) \mathbb X_t + \mathbb C^{(cl)}(t) ]dt + \mathbb{D}_0 dW_t^0\\
&\hskip -10pt
d\tilde Y_t = -(L^{\dagger} \tilde Y_t +2Q \tilde X_t - 2Q [H_1,H] \mathbb{X}_t - 2Q\eta) dt + \tilde Z_t dW_t + \tilde Z_t^0 dW_t^0,\;\;\tilde Y_T =0.
\end{cases}
\end{equation*}
Again we search for a solution of the form $\tilde Y_t = \mathbb S_t \mathbb X_t + S_t \tilde X_t + s_t$ for continuous deterministic functions $t\mapsto \mathbb S_t\in\mathbb R^{d\times (d+d_0)}$, $t\mapsto S_t\in\mathbb R^{d\times d}$ and $t\mapsto s_t\in\mathbb R^d$. Proceeding as before, we see that these functions provide a solution to the above FBSDE if and only if they solve the system of ordinary differential equations:
\begin{equation}
\label{fo:second_system}
\begin{cases}
&\hskip -10pt
0 =\dot{S}_t + S_t L + L^\dagger S_t - \frac12 S_tBR^{-1}B^\dagger S_t + 2Q,\;\;\;\;S_T = 0\\
&\hskip -10pt
0 =\dot{\mathbb S}_t+ \mathbb S_t \mathbb L^{(cl)}(t) + L^\dagger \mathbb S_t - \frac12 S_tBR^{-1}B^\dagger \mathbb S_t + S_t [F,G] - 2Q[H_1,H],\;\;\; \mathbb S_T = 0\\
&\hskip -10pt
0 = \dot{s}_t + (L^\dagger - \frac12 S_tBR^{-1}B^\dagger) s_t + \mathbb S_t \mathbb C^{(cl)}(t) - 2Q\eta,\;\;\;\;s_T = 0.
\end{cases}
\end{equation}
The first equation is a standard symmetric matrix Riccati equation. As before, the fact that $Q$ is symmetric and non-negative definite and $R$ is symmetric and positive definite implies that this Riccati equation has a unique solution on $[0,T]$. Note that its solution $S_t$ is symmetric and independent of the input feedback functions $\phi^0$ and $\phi$ giving the controls chosen by the major player and the other minor players. Injecting the solution $S_t$ into the second and third equations leads to a linear system of ordinary differential equations which can be readily solved.
Given such a solution we find that the optimal control can be expressed as:
\begin{equation}
\label{fo:mmmfg_cl_minor_opt}
\tilde\alpha_t^* = -\frac12 R^{-1}B^\dagger[\mathbb S_t \mathbb X_t + S_t X_t + s_t]
\end{equation}
which is indeed an affine function of $X_t$, $X_t^0$ and $\bar X_t$.
\vskip 6pt
Now that the two optimization problems are solved, we can tackle the fixed point step. We just proved that the best response map leaves the set of affine controls of the forms \eqref{fo:alpha0} and \eqref{fo:alpha} invariant. This suggests that we can look for a fixed point in this set. For such a fixed point, we must have:
$$
\alpha_t^{0,*} = \phi^0(t, X_t^0, \overline X_t)=\phi_0^0(t) +\phi_1^0(t)X^0_t +\phi^0_2(t)\overline X_t,
$$
and:
$$
\tilde\alpha_t^* = \phi(t, X_t, X_t^0, \bar X_t)=\phi_0(t) +\phi_1(t)X_t +\phi_2(t)X_t^0 +\phi_3(t)\overline X_t,
$$
which translates into the following equations:
\begin{align*}
[\phi^0_2(t),\phi^0_1(t)] = -\frac12 R_0^{-1}\mathbb{B}_0^\dagger K_t, &\;\;\;\;\; \phi^0_0(t) = -\frac12 R_0^{-1}\mathbb{B}_0^\dagger k_t,\\
[\phi_3(t),\phi_2(t)] = -\frac12 R^{-1}B^\dagger\mathbb S_t, &\;\;\;\;\; \phi_1(t) = -\frac12 R^{-1}B^\dagger S_t,&\phi_0(t) = -\frac12 R^{-1}B^\dagger s_t.
\end{align*}
To complete the construction of the equilibrium, it thus remains to determine the quantities $K_t$, $k_t$, $\mathbb S_t$, $S_t$ and $s_t$ from the systems
\eqref{fo:first_system} and \eqref{fo:second_system}.
As we already noticed, the second equation of \eqref{fo:first_system} can be used to determine $k_t$ from $K_t$. As for \eqref{fo:second_system}, $S_t$ can be obtained by solving the first equation on its own, and once this is done the third equation of \eqref{fo:second_system} can be used to determine $s_t$ from $\mathbb S_t$. In other words, we can solve for $S_t$ by solving the first equation of \eqref{fo:second_system}, and then group the remaining four equations into two systems of ordinary differential equations as follows:
\begin{equation}
\label{fo:Riccatis}
\begin{cases}
&0 =\dot{K}_t + K_t[\mathbb L(t) - \frac12\mathbb B R^{-1}B^\dagger \mathbb S_t] + [\mathbb L(t) - \frac12\mathbb B R^{-1}B^\dagger \mathbb S_t]^\dagger K_t\\
&\hskip 165pt
-\frac12 K_t \mathbb B_0R_0^{-1}\mathbb B_0^\dagger K_t + 2\mathbb F_0\\
&0 =\dot{\mathbb S}_t + \mathbb S_t\mathbb L(t) + [L^\dagger - \frac12 S_t BR^{-1}B^\dagger] \mathbb S_t- \frac12\mathbb S_t\mathbb B R^{-1}B^\dagger \mathbb S_t\\
&\hskip 75pt
- \frac12\mathbb S_t \mathbb B_0R_0^{-1}\mathbb B_0^\dagger K_t + [S_t F - 2QH_1, S_tG - 2QH]\\
\end{cases}
\end{equation}
and
\begin{equation}
\label{fo:non_Riccatis}
\begin{cases}
&0=\dot k_t + [\mathbb L(t) - \frac12\mathbb B R^{-1}B^\dagger \mathbb S_t]^\dagger k_t - \frac12 K_t\mathbb B_0R_0^{-1}\mathbb B_0^\dagger k_t
- \frac12 K_t\mathbb B R^{-1}B^\dagger s_t + 2f_0\\
&0=\dot s_t + [L^\dagger - \frac12 S_t BR^{-1}B^\dagger] s_t - \frac12\mathbb S_t\mathbb B_0R_0^{-1}\mathbb B_0^\dagger k_t
- \frac12\mathbb S_t\mathbb B R^{-1}B^\dagger s_t - 2Q\eta
\end{cases}
\end{equation}
with $0$ as terminal condition, where we used the notation:
$$
\mathbb L(t) := \mathbb L_0 -\frac12\left[ \begin{array}{cc}BR^{-1}B^\dagger S_t&0\\0&0\end{array}\right].
$$
The first system \eqref{fo:Riccatis} comprises two mildly coupled matrix Riccati equations, while the system \eqref{fo:non_Riccatis}, once the solutions of the
first system are identified and substituted for, is a plain linear system whose solution is standard.
In other words, the functions $t\mapsto k_t$ and $t\mapsto s_t$ can easily be determined once a solution $t\mapsto (K_t,\mathbb S_t)$ of system \eqref{fo:Riccatis} is found. In essence, we proved the following verification theorem.
\begin{theorem}
If the system \eqref{fo:Riccatis} of matrix Riccati equations is well posed, then there exists a Nash equilibrium in the family of linear closed loop feedback controls, the optimal controls for the major and minor players being given by the strategies \eqref{fo:mmmfg_cl_major_opt} and \eqref{fo:mmmfg_cl_minor_opt}.
\end{theorem}
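The staged solution procedure described above (first $S_t$, then the coupled pair $(K_t,\mathbb S_t)$ of \eqref{fo:Riccatis}, and finally the linear system \eqref{fo:non_Riccatis} for $(k_t,s_t)$) can be made concrete with a short numerical sketch. The following Python code is our own illustration, not the paper's implementation; the block coefficients $\mathbb L_0$, $\mathbb F_0$, $f_0$ of the major player's extended problem are passed in as toy inputs, and the $\tfrac12$/$2$ factor conventions of the stochastic maximum principle used here are assumed throughout:

```python
import numpy as np

def solve_nash_odes(L, B, R, Q, F, Gc, H1, H, eta, L0, B0, R0,
                    LL0, FF0, f0, T=1.0, n=4000):
    """Staged backward Euler solve: S_t from the first equation of
    (fo:second_system), then (K_t, SS_t) from (fo:Riccatis), with the linear
    system (fo:non_Riccatis) for (k_t, s_t) integrated along the way.
    Gc plays the role of the coupling matrix G; LL0, FF0, f0 are toy inputs."""
    d, d0 = L.shape[0], L0.shape[0]
    dt = T / n
    Ri, R0i = np.linalg.inv(R), np.linalg.inv(R0)
    BB = np.vstack([B, np.zeros((d0, B.shape[1]))])    # minor control, lifted
    BB0 = np.vstack([np.zeros((d, B0.shape[1])), B0])  # major control, lifted
    # stage 1: S_t, independent of the feedback functions
    S = np.zeros((d, d))
    Ssol = [S]
    for _ in range(n):
        S = S + dt * (S @ L + L.T @ S - 0.5 * S @ B @ Ri @ B.T @ S + 2.0 * Q)
        Ssol.append(S)
    # stages 2 and 3: (K, SS) and (k, s), all with terminal condition 0
    K = np.zeros((d + d0, d + d0))
    SS = np.zeros((d, d + d0))
    k, s = np.zeros(d + d0), np.zeros(d)
    for i in range(n):
        S = Ssol[i]                                # S at the same backward time
        LLt = LL0.copy()
        LLt[:d, :d] -= 0.5 * B @ Ri @ B.T @ S      # \mathbb{L}(t)
        M = LLt - 0.5 * BB @ Ri @ B.T @ SS         # effective major dynamics
        dK = K @ M + M.T @ K - 0.5 * K @ BB0 @ R0i @ BB0.T @ K + 2.0 * FF0
        dSS = (SS @ LLt + (L.T - 0.5 * S @ B @ Ri @ B.T) @ SS
               - 0.5 * SS @ BB @ Ri @ B.T @ SS
               - 0.5 * SS @ BB0 @ R0i @ BB0.T @ K
               + S @ np.hstack([F, Gc]) - 2.0 * Q @ np.hstack([H1, H]))
        dk = (M.T @ k - 0.5 * K @ BB0 @ R0i @ BB0.T @ k
              - 0.5 * K @ BB @ Ri @ B.T @ s + 2.0 * f0)
        ds = ((L.T - 0.5 * S @ B @ Ri @ B.T) @ s
              - 0.5 * SS @ BB0 @ R0i @ BB0.T @ k
              - 0.5 * SS @ BB @ Ri @ B.T @ s - 2.0 * Q @ eta)
        K, SS, k, s = K + dt * dK, SS + dt * dSS, k + dt * dk, s + dt * ds
    return S, K, SS, k, s
```

With all coupling and target matrices set to zero, $\mathbb S_t$, $k_t$ and $s_t$ vanish identically and the scheme reduces to two independent Riccati solves, which provides a basic consistency check.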
\section{Application}
\label{se:application}
In this final section, we apply the theoretical results derived above to a model of flocking inspired by the mean field game formulation proposed in \cite{NourianCainesMalhame} to generalize a basic descriptive model originally proposed by Cucker and Smale in \cite{CuckerSmale}, in which the dynamics of a large population of agents are governed by forces depicting the mean reversion of each individual's velocity toward the mean velocity of the population. Huang (reference) later formulated this flocking model as a mean field game, in which the emergent behavior is obtained as a Nash equilibrium of the game. While early models of flocking do not involve any form of central coordination, several authors have recently proposed generalizations in which leaders are introduced into the population, such a leader having a pivotal impact on the rest of the population. In this spirit, we generalize Huang's formulation of the flocking mean field game by introducing a free-will leader pursuing a prescribed schedule of velocity.
In this section, we borrow the terminology used in the dynamical systems literature on large population behavior and call the major player the \emph{leader} and the minor players \emph{followers}. However, the reader should not be misled by this terminology: we are not solving a leader-follower game; we are solving for a Nash equilibrium of the mean field game with major and minor players.
\vskip 4pt
Given a population of $N$ minor players (followers), we denote by $V_t^{0,N}$ the velocity of the major player (leader) at time $t$, and by $V_t^{n,N}$ the velocity of the $n$-th follower. The leader and the followers control the drifts of their velocities whose dynamics are
given as It\^o processes:
\begin{equation}
\label{fo:flocking_dynamics}
\begin{cases}
&dV_t^{0,N}=\alpha^0_t dt + \Sigma_0 dW_t^0\\
&dV_t^{n,N}=\alpha^n_t dt + \Sigma dW_t^n
\end{cases}
\end{equation}
where the $d$-dimensional Wiener processes $\{{\boldsymbol W}^i=(W^i_t)_{0\le t\le T};\;i=0,1,\cdots,N\}$ are independent, and $\Sigma_0$ and $\Sigma$ are constant $d\times d$ matrices. We also assume that we are given a deterministic function $[0,T]\ni t \rightarrow \nu_t\in\mathbb R^d$ representing the leader's free will, namely the velocity the major player would like to have while keeping a reasonable distance from the pack. If we denote by $\bar V_t^{N}:= \frac{1}{N}\sum_{n=1}^N V_t^{n,N}$ the average velocity of the followers, the objective of the leader is to minimize its expected costs over the horizon $T$:
\[
J^0 = \mathbb{E}\Bigl[\int_0^T \bigl(\lambda_0\|V^{0,N}_t - \nu_t\|^2 + \lambda_1\|V^{0,N}_t - \bar V^{N}_t\|^2 + (1 - \lambda_0 - \lambda_1) \|\alpha^0_t\|^2 \bigr)dt\Bigr]
\]
where $\lambda_0$ and $\lambda_1$ are positive real numbers satisfying $\lambda_0 + \lambda_1 < 1$. Similarly, each follower faces a tradeoff between keeping up with the leader and staying close to its peers. So the objective of the $n$-th follower is to minimize:
\[
J^n = \mathbb{E}\Bigl[\int_0^T \bigl( l_0\|V^{n,N}_t - V^{0,N}_t\|^2 + l_1\|V^{n,N}_t - \bar V^{N}_t\|^2 + (1 - l_0 - l_1) \|\alpha^n_t\|^2 \bigr)dt\Bigr]
\]
where $l_0$ and $l_1$ are positive reals satisfying $l_0 + l_1 < 1$. While the above model is clearly linear quadratic, it does not fit in the framework used in this paper as stated. However, it is easy to remedy this problem by simply doubling the state variable. More specifically, we define $X_t^0 := [V_t^0, V_t^0]$, $X_t := [V_t, V_t]$ and $\bar X_t := [\bar V_t, \bar V_t]$, and we set:
\begin{equation*}
\begin{split}
L_0 = L = F_0 = F = G = \left[\begin{array}{cc}0 & 0 \\ 0&0 \end{array}\right],\;\;\; B_0 = B = \left[\begin{array}{c}I \\ I \end{array}\right],\;\;D_0= \left[\begin{array}{c}\Sigma_0 \\ \Sigma_0 \end{array}\right],\;\;D= \left[\begin{array}{c}\Sigma \\ \Sigma \end{array}\right]\\
H = \left[\begin{array}{cc}I & 0 \\ 0&0 \end{array}\right],\;\;H_0 = H_1 = \left[\begin{array}{cc}0 & 0 \\ 0&I \end{array}\right],\;\;Q_0 = \left[\begin{array}{cc}\lambda_0I & 0 \\ 0&\lambda_1I \end{array}\right],\;\;Q = \left[\begin{array}{cc}l_0I & 0 \\ 0&l_1I \end{array}\right]\\
\eta_0(t) = \left[\begin{array}{c}\nu(t) \\ 0 \end{array}\right],\;\;\eta = \left[\begin{array}{c}0 \\ 0 \end{array}\right],\;\;R_0 = (1 - \lambda_0 - \lambda_1)I,\;\;R = (1 - l_0 - l_1)I
\end{split}
\end{equation*}
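For concreteness, the block coefficients above can be assembled numerically. The following Python sketch is ours (names and default penalty values are illustrative assumptions, not the paper's); it also makes it easy to verify dimensions and the positive definiteness of $R_0$ and $R$:

```python
import numpy as np

def flocking_blocks(d=2, lam0=0.8, lam1=0.19, l0=0.8, l1=0.19,
                    Sigma0=0.5, Sigma=0.5):
    """Assemble the linear-quadratic block coefficients of the flocking model
    after doubling the state (X^0 = [V^0, V^0], X = [V, V])."""
    I, Z = np.eye(d), np.zeros((d, d))
    Z2 = np.zeros((2 * d, 2 * d))
    return {
        "L0": Z2.copy(), "L": Z2.copy(), "F0": Z2.copy(),
        "F": Z2.copy(), "G": Z2.copy(),
        "B0": np.vstack([I, I]), "B": np.vstack([I, I]),
        "D0": np.vstack([Sigma0 * I, Sigma0 * I]),
        "D": np.vstack([Sigma * I, Sigma * I]),
        "H": np.block([[I, Z], [Z, Z]]),
        "H0": np.block([[Z, Z], [Z, I]]),
        "H1": np.block([[Z, Z], [Z, I]]),
        "Q0": np.block([[lam0 * I, Z], [Z, lam1 * I]]),
        "Q": np.block([[l0 * I, Z], [Z, l1 * I]]),
        "R0": (1.0 - lam0 - lam1) * np.eye(d),
        "R": (1.0 - l0 - l1) * np.eye(d),
    }
```

Note that the constraint $\lambda_0+\lambda_1<1$ (and $l_0+l_1<1$) is exactly what keeps $R_0$ (and $R$) positive definite in this construction.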
We implemented the solution of this model in the $d=2$ dimensional case, choosing
$$
\nu(t) := [-2\pi \sin(2\pi t), 2\pi \cos(2\pi t)]
$$
for the leader's free will. We also chose $\Sigma_0 = \Sigma = 0.5\, I_2$. For a given choice of the penalty coefficients $\lambda_0, \lambda_1, l_0, l_1$, we use Euler's method to solve numerically the system of matrix Riccati equations \eqref{fo:Riccatis} over the horizon $T = 5$, and compute the closed loop Nash equilibrium strategies of the leader and the representative follower in the mean field game limit.
We simulate the dynamics of the leader and $N$ followers defined in \eqref{fo:flocking_dynamics}, where we assign the equilibrium control strategies of the mean field game to the leader and each follower.
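A simulation of this kind can be sketched as follows. This Python code is our own illustration: for brevity, the affine feedback gains are simple hand-picked mean-reversion coefficients standing in for the Riccati-derived equilibrium gains, and the function names are ours:

```python
import numpy as np

def simulate_flock(N=50, T=5.0, n=500, k_nu=4.0, k_bar=1.0,
                   c0=4.0, c1=1.0, sigma=0.5, seed=0):
    """Euler--Maruyama simulation of the flocking dynamics
    (fo:flocking_dynamics) in dimension d = 2. The gains k_nu, k_bar, c0, c1
    are illustrative stand-ins for the equilibrium feedback gains."""
    rng = np.random.default_rng(seed)
    dt = T / n
    nu = lambda t: np.array([-2 * np.pi * np.sin(2 * np.pi * t),
                             2 * np.pi * np.cos(2 * np.pi * t)])
    V0 = np.zeros(2)              # leader velocity
    V = np.zeros((N, 2))          # follower velocities
    path0, paths = [V0.copy()], [V.copy()]
    for i in range(n):
        t = i * dt
        Vbar = V.mean(axis=0)
        a0 = k_nu * (nu(t) - V0) + k_bar * (Vbar - V0)   # leader control
        a = c0 * (V0 - V) + c1 * (Vbar - V)              # follower controls
        V0 = V0 + a0 * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        V = V + a * dt + sigma * np.sqrt(dt) * rng.standard_normal((N, 2))
        path0.append(V0.copy())
        paths.append(V.copy())
    return np.array(path0), np.array(paths)
```

Integrating the returned velocity paths in time produces trajectory plots of the kind shown in the figures.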
\vskip 4pt
Figure \ref{figure:optimal_trajectory} shows the trajectories (points in the plane) and the velocities (arrows) of the flock. The leader's trajectory is plotted in black and those of the followers in color. We observe that the prescribed velocity $\nu$ is best followed by the flock when the leader cares more about pursuing its objective and the followers are more committed to following the leader rather than sticking with the average of the population. Conversely, if the individuals attribute more importance to staying close to the population, the flock follows an erratic trajectory in the beginning and eventually reaches a common direction of motion.
\begin{figure}[H]
\centering
\includegraphics[scale=0.42, trim = 3mm 0mm 5mm 0mm, clip=true]{trajectory_80_19_1_80_19_1.pdf}
\includegraphics[scale=0.42, trim = 3mm 0mm 3mm 0mm, clip=true]{trajectory_80_19_1_19_80_1.pdf}
\includegraphics[scale=0.42, trim = 3mm 0mm 3mm 0mm, clip=true]{trajectory_19_80_1_80_19_1.pdf}
\includegraphics[scale=0.42, trim = 3mm 0mm 3mm 0mm, clip=true]{trajectory_19_80_1_19_80_1.pdf}
\caption{Optimal velocities and trajectories of the leader and followers}
\label{figure:optimal_trajectory}
\end{figure}
Our simulation also gives a peek into the effect of the propagation of chaos, which states that in the limit of an infinite number of followers, the velocities of the followers become independent conditionally on the noise process driving the leader's velocity. To visualize this effect, for a given number $N$ of followers, we fix a realization of the Wiener process ${\boldsymbol W}^0$ driving the dynamics of the leader's velocity. We simulate $S$ copies of the optimal paths $V_t^{0,N}$ and $V_t^{n,N}, n = 1,\dots,N$, where for each sample path we use the Wiener process we fixed before for the leader, but independent copies of the Wiener processes for the followers. Then, for a given $t$, we compute the sample correlation matrix of $V_t^{i,N,(1)}, i = 1,\dots, 5$, the first components of the velocities of the first 5 followers at time $t$. Finally, we average the correlation matrices over the times $t\le T$. Figure \ref{figure:propagation_chaos} displays the average correlation matrices for flocks of sizes $N=5, 10, 20, 50, 100$ obtained by following this procedure. It can be seen that the correlation between the followers' velocities decreases dramatically to 0 as the size of the flock grows. Indeed, the linearity of the leader and follower strategies implies that the whole system evolves as a vector-valued Ornstein-Uhlenbeck process, so the velocity of any individual at a given time is Gaussian. Since independence is equivalent to zero correlation for Gaussian vectors, the convergence of the correlation matrices provides strong evidence of the conditional propagation of chaos.
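The conditional-correlation experiment can be sketched numerically as follows. This Python code is our own simplified illustration: it freezes one common-noise path, redraws the idiosyncratic noises across Monte Carlo copies, and tracks the correlation of two followers' first velocity components; the gains and the leader's zero target schedule are illustrative assumptions, not the equilibrium strategies:

```python
import numpy as np

def conditional_correlation(N=40, S_copies=200, T=1.0, n=100,
                            c0=4.0, c1=1.0, kappa=4.0, sigma=0.5, seed=0):
    """Monte Carlo check of conditional propagation of chaos: with W^0 frozen,
    the sample correlation across copies of the velocities of followers 0 and 1
    should be small for large N. Averaged over the second half of the grid."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dW0 = sigma * np.sqrt(dt) * rng.standard_normal((n, 2))  # frozen common noise
    V0 = np.zeros((S_copies, 2))
    V = np.zeros((S_copies, N, 2))
    corr_sum, count = 0.0, 0
    for i in range(n):
        Vbar = V.mean(axis=1)
        a0 = -kappa * V0                                   # leader reverts to 0
        a = c0 * (V0[:, None, :] - V) + c1 * (Vbar[:, None, :] - V)
        V0 = V0 + a0 * dt + dW0[i]                         # same W^0 in all copies
        V = V + a * dt + sigma * np.sqrt(dt) * rng.standard_normal((S_copies, N, 2))
        if i > n // 2:
            x, y = V[:, 0, 0], V[:, 1, 0]                  # first components
            corr_sum += np.corrcoef(x, y)[0, 1]
            count += 1
    return corr_sum / count
```

For moderate $N$ the averaged correlation is already close to zero, consistent with the behavior displayed in Figure \ref{figure:propagation_chaos}.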
\begin{figure}[H]
\centering
\includegraphics[scale=0.3, trim = 15mm 0mm 15mm 0mm, clip=true]{corr5.pdf}
\includegraphics[scale=0.3, trim = 15mm 0mm 15mm 0mm, clip=true]{corr10.pdf}
\includegraphics[scale=0.3, trim = 15mm 0mm 15mm 0mm, clip=true]{corr20.pdf}
\includegraphics[scale=0.3, trim = 15mm 0mm 15mm 0mm, clip=true]{corr50.pdf}
\includegraphics[scale=0.3, trim = 15mm 0mm 3mm 0mm, clip=true]{corr100.pdf}
\caption{Conditional correlation of followers' velocities}
\label{figure:propagation_chaos}
\end{figure}
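The averaging procedure described above can be sketched in code. The dynamics below are a deliberately simplified stand-in (an OU leader and followers relaxing toward it at an assumed rate `kappa` with noise level `sigma`; the actual optimal feedback is not reproduced here), intended only to illustrate fixing one common-noise path and time-averaging the $5\times 5$ sample correlation matrices across $S$ copies:

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_conditional_correlation(N, S, T=1.0, steps=200, kappa=2.0, sigma=0.5):
    """Time-averaged sample correlation of the first 5 followers' velocities,
    conditioned on one fixed common-noise path W^0 (toy linear dynamics)."""
    dt = T / steps
    dW0 = sigma * rng.normal(0.0, np.sqrt(dt), steps)  # fixed leader noise, shared by all S copies
    v0 = 0.0                        # leader velocity (identical across copies, given W^0)
    V = np.zeros((S, N))            # follower velocities, one row per simulated copy
    corr_sum = np.zeros((5, 5))
    for t in range(steps):
        v0 += -kappa * v0 * dt + dW0[t]
        # each follower relaxes toward the leader, with its own independent noise
        V += kappa * (v0 - V) * dt + sigma * rng.normal(0.0, np.sqrt(dt), (S, N))
        corr_sum += np.corrcoef(V[:, :5], rowvar=False)
    return corr_sum / steps

C = avg_conditional_correlation(N=50, S=400)
```

In this stripped-down model the followers interact only through the leader, so the off-diagonal entries of `C` are already near zero; in the full model with mean-field interaction the decay appears only as $N$ grows, as in Figure \ref{figure:propagation_chaos}.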
\section{Introduction}
Coupled cell networks are used to represent systems of coupled
dynamical systems schematically. Such systems appear in various
biological systems. Networks of eight coupled cells modeling central
pattern generators in quadrupeds can be used to recover the primary
animal gaits \cite{Gaits2,Gaits1}. One of the important conclusions of
the theory of coupled cell systems is that the network itself imposes
constraints on the possible behaviors of the system even when we lack
detailed knowledge of the behavior of the cells within the network. A
recent application to head movement that illustrates the importance of
this is \cite{LieJune}. For further applications see
\cite{StewartNature}.
Mathematically coupled cell networks are a subclass of vertex and edge
labeled directed multigraphs with loops. Vertices with the same label
represent multiple copies of the same dynamical system. Edge labels
represent the type of coupling. A compatibility condition is imposed
that requires every vertex with a given label to receive the same set
of coupling types as inputs. As with any class of graphs there is a
natural notion of \emph{isomorphic} coupled cell networks induced by
bijections between the sets of vertices. This representation of
coupled cell networks follows \cite{Synchrony}. An alternative
approach is outlined in \cite{Field}.
Following Stewart and Golubitsky one may associate to each coupled
cell network a class of ordinary differential equations that are
compatible with the network structure, the class of coupled cell
systems associated to a coupled cell network. As was pointed out in
\cite{Synchrony} it is possible for
non-isomorphic coupled cell networks to have the same class of coupled
cell systems. In this case we term the two coupled cell networks
O.D.E. equivalent. We will give a full description of the coupled cell
systems associated to a coupled cell network for the simple case of
identical edge homogeneous coupled cell networks. For the definition
in the general case see \cite{DiasLinear}.
Aguiar and Dias \cite{Minimal} examine the structure of O.D.E. equivalence
classes for such coupled cell networks. They find a collection of
\emph{canonical normal forms}: networks whose number
of edges is minimal within the equivalence class. This they term the
\emph{minimal} subclass.
In this paper we consider the simplest type of coupled cell networks,
the identical edge homogeneous coupled cell networks. These are simply
directed multigraphs with loops where every vertex has the same
indegree. Aldosray and Stewart gave an enumeration of these networks,
counted up to isomorphism, in \cite{StewartCounting}.
Using a simpler method specific to the case of homogeneous networks we
recover Theorem 10.3 of \cite{Minimal} which characterizes the minimal
subclass in this case. Furthermore, we are able to give a recursive
formula for enumerating the minimal systems with a given number of
vertices and edges.
\section{Coupled Cell Systems, Coupled Cell Networks, and
O.D.E. Equivalence.}
We will deal exclusively with identical edge homogeneous coupled cell
networks, hereafter referred to simply as networks.
Mathematically such a network is a directed multigraph where loops are
allowed and where every vertex has the same in-degree. If the constant
in-degree is $r$ we will call the network degree $r$. A directed
multigraph consists of a set of vertices $V$ and a multiset of edges
$E$ with elements in $V \times V$. A multiset may be thought of as a
function $E: V \times V \rightarrow \mathbb{N}$; we call this function the
\emph{edge multiplicity function}. The condition that every vertex has
the same in-degree, $r$, is then that for all $v \in V$, $\sum_{u \in V} E(u,v) = r$.
Given an $n$ cell degree $r$ network $G=(V,E)$, a choice of finite
dimensional phase space $P=\mathbb{R}^d$, and a function $F: P \times P^r
\rightarrow P$ such that $F(x_1; y_1,\dots y_r)$ is invariant under
all permutations of the variables $y_1, \dots, y_r$, we may produce a
vector field on $P^n$. The vector field for the variable $x_i$
associated to cell $i$ is
\begin{equation*}
\dot{x}_i = F(x_i; x_{j^{(i)}_1},\dots x_{j^{(i)}_r})
\end{equation*}
where $j^{(i)}_1, \dots, j^{(i)}_r$ are the source cells for the $r$
arcs that terminate at vertex $i$. The complete system is
\begin{eqnarray}
\dot{x}_1 &=& F(x_1; x_{j^{(1)}_1},\dots x_{j^{(1)}_{r\phantom{1}}}) \nonumber\\
&\vdots& \nonumber\\
\dot{x}_n &=& F(x_n; x_{j^{(n)}_1},\dots x_{j^{(n)}_{r\phantom{1}}}) \nonumber
\end{eqnarray}
The set of such vector fields is a subset of the vector fields on
$P^n$.
A vector field obtained from a coupled cell network $G$ by a choice of
phase space and function $F$ is referred to as a \emph{coupled cell
system}, or an \emph{admissible vector field}, associated to $G$.
We may consider the class of all admissible vector fields for a given
network $G$ and phase space $P$. We will denote this class of vector
fields by $\mathfrak{X}_G^P$.
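The construction above can be made concrete with a short sketch. Here `A[i][j]` is taken to be the number of edges from cell $j$ to cell $i$ (our convention, so each row sums to the degree $r$), and `F` is assumed symmetric in its coupling arguments, as required:

```python
def admissible_rhs(A, F):
    # A[i][j] = number of edges from cell j to cell i; each row sums to the degree r.
    # F(x, inputs) plays the role of F(x; y_1, ..., y_r); it should be
    # symmetric in the entries of `inputs`.
    def rhs(x):
        out = []
        for i, row in enumerate(A):
            # list the source cell states with multiplicity
            inputs = [x[j] for j, mult in enumerate(row) for _ in range(mult)]
            out.append(F(x[i], inputs))
        return out
    return rhs

# linear example: F(x; y_1, ..., y_r) = a*x + b*(y_1 + ... + y_r)
a, b = -1.0, 0.5
F = lambda x, ys: a * x + b * sum(ys)
f = admissible_rhs([[0, 2], [2, 0]], F)   # two cells, degree 2
```

Evaluating `f([1.0, 3.0])` gives the right-hand side $\dot{x}_i = F(x_i; x_{j^{(i)}_1}, x_{j^{(i)}_2})$ for this two-cell network.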
\textbf{Definition:} Two coupled cell networks $G_1$ and $G_2$ are
called \emph{O.D.E. equivalent} if there exists a network $G'_2$
isomorphic to $G_2$ such that for all choices of phase space $P$
\begin{equation*}
\mathfrak{X}_{G_1}^P = \mathfrak{X}_{G_2'}^P.
\end{equation*}
More prosaically, an $n$ cell degree $r_1$ network $G_1$ and an $n$
cell degree $r_2$ network $G_2$ are called \emph{O.D.E. equivalent} if
there exists a network $G'_2$ isomorphic to $G_2$ such that
\begin{enumerate}
\item for all choices of phase space $P$ and function $F_1: P \times
P^{r_1} \rightarrow P$ there exists a function $F_2: P \times
P^{r_2} \rightarrow P$ such that for all vertices $i$
\begin{equation*}
F_1(x_i; x_{j^{(i)}_1},\dots x_{j^{(i)}_{r_1\phantom{1}}}) = F_2(x_i; x_{k^{(i)}_1},\dots x_{k^{(i)}_{r_2\phantom{1}}})
\end{equation*}
where $j^{(i)}_1, \dots, j^{(i)}_{r_1}$ are the source cells for the
$r_1$ arcs that terminate at cell $i$ in network $G_1$ and
$k^{(i)}_1, \dots, k^{(i)}_{r_2}$ are the source cells for the $r_2$
arcs that terminate at cell $i$ in network $G_2'$.
\item for all choices of phase space $P$ and function $F_2: P \times
P^{r_2} \rightarrow P$ there exists a function $F_1: P \times
P^{r_1} \rightarrow P$ such that for all vertices $i$
\begin{equation*}
F_1(x_i; x_{j^{(i)}_1},\dots x_{j^{(i)}_{r_1\phantom{1}}}) = F_2(x_i; x_{k^{(i)}_1},\dots x_{k^{(i)}_{r_2\phantom{1}}})
\end{equation*}
where $j^{(i)}_1, \dots, j^{(i)}_{r_1}$ are the source vertices for
the $r_1$ arcs that terminate at vertex $i$ in network $G_1$ and
$k^{(i)}_1, \dots, k^{(i)}_{r_2}$ are the source vertices for the $r_2$
arcs that terminate at vertex $i$ in network $G_2'$.
\end{enumerate}
If we consider $P=\mathbb{R}$ and linear
functions $F_1$ and $F_2$ then we obtain the notion of \emph{linear
equivalence}. It is shown in \cite{DiasLinear} that linear equivalence and
O.D.E. equivalence are equivalent.
\section{Network Operations that Preserve O.D.E. equivalence}
\label{sec:netw-oper-that}
In this section we introduce two operations that can be performed on a
network that preserve the O.D.E. equivalence class. Since we are
dealing exclusively with homogeneous networks these operations are a
small part of the network operations considered in
\cite{Minimal}. Both Lemma \ref{lem:Operations} and Lemma
\ref{lem:Reduced} can be deduced from the more general arguments in
\cite{Minimal}, in particular from Proposition 7.4. For completeness
we give proofs of both Lemma \ref{lem:Operations} and Lemma
\ref{lem:Reduced} using only what is required for our simpler
case. That we may consider only these two network operations and not
more general operations is crucial for the results in Section \ref{sec:Enumerate}.
Here we give two simple operations on networks that preserve the
O.D.E. equivalence class of the network.
\begin{enumerate}
\item Adding loops: A single loop is added to all vertices
in the network.
\item $k$-Splitting edges: Each edge in the network is replaced by $k$
identical copies of the edge.
\end{enumerate}
Intuitively, it should be clear that these operations preserve the
O.D.E. equivalence class of the network; however, a formal proof is
surprisingly difficult if one does not use the notion of linear
equivalence.
\begin{lem}\label{lem:Operations}
If network $G'$ is obtained from network $G$ by
either of the two network operations above then $G$ and
$G'$ are O.D.E. equivalent.
\end{lem}
\begin{proof}
Using \cite{DiasLinear} it is enough to prove that the two networks
are equivalent when the variables $x_i$ are taken to be in $\mathbb{R}$ and
the function $F$ is taken to be linear. In this case we observe that
for a degree $r$ network the function $F$ must take the form
\begin{equation*}
F(x; y_1, \dots, y_r) = a \, x + b (y_1 + \dots + y_r).
\end{equation*}
Consider a degree $r$ network. Adding a loop to every vertex we
obtain a degree $r+1$ network.
Given a function $F_r: \mathbb{R}^{r}\rightarrow \mathbb{R}$ defined by
\begin{equation*}
F_r(x;y_1, \dots,y_r) = a \, x + b (y_1 + \dots + y_r).
\end{equation*}
we define a function $F_{r+1}: \mathbb{R}^{r+1} \rightarrow \mathbb{R}$ by
\begin{equation*}
F_{r+1}(x; y_1, \dots, y_{r+1})= (a-b) \, x + b( y_1 + \dots +y_{r+1}) .
\end{equation*}
Clearly we have $F_{r+1}(x;x, y_1, \dots, y_r) = F_r(x;y_1, \dots,
y_r)$ and consequently the linear vector fields admissible for the
degree $r$ network are a subset of the linear vector fields
admissible for the $r+1$ degree network. We can easily go the other
direction. Given any function $F_{r+1}: \mathbb{R}^{r+1} \rightarrow \mathbb{R}$ of
the form
\begin{equation*}
F_{r+1}(x; y_1, \dots, y_{r+1})= a \, x + b( y_1 + \dots +y_{r+1})
\end{equation*}
we may define a function $F_{r}: \mathbb{R}^{r} \rightarrow \mathbb{R}$ by
\begin{equation*}
F_{r}(x; y_1, \dots, y_{r})= (a+b) \, x + b( y_1 + \dots +y_r) .
\end{equation*}
Again we have $F_r(x;y_1, \dots, y_r) = F_{r+1} (x; x, y_1, \cdots ,
y_r)$ and consequently we see that the two networks have precisely
the same set of admissible linear vector fields.
Consider a degree $r$ network. Performing the edge splitting
operation we obtain a degree $k \times r$ network. Given any
function $F_r:\mathbb{R}^{r}\rightarrow \mathbb{R}$ of the form
\begin{equation*}
F_r(x;y_1, \dots,y_r) = a \, x + b (y_1 + \dots + y_r).
\end{equation*}
we may define a function $F_{k \times r}: \mathbb{R}^{k \times r}
\rightarrow \mathbb{R}$ by
\begin{equation*}
F_{k \times r}(x; y_1, \dots, y_{k\times r})= a \, x + \frac{b}{k} (
y_1 + \dots + y_{k \times r}) .
\end{equation*}
Clearly we have
\begin{equation}
F_{k \times r}(x;\overbrace{y_1,\dots,
y_1}^{k\mathrm{-times}}, \dots, \overbrace{y_r,\dots,
y_r}^{k\mathrm{-times}}) = F_r(x;y_1, \dots,
y_r)\label{eq:Fkr}
\end{equation}
and consequently the linear vector fields admissible for the degree
$r$ network are a subset of the linear vector fields admissible for
the degree $k\cdot r$ network.
Given any function $F_{k \times r}: \mathbb{R}^{k \times r} \rightarrow \mathbb{R}$
of the form
\begin{equation*}
F_{k \times r}(x; y_1, \dots, y_{k\times r})= a \, x + b (
y_1 + \dots + y_{k \times r})
\end{equation*}
we may define a function $F_{r}: \mathbb{R}^{r} \rightarrow \mathbb{R}$ by
\begin{equation*}
F_{r}(x; y_1, \dots, y_{r})= a\, x + k \, b( y_1 + \dots +y_r) .
\end{equation*}
Again equation \eqref{eq:Fkr} holds, and consequently we see that the
two networks have precisely the same set of admissible linear vector
fields.
In both cases we see that the operation produces a new network with
precisely the same set of admissible linear vector fields. Thus we
have that the operations preserve the O.D.E. equivalence class.
\end{proof}
The operations create a network with a larger degree. However, when a
network has the required structure, the inverse of these operations
may be applied to produce a network with a smaller degree.
\begin{lem}\label{lem:Reduced}
For any identical edge homogeneous coupled cell network $G$, there
exists an O.D.E. equivalent network $G_M$ with the following
properties:
\begin{enumerate}
\item At least one vertex has no loops, and
\item The greatest common divisor of the multiplicities of the
edges is 1.
\end{enumerate}
We will refer to $G_M$ as a \emph{reduced} network
associated to $G$. If $G$ is not a reduced network then $G_M$ has
a lower degree than $G$.
\end{lem}
\begin{proof}
Let $s$ denote the minimum number of loops on a vertex in
$G$. Consider the new network $G'$ formed by removing exactly $s$
loops from every vertex. Clearly $G'$ has a vertex with no
loops. Since $G$ may be obtained from $G'$ by adding $s$ loops we
see that $G$ and $G'$ are O.D.E. equivalent. Let $d$ denote the
greatest common divisor of the edge multiplicities in $G'$. We may
form a new network $G_M$ by dividing all the edge multiplicities
by $d$. Since $G'$ had a vertex with no loops so does $G_M$. The
greatest common divisor of the edge multiplicities of $G_M$ is 1
by construction. Since we may obtain $G'$ from $G_M$ by splitting
each edge into $d$ edges we see that $G'$ and $G_M$ are
O.D.E. equivalent by Lemma \ref{lem:Operations}. Thus $G$ and
$G_M$ are O.D.E. equivalent and $G_M$ has the required
properties.
\end{proof}
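In adjacency-matrix terms, the proof of Lemma \ref{lem:Reduced} is an algorithm: strip the minimum number of loops from the diagonal, then divide every edge multiplicity by their greatest common divisor. A sketch (our conventions; it assumes some edge survives the loop removal):

```python
from math import gcd
from functools import reduce

def reduce_network(A):
    # A[i][j] = multiplicity of the edge j -> i; constant row sums (homogeneous network).
    # Assumes at least one non-loop edge remains after stripping loops.
    n = len(A)
    s = min(A[i][i] for i in range(n))                      # remove s loops from every vertex
    B = [[A[i][j] - s * (i == j) for j in range(n)] for i in range(n)]
    d = reduce(gcd, (x for row in B for x in row))          # undo the d-splitting
    return [[x // d for x in row] for row in B]
```

For example, `reduce_network([[2, 3], [3, 2]])` strips two loops from each vertex and then divides by 3, returning the degree-1 network `[[0, 1], [1, 0]]`.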
The use of $G_M$ to denote the reduced network is not
accidental. We will now show that $G_M$ is indeed the unique
minimal network in the O.D.E. equivalence class of $G$. Since any
network is O.D.E. equivalent to such a reduced network, it
suffices to show that two reduced networks that are
O.D.E. equivalent are isomorphic.
\begin{lem}
If $G_1$ and $G_2$ are reduced networks, and $G_1$ and $G_2$ are
O.D.E. equivalent, then $G_1$ and $G_2$ are isomorphic.
\end{lem}
\begin{proof} Let $G_2'$ be the network isomorphic to $G_2$ which
appears in the definition of O.D.E. equivalence. We will show that
$G_1$ and $G_2'$ are equal. If we take the phase space $P$ for the
cells to be $\mathbb{R}$ and consider linear functions of the form $F(x,
y_1, \dots, y_r) = a \, x + b (y_1 + \dots + y_r)$, then we see that
for any choice of $a_1, b_1$ there must exist $a_2, b_2$, and for
any choice of $a_2, b_2$ there must exist $a_1, b_1$, such that
\begin{equation*}
(a_1 \mathrm{Id} + b_1 A) \, x= (a_2 \mathrm{Id} + b_2 B) \, x
\end{equation*}
where $A$ is the adjacency matrix associated to $G_1$, $B$ is the
adjacency matrix associated to $G_2'$, and $x = ( x_1, \dots, x_n)^t
\in \mathbb{R}^n$. Since this holds for all $x \in \mathbb{R}^n$ we must have
\begin{equation}\label{eq:Matrix}
a_1 \mathrm{Id} + b_1 A= a_2 \mathrm{Id} + b_2 B.
\end{equation}
This matrix condition can be reduced to a system of linear equations
of two types:
\begin{eqnarray}
a_1 + b_1 A_{ii} &=a_2 + b_2 B_{ii} & \quad 1 \leq i \leq n\label{eq:Diagonal} \\
\qquad b_1 A_{ij} &= b_2 B_{ij}& \quad 1\leq i,j \leq n, i\neq j \label{eq:OffDiagonal}
\end{eqnarray}
Since both $A$ and $B$ have a zero on the diagonal they must have
some non-zero off diagonal entries in order to have the required row
sums. Now by \eqref{eq:OffDiagonal} we see that $b_1$ and $b_2$ must
have the same sign and that $A_{ij} \neq 0$ if and only if $B_{i j}
\neq 0$.
Since both $A$ and $B$ have at least one zero entry on the diagonal,
either there must be an $1 \leq i \leq n$ such that $A_{ii} =
B_{ii}=0$ or there must exist $i\neq j$ such that $A_{ii}=0$ but
$B_{ii} >0$ and $A_{jj} >0$ but $B_{jj}=0$. If we assume that there
exists $1 \leq i ,j \leq n$ with $i\neq j$ such that $A_{ii}=0$ but
$B_{ii} >0$ and $A_{jj} >0$ but $B_{jj}=0$, then we obtain
\begin{eqnarray}
\label{eq:1}
\qquad \quad a_1 &=& a_2 + b_2 B_{ii}\\
a_1 + b_1 A_{jj} &=& a_2
\end{eqnarray}
from which we immediately get $-b_1 A_{jj} = b_2 B_{ii}$ which
contradicts our earlier observation that $b_1$ and $b_2$ must have
the same sign. Thus there exists an $1 \leq i \leq n$ such that
$A_{ii} = B_{ii}=0$ and we can obtain from \eqref{eq:Diagonal} that
$a_1 = a_2$.
Thus we must have
\begin{equation*}
b_1 A_{i j} = b_2 B_{ij}
\end{equation*}
for all $1 \leq i,j \leq n$. Now $b_2$ divides $b_1 A_{ij}$ for all
$i,j$. Since the greatest common divisor of the entries of $A$ is 1
we must have $b_2$ divides $b_1$. Similarly $b_1$ divides $b_2
B_{ij}$ for all $i,j$. Since the greatest common divisor of the
entries of $B$ is 1, we must have $b_1$ divides $b_2$. Since $b_1$
and $b_2$ have the same sign, we must have $b_1 = b_2$.
Finally we are able to conclude that $A = B$ so $G_1$ is equal to
$G_2'$ as claimed.
\end{proof}
\section{Examples}
First we show how Figure 1 and Figure 2 of \cite{Minimal} are related
using our network operations.
\begin{figure}[h]
\centering
\begin{tabular}{ccc}
\includegraphics[scale=.3]{Network3.pdf}
& \includegraphics[scale=.3]{Network2.pdf}&
\includegraphics[scale=.3]{Network1.pdf}\\
(1)&(2)&(3)
\end{tabular}
\caption{Transforming Figure 1 to Figure 2 of \protect \cite{Minimal} using
network operations. Edge labels represent edge multiplicities. }
\label{fig:Transforming}
\end{figure}
Referring to our Figure \ref{fig:Transforming} notice that network (1)
satisfies our criterion for being a minimal network. If we split each
edge of network (1) into 3 edges then we obtain network (2), which is
O.D.E. equivalent to network (1). If we now adjoin 2 loops to each
vertex of network (2), then we obtain network (3), which is O.D.E. equivalent to
network (2) and hence O.D.E. equivalent to network (1).
Next we apply the results of the previous section to the connected 3
cell degree 2 networks examined in \cite{Leite}. They note
that up to permutation there are 38 connected 3 cell degree 2 networks
but that 8 of them are O.D.E. equivalent to lower degree
networks. Each of these 8 is obtained from one of the 4 minimal
connected 3 cell degree 1 networks by either adjoining a loop to
every cell or by doubling all the edges, see Figure \ref{fig:Example}.
\begin{figure}[h]
\centering
\includegraphics[scale = .7]{8Reducible32Networks.pdf}
\caption{The minimal connected 3 cell degree 1 networks and their
associated connected 3 cell degree 2 networks}
\label{fig:Example}
\end{figure}
\section{Enumeration}
\label{sec:Enumerate}
We begin by outlining the work of Aldosray and Stewart in enumerating
homogeneous coupled cell networks. They use the counting result known
as Burnside's Lemma to enumerate all identical edge homogeneous
coupled cell networks with $n$ cells and degree $r$ counted up to
isomorphism.
To be explicit let us take $V= \{1, \dots, n\}$. Let us denote the set
of all multigraphs on $V$ with constant in-degree $r$ by
$\Omega_{n,r}$. The group of bijections on $V$ is the symmetric group
on $n$ elements, denoted $\mathbf{S}_n$. Each such bijection induces a
map on $\Omega_{n,r}$. Thus we have a group action of $\mathbf{S}_n$
on $\Omega_{n,r}$. Two networks are related by $\mathbf{S}_n$ if and
only if they are isomorphic networks. Since we are counting the
networks up to isomorphism what we actually want to count is the
number of distinct $\mathbf{S}_n$ orbits in $\Omega_{n,r}$. Burnside's
Lemma is a tool for counting the number of orbits of a group action;
it states
\begin{equation*}
|\mathrm{Orb}_{\Omega_{n,r}}(\mathbf{S}_n)| = \frac{1}{|\mathbf{S}_n|} \sum_{g
\in \mathbf{S}_n} |\mathrm{Fix}_{\Omega_{n,r}} (g)|.
\end{equation*}
where $\mathrm{Fix}_{\Omega_{n,r}} (g) = \{ \omega \in \Omega_{n,r} :
g \cdot \omega = \omega \}$. If $g$ and $h$ are conjugate elements of
$\mathbf{S}_n$ then $|\mathrm{Fix}_{\Omega_{n,r}} (g)| =
|\mathrm{Fix}_{\Omega_{n,r}} (h)|$ and consequently we may sum over
conjugacy classes rather than individual elements of $\mathbf{S}_n$. Suppose
that $C_1 \, \dots, C_m$ are the conjugacy classes in $\mathbf{S}_n$. Let
$g_i$ be some representative of the conjugacy class $C_i$. We may
write our sum as
\begin{equation}\label{eq:ModBurnside}
|\mathrm{Orb}_{\Omega_{n,r}}(\mathbf{S}_n)| = \frac{1}{|\mathbf{S}_n|} \sum_{i
=1}^m |C_i| |\mathrm{Fix}_{\Omega_{n,r}} (g_i)|.
\end{equation}
There is a bijection between conjugacy classes of $\mathbf{S}_n$ and
partitions of the integer $n$. Following \cite{StewartCounting} we
will denote a partition of $n$
\begin{equation*}
\alpha_1 \cdot 1 + \alpha_2 \cdot 2 +\cdots +\alpha_n \cdot n = n
\end{equation*}
by $[1^{\alpha_1}2^{\alpha_2}\dots n^{\alpha_n}]$. The multiplicative
form of this notation is perhaps unfortunate but should not cause
confusion. The strength of this notation becomes apparent when we
agree that if $\alpha_i =0$ then the $i^{\alpha_i}$ term in the
expression may be omitted. Using this notation the 7 partitions of
$n=5$ may be expressed as follows:
\begin{equation*}
\begin{array}{l@{\hspace{.5 cm}}l@{\hspace{1 cm}}l@{\hspace{.5 cm}}l}
5 \cdot 1 & [1^5] & 1 \cdot 1 + 2 \cdot 2&[1^12^2]\\
3 \cdot 1 + 1 \cdot 2&[1^32^1]&1 \cdot 2 + 1 \cdot 3&[2^13^1]\\
2 \cdot 1 + 1 \cdot 3&[1^23^1]& 1 \cdot 5&[5^1] \\
1 \cdot 1 + 1 \cdot 4&[1^14^1]
\end{array}
\end{equation*}
The set of all partitions of $n$ will be denoted by $\Pi_n$. An
element of $\mathbf{S}_5$ can be associated to each $\rho \in \Pi_5$ as follows:
\begin{equation*}
\begin{array}{l@{\hspace{.5 cm}}l@{\hspace{1 cm}}l@{\hspace{.5 cm}}l}
{}[1^5] &(1)(2)(3)(4)(5)&[1^12^2] &(1)(2\,3)(4\,5)\\
{} [1^32^1] &(1)(2)(3)(4\,5)&[2^13^1] &(1\,2)(3\,4\,5)\\
{}[1^23^1]&(1)(2)(3\,4\,5)&[5^1]& (1\,2\,3\,4\,5)\\
{}[1^14^1]&(1)(2\,3\,4\,5)
\end{array}
\end{equation*}
Every permutation in $\mathbf{S}_5$ is conjugate to one of the permutations
that correspond to a partition of 5.
Every permutation $\sigma \in \mathbf{S}_n$ may be written as a product of
disjoint cycles in a fashion that is unique up to the order of the
cycles. The lengths of these cycles form a partition of $n$ called the
\emph{cycle type} of the permutation $\sigma$. The permutation
corresponding to a given cycle type is called the \emph{normal form}
of the cycle type. Every permutation is conjugate to the normal form
of its cycle type.
Looking at the formula
\eqref{eq:ModBurnside} we see that it would be advantageous to know the
size of the conjugacy class associated to a given partition of
$n$. The size of the conjugacy class corresponding to
$[1^{\alpha_1}2^{\alpha_2}\cdots n^{\alpha_n}]$ is
\begin{equation}
\label{eq:3}
\frac{n!} {1^{\alpha_1}2^{\alpha_2}\cdots
n^{\alpha_n} \alpha_1! \alpha_2! \dots \alpha_n!}.
\end{equation}
If we observe that the partition determines the pattern of parentheses
\begin{equation*}
\begin{array}{ll}
[1^22^23^1] & (\rule{10 pt}{.5 pt} )(\rule{10 pt}{.5 pt} ) (\rule{10 pt}{.5 pt} \,
\rule{10 pt}{.5 pt} ) (\rule{10 pt}{.5 pt}\,
\rule{10 pt}{.5 pt} ) (\rule{10 pt}{.5 pt} \,\rule{10 pt}{.5 pt}\,
\rule{10 pt}{.5 pt} )
\end{array}
\end{equation*}
then $n!$ is the number of ways of writing $1, \dots, n$ in the
blanks. Observing that we can permute each cycle cyclically, that is
\begin{equation*}
( 1 2 3 ) \quad (2 3 1) \quad (3 1 2)
\end{equation*}
are all the same 3-cycle, we must factor out the
$1^{\alpha_1}2^{\alpha_2}\cdots n^{\alpha_n} $ possible ways of
expressing all the cycles. Finally we
observe that we may permute cycles of the same length freely, so we
must factor out the $\alpha_1! \alpha_2! \dots \alpha_n!$ possible
orderings of the cycles.
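The class-size formula \eqref{eq:3} is easy to check numerically, since the sizes over all partitions of $n$ must sum to $n!$. A sketch (our notation) using the seven partitions of 5 listed above:

```python
from math import factorial

def class_size(n, alpha):
    # alpha maps cycle length k to its multiplicity alpha_k (a partition of n);
    # |C| = n! / (prod_k k^{alpha_k} alpha_k!)
    denom = 1
    for k, ak in alpha.items():
        denom *= k**ak * factorial(ak)
    return factorial(n) // denom

# the 7 partitions of 5, written as {k: alpha_k}
parts5 = [{1: 5}, {1: 1, 2: 2}, {1: 3, 2: 1}, {2: 1, 3: 1},
          {1: 2, 3: 1}, {5: 1}, {1: 1, 4: 1}]
```

For instance, `class_size(5, {1: 1, 2: 2})` returns 15, the number of permutations of cycle type $[1^12^2]$, and the seven class sizes sum to $5! = 120$.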
The main difficulty in enumerating the orbits of $\mathbf{S}_n$ lies in
determining the size of the fixed point set
$\mathrm{Fix}_{\Omega_{n,r}}(g_i)$. We will give the formula for this
here and refer the reader to the details in \cite{StewartCounting}.
\textbf{Definition}: Given $\rho \in \Pi_n$ and $s \in \{1, \dots,
n\}$ we may define
\begin{equation*}
\Phi_{s, \rho} (z) = \prod_{k=1}^n (1- z^{\frac{k}{h}})^{-\alpha_k^\rho h}
\end{equation*}
where $h= \gcd (s,k)$.
Clearly $\Phi_{s,\rho}(z)$ is analytic about $0$ and hence we may
write
\begin{equation*}
\Phi_{s, \rho} (z) = \sum_{r=0}^\infty \phi_r(s,\rho) z^r.
\end{equation*}
\begin{thm}[Theorem 8.3 \cite{StewartCounting}]
Let $n, r\in \mathbb{N}\setminus \{0\}$. Let $H_{n,r}$ denote the number of
$n$ cell degree $r$ networks counted up to isomorphism. $H_{n,r}$
is given by
\begin{equation*}
H_{n,r} = \frac{1}{n!} \sum_{\rho \in \Pi_n} \frac{n!} {1^{\alpha_1}2^{\alpha_2}\cdots
n^{\alpha_n} \alpha_1! \alpha_2! \dots \alpha_n!} \prod_{k=1}^n
\phi_r(k,\rho)^{\alpha_k^\rho} .
\end{equation*}
\end{thm}
We use this theorem to generate Table \ref{tab:Hnr}.
\begin{table}[h]
\centering
\begin{tabular}{rr|rrrrrr}
&&\multicolumn{6}{c}{$r$}\\
&& 1 & 2 & 3 & 4 & 5 & 6 \\ \hline
&1 & 1 & 1 & 1 & 1 & 1 & 1 \\
&2 & 3 & 6 & 10 & 15 & 21 & 28 \\
$n$ &3 & 7 & 44 & 180 & 590 & 1582 & 3724 \\
&4 & 19 & 475 & 6915 & 63420 & 412230 & 2080827 \\
&5 & 47 & 6874 & 444722 & 14072268 & 265076184 & 3405665412 \\
&6 & 130 & 126750 & 43242604 & 5569677210 & 355906501686 & 13508534834704
\end{tabular}
\caption{The number of $n$ cell degree $r$ networks counted up to
isomorphism, $H_{n,r}$. }
\label{tab:Hnr}
\end{table}
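Theorem 8.3 can be verified directly. The sketch below (function names are ours) builds the coefficients $\phi_r(s,\rho)$ by power-series multiplication, using $(1-z^m)^{-c} = \sum_j \binom{c-1+j}{j} z^{mj}$, and reproduces entries of Table \ref{tab:Hnr} with exact rational arithmetic:

```python
from math import gcd, factorial, comb
from fractions import Fraction

def partitions(n, max_part=None):
    # yield partitions of n as dicts {part k: multiplicity alpha_k}
    if max_part is None:
        max_part = n
    if n == 0:
        yield {}
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            p = dict(rest)
            p[k] = p.get(k, 0) + 1
            yield p

def phi(s, rho, r):
    # r-th Taylor coefficient of Phi_{s,rho}(z) = prod_k (1 - z^{k/h})^{-alpha_k h}, h = gcd(s, k)
    series = [0] * (r + 1)
    series[0] = 1
    for k, ak in rho.items():
        h = gcd(s, k)
        m, c = k // h, ak * h
        factor = [0] * (r + 1)
        for j in range(r // m + 1):
            factor[m * j] = comb(c - 1 + j, j)
        series = [sum(series[i] * factor[d - i] for i in range(d + 1))
                  for d in range(r + 1)]
    return series[r]

def H(n, r):
    # |C_rho| / n! = 1 / (prod_k k^{alpha_k} alpha_k!)
    total = Fraction(0)
    for rho in partitions(n):
        denom = 1
        prod = 1
        for k, ak in rho.items():
            denom *= k**ak * factorial(ak)
            prod *= phi(k, rho, r) ** ak
        total += Fraction(prod, denom)
    return int(total)
```

For example, `H(3, 2)` returns 44 and `H(4, 2)` returns 475, matching the table.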
This count however includes disconnected coupled cell networks. From
the perspective of dynamical systems we are interested only in the
connected identical edge coupled cell networks. A disconnected system
can be decomposed into a number of connected systems. Thus a
disconnected $n$ cell network corresponds to a partition of $n$ with
$\alpha_n = 0$ i.e. any partition of $n$ except $[n^1]$.
If we denote the number of connected $n$ cell degree $r$ networks by
$K_{n,r}$ then we may enumerate the number of disconnected coupled
cell networks as follows
\begin{equation*}
\sum_{\stackrel{\rho \in \Pi_n}{ \alpha_n^\rho = 0}} \prod_{m=1}^{ n-1}
{K_{m,r} +\alpha_m^\rho -1 \choose \alpha_m^\rho}
\end{equation*}
where
\begin{equation*}
{K_{m,r} +\alpha_m^\rho -1 \choose \alpha_m^\rho}
\end{equation*}
is the number of ways of choosing $\alpha_m^\rho$ networks from the
$K_{m,r}$ distinct connected $m$ cell networks with replacement and
where order does not matter. From this we obtain
\begin{thm}[Theorem 10.1 \cite{StewartCounting}]
Let $n, r\in \mathbb{N}\setminus \{0\}$. Let $K_{n,r}$ denote the number
of connected $n$ cell degree $r$ networks counted up to isomorphism. We have $K_{1,r}
= H_{1,r} =1$ and for $n \geq 2$
\begin{equation*}
K_{n,r} = H_{n,r} - \sum_{\stackrel{\rho \in \Pi_n}{\alpha_n^\rho =
0} } \prod_{m=1}^{ n-1}
{K_{m,r} +\alpha_m^\rho -1 \choose \alpha_m^\rho}.
\end{equation*}
\end{thm}
We use this theorem to generate Table \ref{tab:Knr}.
\begin{table}[h]
\centering
\begin{tabular}{rr|rrrrrr}
&&\multicolumn{6}{c}{r}\\
&& 1 & 2 & 3 & 4 & 5 & 6 \\\hline
&1 & 1 & 1 & 1 & 1 & 1 & 1 \\
&2 & 2 & 5 & 9 & 14 & 20 & 27 \\
n&3 & 4 & 38 & 170 & 575 & 1561 & 3696 \\
&4 & 9 & 416 & 6690 & 62725 & 410438 & 2076725 \\
&5 & 20 & 6209 & 436277 & 14000798 & 264632734 & 3403484793 \\
&6 & 51 & 117020 & 42722972 & 5554560632 & 355631996061 & 13505066262007
\end{tabular}
\caption{The number of connected $n$ cell degree $r$ networks
counted up to isomorphism, $K_{n,r}$}
\label{tab:Knr}
\end{table}
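Theorem 10.1 can likewise be checked by feeding it a column of Table \ref{tab:Hnr}. The sketch below (our notation) computes the multiset products over partitions with $\alpha_n^\rho = 0$:

```python
from math import comb

def partitions(n, max_part=None):
    # yield partitions of n as dicts {part m: multiplicity alpha_m}
    if max_part is None:
        max_part = n
    if n == 0:
        yield {}
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            p = dict(rest)
            p[k] = p.get(k, 0) + 1
            yield p

def connected_counts(H_col):
    # H_col[n] = H_{n,r} for a fixed degree r; returns {n: K_{n,r}}
    K = {}
    for n in sorted(H_col):
        disconnected = 0
        for rho in partitions(n):
            if rho.get(n, 0):          # skip [n^1]: those networks are connected
                continue
            prod = 1
            for m, am in rho.items():
                # multisets of connected m-cell networks, chosen with replacement
                prod *= comb(K[m] + am - 1, am)
            disconnected += prod
        K[n] = H_col[n] - disconnected
    return K
```

Applied to the $r=2$ column of Table \ref{tab:Hnr}, this returns the $r=2$ column of Table \ref{tab:Knr}.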
Now we will use the work of Section 3 to give a recursive formula for
enumerating the connected minimal coupled $n$ cell degree $r$
networks.
\begin{thm}
Let $M_{n,r}$ denote the number of minimal connected $n$ cell degree
$r$ networks. For $n \geq 2$ we have $M_{n,1}= K_{n,1}$ and
\begin{equation*}
M_{n,r} = K_{n,r} - \sum_{s=1}^{r-1} \biggl\lfloor
\frac{r}{s} \biggr\rfloor M_{n,s}.
\end{equation*}
For $n=1$ we have $M_{1,r} = 0$ for all $r$.
\end{thm}
\textbf{Proof:} If a connected $n$ cell degree $r$ network is not
minimal then it is O.D.E. equivalent to a minimal $n$ cell degree $s$
network where $s < r$. Given a minimal $n$ cell degree $s$ network $G$
the question thus becomes how many non-isomorphic $n$ cell degree $r$
networks can be obtained that are O.D.E. equivalent to $G$. We have
seen that any network $G'$ O.D.E. equivalent to a minimal network $G$
may be obtained from $G$ by a combination of adjoining loops and
splitting edges (and an isomorphism which we may ignore). Let $A$ be
the operation of adjoining a loop to every vertex and $T_k$ the operation of
$k$-splitting the edges. Clearly we have $T_k \circ T_l = T_{k
l}$. There is a commutation relation between $T_k$ and $A$, $T_k
\circ A = A^k \circ T_k$. Using this commutation relation we see that
any combination of adjoining loops and edge splitting can be reduced
to a single $k$-splitting for some $k \geq 1$ followed by adjoining
some number of loops. Given that $G$ has degree $s$ and $G'$ has
degree $r$ the possible choices of $k$ are constrained by $k s \leq
r$. Thus there are $\lfloor r/s \rfloor$ possible values of
$k$. We then adjoin sufficiently many loops to bring the degree to
$r$.
The number of connected minimal $n$ cell degree $r$ networks is thus
given by
\begin{equation*}
M_{n,r} = K_{n,r} - \sum_{s=1}^{r-1} \biggl\lfloor
\frac{r}{s} \biggr\rfloor M_{n,s}
\end{equation*}
with the initial condition that $M_{n,1}= K_{n,1}$ for $n\geq2$. $\hfill \opensquare$
Using this theorem we generate Table \ref{tab:Mnr}.
\begin{table}[h]
\centering
\begin{tabular}{rr|rrrrrr}
&&\multicolumn{6}{c}{$r$}\\
&& 1 & 2 & 3 & 4 & 5 & 6 \\ \hline
&1 & 0 & 0 & 0 & 0 & 0 & 0 \\
&2 & 2 & 1 & 2 & 2 & 4 & 2 \\
$n$&3 & 4 & 30 & 128 & 371 & 982 & 1973 \\
&4 & 9 & 398 & 6265 & 55628 & 347704 & 1659615 \\
&5 & 20 & 6169 & 430048 & 13558332 & 250631916 & 3138415822 \\
&6 & 51 & 116918 & 42605901 & 5511720691 & 350077435378 & 13149391543076
\end{tabular}
\caption{The number of minimal connected $n$ cell degree $r$ networks counted up to isomorphism, $M_{n,r}$. }
\label{tab:Mnr}
\end{table}
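The recursion for $M_{n,r}$ is a one-liner to check against a row of Table \ref{tab:Knr}; the sketch below (our notation) reproduces the $n=2$ and $n=3$ rows of Table \ref{tab:Mnr}:

```python
def minimal_counts(K_row):
    # K_row[r-1] = K_{n,r} for a fixed n >= 2 and r = 1, 2, ...;
    # M_{n,r} = K_{n,r} - sum_{s<r} floor(r/s) M_{n,s}, with M_{n,1} = K_{n,1}
    M = []
    for r, K in enumerate(K_row, start=1):
        M.append(K - sum((r // s) * M[s - 1] for s in range(1, r)))
    return M

M3 = minimal_counts([4, 38, 170, 575, 1561, 3696])   # n = 3 row of Table 2
```

Here `M3` comes out as `[4, 30, 128, 371, 982, 1973]`, the $n=3$ row of Table \ref{tab:Mnr}.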
It is interesting to note that the number of connected minimal $2$ cell degree
$r$ networks for $r\geq 2$ is given by $\phi(r)$ where $\phi$ is the
Euler totient function. The appearance of the Euler totient is
explained by the following network diagram:
\begin{figure}[h]
\centering
\includegraphics[scale = .7]{EulerTotientNetwork.pdf}
\caption{ A connected 2 cell degree $r$ network. Edge labels
represent edge multiplicities.}
\end{figure}
In order for a 2 cell network to be minimal at least one vertex must
have no loops. Without loss of generality we may suppose that vertex 2
has no loops. Thus vertex 2 must receive $r$ inputs from vertex 1. If
we let $k$, with $k \leq r$, denote the number of edges from vertex 2
to vertex 1 then we see that vertex 1 must have $r-k$ loops. If this
network is to be minimal then the three edge multiplicities, $r$, $k$,
and $r-k$, must be relatively prime. This occurs if and only if $r$
and $k$ are relatively prime. For a fixed $r$ the number of $1 \leq k
\leq r$ for which $r$ and $k$ are relatively prime is
$\phi(r)$. Provided that $r \geq 2$ we may exclude $k=0$ since then
$r-k=r$ and all edge multiplicities have divisor $r$ and hence the
network is not minimal.
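The counting argument can be checked directly: enumerating the admissible $k$ for fixed $r$ reproduces the $n=2$ row of Table \ref{tab:Mnr} for $r \geq 2$:

```python
from math import gcd

def minimal_two_cell(r):
    # vertex 2 loop-free: r edges 1 -> 2, k edges 2 -> 1, and r - k loops on vertex 1;
    # the network is minimal iff gcd(r, k, r - k) = 1, i.e. iff gcd(r, k) = 1,
    # so this count equals the Euler totient phi(r) for r >= 2
    return sum(1 for k in range(1, r + 1) if gcd(r, k) == 1)
```

For $r = 2, \dots, 6$ this gives $1, 2, 2, 4, 2$, in agreement with Table \ref{tab:Mnr}.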
If $r=1$ then there are in fact two minimal 2 cell degree 1
networks.
\begin{figure}[h]
\centering
\includegraphics[trim = 0in 0.8in 0in 0.8in]{Totient2.pdf}
\caption{The two minimal 2 cell degree 1 networks. }
\label{h}
\end{figure}
\bibliographystyle{plain}
\section{Introduction}
The Anti-de Sitter / Conformal Field Theory correspondence (AdS/CFT) has revealed important connections between
quantum gravity and gauge theory \cite{Maldacena:1997re,Witten:1998qj}. Even though AdS/CFT
provides a prescription for the holographic computation of correlation functions in a strongly coupled gauge theory with a gravity dual, in practice,
computing these correlation functions is quite difficult.
The conformal correlators are related to scattering amplitudes in AdS space. The latter are not defined in a standard fashion, as in Minkowski space, because AdS space does not admit asymptotic states which are needed for the standard definition of the $S$-matrix. Nevertheless, creation and annihilation operators can be defined in AdS space by changing the boundary conditions in the conformal boundary \cite{Penedones:2010ue}. The resulting scattering amplitudes in AdS space are then related to CFT correlation functions.
AdS scattering amplitudes are derived from Witten diagrams which are difficult to calculate in coordinate space \cite{Freedman:1998bj,Liu:1998ty,Freedman:1998tz,D'Hoker:1999ni,D'Hoker:1999pj}. There have been some interesting developments in the computation of such diagrams in momentum space \cite{Raju:2010by,Raju:2012zr,
Raju:2012zs,Raju:2011mp}. Working in momentum space entails taking a Fourier transform of the amplitude, which is well-suited for flat Minkowski space, but does not appear to be advantageous in AdS space. Another approach using Mellin representation of conformal correlation functions was proposed in \cite{Mack:2009mi, Mack:2009gy, Penedones:2010ue}. In more recent work \cite{Paulos:2011ie, Fitzpatrick:2011ia}, it was shown that CFT correlation functions factorize on poles in a Mellin representation, which suggests that Witten diagrams can be computed via a set of Feynman rules, as is the case with correlation functions of field theories on Minkowski space in the momentum (Fourier) representation.
In the case of scalar fields, by taking the Mellin transform, one trades coordinates for Mandelstam invariants of the scattering amplitude. This does not extend straightforwardly to vector or general tensor fields because of the index structure. After taking a Mellin transform, one is still left with expressions which involve coordinates, as well as Mandelstam invariants. The index structure complicates calculations which involve integrals over coordinates in AdS space. Our aim is to extend the results of \cite{Paulos:2011ie} and provide a general procedure for the calculation of Witten diagrams involving fields of arbitrary spin. We shall show that diagrams of vector fields can be written in terms of the same Mellin functions as scalar field diagrams. Our method is an iterative procedure that calculates a diagram of a certain order by sewing together lower-order diagrams. The index structure is dealt with by taking advantage of the conformal properties of the correlation functions. Extending our method to higher-spin fields is straightforward and will be reported on in the near future.
The outline of our discussion is the following. In section \ref{secI}, we review the basic ingredients in the embedding formalism which seems to be the most natural framework for Mellin
representation. In sections \ref{secII}, \ref{secIII}, and \ref{secIV}, we calculate explicitly three-, four-, and five-point amplitudes, respectively, for scalar fields with a cubic interaction as well as vector fields.
In section \ref{secV}, we discuss the calculation of a general $N$-point diagram from lower-order constituents.
Finally, in section \ref{secVI}, we summarize our conclusions.
Appendix \ref{Xintegral} contains all necessary integrals over AdS space together with their derivation.
\section{Basics}
\label{secI}
In this section, we review the basic ingredients to be used in our discussion. We adopt the notation used in \cite{Paulos:2011ie}, where further discussion can be found.
It is natural to use the embedding space formalism, which goes back to Dirac \cite{Dirac:1936fq} (also see \cite{Weinberg:2010fx}), as it provides a convenient framework for the computation of Witten diagrams. The embedding space is the $(d+2)$-dimensional flat Minkowski space $\mathbb{M}_{d+2}$, with metric given by
\begin{equation}
\label{eqn1}
ds^2 = dX_AdX^A = - (dX^0)^2 + (dX^1)^2 + \dots + (dX^d)^2 + (dX^{d+1})^2~.
\end{equation}
The Euclidean AdS$_{d+1}$ space is defined as the hyperboloid $X^2=-R^2$,
where $X^0>0$ and $X^A \in \mathbb{M}_{d+2}$. Henceforth, we set $R=1$.
In this formalism, it is convenient to think of the conformal boundary of AdS as the space of null rays $P^A$ (with
$P^2=0$, and $P\sim \lambda P$). Then a correlation function of the dual CFT of weight $\Delta$ scales as $\mathcal{F}_\Delta (\lambda P) = \lambda^{-\Delta} \mathcal{F}_\Delta (P)$.
We will be interested in $n$-point correlation functions of the form $\mathcal{F} (P_1, P_2, \dots , P_n)$, and frequently use the notation
\begin{equation}
\label{eqn3}
P_{ij}=-2P_i \cdot P_j \ .
\end{equation}
We will use $X^A$, $Y^A$, etc., for points in the bulk, and $P^A$, $Q^A$, etc., for points on the boundary of AdS space.
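As a concrete numerical illustration of this notation (a sketch, assuming the standard light-cone-style parametrization $P = \big(\tfrac{1+x^2}{2}, x^\mu, \tfrac{1-x^2}{2}\big)$ of a null ray, which is not spelled out in the text), one can check that $P^2 = 0$ and that $P_{ij} = -2P_i\cdot P_j$ reduces to the flat boundary distance $(x_i - x_j)^2$:

```python
import numpy as np

d = 4                                          # illustrative boundary dimension
eta = np.diag([-1.0] + [1.0] * (d + 1))        # metric of M_{d+2}: (-,+,...,+)

def boundary_point(x):
    """Null ray P^A attached to a boundary point x in R^d:
    P = ((1+x^2)/2, x, (1-x^2)/2), which satisfies P^2 = 0."""
    x2 = float(np.dot(x, x))
    return np.concatenate(([(1 + x2) / 2], x, [(1 - x2) / 2]))

def dot(U, V):
    return U @ eta @ V

x1 = np.array([0.3, -1.2, 0.5, 2.0])           # sample boundary points
x2 = np.array([1.1, 0.4, -0.7, 0.2])
P1, P2 = boundary_point(x1), boundary_point(x2)

assert abs(dot(P1, P1)) < 1e-12                # P^2 = 0: P lies on the light cone
P12 = -2 * dot(P1, P2)                         # the invariant P_{ij} of the text
assert np.isclose(P12, np.dot(x1 - x2, x1 - x2))
```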
The bulk to boundary propagator for a scalar field is given by
\begin{equation}
\label{eqn4}
E (X,P)=\frac{\Gamma(\Delta)}{2 \pi^{d/2} \Gamma(1+\Delta-d/2) }
(-2 P \cdot X )^{-\Delta} ~.
\end{equation}
The bulk to bulk propagator for a scalar field can be written as an integral over the boundary point $Q$ \cite{Penedones:2010ue},
\begin{equation}
\label{eqn6}
G(X,Y) = \int_{-i\infty}^{+i\infty} \frac{dc}{2\pi i} f_{\delta,0} (c) \Gamma (d/2+c) \Gamma (d/2-c)\int_{\partial AdS} dQ (-2X\cdot Q)^{-d/2-c} (-2Y\cdot Q)^{-d/2+c}~, \end{equation}
where
\begin{equation}
\label{eqn7}
f_{\delta,0} (c) = \frac{c \sin \pi c}{2\pi^{d+1} [ (\delta -d/2 )^2 -c^2] }\ . \end{equation}
These expressions for the propagators are crucial for the factorization of amplitudes into lower-point diagrams. We are interested in calculating the $N$-point scalar amplitude
\begin{equation} A^{(N, s)}= \langle \mathcal{O}_{\Delta_1}(P_1) \mathcal{O}_{\Delta_2}(P_2) \cdots \mathcal{O}_{\Delta_N}(P_N) \rangle~, \end{equation}
where $\mathcal{O}_\Delta$ is a conformal operator of scaling dimension $\Delta$.
Similarly, the bulk to boundary propagator for a vector field can be written as \cite{Paulos:2011ie},
\begin{equation}
E_{MA} (X,P) = D_{MA} (\Delta, P) E(X,P) = \frac{\Gamma(\Delta)}{2 \pi^{d/2} \Gamma(1+\Delta-d/2) } J_{MA}(-2 P \cdot X )^{-\Delta}
\end{equation}
where,
\begin{equation} D_{MA} (\Delta , P) = \frac{\Delta -1}{\Delta} \eta_{M A} + \frac{1}{\Delta} \frac{\partial}{\partial P^{M}} P_{A} \end{equation}
and $J_{MA} = \eta_{MA} - \frac{P_AX_M}{P\cdot X}$
has the property, $P^{M} J_{M A}= J_{M A} X^{A}=0$.
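The transversality properties $P^{M} J_{MA} = J_{MA} X^{A} = 0$ hold identically for any $P$ and $X$ with $P\cdot X \neq 0$; a minimal numerical sketch (the dimension and sample vectors are arbitrary choices made here):

```python
import numpy as np

d = 3                                      # illustrative boundary dimension
eta = np.diag([-1.0] + [1.0] * (d + 1))    # metric of M_{d+2}

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # a generic bulk direction X^A
P = np.array([2.0, 3.0, 4.0, 5.0, 6.0])    # a generic boundary direction P^A

P_low, X_low = eta @ P, eta @ X            # lowered indices P_A, X_M
PX = P @ eta @ X                           # P . X  (nonzero for this choice)

# J_{MA} = eta_{MA} - P_A X_M / (P . X)
J = eta - np.outer(X_low, P_low) / PX

# transversality: P^M J_{MA} = 0 and J_{MA} X^A = 0
assert np.allclose(P @ J, 0)
assert np.allclose(J @ X, 0)
```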
$D_{MA}$ is an extremely convenient operator as it organizes and simplifies the index structure of vector amplitudes, allowing us to relate them to amplitudes of scalar fields.
In this regard, a useful identity is
\begin{equation}\label{eq42i4} D_{MA} (\Delta , P)\frac{\partial}{\partial P_{A}} \mathcal{F}_{\Delta-1} (P) = 0~, \end{equation}
where $\mathcal{F}_{\Delta-1}$ is a function of weight $\Delta -1$, i.e., $\mathcal{F}_{\Delta -1} (\lambda P) = \lambda^{-(\Delta -1)} \mathcal{F}_{\Delta -1}(P)$, and therefore $P\cdot \frac{\partial}{\partial P} \mathcal{F}_{\Delta -1} = - (\Delta -1) \mathcal{F}_{\Delta -1}$.
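The Euler relation $P\cdot \partial_P \mathcal{F}_{\Delta-1} = -(\Delta-1)\mathcal{F}_{\Delta-1}$ can be verified with a finite-difference sketch; the weight, metric signature, and sample vectors below are illustrative choices, not taken from the text:

```python
import numpy as np

Delta = 3.7                                    # an illustrative weight
eta = np.diag([-1.0] + [1.0] * 5)              # metric of M_{d+2} with d = 4
Q = np.array([0.5, 1.0, -0.3, 0.2, 0.7, -1.1])
P = np.array([1.2, 0.1, 0.4, -0.5, 0.3, 0.8])  # chosen so that -2 P.Q > 0

def F(P):
    """A sample function of weight Delta - 1: F(lam P) = lam^{-(Delta-1)} F(P)."""
    return (-2.0 * P @ eta @ Q) ** (-(Delta - 1.0))

# central finite-difference gradient dF/dP^M
eps = 1e-6
grad = np.array([(F(P + eps * e) - F(P - eps * e)) / (2 * eps)
                 for e in np.eye(6)])

# Euler's relation: P . dF/dP = -(Delta - 1) F
assert np.isclose(P @ grad, -(Delta - 1) * F(P), rtol=1e-5)
```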
The bulk to bulk propagator for a vector field can be written as an integral over the boundary point $Q$, as in the scalar case,
\begin{eqnarray}
\label{bbp}
G_{AB} (X,Y) &=& \int_{-i\infty}^{+i\infty} \frac{dc}{2\pi i} f_{\delta,1} (c) \Gamma (d/2+c) \Gamma (d/2-c) \nonumber\\ & &
\times \int_{\partial AdS} dQ\eta^{MN}D_{M A} (d/2+c, Q)D_{N B} (d/2-c,Q) (-2X\cdot Q)^{-d/2-c} (-2Y\cdot Q)^{-d/2+c}~, \end{eqnarray}
where,
\begin{equation} f_{\delta,1} (c) = f_{\delta,0} (c) \frac{\frac{d^2}{4}-c^2}{ (\delta - \frac{d}{2} )^2 -c^2}~. \end{equation}
We are interested in calculating the $N$-point vector amplitude
\begin{equation} A^{(N, v)M_1 \dots M_N, a_1\dots a_N}= \langle \mathcal{J}^{M_1,a_1}(P_1) \mathcal{J}^{M_2,a_2}(P_2) \cdots \mathcal{J}^{M_N,a_N} (P_N) \rangle~, \end{equation}
where $a_i$ ($i=1,\dots, N$) are gauge group indices. It should be pointed out that all current operators have dimension $\Delta = d-1$. However, we need to calculate off-shell amplitudes as well, because we are interested in sewing diagrams together in order to form higher-point amplitudes. The two legs to be sewn must be off shell, and have dimensions $\frac{d}{2} \pm c$, on account of \eqref{bbp}. Therefore, we will generally be working with arbitrary dimensions of the external legs of an $N$-point vector amplitude.
The Mellin transform of the above $N$-point amplitudes will be given in terms of Mandelstam invariants $\delta_{ij}$ ($i,j = 1,\dots, N$). They are defined with the properties
\begin{equation}\label{eqMand} \delta_{ii} = 0 \ \ , \ \ \ \ \delta_{ij} = \delta_{ji} \ \ , \ \ \ \ \sum_{j=1}^N \delta_{ij} = \Delta_i ~.\end{equation}
\section{Three-point Amplitudes}
\label{secII}
Having introduced all necessary ingredients, we now proceed to the explicit calculation of amplitudes, starting with the simplest amplitude.
\subsection{Scalar amplitudes}
The three-point amplitude for scalar fields of scaling dimensions $\Delta_i$ interacting via a cubic interaction of coupling constant $g$ is (Fig.\ \ref{3pt})
\begin{equation}\label{eqn8} A^{(3,s)}(\Delta_1,P_1;\Delta_2,P_2;\Delta_3,P_3) =\frac{g}{\prod_i 2\pi^{d/2} \Gamma (\Delta_i +1 -d/2)} \mathcal{A}_3 (\Delta_1, P_1; \Delta_2, P_2; \Delta_3, P_3) \end{equation}
where
\begin{equation} \mathcal{A}_3 ( \{\Delta_i , P_i\}) \equiv \int_{ AdS} dX \prod_{i=1}^3 \Gamma (\Delta_i) (-2P_i\cdot X)^{-\Delta_i}~.\end{equation}
\tikzset{
particle/.style={thick,draw=black},
gluon/.style={decorate, draw=black,
decoration={coil,aspect=0}}
}
\begin{center}
\begin{tikzpicture}[node distance=1.5cm and 1.5cm]
\coordinate[label={[xshift=-2pt]left:$P_1$}] (e1);
\coordinate[below right=of e1, ,label={[xshift=6pt]right:$X$}] (aux1);
\coordinate[above right=of aux1,label={[xshift=6pt]right:$P_2$}] (e2);
\coordinate[below=1/cos(45)*1.5cm of aux1,label={[xshift=6pt]below:$P_3$}] (aux2);
\draw[particle] (e1) -- (aux1);
\draw[particle] (aux1) -- (e2);
\draw[particle] (aux1) -- (aux2);
\draw [ultra thick] (aux1) circle [radius=sqrt(1.5cm^2+1.5cm^2)];
\end{tikzpicture}
\captionof{figure}{The three-point scalar amplitude \eqref{eqn8}.}
\label{3pt}
\end{center}
The integral over the bulk vector $X^A$ is of the form \eqref{eqA12}. Using \eqref{eqA16}, we obtain
\begin{equation}\label{eq1f} \mathcal{A}_{3}(\Delta_1,P_1;\Delta_2,P_2;\Delta_3,P_3) = \frac{\pi^{d/2}}{2} \mathcal{M}_3 \Gamma( \delta_{12})\Gamma (\delta_{23}) \Gamma (\delta_{13}) ( P_{12})^{-\delta_{12}} (P_{23})^{-\delta_{23}} (P_{13})^{-\delta_{13}} \end{equation}
where
\begin{equation} \mathcal{M}_3 = \Gamma \left( \frac{\Delta_{1}+\Delta_{2}+\Delta_{3} -d}{2} \right) \end{equation}
is the Mellin transform of the scalar three-point amplitude.
There are no remaining integrals, because the constraints \eqref{eqA11} completely fix the integration variables,
\begin{equation} \delta_{12} = \frac{\Delta_1+\Delta_2-\Delta_3}{2} \ , \ \
\delta_{23} = \frac{\Delta_2+\Delta_3-\Delta_1}{2} \ , \ \
\delta_{13} = \frac{\Delta_1+\Delta_3-\Delta_2}{2}
\end{equation}
which are the Mandelstam invariants \eqref{eqMand} for a three-point amplitude.
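For illustration, one can confirm that these values satisfy the constraints $\sum_{j\ne i} \delta_{ij} = \Delta_i$; the sample dimensions in the sketch below are arbitrary choices:

```python
from fractions import Fraction

# illustrative scaling dimensions
D1, D2, D3 = Fraction(3), Fraction(7, 2), Fraction(5, 2)

d12 = (D1 + D2 - D3) / 2
d23 = (D2 + D3 - D1) / 2
d13 = (D1 + D3 - D2) / 2

# the constraints sum_{j != i} delta_ij = Delta_i hold
assert d12 + d13 == D1
assert d12 + d23 == D2
assert d13 + d23 == D3
```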
The three-point scalar amplitude is
\begin{equation}\label{eq1fa} A^{(3,s)}(\Delta_1,P_1;\Delta_2,P_2;\Delta_3,P_3) = \frac{g\pi^{d/2}}{2 \prod_i 2\pi^{d/2} \Gamma (\Delta_i +1 -d/2)} \mathcal{M}_3 \prod_{i<j} \Gamma( \delta_{ij}) P_{ij}^{-\delta_{ij}}~. \end{equation}
\subsection{Vector amplitudes}
Similarly, a three-point vector amplitude is given by (Fig.\ \ref{3ptv2})
\begin{equation}\label{eq20} A^{(3,v)M_1M_2M_3,a_1a_2a_3} (\Delta_1,P_1;\Delta_2,P_2;\Delta_3,P_3) = f^{a_1a_2a_3}\prod_{i=1}^3D^{M_iA_i} (\Delta_i,P_i) \mathcal{A}_{A_1A_2A_3} \end{equation}
where,
\begin{equation}\label{eq21} \mathcal{A}_{A_1A_2A_3} = \int_{AdS} dX I_{A_1A_2A_3} \prod_{i=1}^3 (-2P_i\cdot X)^{-\Delta_i}~, \end{equation}
and the index structure is similar to a gauge theory three point vertex in flat space,
\begin{equation} I_{A_1A_2A_3} = \eta_{A_1A_2} \left( K_{1A_3} - K_{2A_3} \right) + \cdots + \cdots~, \end{equation}
with
\begin{equation} K_A = (-2P\cdot X)^{\Delta} \frac{\partial}{\partial X^A} (-2P\cdot X)^{-\Delta}=- \Delta \frac{P_A}{P\cdot X} \end{equation}
\tikzset{
particle/.style={decorate, draw=black,
decoration={coil,aspect=0.08,segment length=3pt,amplitude=3pt}}}
\begin{center}
\begin{tikzpicture}[thick, node distance=1.5cm and 1.5cm]
\coordinate[label={[xshift=-2pt]left:$P_1$}] (e1);
\coordinate[below right=of e1, ,label={[xshift=6pt]right:$X$}] (aux1);
\coordinate[above right=of aux1,label={[xshift=6pt]right:$P_2$}] (e2);
\coordinate[below=1/cos(45)*1.5cm of aux1,label={[xshift=6pt]below:$P_3$}] (aux2);
\draw[particle] (e1) -- (aux1);
\draw[particle] (aux1) -- (e2);
\draw[particle] (aux1) -- (aux2);
\draw [ultra thick] (aux1) circle [radius=sqrt(1.5cm^2+1.5cm^2)];
\end{tikzpicture}
\captionof{figure}{The three-point vector amplitude \eqref{eq20}.}
\label{3ptv2}
\end{center}
Thus, the three-point vector amplitude \eqref{eq21} is, explicitly,
\begin{equation}\label{eq37a} \mathcal{A}_{A_1A_2A_3} = -\int_{AdS} dX \left[ \eta_{A_1A_2} \Delta_1\frac{P_{1A_3}}{P_1\cdot X} + \eta_{A_2A_3} \left( \Delta_2\frac{P_{2A_1}}{P_2\cdot X} - \Delta_3\frac{P_{3A_1}}{P_3\cdot X} \right) - (1\leftrightarrow 2) \right] \prod_{i=1}^3 \Gamma (\Delta_i)(-2P_i\cdot X)^{-\Delta_i} ~. \end{equation}
As in the scalar case, the integral over the bulk vector $X^A$ is of the form \eqref{eqA12}. Using \eqref{eqA16}, we obtain
\begin{eqnarray}
\mathcal{A}_{A_1A_2A_3} &=& \eta_{A_1A_2} P_{1A_3} \mathcal{A}_{3} (\Delta_1+1,P_1;\Delta_2,P_2; \Delta_3, P_3) \nonumber\\
&& + \eta_{A_2A_3} \left[ P_{2A_1} \mathcal{A}_{3} (\Delta_1,P_1;\Delta_2+1,P_2; \Delta_3, P_3) - P_{3A_1} \mathcal{A}_{3} (\Delta_1, P_1; \Delta_2, P_2; \Delta_3+1, P_3) \right] \nonumber\\
&& - (1\leftrightarrow 2) \end{eqnarray}
Thus, the vector amplitude is written in terms of scalar amplitudes.
As in the scalar case (Eq.\ \eqref{eq1f}), we may write this in terms of a Mellin amplitude,
\begin{equation}\label{eq1fv} \mathcal{A}_{A_1A_2A_3} = \frac{\pi^{d/2}}{2}\mathcal{M}_{A_1A_2A_3} \prod_{i<j} \Gamma\left( \delta_{ij} +\frac{1}{2} \right) P_{ij}^{-\delta_{ij}+\frac{1}{2}} \end{equation}
where
\begin{equation} \mathcal{M}_{A_1A_2A_3} = \Gamma \left( \frac{\Delta_{1}+\Delta_{2}+\Delta_{3} -d+1}{2} \right) \left[ \mathcal{I} (1,2,3) + \mathcal{I} (2,3,1) + \mathcal{I} (3,1,2) \right] \end{equation}
and
\begin{equation} \mathcal{I} (1,2,3) =
\frac{\eta_{A_1A_2}}{P_{12}} \left( \frac{1}{ \delta_{23} -\frac{1}{2}} \frac{P_{1A_3} }{P_{13}} - \frac{1}{\delta_{13} -\frac{1}{2}} \frac{P_{2A_3}}{P_{23}} \right) \end{equation}
The above expressions are simplified if all legs are on shell. Setting $\Delta_1=\Delta_2=\Delta_3 = d-1$, we obtain
\begin{equation} \mathcal{A}^{\mathrm{(on~shell)}}_{A_1A_2A_3} = \pi^{d/2}\Gamma \left( d-2 \right) \left[ \eta_{A_1A_2} \left( P_{23}P_{1A_3} - P_{13} P_{2A_3} \right) + \cdots + \cdots \right] \prod_{i<j} \Gamma (d/2) P_{ij}^{-d/2}~.\end{equation}
In order to use this amplitude in a higher-point diagram, it is convenient to eliminate terms in which the coordinate of the leg to be sewn (the off-shell leg) carries a free index (i.e., appears outside a dot product). Without loss of generality, we choose the last leg; we will follow this practice throughout.
Thus, we wish to eliminate terms containing $P_3^{A}$. To this end, we will use the identity \eqref{eq42i4}.
Choosing $\Delta = \Delta_1$, $P^A = P_1^{A_1}$, and $\mathcal{F}_{\Delta_1 -1} (P_1) = \prod_{i<j} ( P_{ij})^{-\delta_{ij}+\frac{1}{2}} $, we obtain
\begin{equation} \left[ \left( \delta_{12} - \frac{1}{2} \right) \frac{P_2^{A_1}}{P_{12}} + \left( \delta_{13} - \frac{1}{2} \right) \frac{P_3^{A_1}}{P_{13}} \right] \prod_{i<j} ( P_{ij})^{-\delta_{ij}+\frac{1}{2}}= 0 \end{equation}
Therefore,
\begin{equation} \mathcal{I} (2,3,1) = \frac{2\eta_{A_2A_3}P_{2A_1} }{ (\delta_{13} -\frac{1}{2})P_{12}P_{23}} \end{equation}
up to terms which vanish upon acting with $D^{M_1A_1}$, and similarly for $\mathcal{I} (3,1,2)$.
There are more terms in the amplitude involving $P_3^{A}$, due to the action of $D^{M_3A_3}$ on the off-shell leg, which also need to be eliminated.
We have
\begin{equation}\label{eq37c} P_{3A_3}\mathcal{I} (1,2,3) + \dots + \dots = \frac{1}{(\delta_{13} -\frac{1}{2})P_{12}} \left( - \eta^{A_1A_2} + \frac{2P_2^{A_1}P_3^{A_2} }{ P_{23}} \right) - ( 1\leftrightarrow 2) \end{equation}
Using the identity \eqref{eq42i4} again, the second term on the right-hand side of \eqref{eq37c} is easily seen to be symmetric, and therefore vanishes. We arrive at an expression which is independent of $P_3$,
\begin{equation}\label{eq37cx} P_{3A_3}\mathcal{I} (1,2,3) + \dots + \dots = \frac{\Delta_1 -\Delta_2}{(\delta_{23} -\frac{1}{2})(\delta_{13} -\frac{1}{2})} \frac{ \eta^{A_1A_2}}{P_{12}} \end{equation}
Notice that in the case of $\Delta_1 = \Delta_2$ (on-shell legs), this vanishes, so the action of $D^{M_3A_3}$ is simple in this case.
Differentiating with respect to $P_3$, we obtain an additional factor,
\begin{equation} \frac{\partial}{\partial P_3^{M_3}} \prod_{i<j} ( P_{ij})^{-\delta_{ij}+\frac{1}{2}} = \left[ (2\delta_{13} -1) \frac{P_1^{M_3}}{P_{13}} + (2\delta_{23} -1) \frac{P_2^{M_3}}{P_{23}} \right] \prod_{i<j} ( P_{ij})^{-\delta_{ij}+\frac{1}{2}} \end{equation}
It follows that
\begin{equation}\label{eq48} D_{M_3A_3}\mathcal{A}^{A_1A_2A_3} = \frac{\pi^{d/2}}{2}\Gamma \left( \frac{\Delta_{1}+\Delta_{2}+\Delta_{3} -d+1}{2} \right) \left[ (\mathcal{D}_3\mathcal{I}) (1,2) - (\mathcal{D}_3\mathcal{I}) (2,1) \right] \prod_{i<j} \Gamma\left( \delta_{ij} +\frac{1}{2} \right) ( P_{ij})^{-\delta_{ij}+\frac{1}{2}}\end{equation}
where
\begin{equation} (\mathcal{D}_3\mathcal{I}) (1,2) = \frac{(\Delta_3+ \Delta_1- \Delta_2 -1) \eta^{A_1A_2} P_{1M_3} - 2(\Delta_3-1)
\delta^{A_1}_{M_3} P_1^{A_2}}{ \Delta_3 (\delta_{23} -\frac{1}{2})P_{12}P_{13}} \end{equation}
In this form, the three-point vector amplitude can be used in higher-point amplitudes in much the same way as its scalar counterpart \eqref{eq1f}.
\section{Four-point Amplitudes}
\label{secIII}
In this section, we calculate scalar and vector four-point amplitudes by sewing together two three-point amplitudes calculated in section \ref{secII}. Using the results in appendix \ref{Xintegral}, the integrals over AdS space are performed with little effort. In the vector case, there is an additional type of diagram contributing due to the existence of a four-point vertex. A quartic interaction can also be added in the scalar case. The calculation proceeds as in the vector case.
\subsection{Scalar amplitudes}
The four-point scalar amplitude reads (Fig.\ \ref{4pt})
\begin{equation}\label{eqA4s0} A^{(4,s)} (\Delta_1,P_1;\Delta_2,P_2;\Delta_3,P_3;\Delta_4,P_4) = \frac{g^2}{\prod_i 2\pi^{d/2} \Gamma (\Delta_i +1 -\frac{d}{2})} \int \frac{dc}{2\pi i} f_{\delta,0} (c) \mathcal{A}_{4} (\Delta_1, P_1; \Delta_2, P_2; \Delta_3, P_3; \Delta_4, P_4 | c) \end{equation}
where
\begin{equation}\label{eqA4s} \mathcal{A}_{4} (\Delta_1, P_1; \Delta_2, P_2; \Delta_3, P_3; \Delta_4, P_4 | c) = \int_{\partial AdS} dQ \mathcal{A}_{3}(\Delta_1,P_1;\Delta_2,P_2;d/2+c,Q) \mathcal{A}_{3}(\Delta_3,P_3;\Delta_4,P_4; d/2-c, Q) \end{equation}
and $\mathcal{A}_{3}$ is given by (\ref{eq1f}).
\tikzset{
particle/.style={thick,draw=black},
particle2/.style={dotted,thick,draw=black
}}
\begin{center}
\begin{tikzpicture}[node distance=1cm and 1.5cm]
\coordinate[label={[xshift=-3pt]left:$P_1$}] (e1);
\coordinate[below right=of e1, label={[xshift=-6pt]right:$~~~X$}] (aux1);
\coordinate[above right=of aux1,label={[xshift=6pt]right:$P_2$}] (e2);
\coordinate[below=2cm of aux1, label={[xshift=-6pt]right:$~~~Y$}] (aux2);
\coordinate[below left=of aux2,label={[xshift=-6pt]left:$P_4$}] (e3);
\coordinate[below right=of aux2,label={[xshift=3pt]right:$P_3$}] (e4);
\draw[particle] (aux2) -- (aux1);
\draw[particle] (e1) -- (aux1);
\draw[particle] (aux1) -- (e2);
\draw[particle] (e3) -- (aux2);
\draw[particle] (aux2) -- (e4);
\node[draw,name path=circle,line width=1.5pt,circle,fit=(e1) (e4),inner sep=.5\pgflinewidth] {};
\path[name path=diameter] let \p1=(aux1), \p2=(aux2)
in (aux1|-0,0.5*\y2+0.5*\y1) -- ++(-3cm,0);
\path[name intersections={of=circle and diameter, by={aux3}}];
\draw[particle2] (aux2) -- (aux3);
\draw[particle2] (aux3) -- (aux1);
\node[label={[xshift=3pt]left: $\int dQ ~~~Q$ }] at (aux3) {};
\end{tikzpicture}
\captionof{figure}{The four-point scalar amplitude \eqref{eqA4s0}.}
\label{4pt}
\end{center}
To integrate over $Q$, we need to calculate
\begin{equation} \int_{\partial AdS} dQ \prod_{i=1}^4 \Gamma (\lambda_i) (-2Q\cdot P_i)^{-\lambda_i}~, \end{equation}
where
\begin{equation} \lambda_1 = \frac{\Delta_1-\Delta_2 + d/2+c}{2} \ , \ \ \lambda_2 = \frac{\Delta_2 - \Delta_1 + d/2+c}{2} \ , \ \
\lambda_3 = \frac{\Delta_3-\Delta_4 +d/2-c}{2} \ , \ \ \lambda_4 = \frac{\Delta_4-\Delta_3 +d/2-c}{2} \end{equation}
Notice that $\lambda_1 + \dots + \lambda_4 =d$.
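The sum rule $\lambda_1 + \dots + \lambda_4 = d$ is purely algebraic and easy to verify; in the sketch below the numerical values of $\Delta_i$, $d$, and $c$ are arbitrary (in the text $c$ sits on the imaginary axis, but the identity holds for any $c$):

```python
from fractions import Fraction

# illustrative values of the dimensions and of d, c
D1, D2, D3, D4 = Fraction(3), Fraction(2), Fraction(5, 2), Fraction(7, 2)
d, c = Fraction(4), Fraction(1, 3)

l1 = (D1 - D2 + d / 2 + c) / 2
l2 = (D2 - D1 + d / 2 + c) / 2
l3 = (D3 - D4 + d / 2 - c) / 2
l4 = (D4 - D3 + d / 2 - c) / 2

# the Delta-dependence cancels pairwise, leaving lambda_1 + ... + lambda_4 = d
assert l1 + l2 + l3 + l4 == d
```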
Using the result \eqref{eqA10} in the Appendix, we obtain
\begin{equation} \int_{\partial AdS} dQ \prod_{i=1}^4\Gamma (\lambda_i) (-2Q\cdot P_i)^{-\lambda_i} = \frac{\pi^{d/2}}{2} \int \prod_{i<j} d\tilde\delta_{ij} \Gamma (\tilde\delta_{ij}) P_{ij}^{-\tilde\delta_{ij}}\end{equation}
where the integration variables are constrained by
\begin{equation}\label{eq24} \sum_{j\ne i} \tilde\delta_{ij} = \lambda_i \end{equation}
The integration variables are related to the Mandelstam invariants by
\begin{equation} \delta_{12} = \frac{\Delta_1+\Delta_2 - d/2-c}{2} + \tilde\delta_{12} \ \ , \ \ \ \ \delta_{34} = \frac{\Delta_3+\Delta_4 -d/2+c}{2} + \tilde\delta_{34}~, \end{equation}
and $\delta_{ij} = \tilde\delta_{ij}$, otherwise.
The constraints \eqref{eq24} in terms of the standard Mandelstam variables read
\begin{equation}\label{eqMan} \sum_{j\ne i} \delta_{ij} = \Delta_i \end{equation}
as expected (Eq.\ \eqref{eqMand}).
The four-point function \eqref{eqA4s} becomes
\begin{equation}\label{eq25a} \mathcal{A}_{4} (\{\Delta_i,P_i\} | c) = \frac{\pi^{d/2}}{2} \int \prod_{i<j} d\delta_{ij} \Gamma (\delta_{ij}) \mathcal{M}_4 (\delta_{ij} | c) P_{ij}^{-\delta_{ij}} \end{equation}
where
\begin{equation}\label{eq31} \mathcal{M}_4 = \frac{\prod_{\sigma=\pm}\Gamma (\frac{\Delta_1+\Delta_2 -d/2+\sigma c}{2})\Gamma (\frac{\Delta_3+\Delta_4 -d/2+\sigma c}{2})\Gamma (\delta_{12} - \frac{\Delta_1+\Delta_2 - d/2+\sigma c}{2}) }{\Gamma (\delta_{12})\Gamma (\delta_{34}) } \end{equation}
Notice that the Mellin transform \eqref{eq31} is a function of a single Mandelstam invariant, $\delta_{12}$, because $\delta_{34}$ and $\delta_{12}$ are related through $\delta_{12} - \delta_{34} = \frac{\Delta_{1} + \Delta_2 - \Delta_3 - \Delta_4}{2}$.
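That $\delta_{12}-\delta_{34}$ is fixed by the constraints can be seen by solving them explicitly. The sketch below does this numerically for two different choices of the remaining free invariants (the particular $\Delta_i$ and free values are arbitrary choices made for illustration):

```python
import numpy as np

Delta = np.array([2.0, 3.0, 2.5, 3.5])       # illustrative Delta_1..Delta_4

for d23, d24 in [(-0.3, 0.7), (1.1, -0.4)]:  # two choices of free invariants
    # solve the constraints sum_{j != i} delta_ij = Delta_i
    # for (delta_12, delta_13, delta_14, delta_34)
    A = np.array([[1.0, 1.0, 1.0, 0.0],      # constraint for Delta_1
                  [1.0, 0.0, 0.0, 0.0],      # constraint for Delta_2
                  [0.0, 1.0, 0.0, 1.0],      # constraint for Delta_3
                  [0.0, 0.0, 1.0, 1.0]])     # constraint for Delta_4
    b = np.array([Delta[0],
                  Delta[1] - d23 - d24,
                  Delta[2] - d23,
                  Delta[3] - d24])
    d12, d13, d14, d34 = np.linalg.solve(A, b)
    # delta_12 - delta_34 is fixed, independent of the free invariants
    assert np.isclose(d12 - d34, (Delta[0] + Delta[1] - Delta[2] - Delta[3]) / 2)
```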
\subsection{Vector amplitudes}
In the vector case, there are two types of diagrams, and we consider them separately.
First we discuss the four-point diagram due to the four-point gauge interaction (Fig.\ \ref{4ptv1}). The amplitude is
\begin{equation}\label{eq20c}
A^{(4,v,(a))M_1 M_2 M_3 M_4,a_1a_2a_3a_4}=\int_{AdS} dX\, I_{A_1A_2A_3A_4}^{a_1a_2a_3a_4} \prod_{i=1}^4 E^{M_iA_i} (X,P_i)~,
\end{equation}
where
\begin{equation} I_{A_1A_2A_3A_4}^{a_1a_2a_3a_4} = \left( f^{a_1a_4b} f^{a_2a_3b} + f^{a_1a_3b} f^{a_2a_4b} \right) \eta_{A_1 A_2} \eta_{A_3 A_4} + \cdots + \cdots \end{equation}
is independent of the points $P_i$ ($i=1,2,3,4$) and $X$.
\tikzset{
particle/.style={decorate, draw=black,
decoration={coil,aspect=0.08,segment length=3pt,amplitude=3pt}}}
\begin{center}
\begin{tikzpicture}[thick, node distance=1.5cm and 1.5cm]
\coordinate[label={[xshift=-2pt]left:$P_1, A_1, a_1$}] (e1);
\coordinate[below right=of e1, ,label={[xshift=6pt]right:$X$}] (aux1);
\coordinate[above right=of aux1,label={[xshift=6pt]right:$P_2,A_2,a_2$}] (e2);
\coordinate[below=1/cos(45)*2.1cm of e1,label={[xshift=6pt]below:$P_4, A_4, a_4~~~~~~~~~~~~~$}] (aux2);
\coordinate[below=1/cos(45)*2.1cm of e2,label={[xshift=6pt]below:$~~~~~~~P_3, A_3, a_3$}] (aux3);
\draw[particle] (e1) -- (aux1);
\draw[particle] (aux1) -- (e2);
\draw[particle] (aux1) -- (aux3);
\draw[particle] (aux1) -- (aux2);
\draw [ultra thick] (aux1) circle [radius=sqrt(1.5cm^2+1.5cm^2)];
\end{tikzpicture}
\captionof{figure}{The four-point vector amplitude \eqref{eq20c}.}
\label{4ptv1}
\end{center}
The integral over the bulk vector $X$ is of the form \eqref{eqA12}. Using the result \eqref{eqA16}, we obtain
\begin{equation}\label{eq44}
A^{(4,v,(a))M_1 M_2 M_3 M_4,a_1a_2a_3a_4} = \frac{\pi^{d/2}}{2\prod_i 2\pi^{d/2} \Gamma (1+\Delta_i -\frac{d}{2})} \prod_{i=1}^4 D^{M_i A_i} (\Delta_i, P_i) \mathcal{A}_{A_1A_2A_3A_4}^{a_1a_2a_3a_4} \end{equation}
where
\begin{equation} \mathcal{A}_{A_1A_2A_3A_4}^{a_1a_2a_3a_4}=\int \mathcal{M}_{A_1A_2A_3A_4}^{a_1a_2a_3a_4} (\delta_{ij})\prod_{i < j} \Gamma (\delta_{ij}) P_{ij}^{-\delta_{ij}} d\delta_{ij}
\ \ , \ \ \ \ \mathcal{M}_{A_1A_2A_3A_4}^{a_1a_2a_3a_4} = \Gamma \left( \frac{\sum_i \Delta_i -d}{2} \right) I_{A_1A_2A_3A_4}^{a_1a_2a_3a_4}\end{equation}
On shell ($\Delta_i = d-1$, $i=1,2,3,4$), this amplitude reads
\begin{equation} \mathcal{A}_{A_1A_2A_3A_4}^{a_1a_2a_3a_4}= \Gamma \left( \frac{3d-4}{2} \right) I_{A_1A_2A_3A_4}^{a_1a_2a_3a_4}\int\prod_{i < j} \Gamma (\delta_{ij}) P_{ij}^{-\delta_{ij}} d\delta_{ij}~. \end{equation}
To use it in a higher-point diagram, we need to act with $D^{M_4A_4}$ and eliminate $P_4$ with a free index.
By using the identity \eqref{eq42i4} with $\Delta = \Delta_3$, $P^A = P_3^{A_3}$, and $\mathcal{F}_{\Delta_3-1} (P_3) = P_{34} \prod_{i<j} P_{ij}^{-\delta_{ij}}$, we obtain
\begin{equation}\label{eq73} \left[ \delta_{13} \frac{P_{34}}{P_{13}} P_1^{A_3} + \delta_{23} \frac{P_{34}}{P_{23}} P_2^{A_3} + (\delta_{34} -1) P_4^{A_3} \right] \prod_{i<j} P_{ij}^{-\delta_{ij}} = 0~,\end{equation}
up to terms which vanish upon acting with $D_{M_3A_3}$.
We deduce
\begin{equation} P_4^{A_4} \eta_{A_3 A_4} = -\frac{1}{\delta_{34} -1} \left[ \delta_{13} \frac{P_{1A_3} }{P_{13}} + \delta_{23} \frac{P_{2A_3}}{P_{23}} \right] P_{34} \end{equation}
Differentiation with respect to $P_{4M_4}$ has the effect of multiplication by a factor given by
\begin{equation} \frac{\partial}{\partial P_{4M_4}} P_{34} \prod_{i=1}^3 P_{i4}^{-\delta_{i4}} = 2 \left[ \delta_{14}\frac{P_{1}^{M_4}}{P_{14}} + \delta_{24}\frac{P_{2}^{M_4}}{P_{24}} + (\delta_{34} -1) \frac{P_{3}^{M_4}}{P_{34}} \right]P_{34} \prod_{i=1}^3 P_{i4}^{-\delta_{i4}}\end{equation}
It follows that in the amplitude \eqref{eq44},
\begin{equation} D^{M_4A_4} \eta_{A_3 A_4} = \frac{\Delta_4 -1}{\Delta_4}\delta_{A_3}^{ M_4} - \frac{2 }{\Delta_4} \left( \delta_{13} \frac{P_{1A_3} }{P_{13}} + \delta_{23} \frac{P_{2A_3}}{P_{23}} \right) \left( \frac{\delta_{14}}{\delta_{34} -1} \frac{P_{34}}{P_{14}} P_{1}^{M_4}+ \frac{\delta_{24}}{\delta_{34} -1} \frac{P_{34}}{P_{24}} P_{2}^{M_4}+ P_{3}^{M_4} \right) \end{equation}
This expression allows us to use this amplitude in the calculation of a higher-point amplitude in which the leg corresponding to $P_4$ is internal.
\tikzset{
particle/.style={decorate, draw=black,
decoration={coil,aspect=0.08,segment length=3pt,amplitude=3pt}}}
\tikzset{
particle2/.style={densely dotted ,decorate, draw=black,
decoration={coil,aspect=0.08,segment length=3pt,amplitude=3pt}}}
\begin{center}
\begin{tikzpicture}[thick,node distance=1cm and 1.5cm]
\coordinate[label={[xshift=-3pt]left:$P_1, {A_1}, a_1$}] (e1);
\coordinate[below right=of e1, label={[xshift=-6pt]right:$~~~~~X$}] (aux1);
\coordinate[above right=of aux1,label={[xshift=6pt]right:$P_2, {A_2}, a_2$}] (e2);
\coordinate[below=2cm of aux1, label={[xshift=-6pt]right:$~~~~~Y$}] (aux2);
\coordinate[below left=of aux2,label={[xshift=-6pt]left:$P_3, {A_3}, a_3$}] (e3);
\coordinate[below right=of aux2,label={[xshift=3pt]right:$P_4, {A_4}, a_4$}] (e4);
\draw[particle] (aux2) -- (aux1);
\draw[particle] (e1) -- (aux1);
\draw[particle] (aux1) -- (e2);
\draw[particle] (e3) -- (aux2);
\draw[particle] (aux2) -- (e4);
\node[draw,name path=circle,line width=1.5pt,circle,fit=(e1) (e4),inner sep=.5\pgflinewidth] {};
\path[name path=diameter] let \p1=(aux1), \p2=(aux2)
in (aux1|-0,0.5*\y2+0.5*\y1) -- ++(-3cm,0);
\path[name intersections={of=circle and diameter, by={aux3}}];
\draw[particle2] (aux2) -- (aux3);
\draw[particle2] (aux3) -- (aux1);
\node[label={[xshift=3pt]left: $\int dQ ~Q$ }] at (aux3) {};
\end{tikzpicture}
\captionof{figure}{The four-point vector amplitude \eqref{eq57o}.}
\label{3ptv}
\end{center}
Next, we calculate the four-point vector amplitude depicted in Fig.\ \ref{3ptv}. We have
\begin{equation}\label{eq57o}
A^{(4,v, (b))M_1 M_2 M_3 M_4,a_1a_2a_3a_4}= g^2 f^{a_1a_2b} f^{a_3a_4b} \int \frac{dc}{2\pi i} f_{\delta,1} (c) \prod_{i=1}^4 D^{M_iA_i} (\Delta_i,P_i) \mathcal{A}_{A_1 A_2 A_3A_4} (\{ \Delta_i, P_i \} |c ) \end{equation}
where
\begin{eqnarray}\label{eq57} \mathcal{A}_{A_1 A_2 A_3A_4} (\{ \Delta_i, P_i \} |c ) &=& \int_{\partial AdS} dQ
\eta_{NN'}D^{NC} (d/2+c, Q)\mathcal{A}_{A_1A_2C} (\Delta_1, P_1; \Delta_2, P_2; d/2+c, Q) \nonumber\\
&&\times D^{N' C'} (d/2-c,Q)\mathcal{A}_{A_3 A_4 C'} (\Delta_3,P_3;\Delta_4, P_4; d/2-c, Q)
\end{eqnarray}
Using \eqref{eq48}, we can express this in terms of the scalar functions \eqref{eqA4s}. The integral over $Q$ corresponding to the product of \eqref{eq48} and its counterpart in the second amplitude (with $1\to 3$ and $2\to 4$) is
\begin{equation} \int_{\partial AdS} dQ \prod_{i=1}^4 \Gamma (\lambda_i)(-2P_i\cdot Q)^{-\lambda_i} = \frac{\pi^{d/2}}{2} \int \prod_{i<j} d\tilde\delta_{ij} \Gamma (\tilde\delta_{ij}) P_{ij}^{-\tilde\delta_{ij}} \end{equation}
where
\begin{equation} \lambda_1 = \frac{\Delta_1-\Delta_2 + \frac{d}{2} +c+1}{2} \ , \ \ \lambda_2 = \frac{\Delta_2 - \Delta_1 + \frac{d}{2}+c-1}{2} \ , \ \
\lambda_3 = \frac{\Delta_3 - \Delta_4 + \frac{d}{2}-c+1}{2} \ , \ \ \lambda_4 = \frac{\Delta_4 - \Delta_3 + \frac{d}{2}-c-1}{2} \end{equation}
The Mandelstam invariants are
\begin{equation} \delta_{12} = \tilde\delta_{12} + \frac{\Delta_1+\Delta_2 - \frac{d}{2} -c+1}{2} \ , \ \ \delta_{34} = \tilde\delta_{34} + \frac{\Delta_3+\Delta_4 - \frac{d}{2} +c+1}{2} \ , \ \ \delta_{13} = \tilde\delta_{13} -1~, \end{equation}
and $\delta_{ij} = \tilde\delta_{ij}$, otherwise.
Thus, the four-point vector amplitude \eqref{eq57} can be put in the form \eqref{eq25a}, as in the scalar case,
\begin{equation}\label{eq25av} \mathcal{A}_{A_1 A_2 A_3A_4} (\{\Delta_i,P_i\} | c) = \frac{\pi^{d/2}}{2} \int \mathcal{M}_{A_1 A_2 A_3A_4} (\delta_{ij} | c) \prod_{i<j} \Gamma (\delta_{ij}) P_{ij}^{-\delta_{ij}} d\delta_{ij} \end{equation}
where
\begin{equation} \mathcal{M}_{A_1 A_2 A_3A_4} (\delta_{ij} | c)= \frac{(\frac{d}{2}-1)^2 -c^2}{\frac{d^2}{4}-c^2} \left[ \delta_{13}\mathcal{I}({1,2,3,4}) -\delta_{14}\mathcal{I}({1,2,4,3}) -\delta_{23}\mathcal{I}({2,1,3,4}) +\delta_{24}\mathcal{I}({2,1,4,3}) \right]
\mathcal{M}_{4} ~,\end{equation}
$\mathcal{M}_4$ is as in the scalar case (Eq.\ \eqref{eq31}), but with the replacements $\Delta_1\to \Delta_1+1,\Delta_3\to\Delta_3+1$,
and
\begin{eqnarray} \mathcal{I}({1,2,3,4})&=& -\frac{ (\frac{d}{2}+c- \Delta_1+ \Delta_2 -1) (\frac{d}{2}-c- \Delta_3+ \Delta_4 -1) }{2[(\frac{d}{2}-1)^2 -c^2]} \eta_{A_1A_2}\eta_{A_3A_4} \nonumber\\ & &
- 2\frac{\frac{d}{2}+c- \Delta_1+ \Delta_2 -1}{ \frac{d}{2}+c-1} \eta_{A_1A_2} \frac{P_{1A_3} P_{3A_4}}{P_{13}}
-2\frac{\frac{d}{2}-c- \Delta_3+ \Delta_4 -1}{\frac{d}{2}-c-1} \eta^{A_3A_4}
\frac{P_{1A_2} P_{3A_1}}{P_{13}} \nonumber\\ & &
+4
\eta_{A_1A_3} \frac{P_{1A_2} P_{3A_4}}{P_{13}}
\end{eqnarray}
The above expressions simplify for on-shell amplitudes. Setting $\Delta_i = d-1$ ($i=1,2,3,4$), we obtain
\begin{eqnarray} \mathcal{M}_{A_1 A_2 A_3A_4} (\delta_{ij} | c) &=& \frac{(\frac{d}{2}-1)^2 -c^2}{\frac{d^2}{4}-c^2} \left[ - \eta_{A_1A_2}\eta_{A_3A_4}
- 2 \eta_{A_1A_2}\left( \frac{P_{1A_3} P_{3A_4}}{P_{13}} + \frac{P_{2A_4} P_{4A_3}}{P_{24}}\right) \right. \nonumber\\ & &
\left. -2 \eta_{A_3A_4} \left(
\frac{P_{1A_2} P_{3A_1}}{P_{13}}+
\frac{P_{2A_1} P_{4A_2}}{P_{24}} \right)
+4
\eta_{A_1A_3} \frac{P_{1A_2} P_{3A_4}}{P_{13}} +4
\eta_{A_2A_4} \frac{P_{2A_1} P_{4A_3}}{P_{24}} \right]
\delta_{13} \mathcal{M}_{4} \nonumber\\ & & - (3\longleftrightarrow 4) ~,\end{eqnarray}
where
\begin{equation}\label{eq31von} \mathcal{M}_4 = \frac{\prod_{\sigma=\pm}\Gamma^2 (\frac{\frac{3d}{2}-1 +\sigma c}{2})\Gamma (\delta_{12} - \frac{\frac{3d}{2} - 1+\sigma c}{2}) }{\Gamma^2 (\delta_{12}) } \end{equation}
To use this function in higher-point amplitudes, it is convenient to eliminate all terms with $P_4^{A_i}$ ($i=1,2,3$).
By using the identity \eqref{eq42i4} with $\Delta = \Delta_1$, $P^A = P_1^{A_1}$, and $\mathcal{F}_{\Delta_1-1} (P_1) = P_{1}^{A_2} \prod_{i<j} P_{ij}^{-\delta_{ij}}$, we obtain
\begin{equation}\label{eq73v} \left[ \frac{1}{2}\eta^{A_1A_2} + \sum_{k\ne 1}\delta_{1k} \frac{P_k^{A_1}P_1^{A_2}}{P_{1k}} \right] \prod_{i<j} P_{ij}^{-\delta_{ij}} = 0\end{equation}
We deduce from \eqref{eq73} and \eqref{eq73v},
\begin{eqnarray} \mathcal{I}({1, 2, 4,3}) &=& -\frac{ (\frac{d}{2}+c- \Delta_1+ \Delta_2 -1) (\frac{d}{2}-c+ \Delta_3- \Delta_4 -1) }{2[(\frac{d}{2}-1)^2 -c^2]} \eta_{A_1A_2}\eta_{A_3A_4} \nonumber\\ & &
-\frac{2}{\delta_{34} -1}\left( 2
\eta_{A_1A_4} P_{1A_2} - \frac{\frac{d}{2}+c- \Delta_1+ \Delta_2 -1}{ \frac{d}{2}+c-1} \eta_{A_1A_2} P_{1A_4} \right) \left( \delta_{13} \frac{ P_{1A_3}}{P_{13}} + \delta_{23} \frac{ P_{2A_3}}{P_{23}} \right)\frac{P_{34}}{P_{14}} \nonumber\\ & &
+\frac{2}{\delta_{14}}\frac{\frac{d}{2}-c- \Delta_3+ \Delta_4 -1}{\frac{d}{2}-c-1} \eta_{A_3A_4}
\left( \frac{1}{2}\eta_{A_1A_2} + \delta_{12} \frac{P_{2A_1}P_{1A_2}}{P_{12}} + \delta_{13} \frac{P_{3A_1}P_{1A_2}}{P_{13}} \right)
\end{eqnarray}
and similarly for $\mathcal{I}({2,1,4,3})$; the expressions for $\mathcal{I}({1,2,3,4})$ and $\mathcal{I}({2,1,3,4})$ are unaltered.
Next we act with $D^{M_4A_4}$. We have
\begin{eqnarray}\label{eq88} P_{4}^{A_4}\mathcal{I}({1 ,2, 3,4}) &=& -\frac{ (\frac{d}{2}+c- \Delta_1+ \Delta_2 -1) (\frac{d}{2}-c- \Delta_3+ \Delta_4 -1) }{2[(\frac{d}{2}-1)^2 -c^2]} \eta_{A_1A_2} P_{4A_3} \nonumber\\ & &
+\frac{\frac{d}{2}+c- \Delta_1+ \Delta_2 -1}{ \frac{d}{2}+c-1} \eta_{A_1A_2} P_{1A_3} \frac{P_{34}}{P_{13}}
-2\frac{\frac{d}{2}-c- \Delta_3+ \Delta_4 -1}{\frac{d}{2}-c-1} \frac{P_{4A_3}
P_{1A_2} P_{3A_1}}{P_{13}} \nonumber\\ & &
+4
\eta_{A_1A_3} P_{1A_2} \frac{P_{34}}{P_{13}}\end{eqnarray}
and
\begin{eqnarray}\label{eq89}
P_{4}^{A_4}\mathcal{I}({1 ,2, 4,3}) &=& -\frac{ (\frac{d}{2}+c- \Delta_1+ \Delta_2 -1) (\frac{d}{2}-c+ \Delta_3- \Delta_4 -1) }{2[(\frac{d}{2}-1)^2 -c^2]} \eta_{A_1A_2}P_{4A_3} \nonumber\\ & &
-\frac{1}{\delta_{34} -1}\left( 4
P_{4A_1} P_{1A_2} + \frac{\frac{d}{2}+c- \Delta_1+ \Delta_2 -1}{ \frac{d}{2}+c-1} \eta_{A_1A_2} P_{14} \right) \left( \delta_{13} \frac{ P_{1A_3}}{P_{13}} + \delta_{23} \frac{ P_{2A_3}}{P_{23}} \right)\frac{P_{34}}{P_{14}} \nonumber\\ & &
+\frac{2}{\delta_{14}}\frac{\frac{d}{2}-c- \Delta_3+ \Delta_4 -1}{\frac{d}{2}-c-1} P_{4A_3}
\left( \eta_{A_1A_2} + \delta_{12} \frac{P_{2A_1}P_{1A_2}}{P_{12}} + \delta_{13} \frac{P_{3A_1}P_{1A_2}}{P_{13}} \right)
\end{eqnarray}
and similarly for $P_{4}^{A_4}\mathcal{I}({2,1,3,4})$ and $P_{4}^{A_4}\mathcal{I}({2,1,4,3})$.
$P_4$ with a free index is eliminated from \eqref{eq88} using \eqref{eq73} and
\begin{equation}\label{eq73v2} \left[ \frac{1}{2}\eta^{A_1A_3} + (\delta_{13} +1)\frac{P_3^{A_1}P_1^{A_3}}{P_{13}} + \delta_{23} \frac{P_3^{A_1}P_2^{A_3}}{P_{23}} + (\delta_{34} -1) \frac{P_3^{A_1}P_4^{A_3}}{P_{34}} \right] \frac{P_{34}}{P_{13}}\prod_{i<j} P_{ij}^{-\delta_{ij}} = 0\end{equation}
We obtain
\begin{eqnarray}\label{eq88a} \frac{P_{13}}{P_{34}}P_{4}^{A_4}\mathcal{I}({1, 2, 3,4}) &=& \frac{ (\frac{d}{2}+c- \Delta_1+ \Delta_2 -1) (\frac{d}{2}-c- \Delta_3+ \Delta_4 -1) }{2(\delta_{34} -1)[(\frac{d}{2}-1)^2 -c^2]} \eta_{A_1A_2} \left( \delta_{13} P_{1A_3} + \delta_{23} \frac{P_{13}}{P_{23}} P_{2A_3} \right) \nonumber\\ & &
+\frac{\frac{d}{2}+c- \Delta_1+ \Delta_2 -1}{ \frac{d}{2}+c-1} \eta_{A_1A_2} P_{1A_3}
+4
\eta_{A_1A_3} P_{1A_2} \nonumber\\ & &
+\frac{2}{\delta_{34} -1}\frac{\frac{d}{2}-c- \Delta_3+ \Delta_4 -1}{\frac{d}{2}-c-1}
P_{1A_2}\left( \frac{1}{2}\eta_{A_1A_3} + (\delta_{13} +1)\frac{P_{3A_1}P_{1A_3}}{P_{13}} + \delta_{23} \frac{P_{3A_1}P_{2A_3}}{P_{23}} \right)
\end{eqnarray}
Notice that $P_4$ only enters through an overall factor of $P_{34}$.
Similarly, $P_4$ is eliminated from \eqref{eq89} using \eqref{eq73}, \eqref{eq73v}, \eqref{eq73v2}, and
\begin{equation}\label{eq73v3} \left[ \frac{1}{2}\eta^{A_1A_2} P_1^{A_3}+\frac{1}{2}\eta^{A_1A_3} P_1^{A_2}+ \left( \delta_{12} \frac{P_2^{A_1}}{P_{12}} + (\delta_{13} +1) \frac{P_3^{A_1}}{P_{13}} + \delta_{14} \frac{P_4^{A_1}}{P_{14}}\right) P_1^{A_2}P_1^{A_3} \right] \frac{1}{P_{13}}\prod_{i<j} P_{ij}^{-\delta_{ij}} = 0\end{equation}
We obtain
\begin{eqnarray}\label{eq89a}
\frac{P_{13}}{P_{34}}P_{4}^{A_4}\mathcal{I}({1, 2, 4,3}) &=& \frac{ (\frac{d}{2}+c- \Delta_1+ \Delta_2 -1) (\frac{d}{2}-c+ \Delta_3- \Delta_4 -1) }{2(\delta_{34} -1)[(\frac{d}{2}-1)^2 -c^2]} \eta_{A_1A_2} \left(\delta_{13} P_{1A_3} + \delta_{23} \frac{P_{13}}{P_{23}} P_{2A_3} \right) \nonumber\\ & &
+\frac{4\delta_{13}}{\delta_{14}(\delta_{34} -1)}
\left[ \frac{1}{2}\eta_{A_1A_2} P_{1A_3}+\frac{1}{2}\eta_{A_1A_3} P_{1A_2}+ \left( \delta_{12} \frac{P_{2A_1}}{P_{12}} + (\delta_{13} +1) \frac{P_{3A_1}}{P_{13}} \right) P_{1A_2}P_{1A_3} \right] \nonumber\\ & &
+\frac{4 \delta_{23}}{\delta_{14}(\delta_{34} -1)}
\frac{ P_{2A_3}}{P_{23}}P_{13}\left( \frac{1}{2}\eta_{A_1A_2} + \delta_{12} \frac{P_{2A_1}P_{1A_2}}{P_{12}} + \delta_{13} \frac{P_{3A_1}P_{1A_2}}{P_{13}} \right) \nonumber\\ & &
-\frac{1}{\delta_{34} -1} \frac{\frac{d}{2}+c- \Delta_1+ \Delta_2 -1}{ \frac{d}{2}+c-1} \eta_{A_1A_2} \left( \delta_{13} P_{1A_3} + \delta_{23} \frac{ P_{13}}{P_{23}} P_{2A_3} \right) \nonumber\\ & &
-\frac{2}{\delta_{14}(\delta_{34} -1)}\frac{\frac{d}{2}-c- \Delta_3+ \Delta_4 -1}{\frac{d}{2}-c-1}
\left( \eta_{A_1A_2} + \delta_{12} \frac{P_{2A_1}P_{1A_2}}{P_{12}} \right) \left(\delta_{13} P_{1A_3} + \delta_{23} \frac{P_{13}}{P_{23}} P_{2A_3} \right) \nonumber\\ & &
-\frac{2\delta_{13}}{\delta_{14}(\delta_{34} -1)}\frac{\frac{d}{2}-c- \Delta_3+ \Delta_4 -1}{\frac{d}{2}-c-1}
P_{1A_2} \left( \frac{1}{2}\eta_{A_1A_3} + (\delta_{13} +1)\frac{P_{3A_1}P_{1A_3}}{P_{13}} + \delta_{23} \frac{P_{3A_1}P_{2A_3}}{P_{23}} \right)
\end{eqnarray}
Again, $P_4$ only enters through an overall factor of $P_{34}$. It follows that the part of the amplitude involving $P_4$ is $P_{34} \prod_{i=1}^3 P_{i4}^{-\delta_{i4}}$. Therefore, differentiation with respect to $P_{4M_4}$ has the effect of multiplication by an overall factor,
\begin{equation} \frac{\partial}{\partial P_{4M_4}} P_{34} \prod_{i=1}^3 P_{i4}^{-\delta_{i4}} = 2 \left[ \delta_{14}\frac{P_{1}^{M_4}}{P_{14}} + \delta_{24}\frac{P_{2}^{M_4}}{P_{24}} + (\delta_{34} -1) \frac{P_{3}^{M_4}}{P_{34}} \right]P_{34} \prod_{i=1}^3 P_{i4}^{-\delta_{i4}}\end{equation}
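This differentiation rule can be checked by finite differences on generic (off-cone) embedding vectors, treating each $P_{ij} = -2 P_i \cdot P_j$ as a function of an unconstrained $P_4$. A minimal Python sketch; the metric signature, test vectors, and exponent values are arbitrary choices made for the check, not taken from the text:

```python
import random

# Embedding metric diag(-1,+1,+1,+1); time components are taken large so that
# every P_ij = -2 P_i . P_j is positive and real fractional powers are defined.
D = 4
ETA = [-1.0, 1.0, 1.0, 1.0]

def Pij(p, q):
    return -2.0 * sum(e * x * y for e, x, y in zip(ETA, p, q))

random.seed(7)
P = [[random.uniform(2.0, 3.0)] + [random.uniform(-0.5, 0.5) for _ in range(3)]
     for _ in range(4)]
delta = [0.3, 0.7, 1.1]            # test values of delta_14, delta_24, delta_34

def F(P4):
    """F = P_34 * prod_{i=1}^{3} P_{i4}^(-delta_{i4})."""
    val = Pij(P[2], P4)
    for i in range(3):
        val *= Pij(P[i], P4) ** (-delta[i])
    return val

eps = 1e-5
resid = []
for N in range(D):
    up = list(P[3]); up[N] += eps
    dn = list(P[3]); dn[N] -= eps
    num = (F(up) - F(dn)) / (2 * eps)       # numerical d F / d P_4^N
    # formula: 2 [ d14 P_1N/P_14 + d24 P_2N/P_24 + (d34 - 1) P_3N/P_34 ] F,
    # with the lower-index components P_iN = eta_NM P_i^M
    coeff = sum((delta[i] - (1.0 if i == 2 else 0.0))
                * ETA[N] * P[i][N] / Pij(P[i], P[3]) for i in range(3))
    resid.append(abs(num - 2.0 * coeff * F(P[3])))
assert max(resid) < 1e-6 * abs(F(P[3]))
```

The derivative indeed acts multiplicatively, component by component, as stated.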
The amplitude with $D^{M_4A_4}$ acted upon is given by
\begin{equation}\label{eq25av2} D^{M_4A_4}\mathcal{A}_{A_1 A_2 A_3A_4} (\{\Delta_i,P_i\} | c) = \frac{\pi^{d/2}}{2} \int (\mathcal{D}_4 \mathcal{M})^{M_4}_{A_1A_2A_3} (\delta_{ij} | c) \prod_{i<j} P_{ij}^{-\delta_{ij}} \Gamma (\delta_{ij}) d\delta_{ij} \end{equation}
where
\begin{equation} (\mathcal{D}_4 \mathcal{M})^{M_4}_{A_1A_2A_3} =\left[ \frac{\Delta_4 -1}{\Delta_4}\eta^{M_4A_4} + \frac{2}{\Delta_4} \left( \delta_{14}P_{1}^{M_4}\frac{P_{34}}{P_{14}} + \delta_{24}P_{2}^{M_4}\frac{P_{34}}{P_{24}} + (\delta_{34} -1) P_{3}^{M_4}\right) \frac{P_{4A_4}}{P_{34}} \right] \mathcal{M}_{A_1 A_2 A_3A_4} (\delta_{ij} | c) \end{equation}
In the above expression, the only dependence on $P_4$ is through the ratios $\frac{P_{34}}{P_{14}}$ and $\frac{P_{34}}{P_{24}}$.
This expression will be used in the calculation of five- and higher-point vector amplitudes.
\section{Five-point Amplitudes}
\label{secIV}
In this section, we calculate scalar and vector five-point amplitudes by sewing together three- and four-point amplitudes. The integrals over AdS space that are involved are similar to the ones encountered in the case of four-point amplitudes in section \ref{secIII} and can be performed using the results of appendix \ref{Xintegral} without additional effort. The Mellin amplitudes are found as integrals over the Mandelstam invariants of the constituent four-point amplitudes. These integrals can all be performed, resulting in expressions involving generalized hypergeometric functions. Thus, our approach provides an alternative to integration over Schwinger parameters \cite{Paulos:2011ie}.
\subsection{Scalar amplitudes}
The five-point scalar amplitude (Fig.\ \ref{5pt}) reads
\begin{equation}\label{eq77} A^{(5,s)} (\Delta_1,P_1;\dots;\Delta_5,P_5) = \frac{g^3}{\prod_i 2\pi^{d/2} \Gamma (\Delta_i + 1 - d/2)} \int \frac{dcdc'}{(2\pi i)^2} f_{\delta_1,0} (c) f_{\delta_2,0} (c') \mathcal{A}_{5}(\{\Delta_i,P_i\} |c,c') \end{equation}
where
\begin{equation} \mathcal{A}_{5}(\{\Delta_i,P_i \} |c,c') = \int_{\partial AdS} dQ
\mathcal{A}_{4}(\Delta_1,P_1;\Delta_2,P_2;\Delta_3, P_3; d/2+c',Q|c) \mathcal{A}_{3} (\Delta_4,P_4; \Delta_5, P_5;d/2-c',Q) \end{equation}
\begin{center}
\tikzset{
particle/.style={thick,draw=black},
particle2/.style={dotted,thick,draw=black
}}
\begin{tikzpicture}[thick, node distance=1cm and 1.5cm]
\coordinate[label={[xshift=-3pt]left:$P_1~$}] (e1);
\coordinate[below right=of e1,label={[xshift=3pt]right:$~X$}] (aux1);
\coordinate[above right=of aux1,label={[xshift=6pt]right:$~~P_2$}] (e2);
\coordinate[below=1cm of aux1,label={[xshift=3pt]above left:$Z~~~$}] (aux);
\coordinate[below=2cm of aux1,label={[xshift=3pt]right:$~Y$}] (aux2);
\coordinate[below left=of aux2,label={[xshift=-6pt]left:$P_5~$}] (e3);
\coordinate[below right=of aux2,label={[xshift=3pt]right:$~~P_4$}] (e4);
\draw[particle] (e1) -- (aux1);
\draw[particle] (aux1) -- (e2);
\draw[particle] (e3) -- (aux2);
\draw[particle] (aux2) -- (e4);
\draw[particle] (aux1) -- node {} (aux2);
\node[draw,name path=circle,line width=3pt,circle,fit=(e1) (e4),inner sep=.5\pgflinewidth] {};
\path[name path=diameter] let \p1=(aux1), \p2=(aux2)
in (aux1|-0,0.5*\y2+0.5*\y1) -- ++(3cm,0);
\path[name intersections={of=circle and diameter, by={aux3}}];
\path[name path=diameter] let \p1=(aux1), \p2=(aux2)
in (aux1|-0,0.5*\y2+0.5*\y1) -- ++(-3cm,0);
\path[name intersections={of=circle and diameter, by={aux4}}];
\draw[particle2] (aux) -- (aux4);
\draw[particle2] (aux2) -- (aux4);
\draw[particle] (aux) -- (aux3);
\node[label={[xshift=2pt]right:$P_3$}] at (aux3) {};
\node[label={[xshift=2pt]left:$\int dQ~~ Q$}] at (aux4) {};
\end{tikzpicture}
\captionof{figure}{The five-point scalar amplitude \eqref{eq77}.}
\label{5pt}
\end{center}
The integral over $Q$ involves
\begin{equation} \int_{\partial AdS} dQ \prod_{i=1}^5\Gamma (\lambda_i) (-2Q\cdot P_i)^{-\lambda_i} \end{equation}
where
\begin{equation} \lambda_1 = \delta_{14}' \ , \ \ \lambda_2 = \delta_{24}' \ , \ \ \lambda_3 = \delta_{34}' \ , \ \ \lambda_4 = \frac{\Delta_4-\Delta_5 +\frac{d}{2}-c'}{2} \ , \ \ \lambda_5 = \frac{\Delta_5-\Delta_4 + \frac{d}{2}-c'}{2} \end{equation}
and $\delta_{ij}'$ are the Mandelstam invariants for the four-point function constrained by
\begin{eqnarray} \delta_{12}' + \delta_{13}' + \delta_{14}' &=& \Delta_1 \nonumber\\
\delta_{12}' + \delta_{23}' + \delta_{24}' &=& \Delta_2 \nonumber\\
\delta_{13}' + \delta_{23}' + \delta_{34}' &=& \Delta_3 \nonumber\\
\delta_{14}' + \delta_{24}' + \delta_{34}' &=& \frac{d}{2}+c'
\end{eqnarray}
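These constraints are mutually consistent and enforce $\sum_{i=1}^5 \lambda_i = d$, the homogeneity condition required for the boundary integral over $Q$ to be well defined. A quick numerical bookkeeping check in Python; the values of $d$, $c'$, and the retained free invariants are arbitrary test choices:

```python
import random

random.seed(0)
d = 4.0
cp = 0.37                                   # test value of c'
D1, D2, D3, D4, D5 = (random.uniform(1.0, 3.0) for _ in range(5))

# pick delta'_12, delta'_13 freely; the constraints fix the remaining invariants
d12, d13 = 0.21, 0.33
d23 = (D1 + D2 + D3 - d / 2 - cp) / 2 - d12 - d13
d14 = D1 - d12 - d13
d24 = D2 - d12 - d23
d34 = D3 - d13 - d23
# the fourth constraint is then automatic
assert abs(d14 + d24 + d34 - (d / 2 + cp)) < 1e-9

lam = [d14, d24, d34,
       (D4 - D5 + d / 2 - cp) / 2,
       (D5 - D4 + d / 2 - cp) / 2]
assert abs(sum(lam) - d) < 1e-9             # homogeneity condition for the Q-integral
```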
Working as before, we obtain
\begin{equation} \int_{\partial AdS} dQ \prod_{i=1}^5\Gamma (\lambda_i) (-2Q\cdot P_i)^{-\lambda_i} = \frac{\pi^{d/2}}{2} \int \prod_{i<j} d\tilde\delta_{ij} \Gamma (\tilde\delta_{ij}) P_{ij}^{-\tilde\delta_{ij}}
\end{equation}
where the integration variables are constrained by
\begin{equation}\label{eq245} \sum_{j\ne i} \tilde\delta_{ij} = \lambda_i \end{equation}
The part of the amplitude involving the vectors $P_i$ is
\begin{equation} P_{12}^{-\delta_{12}'} P_{13}^{-\delta_{13}'} P_{23}^{-\delta_{23}'} P_{45}^{- \frac{\Delta_4 + \Delta_5 - d/2 + c'}{2}}\prod_{i<j} P_{ij}^{-\tilde\delta_{ij}} = \prod_{i<j} P_{ij}^{-\delta_{ij}} \end{equation}
where $\delta_{ij}$ are the Mandelstam invariants defined by
\begin{equation} \delta_{12} = \tilde\delta_{12} + \delta_{12}' \ , \ \ \delta_{23} = \tilde\delta_{23} + \delta_{23}' \ , \ \ \delta_{13} = \tilde\delta_{13} + \delta_{13}' \ , \ \
\delta_{45} = \tilde\delta_{45} + \frac{\Delta_4 + \Delta_5 - d/2 + c'}{2} \ , \end{equation}
and $\delta_{ij} = \tilde\delta_{ij}$, otherwise. It is easily seen that they obey the standard constraints \eqref{eqMan}.
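The last statement can be verified explicitly: choosing invariants that satisfy the four-point constraints and \eqref{eq245}, the identifications above reproduce $\sum_{j\ne i}\delta_{ij} = \Delta_i$ for every external leg. A Python sketch; all numerical values are arbitrary test choices, and a particular solution of the underdetermined linear constraints is generated via the ansatz $\tilde\delta_{ij} = a_i + a_j$:

```python
import random

random.seed(1)
d = 4.0
cp = -0.29                        # test value of c'
Delta = [random.uniform(1.0, 3.0) for _ in range(5)]   # Delta_1 ... Delta_5

# four-point invariants delta'_ij obeying their constraints (delta'_12, delta'_13 free)
dp = {(1, 2): 0.21, (1, 3): 0.33}
dp[2, 3] = (Delta[0] + Delta[1] + Delta[2] - d / 2 - cp) / 2 - dp[1, 2] - dp[1, 3]
dp[1, 4] = Delta[0] - dp[1, 2] - dp[1, 3]
dp[2, 4] = Delta[1] - dp[1, 2] - dp[2, 3]
dp[3, 4] = Delta[2] - dp[1, 3] - dp[2, 3]

lam = [dp[1, 4], dp[2, 4], dp[3, 4],
       (Delta[3] - Delta[4] + d / 2 - cp) / 2,
       (Delta[4] - Delta[3] + d / 2 - cp) / 2]

# particular solution of sum_{j != i} tilde_ij = lam_i via tilde_ij = a_i + a_j
S = sum(lam) / 8.0                # = sum(lam)/(2n - 2) with n = 5
a = [(l - S) / 3.0 for l in lam]  # divide by n - 2
tilde = {(i + 1, j + 1): a[i] + a[j] for i in range(5) for j in range(i + 1, 5)}

# assemble the five-point Mandelstam invariants
delta = dict(tilde)
for pair in [(1, 2), (1, 3), (2, 3)]:
    delta[pair] += dp[pair]
delta[4, 5] += (Delta[3] + Delta[4] - d / 2 + cp) / 2

# standard constraints: sum_{j != i} delta_ij = Delta_i for every external leg
for i in range(1, 6):
    tot = sum(delta[min(i, j), max(i, j)] for j in range(1, 6) if j != i)
    assert abs(tot - Delta[i - 1]) < 1e-9
```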
The five-point function simplifies to
\begin{equation}\label{eq25a5} \mathcal{A}_{5} (\{\Delta_i,P_i\} | c,c') = \frac{\pi^{d/2}}{2} \int \prod_{i<j} d\delta_{ij} \Gamma (\delta_{ij}) \mathcal{M}_5 (\delta_{ij} | c,c') P_{ij}^{-\delta_{ij}} \end{equation}
where
\begin{eqnarray}\label{eq43} \mathcal{M}_5 (\delta_{ij} | c,c') &=&\int \prod_{i<j} d\delta_{ij}' \frac{\Gamma (\delta_{12} - \delta_{12}')\Gamma (\delta_{12}')\Gamma (\delta_{23} - \delta_{23}')\Gamma (\delta_{23}') \Gamma(\delta_{13} - \delta_{13}')\Gamma (\delta_{13}')}{\Gamma(\delta_{12})\Gamma(\delta_{23})\Gamma(\delta_{13}) } \nonumber\\ & &
\times \frac{\Gamma (\delta_{45} - \frac{\Delta_4+\Delta_5 -d/2+c'}{2})\Gamma (\frac{\Delta_4+\Delta_5 -d/2+c'}{2})}{\Gamma (\delta_{45})} \mathcal{M}_4 (\delta_{12}' |c) \mathcal{M}_3 \end{eqnarray}
and
\begin{equation}\mathcal{M}_3 = \Gamma \left( \frac{\Delta_4+\Delta_5 -\frac{d}{2} -c'}{2} \right) \ , \ \ \mathcal{M}_4 (\delta_{12}' |c) = \frac{\prod_{\sigma =\pm}\Gamma (\frac{\Delta_1+\Delta_2 -d/2+\sigma c}{2})\Gamma (\frac{\Delta_3+c' +\sigma c}{2})\Gamma (\delta_{12}' - \frac{\Delta_1+\Delta_2 - d/2+\sigma c}{2}) }{\Gamma (\delta_{12}')\Gamma (\delta_{34}') } \end{equation}
with $\delta_{34}' = \delta_{12}' - \frac{\Delta_1+\Delta_2-\Delta_3-d/2-c'}{2}$, $\delta_{23}' = \frac{\Delta_1 +\Delta_2 +\Delta_3 -d/2-c'}{2} - \delta_{12}' - \delta_{13}'$.
The two integrals in \eqref{eq43} are performed as follows. From Barnes' first lemma, we have
\begin{equation} \int \frac{d\delta_{13}'}{2\pi i} \frac{\Gamma(\delta_{23}') \Gamma (\delta_{23} - \delta_{23}')\Gamma (\delta_{13}') \Gamma (\delta_{13} - \delta_{13}')}{\Gamma (\delta_{13})\Gamma (\delta_{23})} =
\frac{\Gamma (\delta_{13} + \delta_{23} - \frac{\Delta_1+\Delta_2+\Delta_3-d/2-c'}{2} + \delta_{12}') \Gamma (\frac{\Delta_1+\Delta_2+\Delta_3 -d/2-c'}{2} - \delta_{12}')}{\Gamma (\delta_{13} + \delta_{23})}\end{equation}
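Barnes' first lemma itself is straightforward to verify numerically by integrating along the imaginary axis. A self-contained Python sketch, using a Lanczos approximation for the Gamma function at complex argument; the parameter values are arbitrary test choices with positive real parts:

```python
import cmath
import math

# Lanczos approximation (g = 7) for the Gamma function at complex argument.
_G = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:                      # reflection formula
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _G[0] + sum(_G[k] / (z + k) for k in range(1, 9))
    w = z + 7.5
    return math.sqrt(2 * math.pi) * w ** (z + 0.5) * cmath.exp(-w) * x

# Barnes' first lemma, with the contour s = i t, so ds/(2 pi i) -> dt/(2 pi):
#   int ds/(2 pi i) G(a+s)G(b+s)G(c-s)G(d-s) = G(a+c)G(a+d)G(b+c)G(b+d)/G(a+b+c+d)
a, b, c, d_ = 0.6, 1.1, 0.8, 1.4          # generic test values, all real parts positive

step, T = 0.005, 20.0                     # trapezoidal grid; integrand decays ~exp(-2 pi |t|)
lhs = 0j
for k in range(int(round(2 * T / step)) + 1):
    s = 1j * (-T + k * step)
    lhs += cgamma(a + s) * cgamma(b + s) * cgamma(c - s) * cgamma(d_ - s)
lhs *= step / (2 * math.pi)

rhs = (math.gamma(a + c) * math.gamma(a + d_) * math.gamma(b + c)
       * math.gamma(b + d_) / math.gamma(a + b + c + d_))
assert abs(lhs - rhs) < 1e-8
```

The rapid decay of the integrand makes the plain trapezoidal sum essentially exact here.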
Next, we need
\begin{eqnarray} \mathcal{F} &=& \int \frac{d\delta_{12}'}{2\pi i} \frac{\Gamma (\delta_{12}' + \delta_{13} + \delta_{23} - \frac{\Delta_1 + \Delta_2 + \Delta_3 -d/2-c'}{2} )\Gamma (\frac{\Delta_1+\Delta_2+\Delta_3 -d/2-c'}{2} - \delta_{12}') \Gamma (\delta_{12} - \delta_{12}') }{\Gamma (\delta_{13} + \delta_{23} )\Gamma (\delta_{12}' - \frac{\Delta_1+\Delta_2-\Delta_3-d/2-c'}{2}) } \nonumber\\ & &
\times \prod_{\sigma = \pm}\Gamma \left( \delta_{12}' - \frac{\Delta_1+\Delta_2 - \frac{d}{2}+\sigma c}{2} \right) \end{eqnarray}
This is calculated with the aid of the identity
\begin{equation} \int \frac{ds}{2\pi i} \frac{\Gamma (a+s) \Gamma (b+s) \Gamma (f-c+s) \Gamma (e-a-b-s)\Gamma (-s)}{\Gamma (f+s)} =
\frac{\Gamma (a) \Gamma(b) \Gamma (e-a) \Gamma (e-b) \Gamma (f-c)}{\Gamma (e) \Gamma (f)} {}_3F_2 \left[ \begin{array}{c}a,b,c \\ e,f \end{array} \right] \end{equation}
where ${}_3F_2 \left[ \begin{array}{c}a,b,c \\ e,f \end{array} \right] = {}_3F_2 ( a,b,c; e,f ; 1)$.
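This identity can likewise be checked numerically, integrating along a vertical contour that separates the two families of poles; here $\mathrm{Re}\,s = -0.15$ for the test parameters below, which are arbitrary choices satisfying the convergence conditions. The Lanczos Gamma routine and the ${}_3F_2$ series summation are illustration code, not taken from the text:

```python
import cmath
import math

# Lanczos approximation (g = 7) for the Gamma function at complex argument.
_G = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:                      # reflection formula
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _G[0] + sum(_G[k] / (z + k) for k in range(1, 9))
    w = z + 7.5
    return math.sqrt(2 * math.pi) * w ** (z + 0.5) * cmath.exp(-w) * x

def hyp3f2_at_1(a, b, c, e, f, terms=200000):
    """Partial sum of 3F2(a,b,c;e,f;1); converges for Re(e+f-a-b-c) > 0."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) * (c + n) / ((e + n) * (f + n) * (n + 1.0))
    return total

a, b, c, e, f = 0.4, 0.5, 0.3, 2.0, 1.7   # test values; Re(e-a-b) > 0, Re(f-c) > 0

# contour Re s = -0.15 separates the poles of Gamma(-s), Gamma(e-a-b-s)
# from those of Gamma(a+s), Gamma(b+s), Gamma(f-c+s)
sig, step, T = -0.15, 0.005, 20.0
lhs = 0j
for k in range(int(round(2 * T / step)) + 1):
    s = sig + 1j * (-T + k * step)
    lhs += (cgamma(a + s) * cgamma(b + s) * cgamma(f - c + s)
            * cgamma(e - a - b - s) * cgamma(-s) / cgamma(f + s))
lhs *= step / (2 * math.pi)

rhs = (math.gamma(a) * math.gamma(b) * math.gamma(e - a) * math.gamma(e - b)
       * math.gamma(f - c) / (math.gamma(e) * math.gamma(f))
       * hyp3f2_at_1(a, b, c, e, f))
assert abs(lhs - rhs) < 1e-6
```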
We obtain
\begin{eqnarray} \mathcal{F} &=& \frac{\Gamma (\delta_{45} - \frac{\Delta_4 + \Delta_5 -d/2-c'}{2} ) \Gamma ( \frac{ \Delta_3 +c-c'}{2} )\prod_{\sigma = \pm}\Gamma(\delta_{12} - \frac{\Delta_1+\Delta_2 - \frac{d}{2}+\sigma c}{2}) }{\Gamma (\delta_{45} - \frac{\Delta_4 + \Delta_5-\Delta_3 -d/2-c}{2}) \Gamma (\delta_{12} - \frac{\Delta_1+\Delta_2-\Delta_3-d/2-c'}{2})} \nonumber\\ & &
\times {}_3F_2 \left[ \begin{array}{c} \delta_{45} - \frac{\Delta_4 + \Delta_5 -d/2-c'}{2} ,\delta_{12} - \frac{\Delta_1+\Delta_2 - \frac{d}{2}-c}{2},\frac{\Delta_3+c' + c}{2} \\
\delta_{45} - \frac{\Delta_4 + \Delta_5 -\Delta_3 -d/2-c}{2},\delta_{12} - \frac{\Delta_1+\Delta_2-\Delta_3-d/2-c'}{2} \end{array} \right]~. \end{eqnarray}
where we used $\delta_{12} + \delta_{13} + \delta_{23} - \frac{\Delta_1 + \Delta_2 + \Delta_3}{2} = \delta_{45} - \frac{\Delta_4 + \Delta_5}{2}$.
Therefore,
\begin{eqnarray}\label{eq43a} \mathcal{M}_5 (\delta_{ij} | c,c') &=& \frac{ \Gamma ( \frac{ \Delta_3 +c-c'}{2} ) \prod_{\sigma = \pm } \Gamma (\delta_{12} - \frac{\Delta_1+\Delta_2-d/2+\sigma c}{2})\Gamma(\delta_{45} - \frac{\Delta_4+\Delta_5 - \frac{d}{2}+\sigma c'}{2})}{\Gamma (\delta_{12}) \Gamma (\delta_{12} - \frac{\Delta_1+\Delta_2-\Delta_3-d/2-c'}{2}) \Gamma (\delta_{45})\Gamma (\delta_{45} - \frac{\Delta_4 + \Delta_5 -\Delta_3 -d/2-c}{2})} \nonumber\\ & &
\times \prod_{\sigma = \pm} \Gamma \left( \frac{ \Delta_3 +\sigma c+c'}{2} \right) \Gamma\left( \frac{\Delta_1+\Delta_2 - \frac{d}{2}+\sigma c}{2} \right) \Gamma\left( \frac{\Delta_4+\Delta_5 - \frac{d}{2}+\sigma c'}{2} \right) \nonumber\\ & &
\times {}_3F_2 \left[ \begin{array}{c} \delta_{45} - \frac{\Delta_4 + \Delta_5 -d/2-c'}{2} ,\delta_{12} - \frac{\Delta_1+\Delta_2 - \frac{d}{2}-c}{2},\frac{\Delta_3+c' + c}{2} \\
\delta_{45} - \frac{\Delta_4 + \Delta_5 -\Delta_3 -d/2-c}{2},\delta_{12} - \frac{\Delta_1+\Delta_2-\Delta_3-d/2-c'}{2} \end{array} \right]~. \end{eqnarray}
\subsection{Vector amplitudes}
In order to avoid an unnecessarily long calculation, we restrict attention to the case of on-shell amplitudes by setting
$\Delta_i =d-1$ ($i=1,\dots, 5$).
There are two different diagrams which we need to consider separately.
The diagram depicted in figure \ref{5ptva} has amplitude
\begin{equation}
\label{fivepointc}
A^{(5,v, (a))M_1 \dots M_5, a_1\dots a_5}= g^3\int \frac{dc}{2\pi i} f_{\delta,1} (c) \prod_{i=1}^5 D^{M_iA_i} (\Delta_i,P_i) \mathcal{A}_{A_1 A_2 A_3A_4A_5}^{a_1a_2a_3a_4a_5} (\{ \Delta_i, P_i \} |c ) \end{equation}
where
\begin{eqnarray}\label{eq118a} \mathcal{A}_{A_1 A_2 A_3A_4A_5}^{a_1a_2a_3a_4a_5} (\{ \Delta_i, P_i \} |c ) &=& \int_{\partial AdS} dQ
\eta_{NN'}D^{NC} (d/2+c, Q) \mathcal{A}_{A_1A_2A_3C}^{a_1a_2a_3b} (\Delta_1, P_1; \Delta_2, P_2;\Delta_3 , P_3; d/2+c, Q) \nonumber\\
&&\times D^{N' C'} (d/2-c,Q)f^{a_4a_5b} \mathcal{A}_{A_4 A_5 C'} (\Delta_4,P_4;\Delta_5, P_5; d/2-c, Q)
\end{eqnarray}
\begin{center}
\tikzset{
particle/.style={decorate, draw=black,
decoration={coil,aspect=0.08,segment length=3pt,amplitude=3pt}}}\begin{tikzpicture}[thick, node distance=1cm and 1.5cm]
\coordinate[label={[xshift=-3pt]left:$P_1,{A_1}, a_1~~$}] (e1);
\coordinate[below right=of e1,label={[xshift=3pt]right:$~X$}] (aux1);
\coordinate[above right=of aux1,label={[xshift=6pt]right:$~~P_3,{A_3}, a_3$}] (e2);
\coordinate[below=1cm of aux1] (aux);
\coordinate[above=1.5cm of aux1] (auxabove);
\coordinate[below=2cm of aux1,label={[xshift=3pt]right:$~Y$}] (aux2);
\coordinate[below left=of aux2,label={[xshift=-6pt]left:$P_5, {A_5}, a_5~~$}] (e3);
\coordinate[below right=of aux2,label={[xshift=3pt]right:$~~P_4,{A_4}, a_4$}] (e4);
\draw[particle] (auxabove) -- (aux1);
\draw[particle] (e1) -- (aux1);
\draw[particle] (aux1) -- (e2);
\draw[particle] (e3) -- (aux2);
\draw[particle] (aux2) -- (e4);
\draw[particle] (aux1) -- node {} (aux2);
\node[draw,name path=circle,line width=3pt,circle,fit=(e1) (e4),inner sep=.5\pgflinewidth] {};
\path[name path=diameter] let \p1=(aux1), \p2=(aux2)
in (aux1|-0,0.5*\y2+0.5*\y1) -- ++(-3cm,0);
\path[name intersections={of=circle and diameter, by={aux3}}];
\path[name path=diameter] let \p1=(aux1), \p2=(aux2)
in (aux1|-0,0.5*\y2+0.5*\y1) -- ++(-3cm,0);
\path[name intersections={of=circle and diameter, by={aux4}}];
\draw[particle2] (aux1) -- (aux4);
\draw[particle2] (aux2) -- (aux4);
\node[label={[xshift=2pt]above:$P_2,{A_2},a_2$}] at (auxabove) {};
\node[label={[xshift=2pt]left:$\int dQ ~~Q$}] at (aux4) {};
\end{tikzpicture}
\captionof{figure}{The five-point vector amplitude \eqref{fivepointc}.}
\label{5ptva}
\end{center}
The three-point amplitude in \eqref{eq118a} simplifies to
\begin{eqnarray}\label{eq48xa} D^{N' C'} (d/2-c,Q)\mathcal{A}_{A_4 A_5 C'} &=&\frac{2}{\frac{d}{2}-c} \Gamma \left( \frac{\frac{3d}{2}-c -1}{2} \right)\Gamma \left( \frac{\frac{3d}{2}+c -1}{2} \right) \Gamma^2 \left( \frac{\frac{d}{2}-c +1}{2} \right) \nonumber\\ & &
\times \left[ \eta_{A_4A_5} P_{4}^{N'} - 2
\delta_{A_4}^{N'} P_{4A_5} \right] P_{45}^{-\frac{\frac{3d}{2}+c+1}{2}} ( -2P_4\cdot Q)^{-\frac{\frac{d}{2}-c+1}{2}} ( -2P_{5}\cdot Q)^{-\frac{\frac{d}{2}-c-1}{2}} \nonumber\\ & &
- (4\longleftrightarrow 5)\end{eqnarray}
The four-point amplitude in \eqref{eq118a} simplifies to
\begin{equation} D^{NC} (d/2+c, Q) \mathcal{A}_{A_1A_2A_3C}^{a_1a_2a_3b} = \left( f^{a_1bb'} f^{a_2a_3b'} + f^{a_1a_3b'} f^{a_2bb'} \right) \int (\mathcal{D}_4 \mathcal{M})^{N}_{A_1A_2A_3} (\delta_{ij}' | c) \prod_{i<j} P_{ij}^{-\delta_{ij}'} \Gamma (\delta_{ij}') d\delta_{ij}' + \cdots + \cdots \end{equation}
where
\begin{eqnarray} (\mathcal{D}_4 \mathcal{M})^{N}_{A_1A_2A_3} &=& \Gamma \left( \frac{\frac{5d}{2} -3 +c}{2} \right) \eta_{A_1A_2}
\left[ \frac{\frac{d}{2} +c -1}{\frac{d}{2} +c}\delta_{A_3}^{ N} \right. \nonumber\\ & &
\left. - \frac{2 }{\frac{d}{2} +c} \left( \delta_{13}' \frac{P_{1A_3} }{P_{13}} + \delta_{23}' \frac{P_{2A_3}}{P_{23}} \right) \left( \frac{\delta_{14}'}{\delta_{34}' -1} \frac{P_{3}\cdot Q}{P_{1}\cdot Q} P_{1}^{N}+ \frac{\delta_{24}'}{\delta_{34}' -1} \frac{P_{3}\cdot Q}{P_{2}\cdot Q} P_{2}^{N}+ P_{3}^{N} \right) \right] \end{eqnarray}
The integral over $Q$ involves
\begin{equation}\label{eq101} \int_{\partial AdS} dQ \prod_{i=1}^5\Gamma (\lambda_i) (-2Q\cdot P_i)^{-\lambda_i} = \frac{\pi^{d/2}}{2} \int \prod_{i<j} \frac{d\tilde\delta_{ij}}{2\pi i} \Gamma (\tilde\delta_{ij}) P_{ij}^{-\tilde\delta_{ij}}\end{equation}
where
\begin{equation}\label{eq102} \lambda_1 = \delta_{14}' +n_1\ , \ \ \lambda_2 = \delta_{24}'+n_2 \ , \ \ \lambda_3 = \delta_{34}' +n_3 \ , \ \ \lambda_4 = \frac{\frac{d}{2}-c+1}{2} \ , \ \ \lambda_5 = \frac{\frac{d}{2}-c-1}{2} \end{equation}
and $\delta_{ij}'$ are the Mandelstam invariants for the four-point function, as in the scalar case. The various terms have $n_i \in \{-1,0,+1\}$ ($i=1,2,3$), with $\sum_{i=1}^{3} n_i =0$.
The integration variables are constrained by
\begin{equation}\label{eq245n} \sum_{j\ne i} \tilde\delta_{ij} = \lambda_i \end{equation}
In terms of the Mandelstam invariants,
\begin{equation}\label{eq104} \tilde\delta_{12} = \delta_{12} - \delta_{12}' \ , \ \ \tilde\delta_{23} = \delta_{23} - \delta_{23}' \ , \ \ \tilde\delta_{13} = \delta_{13} - \delta_{13}' \ , \ \ \tilde\delta_{45} = \delta_{45} - \frac{\frac{3d}{2}+c-3}{2} \ , \ \ \tilde\delta_{i4} = \delta_{i4} + n_i \ \ \ (i=1,2,3)~, \end{equation}
and $\tilde\delta_{ij} = \delta_{ij}$, otherwise.
We arrive at
\begin{eqnarray} \mathcal{A}_{A_1 A_2 A_3A_4A_5} &=& \frac{\pi^{d/2}}{2} \int\prod_{i<j} \frac{d\tilde\delta_{ij}}{2\pi i} \Gamma (\tilde\delta_{ij}) P_{ij}^{-\tilde\delta_{ij}} \nonumber\\ & &
\times \left[ \left( f^{a_1b'b} f^{a_2a_3b} + f^{a_1a_3b} f^{a_2b'b} \right) f^{b'a_4a_5} \mathcal{M}_{A_1\dots A_5} + \text{permutations of (123)} - (4\longleftrightarrow 5) \right] \end{eqnarray}
where
\begin{eqnarray} \mathcal{M}_{A_1\dots A_5} &=& \frac{\frac{d}{2}-c-1}{\frac{d}{2}-c} \Gamma \left( \frac{\frac{3d}{2}-c -1}{2} \right)\Gamma \left( \frac{\frac{3d}{2}+c -1}{2} \right) \Gamma \left( \frac{\frac{5d}{2} -3 +c}{2} \right) \eta_{A_1A_2}\left[ \eta_{A_4A_5} P_{4N} - 2
\eta_{A_4N} P_{4A_5} \right] \nonumber\\ & &
\times \int\prod_{i<j} d\delta_{ij}' \frac{\Gamma (\delta_{12} - \delta_{12}')\Gamma(\delta_{12}')\Gamma (\delta_{23} - \delta_{23}')\Gamma(\delta_{23}')\Gamma (\delta_{13} - \delta_{13}')\Gamma(\delta_{13}')\Gamma (\delta_{45} - \frac{\frac{3d}{2}+c-3}{2})}{\Gamma (\delta_{12})\Gamma (\delta_{23})\Gamma (\delta_{13})\Gamma (\delta_{45})} \nonumber\\ & &
\times \left[ \frac{\frac{d}{2} +c -1}{\frac{d}{2} +c}\delta_{A_3}^{ N} - \frac{2 }{\frac{d}{2} +c} \left( \delta_{13}' \frac{P_{1A_3} }{P_{13}} + \delta_{23}' \frac{P_{2A_3}}{P_{23}} \right) \left( \frac{\delta_{14}}{\delta_{34} -1} P_{1}^{N}+ \frac{\delta_{24}}{\delta_{34} -1} P_{2}^{N}+ P_{3}^{N} \right) \right] \end{eqnarray}
The integrals over the four-point Mandelstam invariants $\delta_{ij}'$ can be performed as in the scalar case.
Next, we turn to the diagram depicted in figure \ref{5ptvb}. It is given by (suppressing standard group theory indices)
\begin{equation}
\label{fivepoint}
A^{(5,v,(b))M_1 M_2 M_3 M_4M_5}= g^3 \int \frac{dcdc'}{(2\pi i)^2} f_{\delta,1} (c) f_{\delta,1} (c') \prod_{i=1}^5 D^{M_iA_i} (\Delta_i,P_i) \mathcal{A}_{A_1 A_2 A_3A_4A_5} (\{ \Delta_i, P_i \} |c,c' ) \end{equation}
where
\begin{eqnarray}\label{eq118} \mathcal{A}_{A_1 A_2 A_3A_4A_5} (\{ \Delta_i, P_i \} |c,c' ) &=& \int_{\partial AdS} dQ
\eta_{NN'}D^{NC} (d/2+c, Q) \mathcal{A}_{A_1A_2A_3C} (\Delta_1, P_1; \Delta_2, P_2;\Delta_3 , P_3; d/2+c, Q|c') \nonumber\\
&&\times D^{N' C'} (d/2-c,Q)\mathcal{A}_{A_4 A_5 C'} (\Delta_4,P_4;\Delta_5, P_5; d/2-c, Q)
\end{eqnarray}
\begin{center}
\tikzset{
particle/.style={decorate, draw=black,
decoration={coil,aspect=0.08,segment length=3pt,amplitude=3pt}}}\begin{tikzpicture}[thick, node distance=1cm and 1.5cm]
\coordinate[label={[xshift=-3pt]left:$P_1,{A_1}, a_1~~$}] (e1);
\coordinate[below right=of e1,label={[xshift=3pt]right:$~X$}] (aux1);
\coordinate[above right=of aux1,label={[xshift=6pt]right:$~~P_2,{A_2}, a_2$}] (e2);
\coordinate[below=1cm of aux1,label={[xshift=3pt]above left:$Z~~~$}] (aux);
\coordinate[below=2cm of aux1,label={[xshift=3pt]right:$~Y$}] (aux2);
\coordinate[below left=of aux2,label={[xshift=-6pt]left:$P_5, {A_5}, a_5~~$}] (e3);
\coordinate[below right=of aux2,label={[xshift=3pt]right:$~~P_4,{A_4}, a_4$}] (e4);
\draw[particle] (e1) -- (aux1);
\draw[particle] (aux1) -- (e2);
\draw[particle] (e3) -- (aux2);
\draw[particle] (aux2) -- (e4);
\draw[particle] (aux1) -- node {} (aux2);
\node[draw,name path=circle,line width=3pt,circle,fit=(e1) (e4),inner sep=.5\pgflinewidth] {};
\path[name path=diameter] let \p1=(aux1), \p2=(aux2)
in (aux1|-0,0.5*\y2+0.5*\y1) -- ++(3cm,0);
\path[name intersections={of=circle and diameter, by={aux3}}];
\path[name path=diameter] let \p1=(aux1), \p2=(aux2)
in (aux1|-0,0.5*\y2+0.5*\y1) -- ++(-3cm,0);
\path[name intersections={of=circle and diameter, by={aux4}}];
\draw[particle2] (aux) -- (aux4);
\draw[particle2] (aux2) -- (aux4);
\draw[particle] (aux) -- (aux3);
\node[label={[xshift=2pt]right:$P_3,{A_3},a_3$}] at (aux3) {};
\node[label={[xshift=2pt]left:$\int dQ~~ Q$}] at (aux4) {};
\end{tikzpicture}
\captionof{figure}{The five-point vector amplitude \eqref{fivepoint}.}
\label{5ptvb}
\end{center}
The three-point amplitude in \eqref{eq118} is given by \eqref{eq48xa}.
The four-point amplitude simplifies to
\begin{equation} D^{NC} (d/2+c, Q) \mathcal{A}_{A_1A_2A_3C} =\int (\mathcal{D}_4 \mathcal{M})^{N}_{A_1A_2A_3} (\delta_{ij}' | c') \prod_{i<j} P_{ij}^{-\delta_{ij}'} \Gamma (\delta_{ij}') d\delta_{ij}' \end{equation}
where
\begin{equation} (\mathcal{D}_4 \mathcal{M})^{N}_{A_1A_2A_3} = \frac{\frac{d}{2}+c -1}{\frac{d}{2}+c}\eta^{NC} \mathcal{M}_{A_1 A_2 A_3C}
- \frac{1}{\frac{d}{2}+c} \left( \delta_{14}'P_{1}^{N}\frac{P_{3}\cdot Q}{P_{1}\cdot Q} + \delta_{24}'P_{2}^{N}\frac{P_{3}\cdot Q}{P_{2}\cdot Q} + (\delta_{34}' -1) P_{3}^{N}\right) \frac{Q^{C}}{P_{3}\cdot Q} \mathcal{M}_{A_1 A_2 A_3C} \end{equation}
\begin{eqnarray} \mathcal{M}_{A_1 A_2 A_3C} &=& \frac{(\frac{d}{2}-1)^2 -{c'}^2}{\frac{d^2}{4}-{c'}^2} \left[ \mathcal{I}({1,2,3,4})
\delta_{13}' + \mathcal{I} (1,2,4,3) \delta_{14}' - (1\longleftrightarrow 2) \right]
\nonumber\\ & &
\times \frac{\prod_{\sigma=\pm}\Gamma (\frac{\frac{3d}{2}+\sigma c' -1}{2}) \Gamma ( \frac{d + c + \sigma c' }{2} ) \Gamma ( \delta_{12}' - \frac{\frac{3d}{2}+\sigma c' -1}{2})}{\Gamma (\delta_{12}') \Gamma (\delta_{12}' +1-\frac{\frac{d}{2}-c+1}{2})} \end{eqnarray}
\begin{equation} \mathcal{I}({1,2,3,4}) = -\frac{ c-c' }{d-2(c'+1)} \eta_{A_1A_2}\eta_{A_3C}
- 2 \eta_{A_1A_2} \frac{P_{1A_3} P_{3C}}{P_{13}} +4
\eta_{A_1A_3} \frac{P_{1A_2} P_{3C}}{P_{13}}
-2\frac{c-c' }{\frac{d}{2}-c'-1} \eta_{A_3C}
\frac{P_{1A_2} P_{3A_1}}{P_{13}}
\end{equation}
\begin{eqnarray} \mathcal{I}({1, 2, 4,3}) &=& \frac{ c+c'- d +2 }{d-2(c'+1)} \eta_{A_1A_2}\eta_{A_3C}
-\frac{2}{\delta_{34}' -1}\left( 2
\eta_{A_1A_4} P_{1A_2} - \eta_{A_1A_2} P_{1C} \right) \left( \delta_{13}' \frac{ P_{1A_3}}{P_{13}} + \delta_{23}' \frac{ P_{2A_3}}{P_{23}} \right)\frac{P_{3}\cdot Q}{P_{1}\cdot Q} \nonumber\\ & &
-\frac{2}{\delta_{14}'}\frac{c+c' }{\frac{d}{2}-c'-1} \eta_{A_3C}
\left( \frac{1}{2}\eta_{A_1A_2} + \delta_{12}' \frac{P_{2A_1}P_{1A_2}}{P_{12}} + \delta_{13}' \frac{P_{3A_1}P_{1A_2}}{P_{13}} \right)
\end{eqnarray}
\begin{eqnarray}\label{eq88a2} \frac{P_{13}}{-2P_{3}\cdot Q}Q^{C}\mathcal{I}({1, 2, 3,4}) &=& \frac{ c-c' }{(\delta_{34}' -1)(d-2(c'+1))} \eta_{A_1A_2} \left( \delta_{13}' P_{1A_3} + \delta_{23}' \frac{P_{13}}{P_{23}} P_{2A_3} \right)
+ \eta_{A_1A_2} P_{1A_3}
+4
\eta_{A_1A_3} P_{1A_2} \nonumber\\ & &
+\frac{2}{\delta_{34}' -1}\frac{c-c' }{\frac{d}{2}-c'-1}
P_{1A_2}\left( \frac{1}{2}\eta_{A_1A_3} + (\delta_{13}' +1)\frac{P_{3A_1}P_{1A_3}}{P_{13}} + \delta_{23}' \frac{P_{3A_1}P_{2A_3}}{P_{23}} \right) \nonumber\\ & &
\end{eqnarray}
and
\begin{eqnarray}\label{eq89a2}
\frac{P_{13}}{-2P_{3}\cdot Q}Q^{C}\mathcal{I}({1, 2, 4,3}) &=& -\frac{ c+c'- d +2 }{(\delta_{34}' -1)(d-2(c'+1))} \eta_{A_1A_2} \left(\delta_{13}' P_{1A_3} + \delta_{23}' \frac{P_{13}}{P_{23}} P_{2A_3} \right) \nonumber\\ & &
-\frac{1}{\delta_{34}' -1} \eta_{A_1A_2} \left( \delta_{13}' P_{1A_3} + \delta_{23}' \frac{ P_{13}}{P_{23}} P_{2A_3} \right) \nonumber\\ & &
+\frac{4\delta_{13}'}{\delta_{14}'(\delta_{34}' -1)}
\left[ \frac{1}{2}\eta_{A_1A_2} P_{1A_3}+\frac{1}{2}\eta_{A_1A_3} P_{1A_2}+ \left( \delta_{12}' \frac{P_{2A_1}}{P_{12}} + (\delta_{13}' +1) \frac{P_{3A_1}}{P_{13}} \right) P_{1A_2}P_{1A_3} \right] \nonumber\\ & &
+\frac{4 \delta_{23}'}{\delta_{14}'(\delta_{34}' -1)}
\frac{ P_{2A_3}}{P_{23}}P_{13}\left( \frac{1}{2}\eta_{A_1A_2} + \delta_{12}' \frac{P_{2A_1}P_{1A_2}}{P_{12}} + \delta_{13}' \frac{P_{3A_1}P_{1A_2}}{P_{13}} \right) \nonumber\\ & &
-\frac{2}{\delta_{14}'(\delta_{34}' -1)}\frac{c-c' }{\frac{d}{2}-c'-1}
\left( \eta_{A_1A_2} + \delta_{12} \frac{P_{2A_1}P_{1A_2}}{P_{12}} \right) \left(\delta_{13}' P_{1A_3} + \delta_{23}' \frac{P_{13}}{P_{23}} P_{2A_3} \right) \nonumber\\ & &
-\frac{2\delta_{13}'}{\delta_{14}'(\delta_{34}' -1)}\frac{c-c' }{\frac{d}{2}-c'-1}
P_{1A_2} \left( \frac{1}{2}\eta_{A_1A_3} + (\delta_{13}' +1)\frac{P_{3A_1}P_{1A_3}}{P_{13}} + \delta_{23}' \frac{P_{3A_1}P_{2A_3}}{P_{23}} \right) \nonumber\\ & &
\end{eqnarray}
The integral over $Q$ is of the same form as before (Eq.\ \eqref{eq101} with exponents given by \eqref{eq102}). However, this case is slightly more complicated because $n_i \in \{ 0, \pm 1, \pm 2\}$ ($i=1,2,3$), with $\sum_{i=1}^{3} n_i =0$.
The integration variables are constrained by \eqref{eq245n}, and are given
in terms of the Mandelstam invariants by \eqref{eq104}.
\section{Higher-point amplitudes}
\label{secV}
In this section, we suppress group theory indices, as they enter the calculations only as constant factors. Thus, by amplitude we mean a sub-amplitude with a given group theory structure.
Higher-point amplitudes can be calculated recursively by sewing together diagrams. Consider two scalar diagrams with $N_1$ and $N_2$ external legs, respectively. Suppose they have been calculated and put in the form
\begin{equation} A_{N_1s} (\Delta_1, P_1; \dots ; \Delta_{N_1} , P_{N_1} ) = \frac{g^{N_1-2}}{\prod_i 2\pi^{d/2} \Gamma (\Delta_i +1 - \frac{d}{2})} \int [dc'] \mathcal{A}_{N_1} ( \{ \Delta_i , P_i \} | [c'] ) \end{equation}
with
\begin{equation} \mathcal{A}_{N_1} ( \{ \Delta_i , P_i \} | [c'] ) = \frac{\pi^{d/2}}{2} \int \mathcal{M}_{N_1} (\delta_{ij}' | [c']) \prod_{i<j} \Gamma (\delta_{ij}') P_{ij}^{-\delta_{ij}'} d\delta_{ij}' \ \ , \ \ \ \ \sum_{j\ne i} \delta_{ij}' = \Delta_i \end{equation}
and similarly
\begin{equation} A_{N_2s} (\Delta_{1}', P_{1}'; \dots ; \Delta_{N_2}' , P_{N_2}' ) = \frac{g^{N_2-2}}{\prod_i 2\pi^{d/2} \Gamma (\Delta_i' +1 - \frac{d}{2})} \int [dc''] \mathcal{A}_{N_2} ( \{ \Delta_i' , P_i' \} | [c''] ) \end{equation}
with
\begin{equation} \mathcal{A}_{N_2} ( \{ \Delta_i' , P_i' \} | [c''] ) = \frac{\pi^{d/2}}{2} \int \mathcal{M}_{N_2} (\delta_{ij}'' | [c'']) \prod_{i<j} \Gamma (\delta_{ij}'') (P_{ij}')^{-\delta_{ij}''} d\delta_{ij}'' \ \ , \ \ \ \ \sum_{j\ne i} \delta_{ij}'' = \Delta_i' \end{equation}
After sewing together the last two legs in the respective diagrams, we create an $N$-point diagram with $N=N_1+N_2-2$. Its amplitude is given by
\begin{equation} A_{Ns} (\Delta_1, P_1; \dots ; \Delta_{N} , P_{N} ) = \frac{g^{N-2}}{\prod_i 2\pi^{d/2} \Gamma (\Delta_i +1 - \frac{d}{2})} \int [dc'][dc'']\frac{dc}{2\pi i} f_{\delta ,0} (c) \mathcal{A}_N ( \{ \Delta_i , P_i \} | [c'],[c''],c ) \end{equation}
with
\begin{equation} \mathcal{A}_N = \int_{\partial AdS} dQ \mathcal{A}_{N_1} (\Delta_1, P_1; \dots ; \Delta_{N_1-1}, P_{N_1-1}; d/2+c , Q | [c'])
\mathcal{A}_{N_2} (\Delta_{N_1}, P_{N_1} ; \dots ; \Delta_N , P_N ; d/2-c, Q | [c'']) \end{equation}
The integral over $Q$ involves
\begin{equation} \int_{\partial AdS} dQ \prod_{i=1}^N \Gamma (\lambda_i) (-2Q\cdot P_i)^{-\lambda_i} = \frac{\pi^{d/2}}{2} \int \prod_{i<j} d\tilde\delta_{ij} \Gamma (\tilde\delta_{ij}) P_{ij}^{-\tilde\delta_{ij}} \end{equation}
with $\sum_{j\ne i} \tilde\delta_{ij} = \lambda_i$, and
\begin{equation} \lambda_i = \delta_{iN_1}' \ \ \ \ (i=1,\dots, N_1-1) \ \ , \ \ \ \ \lambda_{N_1+i} = \delta_{iN_2}'' \ \ \ \ (i=1,\dots, N_2-1) \end{equation}
Then the part of the amplitude involving the vectors $P_i$ is
\begin{equation} \left( \prod_{i<j}^{N_1-1} P_{ij}^{-\delta_{ij}'} \right) \left( \prod_{i<j}^{N_2-1} P_{N_1+i-1 \, N_1+j-1}^{-\delta_{ij}''} \right) \left( \prod_{i<j}^N P_{ij}^{-\tilde\delta_{ij}} \right) = \prod_{i<j} P_{ij}^{-\delta_{ij}} \end{equation}
where $\delta_{ij}$ are the Mandelstam invariants for the $N$-point amplitude given by
\begin{equation} \delta_{ij} = \tilde\delta_{ij} + \delta_{ij}' \ \ (i,j = 1,\dots, N_1-1) \ \ , \ \ \ \ \delta_{N_1+i-1 \, N_1+j-1} = \tilde\delta_{N_1+i-1 \, N_1+j-1} + \delta_{ij}'' \ \ (i,j = 1,\dots, N_2-1)~,\end{equation}
and $\delta_{ij} = \tilde\delta_{ij}$, otherwise. They obey the constraints $\sum_{i\ne j} \delta_{ij} = \Delta_i$, as can easily be checked.
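The bookkeeping in the last two equations can be checked numerically. The following Python sketch (illustrative only; the function name and the random sampling are our own) draws random invariants satisfying the sub-amplitude and $Q$-integral constraints, combines them as above, and verifies $\sum_{j\ne i}\delta_{ij}=\Delta_i$ on the external legs of the first diagram.

```python
import random

def check_sewing(N1=4, N2=4, seed=1):
    """Numerically verify the constraint on the combined Mandelstam invariants."""
    rng = random.Random(seed)
    N = N1 + N2 - 2
    # random symmetric tilde-delta with zero diagonal (from the Q integral)
    tilde = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            tilde[i][j] = tilde[j][i] = rng.uniform(0.1, 1.0)
    lam = [sum(tilde[i]) for i in range(N)]  # lambda_i = sum_{j != i} tilde_ij
    # delta' for the N1-point sub-amplitude; the sewn leg N1 carries
    # delta'_{i N1} = lambda_i for i = 1, ..., N1-1
    dp = [[0.0] * N1 for _ in range(N1)]
    for i in range(N1 - 1):
        dp[i][N1 - 1] = dp[N1 - 1][i] = lam[i]
        for j in range(i + 1, N1 - 1):
            dp[i][j] = dp[j][i] = rng.uniform(0.1, 1.0)
    Delta = [sum(dp[i]) for i in range(N1)]  # sum_{j != i} delta'_ij
    # combined invariants: delta_ij = tilde_ij + delta'_ij for i,j <= N1-1,
    # and delta_ij = tilde_ij otherwise
    for i in range(N1 - 1):
        row = sum(tilde[i][j] + (dp[i][j] if j < N1 - 1 else 0.0)
                  for j in range(N) if j != i)
        if abs(row - Delta[i]) > 1e-9:
            return False
    return True
```

By symmetry, the same check applies to the external legs of the second diagram after relabeling.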
It follows that the $N$-point amplitude can be cast in the form
\begin{equation}\label{eq126} \mathcal{A}_{N} = \frac{\pi^{d/2}}{2} \int \mathcal{M}_{N} \prod_{i<j} \Gamma (\delta_{ij}) P_{ij}^{-\delta_{ij}} d\delta_{ij} \end{equation}
where
\begin{equation} \mathcal{M}_N = \frac{\pi^{d/2}}{2} \int d\delta_{ij}' \int d\delta_{ij}'' \prod_{i<j}^{N_1-1} \frac{\Gamma (\delta_{ij} - \delta_{ij}')\Gamma (\delta_{ij}')}{\Gamma (\delta_{ij})} \prod_{i<j}^{N_2-1} \frac{\Gamma (\delta_{N_1+i-1 \, N_1+j-1} - \delta_{ij}'')\Gamma (\delta_{ij}'')}{\Gamma (\delta_{N_1+i-1 \, N_1+j-1})} \mathcal{M}_{N_1} \mathcal{M}_{N_2}
\end{equation}
The above procedure can be applied to the case of vector amplitudes, which can thus be written in the form \eqref{eq126}. In the vector case, an $N_1$-point diagram is given by
\begin{equation} A_{N_1v}^{M_1\cdots M_{N_1}} (\Delta_1, P_1; \dots ; \Delta_{N_1} , P_{N_1} ) = \int [dc'] \prod_{i=1}^{N_1} D^{M_iA_i} (\Delta_i ,P_i) \mathcal{A}_{A_1\cdots A_{N_1}} ( \{ \Delta_i , P_i \} | [c'] ) \end{equation}
with
\begin{equation} \mathcal{A}_{A_1\cdots A_{N_1}} ( \{ \Delta_i , P_i \} | [c'] ) = \frac{\pi^{d/2}}{2} \int \mathcal{M}_{A_1\cdots A_{N_1}} (\delta_{ij}' | [c']) \prod_{i<j} \Gamma (\delta_{ij}') P_{ij}^{-\delta_{ij}'} d\delta_{ij}' \end{equation}
Similarly, an $N_2$-point diagram is given by
\begin{equation} A_{N_2v}^{M_1\cdots M_{N_2}} (\Delta_1', P_1'; \dots ; \Delta_{N_2}' , P_{N_2}' ) = \int [dc''] \prod_{i=1}^{N_2} D^{M_iA_i} (\Delta_i' ,P_i') \mathcal{A}_{A_1\cdots A_{N_2}} ( \{ \Delta_i' , P_i' \} | [c''] ) \end{equation}
with
\begin{equation} \mathcal{A}_{A_1\cdots A_{N_2}} ( \{ \Delta_i' , P_i' \} | [c''] ) = \frac{\pi^{d/2}}{2} \int \mathcal{M}_{A_1\cdots A_{N_2}} (\delta_{ij}'' | [c'']) \prod_{i<j} \Gamma (\delta_{ij}'') (P_{ij}')^{-\delta_{ij}''} d\delta_{ij}'' \end{equation}
By sewing together these two diagrams, we obtain an $N$-point vector diagram with amplitude
\begin{equation} A_{Nv}^{M_1\cdots M_{N}} (\Delta_1, P_1; \dots ; \Delta_{N} , P_{N} ) = \int [dc'][dc'']\frac{dc}{2\pi i} f_{\delta ,1} (c) \prod_{i=1}^N D^{M_iA_i} (\Delta_i ,P_i) \mathcal{A}_{A_1\cdots A_{N}} ( \{ \Delta_i , P_i \} | [c'],[c''],c ) \end{equation}
with
\begin{eqnarray}\label{eq133} \mathcal{A}_{A_1\cdots A_{N}} &=& \int_{\partial AdS} dQ \eta_{MM'} D^{MC} (d/2+c, Q) \mathcal{A}_{A_1\cdots A_{N_1-1}C} (\Delta_1, P_1; \dots ; \Delta_{N_1-1}, P_{N_1-1}; d/2+c , Q | [c']) \nonumber\\ & &
\times D^{M'C'} (d/2-c,Q) \mathcal{A}_{A_1\cdots A_{N_2-1}C'} (\Delta_{N_1}, P_{N_1} ; \dots ; \Delta_N , P_N ; d/2-c, Q | [c'']) \end{eqnarray}
The integration over $Q$ can be performed in the same way as in the scalar case provided $Q$ appears only in dot products (as in $P_i\cdot Q$), i.e., no term contains $Q$ with a free index. This can be ensured by repeated application of the identity \eqref{eq42i4}, as we have already demonstrated. This leads to expressions for the two factors in \eqref{eq133}, $D^{MC}\mathcal{A}_{A_1\cdots A_{N_1-1}C}$ and $D^{M'C'} \mathcal{A}_{A_1\cdots A_{N_2-1}C'}$, respectively, containing no $Q$ with a free index.
After integrating over $Q$, we arrive at an expression for the amplitude of the form
\begin{equation}\label{eq126v} \mathcal{A}_{A_1\cdots A_{N}} = \frac{\pi^{d/2}}{2} \int \mathcal{M}_{A_1\cdots A_{N}} \prod_{i<j} \Gamma (\delta_{ij}) P_{ij}^{-\delta_{ij}} d\delta_{ij} \end{equation}
where $\mathcal{M}_{A_1\cdots A_{N}}$ is given in terms of the same functions as in the scalar case.
To complete the iteration, we need to apply the identity \eqref{eq42i4} again, as many times as needed, to $D^{M_iA_i} \mathcal{A}_{A_1\cdots A_{N}}$, in order to eliminate all occurrences of $Q$ with a free index. The resulting expressions can then be used for the calculation of higher-point amplitudes.
\section{Conclusion}
\label{secVI}
We discussed an iterative method of calculation of Witten diagrams in AdS space based on the formalism developed in \cite{Paulos:2011ie}. We applied our method to scalar and vector fields and showed that they can both be written in terms of Mellin amplitudes which can be computed explicitly. We showed how this is done in detail for three-, four-, and five-point diagrams. We demonstrated that the index structure in the vector case did not present additional difficulties in the calculation of integrals over AdS space, by taking advantage of the conformal structure of the amplitudes.
Our method can be straightforwardly generalized to higher-spin fields (calculation of correlators of stress-energy tensors, etc). As it provides a systematic way of calculating diagrams, which appears to be uniformly applicable to fields of any spin, it would be interesting to use our method toward the development of general Feynman rules for the calculation of Witten diagrams.
Work in this direction is in progress.
\acknowledgments
We thank Miguel Paulos for all his helpful comments. Research supported in part by the Department of Energy under Grant No.\ DE-FG05-91ER40627.
\section{Introduction}
\subsection{Graph Finding Problem}
We consider the problem of finding the edges of a hidden weighted graph using a certain type of query. Let $G$ be a weighted graph with $n$ vertices.
In the most general setting, the $n$ vertices are known and no other information about $G$ is given. The problem is finding all edges of $G$ and their weights using queries. Three types of queries have been extensively studied:
\mn
{\bf Detection query:} One chooses a set of vertices and asks if there is an edge with both ends in the set. This type of queries has applications to genome sequencing and has been studied in \cite{AA05,ABKRS04,AC04,AC06,GK97,GK98}.
\mn
{\bf Additive query:} One chooses a set of vertices and asks the sum of weights of edges with both ends in the set. This model has been extensively used in bioinformatics including genome sequencing, and studied in \cite{AC04,BGK05,BM10_STACS,BM10_MFCS,BM11_SODA,BM11_TCS,CK10_AI, Grebinski98,GK98,GK00,Mazzawi10,RS07}.
\mn
{\bf Shortest path query:} One chooses a pair of vertices and asks the length of the shortest path between the two vertices. This query arises in the canonical model
of the evolutionary tree literature \cite{Hein89,KZZ03,RS07a}.
\mn
(Our lists of references are far from being exhaustive.)
\mn
In this paper, we focus on the additive queries. The graph finding problem with additive queries is partly motivated by the shotgun sequencing \cite{BAFK01,GK98}, one of the most popular methods for DNA sequencing. In the shotgun sequencing, one needs
to put back separately decoded short fragments of a given genome sequence
into the same order as in the original sequence.
Combined with a biotech method called the multiplex PCR \cite{TRKKS99},
the process is reduced to the problem of finding a hidden graph using additive queries.
The additive queries are also used in the problem of finding the Fourier coefficients of pseudo-Boolean functions, which play crucial roles in evolutionary computation, artificial intelligence, and population genetics \cite{CK10_AI,CJK11_JCSS,CK11_AI}.
In the rest of this paper, we say queries for additive queries and
all logarithms are in base 2, unless otherwise specified.
For unweighted graphs,
Grebinski and
Kucherov presented a few results. For arbitrary graphs on $n$ vertices, they have shown that $O(\frac{n^2}{\log n})$ queries are enough \cite{GK00}. If the hidden graph is known to be a Hamiltonian path or cycle, then $O(n)$ queries suffice \cite{GK98}.
More generally, if the maximum degree of the hidden graph is known to be at most $d$, then the graph may be found using $O(dn)$ queries \cite{GK00}. Grebinski \cite{Grebinski98} has shown that the same bound $O(dn)$ holds for $d$-degenerate graphs.
When the hidden graph has at most $m\geq 2$ edges and $m$ is known,
some bounds close to the optimal bound were shown \cite{AC04, RS07} and Choi and Kim \cite{CK10_AI} proved an $O(\frac{m\log (n^2/m)}{\log m})$ bound that is optimal (up to a constant factor). The randomized algorithm presented there uses non-adaptive queries but it is not a polynomial time algorithm, where queries are non-adaptive if each query is independent of answers to the previous queries.
Recently, Mazzawi
\cite{Mazzawi10} constructed a polynomial time algorithm with optimal query complexity. The algorithm is deterministic and uses adaptive queries. She also extended the algorithm to find weighted graphs with positive integer weights.
For weighted graphs,
Choi and Kim \cite{CK10_AI} proved a non-adaptive $O(\frac{m\log n}{\log m})$ query bound, provided that $m$ is at least a polylog of $n$ and the absolute values of all weights are between $n^{-a}$ and $n^b$ for constants $a,b>0$. Bshouty and Mazzawi \cite{BM11_TCS} showed the same bound without the extra conditions. However, it seems unlikely that a polynomial time algorithm can be developed from those results. In other words, substantially new ideas seem to be needed to design an algorithm that is useful in a practical sense. A significant result toward this direction has been shown by Bshouty and Mazzawi \cite{BM10_MFCS}: For weighted graphs with positive real weights, they presented a deterministic polynomial time algorithm that uses an almost optimal number of (adaptive) queries,
$O(\frac{m\log n}{\log m} + m\log\log m)$.
Note that the extra $m\log\log m$ term is larger than the optimal query bound by a $\log\log n$ factor when $\log m =\Omega (\log n)$.
To obtain the optimal query complexity $O(\frac{m\log n}{\log m})$, Choi and Kim \cite{CK12} have recently introduced a randomized polynomial time algorithm that finds the hidden weighted graph with positive real weights.
Another randomized polynomial time algorithm
they introduced uses $O(\frac{m\log n}{\log m})$ queries to find the hidden weighted graph with bounded integer weights.
In this paper, we present a randomized polynomial time algorithm
that works for a quite general class of weighted graphs.
Using the optimal number of queries up to a constant factor, the algorithm finds the hidden weighted graph provided that the weight $w(e)$ of each edge $e$ in the graph satisfies $\ga\leq |w(e)| \leq \gb$ for positive constants $\ga$ and $\gb$. The theorem we will prove is slightly more general in the sense that $\ga, \gb$ are not necessarily constants.
\begin{theorem}\label{gfpm}
Let $n,m$ be positive integers with $n^2\geq m\geq 2$ and let $\ga, \gb>0$ be positive real numbers (not necessarily constants) with $2\ga<\gb$.
Suppose a weighted graph $G$ with $n$ vertices and at most $m$ edges is given. If the weight $w(e)$ of each edge in $G$ satisfies $\ga \leq |w(e)|\leq \gb$, then there is a randomized polynomial time algorithm that asks $O(\frac{m \log (\gb/\ga) \log n}{\log m})$ queries to
find all edges with probability $1-O(1/{m^{0.02}})$.
\end{theorem}
\old{It is not clear that the lower and upper bounds for the absolute values of edge weights are really necessary. It seems to be not extremely
surprising even if they were, though.}
Our proof of the theorem heavily relies on a well-known combinatorial search problem. Suppose there are $n$ identical-looking coins and some of them are counterfeit. The weights of all authentic coins are the same and known a priori. The weights of counterfeit coins vary but are different from the weight of an authentic coin.
The problem is to find all counterfeit coins by weighing sets of coins on a spring scale.
Note that weighing sets of coins on a spring scale may be regarded as additive queries. This problem is also equivalent to the graph finding problem when the graphs are restricted to stars $K_{1,m} $ with known center. The coin weighing problem has been extensively studied. We survey its colorful history and add one more algorithm finding all counterfeit coins when the weight of each counterfeit coin satisfies properties similar to those described in the above theorem.
\subsection{Coin Weighing Problem}
Suppose there are $n$ identical-looking coins, some of which are counterfeit. The weights of all authentic coins are the same and known a priori, while the weights of counterfeit coins are unknown but different from the weight of an authentic coin. Without loss of generality, it may be assumed that the weights of authentic coins are $0$ and the weights of counterfeit coins belong to a set of non-zero real numbers. We want to find all counterfeit coins by weighing sets of coins on a spring scale, which we call additive queries or simply queries.
After the coin weighing problem was introduced by Fine \cite{Fine60} and Shapiro \cite{Shapiro60}, a number of results have been published,
mainly focusing on the case that the weights of counterfeit coins are the same \cite{Cantor64,CM66,ER63,Lindstrom64,Lindstrom65,Lindstrom71, Moser70,SS63}: Summarizing some of them briefly, Erd\H{o}s and
R\'{e}nyi \cite{ER63}, in 1963, proved that $\frac{(\log 9 +o(1))n}{\log n}$ queries are enough and $\frac{(2+o(1))n}{\log n}$ queries are required.
(See \cite{LV94} for another proof of the lower bound.) The upper bound was improved to match the lower bound by Cantor and Mills \cite{CM66}, and Lindstr\"{o}m
\cite{Lindstrom65}. Using the M\"obius function, Lindstr\"{o}m \cite{Lindstrom71,Lindstrom75} explicitly constructed a query matrix that asks $\frac{(2+o(1))n}{\log n}$ queries.
The case that the number $m$ of counterfeit coins is also known has been extensively studied too \cite{Capetanakis79b,Capetanakis79a,
DH93,GK00,Lindstrom75,M81,TM78,UTW00}. Recently, Bshouty \cite{Bshouty09} proposed the first polynomial time algorithm
that uses $\frac{(1+o(1))2m\log \frac{n}{m}}{\log m}$ adaptive queries. The query complexity is optimal up to $o(1)$ term.
\old{ When the weights of all coins are non-negative integers and their sum is known (instead of the number $m$ of counterfeit coins), then he was able to modify the algorithm to find all counterfeit coins with an optimal query complexity up to a constant factor.}
Results for the general case, in which the weights of counterfeit coins are not the same, have been obtained only recently. As the results were applied to the (weighted) graph finding problem, our summary is almost the same as in the previous subsection. When the weights of the counterfeit coins can be any (not necessarily positive) real numbers,
Choi and Kim \cite{CK10_AI} proposed an algorithm with a non-adaptive $O(\frac{m\log n}{\log m})$ query bound, under mild conditions on $m$ and the weights, i.e., $m=\Omega({\rm polylog} n)$ and the absolute values of all weights are between $n^{-a}$ and $n^b$ for constants $a,b>0$. Bshouty and Mazzawi \cite{BM11_TCS} showed the same bound without the extra conditions. Though the query complexities of both algorithms are optimal, their time complexities are far from polynomial. Concerning polynomial time algorithms, Bshouty and Mazzawi \cite{BM10_MFCS} presented a deterministic polynomial time algorithm that uses a near optimal number of (adaptive) queries,
$O(\frac{m\log n}{\log m} + m\log\log m)$, assuming the weights of all counterfeit coins are positive real numbers. They first constructed a search matrix using Fourier representations, and took the divide and conquer approach to guess the sums of the weights of coins. The search matrix played key roles when the sums of the weights were guessed. The processes for checking and correction follow after guessing.
As mentioned before, the extra $m\log\log m$ term is larger than the optimal bound by a $\log \log n$ factor when $\log m=\Omega (\log n)$. Choi and Kim \cite{CK12} presented a polynomial time randomized algorithm to remove the $m\log\log m$ term in the query complexity. Another polynomial time randomized algorithm may be applied to achieve the optimal query complexity, when the weights of counterfeit coins are bounded integers
in absolute values. The key idea is constructing random sets of coins that are useful to control the number of checking and correction processes used by Bshouty and Mazzawi \cite{BM10_MFCS}.
Once the number of checking and correction processes is substantially reduced, fewer queries are needed.
A randomized algorithm is presented in this paper to achieve the optimal query complexity when the weights of counterfeit coins are any real numbers bounded from below and from above in absolute values.
The theorem we will prove is slightly more general in the sense that some exceptions for the weight condition are allowed.
\begin{theorem}\label{cw} Let $n,m$ be positive integers with $n\geq m\geq 2$ and let $\ga, \gb, \eps >0$ be positive real numbers (not necessarily constants) with $2\ga<\gb, \eps <1/2$.
Suppose $n$ coins are given and there are at most $m$ counterfeit coins among them. The weights of authentic coins are $0$ while the weights of counterfeit coins vary but they are non-zero.
If the weights $w(c)$ of all but $\eps m$ counterfeit coins $c$ satisfy $\ga \leq |w(c)|\leq \gb$ and the weights $w(c)$ of the $\eps m$ counterfeit coins $c$ satisfy only $|w(c)| \leq \gb$, then there is a randomized polynomial time algorithm that asks $O(\frac{m \log (\gb/\ga) \log n}{\log m})$ queries and finds all but $m^{0.8}+2\eps m$ counterfeit coins, with probability $1-O(1/{m^{0.8}})$.
All the remaining counterfeit coins can be found using
$O((m^{0.8}+2\eps m)\log n)$ additional queries, with probability
$1-e^{-\Omega(m^{0.8})}$.
\end{theorem}
\noindent
In the proof of Theorem \ref{cw}, we use the search matrix Bshouty and Mazzawi \cite{BM10_MFCS} developed after constructing random sets of coins as in Choi and Kim \cite{CK12}. Though the guessing processes are the same as in \cite{CK12}, the processes for checking and correction are newly developed using biased random walks.
One may easily verify whether the coins declared counterfeit by the algorithm in Theorem \ref{cw} are actually counterfeit by weighing them directly, using $m$ additional queries. Running the algorithm $O(\mu)$ times with verification each time, the error probability may be made arbitrarily small.
\begin{corollary}\label{cwc}
Under the same hypotheses of Theorem \ref{cw} and any integer $\mu \geq 1$, there is a randomized polynomial time algorithm that uses $O(\frac{\mu m \log (\gb/\ga) \log n}{\log m})$ queries and finds all but $m^{0.8}+2\eps m$ counterfeit coins with probability $1-O(1/m^{\mu})$. All the remaining counterfeit coins can be found using
$O((m^{0.8}+2\eps m)\log n)$ additional queries, with probability
$1-e^{-\Omega(m^{0.8} )}$.
\end{corollary}
\mn
After presenting the search matrix and two martingale inequalities
in Section \ref{pre}, we prove Theorem \ref{cw} in Section \ref{scwp}. Section \ref{sgfp} is for the proof of Theorem \ref{gfpm}. The concluding remark will follow.
\section{Preliminaries}\label{pre}
As mentioned in the previous section, Bshouty and Mazzawi \cite{BM10_MFCS} used Fourier representation of
certain functions to find a search matrix, i.e., a $0,1$ matrix that
is useful for coin weighing problems. We present properties of the
matrix in a slightly generalized form.
\newcommand{\gc}{\gamma}
\begin{lemma}\label{BM} Let $\gc, m$ be positive integers. Then,
for the smallest integer $t$ satisfying $t2^{t-1} \geq \gc m$, one
can construct, in polynomial time, a $2^t \times m$ $0,1$ matrix $S$
and a $ 2^t \times 2^t$ matrix $T$ with the following property: For
each $j=1,...,m$, one may find, in polynomial time, a unique
positive integer $i_j \leq 2^{t}$ and a non-negative integer $k_j
\leq \lceil t/\gc \rceil -1 $ satisfying
$$(TS)_{i_{\! j} k} = 2^{-(k-j) \gc}
(TS)_{i_{\! j} j} ~\mbox{ for ~$j+1\leq k\leq j+k_j $},~~{\rm and}~~
(TS)_{i_{\! j} k}=0 ~\mbox{ for ~$ k\geq j+k_j+1 $}, $$ where
$(TS)_{ij}$ is the $ij$ entry of $TS$.
\end{lemma}
Setting $ a_{jk} = \frac{(TS)_{i_{\! j} k}}{(TS)_{i_{\! j} j}}$, we
have the following corollary.
\begin{corollary}\label{sm} Let $\gc, m$ be positive integers and $t$ be
the smallest integer satisfying $t2^{t-1}\geq\gc m$. Then one can
find, in polynomial time, $2^t$ non-adaptive queries, real numbers
$a_{jk}$, and a non-negative integer $k_j \leq \lceil t/\gc \rceil
- 1$, $ j=1,..., m, k=1,..., j-1$, satisfying the following
property: For disjoint sets $A_1,..., A_m$ of coins, the $2^t$
queries yield values $x_j$, in polynomial time,
satisfying
$$
w(A_j) = x_j -\sum_{k=1}^{j-1} a_{j k} w(A_k) - \sum_{k=1}^{k_j} \frac{
w(A_{j+k}) }{2^{k\gc}},
$$
$j=1,..., m$, where $w(A)$ is the sum of weights of all coins in $A$.
In particular, $\frac{(2+o(1))\gc
m}{\log (\gc m)}$ queries are enough to find $x_{j}$'s.
\end{corollary}
We will need the Azuma-Hoeffding martingale inequality too.
The following is from \cite{McDiarmid89}.
\begin{lemma}\label{mar}
Let $Z=(Z_1,\ldots,Z_t)$ be a family of independent random variables
with $Z_\ell$ taking values in a finite set $B_\ell$ for each $\ell$. Suppose
that the real-valued function $f$ defined on $\prod_\ell B_\ell$ satisfies
\begin{equation*}
|f(\mathbf{z})-f(\mathbf{z}')| \leq c_{_\ell}
\end{equation*}
whenever the vectors $\mathbf{z}$ and $\mathbf{z}'$ differ only in
the $\ell^{\rm th}$ coordinate. Then for any $\lambda \geq 0$,
\begin{equation*}
\Pr \left[ |f(Z) - \mathrm{E}[f(Z)]| \geq \lambda \right] \leq
2e^{-2\lambda^2 / \sum_{\ell} c_{_\ell}^2} .
\end{equation*}
\end{lemma}
For our purpose, a more general martingale inequality is needed. The
following version appeared in \cite{Kim95}.
\begin{lemma}\label{gm}
Let $X=(Z_1,\ldots,Z_t)$ be independent identically distributed {\em
(}i.i.d.{\em )} Bernoulli random variables with probability $p$ {\em
(}i.e., $\Pr [Z_i = 1] = p$ and $\Pr [Z_i = 0] = 1-p$ for each
$i${\em )}. Suppose that the real-valued function $f$ defined on
$\{0,1\}^{t}$ satisfies
\begin{equation*}
|f(\mathbf{z})-f(\mathbf{z}')| \leq c_i
\end{equation*}
whenever the vectors $\mathbf{z}$ and $\mathbf{z}'$ differ only in
the $i^{\rm th}$ coordinate. Then for any $\lambda,\rho > 0$,
\begin{equation*}
\Pr \left[ |f(Z) - \mathrm{E}[f(Z)]| \geq \lambda \right] \leq 2
\exp \Big( -\rho \lambda + (\rho^{2}/2)p(1-p) \sum_{i=1}^{t} c_i^2
\exp(\rho c_i) \Big) .
\end{equation*}
\end{lemma}
\section{Coin Weighing Problem}\label{scwp}
\newcommand{\q}{\ell_q}
\newcommand{\lmq}{\ell_q}
Suppose $n$ coins are given, some of which are counterfeit.
The weights of all authentic coins are the same and known a priori, while the weights of counterfeit coins are unknown but different from the weight of an authentic coin. Without loss of generality, we may assume that the weights of authentic coins are $0$ and the weights of counterfeit coins belong to a set of non-zero real numbers. We assume that the number of counterfeit coins is known to be at most $m$.
If $O(m\log n)$ queries are allowed to find counterfeit coins, one may use a randomized binary search:
\mn
{\bf Randomized Binary Search} Suppose a set $A$ of coins is given, the number of coins is no more than $n$, and there are at most $m\leq n$ counterfeit coins. Select each coin with probability $1/2$, independently of all other coins, and weigh the set $A'$ of selected coins. If the weight is non-zero, then find a counterfeit coin among the selected coins, using the deterministic binary search.
\mn
The deterministic binary search is as follows. Divide $A'$ into two parts $A_{1}'$, $A_{2}'$ with size difference at most $1$. If $w(A_{1}')\not=0$, then select $A_{1}'$. Otherwise, select $A_{2}'$.
Keep doing this for the selected set until a counterfeit coin is found.
\mn
Provided that there is a counterfeit coin, it is not hard to see that the probability of the weight of $A'$ being non-zero is at least $1/2$ and the deterministic binary search requires no more than $\lceil \log n\rceil$ queries. The number of queries required to find one counterfeit coin is at most $2+\lceil \log n\rceil$ in expectation. Thus, it is expected that $(\lceil \log n\rceil+2 +o(1))m$ queries are enough to find all counterfeit coins, with high probability. Here, we show that $(\lceil \log n\rceil+3 )m$ queries are enough, with probability $1-e^{-\Omega(m)}$.
\begin{lemma}\label{rbs} With probability $1-e^{-\Omega(m)}$, the randomized binary search finds all counterfeit coins using $(\lceil \log n\rceil+3 )m$ queries.
\end{lemma}
\noindent
The proof of the lemma is presented in the Appendix.
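The randomized binary search above can be sketched as follows (a minimal Python sketch, not the paper's implementation: coin weights are modeled as a list with authentic coins at weight $0$, an additive query is simulated by summing, and the number $m$ of counterfeit coins is passed in for simplicity).

```python
import random

def weigh(coins, idx):
    """Additive query: total weight of the coins with indices in idx."""
    return sum(coins[i] for i in idx)

def deterministic_binary_search(coins, idx):
    """Given a set of non-zero total weight, locate one counterfeit coin.
    The invariant w(current set) != 0 is preserved at each split: if the
    first half weighs 0, the second half must carry the non-zero weight."""
    while len(idx) > 1:
        half = idx[:(len(idx) + 1) // 2]
        idx = half if weigh(coins, half) != 0 else idx[len(half):]
    return idx[0]

def randomized_binary_search(coins, m, seed=0):
    """Find the m counterfeit (non-zero weight) coins one by one."""
    rng = random.Random(seed)
    remaining = list(range(len(coins)))
    found = {}
    while len(found) < m:
        # select each remaining coin with probability 1/2 and weigh the set
        sel = [i for i in remaining if rng.random() < 0.5]
        if sel and weigh(coins, sel) != 0:
            c = deterministic_binary_search(coins, sel)
            found[c] = coins[c]
            remaining.remove(c)
    return found
```

Note that a selected set may weigh $0$ even when it contains counterfeit coins (real weights can cancel); the loop simply resamples in that case, which is accounted for in the expected query count.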
\mn
We first construct random sets of coins and then present the algorithm, for which the time complexity is not optimized but it is clearly a polynomial time algorithm. Some explanation and analysis of the algorithm will follow after the algorithm is presented.
The construction of random sets is the same as in Choi and Kim \cite{CK12}.
\mn {\bf Constructing random sets of coins}: Let $A$ be a set of $n$ or less coins. For an integer $q\geq 2$ and $\ell_q :=\lceil
\log q \rceil$,
we construct random subsets $A_{i,j}$ of $A$,
$i=0,1, ..., \lceil 3 \log n \rceil$,
$j=1,..., 2^{\ell_q +i} $.
For $i=0$, we assign each coin in $A$ a uniform random number among $1,
..., 2^{\q}$, independently of all other coins. The set $A_{0,j}$
consists of all coins with assigned number $j$. Generally, for $i=1,...,\lceil 2\log q \rceil-1$, once all
$A_{i-1,j}$, $j=1, ..., 2^{\lmq+i-1}$, are constructed, we may randomly
divide each set $A_{i-1,j}$ into two parts so that coins in $A_{i-1,j}$
are independently in the first part with probability $1/2$. The
other coins in $A_{i-1,j}$ are to be in the second part. The set of
all coins in the first and second parts are denoted by $A_{i,2j-1}$
and $A_{i,2j}$, respectively. Or equivalently, after assigning
each coin mutually independent random numbers $r_{_0}, r_{_1}, ...,
r_{_{\lceil 2\log q \rceil-1}}$, independently of all other coins,
with
$$ \Pr [ r_{_{\!0}}= a ]= 2^{-\q},~ a=1, ..., 2^{\lmq}~~\mbox{and}~~~
\Pr [ r_{_{\!i}}= a ]=\frac{ 1}{2}, ~~a=0, 1,~~i=1,...,\lceil 2\log
q \rceil-1,
$$
we define $A_{i,j}$ to be the set of all coins with assigned
numbers $r_{_{\!0}}, r_{_{\!1}}, ...., r_{_{\lceil 2\log q \rceil-1}}$
satisfying $j=1+(r_{_{\! 0}}-1)2^i +r_{_{\! 1}} 2^{i-1}+ \cdots +
r_{_{\! i}} $.
For $i \geq \lceil 2\log q \rceil$, $A_{i-1,j}$ may be deterministically
divided
into two parts so that the first part consists of $\lceil
|A_{i-1,j}|/2 \rceil$ coins. As before, the first part is denoted by
$A_{i,2j-1}$, and $A_{i,2j}=A_{i-1,j}\setminus A_{i,2j-1}$. This
construction is expected to stop when all $A_{i, j}$,
$j=1,...,2^{\lmq+i}$, consist of one or no coin. As there are $n$ coins, all $A_{i, j}$ consist of one or no coin within $\lceil \log n \rceil$ more rounds after
the random construction ends. To be safe, we stop the construction when $i=\lceil 3\log n \rceil
\geq \lceil 2\log q \rceil + \lceil \log n \rceil. $
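A direct implementation of this construction is straightforward. The sketch below (our own naming; a coin is just an identifier) returns the family $\{A_{i,j}\}$ level by level: random splits for the early rounds and deterministic halving afterwards, up to $i=\lceil 3\log n\rceil$.

```python
import math
import random

def build_random_sets(A, q, n, seed=0):
    """Construct the nested random partitions A_{i,j} of the coin set A."""
    rng = random.Random(seed)
    lq = math.ceil(math.log2(q))
    # level i = 0: assign each coin a uniform label in {1, ..., 2^lq}
    level = [[] for _ in range(2 ** lq)]
    for c in A:
        level[rng.randrange(2 ** lq)].append(c)
    levels = [level]
    for i in range(1, math.ceil(3 * math.log2(n)) + 1):
        nxt = []
        for part in levels[-1]:
            if i <= math.ceil(2 * math.log2(q)) - 1:
                # random split: each coin goes to the first part w.p. 1/2
                first, second = [], []
                for c in part:
                    (first if rng.random() < 0.5 else second).append(c)
            else:
                # deterministic split into parts of sizes ceil and floor
                k = (len(part) + 1) // 2
                first, second = part[:k], part[k:]
            nxt += [first, second]
        levels.append(nxt)
    return levels
```

Each level is a partition of $A$ refining the previous one, and the final level consists of singletons and empty sets.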
\mn
The following lemma summarizes properties of the random subsets $A_{i,j}$ that will be used for the analysis of the algorithm presented later. The proof is essentially in \cite{CK12} and it is presented in the Appendix for the sake of completeness.
\begin{lemma}\label{random}
Suppose a set $A$ of $n$ or less coins is given, the number of counterfeit coins in $A$ is at most $q\geq 2$, and the weights $w(c)$ of all but at most $q/2$ counterfeit coins $c$ satisfy $|w(c)|\geq \ga$.
Then,
with
probability $1-O(\frac{1}{q})$, the following hold.
\mn
(a) There are at most $\frac{5q}{6}$ counterfeit coins $c$ that satisfy $|w(c)|< \ga$ (not exclusive) or belong to a set $A_{0,j}$ containing more than one counterfeit coin, $j=1,...,2^{\lmq}$.
\mn
(b) For each $i=1,..., \lceil 2\log q \rceil-1$, $A_{i,j}$ contains
at most $\frac{i+2\log q}{i}$ counterfeit coins.
\mn (c) For each $i=1,..., \lceil 2\log q \rceil-1$, there are at most $2^{-(i+1)}q +
q^{3/4}$ sets $A_{i,j}$ that
contain more than one counterfeit coin.
\mn (d) For $i\geq \lceil 2\log q\rceil-1$, each $A_{i,j}$ contains
one or less counterfeit coin.
\mn (e) Each $A_{\lceil 3\log n \rceil,j}$ contains at most one coin.
\end{lemma}
Now we are ready to present the algorithm described in Theorem \ref{cw}.
\mn {\bf Algorithm } (i) (Initially, $q=m$ and $A$ is the set of all $n$ coins.) Construct random subsets $A_{i,j}$ of $A$ as
above with parameter $q$. Then weigh $A_{0,j}$ for all $j=1,...,
2^{\ell_{q}}$, and denote $w_{0, j} = w(A_{0,j}) $ $j=1,...,2^{\ell_{q}}$ and $J_0$ to be the set of all $j$ such that
$|w_{0,j}|\geq \ga$. Then go to (ii), where, in general, $w(B)=\sum_{c\in B} w(c)$ for a set $B$ of coins.
\mn (ii) (Initially $ i=1$ and $J=J_0$.)
After relabeling, we may assume $J=\{1, ..., |J|\}$.
Apply Corollary
\ref{sm} with $\gc_{_i} = \max\{\lceil\log (\frac{6\gb}{\ga}) \rceil, \lceil
\log (\frac{3\gb (i+2\log q)}{i\ga }) \rceil\}$ to $A_{i,2}, ..., A_{i,2|J|}$ and obtain $x_{r}$ satisfying
\begin{equation}\label{main3}
w (A_{i,2r}) = x_{_{r}} -
\sum_{k=1}^{r-1} a_{_{rk}} w (A_{i,2k}) -
\sum_{k=1}^{k_{r}} \frac{
w(A_{i,2(r+k)}) }{2^{k\gc_{i}}}.
\end{equation}
\newcommand{\case}[4]{
\left\{ \begin{array}{ll} {#1} & \mbox{#2} \\ {#3} & \mbox{#4}
\end{array} \right.}
\newcommand{\caseth}[6]{
\left\{ \begin{array}{lll} {#1} & \mbox{#2} \\ {#3} & \mbox{#4}
\\ {#5} & \mbox{#6}
\end{array} \right.}
\mn
Set, inductively in $r=1,..., |J|$,
\begin{equation}\label{uu}
u_{_{2r}}=\caseth{w_{i-1,r}}{ if $|x_{_{r}} -\sum_{k=1}^{r-1}a_{
rk}u_{2k}|\geq \frac{\ga}{2}$}{}{}{~~~~0}{~otherwise,}
\end{equation}
and $u_{_{2r-1}}= w_{i-1,r}-u_{_{2r}}$, $r=1,...,|J|$. Go to (iii) if $i< \lceil 2\log q \rceil$. Otherwise, go to (iv).
\mn
(iii) (Initially, $s=-2$.) Randomly select each $j$ satisfying $u_{\! _j}=0$ and $j \leq \min\{s, 2|J|\}$ with probability $1/2$, independently of all other $j$. Weigh $\cup\{A_{i,j}:{\rm selected}~ j\} $. The weight is $0$ if no $j$ is selected. Do this random weighing $\lceil \log (i^2+1)\rceil+3$ times, independently of all other random weighings. This procedure is called a random test at $s$. If the test passes, i.e., all weights are $0$, then update $s$ to be $s+2i^2$.
If it fails and $s\leq 2|J|$, correct $u_{s}$ by weighing $A_{i, s}$, that is, weigh $A_{i, s}$, and update $u_{s}$ to be $w(A_{i, s})$ and $u_{s-1}$ to be $w_{i-1, s/2}-u_s$. (Note that $s$ is even.) Update also $u_{j}$ for all $j > s$ according to (\ref{uu}) and $u_{_{2r-1}}=w_{i-1,r}-u_{_{2r}}$. If the test fails and $s> 2|J|$, then do nothing.
Update $s$ to be $s-2$ in both cases. This step, including all updating,
is called a correction step of $u_{s}$, or simply a correction step, even for $s>2|J|$. A correction step does not necessarily mean that $u_{_s}$ differed from $w(A_{i,s})$ just before the step, though.
\mn If $s\leq 2|J|+ 8i^2
\log q$, repeat (iii) with updated $s$. Otherwise, let $w_{i,j}=u_j$, $j=1,...,2|J|$. Then return to the original label and update $i$, $J$ to be $i+1$, $\{ j: \mbox{$w_{i,j}$ is defined}$ ${\rm and}~|w_{i,j}| \geq \ga \}$, respectively, and go to (ii).
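The random test at $s$ in (iii) can be sketched as follows. This is a minimal illustration, not part of the algorithm's specification: {\tt weigh} stands for a hypothetical spring-scale oracle returning $w(B)$ for a set $B$ of coins, indices are $0$-based, and all names are ours.

```python
import math
import random

def random_test(u, sets, weigh, s, i, rng=None):
    """One random test at s, as in step (iii): repeat the random
    weighing ceil(log2(i^2 + 1)) + 3 times; the test is passed
    iff every weighing returns 0.  Indices here are 0-based."""
    rng = rng or random.Random(0)
    reps = math.ceil(math.log2(i * i + 1)) + 3
    limit = min(s, len(sets))          # len(sets) plays the role of 2|J|
    for _ in range(reps):
        # select each j with u_j = 0 and j below the limit w.p. 1/2
        chosen = [j for j in range(limit)
                  if u[j] == 0 and rng.random() < 0.5]
        batch = [c for j in chosen for c in sets[j]]
        if weigh(batch) != 0:
            return False               # test failed
    return True                        # all weights were 0
```

If every $u_j$ with $u_j=0$ is consistent with $w(A_{i,j})=0$ up to the limit, each weighed union has weight $0$ and the test passes deterministically; a single inconsistent index is selected, and hence detected, with probability at least $1/2$ per weighing.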
\mn
(iv) Set $w_{i,j}=u_j$, $j=1,...,2|J|$. Then return to the original label and update $J$ to be $\{ j: \mbox{$w_{i,j}$ is defined}$ ${\rm and}~ |w_{i,j}| \geq \ga \}$. If $i< \lceil 3\log n \rceil $, then go to (ii) after updating $i$ to be $i+1$.
If $i= \lceil 3\log n \rceil $, then output $J$ and declare that all coins in $\cup_{j\in J} A_{i,j}$ are counterfeit.
Remove all coins that are declared counterfeit from the set $A$ of all coins and update $q$ to be $5q/6$. If $q> m^{0.8}+ 2\eps m $ go to (i). Otherwise, go to (v).
\mn
(v) Apply the randomized binary
search to find counterfeit coins one by one, using $(\lceil \log n\rceil+3 )(m^{0.8}+2 \eps m)$ queries.
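Lemma \ref{rbs} is not restated here; the following is only a plausible sketch of such a randomized binary search for a single coin of nonzero weight, assuming a hypothetical oracle {\tt weigh} returning $w(B)$ (the splitting strategy and all names are ours). Since $w$ is additive, at least one half of a set of nonzero weight has nonzero weight, so about $\log n$ weighings suffice per coin.

```python
import random

def find_one_counterfeit(coins, weigh, rng=None):
    """Find one coin c with w(c) != 0 inside a set with w != 0,
    using about log2(n) weighings.  Weigh one half; since w is
    additive, the other half's weight is total minus the weighed half."""
    rng = rng or random.Random(0)
    coins = list(coins)
    rng.shuffle(coins)                 # the randomized part
    total = weigh(coins)
    assert total != 0
    while len(coins) > 1:
        half, rest = coins[:len(coins) // 2], coins[len(coins) // 2:]
        wh = weigh(half)
        if wh != 0:
            coins, total = half, wh    # recurse into a nonzero half
        else:
            coins, total = rest, total - wh
    return coins[0]
```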
\mn
The core parts of the algorithm are (ii) and (iii).
If $w_{i-1,j}=w(A_{i-1,j})$ and every set $A_{i-1,j}$ contains
at most one coin, then $w(A_{i,2j})= 0$ or $w_{i-1,j}$.
Provided $|\sum_{k=1}^{k_{r}} \frac{
w(A_{i,2(r+k)}) }{2^{k\gc_{i}}}|$ is small enough, say less than
$\ga/2$ (see (a) of Lemma \ref{main}), it is not hard to show that $u_{2r}= w(A_{i,2r})$ and $u_{2r-1}= w(A_{i,2r-1})$ for all $r$. (See Corollary \ref{bi}.)
This was one of the main ideas of Bshouty and Mazzawi \cite{BM10_MFCS}.
In general, as some sets $A_{i-1,j}$ contain more than one counterfeit coin, $u_{2r}$ may or may not be $w(A_{i,2r})$.
If $r$ is the smallest index with $u_{2r}\not =w(A_{i,2r})$, then
$u_{2r'}=w(A_{i,2r'})$, $r' > r$, is no longer guaranteed,
even if the set $A_{i-1,r'}$ contains only one counterfeit coin.
This is why we introduced
random tests and correction steps in (iii). The random tests
generate a random walk that travels according to the value of $s$.
It turns out that the walk goes forward until it reaches or passes $2r$.
Once the random walk reaches or passes $2r$, it goes backward
with a probability close to $1$ (though not extremely close to $1$).
It is expected that the random walk with correction steps
quickly identifies and corrects $u_{_{2r}}$.
Moreover, it turns out that $r$ is the smallest $r$
with $u_{2r}\not =w(A_{i,2r})$ only if $A_{i-1,r}$ contains
more than one counterfeit coin.
If not many sets $A_{i-1,r}$ contain more than one counterfeit coin
(see (c) of Lemma \ref{random}), the number of queries asked
to identify and correct corresponding $u_{2r}$'s seems
to be reasonably small.
In other words, the fewer the sets
$A_{i-1,r}$ containing more than one counterfeit coin,
the faster $s$ increases. Eventually, $s$ keeps increasing after
all corresponding $u_{2r}$'s
are corrected.
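Quantitatively, once the walk is at or beyond $2r$ with $u_{2r}$ still wrong, it moves forward by $2i^2$ with probability at most $\frac{1}{8(i^2+1)}$ and backward by $2$ otherwise (Lemma \ref{m5s}), so its expected displacement per step is at most $2i^2\cdot\frac{1}{8(i^2+1)}-2\big(1-\frac{1}{8(i^2+1)}\big)=-\frac74$, independently of $i$. A quick exact-arithmetic check of this constant (the function name is ours):

```python
from fractions import Fraction

def worst_case_drift(i):
    """Expected displacement per step when the forward probability
    attains the bound 1/(8(i^2+1)): forward by 2i^2, else back by 2."""
    p = Fraction(1, 8 * (i * i + 1))
    return 2 * i * i * p - 2 * (1 - p)

# (2i^2 + 2) / (8(i^2 + 1)) - 2 = 1/4 - 2 = -7/4 for every i
assert all(worst_case_drift(i) == Fraction(-7, 4) for i in range(1, 100))
```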
\mn
{\bf Remark.} (a) Though the initial value $-2$ of $s$ looks somewhat strange,
it is natural: $s=2r-2$ when $u_{2r}$ is corrected, so the initial value
is chosen as if a fictitious $u_{0}$ had just been corrected.
\mn
(b) When the random test fails, it may be tempting to find $A_{i,2r}$
with $w(A_{i,2r})\not=0$, say using a binary search. However,
the number of queries needed to find such a set can be as large
as $\Omega(\log q)$, while our algorithm is expected
to correct $u_{2r}$ using $O(i^2 \log (i^2+1))$ queries.
This saves queries when $i$ is small. Though the bound
is not extremely good if $i$ is large, this hardly
matters, as there are far fewer sets $A_{i-1,j}$ containing
more than one counterfeit coin. (See (c) of Lemma \ref{random}.)
\mn
\mn
To analyze the algorithm, we first summarize its core properties precisely.
\begin{lemma}\label{main} Suppose (a)-(e) of Lemma \ref{random} hold for $q$ and $w_{i-1,j} = w(A_{i-1,j})$ for a fixed $i=1,..., \lceil 3\log n \rceil$ and all $j=1,..., |J|$. Then the following hold.
\noindent
(a) For all $r=1,..., |J|$, $ \Big|\sum_{k=1}^{k_{r}} \frac{
w(A_{i,2(r+k)}) }{2^{k\gc_{i}}}\Big| < \frac{\ga}{2}$.
\noindent
(b) If $r$ is the smallest $r$ such that $u_{_{2r}} \not= w(A_{i,2r})$ when $u_{2r}$ is first defined or updated, then neither $w(A_{i,2r-1})$ nor $w(A_{i,2r})$ is zero; in particular, $A_{i-1,r}$ contains more than one counterfeit coin.
\noindent
(c) Suppose $i< \lceil 2\log q\rceil$ and $u_{_j} = w(A_{i,j})$ for
all $j\leq 2r-2$ at a step.
If $s\leq 2r-2$ at the step, then $s$ keeps
increasing until $s\geq 2r$. And once $s\geq 2r$, $s\geq 2r$
at all the following steps except possibly one step,
which is a correction step of $u_{_{2r}}$ and $s=2r-2$.
\end{lemma}
\proof
(a) For $i<\lceil 2\log q \rceil$, since $i \leq 2\log q $, $2^{^{\gc_i}} \geq \frac{3\gb(i+2 \log q)}{i\ga}$ and $|w(A_{i,2(r+k)})|\leq \frac{\gb(i+2\log q)}{i}$ (as $A_{i,2(r+k)}$ contains at most $\frac{i+2\log q}{i}$ counterfeit coins by (b) of Lemma \ref{random}),
we have
$$ \Big|\sum_{k=1}^{k_{r}} \frac{
w(A_{i,2(r+k)}) }{2^{k {{\gc_{i}}}}}\Big|
\leq \sum_{k=1}^{k_{r}} \frac{
\gb(i+2\log q) }{i(\frac{3\gb(i+2 \log q)}{i\ga})^k}
\leq \frac{\ga}{3} +\frac{\ga}{3} \sum_{k=1}^{\infty}
\Big( \frac{2\ga \log q }{12\gb \log q} \Big)^k
\leq \frac{\ga}{3} +
\frac{\ga}{3} \sum_{k=1}^{\infty} \Big(\frac{1
}{6 }\Big)^k < \frac{\ga}{2}.
$$
If $i\geq \lceil 2\log q \rceil$, then each $A_{i,j}$ contains
at most one counterfeit coin by (d) of Lemma \ref{random}, which together with $2^{\gc_i} \geq \frac{6\gb }{\ga}$ gives
$$ \Big|\sum_{k=1}^{k_{r}} \frac{
w(A_{i,2(r+k)}) }{2^{k\gc_{i}}}\Big|
\leq \sum_{k=1}^{k_{r}} \frac{
\gb }{(\frac{6\gb }{\ga})^k}
\leq \frac{\ga}{6} +\frac{\ga}{6} \sum_{k=1}^{\infty}
\Big( \frac{\ga}{6\gb } \Big)^k
\leq \frac{\ga}{6} +
\frac{\ga}{6} \sum_{k=1}^{\infty} \Big(\frac{1
}{6 }\Big)^k < \frac{\ga}{2}.
$$
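Both displays end with the same geometric sum: $\sum_{k\geq 1}6^{-k}=\frac15$, so the first bound evaluates to $\frac{\ga}{3}\big(1+\frac15\big)=\frac{2\ga}{5}<\frac{\ga}{2}$ and the second to $\frac{\ga}{6}\big(1+\frac15\big)=\frac{\ga}{5}<\frac{\ga}{2}$. In exact arithmetic, normalizing $\ga=1$:

```python
from fractions import Fraction

# the geometric tail sum_{k>=1} (1/6)^k equals 1/5
tail = Fraction(1, 6) / (1 - Fraction(1, 6))
assert tail == Fraction(1, 5)

# first display (i < ceil(2 log q)):  1/3 + (1/3) * tail = 2/5 < 1/2
assert Fraction(1, 3) + Fraction(1, 3) * tail == Fraction(2, 5)
# second display (i >= ceil(2 log q)): 1/6 + (1/6) * tail = 1/5 < 1/2
assert Fraction(1, 6) + Fraction(1, 6) * tail == Fraction(1, 5)
assert Fraction(2, 5) < Fraction(1, 2) and Fraction(1, 5) < Fraction(1, 2)
```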
\mn
(b) As $r$ is the smallest $r$ such that $u_{_{2r}} \not= w(A_{i,2r})$ when $u_{_{2r}}$ is defined or updated, $u_{2j} = w(A_{i,2j})$ for all $j< r$ and hence
$$ w (A_{i,2r}) = x_{_{r}} -
\sum_{k=1}^{r-1} a_{_{rk}} w (A_{i,2k}) -
\sum_{k=1}^{k_{r}} \frac{
w(A_{i,2(r+k)}) }{2^{k\gc_{i}}}= x_{_{r}} -
\sum_{k=1}^{r-1} a_{_{rk}} u_{_{2k}} -
\sum_{k=1}^{k_{r}} \frac{
w(A_{i,2(r+k)}) }{2^{k\gc_{i}}}.$$
If $u_{_{2r}}=0$, $u_{_{2r}}\not=w (A_{i,2r})$ yields that $w (A_{i,2r})\not=0$. On the other hand, $u_{_{2r}}=0$ implies that
$|x_{_{r}} -
\sum_{k=1}^{r-1} a_{_{rk}} u_{_{2k}}| < \ga/2$. This together with (a) gives that $|w (A_{i,2r})| < \ga$. Since $|w (A_{i-1,r})|=|w_{i-1,r}| \geq \ga$ and $w (A_{i,2r-1})= w (A_{i-1,r})-w (A_{i,2r})$, $w (A_{i,2r-1})\not=0$.
If $u_{_{\! 2r}}=w_{i-1,r}(=w(A_{i-1,r}))
$, then $u_{_{\! 2r}}\not=w (A_{i,2r})$ yields that
$w (A_{i,2r})\not=w (A_{i-1,r})$ and hence $w (A_{i,2r-1})= w (A_{i-1,r})-w (A_{i,2r})\not=0$. On the other hand, $u_{_{\! 2r}}=
w_{i-1,r}$ implies that
$|x_{_{r}} -
\sum_{k=1}^{r-1} a_{_{rk}} u_{_{2k}}| \geq \ga/2$. This together with (a) gives that $|w (A_{i,2r})| >0$, i.e., $w (A_{i,2r})\not=0$.
\mn (c) We prove this by reverse induction. For $r=|J|+1$,
if $u_{j}= w(A_{i,j})$ for all $j\leq 2r-2=2|J|$, then $w(A_{i,j})=0$ whenever $u_{j}=0$, for all $j\leq 2|J|$. Thus, the random test must be passed and $s$ keeps increasing regardless of the value of $s$ (as no $u_{_j}$ is updated).
Suppose $u_{j}= w(A_{i,j})$ for all $j\leq 2r-2$ with $r\leq |J|$. Then $w(A_{i,j})=0$ for all $j\leq 2r-2$ with $u_{j}=0$. If $s \leq 2r-2$, the random test must be passed and hence $s$ increases. Once $s>2r-2$, or equivalently $s\geq 2r$ (as $s$ is even), no $u_{j}$ with $j\leq 2r-2$ is updated before a correction step of $u_{_{\! 2r}}$.
If there is no correction step of $u_{_{\! 2r}}$, then $s\geq 2r$ at all the following steps. If $u_{_{\! 2r}}$ is corrected at a step, then $s=2r-2$ and $u_{j}= w(A_{i,j})$ for all $j\leq 2r$ at the step. The induction hypothesis especially yields $s\geq 2r$ at all steps after the correction step.
\mn
\qed
\renewcommand{\S}{{\cal S}}
\mn
To analyze (iii) of the algorithm for a fixed
$i<\lceil 2\log q\rceil$, we may regard the whole process
as a random walk $\S$ that travels
according to the value of $s$. That is, $\S=(s_{_0}, s_{_1}, ...)$,
where $s_{_{k}} $ is the value of $s$ at the end of the $k^{\rm th}$ step.
Note that $\S$ goes backward, i.e., $s$ decreases,
at a step if and only if the step is a correction step.
We will see that $\S$ goes forward until it reaches or passes $2r$ for
the smallest $r$ with $u_{_{2r}}\not=w(A_{i,2r})$, and
then $\S$ tends to go backward
with probability close to $1$ (though not extremely close) until
$u_{_{2r}}$ is corrected.
We partition $\S$ into a few subrandom walks that are essentially
independent and identically distributed (i.i.d.).
They are not exactly i.i.d., though.
Let $r_{_{\! 0}}=0$. The $0^{\rm th}$ (sub)random walk $\S_0$ (of $\S$)
starts when the whole process starts and ends at the same time,
that is, $\S_0= (s_{_0})$ (recall $s_{_0}=-2$).
Let $r_{_{\! 1}}$ be
the smallest $r$ with $u_{_{2r}}\not=w(A_{i,2r})$.
The first random walk $\S_1$
starts immediately after $\S_0$ ends and it ends when
$s=2r_{_{1}}-2$ at a backward step or $s> 2|J|+ 8i^2
\log q$ for the first time, whichever comes first.
Generally, for $\ell\geq 2$, if $\S_{\ell-1}$ ends with $s=2r_{_{\el-1}}-2$,
then let $r_{_{\el}}$ be the smallest $r\leq |J|$
such that $u_{_{2r}} \not= w(A_{i, 2r})$ at the step $\S_{\ell-1}$ ends.
The $\ell^{\rm th}$ random walk $\S_\ell$ starts immediately after $\S_{\ell-1}$
ends, and it ends when
$s=2r_{_{\el}}-2$ at a backward step or $s> 2|J|+ 8i^2
\log q$ for the first time, whichever comes first.
However, $\S_{\ell}$ does not end at a
forward step with $s=2r_{_{\el}}-2$. In theory,
it is possible that $\S_{\ell}$ is infinite, though
it is not difficult to show that the probability of $\S_{\ell}$
being infinite is $0$.
Neither $r_{_{\el\,'}}$ nor $\S_{\ell\,'}$ is defined for any $\ell' \geq \ell$,
if $\S_{\ell-1}$ is infinite or ends with $s> 2|J|+ 8i^2
\log q$, or $u_{_{2r}} = w(A_{i, 2r})$ for all $r\leq |J|$ at the last step of $\S_{\ell-1}$.
The random walk $\S_{\ell}$ is called good if it is defined
and ends with $s=2r_{_{\el}}-2$.
Note that the last step of a good random walk $\S_\ell$ is
the first correction step of $u_{_{2r_{_{\el}}}}$ after $\S_\ell$ starts.
In other words, a good random walk $\S_{\ell}$ ends when it corrects
$u_{_{2r_{\el}}}$ where $r_{\el}$ is the smallest $r$ such
that $u_{_{2r}} \not= w(A_{i, 2r})$ when it starts.
We also note that $r_{_{\el}}$, $\S_{\ell}$ are defined only
if $\S_{\el-1}$ is good. In particular, $\S_{\ell}$ is good only if
$ \S_{\ell\, '}$ is good for all $\ell' \leq \ell-1$.
\mn
\begin{corollary}\label{m2} Under the same hypotheses as in Lemma \ref{main} with $i\leq \lceil 2\log q\rceil -1$, the following hold.
\mn (a) For $\ell\geq 1$, if $\S_{\ell}$ is good, then $u_{j}= w(A_{i,j})$
for all $1\leq j\leq 2r_{_{\el}}$ at the last step of $\S_\ell$;
in particular, $r_{_{\el+1}}> r_{_{\el}}$ if $r_{_{\el+1}}$ is defined.
Furthermore, a good random walk $\S_{\ell}$ starts
with $s=2r_{_{\el-1}}-2$ and keeps going forward
until $s\geq 2r_{_{\el}}$, and then goes back and forth
with $s\geq 2r_{_{\el}}$ at all steps except the last step,
at which $s= 2r_{_{\el}}-2$.
\mn
(b) If $r_{_\el}$ is defined, then neither $w(A_{i,2r_{_\el}-1})$ nor $w(A_{i,2r_{_\el}})$ is zero, and $A_{i-1,r_{_\el}}$ contains more than one counterfeit coin. In particular, $r_{_{\el}}$ and $\S_\ell$ are not defined if $\ell > h_q:=\lceil 2^{-(i+1)}q+q^{3/4}\rfloor$.
\mn
(c) Suppose every $\S_{\ell}$ is good if defined. Then $w_{i,j}= w(A_{i,j})$, for all $j=1,...,2|J|$.
\end{corollary}
\proof
(a) For $\ell\geq 1$, suppose $u_{j}= w(A_{i,j})$ for all
$j\leq 2r_{_{\el-1}}$ at the last step of $\S_{\ell-1}$.
(This is trivial when $\ell=1$.) Since $\S_{\ell}$ is good only
if $\S_{\ell-1}$ is good, the induction hypothesis may be applied to
obtain $r_{_{\el}}> r_{_{\el-1}}$ and hence
$$u_{j}= w(A_{i,j})~~\mbox{ for all}~~ j\leq 2r_{_{\el}}-2$$
at the last step
of $\S_{\ell-1}$. Then, (c) of Lemma \ref{main} gives that $s$ keeps
increasing (without updating $u_j$) after the last step of $\S_{\ell-1}$,
at which $s= 2r_{_{\el-1}}-2$, until $s\geq 2r_{_{\el}}$.
Once $s\geq 2r_{_{\el}}$, no $u_j$ with $j\leq 2r_{_{\el}}-2$ is updated
before the last step of $\S_\ell$.
Since $\S_\ell$ is good, $u_{_{2r_{_\el}}}$ is corrected and hence
$u_{_{2r_{_\el}}}= w(A_{i,2r_{_\el}})$, $u_{_{2r_{_\el}-1}}= w(A_{i,2r_{_\el}-1})$
at the last step of $\S_\ell$. The second part has already been shown as well.
\mn
(b) Since $r_{_{\el}}(> r_{_{\el-1}})$ is defined, $s= 2r_{_{\el-1}}-2$ at
the last step of $\S_{\ell-1}$
and $u_{_{2r_{_{\el}}}}$ is updated at the last step of
$\S_{\ell-1}$. By (b) of Lemma \ref{main}, neither $w(A_{i,2r_{_\el}-1})$
nor $w(A_{i,2r_{_\el}})$ is zero.
The second part follows from the fact that
all $r_{_{\el}}$'s are distinct and there are at most
$h_q$ sets $A_{i-1,r}$ containing more than one
counterfeit coin (see (c) of Lemma \ref{random}).
\mn
(c) For the largest $\ell$ for which $r_{_\el}$ is defined, as $\S_\ell$ is good and $r_{_{\el+1}}$ is not defined,
$u_{_{2r}}=w(A_{i,2r})$ for all $r=1,..., |J|$ at the last step of $\S_\ell$.
Since $s$ keeps
increasing after the last step and eventually $s> 2|J| + 8i^2\log q$ without updating $u_{_{j}}$'s, we have $w_{i,2r} = w(A_{i,2r})$ for $r=1,...,|J|$, and $w_{{i,2r-1}}=w_{i-1,r}- w_{i,2r}=w(A_{i-1,r})-w(A_{i,2r})=w(A_{i,2r-1})$.
\mn \qed
\mn
If $\lceil 2 \log q \rceil\leq i\leq \lceil 3\log n \rceil$,
each $A_{i,j}$ contains at most one counterfeit coin by (d) of Lemma \ref{random}. Then it easily follows that $w_{i,j}=w(A_{i,j})$ for all $j=1,..., 2|J|$.
\begin{corollary}\label{bi} Under the same hypotheses as in Lemma \ref{main}
with $\lceil 2 \log q \rceil\leq i\leq \lceil 3\log n \rceil$,
$w_{i,j}=w(A_{i,j})$ for all $j=1,..., 2|J|$.
\end{corollary}
\proof Recall that $w_{i,j}=u_{_j}$, where $u_{_j}$'s
are defined in (ii). Take, if any, the smallest $r$ such
that $u_{_{2r}}\not=w(A_{i,2r})$. Then, (b) of Lemma \ref{main}
implies that $A_{i-1,r}$ contains more than one counterfeit coin,
which is not possible as each $A_{i-1,r}$ contains at most one counterfeit
coin due to (d) of Lemma \ref{random}. Hence, $w_{i,2r}=w(A_{i,2r})$
for all $r=1,..., |J|$ and
$w_{i,2r-1} = u_{_{2r-1}}=w(A_{i-1,r})-w(A_{i,2r})=w(A_{i,2r-1})$.
\mn \qed
Corollaries \ref{m2} and \ref{bi} provide all but one of the basic properties needed to analyze the algorithm.
The missing property is that, with high probability, every $\S_\ell$ is good if defined, which is the hypothesis of (c) of Corollary \ref{m2}.
For the query complexity, an upper bound for the number $|\S_\ell|$ of steps in $\S_\ell$ is needed. As we want to bound $|\S_\ell|$ only for good $\S_{\ell}$, we will consider $|\S_\ell|\chi_{_\ell} $, where
$$ \chi_{_\ell} =\caseth{1}{if $\S_\ell$ is good}{}{}{0}{otherwise.}$$
It will first be shown that, after $\S_\ell$ reaches or passes $2r_{_\el}$,
the random walk goes backward with probability at least
$1-\frac{1}{8(i^2+1)}$ until $u_{_{2r_{_\el}}}$ is corrected,
which follows from the fact that $w(\cup\{A_{i,j}:{\rm selected}~ j\}) \not=0$
with probability at least $1/2$ during the process.
Thus, $\S_\ell$
goes backward by at least $7/4$ in expectation
after it reaches or passes $2r_{_\el}$,
as $\S_\ell$ goes forward by $2i^2$ with probability at most
$\frac{1}{8(i^2+1)}$ and goes backward by $2$ otherwise.
This is why $\S_\ell$ is expected to be good.
The number $F_\ell$ of forward steps in $\S_\ell$ after it reaches
or passes $2r_{_\el}$ is also expected to be reasonably small,
namely $O(1)$ with a probability close to $1$.
It actually turns out that the probability of $F_\ell=k$ is
at most $e^{-k+1}$ and the sum $\sum_{\ell=1}^{h_q} F_\ell$
may be bounded by $O(h_q)$ with high enough probability, say
with probability $1-e^{-\Omega(q^{3/4})}$, where
$h_q=\lceil 2^{-(i+1)}q+q^{3/4}\rfloor$ as in (b) of Lemma \ref{m2}.
Then, it is not difficult to show that
the number of all steps in $\S_{\ell}$ after it reaches or passes $2r_{_\el}$ is $O( i^2F_\ell)$; in particular, there are $O(i^2F_\ell)$ backward steps in $\S_{\ell}$ by (a) of Corollary \ref{m2}. Therefore, there are $O(i^2 h_q)$ backward steps in $\S$ with high probability. All other steps in $\S$ are forward steps and hence there are
$$O\Big(i^2 h_q+ \frac{|J|+8i^2 \log q+2i^2h_q}{2i^2} \Big)
=O\Big (\frac{|J|}{i^2}+ (i^2+2)\Big(\frac{q}{2^{i+1}}+q^{3/4}\Big)\Big)$$
steps in $\S$. As $O(\log (i^2+1))$ queries are asked at each step, the number of queries asked in the $i^{\rm th}$ round, $i=1,...,\lceil 2\log q \rceil-1$, is $ O\Big(q\Big(\frac{1}{i^2}+ \frac{i^2 }{2^i}\Big)\log (i^2+1) \Big)$ assuming $|J|\leq q$.
The precise statements are presented in the next lemma. Though the idea is simple, as illustrated above, our proof of the lemma is somewhat lengthy, partly because it is proven rigorously without invoking other results. We prove the lemma at the end of this section. Readers familiar with random walks may skip the proof.
\begin{lemma}\label{m5s} Under the same hypotheses as in Lemma \ref{main} with $i\leq \lceil 2\log q\rceil -1$, if $u_{2r} \not= w(A_{i,2r})$ and $s\geq 2r$ at a step, then the probability that $s$ increases at the next step is at most $\frac{1}{8(i^2+1)}$.
Moreover, every $ \S_\ell$ is good if defined, with probability $1-O(q^{-3})$, and the number $|\S|$ of all steps satisfies
$$ \pr \Big[ |\S| \geq \frac{|J|}{i^2}+ 4(i^2+2)\Big(\frac{q}{2^{i+1}}+q^{3/4}\Big)\Big]
= O( q^{-3}).
$$
\end{lemma}
\mn
\mn
{\bf Correctness of the algorithm} Once Lemmas \ref{random}, \ref{m5s} and Corollaries \ref{m2}, \ref{bi} are established, it is easy to see that the algorithm finds counterfeit coins as desired. In the next lemma, we state this precisely, along with a property needed to bound the query complexity.
\mn
\newcommand{\sub}{\subseteq}
\renewcommand{\sup}{\supseteq}
\begin{lemma}\label{m4} For a fixed $q> m^{0.8}+ 2\eps m$, the following hold
with probability
$1-O(1/q)$, assuming the same in the prior round.
\mn
(a) The statements (a)-(e) of Lemma \ref{random} hold.
\mn
(b) Whenever $w_{i,j}$ is defined, $w_{i,j}= w(A_{i,j})$. In particular, a coin declared to be counterfeit must be counterfeit.
\mn
(c) The algorithm finds every counterfeit coin $c$ with $|w(c)|\geq \ga$ that is the unique counterfeit coin of $A_{0,\ell}$ for some $\ell=1,...,2^{\q}$. Moreover, the number of remaining counterfeit coins is at most the updated $q$.
\mn
(d) The number of queries asked in all rounds of (iii) is
$O(q)$, where the constant in $O(q)$ is at most
$\sum_{i=1}^{\infty} \Big(\frac{5+\log (i^2+1)}{i^2} +\frac{(i^2+2)(5+\log (i^2+1))}{2^{i-1}}\Big)+o(1).$
\end{lemma}
\mn
\proof As $q> 2\eps m$ and (c) holds in the prior round, Lemma \ref{random} yields that the statements in (a) hold with probability $1-O(1/q)$.
To prove the other properties, we assume those statements and further assume that every $\S_\ell$ is good if defined and that, for each $i=1,..., \lceil 2\log q \rceil-1$ and the number $|\S|$ of all steps in the $i^{\rm th}$ round of (iii),
\beq\label{ss} |\S| \leq \frac{|J|}{i^2}+ 4(i^2+2)\Big(\frac{q}{2^{i+1}}+q^{3/4}\Big),
\enq
both of which hold with probability $1-O(q^{-3})$ by Lemma \ref{m5s}. Then the first part of (b) follows from (c) of Corollary \ref{m2} and Corollary \ref{bi}. Since every $A_{\lceil 3\log n \rceil,j}$ consists of at most one coin,
each coin $c$ in $\cup_{j\in J} A_{\lceil 3\log n \rceil,j}$
satisfies $|w(c)|= |w(A_{\lceil 3\log n \rceil,j})|=
|w_{\lceil 3\log n \rceil,j}|\geq \ga $ for some $j\in J$; in particular, $c$ is counterfeit.
If a coin $c$ with $|w(c)|\geq \ga$ is the unique counterfeit coin in
$A_{0,\ell_0}$, then, for each $i=0,..., \lceil 3\log n \rceil$,
there is a unique $\ell_i$ such that $A_{i, \ell_i}\sub A_{0,\ell_0}$
contains $c$. It is clear, from the way the $A_{i,j}$'s are constructed, that
$A_{i,\ell_i} \sub A_{i-1,\ell_{i-1}}$ for all $i=1,..., \lceil 3\log n \rceil$. Moreover, since $c$ is the unique counterfeit coin of $A_{i,\ell_i}$, $|w(A_{i,\ell_i})|\geq \ga$ for all $i=0,..., \lceil 3\log n \rceil$. For $i=0$, $\ell_0\in J$ as $|w_{0,\ell_0}| = |w(A_{0,\ell_0})|\geq \ga$. For $i\geq 1$, assuming $\ell_{i-1} \in J$ when the prior round ends, $w_{i,\ell_i}= w(A_{i,\ell_i})$ by (c) of Corollary \ref{m2}, as $\ell_{i-1} \in J$ and $A_{i,\ell_i} \sub A_{i-1,\ell_{i-1}}$. Thus, $|w_{i,\ell_i}|=| w(A_{i,\ell_i})|\geq \ga$ implies that $\ell_i$ is in the updated $J$.
We have just shown that $\ell_{i} \in J$ when the $i^{\rm th}$ round ends for each $i$; in particular, for $i= \lceil 3\log n \rceil$. As $c
\in A_{i, \ell_i}$ for $i= \lceil 3\log n \rceil$, $c$ is declared
to be counterfeit. The second part of (c) follows from (a) of
Lemma \ref{random}.
Note that $|J|=|\{ j: \mbox{$w_{i,j}$ is defined}$
${\rm and}~|w_{i,j}| \geq \ga \}| \leq q$ by the second part of (c) in the prior round and the first part of (b), as $|w_{i,j}|=|w(A_{i,j})|\geq \ga$ implies that $A_{i,j}$ contains a counterfeit coin and the number of such sets is at most the number of counterfeit coins. Since the algorithm asks at most $5+\log (i^2+1)$ queries at each step of $\S$ (one more query is needed in backward steps), \raf{ss} yields that
the number of queries is at most
$$ \sum_{i=1}^{\lceil 2\log q\rceil-1}\Big( \frac{(5+\log (i^2+1))q}{i^2}+ 4(i^2+2)(5+\log (i^2+1))\Big(\frac{q}{2^{i+1}}+q^{3/4}\Big)\Big)= O(q).
$$
\mn
\qed
\mn
\mn
\mn
The lemma says, in particular, that the number of remaining counterfeit coins decreases by a factor of $5/6$, with probability $1-O(1/{q})$. Applying this inductively until $q\leq m^{0.8}+ 2\eps m$, we conclude that the algorithm finds all but at most $m^{0.8}+2\eps m$ counterfeit coins before it goes to (v), with probability $1-O(1/{m^{0.8}})$. All the remaining $m^{0.8}+2\eps m$ counterfeit coins
are found in (v), with probability $1-e^{-\Omega(m^{0.8})}$, by Lemma \ref{rbs}.
\begin{corollary}\label{first} The algorithm finds all but at most $m^{0.8}+2\eps m$ counterfeit coins before it goes to (v), with probability $1-O(1/{m^{0.8}})$. All the remaining $m^{0.8}+2\eps m$ counterfeit coins
are found in (v), with probability $1-e^{-\Omega(m^{0.8})}$, by Lemma \ref{rbs}.
\end{corollary}
\mn
{\bf Query Complexity} Suppose (a)-(d) of Lemma \ref{m4} hold for all $q$, which occurs with probability $1-O(1/{m^{0.8}})$. Then,
for each $q$, the number of remaining counterfeit coins is at most $q$. In particular, $|J|\leq q$, as seen in the last paragraph of the proof of Lemma \ref{m4}.
For each $q$, $2^{\q} \leq 2q$ queries are needed in (i).
For each $q$ and $i$, the number of queries asked in (ii) is
$$ \frac{(2+o(1))\gc_{_i} |J|}{\log (\gc_{_i} |J|)}\leq \caseth{
\frac{ (2+o(1))|J|}{ \log |J|}\Big\lceil \log (\frac{3\gb(i+2 \log q)}{i\ga})\Big\rceil}{
if $i< \lceil 2\log q\rceil$}{}{}{ \frac{ (2+o(1))|J|}{ \log |J|}
\Big\lceil\log (6\gb/\ga) \Big\rceil }{~if $i\geq \lceil 2\log q\rceil$.}
$$
Since $|J|\leq q$ and $$ \sum_{i=1}^{\lceil 2\log
q \rceil-1} \Big\lceil \log (\frac{3\gb(i+2 \log q)}{i\ga}) \Big\rceil
\leq 4\log
q \log (3\gb/\ga)+ \log {2\lceil 2\log q \rceil-1 \choose \lceil 2\log q \rceil-1} \leq 4\log
q \log (3\gb/\ga)+4\log q + 1,
$$
and
$$\sum_{i=\lceil 2\log q \rceil}^{\lceil 3\log n \rceil }
\lceil\log (6\gb/\ga) \rceil \leq 3\log (6\gb/\ga) \log n+ 3\log n
$$
for each $q$, the number of queries asked in (ii) is
$O\Big(\frac{q\log (\gb/\ga) \log n}{\log q}\Big)$.
The number of all queries asked in (iii) for each $q$ is $O(q)$ by (d) of Lemma \ref{m4}.
No query is asked in (iv) and hence the total number of queries asked for fixed $q> m^{0.8}+2\eps m$ is $O\Big(\frac{q\log (\gb/\ga) \log n}{\log q}\Big)$.
As $q$ keeps decreasing by a factor of $5/6$, the number of queries asked before the algorithm goes to (v) is $O\Big(\frac{m\log (\gb/\ga) \log n}{\log m}\Big)$.
\mn
\qed
\bn
This together with Corollary \ref{first} implies that, if we artificially stop the algorithm when it asks $\frac{\eta m\log (\gb/\ga) \log n}{\log m}$ queries, for the constant $\eta$ in the $O\Big(\frac{m\log (\gb/\ga) \log n}{\log m}\Big)$ term,
all but at most $m^{0.8} +2\eps m$ counterfeit coins are found with probability $1-O(1/{m^{0.8}})$. As $(\lceil \log n\rceil+3 )(m^{0.8}+2 \eps m)$ queries are asked in (v) of the algorithm,
Theorem \ref{cw} follows. We conclude this section by proving Lemma \ref{m5s}.
\bn
{\bf Proof of Lemma \ref{m5s}} \, Note that each $u_{2r}$ may have one of
the three values
$0$, $w_{i-1, r }$, and $w(A_{i,2r})$. Since $u_{2r} \not= w(A_{i,2r})$,
$u_{2r}$ is either $0$ or $w_{i-1,r}$. If $u_{2r}=0$,
then $w(A_{i,2r})\not=0$. If $u_{2r}=w_{i-1,r}(=w(A_{i-1,r}))$,
then $u_{2r-1}=0$ while $w(A_{i,2r})\not= u_{2r}=w(A_{i-1,r})$
yields $w(A_{i,2r-1})=w(A_{i-1,r})- w(A_{i,2r})\not=0$.
In particular, there is $\ell\leq s$ such that $w(A_{i,\ell})\not=0$
while $u_{\ell} =0$. Suppose the random selection for all indices other than $\ell$ has been
carried out. Then
the set of coins to be weighed is either
$\cup\{A_{i,j}:{\rm selected}~ j, j\not= \ell \}$ or
$\cup\{A_{i,j}:{\rm selected}~ j, j\not= \ell \} \cup A_{i,\ell}$,
each of which occurs with probability $1/2$.
Since $w(A_{i,\ell})\not=0$ implies that the weights of the two sets are
different,
$$ \pr [w( \cup \{A_{i,j}:{\rm selected}~ j \})=0] \leq 1/2. $$
After independently performing this $\lceil \log (i^2+1)\rceil+3$ times,
the probability that all weights are $0$ is at most
$2^{-\lceil \log (i^2+1)\rceil-3}\leq \frac{1}{8(i^2+1)}$.
That is, $s$ increases at the next step with probability at
most $\frac{1}{8(i^2+1)}$.
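The last numeric step uses $2^{\lceil \log (i^2+1)\rceil+3}\geq 8(i^2+1)$. A small check of this repetition count, with names of our own choosing:

```python
import math
from fractions import Fraction

def pass_prob_bound(i):
    """Probability that ceil(log2(i^2+1)) + 3 independent weighings,
    each returning 0 with probability at most 1/2, all return 0."""
    reps = math.ceil(math.log2(i * i + 1)) + 3
    return Fraction(1, 2 ** reps)

# 2^{-(ceil(log2(i^2+1)) + 3)} <= 1 / (8 (i^2 + 1)) for all i
assert all(pass_prob_bound(i) <= Fraction(1, 8 * (i * i + 1))
           for i in range(1, 300))
```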
For the second part, suppose $\S_\ell$ is defined but it is not good,
which especially means that $\S_{\ell-1}$
is good. Then $\S_\ell$ must be infinite or reach a step with
$s> 2|J|+8i^2 \log q$.
As $\S_{\ell}$ starts with $s=2r_{_{\el-1}}-2$, $r_{_{\el-1}} < r_{_{\el}}$, and $u_{j}= w(A_{i,j})$ for all $j\leq 2r_{_\el}-2$, the random walk $\S_\ell$ keeps going forward until $s\geq 2r_{_{\el}}$ by (c) of Lemma \ref{main}. Let $\gs_{_\el}$ be the value of $s$ when $\S_\ell$ reaches a step with $s\geq 2r_{_{\el}}$ for the first time. Then
\beq\label{tl}
2r_{_{\el}}\leq \gs_{_\el} \leq 2r_{_{\el}}+2i^2-2,~~{\rm or} ~~0\leq \gs_{_\el}/2 -r_{_{\el}}\leq i^2-1,
\enq
since $s$ increases by $2i^2$ at each forward step.
Hence, there must be at least $\lfloor 4\log q\rfloor$ more forward steps to reach a step with $s> 2|J|+8i^2 \log q$, as, otherwise,
$$s\leq \gs_{_\el}+2i^2 (\lfloor 4\log q\rfloor-1)
\leq 2r_{_{\el}}-2+2i^2 +2i^2 (\lfloor 4\log q\rfloor-1)\leq 2|J|+8i^2 \log q. $$
If $\S_\ell$ is infinite, there must be at least $\lfloor 4\log q\rfloor$ more forward steps too.
Counting after $\S_\ell$ reaches a step with $s\geq 2r_{_{\el}}$ for the first time, let $T$ be the number of steps in $\S_\ell$ until there are $\lfloor 4\log q\rfloor$ more forward steps.
Since $\S_\ell$ is not good, there is no correction step of $u_{_{2r_{_\el}}}$, or equivalently $s \geq 2r_{_\el}$ after the count starts; in particular, $T$ satisfies
$$ \gs_{_\el} + 2i^2 \lfloor 4\log q\rfloor -2(T-\lfloor 4\log q\rfloor )\geq 2r_{_{\el}},$$
which, together with \raf{tl}, gives
$$
T \leq (i^2+1)\lfloor 4\log q\rfloor + \gs_{_\el}/2 -r_{_{\el}}
\leq (i^2+1)(\lfloor 4\log q\rfloor +1)
. $$
We have just shown that, for $t= (i^2+1)(\lfloor 4\log q\rfloor +1)$, \beq\label{ng}\pr [ \S_{\ell}~\mbox{is not good} ]
\leq \pr \Big[~\exists\, \lfloor 4\log q\rfloor
~\mbox{forward steps among the
first $t$ or less steps of $\S_\ell$}\, \Big]. \enq
To bound the last probability, it is convenient to introduce an auxiliary random walk $\S^*_\ell$. The infinite random walk $\S^*_\ell$ starts when $\S_\ell$ reaches a step with $s\geq 2r_{_{\el}}$ for the first time and it is the same as $\S_\ell$ until $\S_\ell$ ends. Once $\S_\ell$ ends, $\S^*_\ell$ keeps going forward by $2i^2$ with probability $\frac{1}{8(i^2+1)}$ and backward by $2$
with probability $1-\frac{1}{8(i^2+1)}$.
Then, at any step, $\S^*_\ell$ goes forward with probability at most
$\frac{1}{8(i^2+1)}$.
As there are $\lfloor 4\log q\rfloor$ forward steps among the
first $t$ steps of $\S^*_\ell$ if there are $\lfloor 4\log q\rfloor$ forward steps among the
first $t$ or less steps of $\S_\ell$, \raf{ng} gives
$$ \pr [ \S_{\ell}~\mbox{is not good} ]
\leq \pr \Big[ ~\exists\, \lfloor 4\log q\rfloor
~\mbox{forward steps among the
first $t$ steps of $\S^*_\ell$}\, \Big],$$
which is at most ${t \choose \lfloor 4\log q\rfloor }
\Big( \frac{1}{8(i^2+1)} \Big)^{\lfloor 4\log q\rfloor}$.
Therefore, using ${t \choose k} \leq (\frac{et}{k})^k$,
$$
\pr [ \S_{\ell}~\mbox{is not good} ]
\leq {t \choose \lfloor 4\log q\rfloor }
\Big( \frac{1}{8(i^2+1)} \Big)^{\lfloor 4\log q\rfloor}
\leq \exp\Big( \lfloor 4\log q\rfloor \ln \frac{e(i^2+1)(\lfloor 4\log q\rfloor+1)}{8(i^2+1)\lfloor 4\log q\rfloor}\Big). $$
Using $\ln(e/8)\leq -1$ and $\ln (1+y) \leq y$ for $y\geq 0$,
we obtain
$$
\pr [ \S_{\ell}~\mbox{is not good} ]\leq
\exp\Big(-\lfloor 4\log q\rfloor+1\Big)=O(q^{-4}) .$$
Since $\S_\ell$ is defined for at most $h_q$ indices $\ell$ by (b) of Corollary \ref{m2}, and $h_q= O(q)$, Boole's inequality yields the desired bound.
For the last bound, if $S_\ell$ is good, let $F_\ell$ be the number of all forward steps in $\S_\ell$ after $\S_\ell$ reaches a step with $s\geq 2r_{_{\el}}$ for the first time. If $S_\ell$ is not good or not defined, then $F_\ell=0$.
If $F_\ell=k \geq 1$, then $\S_\ell$ must be good and, for
the number $t$ of all steps in $\S_\ell$ after $\S_\ell$ reaches a step with $s\geq 2r_{_{\el}}$ for the first time, we have
$$ \gs_{_\el} + 2i^2 k -2(t -k)= 2r_{_{\el}}-2~~{\rm or} ~~t= (i^2+1)k + \gs_{_\el}/2-r_{_\el}+1\leq
(i^2+1)(k+1), $$
(recall that $\gs_{_\el}$ is the value of $s$ when $\S_\ell$ reaches a step with $s\geq 2r_{_{\el}}$ for the first time).
After $\S_\ell$ reaches a step with $s\geq 2r_{_{\el}}$ for the first time, the probability that $\S_{\ell}$ moves forward is at most $\frac{1}{8(i^2+1)}$ until it ends. Moreover, the bound for the probability holds regardless of $F_{\ell\,'}$, $\ell' <\ell$.
The same argument as above gives, for a positive integer $k$,
$$
\pr [
F_\ell =k| F_{1}, ..., F_{\ell-1}]
\leq \pr \Big[ ~\exists\, k
~\mbox{forward steps among the
first $t$ steps of $\S^*_\ell$}\, \Big]
$$
and, by ${t \choose k} \leq (\frac{et}{k})^k$, $\ln (e/8)\leq -1$
and $\ln (1+y) \leq y$ for $y>0$,
$$ \pr [ F_\ell =k| F_{1}, ..., F_{\ell-1}]
\leq {t \choose k} \Big( \frac{1}{8(i^2+1)}\Big)^k
\leq \exp\Big( k \ln \frac{e t}{8k(i^2+1)} \Big)
\leq e^{-k+1}. $$
The inequality still holds when $k=0$.
For $h=h_q =\lfloor 2^{-(i+1)}q +
q^{3/4} \rfloor $,
$$ \pr \Big[ F_1=k_1, ..., F_{h} =k_h \Big]
=\prod_{\ell=1}^{h} \pr \Big[ F_\ell=k_{\ell} \Big | F_1=k_1, ..., F_{\ell-1} =k_{\ell-1} \Big]
\leq e^{-(\sum_{\ell=1}^{h} k_{\ell})+h},$$
which implies that
$$ \pr \Big[ \sum_{\ell=1}^{h} F_\ell =k \Big]
= \!\! \! \sum_{k_{_{\el}}\geq 0 \atop k_{_1}+\cdots+ k_{_h}=k}
\pr \Big[ F_1=k_1, ..., F_{h} =k_h \Big]
\leq {k+h \choose h} e^{-k+h}.$$
Since ${k+h \choose h} \leq (\frac{e(k+h)}{h})^h$, we have
$$
\pr \Big[ \sum_{\ell=1}^{h} F_\ell =k \Big]
\leq \exp\Big( h\ln \frac{e(k+h)}{h} -k +h \Big)
= \exp\Big( h\ln \frac{(k+h)}{h} -k +2h \Big). $$
For $k\geq 4h-1$,
$h\ln \frac{(k+h)}{h} -k +2h\leq - 4k/5 +3h$ yields that
$$ \pr \Big[ \sum_{\ell=1}^{h} F_\ell \geq 4h-1 \Big]
=\sum_{k=4h-1}^{\infty} \pr \Big[ \sum_{\ell=1}^{h} F_\ell =k \Big]
\leq \sum_{k=4h-1}^{\infty} e^{-4k/5+3h} \leq 2e^{-(h-4)/5}. $$
Finally, for good $\S_\ell$, the number of forward steps in $\S_\ell$
is $$ \frac{\gs_{_\el}-(2r_{_{\el-1}}-2)}{2i^2}
+F_{\ell}= \frac{r_{_{\el}}-r_{_{\el-1}}}{i^2} + \frac{\gs_{_\el}/2-r_{_{\el}}+1}{i^2}+F_{\ell}, $$
while the number of backward steps in $\S_\ell$ is
$$\frac{1}{2} \Big(
\gs_{_\el} -(2r_{_{\el}}-2)+ 2i^2 F_{\ell}\Big)=
\gs_{_\el}/2 -r_{_{\el}}+1+i^2 F_{\ell}. $$
As $\gs_{_\el}/2 -r_{_\el} \leq i^2-1$ by \raf{tl},
$$ |\S_\ell| \chi_{_\el}\leq \frac{r_{_{\ell}}-r_{_{\ell-1}}}{i^2}+1+
F_{\ell}+ i^2+ i^2 F_{\ell}=\frac{r_{_{\ell}}-r_{_{\ell-1}}}{i^2}+
(i^2 +1)(F_{\ell}+1). $$
Therefore,
$$ \pr\Big[ \sum_{\ell=1}^{h} |\S_{\ell}|\chi_{_\el} \geq \,
\frac{r^{*}}{i^2} +4(i^2+1)h
\Big] \leq \pr\Big[ \sum_{\ell=1}^{h} F_{\ell}\geq 4h-1\Big]
\leq 2e^{-(h-4)/5}\leq 2e^{-q^{3/4}/5+1},
$$
where $r^{*}= \max\{ r_{_{\el}}: \mbox{$S_\ell$ is good} \}$.
Suppose that every defined $S_{\ell}$ is good. Then there are
$ \lceil \frac{2|J|-(2r^*-2)+8i^2 \log q}{2i^2}\rceil$
more steps after $u_{_{2r^{\!*}}}$ is corrected, and the number $|\S|$ of all steps in $\S$, or equivalently in (iii) for fixed $i$, is
$$ \Big\lceil \frac{|J|-r^*+1+4i^2 \log q}{i^2}\Big\rceil
+ \sum_{\ell=1}^{h} |\S_{\ell}|\chi_{_\el}
\leq \frac{|J|-r^*+1}{i^2} + 4\log q +1 + \sum_{\ell=1}^{h}
|\S_{\ell}|\chi_{_\el}.
$$
Thus, if $\sum_{\ell=1}^{h} |\S_{\ell}|\chi_{_\el}
< \frac{r^*}{i^2} + 4(i^2+1)h$, then
$$|\S| < \frac{|J|}{i^2}+ 4(i^2+1)h + \frac{1}{i^2}+4\log q+1
< \frac{|J|}{i^2}+ 4(i^2+2)\Big(\frac{q}{2^{i+1}}+q^{3/4}\Big). $$
By the contrapositive, if $|\S| \geq \frac{|J|}{i^2}+ 4(i^2+2)\Big(\frac{q}{2^{i+1}}+q^{3/4}\Big)$, then either there is $\S_\ell$ that is defined but not good or
$$\sum_{\ell=1}^{h} |\S_{\ell}|\chi_{_\el}
\geq \, \frac{r^{*}}{i^2} + 4(i^2+1)h,$$
which gives
\begin{eqnarray*} \pr \Big[ |\S| \geq \frac{|J|}{i^2}+ 4(i^2+2)\Big(\frac{q}{2^{i+1}}+q^{3/4}\Big)\Big]
=O( q^{-3}+ e^{-q^{3/4}/5 } ) = O( q^{-3}).\end{eqnarray*}
\mn
\qed
\mn
\mn
\section{Finding Weighted Graphs}\label{sgfp}
In this section, we present a randomized algorithm finding weighted
graphs using additive queries, where an additive query asks the sum
of weights of edges with both ends in a fixed set. The algorithm
uses coin weighing algorithms presented in the previous section.
Let $G=(V,E,w_G)$ be a weighted graph with $w_G(e)\not= 0$ for all
$e\in E$. We simply say graphs for weighted graphs. First of all, it
suffices to consider bipartite graphs: for a general graph, one may
consider two disjoint copies $X,Y$ of $V$. The copy of $u\in V $ in
$X$ and the copy of $v\in V $ in $Y$ form an edge if and only if
$uv$ is an edge in $G$, and, of course, the weight is inherited.
Then a query of type $w(A, B):=\sum_{x\in A, y\in B} w(x,y)$,
$A\subset X, B \subset Y$ is a linear combination of four additive
queries in $G$, that is,
\beq \label{bip} w(A, B)=
w_{_{\!G}} (A\cup B) - w_{_{\!G}}(A \setminus B) -
w_{_{\!G}}(B\setminus A) + w_{_{\!G}} (A \cap B). \enq In the rest of
this section, we consider weighted bipartite graphs $G=(X\cup Y, E,
w)$ with $|X|=|Y|=n$ and $|E|\leq m$.
A query means that one takes two sets $A\subset
X$ and $B\subset Y$ and finds out $w(A,B):=\sum_{a\in A,\, b\in B}
w(a,b)$.
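To make the reduction concrete, the identity \raf{bip} can be checked numerically. The following sketch is our illustration (the small example graph and all names are arbitrary choices, not part of the paper): it computes a bipartite query $w(A,B)$ both directly over ordered pairs and via the four additive queries.

```python
# Check of identity (bip): a bipartite query w(A, B) on the two copies of V
# equals a signed sum of four additive queries in the original graph G.

# Weighted graph on vertices {0, 1, 2, 3}; keys (u, v) with u < v.
w = {(0, 1): 2.0, (0, 2): -1.5, (1, 3): 0.7, (2, 3): 3.0}

def additive_query(S):
    """w_G(S): total weight of edges with both endpoints in S."""
    S = set(S)
    return sum(wt for (u, v), wt in w.items() if u in S and v in S)

def cross_query(A, B):
    """w(A, B) via the four additive queries of identity (bip)."""
    A, B = set(A), set(B)
    return (additive_query(A | B) - additive_query(A - B)
            - additive_query(B - A) + additive_query(A & B))

def cross_direct(A, B):
    """w(A, B) summed directly over ordered pairs (a, b) in A x B."""
    return sum(wt * ((u in A and v in B) + (v in A and u in B))
               for (u, v), wt in w.items())

for A, B in [({0, 3}, {1, 2}), ({0, 1}, {1, 2}), ({0, 1, 2, 3}, {0, 1, 2, 3})]:
    assert abs(cross_query(A, B) - cross_direct(A, B)) < 1e-12
print("identity (bip) verified")
```

Note that an edge with both endpoints in $A\cap B$ is counted twice on both sides, consistently with the ordered-pair definition of $w(A,B)$.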
\mn
If $\bO (m\log n)$ queries are allowed, it is easy to find the
graph using the randomized binary search:
\mn
{\bf Randomized Binary Search for Graph} Suppose $n, m\geq 1$ and a bipartite
graph $G$ on $X\cup Y$ with at most $m$ edges and $|X|,|Y|\leq n$ is given.
Then, take random subsets
$X', Y'$ of $X$ and $Y$, respectively, so that each vertex $x\in X$
($y\in Y$, resp.) is in $X'$ ($Y'$, resp.) with probability $1/2$,
independently of all other vertices. If $w(X',Y')\not=0 $, find an
edge there using the deterministic binary search. Otherwise, take
new random sets $X', Y'$ and repeat. Stop when $( 2\lceil \log n \rceil+5)m$ queries have been asked. Output all edges found.
\mn
The deterministic binary search works as follows: divide $X'$ into two parts
$X'_1, X'_2$ whose sizes differ by at most $1$. If $w(X'_1, Y') \not=0$, take $X'_1$; otherwise, take $X'_2$. Repeat until a vertex
$x$ with $w(x,Y') \not= 0$ is found. Then, find $y\in Y'$ with $w(x,y)\not= 0$ using the same method.
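The procedure above can be sketched as follows. This is our illustrative implementation, not the paper's code: the hidden graph, the seed, and all function names are assumptions made for the example.

```python
import random

# One round of the randomized binary search: a hidden bipartite graph is
# simulated by the oracle w(A, B), and one nonzero edge is located by halving.

edges = {(1, 4): 2.5, (3, 0): -1.0}     # hidden weighted edges (x, y)

def w(A, B):
    """Additive query: total weight between A (left side) and B (right side)."""
    A, B = set(A), set(B)
    return sum(wt for (x, y), wt in edges.items() if x in A and y in B)

def halve(S, query):
    """Keep halving S while preserving the invariant query(S) != 0."""
    S = list(S)
    while len(S) > 1:
        half = S[:len(S) // 2]
        # If the first half sees zero weight, the second half cannot,
        # since the query is additive over S.
        S = half if query(half) != 0 else S[len(S) // 2:]
    return S[0]

def find_one_edge(X, Y, rng):
    while True:
        Xp = [x for x in X if rng.random() < 0.5]
        Yp = [y for y in Y if rng.random() < 0.5]
        if w(Xp, Yp) != 0:               # some edge survives the sampling
            x = halve(Xp, lambda A: w(A, Yp))
            y = halve(Yp, lambda B: w([x], B))
            return x, y, w([x], [y])

x, y, wt = find_one_edge(range(5), range(5), random.Random(7))
assert edges.get((x, y)) == wt
print("found edge", (x, y), "with weight", wt)
```

The halving step uses at most $\lceil \log |X'|\rceil + \lceil \log |Y'|\rceil$ queries per edge found, matching the $O(\log n)$ cost per edge in the count above.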
\mn
If there is an edge in $G$, the
probability of $w(X',Y')\not=0 $ is at least $1/4$.
It may be shown that
$(2\lceil \log n \rceil+4+o(1))m$ queries are enough to find all
edges in $G$, with high probability. We may prove
$( 2\lceil \log n \rceil+5)m$ queries are enough with probability
$1-e^{-\Omega(m)}$; a proof is presented in the Appendix.
\begin{lemma}\label{rbs2} The randomized binary search finds all edges of $G$ with probability $1-e^{-\Omega(m)}$.
\end{lemma}
\mn
For a better query complexity, a more sophisticated algorithm is needed.
We first present an algorithm finding all edges of $G$ when the maximum
degree of $G$ is small, say at most $m^{0.1}$.
Then another algorithm is introduced to find vertices of
large degree and edges containing them. Concatenating the two algorithms,
the following theorem may be shown.
\begin{theorem}\label{gfp}
Let $n,m$ be positive integers with $n^2\geq m\geq 2$ and let $\ga, \gb>0$ be positive real numbers (not necessarily constants) with $2\ga<\gb$.
Suppose a bipartite (weighted) graph $G$ is given such that each part of $G$ has at most $n$ vertices
and there are $m$ or less edges in $G$.
If the weights $w(e)$ of edges satisfy $\ga \leq |w(e)|\leq \gb$, then there is a randomized polynomial time algorithm that asks $O(\frac{m \log (\gb/\ga) \log n}{\log m})$ queries, and finds all edges with probability $1-O(1/{m^{0.02}})$.
\end{theorem}
\mn
Theorem \ref{gfpm} follows from the theorem and \raf{bip}.
\old{\mn {\bf Finding Large Degree Vertices} To improve the query
complexity, we first find large degree vertices and edges containing
them. .}
\mn
For the first algorithm, let $\gd=0.05$ and assume that the maximum degree of $G$ is less than $m^{2\gd}$.
To present the algorithm, construct a random partition $X_1,
..., X_{m^{1/2+2\gd}}$ of $X$ so that each vertex $x\in X$ is
equally likely to be in each $X_j$, $j=1,...,m^{1/2+2\gd}$, independently of
all other vertices. Similarly, construct a random partition $Y_1,
..., Y_{m^{1/2+2\gd}}$ of $Y$.
\begin{lemma}\label{random2} Under the same hypotheses as in Theorem \ref{gfp}, if the maximum degree of $G$ is less than $m^{2\gd}$, then, with probability $1-(1+o(1)){m^{-\gd}}$, the following hold.
\mn
(a) For each $i=1,...,m^{1/2+2\gd}$, $|N(X_i) |\leq 2m^{1/2-2\gd}$, where $N (X_i) :=\{ y\in Y:y\sim x ~\mbox{for some $x\in X_i$}\}$.
\mn (b) For each $i=1,...,m^{1/2+2\gd}$ and $y\in Y$, $d(y; X_i) \leq 3$, where $d(y;X_i):=
|\{ x\in X_i : x\sim y \}|$.
\mn (c) For each $i=1,...,m^{1/2+2\gd}$, the number of vertices $y\in Y$ with $d(y;X_i)\geq 2$ is at most $m^{5\gd}$.
\mn (d) The statements (a)-(c) hold after the roles of $X$ and $Y$ are switched.
\mn (e) Except for $3m^{1-3\gd}$ edges, every edge is a unique edge between $X_i$ and $Y_j$ for some pair $i,j$.
\end{lemma}
\proof Let $p=m^{-1/2-2\gd}$. Then, $\pr[x\in X_i] =p$ for all $x$ and $i$. It is enough to show that (a)-(c) hold with probability $1-o(m^{-\gd})$ and (e) holds with probability $1-m^{-\gd}$.
For (a), as
$$ E[ |N(X_i)|] = \sum_{y\in Y} \Big(1-\pr[ X_i \cap N(y) =\emptyset]\Big)
=\sum_{y\in Y} \Big( 1-(1-p)^{d(y)}\Big) \leq
\sum_{y\in Y} pd(y) \leq pm = m^{1/2-2\gd}, $$
the generalized martingale inequality
(Lemma \ref{gm}) with $p=m^{-1/2-2\gd}$, $c_{_x} = d(x)$, $\lambda =
m^{1/2-2\gd}$, and $\rho=m^{-2\gd}/2$, gives that
$$ \Pr\Big[ |N(X_i)| \geq 2m^{1/2-2\gd} \Big] \leq 2\exp\Big(
-\frac{m^{1/2-4\gd}}{2} +\frac{m^{-1/2-6\gd}}{8} \sum_{x\in X }
(d(x))^2 e^{m^{-2\gd}d(x)/2}\Big).$$ Since $e^{m^{-2\gd}d(x)/2} \leq
e^{1/2}\leq 2$ and $\sum_{x\in X } (d(x))^2 \leq m^{2\gd} \sum_{x\in X }
d(x) = m^{1+2\gd}$, we have
$$ \Pr\Big[ |N (X_i)| \geq 2m^{1/2-2\gd} \Big] \leq 2\exp\Big(
-\frac{m^{1/2-4\gd}}{4}\Big), $$
and
$$ \Pr\Big[\, \exists\, i~~ {\rm s.t.} ~ ~ |N (X_i)| \geq 2m^{1/2-2\gd} \Big] \leq 2m^{1/2+2\gd}\exp\Big(
-\frac{m^{1/2-4\gd}}{4}\Big)=o(m^{-\gd}). $$
For (b),
$$ \pr [ d(y; X_i) \geq 4] \leq { d(y) \choose 4} p^4
\leq \frac{(pd(y))^4}{24}. $$
Thus, the probability that there is a pair $y,j$ such that
$d(y; X_j) \geq 4$ is at most
$$ \sum_{j=1}^{m^{1/2+2\gd}}\sum_{y\in Y} \frac{(pd(y))^4}{24}
\leq \frac{p^4 m^{1/2+2\gd} m^{6\gd}}{24}\sum_{y\in Y} d(y)
\leq \frac{m^{-2-8\gd} m^{1/2+2\gd} m^{1+6\gd}}{24}=
\frac{1}{24m^{1/2}}=o(m^{-\gd}). $$
\newcommand{\0}{\emptyset}
For (c), suppose the number $Z_i$ of vertices $y\in Y$ with
$d(y; X_i)\geq 2$ is more than $m^{5\gd}$. Then there are distinct
vertices $y_1,..., y_{m^{\gd}}$ in $Y$ with $d(y_j; X_i)\geq 2$,
$j=1,..., m^{\gd}$, such that $N(y_j)\cap N(y_k)=\0$
for all distinct pairs $j,k=1,..., m^{\gd}$.
This is possible since each fixed $y\in Y$ satisfies
$N(y)\cap N(y')\not=\0$ for at most $m^{4\gd}-1$ vertices $y'\in Y$.
As $r!\geq (\frac{r}{e})^r$ and $(d(y_j))^2 \leq m^{2\gd}d(y_j)$,
$$ \pr[ Z_i > m^{5\gd}] \leq \frac{1}{m^{\gd}!}
\sum_{y_{_1},..., y_{m^{\gd}}}\prod_{j=1}^{m^{\gd}} p^2{d(y_j) \choose 2}
\leq \Big(\frac{ep^2m^{2\gd}}{2m^{\gd}}\Big)^{m^{\gd}}
\sum_{y_1,..., y_{m^{\gd}}} \prod_{j=1}^{m^{\gd}} d(y_j)
\leq \Big(\frac{ep^2m^{\gd}m}{2}\Big)^{m^{\gd}}
$$
and
$$ \pr[\, \exists \, i ~~{\rm s. t.}~~ Z_i > m^{5\gd}]
\leq m^{1/2+2\gd} \Big(\frac{e}{2m^{3\gd}}\Big)^{m^\gd} =o(m^{-\gd}).$$
For (e), the probability that an edge $e=xy$ is not a unique edge between any pair of $X_i$ and $Y_j$ is
$$ \sum_{i,j=1}^{m^{1/2+2\gd}}
\Pr[ (x,y) \in X_i\times Y_j ] \pr\Big[ \mbox{$\exists$ edge between $X_i$ and $Y_j$ other than $e$}\Big|(x,y) \in X_i\times Y_j
\Big].
$$
Since the conditional probability is at most
$$
(d(x)-1)p + (d(y)-1)p + (m-d(x)-d(y)+1)p^2 \leq 2m^{2\gd} p +
mp^2 \leq 3m^{-4\gd}
$$
and $ \sum_{i,j=1}^{m^{1/2+2\gd}}
\Pr[ (x,y) \in X_i\times Y_j ] =1$, the number $W$ of edges that are not a unique edge between any pair of $X_i$ and $Y_j$ is at most $3m^{1-4\gd}$ in expectation.
Markov's inequality implies that $$ \pr[ W \geq 3 m^{1-3\gd}] \leq m^{-\gd}. $$
\mn
\qed
\mn
The next algorithm finds all edges of $G$ when the maximum degree of $G$ is less than $m^{2\gd}$.
\mn
{\bf Algorithm A} (i)
For each $i$, $i=1,..., m^{1/2+2\gd}$, regarding each
$y\in Y$ as a coin with weight $w_i (y):=w_G (X_i,y)=\sum_{x\in X_i} w_{G} (x,y)$,
apply the coin weighing algorithm in Corollary \ref{cwc} to find all counterfeit coins with parameters $(m, n, \ga, \gb,\eps, \mu) $ replaced by $(2m^{1/2-2\gd}, n, \ga, 3\gb, m^{-1/2+7\gd}, \frac{4}{1-4\gd})$.
Let $N_0 (X_i)$ be the set of all counterfeit coins found, $i=1,..., m^{1/2+2\gd}$. Do the same for $Y_j$ and let $N_0 (Y_j)$ be the set of all counterfeit coins found, $j=1,..., m^{1/2+2\gd}$.
\mn (ii) For all pairs $i,j=1,..., m^{1/2+2\gd}$ with $|N_0 (X_i) \cap
Y_j |= |X_i \cap N_0 (Y_j)|=1$, take $y\in N_0 (X_i) \cap Y_j$ and
$x\in X_i \cap N_0 (Y_j)$ and weigh the possible edge $xy$ to obtain
$w_{G} (x,y)$. For each pair $xy$ with $w_{G} (x,y)\not=0$, declare that $xy$ is an edge of $G$.
\mn (iii) Find remaining edges one by one by applying the randomized
binary search using no more than $(6\lceil \log n \rceil+15) m^{1-3\gd} $ queries.
\mn
\mn
For the correctness and the query complexity of the algorithm, we prove the following lemma.
\begin{lemma}\label{aa} Under the same hypotheses as in Theorem \ref{gfp}, if the maximum degree of $G$ is less than $m^{2\gd}$, then, with probability $1-(1+o(1)){m^{-\gd}}$, Algorithm A asks $O(\frac{ m \log (\gb/\ga) \log n}{\log m})$ queries to find all edges of $G$.
\end{lemma}
\proof Suppose (a)-(e) of Lemma \ref{random2} hold. First, we show that
the parameters $(2m^{1/2-2\gd}, n, \ga, 3\gb, m^{-1/2+7\gd})$
satisfy all the requirements in Corollary \ref{cwc}.
If $y$ is counterfeit, then $w_i (y)= w_G(X_i,y) \not= 0$.
This gives $y\in N(X_i)$ and hence the number of counterfeit coins
is at most $|N(X_i)| \leq 2m^{1/2-2\gd}$ by (a) of Lemma \ref{random2}.
The number of all coins is $|Y|\leq n$.
If $y\sim x$ for only one $x \in X_i$, then $|w_i(y)|= |w_G(x,y)|\geq \ga$.
Thus, $0<|w_i(y)|< \ga$ implies $d(y;X_i)\geq 2$. The number of such $y\in Y$
is at most $m^{5\gd}= m^{-1/2+7\gd}\cdot 2m^{1/2-2\gd}$ by (c) of
Lemma \ref{random2}. Since $d(y;X_i)\leq 3$ by (b) of Lemma \ref{random2}, $|w_i(y)|\leq \sum_{x\in X_i} |w(x,y)| \leq 3\gb$. Therefore, the algorithm finds the set $N_{0} (X_i)$ of all counterfeit coins, with probability $1-O(m^{-2})$ for each $X_i$. Similarly, the algorithm finds the set $N_{0} (Y_j)$ of all counterfeit coins, with probability $1-O(m^{-2})$ for each $Y_j$. Since there are $2m^{1/2+2\gd}$ sets $X_i$ and $ Y_j$,
$N_{0} (X_i)=\{ y\in Y: w_{i} (y)\not=0\} $ and
$N_{0} (Y_j)=\{ x\in X: w_{j} (x)\not=0\} $,
with probability $1-O(1/m)$.
If $e=xy$ is a unique edge between $X_i$ and $Y_j$, then $|w_i (y)|, |w_j (x)|\geq \ga $, especially, $y\in N_0( X_i)$ and $x\in N_0 (Y_j)$. Moreover, as there is no other edge between $X_i$ and $Y_j$, $N_0 (X_i) \cap
Y_j =\{y\}$ and $X_i \cap N_0 (Y_j)=\{ x\}$. Thus, the algorithm finds the edge $e=xy$ in (ii).
By (e) of Lemma \ref{random2}, at most $3m^{1-3\gd}$ edges remain unfound in (ii).
All the remaining edges can be found in (iii) with probability $1-e^{-\Omega(m^{1-3\gd})}$ by Lemma \ref{rbs2}.
For the query complexity, in (i), $O(\frac{ m^{1/2-2\gd} \log (\gb/\ga) \log n}{\log m})$ queries are enough for each $X_i$ or $Y_j$. As there are $2m^{1/2+2\gd}$ such sets, $O(\frac{ m \log (\gb/\ga) \log n}{\log m})$ queries are enough in (i). In (ii), if $|N_0 (X_i) \cap
Y_j |= |X_i \cap N_0 (Y_j)|=1$, then there is at least one edge between $X_i$ and $Y_j$.
As there are at most $m$ such pairs $X_i, Y_j$, $m$ queries are enough in (ii).
Since $o(\frac{m\log n}{\log m})$ queries are asked in (iii), the query complexity of the algorithm is
$O(\frac{ m \log (\gb/\ga) \log n}{\log m})$.
\mn\qed
\mn
For general graphs, select each vertex of $Y$ with probability
$m^{-\gd}$, $\gd=0.05$, independently of all other vertices. Let $G_1$ be the induced graph on $X$ and the selected
vertices of $Y$.
\begin{lemma}\label{random3} If $G$ has at most $m$ edges, then the following hold with probability $1-O(m^{-\gd/2})$.
\mn
(a) The number of edges in $G_1$ is at most $m^{1-\gd/2}$.
\mn (b) If $d_{G_1} (x) \geq m^\gd/2$, then $
d_G (x) \leq 2m^\gd d_{G_1} (x)\leq 3d_{G} (x). $
\mn (c) If $d_{G} (x) \geq m^{2\gd}$, then $d_{G_1} (x) \geq m^\gd/2$.
\end{lemma}
\proof As each edge of $G$ is in $G_1$ with probability $m^{-\gd}$,
the expected number of edges in $G_1$ is at most $m^{1-\gd}$.
Markov's inequality gives
$$ \Pr[ \mbox{the number of edges in $G_1$} \geq m^{1-\gd/2} ] \leq
m^{-\gd/2}. $$
For the degree $d_{G_1} (x)$ of $x$ in $G_1$, as $E[d_{G_1}(x)] = m^{-\gd} d_G(x)$, Lemma \ref{gm} with $c_y =1$ if $y\sim x$ and $c_y =0$
otherwise, $\lambda =\frac{m^{\gd} }{4}$, $\rho=1/2$ gives, for $x\in X$ with $d_G(x) < \frac{m^{2\gd}}{4}$,
$$ \Pr \Big[ |d_{G_1} (x)-m^{-\gd}d_G (x)| \geq \frac{ m^{\gd}}{4}\Big] \leq
2 \exp \Big(-\frac{m^{\gd}}{8}+ \frac{e^{1/2} m^{-\gd}
d_G (x)}{8} \Big) \leq 2\exp \Big(-\frac{m^{\gd} }{16}\Big).
$$
In particular, if $d_G(x) < \frac{m^{2\gd}}{4}$, then
$d_{G_1} (x)-m^{-\gd}d_G (x) < m^\gd/4$, or equivalently,
$d_{G_1} (x) < m^\gd/4+m^{-\gd}d_G (x)< m^\gd/2$, with probability $1-e^{-\Omega(m^\gd)}$.
For (b), it is now enough to show that $
d_G (x) \leq 2m^\gd d_{G_1} (x)\leq 3d_{G} (x)$ when $d_G(x) \geq \frac{m^{2\gd}}{4}$, say, with probability $1-e^{-\Omega(m^{\gd})}$. Lemma \ref{gm} with $c_y =1$ if $y\sim x$ and $c_y =0$
otherwise, $\lambda =\frac{m^{-\gd} d_G(x)}{2}$, $\rho=1/3$ also gives
$$ \Pr \Big[ |d_{G_1} (x)-m^{-\gd}d_G (x)| \geq \frac{ m^{-\gd}d_G (x)}{2}\Big] \leq
2 \exp \Big(-\frac{m^{-\gd} d_G(x)}{6}+ \frac{e^{1/3} m^{-\gd}
d_G (x)}{18} \Big) \leq 2\exp \Big(-\frac{m^{-\gd} d_G (x)}{12}\Big),
$$
for $e^{1/3} \leq 3/2$. If $d_G (x)\geq m^{2\gd}/4$, we have
$ |2m^{\gd} d_{G_1}(x) -2d_{G}(x)| \leq d_G (x)$, or equivalently,
$ d_G (x)\leq 2m^{\gd} d_{G_1}(x) \leq 3d_{G}(x) $,
with probability $1-e^{-\Omega(m^{\gd})}$.
Moreover, if
$d_G (x)\geq m^{2\gd}$, then $2m^{\gd} d_{G_1}(x) \geq d_G (x) \geq m^{2\gd}$. That is, $d_{G_1}(x)\geq m^{\gd}/2$, which shows (c).
\mn
\qed
\bn
\mn
{\bf Algorithm B}
(i) Apply the randomized
binary search to find edges of $G_1$ one by one, using $(2\lceil \log n \rceil +5)m^{1-\gd/2}$ queries. Let $G_2$ be the graph on $X\cup Y$ consisting of all edges found.
\mn (ii) For each vertex $x\in X$ with $ d_{G_2} (x) \geq m^\gd /2$, regard each $y\in Y$ as a coin with weight $w_x (y):=w_G (x,y)$ and apply the coin weighing algorithm in Corollary \ref{cwc} with parameters $(m, n, \ga, \gb,\eps, \mu) $ replaced by $(2m^{\gd} d_{G_2} (x), n, \ga, \gb,0, 1/\gd)$. The vertices $x\in X$ with $ d_{G_2} (x) \geq m^\gd /2$ are called vertices of large degree.
\mn (iii)
Output vertices of large degree and all edges found.
\mn
\mn
Algorithm B has the following property.
\begin{lemma}\label{ab} Under the same hypotheses as in Theorem \ref{gfp}, with probability $1-O(m^{-\gd/2})$,
Algorithm B uses $O(\frac{m \log (\gb/\ga) \log n}{\log m})$ queries to find all vertices $x\in X$ with $d_G(x) \geq m^{2\gd}$ and all edges containing them.
\end{lemma}
\proof Suppose (a) and (b) of Lemma \ref{random3} hold. Then
Lemma \ref{rbs2} yields $G_2=G_1$ with probability
$1-e^{-\Omega(m^{1-\gd/2})}$. We assume
that $G_1=G_2$ in the rest of the proof.
In (ii), note that the number of counterfeit coins for
$x$ is $d_{G} (x)$, which is at most $2m^{\gd } d_{G_2} (x) $
for all $x$ with $d_{G_1} (x)=
d_{G_2} (x)\geq m^{\gd}/2$, by (b) of Lemma \ref{random3}.
Thus, the algorithm in Corollary \ref{cwc} finds $N_G (x)$
for each $x\in X$ satisfying $d_{G_2}(x) \geq m^{\gd}/2$,
with probability $1- O(1/(2m^{\gd } d_{G_2} (x))^{1/\gd})=1- O(1/m^2)$.
As $d_G(x)\geq d_{G_2}(x) $, there are at most $2m^{1-\gd} $
vertices $x \in X$ with $d_{G_2}(x) \geq m^{\gd}/2$ and the algorithm
finds $N_G (x)$ for all such vertices $x\in X$, with probability $1-O(1/m)$.
In particular, if $d_G(x) \geq m^{2\gd}$, then $d_{G_2}(x) \geq
m^{\gd}/2$ by (c) of Lemma \ref{random3} and hence $N_G(x)$ are found.
For the query complexity, $(2\lceil \log n \rceil +5)m^{1-\gd/2}$ queries are asked in (i). In (ii), $O(\frac{ m^{\gd}d_{G_2} (x) \log (\gb/\ga) \log n}{\log m})$ queries are asked for each $x\in X$ with $d_{G_2} (x) \geq m^{\gd}/2$. On the other hand, $d_{G_2} (x) \geq m^{\gd}/2$ implies $2m^{\gd}d_{G_2} (x)\leq 3d_{G} (x)$ by (b) of Lemma \ref{random3}. Thus,
$$\sum_{x: d_{G_2} (x) \geq m^{\gd}/2} m^{\gd}d_{G_2} (x)
\leq \frac{3}{2} \sum_{x\in X} d_{G} (x)= \frac{3m}{2} $$
gives that $O(\frac{m \log (\gb/\ga) \log n}{\log m})$ queries are asked in (ii).
\mn
\qed
\bn
To find all vertices $v$ in $G$ with $d_G (v) \geq m^{2\gd}$, one may
apply Algorithm B twice, once as it is and once after exchanging the roles
of $X$ and $Y$. Then, after removing all vertices found (and all edges
containing any of them), we apply Algorithm A.
Lemmas \ref{aa} and \ref{ab} imply that
\begin{corollary}
Under the same hypotheses as in Theorem \ref{gfp}, there is a polynomial time
randomized algorithm asking $O(\frac{ m \log (\gb/\ga) \log n}{\log m})$ queries
to find all edges of $G$, with probability $1-O(1/m^{0.02})$.
\end{corollary}
If the algorithm in the corollary is forced to stop once
it has asked $\frac{ \eta m \log (\gb/\ga) \log n}{\log m}$ queries,
where $\eta$ is the constant in the $O(\frac{ m \log (\gb/\ga) \log n}{\log m})$
term,
the desired algorithm in Theorem \ref{gfp} is obtained.
\mn
\mn
\section{Concluding Remarks}
In this paper, we presented a polynomial time randomized algorithm that
uses $O(\frac{ m \log (\gb/\ga) \log n}{\log m})$ queries, when there are
at most $m$ counterfeit coins and the weights $w(c)$ of all
counterfeit coins
satisfy $\ga\leq |w(c)|\leq \gb$. This plays a key role in finding a hidden
weighted graph $G$ satisfying similar conditions. Though there is a non-adaptive algorithm to find all counterfeit coins using $O(\frac{ m \log n}{\log m})$ queries \cite{BM11_TCS}, it is not a polynomial time algorithm.
An obvious question is whether there is a polynomial time algorithm to find all
counterfeit coins using $O(\frac{ m \log n}{\log m})$ queries when there is
no restriction on the weights.
The algorithm we presented is a randomized algorithm that uses the optimal
number of queries up to a constant factor. On the other hand, the best known
deterministic algorithm uses $\Theta (\frac{m\log n}{\log m} + m\log\log m)$ queries
(see \cite{BM10_MFCS}); it would be good to design a deterministic
polynomial time algorithm that uses $O(\frac{ m \log n}{\log m})$
queries even when the weights of counterfeit coins are positive real numbers.
\bibliographystyle{plain}
\section*{Abstract}
Within systems biology there is an increasing interest in the stochastic behavior of genetic and biochemical reaction networks. An appropriate stochastic description is provided by the chemical master equation, which represents a continuous time Markov chain (CTMC). In this paper we consider the stochastic properties of a biochemical circuit, known to control the eukaryotic cell cycle and possibly involved in oncogenesis, recently proposed in the literature within a deterministic framework.
Due to the inherent stochasticity of biochemical processes and the small number of molecules involved, the stochastic approach should be more correct in describing the real system: we study the agreement between the two approaches by exploring the system parameter space. We address the problem by proposing a simplified version of the model that allows analytical treatment, and by performing numerical simulations for the full model. We observed optimal agreement between the stochastic and the deterministic description of the circuit in a large range of parameters, but some substantial differences arise in at least two cases: 1) when the deterministic system is in the proximity of a transition from a monostable to a bistable configuration, and 2) when bistability (in the deterministic system) is ``masked'' in the stochastic system by the distribution tails. The approach provides interesting estimates of the optimal number of molecules involved in the toggle. Our discussion of the points of strength, potentiality and weakness of the chemical master equation in systems biology and the differences with respect to deterministic modeling are leveraged in order to provide useful advice for both the bioinformatician practitioner and the theoretical scientist.
Keywords: Toggle switch, chemical master equation, stochastic versus deterministic, biochemical reactions
\section*{Introduction}
Complex cellular responses are often modeled as switching between phenotype states, and despite the large body of deterministic studies and the increasing work aimed at elucidating the effect of intrinsic and extrinsic noise in such systems, some aspects still remain unclear. Molecular noise arises from the randomness of the discrete events in the cell (for example, DNA mutations and repair), and experimental studies have reported the presence of stochastic mechanisms in cellular processes such as gene expression \cite{McAdams,Elowitz,Kepler}, decisions of the cell fate \cite{Arkin2}, and circadian oscillations \cite{Barkai}. In particular, low copy numbers of important cellular components and molecules give rise to stochasticity in gene expression and protein synthesis, and this is a fundamental aspect to be taken into account when studying such biochemical models \cite{Arkin,Cast09}. In this paper, we consider a simplified circuit that is known to govern a fundamental step of the eukaryotic cell cycle that defines cell fate, previously studied by means of a deterministic modeling approach \cite{Aguda}.
Let us set the scene by recalling that ``all models are wrong, but some are useful'' (said by George Edward Pelham Box, who was the son-in-law of Ronald Fisher). Biologists make use of qualitative models through graphs; quantitative modeling in biochemistry has been mainly based on the Law of Mass Action, which has been used to frame the entire kinetic modeling of biochemical reactions for individual enzymes and for enzymatic reaction network systems \cite{Lund}. The state of the system at any particular instant is therefore regarded as a vector (or list) of amounts or concentrations, and the changes in amount or concentration are assumed to occur by a continuous and deterministic process that is computed using the ordinary differential equation (ODE) approach. However, the theory based on the Law of Mass Action does not consider the effect of fluctuations.
If the number of molecules is not large (i.e., not on the order of Avogadro's number), we cannot ignore fluctuations. Moreover, biological systems also show heterogeneity, which occurs as a phenotypic consequence for a cell population given stochastic single-cell dynamics (when the population is not isogenic and in the same conditions). From a practical point of view, for concentrations greater than about 10 nM we are safe using ODEs; considering a cell with a volume of $10^{-13}$ liters, this corresponds to thousands of molecules which, under a Poissonian hypothesis, carry an uncertainty on the order of $1\%$. If the total number of molecules of any particular substance, say, a transcription factor, is less than 1,000, then a stochastic differential equation or a Monte Carlo model would be more appropriate. As in the deterministic case, only simple systems are analytically tractable in the stochastic approach, i.e. the full probability distribution for the state of the biological system over time can be calculated explicitly only for such systems, and the computation becomes infeasible for systems with distinct processes operating on different timescales.
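As a back-of-the-envelope check of the numbers above, the following sketch (our illustration; the exact counts depend on the assumed cell volume) computes molecule counts and the Poisson relative fluctuation $\sqrt{N}/N$.

```python
# Order-of-magnitude estimate of molecule counts and Poisson fluctuations
# for a cell of volume 1e-13 L.

N_A = 6.022e23            # Avogadro's number, molecules per mole
VOLUME_L = 1e-13          # assumed cell volume in liters

def molecule_count(conc_molar):
    return conc_molar * VOLUME_L * N_A

def relative_fluctuation(n):
    # For a Poisson-distributed count, std/mean = sqrt(n)/n = n**-0.5.
    return n ** -0.5

for conc in (100e-9, 10e-9, 1e-9):     # 100 nM, 10 nM, 1 nM
    n = molecule_count(conc)
    print(f"{conc:.0e} M -> {n:.0f} molecules, "
          f"~{100 * relative_fluctuation(n):.1f}% fluctuation")
```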
An active area of research is the development of approximate stochastic simulation algorithms. As recently remarked by Wilkinson, the difference between an 'approximate' and an 'exact' model is usually remarkably smaller than the difference between the 'exact' model and the real biological process \cite{Wilkinson}.
One can see this either as an unsatisfactory state of the art or as a promising advancement; the methodological approaches can be summarised as follows. Biochemical networks
have been modeled using differential equations when considering continuous variables changing with time, using stochastic differential equations (SDE) for the trajectories of continuous random variables changing with time, or using the Gillespie algorithm, which can be thought of as an algorithm for the trajectories of discrete random variables changing with time. For a random variable changing with time one can either characterize it by its stochastic trajectories, as with the SDE and the Gillespie algorithm, or one can characterize its probability distribution as a function of time. The corresponding equation for the SDE is the Fokker-Planck equation, and the corresponding equation for the Gillespie algorithm is called the chemical master equation (CME) \cite{Liang}.
Therefore, the chemical master equation is a Markov chain in the limit where time is continuous, and it can be thought of as the mesoscopic version of the Law of Mass Action, i.e. it extends the Law of Mass Action to mesoscopic chemistry and biochemistry; see for example \cite{Qian,Wolf}.
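As an illustration of the trajectory view, the following minimal sketch of Gillespie's direct method simulates a birth-death process (an assumed toy example, not the circuit studied in this paper) whose CME has a Poisson stationary distribution with mean $k/g$.

```python
import random

# Gillespie's direct method for the birth-death process:
#   0 -> X at rate k,   X -> 0 at rate g*n.
# The CME stationary distribution is Poisson with mean k/g.

def gillespie_birth_death(k, g, n0, t_end, rng):
    t, n = 0.0, n0
    while True:
        a_birth, a_death = k, g * n            # reaction propensities
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)          # exponential waiting time
        if t > t_end:
            return n
        # Pick the reaction with probability proportional to its propensity.
        if rng.random() * a_total < a_birth:
            n += 1
        else:
            n -= 1

rng = random.Random(1)
samples = [gillespie_birth_death(k=50.0, g=1.0, n0=0, t_end=20.0, rng=rng)
           for _ in range(200)]
mean = sum(samples) / len(samples)
print(f"sample mean after relaxation: {mean:.1f} (CME stationary mean: 50)")
```

Averaging many such trajectories recovers the time-dependent probability distribution governed by the CME, which is the connection exploited throughout this paper.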
Here we compare the results of a stochastic versus deterministic analysis of a microRNA-protein toggle switch involved in tumorigenesis, with the aim of identifying the most meaningful amount of information to discriminate between cancer and healthy states.
We show that the stochastic counterpart of such a deterministic model has many commonalities with the deterministic one, but some differences arise, in particular regarding the number of stable states that can be explored by the system. The disagreement between the stochastic and deterministic descriptions is observed in a ``ghost'' effect caused by the proximity to a deterministic bifurcation \cite{Strogatz}, and in a somewhat opposite situation, in which the variance of the stable point can mask the detection of the second peak in the stationary distribution.
In this paper we perform a numerical study of the complete two-dimensional model, but we also consider a simplified, biologically meaningful version of the model that allows us to calculate an exact solution, with a numerical characterization of the parameter ranges in which the two systems produce qualitatively similar results.
\section{Properties of a microRNA toggle switch}
Oncogenes and tumor-suppressor genes are two pivotal factors in tumorigenesis. Recent evidence indicates that microRNAs (miRNAs) can function as tumor suppressors and oncogenes; such miRNAs are referred to as 'oncomirs'. MiRNAs are small, non-coding RNAs that modulate the expression of target mRNAs. The biogenesis pathway of miRNAs in animals was elucidated by \cite{Bartel}. MiRNAs undergo substantial processing after nuclear transcription, in which two proteins play an essential role: Drosha and Dicer. Most miRNAs are first processed into pre-miRNA by Drosha. After being exported to the cytoplasm, the pre-miRNA is processed by Dicer into a small double-stranded RNA (dsRNA) called the miRNA:miRNA duplex. The active strand, which is the mature miRNA, is incorporated into the RISC and binds to the target mRNA, whereas the inactive strand is ejected and degraded. In normal tissue, proper regulation of miRNAs maintains a normal rate of development, cell growth, proliferation, differentiation and apoptosis. Tumorigenesis can be observed when the target gene is an oncogene: the loss of the miRNA, which functions as a tumor suppressor, may lead to a high expression level of the oncoprotein. When a miRNA functions as an oncogene, its constitutive amplification or overexpression can cause repression of its target gene, which acts as a tumor suppressor; in this situation the cell is likely to enter tumorigenesis. MiRNAs are often part of toggle switches, with important examples being gene pairs built from oncogenes and tumor suppressor genes \cite{Lim,Lu}.
Here we focus on the amplification of 13q31-q32, the locus of the miR-17-92 cluster. The miR-17-92 cluster forms a bistable switch with Myc and the E2F proteins \cite{ODonnell,Bueno,Aguda}.
The oncogene Myc regulates an estimated 10\% to 15\% of genes in the human genome, and the dysregulated function of Myc is one of the most common abnormalities in human malignancy \cite{Coller,Remondini}. The other component of the toggle is the E2F family of transcription factors, including E2F1, E2F2 and E2F3, all driving mammalian cell cycle progression from G1 into S phase. High levels of E2Fs, E2F1 in particular, can induce apoptosis in response to DNA damage. The toggle also interacts with dozens of genes (figure \ref{fig:comptog} depicts a portion), particularly with Rb and other key cell cycle players. A summary of the experiments perturbing miRNA/Myc/E2F and E2F/Rb behaviours suggests the following:
\begin{itemize}
\item The Rb/E2F toggle switch is OFF when RB inhibits E2F, i.e. stopping cell proliferation; it is ON when E2F prevails and induces proliferation. Once turned ON by sufficient stimulation, E2F can memorize and maintain this ON state independently of continuous serum stimulation.
\item
The proteins E2F and Myc facilitate the expression of each other and the E2F protein induces the expression of its own gene (positive feedback loop). They also induce the transcription of microRNA-17-92 which in turn inhibits both E2F and Myc (negative feedback loop) \cite{Sylvestre}.
\end{itemize}
Moreover, increasing levels of E2F or Myc drive the sequence of cellular states, namely quiescence, cell proliferation (cancer) or cell death (apoptosis).
\begin{figure}[ht]
\label{fig:comptog} \centering
\includegraphics[scale=0.8]{comptog.png}
\caption{The E2F-MYC-miR-17-92 toggle switch with its biochemical environment}
\end{figure}
Although there is an increasing amount of research on cell cycle regulation, the mathematical description of even a minimal portion of the E2F, Myc and miR-17-92 toggle switch is far from trivial. Aguda and collaborators \cite{Aguda} developed a deterministic model which reduces the full biochemical network of the toggle switch to a protein (representing the E2F-Myc compound) and the microRNA-17-92 cluster (seen as a single element).
It is a two-dimensional open system, in which $p$ represents the E2F-Myc complex and $m$ the miRNA cluster; being open, no conservation law holds, i.e. $p+m\neq N$.
The dynamics of the $p$ and $m$ concentrations are described by eqs. \ref{aguda1} and \ref{aguda2}:
\begin{equation}
\dot{p} = \alpha + \frac{k_1 \cdot p^2}{\Gamma_1 + p^2 + \Gamma_2\cdot m} - \delta\cdot p
\label{aguda1}
\end{equation}
\begin{equation} \dot{m} = \beta + k_2\cdot p -\gamma\cdot m
\label{aguda2}
\end{equation}
The system can be rewritten in nondimensional form as follows:
\begin{equation} \epsilon \dot{\phi} = \alpha' + \frac{k \cdot {\phi}^2} {{\Gamma'}_1 + {\phi}^2 + {\Gamma'}_2\cdot {\mu}} - \phi
\label{aguda3}
\end{equation}
\begin{equation} \dot{\mu} = 1 + \phi - \mu
\label{aguda4}
\end{equation}
where the parameters are:
$\alpha'=\frac{k_2}{\delta \cdot \beta} \alpha$,
$k=\frac{{k_1}{k_2}}{\delta \beta}$,
${\Gamma'_1}=\frac{k_{2}^2}{\beta^2}\Gamma_1$,
${\Gamma'_2}=\frac{k_{2}^2}{{\beta}\gamma}\Gamma_2$,
$\epsilon=\frac{\gamma}{\delta}$ and the change of variables is:
$\phi= \frac{k_2}{\beta} p$, $\mu=\frac{\gamma}{\beta}m$ and $\tau=\gamma t$.
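As a sanity check of the change of variables, one can integrate the dimensional system (eqs. \ref{aguda1}-\ref{aguda2}) and its dimensionless form (eqs. \ref{aguda3}-\ref{aguda4}) side by side with matched Euler steps and verify that $\phi = (k_2/\beta)p$ and $\mu = (\gamma/\beta)m$ hold along the trajectory. The sketch below does so in plain Python; the parameter values are illustrative, not fitted to the biology.

```python
# Sanity check of the nondimensionalization: integrate the dimensional
# system and its dimensionless form with matched Euler steps, then
# verify phi = (k2/beta)*p and mu = (gamma/beta)*m. Parameter values
# are illustrative only.
alpha, beta, delta, gamma, k1, k2, G1, G2 = 1.0, 1.0, 0.5, 2.0, 1.0, 2.0, 1.0, 1.0

# Derived dimensionless parameters, as defined in the text
eps = gamma / delta
alpha_p = k2 * alpha / (delta * beta)
k = k1 * k2 / (delta * beta)
G1_p = (k2 / beta) ** 2 * G1
G2_p = k2 ** 2 / (beta * gamma) * G2

dt = 1e-3                       # step in t; the step in tau is gamma*dt
p, m = 0.5, 0.5
phi, mu = (k2 / beta) * p, (gamma / beta) * m
for _ in range(2000):           # integrate up to t = 2 (tau = 4)
    dp = alpha + k1 * p**2 / (G1 + p**2 + G2 * m) - delta * p
    dm = beta + k2 * p - gamma * m
    dphi = (alpha_p + k * phi**2 / (G1_p + phi**2 + G2_p * mu) - phi) / eps
    dmu = 1.0 + phi - mu
    p, m = p + dt * dp, m + dt * dm
    phi, mu = phi + gamma * dt * dphi, mu + gamma * dt * dmu

assert abs(phi - (k2 / beta) * p) < 1e-9
assert abs(mu - (gamma / beta) * m) < 1e-9
```

The two trajectories agree to rounding error, confirming that the dimensionless parameters above are consistent with the original equations.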
In the original model \cite{Aguda}, the rate of protein synthesis is not a function of the instantaneous concentration (as assumed in eq. \ref{aguda3}) but rather of its concentration at some time ${\Delta}$ in the past:
\begin{equation}
\epsilon \dot{\phi} = \alpha' + \frac{k[{\phi}(\tau-\Delta)]^2} {{\Gamma'}_1 + [{\phi}(\tau-\Delta)]^2 + {\Gamma'}_2\cdot {\mu}(\tau-\Delta)} - \phi(\tau).
\end{equation}
We will not consider such a delay in our stochastic realization of the model, since it would increase the system dimensionality and it does not seem necessary to obtain the features we want to characterize.
The steady states can be studied in the nondimensionalized system, and hence the conditions on the parameters for the existence of multiple steady states. Substituting the steady state $\mu = 1+\phi$ gives the cubic equation:
\begin{equation}
\alpha' + \frac{k{\phi}^2} {{\Gamma'}_1 + {\phi}^2 + {\Gamma'}_2\cdot (1+\phi)} - \phi = 0
\label{agudaCubic}
\end{equation}
the necessary (but not sufficient) conditions for the existence of 3 steady states (and thus a bistable system) are:
\begin{equation}
(\Gamma'_2 - k) < \alpha' < \bigg(1+\frac{\Gamma'_1}{\Gamma'_2}\bigg)
\end{equation}
\section{The stochastic modeling approach}
The system represented by equations \ref{aguda1} and \ref{aguda2} can be studied as a stochastic system through the Chemical Master Equation (CME) approach \cite{VanKampen}.
The resulting CME has two variables, the number of $p$ and $m$ molecules, labeled as $n$ and $m$.
The temporal evolution of the probability $p_{n,m}(t)$ to have $n$ and $m$ molecules at time $t$ is described by the following equation:
\begin{equation}
\begin{split}
\dot{p}_{n,m} = {}& (\mathbb E_n -1)\, r^n p_{n,m}+ (\mathbb E^{-1}_n-1)\, g^n p_{n,m}\\
&+(\mathbb E_m -1)\, r^m p_{n,m}+ (\mathbb E^{-1}_m-1)\, g^m p_{n,m}
\end{split}
\end{equation}
This CME is derived under the assumption of a one-step Poisson process; $\mathbb E$ and $\mathbb E^{-1}$
are the forward and backward step operators, and $g$ and $r$ are the generation and recombination terms for the $n$ and $m$ variables, as indicated by the superscripts.
The two generation and recombination terms associated with the $n$ and $m$ variables are respectively:
\begin{equation}
g^n= \alpha + \frac{k_1 \cdot n^2}{\Gamma_1 + n^2 + \Gamma_2\cdot m}; \qquad r^n= \delta\cdot n \end{equation}
\begin{equation}
g^m= \beta + k_2\cdot n; \qquad r^m= \gamma\cdot m
\end{equation}
We remark that the molecule influxes into the system (represented by the $\alpha$ and $\beta$ terms) could be included in different ways in the stochastic equations, since in the deterministic equations they represent a sort of ``mean field'' value. For example, molecules could be added in bursts with specific time distributions that do not appear in the macroscopic continuous deterministic equations. We consider the simplest approach here, but the choice of different influx patterns deserves further investigation.
\subsection{The one-dimensional model}
We can reduce the problem from two dimensions to one by considering different time scales for the two reactions (in particular, $\dot{m}\gg\dot{p}$) and thus taking the steady state solution for $m$:
\begin{equation}m=\frac{\beta+k_2\cdot p}{\gamma}=\beta'+k'\cdot p\end{equation}
As a consequence we obtain a deterministic equation for $p$ only:
\begin{equation}\dot{p}=\alpha+\frac{k_1\cdot p^2}{\Gamma'+\Gamma''\cdot p +
p^2}-\delta\cdot p\end{equation}
where ${\Gamma}'={\Gamma_1} + \frac{\Gamma_2 \beta}{\gamma}$ and ${\Gamma}''=\frac{{\Gamma_2} \cdot k_2}{\gamma}$.
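The bistability of this reduced deterministic equation can be checked directly by relaxing it from two different initial conditions. The sketch below uses a forward-Euler scheme with the parameter set of the stationary-distribution figure; the denominator is built from the quasi-steady-state of $m$, so no reduced-parameter notation is needed.

```python
# Relax the reduced one-dimensional deterministic equation for p from
# two initial conditions with forward Euler. With the parameter set of
# the stationary-distribution figure the trajectories settle onto two
# distinct stable fixed points, illustrating bistability.
alpha, delta, beta, gamma, k1, k2 = 1.68, 0.2, 0.202, 0.2, 90.0, 0.05
G1, G2 = 10300.0, 1006.0

def pdot(p):
    m_ss = (beta + k2 * p) / gamma          # quasi-steady-state of m
    return alpha + k1 * p**2 / (G1 + p**2 + G2 * m_ss) - delta * p

def relax(p0, dt=0.05, steps=200000):
    p = p0
    for _ in range(steps):
        p += dt * pdot(p)
    return p

low = relax(0.0)      # converges to the low-expression state
high = relax(300.0)   # converges to the high-expression state
print(low, high)
assert abs(pdot(low)) < 1e-8 and abs(pdot(high)) < 1e-8
assert high - low > 50.0
```

The two fixed points found here correspond to the two peaks of the stationary distribution discussed below.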
The stochastic equation for $p_{n}$ is thus as follows:
\begin{equation}
\label{stoch1d}
\dot{p_{n}}=(\mathbb E-1) r_{n} \cdot p_{n} + (\mathbb E^{-1} -1) g_{n} \cdot p_{n} \end{equation}
\begin{equation}g_{n}= \alpha +\frac{k_1 \cdot n^2}{\Gamma'+ \Gamma''\cdot n + n^2};
\qquad r_{n}=\delta \cdot n
\end{equation}
A general stationary solution can be obtained:
\begin{equation}
\label{GenSol}
p_{n}^s=\prod_{i=1}^{n} \frac{g(i-1)}{r(i)} \cdot p_0=\prod_{i=1}^{n} \frac{\alpha + \frac{k_1 \cdot
(i-1)^2}{\Gamma'+\Gamma'' \cdot (i-1)+(i-1)^2}} {\delta \cdot i} \cdot p_0
\end{equation}
with the normalization imposed through $p_0$:
\begin{equation}
p_0=\frac{1}{1+\sum_{n=1}^{N}\prod_{i=1}^{n}\frac{g(i-1)}{r(i)}}
\end{equation}
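The product-form solution is straightforward to evaluate numerically by accumulating unnormalized weights $w_n = w_{n-1}\,g(n-1)/r(n)$ and normalizing at the end. The sketch below uses the parameter set of the stationary-distribution figure and builds the denominator of $g$ from the quasi-steady-state of $m$; the truncation value is our choice, taken large enough for the neglected tail to be negligible.

```python
# Evaluate the product-form stationary solution of the one-dimensional
# CME by accumulating unnormalized weights and normalizing at the end.
# Parameters are those of the stationary-distribution figure.
alpha, delta, beta, gamma, k1, k2 = 1.68, 0.2, 0.202, 0.2, 90.0, 0.05
G1, G2 = 10300.0, 1006.0

def g(n):
    m_ss = (beta + k2 * n) / gamma          # eliminated m variable
    return alpha + k1 * n**2 / (G1 + n**2 + G2 * m_ss)

def r(n):
    return delta * n

N = 600                                     # truncation (tail negligible)
w = [1.0]
for n in range(1, N + 1):
    w.append(w[-1] * g(n - 1) / r(n))
Z = sum(w)
p = [x / Z for x in w]

print(sum(p))            # 1.0 up to rounding
print(p.index(max(p)))   # location of the dominant mode
```

With this parameter set the resulting distribution is bimodal, with the two peaks located near the two stable fixed points of the reduced deterministic equation.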
We remark that the system is open, so in principle $N$ is not bounded, but we can truncate the product at a sufficiently high value of $N$ and obtain a good approximation of the whole distribution.
This one-dimensional system (for which an analytical solution can be obtained) will be compared to numerical simulations of the exact one-dimensional and two-dimensional systems.
\section{Model Analysis}
\subsection{The stationary distribution}
The one-dimensional model can show monomodal as well as bimodal stationary distributions, depending on the parameters. As an example, we obtain bistability with the set of parameters shown in Fig. \ref{Staz_Distr_fig}.
\begin{figure}[th]
\centerline{\includegraphics[height=50 mm]{distr_staz.png}}
\caption[]{The stationary distribution for the one-dimensional system, obtained using the following parameters: $\alpha=1.68$ (molecule/h), $\beta=0.202$ (molecule/h), $\delta=0.2$ $(h^{-1})$, $\gamma=0.2$ $(h^{-1})$, $\Gamma_1=10300$ (molecule$^2$), ${\Gamma_2}=1006$ (molecule), $k_1=90$ (molecule/h) and $k_2=0.05$ $(h^{-1})$.}
\label{Staz_Distr_fig}
\end{figure}
Thus the qualitative features of the two-dimensional deterministic model (i.e. the possibility of being bistable depending on the parameter range) are recovered by the one-dimensional approximation of the stochastic system. The two-dimensional stochastic system also shows bistability for the same parameters, and the two are in optimal agreement over a range of parameters in which the $\dot{m}\gg\dot{p}$ condition holds.\\
We also observe some remarkable differences between the deterministic and the stochastic models: there are regions of parameter space in which the deterministic approach shows only one stable state, while the stochastic system displays two maxima in the stationary distribution (see Fig. \ref{ghostFig1}).
\begin{figure}\centering
\includegraphics[width=0.5\textwidth]{ghostFig1.png}
\caption{Comparison between the deterministic solution (bottom) and the stationary distribution (top) for the parameter set as in Table \ref{Table}, case 3.}
\label{ghostFig1}
\end{figure}
This difference can be explained qualitatively as follows: there are parameter values for which the deterministic system is monostable but very close to the ``transition point'' at which it becomes bistable. It is known that in these situations a ``ghost'' remains in the region where the stable point has disappeared \cite{Strogatz}, where the system dynamics slows down considerably (i.e. when the system is close to the vanished fixed point, it remains ``trapped'' near it for a longer time than in other regions). This behaviour results in the presence of a peak in the stationary distribution of the corresponding stochastic system, which thus remains bimodal even when the deterministic system is no longer bistable.
Another difference is also observed: for some parameter values the deterministic system is bistable, but the stochastic distribution shows a clear peak only for the maximum with the largest basin of attraction, while the smaller peak is ``masked'' by the tail of the distribution around the first peak (see Fig. \ref{ghostFig2}), resulting in a monomodal distribution with a long tail.
In practice, the high-expression state behaves like a metastable state, since states of the system with a high protein level are visited only occasionally.
\begin{figure}\centering
\includegraphics[width=0.5\textwidth]{ghostFig2.png}
\caption{Comparison between the deterministic solution (bottom) and the stationary distribution (top) for the parameter set as in Table \ref{Table}, case 4.}
\label{ghostFig2}
\end{figure}
\subsection{Numerical analysis}
Here we implemented numerical methods to find the stationary distribution of a CME. The most accurate is the kernel resolution method: given the complete transition matrix of the system, it is possible to solve the eigenvalue problem numerically, obtaining the correct stationary distribution.
This method has a serious drawback in our case: the system is of infinite size, preventing a complete enumeration of the possible states. Even with a truncation, the system size grows dramatically: the state space for a two-dimensional system is of order $N^2$ if $N$ is the truncation limit, and the corresponding transition matrix is of order $N^4$.
This means that even for a relatively small system (with a few hundred molecules) the matrix size explodes well beyond computational limits.
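To illustrate the complete-enumeration idea on a system small enough to enumerate, the sketch below integrates a truncated master equation explicitly for a toy immigration-death process (our example, not the toggle-switch model), whose stationary law is known to be Poisson, and checks the numerical result against it.

```python
import math

# Toy illustration of complete enumeration: for an immigration-death
# process with g(n) = lam and r(n) = mu*n the state space can be
# truncated and the master equation integrated in time; the result is
# checked against the known Poisson(lam/mu) stationary law.
lam, mu, N = 5.0, 1.0, 40
p = [0.0] * (N + 1)
p[0] = 1.0                      # start with zero molecules
dt, steps = 0.01, 5000          # integrate up to t = 50

for _ in range(steps):
    dp = [0.0] * (N + 1)
    for n in range(N):          # birth flux n -> n+1 (off at n = N)
        flux = lam * p[n]
        dp[n] -= flux
        dp[n + 1] += flux
    for n in range(1, N + 1):   # death flux n -> n-1
        flux = mu * n * p[n]
        dp[n] -= flux
        dp[n - 1] += flux
    p = [p[n] + dt * dp[n] for n in range(N + 1)]

poisson = [math.exp(-lam / mu) * (lam / mu) ** n / math.factorial(n)
           for n in range(N + 1)]
err = sum(abs(a - b) for a, b in zip(p, poisson))
print(err)   # small: the truncated CME has relaxed onto Poisson(5)
```

The flux form of the update conserves total probability exactly; the same construction applied to the toggle-switch model would require the far larger truncated state space discussed above.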
The only feasible resolution strategy is a massive exploration of the state space by Monte Carlo methods, in which single trajectories of the system are simulated: running these simulations long enough, several times, allows one to estimate the stationary distribution.
The Monte Carlo method we chose is a modified version of the SSA (stochastic simulation algorithm, also known as the Gillespie algorithm) named the logarithmic direct method \cite{Gillespie,Li}, which is a statistically correct simulation of an ergodic Markov system. It is not the fastest algorithm available, compared to other methods like the next-reaction or the $\tau$-leap method, but it produces a correct estimation of the statistical dispersion of the final state.
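A minimal sketch of the plain direct method applied to the four reaction channels of the two-variable model is given below (we sketch the plain variant rather than the logarithmic one used in the study for simplicity; both sample the same statistics). The initial state and time horizon are illustrative.

```python
import random

# Minimal direct-method SSA (Gillespie) sketch for the two-variable
# model: four reaction channels given by the generation and
# recombination terms g^n, r^n, g^m, r^m defined above.
def gillespie(n, m, t_end, params, seed=0):
    alpha, beta, delta, gamma, k1, k2, G1, G2 = params
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, n, m)]
    while t < t_end:
        a = [alpha + k1 * n**2 / (G1 + n**2 + G2 * m),   # n -> n+1
             delta * n,                                   # n -> n-1
             beta + k2 * n,                               # m -> m+1
             gamma * m]                                   # m -> m-1
        a0 = sum(a)
        t += rng.expovariate(a0)                # time to next reaction
        u, j, acc = rng.random() * a0, 0, a[0]
        while u > acc:                          # pick the channel
            j += 1
            acc += a[j]
        n += (1, -1, 0, 0)[j]
        m += (0, 0, 1, -1)[j]
        traj.append((t, n, m))
    return traj

params = (1.68, 0.202, 0.2, 0.2, 90.0, 0.05, 10300.0, 1006.0)
traj = gillespie(10, 10, 200.0, params)
print(len(traj), traj[-1])
```

Histogramming the visited states of many such trajectories (weighted by the waiting times) yields the estimate of the stationary distribution used in the figures.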
For each parameter set we performed 10 simulations for about $10^6 - 10^7$ iteration steps each.
The multiple simulations were averaged together for a better estimation of the stationary distribution, and they also allowed an estimation of the variance of this average distribution.
In the following we discuss four cases that describe the system behaviour for different parameter settings, shown in Table \ref{Table}.
\begin{table}
\caption{Table of the parameter sets for the cases considered.}
\begin{tabular}{|l||c|c|c|c|}
\hline
Par&Case 1&Case 2&Case 3&Case 4\\
\hline
$\alpha$ (molecule/h) &1.0&1.68&1.0&20.0\\
$\delta$ $(h^{-1})$ &1.0&0.20&0.09&1.19\\
$\beta$ (molecule/h) &1.0&0.202&0.0&1.0\\
$\gamma$ $(h^{-1})$ &100.0&0.20&10.0&1.0\\
$k_1$ (molecule/h) &30.0&90.0&12.5&230.0\\
$k_2$ $(h^{-1})$ &100.0&0.05&10.0&1.0\\
$\Gamma_1$ (molecule$^2$) &60.0&10300.0&$(72.8)^2$&$(110.0)^2$\\
$\Gamma_2$ (molecule) &10.0&1006.0&10.0&10.0\\
\hline
\end{tabular}
\label{Table}
\end{table}
\begin{figure}
\includegraphics[width=0.5\textwidth]{mod_accordo_1d_var.png}\includegraphics[width=0.5\textwidth]{mod_accordo_2d_var.png}
\caption{Case of good agreement between the theoretical and obtained distribution (see Tab. \ref{Table}, case 1). Left: one-dimensional system, right: two-dimensional system. The thin black line is the theoretical distribution obtained from Eq. \ref{GenSol}. The thick dark grey line is the average of the various simulations, while the grey and light grey areas represent the range of one and two standard deviations from the average distribution.}
\label{Agreement}
\end{figure}
In case 1, we have a system in which the hypothesis of a time-scale separation between $m$ and $p$ is strongly satisfied.
The simulation was performed up to a time limit of $10^3$: we can see how the two resulting distributions are in good agreement with the theoretical one (see Fig. \ref{Agreement}), with the regions of higher variance of the histogram around the maxima and minima of the distribution.
\begin{figure}
\includegraphics[width=0.5\textwidth]{mod_model3_1d_var.png}\includegraphics[width=0.5\textwidth]{mod_model3_2d_var.png}
\caption{Case of poor agreement between the theoretical and obtained distribution (see Tab. \ref{Table}, case 2). Left: one-dimensional system, right: two-dimensional system. The thin black line is the theoretical distribution obtained from Eq. \ref{GenSol}. The thick dark grey line is the average of the various simulations, while the grey and light grey areas represent the range of one and two standard deviations from the average distribution.}
\label{Disagreement}
\end{figure}
In case 2, the time-scale separation assumption does not hold, due to the very low values of $\gamma$ and $k_2$: even if this condition does not guarantee that the stationary state will differ from the approximate one-dimensional solution, with this set of parameters we see a large difference between the two distributions (Fig. \ref{Disagreement}).
\begin{figure}
\includegraphics[width=0.5\textwidth]{mod_ghost_1d_var.png}\includegraphics[width=0.5\textwidth]{mod_ghost_2d_var.png}
\caption{Case 3, ``ghost effect'': only the largest peak corresponds to a deterministic stable point. Left: one-dimensional system, right: two-dimensional system. The thick dark gray line is the average of the various simulations, while the gray and light gray areas represent the range of one and two standard deviations from the average distribution.}
\label{Ghost}
\end{figure}
In case 3, as anticipated above, we observe a ``ghost'': even though a second deterministic stable state does not exist, we can clearly see a second peak in the distribution (Fig. \ref{Ghost}).
In this system the time-scale separation assumption holds, and we can see how both distributions show similar features.
\begin{figure}
\includegraphics[width=0.5\textwidth]{mod_false_ghost_1d_var.png}\includegraphics[width=0.5\textwidth]{mod_false_ghost_2d_var.png}
\caption{Case 4, peak masking effect (parameters as in Tab. \ref{Table}, case 4). The deterministic system has two stable points, but only the peak related to the smallest stable point (with the largest basin of attraction) is visible. Left: one-dimensional system, right: two-dimensional system.}
\label{MaskedPeak}
\end{figure}
In this final case (Tab. \ref{Table}, case 4, Fig. \ref{MaskedPeak}) we see another effect, in which the peak related to a deterministic stable state is masked by the tail of the stronger peak, becoming just a fat tail.
Even without a strong time-scale separation between the $m$ and $p$ variables, both systems give a very similar response, showing that this effect is very robust.
Increasing the $\gamma$ and $k_2$ values does not affect the distribution as long as their ratio is kept constant.
Note that while there are several computational tools for discrete-state Markov processes, such as PRISM \cite{Kwiatkowska}, APNNtoolbox \cite{Buchholz}, SHARPE \cite{Hirel}, or Mobius \cite{Daly}, there is very little for CTMCs (see for instance \cite{Didier}). Different modeling approaches for toggle switches do exist in the area of formal methods (see for example \cite{Bella1,Bella2}).
\section{Discussion and Conclusion}
We have studied a stochastic version of a biochemical circuit that is thought to be involved in cell cycle control, with implications for the onset of severe diseases such as cancer, consisting of a gene cluster (Myc-E2F) and a miRNA cluster (miR-17-92). This cluster has been reported in a very large number of cancer types: particularly in different types of lymphomas, glioma, non-small cell lung cancer, bladder cancer, squamous-cell carcinoma of the head and neck, peripheral nerve sheath tumor, malignant fibrous histiocytoma, alveolar rhabdomyosarcoma, liposarcoma and colon carcinomas. This huge variety of cancers stresses the centrality of this toggle switch and suggests that advances in modeling it could lead to insights into the differences between these cancers. This aim is still far off, but our modeling approach shows promising results in that direction.
First of all, many features observed in the deterministic version of the same system are recovered, also by means of a further approximation that reduces the system to a single variable: in this case the system can be treated analytically and compared to the one- and two-dimensional numerical simulations.
The stochastic approach, which is the exact approach when the number of molecules involved is low, shows a different behaviour from the deterministic one in the two situations we have observed. It is noteworthy that the numbers of molecules involved show some agreement with the estimates by \cite{Chan} and by \cite{Lim2} for other miRNA systems (see also \cite{Arvey}): assuming a cell volume of $10^{-13}$ liters, 1 nM corresponds to about 100 molecules.
First, bistability in the stochastic system (namely, the possibility of having two stable states, one associated with a resting and the other with a proliferative cell state) is observed even in situations in which the corresponding deterministic system is monostable; this can be explained by the presence of a ``ghost'' state in the deterministic system that is strong enough to produce a second peak in the stationary distribution of the stochastic model.
Secondly, there are situations in which the peak of the stochastic distribution related to the highest expression level (with parameter values for which the deterministic system is bistable) is masked by the tail of the distribution of the lowest-expression maximum (which is related to the largest basin of attraction in the deterministic model), making the ``proliferative state'' appear almost as a scarcely visited metastable state. This is an interesting behaviour that should be further investigated in real experimental data of protein concentration and gene expression related to the biochemical circuit considered. The ``metastable'' and the ``fully'' bimodal distributions could be associated with healthy and tumoral cell states respectively, because the high ``proliferative'' state has different properties in the two cases. From a biological point of view such a state, being associated with a dysregulated, disease-related condition, could actually represent a compendium of several dysregulated states.
We argue that the deterministic approach to this biochemical circuit cannot characterize it completely, and the stochastic approach appears more informative: further features unique to the stochastic model could be obtained by considering different time patterns for the molecular influxes into the system, a point that in our opinion deserves further investigation in future work. MicroRNAs (miRNAs) are expressed differently in normal and cancerous tissues and are thus regarded as potent cancer biomarkers for early diagnosis. We believe that the potential use of oncomirs in cancer diagnosis, therapy and prognosis will benefit from accurate mathematical models of cancer.
Given that miR-17-5p seems to act as both an oncogene and a tumor suppressor, by decreasing the expression levels of both anti-proliferative and proliferative genes, this behavior is suggestive of a cell-type-dependent toggle switch. Fitting experimental data could therefore provide insights into the differences among cancer types and into which cell types behave differently.
\section*{Acknowledgments} D.R. acknowledges Bologna University 'Progetto Strategico d'Ateneo 2006' funding.
\section{Introduction}
Vision could be easy and require little more than mathematical
modelling: the brightness constancy constraint for optical flow,
Sobel filtering and perspective equations for 3D object recognition,
or Lambertian reflectance for shape-from-shading. However, the messiness of the real world has long proved the
assumptions made by these models inadequate: even simple natural
scenes are riddled with shadows, reflections and colored lights that
bounce off surfaces of different materials and mix in complex
ways. Robust systems for scene understanding that can be safely
deployed in the wild (e.g.\ robots, self-driving cars) will probably
require not just tolerance to these factors, as current deep
learning based systems have; they will require factoring out these
variables in their visual representations, such that they do not get
rattled by giant (reflected) trees growing out of potholes in the
road, or even their own shadow or reflection.
A natural framework for handling these factors is to model them as
layers that compose into an overall video. Layered models trace back
to the foundations of computer vision \cite{Wang1994RepresentingMI} but assumed particular models of motion \cite{jojic2001learning},
scene shapes or illumination. Layered models are also
often tailored to particular goals -- such as shadow or specularity removal,
or reflection separation \cite{Szeliski2000} and
rarely accommodate non-rigidity other than in very specialized
domains (e.g.\ faces \cite{liu2017better}).
\begin{figure}[t]
\begin{center}
%
%
%
%
\includegraphics[width=\linewidth]{figures/teaser_fig2_small.pdf}
\end{center}
\caption[]{\label{fig:street_scene} \small Top, input video showing someone driving a car through the country side, with trees reflecting in its windscreen. Bottom, two videos output by our \textit{visual centrifuge}\footnotemark. In this paper we learn models that can, in the spirit of a centrifuge, separate a single video into multiple layers, \eg to consider the interior of the car or the shapes of the reflected trees in isolation. We do so using few assumptions, by simply training models to separate multiple blended videos -- a task for which training data can be obtained in abundance.}
\end{figure}
\footnotetext{See \url{https://youtu.be/u8QwiSa6L0Q} for a video version of the figure.}
In this paper we aim to learn a video representation that teases apart
a video into layers in a more general data-driven way that does away
with explicit assumptions about shape, motion or illumination. Our
approach is to train a neural network that, in the spirit of a
\textit{visual centrifuge},
separates pairs of videos that we first blend together using uniformly
weighted averaging. Related ideas have been pursued in the audio
domain~\cite{Yu2017,Afouras2018,Ephrat2018}, where signals are waves that really combine
additively by superposition. In the visual domain this approximation is accurate when
dealing with some reflections but not necessarily in other cases of
interest such as shadows or extremely specular surfaces such as
mirrors. However, we hope that by mixing a sufficiently large and
diverse set of videos these cases will also be sporadically
synthesized and the model can learn to separate them (e.g.\ a shadow
from one video will darken the blended video, and needs to be factored
out to reconstruct the second video).
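The training-pair synthesis described above amounts to uniformly weighted averaging of two videos. A minimal sketch, with videos represented as nested lists of illustrative pixel values (real training would of course operate on image tensors):

```python
# Minimal sketch of the training-pair synthesis: two videos are
# blended with uniformly weighted averaging, and the pair of originals
# becomes the reconstruction target. Shapes and values illustrative.
def blend(video_a, video_b):
    assert len(video_a) == len(video_b)
    return [[(pa + pb) / 2.0 for pa, pb in zip(fa, fb)]
            for fa, fb in zip(video_a, video_b)]

video_a = [[0.0, 0.2], [0.4, 0.6]]   # two frames, two pixels each
video_b = [[1.0, 0.8], [0.6, 0.4]]
mixed = blend(video_a, video_b)
print(mixed)   # [[0.5, 0.5], [0.5, 0.5]]
```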
How is it possible for the network to separate the mixture into its constituent videos? There are two
principal cues that can be used: the different motion fields of the two videos, and the semantic content, e.g.\ picking out a car in one video and a cow in another. There are also more subtle cues, such as one `layer' being more blurred or having different colors.
We show that our model, after being trained on blended pairs of
videos from Kinetics-600~\cite{kay_arxiv_2017,Carreira-Kinetics600-2018}, a large video dataset with around
400k 10-second clips of human actions, can indeed spontaneously separate natural reflections
and shadows as well as remove color filters from new individual
(non-blended) videos as shown in \fig~\ref{fig:street_scene}.
While our model is not necessarily more accurate than existing ones on individual niche tasks in constrained settings, its performance there is comparable, and it can also succeed on a variety of layer separation tasks in totally unconstrained settings where previous models fail (e.g.\ with people moving around and shaky cameras).
\\
\vspace{1mm}
\noindent\textbf{Contributions.}
Our contributions are threefold; \textbf{(i)} we propose novel architectures for multi-layered video modelling,
\textbf{(ii)} we show that these models can be learned without supervision, by just separating synthetically blended videos and,
\textbf{(iii)} we observe that these models exhibit color constancy abilities and can factor out shadows and reflections on real world videos.
\\
\section{Related work}
\label{sec:RW}
\noindent\textbf{Image layer composition.}
Many different layer composition types have been developed that model the image generation process.
Intrinsic image approaches~\cite{Barron2015,Fan2018,Finlayson2014,Sinha1993,Tappen2002,Weiss2001} aim to factorize illumination, surface reflectance and shape. Deconvolution algorithms, such as blind deblurring, model the image as a superposition of multiple copies of an original (unblurred)
image~\cite{Cho07,Fergus06,Shan08,Yuan07,Whyte12,Jin2018LearningTE}.
A related problem is the one of color constancy~\cite{Barron2015a, Barron2017}, where the goal is to infer the color of the light illuminating a scene in order to remove it.
\noindent\textbf{Reflection removal.}
Reflections in natural images are a particular case of layer composition, where two or more layers are mixed together through simple addition in order to form the final image.
Most successful classical methods for removing reflections assume access to a sequence of images where the reflection and the background layer have different motions~\cite{Beery2008, Guo2014, KunGai2012, Szeliski2000, Xue2015, Nandoriya2017}.
By recovering the two dominant motions, these methods can recover the original layers through temporal filtering.
The work by Xue et al.~\cite{Xue2015} notably proposes an optimization procedure which alternates between estimating dense optical flow fields encoding the motions of the reflection and the background layers and recovering the layers themselves, which leads to impressive results on images containing natural reflections.
However, all these methods rely on the assumption that the two layers have distinctive and almost constant motions~\cite{Xue2015,Guo2014} and cannot handle cases where multiple objects are moving with independent motions inside the layers.
Recently, Fan et al.~\cite{Fan2017} proposed a deep learning architecture to suppress reflections given a single image only.
The advantage of this and related approaches \cite{zhang2016colorful,Chi2018,Zhang2018SingleIR} is that they are very flexible -- given appropriate data they can in principle operate in unconstrained settings.
\noindent\textbf{Video layer decomposition.}
All previously mentioned approaches are designed to output results for one image.
We focus instead on recovering layers composing a whole video~\cite{kumar2008learning,jojic2001learning}.
As observed in~\cite{Nandoriya2017}, simple extensions of the previous techniques to videos, such as applying the methods in a frame by frame fashion followed by temporal filtering is not satisfactory as it leads to strong temporal flickering, incomplete recovery of the layers and often blurs the objects present in the video.
To alleviate these issues,~\cite{Nandoriya2017} propose an extension of the work of~\cite{Xue2015} but where they adapt both the initialization strategy and the optimization objective in order to take into account the temporal dimension.
The proposed approach strongly alleviates the temporal flickering issue.
However, the method still relies on strong assumptions concerning the relative motions of the two layers and might notably suffer from objects moving fast inside one of the layers.
Differently from~\cite{Nandoriya2017}, we want to rely on semantic cues whenever motion cues are not sufficient.
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{.27\linewidth}
\centering
\includegraphics[height=3.3cm]{figures/input_small.pdf} %
\caption{\small \textbf{Video generation} (\ref{subsec:generation}) \label{fig:generation}}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.45\linewidth}
\centering
\includegraphics[height=3.3cm]{figures/architecture.pdf} %
\caption{\small \textbf{Model architecture} (\ref{subsec:architecture}) \label{fig:architectures}}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.27\linewidth}
\centering
\includegraphics[height=3.3cm]{figures/losses_small.pdf} %
\caption{\small \textbf{Permutation invariant loss} (\ref{subsec:pil}) \label{fig:losses}}
\end{subfigure}
\vspace*{-0.3cm}
\caption{\label{fig:model}
Illustration of the general idea, described in full detail in section~\ref{sec:model}. Two videos are blended together into a single video and this video is passed through a neural network which is trained to separate it back into the two original videos. The hope is that the underlying learned representation captures the concept of natural video layers and that it will then generalize when processing standard videos. Real separation results shown.
}
\end{figure*}
\noindent\textbf{Permutation invariant losses.}
We want to recover the layers composing the image in a blind manner, \ie without making assumptions about the different layers nor giving external cues that could indicate which layer we want to reconstruct.
This is challenging as it relates to the label permutation problem~\cite{Yu2017}.
One solution to this problem proposed in the audio domain is to make use of permutation invariant losses~\cite{Yu2017}.
Here we employ a similar strategy by adapting this principle to video reconstruction.
This is also related to the problem of uncertainty
inherent to the fact that multiple solutions are possible for the
layer decomposition, a situation that can be handled by
designing a network to generate multiple hypotheses, with an appropriate loss to reward only one for
each training sample~\cite{Rupprecht2017,firman2018diversenet,li2018interactive}.
In this work we propose to use permutation
invariant losses in the context of emiting multiple hypotheses for the
layers.
\noindent\textbf{Audio separation.}
Additive layer composition is particularly suitable for modeling how
different audio sources are assembled to form a sound. That is why
our work also relates to the audio separation domain -- in particular to `blind
source separation'. However, much of the literature on blind audio separation, such as for
the well known `Cocktail Party problem', requires multiple audio channels (microphones)
as input~\cite{Comon10}, which is not the situation we consider in this work.
Though deep learning has brought fresh interest in the single audio channel case,
e.g.~\cite{Erdogan2015PhasesensitiveAR,Wang2018EndtoEndSS}.
Recent work
has revisited the cocktail party
problem while also using visual cues~\cite{Afouras2018, Ephrat2018,Gao2018LearningTS}.
\noindent\textbf{Layers beyond graphics.}
Others have also investigated the use of image layers composition for other purposes than computers graphic applications.
For example, recent work explores additive layer composition as a data augmentation technique for image level classification~\cite{Inoue2018, Tokozume2018, Zhang2017}.
Interestingly, \cite{Zhang2017} shows that simply mixing images and labels in an additive fashion improves generalization and robustness to adversarial examples as well as stabilizes training of generative models.
Such techniques have not yet been extended to the video domain as we do in this work.
\section{Deep layer separation by synthetic training}
\label{sec:model}
In this section we describe our model which is trained end-to-end to reconstruct layers composing an input video.
We generate the training data synthetically using a simple additive layer composition as explained in section~\ref{subsec:generation}.
In section~\ref{subsec:architecture}, we describe the model architecture to tackle our problem.
Finally, we motivate our choice of loss in section~\ref{subsec:pil}.
\fig~\ref{fig:model} summarizes our approach.
\subsection{Video generation process}
\label{subsec:generation}
Real videos with ground truth layer decomposition are hard to get at scale.
To be able to train a neural network for this task, we instead generate artificial videos for which we have easily access to ground truth.
In practice, we average two videos with various coefficients, a simple strategy already proposed in~\cite{Szeliski2000} to evaluate image decomposition models.
More formally, given two videos $V^{1}, V^{2}\in\mathbb{R}^{T\times H \times W \times 3}$, where $T$ is the total number of frames, $H$ and $W$ the frame's height and width and $3$ corresponds to the standard RGB channels, we generate a training video $V$ as follows:
\begin{equation}
\label{eq:generation}
V = (1-\alpha) \cdot V^{1} + \alpha \cdot V^{2},
\end{equation}
where $\alpha\in\left[0,1\right]$ is a variable mixing parameter.
This process is illustrated in \fig~\ref{fig:generation}.
Despite this apparent simple data generation scheme, we show in
section~\ref{sec:applications} that this is sufficient to train a model
that can generalize to real videos with layer composition including
shadows and reflections.
\subsection{Model architecture}
\label{subsec:architecture}
We use an encoder-decoder type architecture that, given an input mixed video, outputs two or more videos aiming at recovering the original layers composing the input (see \fig~\ref{fig:architectures}).
We denote by $V$ the input video and by $O$ the $n$ outputs of the network, where $O^i$ corresponds to the $i$-th outputed video.
Below, we give details about our particular design choices.
\noindent\textbf{3D ConvNet.}
As demonstrated by previous work~\cite{Xue2015}, motion is a major cue to reconstruct the composing layers.
For this reason, we leverage a 3D ConvNet architecture able to capture both appearance and motion patterns
at multiple temporal scales to succeed at the task.
For the encoder, we use the I3D architecture~\cite{carreira17quovadis} which has proven to be effective for video classification.
For the decoder, we propose a simple architecture which consists of a succession of 3D Transposed Convolutions~\cite{Dumoulin2016} that we detail in Appendix~\ref{app:architecture}.
\noindent\textbf{U-Net.}
To improve the quality of reconstruction, we follow the U-Net architecture~\cite{Ronneberger2015} that has proved its worth in many dense reconstruction tasks, e.g.~\cite{Isola2017ImagetoImageTW},
and add skip connections between the encoder and the decoder (see Appendix~\ref{app:architecture} for details).
\noindent\textbf{Output layers.}
Although our synthetic video are composed by mixing only two videos,
we found it helpful to allow our models to produce more than two outputs. This
is to alleviate the problem of uncertainty~\cite{Rupprecht2017}
inherent to our task, \ie multiple solutions for the layers are often
possible and satisfactory to reconstruct the input. To output $n$ videos, we simply
increase the number of channels at the output of the network; given a
video $V\in\mathbb{R}^{T\times H \times W \times 3}$, the network is
designed to output $O\in\mathbb{R}^{T\times H\times W\times 3n}$. This
means that the separation of the outputs only happens at the end of the
network, which makes it possible for it to perform quality verification along the way (e.g. check that the outputs sum correctly to the input). Although introducing multiple alternative outputs may lower applicability in some cases, simple
strategies can be adopted to automatically choose two outputs out of
$n$ at test time, such as selecting the two most dissimilar video layers (which we do
by selecting the most distant outputs in pixel space).
\noindent\textbf{Predictor-Corrector.}
We also give the model the possibility to further correct its initial predictions by stacking a second encoder-decoder network after the first one.
This is inspired by the success of iterative computation architectures~\cite{carreira2016human,Newell2016} used in the context of human pose estimation.
Given an initial input mixed video $V\in\mathbb{R}^{T\times H \times W \times 3}$ and $n$ target output layers, the first network, the \emph{predictor}, outputs an initial guess for the reconstruction $\tilde{O}\in\mathbb{R}^{T\times H \times W \times 3n}$.
The second network, the \emph{corrector}, takes $\tilde{O}$ as input and outputs $\Delta\in\mathbb{R}^{T\times H \times W \times 3n}$ such that the final output of the network is defined as
$
O = \tilde{O}+\Delta.
$
Because the role of these two networks are different, they do not share weights.
We train the two networks end-to-end from scratch without any specific two-stage training procedure.
\subsection{Permutation invariant loss}
\label{subsec:pil}
One challenge of our approach lies in the fact that we do not have any a priori information about of the order of the input video layers.
Therefore it is hard to enforce the network to output a given layer at a specific position. This challenge is usually refered as the permutation label problem~\cite{Yu2017}.
To overcome this problem, we define a training loss which is permutation invariant (see~\fig~\ref{fig:losses}).
More formally, given the two original ground truth videos $\{V^{1},V^{2}\}$ and the outputs of our network $O$ defined previously, we set up the training loss as:
\begin{equation}
\label{eq:training_loss}
\mathcal{L}\left(\{V^{1}, V^{2}\}, O\right)=\min_{(i,j) | i\neq j} \ell(V^{1}, O^i)+\ell(V^{2}, O^j),
\end{equation}
where $\ell$ is a reconstruction loss for videos.
Following previous work~\cite{Mathieu2016}, we define $\ell$ for two videos $U$ and $V$ as follows:
\begin{equation}
\label{eq:recons_loss}
\ell(U, V) = \frac{1}{2T} \left(\sum_t \Vert U_t-V_t \Vert_1 + \Vert \nabla(U_t)-\nabla(V_t) \Vert_1\right),
\end{equation}
where $\Vert \cdot \Vert_1$ is the $L_1$ norm and $\nabla(\cdot)$ is the spatial gradient operator.
We noticed that adding the gradient loss was useful to set more emphasis on edges which were usually harder to capture when compared to constant areas.
\section{Experiments}
\label{sec:experiments}
We trained models on the task of unmixing averaged pairs of videos then tested these models on individual videos from the web and in the wild. The models were trained on pairs of videos from the Kinetics-600 dataset~\cite{Carreira-Kinetics600-2018} training set, which has approximately 400k 10s long videos (250 frames). We evaluated generalization on the Kinetics-600 validation set, which has 30k videos. We used standard augmentation procedures: random left-right flipping and random spatiotemporal cropping, where the shortest side of the video was first resized to 1.15x the desired crop size. Most of the experiments used 32-frame clips with 112x112 resolution for fast iteration. We also trained the full proposed architecture on 64-frame clips with 224x224 resolution -- we report results with this model in the applications section~\ref{sec:applications}.
We tried sampling the blending parameter $\alpha$ of Eq.~\eqref{eq:generation} in $\left[0.25,0.75\right]$ without observing a strong influence on the results when compared to fixed sampling scheme.
Therefore, we simply use $\alpha=0.5$.
\subsection{Architecture Evaluation}
Here we compare the performance of multiple architectural variations
on the learning task of separating averaged videos.
We first evaluate using the reconstruction loss, and then use a downstream task -- that of human action recognition.
All architectures
share the same basic predictor module. All models were trained using
SGD with momentum, with the same hyperparameters: learning rate 0.1,
momentum 0.99, no weight decay and batch size of 10 clips. The
learning rate is lowered to 0.05 at 100k iterations, 0.025 at 150k and
0.01 at 200k. The models are trained for a total of 240k
iterations. At test time moving averages are used in batch
normalization layers.
The first observation was that even the simplest model works: using the permutation-invariant loss, the blended videos separate into the original ones. The loss of the basic predictor model with two output video layers, is provided in table~\ref{table:baselines} and can be contrasted with two baselines: 1) outputing twice the blended video, 2) outputting two different layers, but using the predictor with random weights (no training). The loss of the trained model is significantly lower, although the layers are still somewhat noisy. Our more advanced models are more accurate.
\begin{table}[ht]
\centering
\begin{tabular}{@{}cc@{}}
\toprule
Model & Validation loss \\ \midrule
Identity & 0.361 \\
Predictor (no training) & 0.561 \\
Predictor (trained) & 0.187 \\ \bottomrule
\end{tabular}
\caption{\label{table:baselines} \small Validation losses obtained by the basic predictor -- an encoder-decoder model producing two output layers. \textit{Identity} is a baseline where the two output video layers are just copies of the input blended video. The second baseline is the predictor without any training, using the initial random weights. }
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/qual_figs_kineticsnormal_small.pdf}
\end{center}
\vspace*{-0.5cm}
\caption{\label{fig:standard_results} \small Example outputs of the model on blended Kinetics validation clips. Due to lack of space we show a single frame per clip. Original unblended videos are shown on the rightmost columns. Overall the network is able to unmix videos with a good accuracy even when confronted with hard examples, \eg, videos from the same class. The first five rows show successful separations. The last three show rare cases where the network cuts and pastes in a coherent manner some objects between videos.}
\vspace*{-0.2cm}
\end{figure}
We also found that predicting more than 2 layers for each video results in substantially better unmixing --
we observed that often the outputs formed two clear clusters of video layers, and that
two of the layers among the predicted set are considerably more accurate than those obtained when predicting just $2$ overall.
These results can be found in table~\ref{table:analysis}, second column. %
We think that producing additional layers is mainly helping the
training process by allowing the model to hedge against factors like
differences in brightness, which may be impossible to invert, and to
focus on separating the content (objects, etc.).
Table~\ref{table:analysis} also shows the benefits of the predictor-corrector architecture, using a single correction module, especially when predicting multiple (more than 2) video layers. It is also likely that additional correction steps would improve performance further -- we plan to verify this in future work. %
The results in the rest of the paper used the predictor-corrector architecture with $4$ output video layers.
\begin{table}[ht]
\centering
\begin{tabular}{@{}ccc@{}}
\toprule
\# output video layers & Predictor & Predictor-Corrector \\ \midrule
2 & 0.187 & 0.172\\
4 & 0.159 & 0.133\\
8 & 0.151 & - \\
12 & 0.150 & - \\ \bottomrule
\end{tabular}
\caption{\label{table:analysis} \small Validation loss when producing various number of output video layers, for a simple predictor model and for the predictor-corrector model. Larger sets of layers tend to contain higher-quality reconstructions of the original videos, but this starts to saturate at around 4 -- there is little improvement when increasing to 8 or 12 for the predictor model and we did not experiment with such large sets of output layers on the more memory-demanding predictor-corrector model. Finally, the predictor-corrector model outperforms the predictor by a significant margin, especially when computing $4$ output video layers.}
\end{table}
\noindent
\textbf{Additional loss functions.} We mention here two loss functions that we experimented with, but that
ultimately brought no benefit and are not used. First, it might be expected that it is important to
enforce that the output layers should recompose into the original mixed video as a consistency check. This can
be achieved by adding a loss function to the objective:
\begin{equation}
\ell(V, (1-\alpha)\cdot O^i + \alpha\cdot O^j),
\end{equation}
where $i$ and $j$ are respectively the indexes of the layers matched to $V_1$ and $V_2$
according to equation~(\ref{eq:training_loss}).
However, we did not observe an outright improvement -- possibly because for real sequences (see below) the
strict addition is only a weak model for layer formation.
We also considered
enforcing diversity in the outputs through an explicit loss term,
$-\ell(O^i,O^j)$.
This also did not bring immediate improvement
(and without reconstruction constraints and proper tuning was generating absurdly diverse outputs).
Note also that in general the outputs are diverse when measured with simple diversity losses,
despite some small cross-talk, so more efforts might be needed to design a more appropriate diversity loss.
\noindent
\textbf{Evaluating on a downstream task.}
We evaluated the quality of the separated videos for the task of human action recognition.
To this end, we tested I3D (that has been trained on the standard Kinetics training set): (a) directly on mixed pairs of videos, (b) on \textit{centrifugally}-unmixed pairs of videos, and (c) on the original clean pairs of videos on the validation set of the Kinetics dataset
(using only 64-frames clips for simplicity, though better results can be obtained on the full 250-frame clips).
We used a modified version of accuracy for evaluation -- as we have two different videos,
we allow the methods to make two predictions.
We consider a score of $1$ if we recover the two ground truth labels, a score of $0.5$ if we recover only one of the two labels and a score of $0$ otherwise.
For method (a), we simply take its top 2 predictions.
For method (b) and (c), we take the top-1 predictions of the two branches.
In this setting, the centrifuge process improved accuracy from $22\%$ for (a) to $44\%$ for (b). However, there is still a gap with the original setup (c) which achieves an accuracy of $60\%$. The gap is presumably due to persistent artifacts in the unmixed videos.
\subsection{Psychophysics of Layered Representations}
Having established the benefits of our proposed architecture, it is interesting to probe into it and see what it has learned, its strengths and weak points which we attempted to do by running a series of psychophysics-like experiments.
\noindent \textbf{Color.} In human vision the colors of objects are perceived as the same across different lighting conditions -- independently of whether the sun is shining bright at mid-day, or nearing sunset and factoring out any cast shadows. We experimented with an extreme notion of color constancy and transformed Kinetics videos as if they had been captured by cameras with different pure-colored filters: black, white, green, red, yellow, blue, cyan and magenta, by averaging them with \textit{empty} videos having just those colors. We did not train on this data, instead we simply used the best model trained to separate pairs of Kinetics videos. We observed that the model generalized quite well to these videos and did accurately reconstruct the two layers in most cases -- one Kinetics video and one pure color video -- and the results are shown in \fig~\ref{fig:color_const_barplot}. It can be seen that the task is easier for black and white filters, which is natural since it corresponds roughly to just darkening or brightening a video. The hardest cases are magenta and green filters, perhaps because these colors are less common in our training data -- we leave this analysis for future work, the main point here being that the models generalize well to very different layer compositions. Results for an example frame are shown in \fig~\ref{fig:color_const}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/color_constancy_jb-crop.pdf}
\end{center}
\vspace*{-0.5cm}
\caption{\label{fig:color_const_barplot} \small Loss obtained by the predictor-corrector model when separating Kinetics videos from pure-colored video of different hues. The loss obtained when separating pairs of Kinetics-videos is shown for reference as the gray bar -- note that the while the model is accurate at separating pairs of Kinetics videos, for which it was explicitly trained, it is even better at separating most of these pure-colored videos, a task for which it was not trained for. Some colors, however, make the task quite hard -- magenta and green, perhaps due to less frequent in the natural videos from Kinetics.}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/color_const_jb_small.pdf}
\end{center}
\vspace*{-0.5cm}
\caption{\label{fig:color_const} \small Top: frame from original video. 2nd row: same frame from same video after mixing with different colored videos. 3rd and 4th rows: $2$ video layer outputs from our predictor-corrector. Note that the reconstructions of the original video are quite similar and that the colored layers are also well reconstructed, despite a highly colorful scene (e.g.\ the clown's shirt has yellow sleeves).}
\vspace*{-0.5cm}
\end{figure}
\noindent \textbf{Motion vs.\ Static Cues.} Motion plays a critical role in engineered solutions (in constrained settings) to problems such as reflection removal (e.g.\ \cite{Xue2015}). To understand how important motion is in our models, compared to static scene analysis, we trained a second predictor-corrector model with $4$ output layers, using the exact same experimental setting as before, but now training on videos without motion. We generated these \textit{frozen} videos by sampling a frame from a normal video and repeating it $32$ times to get each $32$-frame clip. We then evaluated the two models on both normal and frozen videos to see how they generalize. We also tried mixing pairs composed of one normal video and one frozen video. The $6$ different val. losses appear in table \ref{table:frozen_normal}.
We found that motion is an important cue in our system: it is harder to unmix frozen videos than motion ones. Also, the system trained on motion videos is worse on mixed frozen videos than the model trained on frozen videos. However, if just one of the videos is frozen then the motion-trained model excels and does better even than when both videos have motion -- perhaps during training the model receives some examples like. Finally, the model trained on frozen videos does poorly when processing inputs which contain motion -- this is natural, since it never seen those in training. Interestingly, we noticed also that the sampled layers tend to be significantly more diverse for frozen videos, reflecting the fact that they are more ambiguous.
To further support that point, we computed an average diversity metric, $\min_{i\neq j} \ell(O^i, O^j)$, over 1K runs.
For the frozen video model on frozen videos, we obtained an average diversity score of $0.079$ versus $0.045$ for our standard model on motion videos.
\fig~\ref{fig:diversity} shows outputs with maximum diversity score for both models.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/qual_figs_diversity_wo_gt_small.pdf}
\end{center}
\vspace*{-0.5cm}
\caption{\label{fig:diversity} \small
Example videos where our models produce highly diverse sets of layers.
\textbf{Top 4 rows}: layers output by a model trained and tested on frozen videos;
\textbf{Bottom 2 rows}: layers output by a model trained and tested on regular videos. In both cases we sort the videos by layer diversity (from least diverse on the top to most diverse on the bottom).
We observe that the diversity in output video layers is much higher for the model on frozen videos -- motion is a strong cue to disambiguate between the layers.
Note that we selected these blended videos automatically by blending many pairs and choosing here the ones that maximize the diversity metric, $\min_{i\neq j} \ell(O^i, O^j)$ (shown on the left), over 1K runs.
}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{@{}cccc@{}}
\toprule
Train/Test & 2 frozen & 2 normal & 1 frozen 1 normal\\
\midrule
2 frozen & 0.165 & 0.233 & 0.198 \\
2 normal & 0.205 & 0.133 & 0.127 \\
\bottomrule
\end{tabular}
\caption{\label{table:frozen_normal} Loss obtained when training/testing on pairs of frozen/normal videos, and testing on pairs of frozen/normal videos or when blending one frozen and one normal video. A frozen video is a video obtained by just repeating many times a single frame from a normal video, such that it does not have motion.}
\end{table}
\begin{table}[ht]
\centering
\small
\begin{tabular}{@{}ccc@{}}
\toprule
Encoder endpoint & depth & Validation loss \\ \midrule
Mixed\_3c & 7 & 0.214 \\
Mixed\_4f & 17 & 0.181 \\
Mixed\_5c & 21 & 0.187 \\ \bottomrule
\end{tabular}
\caption{\label{table:depth} \small Validation losses obtained when using three increasingly deeper subnetworks of the I3D encoder. The two deeper models achieve lower loss, indicating the value of high-capacity and wide receptive fields in space and time on this task.}
\end{table}
\noindent \textbf{Low-level vs.\ high-level features.} Another interesting question is whether the models are relying on high-level semantic cues (e.g. people's shapes, scene consistency) or just on low-level ones (e.g. texture, edges, flow). We ran several experiments to try to shed light on this.
First, we revisited the basic predictor model and varied the depth of the encoder architecture, by taking features from three different layers of I3D: ``Mixed\_3c", ``Mixed\_4f" and ``Mixed\_5c" (the default elsewhere in the paper). These correspond respectively to encoders with 7, 17 and 21 convolutional layers. The results in table~\ref{table:depth} show that the two deeper encoders perform considerably better than the shallower one, suggesting that higher-level, semantic features matter, but this may also be due to greater fitting capacity and/or larger spatio-temporal receptive fields being required.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=\textwidth]{figures/qual_figs_real_small.pdf}
\end{center}
\caption{\label{fig:qual_res_real} Results of our model on real-world videos containing transparency, reflections, shadows and even smoke.}
%
\end{figure*}
As a second experiment we ran the predictor-corrector model on blended videos formed of pairs from the same Kinetics human action classes, and found that the average loss was 0.145, higher than 0.133 when operating random pairs of videos. However this may also be explained by actions in the same class having similar low-level statistics.
As a third experiment we measured again the unmixing losses, but this time we recorded also two distances between each video in a pair that gets blended together, using euclidean distance on features from an I3D action classifier trained with supervision on Kinetics-600. One distance between low-level features (averaging features from the second convolutional layer) and the other between high-level features (averaging deeper "Mixed\_5c" features). We then measured the Pearson correlation between the losses and each of the two distances. We found a negative correlation of -0.23 between high-level distance and loss, confirming that videos showing similar (low-distance) actions tend to be hard to separate, but a weaker positive correlation between losses and low-level distances of 0.14, showing that low-level similarity is less of a challenge for unmixing.
\section{Applications}
\label{sec:applications}
In this section, we discuss the applicability of our method to real videos. For these experiments we trained the proposed model on 64-frame clips with 224x224 resolution. We first discuss the computational efficiency of our architecture in section~\ref{subsec:efficiency} before showing results on videos composed of various naturally layered phenomena such as reflections, shadows or occlusions in section~\ref{subsec:qual_res}.
\subsection{Efficiency}
\label{subsec:efficiency}
Our base network takes approximately 0.5 seconds to process a $64$-frame clip at $224\times 224$ resolution, using $4$ output layers. If we use our biggest model, the corrector-predictor, it then takes approximately twice that time. These timings are reported using a single P4000 Nvidia GPU. Note that this is significantly faster than the timings reported by techniques in related areas, such as for reflection removal ~\cite{Xue2015} which require minutes to process a similar video.
In addition, our network can seamlessly be applied to longer and higher definition videos as it is fully convolutional.
\subsection{Real world layer decomposition}
\label{subsec:qual_res}
We now demonstrate that, even if trained with synthetic videos, the proposed model is able to generalize to standard videos, sourced from the web. A selection showcasing various types of natural video layers such as reflections, shadows and smoke is presented in \fig~\ref{fig:qual_res_real}. The model tends to perform quite well across many videos, in regions of the videos where such compositions do occur; outside those regions it sometimes distorts the videos (or perhaps we do not understand exactly what layers the model is considering).
We also compare visually to a method that is specifically designed for reflection removal~\cite{Xue2015} in \fig~\ref{fig:qual_res_compar}.
Even if our results look less vivid than~\cite{Xue2015}, the centrifuge does a reasonable job at this task while making fewer assumptions.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=\linewidth]{figures/qual_res_comparison_freeman_small.pdf}
\end{center}
\vspace*{-0.5cm}
\caption{\label{fig:qual_res_compar} Comparison of the centrifuge with a method specifically engineered for the purpose of reflection removal~\cite{Xue2015} (we unfortunately do not have their results for the first and third frames).}
\vspace*{-0.5cm}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We have presented a model that can be trained to reconstruct back individual videos that were synthetically mixed together, in the spirit of real-life centrifuges which separate materials into their different components. We explored what were the important bits to suceed at the training task, namely a permutation invariant loss, the depth of the network, the ability to produce multiple hypotheses and the recursive approach with our predictor-corrector model. We also investigated what are the cues used by our model and found evidence that it relies on both semantic \emph{and} low-level cues, especially motion.
Our main scientific goal, however, was to find out what such a system would do when presented with a single (not synthetically mixed) video and we verified that it learns to tease apart shadows, reflections, and lighting. One can only hope that, as we look at the world through the lenses of more advanced models of this kind, we can uncover new layers of reality that are not immediately apparent, similar to what hardware-based advanced such as microscopes and telescopes have done in the past -- but now in the pattern recognition domain.
Much work remains to be done, in particular on how to control the
layer assignment process to make it more useful for applications,
which may include robust perceptual frontends for safety-critical
systems operating in complex visual scenes (e.g. self-driving cars) or
in video editing packages. Future work should also consider relaxing the uniform mixing
of videos that we employed here -- both to make the learning problem harder and, hopefully,
to improve the visual quality of the separated layers.
\noindent
\textbf{Acknowledgments.}
We would like to thank Relja Arandjelovi\'c, Carl Doersch, Viorica Patraucean and Jacob Walker for valuable discussions, and the anonymous reviewers for very helpful comments.
{\small
\bibliographystyle{ieee}
\section{Introduction}
The well known tendency for the blue optical continuum in
distant, luminous radio galaxies to be aligned with their
radio sources represents a potentially important evolutionary
phase of radio galaxies. The blue
optical continuum associated with the Fanaroff-Riley class two (FR II)
radio sources in powerful radio galaxies often extends several
tens of kiloparsecs into their halos along the axis of the radio
source. The extended radio
and optical continuum is usually associated with bright nebular emission.
This so-called ``alignment effect'' becomes
increasingly prominent in powerful radio galaxies
at redshifts $z \gae 0.6$, but is rarely strong
in radio galaxies at redshifts less than $z=0.1$ (see McCarthy 1993
for a review).
Distant, FR II radio sources are frequently bent and distorted in the vicinity
of strong nebular emission. The nebular gas velocity fields are usually
turbulent and shearing, with characteristic velocities of several hundreds of
kilometers per second. Such properties seem to indicate that
strong interactions (e.g. momentum exchange, photon heating and
ionization) are occurring between the radio source and the nebular gas.
Remarkably, processes associated with the radio source,
which originates in a powerful nuclear engine, can affect the
photometric properties of the host galaxy to large galactic distances.
Several emission mechanisms have been proposed to explain
the extended, blue optical continuum
including star formation, scattered light from an obliquely-directed
active nucleus, synchrotron radiation, nebular
continuum, and inverse Compton radiation (di Serego Alighieri \etal\
1989; Fabian 1989; De Young 1989; Rees 1989;
Begelman \& Cioffi 1989; Daly 1990, 1992; Dickson \etal\ 1995).
Although each of these emission mechanisms is interesting in its
own right, collectively they imply
essentially two important consequences for the evolution
of radio galaxies. Star formation
at the implied levels of several tens to over a thousand solar masses
per year (e.g. 4C41.17, Dey \etal\ 1997) would
have a significant impact on the development of the stellar composition
and structure of the host galaxy. The remaining emission mechanisms in general
imply the presence of a powerful central engine, a dense
interstellar medium, and strong magnetic fields, but may be otherwise
inconsequential to the development of the stellar composition
and structure of the galaxy. However, the alignment effect cannot be
reduced to a unique mechanism, as one or more of these emission mechanisms
may contribute significantly to the effect in a particular galaxy
and among galaxies.
It has been known for some time that central dominant cluster
galaxies (CDGs) selected on the basis of high surface brightness X-ray
emission tend to have unusually blue cores accompanied by
bright, spatially extended nebular emission, and
Fanaroff-Riley class 1 (FR I) radio sources
(Baum 1992; Cardiel, Gorgas, \& Aragon-Salamanca 1997; McNamara 1997).
Furthermore, the bright, blue continuum in two objects (the A2597
and A1795 CDGs)
lies along their radio sources in a manner similar
to the high redshift, FR II radio galaxies, but on a smaller scale.
Although these two objects
share some of the alignment properties seen in FR IIs,
they are noteworthy in several
respects. They reside at the centers of bright,
cluster-scale, thermal X-ray emission that has been
interpreted as the signature of a large reservoir of cooling gas
(i.e. a ``cooling flow''). They harbor less-luminous,
FR I radio sources whose
1.4 GHz radio powers are less than $10^{26}~{\rm W~Hz}^{-1}$, and
their aligned blue continuum, or ``blue lobes'', is preferentially
associated with their radio lobes, rather
than their radio jets. Finally,
A2597 and A1795 are relatively nearby at redshifts of 0.082 and 0.064
respectively. A redshift or radio luminosity dependence on the degree
and frequency of strong alignments would suggest an
active evolutionary phase of
radio galaxies that occurred at a special cosmic epoch or under
special circumstances.
In order to explore the emission mechanism of the radio-aligned
continua in the A2597 and A1795 CDGs, we have obtained U-band polarimetry
of the blue continuum along their radio sources.
Polarimetry can be used to discriminate between
the two favored emission mechanisms: highly polarized scattered light from a hidden
active nucleus
(Sarazin \& Wise 1993; Crawford \& Fabian 1994; Murphy \&
Chernoff 1993; Sarazin \etal\ 1995) and the unpolarized light from
star formation (McNamara \& O'Connell 1993, hereafter MO93; De Young 1995). The scattering model was proposed
in part because of its success in explaining
the often highly polarized continuum in the distant
FR II radio galaxies (di Serego Alighieri \etal\
1989; Scarrott, Rolf \& Tadhunter 1990; Jannuzi \& Elston 1991;
Tadhunter et al. 1992; di Serego Alighieri, Cimatti \& Fosbury 1993;
Antonucci 1993; Jannuzi 1994; Jannuzi
\etal\ 1995; Dey et al. 1996; Cimatti \etal\ 1997), and in
response to the paradigm that seeks to unify FR I radio
sources and BL Lac objects (Urry and Padovani 1995).
The scattering models are appealing in the cases of A2597 and A1795
because the dense and dusty environments associated with their
cooling flows provide a suitable scattering medium, were the FR I radio
sources indeed BL Lac objects seen at an oblique angle to the line of sight
(Padovani \& Urry 1990; Urry and Padovani 1995; Sarazin \&
Wise 1993; Murphy \& Chernoff 1993). Were the blue lobes shown
to be highly polarized with the electric vectors orthogonal to
the radio axis, the result could be interpreted as a significant
step toward verifying such a ``unified scheme.''
We have addressed these issues using sensitive U-band
polarimetric observations of the blue lobes.
Although these galaxies are intrinsically faint at U,
and the throughput of the telescope and
polaroid filter at U are low, the U-band offers the largest contrast
between the blue continuum and the red background population of
the cD galaxy.
The U-band is therefore most sensitive to the blue continuum
and least prone to error when estimating the
amount of presumably unpolarized background light that would
dilute any polarized signal from the blue lobes.
The background starlight limits the accuracy of the
polarization measurements (\S4.1).
Therefore, minimizing the amount of dilution by starlight
is critical to maximizing the sensitivity to low
levels of polarized light. In addition, shorter wavelength observations are
more useful for comparison to the high redshift FR~II radio galaxies,
where R and I-band observations correspond to rest wavelengths in the U-band.
In a study similar to that presented here for A2597, we
found A1795's aligned continuum, or blue lobes,
to be unpolarized (McNamara \etal\ 1996a).
In subsequent papers the lobes were shown to
be resolved into what appear to be star clusters using
images obtained with the Hubble Space Telescope (McNamara \etal\ 1996b;
Pinkney \etal\ 1996), clearly demonstrating the emission
mechanism for the lobes in A1795 to be a population of young stars.
In this paper we present the results
of a similar polarimetric study of the blue lobes in the
A2597 CDG.
\section{Observations}
The images were obtained with the Mayall 4 meter telescope of the Kitt Peak
National Observatory
during the nights of 8, 9, and 10 November, 1996. We used the
Tektronix $2048\times 2048$ pixel (scale$=0.47$ {\rm arcsec}/pixel) CCD detector mounted
at the prime focus. The Q and U Stokes parameter
images were
constructed from a series of exposures obtained through a combination
of a U-band filter attached to a copper sulfate
blocking filter, and one of four Polaroid
filters (HNP'B sheet Polaroid) with transmission axes at
$0\deg ,~ 45\deg ,~ 90\deg ,~{\rm and } ~135\deg$.
We obtained 9 or 10 CCD images exposed for 800 seconds at each
position angle for a total of 31,200 seconds of integration time.
The data were taken during transparent but
unphotometric conditions. Further details of our observing and data reduction
technique are presented in McNamara \etal\ 1996.
\section{Properties of the Central Dominant Galaxy}
In Figure 1 we present a composite image of the A2597 CDG's blue lobes
embedded in the smoothed, grayscale contours of the U-band image of
the galaxy. The superposed white contours show the 8.44 GHz radio
emission mapped with the VLA, presented earlier
in Sarazin \etal\ (1995). The composite
U-band image was constructed first by subtracting the model for the background
galaxy discussed in Section 4.1 from the summed,
31.2 ksec U-band image, leaving the
bright blue lobes as a residual. The residual image was then multiplied
by an arbitrary factor,
and the U-band image was smoothed with a 4 pixel FWHM Gaussian kernel.
The residual and U-band images
were scaled logarithmically, added,
and displayed in grayscale. This rendering allows the brightest
and bluest regions in black to be seen against the background
galaxy contours in gray and the radio source in white. The structure seen in
earlier U and I data by MO93 and Sarazin \etal\ (1995) is clearly
seen in our new U-band data, although the pixels in the new data
subtend a larger angular size, and details on scales smaller than
$\simeq 2$ arcsec may be unreliable.
We will not repeat the detailed
discussions of the galaxy's properties presented in MO93 and
Sarazin \etal\ (1995), but we will mention a few salient
properties pertaining to this discussion. The radio jets are seen in Figure 1 to emerge from
the nucleus in a north-east/south-west direction along the minor
photometric axis of the galaxy. The bright blue lobes, shown in black,
are located near the radio lobes. They are brightest and bluest 2--3 arcsec
from the nucleus, where the colors of the blue lobes are 0.7--1.0
magnitudes bluer in $U-I$ than the colors of a normal
giant elliptical at that radius.
The most striking indication that the radio source and
matter associated with the blue lobes are interacting is
the sharp bend in the radio structure to the south-west, where the
radio lobe seems to be expanding and bending at the location
of the southern blue lobe. O'Dea, Baum, and Gallimore (1994)
discovered H I in absorption against the radio lobes with
a broad, $\sim 410~\kms$ FWHM, turbulent velocity structure
whose mean velocity is consistent with the mean velocity
of the galaxy. Sarazin \etal\ (1995) suggested that the radio
jets may have been deflected
and the radio lobes disrupted as the outwardly moving radio plasma
collided with the H I clouds. The H I clouds may be associated
with the bright emission-line nebula embedded in the inner 20 kpc
of the galaxy (O'Dea, Baum, and Gallimore 1994).
\section{Polarization Analysis }
We determined the degree of polarization of the U-band light emitted from the
entire central blue region and from the blue lobes individually.
To do so, we extracted the net fluxes in these regions
using
circular synthetic apertures applied to the sum of the CCD frames
for each transmission angle, after subtracting the sky background from
each of the CCD frames.
The Stokes flux for each aperture was computed by taking
flux differences
between transmission axes, $S_\theta$, as $Q=S_{0\deg}-S_{90\deg}$,
$U=S_{45\deg}-S_{135\deg}$. The normalized Stokes
flux was found as $q_{\rm n}=Q/(S_{0\deg}+S_{90\deg})$ and $u_{\rm n}=U/(S_{45\deg}+S_{135\deg})$,
where the degree of instrumental plus total polarization was found as
$P=\sqrt{q_{\rm n}^2 + u_{\rm n}^2}$. The degree of total polarization (background stars
plus blue lobes)
of the flux at each aperture position on the galaxy was found by
subtracting the mean instrumental polarization determined using the presumably
intrinsically unpolarized flux for eight reference objects in the field surrounding the
central galaxy as $P_{\rm tot}= P_{\rm gal}-P_{\rm ref}$.
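The aperture arithmetic above is compact enough to sketch directly; the fluxes in the example call below are arbitrary illustrative numbers, not our measured values.

```python
import math

def polarization(S0, S45, S90, S135):
    """Normalized Stokes parameters and degree of linear polarization
    from aperture fluxes through Polaroid transmission axes at
    0, 45, 90, and 135 degrees, following Sect. 4."""
    q_n = (S0 - S90) / (S0 + S90)
    u_n = (S45 - S135) / (S45 + S135)
    return q_n, u_n, math.hypot(q_n, u_n)

# Illustrative fluxes only (arbitrary units):
q_n, u_n, P = polarization(1.00, 1.01, 0.99, 1.00)
# The instrumental term is then removed using the field reference
# objects: P_tot = P_gal - P_ref.
```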
A summary of the polarization measurements and the sizes and locations of the
apertures are given in Table 1. Columns
1--3 give the locations of the aperture centers with respect to the
nucleus, defined as the peak in the U-band flux. The offsets from the
nucleus are given in column 2, and the position angles measured from
north through east are given in column 3. Column 4 lists
the diameters of the circular apertures.
Column 5 lists the total polarization found in each
aperture, and column 6 gives the RMS deviations about the
mean instrumental polarizations for the eight reference objects. Column
7 lists the upper limits to the degree of polarization of the blue
lobe light in each aperture,
after correcting for background light. The procedure we used to
derive the data listed in column 7 is described in Section 4.1.
In column 8, we list the net polarization after accounting
for dilution by the stellar background (i.e. column 7) and
nebular emission (\S4.2).
An inspection of columns 5 and 6 in Table 1 shows no significant
total polarization for the large central aperture or the
smaller apertures circumscribing the blue lobes.
The upper limits to the degree of polarization of the
total light from the lobes (lobes $+$ galaxy) are less than $2\%$, based on
the scatter in the measured degree of instrumental polarization of the reference
objects. The statistical error in each of the
Stokes parameters, which included variations in the U-band sky
background ($\leq 1\%$) and photon statistics ($\ll 1\%$), was found to be
less than $1\%$.
\subsection{Stellar Background Model}
In order to determine the upper limits to the intrinsic polarization
of the blue lobes, we modeled and removed the contribution of
presumably unpolarized starlight from the galaxy.
The background galaxy model was constructed by first measuring the
U-band radial surface brightness
profile of the galaxy. This profile was constructed
by extracting fluxes from elliptical annuli
with shapes defined by the I-band major axis position angles and
isophotal ellipticities, based on data from McNamara \& O'Connell (1993).
The radial surface brightness
model was constructed by fitting a straight line to the U-band radial surface brightness
profile in magnitudes per square arcsec against semimajor axis to the $1/4$-power.
This $R^{1/4}$-law profile was fit to the data at
radii between 13 and 20 arcseconds, well beyond
the blue central region of the galaxy where the colors are apparently
typical of a normal cD galaxy. The surface brightness
profile is shown as solid dots, and the fitted $R^{1/4}$-law profile
is shown as a solid line in Figure 2. The model profile is
extrapolated inward in order to construct the model of the older
background population at the location of the lobes. The departure of
the observed surface brightness profile above the $R^{1/4}$-law profile
is clearly seen in the inner 8 arcsec or so where the excess blue
light and line emission are observed (McNamara \& O'Connell 1993; Sarazin \etal\ 1995;
Cardiel, Gorgas, \& Aragon-Salamanca 1997). The $R^{1/4}$-law
profile overestimates the contribution of background light in
the inner arcsec or so of the galaxy. This should have little
effect on the estimated contribution of background light at
the locations of the blue lobes. The model surface brightness
profile was then applied to the corresponding semimajor axis
locations in the artificial image of the galaxy whose shape was
identical to the mean isophotal shape of the I-band image of
the galaxy.
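Operationally, the model reduces to a straight-line fit of surface brightness in magnitudes per square arcsec against semimajor axis to the $1/4$ power, extrapolated inward. A minimal sketch on synthetic data (the slope, zero point, and noise level below are invented for illustration, not the A2597 profile):

```python
import numpy as np

# Synthetic R^(1/4)-law profile: mu = zero point + slope * r^(1/4),
# with illustrative (not measured) parameters and small noise.
rng = np.random.default_rng(0)
r = np.linspace(13.0, 20.0, 15)            # semimajor axis (arcsec)
mu = 22.0 + 3.5 * r**0.25 + rng.normal(0.0, 0.02, r.size)

# Straight-line fit in the (r^(1/4), mu) plane over 13-20 arcsec.
slope, zero_point = np.polyfit(r**0.25, mu, 1)

# Extrapolate inward to model the old background population at the lobes.
r_lobe = 2.5                               # arcsec (illustrative radius)
mu_model = zero_point + slope * r_lobe**0.25
```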
The contribution of background light at the locations of the apertures
placed on the image of the real galaxy was found by measuring the flux in the model
galaxy using identical apertures and locations as for the real galaxy
and reference objects (i.e. Table 1).
The Stokes parameters and degree of polarization of the net flux
from the lobes after subtracting the galactic stellar model background,
$f_{\rm M(r)}$, were found as $q=Q/(S_{0\deg}+S_{90\deg}-{f_{\rm M(r)}})$,
$u=U/(S_{45\deg}+S_{135\deg}-{f_{\rm M(r)}})$, $P=\sqrt{q^2 + u^2}$,
and $P_{\rm *}=P_{\rm gal}-P_{\rm ref}.$
The values of $P_*$ found for each aperture are given
in column 7 of Table 1. They show that the three sigma upper limit to
the degree of polarization of the net flux from the lobes is
less than $5\%$. The three sigma limit was computed
by adding an offset to the surface brightness model profile
equal to three times the statistical error in
the zero point of the $R^{1/4}$-law fit to the data, and then
following through with the analysis of the
model data as described above. The RMS error associated with the measurement
of the total polarization prior to considering the background model is
less than $2\%$. Therefore, dilution by ambient starlight
surrounding the blue lobes is the factor limiting the precision
of the polarization measurement. The $3 \sigma$ upper limit to
the total polarized flux (lobe + galaxy) of both lobes at $U$
is $<1.7 \times 10^{-15}~{\rm erg~cm}^{-2}~{\rm s}^{-1}$.
This figure was estimated using
calibrated photometry (McNamara \& O'Connell 1993; Sarazin \etal\ 1995),
and the upper limit of 2\% to the degree of polarization of the
total light at $U$ presented here.
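Because the background-subtracted Stokes fluxes divide by the net rather than the total flux, dilution by unpolarized starlight inflates the normalized polarization, which is why the net-lobe limit exceeds the total-light limit. A minimal sketch of this scaling, with the background fraction chosen purely for illustration (not a measured value):

```python
# Dilution correction for a polarization upper limit: removing the
# unpolarized background from the denominator of the normalized Stokes
# fluxes gives P_net = P_total / (1 - f_bg).
def undiluted_limit(P_total, f_bg):
    return P_total / (1.0 - f_bg)

P_total = 0.02   # 2% upper limit on the total (lobe + galaxy) light
f_bg = 0.6       # ASSUMED background fraction, for illustration only
P_net = undiluted_limit(P_total, f_bg)   # 0.05, i.e. a 5% net limit
```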
\subsection{Nebular Emission}
Diffuse nebular line emission in the vicinity of the A2597 CDG's
blue lobes is
quite strong (Voit \& Donahue 1997), and the contributions of
diffuse nebular continuum and line radiation to the U-band could be
significant. Unpolarized nebular radiation would dilute a
polarized signal.
In order to determine the fraction of the U-band color
excess that may be attributable to unpolarized nebular continuum,
we estimated the amount of recombination radiation from
hydrogen and helium, two photon emission,
and bremsstrahlung radiation that would be expected for the conditions
in A2597. We calculated these emissions
relative to the strength of the H$\beta$ emission line using
the Case B emission coefficients tabulated in Aller (1984) and
Osterbrock (1974).
The contribution of nebular continuum to the $U-B$ color excess
in the vicinity of the blue lobes can then be described as,
\begin{equation}
\Delta (U-B)\simeq -2.5~{\rm log}\left[1 + {\epsilon(T,\Delta\lambda_U) EW({\rm H}\beta) \over \Delta \lambda_U }\right].
\end{equation}
In this expression, $EW({\rm H}\beta$) is the equivalent width
of the H$\beta$ emission feature, $\Delta \lambda_U\simeq 600$ \AA\ is the approximate
effective width of the $U$ passband, and
$\epsilon(T,\Delta\lambda_U)$ is the ratio of
the strength of the nebular continuum averaged over the $U$ passband
to the H$\beta$ line flux. We assumed half solar abundances, a gas density of
$200 ~{\rm cm}^{-3}$, and an ion temperature of 10,000 K,
appropriate to the nebula surrounding
the blue lobes (Voit \& Donahue 1997).
The strength of the nebular continuum does not depend significantly
on the ion density for the densities of interest here, but
it does depend somewhat on ion temperature.
We have
adopted $\epsilon=2.8$ to be consistent with the Voit \& Donahue
(1997) analysis. We are unaware of a tabulated measurement in the literature
of the H$\beta$ equivalent width for the A2597 CDG in the vicinity
of the blue lobes. Therefore, we estimated the H$\beta$ equivalent
width to be
$EW({\rm H}\beta)=28$ \AA\ in the nucleus using the tabulated H$\beta$
flux and the continuum spectrum given
in Voit \& Donahue (1997). Cardiel \etal\ (1998) measured
the radial profile of the H$\beta$ equivalent width and similarly found it
to be 28 \AA\ in the nucleus, after correcting for H$\beta$
absorption. They found that the equivalent width decreases with radius to
$\simeq 15-23$ \AA\ at the radius of the blue lobes. Unfortunately
Cardiel's measurements were made using a slit placed perpendicular
to the blue lobes along the major axis of the galaxy, so we do not
know the precise value of the H$\beta$ equivalent width at the
location of the lobes.
We find that the color excess at the location of the blue lobes that can be attributed to
unpolarized nebular continuum is $\Delta (U-B)\gae -0.07 ~{\rm to}~ -0.11$
magnitudes. This color excess corresponds to H$\beta$ equivalent widths of 20\AA\ and
30\AA\ respectively, which bracket the values in the nucleus and at
the radius of the blue lobes. The upper bound on
the color excess contributed by unpolarized nebular emission
should be reasonable because the
nebular emission is centrally concentrated (Heckman \etal\
1989), and our estimate exceeds the nuclear value.
Nonetheless, our estimate
should be taken with due caution, absent a direct measurement, as the H$\beta$ equivalent width
could be larger near the blue lobes than we have assumed.
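Equation (1) is straightforward to evaluate; the sketch below uses the adopted $\epsilon=2.8$ and $\Delta \lambda_U\simeq 600$ \AA, and the resulting excesses, of order $-0.1$ magnitudes, are sensitive to both choices.

```python
import math

def delta_UB(EW_Hbeta, eps=2.8, dlam_U=600.0):
    """Eq. (1): U-B color excess from nebular continuum, for an H-beta
    equivalent width in Angstroms, a continuum-to-H-beta ratio eps,
    and an effective U bandpass width in Angstroms."""
    return -2.5 * math.log10(1.0 + eps * EW_Hbeta / dlam_U)

# Bracketing equivalent widths from the text (results in magnitudes):
excess_20, excess_30 = delta_UB(20.0), delta_UB(30.0)
```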
The observed total color excess in A2597's lobes is $\Delta (U-B)\simeq -0.6$ magnitudes
(McNamara 1997; Cardiel, Gorgas, \& Aragon-Salamanca 1997).
Therefore, we expect the unpolarized nebular continuum to comprise
only $\simeq 10-15$\% of the observed total color excess.
The uncertainty in the measurement of the total color excess is
caused primarily by the unknown dust distribution within the nebula.
Voit \& Donahue (1997) found $\simeq 0.3$ magnitudes of
extinction at $U$ based on departures of observed Balmer line
ratios from Case B predictions. This extinction should not affect appreciably
our estimate of the percentage of the color excess that is
contributed by nebular and stellar continuum unless the gas, dust,
and young stars are distributed differently. The existing data
do not allow us to determine whether this is so.
The equivalent width of A2597's [O II]$\lambda$3727 emission feature
is $\simeq 247$ \AA\ in the nucleus (Voit \& Donahue 1997).
However, assuming the [O II] equivalent width decreases
with radius proportionally to the H$\beta$
feature, we expect the [O II] equivalent width at the location
of the lobes to be roughly between 130--200 \AA .
The [O II] feature is redshifted to $\lambda$4034 \AA\ in the
observed frame, where the throughput of the U-band filter is about
5\%. We estimate the contribution of [O II] emission to the
$U$ band at the location of the lobes to be between 0.01--0.02 magnitudes, or about 3\% of the
color excess.
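The [O II] estimate follows the same bandpass arithmetic, with the line's equivalent width weighted by the $\simeq 5\%$ filter throughput at its redshifted wavelength; a minimal sketch (the 600 \AA\ bandpass width is carried over from the nebular continuum estimate above):

```python
import math

def delta_m_OII(EW_OII, throughput=0.05, dlam_U=600.0):
    """Magnitude contribution of redshifted [O II] 3727 to the U band:
    the line's equivalent width, weighted by the filter throughput at
    its observed wavelength and diluted over the bandpass width."""
    return -2.5 * math.log10(1.0 + throughput * EW_OII / dlam_U)

# Extrapolated equivalent-width range at the lobes (Angstroms):
lo, hi = delta_m_OII(130.0), delta_m_OII(200.0)   # ~ -0.01 to -0.02 mag
```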
In summary, we estimate the total nebular contribution to the $U$-band
color excess, including [O II] emission plus nebular continuum,
to be between 13--18\%. This continuum would dilute a polarized signal
from the blue lobes and increase the
upper limits to the degree of polarization by one percent or
less (i.e. $P_{\rm *,neb}$, Table 1, column 8).
\section{Radiation Mechanisms for the Blue Lobes}
\subsection{Interpretation of the Polarization Upper Limits}
\label{sec:radiation_scatter}
We now calculate the expected polarization of the blue lobes in
A2597 if they are due to electron scattering or dust scattering,
and compare to the observed upper limit.
The expected linear polarization of the blue lobes in the A1795 CDG
due to electron scattering by the cooling flow was calculated by
Sarazin \& Wise (1993) and was
discussed in detail in McNamara et al.\ (1996a).
The blue lobes in A2597 are similar in many respects, and our analysis
follows that for A1795 (McNamara et al.\ 1996a).
We assume that the blue lobes are due to scattering of beamed radiation.
We consider beamed rather than isotropic emission
because no strong nuclear point source is seen in A2597
(Crawford \& Fabian 1993; Sarazin \& Wise 1993).
The model polarization was calculated for the scattered light only,
without dilution by the background galaxy light.
First, we consider the possibility that the blue lobes are due to electron
scattering.
We assume single electron scattering, as the observed electron scattering
optical depth of the cooling flow is small (Sarazin et al.\ 1995).
In this limit, the polarization is independent of the scale of the
electron density of the cooling flow. Of course, the polarization is
always independent of the flux of the anisotropic nuclear source.
The assumptions included in our scattering model are identical
to those discussed in McNamara et al. (1996a), with the
exception of the opening angle of the scattering cones.
We assume that the electron density, $n_e$, in the cooling flow varies
with radius, $r$, as $n_e \propto r^{-1}$, which gives a reasonable
fit to the observed X-ray surface brightness at small radii
(Sarazin et al.\ 1995).
For this calculation, we assume that anisotropic radiation from
the central nucleus is conical, initially unpolarized, and uniformly
illuminated.
The polarization is calculated including the effects of averaging along
the line-of-sight through the beam and averaging the azimuthal
polarization across the projected width of the beams.
The polarization depends slightly on the observed width of the lobes.
We estimate an angular half-width of
$\phi_{max} \approx 35 - 45^\circ$ for the NE lobe and
$\phi_{max} \approx 25 - 35^\circ$ for the SW lobe.
We adopt $\phi_{max} = 35^\circ$, but note that
the resulting polarization is not strongly dependent on this assumption
or any of the other assumptions, as shown in Sarazin \& Wise (1993)
and McNamara et al.\ (1996a).
The predicted polarization does depend
strongly on the angle $\theta$ between our line-of-sight and
the central direction of the beams.
(See Figure 1 in Sarazin \& Wise [1993] for the definitions of
the angles.)
For each value of the angle $\theta$, we determine the
angular width of the beams $\theta_b$ which is consistent with the
observed width $\phi_{max}$ of the blue lobes in A2597.
The existence of distinct lobes and the fact that the nucleus is
not extremely bright both require that our line of sight be
outside of the beams, so that $\theta_b < \theta$.
Figure 3 shows the predicted polarization (solid line)
of the electron--scattered lobe light, $P$, as a function of the angle of
the beams to the line-of-sight, $\theta$.
The observed upper limit of $P < 6$\% is shown as a long dashed horizontal
line.
The observed upper limit is only consistent with the prediction
of the simple electron scattering model if $\theta < 20^\circ$.
The probability, $P_o$, that the beam would be randomly oriented this close to our
line-of-sight is $P_o< 6$\%.
(Note that for small polarizations, the probability and the polarization are
always nearly equal.)
Thus, the consistency of the observed upper limit on the polarization
of the lobes with the simple electron scattering model would require
an unlikely near alignment of the beams with our line of sight.
A very similar result was found for the blue lobes in A1795, where the
limit on the orientation angle and probability were $\theta < 22^\circ$
and $P_o < 7$\%.
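The orientation probabilities quoted here and below are simply the fractional solid angle swept out by a randomly oriented beam axis; a minimal check of the numbers:

```python
import math

def p_random_alignment(theta_max_deg):
    """Probability that a randomly oriented beam axis lies within
    theta_max of the line of sight: P_o = 1 - cos(theta_max)."""
    return 1.0 - math.cos(math.radians(theta_max_deg))

p_random_alignment(20.0)   # ~0.06: electron-scattering limit, A2597
p_random_alignment(22.0)   # ~0.07: electron-scattering limit, A1795
p_random_alignment(46.0)   # ~0.31: dust-scattering limit, A1795
```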
The strong upper limits on the polarization of the lobes makes it unlikely
that they are due to electron scattering.
While a single case might be the result of an unlucky alignment, it seems
very unlikely that both could be explained this way.
This calculation of the polarization in Figure 3
assumed that the emission from the nucleus of the AGN was unpolarized.
If the nucleus contains a BL Lac object, then the nuclear emission
might itself be highly polarized.
In most situations, this will increase the polarization of the scattered light
(Sarazin \& Wise 1993).
However, for certain very restrictive choices of the angles, the polarization
can decrease, although this requires that the parameters be at least as
finely tuned as in the case for scattering of unpolarized radiation
(McNamara et al.\ 1996a).
We have also calculated the predicted polarization for scattering by dust.
We assumed the dust scattering properties given by White (1979) for
the standard MRN model for interstellar dust.
The predicted polarization for dust scattering is shown by the short
dashed curve in Figure 3.
Unlike electron scattering, dust scattering is not symmetric between
forward and backward scattering, and thus the two lobes would have
different polarization.
Since neither of the two observed lobes shows any evidence of polarization,
the curve shown is the maximum of the polarizations of the two lobes.
At small and large values of $\theta$, the polarization is larger for
the back lobe (the one further away from us, for which the scattering
is predominantly back scattering).
At intermediate angles ($37^\circ \le \theta \le 61^\circ$) the
polarization of the front lobe is higher.
At small angles ($\theta < 37^\circ$), the predicted polarization due to
back scattering is radial for the more strongly polarized lobe.
At all other angles, the predicted polarization is azimuthal.
(The polarization due to electron scattering is always azimuthal for
an initially unpolarized source.)
For both A1795 and A2597, the lack of observed polarization in the lobes
and the fact that they are not very strongly asymmetric in their brightness
makes it unlikely that they are due to dust scattering.
The polarization produced by dust scattering is smaller than that produced
by electrons.
As a result, the limit on the angle $\theta$ is weaker.
The observed limit on the polarization of less than 6\% implies that
$\theta \le 22^\circ$ or $30^\circ \le \theta \le 44^\circ$,
for which the probability is $P_o \le 23$\%.
However, dust scattering is not symmetric forward to back, and the same
conditions which would lower the polarization (small $\theta$)
would produce rather
asymmetric lobes, with a flux ratio $\ge 6$.
While the two lobes are certainly not symmetric, the ratio of their
surface brightnesses is $\la$4
(Sarazin et al.\ 1995).
We have also determined the predicted polarization due to dust
scattering in A1795
(McNamara et al.\ 1996a).
There, the 3-$\sigma$ upper limit polarization leads to an upper limit on
the angle of $\theta \le 46^\circ$, a probability of
$P_o \le 31$\%, and a flux ratio of $\ge$5.8.
Given these limits on dust scattering and the stronger limits on
electron scattering, it is improbable
that the lobes are due to scattering of beamed light from the nuclei of the
galaxies. The absence of a polarized signal or of a detailed correspondence
between the radio and optical morphologies renders synchrotron radiation
an unlikely emission mechanism.
Furthermore, Compton scattering of microwave background photons by relativistic electrons
associated with the radio source (cf. Daly 1992) is incompatible
with the object's proximity and radio power, and optical bremsstrahlung radiation from the diffuse X-ray source would
be too weak to explain the blue lobes for the observed gas densities
in A2597 (Sarazin \etal\ 1995).
Finally, our polarization measurements probe directly the
paradigm that seeks to unify FR I radio
sources and BL Lac objects (e.g. Urry and Padovani 1995).
The A2597 and A1795 CDGs reside in hot cluster atmospheres
with central gas densities of $\sim 10^{-1}~{\rm cm}^{-3}$.
Were the CDGs to contain typical BL Lac nuclei,
$\sim 1\%$ of the anisotropically
emitted radiation from the BL Lac would be scattered off of electrons
into the line of sight, which should be detectable (Sarazin \& Wise 1993).
Furthermore, the alignment of the blue lobes with the
radio sources (McNamara \& O'Connell 1993) is consistent with the
scattering hypothesis, which prompted Sarazin and Wise to
investigate its feasibility. Their scattering model assumes
anisotropic nuclear emission directed obliquely to the line of sight,
with luminosities comparable to
a typical BL Lac object ($L=10^{47}~{\rm ergs~s^{-1}}$) in the spectral range
$\Delta \nu=10^8-10^{18}$ Hz. The scattering medium was assumed
to be an electron gas of comparable density to the X-ray-determined
values at the centers of the cooling flows. The predicted $U$-band surface
brightness of the scattered light matched closely the observed surface
brightness of the lobes in A2597 and A1795 found by
McNamara \& O'Connell (1993). However, the remaining critical question
was the degree of polarization of the $U$-band lobe emission,
which should be $> 8\%$ were the blue lobes scattered light. The
polarization measurements presented here were intended to test
the scattering model. Our upper limits are then inconsistent with the
scattering model and do not support the FR I--BL Lac unification
paradigm to the extent that the assumptions made
by Sarazin \& Wise (1993) are reasonable.
\subsection{Radio Triggered Star Formation}
Following the previous section, we conclude
that the radiation from the blue lobes is most likely primarily continuum from young, blue stars.
This interpretation receives further support by a recent analysis of the
nebular emission surrounding the lobes (Voit \& Donahue 1997).
The spatial correlation between the blue lobes and radio lobes shown
in Figure 1, McNamara \& O'Connell (1993), and Sarazin \etal\ (1995)
suggests that the
blue lobes and the radio source are related either by chance
(the radially expanding radio jets happened upon the dense
clouds associated with the blue, star-forming regions) or by
causality (the star formation was triggered by the radio source).
A mere coincidence seems unlikely. There is one other obvious
example of this phenomenon in the A1795 CDG, which, like
A2597, was identified
among the roughly two dozen clusters with large cooling flows
whose CDGs have been well imaged from the ground (McNamara 1997).
The bluest and presumably the youngest regions of star formation
have been shown, using HST imagery, to lie along the edges of the radio lobes
in A1795 (McNamara \etal\ 1996; Pinkney \etal\ 1996). Although the
bluest regions in A2597 are located near the
radio lobes (i.e. Figure 1), HST images show that
they do not correlate strongly with the edges of the radio lobes
(Koekemoer \etal\ in preparation), as is seen in A1795. While this
renders A2597 a less compelling case for radio-triggered star
formation, the mechanism by which star formation is induced
is poorly understood, and predictions based on such models are
uncertain.
De Young (1995) presented a model to explain the lobes
in A1795 as a burst of star formation
triggered by the rapid collapse of cold clouds
compressed by shocks along the expanding radio jets.
This general scenario is consistent with
bends in the radio sources of both objects occurring
near regions of dust extinction, H$\alpha$ emitting gas, and
in A2597, near H I absorption clouds (McNamara \etal\ 1996; O'Dea
\etal\ 1994; Koekemoer \etal\ in preparation), which indicates
that the radio sources are interacting with cold gas clouds.
However, De Young's model does not readily explain the
location of the bluest, and presumably the youngest
star clusters along the edges of the
radio lobes in A1795, rather than along the edges of the jets.
The weaker correlation between the bluest regions and the radio
source in A2597 is not necessarily surprising.
A strong correlation between the edges of the radio source
and the sites of star formation should be short lived
(McNamara \& O'Connell 1993; De Young 1995; McNamara \etal\
1996a; Cardiel, Gorgas, \& Aragon-Salamanca 1997).
The nebular and H I gas velocities of a few hundred
kilometers per second near the
lobes are disordered (Heckman \etal\ 1989; O'Dea \etal\ 1994).
Assuming the star formation is fueled by cold gas with
a similar velocity structure,
the stellar lobes should disperse quickly, and certainly in less than
the local free-fall time of a few tens of Myr.
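As a rough consistency check (not a calculation from the paper itself), the dispersal timescale implied by these velocities can be sketched as a simple crossing time. The lobe size of $\sim 5$ kpc used here is an assumed, illustrative value; only the gas velocities and the free-fall scale are quoted in the text.

```python
# Rough dispersal-timescale estimate for the stellar lobes.
# The lobe size (~5 kpc) is an assumed, illustrative value.
KPC_KM = 3.086e16       # kilometers per kiloparsec
SEC_PER_MYR = 3.156e13  # seconds per megayear

def dispersal_time_myr(size_kpc, velocity_km_s):
    """Crossing time t = L / v, expressed in Myr."""
    return size_kpc * KPC_KM / velocity_km_s / SEC_PER_MYR

t = dispersal_time_myr(5.0, 300.0)
print(f"dispersal time ~ {t:.0f} Myr")  # ~16 Myr, under a few tens of Myr
```

For a few-kpc lobe stirred at a few hundred km s$^{-1}$, the crossing time is indeed comfortably below the local free-fall time of a few tens of Myr.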
Therefore, it is possible that we have caught the burst
of star formation in A2597 several Myr after it was
initiated by the radio source, and stars have begun to
disperse as the radio source expands outward.
The ground-based photometry and radio data are not capable of
pinpointing the relative ages of the radio source and
regions of star formation to reliably test this hypothesis.
Nonetheless, the short dispersal timescale would be
consistent with both the lower limit on the radio age of
0.5 Myr (Sarazin \etal\ 1995) and with the colors for a burst of star
formation that occurred roughly 5 Myr ago with the local initial mass function and solar
abundances (McNamara \& O'Connell 1993). The stellar mass of the
blue lobes composed of such a population would be $\sim 10^8~\msun$,
which implies a total star formation
rate in both lobes of roughly $20 ~\msunyr$. This star formation rate
and stellar mass do not include adjustments upward for extinction
or downward for nebular emission (\S 5.3).
Including a $\Delta(U-B)\simeq -0.3$
magnitude extinction correction, and a $\Delta(U-B)\simeq +0.1$
magnitude nebular emission correction, the young stellar mass and
star formation rate would increase by about 68\%. This systematic
increase does not exceed the uncertainty in the estimate of the
uncorrected mass,
which agrees reasonably well with the neutral hydrogen mass of $\sim
7\times 10^7\msun$ estimated from
the VLA H I absorption measurements (O'Dea \etal\ 1994).
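The quoted star formation rate follows directly from dividing the young stellar mass by the burst age; a minimal sketch using the values in the text:

```python
# Mean star formation rate implied by the burst parameters in the text:
# ~1e8 Msun of young stars formed in a burst that began ~5 Myr ago.
def mean_sfr(stellar_mass_msun, burst_age_myr):
    """Average SFR in Msun/yr over the life of the burst."""
    return stellar_mass_msun / (burst_age_myr * 1e6)

sfr = mean_sfr(1e8, 5.0)
print(f"mean SFR ~ {sfr:.0f} Msun/yr")  # 20 Msun/yr, as quoted

# The ~68% systematic correction for extinction and nebular emission
# scales the stellar mass and the star formation rate together:
sfr_corrected = sfr * 1.68
```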
\section{A Comparison Between the Polarized Luminosities and Radio
Power for Radio Aligned CDGs and High Redshift Radio Galaxies}
In this section we explore the similarities and differences between the
alignment properties of the high redshift radio
galaxies (HzRGs)
and those in A2597 and A1795. We wish to determine whether
they are fundamentally different types of object,
or whether they are similar, albeit on very different spatial
and energy scales. To do so,
we will begin by contrasting a few of their relevant properties, most
importantly their polarized luminosities and radio powers.
The blue optical continuum found along the radio sources
in HzRGs is often polarized at levels of several to greater than ten percent.
The electric vectors are, in general, nearly perpendicular
to the radio and optical continuum axes. In addition, they have
strong, extended, nebular line emission near their radio sources.
Based on these facts, a consensus
has emerged that would explain the aligned optical continuum
as being primarily scattered light from a powerful,
misdirected active nucleus or QSO, plus a smaller but
significant contribution of nebular continuum
(e.g. Dey 1998; Cimatti \etal\ 1997; Dey \etal\ 1996; Stockton,
Ridgway, \& Kellogg 1996; Dickson \etal\ 1995). While this may be
the case in the majority of aligned HzRGs with reliable
polarimetry, it is not always true.
For example, the aligned
components in 3C 285 (van Breugel \& Dey 1993)
and 4C 41.17 (Dey \etal\ 1997) are unpolarized. Their aligned
continua appear to originate from young stellar populations.
Furthermore, even in those objects with a high degree of
polarization, star formation at some level cannot always be excluded
(Cimatti \etal\ 1996).
Therefore, at least three
emission mechanisms appear to
contribute to the alignment effect in HzRGs:
scattered light, nebular continuum, and star formation.
In contrast, the alignment seen in A2597 is associated with a smaller,
lower power, FR I radio source.
Its radio-aligned, $U$-band optical continuum
is composed primarily of light from a young, $\sim 10$ Myr old
stellar population ($\gae 80\%$), a
small contribution of nebular emission ($\sim 10-20\%$),
and at most a minor contribution of scattered light.
The situation is similar in the A1795 CDG (McNamara \etal\ 1996a).
In addition, the blue optical emission found near the radio lobe
of the FR I radio source PKS 0123-016A, the well-known ``Minkowski's Object,''
is primarily from a young stellar population (van Breugel \etal\ 1985).
Therefore, both star formation, and to a small degree nebular
emission, contribute to the aligned
optical components in the FR Is. However, we are unaware of
any evidence of scattered radiation playing a major
role in producing the aligned optical components of an FR I radio galaxy.
The environments of the FR I radio sources in the
A2597 and A1795 central cluster galaxies
and HzRGs that exhibit the alignment
effect are similar in at least three general but important
respects. First, both types of radio source seem to be found in elliptical
host galaxies (Rigler \etal\ 1992; Cimatti \etal\ 1994; Dey 1998).
Second, the presence of the radio sources implies
that both harbor a central engine to power them.
Third, by virtue of the presence of strong nebular line emission
and recent star formation, the host galaxies must harbor
reservoirs of cool gas (e.g. McNamara 1997 and references therein).
A2597 and A1795 are dissimilar to the powerful HzRGs,
lacking evidence for bright, blue, unresolved continua in their
nuclei (e.g. McNamara \etal\ 1996b; Pinkney \etal\ 1996) or
broad emission lines (Heckman \etal\ 1989).
The characteristic that perhaps best distinguishes between
A2597, A1795, and the HzRGs is radio power.
We calculated the radio powers for several
high redshift 3C radio galaxies observed with the Keck Observatory
(e.g. Dey 1998). The radio powers
in the restframe 1.4 GHz bandpass, $P_{1.4}$, were found by interpolating
between measured flux densities bracketing the restframe
frequency $\nu=1.4/(1+z)$ GHz. We have assumed $\ho=50~\hounit$ and
$q_0=0.5$ throughout our calculations. The computed flux densities
and radio powers are presented in Table 2. In columns 1 and 2
we list the radio galaxy name and redshift.
Column 3 lists the restframe radio flux
density; column 4 lists the restframe radio
power; column 5 lists the degree of polarization in the
restframe U passband, determined from spectropolarimetry
obtained with the Keck Observatory. In column
6 we list the restframe U-band polarized continuum luminosity. The polarized
luminosity was determined as
\begin{equation}
L(U)_{\rm pol}=4\pi D_{\rm lum}^2f_{\lambda 3600}\Delta \lambda_U P(U),
\end{equation}
\noindent
where $f_{\lambda 3600}$ is the total flux at 3600 \AA , and
$P(U)$ is the degree of polarization at 3600 \AA\ in the
rest frame. The effective rest frame $U$
passband is assumed to be $\Delta \lambda_U = 600$ \AA, and
$D_{\rm lum}$ is the luminosity distance to the radio galaxy.
In column 7 we index the references to the data.
We have included entries in Table 2 for A2597 and A1795 for
comparison. Because we
computed the polarized luminosities and radio powers of the HzRGs
in the restframe U-band, they can be compared directly
to those for A2597 and A1795.
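Equation (1) can be evaluated with the adopted cosmology ($\ho=50~\hounit$, $q_0=0.5$) using the Mattig relation for the luminosity distance. The sketch below uses an illustrative flux value, not a measured entry from Table 2:

```python
import math

C_KM_S = 2.998e5   # speed of light, km/s
MPC_CM = 3.086e24  # centimeters per megaparsec

def lum_distance_mpc(z, h0=50.0, q0=0.5):
    """Mattig relation for the luminosity distance (matter-dominated)."""
    return (C_KM_S / (h0 * q0**2)) * (
        q0 * z + (q0 - 1.0) * (math.sqrt(1.0 + 2.0 * q0 * z) - 1.0))

def l_pol_u(z, f_lambda_3600, p_u, dlam_ang=600.0):
    """Restframe U-band polarized luminosity, eq. (1), in erg/s."""
    d_cm = lum_distance_mpc(z) * MPC_CM
    return 4.0 * math.pi * d_cm**2 * f_lambda_3600 * dlam_ang * p_u

# A2597 (z = 0.082) with an illustrative flux of 1e-16 erg/cm^2/s/A
# and the 6% polarization upper limit:
print(f"D_lum = {lum_distance_mpc(0.082):.0f} Mpc")        # ~500 Mpc
print(f"L_pol < {l_pol_u(0.082, 1e-16, 0.06):.2e} erg/s")  # ~1e41 erg/s
```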
The radio power is plotted against the restframe U-band polarized continuum
luminosity in Figure 4.
Note the difference of a factor of several thousand
in radio luminosity between A2597, A1795, and the aligned
HzRGs. In addition, the polarized luminosities of the HzRGs
are $\sim 600\times$ larger than the upper limits for the FR Is.
In all likelihood, the much larger radio powers in HzRGs are
the result of a much stronger AGN. Consequently, the HzRGs are
capable of producing a much higher scattered light
intensity for a given ambient electron or dust density.
Now we ask whether
we would have detected any polarized flux in A2597 and A1795
if the polarized luminosity scaled in proportion
to the radio power. The solid line in Fig. 4 represents
$L(U)_{\rm pol}\propto P_{\rm rad}$. The line is scaled to
the median of the 3C radio galaxies. We can see that
the ratio of polarized luminosity to radio power for the
HzRGs extrapolated downward to the radio power of the aligned
FR Is falls a factor of about 30 below the upper limits for
A1795 and A2597.
Therefore, if $L(U)_{\rm pol}\propto P_{\rm rad}$ our polarimetry
for A2597 and A1795 would not have detected the polarized signal.
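The factor quoted above can be roughly reproduced from the Table 2 entries: scale the median $L_{\rm pol}/P_{\rm rad}$ ratio of the detected 3C galaxies down to A2597's radio power and compare the prediction with its upper limit. A sketch (detections only; 3C 368, an upper limit, is excluded):

```python
import statistics

# (log P_1.4 [W/Hz], L_pol(U) [1e42 erg/s]) for the detected 3C
# galaxies in Table 2.
threec = [(28.81, 25.0), (28.90, 3.7), (28.68, 8.9),
          (29.08, 9.4), (28.75, 6.6), (28.50, 2.2)]

median_ratio = statistics.median(l * 1e42 / 10**p for p, l in threec)

# Extrapolate L_pol ∝ P_rad down to A2597's radio power:
predicted = median_ratio * 10**25.79  # erg/s
limit = 0.15e42                       # A2597 upper limit, Table 2
# Roughly 25, consistent with the factor ~30 quoted in the text:
print(f"limit / predicted ~ {limit / predicted:.0f}")
```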
The composition (electrons, cold gas, dust),
density, and spatial distribution of the interstellar medium of the host
galaxy, and the orientation of the AGN are factors
that would contribute scatter to the relationship plotted in Figure 4.
However, by using several, well-observed 3C galaxies,
we have attempted to average over these
factors. In any case, when the line is scaled
to the upper envelope of the group of 3C galaxies
in Figure 4, the A2597 and A1795 upper limits remain well above
the line. Therefore, we conclude with reasonable confidence
that were the AGN in the A1795 and A2597 CDGs the dwarf siblings of the
aligned HzRGs, we would not have detected their polarized
signals. This
result depends on the adopted cosmology in detail, but the
conclusion is unaffected.
Our conclusion rests uncomfortably on an extrapolation of 3--4 orders
of magnitude in radio power and a factor of 600 in polarized
luminosity. It would be reassuring to find and explore
alignment effect radio galaxies that lie between the FR I and
HzRGs in Figure 4.
Furthermore, if A1795 and A2597 indeed contain weak AGN similar
in nature to the HzRGs, a polarized signal of
$\sim 2\times 10^{-16}~{\rm erg~cm^{-2}~sec^{-1}}$ should
be present, if our extrapolation to low radio power is
correct. This flux is roughly an order of magnitude
below our limits for A1795 and A2597. However, a polarized flux at this
level may be detectable near the nucleus using high resolution
space polarimetry, or ground-based polarimetry using a
large telescope in excellent seeing.
Finally, we highlight earlier remarks that
pure Thomson scattering models imply the presence
of large cooling flows in HzRGs (Cimatti \etal\ 1994).
This suggestion is supported by the detection
of possibly extended, luminous X-ray emission surrounding some HzRGs
(e.g. 3C356; see Crawford (1997) for a recent review).
The possibility that some HzRGs may be located at
the base of distant cooling flows would be
an additional thread tying together the HzRGs and
the low redshift cooling flows. In addition
to supplying a scattering medium, cooling flows
are capable of fueling star formation and the central engine.
\section{Conclusions}
We have placed a 3-$\sigma$ upper limit of $6\%$ on
the degree of polarization of the $U$-band light emitted from A2597's blue
lobes. This limit includes
corrections for dilution by background starlight and nebular emission.
The $U$-band emission from the blue lobes is composed
of $75-88\%$ stellar continuum from young stars, $13-18$\%
nebular emission, and less than 6\% scattered light.
Earlier studies of the CDG in A1795 (McNamara \etal\ 1996a,b),
which has similar properties to the CDG in A2597,
came to similar conclusions.
Our limits do not support the conjecture that the blue
lobes are scattered light from an obscured BL Lac or
blazar nucleus associated with the FR I radio source PKS
2322$-$122. However, if the beamed AGN luminosity and
hence the polarized luminosity scales with radio luminosity,
the FR I radio galaxies in A2597 and A1795 could be scaled-down versions of the high redshift
radio galaxies exhibiting the alignment effect. Any AGN
present in these objects must have low $U$-band luminosities.
The asymmetries in the radio structures
of the A1795 and A2597 CDGs may have resulted from interactions between
the radio jets and dense, dusty clouds encountered by the jets.
The star formation associated with the blue lobes
may have been triggered by these interactions.
\acknowledgements
We thank Nicolas Cardiel for providing us with H$\beta$ equivalent widths.
B. R. M. was supported by
grant NAS8-39073 to the Smithsonian Astrophysical Observatory.
C. L. S. thanks Bill Sparks for a very useful conversation.
C. L. S. was supported in part by NASA Astrophysical Theory Program
grant 5-3057 and NASA ROSAT grants NAG 5-4787 and NAG 5-3308.
\newpage
\begin{center}
\begin{table*}
\caption{Polarization Measurements}
\vspace{2.5 mm}
\begin{tabular}{lccccccc}\hline\hline
Location& $r$ & PA & Ap &$P_{\rm tot}$ &$\sigma_{\rm p}$& $P_{\rm *}$ & $P_{\rm *,neb}$\\
~~& (arcsec) & (degrees) & (arcsec) &($\%$)&($\%$)&($\%$) &($\%$)\\
\hline
Nucleus& ... & ... &10.4 & $-0.9$ & 1.4 & $<4.9$&$<5.8$ \\
NE Lobe& 2.7& 45 & 4.70 & $~~0.5$ & 2.0 & $<4.8$&$<5.7$ \\
SW Lobe& 3.4& 236 & 4.70 & $-1.5$ & 2.0 & $<2.9$& $<3.4$\\
\hline\hline
\end{tabular}
\end{table*}
\end{center}
\begin{center}
\begin{table*}
\caption{Radio and Polarized Luminosities of Radio Galaxies}
\vspace{2.5 mm}
\begin{tabular}{lcccccc}\hline\hline
Object& $z$ &$S_{1.4}$ & $P_{1.4}$ &$P(U)$ &$L_{\rm pol}(U)$ &Refs\\
~~& ~~~ & (Jy) &(log[${\rm W~Hz}^{-1}$]) & ($\%$)&$(10^{42}~{\rm erg~sec}^{-1})$&~~~\\
\hline
3C 13 &1.351 & 5.56 &28.81 & 10 & 25& 1,7,10\\
3C 256&1.824 & 3.54 &28.90 &11 & 3.7& 2,11\\
3C 265&0.811 &12.98 &28.68 &10 & 8.9& 3,7,12\\
3C 324&1.206 &13.59 &29.08 &12 & 9.4& 4,11\\
3C 356&1.079 & 7.96 &28.75 &8 & 6.6& 1,7\\
3C 368&1.132 & 9.58 &28.87 &$<3~~~~~$ &$<34~~~~~~~~$ &5,7,11\\
3C 441&0.707 &11.55 &28.50 & 3 &2.2 &5,13\\
\hline
A2597 &0.082 &2.02 &25.79& $<6~~~~$& $<0.15~~~~$&8,9\\
A1795 &0.064 &0.97 &25.23& $<6~~~~$& $<0.04~~~~$&6,8\\
\hline\hline
\end{tabular}
1) Cimatti \etal\ (1997)
2) Dey \etal\ (1996) \\
3) di Serego Alighieri \etal\ (1996)
4) Cimatti \etal\ (1996) \\
5) Dey (1998)
6) McNamara \etal\ (1996) \\
7) White \& Becker (1992)
8) Heckman \etal\ (1989) \\
9) this paper
10) Ficarra \etal\ (1985)\\
11) Wright \& Otrupcek (1990)
12) Becker \etal\ (1991)\\
13) Pilkington \& Scott (1965)
\end{table*}
\end{center}
\newpage